Article

Prediction of Physical and Mechanical Properties of Heat-Treated Wood Based on the Improved Beluga Whale Optimisation Back Propagation (IBWO-BP) Neural Network

College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
*
Author to whom correspondence should be addressed.
Forests 2024, 15(4), 687; https://doi.org/10.3390/f15040687
Submission received: 16 February 2024 / Revised: 7 April 2024 / Accepted: 9 April 2024 / Published: 10 April 2024
(This article belongs to the Special Issue Wood Quality and Mechanical Properties)

Abstract:
The physical and mechanical properties of heat-treated wood are essential factors in assessing its suitability for different applications. While back-propagation (BP) neural networks are widely used for predicting wood properties, their accuracy often falls short of expectations. This paper introduces an improved Beluga Whale Optimisation (IBWO)-BP model as a solution to this challenge. We improved the standard Beluga Whale Optimisation (BWO) algorithm in three ways: (1) Bernoulli chaotic mapping is used to explore the entire search space during population initialization; (2) the position update formula of the Firefly Algorithm (FA) is incorporated to improve the position update strategy and convergence speed; (3) lens-imaging-based opposition-based learning (lensOBL) is applied to the best individual at each iteration, which prevents the algorithm from becoming stuck in local optima. Subsequently, we adjusted the weights and thresholds of the BP model using the IBWO approach. Finally, we employed the IBWO-BP model to predict the swelling and shrinkage ratios of air-dry volume, as well as the modulus of elasticity (MOE) and bending strength (MOR), of heat-treated wood. The benefit of IBWO is demonstrated through comparison with other meta-heuristic algorithms (MHAs). Compared with earlier prediction models, the mean square error (MSE) decreased by 39.7%, the root mean square error (RMSE) by 22.4%, the mean absolute percentage error (MAPE) by 9.8%, the mean absolute error (MAE) by 31.5%, and the standard deviation (STD) by 18.9%. Therefore, the model has excellent generalisation ability and relatively good prediction accuracy.

1. Introduction

Wood, along with its derivative products, has been utilised across various fields for thousands of years. However, wood’s cells and tissues are primarily aligned axially, leading to anisotropic behaviour, particularly in moisture sensitivity, biodegradability, and mechanical properties, which restricts its application in environments demanding durability and safety. These challenges have also contributed to the growth of wood modification technologies, such as chemical and physical modification [1].
Heat treatment is a sustainable modification technique for improving wood properties. Harry Tiemann [2] was the first to report on the heat treatment of wood in 1915. He heated air-dried wood in superheated steam at 150 °C and found that the hygroscopicity of the heat-treated wood was reduced. Moreover, heat treatment is typically implemented to change various properties of wood, such as colour [3], bonding strength [4], gloss [5], equilibrium moisture content (EMC) [6], and durability [7]. Previous research has also extensively studied heat treatment’s effects on dimensional stability and mechanical properties [8].
The dimensional stability of wood after heat treatment varies according to the wood species [8]. Cermak et al. [9] studied the swelling kinetics of Norway spruce, Scots pine, European beech, and English oak specimens treated at 180, 200, and 220 °C. The oak wood exhibited relatively lesser swelling and linear characteristics in comparison to spruce, pine, and beech wood. This may be influenced by the density of the wood species, its chemical composition, and its anatomy [9].
Cermak et al. [10] performed rewetting cycles on beech, poplar, and spruce with subsequent heat treatment at temperatures of 180 °C and 200 °C, and revealed a reduction in swelling for beech, poplar, and spruce. Furthermore, they observed that the radial swelling was less than the tangential swelling. Similarly, Liu et al. [11] concluded that the swelling of heat-treated Ailanthus wood was lower in all directions than that of untreated wood, with the tangential swelling greater than the radial swelling.
In the heat treatment process, the higher the temperature used, the lower the swelling rate of the treated wood [12]. Dubey et al. [13] studied Pinus radiata wood specimens’ dimensional stability in a 20 ± 2 °C water bath, 85 ± 5% RH, 20 ± 2 °C high humidity, and dry-freeze-wet three-cycle environment. Specimens treated at 210 °C recorded the maximum ASE, ranging from 53% to 60%, which demonstrated that the degradation of hemicellulose enhanced the dimensional stability of wood. Cermak et al. [9] also found the swelling reduced by increasing the treatment temperature from 180 °C to 220 °C.
Therefore, achieving better dimensional stability of the wood usually requires higher treatment temperatures.
Moreover, heat treatment most strongly changes wood’s mechanical properties, including static and dynamic bending resistance, with bending strength (MOR) reduced more than the modulus of elasticity (MOE) [14]. Similar results were found in the study of Eucalyptus regnans by Zhang et al. [15]: the MOR of treated wood decreased from about 80 MPa to 40 MPa as the temperature increased from 120 °C to 200 °C, while the MOE increased slightly, from about 9000 MPa to 10,000 MPa. Wang et al. [16] reported that when Populus tomentosa was heat treated in the range of 180 °C to 200 °C, the MOE gradually increased with temperature, but began to decrease when the treatment temperature exceeded 200 °C. Wang also attributed the increase in MOE to the formation of stable chemical bonds between cellulose molecules, which made the cell wall structure more stable, whereas excessive temperature led to the degradation of cellulose and a decline in MOE. Esteves et al. [17] indicated that the MOR of heat-treated Pinus pinaster and Eucalyptus globulus decreased by 40% and 50%, respectively, while the MOE decreased by 5% and 15%.
However, enhancing the dimensional stability of wood may reduce its mechanical properties. This largely restricts the scope of applications for heat-treated wood, especially in structural applications that demand high strength and stability. Birinci et al. [18] found that as the temperature and duration of heat treatment of Scots pine and beech sapwood increase, dimensional stability improves, while the MOE changes non-linearly, decreasing the most at 210 °C and the least at 180 °C. Kol et al. [19] studied the mechanical and physical properties of heat-treated fir and found that anti-swelling efficiency and anti-shrink efficiency increased from 8.1% and 4.1% to 33.1% and 30.7%, respectively, indicating improved dimensional stability, while the MOR dropped by 2.7% at 170 °C and by 15% at 212 °C.
In conclusion, suitable treatment temperature and time parameters must be carefully determined based on the specific application of the target wood during heat treatment modification. Identifying the optimum combination of treatment parameters requires numerous experiments, which consume both materials and time, because the relationship between heat treatment parameters and wood properties is complex and nonlinear. Hence, a growing number of researchers are turning to artificial neural networks (ANNs), which can approximate any complex nonlinear relationship.
ANNs have been widely used for predicting the properties of heat-treated wood [20]. Tiryaki et al. [21] successfully predicted the MOR and MOE of heat-treated beech and spruce wood based on treatment temperature, time, and wood species, with R² values greater than 0.99 for all data sets. Tiryaki et al. [22] simulated and predicted the volumetric swelling and shrinkage of heat-treated beech and pine, with R² values exceeding 0.98. Ozsahin et al. [23] also predicted the EMC and specific gravity (SG) of heat-treated Uludag fir and hornbeam wood with an ANN, and their results suggest that the prediction accuracy exceeds 91.4% for EMC and 97.7% for SG. In particular, back propagation (BP) neural networks have gained widespread use due to their ability to learn and generalize from input and output data [24,25]. Haftkhani et al. [26] predicted the water absorption and swelling of heat-treated fir wood using single- and multiple-input BP network models, with a mean absolute percentage error (MAPE) of less than 10%.
Nevertheless, the traditional BP neural network suffers from poor data generalization and low fitting accuracy [27]. To overcome these limitations, researchers have used meta-heuristic algorithms (MHAs) to optimise the thresholds and weights of BP neural networks [6]. Lei et al. [28] optimised a BP neural network with nonlinear weighted particle swarm optimisation (IPSO) to predict the absolute dry density of oak. The IPSO-optimised BP neural network achieved a higher correlation coefficient (0.938) and a lower root mean square error (RMSE, 0.0129) than both the plain BP model and the particle swarm optimisation (PSO)-optimised BP neural network. Ma et al. [29] optimised BP weights and thresholds with an innovative variant of the gray wolf optimization (GWO). When predicting the mechanical properties of heat-treated larch, it outperformed the BP neural network optimised by the traditional GWO: compared with the plain BP neural network, the new approach reduced the mean absolute error (MAE) by 74.5% and the mean square error (MSE) by 94.4%, and the MAPE decreased by more than 4%.
Furthermore, beluga whale optimisation (BWO) is a brand-new MHA that has been employed to address real-world challenges across several domains. In this study, we employ the BWO as an optimizer for the parameters of the BP neural network. Even so, Houssein et al. [30] argue that the BWO lacks diversity and is prone to converging on a local optimum: a solution better than its neighbours but not the best in the entire search space. This makes seeking the best parameters for BP neural networks challenging.
Considering the above, in this paper we propose an improved beluga whale optimisation (IBWO) to optimize the BP neural network for predicting the physical and mechanical properties of heat-treated wood. The IBWO algorithm addresses the standard BWO’s lack of diversity and its tendency to fall into local optima, while also accelerating convergence, so that it can quickly discover the best solution when predicting the physical and mechanical properties of wood. The IBWO algorithm was then employed to optimise the weights and thresholds of the BP neural network, establishing the wood properties prediction model.
Previous research has demonstrated that multilayer perceptron ANN models, using inputs involving wood species, treatment temperature, and time, can accurately predict MOR and MOE [21] along with wood volumetric changes [22]. Accordingly, we utilised the proposed model, incorporating these three parameters as inputs, to predict the air-dry volumetric swelling and shrinkage, MOE, and MOR of heat-treated wood. The reliability of the IBWO-BP model in predicting the physical and mechanical properties of heat-treated wood was demonstrated by comparing it to other MHAs and early prediction models.

2. Establishment of the IBWO-BP Model

2.1. The Standard BWO Algorithm

BWO is an innovative MHA that imitates the behaviour of beluga whales [31]. The model is separated into three stages: exploration, exploitation, and whale fall, which duplicate the swimming, hunting, and whale fall of beluga whales in nature.
During the exploration stage, beluga whales move randomly through the problem space, allowing them to execute better global searches. The positions in the exploration phase are updated using Equations (1) and (2), which are inspired by the paired swimming behaviour of these whales.
$$x_{i,j}^{t+1} = x_{i,p_j}^{t} + \left(x_{r,p_1}^{t} - x_{i,p_j}^{t}\right)(1 + r_1)\sin(2\pi r_2), \quad j \text{ even} \tag{1}$$

$$x_{i,j}^{t+1} = x_{i,p_j}^{t} + \left(x_{r,p_1}^{t} - x_{i,p_j}^{t}\right)(1 + r_1)\cos(2\pi r_2), \quad j \text{ odd} \tag{2}$$
where $t$ represents the current iteration number, $x_{i,j}^{t+1}$ is the updated position of the $i$-th beluga whale in dimension $j$, $r$ is the index of a beluga whale chosen at random, $r_1$ and $r_2$ are values picked independently from the range (0, 1), and $p_j$ and $p_1$ are dimensions drawn at random from the dimensional space.
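The exploration-phase update can be sketched in Python as follows. This is an illustrative re-implementation of Equations (1) and (2), not the authors' MATLAB code; the function name and the way random dimensions are drawn are assumptions.

```python
import numpy as np

def bwo_exploration(X, i, r_idx, rng):
    """One exploration-phase update for beluga whale i (Equations (1)-(2)).

    X     : (n, dim) population matrix
    r_idx : index of a randomly chosen beluga whale
    rng   : numpy random Generator
    Even-indexed dimensions use sin, odd-indexed dimensions use cos.
    """
    dim = X.shape[1]
    x_new = X[i].copy()
    for j in range(dim):
        pj = rng.integers(dim)           # random dimension p_j
        r1, r2 = rng.random(), rng.random()
        step = (X[r_idx, pj] - X[i, pj]) * (1 + r1)
        if j % 2 == 0:                   # Equation (1)
            x_new[j] = X[i, pj] + step * np.sin(2 * np.pi * r2)
        else:                            # Equation (2)
            x_new[j] = X[i, pj] + step * np.cos(2 * np.pi * r2)
    return x_new
```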
Equation (3) controls the BWO transition from exploration to exploitation. The algorithm shifts to the exploitation stage when the value of $B$ is less than or equal to 0.5, in which case Equation (4) is used to update the position of each beluga whale; otherwise, the algorithm remains in the global exploration stage.
During the exploitation stage, the incorporation of the Levy flight strategy serves to enhance algorithm convergence. Equations (5) and (6) establish the mathematical model of this strategy.
$$B = r\left(1 - \frac{t}{2t_{max}}\right) \tag{3}$$
where $t_{max}$ represents the upper limit of iterations, and $r$ changes randomly within the range (0, 1) in each iteration.
$$x_i^{t+1} = r_3 x_{best}^{t} - r_4 x_i^{t} + 2r_4\left(1 - \frac{t}{t_{max}}\right) L_f \left(x_r^{t} - x_i^{t}\right) \tag{4}$$
$$L_f = 0.05 \times \frac{u\sigma}{|v|^{1/\beta}} \tag{5}$$

$$\sigma = \left(\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\left((1+\beta)/2\right)\,\beta\, 2^{(\beta-1)/2}}\right)^{1/\beta} \tag{6}$$
where $u$ and $v$ are normally distributed random numbers, and $\beta$ is set to 1.5.
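A minimal Python sketch of the Levy flight step of Equations (5) and (6) is given below. It assumes $u$ and $v$ are drawn from a standard normal distribution, which the text does not fully specify; the function name is illustrative.

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, rng=None):
    """Levy flight step L_f per Equations (5)-(6), with beta = 1.5."""
    if rng is None:
        rng = np.random.default_rng()
    # Equation (6): sigma from the gamma function (Mantegna's formula)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, 1.0, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    # Equation (5): L_f = 0.05 * u * sigma / |v|^(1/beta)
    return 0.05 * u * sigma / np.abs(v) ** (1 / beta)
```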
The probability of a whale fall during the iteration is controlled by the parameter $W_f$, a linear function of the iteration number, as shown in Equation (7):
$$W_f = 0.1 - 0.05\,\frac{t}{t_{max}} \tag{7}$$
During the whale fall stage, the beluga whale position updates are determined by the current positions and the whale fall step length, as in Equations (8) and (9), where $u_b$ and $l_b$ represent the upper and lower bounds of the updating position.
$$x_i^{t+1} = r_5 x_i^{t} - r_6 x_r^{t} + r_7 x_{step} \tag{8}$$

$$x_{step} = (u_b - l_b)\exp\left(-2W_f n\,\frac{t}{t_{max}}\right) \tag{9}$$
where $n$ represents the size of the beluga whale population.
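The whale-fall update of Equations (7)-(9) can be sketched as below. This is an illustrative Python re-implementation; clipping the result to the search bounds is an assumption about how out-of-range positions are handled.

```python
import numpy as np

def whale_fall(X, i, r_idx, t, t_max, lb, ub, rng):
    """Whale-fall update for individual i (Equations (7)-(9))."""
    n = X.shape[0]
    Wf = 0.1 - 0.05 * t / t_max                           # Equation (7)
    x_step = (ub - lb) * np.exp(-2 * Wf * n * t / t_max)  # Equation (9)
    r5, r6, r7 = rng.random(3)
    x_new = r5 * X[i] - r6 * X[r_idx] + r7 * x_step       # Equation (8)
    return np.clip(x_new, lb, ub)                         # keep within bounds (assumed)
```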

2.2. The Proposed IBWO Algorithm

2.2.1. Bernoulli Chaotic Mapping

The initiation phase of most MHAs typically involves randomly generating a population within the boundaries of the search space, resulting in an uneven distribution throughout the space. However, employing an improved technique for generating the initial population can accelerate the discovery of superior solutions during this phase, ultimately reducing the computational burden and enhancing global convergence. It is noteworthy that chaos exhibits characteristics of semi-stochasticity and ergodicity [32]. Integrating both randomness and chaos has demonstrated greater benefits compared to relying exclusively on randomness, since it can enhance the efficiency and efficacy of the method.
Chaotic mappings, such as Logistic mapping and Tent mapping, have been extensively used in previous research studies [33,34]. However, the Logistic mapping tends to produce values within the ranges of (0, 0.1) and (0.9, 1.0), leading to insufficient traversal of the generated initial population and consequently impacting the convergence speed of the algorithm. Conversely, both Bernoulli mapping and Tent mapping exhibit similar well-ergodic properties, as evidenced by the histogram distribution graph after 5000 iterations in the study conducted by Yang et al. [35]. To enrich the population diversity and expedite convergence, we introduce the utilization of Bernoulli mapping in our algorithm. Equation (10) represents the corresponding mathematical expression.
$$x_{n+1} = \begin{cases} \dfrac{x_n}{1-d}, & 0 < x_n \le 1-d \\[4pt] \dfrac{x_n-(1-d)}{d}, & 1-d < x_n < 1 \end{cases} \tag{10}$$
where  d  serves as a chaos control parameter, determining the range of bipartite segments to ensure their non-overlapping nature.
As depicted in Figure 1, the one-dimensional population initialization distribution at d = 0.4 exhibits a higher degree of dispersion following the mapping process, with a reduced number of individuals located on the boundary and overlapping. The broader range of the initialized distribution in the early stages of the optimization process can effectively promote population diversity and mitigate the likelihood of converging to local optima.
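Population initialization with the Bernoulli map of Equation (10) can be sketched as follows. This is an illustrative Python version; seeding each individual from successive map iterates and scaling into the search bounds are assumptions about implementation details the paper does not spell out.

```python
import numpy as np

def bernoulli_init(n, dim, lb, ub, d=0.4, rng=None):
    """Initialize an (n, dim) population via the Bernoulli map (Equation (10))."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.random(dim)                  # random seed values in [0, 1)
    pop = np.empty((n, dim))
    for i in range(n):
        # Equation (10): piecewise Bernoulli shift with control parameter d
        x = np.where(x <= 1 - d, x / (1 - d), (x - (1 - d)) / d)
        pop[i] = lb + (ub - lb) * x      # map chaotic values into [lb, ub]
    return pop
```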

2.2.2. Firefly Algorithm (FA) Disturbance

The logic of the standard BWO algorithm is relatively simple, comprising just four position update formulas. However, it often becomes ensnared in local optima [30]. To address this issue, we propose integrating the position update strategy from the FA to improve both accuracy and the capacity to escape from local optima to some extent. The FA introduces perturbations to the positions of all beluga whales within the search space, guiding them towards the best individual. This methodology significantly enhances both the pace of convergence and the precision of the model.
The FA is a meta-heuristic optimization technique inspired by the behaviour of fireflies in nature. It was first proposed by Yang [36], a British scholar. The algorithm mimics the behaviour of light signals emitted by fireflies during breeding and searching for food. It conducts iterative searches in the solution space to gradually find the optimal solution by continuously optimizing the behaviour of fireflies. The algorithm is based on three key principles. Firstly, fireflies are gender-neutral, and any firefly can attract another firefly. Secondly, brightness is positively correlated with attraction and diminishes with distance. Therefore, a firefly tends to move towards a brighter firefly, and if none are present, it will move randomly. Thirdly, the brightness of a firefly is influenced by the value of the objective function.
Since the attraction of a firefly is proportional to the light intensity seen by neighbouring fireflies, we define the attraction in the model of this paper with Equation (11), where $\gamma$ is the light absorption coefficient, taken as a constant 1, $r$ denotes the distance between fireflies, and $\beta_0$ is the attraction at $r = 0$. The position update Equation (12) is then introduced into the algorithm.
$$\beta(r) = \beta_0 e^{-\gamma r^2} \tag{11}$$

$$x_i^{t+1} = x_i^{t} + \beta\left(x_j - x_i\right) + \alpha\left(r - \tfrac{1}{2}\right) \tag{12}$$
where $\alpha$ is a randomization parameter and $r$ is randomly generated within the range (0, 1).
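The FA perturbation of Equations (11) and (12) can be sketched as follows. The default values of $\beta_0$ and $\alpha$ are illustrative assumptions (the paper fixes only $\gamma = 1$), and moving each whale toward the best individual follows the description in the text.

```python
import numpy as np

def firefly_move(x_i, x_best, beta0=1.0, gamma=1.0, alpha=0.5, rng=None):
    """Move x_i toward the brighter (best) individual per Equations (11)-(12)."""
    if rng is None:
        rng = np.random.default_rng()
    r2 = np.sum((x_best - x_i) ** 2)        # squared distance between individuals
    beta = beta0 * np.exp(-gamma * r2)      # Equation (11): attraction decays with distance
    r = rng.random(x_i.shape)
    # Equation (12): attraction term plus a small random perturbation
    return x_i + beta * (x_best - x_i) + alpha * (r - 0.5)
```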

2.2.3. Opposition-Based Learning Based on Lens Imaging (lensOBL)

While the perturbation caused by the fireflies can enhance the accuracy and convergence speed of the algorithm, there remains a possibility of getting trapped in local optima. Therefore, to enhance the algorithm’s global search capability and to escape local optima during the iterative process, we incorporate the lensOBL mechanism for the best individuals.
Opposition-based learning (OBL) was introduced by Tizhoosh [37]. OBL searches for solutions in the opposite direction of current positions and has been found to significantly enhance the search ability of algorithms. A comprehensive review of OBL can be found in the work of Mahdavi et al. [37]. However, the inverted solution obtained with ordinary OBL is fixed: if an individual has fallen into a local optimum and its inverted solution is inferior to the current solution, the OBL strategy cannot help the individual escape from that local optimum. lensOBL, on the other hand, can effectively address this issue.
The principle of lensOBL is exemplified by the one-dimensional space shown in Figure 2, where the spatial range [a, b] is symmetric about the point  o . Suppose that at point  x , a point in the interval [a, b], there is an object  P  of vertical height  h , which forms an image of height  h *  after passing through a convex lens with focal length of  f  placed at point  o . The corresponding  x *  is the opposite solution of  x . This leads to Equation (13).
$$\frac{(a+b)/2 - x}{x^{*} - (a+b)/2} = \frac{h}{h^{*}} \tag{13}$$
Let $k = h/h^{*}$; $k$ is then the scaling factor. Rewriting Equation (13) gives the lensOBL inverse solution, Equation (14):
$$x^{*} = \frac{a+b}{2} + \frac{a+b}{2k} - \frac{x}{k} \tag{14}$$
In our model, lensOBL is taken as Equation (15), and the scaling factor is generated with Equation (16) in each iteration.
$$x^{*,t} = \frac{u_{b,j}+l_{b,j}}{2} + \frac{u_{b,j}+l_{b,j}}{2k} - \frac{x_{best}^{t}}{k} \tag{15}$$

$$k = \left(1 + \left(\frac{3t}{t_{max}}\right)^{1/2}\right)^{8} \tag{16}$$
where $x^{*,t}$ is the inverse solution, and $u_{b,j}$ and $l_{b,j}$ are the upper and lower bounds of the $j$-th dimension.
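Equations (15) and (16) can be sketched as a small Python function, shown below as an illustrative re-implementation. Note that at $t = 0$ the scaling factor $k = 1$ and the formula reduces to the classic OBL opposite $(l_b + u_b) - x$.

```python
import numpy as np

def lens_obl(x_best, lb, ub, t, t_max):
    """Lens-imaging opposition of the best individual (Equations (15)-(16))."""
    # Equation (16): scaling factor k grows with the iteration number t
    k = (1 + (3 * t / t_max) ** 0.5) ** 8
    mid = (lb + ub) / 2
    # Equation (15): inverse solution about the midpoint of [lb, ub]
    return mid + mid / k - x_best / k
```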

2.3. BP Neural Network

The BP neural network is a feed-forward neural network, also known as a multilayer perceptron (MLP). It consists of multiple layers of neurons, with adjacent layers connected by weights. Typically, a BP neural network contains at least an input layer, an output layer, and one or more hidden layers: the neurons in the input layer receive external input data, the neurons in the output layer produce the model’s predictions, and the hidden-layer neurons play a crucial role in model training.
Specifically, BP neural networks involve two main steps: forward propagation and backward propagation. The network accepts the external input data and generates the final output through the computation of the hidden-layer neurons. Then, the error between the output and the actual result is calculated, the error is backpropagated, and the weights between neurons are adjusted according to its magnitude, so that the output of the next forward propagation is closer to the actual result. The BP neural network gradually converges in the direction of the decreasing error gradient. Figure 3 illustrates the structure of a BP neural network with a single hidden layer, multiple inputs, and a single output.
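The two steps above can be sketched as one training iteration of a single-hidden-layer network with a tansig (tanh) hidden layer and purelin (linear) output, matching the activation functions discussed later. This is a minimal illustrative Python sketch, not the authors' MATLAB implementation; a constant factor in the gradient is absorbed into the learning rate.

```python
import numpy as np

def bp_train_step(X, y, W1, b1, W2, b2, lr=0.01):
    """One forward + backward pass of a single-hidden-layer BP network
    (tansig hidden layer, purelin output), minimizing mean squared error."""
    # Forward propagation
    h = np.tanh(X @ W1 + b1)          # hidden layer (tansig)
    y_hat = h @ W2 + b2               # output layer (purelin)
    err = y_hat - y                   # prediction error
    # Backward propagation of the error gradient
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tansig derivative: 1 - tanh^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    # Gradient-descent update of weights and thresholds
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    return float((err ** 2).mean())   # current mean squared error
```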
We chose a single output network model because our research targets the prediction of distinct physical and mechanical properties of heat-treated wood. Choosing key performance indicators as a single output simplifies the model structure and concentrates our efforts on modelling and optimizing specific properties, thus enhancing predictive accuracy.
Among practical applications, BP neural networks can be used for tasks such as classification, regression, and clustering. It has also been widely used in wood science, for instance, to predict the EMC and SG [6] and the surface roughness [38] of wood.
It should be noted that the initial weights and thresholds of the BP neural network are generated randomly, so it takes a lot of time for training and is easy to fall into the local optimal solution. Therefore, it is necessary to select the appropriate network structure and parameters according to the specific problem, as well as to carry out appropriate optimization algorithms to obtain better training results.

2.4. The IBWO-BP Algorithm

The randomly generated weights and thresholds in the BP model introduce significant parameter uncertainty, resulting in unstable model calculations [39]. By adopting MHAs, we can boost the model’s predictive ability by tuning these critical parameters. However, the standard BWO has poor initial population traversal and slow convergence, and easily falls into local optima. We therefore propose the IBWO algorithm.
We first apply the Bernoulli mapping, with its better traversal, to initialize the population and improve population diversity. Then, the position update formula of the FA is introduced to perturb the beluga individuals in the search space and speed up convergence. Finally, lensOBL is applied to the optimal beluga individual to widen the search space, enhance global search ability, and avoid falling into local optima.
The optimal individual position found by the algorithm is used to set the weights and thresholds of the BP neural network. Figure 4 displays the flow chart of the BP model optimised via the IBWO algorithm.
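For the optimiser to tune the BP parameters, each candidate solution is a flat vector holding all weights and thresholds. A possible unpacking of such a vector is sketched below; the 3-6-1 layer sizes follow the network later selected for the swelling-ratio model, and the function name and ordering of the segments are assumptions.

```python
import numpy as np

def unpack_params(theta, n_in=3, n_hidden=6, n_out=1):
    """Unpack a flat IBWO search vector into BP weights and thresholds.

    Layout (assumed): [W1 | b1 | W2 | b2] for a n_in-n_hidden-n_out network.
    """
    s1 = n_in * n_hidden                 # end of W1
    s2 = s1 + n_hidden                   # end of b1
    s3 = s2 + n_hidden * n_out           # end of W2
    W1 = theta[:s1].reshape(n_in, n_hidden)
    b1 = theta[s1:s2]
    W2 = theta[s2:s3].reshape(n_hidden, n_out)
    b2 = theta[s3:]
    return W1, b1, W2, b2
```

The IBWO fitness of a candidate vector can then be evaluated by unpacking it, running the BP network forward, and measuring the prediction error on the training set.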

3. Experimental

3.1. Data Preparation

Table A1 in Appendix A displays the dataset utilised for validating the model’s performance in this study, sourced from a 2008 Chinese PhD thesis published on China Knowledge Network [40]. Additionally, data on Chinese white poplar were published in a journal article [41], while findings from the authors’ studies on the dimensional stability and mechanical properties of Chinese fir wood were reported in related articles [42,43]. Consistency between these conclusions and those of other scholars [44,45] provides a basis for assuming the reliability of the experimental data.
The experiment utilized fifteen Chinese fir (Cunninghamia lanceolata (Lamb.) Hook) and white poplar (Populus tomentosa) trees. The Randomized Complete Block Design (RCBD) method was employed to minimize variation between trees. Prior to steam heat treatment, the boards were dried to a moisture content (MC) of 8% in a high-frequency vacuum dryer. Specimens with dimensions of 50 × 25 × 500 mm³ (radial × tangential × longitudinal) were placed in an airtight chamber with steam as a protective medium. They were subjected to twenty-five heat treatments combining five temperatures (170, 185, 200, 215, and 230 °C) and five treatment times (1, 2, 3, 4, and 5 h), while the oxygen content in the kiln was kept below 2% throughout the process. The heat-treated timbers’ moisture content was reduced to 4%. Boards with no obvious defects were selected for property testing, and untreated boards from the same tree species served as controls.
In this study, the swelling ratio of air-dry volume, shrinkage ratio of air-dry volume, MOE, and MOR of heat-treated Chinese fir heartwood, sapwood, and white poplar were predicted by the IBWO-BP model. MATLAB (R2022a) was employed for training and outputting the predicted values. We divided the 75 datasets into 2 groups at a ratio of 7 to 3, with 53 sets used for training and 22 sets for testing. Given the differing dimensions of the three inputs’ data, we normalized the input data using Equation (17) to mitigate any impact on training speed and prediction accuracy.
$$y = \frac{(y_{max} - y_{min})(x - x_{min})}{x_{max} - x_{min}} + y_{min} \tag{17}$$
where $y$ is the normalized value of $x$; $y_{max}$ and $y_{min}$ are set to 1 and 0, and $x_{max}$ and $x_{min}$ are the maximum and minimum values of $x$, respectively.
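Equation (17) is standard min-max normalization; a minimal Python sketch (the function name is illustrative) is:

```python
import numpy as np

def min_max_normalize(x, y_min=0.0, y_max=1.0):
    """Min-max normalization per Equation (17), mapping x into [y_min, y_max]."""
    x = np.asarray(x, dtype=float)
    return (y_max - y_min) * (x - x.min()) / (x.max() - x.min()) + y_min
```

For example, the five treatment temperatures (170, 185, 200, 215, 230 °C) map to 0, 0.25, 0.5, 0.75, and 1.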

3.2. Determination of Model Parameters

To evaluate the accuracy of the IBWO algorithm for optimizing BP parameters, we compared it to the results of five widely used MHAs: Genetic Algorithm (GA), Particle Swarm Optimization (PSO) [46], Whale Optimization Algorithm (WOA) [47], Dung Beetle Optimizer (DBO) [48], and BWO [31].

3.2.1. Determine the Parameters of the MHAs

As depicted in Table 1, the comparison algorithms’ parameters are established based on the parameters suggested in the original author’s work. This study also establishes a consistent maximum iteration of 100 and a population size of 50 for each algorithm.

3.2.2. Determine the Activation Function of the BP Model

The BP neural network used in the IBWO-BP model in this study comprises a 3-layer network containing an input layer, a hidden layer, and an output layer. We set the learning rate of the BP model to 0.01, the target deviation to 0.0001, and the upper limit of training iterations to 1000.
In the BP model, activation functions serve to boost the network’s nonlinearity. Otherwise, even multilayer networks can only reflect linear relationships. Thus, selecting an appropriate activation function is critical for the BP model. Common activation functions include tansig, logsig, relu, sigmoid, and purelin. The hidden layer often uses nonlinear activation functions like tansig, logsig, relu, or sigmoid to enhance the network’s ability to learn and simulate complex data relationships. This is a key factor in solving nonlinear problems with BP neural networks. Conversely, regression problems more commonly use linear activation functions at the output level because they offer direct and continuous outputs that are suitable for real number prediction.
Table A2 in Appendix A shows the best combination of activation functions for the number of nodes associated with each wood property. Table A2 exhibits that the optimal hidden layer activation function for forecasting the swelling ratio of air-dry volume is tansig, with the output layer function being purelin. Similarly, activation functions that predict other properties can be identified.

3.2.3. Determine the Neuron Numbers of the BP Model

The input layer has three nodes corresponding to the heat treatment temperature, time, and wood species of the input data. The output layer has one node that corresponds to the predicted wood properties values.
Determining the number of hidden-layer neurons is crucial, as it directly affects the network’s learning and generalisation ability; the number should balance model complexity against generalizability. If there are too few hidden neurons, the network may struggle to learn and capture complex data features, resulting in under-fitting and poor prediction. Too many neurons may cause the network to accumulate noise from the input and overfit the training data, reducing its capacity to generalise. An appropriate number of hidden-layer neurons therefore improves both training efficiency and prediction accuracy.
The number of nodes in the hidden layer was determined according to the empirical Equation (18). At the same time, the activation functions employed in the BP model must be taken into account, since different activation functions affect the network’s learning efficiency and capacity, which in turn impacts the required number of neurons and overall network performance. We therefore conducted extensive experiments over every candidate number of nodes and activation function to identify the best configuration.
$$h = \sqrt{m+n} + \omega \tag{18}$$
where $\omega \in [1, 10]$, and $h$, $m$, and $n$ denote the numbers of nodes in the hidden, input, and output layers, respectively.
Each experimental model’s performance was assessed via five-fold cross-validation, with the RMSE serving as the performance evaluation criterion. A smaller RMSE value indicates better performance of the model on the given dataset, suggesting that the combination of the selected number of hidden-layer nodes and activation function is optimal.
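The node-selection procedure can be sketched as a generic k-fold RMSE scorer. This is an illustrative Python sketch; the `fit` and `predict` callables are hypothetical placeholders for training and evaluating one BP configuration.

```python
import numpy as np

def kfold_rmse(X, y, fit, predict, k=5, rng=None):
    """Average RMSE over k folds; used to score each (hidden nodes,
    activation function) combination."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])           # train on k-1 folds
        err = predict(model, X[test]) - y[test]   # evaluate on held-out fold
        scores.append(np.sqrt((err ** 2).mean())) # fold RMSE
    return float(np.mean(scores))
```

The configuration with the smallest averaged RMSE is then retained.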
Table A2 in Appendix A gives the number of possible hidden layer nodes for each wood property and their evaluation scores. Table A2 reveals that the optimal number of neurons in the hidden layer is six, the hidden layer’s activation function is tansig, and the output layer is purelin for forecasting the swelling ratio of air-dry volume. Similarly, neuron numbers that predict other properties can be identified.

3.3. Model Performance Evaluation

The MAPE, MAE, RMSE, MSE, and STD were selected to assess the validity of each model; for all five indexes, lower values indicate more reliable predictions. The values of each evaluation index were derived using Equations (19)–(23), respectively.
$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{p_i - \hat{p}_i}{p_i}\right| \times 100 \tag{19}$$

$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|p_i - \hat{p}_i\right| \tag{20}$$

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(p_i - \hat{p}_i\right)^2} \tag{21}$$

$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(p_i - \hat{p}_i\right)^2 \tag{22}$$

$$STD = \sqrt{\frac{\sum_{i=1}^{N}\left(e_i - \bar{e}\right)^2}{N-1}} \tag{23}$$
where  p i  is the actual value of the experimental sample,  p ^  is the predicted value,  N  is the amount of data,  e i  denotes the variation between the forecasted value and the real value, and  e ¯  denotes the mean of all the variances.
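The five indexes can be computed directly from the prediction errors; a minimal NumPy sketch (note the sample standard deviation with N − 1 in the denominator, matching Equation (23)):

```python
import numpy as np

def evaluate(p, p_hat):
    """MAPE, MAE, RMSE, MSE, and STD of the prediction errors (Equations (19)-(23))."""
    p, p_hat = np.asarray(p, float), np.asarray(p_hat, float)
    e = p - p_hat
    mape = float(np.mean(np.abs(e / p)) * 100)           # percentage error
    mae = float(np.mean(np.abs(e)))
    mse = float(np.mean(e ** 2))
    rmse = float(np.sqrt(mse))
    std = float(np.sqrt(np.sum((e - e.mean()) ** 2) / (len(e) - 1)))  # sample STD
    return {"MAPE": mape, "MAE": mae, "RMSE": rmse, "MSE": mse, "STD": std}
```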

4. Results and Discussion

This section discusses the IBWO-BP model for predicting the physical and mechanical properties of heat-treated wood from two distinct angles: the IBWO-BP model's performance is compared with other MHAs, and its effectiveness is assessed against four earlier prediction models.

4.1. Result of the Analysis Compared to Other MHAs

This section evaluates the proposed model's performance in terms of accuracy and convergence speed. To ensure reliable results and minimize random variation, we report the average results of each method across ten executions. Prediction error and the regression evaluation indexes are used to assess each model's forecasting accuracy.
Figure 5a–d present clear visual comparisons of the IBWO-BP model's performance against different MHAs for predicting the swelling ratio of air-dry volume, shrinkage ratio of air-dry volume, MOE, and MOR of heat-treated wood. In the figure, the values of the five evaluation indexes have been normalised to a common scale before plotting; closer proximity to the inner circle indicates better performance on the corresponding index.
Overall, the IBWO-BP model lies closest to the inner circle, while the other models vary from problem to problem, indicating that IBWO-BP is more reliable at predicting the physical and mechanical properties of heat-treated wood. However, Figure 5c shows that in predicting the MOE of heat-treated wood, the STD of BWO-BP outperforms only that of WOA-BP. Additionally, the STD of IBWO-BP, while still the smallest, is notably close to that of PSO-BP. These findings suggest that BWO-BP and IBWO-BP have difficulty fitting the MOE data precisely, leading to a larger spread in prediction errors.
Nevertheless, IBWO-BP still achieves the best values on all five indexes for predicting MOE. We attribute this to the lensOBL approach added to the BWO algorithm, which allows the IBWO algorithm to search in the reverse direction when training the model on MOE data, thereby avoiding suboptimal solutions. As a result, the IBWO-BP model outperforms the BWO-BP model in MOE prediction.
More detailed data are shown in Table 2. After optimization with the MHAs, the predictive performance of the BP model improved to varying degrees; among the four predicted properties of heat-treated wood, the IBWO-BP model performs best. Compared with the other models, the IBWO-BP model shows reductions of at least 53.7%, 31.9%, 42.1%, 34.9%, and 19.5% in MSE, RMSE, MAPE, MAE, and STD, respectively. This indicates that the IBWO-BP algorithm yields predictions that are more accurate with a narrower distribution of prediction errors, and that its prediction results are more uniform and dependable.
Consequently, optimizing the BP network with IBWO substantially improves prediction precision. Although the other MHAs also improve the BP neural network, they remain less effective than the proposed IBWO method.
Figure 6a–d display the outcomes of the IBWO-BP, BWO-BP, and BP models in forecasting the physical and mechanical properties of heat-treated wood. Both the BWO-BP and IBWO-BP models produce predictions closer to the actual values than the BP model, which demonstrates that optimizing BP with the BWO algorithm is a feasible way to improve prediction accuracy; the figure also clearly shows that the IBWO-BP results are more accurate still. Furthermore, the convergence curves of the GA-BP, PSO-BP, WOA-BP, DBO-BP, BWO-BP, and IBWO-BP models in predicting the four properties of heat-treated wood are shown in Figure 7a–d. Each model exhibits a distinct convergence rate and best score across the property prediction tasks. In comparison, the IBWO-BP model consistently converges faster and reaches lower score values in all four prediction scenarios.

4.2. Result of Comparison Analysis with Earlier Prediction Models

To further assess the IBWO-BP model's effectiveness in predicting the physical and mechanical properties of heat-treated wood, we conducted a comparison with previous studies. Four models were selected for the comparative analysis: the Aquila optimizer-BP model (AO-BP) [6], the nonlinear and adaptive grouping gray wolf optimisation-BP model (NAGGWO-BP) [29], decision tree regression (DT) [49], and multiple linear regression (MLR) [50].
In the AO-BP model, the Aquila optimizer is employed to optimise the BP model's weights and thresholds. In the NAGGWO-BP model, the GWO algorithm is improved through population initialization, nonlinear control parameters, adaptive grouping, and a random reverse learning approach, and the improved algorithm is then used to optimise the BP parameters. DT is a nonlinear model that predicts according to decision rules learned from the data, while MLR is a classical model that predicts through a linear relationship between the independent and dependent variables.
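For reference, the two classical baselines can be reproduced with scikit-learn; the hyperparameters and toy data below are illustrative and not those of the cited studies [49,50]:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

# Toy inputs (temperature/degC, time/h) and a response, for illustration only
X = np.array([[170, 1], [185, 2], [200, 3], [215, 4], [230, 5]], dtype=float)
y = np.array([10.80, 10.57, 10.43, 9.92, 8.60])  # e.g., MOE in GPa

dt = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)   # nonlinear, rule-based
mlr = LinearRegression().fit(X, y)                                  # linear in the inputs
print(dt.predict(X[:1]), mlr.predict(X[:1]))
```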
Table 3 presents the results of the comparison analysis. AO-BP, NAGGWO-BP, DT, and MLR display varied prediction results across the properties, and MLR usually yields higher evaluation index values than the other models, indicating poor accuracy in predicting the physical and mechanical properties of heat-treated wood. In addition, NAGGWO-BP frequently outperforms AO-BP, DT, and MLR, which is likely attributable to the improvements made to the GWO algorithm.
Nevertheless, the IBWO-BP model exhibited lower MSE, RMSE, MAPE, MAE, and STD values than the other models: the MSE decreased by 39.7%, the RMSE by 22.4%, the MAPE by 9.8%, the MAE by 31.5%, and the STD by 18.9%. This suggests that the IBWO-BP model proposed in this study is both more precise and more stable in predicting the swelling ratio of air-dry volume, shrinkage ratio of air-dry volume, MOE, and MOR of heat-treated wood.
The following explanations may have contributed to the difference in the study results:
The DT and MLR models show relatively poor prediction results for several reasons. On the one hand, the MLR model assumes a linear relationship between the independent and dependent variables, whereas the properties of heat-treated wood are governed by complex nonlinear relationships among temperature, time, and wood species. DT can handle nonlinear data, although noise and outliers can cause it to overfit or underfit. On the other hand, the simplicity of these two models makes their predictions easy to interpret and evaluate, but it also restricts their ability to model complex data from small samples.
In the IBWO-BP model, the Bernoulli mapping enriches population diversity during the initialisation of the beluga whale population, providing a solid foundation for the optimisation search in the subsequent exploration and exploitation stages. This improves the algorithm's convergence speed; similar benefits are evident in NAGGWO.
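A sketch of the Bernoulli-map initialisation is shown below; the piecewise map form and the seed value z0 are assumptions, while the control parameter d = 0.4 follows Table 1:

```python
import numpy as np

def bernoulli_init(pop_size, dim, lb, ub, d=0.4, z0=0.326):
    """Population initialisation via the Bernoulli chaotic map
    z' = z/(1-d) if z <= 1-d else (z-(1-d))/d, scaled to [lb, ub].
    d = 0.4 follows Table 1; the seed z0 is an arbitrary assumption."""
    z = np.empty((pop_size, dim))
    x = z0
    for i in range(pop_size):
        for j in range(dim):
            x = x / (1 - d) if x <= 1 - d else (x - (1 - d)) / d
            z[i, j] = x
    return lb + z * (ub - lb)
```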
Additionally, the position update formula of FA is introduced into the IBWO algorithm. When the algorithm is trapped in an inferior solution, the firefly disturbance helps it escape from the current solution. Moreover, following a firefly disturbance, beluga whales move toward the individual with the greatest fitness in the population, improving the quality of candidate solutions in the search space and making the optimum easier to locate. With more population position update strategies than the standard BWO algorithm, IBWO can locate the optimal solution more accurately, which helps identify suitable BP model weights and thresholds.
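The firefly-style attraction step can be sketched as follows; the light absorption intensity γ = 1 follows Table 1, while β0 and the random step size α are illustrative choices:

```python
import numpy as np

def firefly_move(x, best, gamma=1.0, beta0=1.0, alpha=0.2, rng=None):
    """FA-style attraction of a beluga individual x toward the fittest
    individual `best`, with attractiveness decaying as exp(-gamma * r^2),
    plus a small random perturbation."""
    if rng is None:
        rng = np.random.default_rng(0)
    r2 = float(np.sum((best - x) ** 2))           # squared distance to the best individual
    beta = beta0 * np.exp(-gamma * r2)            # attractiveness (Table 1: gamma = 1)
    return x + beta * (best - x) + alpha * (rng.random(x.shape) - 0.5)
```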
Last but not least, applying the lensOBL mechanism to the optimal individual reduces the chance of getting stuck in local optima, thereby enhancing the model's global search ability. Meanwhile, the proposed nonlinearly increasing scaling factor starts from a small value, which widens the range of the reverse solution produced by lensOBL and improves the algorithm's capacity for global exploration. As the number of iterations increases, the scaling factor grows and the reverse solution range gradually shrinks, which sharpens the algorithm's local search in the late iterations. NAGGWO instead introduces random reverse learning [29]; this randomness increases diversity in the solution space and helps avoid local optima, but it also raises uncertainty. In contrast, lensOBL adjusts the scaling factor to control the search scope more flexibly.
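A sketch of the lensOBL reverse solution follows; the lens-imaging formula reduces to the classical opposite point (lb + ub) − x when the scaling factor k = 1, and the quadratic schedule for k is one plausible nonlinearly increasing form, not necessarily the paper's exact one:

```python
import numpy as np

def lens_obl(x_best, lb, ub, k):
    """Lens-imaging opposite solution: x* = (lb+ub)/2 + (lb+ub)/(2k) - x/k.
    With k = 1 this reduces to the classical opposite point (lb + ub) - x."""
    mid = (lb + ub) / 2.0
    return mid + mid / k - x_best / k

def scale_factor(t, t_max, k_min=1.0, k_max=100.0):
    """Illustrative nonlinearly increasing schedule for k: small early
    (wide reverse-solution range), large late (fine local search)."""
    return k_min + (k_max - k_min) * (t / t_max) ** 2
```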
In conclusion, the IBWO algorithm proposed in this research can quickly identify the optimal solution and enhance the reliability of the model's output. The comparative study shows that the IBWO-BP model better predicts the physical and mechanical properties of heat-treated wood.

5. Conclusions

  • This paper presents a variant of BWO, called IBWO, which integrates Bernoulli chaotic mapping, the FA position update, and the lensOBL mechanism to address the BWO algorithm's tendency to become trapped in local optima. Firstly, Bernoulli mapping is used to initialize the population. Secondly, the firefly perturbation moves beluga whale individuals toward the optimal one, which accelerates convergence. Finally, lensOBL is applied to the optimal individual to strengthen the algorithm's ability to escape from local optima. As a result, both the convergence speed and the accuracy of the algorithm are enhanced.
  • The IBWO-BP model was established to predict the swelling ratios of air-dry volume, shrinkage ratios of air-dry volume, MOE and MOR of heat-treated wood based on wood species, treatment time, and temperature. The model is evaluated by comparing its predictions with those of the BP, GA-BP, PSO-BP, WOA-BP, DBO-BP, and BWO-BP models. The results show that the MAPE, MAE, MSE, RMSE, and STD values of the IBWO-BP model are the lowest.
  • The IBWO-BP model is compared with previous prediction models, including AO-BP, NAGGWO-BP, DT, and MLR. The results show that, relative to these models, IBWO-BP greatly decreased the evaluation indexes MSE, RMSE, MAPE, MAE, and STD, demonstrating that the proposed model predicts the properties of heat-treated wood more accurately and stably.
  • Although the model in this paper has advantages in predicting the properties of heat-treated wood, its performance on high-dimensional data remains unknown. Future improvements will involve combining feature selection methods and taking time series factors into account. Furthermore, the IBWO method proposed in this study has potential practical value in tackling other challenges in the wood industry, such as solving the one-dimensional cutting-stock problem for wood and minimising wood waste through the IBWO algorithm [51]. A detailed examination of this topic may be the focus of our future efforts.

Author Contributions

Conceptualization, Q.W. and W.W.; methodology, Q.W.; software, Q.W.; validation, Y.H. and W.W.; formal analysis, M.L. and Y.H.; investigation, W.W.; resources, Q.W., Y.H. and M.L.; data curation, Q.W.; writing—original draft preparation, Q.W.; writing—review and editing, Q.W. and Y.H.; visualization, Q.W. and M.L.; supervision, W.W.; project administration, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Scientific Foundation of Heilongjiang Province, grant number LC201407.

Data Availability Statement

Data are available upon request from the corresponding author. The data that support the findings of this study are openly available in China National Knowledge Infrastructure [40].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Wood treatment conditions and corresponding physical and mechanical properties.
Wood Species | Test Temperature/°C | Test Time/h | Shrinkage Ratio of Air-Dry Volume/% | Swelling Ratio of Air-Dry Volume/% | Bending Strength/MPa | Modulus of Elasticity/GPa
China-Fir heartwood17014.673.772.4610.8
China-Fir heartwood18514.22.7370.5610.69
China-Fir heartwood20014.152.4162.8610.66
China-Fir heartwood21512.752.1255.6810.18
China-Fir heartwood23012.251.8451.919.41
China-Fir heartwood17024.53.2770.6810.88
China-Fir heartwood18524.122.5466.8710.57
China-Fir heartwood20023.482.0461.4110.33
China-Fir heartwood21522.461.5854.319.97
China-Fir heartwood23022.221.4948.69.35
China-Fir heartwood17034.462.7766.6310.9
China-Fir heartwood18534.022.3766.410.54
China-Fir heartwood20033.271.860.2810.43
China-Fir heartwood21532.421.5353.089.95
China-Fir heartwood23032.151.3144.088.96
China-Fir heartwood17044.322.3465.7710.72
China-Fir heartwood18543.892.2165.3510.53
China-Fir heartwood20043.11.7757.810.25
China-Fir heartwood21542.221.4251.179.92
China-Fir heartwood23042.021.2341.718.76
China-Fir heartwood17053.942.2465.0210.41
China-Fir heartwood18553.642.0164.610.29
China-Fir heartwood20052.881.657.4410.07
China-Fir heartwood21552.091.2548.219.42
China-Fir heartwood23051.961.1336.978.6
China-Fir sapwood17016.074.2273.4512.16
China-Fir sapwood18516.023.9770.7812.35
China-Fir sapwood20015.242.866.6311.98
China-Fir sapwood21514.022.4259.1111.82
China-Fir sapwood23013.852.1755.8411.45
China-Fir sapwood17025.863.9171.3612.37
China-Fir sapwood18525.533.6468.112.18
China-Fir sapwood20024.612.7359.7911.83
China-Fir sapwood215242.1756.2911.14
China-Fir sapwood23023.162.0650.1810.39
China-Fir sapwood17035.673.8470.0212.04
China-Fir sapwood18535.473.4168.0711.53
China-Fir sapwood20034.452.552.9611.19
China-Fir sapwood21533.611.8950.2711.01
China-Fir sapwood23032.721.4740.1910.09
China-Fir sapwood17045.443.4169.4611.61
China-Fir sapwood18545.063.2765.1911.14
China-Fir sapwood20043.842.0347.7111.06
China-Fir sapwood21543.271.7940.6910.27
China-Fir sapwood23042.381.3938.269.97
China-Fir sapwood17055.073.3168.8511.38
China-Fir sapwood18554.892.163.6310.37
China-Fir sapwood20053.141.8346.3410.12
China-Fir sapwood21553.511.4738.49.92
China-Fir sapwood23052.371.2634.729.33
Chinese White Poplar17016.412.5378.1711.71
Chinese White Poplar18516.361.9980.311.43
Chinese White Poplar20016.161.7964.4411.49
Chinese White Poplar21514.51.4854.8811.48
Chinese White Poplar23013.291.3943.3211.22
Chinese White Poplar17026.362.4978.8311.81
Chinese White Poplar18525.951.8673.8911.23
Chinese White Poplar20025.141.7759.6311.43
Chinese White Poplar21523.251.3849.111.35
Chinese White Poplar23022.661.3342.2511.1
Chinese White Poplar17036.272.3781.5911.67
Chinese White Poplar18535.711.6573.1411.08
Chinese White Poplar20034.51.5151.7111.38
Chinese White Poplar21533.181.2747.1811.21
Chinese White Poplar23032.431.2537.6910.81
Chinese White Poplar17046.011.9376.9911.35
Chinese White Poplar18545.451.5266.4810.95
Chinese White Poplar20044.251.4448.7111.27
Chinese White Poplar21543.021.2141.6310.95
Chinese White Poplar23042.341.2136.0110.7
Chinese White Poplar17055.821.8276.4811.3
Chinese White Poplar18555.41.4866.1710.74
Chinese White Poplar20054.171.3947.5111.04
Chinese White Poplar21552.531.1436.9710.69
Chinese White Poplar23052.121.0933.5810.48
Table A2. Hidden layer neuron number and optimal activation function evaluation results.
Number of Hidden Layer Neurons | Activation Functions | Average Score | First-Fold Score | Second-Fold Score | Third-Fold Score | Fourth-Fold Score | Fifth-Fold Score
Swelling ratio of air-dry volume3relu-purelin0.021120.025930.021120.023880.015890.01876
4tansig-purelin0.003150.005050.000970.003150.004340.00224
5tansig-purelin0.002540.001770.003400.002540.001770.00323
6tansig-purelin0.002200.003380.000440.002200.003350.00162
7tansig-purelin0.003910.002920.003330.004940.003910.00446
8tansig-purelin0.003030.004010.002930.001540.003660.00303
9logsig-purelin0.004240.004240.005960.003470.001870.00568
10tansig-purelin0.003220.004660.003220.002940.002820.00248
11logsig-purelin0.011560.014520.024390.003380.003930.01156
12tansig-purelin0.004930.007240.004930.006020.002790.00368
Shrinkage ratio of air-dry volume3logsig-purelin0.001940.002950.001940.001390.001160.00226
4sigmoid-purelin0.003070.003080.002080.001750.002870.00560
5tansig-purelin0.002130.002390.002130.002480.001810.00185
6tansig-purelin0.001150.001520.000330.001390.001150.00135
7logsig-purelin0.003330.003400.003330.001730.003370.00481
8tansig-purelin0.002960.005710.001790.003410.000930.00296
9logsig-purelin0.002200.001570.001810.002200.002020.00339
10tansig-purelin0.006930.009000.004370.007190.006930.00716
11relu-purelin0.003480.001990.006110.001960.003870.00348
12tansig-purelin0.006700.007470.004180.005910.006700.00926
Modulus of Elasticity3logsig-sigmod0.003310.002910.003240.002720.004100.00360
4tansig-purelin0.002810.003390.002650.003920.001590.00251
5tansig-purelin0.002800.003520.002830.002890.002670.00210
6relu-sigmod0.005410.005410.011800.001960.001310.00658
7tansig-purelin0.001500.001840.000280.001500.002420.00148
8tansig-purelin0.002430.002390.003540.001260.002520.00243
9tansig-purelin0.005940.007110.005940.007840.004880.00396
10sigmoid-purelin0.016750.016750.014960.028610.008330.01512
11tansig-purelin0.001830.001920.001540.002390.001830.00147
12tansig-purelin0.001310.001310.000370.001910.001550.00141
Bending Strength3tansig-purelin0.001560.002020.002130.001540.000560.00156
4sigmoid-purelin0.001690.002940.000220.003070.001690.00055
5tansig-purelin0.003080.005170.000370.005840.003080.00095
6logsig-purelin0.001320.001680.000440.001320.001440.00172
7tansig-purelin0.001360.001430.001400.001300.001310.00136
8tansig-purelin0.001060.001080.000400.001060.001290.00145
9tansig-purelin0.001620.000190.001620.002680.002700.00092
10logsig-purelin0.001940.002870.001890.001940.000820.00217
11tansig-purelin0.001270.001010.001890.000430.001780.00128
12sigmoid-purelin0.003280.004170.003660.002570.002630.00338

References

  1. Sandberg, D.; Kutnar, A.; Mantanis, G. Wood Modification Technologies—A Review. iForest 2017, 10, 895–908. [Google Scholar] [CrossRef]
  2. Hill, C.; Altgen, M.; Rautkari, L. Thermal Modification of Wood-a Review: Chemical Changes and Hygroscopicity. J. Mater. Sci. 2021, 56, 6581–6614. [Google Scholar] [CrossRef]
  3. Cao, Y.; Jiang, J.; Lu, J.; Huang, R.; Jiang, J.; Wu, Y. Color Change of Chinese Fir Through Steam-Heat Treatment. BioResources 2012, 7, 2809–2819. Available online: https://bioresources.cnr.ncsu.edu/resources/color-change-of-chinese-fir-through-steam-heat-treatment/ (accessed on 28 March 2024). [CrossRef]
  4. Ozcan, S.; Ozcifci, A.; Hiziroglu, S.; Toker, H. Effects of Heat Treatment and Surface Roughness on Bonding Strength. Constr. Build. Mater. 2012, 33, 7–13. [Google Scholar] [CrossRef]
  5. Bekhta, P.; Proszyk, S.; Lis, B.; Krystofiak, T. Gloss of Thermally Densified Alder (Alnus glutinosa Goertn.), Beech (Fagus sylvatica L.), Birch (Betula verrucosa Ehrh.), and Pine (Pinus sylvestris L.) Wood Veneers. Eur. J. Wood Wood Prod. 2014, 72, 799–808. [Google Scholar] [CrossRef]
  6. Chen, Y.; Wang, W.; Li, N. Prediction of the Equilibrium Moisture Content and Specific Gravity of Thermally Modified Wood via an Aquila Optimization Algorithm Back-Propagation Neural Network Model. BioRes 2022, 17, 4816–4836. [Google Scholar] [CrossRef]
  7. Shukla, S.R. Evaluation of dimensional stability, surface roughness, colour, flexural properties and decay resistance of thermally modified. Acacia auriculiformis. Maderas-Cienc. Tecnol. 2019, 21, 433–446. [Google Scholar] [CrossRef]
  8. Esteves, B.M.; Pereira, H.M. Wood Modification by Heat Treatment: A Review. Bioresources 2009, 4, 370–404. [Google Scholar] [CrossRef]
  9. Cermak, P.; Suchomelova, P.; Hess, D. Swelling Kinetics of Thermally Modified Wood. Eur. J. Wood Wood Prod. 2021, 79, 1337–1340. [Google Scholar] [CrossRef]
  10. Cermak, P.; Rautkari, L.; Horacek, P.; Saake, B.; Rademacher, P.; Sablik, P. Analysis of Dimensional Stability of Thermally Modified Wood Affected by Re-Wetting Cycles. BioResources 2015, 10, 3242–3253. [Google Scholar] [CrossRef]
  11. Liu, X.Y.; Tu, X.W.; Liu, M. Effects of Light Thermal Treatments on the Color, Hygroscopity and Dimensional Stability of Wood. Wood Res. 2021, 66, 95–104. [Google Scholar] [CrossRef]
  12. Kocaefe, D.; Huang, X.; Kocaefe, Y. Dimensional Stabilization of Wood. Curr. For. Rep. 2015, 1, 151–161. [Google Scholar] [CrossRef]
  13. Dubey, M.K.; Pang, S.; Walker, J. Changes in Chemistry, Color, Dimensional Stability and Fungal Resistance of Pinus radiata D. Don Wood with Oil Heat-Treatment. Holzforschung 2012, 66, 49–57. [Google Scholar] [CrossRef]
  14. Kol, H.S. Characteristics of Heat-Treated Turkish Pine and Fir Wood after ThermoWood Processing. J. Environ. Biol. 2010, 31, 1007–1011. Available online: https://pubmed.ncbi.nlm.nih.gov/21506490/ (accessed on 16 February 2024). [PubMed]
  15. Zhang, T.; Tu, D.; Peng, C.; Zhang, X. Effects of Heat Treatment on Physical-Mechanical Properties of Eucalyptus regnans. BioResources 2015, 10, 3531–3540. [Google Scholar] [CrossRef]
  16. Wang, X.; Tu, D.; Chen, C.; Zhou, Q.; Huang, H.; Zheng, Z.; Zhu, Z. A Thermal Modification Technique Combining Bulk Densification and Heat Treatment for Poplar Wood with Low Moisture Content. Constr. Build. Mater. 2021, 291, 123395. [Google Scholar] [CrossRef]
  17. Esteves, B.; Marques, A.V.; Domingos, I.; Pereira, H. Influence of Steam Heating on the Properties of Pine (Pinus pinaster) and Eucalypt (Eucalyptus globulus) Wood. Wood Sci. Technol. 2007, 41, 193–207. [Google Scholar] [CrossRef]
  18. Birinci, E.; Karamanoglu, M.; Kesik, H.I.; Kaymakci, A. Effect of Heat Treatment Parameters on the Physical, Mechanical, and Crystallinity Index Properties of Scots Pine and Beech Wood. BioResources 2022, 17, 4713–4729. [Google Scholar] [CrossRef]
  19. Kol, H.; Sefil, Y.; Aysal, S. Effect of Heat Treatment on the Mechanical Properties, and Dimensional Stability of Fir Wood. In Proceedings of the XXVII International Conference Research for Furniture Industry, Ankara, Turkey, 17–19 September 2015; pp. 269–279. Available online: https://www.researchgate.net/publication/283498477 (accessed on 28 March 2024).
  20. Garcia Esteban, L.; Garcia Fernandez, F.; de Palacios, P. Prediction of Plywood Bonding Quality Using an Artificial Neural Network. Holzforschung 2011, 65, 209–214. [Google Scholar] [CrossRef]
  21. Tiryaki, S.; Hamzaçebi, C. Predicting Modulus of Rupture (MOR) and Modulus of Elasticity (MOE) of Heat Treated Woods by Artificial Neural Networks. Measurement 2014, 49, 266–274. [Google Scholar] [CrossRef]
  22. Tiryaki, S.; Bardak, S.; Aydin, A.; Nemli, G. Analysis of Volumetric Swelling and Shrinkage of Heat Treated Woods: Experimental and Artificial Neural Network Modeling Approach. Maderas-Cienc. Tecnol. 2016, 18, 477–492. [Google Scholar] [CrossRef]
  23. Ozsahin, S. Prediction of Equilibrium Moisture Content and Specific Gravity of Heat-Treated Wood by Artificial Neural Networks. Eur. J. Wood Prod. 2017, 76, 563–572. [Google Scholar] [CrossRef]
  24. Nguyen, T.H.V.; Nguyen, T.T.; Ji, X.; Guo, M. Predicting Color Change in Wood During Heat Treatment Using an Artificial Neural Network Model. BioResources 2018, 13, 6250–6264. [Google Scholar] [CrossRef]
  25. Yang, H.; Cheng, W.; Han, G. Wood Modification at High Temperature and Pressurized Steam: A Relational Model of Mechanical Properties Based on a Neural Network. BioResources 2015, 10, 5758–5776. [Google Scholar] [CrossRef]
  26. Haftkhani, A.R.; Abdoli, F.; Rashidijouybari, I.; Garcia, R.A. Prediction of Water Absorption and Swelling of Thermally Modified Fir Wood by Artificial Neural Network Models. Eur. J. Wood Wood Prod. 2022, 80, 1135–1150. [Google Scholar] [CrossRef]
  27. Ying, T.; Junqi, Y.; Anjun, Z. Predictive Model of Energy Consumption for Office Building by Using Improved GWO-BP. Energy Rep. 2020, 6, 620–627. [Google Scholar] [CrossRef]
  28. Lei, Y.; Jin-hao, C.; Long-fei, L.; Chao, L.; Yi-zhuo, Z. Prediction Model of Wood Absolute Dry Density by Near-Infrared Spectroscopy Based on IPSO-BP. Spectrosc. Spectr. Anal. 2020, 40, 2937–2942. [Google Scholar] [CrossRef]
  29. Ma, W.; Wang, W.; Cao, Y. Mechanical Properties of Wood Prediction Based on the NAGGWO-BP Neural Network. Forests 2022, 13, 1870. [Google Scholar] [CrossRef]
  30. Houssein, E.H.; Sayed, A. Dynamic Candidate Solution Boosted Beluga Whale Optimization Algorithm for Biomedical Classification. Mathematics 2023, 11, 707. [Google Scholar] [CrossRef]
  31. Zhong, C.; Li, G.; Meng, Z. Beluga Whale Optimization: A Novel Nature-Inspired Metaheuristic Algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  32. Tavazoei, M.S.; Haeri, M. Comparison of Different One-Dimensional Maps as Chaotic Search Pattern in Chaos Optimization Algorithms. Appl. Math. Comput. 2007, 187, 1076–1085. [Google Scholar] [CrossRef]
  33. Moysis, L.; Petavratzis, E.; Volos, C.; Nistazakis, H.; Stouboulos, L. A Chaotic Path Planning Generator Based on Logistic Map and modulo Tactics. Robot. Auton. Syst. 2020, 124, 103377. [Google Scholar] [CrossRef]
  34. Yan, Y.; Hongzhong, M.; Zhendong, L. An Improved Grasshopper Optimization Algorithm for Global Optimization. Chin. J. Electron. 2021, 30, 451–459. [Google Scholar] [CrossRef]
  35. Yu, Y.; Gao, S.; Cheng, S.; Wang, Y.; Song, S.; Yuan, F. CBSO: A Memetic Brain Storm Optimization with Chaotic Local Search. Memet. Comput. 2018, 10, 353–367. [Google Scholar] [CrossRef]
  36. Yang, X.-S. Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications; SAGA 2009, Lecture Notes in Computer Sciences; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792, pp. 169–178. [Google Scholar] [CrossRef]
  37. Mandavi, S.; Rahnamayan, S.; Deb, K. Opposition Based Learning: A Literature Review. Swarm Evol. Comput. 2018, 39, 1–23. [Google Scholar] [CrossRef]
  38. Wu, X.; Niu, H.; Li, X.-J.; Wu, Y. Research on GA-BP Neural Network Model of Surface Roughness in Air Drum Sanding Process for Poplar. Eur. J. Wood Wood Prod. 2022, 80, 477–487. [Google Scholar] [CrossRef]
  39. Bai, H.; Chu, Z.; Wang, D.; Bao, Y.; Qin, L.; Zheng, Y.; Li, F. Predictive Control of Microwave Hot-Air Coupled Drying Model Based on GWO-BP Neural Network. Dry. Technol. 2022, 41, 1148–1158. [Google Scholar] [CrossRef]
  40. Cao, Y. Properties and Control Theory for Strength Loss of Steam Heat-Treated Wood. Ph.D. Thesis, Chinese Academy of Forestry, Beijing, China, 2008. [Google Scholar]
  41. Guo, F.; Huang, R.; Lu, J.; Chen, Z.; Cao, Y. Evaluating the Effect of Heat Treating Temperature and Duration on Selected Wood Properties Using Comprehensive Cluster Analysis. J. Wood Sci. 2014, 60, 255–262. [Google Scholar] [CrossRef]
  42. Cao, Y.; Lu, J.; Huang, R.; Zhao, X.; Jiang, J. Effect of Steam-Heat Treatment on Mechanical Properties of Chinese Fir. BioResources 2012, 7, 1123–1133. Available online: https://bioresources.cnr.ncsu.edu/resources/effect-of-steam-heat-treatment-on-mechanical-properties-of-chinese-fir/ (accessed on 16 February 2024). [CrossRef]
  43. Cao, Y.; Lu, J.; Huang, R.; Jiang, J. Increased Dimensional Stability of Chinese Fir through Steam-Heat Treatment. Eur. J. Wood Wood Prod. 2012, 70, 441–444. [Google Scholar] [CrossRef]
  44. Bytner, O.; Laskowska, A.; Drozdzek, M.; Kozakiewicz, P.; Zawadzki, J. Evaluation of the Dimensional Stability of Black Poplar Wood Modified Thermally in Nitrogen Atmosphere. Materials 2021, 14, 1491. [Google Scholar] [CrossRef] [PubMed]
  45. Kozakiewicz, P.; Drozdzek, M.; Laskowska, A.; Grzeskiewicz, M.; Bytner, O.; Radomski, A.; Zawadzki, J. Effects of Thermal Modification on Selected Physical Properties of Sapwood and Heartwood of Black Poplar (Populus nigra L.). BioResources 2019, 14, 8391–8404. [Google Scholar] [CrossRef]
  46. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  48. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  49. Nasir, V.; Fathi, H.; Kazemirad, S. Combined Machine Learning–Wave Propagation Approach for Monitoring Timber Mechanical Properties under UV Aging. Struct. Health Monit. 2021, 20, 2035–2053. [Google Scholar] [CrossRef]
  50. Tiryaki, S.; Ozsahin, S.; Yildirim, I. Comparison of Artificial Neural Network and Multiple Linear Regression Models to Predict Optimum Bonding Strength of Heat Treated Woods. Int. J. Adhes. Adhes. 2014, 55, 29–36. [Google Scholar] [CrossRef]
  51. Montiel-Arrieta, L.J.; Barragan-Vite, I.; Seck-Tuoh-Mora, J.C.; Hernandez-Romero, N.; González-Hernández, M.; Medina-Marin, J. Minimizing the Total Waste in the One-Dimensional Cutting Stock Problem with the African Buffalo Optimization Algorithm. PeerJ Comput. Sci. 2023, 9, e1728. [Google Scholar] [CrossRef]
Figure 1. Bernoulli chaotic mapping: (a) scatter plot; (b) frequency distribution histogram.
Figure 2. Opposition-based learning strategy based on lens imaging.
Figure 3. BP neural network structure.
Figure 4. Flowchart of IBWO-BP algorithm.
Figure 5. Comparative model performance evaluation: (a) swelling ratio of air-dry volume; (b) shrinkage ratio of air-dry volume; (c) Modulus of Elasticity; (d) Bending Strength.
Figure 6. Comparison of different models’ predicted and actual values: (a) swelling ratio of air-dry volume; (b) shrinkage ratio of air-dry volume; (c) Modulus of Elasticity; (d) Bending Strength.
Figure 7. Comparison of different models’ convergence curves: (a) swelling ratio of air-dry volume; (b) shrinkage ratio of air-dry volume; (c) Modulus of Elasticity; (d) Bending Strength.
Table 1. MHAs’ parameters setting.
AlgorithmParameterSetting
GA   a 0.8
  b 0.05
PSO   C 1  and  C 2 2
Inertia weightLinearly reduction from 0.9 to 0.1
WOA   a Gradually reduced from 2 to 0
DBO   k  and  λ 0.1
  b  and  S 0.3 and 0.5
BWO   W f Linearly reduction from 0.1 to 0.05
IBWO   W f Linearly reduction from 0.1 to 0.05
Chaos control parameter  d 0.4
FA light absorption intensity  γ 1
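The chaos control parameter d = 0.4 listed for IBWO governs the Bernoulli chaos map used to initialise the population so that individuals cover the whole search space. A minimal sketch, assuming the common piecewise form of the Bernoulli map; the helper name and the linear scaling into [lb, ub] are illustrative, not the authors' exact implementation:

```python
import numpy as np

def bernoulli_init(pop_size, dim, lb, ub, d=0.4, seed=None):
    """Chaotic population initialisation via the Bernoulli map.

    Map (control parameter d):
        x_{n+1} = x_n / (1 - d)          if 0 <= x_n <= 1 - d
        x_{n+1} = (x_n - (1 - d)) / d    if 1 - d < x_n < 1
    """
    rng = np.random.default_rng(seed)
    pop = np.empty((pop_size, dim))
    x = rng.random(dim)  # chaotic seed values in (0, 1)
    for i in range(pop_size):
        # One Bernoulli-map step per individual keeps the sequence chaotic
        x = np.where(x <= 1.0 - d, x / (1.0 - d), (x - (1.0 - d)) / d)
        pop[i] = lb + x * (ub - lb)  # scale chaotic values into [lb, ub]
    return pop
```

Compared with uniform random initialisation, the chaotic sequence tends to spread individuals more evenly, which improves early-stage exploration.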
Table 2. Model evaluation results.

| Property | Metric | BP | GA-BP | PSO-BP | WOA-BP | DBO-BP | BWO-BP | IBWO-BP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Swelling ratio of air-dry volume | MSE ¹ | 0.7685 | 0.0548 | 0.0584 | 0.0817 | 0.0616 | 0.0514 | 0.0078 |
| | RMSE ² | 0.8766 | 0.2341 | 0.2416 | 0.2858 | 0.2481 | 0.2267 | 0.0881 |
| | MAPE ³ | 34.30% | 11.34% | 11.74% | 18.07% | 13.61% | 11.47% | 4.87% |
| | MAE ⁴ | 0.5685 | 0.1771 | 0.1915 | 0.2476 | 0.1908 | 0.1810 | 0.0731 |
| | STD ⁵ | 0.7956 | 0.2145 | 0.1905 | 0.1720 | 0.2533 | 0.1639 | 0.0802 |
| Shrinkage ratio of air-dry volume | MSE | 2.5615 | 0.6047 | 1.2640 | 1.4726 | 1.2796 | 0.3642 | 0.1354 |
| | RMSE | 1.6005 | 0.7777 | 1.1243 | 1.2135 | 1.1312 | 0.6035 | 0.3680 |
| | MAPE | 43.68% | 19.59% | 29.31% | 31.81% | 30.57% | 15.84% | 9.18% |
| | MAE | 1.4363 | 0.6064 | 0.8502 | 1.0368 | 0.9934 | 0.4841 | 0.2710 |
| | STD | 0.7228 | 0.5747 | 0.7950 | 0.6973 | 0.5539 | 0.4522 | 0.2995 |
| Modulus of Elasticity | MSE | 3.6039 | 0.7055 | 0.4342 | 2.3068 | 0.4231 | 0.3335 | 0.0306 |
| | RMSE | 1.8984 | 0.8399 | 0.6590 | 1.5188 | 0.6505 | 0.5775 | 0.1751 |
| | MAPE | 13.93% | 7.05% | 5.65% | 11.73% | 5.26% | 4.13% | 1.24% |
| | MAE | 1.5334 | 0.7821 | 0.6232 | 1.3020 | 0.5831 | 0.4584 | 0.1377 |
| | STD | 1.3284 | 0.3152 | 0.2192 | 0.8112 | 0.2950 | 0.5340 | 0.1765 |
| Bending Strength | MSE | 175.1510 | 76.6968 | 56.0293 | 34.1502 | 36.5137 | 24.2667 | 11.2461 |
| | RMSE | 13.2345 | 8.7577 | 7.4853 | 5.8438 | 6.0427 | 4.9261 | 3.3535 |
| | MAPE | 24.92% | 11.87% | 11.72% | 9.12% | 11.56% | 8.03% | 4.56% |
| | MAE | 11.9457 | 7.0586 | 6.2764 | 4.5274 | 5.5098 | 4.0772 | 2.6538 |
| | STD | 7.6931 | 6.8231 | 7.3761 | 5.4172 | 4.2721 | 4.8516 | 3.2941 |

¹ mean square error; ² root mean square error; ³ mean absolute percentage error; ⁴ mean absolute error; ⁵ standard deviation.
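The five metrics reported in Tables 2 and 3 can be computed from a model's predictions as follows. This is a sketch assuming STD denotes the standard deviation of the absolute prediction errors; the paper's exact STD definition may differ:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Compute MSE, RMSE, MAPE, MAE, and STD for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)                       # mean square error
    rmse = np.sqrt(mse)                           # root mean square error
    mape = np.mean(np.abs(err / y_true)) * 100.0  # mean absolute percentage error (%); y_true must be nonzero
    mae = np.mean(np.abs(err))                    # mean absolute error
    std = np.std(np.abs(err))                     # standard deviation of absolute errors (assumed definition)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "MAE": mae, "STD": std}
```

Lower values of all five metrics indicate better agreement between predicted and measured wood properties, which is the basis for the comparisons below.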
Table 3. Comparative analysis results.

| Property | Metric | AO-BP | NAGGWO-BP | DT | MLR | IBWO-BP |
| --- | --- | --- | --- | --- | --- | --- |
| Swelling ratio of air-dry volume | MSE | 0.2454 | 0.0159 | 0.2384 | 0.2511 | 0.0078 |
| | RMSE | 0.4954 | 0.1262 | 0.4882 | 0.5011 | 0.0881 |
| | MAPE | 26.51% | 7.43% | 17.20% | 22.19% | 4.87% |
| | MAE | 0.4260 | 0.1066 | 0.3577 | 0.4315 | 0.0731 |
| | STD | 0.2588 | 0.0989 | 0.4946 | 0.5032 | 0.0802 |
| Shrinkage ratio of air-dry volume | MSE | 0.3202 | 0.2247 | 0.9623 | 1.2019 | 0.1354 |
| | RMSE | 0.5659 | 0.4740 | 0.9810 | 1.0963 | 0.3680 |
| | MAPE | 12.46% | 10.18% | 24.30% | 29.99% | 9.18% |
| | MAE | 0.4536 | 0.3976 | 0.8408 | 0.9465 | 0.2710 |
| | STD | 0.5732 | 0.4796 | 0.9892 | 0.8150 | 0.2995 |
| Modulus of Elasticity | MSE | 0.2733 | 0.1784 | 3.0055 | 1.3058 | 0.0306 |
| | RMSE | 0.5227 | 0.4223 | 1.7336 | 1.1427 | 0.1751 |
| | MAPE | 4.23% | 2.97% | 12.92% | 9.03% | 1.24% |
| | MAE | 0.4702 | 0.3215 | 1.4215 | 1.0005 | 0.1377 |
| | STD | 0.4357 | 0.3498 | 1.1482 | 1.0248 | 0.1765 |
| Bending Strength | MSE | 79.2964 | 24.7224 | 159.1605 | 172.4711 | 11.2461 |
| | RMSE | 8.9049 | 4.9722 | 12.6159 | 13.1328 | 3.3535 |
| | MAPE | 12.91% | 7.03% | 23.39% | 24.92% | 4.56% |
| | MAE | 6.6840 | 4.1485 | 10.8015 | 10.8335 | 2.6538 |
| | STD | 8.3806 | 4.3390 | 8.2169 | 9.9867 | 3.2941 |
Wang, Q.; Wang, W.; He, Y.; Li, M. Prediction of Physical and Mechanical Properties of Heat-Treated Wood Based on the Improved Beluga Whale Optimisation Back Propagation (IBWO-BP) Neural Network. Forests 2024, 15, 687. https://doi.org/10.3390/f15040687