Article

Predicting the Mechanical Properties of Heat-Treated Woods Using Optimization-Algorithm-Based BPNN

College of Engineering and Technology, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(5), 935; https://doi.org/10.3390/f14050935
Submission received: 29 March 2023 / Revised: 21 April 2023 / Accepted: 27 April 2023 / Published: 2 May 2023

Abstract

This paper aims to enhance the accuracy of predicting the mechanical behavior of wood subjected to thermal modification using an improved dung beetle optimization (IDBO) model. The IDBO algorithm improves the original DBO algorithm via three main steps: (1) using piece-wise linear chaotic mapping (PWLCM) to generate the initial dung beetle population and increase its diversity; (2) adopting an adaptive nonlinear decreasing producer ratio model to control the number of producers and boost the algorithm’s convergence rate; and (3) applying a dimensional learning-enhanced foraging (DLF) search strategy that optimizes the algorithm’s ability to explore and exploit the search space. The IDBO algorithm is evaluated on 14 benchmark functions and outperforms other algorithms. The IDBO algorithm is then applied to optimize a back-propagation (BP) neural network for predicting five mechanical property parameters of heat-treated larch-sawn timber. The results indicate that the IDBO-BP model significantly reduces the error compared with the BP, tent-sparrow search algorithm (TSSA)-BP, grey wolf optimizer (GWO)-BP, nonlinear adaptive grouping grey wolf optimizer (IGWO)-BP and DBO-BP models, demonstrating its superiority in predicting the physical characteristics of lumber after heat treatment.

1. Introduction

Timber is a widely utilized material in the construction and furniture industries because it has numerous benefits, such as environmental sustainability, aesthetic appeal and ease of processing. However, its limited stability and durability hinder its application [1,2]. These limitations have prompted the development of various wood modification techniques, such as chemical, physical and biological methods [3]. Heat treatment is a prevalent technique that enhances wood properties by altering its chemical, physical and structural characteristics through exposure to specific temperature and humidity conditions [4]. This treatment increases wood stability, durability and resistance to corrosion and hydrolysis while improving mechanical properties such as strength, stiffness and hardness [5,6]. Common heat treatment methods include vacuum, dry and moist treatments [7].
Several studies have demonstrated the improvement of timber properties via heat treatment. Korkut et al. [8] examined how the thermal process affects red bud maple’s surface roughness and mechanical behaviors. The results indicated that increasing temperatures reduce density and moisture content but increase bending strength and surface roughness. Icel et al. [9] demonstrated that heat treatment significantly improves the physical properties, chemical composition and microstructure of spruce and pine, resulting in enhanced stability, durability performance and service life. Xue et al. [10] investigated how high-temperature heat treatment and impregnation modification techniques affect aspen lumber’s physical and mechanical characteristics and found significant improvements in mechanical strength and preservation.
Despite its effectiveness in improving wood mechanical properties, heat treatment has certain limitations. Boonstra et al. [11] reported the decomposition of natural wood components during heat treatment, resulting in reduced wood quality. Hill [12] noted that the efficacy of heat treatment is influenced by various factors such as treatment time, temperature, humidity and wood species, making it challenging to control and optimize the process. Goli et al. [1] investigated the impact of heat treatment on the physical and mechanical properties of birch plywood, revealing an increase in density and hardness but a decrease in moisture content and bending strength.
To address these limitations, researchers have explored the use of neural network models to predict wood mechanical properties. Kohonen [13] introduced self-organizing mapping (SOM) as one of the earliest prototypes for applying neural networks to nonlinear prediction problems. C.G.O. [14] highlighted the potential for neural networks to model complex nonlinear relationships for predicting mechanical properties such as strength and stiffness. Adamopoulos et al. [15] investigated the relationship between the fiber properties of recycled pulp and the mechanical properties of corrugated base paper. Multiple linear regression and artificial neural network models were used to predict the tensile strength and compressive strength of corrugated base paper with different fiber sources, and the results showed that the artificial neural network model was more accurate and stable than the multiple linear regression model. You et al. [16] demonstrated that an artificial neural network (ANN) model based on nondestructive vibration testing can successfully predict the MOE of bamboo–wood composites with high accuracy.
Although employing BP neural network models to forecast the physical characteristics of heat-treated lumber reduces experimental costs, these models present certain challenges, such as susceptibility to local minima during learning and poor generalization, resulting in inaccurate predictions for new data. To address these limitations, some researchers have explored combining BP neural networks with meta-heuristic algorithms to improve prediction accuracy and model robustness. Chen et al. [17] integrated the Aquila Optimization Algorithm (AOA) [18] with BP neural networks to accurately predict the balance water rate and weight ratio of thermally processed timber, and Wang et al. [19] utilized the Carnivorous Plant Algorithm (CPA) [20] to ameliorate BP neural networks for predicting the adhesion intensity and coarseness of the surfaces of heat-treated wood. Their results indicated that both the AOA-BP and CPA-BP models outperform traditional BP neural network models.
Meta-heuristic algorithms can effectively avoid local optima and improve prediction accuracy when combined with BP neural networks. However, local optima may still occur due to inappropriate algorithm parameters or unreasonable algorithm combinations, resulting in poor model performance. To address this issue, some researchers have suggested improving the original meta-heuristic algorithms before applying them to optimize BP neural networks, aiming to increase the model’s generalization ability and reliability. For example, Li et al. [21] enhanced the sparrow search algorithm (SSA) [22] with tent chaotic mapping and applied it to optimize BP neural networks for predicting the mechanical characteristics of heat-treated timber. They found that the TSSA-BP model performs well. Ma et al. [23] proposed a nonlinear adaptive grouping strategy for the Gray Wolf Optimization (GWO) [24] algorithm and used it to optimize BP neural networks for timber mechanical performance forecasts. They demonstrated that the proposed IGWO-BP model has much higher prediction accuracy than that of conventional models.
Similarly, the original Dung Beetle Optimization (DBO) [25] algorithm has drawbacks in avoiding local optima and achieving satisfactory algorithmic accuracy for practical engineering applications. To address these flaws, this article proposes an Improved Dung Beetle Optimizer (IDBO) for optimizing BP neural networks. The IDBO algorithm incorporates three main improvements: first, utilizing piece-wise linear chaotic mapping (PWLCM) to initialize the dung beetle population to increase diversity; second, introducing an adaptive parameter adjustment strategy to enhance the early-stage best-finding ability and improve algorithmic search efficiency; and finally, balancing local and global search capabilities by incorporating a dimensional learning-enhanced foraging strategy (DLF).
The rest of this article is structured as follows: Section 2 introduces the basic theory of BP and DBO; Section 3 presents the IDBO algorithm model; Section 4 verifies the performance of the IDBO algorithm using benchmark functions; Section 5 evaluates the reliability of the suggested IDBO model for wood mechanical property predictions; and Section 6 concludes.

2. Theoretical Analysis of the Algorithm

2.1. Back-Propagation (BP) Neural Network Models

The BP neural network is a multi-layered feedforward model primarily utilized for supervised learning tasks. It typically includes input, hidden and output layers [26]. Neurons receive inputs from preceding layers and compute a weighted sum that is transformed by an activation function before being output to subsequent layers. The error between the desired and actual outputs is calculated and propagated backward through the network via a back-propagation algorithm. Weights are updated according to each neuron’s contribution to the error using the chain rule. Multiple iterations minimize error and enable the network to approximate desired outputs.
This paper used MATLAB’s machine learning toolbox (2019a) to create the BP networks. Input data included heat treatment temperature, time and relative humidity, and output data comprised Longitudinal Compressive Strength (LCS), Transverse Rupture Strength (TRS), Transverse Modulus of Elasticity (TME), Radial Hardness (RH) and Tangential Hardness (TH). Five separate prediction models were developed using the trainlm training function with a learning rate of 0.01. Figure 1 shows the structure of the BP neural network with a single hidden layer.
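To make the training loop above concrete, the following minimal Python sketch (an illustration only; the study itself used MATLAB's trainlm) trains a single-hidden-layer network of the 3-2-1 shape on placeholder data by plain gradient back-propagation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((58, 3))               # placeholder inputs: temperature, time, humidity
y = rng.random((58, 1))               # placeholder normalized target (e.g., LCS)

n_in, n_hid, n_out, lr = 3, 2, 1, 0.01
W1, b1 = rng.standard_normal((n_in, n_hid)), np.zeros(n_hid)
W2, b2 = rng.standard_normal((n_hid, n_out)), np.zeros(n_out)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    # Forward pass: LOGSIG hidden layer, PURELIN (linear) output layer.
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    # Backward pass: the chain rule attributes the error to each weight.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    delta_h = (err @ W2.T) * h * (1.0 - h)        # derivative of the sigmoid
    grad_W1 = X.T @ delta_h / len(X)
    grad_b1 = delta_h.mean(axis=0)
    # Gradient-descent weight update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final training MSE:", float((err ** 2).mean()))
```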

2.2. The Traditional DBO Algorithm

DBO is a novel swarm intelligence algorithm that simulates dung beetle habits such as ball rolling, dancing, breeding, foraging and stealing. The algorithm comprises four optimization processes: ball rolling, breeding, foraging and stealing [25].

2.2.1. Dung Beetle Ball Rolling

Dung beetle rolling behavior is divided into an obstructed mode and an unobstructed mode.

Obstacle-Free Mode

When the dung beetle moves forward without obstacles, it uses the sun for navigation while rolling its dung ball. In this mode, the dung beetle’s position changes with the light intensity and is updated as follows:
$$x_i^{t+1} = x_i^t + \alpha \times k \times x_i^{t-1} + b \times \Delta x, \qquad \Delta x = \left| x_i^t - x_{worst}^t \right|$$ (1)
where $t$ denotes the current iteration count and $x_i^t$ is the position of the $i$th dung beetle in the population at the $t$th iteration. $k \in (0, 0.2]$ is a fixed parameter representing the deflection coefficient, $b$ is a constant in the range (0, 1), and $\alpha$ is a natural coefficient taking the value −1 or 1, where 1 indicates no deviation and −1 indicates deviation from the original direction. $x_{worst}^t$ is the worst position in the current population, and $\Delta x = |x_i^t - x_{worst}^t|$ simulates the change in light intensity.

Barrier Mode

When a dung beetle encounters an obstacle that prevents it from moving forward, it dances to reorient itself and obtain a new direction. The authors use a tangent function to mimic this dancing behavior and obtain a new rolling direction, where the angle $\theta$ is considered only in the range [0, $\pi$]; once a new direction is determined, the beetle continues rolling the dung ball. The position is updated as follows:
$$x_i^{t+1} = x_i^t + \tan\theta \left| x_i^t - x_i^{t-1} \right|$$ (2)
When $\theta = 0$, $\pi/2$ or $\pi$, the dung beetle’s position is not updated.

2.2.2. Dung Beetle Breeding

In nature, female dung beetles roll their dung balls to a safe place suitable for egg laying and hide them, providing a suitable habitat for their progeny. Inspired by this, the authors propose a boundary selection strategy to model the region where female dung beetles place their brood balls:
$$L_f^* = \max\left( x_{gbest}^t \times (1 - R),\ L_f \right), \qquad U_f^* = \min\left( x_{gbest}^t \times (1 + R),\ U_f \right)$$ (3)
where $R = 1 - t/T_{max}$ and $T_{max}$ is the maximum number of iterations; $L_f$ and $U_f$ are the lower and upper bounds of the optimization problem, respectively; and $x_{gbest}^t$ is the current global best position of the population. $L_f^*$ and $U_f^*$ define the lower and upper boundaries of the spawning region, which means that the region where the dung beetles spawn is dynamically adjusted with the number of iterations.
When a female dung beetle has determined the spawning region, she lays her eggs in it. Each female dung beetle generates a single brood ball per iteration. Because the spawning region is dynamically adjusted with the iteration count, the position of the brood ball is also dynamic during the iterations, as defined below:
$$B_i^{t+1} = x_{gbest}^t + b_1 \times \left( B_i^t - L_f^* \right) + b_2 \times \left( B_i^t - U_f^* \right)$$ (4)
where $B_i^{t+1}$ is the position of the $i$th brood ball at the $(t+1)$th iteration, $b_1$ and $b_2$ are two independent random vectors with $D$ components each, and $D$ is the number of parameters of the optimization problem. The position of the brood ball is restricted to the spawning region.

2.2.3. Dung Beetle Foraging

This behavior is mainly aimed at small dung beetles. Some mature dung beetles emerge from the ground in search of food, and the optimal foraging area for small dung beetles is dynamically updated, as indicated below:
$$L_f^l = \max\left( x_{lbest}^t \times (1 - R),\ L_f \right), \qquad U_f^l = \min\left( x_{lbest}^t \times (1 + R),\ U_f \right)$$ (5)
where $R$ is defined as before, and $x_{lbest}^t$ is the best local position of the current population. $L_f^l$ and $U_f^l$ define the lower and upper boundaries of the foraging region of the small dung beetles, respectively. The position is updated as follows:
$$x_i^{t+1} = x_i^t + C_1 \times \left( x_i^t - L_f^l \right) + C_2 \times \left( x_i^t - U_f^l \right)$$ (6)
where $C_1$ is a random number drawn from the standard normal distribution, i.e., $C_1 \sim N(0, 1)$, and $C_2$ is a $1 \times D$ random vector with components in (0, 1).

2.2.4. Dung Beetle Stealing

Some dung beetles in the population steal dung balls from other dung beetles; the authors update the position of these thieving dung beetles as follows:
$$x_i^{t+1} = x_{lbest}^t + S \times g \times \left( \left| x_i^t - x_{gbest}^t \right| + \left| x_i^t - x_{lbest}^t \right| \right)$$ (7)
where $g$ is a $1 \times D$ random vector drawn from a normal distribution, and $S$ is a constant.
The flow of the DBO algorithm is presented in Figure 2. The algorithm first generates a random initial population of dung beetles in the search space and defines the relevant parameters. It then calculates each agent’s fitness under the objective function and adjusts the agents’ positions accordingly. These steps are repeated until the termination criteria are met, at which point the global optimal solution and its corresponding fitness value are returned.
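For readers who prefer code, the following Python sketch compacts one run of the four update rules (Equations (1)–(7)). The fixed role split, the parameter values and the use of the global best in place of the local best are simplifications for illustration, not the exact configuration of the original DBO paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, T_max = 30, 10, 500
lb, ub = -100.0, 100.0
f = lambda x: float(np.sum(x ** 2))          # example objective (sphere function)

X = rng.uniform(lb, ub, (N, D))
X_prev = X.copy()
fit = np.array([f(x) for x in X])

for t in range(T_max):
    gbest, worst = X[fit.argmin()], X[fit.argmax()]
    R = 1 - t / T_max
    Lo = np.clip(gbest * (1 - R), lb, ub)    # region bounds, Eqs. (3)/(5)
    Up = np.clip(gbest * (1 + R), lb, ub)
    X_new = X.copy()
    for i in range(N):
        role = i % 4                          # fixed role split for brevity
        if role == 0:                         # ball rolling, Eqs. (1)-(2)
            if rng.random() < 0.9:            # obstacle-free mode
                alpha = 1 if rng.random() > 0.1 else -1
                X_new[i] = X[i] + alpha * 0.1 * X_prev[i] + 0.3 * np.abs(X[i] - worst)
            else:                             # dancing: tan(theta) reorientation
                X_new[i] = X[i] + np.tan(rng.uniform(0, np.pi)) * np.abs(X[i] - X_prev[i])
        elif role == 1:                       # breeding, Eq. (4)
            b1, b2 = rng.random(D), rng.random(D)
            X_new[i] = gbest + b1 * (X[i] - Lo) + b2 * (X[i] - Up)
        elif role == 2:                       # foraging, Eq. (6)
            X_new[i] = X[i] + rng.standard_normal() * (X[i] - Lo) + rng.random(D) * (X[i] - Up)
        else:                                 # stealing, Eq. (7); lbest approximated by gbest
            g = rng.standard_normal(D)
            X_new[i] = gbest + 0.5 * g * (np.abs(X[i] - gbest) + np.abs(X[i] - gbest))
    X_prev, X = X, np.clip(X_new, lb, ub)
    fit = np.array([f(x) for x in X])

print("best fitness:", fit.min())
```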

3. Proposed Method

3.1. Improved Dung Beetle Optimizer

Despite its simplicity and successful application to several engineering design problems, the DBO algorithm exhibits limitations, such as poor global searchability and premature convergence to local optima. To address these deficiencies, this paper proposes an improved dung beetle optimizer with specific enhancement strategies.

3.1.1. Piece-Wise Linear Chaotic Mapping

When tackling sophisticated optimization problems, the simple random generation of initial populations in the DBO can result in rapid declines in population diversity and premature convergence during later iterations. Chaotic sequences have recently been adopted for improving population diversity in meta-heuristic algorithms due to their randomness and ergodicity [27]. The basic approach involves mapping chaotic sequences into individual search spaces using chaos models such as Tent [21], Logistic [28] or Kent [29] chaotic mapping.
When selecting a chaotic mapping, two important characteristics, simplicity and ergodicity, must be considered. Piece-wise linear chaotic mapping satisfies these criteria with its relatively uniform phase distribution and simple equations compared to those of other one-dimensional chaotic systems. This paper uses PWLCM to generate a random sequence whose dynamical equation [30] is defined as follows:
$$x_{i+1} = F_p(x_i) = \begin{cases} \dfrac{x_i}{p}, & 0 \le x_i < p \\[4pt] \dfrac{x_i - p}{0.5 - p}, & p \le x_i < 0.5 \\[4pt] F_p(1 - x_i), & 0.5 \le x_i < 1 \end{cases}$$ (8)
With the control parameter $p \in (0, 0.5)$ and $x_i \in (0, 1)$, the system is in a chaotic state. After assigning initial values to the control parameter $p$ and to $x_0$ and iterating, a random sequence in the interval (0, 1) is obtained. This sequence has excellent statistical properties and is commonly used to generate the initial solutions of an algorithm to increase population diversity. When $p = 0.4$, the initial (one-dimensional) population distribution is as shown in Figure 3.
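As a brief illustration, the following Python sketch implements Equation (8) and maps the resulting chaotic sequence onto a hypothetical search space to initialize a population; the seed value x0 = 0.7 is an arbitrary choice:

```python
import numpy as np

def pwlcm(x, p=0.4):
    """One PWLCM step, Equation (8)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)        # F_p(1 - x) for x in [0.5, 1)

def init_population(n_agents, dim, lb, ub, x0=0.7, p=0.4):
    """Map a chaotic sequence in (0, 1) onto the search space [lb, ub]."""
    seq, x = np.empty(n_agents * dim), x0
    for i in range(seq.size):
        x = pwlcm(x, p)
        seq[i] = x
    return lb + seq.reshape(n_agents, dim) * (ub - lb)

pop = init_population(30, 10, -100.0, 100.0)
print(pop.shape, float(pop.min()), float(pop.max()))
```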

3.1.2. Self-Adaptive Parameter Adjustment Tactics

The DBO algorithm comprises four main components: global exploration by the producers (ball-rolling dung beetles), egg laying by female dung beetles, foraging by small dung beetles and stealing by thieving dung beetles. The number of producers determines the explored scope and convergence rate of the algorithm. However, in the original algorithm, the authors do not specify the distribution ratio of these four agent types, which may result in incomplete coverage of the search space or slow convergence. To address this issue, this article proposes an adaptive non-linear decreasing producer ratio model (Equation (9)) with an initial producer ratio of 0.4. With a sufficient number of producers, the algorithm can conduct a more extensive global search during early iterations and fully exploit potential solutions. As iterations progress and the demand for producers decreases, their proportion falls from 0.4 to 0.2, and the search shifts toward exploitation during the middle and late stages to facilitate rapid convergence.
$$P_{percent} = 0.4 - \frac{(0.4 - 0.2)\,t}{M}$$ (9)

where $t$ is the current iteration and $M$ is the maximum number of iterations.
This model enhances algorithm diversity and robustness by dynamically adjusting the number of producers and controlling competition and cooperation among them according to certain strategies. This maintains algorithm versatility while gradually reducing producer numbers during the search process to effectively balance convergence speed and exploration ability. As a result, global exploration capability and convergence speed are improved.
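A minimal sketch of this schedule, assuming Equation (9) takes the decreasing-in-t form reconstructed above with M the maximum number of iterations:

```python
def producer_ratio(t: int, M: int) -> float:
    """Producer share at iteration t (Equation (9), as reconstructed): 0.4 -> 0.2."""
    return 0.4 - (0.4 - 0.2) * t / M

M = 500
print([round(producer_ratio(t, M), 3) for t in (0, 125, 250, 375, 500)])
# -> [0.4, 0.35, 0.3, 0.25, 0.2]
```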

3.1.3. Dimension Learning-Enhanced Foraging Search Strategy

During the search process, the DBO algorithm may select a locally optimal solution while ignoring a more optimal global solution due to its random strategy and lack of effective evasion methods. To address this issue, we introduce the Dimension Learning-enhanced Foraging (DLF) search strategy.
In the original DBO algorithm, position updates are obtained according to objective functions corresponding to different agents. This can contribute to slow convergence, trapping in local optima, and the premature loss of population diversity due to random agent selection. In contrast, our proposed DLF search strategy enables agents to update their locations by learning from their neighbors and completing their behaviors accordingly.
In the DLF search strategy, a candidate new position for dung beetle $X_i^t$ is obtained from Equation (12), in which the beetle learns from its neighbors and from a randomly chosen agent in the population. In addition to the position $X_{i\text{-}DBO}^{t+1}$ produced by the original update rules, the DLF strategy thus generates a second candidate for the new position of beetle $X_i^t$, named $X_{i\text{-}DLF}^{t+1}$. To this end, first, using Formula (10), the radius $R_i^t$ is computed as the magnitude of the displacement between the current position $X_i^t$ and the candidate position $X_{i\text{-}DBO}^{t+1}$.
$$R_i^t = \left\| X_i^t - X_{i\text{-}DBO}^{t+1} \right\|$$ (10)
Next, the neighborhood $N_i^t$ of $X_i^t$ is constructed using Equation (11), which depends on the radius $R_i^t$, where $D_i(X_i^t, X_j^t)$ is the distance (the length of the line segment) between $X_i^t$ and $X_j^t$.
$$N_i^t = \left\{ X_j^t \,\middle|\, D_i\!\left( X_i^t, X_j^t \right) \le R_i^t,\ X_j^t \in Pop \right\}$$ (11)
Then, dimensional learning is performed using Equation (12), where $d$ denotes the dimension index, $X_{n,d}^t$ is the $d$th component of a neighbor chosen at random from $N_i^t$, and $X_{r,d}^t$ is the $d$th component of an agent chosen at random from the population.
$$X_{i\text{-}DLF,d}^{t+1} = X_{i,d}^t + rand \times \left( X_{n,d}^t - X_{r,d}^t \right)$$ (12)
Finally, the position is updated with Formula (13), which keeps the better of the two candidates. The above steps are repeated until a predefined maximum number of iterations is reached, and the global optimal solution and its corresponding fitness value are returned.
$$X_i^{t+1} = \begin{cases} X_{i\text{-}DBO}^{t+1}, & \text{if } f\!\left( X_{i\text{-}DBO}^{t+1} \right) < f\!\left( X_{i\text{-}DLF}^{t+1} \right) \\[4pt] X_{i\text{-}DLF}^{t+1}, & \text{otherwise} \end{cases}$$ (13)
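The following Python sketch assembles Equations (10)–(13) into a single candidate-selection step; the neighbor and random-agent choices are illustrative assumptions where the text leaves the sampling unspecified:

```python
import numpy as np

rng = np.random.default_rng(2)

def dlf_select(X, X_dbo, i, f):
    """X: (N, D) population; X_dbo: the DBO candidate for agent i; f: objective."""
    N, D = X.shape
    R_i = np.linalg.norm(X[i] - X_dbo)                    # radius, Eq. (10)
    dists = np.linalg.norm(X - X[i], axis=1)
    neigh = np.flatnonzero(dists <= R_i)                  # neighborhood, Eq. (11)
    n = rng.choice(neigh)                                 # random neighbor (assumed sampling)
    r = rng.integers(N)                                   # random agent from the population
    X_dlf = X[i] + rng.random(D) * (X[n] - X[r])          # dimensional learning, Eq. (12)
    return X_dbo if f(X_dbo) < f(X_dlf) else X_dlf        # greedy selection, Eq. (13)

# Toy usage with a sphere objective:
f = lambda x: float(np.sum(x ** 2))
X = rng.uniform(-5.0, 5.0, (30, 10))
print("selected fitness:", f(dlf_select(X, 0.9 * X[0], 0, f)))
```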

3.2. The IDBO-BP Algorithm

BP neural network models are initialized with random weights and thresholds, and these numerous variable parameters can cause instability in model computation [31]. The predictive performance of these models can be enhanced by optimizing the BP neural network with DBO. However, the DBO algorithm suffers from an uneven initial population distribution, susceptibility to local optima and a slow convergence speed. To address these problems, this paper proposes the IDBO algorithm.
First, PWLCM is introduced to initialize the population to produce a more uniform initial solution distribution and high-quality initial solutions while augmenting population richness. Second, using an adaptive parameter adjustment strategy to dynamically tune producer numbers according to the search process accelerates the convergence rate and enhances global exploration capability. Finally, employing the DLF search strategy balances the exploration and exploitation abilities of the algorithm.
The main idea of the IDBO-BP algorithm is to update the weights and thresholds of the BP neural network by continuously updating the positions of the dung beetle swarm until the global best position is found, i.e., the optimal solution.
The diagram of the IDBO-BP algorithm’s process is presented in Figure 4. Data are first normalized using Equation (16) before the proportion of ball-rolling dung beetles is dynamically adjusted according to Formula (9). Dung beetle population positions are then initialized using PWLCM mapping, as shown in Equation (8). Fitness values for all dung beetles are calculated, and their positions are updated according to Formulas (1)–(7). The current optimal solution is updated in combination with the DLF strategy, and the positions of all dung beetles are updated again according to Equation (13). When the iteration limit is reached, the best solution is output along with the optimal parameters of the BP neural network.
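As a sketch of the coupling (with hypothetical helper names, not the paper's MATLAB code), each dung beetle position can be decoded into the BP network's weights and thresholds, with the network's training MSE serving as the fitness that IDBO minimizes:

```python
import numpy as np

n_in, n_hid, n_out = 3, 2, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out    # flat parameter vector length

def decode(theta):
    """Unpack a dung beetle position into BP weights and thresholds."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:]
    return W1, b1, W2, b2

def fitness(theta, X, y):
    """Fitness of one beetle = MSE of the BP network its position encodes."""
    W1, b1, W2, b2 = decode(theta)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))          # LOGSIG hidden layer
    y_hat = h @ W2 + b2                               # PURELIN output layer
    return float(((y_hat - y) ** 2).mean())

# best_theta = idbo_minimize(lambda th: fitness(th, X_train, y_train), dim)
# best_theta would then seed the BP network for final training and prediction.
```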

4. Evaluate the Effectiveness of the Suggested IDBO Model

The efficacy of the suggested IDBO method is assessed through a series of experiments utilizing various benchmark functions in this section.

4.1. Benchmark Functions

To objectively appraise the effectiveness of various meta-heuristic algorithms and to validate the usefulness of the IDBO amelioration strategy, 14 standard test functions were selected from the literature [32], and the CEC2017 test suite was utilized to evaluate the capability of the IDBO algorithm. Functions F1–F8 are unimodal with a single global optimal solution and were employed to gauge the speed and accuracy of convergence of the algorithm. Functions F9–F14 are multimodal with a single global optimum and several local optima and were used to estimate the global search and exploration capabilities of the algorithm. The details of these benchmark functions, including their expressions, dimensions, search ranges and theoretical optimal solutions, are given in Table 1 and Table 2. To provide a more intuitive understanding of these benchmark functions and their optimal values, Figure 5 and Figure 6 depict 3D views (30 dimensions) of some of these functions.

4.2. Contrast Algorithm and Experimental Parameter Settings

To fully validate the reliability of the presented IDBO model, its results were compared with those of four widely used basic metaheuristics: PSO (Eberhart et al., 1995) [33], GWO (Mirjalili et al., 2014) [24], WOA (Mirjalili et al., 2016) [34] and DBO (Xue et al., 2022) [25]. As indicated in Table 3, the parameter settings recommended in their respective original works were adopted for the experiments involving these comparison algorithms.
To more accurately evaluate the efficacy of the IDBO algorithm and its comparative algorithms, a population size of N = 30 was uniformly set, and the upper limit of iterations was fixed at 500. Each model was executed separately 30 times. The dimension D was set to 30, 50 and 100 to examine the effectiveness of the suggested approach in searching for merits across different dimensions. To minimize the influence of randomness in the simulation results, the optimal values, means and standard deviations of the optimization results (fitness) were recorded separately to appraise the exploration performance, accuracy and reliability of the models.
The experiment was implemented on a Windows 11 operating system with an 11th Gen Intel® Core™ i7-11700 processor with 2.5 GHz and 16 GB RAM using MATLAB 2019a for simulation. The optimal fitness, mean fitness and standard error of fitness for the IDBO algorithm and its comparative algorithms are presented in Table 4, where bold values indicate the best results. Additionally, the bottom three lines of each table show the ‘w/t/l’ counts for the wins (w), ties (t) and losses (l) of each algorithm.

4.3. Evaluation of Exploration and Exploitation

Unimodal functions are well-suited to verifying the exploitation capability of algorithms in finding optimal solutions. Multimodal functions with numerous locally optimal solutions can assess the ability of IDBO to evade local optima during exploration.
As indicated in Table 4, the IDBO algorithm demonstrates significant improvement on all unimodal test functions except F6 across all dimensions. Table 5 reveals that the IDBO algorithm outperforms the other algorithms in all three dimensions on all multimodal test functions except F13 and that its optimal value, average and standard error are the best. Thus, it can be inferred that the IDBO algorithm is more effective than DBO in finding optimal solutions, which proves that the modification tactic presented in this article can feasibly enhance the original algorithm’s ability to explore.

4.4. Evaluation of Convergence Curves

To more intuitively observe and compare the convergence rate, accuracy and ability of each algorithm to evade local optima, the convergence curves of IDBO and the four basic meta-heuristic algorithms on F1–F14 (30 dimensions) are presented in Figure 7. The horizontal axis represents the number of iterations, whereas the vertical axis denotes the order of magnitude of the fitness values. Fitness values are plotted as logarithms to base 10 to better illustrate convergence trends.
As shown in Figure 7, IDBO exhibits the fastest convergence and highest accuracy in convergence curves for functions F1–F5, F7–F10 and F14 with a near-linear decrease to theoretical optimal values without stagnation. DBO performs second only to IDBO for these functions and outperforms other algorithms. DBO, GWO and WOA converge to optimal values with minimal stagnation for function F6 but at a slower rate than that of IDBO, and PSO exhibits stagnation at local optima. In the convergence curves for functions F11–F13, IDBO converges rapidly with a quick inflection point to achieve optimal accuracy. This demonstrates that the amelioration method recommended in this paper effectively enhances the original algorithm in terms of convergence rate and accuracy.

4.5. Local Optimal Circumvention Evaluation

As previously mentioned, multimodal functions can be used to examine the search behavior of algorithms. As indicated in Table 5, IDBO achieves the best (fitness) optimal values across three different dimensions of 30, 50 and 100 and outperforms other algorithms. This demonstrates that IDBO effectively equilibrates local and global searches to evade local optima. The improvement approach suggested in this article dramatically augments the exploratory potential of the original model.

4.6. High-Dimensional Robustness Evaluation

General algorithms may not exhibit robustness and stability when solving complex problem functions in high dimensions, and their ability to find optimal solutions may decrease abruptly. To assess the performance of IDBO in high dimensions, the results for IDBO and the other algorithms were compared in 50 and 100 dimensions. As presented in Table 4, for the unimodal functions other than F3 and F6, PSO, WOA and GWO all exhibit decreased convergence accuracy in higher dimensions, whereas DBO and IDBO show decreased accuracy in 50 dimensions but little change in 100 dimensions, indicating stability for both DBO and IDBO at higher dimensions. For function F3, the convergence accuracy of IDBO is 18 orders of magnitude higher than that of DBO in 50 dimensions and rises to 29 orders of magnitude higher in 100 dimensions, indicating slightly inferior performance for DBO at higher dimensions.
According to Table 5, for high-dimensional multimodal function F9, the convergence accuracy for WOA and DBO decreases from theoretical optimal values as dimensionality increases, and only IDBO consistently converges to theoretical optimal values with a mean and standard deviation of zero, indicating stable performance for IDBO when seeking high-dimensional multimodal functions. For four test functions, excluding F9 and F13, IDBO’s performance at high dimensions is comparable to that at 30 dimensions, achieving optimal mean and standard deviation values. Overall, IDBO exhibits a strong performance when finding optimal solutions for high-dimensional optimization problems, demonstrating its stability and robustness at high dimensions.
Table 6 presents a summary of the performance outcomes for IDBO and the other algorithms shown in Table 4 and Table 5. The total performance (TP) of each algorithm is calculated using Equation (14), where Q is the number of trials of each algorithm and M is the number of failed tests.
$$TP = \frac{Q - M}{Q} \times 100\%$$ (14)
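For example, with hypothetical numbers, an algorithm that fails $M = 3$ of $Q = 42$ tests attains

$$TP = \frac{42 - 3}{42} \times 100\% \approx 92.9\%$$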

4.7. Statistical Analysis

To further evaluate the validity of the suggested enhancement tactics, this paper used the Wilcoxon signed-rank test to compare IDBO with the four meta-heuristic algorithms and applied the Friedman test (Equation (15)) to rank each algorithm. The population size was set to N = 30, each algorithm was run 30 times independently on each test function with dimension D = 30, and the Wilcoxon signed-rank test with α = 0.05 was applied to IDBO and the other algorithms on the 14 test functions. The p-values are presented in Table 7 along with the ‘+’, ‘−’ and ‘=’ counts: ‘+’ indicates that IDBO clearly outperforms the comparison algorithm, ‘−’ indicates inferiority, and ‘=’ denotes no significant difference. N/A represents ‘not applicable’, which occurs when neither algorithm can be judged superior, indicating comparable performance. Bold text indicates insignificant or comparable differences.
Table 7 shows that WOA, GWO and DBO have search performance comparable to IDBO on F6, and PSO differs significantly from IDBO. WOA and IDBO have comparable search performance on F9, and DBO differs insignificantly from IDBO. GWO, PSO and DBO have search behavior equivalent to IDBO on F13, and WOA differs insignificantly from IDBO. GWO, PSO and DBO differ significantly from IDBO on all functions except F6, F9 and F13.
Table A2 in Appendix A shows the results of Friedman’s test. The IDBO algorithm has a lower average ranking value than that of the other algorithms in all three dimensions, indicating its superior performance. Moreover, Table A2 reveals that the IDBO algorithm’s mean value decreases relative to DBO as dimensionality increases. This shows that IDBO is more robust in higher dimensions than DBO and further verifies the effectiveness of our optimization strategy.
$$F_f = \frac{12n}{k(k+1)} \left[ \sum_j R_j^2 - \frac{k(k+1)^2}{4} \right]$$ (15)
where $n$ is the number of test cases, $k$ is the number of algorithms, and $R_j$ is the mean ranking of the $j$th algorithm.
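A minimal sketch of this test using SciPy on hypothetical per-function results (three algorithms, 14 cases); the mean ranks printed at the end correspond to the $R_j$ of Equation (15):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases = 14                            # e.g., one result per benchmark function
algo_a = rng.random(n_cases)            # hypothetical fitness values
algo_b = algo_a + 0.1 * rng.random(n_cases)
algo_c = algo_a + 0.2 * rng.random(n_cases)

stat, p = stats.friedmanchisquare(algo_a, algo_b, algo_c)
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.4f}")

# Mean ranks per algorithm (rank 1 = lowest fitness), the R_j of Equation (15):
ranks = stats.rankdata(np.c_[algo_a, algo_b, algo_c], axis=1)
print("mean ranks:", ranks.mean(axis=0))
```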
Based on a comprehensive analysis of the benchmark function results, convergence curves, Wilcoxon signed-rank test results and Friedman test rankings, the IDBO algorithm shows significant improvements in both local and global exploration abilities. It surpasses the original DBO algorithm, WOA and the other compared optimization algorithms in convergence speed, accuracy and stability, which verifies the performance of the optimization scheme recommended in this paper.

5. Experimental Research

5.1. Data Preprocessing

To ensure an accurate comparison of algorithm results, this paper uses the same data as those used in [35]. The authors used larch-sawn timber of 22 mm thickness from Northeast China. Samples were heat treated at atmospheric pressure with temperature, time and relative humidity as the process parameters. The temperature was divided into five levels (120 °C to 210 °C), the time into four levels (0.5 to 3 h) and the relative humidity into four levels (0 to 100%). After treatment, specimens were conditioned at an ambient temperature of (20 ± 2) °C and a relative humidity of (65 ± 3)% until reaching the equilibrium moisture content. Mechanical properties were then measured according to the GB/T 1935-2009 to GB/T 1941-2009 standards. For each test condition, the mean of five replicates was computed, yielding 88 sets of data in total.
To guarantee fairness in model comparisons, this article uses the same training and testing samples as those used in [23]. The first 58 samples in Table A1 in Appendix A formed the training set, and the last 30 samples constituted the testing set. Input data were normalized using Equation (16) to prevent differences in scale from affecting training speed and prediction accuracy.
$$Y_{norm} = \frac{y - y_{min}}{y_{max} - y_{min}}$$ (16)
where $Y_{norm}$ denotes the value of $y$ after scaling to the unit interval, $y$ is the original value, and $y_{min}$ and $y_{max}$ are the lower and upper bounds of $y$, respectively.
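A short sketch of Equation (16); taking $y_{min}$ and $y_{max}$ from the training set alone is a common convention assumed here, not a detail stated in the paper:

```python
import numpy as np

def fit_minmax(train):
    """Column-wise bounds, taken from the training set only (an assumed convention)."""
    return train.min(axis=0), train.max(axis=0)

def transform(x, y_min, y_max):
    """Equation (16): scale each value to the unit interval."""
    return (x - y_min) / (y_max - y_min)

train = np.array([[120.0, 0.5, 0.0], [210.0, 3.0, 100.0]])   # toy process parameters
y_min, y_max = fit_minmax(train)
print(transform(np.array([160.0, 1.0, 40.0]), y_min, y_max))  # -> [0.444 0.2 0.4]
```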

5.2. Model Parameter Setting

The IDBO-BP model was used to forecast the mechanical features and compare the results with those of BP, TSSA-BP, GWO-BP, IGWO-BP and DBO-BP neural networks to demonstrate its prediction capability. The upper limit of the iterations of the BP neural network was set to 1000 with a target error of 0.0001 and a population size of 50.

5.2.1. Selection of Activation Functions

The activation function is a vital component of a neural network that determines how the neurons produce the output from the input. The activation function gives neural networks nonlinear modeling capabilities, allowing them to approximate complex data and functions. The performance and convergence of neural networks depend on the choice of activation function, so selecting a suitable activation function is a critical step in neural network design. Four common activation functions for BP neural network models in MATLAB are LOGSIG, TANSIG, POSLIN and PURELIN.
Table A3 in Appendix A shows the activation function combinations that minimize the error of different models. Table A3 indicates that the optimal activation function combination for the IDBO-BP model is LOGSIG-PURELIN for LCS, obtained via the exhaustive method. Similarly, the optimal activation functions for other models can be derived.

5.2.2. Determination of the Topology

The number of neurons in each layer and the connection between two adjacent layers constitute the topology of a BP neural network model. The topology affects the neural network’s complexity and expressiveness, which in turn influence the neural network’s performance and convergence. Hence, selecting an appropriate topology is a crucial step in neural network design.

Determination of the Number of Neurons in the Hidden Layer

The BP neural network model’s structure and performance depend on the number of hidden layer neurons, a key parameter that affects the model’s fit to the data. The optimal number of hidden layer neurons should avoid both underfitting and overfitting. Underfitting occurs when the network has too few hidden layer neurons to capture the data’s complex features; overfitting or gradient vanishing occurs when the network has too many hidden layer neurons that fit the training data too closely. The number of hidden layer neurons is not fixed but varies according to the problem’s complexity and the data’s size. Selecting the appropriate number of hidden layer neurons is essential to enhance the model’s generalization ability and prediction accuracy.
This paper proposes an empirical formula (Formula (17)) for estimating the number of neurons in the hidden layer as a reference. The number of neurons in the hidden layer varies from 2 to 7. Table A3 in Appendix A shows the neuron configurations that minimize the error of different models. For example, Table A3 indicates that the optimal neuron configuration for the IDBO-BP model for LCS is 2 (single hidden layer), obtained by the trial-and-error method. Similarly, the optimal neuron configuration for other models can be derived.
$$N_h = \frac{N_s}{\alpha \times \left( N_i + N_o \right)}, \qquad \alpha \in [2, 7]$$ (17)
where $N_h$, $N_i$ and $N_o$ are the number of neurons in the hidden, input and output layers, respectively, and $N_s$ is the number of samples in the training set.
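For example, with the values used in this study ($N_s = 58$ training samples, $N_i = 3$ inputs and $N_o = 1$ output), Formula (17) yields

$$N_h = \frac{58}{\alpha \times (3 + 1)} \approx 7.3 \ (\alpha = 2) \quad \text{to} \quad N_h \approx 2.1 \ (\alpha = 7),$$

which matches the range of 2 to 7 hidden-layer neurons explored above.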

Determination of the Number of Hidden Layers

The hidden layer of a neural network enables it to process non-linearly separable data. Without hidden layers, neural networks can only represent linearly separable functions or decisions. The number of hidden layers and the activation function influence the neural network’s representational power and fit. Generally, more hidden layers reduce the error but also increase the network’s complexity and training difficulty, and they may cause overfitting. This paper uses neural network models with single and double hidden layers and employs different activation functions to determine the optimal network structure.
Table A3 in Appendix A shows the topologies of the different models at the error minimum. For example, Table A3 indicates that the optimal topology for the IDBO-BP model for TRS is 3-4-6-1, obtained via iterative attempts. Figure 8 shows the corresponding topology schematic diagram. Similarly, the optimal topology for other models can be derived.

5.3. Model Assessment Standards

Statistical error metrics are commonly used to evaluate model prediction performance. Common regression evaluation metrics include the mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE) and coefficient of determination (R²). MAE reflects the average absolute discrepancy between the predicted and true values and indicates prediction accuracy. MSE measures the squared error between the predicted and true values; as it decreases, model accuracy increases. MAPE measures the relative error between the predicted and true values as a percentage and is useful for comparing models on data with different scales. R² measures how well the model fits the data: the closer it is to 1, the better the fit, and the closer it is to 0, the worse the fit. The equations are as follows:
$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| Y_i - Z_i \right|$$ (18)

$$MSE = \frac{1}{N} \sum_{i=1}^{N} \left( Y_i - Z_i \right)^2$$ (19)

$$MAPE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{Y_i - Z_i}{Y_i} \right| \times 100\%$$ (20)

$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( Y_i - Z_i \right)^2}{\sum_{i=1}^{n} \left( Y_i - \bar{Y} \right)^2}$$ (21)
where $Y_i$ and $Z_i$ represent the true and predicted values, respectively.
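A compact sketch computing the four metrics of Equations (18)–(21) with NumPy on placeholder values:

```python
import numpy as np

def evaluate(Y, Z):
    """MAE, MSE, MAPE and R² of Equations (18)-(21)."""
    Y, Z = np.asarray(Y, float), np.asarray(Z, float)
    mae = np.mean(np.abs(Y - Z))
    mse = np.mean((Y - Z) ** 2)
    mape = np.mean(np.abs((Y - Z) / Y)) * 100.0       # assumes no zero targets
    r2 = 1.0 - np.sum((Y - Z) ** 2) / np.sum((Y - Y.mean()) ** 2)
    return {"MAE": mae, "MSE": mse, "MAPE(%)": mape, "R2": r2}

print(evaluate([39.5, 38.2, 36.9], [39.1, 38.6, 37.0]))
```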

5.4. Model Performance Comparison Analysis

To validate the IDBO-BP model, this paper compares it with the BP, TSSA-BP, GWO-BP, IGWO-BP and DBO-BP models. The BP model is the original back-propagation neural network, and TSSA-BP, GWO-BP, IGWO-BP and DBO-BP are BP neural networks optimized with the TSSA, GWO, IGWO and DBO algorithms, respectively. Table A4 in Appendix A shows the results. Using the test set for illustration, IDBO-BP reduces the MAE values of the LCS, TRS, TME, RH and TH models by 56%, 31%, 11%, 35% and 31%, respectively, and it reduces the MAPE by 57%, 45%, 38%, 38% and 38%, respectively, compared with the non-optimized BP neural network. For LCS, the Bonferroni-corrected significance values of the MAE and MAPE of the IDBO-BP test set are 0.002 and 0 (less than 0.05), respectively, indicating significant differences between the IDBO-BP and BP models in predicting the mechanical properties of wood. Moreover, compared with the BP, TSSA-BP, GWO-BP, IGWO-BP and DBO-BP models, the IDBO-BP model’s predictions are closer to the real values, indicating its superior prediction capability. Furthermore, compared with the DBO-BP model, IDBO-BP reduces the MAE values of the testing data of the LCS, TRS, TME, RH and TH models by 43%, 12%, 6%, 7% and 18%, respectively; it reduces the MSE by 78%, 21%, 10%, 8% and 26%, respectively; and it reduces the MAPE by 46%, 10%, 8%, 6% and 21%, respectively. This further verifies the effectiveness of the improved strategy in this paper.
Table A5 in Appendix A shows the rank means and overall rankings of the Friedman tests for the six models based on different evaluation metrics for the different parameters. In Friedman’s test, the rank mean reflects solution quality: it is the average rank of the solutions obtained by each algorithm across all comparisons, and a better rank mean indicates higher solution quality and better algorithm performance, i.e., solutions closer to the objective function’s optimal value. Table A5 shows that IDBO-BP ranks first for all five parameters, outperforming the other models. Figure 9 illustrates the distribution of the six models’ MAE on the LCS test set. The significance value for the null hypothesis that the six models have the same distribution of MAE solutions is 0.006, so the null hypothesis is rejected; that is, there is a significant difference in the solution quality of the six models for MAE, and IDBO-BP performs best based on the rank mean.
Figure 10a–e compares the prediction results for five mechanical properties of wood using the IDBO-BP, DBO-BP and BP neural networks with the actual values. The results show that the optimized BP neural networks with the DBO or IDBO models have predictions closer to the true values, indicating that DBO or IDBO improves the BP neural network prediction accuracy. Moreover, the benchmark functions show that the IDBO model performs better than the DBO model in convergence accuracy, stability and exploration capability. This is mainly due to several factors: (1) The IDBO algorithm initializes its dung beetle population using PWLCM chaotic mapping, which enhances population diversity and initial population solution quality. (2) The IDBO algorithm uses an adaptive parameter adjustment strategy with a nonlinear decreasing producer ratio model, which improves searchability in the early and middle stages of the algorithm and increases search range and efficiency. (3) The IDBO algorithm optimizes location updates for small dung beetles by applying a foraging search strategy based on dimensional learning, which balances exploration and exploitation abilities in late iterations.
In summary, this paper proposes an IDBO algorithm that demonstrates significant improvements in search performance compared to other algorithms, thereby verifying the effectiveness of the enhancement strategy. Additionally, the results demonstrate that the presented IDBO-BP model exhibits outstanding performance in predicting wood mechanical properties.

6. Conclusions

  • This article proposes the IDBO algorithm to address the limitations of the DBO algorithm. PWLCM mapping is employed to initialize the population and preserve diversity. An adaptive parameter adjustment strategy is introduced to enhance search range and efficiency. Additionally, a DLF strategy is implemented to balance exploration and exploitation capabilities, increasing the likelihood of escaping local optima and improving late-stage search ability. The performance of IDBO is evaluated against four basic meta-heuristic algorithms, including DBO, on 14 benchmark functions. The algorithms are ranked by applying the Wilcoxon signed-rank and Friedman tests. The outcomes demonstrate that IDBO outperforms the other algorithms in finding solutions for both low- and high-dimensional functions with a single mode or multiple modes, which verifies the effectiveness of the improvement strategy, and it is highly competitive with other meta-heuristics.
  • In this paper, five prediction models are separately developed using the IDBO-BP model to predict the LCS, TRS, TME, RH and TH of larch wood after heat treatment with temperature, duration and relative humidity as input variables. The outcomes indicate that the MAE, MSE and MAPE values of the IDBO-BP model are considerably lower than those of the original BP neural network model, showing that optimizing the neural network model with IDBO significantly improves the prediction accuracy of wood mechanical properties. In addition to the original BP neural network model, this paper also compares IDBO-BP with the TSSA-BP, GWO-BP, IGWO-BP and DBO-BP models. The results indicate that the predictions of the IDBO-BP model are closer to the true values, demonstrating significant optimization and improved prediction ability.
  • This paper compares the optimal prediction models for different parameters and their corresponding topologies and activation functions. Table A3 shows that the same model with different parameters does not necessarily have the same optimal topology. For the LCS, TRS, TME, RH and TH of heat-treated larch wood predicted in this paper, the most accurate IDBO-BP topologies that minimize the error are 3-2-1, 3-4-6-1, 3-4-1, 3-5-1 and 3-4-1, respectively.
  • The Friedman test can only reflect the quality of solutions, not their diversity. Therefore, some algorithms may differ significantly in the diversity of their solutions but not in their quality. The Friedman test is also less robust in some extreme cases; for example, if an algorithm obtains an exceptionally good or bad solution, this may influence the ranks and rank means of the other algorithms, thus obscuring the differences between them. This is illustrated in Table A5: the original BP model ranks first in MSE for both the training and test sets, a metric that may be more susceptible to outlier data because MSE magnifies the prediction error. However, Table A5 also shows that, although the original BP model performs well in MSE, its overall ranking for both the test and training sets is inferior to that of the other five models.

Author Contributions

Conceptualization, R.Z.; methodology, R.Z.; software, R.Z.; validation, R.Z.; formal analysis, R.Z.; investigation, R.Z.; resources, R.Z.; data curation, R.Z.; writing—original draft preparation, R.Z.; writing—review and editing, R.Z.; visualization, R.Z.; project administration, R.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities, grant number 2572020BL01, and the Natural Science Foundation of Heilongjiang Province, grant number LH2020C050.

Data Availability Statement

The data are openly available in a public repository that issues datasets with DOIs: the data that support the findings of this study are available in BioResources at http://doi.org/10.15376/biores.10.3.5758-5776, reference number [35]. The data presented in this study are also available in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Wood treatment conditions and corresponding mechanical properties.
Test Temperature/°C | Test Time/h | Test Humidity/% | Longitudinal Compressive Strength/MPa | Transverse Rupture Strength/MPa | Transverse Modulus of Elasticity/GPa | Radial Hardness/MPa | Tangential Hardness/MPa
120 | 0.5 | 0 | 41.9 | 67.4 | 9.093 | 14.12 | 15.56
120 | 0.5 | 40 | 39.8 | 65.3 | 9.038 | 13.02 | 14.69
120 | 0.5 | 60 | 39.7 | 69.7 | 9.1 | 14.67 | 15.08
120 | 0.5 | 100 | 39.5 | 67.2 | 8.845 | 14.65 | 15.45
120 | 1 | 0 | 39.5 | 67.8 | 8.649 | 13.98 | 14.36
120 | 1 | 40 | 39.5 | 66.4 | 8.752 | 12.98 | 15.59
120 | 1 | 60 | 39.4 | 67.8 | 9.245 | 13.78 | 15.32
120 | 1 | 100 | 39.2 | 63.1 | 7.895 | 14.55 | 14.23
120 | 2 | 0 | 39.2 | 66.9 | 9.074 | 13.33 | 14.23
120 | 2 | 40 | 39.1 | 68.2 | 8.945 | 12.55 | 14.58
120 | 2 | 60 | 39.1 | 65.2 | 8.854 | 13.25 | 14.89
120 | 2 | 100 | 39.1 | 63.2 | 8.933 | 13.36 | 14.56
120 | 3 | 0 | 39.1 | 66.5 | 8.9 | 13.56 | 14.78
120 | 3 | 40 | 39.1 | 67.6 | 8.963 | 13.45 | 14.45
120 | 3 | 60 | 38.9 | 66.6 | 8.745 | 13.01 | 14.69
120 | 3 | 100 | 38.9 | 64.2 | 8.745 | 12.45 | 14.78
140 | 0.5 | 0 | 38.9 | 66.7 | 8.978 | 14.69 | 15.56
140 | 0.5 | 40 | 38.9 | 67.5 | 8.845 | 13.06 | 15.02
140 | 0.5 | 60 | 38.7 | 66.8 | 9.155 | 14.02 | 14.23
140 | 0.5 | 100 | 38.7 | 65.3 | 8.877 | 15.02 | 15.01
140 | 1 | 0 | 38.6 | 66.5 | 9.179 | 14.16 | 15.68
140 | 1 | 40 | 38.6 | 64.5 | 9.137 | 13.05 | 15.01
140 | 1 | 60 | 38.5 | 67.2 | 9.024 | 13.49 | 15.17
140 | 1 | 100 | 38.5 | 63.1 | 8.823 | 13.45 | 15.48
140 | 2 | 0 | 38.4 | 66.3 | 8.823 | 13.54 | 14.69
140 | 2 | 40 | 38.4 | 65.7 | 8.852 | 14.69 | 14.58
140 | 2 | 60 | 38.2 | 67.1 | 8.799 | 13.99 | 14.74
140 | 2 | 100 | 38.2 | 62.7 | 8.9 | 14.28 | 15.63
140 | 3 | 0 | 38.2 | 65.4 | 8.811 | 14.39 | 14.23
140 | 3 | 40 | 38.2 | 64.6 | 8.934 | 13.23 | 14.56
140 | 3 | 60 | 38.1 | 65.5 | 8.654 | 14.23 | 13.65
140 | 3 | 100 | 38.1 | 62.1 | 8.798 | 13.56 | 14.02
160 | 0.5 | 0 | 38.1 | 66.3 | 8.788 | 14.89 | 14.99
160 | 0.5 | 40 | 38 | 66.9 | 9.011 | 14.87 | 14.36
160 | 0.5 | 60 | 37.9 | 66.3 | 8.745 | 14.58 | 14.78
160 | 0.5 | 100 | 37.8 | 65.8 | 8.712 | 14.69 | 15.69
160 | 1 | 0 | 37.8 | 62.4 | 8.679 | 13.42 | 14.56
160 | 1 | 40 | 37.6 | 61.4 | 8.645 | 14.09 | 15.3
160 | 1 | 60 | 37.6 | 62.2 | 8.798 | 14.69 | 15.9
160 | 1 | 100 | 37.6 | 62.8 | 8.679 | 13.58 | 15.63
160 | 2 | 0 | 37.6 | 62.2 | 8.727 | 14.63 | 13.92
160 | 2 | 40 | 37.6 | 62.1 | 8.557 | 14.02 | 14.17
160 | 2 | 60 | 37.5 | 63.1 | 8.687 | 15.17 | 14.28
160 | 2 | 100 | 37.5 | 60.9 | 8.611 | 14.65 | 15.09
160 | 3 | 0 | 37.5 | 61.9 | 8.611 | 13.65 | 14.36
160 | 3 | 40 | 37.4 | 61.5 | 8.534 | 13.47 | 14.56
160 | 3 | 60 | 37.3 | 60.8 | 8.601 | 13.58 | 13.89
160 | 3 | 100 | 37.2 | 60.5 | 8.552 | 13.69 | 14.36
180 | 0.5 | 0 | 37.2 | 65.9 | 8.601 | 15.21 | 14.03
180 | 0.5 | 40 | 37.1 | 65.3 | 8.689 | 15.98 | 14.56
180 | 0.5 | 60 | 37.1 | 66.1 | 8.645 | 16.01 | 13.97
180 | 0.5 | 100 | 36.9 | 65.7 | 8.599 | 14.32 | 14.33
180 | 1 | 0 | 36.9 | 65.4 | 8.623 | 15.09 | 13.79
180 | 1 | 40 | 36.9 | 64.9 | 8.645 | 14.98 | 14.25
180 | 1 | 60 | 36.7 | 66.3 | 8.579 | 15.45 | 14.08
180 | 1 | 100 | 36.7 | 64.8 | 8.545 | 14.33 | 13.64
180 | 2 | 0 | 36.6 | 65.1 | 8.574 | 14.65 | 13.69
180 | 2 | 40 | 36.5 | 65.8 | 8.6 | 14.13 | 13.59
180 | 2 | 60 | 36.5 | 64.5 | 8.532 | 13.99 | 14.49
180 | 2 | 100 | 36.1 | 64.2 | 8.544 | 15.1 | 13.54
180 | 3 | 0 | 36 | 64.1 | 8.6 | 14.21 | 14.06
180 | 3 | 40 | 35.9 | 64.2 | 8.541 | 13.99 | 14.21
180 | 3 | 60 | 35.8 | 64.8 | 8.456 | 14.58 | 13.98
180 | 3 | 100 | 35.8 | 63.8 | 8.499 | 14.99 | 13.69
200 | 0.5 | 0 | 35.8 | 62.1 | 8.483 | 12 | 13.6
200 | 0.5 | 40 | 35.5 | 60.6 | 8.475 | 11.96 | 12.99
200 | 0.5 | 60 | 35.4 | 59.9 | 8.399 | 11.45 | 13.21
200 | 1 | 0 | 35.4 | 61.9 | 8.422 | 11.69 | 12.98
200 | 1 | 40 | 35.1 | 60.8 | 8.489 | 11.46 | 12.64
200 | 1 | 60 | 34.6 | 61.2 | 8.321 | 11.54 | 12.35
200 | 2 | 0 | 34.5 | 61.2 | 8.369 | 11.99 | 13.02
200 | 2 | 40 | 34.5 | 60.8 | 8.354 | 11.15 | 12.69
200 | 2 | 60 | 34.2 | 60.5 | 8.211 | 10.65 | 12.49
200 | 3 | 0 | 34.1 | 60.9 | 8.249 | 10.68 | 12.73
200 | 3 | 40 | 34.1 | 59.8 | 8.231 | 11.05 | 12.57
200 | 3 | 60 | 34.1 | 58.2 | 8.011 | 10.22 | 12.37
210 | 0.5 | 0 | 33.9 | 50.1 | 7.856 | 10.23 | 10.98
210 | 0.5 | 40 | 33.8 | 50.8 | 7.789 | 10.59 | 9.98
210 | 0.5 | 60 | 33.2 | 49.9 | 7.865 | 10.55 | 10.23
210 | 1 | 0 | 32.9 | 50.6 | 7.765 | 10.21 | 10.65
210 | 1 | 40 | 32.9 | 49.8 | 7.712 | 9.98 | 10.21
210 | 1 | 60 | 32.8 | 48.9 | 7.498 | 10.01 | 10.65
210 | 2 | 0 | 32.5 | 49.1 | 7.689 | 9.98 | 9.64
210 | 2 | 40 | 32.1 | 49.5 | 7.712 | 9.65 | 9.35
210 | 2 | 60 | 31.8 | 49.6 | 7.623 | 10.03 | 9.67
210 | 3 | 0 | 31.5 | 47.8 | 7.5 | 9.21 | 8.91
210 | 3 | 40 | 30.8 | 46.5 | 7.412 | 9.1 | 8.21
210 | 3 | 60 | 30.5 | 45.1 | 7.321 | 9.03 | 8.99
Table A2. Order by Friedman test in dimensions D = 30, 50 and 100.
Alg. | D | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 | F13 | F14 | Avg. Rank | Overall Rank
WOA | 30 | 3 | 3 | 5 | 4.67 | 3.67 | 2.5 | 3 | 3 | 1.83 | 2.33 | 3 | 3 | 4.33 | 3.33 | 3.26 | 3
WOA | 50 | 3 | 2.33 | 5 | 4.67 | 3.67 | 2.5 | 3 | 3 | 2.17 | 3.67 | 3 | 3 | 4.33 | 3.33 | 3.33 | 3
WOA | 100 | 3 | 3 | 5 | 4.67 | 3.67 | 2.5 | 3 | 3 | 2.17 | 3.67 | 3 | 3 | 4.33 | 3.33 | 3.38 | 4
GWO | 30 | 4 | 4 | 3 | 3 | 3.33 | 2.5 | 4 | 4 | 3.17 | 4 | 4 | 3.33 | 2.67 | 3.33 | 3.45 | 4
GWO | 50 | 4 | 4 | 3 | 3 | 3.33 | 2.5 | 4 | 4 | 3.83 | 2.67 | 4 | 2.67 | 2.67 | 3.67 | 3.38 | 4
GWO | 100 | 4 | 4 | 3 | 3 | 3.33 | 2.5 | 4 | 4 | 2.83 | 2.67 | 4 | 2.67 | 2.67 | 3.33 | 3.29 | 3
PSO | 30 | 5 | 5 | 4 | 4.33 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 2.67 | 5 | 4.71 | 5
PSO | 50 | 5 | 5 | 4 | 4.33 | 5 | 5 | 5 | 5 | 4.67 | 5 | 5 | 5 | 2.67 | 5 | 4.69 | 5
PSO | 100 | 5 | 5 | 4 | 4.33 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 2.67 | 5 | 4.71 | 5
DBO | 30 | 2 | 2 | 2 | 2 | 1.67 | 2.5 | 2 | 2 | 3.17 | 2.67 | 1.67 | 2.67 | 2.67 | 2.33 | 2.24 | 2
DBO | 50 | 2 | 2.67 | 2 | 2 | 1.67 | 2.5 | 2 | 2 | 2.83 | 2.67 | 1.67 | 3.33 | 2.67 | 1.67 | 2.26 | 2
DBO | 100 | 2 | 2 | 2 | 2 | 1.67 | 2.5 | 2 | 2 | 3.5 | 2.67 | 1.67 | 3.33 | 2.67 | 2.33 | 2.31 | 2
IDBO | 30 | 1 | 1 | 1 | 1 | 1.33 | 2.5 | 1 | 1 | 1.83 | 1 | 1.33 | 1 | 2.67 | 1 | 1.33 | 1
IDBO | 50 | 1 | 1 | 1 | 1 | 1.33 | 2.5 | 1 | 1 | 1.5 | 1 | 1.33 | 1 | 2.67 | 1.33 | 1.33 | 1
IDBO | 100 | 1 | 1 | 1 | 1 | 1.33 | 2.5 | 1 | 1 | 1.5 | 1 | 1.33 | 1 | 2.67 | 1 | 1.31 | 1
Table A3. Optimal prediction models with different parameters and the corresponding topologies and activation functions.
Parms | Model | Neuron Configuration | Topology | Hidden and Output Activations | Train MAE | Train MSE | Train MAPE | Train R² | Test MAE | Test MSE | Test MAPE | Test R²
LCS | BP | 2 | 3-2-1 | LOGSIG-PURELIN | 0.2206 | 0.0020 | 0.9335% | 0.9770 | 0.1945 | 0.0011 | 0.5332% | 0.9868
LCS | GWO-BP | 3 | 3-3-1 | LOGSIG-PURELIN | 0.1956 | 0.1054 | 0.5263% | 0.9785 | 0.1372 | 0.0724 | 0.4919% | 0.9940
LCS | TSSA-BP | 4 | 3-4-1 | LOGSIG-TANSIG | 0.1592 | 0.1282 | 0.4342% | 0.9759 | 0.1481 | 0.0642 | 0.4270% | 0.9887
LCS | DBO-BP | (3, 7) | 3-3-7-1 | LOGSIG-PURELIN | 0.1412 | 0.1052 | 0.3786% | 0.9830 | 0.1503 | 0.0566 | 0.4260% | 0.9879
LCS | IGWO-BP | (2, 2) | 3-2-2-1 | PURELIN-LOGSIG | 0.1652 | 0.1200 | 0.4449% | 0.9805 | 0.1291 | 0.0345 | 0.3591% | 0.9907
LCS | IDBO-BP | 2 | 3-2-1 | LOGSIG-PURELIN | 0.1315 | 0.0996 | 0.3513% | 0.9834 | 0.0856 | 0.0122 | 0.2286% | 0.9978
TRS | BP | 2 | 3-2-1 | LOGSIG-PURELIN | 1.2157 | 0.0379 | 2.4030% | 0.9363 | 1.1994 | 0.0753 | 2.4487% | 0.9275
TRS | TSSA-BP | 2 | 3-2-1 | LOGSIG-PURELIN | 1.2191 | 2.2895 | 1.9534% | 0.9236 | 1.2609 | 2.2655 | 2.0291% | 0.9430
TRS | GWO-BP | (3, 7) | 3-3-7-1 | LOGSIG-PURELIN | 1.1097 | 1.8158 | 1.7920% | 0.9433 | 0.9837 | 1.9907 | 1.5331% | 0.9272
TRS | IGWO-BP | 3 | 3-3-1 | LOGSIG-TANSIG | 1.1020 | 2.2158 | 1.8002% | 0.9310 | 0.9519 | 1.5641 | 1.5302% | 0.9557
TRS | DBO-BP | 7 | 3-7-1 | POSLIN-PURELIN | 1.0349 | 1.7647 | 1.6610% | 0.9510 | 0.9345 | 1.4363 | 1.4881% | 0.9484
TRS | IDBO-BP | (4, 6) | 3-4-6-1 | TANSIG-PURELIN | 0.9032 | 1.4304 | 1.4750% | 0.9601 | 0.8218 | 1.1362 | 1.3392% | 0.9683
TME | BP | 3 | 3-3-1 | LOGSIG-PURELIN | 0.1710 | 0.0006 | 2.2422% | 0.8260 | 0.0928 | 0.0005 | 1.4955% | 0.8929
TME | GWO-BP | (3, 7) | 3-3-7-1 | LOGSIG-PURELIN | 0.0925 | 0.0306 | 1.0968% | 0.8638 | 0.1010 | 0.0164 | 1.1576% | 0.9002
TME | TSSA-BP | (2, 2) | 3-2-2-1 | LOGSIG-PURELIN | 0.1207 | 0.0500 | 2.0647% | 0.7992 | 0.0977 | 0.0158 | 1.1322% | 0.7225
TME | IGWO-BP | 3 | 3-3-1 | LOGSIG-PURELIN | 0.1144 | 0.0324 | 1.3611% | 0.8424 | 0.0923 | 0.0155 | 1.0797% | 0.9018
TME | DBO-BP | (3, 7) | 3-3-7-1 | LOGSIG-PURELIN | 0.1024 | 0.0273 | 1.2126% | 0.8658 | 0.0879 | 0.0127 | 1.0182% | 0.9053
TME | IDBO-BP | 4 | 3-4-1 | POSLIN-PURELIN | 0.0849 | 0.0266 | 1.0132% | 0.8743 | 0.0824 | 0.0115 | 0.9340% | 0.9156
RH | BP | (5, 6) | 3-5-6-1 | POSLIN-PURELIN | 0.4763 | 0.0049 | 3.5198% | 0.9249 | 0.4963 | 0.0078 | 3.8911% | 0.8986
RH | TSSA-BP | (3, 7) | 3-3-7-1 | LOGSIG-PURELIN | 0.4205 | 0.3300 | 4.1028% | 0.8901 | 0.3919 | 0.3644 | 3.8402% | 0.9023
RH | GWO-BP | 5 | 3-5-1 | POSLIN-PURELIN | 0.4009 | 0.2375 | 2.9805% | 0.9233 | 0.5008 | 0.3834 | 3.8098% | 0.8785
RH | IGWO-BP | (4, 6) | 3-4-6-1 | TANSIG-PURELIN | 0.4256 | 0.2896 | 3.2035% | 0.9107 | 0.4095 | 0.2953 | 3.0822% | 0.8975
RH | DBO-BP | 2 | 3-2-1 | TANSIG-PURELIN | 0.3928 | 0.2466 | 2.9423% | 0.8985 | 0.3478 | 0.2053 | 2.5616% | 0.9285
RH | IDBO-BP | 5 | 3-5-1 | POSLIN-PURELIN | 0.3353 | 0.2122 | 2.5198% | 0.9372 | 0.3236 | 0.1889 | 2.4020% | 0.9458
TH | BP | (7, 5) | 3-7-5-1 | LOGSIG-LOGSIG | 0.3850 | 0.0024 | 2.8660% | 0.9412 | 0.4308 | 0.0073 | 3.4057% | 0.8996
TH | GWO-BP | 2 | 3-2-1 | LOGSIG-PURELIN | 0.2857 | 0.2283 | 2.8154% | 0.9615 | 0.3720 | 0.2421 | 3.0862% | 0.8625
TH | TSSA-BP | (4, 6) | 3-4-6-1 | TANSIG-PURELIN | 0.3743 | 0.2175 | 2.8032% | 0.9354 | 0.3839 | 0.2201 | 2.9423% | 0.8896
TH | IGWO-BP | 4 | 3-4-1 | TANSIG-PURELIN | 0.3156 | 0.1671 | 2.3664% | 0.9566 | 0.3824 | 0.2373 | 2.9295% | 0.8899
TH | DBO-BP | (7, 5) | 3-7-5-1 | LOGSIG-LOGSIG | 0.3335 | 0.1700 | 2.4652% | 0.9532 | 0.3604 | 0.2092 | 2.6622% | 0.8996
TH | IDBO-BP | 4 | 3-4-1 | TANSIG-PURELIN | 0.2611 | 0.1249 | 1.9515% | 0.9676 | 0.2962 | 0.1544 | 2.1062% | 0.9399
Table A4. Pairwise comparisons of the prediction performance of different models with IDBO-BP.
Parms | Sample 1 − Sample 2 | Train MAE | Adj. Sig. | Train MSE | Adj. Sig. | Train MAPE | Adj. Sig. | Train R² | Adj. Sig. | Test MAE | Adj. Sig. | Test MSE | Adj. Sig. | Test MAPE | Adj. Sig. | Test R² | Adj. Sig.
LCS | IDBO-BP − DBO-BP | 6.85% | 0.743 | 5.32% | 1 | 7.22% | 0.003 | −0.05% | 0.051 | 43.07% | 0.436 | 78.44% | 0.743 | 46.35% | 0.007 | −1.00% | 0.743
LCS | IDBO-BP − IGWO-BP | 20.40% | 1 | 17.05% | 1 | 21.05% | 0.011 | −0.30% | 0.269 | 33.71% | 0.954 | 64.60% | 0.954 | 36.35% | 0.007 | −0.71% | 0.954
LCS | IDBO-BP − GWO-BP | 32.76% | 0.572 | 5.52% | 1 | 33.25% | 0.001 | −0.50% | 1 | 37.65% | 0.068 | 83.14% | 0.132 | 53.53% | 0.001 | −0.38% | 0.011
LCS | IDBO-BP − TSSA-BP | 17.38% | 1 | 22.32% | 1 | 19.10% | 0.096 | −0.77% | 0.945 | 42.21% | 1 | 80.98% | 1 | 46.47% | 0.016 | −0.92% | 0.572
LCS | IDBO-BP − BP | 40.38% | 0.005 | −4858.66% | 0.068 | 62.37% | 0 | −0.66% | 0.103 | 56.00% | 0.002 | −1016.77% | 0.329 | 57.13% | 0 | −1.11% | 0.002
TRS | IDBO-BP − DBO-BP | 12.73% | 0.441 | 8.94% | 1 | 11.20% | 1 | −0.96% | 0.556 | 12.06% | 0.428 | 20.89% | 1 | 10.01% | 1 | −2.10% | 0.185
TRS | IDBO-BP − IGWO-BP | 18.04% | 0.269 | 35.45% | 1 | 18.06% | 1 | −3.13% | 0.269 | 13.67% | 0.945 | 27.36% | 1 | 12.48% | 1 | −1.32% | 0.638
TRS | IDBO-BP − GWO-BP | 18.61% | 0.945 | 21.22% | 1 | 17.69% | 1 | −1.79% | 1 | 16.46% | 0.061 | 42.92% | 1 | 12.65% | 1 | −4.43% | 0.02
TRS | IDBO-BP − TSSA-BP | 25.92% | 0.035 | 37.52% | 1 | 24.49% | 1 | −3.96% | 0.945 | 34.83% | 0.035 | 49.85% | 1 | 34.00% | 0.572 | −2.68% | 1
TRS | IDBO-BP − BP | 25.71% | 0.001 | −3676.83% | 0.001 | 38.62% | 0.096 | −2.55% | 0.103 | 31.48% | 0.006 | −1409.50% | 0 | 45.31% | 0.002 | −4.41% | 0.061
TME | IDBO-BP − DBO-BP | 17.08% | 0.794 | 2.37% | 1 | 16.44% | 0.164 | −0.98% | 0.393 | 6.20% | 0.132 | 9.71% | 0.572 | 8.27% | 0.005 | −1.13% | 1
TME | IDBO-BP − IGWO-BP | 25.75% | 0.269 | 17.97% | 1 | 25.56% | 0.42 | −3.79% | 0.269 | 10.66% | 0.572 | 25.86% | 1 | 13.49% | 0.954 | −1.53% | 1
TME | IDBO-BP − GWO-BP | 8.13% | 0.945 | 13.07% | 1 | 7.62% | 0.42 | −1.22% | 1 | 18.36% | 0.002 | 29.85% | 0.034 | 19.31% | 0.181 | −1.71% | 0.007
TME | IDBO-BP − TSSA-BP | 29.62% | 0.035 | 46.76% | 1 | 50.93% | 0.035 | −9.40% | 0.945 | 15.60% | 0.096 | 27.14% | 0.246 | 17.50% | 0.181 | −26.72% | 0.181
TME | IDBO-BP − BP | 50.33% | 0.001 | −4103.20% | 0.023 | 54.81% | 0.001 | −5.86% | 0.103 | 11.15% | 0.181 | −2041.22% | 0.436 | 37.55% | 0 | −2.54% | 0.068
RH | IDBO-BP − DBO-BP | 14.63% | 0.361 | 13.98% | 1 | 14.36% | 1 | −4.30% | 0.185 | 6.97% | 1.000 | 8.00% | 1 | 6.23% | 1 | −1.86% | 0.119
RH | IDBO-BP − IGWO-BP | 21.21% | 0.269 | 26.72% | 1 | 21.34% | 1 | −2.90% | 0.269 | 20.99% | 1 | 36.03% | 1 | 22.07% | 1 | −5.38% | 0.638
RH | IDBO-BP − GWO-BP | 16.36% | 0.945 | 10.65% | 1 | 15.46% | 1 | −1.50% | 1 | 35.40% | 1 | 50.74% | 1 | 36.95% | 1 | −7.67% | 0.02
RH | IDBO-BP − TSSA-BP | 20.25% | 0.035 | 35.70% | 0.572 | 38.58% | 1 | −5.29% | 0.945 | 17.44% | 0.096 | 48.18% | 0.954 | 37.45% | 0.572 | −4.82% | 1
RH | IDBO-BP − BP | 29.60% | 0.001 | −4200.16% | 0.016 | 28.41% | 0.048 | −1.32% | 0.103 | 34.80% | 1.000 | −2308.90% | 0.005 | 38.27% | 0.011 | −5.26% | 0.061
TH | IDBO-BP − DBO-BP | 21.71% | 0.72 | 6.50% | 1 | 20.84% | 0.087 | −1.50% | 0.556 | 17.83% | 0.366 | 26.17% | 1 | 20.88% | 0.151 | −4.48% | 0.113
TH | IDBO-BP − IGWO-BP | 17.27% | 0.269 | 25.22% | 1 | 17.53% | 0.42 | −1.14% | 0.269 | 22.56% | 0.945 | 34.92% | 1 | 28.10% | 1 | −5.62% | 0.638
TH | IDBO-BP − GWO-BP | 8.62% | 0.945 | 45.27% | 1 | 30.68% | 0.42 | −0.63% | 1 | 20.39% | 0.061 | 36.20% | 1 | 31.75% | 0.035 | −8.97% | 0.02
TH | IDBO-BP − TSSA-BP | 30.25% | 0.035 | 42.55% | 1 | 30.38% | 0.035 | −3.44% | 0.945 | 22.87% | 0.035 | 29.84% | 1 | 28.42% | 0.061 | −5.64% | 1
TH | IDBO-BP − BP | 32.18% | 0.001 | −5117.57% | 1 | 31.91% | 0.001 | −2.80% | 0.103 | 31.25% | 0.006 | −2013.85% | 1 | 38.16% | 0 | −4.48% | 0.061
Each row compares sample 1 with sample 2 and calculates the percentage by which sample 1 reduces the error based on sample 2 (a negative sign indicates the percentage by which R² increases). Asymptotic significances (2-sided tests) are displayed. The significance level is 0.050. a Significance values have been adjusted with the Bonferroni correction for multiple tests.
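The percentages in each cell can be reproduced from the metric values in Table A3; the sketch below (an assumed reconstruction, not the authors' script) shows the arithmetic for the TRS train-set comparison of IDBO-BP with DBO-BP.

```python
# Percentage by which sample 1 reduces an error metric relative to sample 2,
# and the (negatively signed) percentage by which R^2 increases.

def reduction(sample1, sample2):
    return (sample2 - sample1) / sample2 * 100

mae_idbo, mae_dbo = 0.9032, 1.0349  # train MAE, TRS rows of Table A3
r2_idbo, r2_dbo = 0.9601, 0.9510    # train R^2, TRS rows of Table A3

print(f"MAE reduction: {reduction(mae_idbo, mae_dbo):.2f}%")        # 12.73%
print(f"R^2 change: {-(r2_idbo - r2_dbo) / r2_dbo * 100:.2f}%")     # -0.96%
```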
Table A5. Model ranking by parameters and evaluation metrics with Friedman test.
| Parms | Model | Train MAE | Train MSE | Train MAPE | Train R² | Test MAE | Test MSE | Test MAPE | Test R² | Avg. Rank (Incl. MSE) | Avg. Rank (Excl. MSE) | Overall (Incl. MSE) | Overall (Excl. MSE) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LCS | BP | 5 | 1 | 5.42 | 2.17 | 4.83 | 1 | 5.08 | 2.25 | 2.24 | 2.65 | 5 | 6 |
| LCS | TSSA-BP | 2.67 | 4 | 3.08 | 3.92 | 3.25 | 4.08 | 3.5 | 3.58 | 1.64 | 0.83 | 2 | 2 |
| LCS | GWO-BP | 3.83 | 4.5 | 4.08 | 3.33 | 4.08 | 4.75 | 4.08 | 2.58 | 2.43 | 1.69 | 6 | 5 |
| LCS | IGWO-BP | 3.5 | 4.42 | 3.58 | 3.33 | 3.33 | 4.17 | 3.67 | 3.75 | 1.95 | 1.17 | 3 | 3 |
| LCS | DBO-BP | 3.75 | 3.92 | 3.83 | 3.67 | 3.58 | 4.25 | 3.67 | 3.67 | 1.96 | 1.25 | 4 | 4 |
| LCS | IDBO-BP | 2.25 | 3.17 | 1 | 4.58 | 1.92 | 2.75 | 1 | 5.17 | 0.29 | −0.60 | 1 | 1 |
| TRS | BP | 3.25 | 1 | 5.17 | 3.25 | 4 | 1 | 5.08 | 2.92 | 1.67 | 1.89 | 4 | 6 |
| TRS | TSSA-BP | 4.08 | 4.33 | 3.5 | 3 | 3.92 | 4.25 | 3.75 | 3.33 | 2.19 | 1.49 | 6 | 5 |
| TRS | GWO-BP | 4.17 | 4.25 | 3.83 | 3.17 | 3.58 | 4.08 | 3.42 | 3.58 | 2.07 | 1.38 | 5 | 4 |
| TRS | IGWO-BP | 2.83 | 3.75 | 2.67 | 4.33 | 3.5 | 4 | 3.25 | 3 | 1.58 | 0.82 | 2 | 2 |
| TRS | DBO-BP | 3.17 | 3.58 | 2.75 | 3.67 | 3.5 | 3.92 | 3.33 | 3.42 | 1.65 | 0.94 | 3 | 3 |
| TRS | IDBO-BP | 3.5 | 4.08 | 3.08 | 3.58 | 2.5 | 3.75 | 2.17 | 4.75 | 1.34 | 0.49 | 1 | 1 |
| TME | BP | 4.08 | 1 | 4.75 | 3.83 | 4.67 | 1 | 5.08 | 2.83 | 1.74 | 1.99 | 3 | 6 |
| TME | TSSA-BP | 3.58 | 3.75 | 3.17 | 3.67 | 3.83 | 4.5 | 3.5 | 3.08 | 1.95 | 1.22 | 4 | 3 |
| TME | GWO-BP | 3.17 | 4.67 | 3.67 | 2.5 | 3.67 | 5 | 4.33 | 2.33 | 2.46 | 1.67 | 6 | 5 |
| TME | IGWO-BP | 3.25 | 4 | 3 | 3.75 | 3.33 | 3.58 | 3 | 4 | 1.55 | 0.81 | 2 | 2 |
| TME | DBO-BP | 3.75 | 4.17 | 3.5 | 3.25 | 3.75 | 4.25 | 3.5 | 3.75 | 1.99 | 1.25 | 5 | 4 |
| TME | IDBO-BP | 3.17 | 3.42 | 2.92 | 4 | 1.75 | 2.67 | 1.58 | 5 | 0.81 | 0.07 | 1 | 1 |
| RH | BP | 4.58 | 1 | 5.08 | 3.42 | 5.17 | 1 | 5.25 | 3.75 | 1.86 | 2.15 | 4 | 6 |
| RH | TSSA-BP | 3.17 | 5.08 | 3.75 | 2.33 | 3 | 5.17 | 4.25 | 2.08 | 2.50 | 1.63 | 6 | 5 |
| RH | GWO-BP | 3.5 | 3.83 | 3.08 | 3.42 | 2.92 | 3.5 | 3 | 3.58 | 1.60 | 0.92 | 3 | 3 |
| RH | IGWO-BP | 3.5 | 4.08 | 3.25 | 3.58 | 3.92 | 4.25 | 3.42 | 3.67 | 1.90 | 1.14 | 5 | 4 |
| RH | DBO-BP | 3.25 | 3.5 | 3 | 4 | 2.92 | 3.33 | 2.42 | 3.83 | 1.32 | 0.63 | 2 | 2 |
| RH | IDBO-BP | 3 | 3.5 | 2.83 | 4.25 | 3.08 | 3.75 | 2.67 | 4.08 | 1.31 | 0.54 | 1 | 1 |
| TH | BP | 4.08 | 1 | 4.75 | 2.08 | 4.17 | 1 | 4.33 | 2.25 | 1.88 | 2.17 | 4 | 6 |
| TH | TSSA-BP | 4.17 | 4.5 | 3.92 | 3 | 3.83 | 4.08 | 3.08 | 3.33 | 2.16 | 1.45 | 5 | 4 |
| TH | GWO-BP | 3.33 | 3.83 | 3 | 3.67 | 3.92 | 4.17 | 3.75 | 3.58 | 1.84 | 1.13 | 3 | 3 |
| TH | IGWO-BP | 3.83 | 4.42 | 3.58 | 3.08 | 4.08 | 4.5 | 4 | 3.17 | 2.27 | 1.54 | 6 | 5 |
| TH | DBO-BP | 3.42 | 3.67 | 3 | 4 | 2.58 | 3.42 | 2.42 | 3.83 | 1.34 | 0.60 | 2 | 2 |
| TH | IDBO-BP | 3.17 | 3.58 | 2.75 | 3.47 | 2.42 | 3.43 | 2.45 | 3.99 | 1.29 | 0.56 | 1 | 1 |
For comparison purposes, R² is taken as a negative value when calculating the average and overall rankings.
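A minimal sketch (assumed, not the authors' script) of how the average ranks in Table A5 are formed from a model's eight Friedman mean ranks: MAE, MSE and MAPE ranks enter positively, R² ranks negatively, and the "Exclude MSE" variant simply drops the two MSE ranks.

```python
def average_rank(mae, mse, mape, r2, include_mse=True):
    """mae, mse, mape, r2: (train, test) pairs of Friedman mean ranks."""
    ranks = list(mae) + list(mape) + [-r for r in r2]  # R^2 taken negative
    if include_mse:
        ranks += list(mse)
    return sum(ranks) / len(ranks)

# IDBO-BP mean ranks for LCS, read from the table above.
mae, mse, mape, r2 = (2.25, 1.92), (3.17, 2.75), (1.0, 1.0), (4.58, 5.17)
print(round(average_rank(mae, mse, mape, r2, include_mse=True), 2))   # 0.29
print(round(average_rank(mae, mse, mape, r2, include_mse=False), 2))  # -0.6
```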

References

  1. Goli, G.; Negro, F.; Emmerich, L.; Militz, H. Thermal and chemical modification of wood—A combined approach for exclusive, high-demanding performance products. Wood Mater. Sci. Eng. 2022, 18, 58–66. [Google Scholar] [CrossRef]
  2. Bekhta, P. Effect of heat treatment on some physical and mechanical properties of birch plywood. Eur. J. Wood Wood Prod. 2020, 78, 683–691. [Google Scholar] [CrossRef]
  3. Esteves, B.; Ferreira, H.; Viana, H.; Ferreira, J.; Domingos, I.; Cruz-Lopes, L.; Jones, D.; Nunes, L. Termite Resistance, Chemical and Mechanical Characterization of Paulownia tomentosa Wood before and after Heat Treatment. Forests 2021, 12, 1114. [Google Scholar] [CrossRef]
  4. Tjeerdsma, B.F.; Militz, H. Chemical changes in hydrothermal treated wood: FTIR analysis of combined hydrothermal and dry heat-treated wood. Eur. J. Wood Wood Prod. 2005, 63, 102–111. [Google Scholar] [CrossRef]
  5. Kaymakci, A.; Bayram, B. Evaluation of heat treatment parameters’ effect on some physical and mechanical properties of poplar wood with multi-criteria decision making techniques. Bioresources 2021, 16, 4693–4703. [Google Scholar] [CrossRef]
  6. Suri, I.F.; Purusatama, B.D.; Kim, J.H.; Yang, G.U.; Prasetia, D.; Kwon, G.J.; Hidayat, W.; Lee, S.H.; Febrianto, F.; Kim, N.H. Comparison of physical and mechanical properties of Paulownia tomentosa and Pinus koraiensis wood heat-treated in oil and air. Eur. J. Wood Wood Prod. 2022, 80, 1389–1399. [Google Scholar] [CrossRef]
  7. Esteves, B.M.; Pereira, H.M. Wood modification by heat treatment: A review. Bioresources 2008, 4, 370–404. [Google Scholar] [CrossRef]
  8. Korkut, D.S.; Guller, B. The effects of heat treatment on physical properties and surface roughness of red-bud maple (Acer trautvetteri Medw.) wood. Bioresour. Technol. 2008, 99, 2846–2851. [Google Scholar] [CrossRef]
  9. Içel, B.; Guler, G.; Isleyen, O.; Beram, A.; Mutlubas, M. Effects of Industrial Heat Treatment on the Properties of Spruce and Pine Woods. Bioresources 2015, 10, 5159–5173. [Google Scholar] [CrossRef]
  10. Xue, J.; Xu, W.; Zhou, J.; Mao, W.; Wu, S. Effects of High-Temperature Heat Treatment Modification by Impregnation on Physical and Mechanical Properties of Poplar. Materials 2022, 15, 7334. [Google Scholar] [CrossRef]
  11. Boonstra, M.J.; Tjeerdsma, B. Chemical analysis of heat treated softwoods. Eur. J. Wood Wood Prod. 2006, 64, 204–211. [Google Scholar] [CrossRef]
  12. Hill, C.A.S. Wood Modification; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2006. [Google Scholar] [CrossRef]
  13. Kohonen, T. The self-organizing map. Proc. IEEE 1990, 78, 1464–1480. [Google Scholar] [CrossRef]
  14. Onyiagha, C. From neuronal stochasticity to intelligent resource management of packet data networks. In Proceedings of the Fifth International Conference on Artificial Neural Networks, Venue, UK, 7–9 July 1997. [Google Scholar] [CrossRef]
  15. Stergios, A.; Anthony, K.; Elli, R.; Dimitris, B. Predicting the properties of corrugated base papers using multiple linear regression and artificial neural networks. Drewno 2016, 59, 198. [Google Scholar] [CrossRef]
  16. You, G.; Wang, B.; Li, J.; Chen, A.; Sun, J. The prediction of MOE of bamboo-wood composites by ANN models based on the non-destructive vibration testing. J. Build. Eng. 2022, 59, 105078. [Google Scholar] [CrossRef]
  17. Chen, Y.; Wang, W.; Li, N. Prediction of the equilibrium moisture content and specific gravity of thermally modified wood via an Aquila optimization algorithm back-propagation neural network model. Bioresources 2022, 17, 4816–4836. [Google Scholar] [CrossRef]
  18. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  19. Wang, Y.; Wang, W.; Chen, Y. Carnivorous Plant Algorithm and BP to Predict Optimum Bonding Strength of Heat-Treated Woods. Forests 2022, 14, 51. [Google Scholar] [CrossRef]
  20. Ong, K.M.; Ong, P.; Sia, C.K. A carnivorous plant algorithm for solving global optimization problems. Appl. Soft Comput. 2020, 98, 106833. [Google Scholar] [CrossRef]
  21. Li, N.; Wang, W. Prediction of Mechanical Properties of Thermally Modified Wood Based on TSSA-BP Model. Forests 2022, 13, 160. [Google Scholar] [CrossRef]
  22. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  23. Ma, W.; Wang, W.; Cao, Y. Mechanical Properties of Wood Prediction Based on the NAGGWO-BP Neural Network. Forests 2022, 13, 1870. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  25. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
  26. Tiryaki, S.; Özşahin, Ş.; Yıldırım, I. Comparison of artificial neural network and multiple linear regression models to predict optimum bonding strength of heat treated woods. Int. J. Adhes. Adhes. 2014, 55, 29–36. [Google Scholar] [CrossRef]
  27. Yan, Y.; Hongzhong, M.; Zhendong, L. An Improved Grasshopper Optimization Algorithm for Global Optimization. Chin. J. Electron. 2021, 30, 451–459. [Google Scholar] [CrossRef]
  28. Wu, Q. A self-adaptive embedded chaotic particle swarm optimization for parameters selection of Wv-SVM. Expert Syst. Appl. 2011, 38, 184–192. [Google Scholar] [CrossRef]
  29. Wang, Y.; Chen, S.; Wang, Y. Chaos Encryption Algorithm Based on Kent Mapping and AES Combination. In Proceedings of the 2018 International Conference on Network, Communication, Computer Engineering (NCCE 2018), Chongqing, China, 26–27 May 2018. [Google Scholar] [CrossRef]
  30. Wang, X.; Jin, C. Image encryption using Game of Life permutation and PWLCM chaotic system. Opt. Commun. 2012, 285, 412–417. [Google Scholar] [CrossRef]
  31. Bai, H.; Chu, Z.; Wang, D.; Bao, Y.; Qin, L.; Zheng, Y.; Li, F. Predictive control of microwave hot-air coupled drying model based on GWO-BP neural network. Dry. Technol. 2022, 1–11. [Google Scholar] [CrossRef]
  32. Song, X.; Zhao, M.; Yan, Q.; Xing, S. A high-efficiency adaptive artificial bee colony algorithm using two strategies for continuous optimization. Swarm Evol. Comput. 2019, 50, 100549. [Google Scholar] [CrossRef]
  33. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  35. Yang, H.; Cheng, W.; Han, G. Wood Modification at High Temperature and Pressurized Steam: A Relational Model of Mechanical Properties Based on a Neural Network. Bioresources 2015, 10, 5758–5776. [Google Scholar] [CrossRef]
Figure 1. BP Network Structure.
Figure 2. Diagram of the DBO algorithm.
Figure 3. Population initialization in PWLCM: (a) scatter map; (b) frequency distribution histogram.
Figure 4. Diagram of the IDBO-BP algorithm.
Figure 5. Three-dimensional view of partial unimodal test functions.
Figure 6. Three-dimensional view of partial multimodal test functions.
Figure 7. Convergence curves of unimodal and multimodal functions.
Figure 8. Schematic diagram of the IDBO-BP model for the mechanical properties of transverse rupture strength.
Figure 9. Friedman's two-way analysis of variances by ranks for related samples (test sets of LCS).
Figure 10. Comparison of results for various models based on predictions and actual values: (a) longitudinal compressive strength; (b) transverse rupture strength; (c) transverse modulus of elasticity; (d) radial hardness; (e) tangential hardness.
Table 1. Unimodal benchmark functions.

| Function | Dim | Range | $F_{min}$ |
|---|---|---|---|
| $f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_2(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30/50/100 | [−10, 10] | 0 |
| $f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_4(x)=\max_i\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30/50/100 | [−100, 100] | 0 |
| $f_5(x)=\sum_{i=1}^{n-1}\left[100\,(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30/50/100 | [−30, 30] | 0 |
| $f_6(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_7(x)=x_1^2+10^6\sum_{i=2}^{n} x_i^2$ | 30/50/100 | [−100, 100] | 0 |
| $f_8(x)=\sum_{i=1}^{n} x_i^2+\left(\sum_{i=1}^{n}0.5\,i\,x_i\right)^2+\left(\sum_{i=1}^{n}0.5\,i\,x_i\right)^4$ | 30/50/100 | [−5, 10] | 0 |
Table 2. Multimodal benchmark functions.

| Function | Dim | Range | $F_{min}$ |
|---|---|---|---|
| $f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30/50/100 | [−5.12, 5.12] | 0 |
| $f_{10}(x)=\sum_{i=1}^{n}\lvert x_i\sin(x_i)+0.1\,x_i\rvert$ | 30/50/100 | [−10, 10] | 0 |
| $f_{11}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ for all $i=1,\dots,n$ and $u(x_i,a,k,m)=\begin{cases}k\,(x_i-a)^m & x_i>a\\ 0 & -a\le x_i\le a\\ k\,(-x_i-a)^m & x_i<-a\end{cases}$ | 30/50/100 | [−50, 50] | 0 |
| $f_{12}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30/50/100 | [−50, 50] | 0 |
| $f_{13}(x)=\left[\frac{1}{n-1}\sum_{i=1}^{n-1}\sqrt{s_i}\left(\sin(50\,s_i^{0.2})+1\right)\right]^2$, where $s_i=\sqrt{x_i^2+x_{i+1}^2}$ | 30/50/100 | [−100, 100] | 0 |
| $f_{14}(x)=\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\left[1+\sin^2(2\pi y_n)\right]$, where $y_i=1+\frac{x_i-1}{4}$ for all $i=1,\dots,n$ | 30/50/100 | [−10, 10] | 0 |
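For reference, the following NumPy snippet (a sketch, not the authors' code) implements three representative benchmarks from the tables above: the sphere function $f_1$, Rosenbrock $f_5$ and Rastrigin $f_9$. Each takes a candidate solution of dimension 30/50/100 and returns the fitness to be minimized, with global minimum 0.

```python
import numpy as np

def f1_sphere(x):
    return np.sum(x ** 2)

def f5_rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def f9_rastrigin(x):
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

# Sanity checks at the known global optima (x = 0 for f1/f9, x = 1 for f5).
assert f1_sphere(np.zeros(30)) == 0.0
assert f5_rosenbrock(np.ones(30)) == 0.0
assert f9_rastrigin(np.zeros(30)) == 0.0
```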
Table 3. Algorithms' parameter settings.

| Algorithm | Parameter | Setting |
|---|---|---|
| WOA | a | Gradually reduced from 2 to 0 |
| GWO | a | Uniformly lowered from 2 to 0 |
| PSO | C1 and C2 | 2 |
| PSO | Inertia weight | Linearly decreased from 0.9 to 0.1 |
| DBO | α and β | 0.1 |
| DBO | a and b | 0.3 and 0.5 |
| IDBO | α and β | 0.1 |
| IDBO | a and b | 0.3 and 0.5 |
| IDBO | Subpopulation proportions | Ball-rolling dung beetles 0.4 → 0.2, brood balls 0.2, small dung beetles 0.2, thieves 0.2 → 0.4 |
Table 4. Unimodal benchmark function optimization results.

| F | D | Index | WOA | GWO | PSO | DBO | IDBO |
|---|---|---|---|---|---|---|---|
| F1 | 30 | Best | 3.94e-83 | 7.29e-30 | 6.50e+2 | 1.03e-176 | 5.01e-201 |
| F1 | 30 | Mean | 7.03e-74 | 9.20e-28 | 1.77e+3 | 1.77e-89 | 8.31e-151 |
| F1 | 30 | STD | 3.41e-73 | 9.69e-28 | 6.99e+2 | 9.68e-89 | 4.46e-150 |
| F1 | 50 | Best | 4.13e-89 | 1.29e-29 | 6.20e+2 | 1.71e-163 | 1.64e-199 |
| F1 | 50 | Mean | 6.15e-72 | 9.52e-28 | 1.93e+3 | 1.71e-110 | 8.00e-151 |
| F1 | 50 | STD | 4.33e-71 | 1.13e-27 | 8.76e+2 | 1.21e-109 | 5.66e-150 |
| F1 | 100 | Best | 3.31e-88 | 2.43e-29 | 7.52e+2 | 2.11e-182 | 1.89e-208 |
| F1 | 100 | Mean | 2.61e-72 | 1.20e-27 | 2.09e+3 | 4.94e-105 | 6.48e-152 |
| F1 | 100 | STD | 2.36e-71 | 2.64e-27 | 7.95e+2 | 4.90e-104 | 6.48e-151 |
| F2 | 30 | Best | 1.71e-57 | 1.67e-17 | 1.41e+1 | 7.24e-81 | 3.30e-111 |
| F2 | 30 | Mean | 8.35e-50 | 9.15e-17 | 2.03e+1 | 6.05e-55 | 1.98e-83 |
| F2 | 30 | STD | 4.56e-49 | 5.34e-17 | 3.98e+0 | 3.31e-54 | 9.03e-83 |
| F2 | 50 | Best | 2.53e-58 | 2.48e-17 | 9.95e+0 | 1.67e-83 | 1.64e-103 |
| F2 | 50 | Mean | 9.04e-51 | 9.55e-17 | 1.95e+1 | 2.02e-48 | 1.46e-80 |
| F2 | 50 | STD | 4.88e-50 | 6.54e-17 | 4.77e+0 | 1.43e-47 | 1.04e-79 |
| F2 | 100 | Best | 5.90e-59 | 1.06e-17 | 1.05e+1 | 5.00e-84 | 8.80e-107 |
| F2 | 100 | Mean | 3.65e-50 | 9.57e-17 | 1.94e+1 | 2.17e-56 | 1.80e-80 |
| F2 | 100 | STD | 3.22e-49 | 7.80e-17 | 4.20e+0 | 1.71e-55 | 1.45e-79 |
| F3 | 30 | Best | 8.29e+3 | 5.82e-9 | 2.23e+3 | 1.61e-148 | 2.25e-187 |
| F3 | 30 | Mean | 4.90e+4 | 1.64e-5 | 4.87e+3 | 5.61e-79 | 3.79e-95 |
| F3 | 30 | STD | 1.40e+4 | 3.50e-5 | 1.71e+3 | 3.07e-78 | 2.08e-94 |
| F3 | 50 | Best | 1.48e+4 | 2.04e-9 | 1.34e+3 | 9.33e-144 | 6.40e-178 |
| F3 | 50 | Mean | 4.36e+4 | 3.33e-5 | 5.71e+3 | 5.63e-81 | 4.74e-99 |
| F3 | 50 | STD | 1.31e+4 | 1.23e-4 | 2.02e+3 | 2.82e-80 | 3.35e-98 |
| F3 | 100 | Best | 1.47e+4 | 3.82e-9 | 2.16e+3 | 2.31e-157 | 9.16e-183 |
| F3 | 100 | Mean | 4.27e+4 | 3.43e-5 | 5.39e+3 | 9.29e-56 | 1.63e-85 |
| F3 | 100 | STD | 1.26e+4 | 1.57e-4 | 1.60e+3 | 9.29e-55 | 1.63e-84 |
| F4 | 30 | Best | 2.60e-1 | 4.49e-8 | 1.79e+1 | 2.07e-77 | 1.13e-93 |
| F4 | 30 | Mean | 5.48e+1 | 6.54e-7 | 2.85e+1 | 3.47e-50 | 3.07e-63 |
| F4 | 30 | STD | 2.85e+1 | 5.12e-7 | 6.42e+0 | 1.90e-49 | 1.68e-62 |
| F4 | 50 | Best | 8.90e-1 | 2.11e-8 | 1.60e+1 | 2.22e-85 | 5.72e-100 |
| F4 | 50 | Mean | 4.97e+1 | 7.76e-7 | 2.74e+1 | 1.01e-51 | 1.35e-68 |
| F4 | 50 | STD | 2.53e+1 | 1.22e-6 | 5.21e+0 | 7.18e-51 | 9.52e-68 |
| F4 | 100 | Best | 2.73e-1 | 5.70e-8 | 1.57e+1 | 3.24e-81 | 1.78e-100 |
| F4 | 100 | Mean | 5.02e+1 | 6.89e-7 | 2.84e+1 | 4.82e-48 | 4.69e-66 |
| F4 | 100 | STD | 2.65e+1 | 8.72e-7 | 4.94e+0 | 4.38e-47 | 4.42e-65 |
| F5 | 30 | Best | 2.71e+1 | 2.59e+1 | 3.58e+4 | 2.54e+1 | 2.48e+1 |
| F5 | 30 | Mean | 2.79e+1 | 2.69e+1 | 4.52e+5 | 2.58e+1 | 2.52e+1 |
| F5 | 30 | STD | 4.45e-1 | 7.15e-1 | 4.03e+5 | 1.83e-1 | 3.06e-1 |
| F5 | 50 | Best | 2.72e+1 | 2.59e+1 | 4.35e+4 | 2.52e+1 | 2.47e+1 |
| F5 | 50 | Mean | 2.82e+1 | 2.72e+1 | 4.51e+5 | 2.58e+1 | 2.52e+1 |
| F5 | 50 | STD | 4.52e-1 | 7.16e-1 | 3.86e+5 | 2.68e-1 | 3.10e-1 |
| F5 | 100 | Best | 2.69e+1 | 2.53e+1 | 3.65e+4 | 2.53e+1 | 2.47e+1 |
| F5 | 100 | Mean | 2.80e+1 | 2.70e+1 | 4.21e+5 | 2.58e+1 | 2.52e+1 |
| F5 | 100 | STD | 4.45e-1 | 7.56e-1 | 3.26e+5 | 2.17e-1 | 2.55e-1 |
| F6 | 30 | Best | 0.00e+0 | 0.00e+0 | 8.05e+2 | 0.00e+0 | 0.00e+0 |
| F6 | 30 | Mean | 0.00e+0 | 0.00e+0 | 2.56e+3 | 0.00e+0 | 0.00e+0 |
| F6 | 30 | STD | 0.00e+0 | 0.00e+0 | 8.87e+2 | 0.00e+0 | 0.00e+0 |
| F6 | 50 | Best | 0.00e+0 | 0.00e+0 | 9.63e+2 | 0.00e+0 | 0.00e+0 |
| F6 | 50 | Mean | 0.00e+0 | 0.00e+0 | 2.60e+3 | 0.00e+0 | 0.00e+0 |
| F6 | 50 | STD | 0.00e+0 | 0.00e+0 | 8.53e+2 | 0.00e+0 | 0.00e+0 |
| F6 | 100 | Best | 0.00e+0 | 0.00e+0 | 8.35e+2 | 0.00e+0 | 0.00e+0 |
| F6 | 100 | Mean | 0.00e+0 | 0.00e+0 | 2.37e+3 | 0.00e+0 | 0.00e+0 |
| F6 | 100 | STD | 0.00e+0 | 0.00e+0 | 1.09e+3 | 0.00e+0 | 0.00e+0 |
| F7 | 30 | Best | 3.68e-77 | 4.83e-23 | 8.52e+8 | 1.93e-159 | 9.32e-196 |
| F7 | 30 | Mean | 1.85e-66 | 1.04e-21 | 1.75e+9 | 6.52e-108 | 9.14e-140 |
| F7 | 30 | STD | 9.98e-66 | 1.42e-21 | 5.93e+8 | 2.48e-107 | 5.01e-139 |
| F7 | 50 | Best | 1.87e-81 | 8.86e-24 | 6.67e+8 | 1.04e-158 | 8.72e-212 |
| F7 | 50 | Mean | 3.32e-66 | 7.31e-22 | 1.78e+9 | 1.48e-105 | 3.17e-148 |
| F7 | 50 | STD | 2.35e-65 | 1.05e-21 | 8.39e+8 | 9.64e-105 | 2.23e-147 |
| F7 | 100 | Best | 1.50e-82 | 2.06e-23 | 3.42e+8 | 2.07e-173 | 1.84e-200 |
| F7 | 100 | Mean | 4.67e-68 | 9.61e-22 | 1.83e+9 | 4.86e-93 | 2.34e-141 |
| F7 | 100 | STD | 3.28e-67 | 1.86e-21 | 8.66e+8 | 4.86e-92 | 1.68e-140 |
| F8 | 30 | Best | 7.79e-86 | 3.58e-31 | 6.83e+0 | 2.68e-173 | 2.53e-198 |
| F8 | 30 | Mean | 8.40e-77 | 2.89e-29 | 3.41e+1 | 3.19e-109 | 6.83e-161 |
| F8 | 30 | STD | 4.17e-76 | 5.64e-29 | 1.84e+1 | 1.68e-108 | 2.95e-160 |
| F8 | 50 | Best | 1.90e-95 | 3.62e-31 | 7.56e+0 | 1.65e-180 | 1.59e-215 |
| F8 | 50 | Mean | 1.34e-75 | 1.93e-29 | 2.92e+1 | 1.24e-116 | 3.43e-153 |
| F8 | 50 | STD | 5.27e-75 | 2.53e-29 | 2.23e+1 | 6.24e-116 | 2.42e-152 |
| F8 | 100 | Best | 7.99e-93 | 2.83e-31 | 8.12e+0 | 2.98e-177 | 3.62e-212 |
| F8 | 100 | Mean | 1.79e-74 | 2.53e-29 | 2.90e+1 | 1.13e-103 | 8.64e-153 |
| F8 | 100 | STD | 1.29e-73 | 4.61e-29 | 1.60e+1 | 1.06e-102 | 8.64e-152 |
| Rank | 30 | w/t/l | 0/1/7 | 0/1/7 | 0/0/8 | 0/1/7 | 7/1/0 |
| Rank | 50 | w/t/l | 0/1/7 | 0/1/7 | 0/0/8 | 0/1/7 | 7/1/0 |
| Rank | 100 | w/t/l | 0/1/7 | 0/1/7 | 0/0/8 | 0/1/7 | 7/1/0 |
Table 5. Multimodal benchmark function optimization results.

| F | D | Index | WOA | GWO | PSO | DBO | IDBO |
|---|---|---|---|---|---|---|---|
| F9 | 30 | Best | 0.00e+0 | 0.00e+0 | 7.52e+1 | 0.00e+0 | 0.00e+0 |
| F9 | 30 | Mean | 0.00e+0 | 2.79e+0 | 1.09e+2 | 9.62e-1 | 0.00e+0 |
| F9 | 30 | STD | 0.00e+0 | 3.25e+0 | 1.72e+1 | 3.66e+0 | 0.00e+0 |
| F9 | 50 | Best | 0.00e+0 | 0.00e+0 | 7.23e+1 | 0.00e+0 | 0.00e+0 |
| F9 | 50 | Mean | 3.41e-15 | 7.50e+0 | 1.13e+2 | 2.79e-1 | 0.00e+0 |
| F9 | 50 | STD | 1.78e-14 | 2.92e+1 | 1.82e+1 | 1.27e+0 | 0.00e+0 |
| F9 | 100 | Best | 0.00e+0 | 0.00e+0 | 6.65e+1 | 0.00e+0 | 0.00e+0 |
| F9 | 100 | Mean | 1.14e-15 | 2.47e+0 | 1.07e+2 | 3.38e+0 | 0.00e+0 |
| F9 | 100 | STD | 8.00e-15 | 4.00e+0 | 1.83e+1 | 1.68e+1 | 0.00e+0 |
| F10 | 30 | Best | 4.32e-58 | 1.97e-17 | 6.43e+0 | 4.37e-89 | 9.47e-108 |
| F10 | 30 | Mean | 3.25e-32 | 5.71e-4 | 1.08e+1 | 1.26e-4 | 2.44e-81 |
| F10 | 30 | STD | 1.78e-31 | 7.99e-4 | 2.60e+0 | 3.44e-4 | 1.34e-80 |
| F10 | 50 | Best | 4.00e-58 | 1.75e-16 | 3.44e+0 | 1.99e-88 | 2.32e-101 |
| F10 | 50 | Mean | 3.83e-1 | 4.95e-4 | 1.01e+1 | 1.32e-1 | 8.49e-81 |
| F10 | 50 | STD | 2.71e+0 | 5.42e-4 | 2.85e+0 | 9.17e-1 | 6.01e-80 |
| F10 | 100 | Best | 2.83e-60 | 3.32e-17 | 4.88e+0 | 3.72e-87 | 1.69e-109 |
| F10 | 100 | Mean | 2.28e-1 | 4.47e-4 | 1.08e+1 | 1.99e-3 | 3.31e-80 |
| F10 | 100 | STD | 2.28e+0 | 5.21e-4 | 2.61e+0 | 1.57e-2 | 2.91e-79 |
| F11 | 30 | Best | 6.35e-3 | 1.31e-2 | 1.15e+1 | 1.11e-7 | 4.56e-6 |
| F11 | 30 | Mean | 2.76e-2 | 5.00e-2 | 1.07e+3 | 3.57e-3 | 5.73e-5 |
| F11 | 30 | STD | 2.02e-2 | 2.95e-2 | 4.83e+3 | 1.89e-2 | 1.03e-4 |
| F11 | 50 | Best | 2.47e-3 | 1.22e-2 | 1.26e+1 | 7.74e-8 | 3.29e-6 |
| F11 | 50 | Mean | 2.34e-2 | 4.54e-2 | 1.10e+3 | 2.26e-3 | 3.79e-5 |
| F11 | 50 | STD | 1.81e-2 | 2.56e-2 | 3.36e+3 | 1.47e-2 | 4.79e-5 |
| F11 | 100 | Best | 3.42e-3 | 1.32e-2 | 7.66e+0 | 5.39e-8 | 2.17e-6 |
| F11 | 100 | Mean | 2.26e-2 | 4.46e-2 | 1.61e+3 | 9.76e-5 | 7.55e-5 |
| F11 | 100 | STD | 1.92e-2 | 2.36e-2 | 6.56e+3 | 7.08e-4 | 1.96e-4 |
| F12 | 30 | Best | 9.44e-2 | 3.15e-1 | 7.93e+2 | 1.79e-4 | 1.50e-4 |
| F12 | 30 | Mean | 5.49e-1 | 6.39e-1 | 2.14e+5 | 5.44e-1 | 3.32e-2 |
| F12 | 30 | STD | 3.20e-1 | 1.90e-1 | 2.87e+5 | 4.09e-1 | 4.58e-2 |
| F12 | 50 | Best | 1.81e-1 | 1.00e-1 | 6.48e+1 | 7.70e-4 | 5.05e-5 |
| F12 | 50 | Mean | 6.09e-1 | 6.13e-1 | 1.89e+5 | 6.14e-1 | 2.66e-2 |
| F12 | 50 | STD | 2.75e-1 | 2.44e-1 | 3.94e+5 | 4.19e-1 | 4.10e-2 |
| F12 | 100 | Best | 1.17e-1 | 1.02e-1 | 7.83e+1 | 1.35e-3 | 6.25e-5 |
| F12 | 100 | Mean | 4.87e-1 | 6.46e-1 | 3.87e+5 | 7.15e-1 | 3.72e-2 |
| F12 | 100 | STD | 2.78e-1 | 2.30e-1 | 4.63e+5 | 4.89e-1 | 6.20e-2 |
| F13 | 30 | Best | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 30 | Mean | 8.23e-5 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 30 | STD | 3.25e-4 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 50 | Best | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 50 | Mean | 5.65e-5 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 50 | STD | 3.11e-4 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 100 | Best | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 100 | Mean | 7.81e-5 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F13 | 100 | STD | 3.89e-4 | 0.00e+0 | 0.00e+0 | 0.00e+0 | 0.00e+0 |
| F14 | 30 | Best | 3.69e-1 | 8.22e-1 | 4.86e+2 | 8.97e-2 | 2.88e-4 |
| F14 | 30 | Mean | 9.42e-1 | 1.33e+0 | 1.03e+3 | 5.50e-1 | 8.37e-2 |
| F14 | 30 | STD | 4.24e-1 | 2.92e-1 | 3.21e+2 | 4.22e-1 | 9.28e-2 |
| F14 | 50 | Best | 2.55e-1 | 6.37e-1 | 3.73e+2 | 2.69e-4 | 4.52e-4 |
| F14 | 50 | Mean | 9.42e-1 | 1.23e+0 | 9.25e+2 | 4.83e-1 | 1.05e-1 |
| F14 | 50 | STD | 3.68e-1 | 2.13e-1 | 3.46e+2 | 2.10e-1 | 1.11e-1 |
| F14 | 100 | Best | 1.93e-1 | 8.13e-1 | 4.09e+2 | 1.40e-3 | 2.75e-4 |
| F14 | 100 | Mean | 8.70e-1 | 1.25e+0 | 9.66e+2 | 5.16e-1 | 7.45e-2 |
| F14 | 100 | STD | 3.82e-1 | 2.29e-1 | 3.04e+2 | 2.94e-1 | 7.85e-2 |
| Rank | 30 | w/t/l | 0/1/5 | 0/1/5 | 0/1/5 | 0/1/5 | 4/2/0 |
| Rank | 50 | w/t/l | 0/0/6 | 0/1/5 | 0/1/5 | 0/1/5 | 4/2/0 |
| Rank | 100 | w/t/l | 0/0/6 | 0/1/5 | 0/1/5 | 0/1/5 | 4/2/0 |
Table 6. Total performance of IDBO and other basic prevalent meta-heuristics algorithms.

| | WOA (w/t/l) | GWO (w/t/l) | PSO (w/t/l) | DBO (w/t/l) | IDBO (w/t/l) |
|---|---|---|---|---|---|
| D = 30 | 0/2/12 | 0/2/12 | 0/1/13 | 0/2/12 | 11/3/0 |
| D = 50 | 0/1/13 | 0/2/12 | 0/1/13 | 0/2/12 | 11/3/0 |
| D = 100 | 0/1/13 | 0/2/12 | 0/1/13 | 0/2/12 | 11/3/0 |
| Total | 0/4/38 | 0/6/36 | 0/3/39 | 0/6/36 | 33/9/0 |
| TP | 9.52% | 14.29% | 7.14% | 14.29% | 100.00% |
Table 7. p-values of the Wilcoxon signed-rank test.

| Function | WOA | GWO | PSO | DBO |
|---|---|---|---|---|
| F1 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 6.07e-11 |
| F2 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 4.20e-10 |
| F3 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 2.57e-7 |
| F4 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 1.61e-10 |
| F5 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 4.18e-9 |
| F6 | N/A | N/A | 1.21e-12 | N/A |
| F7 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 1.61e-10 |
| F8 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 2.61e-10 |
| F9 | N/A | 1.16e-12 | 1.21e-12 | 3.34e-1 |
| F10 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 3.69e-11 |
| F11 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 1.99e-2 |
| F12 | 3.69e-11 | 3.02e-11 | 3.02e-11 | 1.33e-10 |
| F13 | 8.15e-2 | N/A | N/A | N/A |
| F14 | 3.02e-11 | 3.02e-11 | 3.02e-11 | 1.09e-10 |
| +/=/− | 11/3/0 | 12/2/0 | 13/1/0 | 12/2/0 |
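A sketch of the significance test behind Table 7 (an assumed setup, not the authors' script): the results of the independent runs of IDBO on one function are paired with those of a rival algorithm, and a two-sided Wilcoxon signed-rank test is applied; p < 0.05 marks a significant difference, and when both algorithms reach the optimum in every run (e.g., most algorithms on F6 and F13) the test is undefined and the table reports N/A.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
idbo_runs = rng.random(30) * 1e-150  # illustrative stand-in fitness values
gwo_runs = rng.random(30) * 1e-27

# All 30 paired differences favor IDBO here, so p sits at the test's floor.
stat, p = wilcoxon(idbo_runs, gwo_runs, alternative="two-sided")
print(f"statistic={stat:.1f}, p={p:.2e}")
```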
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
