1. Introduction
Timber is a widely used material in the construction and furniture industries because of its numerous benefits, such as environmental sustainability, aesthetic appeal and ease of processing. However, its limited stability and durability hinder its application [1,2]. These limitations have prompted the development of various wood modification techniques, including chemical, physical and biological methods [
3]. Heat treatment is a prevalent technique that enhances wood properties by altering the wood's chemical, physical and structural characteristics through exposure to specific temperature and humidity conditions [4]. This treatment increases wood stability, durability and resistance to decay and hydrolysis while improving mechanical properties such as strength, stiffness and hardness [5,6]. Common heat treatment methods include vacuum, dry and moist treatments [7].
Several studies have demonstrated the improvement of timber properties via heat treatment. Korkut et al. [8] examined how thermal processing affects the surface roughness and mechanical behavior of red-bud maple. The results indicated that increasing treatment temperatures reduce density and moisture content but increase bending strength and surface roughness. Icel et al. [9] demonstrated that heat treatment significantly improves the physical properties, chemical composition and microstructure of spruce and pine, resulting in enhanced stability, durability and service life. Xue et al. [10] investigated how high-temperature heat treatment and impregnation modification techniques affect the physical and mechanical characteristics of aspen lumber and found significant improvements in mechanical strength and preservation.
Despite its effectiveness in improving the mechanical properties of wood, heat treatment has certain limitations. Boonstra et al. [11] reported the decomposition of natural wood components during heat treatment, resulting in reduced wood quality. Hill [12] noted that the efficacy of heat treatment is influenced by factors such as treatment time, temperature, humidity and wood species, making the process challenging to control and optimize. Goli et al. [1] investigated the impact of heat treatment on the physical and mechanical properties of birch plywood, revealing an increase in density and hardness but a decrease in moisture content and bending strength.
To address these limitations, researchers have explored the use of neural network models to predict the mechanical properties of wood. Kohonen [13] introduced the self-organizing map (SOM) as one of the earliest prototypes for applying neural networks to nonlinear prediction problems. C.G.O. [14] highlighted the potential of neural networks to model the complex nonlinear relationships needed to predict mechanical properties such as strength and stiffness. Adamopoulos et al. [15] investigated the relationship between the fiber properties of recycled pulp and the mechanical properties of corrugated base paper; multiple linear regression and artificial neural network models were used to predict the tensile and compressive strength of corrugated base paper with different fiber sources, and the results showed that the artificial neural network model was more accurate and stable than the multiple linear regression model. You et al. [16] demonstrated that an artificial neural network (ANN) model based on nondestructive vibration testing can predict the modulus of elasticity (MOE) of bamboo–wood composites with high accuracy.
Although employing back-propagation (BP) neural network models to forecast the physical characteristics of heat-treated lumber reduces experimental costs, it presents certain challenges, such as susceptibility to local minima during learning and poor generalization, resulting in inaccurate predictions on new data. To address these limitations, some researchers have explored combining BP neural networks with meta-heuristic algorithms to improve prediction accuracy and model robustness. Chen et al. [17] integrated the Aquila Optimization Algorithm (AOA) [18] with BP neural networks to accurately predict the equilibrium moisture content and weight ratio of heat-treated timber, and Wang et al. [19] utilized the Carnivorous Plant Algorithm (CPA) [20] to improve BP neural networks for predicting the bonding strength and surface roughness of heat-treated wood. Their results indicated that both the AOA-BP and CPA-BP models outperform traditional BP neural network models.
Meta-heuristic algorithms can effectively avoid local optima and improve prediction accuracy when combined with BP neural networks. However, local optima may still occur due to inappropriate algorithm parameters or unsuitable algorithm combinations, resulting in poor model performance. To address this issue, some researchers have suggested improving the original meta-heuristic algorithms before applying them to optimize BP neural networks, aiming to increase the model's generalization ability and reliability. For example, Li et al. [21] enhanced the sparrow search algorithm (SSA) [22] with tent chaotic mapping and applied it to optimize BP neural networks for predicting the mechanical characteristics of heat-treated timber, finding that the resulting TSSA-BP model performs well. Ma et al. [23] proposed a nonlinear adaptive grouping strategy for the Grey Wolf Optimizer (GWO) [24] and used it to optimize BP neural networks for forecasting the mechanical performance of timber, demonstrating that the proposed IGWO-BP model achieves much higher prediction accuracy than conventional models.
Similarly, the original Dung Beetle Optimizer (DBO) [25] has drawbacks in avoiding local optima and in achieving the algorithmic accuracy required for practical engineering applications. To address these flaws, this article proposes an Improved Dung Beetle Optimizer (IDBO) for optimizing BP neural networks. The IDBO algorithm incorporates three main improvements: first, piece-wise linear chaotic mapping (PWLCM) is used to initialize the dung beetle population and increase its diversity; second, an adaptive parameter adjustment strategy is introduced to enhance the early-stage global search ability and improve search efficiency; and third, a dimensional learning-enhanced foraging (DLF) strategy is incorporated to balance local and global search capabilities.
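As a minimal sketch of the first improvement, PWLCM-based population initialization can be written as follows. The control parameter p = 0.4, the seed value and the function names are illustrative assumptions, not values taken from this article:

```python
def pwlcm(x, p=0.4):
    """One step of the piece-wise linear chaotic map (PWLCM) on [0, 1]."""
    if x >= 0.5:
        x = 1.0 - x  # the map is symmetric about 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def init_population(n, dim, lb, ub, seed=0.7):
    """Generate n chaotic individuals inside the search bounds [lb, ub]."""
    x = seed
    population = []
    for _ in range(n):
        individual = []
        for _ in range(dim):
            x = pwlcm(x)
            individual.append(lb + x * (ub - lb))  # map [0, 1] chaos onto the bounds
        population.append(individual)
    return population

population = init_population(30, 10, -100.0, 100.0)
```

Compared with uniform random initialization, the chaotic sequence covers the search space more evenly, which is the diversity benefit the first improvement aims at.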
The rest of this article is structured as follows:
Section 2 introduces the basic theory of BP and DBO;
Section 3 presents the IDBO algorithm model;
Section 4 verifies the performance of the IDBO algorithm using benchmark functions;
Section 5 evaluates the reliability of the suggested IDBO model for wood mechanical property predictions; and
Section 6 concludes.
4. Evaluation of the Effectiveness of the Proposed IDBO Algorithm
In this section, the efficacy of the proposed IDBO algorithm is assessed through a series of experiments on various benchmark functions.
4.1. Benchmark Functions
To objectively appraise the effectiveness of various meta-heuristic algorithms and to validate the usefulness of the IDBO improvement strategy, 14 standard test functions were selected from the literature [32], and functions from the CEC2017 test suite were utilized to evaluate the capability of the IDBO algorithm. Functions F1–F8 are unimodal with a single global optimum and were employed to gauge the convergence speed and accuracy of the algorithms. Functions F9–F14 are multimodal with a single global optimum and several local optima and were used to assess the global search and exploitation capabilities of the algorithms. The details of these benchmark functions, including their expressions, dimensions, search ranges and theoretical optima, are given in Table 1 and Table 2. To provide a more intuitive understanding of these benchmark functions and their optimal values, Figure 5 and Figure 6 depict 3D views (30 dimensions) of some of them.
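For concreteness, two representative members of such suites, the unimodal Sphere function and the multimodal Rastrigin function, can be written as below. Their exact numbering within F1–F14 here is an assumption made only for illustration:

```python
import math

def sphere(x):
    """Unimodal: a single global optimum, f(0, ..., 0) = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: global optimum f(0, ..., 0) = 0 plus many local optima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(sphere([0.0] * 30), rastrigin([0.0] * 30))  # both evaluate to 0.0 at the optimum
```

Functions of the first kind test how quickly an algorithm refines a solution; the cosine ripples of the second kind test whether it can escape the many surrounding local optima.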
4.2. Contrast Algorithm and Experimental Parameter Settings
To fully validate the reliability of the presented IDBO model, its results were compared with those of four widely used basic meta-heuristic algorithms: PSO (Eberhart et al., 1995) [33], GWO (Mirjalili et al., 2014) [24], WOA (Mirjalili et al., 2016) [34] and DBO (Xue et al., 2022) [25]. As indicated in Table 3, the parameter settings recommended in the respective original works were adopted for these comparison algorithms.
To accurately evaluate the efficacy of the IDBO algorithm and its comparison algorithms, the population size was uniformly set to N = 30, and the maximum number of iterations was fixed at 500. Each algorithm was executed independently 30 times. The dimension D was set to 30, 50 and 100 to examine the search effectiveness of the proposed approach across different dimensions. To minimize the influence of randomness in the simulation results, the optimal values, means and standard deviations of the optimization results (fitness) were recorded to appraise the exploration performance, accuracy and reliability of the algorithms.
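The evaluation protocol above (30 independent runs per algorithm, recording the best, mean and standard deviation of the final fitness) can be sketched as follows. Here `run_algorithm` is a placeholder returning a dummy fitness value, not code from this article:

```python
import random
import statistics

def run_algorithm(seed):
    """Placeholder for one optimizer run; returns the final best fitness."""
    rng = random.Random(seed)
    return rng.uniform(0.0, 1e-3)  # dummy fitness, for illustration only

def evaluate(runs=30):
    """Repeat the run `runs` times and summarize, as in Section 4.2."""
    fitness = [run_algorithm(seed) for seed in range(runs)]
    return {
        "best": min(fitness),              # exploration performance
        "mean": statistics.mean(fitness),  # accuracy
        "std": statistics.stdev(fitness),  # reliability / stability
    }

stats = evaluate()
```

Seeding each run separately keeps the 30 repetitions independent while making the whole experiment reproducible.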
The experiments were implemented on a Windows 11 operating system with an 11th Gen Intel® Core™ i7-11700 processor at 2.5 GHz and 16 GB RAM, using MATLAB 2019a for the simulations. The optimal fitness, mean fitness and standard deviation of fitness for the IDBO algorithm and its comparison algorithms are presented in Table 4, where bold values indicate the best results. Additionally, the bottom three lines of each table show the 'w/t/l' counts for the wins (w), ties (t) and losses (l) of each algorithm.
4.3. Evaluation of Exploration and Exploitation
The unimodal functions are well suited to verifying the exploitation capability of algorithms in refining optimal solutions. Multimodal functions with numerous local optima can assess the ability of IDBO to evade local optima during exploration.
As indicated in Table 4, the IDBO algorithm demonstrates significant improvement on seven of the eight unimodal test functions (all except F6) across all dimensions. Table 5 reveals that the IDBO algorithm outperforms the other algorithms in all three dimensions on five of the six multimodal test functions (all except F13), achieving the best optimal values, means and standard deviations. Thus, it can be inferred that the IDBO algorithm is more effective than DBO in locating optimal solutions, which proves that the modification strategy presented in this article feasibly enhances the original algorithm's search ability.
4.4. Evaluation of Convergence Curves
To more intuitively observe and compare the convergence rate, accuracy and ability of each algorithm to evade local optima, the convergence curves for IDBO and the four basic meta-heuristic algorithms (30 dimensions) are presented in Figure 7. The horizontal axis represents the number of iterations, whereas the vertical axis denotes the order of magnitude of the fitness values, expressed as logarithms to base 10 to better illustrate the convergence trends.
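A minimal sketch of how such log-scale convergence curves are derived from best-so-far fitness histories (the decay rates and floor values below are synthetic, invented only to illustrate the transformation):

```python
import math

iterations = range(1, 501)
# Synthetic best-so-far fitness histories for two hypothetical algorithms; the
# additive floor terms mimic stagnation at two different accuracy levels.
curve_a = [10.0 ** (2 - 0.1 * t) + 1e-30 for t in iterations]
curve_b = [10.0 ** (2 - 0.1 * t) + 1e-10 for t in iterations]

# Take log10(best fitness) per iteration, as on the vertical axis of Figure 7.
log_a = [math.log10(f) for f in curve_a]
log_b = [math.log10(f) for f in curve_b]
```

On the log scale, a steady exponential improvement appears as a straight descending line, while stagnation at a local optimum appears as a horizontal plateau, which is what makes the comparison in Figure 7 readable.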
As shown in Figure 7, IDBO exhibits the fastest convergence and the highest accuracy on functions F1–F5, F7–F10 and F14, with a near-linear decrease toward the theoretical optima and no stagnation. DBO performs second only to IDBO on these functions and outperforms the other algorithms. On function F6, DBO, GWO and WOA converge to the optimal values with minimal stagnation, although at a slower rate than IDBO, while PSO stagnates at a local optimum. On functions F11–F13, IDBO converges rapidly, reaching its inflection point quickly and achieving optimal accuracy. This demonstrates that the improvement method recommended in this paper effectively enhances the convergence rate and accuracy of the original algorithm.
4.5. Local Optimal Circumvention Evaluation
As previously mentioned, multimodal functions can be used to examine the exploration behavior of algorithms. As indicated in Table 5, IDBO achieves the best optimal fitness values across the three dimensions of 30, 50 and 100 and outperforms the other algorithms. This demonstrates that IDBO effectively balances local and global search to evade local optima and that the improvement approach suggested in this article dramatically augments the exploratory potential of the original model.
4.6. High-Dimensional Robustness Evaluation
General algorithms may lack robustness and stability when solving complex functions in high dimensions, and their ability to find optimal solutions may deteriorate abruptly. To assess the performance of IDBO in high dimensions, the results for IDBO and the other algorithms were compared in 50 and 100 dimensions. As presented in Table 4, for the unimodal functions other than F3 and F6, PSO, WOA and GWO all exhibit decreased convergence accuracy in higher dimensions, whereas DBO and IDBO show decreased accuracy in 50 dimensions but little further change in 100 dimensions, indicating stability for both DBO and IDBO at higher dimensions. For function F3, the convergence accuracy of IDBO is 18 orders of magnitude higher than that of DBO in 50 dimensions and 29 orders of magnitude higher in 100 dimensions, indicating that DBO performs slightly worse at higher dimensions.
According to Table 5, for the high-dimensional multimodal function F9, the convergence accuracy of WOA and DBO deviates from the theoretical optimum as dimensionality increases, and only IDBO consistently converges to the theoretical optimum with a mean and standard deviation of zero, indicating stable performance on high-dimensional multimodal functions. For the four test functions excluding F9 and F13, IDBO's performance at high dimensions is comparable to that at 30 dimensions, achieving the best mean and standard deviation values. Overall, IDBO performs strongly when finding optimal solutions for high-dimensional optimization problems, demonstrating its stability and robustness.
Table 6 summarizes the performance outcomes for IDBO and the other algorithms shown in Table 4 and Table 5. The total performance (TP) metric for each algorithm is calculated using Equation (14), in which each algorithm has Q trials and M failed tests.
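Equation (14) itself is not reproduced in this excerpt; a form consistent with the description of Q trials and M failed tests, assumed here purely for illustration, is TP = (Q − M) / Q × 100%:

```python
def total_performance(q, m):
    """Assumed form of Equation (14): the fraction of non-failed trials, as a
    percentage. q = number of trials, m = number of failed tests."""
    return (q - m) / q * 100.0

tp = total_performance(42, 3)  # e.g., 3 failures out of 42 hypothetical trials
```

Under this assumed form, a higher TP simply means the algorithm succeeded on a larger share of its trials.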
4.7. Statistical Analysis
To further evaluate the validity of the suggested enhancement strategies, this paper used the Wilcoxon signed-rank test to compare IDBO with the four meta-heuristic algorithms and applied the Friedman test (Equation (15)) to rank each algorithm. The population size was set to N = 30, each algorithm was run 30 independent times on each test function with dimension D = 30, and the Wilcoxon signed-rank test with significance level α = 0.05 was applied to IDBO and the other algorithms on the 14 test functions. The p-values are presented in Table 7 along with the statistics '+', '−' and '='. '+' indicates that IDBO clearly outperforms the comparison algorithm, '−' indicates inferiority, and '=' denotes no significant difference. 'N/A' represents 'not applicable', i.e., cases where both algorithms reach an optimization result of 0, indicating comparable performance. Bold text indicates insignificant or comparable differences.
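As an illustrative sketch of these two tests, the following uses SciPy (a tool choice of this example, not of the article) on dummy per-run fitness samples for three hypothetical algorithms:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
# Dummy final-fitness samples: 30 independent runs per hypothetical algorithm.
idbo = rng.normal(0.000, 0.001, 30)
dbo = rng.normal(0.005, 0.001, 30)
pso = rng.normal(0.010, 0.001, 30)

# Paired Wilcoxon signed-rank test between IDBO and one competitor.
stat, p = wilcoxon(idbo, dbo)
mark = "+" if p < 0.05 and idbo.mean() < dbo.mean() else "="  # '+' = IDBO better

# The Friedman test jointly ranks all algorithms across the paired runs.
chi2, p_friedman = friedmanchisquare(idbo, dbo, pso)
```

The Wilcoxon test is pairwise and yields the '+'/'−'/'=' marks per competitor, while the Friedman test produces a single joint ranking of all algorithms, which is why both are reported.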
Table 7 shows that WOA, GWO and DBO have search performance comparable to IDBO on F6, while PSO differs significantly from IDBO. On F9, WOA and IDBO have comparable search performance, and DBO does not differ significantly from IDBO. On F13, GWO, PSO and DBO exhibit search behavior equivalent to IDBO, and WOA does not differ significantly from IDBO. GWO, PSO and DBO differ significantly from IDBO on all functions except F6, F9 and F13.
Table A2 in Appendix A shows the results of the Friedman test. The IDBO algorithm has a lower average ranking than the other algorithms in all three dimensions, indicating its superior performance. Moreover, Table A2 reveals that the IDBO algorithm's mean rank decreases relative to that of DBO as dimensionality increases. This shows that IDBO is more robust in higher dimensions than DBO and further verifies the effectiveness of our optimization strategy.
In Equation (15), n is the number of test cases, k is the number of algorithms, and R_j is the mean rank of the j-th algorithm.
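Equation (15) is not reproduced in this excerpt; with n as the number of test cases, k the number of algorithms and R_j the mean rank of algorithm j, the standard form of the Friedman statistic (assumed here to be the form the article uses) is:

```latex
\chi_F^2 = \frac{12\,n}{k(k+1)} \left( \sum_{j=1}^{k} R_j^2 - \frac{k(k+1)^2}{4} \right)
```

Larger values of this statistic indicate larger disagreement among the algorithms' mean ranks, and hence stronger evidence that their performance differs.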
Based on a comprehensive analysis of the benchmark function results, convergence curves, Wilcoxon signed-rank test results and Friedman test results, the IDBO algorithm shows significant improvements in both local and global exploration abilities. It exceeds the original DBO algorithm and the other compared optimization algorithms (PSO, GWO and WOA) in convergence speed, accuracy and stability, verifying the performance of the optimization scheme recommended in this paper.