1. Introduction
Concomitant with the growing intricacy of modern engineering optimization problems, traditional mathematical optimization methods face challenges such as low computational efficiency and premature convergence to suboptimal solutions when dealing with complex problems featuring high dimensionality, non-linearity, and multimodality. Swarm intelligence optimization algorithms have become important tools for solving non-trivial mathematical programming problems due to their self-organization, parallelism, and robustness; examples include Grey Wolf Optimization (GWO) [1], the Whale Optimization Algorithm (WOA) [2], the Butterfly Optimization Algorithm (BOA) [3], the Sparrow Search Algorithm (SSA) [4], Dolphin Echolocation Optimization (DEO) [5], the Dragonfly Algorithm (DA) [6], the Osprey Optimization Algorithm (OOA) [7], the Black Widow Optimization Algorithm (BWO) [8], the Flock Optimization Algorithm (FLO) [9], the Griffon Vultures Optimization Algorithm (GVOA) [10], and the Hippopotamus Optimization Algorithm (HOA) [11].
Existing optimization algorithms have limitations in aspects such as the exploration–exploitation balance, adaptation to complex scenarios, and universality. The Mantis Shrimp Optimization Algorithm (MShOA) [12], a new bionic intelligent algorithm, simulates the unique predatory behavior and visual perception mechanism of mantis shrimps. By exploiting the mantis shrimp's "multi-perspective perception", it effectively balances exploration and exploitation in the high-dimensional spaces of complex engineering problems, thus demonstrating good optimization performance. However, the standard MShOA still has limitations, such as insufficient convergence accuracy and proneness to premature convergence, which urgently need to be addressed to enhance its optimization performance.
In the last few years, scholars have developed numerous improvement strategies to enhance the performance of intelligent optimization algorithms. Some scholars have improved algorithms by enhancing population diversity. For example, Ref. [13] replaces random initialization with ICMIC chaotic sequences, Ref. [14] dynamically adjusts search strategies based on population entropy values, and Ref. [15] uses an LSTM to predict the trend in population diversity changes, optimizing the sampling diversity and spatial dispersion of initial solutions.
Some researchers expand the algorithm's search range to enhance global exploration capabilities. Ref. [16] realizes a search mode combining long and short jumps through a heavy-tailed distribution, Ref. [17] dynamically adjusts individual perception ranges, Ref. [18] uses chaotic sequences to enhance randomness, and Ref. [19] generates opposite solutions of the original solutions to expand the search range.
Others propose strategies to escape local optima. For instance, Ref. [20] resets some individuals when stagnation is detected; Ref. [21] achieves efficient global search by adaptively adjusting the mean, step size, and covariance matrix; Ref. [22] uses a periodic mutation strategy to perturb individual positions periodically; and the Improved Carnivorous Plant Algorithm (I_CPA) [23] improves the exploitation phase by introducing a teaching-factor strategy, helping the algorithm escape local optima.
However, the question of how to organically integrate these improvement strategies into the MShOA framework to form a synergistic enhancement effect remains a topic worthy of in-depth study.
This study presents an improved Mantis Shrimp Optimization Algorithm (ICPMShOA) incorporating ICMIC chaotic improvement [13], centroid opposition-based learning [19], and periodic mutation [22]. The main innovations are as follows: the ICMIC is used to perturb the population, enhancing population diversity and improving the algorithm's capacity to overcome local solution traps; the mutation period and intensity are dynamically adjusted according to the convergence state, optimizing the trade-off between local refinement and global search and effectively preventing premature convergence; and the population centroid is calculated in each iteration and opposite solutions are generated from it, augmenting the algorithm's search-space coverage for more effective global exploration.
The hybrid strategy collaboration mechanism achieves the organic integration and dynamic balance of the three improvement strategies to form a synergistic optimization effect.
The rest of this manuscript is organized as follows: Section 2 introduces the principle of the standard MShOA in detail; Section 3 expounds on the proposed improvement strategies; Section 4 presents the experimental design and results analysis; Section 5 carries out engineering application tests; and Section 6 summarizes the full text and lays out future research directions.
2. Mantis Shrimp Optimization Algorithm
The MShOA represents an innovative metaheuristic approach, drawing biological inspiration from the adaptive survival mechanisms of mantis shrimps in marine ecosystems. This algorithm aims to tackle global optimization problems and particularly excels in handling complex engineering and practical application scenarios. Specifically, it encompasses the following steps:
2.1. Initialization
The algorithm first randomly generates an initial ensemble of individuals, where each individual corresponds to a point in a multi-dimensional solution space.
Subsequently, a polarization type indicator (PTI) vector is stochastically initialized from the detected polarized light. This process mimics the mantis shrimp’s visual system by calculating the polarization angles of the left and right eyes and determining the polarization type based on these angles. The PTI values are updated according to the polarization type to trigger different behavioral strategies.
Light Types Detected by Mantis Shrimp and Corresponding Behaviors:
- Horizontal Linear Polarization: Digging, defense, or evasion.
- Vertical Linear Polarization: Foraging.
- Circular Polarization: Attack.
2.2. Strategy 1: Foraging
Based on the Brownian motion model, the movement of particles is described using the generalized Langevin equation. The position update formula is as follows:

$$X_i^{t+1} = X_i^{t} + v\left(X_{\text{best}}^{t} - X_i^{t}\right) + \sqrt{2D}\,F \quad (2)$$

Here, $X_i^{t+1}$ denotes the updated position of the $i$-th individual in the $t$-th generation, $X_{\text{best}}^{t}$ represents the current optimal position, $v$ is the velocity, $D$ is the diffusion constant, and $F$ is the random force.
2.3. Strategy 2: Attack
Based on the parametric equation of circular motion, the position update formula is as follows:

$$X_i^{t+1} = X_{\text{best}}^{t} + \cos\theta \cdot \left(X_{\text{best}}^{t} - X_i^{t}\right) \quad (3)$$

Here, $\theta$ is a randomly generated attack angle.
2.4. Strategy 3: Defense or Shelter
Defense or shelter is selected based on the size and threat level of the opponent. The position update formula is as follows:

$$X_i^{t+1} = X_i^{t} + s \cdot \left(X_{\text{best}}^{t} - X_i^{t}\right) \quad (4)$$

Here, $s$ is a randomly generated scaling factor.
This section mainly introduces the initialization process and three core behavioral strategies of the Mantis Shrimp Optimization Algorithm (MShOA) and generally elaborates on the initial settings of MShOA as well as the key optimization mechanisms that simulate the survival behaviors of mantis shrimps.
3. ICPMShOA
The MShOA uses traditional random initialization for its population, which can induce a non-uniform spatial distribution of candidate solutions within the solution space: some regions become overly dense, while others remain sparse. This makes the algorithm prone to local optima during the exploitation phase, limits its global exploration capabilities, and makes it difficult to cover potentially optimal regions of the solution space.
3.1. ICMIC
To enhance the diversity and ergodicity of the algorithm's initial population and avoid falling into local optima, this section introduces the ICMIC (iterative chaotic map with infinite collapses) as a basic component of the improvement strategy. The ICMIC is a one-dimensional discrete dynamical system with strong chaotic characteristics, widely used in optimization algorithm initialization, image encryption, and random number generation. Its mathematical expression is as follows:

$$x_{n+1} = \sin\!\left(\frac{a}{x_n}\right) \quad (5)$$

where $a$ is a control parameter determining the intensity of chaotic behavior, and $x_n$ is the value at the $n$-th iteration. The initial value must satisfy $x_0 \neq 0$ (to avoid a zero denominator).
Figure 1 depicts the bifurcation diagram of the chaotic map. For small values of $a$, the points in the left region are sparse and even form continuous curves; at this stage, the system is period-dominated, with no chaotic behavior. As $a$ increases, period-doubling takes place, representing a transition to chaos, after which periodic behavior briefly reappears. As $a$ grows further, the system enters a strong chaotic state, and the chaotic characteristics eventually reach their peak. At this point, population initialization can uniformly cover the solution space, greatly enhance the algorithm's global exploration capability, and avoid premature convergence caused by "initial distribution bias". Hence, this strongly chaotic range of $a$ is the most crucial parameter interval for optimization algorithms.
In the Mantis Shrimp Optimization Algorithm, the ICMIC is introduced as follows: before the iterations start, a chaotic sequence is generated using the chaotic map, and at the beginning of each iteration, the random parameter in Equation (3) is perturbed with the corresponding chaotic value. Compared with drawing this value from a uniform random distribution, this method makes it easier to escape local optima.
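For illustration, the following minimal Python sketch (not the authors' MATLAB implementation; the function names, the parameter choices a = 2.0 and x0 = 0.7, and the bound-mapping step are our own assumptions) generates an ICMIC chaotic sequence and maps it into a search interval [lb, ub]:

```python
import numpy as np

def icmic_sequence(length, a=2.0, x0=0.7):
    """Generate an ICMIC chaotic sequence x_{n+1} = sin(a / x_n).

    x0 must be nonzero (to avoid a zero denominator), and larger `a`
    yields stronger chaotic behavior; both values here are illustrative.
    """
    x = np.empty(length)
    x[0] = x0
    for n in range(length - 1):
        x[n + 1] = np.sin(a / x[n])
    return x

def chaotic_population(pop, dim, lb, ub, a=2.0):
    """Map ICMIC values from [-1, 1] into [lb, ub] to build an initial population."""
    seq = icmic_sequence(pop * dim, a=a).reshape(pop, dim)
    return lb + (seq + 1.0) / 2.0 * (ub - lb)

# Example: 30 agents in a 10-dimensional search space over [-100, 100]
X0 = chaotic_population(pop=30, dim=10, lb=-100.0, ub=100.0)
```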
3.2. Periodic Mutation
To address the problem of convergence stagnation that tends to occur in the standard MShOA, this section designs a periodic mutation strategy, which balances the local exploitation and global exploration capabilities of the algorithm by dynamically adjusting the timing and intensity of mutation.
The mutation is triggered periodically, and the mutated position is generated as follows:

$$x_i' = x_i \cdot \left(1 + \lambda\,(2r - 1)\right), \quad r \sim U(0,1) \quad (6)$$

$x_i$: Current position of the individual.
$\lambda$: Mutation amplitude control parameter.
$(2r - 1)$: Symmetrizes the perturbation range to $[-1, 1]$ to avoid directional bias.
Figure 2 compares the effect of different settings of the parameter $\lambda$ on position changes. It is easy to see from Figure 2 that when $\lambda = 0$, the mutation completely fails; for small $\lambda$, only weak-magnitude mutation emerges; for moderate $\lambda$, medium-magnitude mutation is exhibited; and for large $\lambda$, the position of the individual undergoes drastic jumps, showing strong-magnitude mutation. In the latter case, the influence of the mutation term is amplified, greatly enhancing the global exploration ability.
A periodic mutation strategy is introduced into the MShOA. Within each iteration, after the individual position updates and before the loop ends, periodic mutation is applied to each individual position. The symmetric perturbation ensures equal probability of positive and negative offsets, avoiding bias toward either local or global search. The mutation intensity is proportional to the magnitude of the current solution, which both preserves the potential structure of high-quality solutions and makes significant adjustments to inferior ones. The mutation interval adaptively decreases with the iterations: in the exploration phase, mutation expands the search range to discover new potential regions; in the exploitation phase, it maintains a refined local search to achieve precise convergence. As a result, the global search capability and convergence performance are significantly enhanced.
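A minimal Python sketch of this strategy is given below; the multiplicative form follows Equation (6) as reconstructed above, while the linearly shrinking interval schedule (parameter T0) is our own illustrative assumption for the adaptive period:

```python
import numpy as np

def periodic_mutation(X, t, max_iter, lam=0.5, T0=20):
    """Apply periodic mutation x' = x * (1 + lam * (2r - 1)).

    X        : (pop, dim) population matrix
    t        : current iteration index
    max_iter : total number of iterations
    lam      : mutation amplitude (lam = 0 disables mutation)
    T0       : initial mutation interval; it shrinks with t
               (an assumed schedule, not the paper's exact rule).
    """
    # The interval decreases linearly from T0 to 1 as iterations progress.
    T = max(1, int(T0 * (1 - t / max_iter)))
    if t % T != 0:
        return X  # not a mutation generation
    r = np.random.rand(*X.shape)               # r ~ U(0, 1)
    return X * (1.0 + lam * (2.0 * r - 1.0))   # perturbation proportional to |x|
```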
3.3. Centroid Opposition-Based Learning
To accelerate algorithm convergence and expand the search range of solutions, this section introduces the centroid opposition-based learning (COBL) strategy, which generates opposite solutions using population centroid information to enhance the coverage of the solution space.
COBL is an intelligent optimization strategy that leverages population distribution information. By dynamically calculating the geometric center (centroid) of the population to generate symmetric solutions, this modification substantially improves the algorithm’s ability to explore the global search space and convergence efficiency. Its mathematical definition is as follows:
Population Centroid Calculation:

$$M = \frac{1}{pop}\sum_{i=1}^{pop} X_i \quad (7)$$

Opposite Solution Generation:

$$\breve{X}_i = k\left(2M - X_i\right) \quad (8)$$

Here, $k$ is a contraction factor whose value is randomly selected within a preset interval. During the iteration process, a specific dimension is chosen for mutation. The mutated result is then compared with its counterpart from the preceding generation, and the superior solution is preserved.
Centroid opposition-based learning (COBL) is introduced into the MShOA. Before the end of each iteration, the population centroid in the search space is calculated via Equation (7), and then new solutions are generated in symmetric regions using Equation (8). In the exploration phase, opposite solutions guide the population to diffuse into unexplored regions. In the local refinement stage, the current population aggregates near the centroid for local search. This mechanism can adapt to changes in population distribution, maintain population diversity, and mitigate the problem of premature convergence to local optima. It is particularly suitable for multimodal optimization problems, as it can effectively explore potential optimal regions far from the current population and dynamically balance the exploitation and exploration phases.
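The following Python sketch illustrates the COBL step with greedy replacement against the parents' fitness; the contraction interval (0, 1] is an assumption on our part, since the exact range is not reproduced here:

```python
import numpy as np

def centroid_opposition(X, fitness, objective, lb, ub):
    """Centroid opposition-based learning with greedy replacement.

    X         : (pop, dim) current population
    fitness   : (pop,) NumPy array of fitness values of X
    objective : callable mapping a (dim,) vector to a scalar (minimization)
    """
    M = X.mean(axis=0)                       # population centroid, Equation (7)
    k = np.random.rand()                     # contraction factor (assumed in (0, 1])
    X_opp = k * (2.0 * M - X)                # opposite solutions, Equation (8)
    X_opp = np.clip(X_opp, lb, ub)           # keep opposite solutions inside the bounds
    f_opp = np.apply_along_axis(objective, 1, X_opp)
    better = f_opp < fitness                 # keep whichever solution is superior
    X[better] = X_opp[better]
    fitness[better] = f_opp[better]
    return X, fitness
```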
The proposed integration of three strategies—ICMIC chaotic initialization, periodic mutation, and COBL—significantly enhances the MShOA’s global search capability, convergence speed, and robustness. ICMIC chaotic initialization provides population diversity, while periodic mutation maintains vitality during iterations. The combination of ICMIC-generated chaotic initial solutions and centroid opposition solutions ensures comprehensive coverage of the search space, reducing blind spots. Periodic mutation helps the algorithm escape local optima, and centroid opposition-based learning accelerates convergence toward potential optimal regions. The synergy of the three strategies forms a closed-loop optimization mechanism: global search (enabled by ICMIC), local optimization (driven by centroid opposition), and stagnation escape (achieved via periodic mutation).
3.4. Ablation Study of ICPMShOA
To investigate the role of each newly added strategy, a comparative ablation experiment was designed. On the MATLAB (R2023a) platform, tests were conducted on the original MShOA and three of its variants, namely, ICPMShOA1 (with the ICMIC removed), ICPMShOA2 (with periodic mutation removed), and ICPMShOA3 (with COBL removed), using the CEC2022 [24] standard test suite. The experiment was configured with a population size of 30, a maximum iteration count of 300, and 30 independent executions of each test function.
The CEC2022 test suite contains 12 test functions. F1 to F5 are basic functions: F1 is a unimodal function, testing the algorithm's convergence speed and accuracy, while F2 to F5 are multimodal functions with multiple local optima, testing the algorithm's global exploration capability. F6 to F8 are hybrid functions, in which the variables are randomly divided into sub-components and each component adopts a different basic function; hybrid functions test the algorithm's coordinated optimization capability across multiple types of sub-problems. F9 to F12 are composition functions: they integrate multiple basic functions through weights and bias values, requiring the algorithm to balance the priorities of "local refinement" and "global comparison" as well as solution diversity and convergence.
In Figure 3a, the radar chart of the ICPMShOA encloses the smallest area, ranking first on eight functions and second on two; the radar-chart areas of the other three variants are significantly larger, fully demonstrating the performance-enhancing effects of the three new strategies. Notably, in Figure 3b, the ICPMShOA achieved the lowest average fitness value (1.17), exhibiting the best comprehensive test performance. From the ranking results, ICPMShOA1 ranked second, ICPMShOA2 fourth, and ICPMShOA3 third, indicating that the periodic mutation made the most prominent contribution to the algorithm's improvement.
3.5. Flowchart of ICPMShOA
To intuitively present the complete execution process of the improved ICPMShOA and clearly show the logical connections between its core steps, this section provides a detailed flowchart, as shown in Figure 4.
3.6. Pseudocode of ICPMShOA
To accurately describe the implementation logic and key operations of ICPMShOA, facilitating subsequent reproduction and verification, this section provides the pseudocode for Algorithm 1.
Algorithm 1. Pseudocode of ICPMShOA
1: Initialization:
2: Set key operational parameters, including the search agent count, population size (pop), and Max_iter.
3: Initialize the candidate solutions with stochastic sampling.
4: Assign PTI labels to each individual.
5: while i < Max_iter do
6:   Generate the ICMIC chaotic sequence according to Equation (5) and improve the iterative parameters.
7:   Strategy Selection:
8:   if PTI_i == 1
9:     foraging (Equation (2)).
10:  else if PTI_i == 2
11:    attack (Equation (3)).
12:  else if PTI_i == 3
13:    defense or shelter (Equation (4)).
14:  end if
15:  Introduce periodic mutation according to Equation (6) to update the position of the solution.
16:  Update the polarization type indicator (PTI) vector.
17:  Update the population.
18:  Calculate the fitness values of the new population.
19:  Update the fitness and the best solution found.
20:  i = i + 1.
21:  Introduce the centroid opposition-based learning strategy according to Equation (8) to update the positions of the solutions.
22: end while
23: Present the optimal solution obtained.
24: end procedure
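To make the control flow of Algorithm 1 concrete, the Python sketch below assembles the helper functions from the earlier sketches into a minimal driver. The three PTI-indexed strategy updates are simplified placeholder moves toward the best solution, not the paper's Equations (2)–(4), and the chaotic coefficient usage is likewise an assumption:

```python
import numpy as np

def icpmshoa(objective, dim, lb, ub, pop=30, max_iter=300):
    """Skeleton of ICPMShOA; relies on chaotic_population, icmic_sequence,
    periodic_mutation, and centroid_opposition sketched above."""
    X = chaotic_population(pop, dim, lb, ub)            # ICMIC initialization
    fit = np.apply_along_axis(objective, 1, X)
    best = X[fit.argmin()].copy()
    pti = np.random.randint(1, 4, size=pop)             # polarization type indicator
    chaos = icmic_sequence(max_iter)                    # chaotic iterative parameters

    for t in range(max_iter):
        c = abs(chaos[t])                               # chaotic coefficient for this generation
        for i in range(pop):
            step = c * np.random.rand(dim) * (best - X[i])
            if pti[i] == 1:        # foraging (placeholder for Equation (2))
                X[i] = X[i] + step
            elif pti[i] == 2:      # attack (placeholder for Equation (3))
                X[i] = best + np.cos(2 * np.pi * np.random.rand()) * (best - X[i])
            else:                  # defense or shelter (placeholder for Equation (4))
                X[i] = X[i] + np.random.rand() * step
        X = periodic_mutation(X, t, max_iter)           # Equation (6)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(objective, 1, X)
        pti = np.random.randint(1, 4, size=pop)         # refresh PTI labels
        X, fit = centroid_opposition(X, fit, objective, lb, ub)  # Equations (7)-(8)
        if fit.min() < objective(best):
            best = X[fit.argmin()].copy()
    return best, objective(best)

# Example usage on the sphere function
best_x, best_f = icpmshoa(lambda v: float(np.sum(v**2)), dim=10, lb=-100.0, ub=100.0)
```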
3.7. Comparative Analysis of the Time Complexity Between ICPMShOA and MShOA
To evaluate the impact of improvement strategies on algorithm efficiency, this section comparatively analyzes the computational overhead of the standard MShOA and the improved ICPMShOA from the perspective of time complexity.
3.7.1. Time Complexity Analysis of the MShOA
The initialization phase of the MShOA includes population generation and initial fitness calculation. Population generation requires generating pop solutions with dim dimensions, resulting in a computational complexity of O (pop × dim). For initial fitness calculation, the objective function is called for each solution, with a computational complexity of O (pop × f). The total complexity of this phase is O (pop × dim + pop × f), but since it is executed only once, its impact on the overall complexity is minor.
The main loop (with Max_iter iterations) involves five key operations in each iteration. During individual position update, pop individuals require dim arithmetic operations, leading to a total computational complexity of O (pop × dim). For boundary handling, pop individuals need dim comparisons and corrections, resulting in a computational complexity of O (pop × dim). For polarization state update, it is necessary to traverse the positions (dim-dimensional) and fitness of all individuals, with a computational complexity of O (pop × dim). For new fitness calculation, the objective function is called for the updated pop individuals, leading to a computational complexity of O (pop × f). Finally, for the update of the optimal solution, the minimum value is found from pop fitness values, with a computational complexity of O (pop) (which can be ignored, as it is much smaller than the first four items).
The total computational complexity of a single iteration is O (pop × dim + pop × f). Since there are Max_iter iterations, the total complexity of the main loop is O (Max_iter × (pop × dim + pop × f)). The computational load of the initialization phase is far smaller than that of the main loop; thus, the time complexity of the original algorithm is dominated by the main loop, ultimately resulting in O (Max_iter × pop × (dim + f)).
3.7.2. Time Complexity Analysis of the ICPMShOA
The improved MShOA (ICPMShOA) uses “chaotic initialization” instead of random initialization. In essence, it still generates pop solutions with dim dimensions, and the computational load remains O (pop × dim), which is the same as the complexity of the original initialization. Meanwhile, a chaotic sequence of length Max_iter is generated, with a computational load of O (Max_iter) (which is much smaller than O (Max_iter × pop × dim) of the main loop and can be ignored).
Two key steps are added to each iteration of the main loop. In the mutation operation step, mutation adjustments are performed on pop individuals, requiring dim arithmetic operations, with a computational load of O (pop × dim). In the opposition-based learning step, opposition solutions are generated, and the population is updated. This involves generating pop opposition solutions with dim dimensions (O (pop × dim)) and calculating the fitness of these opposition solutions (O (pop × f)), resulting in a total computational load of O (pop × dim + pop × f).
The computational load of these added operations is of the same order as that of “position update” and “fitness calculation” in the original iteration. Thus, the total computational load of a single iteration becomes “computational load of the original iteration + computational load of the added operations”, i.e., O (pop × dim + pop × f) + O (pop × dim + pop × f), which can still be simplified to O (pop × dim + pop × f).
The main loop of the improved algorithm still runs for Max_iter iterations, and the dominant computational load of a single iteration is consistent with that of the original algorithm. No higher-order factors are introduced by the added steps. Therefore, the overall time complexity is still dominated by Max_iter × pop × (dim + f), i.e., O (Max_iter × pop × (dim + f)).
3.7.3. Comparison of the Time Complexities of the Two Algorithms
The improved algorithm, due to the addition of steps such as mutation and opposition-based learning, has a slightly higher actual computational load per iteration than the original algorithm (for example, opposition-based learning requires additional calculation of the fitness of opposition solutions). However, the complexity of these added operations is of the same order as that of the original core steps (position update and fitness calculation) and does not alter the dominant relationship of “number of iterations × population size × (dimension + objective function complexity)”. Therefore, the time complexities of the original MShOA and the improved ICPMShOA are of the same order, both being O (Max_iter × pop × (dim + f)).
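As a back-of-the-envelope illustration (our own arithmetic, using the experimental settings pop = 30 and Max_iter = 300 and counting only objective-function calls, with initialization ignored):

$$\underbrace{300}_{\text{Max\_iter}} \times \underbrace{30}_{\text{pop}} = 9000 \ \text{calls (MShOA)}, \qquad 300 \times (30 + 30) = 18{,}000 \ \text{calls (ICPMShOA)}$$

Both counts are of order Max_iter × pop: the additional opposition-solution evaluations roughly double the constant factor without changing the asymptotic order.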
This chapter systematically elaborates on the improvement logic and verification process of the ICPMShOA. First, it introduces core improvement strategies such as the enhancement of initial population diversity by ICMIC, the breakthrough of convergence stagnation by periodic mutation, and the expansion of search range by centroid opposition-based learning. It confirms the independent contributions of each strategy to algorithm performance through ablation experiments. It intuitively presents the complete execution steps of the algorithm with the help of flowcharts and pseudocode. Finally, the time complexity analysis shows that while introducing multiple improvements, the ICPMShOA still maintains computational efficiency of the same order as the original MShOA. The above contents collectively construct the theoretical foundation and implementation framework of the ICPMShOA, laying a solid foundation for its subsequent application in practical problems.
4. Simulation Experiments and Analysis
To comprehensively evaluate the performance of ICPMShOA, seven advanced optimization algorithms were selected as references, namely, MShOA, OOA, BWO, FLO, GVOA, I_CPA, and HOA.
This selection ensured the comprehensiveness and objectivity of the evaluation. The experiments rigorously validated the ICPMShOA using the standard test suites from CEC2017 [25] and CEC2022. To reduce the random fluctuations inherent in metaheuristic approaches and guarantee statistical robustness, every experimental setup was repeated in 30 independent executions, and the final performance indicators were calculated as the average of all 30 trials. All comparative algorithms adopted identical parameter settings: a maximum of 300 iterations, a population size of 30, and a dimensionality of 30 on CEC2017 (20 on CEC2022).
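For reference, a minimal sketch of this repetition protocol is shown below (Python/NumPy; `icpmshoa` is the driver sketched in Section 3.6, and the sphere objective is a stand-in for a CEC benchmark function):

```python
import numpy as np

def benchmark_stats(algorithm, objective, dim, lb, ub, runs=30):
    """Repeat an optimizer `runs` times and report best/mean/std of the final fitness."""
    finals = np.array([algorithm(objective, dim, lb, ub)[1] for _ in range(runs)])
    return finals.min(), finals.mean(), finals.std()

# Example with the driver sketched earlier and a placeholder objective
best, avg, std = benchmark_stats(icpmshoa, lambda v: float(np.sum(v**2)),
                                 dim=30, lb=-100.0, ub=100.0)
```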
The CEC2017 test suite contains 29 test functions. F1 and F3 are unimodal functions: they have only one global optimal solution, testing the algorithm’s convergence speed and basic optimization capability. F4 to F10 are multimodal functions, which contain multiple local optimal solutions and one global optimal solution. Multimodal functions are used to test the algorithm’s ability to escape from local optima. F11 to F20 are hybrid functions, which are composed of more than three CEC2017 benchmark functions after rotation or translation, with each sub-function having a weight. Hybrid functions simulate complex scenarios with the integration of multiple characteristics. F21 to F30 are composite functions, which are formed by combining at least three hybrid functions or benchmark functions after rotation or translation, with sub-functions containing bias values and weights. Composite functions further increase complexity, testing the algorithm’s global search and multi-scale optimization capabilities.
Table 1 offers a comprehensive overview of the algorithms employed in this research, along with their respective control parameters.
4.1. Algorithm Convergence Curve
To ensure statistical rigor, thirty independent runs were conducted for each optimization method under identical conditions to evaluate robustness. The experimental outcomes are presented through average convergence trajectories (Figure 5 and Figure 6).
On the F1, F3–F11, F14–F18, F20–F22, F25, F26, and F28–F30 test functions of CEC2017 and F1, F2, F4–F7, and F9–F11 of CEC2022, the curve of the ICPMShOA drops rapidly in the early iterations (e.g., the first 50), indicating strong exploration ability that enables the algorithm to quickly approach the region of optimal solutions at the initial search stage. After 300 iterations, the average fitness of the ICPMShOA is significantly lower than that of most comparative algorithms, demonstrating its robust exploitation ability to converge precisely to superior solutions. The curve declines rapidly in the early phase and stabilizes in the later phase, with no obvious fluctuations, which signifies that the algorithm has good stability and is less prone to being trapped in local optima. Compared with the other algorithms, the ICPMShOA shows more prominent comprehensive optimization performance.
4.2. Algorithm Box Plot
The experimental outcomes are presented through box–whisker diagrams (Figure 7 and Figure 8). On F1, F3, F5–F10, F14–F16, F18, F20–F22, F25, F26, and F28–F30 of CEC2017 and F1, F2, F4–F7, F9, and F11 of CEC2022, the ICPMShOA's box is narrow and the whiskers are short, indicating that the dispersion of fitness results across multiple runs is extremely small, thus demonstrating strong robustness and outperforming the other seven algorithms. The median line of the ICPMShOA lies close to the bottom of the box (or directly within the lowest interval), indicating that the fitness values from most of its runs are extremely low; this reflects high convergence precision and the capability to stably approach the optimal solution. The box plot of the ICPMShOA exhibits no obvious outliers (no discrete points beyond the whiskers), suggesting that the algorithm rarely yields extremely poor results. Its stability far surpasses that of the comparative algorithms.
4.3. Statistical Results of Comparative Tests
The experimental outcomes are quantitatively validated by the statistical indicators (optimal solutions, standard deviations, and arithmetic means) in Table 2 and Table 3. The empirical results demonstrate that the ICPMShOA outperforms its competitors across multiple key metrics. In the CEC2017 test suite, for functions F1, F3–F5, F7, F8, F10–F19, F22, F25, F26, and F28–F30, the ICPMShOA achieved better minimum values, significantly outperforming all comparative algorithms. The same results were observed on F1, F4, and F7–F10 of CEC2020 and F1, F4, and F6–F12 of CEC2022. These findings substantiate the efficacy of the implemented enhancements, confirming that the ICPMShOA is highly competitive in solution quality.
4.4. The Number of Function Evaluations
To more clearly evaluate the performance of each algorithm on each function, the rankings of the algorithms in terms of the avg values for each function, derived from Table 2 and Table 3, are summarized in Table 4 and Table 5. It is evident that the improved algorithm holds a significant advantage in the rankings across both benchmark test sets, demonstrating its superior performance.
4.5. Wilcoxon’s Rank-Sum Test
Table 6 and Table 7 compare the p-values of the ICPMShOA and the other swarm intelligence algorithms obtained through the Wilcoxon rank-sum test. For functions such as F3, F5, F7, F8, F10–F18, F21, and F30 on the CEC2017 benchmark, the p-values of the ICPMShOA relative to the other seven algorithms are much lower than 0.05, indicating that its performance is clearly superior to theirs. For functions such as F1–F5, F7, and F9–F12 on the CEC2022 benchmark, the p-values of the ICPMShOA relative to the six algorithms other than the GVOA are much lower than 0.05.
In general, the ICPMShOA performs outstandingly in most function tests, showing notable differences from, and often superiority over, the other algorithms. Even when its performance is similar to that of some algorithms on certain functions, its advantages remain distinct, which confirms that it is strongly competitive in solving these benchmark problems.
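For reproducibility, p-values of this kind can be obtained with SciPy's rank-sum test; a minimal sketch is shown below, with placeholder data standing in for the two arrays of 30 final fitness values per algorithm:

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder data: final fitness values from 30 independent runs of each algorithm
rng = np.random.default_rng(0)
fitness_icpmshoa = rng.normal(loc=1.0, scale=0.1, size=30)
fitness_rival = rng.normal(loc=1.5, scale=0.2, size=30)

stat, p = ranksums(fitness_icpmshoa, fitness_rival)
print(f"Wilcoxon rank-sum p-value: {p:.3e}")  # p < 0.05 -> significant difference
```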
4.6. Radar Chart and Ranking Chart
In the experiments on CEC2017 and CEC2022, we conducted a systematic analysis and visualized the performance on these benchmark test suites through radar charts (Figure 9) and ranking plots (Figure 10).
On 20 functions of CEC2017, the improved algorithm achieved the first place in terms of average fitness value. As for CEC2022, the improved algorithm obtained first place in average fitness value on nine functions.
It is easy to see from the comprehensive ranking plot that the improved algorithm achieved first place in average ranking on both test suites.
4.7. Comparative Analysis of Algorithm Running Times for 30 Dim and 100 Dim Problems on CEC2017
The ICPMShOA, owing to the integration of additional improvement strategies, has a noticeably higher computational overhead than the original MShOA, but its optimization accuracy and convergence speed are greatly improved. Table 8 shows that increased dimensionality affects the time performance of the two algorithms differently. For all functions, the MShOA's average time increases significantly as the dimensionality rises. For the ICPMShOA, by contrast, the average time fluctuates minimally and remains relatively stable on the unimodal, multimodal, and hybrid functions, whereas on the composite functions F23–F30 it increases significantly. This places higher demands on the scalability of the algorithm.
In this chapter, through multi-dimensional simulation experiments and analyses, the optimization performance of the ICPMShOA is systematically verified: convergence curves are used to intuitively demonstrate the algorithm’s advantages in convergence speed and accuracy; box plots reflect its stability in multiple runs; statistical results and the Wilcoxon rank-sum test, from the perspectives of quantification and statistical significance, confirm its overall superiority over comparative algorithms; radar charts and ranking charts further comprehensively evaluate the algorithm’s comprehensive performance under multiple indicators; the comparison of the number of function evaluations and running times for problems with different dimensions (30D, 100D) verifies its efficiency and high-dimensional adaptability. In summary, the experimental results comprehensively show that the ICPMShOA achieved significant improvements in convergence performance, stability, robustness, and computational efficiency, providing sufficient experimental support for its practical application.
5. Engineering Application Analysis of ICPMShOA
To verify the applicability and optimization effect of the improved ICPMShOA in actual complex scenarios, this chapter selects three typical engineering optimization problems—Haverly’s pooling problem, the hybrid pooling–separation problem, and the optimization design problem for industrial refrigeration systems. By applying the algorithm to specific engineering scenarios, it analyzes its performance in handling practical problems, such as complex constraint conditions, multi-objective coupling, and high-dimensional variables, to further examine the practical value of the algorithm.
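Since all three problems are constrained while the ICPMShOA itself operates on unconstrained bound-limited search spaces, a common bridge is a static penalty wrapper. The Python sketch below is our own illustrative constraint handling, not necessarily the treatment used in the experiments; the weight and the example constraint are assumptions:

```python
import numpy as np

def penalized(objective, inequality_constraints, weight=1e6):
    """Wrap a constrained problem as min f(x) + weight * sum(max(0, g_i(x))^2).

    inequality_constraints: list of callables g_i, feasible when g_i(x) <= 0.
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return objective(x) + weight * violation
    return wrapped

# Illustrative use: minimize f subject to x0 + x1 <= 10,
# solved with the icpmshoa driver sketched in Section 3.6
f = lambda x: float(np.sum(x**2))
g = [lambda x: x[0] + x[1] - 10.0]
best_x, best_f = icpmshoa(penalized(f, g), dim=2, lb=-100.0, ub=100.0)
```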
5.1. Haverly’s Pooling Problem
Haverly's pooling problem is a classic non-convex optimization problem, first proposed by C.A. Haverly in 1978, which is mainly applied to resource blending and production scheduling in the petrochemical field [26]. Its core is to generate final products (such as different grades of gasoline) that meet quality constraints by optimizing the mixing ratios of raw materials (such as crude oils) in intermediate pools, while minimizing costs or maximizing profit. The system has three supply sources on the left, each providing crude oil with a different sulfur ("S") content and price; on the right are two terminals that receive the processed oil, each with specified demand limits, acceptable quality standards, and corresponding purchase prices. Its mathematical formulation follows the classical pooling model [26].
As shown in Table 9, the ICPMShOA achieved an optimal value of −1.27 × 10^6, significantly lower than those of the original MShOA and the other comparative algorithms, indicating that it is more competitive in handling Haverly's pooling problem. By introducing chaotic initialization, periodic mutation, and opposition-based learning, the ICPMShOA demonstrates the efficacy of the enhanced strategy in addressing this problem. The average value of the ICPMShOA is −1.01 × 10^6, which is also better than that of all comparative algorithms. This shows that the ICPMShOA can stably obtain high-quality solutions over multiple independent runs, verifying its robustness on Haverly's pooling problem.
5.2. Hybrid Pooling–Separation Problem
The hybrid pooling–separation problem is a complex optimization problem that integrates resource mixing and separation processes [27]. Its core lies in generating final products that meet the requirements for quality, cost, or energy efficiency in industrial production by optimizing the mixing ratios of raw materials and the separation strategies, while balancing resource utilization efficiency and operational feasibility. The system employs a combination of separators and pooling mechanisms, where the operating cost of each separator varies linearly with its throughput. The process constraints are governed by mass balance equations, which apply to the individual separation units and their interconnections with the mixing nodes.
In Table 10, the optimal value of the ICPMShOA reaches −5.98 × 10^5, which is significantly lower than that of the MShOA and the other five algorithms and second only to that of the GVOA. This indicates that it is more accurate in searching for the global optimum and can find better process parameters or operating conditions for the hybrid pooling–separation problem. The box (interquartile range) of the ICPMShOA sits lowest overall, and the median (the horizontal line inside the box) is close to the optimal value, indicating that most of the run results are concentrated in the high-quality interval; the whiskers are short and outliers are few, which further reflects the robustness of the algorithm in this complex pooling environment.
In conclusion, for the hybrid pooling–separation problem, the ICPMShOA, relying on better solution quality, stable run-to-run performance, and improvement strategies adapted to the characteristics of the problem, shows strong competitiveness and robust optimization performance for this type of engineering optimization problem.
5.3. Optimization Design Problem for Industrial Refrigeration Systems
The optimization design problem for industrial refrigeration systems is a comprehensive optimization problem targeting refrigeration equipment and system configuration in industrial scenarios [28]. Its core is to achieve a multi-objective balance, such as improving energy efficiency, reducing costs, and meeting environmental standards, by optimizing design parameters and operation strategies on the premise of meeting production requirements. This problem is widespread in industrial fields that rely on low-temperature environments, such as food processing, chemical synthesis, pharmaceuticals, and cold-chain logistics; as a key component of corporate energy consumption, it has become a research hotspot in industrial energy conservation. The optimal design of industrial refrigeration systems requires comprehensive consideration of multiple factors, including performance indicators, economic costs, and energy efficiency. The problem involves 14 design variables subject to 15 constraints; its mathematical formulation follows the standard model in [28].
As can be seen from Table 11, compared with the other algorithms, the ICPMShOA achieves a lower optimal value with higher precision. In the optimal design of industrial refrigeration systems, a lower best score translates into practical benefits such as lower energy consumption and higher refrigeration efficiency. The ICPMShOA can identify the globally optimal configuration of design parameters, such as the power distribution of the refrigeration equipment and the flow regulation of the cooling media, more quickly and accurately; it can thus reduce operating costs and improve the efficiency and stability of the refrigeration system. Its good stability also provides reliable design references for engineers, reducing the uncertainty of design schemes, and it has high engineering application value.
This chapter systematically verifies the ICPMShOA’s ability to solve practical optimization problems by applying it to three typical engineering problems. When dealing with engineering problems involving strong constraints and multi-variable coupling, the ICPMShOA not only maintains good convergence accuracy and stability but also efficiently balances multiple optimization objectives. This fully demonstrates its value in transforming from theoretical models to engineering practice, providing an effective tool for optimization decision-making in actual industrial scenarios.
6. Conclusions
The ICPMShOA proposed in this study, by integrating the ICMIC, centroid opposition-based learning, and a periodic mutation strategy, achieves for the first time a robust balance between global exploration of the search space and refinement of local optimal solutions. This mechanism breaks through the tendency of traditional optimization algorithms to fall into "insufficient exploration" or "inefficient exploitation", which is key to the significant improvement in its optimization performance.
The effectiveness of this improvement was verified through twofold validation: in the CEC 2017 and 2022 benchmark tests, the ICPMShOA not only converges faster but also achieves higher solution accuracy, with performance significantly outperforming seven algorithms such as OOA and BWO. In engineering practice, the algorithm successfully solves three types of complex problems—Haverly’s pooling problem (chemical process optimization), the hybrid pooling–separation problem, and industrial refrigeration system optimization—demonstrating high precision and stability, strong adaptability, and practical engineering value, respectively, which verifies its reliability from theory to application.
The core contributions of this study are reflected in two aspects: theoretically, it provides a new paradigm for the “exploration–exploitation balance” of intelligent optimization algorithms through strategy integration, enriching the theory of intelligent optimization algorithms; application-wise, it offers an efficient solution for complex optimization problems in fields such as chemical engineering, resource allocation, and engineering systems. However, in terms of scalability (such as the composite functions of CEC2017), the improved algorithm still has room for optimization.
Accordingly, future research will focus on the following directions. First, the improvement strategies in this paper will be extended to more algorithms, such as Sand Cat Swarm Optimization (SCSO) [29] and the Snow Ablation Optimizer (SAO) [30]. Second, more types of opposition-based learning (OBL) (e.g., [31]) will be introduced to improve intelligent optimization algorithms. Third, the improved algorithm will be applied to more complex engineering scenarios, such as the dynamic optimization of high-speed train bogies [32] and GPU acceleration [33]. We will continuously push beyond the existing research directions and realize the theoretical and practical value of intelligent optimization algorithms.