Article

Improved Manta Ray Foraging Optimization for PID Control Parameter Tuning in Artillery Stabilization Systems

1 School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 School of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
3 The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 266; https://doi.org/10.3390/biomimetics10050266
Submission received: 29 March 2025 / Revised: 18 April 2025 / Accepted: 21 April 2025 / Published: 26 April 2025

Abstract

In this paper, an Improved Manta Ray Foraging Optimization (IMRFO) algorithm is proposed to address the challenge of parameter tuning in traditional PID controllers for artillery stabilization systems. The proposed algorithm introduces chaotic mapping to optimize the initial population, enhancing the global search capability; additionally, a sigmoid function and Lévy flight-based dynamic adjustment strategy regulate the selection factor and step size, improving both convergence speed and optimization accuracy. Comparative experiments using five benchmark test functions demonstrate that the IMRFO algorithm outperforms five commonly used heuristic algorithms in four cases. The proposed algorithm is validated through co-simulation and physical platform experiments. Experimental results show that the proposed approach significantly improves control accuracy and response speed, offering an effective solution for optimizing complex nonlinear control systems. By introducing heuristic optimization algorithms for self-tuning artillery stabilization system parameters, this work provides a new approach to enhancing the intelligence and adaptability of modern artillery control.

1. Introduction

The control problem of artillery stabilization systems has long been a research focus. Early stabilization systems relied primarily on mechanical stabilizers. Traditional control strategies such as PID controllers have been widely applied in artillery stabilization systems. However, with the continuous advancement of artillery technology, control strategies for artillery stabilization have evolved to meet increasingly complex system demands. Due to the complexity of modern artillery systems, particularly those driven by electro-hydraulic actuators, control performance is influenced not only by mechanical design but also by nonlinear factors and external disturbances [1]. These factors significantly impact the stability accuracy of artillery in real combat environments. Moreover, uncertainties that are difficult to model accurately, such as mechanical collisions, structural deformations, and friction forces, together with mechanical wear over time, make PID parameter tuning more challenging [2]. Therefore, determining appropriate control parameters that can adapt to complex and dynamically changing environments remains an urgent problem in artillery stabilization systems.
Various control approaches have been proposed to address the challenges posed by strong nonlinearities and time-varying parameters in artillery stabilization systems, including neural network control, adaptive control, and fuzzy inference control [3,4,5,6,7]. Wang et al. [8] introduced a variable-structure wavelet neural network optimized using an adaptive differential evolution algorithm, improving both system accuracy and response speed. Ma et al. [9] proposed an adaptive robust feedback control strategy and verified its effectiveness through co-simulation. Rafiei et al. [10] incorporated fuzzy logic into a radial basis function (RBF) neural network for online optimization of control parameters. These studies primarily focus on model-based online optimization of control parameters to ensure basic accuracy under complex operating conditions.
Online tuning strategies have proven effective in artillery control systems, but often rely on real-time computation and high-fidelity models, which can increase implementation costs and hardware complexity. This challenge has led to the exploration of offline parameter tuning approaches [11,12], which utilize precalibrated optimization to minimize computational burdens during operation. Offline optimization has been widely explored in areas such as robotics and power systems, where it has shown the potential to enhance system performance and efficiency. To improve control accuracy while tuning parameters in a model-free environment, some researchers have turned to intelligent optimization algorithms for offline PID parameter optimization. For instance, Kong et al. [13] proposed an improved Dung Beetle Optimizer (DBO) for self-coupling PID parameter tuning. Rodrigues et al. [14] developed an enhanced Particle Swarm Optimization (PSO) algorithm to optimize PID parameters in automatic voltage regulators, while Azeez et al. [15] introduced an Artificial Bee Colony (ABC) algorithm with an adaptive learning strategy for optimizing control parameters in robotic arms.
The Manta Ray Foraging Optimization (MRFO) algorithm [16], first proposed in 2020, is an innovative bio-inspired swarm intelligence algorithm. This algorithm draws inspiration from the unique foraging behavior of manta rays, which utilize chain foraging, spiral foraging, and somersault foraging strategies to enhance their search capabilities. The manta ray’s flattened pectoral fins and broad dorsal fins offer exceptional maneuverability, enabling rapid directional changes and adjustments in posture. These foraging behaviors are effectively emulated by the MRFO algorithm, which is distinguished by its simplicity, minimal parameter requirements, and robust abilities in both global and local search. It has been widely applied to solve various optimization problems, including power systems, mechanical design, image segmentation, and path planning [17,18,19,20]. For instance, Ma et al. [21] proposed a two-strategy enhanced MRFO for image segmentation, demonstrating the algorithm’s strong adaptability and competitiveness in visual data analysis. Similarly, Adamu et al. [22] employed MRFO for hyperparameter optimization in skin cancer classification, revealing its potential in complex biomedical signal processing tasks. These applications reflect the potential and applicability of the MRFO algorithm in addressing diverse optimization challenges across multiple fields. Despite its advantages, the standard MRFO algorithm faces challenges, particularly in high-dimensional function optimization problems, where it tends to converge to local optima and exhibits slow convergence rates. These limitations hinder the algorithm’s performance in complex optimization tasks such as parameter tuning in systems with intricate dynamics.
To further enhance the performance of the artillery stabilization control system in a model-free setting, this paper proposes an Improved Manta Ray Foraging Optimization (IMRFO) algorithm that integrates multiple optimization strategies for parameter tuning. First, circle mapping, Lévy flight, and the sigmoid function are introduced to enhance the global search capability of the traditional MRFO algorithm. Then, the effectiveness of the proposed optimization method is validated using five benchmark test functions. Finally, the algorithm’s effectiveness and feasibility are demonstrated through co-simulation and physical system verification.
This paper makes three significant contributions. First, an Improved Manta Ray Foraging Optimization (IMRFO) algorithm incorporating circle chaotic mapping, Lévy flight, and a sigmoid function-based dynamic adjustment strategy is proposed to enhance the global search capability and convergence speed of the traditional MRFO algorithm. Second, a PID control parameter optimization method based on the IMRFO algorithm is designed for artillery stabilization systems. This method effectively addresses the challenges posed by strong nonlinearities and time-varying parameters, significantly improving the system’s stability and response performance. Third, the effectiveness and feasibility of the proposed method are validated through simulations and physical system experiments, demonstrating its potential as a reliable and efficient solution for complex control systems.

2. MRFO Algorithm

MRFO is a swarm intelligence optimization algorithm inspired by the foraging behavior of manta rays. Manta rays are highly adaptive and exhibit unique behaviors that allow them to efficiently locate food sources. In nature, their foraging process demonstrates adaptability, cooperative behavior, and strong local search capabilities, which inspired the development of the MRFO algorithm. The foraging behavior of manta rays can be classified into three types: chain foraging, spiral foraging, and somersault foraging.

2.1. Chain Foraging

The chain foraging strategy simulates the cooperative behavior of manta ray groups when searching for food. Each manta ray adjusts its direction not only based on the local food concentration but also by referencing the position of the manta ray ahead of it. This creates a “chain” in which each individual follows the one in front, continuously moving toward the food source and the leading individuals. This cooperative approach enhances the efficiency of the group in converging toward the optimal location while improving the collective search capability. The mathematical model of chain foraging is as follows:
x_i^d(t+1) = \begin{cases} x_i^d(t) + r\,(x_{best}^d(t) - x_i^d(t)) + \alpha\,(x_{best}^d(t) - x_i^d(t)), & i = 1, \\ x_i^d(t) + r\,(x_{i-1}^d(t) - x_i^d(t)) + \alpha\,(x_{best}^d(t) - x_i^d(t)), & i \geq 2, \end{cases} \quad (1)
\alpha = 2 r \sqrt{|\log r|}, \quad (2)
where x_i^d(t) is the position of the i-th individual in the d-th dimension at the t-th iteration, r is a random number in [0, 1], x_{best}^d(t) is the individual with the highest fitness value in the current iteration, which can be understood as the current food position, and \alpha is the chain foraging weight factor, which determines whether the current individual tends to move toward the previous individual or toward the historically best individual in this iteration. Figure 1 illustrates the behavioral strategy of chain foraging.
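As a concrete illustration, the chain foraging update of Equations (1) and (2) can be sketched in a few lines of NumPy. This is a minimal sketch under our own naming conventions, not the authors' implementation; `X` is the N-by-D population matrix and `x_best` is the current food position.

```python
import numpy as np

def chain_update(X, x_best):
    """Chain foraging (Equations (1)-(2)): each ray follows the ray ahead of it
    in the chain and the current best (food) position x_best."""
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = np.random.rand(D)
        alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))   # Equation (2)
        leader = x_best if i == 0 else X[i - 1]        # first ray follows the food directly
        X_new[i] = X[i] + r * (leader - X[i]) + alpha * (x_best - X[i])
    return X_new
```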

2.2. Spiral Foraging

Compared to chain foraging, spiral foraging focuses more on global exploration. In chain foraging, the movement of manta rays is primarily determined by the positions of the preceding individuals and the current food source, leading to strong local search characteristics due to the close connections among individuals. This means that most of the searching and adjustments occur within a limited area. In contrast, spiral foraging enables manta rays to move along spiral trajectories toward the food source, allowing them to explore a broader space. Individuals in spiral foraging are influenced not only by the preceding individuals but also by the spiral motion, thereby expanding the overall search area. This approach helps to prevent premature convergence and improves the algorithm’s ability to escape local optima. The mathematical model of spiral foraging is as follows:
x_i^d(t+1) = \begin{cases} x_{best}^d(t) + r\,(x_{best}^d(t) - x_i^d(t)) + \beta\,(x_{best}^d(t) - x_i^d(t)), & i = 1, \\ x_{best}^d(t) + r\,(x_{i-1}^d(t) - x_i^d(t)) + \beta\,(x_{best}^d(t) - x_i^d(t)), & i \geq 2, \end{cases} \quad (3)
\beta = 2 e^{r_1 (T - t + 1)/T} \sin(2 \pi r_1), \quad (4)
where \beta is the weight factor for spiral search and T is the maximum number of iterations. Figure 2 depicts the spiral foraging strategy.
In the spiral search strategy, each individual’s search behavior revolves around the current optimal solution, ensuring refined exploration in local regions. To enhance global search capability, a new reference position is randomly selected to drive individuals away from the existing optimal solution, encouraging exploration of new areas.
x_i^d(t+1) = \begin{cases} x_{rand}^d(t) + r\,(x_{rand}^d(t) - x_i^d(t)) + \beta\,(x_{rand}^d(t) - x_i^d(t)), & i = 1, \\ x_{rand}^d(t) + r\,(x_{i-1}^d(t) - x_i^d(t)) + \beta\,(x_{rand}^d(t) - x_i^d(t)), & i \geq 2, \end{cases} \quad (5)
x_{rand}^d = Lb^d + r\,(Ub^d - Lb^d), \quad (6)
where x_{rand}^d represents a randomly selected position within the current search space, while Lb^d and Ub^d respectively denote the lower and upper bounds of the current position space.
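A minimal NumPy sketch of the spiral foraging update is given below; the same routine covers Equation (3) when the reference point is x_best and Equation (5) when it is a random position generated according to Equation (6). Function names and the vectorized random draws are our own choices, not the authors' code.

```python
import numpy as np

def spiral_update(X, ref, t, T):
    """Spiral foraging: `ref` is x_best for exploitation (Equation (3)) or a
    random position for exploration (Equation (5)); t is the iteration index."""
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = np.random.rand(D)
        r1 = np.random.rand(D)
        beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)  # Equation (4)
        leader = ref if i == 0 else X[i - 1]
        X_new[i] = ref + r * (leader - X[i]) + beta * (ref - X[i])
    return X_new

def random_reference(lb, ub):
    """Equation (6): a random position inside the search bounds."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return lb + np.random.rand(lb.size) * (ub - lb)
```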

2.3. Somersault Foraging

In the somersault foraging behavior, manta rays adjust their positions by rolling around the current optimal solution. This movement pattern allows them to explore new locations in a manner similar to a somersault. By continuously updating their positions relative to the best solution found so far, the individuals enhance their ability to refine the search and improve the optimization process. The mathematical model of somersault foraging is as follows:
x_i^d(t+1) = x_i^d(t) + S\,(r_2 x_{best}^d - r_3 x_i^d(t)), \quad (7)
where S is the roll factor that determines the rolling range and r_2 and r_3 are random numbers in [0, 1]. Figure 3 illustrates the somersault foraging strategy.
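The somersault update of Equation (7) is equally compact. The sketch below assumes the commonly used roll factor S = 2; as before, the function name and array layout are ours rather than the authors' implementation.

```python
import numpy as np

def somersault_update(X, x_best, S=2.0):
    """Somersault foraging (Equation (7)): each individual rolls to a new position
    around the current best solution, which acts as a pivot."""
    r2 = np.random.rand(*X.shape)
    r3 = np.random.rand(*X.shape)
    return X + S * (r2 * x_best - r3 * X)
```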

3. Improved MRFO Algorithm

3.1. Circle Chaotic Mapping for Population Initialization

In artillery stabilization systems, the complexity of simulations and high time cost of physical experiments often limit the feasible population size in optimization algorithms. Under such constraints, the standard MRFO algorithm’s reliance on random initialization can lead to insufficient population diversity, hindering comprehensive exploration of the solution space. To address this, we introduce circle chaotic mapping for population initialization. This method generates initial solutions with better uniformity and ergodicity compared to random initialization, thereby enhancing the algorithm’s ability to explore the search space comprehensively and avoid premature convergence. By improving the diversity of the initial population, the chaotic mapping strategy effectively mitigates the limitations of standard MRFO during the complex high-dimensional optimization tasks inherent in artillery stabilization systems.
Chaos mapping is a method for generating random behavior based on deterministic systems. It is capable of producing sequences with ergodicity and non-repetition within a finite range [23]. In this paper, circle chaos mapping is used for population initialization. Circle chaos mapping is a chaotic mapping method based on trigonometric functions and modular arithmetic, and its mathematical expression is as follows:
x_{k+1} = \mathrm{mod}\!\left( x_k + b - \frac{a}{2\pi} \sin(2 \pi x_k),\ 1 \right), \quad (8)
x_i^d = Lb^d + x_{k+1}^d\,(Ub^d - Lb^d), \quad (9)
where the value range of the circle chaos mapping sequence is [0, 1] and where a and b are control parameters. To ensure the randomness and uniformity of the chaos mapping, a and b are generally set to values around 0.5 and 0.2, respectively. The steps for population initialization using circle chaotic mapping are as follows:
Step 1: Set the population size N and dimension D, initializing the first individual x_{1j} ∈ (0, 1) in each dimension. The initial values of i and j are 1.
Step 2: Initialize the control parameters as follows: a ∈ (0.45, 0.55), b ∈ (0.2, 0.25).
Step 3: With i = i + 1, generate x_{ij} according to Equation (8).
Step 4: If i is greater than N, proceed to Step 5; otherwise, return to Step 3.
Step 5: With j = j + 1, check whether j is greater than D. If not, set i = 1 and return to Step 2; if so, the initialization is complete and the initial population positions are output according to Equation (9).
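The following Python sketch illustrates this initialization procedure, assuming a = 0.5 and b = 0.2 as suggested above; the function name and the per-dimension seeding are our own choices rather than the authors' code.

```python
import numpy as np

def circle_chaos_population(N, D, lb, ub, a=0.5, b=0.2):
    """Population initialization with the circle chaotic map (Equations (8)-(9)).
    One chaotic sequence is generated per dimension and scaled into [lb, ub]."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    pop = np.empty((N, D))
    for j in range(D):
        x = np.random.rand()                         # random seed in (0, 1) for this dimension
        for i in range(N):
            # Circle map: x_{k+1} = mod(x_k + b - (a / (2*pi)) * sin(2*pi*x_k), 1)
            x = np.mod(x + b - (a / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)
            pop[i, j] = lb[j] + x * (ub[j] - lb[j])  # Equation (9): scale into the bounds
    return pop

# Example: 10 individuals in 3 dimensions (e.g., a Kp, Ki, Kd search box)
init_pop = circle_chaos_population(10, 3, lb=[1.0, 0.5, 0.5], ub=[5000.0, 2000.0, 300.0])
```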

3.2. Sigmoid Function-Based Strategy Selection Factor

In the MRFO algorithm, each iteration alternates between chain and spiral foraging based on a strategy selection factor λ ( t ) , which is typically fixed in standard implementations. However, this approach may not fully address the complexities of artillery stabilization systems, where the dynamics are highly nonlinear and influenced by numerous factors such as barrel vibrations, projectile characteristics, and environmental conditions. In such systems, an adaptive strategy is crucial; early iterations should emphasize global exploration in order to thoroughly search the solution space, while later iterations should focus on local exploitation in order to fine-tune the solution near the optimal region. To balance global search and local exploitation, this paper proposes a strategy selection factor based on the sigmoid function, allowing the algorithm to adaptively adjust in different phases.
The sigmoid function is a commonly used nonlinear function, and its mathematical expression is as follows:
S(x) = \frac{1}{1 + e^{-k (x - x_0)}}, \quad (10)
where x is the input variable, k is the parameter that controls the slope of the function, and x_0 is the center point of the function. In the improved MRFO algorithm, the sigmoid function is used to adjust the strategy selection factor. Its mathematical expression is
\lambda(t) = \lambda_{min} + \frac{\lambda_{max} - \lambda_{min}}{1 + e^{\,k (t - 0.5 T)/T}}, \quad (11)
where \lambda(t) is the strategy selection factor for the t-th iteration, \lambda_{min} and \lambda_{max} are the minimum and maximum values of the strategy selection factor, respectively, and T is the maximum number of iterations. By setting \lambda_{min} = 0.25 and \lambda_{max} = 0.75, the algorithm tends to favor spiral foraging during the early iterations, which helps it to quickly explore the solution space and avoid becoming trapped in local optima. In the later stages of the iteration, the focus shifts to chain foraging, which enables fine-tuning around the optimal solution and improves convergence accuracy.
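A possible implementation of Equation (11) is sketched below. The slope parameter k is not specified in the text, so k = 10 is used purely for illustration; with this choice the factor decays smoothly from about 0.74 toward 0.25 over the run.

```python
import numpy as np

def selection_factor(t, T, lam_min=0.25, lam_max=0.75, k=10.0):
    """Sigmoid-based strategy selection factor (Equation (11)). The factor starts
    near lam_max (favoring spiral foraging) and decays toward lam_min (favoring
    chain foraging) as t approaches T; k sets how sharp the transition is."""
    return lam_min + (lam_max - lam_min) / (1.0 + np.exp(k * (t - 0.5 * T) / T))

# Spiral foraging is chosen in an iteration when rand < selection_factor(t, T).
values = [round(selection_factor(t, 20), 3) for t in (1, 10, 20)]  # roughly [0.744, 0.5, 0.253]
```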

3.3. Lévy Flight-Integrated Somersault Foraging

The MRFO algorithm also employs somersault foraging, in which individuals oscillate around the current optimal solution to enhance local exploration. However, using a fixed oscillation range can result in premature convergence to local optima. To address this limitation, the proposed algorithm incorporates Lévy flight step sizes during the somersault phase, introducing random perturbations that expand the search range and improve the algorithm’s global exploration capability. This enhancement is particularly beneficial for optimizing complex systems such as artillery stabilization, where the search space is vast and traditional optimization methods may struggle to escape local minima. By integrating Lévy flight, the ability of the MRFO algorithm to explore diverse solutions is enhanced, leading to more robust and effective optimization outcomes in challenging engineering applications.
Lévy flight is a stochastic walk strategy based on a heavy-tailed distribution, where most step sizes are short but long steps occasionally occur with a small probability [24]. This characteristic allows Lévy flight to balance local exploitation and global exploration. Its mathematical expression is
L(s) = \frac{\mu}{|\nu|^{1/\beta}}, \quad (12)
\sigma_\mu = \left( \frac{\Gamma(1+\beta)\,\sin(\pi \beta / 2)}{\Gamma((1+\beta)/2)\,\beta\, 2^{(\beta-1)/2}} \right)^{1/\beta}, \quad \sigma_\nu = 1, \quad (13)
where \Gamma is the Gamma function, L(s) is the Lévy flight step length, and \beta is the Lévy exponent, which determines the probability of large step lengths. Usually, \beta = 1.5 is set, and both \mu and \nu follow a normal distribution such that
\mu \sim N(0, \sigma_\mu^2), \quad (14)
\nu \sim N(0, \sigma_\nu^2). \quad (15)
To enhance the search efficiency of the somersault foraging strategy, the Lévy flight step length is introduced into the position update process and Equation (7) is updated as follows:
x_i^d(t+1) = x_i^d(t) + S\,(r_2 x_{best}^d - r_3 x_i^d(t)) + k\,L(s)\,(x_{best}^d - x_i^d(t)), \quad (16)
where k is the Lévy flight step length control factor, which helps to avoid excessive influence of short step lengths on local fine-tuning. Figure 4 shows the position distribution of three individuals after 200 somersaults.
Figure 4 shows the somersault foraging strategy in the standard MRFO algorithm, where the individuals are concentrated within the rolling region after somersaulting. In contrast, using the improved somersault foraging strategy with the random step length characteristic of Lévy flight expands the search space. Most individuals remain concentrated in the rolling region near the current optimal solution for fine-grained search, while a small portion of individuals jump out of the current region through long steps to explore other solution spaces. This approach retains the efficiency of local exploitation while enhancing global exploration capability.
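The sketch below generates Lévy flight steps with the Mantegna method (Equations (12)-(15)) and applies the improved somersault update of Equation (16). The step length control factor k = 0.05 is an illustrative choice, as its value is not specified here; function names are ours.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(shape, beta=1.5):
    """Lévy flight step lengths via the Mantegna method (Equations (12)-(15))."""
    sigma_mu = (gamma(1.0 + beta) * sin(pi * beta / 2.0)
                / (gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)
    mu = np.random.normal(0.0, sigma_mu, shape)   # mu ~ N(0, sigma_mu^2)
    nu = np.random.normal(0.0, 1.0, shape)        # nu ~ N(0, 1)
    return mu / np.abs(nu) ** (1.0 / beta)

def levy_somersault_update(X, x_best, S=2.0, k=0.05):
    """Improved somersault foraging (Equation (16)): the classical roll plus a
    Lévy-scaled perturbation relative to the current best solution."""
    r2 = np.random.rand(*X.shape)
    r3 = np.random.rand(*X.shape)
    L = levy_step(X.shape)
    return X + S * (r2 * x_best - r3 * X) + k * L * (x_best - X)
```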
Our proposed improved MRFO algorithm is subsequently referred to as the IMRFO algorithm. The pseudocode for the IMRFO algorithm is shown in Algorithm 1.
Algorithm 1 IMRFO Algorithm
1: Initialization: population size N, number of iterations T, upper bound Ub, lower bound Lb
2: Generate initial population positions based on Equation (8), calculate fitness values of all individuals, and find the best solution x_best
3: repeat
4:     t ← t + 1
5:     if rand < λ(t) then
6:         Spiral foraging
7:         if t/T < rand then
8:             Update positions of all individuals based on Equation (5)
9:         else
10:            Update positions of all individuals based on Equation (3)
11:        end if
12:    else
13:        Chain foraging
14:        Update positions of all individuals based on Equation (1)
15:    end if
16:    Update the best fitness value
17:    for i = 1 to N do
18:        Somersault foraging
19:        Update positions of all individuals based on Equation (16)
20:        Update the best fitness value again
21:    end for
22: until t > T
Output: best fitness value and position of the optimal individual

3.4. Algorithm Validation

To evaluate the performance of the proposed algorithm, five benchmark functions are used for assessment; f_1 and f_2 are unimodal test functions, while f_3, f_4, and f_5 are multimodal test functions. The unimodal test functions are primarily used to assess the convergence speed of the algorithm, whereas the multimodal test functions are chosen from those areas where the standard MRFO algorithm exhibits relatively weaker performance. This selection aims to further validate the proposed algorithm's global optimization capability in complex problems. The benchmark functions are shown in Table 1.
We evaluated the IMRFO algorithm against six commonly used optimization algorithms: the standard Manta Ray Foraging Optimization (MRFO) algorithm [25], the Particle Swarm Optimization (PSO) algorithm [26], the Grey Wolf Optimizer (GWO) algorithm [27], the Whale Optimization Algorithm (WOA) [28], the Ant Lion Optimizer (ALO) algorithm [29], and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [30]. For all algorithms, the population size was set to 100 with 2000 iterations, and each algorithm was run ten times.
The parameter settings for the comparison algorithms are shown in Table 2. In Particle Swarm Optimization (PSO), the inertia weight was decreased linearly from 0.9 to 0.4 , gradually shifting the search focus from global to local exploration. The cognitive and social coefficients were both set to 1.5 , ensuring a balanced influence between individual experience and social sharing. For the Grey Wolf Optimizer (GWO) and Whale Optimization Algorithm (WOA), the convergence control parameter a decreased linearly from 2 to 0, allowing for a smooth transition between the exploration and exploitation phases. In the Ant Lion Optimizer (ALO) algorithm, the random walk boundaries were adaptively adjusted based on iteration count and ant lion selection was performed via roulette wheel selection to maintain diversity. In the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the initial step size σ was set to one-third of the search range to provide a balanced initial exploration. The parent number μ was set to half of the population size to ensure sufficient diversity. The step control parameter c s and covariance matrix update rate c 1 were computed based on the problem dimension and effective parent number to maintain stability and efficient adaptation of the covariance matrix.
The optimal and average values of each algorithm on the benchmark functions are presented in Table 3, while the convergence curves are shown in Figure 5. It can be observed that the convergence speed of IMRFO is slightly slower than that of MRFO for the unimodal functions f_1 and f_2, which is due to the updates in the somersault model. However, both algorithms quickly converge to the optimal solution, and their optimal and mean optimal solutions are significantly better than those of the other algorithms. For multimodal function f_3, IMRFO outperforms MRFO in both convergence speed and accuracy, indicating that the improved strategy selection factor enhances the global optimization capability in the early iterations and focuses more on local exploration in the later stages. For multimodal function f_4, IMRFO achieves the best optimal solution and mean value, with a mean that is significantly better than those of the other algorithms. This suggests that the improved somersault foraging mechanism can better escape local optima and enhance the stability of the algorithm. For multimodal function f_5, the MRFO algorithm was only able to escape local optima in two out of the ten optimization runs, while the proposed IMRFO algorithm was able to find the optimal solution multiple times, further proving the global optimization capability of the improved somersault foraging strategy.
The Wilcoxon Signed-Rank Test (WSRT) is used to assess statistical differences between two optimizers. The results for the ten runs across the various benchmark functions are presented in Table 4 and Table 5. The p-value indicates statistical significance, with values below 0.05 showing a significant difference; "W" represents the Wilcoxon test statistic, calculated as the smaller of the sums of ranks for positive and negative differences between paired observations; the "Result" column shows "+" for better performance by the first algorithm, "−" for the second, and "=" for no significant difference. The results indicate that IMRFO outperforms MRFO with significant differences (p-value < 0.05) in most cases, particularly for multimodal functions f_3 and f_5. IMRFO also shows better performance than the other algorithms on most functions. However, for function f_5, IMRFO shows no significant advantage over CMA-ES or PSO. Overall, the test results demonstrate that IMRFO generally outperforms the other algorithms, confirming its effectiveness.
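For reference, such a pairwise comparison can be reproduced with SciPy's implementation of the Wilcoxon signed-rank test. The run results below are placeholder numbers for demonstration only, not the values reported in this paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder best-fitness values from ten paired runs of two optimizers
# (illustrative numbers only, not the results reported in Table 3).
imrfo_runs = np.array([0.0044, 0.0021, 0.0139, 0.0008, 0.0050,
                       0.0012, 0.0071, 0.0033, 0.0095, 0.0027])
mrfo_runs = np.array([1.9174, 0.0102, 2.3311, 0.0090, 1.8402,
                      0.0121, 2.1050, 0.0087, 1.9980, 0.0150])

# Two-sided test; for this alternative SciPy returns the smaller of the two rank
# sums, matching the definition of W used in Tables 4 and 5.
stat, p_value = wilcoxon(imrfo_runs, mrfo_runs, alternative="two-sided")
# Simplified decision rule: "+" when the difference is significant and the first
# optimizer has the lower (better) mean fitness, "=" otherwise.
result = "+" if (p_value < 0.05 and imrfo_runs.mean() < mrfo_runs.mean()) else "="
print(f"W = {stat:.0f}, p = {p_value:.4f}, result: {result}")
```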

4. Parameter Optimization of Artillery Stabilization Control System Based on IMRFO

4.1. Development of Co-Simulation Platform

To verify the effectiveness of the IMRFO algorithm in the PID controller of an artillery stabilization control system, a system model was established using a complex system modeling and simulation platform. Based on the system structural parameters shown in Table 6, the dynamic model of the artillery stabilization control system was built in a multibody dynamics simulation platform. The control system was designed in a control simulation environment using a PID controller, and the IMRFO algorithm was implemented in an intelligent optimization module to optimize the PID controller parameters.
To achieve seamless coordination between the control system and the mechanical system, a co-simulation platform was established by integrating a multibody dynamics simulation platform with a numerical control system as illustrated in Figure 6. This setup enables real-time data exchange and computation between the control model and the artillery dynamics model. The control system processes the barrel elevation displacement state parameters obtained from the multibody dynamics model and computes the corresponding output torque, which is then fed back to drive the mechanical system. This approach ensures accurate and efficient interaction between the control strategy and the dynamic behavior of the artillery system.

4.2. Simulation Parameter Settings

Let the control parameters P, I, and D be K_p, K_i, and K_d, respectively. Through multiple simulation verification runs, it was found that the control system tends to diverge when the controller parameters are too large or too small. To prevent ineffective searches by the algorithm, the search ranges for the control parameters were set to K_p ∈ [1, 5000], K_i ∈ [0.5, 2000], and K_d ∈ [0.5, 300].
In the IMRFO algorithm, the population size was set to N and the maximum number of iterations was T. Larger values of N and T improve convergence accuracy, but significantly increase training time. Considering these factors, N and T were chosen as N = 10 and T = 20 .
Considering the control requirements of high tracking accuracy and fast response speed, the objective function is defined as
J = \omega_1 \int_0^{t_{ss}} |e(t)|\,dt + \omega_2 \int_0^{t_{ss}} t\,|e(t)|\,dt, \quad (17)
where t_{ss} is the simulation time, \omega_1 and \omega_2 are the weight coefficients for response speed and tracking accuracy, respectively, and e(t) represents the difference between the actual system output and the desired output, with t_{ss} = 10, \omega_1 = 5, and \omega_2 = 1.
To verify the effectiveness and accuracy of the proposed IMRFO algorithm in the artillery stabilization control system, a comparative analysis was performed between the IMRFO and WOA algorithms, both of which performed well in benchmark function tests. Sinusoidal signals were used to validate the dynamic control performance of the artillery elevation stabilization system, while step inputs were used to validate its static performance. The desired elevation angle motion trajectories under the two conditions were as follows:
\theta(t) = 0.15 + 0.1 \sin(12 \pi t)\ \mathrm{rad}, \quad (18)
\theta(t) = 0.4\ \mathrm{rad}. \quad (19)
In addition, to better evaluate the control performance of the controller, we used the Integral Absolute Error (IAE) and Integral Time-weighted Absolute Error (ITAE) as our performance metrics. Among these, the IAE reflects the cumulative error of the system during the entire operation, while the ITAE focuses more on the error performance over a longer period of time. The specific expressions are as follows:
\mathrm{IAE} = \int_0^{t_{ss}} |e(t)|\,dt, \quad (20)
\mathrm{ITAE} = \int_0^{t_{ss}} t\,|e(t)|\,dt. \quad (21)
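In discrete time, these metrics reduce to weighted sums of the sampled error. The sketch below evaluates J, IAE, and ITAE for two synthetic error signals; the error curves are placeholders used only to demonstrate the computation, not the paper's experimental data.

```python
import numpy as np

def performance_metrics(t, e, w1=5.0, w2=1.0):
    """Discrete approximations of J (Equation (17)), IAE (Equation (20)), and
    ITAE (Equation (21)) computed from a sampled tracking error e(t)."""
    abs_e = np.abs(e)
    iae = np.trapz(abs_e, t)
    itae = np.trapz(t * abs_e, t)
    return w1 * iae + w2 * itae, iae, itae

# Synthetic error signals for two tunings (placeholders, not experimental data).
t = np.linspace(0.0, 10.0, 10001)
e_optimized = 0.02 * np.exp(-8.0 * t) * np.cos(24.0 * t)   # fast, well-damped error
e_empirical = 0.05 * np.exp(-2.0 * t) * np.cos(24.0 * t)   # slower, larger error
for name, e in (("optimized", e_optimized), ("empirical", e_empirical)):
    J, iae, itae = performance_metrics(t, e)
    print(f"{name}: J = {J:.4f}, IAE = {iae:.4f}, ITAE = {itae:.4f}")
```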

4.3. Simulation Results

The simulation results are shown in Figure 7, Figure 8, Figure 9 and Figure 10. In these figures, the two graphs in Figure 7 represent the fitness value convergence curves of the IMRFO algorithm under sinusoidal and step signals. From the simulation results, it can be seen that the IMRFO algorithm demonstrates superior performance over the WOA algorithm in both scenarios. Additionally, the IMRFO algorithm achieves relatively good results in the early stages of training thanks to its incorporation of chaotic mapping-based population initialization.
The three graphs in Figure 8 present the iteration curves of the control parameters. After multiple iterations, the global optimal solution of the IMRFO algorithm under sinusoidal signal input is K_p = 4997.8, K_i = 799.8, K_d = 248.9, while the global optimal solution of the WOA is K_p = 4999.9, K_i = 365.1, K_d = 263.4. Under step signal input, the global optimal solution of the IMRFO algorithm is K_p = 4906.4, K_i = 116.3, K_d = 248.5, while the global optimal solution of the WOA is K_p = 4060.6, K_i = 270.5, K_d = 97.6.
Figure 9 and Figure 10 compare the control system's input–output behavior and error, respectively. The controller performance is shown in Figure 11. Under sinusoidal input, the IMRFO and GWO algorithms perform similarly, with IMRFO slightly outperforming GWO in terms of IAE and ITAE values. However, under step input, IMRFO significantly outperforms GWO, with GWO's IAE value being 51% higher than that of IMRFO and its ITAE value being 11% higher. Additionally, IMRFO converges within 0.3 s without significant oscillation; although the GWO algorithm has a lower steady-state error than the IMRFO algorithm, it has a slower response time and excessive overshoot.

4.4. Physical Validation

To validate the control performance of the PID controller parameters optimized by the IMRFO algorithm in an actual artillery stabilization system, a physical verification test was conducted. The primary objective of the test was to evaluate whether the optimized control parameters could outperform the empirical parameters in real-world applications and to further verify the practicality and reliability of the algorithm.
The physical verification was conducted on a semi-physical co-simulation platform integrating mechanical structure and electrical control. The test platform was provided by the China North Vehicle Research Institute. It was configured to focus on the vertical stabilization subsystem, with the performance of a key electromechanical actuator evaluated under different parameter settings. The mechanical subsystem includes key components such as a cradle, barrel, electric actuator, and adjustable hinges. The electrical control subsystem supports closed-loop control, comprising a host computer and a real-time processor from the mainstream DSP series. The system operates within a local network environment, allowing for real-time transmission of control commands and feedback signals.
Sensors such as a tilt sensor, rotary transformer, and eddy current probe were used to monitor the absolute and relative angular positions as well as the structural clearances. The control algorithm was developed in a mainstream embedded development environment and deployed to the DSP. Simulink-based models were compiled and downloaded to the real-time hardware, with experiment management handled via dSPACE ControlDesk. During the experiment, the system was powered up in a fixed sequence and real-time execution was initiated by switching to animation mode. The entire setup ensured that control strategies could be validated under realistic electromechanical coupling conditions, supporting effective evaluation of algorithm performance.
The experimental platform was configured to investigate the optimization of PI parameters within the velocity loop of the vertical stabilization system. The empirically determined parameters of K_p = 0.1 and K_i = 10 were used as the benchmark for comparison. To prevent system oscillation and divergence caused by improper control parameters, the optimization search range was set to K_p ∈ [0.02, 0.11] and K_i ∈ [1, 14].
During the experiment, the reference signals included a sinusoidal signal (frequency 0.3 Hz, amplitude approximately 1°/s) and a step signal (output value of approximately 1°/s). The IMRFO algorithm was used to optimize the PI parameters with the objective of minimizing the tracking error of the control system for the sinusoidal signal, thereby enhancing the system’s dynamic response and stability.
Based on the training results of the IMRFO algorithm under sinusoidal signals, the optimal solution was determined as K_p = 1.053 and K_i = 12.6807. Figure 12 shows the convergence curve of the fitness value, while Figure 13a,b presents the system outputs under sinusoidal and step signal inputs using the optimized control parameters.
As shown in Figure 14, the optimized PI parameters generally outperform the empirically determined values under both sinusoidal and step signals. Under sinusoidal inputs, the optimized control system achieves lower IAE and ITAE values, resulting in a smoother response and smaller error. Under step inputs, the optimized control parameters again surpass the empirical values. Although the maximum overshoot increases slightly, both IAE and ITAE decrease significantly, demonstrating a notable improvement in error control.

5. Conclusions

This paper proposes a control parameter optimization method for PID control of artillery stabilization systems based on an improved version of the Manta Ray Foraging Optimization (MRFO) algorithm, which we call IMRFO. By introducing circle chaotic mapping to enhance population diversity and incorporating Lévy flight and a sigmoid function-based dynamic adjustment strategy to improve global search capability, the proposed IMRFO algorithm shows stronger performance on high-dimensional and multimodal optimization problems. Although the proposed IMRFO algorithm exhibits slightly slower convergence than standard MRFO on simple unimodal functions due to its increased exploration behavior, this tradeoff is beneficial in addressing complex control problems. Simulation results confirm that the optimized PID parameters significantly enhance the system’s steady-state accuracy and dynamic response. Compared with the conventional empirical tuning method, the proposed approach demonstrates superior robustness and control performance.

Author Contributions

Methodology, X.W., X.L., Q.S., C.X., and Y.-H.C.; validation, X.L. and C.X.; writing—original draft preparation, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported jointly by the Natural Science Foundation of Jiangsu Province (No. BK20230879), the National Natural Science Foundation of China (No. 62303219), and the National Natural Science Foundation of China (No. 52175099, 52475020).

Data Availability Statement

The data and code of the current study can be obtained from the corresponding author upon reasonable request.

DURC Statement

The current research is limited to the field of intelligent control and optimization algorithms, which are primarily used to improve the performance of control systems. This work does not pose a threat to public health or national security. The authors acknowledge the dual-use potential of optimization technologies and confirm that all necessary precautions have been taken to prevent potential misuse.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, G.L.; Ge, J.L.; Sun, Q.Z.; Wang, L.Q. Development and application prospects of gun vibration and control. J. Vib. Diagn. 2021, 41, 1043–1105. [Google Scholar]
  2. Zheng, H.; Rui, X.; Zhang, J.; Gu, J.; Zhang, S. Nonlinear motor-mechanism coupling tank gun control system based on adaptive radial basis function neural network optimised computed torque control. ISA Trans. 2022, 131, 222–235. [Google Scholar] [CrossRef]
  3. Yuan, S.S.; Deng, W.X.; Yao, J.Y.; Yang, G.L. Robust control for bidirectional stabilization system with time delay estimation. Int. J. Control. Autom. Syst. 2024, 22, 1163–1175. [Google Scholar] [CrossRef]
  4. Chen, Y.; Cai, Y.; Yang, G.; Zhou, H.; Liu, J. Neural adaptive pointing control of a moving tank gun with lumped uncertainties based on dynamic simulation. J. Mech. Sci. Technol. 2022, 36, 2709–2720. [Google Scholar] [CrossRef]
  5. Jiang, S.; Tian, F.Q.; Sun, S.Y.; Liang, W.G. Integrated guidance and control of guided projectile with multiple constraints based on fuzzy adaptive and dynamic surface. Def. Technol. 2020, 16, 1130–1141. [Google Scholar] [CrossRef]
  6. Yuan, S.S.; Deng, W.X.; Yao, J.Y.; Yang, G.L. Robust adaptive precision motion control of tank horizontal stabilizer based on unknown actuator backlash compensation. Def. Technol. 2023, 20, 72–83. [Google Scholar] [CrossRef]
  7. Sun, Q.; Lu, Z.; Gui, X.; Chen, Y.H. Biomimetic linkage mechanism robust control for variable stator vanes in aero-engine. Biomimetics 2024, 9, 778. [Google Scholar] [CrossRef]
  8. Wang, C.; Hou, Y. The identification of electric load simulator for gun control systems based on variable-structure WNN with adaptive differential evolution. Appl. Soft Comput. 2016, 38, 164–175. [Google Scholar] [CrossRef]
  9. Ma, Y.Z.; Yang, G.L.; Sun, Q.Q.; Wang, X.Y.; Wang, Z.F. Adaptive robust feedback control of moving target tracking for all-electrical tank with uncertainty. Def. Technol. 2022, 18, 626–642. [Google Scholar] [CrossRef]
  10. Rafiei, H.; Akbarzadeh-T, M.R. Reliable fuzzy neural networks for systems identification and control. IEEE Trans. Fuzzy Syst. 2022, 31, 2251–2263. [Google Scholar] [CrossRef]
  11. Zhou, Y.; Hao, Z. Multi-strategy improved whale optimization algorithm and its engineering applications. Biomimetics 2025, 10, 47. [Google Scholar] [CrossRef] [PubMed]
  12. Garicano-Mena, J.; Santos, M. Nature-inspired metaheuristic optimization for control tuning of complex systems. Biomimetics 2024, 10, 13. [Google Scholar] [CrossRef] [PubMed]
  13. Kong, W.; Zhang, H.; Yang, X.; Yao, Z.; Wang, R.; Yang, W.; Zhang, J. PID control algorithm based on multistrategy enhanced dung beetle optimizer and back propagation neural network for DC motor control. Sci. Rep. 2024, 14, 28276. [Google Scholar] [CrossRef] [PubMed]
  14. Rodrigues, F.; Molina, Y.; Silva, C.; Naupari, Z. Simultaneous tuning of the AVR and PSS parameters using particle swarm optimization with oscillating exponential decay. Int. J. Electr. Power Energy Syst. 2021, 133, 107215. [Google Scholar] [CrossRef]
  15. Azeez, M.I.; Abdelhaleem, A.M.M.; Elnaggar, S.; Moustafa, K.A.; Atia, K.R. Optimization of PID trajectory tracking controller for a 3-DOF robotic manipulator using enhanced Artificial Bee Colony algorithm. Sci. Rep. 2023, 13, 11164. [Google Scholar] [CrossRef]
  16. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  17. Yildiz, A.R.; Mehta, P. Manta ray foraging optimization algorithm and hybrid Taguchi salp swarm-Nelder–Mead algorithm for the structural design of engineering components. Mater. Test. 2022, 64, 706–713. [Google Scholar] [CrossRef]
  18. Huang, H.; Aolei, L.I.; Lan, Y. Three dimensional path planning of unmanned underwater vehicle based on improved manta ray foraging optimization algorithm. J. Xi’an Jiaotong Univ. 2022, 56, 9–18. [Google Scholar]
  19. Jain, S.; Indora, S.; Atal, D.K. Rider manta ray foraging optimization-based generative adversarial network and CNN feature for detecting glaucoma. Biomed. Signal Process. Control 2022, 73, 103425. [Google Scholar] [CrossRef]
  20. Hu, G.; Li, M.; Wang, X.; Wei, G.; Chang, C.T. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl.-Based Syst. 2022, 240, 108071. [Google Scholar] [CrossRef]
  21. Ma, B.J.; Pereira, J.L.J.; Oliva, D.; Liu, S.; Kuo, Y.H. Manta ray foraging optimizer-based image segmentation with a two-strategy enhancement. Knowl.-Based Syst. 2023, 262, 110247. [Google Scholar] [CrossRef]
  22. Adamu, S.; Alhussian, H.; Aziz, N.; Abdulkadir, S.J.; Alwadin, A.; Abdullahi, M.; Garba, A. Unleashing the power of Manta Rays Foraging Optimizer: A novel approach for hyper-parameter optimization in skin cancer classification. Biomed. Signal Process. Control 2025, 99, 106855. [Google Scholar] [CrossRef]
  23. Wei, F.; Feng, Y.; Shi, X.; Hou, K. Improved sparrow search algorithm with adaptive multi-strategy hierarchical mechanism for global optimization and engineering problems. Clust. Comput. 2025, 28, 215. [Google Scholar] [CrossRef]
  24. Heidari, A.A.; Pahlavani, P. An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl. Soft Comput. 2017, 60, 115–134. [Google Scholar] [CrossRef]
  25. Zhang, X.Y.; Hao, W.K.; Wang, J.S.; Zhu, J.H.; Zhao, X.R.; Zheng, Y. Manta ray foraging optimization algorithm with mathematical spiral foraging strategies for solving economic load dispatching problems in power systems. Alex. Eng. J. 2023, 70, 613–640. [Google Scholar] [CrossRef]
  26. Song, J.; Li, P. PID control parameter optimization based on an immune fruit fly optimization algorithm. Control Eng. China 2017, 24, 2502–2507. [Google Scholar]
  27. Tian-Hua, J. Flexible job shop scheduling problem with hybrid grey wolf optimization algorithm. Control Decis. 2018, 33, 503–508. [Google Scholar]
  28. Ahmad, N.S. Modeling and hybrid pso-woa-based intelligent pid and state-feedback control for ball and beam systems. IEEE Access 2023, 11, 137866–137880. [Google Scholar] [CrossRef]
  29. Nataraj, D.; Subramanian, M. Design and optimal tuning of fractional order PID controller for paper machine headbox using jellyfish search optimizer algorithm. Sci. Rep. 2025, 15, 1631. [Google Scholar] [CrossRef]
  30. Zhao, F.; Bao, H.; Wang, L.; He, X.; Jonrinaldi. A hybrid cooperative differential evolution assisted by CMA-ES with local search mechanism. Neural Comput. Appl. 2022, 34, 7173–7197. [Google Scholar] [CrossRef]
Figure 1. Chain foraging strategy diagram.
Figure 2. Spiral foraging strategy diagram.
Figure 3. Somersault foraging strategy diagram.
Figure 4. Distribution of individuals in two-dimensional space after somersault foraging.
Figure 5. Convergence characteristics and comparison results of the six algorithms on the test functions.
Figure 6. Block diagram of artillery stabilization control system parameter optimization based on IMRFO.
Figure 7. Variation curve of the optimal individual fitness value.
Figure 8. Control parameter optimization curve.
Figure 9. Tracking curve under two types of signals.
Figure 10. Tracking error under two types of signals.
Figure 11. Comparison of performance metrics between the optimal parameters obtained by the IMRFO and GWO algorithms.
Figure 12. Fitness variation curve of the IMRFO algorithm on the physical turret.
Figure 13. Experimental results under different input signals: (a) tracking curve and tracking error under sinusoidal signal and (b) tracking curve and tracking error under step signal.
Figure 14. Comparison of performance metrics between parameter values optimized with IMRFO and empirical parameter values.
Table 1. Test functions.
Name | Function | Dimension | Range
Schwefel 2.22 | f_1 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30 | [−10, 10]
Rosenbrock | f_2 = \sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2] | 30 | [−30, 30]
Schwefel | f_3 = \sum_{i=1}^{n} x_i \sin(\sqrt{|x_i|}) | 30 | [−500, 500]
Penalized | f_4 = \frac{\pi}{n} \{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2 \} + \sum_{i=1}^{30} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 | 30 | [−50, 50]
Penalized2 | f_5 = 0.1 \{ \sin^2(3 \pi x_1) + \sum_{i=1}^{29} (x_i - 1)^2 [1 + \sin^2(3 \pi x_{i+1})] + (x_{30} - 1)^2 [1 + \sin^2(2 \pi x_{30})] \} + \sum_{i=1}^{30} u(x_i, 5, 100, 4) | 30 | [−50, 50]
Table 2. Parameter settings.
Algorithm | Parameter | Value
PSO | Inertia weight | From 0.9 to 0.4 linearly
PSO | Cognitive coefficient | 1.5
PSO | Social coefficient | 1.5
GWO | Convergence coefficient a | From 2 to 0 linearly
GWO | Encircling coefficient | 2a·r − a
GWO | Spiral constant | (a − 1)·rand() + 1
WOA | Linearly decreasing control factor a | From 2 to 0 linearly
WOA | Encircling coefficient vectors | [−a, a]
WOA | Prey influence coefficient vectors | [0, 2]
ALO | Random walk boundary | lb/ub
ALO | Antlion selection | Roulette wheel
CMA-ES | Initial step size σ | 0.33 × (Up − Low)
CMA-ES | Parent number μ | Pop/2
CMA-ES | Step control parameter c_s | (μ + 2)/(Dim + μ + 5)
CMA-ES | Covariance matrix update rate c_1 | 2/((Dim + 1.3)^2 + μ)
Table 3. Results of the six optimization algorithms on the test functions.
Function | Value | MRFO | IMRFO | PSO | GWO | WOA | ALO | CMA-ES
F1 | Mean | 0 | 0 | 2.07E−30 | 2.30E−107 | 5.91E−234 | 1.72E−05 | 8.32E−13
F1 | Best | 0 | 0 | 1.73E−30 | 4.41E−109 | 1.65E−234 | 7.61E−06 | 6.68E−13
F2 | Mean | 7.0550 | 7.4650 | 21.8097 | 26.3748 | 21.7172 | 62.9911 | 29.9983
F2 | Best | 6.0843 | 6.4613 | 7.5990 | 25.9960 | 20.7771 | 17.0989 | 22.3406
F3 | Mean | 8.99E+03 | 1.02E+04 | 6.71E+03 | 3.01E+03 | 8.36E+03 | 4.76E+03 | 6.75E+03
F3 | Best | 1.00E+04 | 1.13E+04 | 7.98E+03 | 3.72E+03 | 9.00E+03 | 5.09E+03 | 8.28E+03
F4 | Mean | 0.0104 | 1.57E−32 | 0.0311 | 0.0530 | 3.80E−10 | 4.34E−11 | 4.24E−05
F4 | Best | 1.57E−32 | 1.57E−32 | 1.57E−32 | 0.0262 | 7.72E−11 | 9.66E−12 | 1.18E−16
F5 | Mean | 1.9174 | 0.0044 | 0.0033 | 0.2266 | 0.0183 | 2.4383 | 2.73E−16
F5 | Best | 1.35E−32 | 1.35E−32 | 1.35E−32 | 1.49E−06 | 6.29E−09 | 2.1674 | 1.57E−16
Table 4. Wilcoxon signed-rank test results: IMRFO vs. MRFO, PSO, and GWO.
Function | IMRFO vs. MRFO (p-value, W, result) | IMRFO vs. PSO (p-value, W, result) | IMRFO vs. GWO (p-value, W, result)
f_1 | 1.0000, 0, = | 0.0020, 0, + | 0.0020, 0, +
f_2 | 0.1309, 43, = | 0.0020, 0, + | 0.0020, 0, +
f_3 | 0.0020, 0, + | 0.0020, 0, + | 0.0020, 0, +
f_4 | 1.0000, 0, = | 0.1250, 0, = | 0.0020, 0, +
f_5 | 0.0039, 0, + | 1.0000, 9, = | 0.0020, 0, +
Table 5. Wilcoxon signed-rank test results: IMRFO vs. WOA, ALO, and CMA-ES.
Function | IMRFO vs. WOA (p-value, W, result) | IMRFO vs. ALO (p-value, W, result) | IMRFO vs. CMA-ES (p-value, W, result)
f_1 | 0.0020, 0, + | 0.0020, 0, + | 0.0020, 0, +
f_2 | 0.0020, 0, + | 0.0020, 0, + | 0.0020, 0, +
f_3 | 0.0020, 0, + | 0.0020, 0, + | 0.0020, 0, +
f_4 | 0.0020, 0, + | 0.0020, 0, + | 0.0020, 0, +
f_5 | 0.0195, 5, + | 0.0020, 0, + | 0.5371, 34, =
Table 6. System structural parameters.
Structure | Name | Value
Turret Launching System | Turret Mass (kg) | 5517
Turret Launching System | Launching System Mass (kg) | 80
Turret Launching System | Turret Rotation Radius (m) | 1.1
Turret Launching System | Barrel Length (m) | 5.3
Electric Cylinder Installation Parameters | Initial Angle of Electric Cylinder (°) | arcsin(0.6)
Electric Cylinder Installation Parameters | Initial Length of Electric Cylinder (m) | 0.6
Electric Cylinder Installation Parameters | Distance from Ear to Electric Cylinder Drive Point (m) | 0.8
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Wang, X.; Li, X.; Sun, Q.; Xia, C.; Chen, Y.-H. Improved Manta Ray Foraging Optimization for PID Control Parameter Tuning in Artillery Stabilization Systems. Biomimetics 2025, 10, 266. https://doi.org/10.3390/biomimetics10050266