Article

PID Control Model Based on Back Propagation Neural Network Optimized by Adversarial Learning-Based Grey Wolf Optimization

School of Information and Control Engineering, Qingdao University of Technology, No. 777 Jialingjiang East Rd., Qingdao 266525, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 4767; https://doi.org/10.3390/app13084767
Submission received: 9 March 2023 / Revised: 28 March 2023 / Accepted: 6 April 2023 / Published: 10 April 2023
(This article belongs to the Special Issue Advanced Technology of Intelligent Control and Simulation Evaluation)

Abstract
In industrial production processes, online adaptive tuning of proportional-integral-derivative (PID) parameters using a neural network has been found more appropriate than a conventional PID controller for controlling industrial processes with varying characteristics. However, real-time implementation and high reliability require the adjustment of specific model parameters. Therefore, this paper proposes a PID controller that combines a back-propagation neural network (BPNN) and adversarial learning-based grey wolf optimization (ALGWO). To enhance the unpredictable behavior and exploration capacity of the grey wolf, this study develops a new parameter-learning technique. The alpha grey wolf adopts a Levy-flight random walk as its hunting method; the beta and delta grey wolves employ a search strategy centered on the top grey wolf; and the omega grey wolves apply a confrontation strategy toward the decision-level wolves. A fair balance between exploration and exploitation is achieved, as evidenced by the success of the proposed ALGWO technique on ten widely used benchmark functions. The effectiveness of different activation functions in conjunction with ALGWO was evaluated in resolving the parameter-adjustment problem of the BPNN model. The results demonstrate that no single activation function outperforms the others in all controlled systems, but all of them attain significantly lower (and thus better) fitness values than the conventional PID controller.

1. Introduction

In industrial control, the standard PID control algorithm is frequently utilized as a control strategy, offering the benefits of structural simplicity and excellent performance [1], among others. Traditional tuning methods are mainly used to optimize its control parameters [2,3]. However, the PID control algorithm has several drawbacks, including difficulty in parameter setting and susceptibility to disturbance. To address these issues, more PID tuning approaches, such as neural networks [4], machine learning [5,6], and heuristic algorithms [7,8], should be developed for use in actual control processes.
However, as controlled systems become increasingly complex, there is a growing need for control strategies that can handle time-varying behavior and time delays. The complexity of the controlled system contributes to the frequently subpar control effect of a PID controller [9,10]. Many intelligent algorithms have now been developed, and systems with time-varying behavior and hysteresis delays can benefit significantly from these approaches [11,12]. By manually specifying the control rules, the expert PID control algorithm can be applied to specific simulation systems [13]. Fuzzy PID control can be embedded in the controller via a lookup table, making it simpler to obtain better control effects [14,15,16,17,18,19]. However, expert PID and fuzzy PID controllers often exhibit subpar timing precision and limited anti-interference capability. Owing to the benefits of rapid adjustment velocity and excellent accuracy [20,21], neural network PID control approaches have been applied in numerous complicated systems [22,23,24]. In 2022, Wang et al. [25] used the PSO-BPNN-PID algorithm to control a nutrient solution, and the simulation results demonstrated that the reaction time and accuracy of the control system were superior to those of the conventional PID control method. The same year, a BPNN-PID controller was implemented on a Xilinx field-programmable gate array (FPGA) in [26]. The findings revealed that the proposed system exhibited fast convergence and dependable performance. To better regulate delay systems, in 2023, a particle swarm optimization-radial basis function (PSO-RBF) network was utilized in place of a traditional PID controller [27]. In general, BP neural network research has thus shown positive results.
In addition, to achieve effective controller performance, the parameters of the BPNN-PID controller must be properly optimized, and heuristic methods are typically used for this purpose [28,29,30,31,32]. In 2018, a BP neural network optimized by a mind evolutionary algorithm was used to predict wave heights; the model achieved high accuracy and a fast running time [33]. In 2021, Zhang et al. [34] improved a BP neural network model for landslide prediction using the water cycle algorithm. In 2022, a genetic algorithm combined with BP neural network PID control (GA-BP-PID) was proposed to improve the dynamic performance and robustness of a boost converter [35]. Accordingly, an ALGWO-BPNN control model is proposed in this paper as a better way to control time-varying and time-delay systems. The primary contributions of the proposed controller are as follows:
  • The GWO technique is used to optimize a proposed PID control model based on a BPNN. Considering that the GWO algorithm is not necessarily the best for solving complex problems, a stochastic adjustment formula for the convergence factor is proposed, and the beta and delta wolves are adjusted in combination with differential evolution (DE). With the enhanced GWO method, the connection weights of the BPNN yield a verifiably better control effect.
  • Several benchmark functions are selected to analyze ALGWO.
  • Some activation functions for the output layer of the BPNN model are analyzed and compared.
  • The ALGWO-BPNN model is simulated and compared with the conventional PID controller.

2. Related Works

2.1. Back Propagation Neural Network

A back propagation neural network is a multilayer feedforward neural network trained with the error back propagation technique. By its nature, it performs outstandingly in nonlinear mapping tasks such as function approximation and pattern recognition.
A back propagation neural network model has three layers: input, hidden, and output. The structure of the fundamental BPNN model is depicted in Figure 1. The input layer handles the type and dimension of the input. Through its number of neurons and its activation functions, the hidden layer provides the capacity for nonlinear mapping. The output layer is responsible for producing the final information.
The output of the neuron model is usually expressed as a nonlinear combination of inputs and weights, as shown in (1).

$\hat{y}_j = f\left(\sum_{i=1}^{n} w_{ij} x_i + b_{ij}\right)$ (1)

where $\hat{y}_j$ is the output of the neuron, $x_i$ is the input of the neuron, $b_{ij}$ and $w_{ij}$ are the bias and weight of the neuron, and $f(\cdot)$ is an activation function.
A typical loss function for optimization is shown in (2).

$E = \sum_{i=1}^{n} \left(\hat{y}_i - y_i\right)^2$ (2)

where $y_i$ is the ground-truth value and $E$ is the loss function.
Gradient descent (GD) is typically used to minimize the loss with respect to the weights [36].

$w_{ij} = w_{ij} - \eta \frac{\partial E(w_{ij})}{\partial w_{ij}}$ (3)

where $\eta$ is the learning rate.
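To make Equations (1)–(3) concrete, the following minimal sketch implements a forward pass and one gradient-descent weight update with NumPy. The 3-5-3 layer sizes mirror Section 3.3; omitting the biases and the particular learning rate value are simplifying assumptions.

```python
import numpy as np

# Illustrative 3-5-3 network; biases omitted for brevity.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(5, 3))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(3, 5))   # hidden -> output weights
eta = 0.01                                # learning rate (assumed value)

def forward(x):
    h = np.tanh(W1 @ x)                   # hidden activations, Eq. (1) with f = tanh
    y = np.tanh(W2 @ h)                   # output activations
    return h, y

def gd_step(x, y_true):
    """One gradient-descent update of the output weights, cf. Eq. (3)."""
    global W2
    h, y = forward(x)
    delta = 2.0 * (y - y_true) * (1.0 - y ** 2)   # dE/dz through tanh, for E of Eq. (2)
    W2 -= eta * np.outer(delta, h)                # w <- w - eta * dE/dw
```

The hidden-layer weights W1 are updated analogously by propagating delta one layer further back.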

2.2. Grey Wolf Optimization Algorithm

The authors of [37] examined the social behavior of grey wolves and modeled it to create the grey wolf optimization algorithm.
The entire grey wolf population is ranked by the quality of its candidate solutions to the actual problem. The best solution is designated as the alpha ($\alpha$), the second best as the beta ($\beta$), the third best as the delta ($\delta$), and the remaining grey wolves as the omega ($\omega$).
The encircling behavior of grey wolves toward nearby prey is mathematically modeled as follows:

$D_i = \left| W_{prey} \cdot X_{prey} - X_i \right|$

$X_i = X_{prey} - A_i \cdot D_i$

where $X_{prey}$ is the position of the prey, $X_i$ is the position vector of the grey wolf, and $A_i$ and $W_{prey}$ are coefficient vectors.
$A_i$ and $W_{prey}$ are calculated as follows:

$A_i = 2a \cdot rand - a$

$W_{prey} = 2 \cdot rand$

where $a$ decreases linearly from 2 to 0 over the iterations and $rand$ denotes random vectors in $[0, 1]$.
Because the true location of the prey is unknown, the hunt is simulated by assuming that the decision-making wolves (alpha, beta, and delta) know the approximate location of the prey. Each grey wolf then updates its location as follows:

$D_\alpha = \left| W_\alpha \cdot X_\alpha - X \right|, \quad D_\beta = \left| W_\beta \cdot X_\beta - X \right|, \quad D_\delta = \left| W_\delta \cdot X_\delta - X \right|$

$X_{i\alpha} = X_\alpha - A_\alpha \cdot D_\alpha, \quad X_{i\beta} = X_\beta - A_\beta \cdot D_\beta, \quad X_{i\delta} = X_\delta - A_\delta \cdot D_\delta$

$X_i = \frac{X_{i\alpha} + X_{i\beta} + X_{i\delta}}{3}$
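For concreteness, a compact sketch of this standard GWO position update is given below, assuming the population is stored as a NumPy array; the function and variable names are ours.

```python
import numpy as np

def gwo_step(X, alpha, beta, delta, a, rng):
    """One standard GWO position update for a population X of shape (n_wolves, dim)."""
    new_X = np.empty_like(X)
    for i, x in enumerate(X):
        candidates = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(x.shape) - a     # coefficient vector A
            W = 2 * rng.random(x.shape)             # coefficient vector W
            D = np.abs(W * leader - x)              # distance to this leader
            candidates.append(leader - A * D)       # candidate position per leader
        new_X[i] = np.mean(candidates, axis=0)      # average of the three candidates
    return new_X
```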

3. Model Optimization

In the standard grey wolf optimization algorithm, all omega members update their positions by learning from the three best leaders until the termination conditions are hit, and the best answer (alpha) is reported. The standard algorithm performs well and converges quickly, but when the alpha falls into a local optimum, or when the search space is complex, its effect is only average. Consequently, the proposed ALGWO enhances the standard GWO from two perspectives: the random adjustment of the convergence factor and the grey wolf location-update method.

3.1. Random Adjustment Strategy

The convergence factor is the parameter established by the grey wolf algorithm to balance the capabilities of exploration and exploitation. A monotonically decreasing strategy raises the probability of selecting a larger step size at the beginning of the iteration, strengthening exploration, and forces the grey wolf to select a smaller step size at the end of the iteration, strengthening exploitation. In this study, through a random selection technique, the convergence factor $a$ can take on a greater or smaller value at any point of the algorithm's iterative procedure. Hence, by occasionally taking a smaller value in the early stages, the convergence can be sped up, and by occasionally taking a larger value in the latter stages, local optima can be escaped. The three candidate curves are shown in Figure 2.
$a_1 = 2 - \frac{2t}{T}$

$a_2 = 2 - 2\ln\left(1 + \frac{t}{T}(e - 1)\right)$

$a_3 = 2 - 2\,\frac{e^{t/T} - 1}{e - 1}$
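The three schedules are easy to compute; each decays from 2 at $t = 0$ to 0 at $t = T$. The sketch below evaluates them and, as one reading of the random adjustment strategy, draws one of the three values at random each iteration.

```python
import numpy as np

def convergence_factors(t, T):
    """The three candidate schedules; each decays from 2 (t = 0) to 0 (t = T)."""
    a1 = 2 - 2 * t / T
    a2 = 2 - 2 * np.log(1 + (t / T) * (np.e - 1))
    a3 = 2 - 2 * (np.exp(t / T) - 1) / (np.e - 1)
    return a1, a2, a3

def random_a(t, T, rng):
    """Random adjustment: pick one of the three curves each iteration (our reading)."""
    return rng.choice(convergence_factors(t, T))
```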

3.2. Decision-Making Level Update Scheme

Since the alpha grey wolf already represents the optimal value in the current population, it cannot benefit from learning from the sub-optimal or third-best solutions. Its location update in this study therefore uses a Levy flight instead:

$X_\alpha = X_\alpha + s$

where $s$ is a random step generated by the formula proposed by Mantegna.
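For illustration, a sketch of Mantegna's algorithm for generating the step $s$ is given below; the stability index $\beta = 1.5$ is a commonly used value and an assumption here, as the paper does not state it.

```python
import numpy as np
from math import gamma, sin, pi

def mantegna_step(dim, rng, beta=1.5):
    """Levy-flight step s via Mantegna's algorithm; beta = 1.5 is an assumed value."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)    # numerator sample
    v = rng.normal(0.0, 1.0, dim)        # denominator sample
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed step s
```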
Because of the differential evolution algorithm's strong local search capability on multimodal landscapes, the differential evolution idea is utilized to update the positions of the beta and delta grey wolves, strengthening the leadership influence of the alpha grey wolf over these potential leaders:

$X_\beta = X_\beta + F \cdot (X_\alpha - X_\delta)$

$X_\delta = X_\delta + F \cdot (X_\alpha - X_\beta)$

where the populations of the beta ($\beta$) and the delta ($\delta$) are set to one or more, and $F$ is the variation factor, which decreases linearly with the number of iterations from 0.6 to 0.3.
If the alpha ($\alpha$), beta ($\beta$), and delta ($\delta$) grey wolves all sit at a local optimum of a multimodal function, the entire grey wolf group may slip into that local optimum under the above update formulas and lose the ability to perform a global search. Hence, a confrontation between candidate leaders and leaders is employed to counteract the decline in optimization accuracy induced by this condition:

$X_i = \frac{X_{i\alpha} + X_{i\beta} + X_{i\delta}}{3}, \quad r > q$

$X_i = -\frac{X_{i\alpha} + X_{i\beta} + X_{i\delta}}{3}, \quad r \le q$

where the beta ($\beta$) and the delta ($\delta$) are the second and third candidate solutions of the problem, $r$ is a uniform random number, and $q$ is the preset selection threshold, which decreases linearly with the number of iterations from 0.9 to 0.6.
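A minimal sketch of the decision-level updates follows, combining the DE-style beta/delta moves with the confrontation rule; the negated average for $r \le q$ is our reading of the formula above.

```python
import numpy as np

def update_leaders(X_alpha, X_beta, X_delta, F):
    """DE-style updates pulling the candidate leaders (beta, delta) toward the alpha."""
    X_beta_new = X_beta + F * (X_alpha - X_delta)
    X_delta_new = X_delta + F * (X_alpha - X_beta)
    return X_beta_new, X_delta_new

def omega_position(x_ia, x_ib, x_id, q, rng):
    """Adversarial update: average the three candidates when r > q, otherwise take
    the opposing (negated) average -- our reading of the confrontation formula."""
    mean = (x_ia + x_ib + x_id) / 3.0
    return mean if rng.random() > q else -mean
```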

3.3. PID Controller Based on Back Propagation Neural Network

The back propagation neural network maps the input, output, and error nonlinearly to the PID controller's three parameters, $k_p$, $k_i$, and $k_d$. The BP neural network has three neurons in the input layer, five in the hidden layer, and three in the output layer. The frequently employed Tanh function is utilized in the hidden layer:

$f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$
The three non-negative gain parameters of the PID control scheme are output by the BP neural network; hence, sigmoid functions and other functions with non-negative output values are applied in the output layer:

$g(x) = u_b \cdot \frac{1}{1 + e^{-x}}$

$t(x) = u_b \cdot \frac{e^x}{e^x + e^{-x}}$

$h(x) = \min(\max(0, x), u_b)$

where $u_b$ is the upper bound of the output, employed to regulate the output range.
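A direct sketch of the three bounded output activations, with the upper bound passed as a parameter:

```python
import numpy as np

def g(x, ub):
    """Scaled sigmoid: output in (0, ub)."""
    return ub / (1.0 + np.exp(-x))

def t(x, ub):
    """Scaled positive tanh-like function: output in (0, ub)."""
    return ub * np.exp(x) / (np.exp(x) + np.exp(-x))

def h(x, ub):
    """ReLU clipped at ub: output in [0, ub]."""
    return np.minimum(np.maximum(0.0, x), ub)
```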
The structure of the PID control scheme based on a back propagation neural network is shown in Figure 3.
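Since Section 4 employs the incremental digital PID law, a minimal sketch of how the three network outputs could drive it is given below; the incremental form shown is the textbook one, and treating it as the paper's exact implementation is an assumption.

```python
def incremental_pid(kp, ki, kd, e, e1, e2, u_prev, u_lim=10.0):
    """Standard incremental digital PID; e, e1, e2 are e(k), e(k-1), e(k-2).
    The output is limited to [-u_lim, u_lim], matching Section 4.1."""
    du = kp * (e - e1) + ki * e + kd * (e - 2.0 * e1 + e2)
    return max(-u_lim, min(u_lim, u_prev + du))
```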

3.4. Structure of the Total Algorithm

The search space in this paper consists of the network weights and hyperparameters, since they have a major impact on the performance of the back propagation neural network model. The overall process of the system is shown in Figure 4.
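A schematic outline of this process, reusing the helper sketches above, might look as follows; it is a sketch of the flow in Figure 4 under our assumptions, not the authors' code.

```python
import numpy as np

def algwo_optimize(simulate, dim, n_wolves=30, n_iter=1000, seed=0):
    """Each wolf encodes one BPNN weight vector; simulate(w) runs the
    closed-loop control simulation and returns the cost J."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_wolves, dim))
    for t in range(n_iter):
        fit = np.array([simulate(w) for w in X])      # closed-loop fitness per wolf
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        alpha = alpha + mantegna_step(dim, rng)       # Levy-flight alpha update
        F = 0.6 - 0.3 * t / n_iter                    # variation factor: 0.6 -> 0.3
        beta, delta = update_leaders(alpha, beta, delta, F)
        a = random_a(t, n_iter, rng)                  # randomly adjusted factor
        X = gwo_step(X, alpha, beta, delta, a, rng)   # omega updates (plus omega_position)
    return X[np.argmin([simulate(w) for w in X])]
```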

4. Results and Discussion

4.1. Experiment Settings

Ten test functions are employed to gauge the performance of the proposed method and demonstrate its usefulness. To guarantee a fair comparison, the population size is fixed at 30 and the number of iterations is set to 1000. The parameters of the compared algorithms are displayed in Table 1. To limit random variation, each test is carried out 10 times independently.
However, additional analysis is required to determine how well the ALGWO-BPNN-PID algorithm performs in control problems. To assess the superiority of the ALGWO-BPNN control technique, four tests are performed using the fitness value, overshoot, and settling time. The simulation settings are kept alike across tests. The incremental digital PID is employed as the controller model, and the sampling interval is set to one. The activation functions are g(x), t(x), and h(x), as described above. The output control signal of the controller is restricted to [−10, 10]. Equation (21) is used to assess the algorithm's effectiveness.
$J = \int_0^{\infty} \left( w_1 \left| e(t) \right| + w_2 u^2(t) \right) dt$ (21)

where $e(t)$ is the system error, $u(t)$ is the controller output, and $w_1$ and $w_2$ are weights, with values of 0.001 and 0.999, respectively. Furthermore, overshoot causes the fitness to increase more quickly.
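A discretized evaluation of Equation (21) might look as follows; the explicit overshoot penalty term and its weight w3 are assumptions made to reflect the statement that overshoot increases the fitness more quickly.

```python
import numpy as np

def fitness_J(e, u, dt, w1=0.001, w2=0.999, w3=100.0):
    """Discretized version of Eq. (21) over sampled arrays e and u.
    The w3 term penalizes overshoot samples (e < 0 for a step reference);
    its value is an assumption."""
    J = np.sum((w1 * np.abs(e) + w2 * u ** 2) * dt)
    J += w3 * np.sum(np.where(e < 0.0, np.abs(e) * dt, 0.0))
    return float(J)
```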

4.2. Test Systems

4.2.1. Test System 1

The test set includes six unimodal functions. The solutions of the proposed ALGWO algorithm are contrasted with those of PSO and GWO. The unimodal test functions are described in Table 2, and Figure 5 displays their two-dimensional parameter spaces for easier visualization.
The averages and standard deviations over ten independent runs, used to assess the robustness and average accuracy of the algorithms, are shown in Table 3. For each function, the best results are displayed in bold. Figure 6 shows the convergence curves of the six test functions.
As shown in Table 3, the results of the ALGWO algorithm are usually noticeably better than those of the other algorithms. Figure 6 illustrates that the ALGWO algorithm provides better solutions and faster convergence. As a result, the ALGWO method performs best overall on unimodal test functions compared with the other algorithms.

4.2.2. Test System 2

The test set includes four multimodal functions. The proposed algorithm is compared to four metaheuristic algorithms that are currently in use: PSO, DE, GWO, and SOGWO. Table 4 displays the specifics of the multimodal test functions, and the two-dimensional parameter space of these four multimodal functions is depicted in Figure 7.
Table 5 contains the average and standard deviation for ten independent runs. The convergence rates of the four test functions for several algorithms are displayed in Figure 8.
Table 5 demonstrates that, despite the ALGWO algorithm's poor performance on the multimodal test function F8, its average search accuracy achieves the minimum value of 4.44 × 10⁻¹⁵ on F10 and the optimal value of 0 on F9 and F11. As a result, the ALGWO algorithm's results are much superior to those of the other algorithms. Figure 8 shows that, for the test functions F10 and F11, the ALGWO algorithm yields superior solutions and accelerated convergence. Therefore, when solving multimodal test functions, the ALGWO method generally outperforms the other algorithms.

4.2.3. Test System 3

To confirm the superiority of the ALGWO-BPNN in the PID control process, a first-order linear system is chosen for testing; the model transfer function is indicated in (22). The value of $u_b$ is 10, so the mapping range of the activation function is set to 0 to 10. The control findings are shown in Table 6 and Figure 9. The best performance is demonstrated by the g(x) activation function. The convergence characteristics can be seen in Figure 10.

$G(s) = \frac{1.7}{320s + 1}$ (22)
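One way to simulate this plant at the unit sampling interval of Section 4.1 is a zero-order-hold discretization, sketched below; notably, the resulting coefficients are close to those of the difference equation used for the time-varying system in Section 4.2.6.

```python
import numpy as np

Ts = 1.0                     # sampling interval from Section 4.1
a = np.exp(-Ts / 320.0)      # discrete pole, approximately 0.9969
b = 1.7 * (1.0 - a)          # discrete input gain, approximately 0.0053

def plant_step(y_prev, u_prev):
    """Difference equation of the plant in (22): y(k) = a*y(k-1) + b*u(k-1)."""
    return a * y_prev + b * u_prev
```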
As seen in Table 6, the average and standard deviation obtained with the ALGWO method are noticeably lower than with the GWO algorithm. For instance, the PID control system utilizing the g(x) BPNN reduces the standard deviation from 13.5263 to 2.4216 and also lowers the average value by about 1.5. As a result, robustness and average accuracy are improved. After introducing the BPNN model, the average value decreases by roughly 20, so the average accuracy is increased. The g(x) activation function has a better effect when utilized as the activation function.
As shown in Figure 9, the controllers employing BPNN-PID outperform the PID controllers in reducing overshoot when the system reference value changes. As shown in Figure 10, comparing the ALGWO and GWO algorithms when optimizing the BPNN-PID control scheme, ALGWO approaches the ideal solution at iteration 200 with a faster convergence rate, demonstrating ALGWO's stronger global search capability and quicker discovery of high-dimensional optimal solutions. In terms of optimization accuracy, the optimized values using the ALGWO algorithm are 275.7396 and 301.0937, respectively, showing that it is superior to GWO in both high-dimensional and low-dimensional scenarios. According to the convergence curve, the ALGWO algorithm needs 60 iterations to find the local optimal solution and 90 iterations to depart from it, a large reduction compared to the 150 iterations needed by the GWO method. As a result, ALGWO can escape local optimal solutions more quickly.

4.2.4. Test System 4

The second-order linear system is used for testing, and (23) gives the model transfer function. The activation function's mapping range is set to 0 to 30. The performance characteristics and step response data are displayed in Table 7 and Figure 11.

$G(s) = \frac{133}{s^2 + 25s}$ (23)
Table 7 demonstrates that the ALGWO method achieves lower fitness than the GWO algorithm in several control schemes. As an illustration, in the BPNN-PID control system using g(x), the fitness is decreased from 1.0651 to 1.0360 and the overshoot is decreased from 0.62% to 0.02%. The average accuracy is therefore increased by the ALGWO technique. Regarding whether to use the BPNN at all, the fitness decreases from 11.8913 to 1.0238, which clearly indicates that the ALGWO-BPNN-PID control scheme is more effective in second-order linear systems. Furthermore, the g(x) activation function gives better results, as shown in Table 7.
As can be clearly seen from Figure 11, the maximum tracking value decreases from 1.10 to 1.007. As a result, ALGWO-BPNN-PID controllers have clear advantages over traditional PID controllers in reducing overshoot.

4.2.5. Test System 5

The first-order plus time delay model is selected for testing, and the model transfer function is shown in (24). The mapping range of the activation function is set to 0 to 30. Table 8 shows the mean value and standard deviation of 10 runs. Table 9 and Figure 12 illustrate the performance characteristics and step response outputs of one representative run, while Figure 13 displays the 10-run average convergence characteristics of the algorithms.

$G(s) = \frac{1.7\,e^{-10s}}{320s + 1}$ (24)
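With the unit sampling interval, the 10 s delay corresponds to ten samples, which can be simulated with a short input buffer; the sketch below reuses the coefficients a and b from the sketch for (22).

```python
from collections import deque

DELAY_STEPS = 10                             # 10 s delay at a 1 s sampling interval
u_buffer = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)

def delayed_plant_step(y_prev, u_now):
    """Plant of (24): the control input reaches the first-order dynamics 10 samples late."""
    u_delayed = u_buffer[0]                  # input issued 10 samples ago
    u_buffer.append(u_now)                   # store the current input for later steps
    return a * y_prev + b * u_delayed        # a, b as in the sketch for (22)
```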
As shown in Table 8, compared to the control scheme using GWO and PID, the control scheme using ALGWO-BPNN-PID reduces the fitness from 27.0366 to 23.8964, improving the average accuracy. Among the activation functions, t(x) achieves the best effect, with the lowest average fitness of 23.8964. As can be seen from Table 9, using the ALGWO-BPNN-PID control scheme causes the overshoot to decrease from 0.18% to 0.00%, the rise time to decrease by about 5 s, and the settling time to decrease by about 8 s. Among the schemes, the one using the g(x) activation function has the better overall effect.
As can be clearly seen in Figure 12, the response speed of the GWO-PID control scheme is slower, reaching 0.98 in about 52 s, while the BPNN-PID control scheme reaches it in about 45 s. As can be seen from Figure 13, the ALGWO-BPNN-PID control scheme has a faster convergence speed and better optimization accuracy, with the final fitness lower by about two. Therefore, the ALGWO-BPNN-PID control scheme has more advantages in the first-order plus time-delay model.

4.2.6. Test System 6

The time-varying model is selected for testing, and the model's difference equation is displayed below. The mean value and standard deviation of 10 runs are displayed in Table 10. The mapping range of the activation function is set to 0 to 30. Figure 14 shows the step response, while Figure 15 displays the comparison results. The results demonstrate that the ALGWO-BPNN method shows the best control effect in slowly time-varying systems.

$w = 1 - 0.8\,e^{-0.1k}$

$y(k) = 0.9969 \cdot w \cdot y(k-1) + 0.0053 \cdot u(k-1)$
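A minimal rollout of this difference equation, useful for reproducing the step-response experiments, might look as follows:

```python
import numpy as np

def simulate_time_varying(u_seq):
    """Roll out the slowly time-varying difference equation above for a given input."""
    y = np.zeros(len(u_seq) + 1)
    for k in range(1, len(y)):
        w = 1.0 - 0.8 * np.exp(-0.1 * k)      # gain term drifts from ~0.2 toward 1.0
        y[k] = 0.9969 * w * y[k - 1] + 0.0053 * u_seq[k - 1]
    return y
```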
First, as shown in Table 10, using the ALGWO method produces lower averages and standard deviations than the GWO algorithm. For example, the PID control system using the t(x) BPNN reduces the standard deviation from 8.8126 to 0.6698 and also reduces the average value by about 5.5. Therefore, robustness and average accuracy are improved. Second, the final average fitness is reduced by roughly 10, making the ALGWO-BPNN-PID controllers significantly superior to conventional PID controllers. Finally, the g(x) activation function has a better effect when used as the activation function, with a fitness of 40.2174.
Figure 14 shows that the controller employing BPNN-PID outperforms the PID controller in reaction speed, shortening the time by around ten seconds when tracking the unit step response. As seen in Figure 15, comparing the ALGWO and GWO algorithms when optimizing the BPNN-PID control scheme, ALGWO comes close to an optimal solution by iteration 100, demonstrating its superior global search capability. In terms of optimization accuracy, the optimized values for the ALGWO method are 49.2536 and 40.5169, respectively, demonstrating that it outperforms GWO. In conclusion, ALGWO-BPNN-PID controllers perform better when applied to slowly time-varying systems.

5. Conclusions

PID control is the most popular type of industrial process control and has been utilized extensively in the metallurgical, electromechanical, and other industries. However, as controlled systems become increasingly complex, the PID control algorithm exerts only an average control effect on time-varying and time-delay systems. In this paper, an incremental PID control algorithm based on a BP neural network was selected. Furthermore, we sought to enhance the control effect of the BPNN-PID controller by utilizing a novel nature-inspired optimization technique, namely the grey wolf optimization algorithm based on adversarial learning. In the first step, a fresh adversarial search approach for GWO was proposed, and ten test functions were used to evaluate the ALGWO algorithm; the technique performs better than other algorithms because it increases convergence speed and search capability. In the second step, the ALGWO-BPNN model was built on the ALGWO algorithm. To achieve real-time online modification of the PID control parameters and acquire the optimal control rules, the key parameters of the BPNN were adjusted using ALGWO. In addition, the impact of changing the activation function of the BPNN was discussed, and comparative tests were conducted against conventional PID control. The findings indicate that the ALGWO-BPNN controller exerts a superior control effect compared with the conventional PID controller in time-varying or time-delay systems.

Author Contributions

Conceptualization: H.L.; methodology: H.L.; software: H.L.; validation: Q.Y. and Q.W.; investigation: H.L. and Q.Y.; writing—original draft: H.L.; writing—review and editing: Q.Y. and Q.W.; visualization: H.L.; supervision: Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shandong Provincial Natural Science Foundation of China, grant number ZR2017BF043.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Phu, N.D.; Hung, N.N.; Ahmadian, A.; Senu, N. A new fuzzy PID control system based on fuzzy PID controller and fuzzy control process. Int. J. Fuzzy Syst. 2020, 22, 2163–2187.
  2. Ziegler, J.G.; Nichols, N.B. Optimum settings for automatic controllers. Trans. ASME 1942, 64, 759–768.
  3. Cohen, G.H.; Coon, G.A. Theoretical consideration of retarded control. Trans. ASME 1953, 75, 827–833.
  4. Lee, Y.S.; Jang, D.W. Optimization of Neural Network-Based Self-Tuning PID Controllers for Second Order Mechanical Systems. Appl. Sci. 2021, 11, 8002.
  5. Dogru, O.; Velswamy, K.; Ibrahim, F.; Wu, Y.; Sundaramoorthy, A.S.; Huang, B.; Xu, S.; Nixon, M.; Bell, N. Reinforcement learning approach to autonomous PID tuning. Comput. Chem. Eng. 2022, 161, 107760.
  6. Sierra-Garcia, J.E.; Santos, M.; Pandit, R. Wind turbine pitch reinforcement learning control improved by PID regulator and learning observer. Eng. Appl. Artif. Intell. 2022, 111, 104769.
  7. Lee, C.L.; Peng, C.C. Analytic Time Domain Specifications PID Controller Design for a Class of 2nd Order Linear Systems: A Genetic Algorithm Method. IEEE Access 2021, 9, 99266–99275.
  8. Ozana, S.; Docekal, T. PID Controller Design Based on Global Optimization Technique with Additional Constraints. J. Electr. Eng. 2016, 67, 160–168.
  9. Kang, J.; Meng, W.; Abraham, A.; Liu, H. An adaptive PID neural network for complex nonlinear system control. Neurocomputing 2014, 135, 79–85.
  10. Wei, J.; Zhang, L.; Li, Z. Design and implementation of neural network PID controller based on FPGA. Autom. Instrum. 2017, 10, 106–108, 113.
  11. Hernández-Alvarado, R.; García-Valdovinos, L.G.; Salgado-Jiménez, T.; Gómez-Espinosa, A.; Fonseca-Navarro, F. Neural Network-Based Self-Tuning PID Control for Underwater Vehicles. Sensors 2016, 16, 1429.
  12. Bari, S.; Zehra Hamdani, S.S.; Khan, H.U.; Rehman, M.U.; Khan, H. Artificial neural network based self-tuned PID controller for flight control of quadcopter. In Proceedings of the 2019 International Conference on Engineering and Emerging Technologies (ICEET), Lahore, Pakistan, 21–22 February 2019; pp. 1–5.
  13. Nowaková, J.; Pokornỳ, M. Double Expert System for Monitoring and Re-adaptation of PID Controllers. In Innovations in Bio-inspired Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2014; pp. 85–93.
  14. Mitra, P.; Dey, C.; Mudi, R. Fuzzy rule-based set point weighting for fuzzy PID controller. SN Appl. Sci. 2021, 3, 651.
  15. Han, S.-Y.; Dong, J.-F.; Zhou, J.; Chen, Y.-H. Adaptive Fuzzy PID Control Strategy for Vehicle Active Suspension Based on Road Evaluation. Electronics 2022, 11, 921.
  16. Zhou, H.; Chen, R.; Zhou, S.; Liu, Z. Design and analysis of a drive system for a series manipulator based on orthogonal-fuzzy PID control. Electronics 2019, 8, 1051.
  17. Najariyan, M.; Zhao, Y. Granular fuzzy PID controller. Expert Syst. Appl. 2021, 167, 114182.
  18. Zhao, X.; Wang, X.; Ma, L.; Zong, G. Fuzzy approximation based asymptotic tracking control for a class of uncertain switched nonlinear systems. IEEE Trans. Fuzzy Syst. 2019, 28, 632–644.
  19. Mohammadi Doulabi Fard, S.J.; Jafari, S. Fuzzy Controller Structures Investigation for Future Gas Turbine Aero-Engines. Int. J. Turbomach. Propuls. Power 2021, 6, 2.
  20. Rubio, J.D.J.; Angelov, P.; Pacheco, J. Uniformly Stable Backpropagation Algorithm to Train a Feedforward Neural Network. IEEE Trans. Neural Netw. 2011, 22, 356–366.
  21. Kolbusz, J.; Rozycki, P.; Lysenko, O.; Wilamowski, B.M. Error back propagation algorithm with adaptive learning rate. In Proceedings of the 2019 International Conference on Information and Digital Technologies (IDT), Zilina, Slovakia, 25–27 June 2019; pp. 216–222.
  22. Milovanović, M.B.; Antić, D.S.; Milojković, M.T.; Nikolić, S.S.; Perić, S.L.; Spasić, M.D. Adaptive PID control based on orthogonal endocrine neural networks. Neural Netw. 2016, 84, 80–90.
  23. Hong, H.; Ni, L.; Sun, H. Design and simulation of a self-driving precision compass based on BP+PID control. Mech. Des. 2021, 38, 78–84.
  24. Pei, G.; Yu, M.; Xu, Y.; Ma, C.; Lai, H.; Chen, F.; Lin, H. An Improved PID Controller for the Compliant Constant-Force Actuator Based on BP Neural Network and Smith Predictor. Appl. Sci. 2021, 11, 2685.
  25. Wang, Y.; Liu, J.; Li, R.; Suo, X.; Lu, E. Application of PSO-BPNN-PID Controller in Nutrient Solution EC Precise Control System: Applied Research. Sensors 2022, 22, 5515.
  26. Wang, J.; Li, M.; Jiang, W.; Huang, Y.; Lin, R. A Design of FPGA-Based Neural Network PID Controller for Motion Control System. Sensors 2022, 22, 889.
  27. You, D.; Lei, Y.; Liu, S.; Zhang, Y.; Zhang, M. Networked Control System Based on PSO-RBF Neural Network Time-Delay Prediction Model. Appl. Sci. 2023, 13, 536.
  28. Ivanov, O.; Neagu, B.-C.; Grigoras, G.; Gavrilas, M. Optimal Capacitor Bank Allocation in Electricity Distribution Networks Using Metaheuristic Algorithms. Energies 2019, 12, 4239.
  29. Łysiak, A.; Paszkiel, S. A Method to Obtain Parameters of One-Column Jansen–Rit Model Using Genetic Algorithm and Spectral Characteristics. Appl. Sci. 2021, 11, 677.
  30. Bilandžija, D.; Vinko, D.; Barukčić, M. Genetic-Algorithm-Based Optimization of a 3D Transmitting Coil Design with a Homogeneous Magnetic Field Distribution in a WPT System. Energies 2022, 15, 1381.
  31. Kashyap, A.K.; Parhi, D.R. Particle swarm optimization aided PID gait controller design for a humanoid robot. ISA Trans. 2021, 114, 306–330.
  32. Feleke, S.; Satish, R.; Pydi, B.; Anteneh, D.; Abdelaziz, A.Y.; El-Shahat, A. Damping of Frequency and Power System Oscillations with DFIG Wind Turbine and DE Optimization. Sustainability 2023, 15, 4751.
  33. Wang, W.; Tang, R.; Li, C.; Liu, P.; Luo, L. A BP neural network model optimized by mind evolutionary algorithm for predicting the ocean wave heights. Ocean Eng. 2018, 162, 98–107.
  34. Zhang, Y.; Tang, J.; Liao, R.; Zhang, M.; Zhang, Y.; Wang, X.; Su, Z. Application of an enhanced BP neural network model with water cycle algorithm on landslide prediction. Stoch. Environ. Res. Risk Assess. 2021, 35, 1273–1291.
  35. Wang, Q.; Xi, H.; Deng, F.; Cheng, M.; Buja, G. Design and analysis of genetic algorithm and BP neural network based PID control for boost converter applied in renewable power generations. IET Renew. Power Gener. 2022, 16, 1336–1344.
  36. Curry, H.B. The method of steepest descent for non-linear minimization problems. Q. Appl. Math. 1944, 2, 258–261.
  37. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  38. Dhargupta, S.; Ghosh, M.; Seyedali, M.; Ram, S. Selective Opposition based Grey Wolf Optimization. Expert Syst. Appl. 2020, 151, 113389.
Figure 1. Basic structure of a typical back propagation neural network (BPNN).
Figure 2. The change of the convergence factor.
Figure 3. The PID controller structure based on the back propagation neural network.
Figure 4. The process of the total algorithm.
Figure 5. The results of parameter space: (a–f) the parameter spaces of F1–F6, respectively.
Figure 6. The results of convergence curves: (a–f) the convergence curves of F1–F6, respectively.
Figure 7. The results of parameter space: (a–d) the parameter spaces of F8–F11, respectively.
Figure 8. The results of convergence curves: (a–d) the convergence curves of F8–F11, respectively.
Figure 9. The results of control effect: (a) simulation results of BPNN-PID using g(x); (b) BPNN-PID using t(x); (c) BPNN-PID using h(x); (d) conventional PID.
Figure 10. The comparison results of algorithm optimization search on test system 3.
Figure 11. Step response.
Figure 12. The step response results on test system 5: (a) simulation results of BPNN-PID using g(x); (b) BPNN-PID using t(x); (c) BPNN-PID using h(x); (d) traditional PID.
Figure 13. The comparison results of algorithm optimization search on test system 5.
Figure 14. The step response results on test system 6: (a) simulation results of BPNN-PID using g(x); (b) BPNN-PID using t(x); (c) BPNN-PID using h(x); (d) traditional PID.
Figure 15. The comparison results of algorithm optimization search on test system 6.
Table 1. Parameter setups of different algorithms.

| Algorithm | Values of the Parameters |
|---|---|
| GWO | a = 2 (linear reduction during iteration) |
| SOGWO [38] | a = 2 (linear reduction during iteration) |
| PSO | c1 = 2, c2 = 2, wMax = 0.9, wMin = 0.2 |
| DE | F = 0.5, CR = 0.5 |
Table 2. Details of the unimodal test functions.

| Function Name | Expression | Search Space | Dim |
|---|---|---|---|
| F1 | $F_1 = \sum_{i=1}^{n} x_i^2$ | [−100, 100] | 30 |
| F2 | $F_2 = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | [−10, 10] | 30 |
| F3 | $F_3 = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$ | [−100, 100] | 30 |
| F4 | $F_4 = \max_i \{\lvert x_i \rvert,\ 1 \le i \le n\}$ | [−100, 100] | 30 |
| F5 | $F_5 = \sum_{i=1}^{n-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right]$ | [−30, 30] | 30 |
| F6 | $F_6 = \sum_{i=1}^{n} \left(\lfloor x_i + 0.5 \rfloor\right)^2$ | [−100, 100] | 30 |
Table 3. Comparison results of three algorithms on six groups of benchmark functions.

| Base Function | Metric | GWO | ALGWO | PSO |
|---|---|---|---|---|
| F1 | STD | 6.69 × 10⁻⁷⁰ | 0 | 1.40 × 10⁻¹¹ |
| F1 | AVE | 2.49 × 10⁻⁷⁰ | 0 | 5.48 × 10⁻¹² |
| F2 | STD | 3.49 × 10⁻⁴¹ | 0 | 1.01 × 10⁻⁵ |
| F2 | AVE | 3.89 × 10⁻⁴¹ | 3.01 × 10⁻²³⁸ | 7.24 × 10⁻⁶ |
| F3 | STD | 1.57 × 10⁻²⁰ | 0 | 3.47 × 10⁰ |
| F3 | AVE | 8.72 × 10⁻²¹ | 0 | 7.16 × 10⁰ |
| F4 | STD | 2.04 × 10⁻¹⁷ | 0 | 1.20 × 10⁻¹ |
| F4 | AVE | 1.77 × 10⁻¹⁷ | 2.70 × 10⁻²⁰⁶ | 3.90 × 10⁻¹ |
| F5 | STD | 8.78 × 10⁻¹ | 4.90 × 10⁻¹ | 3.46 × 10¹ |
| F5 | AVE | 2.66 × 10¹ | 2.62 × 10¹ | 5.04 × 10¹ |
| F6 | STD | 2.71 × 10⁻¹ | 1.09 × 10⁰ | 1.59 × 10⁻¹¹ |
| F6 | AVE | 3.00 × 10⁻¹ | 3.58 × 10⁻¹ | 8.16 × 10⁻¹² |
Table 4. Details of the multimodal test functions.

| Function Name | Expression | Search Space | Dim |
|---|---|---|---|
| F8 | $F_8 = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | [−500, 500] | 30 |
| F9 | $F_9 = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | [−5.12, 5.12] | 30 |
| F10 | $F_{10} = -20\,e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)} + 20 + e$ | [−32, 32] | 30 |
| F11 | $F_{11} = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | [−600, 600] | 30 |
Table 5. Comparison results of five algorithms on four groups of benchmark functions.

| Base Function | Metric | GWO | ALGWO | SOGWO | PSO | DE |
|---|---|---|---|---|---|---|
| F8 | STD | 8.45 × 10² | 1.26 × 10³ | 7.11 × 10² | 8.35 × 10² | 6.86 × 10² |
| F8 | AVE | −5.98 × 10³ | −5.98 × 10³ | −6.44 × 10³ | −6.80 × 10³ | −5.97 × 10³ |
| F9 | STD | 0 | 0 | 0 | 8.22 × 10⁰ | 1.57 × 10¹ |
| F9 | AVE | 0 | 0 | 0 | 3.47 × 10¹ | 2.57 × 10² |
| F10 | STD | 1.49 × 10⁻¹⁵ | 0 | 2.39 × 10⁻¹⁵ | 3.59 × 10⁻⁶ | 7.20 × 10⁻¹ |
| F10 | AVE | 1.43 × 10⁻¹⁴ | 4.44 × 10⁻¹⁵ | 1.40 × 10⁻¹⁴ | 2.56 × 10⁻⁶ | 1.72 × 10⁻¹ |
| F11 | STD | 0 | 0 | 0 | 7.30 × 10⁻³ | 2.67 × 10⁰ |
| F11 | AVE | 0 | 0 | 0 | 7.10 × 10⁻³ | 3.63 × 10⁻¹ |
Table 6. The comparison of final fitness of algorithms on test system 3.

| Algorithm | Metric | g(x) | t(x) | h(x) | PID |
|---|---|---|---|---|---|
| GWO | AVE | 277.2945 | 295.4016 | 280.0701 | 301.2233 |
| GWO | STD | 13.5263 | 13.4028 | 13.5383 | 0.2515 |
| ALGWO | AVE | 275.7396 | 280.0453 | 278.7984 | 301.0937 |
| ALGWO | STD | 2.4216 | 12.3446 | 11.8726 | 0.2811 |
Table 7. Performance characteristics on test system 4.

| Algorithm | Overshoot (%) | Rising Time (s) | Settling Time (s) | Fitness |
|---|---|---|---|---|
| GWO and g(x) | 0.62 | 3 | 3 | 1.0651 |
| GWO and t(x) | 0 | 2 | 4 | 1.0234 |
| GWO and h(x) | 0 | 2 | 2 | 1.0978 |
| GWO and PID | 10 | 2 | 3 | 11.9876 |
| ALGWO and g(x) | 0.02 | 2 | 3 | 1.0360 |
| ALGWO and t(x) | 0 | 2 | 4 | 1.0238 |
| ALGWO and h(x) | 0 | 2 | 2 | 1.0804 |
| ALGWO and PID | 6 | 2 | 4 | 11.8913 |
Table 8. The comparison of final fitness of algorithms on test system 5.

| Algorithm | Metric | g(x) | t(x) | h(x) | PID |
|---|---|---|---|---|---|
| GWO | AVE | 24.9222 | 25.0190 | 24.8169 | 27.0366 |
| GWO | STD | 0.6144 | 0.7097 | 0.6658 | 0.0773 |
| ALGWO | AVE | 24.1989 | 23.8964 | 24.8090 | 27.0068 |
| ALGWO | STD | 0.1029 | 0.1883 | 0.6113 | 0.0245 |
Table 9. Performance characteristics on test system 5.

| Algorithm | Overshoot (%) | Rising Time (s) | Settling Time (s) | Fitness |
|---|---|---|---|---|
| GWO and g(x) | 0.06 | 34 | 41 | 24.2989 |
| GWO and t(x) | 0 | 38 | 49 | 25.0178 |
| GWO and h(x) | 0 | 35 | 44 | 24.7274 |
| GWO and PID | 0.18 | 41 | 52 | 26.9809 |
| ALGWO and g(x) | 0 | 35 | 45 | 24.2034 |
| ALGWO and t(x) | 0 | 34 | 44 | 23.9817 |
| ALGWO and h(x) | 0 | 30 | 36 | 24.2842 |
| ALGWO and PID | 0.11 | 41 | 53 | 26.9800 |
Table 10. The comparison of final fitness of algorithms on test system 6.

| Algorithm | Metric | g(x) | t(x) | h(x) | PID |
|---|---|---|---|---|---|
| GWO | AVE | 40.8749 | 46.6955 | 41.2529 | 56.4116 |
| GWO | STD | 0.6185 | 8.8126 | 1.2789 | 0 |
| ALGWO | AVE | 40.2174 | 41.1174 | 40.5169 | 49.2536 |
| ALGWO | STD | 0.5555 | 0.6698 | 0.6763 | 6.6099 |