Article

Automatizing Automatic Controller Design Process: Designing Robust Automatic Controller under High-Amplitude Disturbances Using Particle Swarm Optimized Neural Network Controller

by
Celal Onur Gökçe
Software Engineering, Afyon Kocatepe University, Afyonkarahisar 03200, Türkiye
Appl. Sci. 2024, 14(17), 7859; https://doi.org/10.3390/app14177859
Submission received: 10 August 2024 / Revised: 2 September 2024 / Accepted: 3 September 2024 / Published: 4 September 2024
(This article belongs to the Special Issue Applications of Data Science and Artificial Intelligence)

Featured Application

The main featured applications of this study are the automatic control of electrical motors and of robotic systems actuated by electrical motors, but they are not the only ones: any closed-loop automatic control system with known plant dynamics can be an application of this study.

Abstract

In this study, a novel approach to designing automatic control systems with the help of AI tools is proposed. Given the plant dynamics, expected references, and expected disturbances, the design of an optimal neural network-based controller is performed automatically. Several common reference types are studied, including step, square, sine, sawtooth, and trapezoid functions. Expected reference–disturbance pairs are used to train the system to find the optimal neural network controller parameters. A separate test set of unexpected reference–disturbance pairs is used to show the generalization performance of the proposed system. The parameters of a real DC motor, estimated using a particle swarm optimization (PSO) algorithm, are used to test the proposed approach. Initially, a proportional–integral (PI) controller is designed, using a PSO algorithm to find the simple controller's parameters optimally and automatically. Starting from the neural network equivalent of the optimal PI controller, the optimal neural network controller is then designed, again using a PSO algorithm for training. Simulations are conducted with the estimated parameters for a diverse set of training and test patterns. The results are compared with the optimal PI controller's performance and reported in the corresponding section. Encouraging results are obtained, suggesting further research in the proposed direction. For low-disturbance scenarios, even simple controllers can have acceptable performance; the real quality of a proposed controller should be shown under high-amplitude and difficult disturbances, which is the case in this study. The proposed controller shows higher performance, especially under high disturbances, with an 8.6% reduction in error rate on average compared with the optimal PI controller; under high-amplitude disturbances, the performance difference is more than 2.5-fold.

1. Introduction

1.1. Automatic Control Concepts

Automatic control is concerned with achieving a desired performance for dynamic systems [1]. A dynamic system under control is called a plant. The output of the plant is measured with some kind of sensor and compared with the desired output, which is given as the reference input. The difference between the desired output and the actual output is called the error signal and is the main information used in most automatic control systems. The block diagram of a typical automatic control system is shown in Figure 1 below.
The aim of the controller is to minimize the error signal, which is the difference between the reference and the output of the plant. In order to design the controller, a mathematical model of the plant is derived first. Plants generally fall into one of two main classes: linear and nonlinear. Linear plants are easier to control, since their mathematical models are linear differential equations, which are much easier to analyze mathematically.
Disturbance, also known as external disturbance, is a very important concept for automatic control systems. In order to have a practically useful working system, the controller must be robust to external disturbances, especially expected ones. An important parameter of performance is disturbance rejection, which represents the performance of a system under expected disturbances.
Automatic control systems date back to the 19th century [2]. During World War I and especially World War II, significant advances were made in automatic control applications. With the advances in computer technology, modern control research accelerated and various types of applications emerged. These applications include motor control [3,4,5,6,7], robot manipulator control [8,9,10,11,12,13,14], wheeled mobile robot control [15,16,17,18], legged mobile robot control [19,20], and quadcopter control [21,22], among many others.
Automatic control under low-disturbance scenarios is a well-studied problem [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]. The challenging open areas for further research concern difficult scenarios with high-amplitude disturbances, which drive the system to the limits of its capabilities.

1.2. Intelligent Control Systems

Artificial intelligence (AI) has two main schools of thought: classical AI and machine learning. In classical AI, knowledge is hard-coded into systems by domain experts in the form of domain rules. This approach requires extensive human effort to encode the domain knowledge into the computer and is mainly restricted to well-structured applications. In the more recent machine learning (ML) approach, knowledge is learnt automatically from data by the computer.
In ML, there are several types of problems. In supervised learning, the training data are labeled by humans and the computer learns the relations between the data and the labels. In unsupervised learning, the training data are not labeled, so the computer tries to find clusters of data with similar properties by some criteria. In reinforcement learning, there is no label for data but only partial knowledge about the consequences of actions as positive or negative, so that the computer can learn the best actions leading to most positive results.
One important concept in ML is the complexity of the system model that the computer uses to map data to classes or clusters. Selecting the optimal model complexity is important because, if the selected model is too simple, it cannot model the underlying process in enough detail. On the other hand, if the selected model is too complex, it can learn the noise in the training samples instead of the parameters of the underlying process. This phenomenon is called overfitting and is a major concern in solutions to ML problems. A system with optimal complexity has high generalization performance, that is, high performance on unseen data, which is another important ML concept.
Intelligent control systems utilizing AI, especially ML, are extensively studied in the literature. In particular, fuzzy controllers and neural network-based controllers are utilized to increase the performance and robustness of the systems [17,18,19,20,21,22,23,24,25]. In [17], the speed control of non-holonomic robots is achieved through adaptive gain scheduling and fuzzy multi-model intelligent control and two control strategies are compared. Both implementations are found to have satisfactory performance, while that of the intelligent controller is found to be higher. In [23], the fuzzy intelligent control of servo press systems is studied. Step response experiments and sine tracking experiments are performed and the results are reported. In [24], intelligent control research for power systems is reviewed. Studies using artificial neural networks (ANNs), fuzzy logic, expert systems, and evolutionary methods are reported. In [25], an innovative optimal robust control method for mobile robots is proposed. The kinematic model of a wheeled mobile robot is derived and a Lyapunov function is used for ensuring control stability. In [26], the position tracking control of mobile robots is accomplished using neural network-based fractional order sliding mode control. A radial basis function neural network is used to deal with nonlinear dynamics.

1.3. DC Motor Control

Since the effective control of a DC motor is crucial for the performance of many robotic and automation systems, it is widely studied in the literature. In [27], a fuzzy-based predictive PID controller for controlling the speed of a DC motor is proposed. The mathematical model of a DC motor is derived using the corresponding physical laws. The three stages of a fuzzy controller (fuzzification, inference, and defuzzification) are implemented as components of the controller. The fuzzy-based PID controller is combined with a receding horizon controller, and it is shown that the performance of the proposed controller is higher than that of the PID controller, the fuzzy-based PID controller, and the receding horizon controller implemented separately. In [28], a nonlinear PI controller for the speed control of a DC motor is proposed. An exponential block placed in front of the PI controller introduces nonlinearity into the control system, improving controller performance. The salp swarm optimization algorithm [29] is used to tune the parameters of the proposed controller. Experiments are conducted on a real DC motor coupled with a real DC generator for disturbance generation, and the results are reported. In [30], sliding mode control is used to improve the performance of a DC servo motor. The closed-loop control system succeeds in achieving first-order system performance, and the maximum overshoot is reduced.
As with general automatic control systems, controlling DC motors under low disturbances is a well-studied problem [27,28,29,30]. The challenging scenarios are mainly those involving difficult, high-amplitude disturbances. The disturbance amplitude should be increased up to the point that the system can theoretically handle, which reveals the limits of the system's capabilities. This approach is taken in this study, which makes it original and distinguishes it from most studies in the literature.
This study aims to develop a novel automatic design technique for the automatic control of dynamic systems. Given the plant dynamics (the parameters of the plant to be controlled), the expected references (provided by the user or by a higher-level intelligent part of the control system), and the expected external disturbances to which the system will be exposed, the design of the automatic controller is carried out automatically by the proposed architecture. A human design engineer only needs to enter the required information about the system and its working environment, and the proposed system designs the automatic controller automatically. A proof-of-concept demonstration is performed using a DC motor plant as an example, but the proposed approach can be applied to any automatic control system.
The DC motor is a popular electrical actuator, used in robotic systems in particular. Its ease of control due to its simplicity makes it preferable in many industrial applications. Robot arms, mobile robots, CNC machines, 3D printers, and much other electromechanical equipment can use DC motors as actuators. It is worth noting that DC motors are generally used with gearboxes to reduce speed and increase torque. For speed and position sensing, many options are available, the optical encoder being among the most popular. A block diagram of a typical DC motor is given in Figure 2 below.
As shown in Figure 2, the DC motor mainly consists of three parts: electrical, mechanical, and the electromagnetic coupling between them. The electrical part consists of the armature resistance and armature inductance. The mechanical part consists of the rotor inertia and the viscous friction due to the bearings. In between, there is an electromagnetic coupling due to the back-emf voltage, denoted by eb.
The mathematical model of the DC motor is derived using corresponding physical laws, as shown in the equations below.
V_a = R_a i_a + L_a \frac{di_a}{dt} + e_b,
e_b = K_\omega \omega,
T_m = K_m i_a,
T_m = J \frac{d\omega}{dt} + b \omega + T_l,
where Va is armature voltage, ia is armature current, Ra is armature resistance, La is armature inductance, eb is back-emf voltage, Kω is back-emf constant, Km is motor torque constant, ω is rotor angular speed, Tm is motor torque, J is rotor inertia, b is viscous friction constant, and Tl is load torque.
Organizing these four equations and making the necessary substitutions, we obtain a second-order system from armature voltage input to rotor angular speed output, given by the following equation:
\frac{\Omega(s)}{V_a(s)} = \frac{K_m}{L_a J s^2 + (R_a J + b L_a) s + (K_m K_\omega + R_a b)}
Note that the brushed DC motor with second-order dynamics is inherently linear with linear differential equations in its mathematical model. Plants with linear dynamics can be controlled with classical controllers using analytical design techniques explained in control theory textbooks, but these analytical analysis and design techniques assume ideal conditions with little or no external disturbance.
Making the additional assumption that La << Ra, that is, that the armature inductance is much smaller than the armature resistance, which is valid for most practical motors, we obtain a first-order system from voltage input to angular speed output, represented by the following transfer function:
H(s) = \frac{1}{c_1 s + c_0},
which has only two parameters: the constant coefficients c1 and c0.
Note that, even if the armature inductance La is not ignored, the system has second-order dynamics with a very small highest-order coefficient, which practically resembles the dynamics of a first-order system. So, ignoring the small highest-order coefficient does not affect the performance results of the full system.
A real DC motor's parameters are estimated through parameter estimation experiments, and the following values are found using the PSO algorithm, the details of which are provided in Section 2.3 below:
c1 = 0.00253256, c0 = 0.09365946
The real DC motor’s parameters are used to simulate the system more realistically in the simulation experiments. This leads to more helpful simulation results, which give insight into the performance of real systems to be implemented in the future.
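As a quick sanity check on these values (a sketch, not the author's code), the time constant and DC gain of the identified first-order model H(s) = 1/(c1·s + c0) follow directly from c1 and c0, and confirm that the 1 ms sampling time used later is well below the plant's time constant:

```python
# Quick sanity check (a sketch, not the author's code): the identified
# first-order model H(s) = 1/(c1*s + c0) has time constant c1/c0 and DC
# gain 1/c0, so the 1 ms sampling time is well below the time constant.
c1 = 0.00253256
c0 = 0.09365946

tau = c1 / c0     # time constant, about 0.027 s (27 ms)
gain = 1.0 / c0   # steady-state speed per unit input, about 10.7
```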
The parameters of the optimal PI controller are also determined using the PSO algorithm and are found as follows:
Kp = 1.5, Ki = 4980.0
Note that the standard PSO algorithm is used to find the optimal parameters of the PI controller in a two-dimensional parameter space, which yields relatively stable results; the algorithm converges to the optimal values within 100 epochs.
The setup used to estimate the parameters of the real DC motor is shown in Figure 3 below.
Note that a real DC motor system mainly consists of three parts: the DC motor itself, the optical encoder measuring the speed of the motor, and the gear reduction mechanism increasing the output torque.
An optical encoder subsystem consists of a rotating disc with holes in it, an infrared (IR) transmitter, and an IR receiver. A close-up photo of the optical encoder used in this study is shown in Figure 4 below.
The IR transmitter sends continuous IR light to the IR receiver. Between the IR transmitter and the IR receiver, there is a rotating disc with a known number of holes in it. The rotating disc is connected to the shaft of the DC motor, so it rotates at the same speed as the motor. As the disc rotates, the IR light falling on the IR receiver is modulated into a square wave whose period depends on the speed of the motor. By measuring the period of this square wave, the speed of the motor is obtained.
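The period-to-speed conversion can be sketched as follows (the hole count and the measured period are assumed example values, not taken from the paper):

```python
import math

# Sketch of the period-to-speed conversion (the hole count and the measured
# period are assumed example values, not from the paper). One hole passing
# produces one pulse, so a full revolution takes holes_per_rev pulses.
def encoder_speed(pulse_period_s, holes_per_rev=20):
    """Angular speed in rad/s from the square-wave pulse period."""
    return 2.0 * math.pi / (holes_per_rev * pulse_period_s)

w = encoder_speed(0.005)   # 5 ms pulse period with a 20-hole disc
# 20 pulses of 5 ms each -> 0.1 s per revolution -> about 62.8 rad/s
```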
The gear reduction mechanism is important in many robotic and automation systems in order to obtain the required torque to actuate the system. The output of the gear reduction system is generally connected to the load, which depends on the application. For robot arms, this may be joints connecting each sequential link. For wheeled mobile robots, wheels may be connected to the output shaft of the gear reduction mechanism. For walking mobile robots, the joints of the legs may again be connected to the shaft. Note that the reduction ratio affects the output torque and depends on the application requirements.
In the above-mentioned applications of DC motors, loads vary in time as the system operates. The load to each DC motor is assumed to be external disturbance in this study. Several external disturbance types are applied to the system and the results are observed and reported below.

2. Materials and Methods

2.1. Proposed System

Given plant dynamics, expected references, and expected disturbances, an optimal neural network-based controller is automatically designed using the proposed system. The optimal neural network-based controller is composed of two parts and is shown in Figure 5 below.
The first part is the optimal PI controller whose parameters are found using the PSO algorithm. The second part is used for fine-tuning of the system and has a single-layer neural network structure. The input–output relations of the two controllers are as follows:
u_{c,pi} = K_p e + K_i \int e \, dt,
where e is the error signal, which is the difference between the reference and the output.
u_{c,ann} = \sum_{i=1}^{t_{past}} w_{1i}\,\omega_i + \sum_{j=1}^{t_{past}} w_{2j}\,\omega_{ref,j} + \sum_{k=1}^{t_{future}} w_{3k}\,\omega_{ref,k},
Note that the system is single-input single-output (SISO), although the subindices of the system output and the reference values above may suggest a multi-output system at first sight. The subindices i, j, and k in the above equation are time indices of the same single output and reference taken at different times.
Here, tpast and tfuture are the past and future window sizes in units of the sampling time, the w values are neural network weights, ω is the plant output, and ωref is the reference.
Note that this structure is theoretically equivalent to a controller with only the neural network part, but it is observed that a purely neural network controller leads to difficult weight tuning with sometimes unstable results. So, the controller is divided into two parallel parts, with the optimal PI controller addressing the coarse error signal and the neural network part fine-tuning the controller output.
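The parallel structure can be sketched as follows (illustrative code, not the author's implementation; the PI gains are the PSO-tuned values reported above, and the ±12 V saturation is the actuator limit used in the simulations):

```python
import numpy as np

# Sketch of the two-part parallel controller (illustrative, not the author's
# code). The PI gains and the +/-12 V saturation come from the paper; the
# weight vector passed in would be the PSO-trained one.
KP, KI = 1.5, 4980.0   # optimal PI gains found by PSO
DT = 0.001             # 1 ms sampling time

class ParallelController:
    def __init__(self, weights):
        self.w = np.asarray(weights, dtype=float)  # neural network weights
        self.integral = 0.0

    def output(self, error, nn_inputs):
        # Coarse part: discrete PI controller on the error signal.
        self.integral += error * DT
        u_pi = KP * error + KI * self.integral
        # Fine-tuning part: single-layer linear network (weighted sum of the
        # windowed measurements and references).
        u_ann = float(self.w @ np.asarray(nn_inputs, dtype=float))
        # Saturate to the actuator limits used in the simulations.
        return float(np.clip(u_pi + u_ann, -12.0, 12.0))

ctrl = ParallelController(np.zeros(123))
u = ctrl.output(1.0, np.zeros(123))   # pure PI action on a unit error
```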
Neural networks have several advantages over their alternatives and are widely used in various applications. The number of parameters, which determines the complexity of the neural network, can be adjusted very conveniently by simply changing the number of neurons. This gives the designer the ability to adapt the complexity of the controller to what the application requires; hence, a neural controller is chosen in this study. Note that the proposed neural network controller has many more parameters than the optimal PI controller, so it can be used under more versatile disturbance conditions, as shown in this study.
It is shown in [31] that the complexity of a neural network is mainly related to the number of neurons but not to the number of layers. So, the simplest topology of single-layer linear neural networks is chosen in this study. The performance of the nonlinear multilayer neural network controller may be investigated in detail in future studies.
The proposed controller is designed with the following steps:
(1)
Choose limits of controller output;
(2)
Design an optimal PI controller using the PSO algorithm;
(3)
For fine-tuning, design a neural network controller with the PSO algorithm.

2.2. Past and Future Windows

Input to the neural network-based controller is divided into past, current, and future references and past and current measurements, as shown in Figure 6 below. Two time windows with different scales are used to capture the inputs to the system. The past window is larger than the future window. The past window holds both reference and measurement values, while the future window holds only reference values, since future measurements cannot be known in advance. A fixed sampling time of 1 ms is used for all experiments. As an example, with a past window size of 50 ms and a future window size of 20 ms, the total number of inputs to the neural network is 123. The sizes of the windows can be changed if required by the application, and the optimal window sizes are another research topic for future studies.
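One plausible way to assemble the 123 network inputs is sketched below (the exact ordering and the split between the groups are assumptions; with 1 ms sampling, 51 past-and-current measurements, 51 past-and-current references, and 21 current-and-future references sum to 123):

```python
import numpy as np

# One plausible assembly of the network input vector (the exact ordering and
# group sizes are assumptions): 51 past+current measurements, 51 past+current
# references, and 21 current+future references give 51 + 51 + 21 = 123.
T_PAST, T_FUTURE = 50, 20   # window sizes in samples (1 ms sampling)

def build_inputs(meas_hist, ref_hist, ref_future):
    """meas_hist, ref_hist: latest T_PAST+1 samples (oldest first);
    ref_future: current reference plus the next T_FUTURE reference samples."""
    assert len(meas_hist) == T_PAST + 1
    assert len(ref_hist) == T_PAST + 1
    assert len(ref_future) == T_FUTURE + 1
    return np.concatenate([meas_hist, ref_hist, ref_future])

x = build_inputs(np.zeros(51), np.ones(51), np.ones(21))
# x has 123 elements, matching the size of the trained weight vector.
```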

2.3. Training Neural Networks with PSO

The optimal weights of the neural network are found using the PSO algorithm. Each reference–disturbance pair in the training set is input to the system and experiments are conducted. The performance of the system is measured using the absolute total error metric.
PSO is an iterative optimization algorithm [32]. PSO is one of the most successful and popular iterative optimization algorithms used in the literature in various applications. Hence, it is chosen as the optimizer of the neural network controller. The vector variable to be optimized is represented by several particles. Particles are initialized randomly from a distribution. Each particle also has a velocity. Position update is simply conducted by adding the velocity of the particle to the previous position. In each iteration, the velocity of each particle is updated according to a well-defined rule. The velocity update rule includes three components. The first component is calculated using the previous velocity of the particle and represents momentum in a sense. The second component is calculated using the particle’s known best point up to that iteration. The third component is calculated using the global known best point of all particles. In this way, randomly initialized particles search through parameter space for an optimum point in a systematic way. Below are the formulas for PSO updates in each iteration.
v[n] = c_1 v[n-1] + c_2 (p[n-1] - x[n-1]) + c_3 (g[n-1] - x[n-1]),
x[n] = x[n-1] + v[n],
where x is the particle's position, v is the particle's velocity, p is the particle's best-known point up to the current iteration, g is the global best point of all particles up to the current iteration, and the c values are constants multiplied by random numbers.
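These update rules can be sketched end to end on a toy cost function (a minimal illustration, not the paper's training code; the swarm size, iteration count, and coefficient values are placeholders, and, as in standard PSO, only the cognitive and social terms carry random factors here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal PSO sketch on a toy cost (sum of squares). Swarm size, iteration
# count, and the c coefficients are placeholder values, not the paper's
# settings; only the cognitive and social terms carry random factors here.
def pso_minimize(cost, dim, n_particles=30, iters=200, c1=0.7, c2=1.5, c3=1.5):
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # random initial positions
    v = np.zeros_like(x)                            # initial velocities
    p = x.copy()                                    # per-particle best points
    p_cost = np.array([cost(xi) for xi in x])
    g = p[p_cost.argmin()].copy()                   # global best point
    for _ in range(iters):
        r2 = rng.random(x.shape)
        r3 = rng.random(x.shape)
        # Velocity update: momentum + cognitive + social components.
        v = c1 * v + c2 * r2 * (p - x) + c3 * r3 * (g - x)
        x = x + v                                   # position update
        costs = np.array([cost(xi) for xi in x])
        better = costs < p_cost                     # refresh particle bests
        p[better], p_cost[better] = x[better], costs[better]
        g = p[p_cost.argmin()].copy()               # refresh global best
    return g

best = pso_minimize(lambda z: float(np.sum(z * z)), dim=3)
# best should land close to the true minimum at the origin.
```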

2.4. Discretization and Implementation of System

Discretization and simulation of the system is implemented using the trapezoid rule for differentiation [33]. So, the derivative of the state variable is calculated from the mathematical model of the system and the state variable, which is the speed of the motor, is calculated using the following formula iteratively:
\omega[n] = \omega[n-1] + \Delta t \, \frac{d\omega}{dt},
Here, Δt is set to 1 millisecond, much smaller than the time constant of the plant, in order to minimize discretization errors.
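The iterative update can be sketched as follows (illustrative code, not the author's simulation), applying it to the identified first-order plant model:

```python
# Sketch of the iterative state update applied to the identified plant model
# c1*dw/dt + c0*w = u (illustrative, not the author's simulation code).
C1, C0 = 0.00253256, 0.09365946
DT = 0.001   # 1 ms step, well below the plant time constant c1/c0

def simulate_plant(u_seq, w0=0.0):
    """Iterate w[n] = w[n-1] + dt*(dw/dt) over an input sequence u_seq."""
    w, hist = w0, []
    for u in u_seq:
        dw_dt = (u - C0 * w) / C1   # derivative from the plant model
        w = w + DT * dw_dt          # the iterative update above
        hist.append(w)
    return hist

out = simulate_plant([1.0] * 300)
# For a constant unit input, the speed settles near the DC gain 1/c0.
```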

3. Results

Five types of functions are used for references and disturbances: step, square, sine, sawtooth, and trapezoid. Each function is shown in Figure 7 below.
For the step signal, there is only one parameter: the amplitude. For the square, sine, and sawtooth signals, there are two parameters: amplitude and frequency. For the trapezoid signal, there are three parameters: maximum, acceleration, and deceleration.
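The five waveforms can be generated, for example, as follows (a sketch; the sign conventions and the extra trapezoid timing parameter t_flat_end are assumptions, since the paper parameterizes the trapezoid only by its maximum, acceleration, and deceleration):

```python
import math

# Sketch of the five waveform types (sign conventions and the extra trapezoid
# timing parameter t_flat_end are assumptions; the paper parameterizes the
# trapezoid only by maximum, acceleration, and deceleration).
def step(t, amp):
    return amp

def square(t, amp, freq):
    return amp if math.sin(2 * math.pi * freq * t) >= 0 else -amp

def sine(t, amp, freq):
    return amp * math.sin(2 * math.pi * freq * t)

def sawtooth(t, amp, freq):
    frac = (t * freq) % 1.0        # position within the current period
    return amp * frac

def trapezoid(t, vmax, acc, dec, t_flat_end):
    # Hypothetical profile: ramp up at 'acc', hold vmax, then ramp down at
    # 'dec' after t_flat_end (t_flat_end is an assumed extra parameter).
    if t * acc < vmax:
        return t * acc                                   # acceleration phase
    if t < t_flat_end:
        return vmax                                      # constant phase
    return max(0.0, vmax - (t - t_flat_end) * dec)       # deceleration phase
```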
For training, ten different reference signals and ten different disturbance signals are used, making one hundred reference–disturbance pairs in total. The reference and disturbance signal parameters are given in Table 1 and Table 2, respectively.
Waveforms of reference and disturbance signals are taken to represent the most common signal patterns encountered in control systems. Step and sine references are common for regulation and tracking systems, respectively. Step and sine disturbances represent two main disturbance types: constant disturbance and time-varying disturbance. For these reasons, the training set is constructed with these reference–disturbance pairs.
A particle swarm of size 100 is used to train the neural network-based controller for 1000 epochs. The optimal neural weight vector, denoted g below, of size 123, is obtained with the following values:
g (optimal neural weight vector) = [[
−2.00956934 × 10−2, 2.28251818 × 10−2, 3.66802550 × 10−3, 4.68787594 × 10−3, 1.10714396 × 10−2,
−1.25274995 × 10−3, −1.10151555 × 10−2, 7.51440490 × 10−2, −1.35266461 × 10−2, −2.01238715 × 10−2,
5.46645054 × 10−2, −6.47748694 × 10−3, 2.40590535 × 10−2, 1.97057222 × 10−2, 5.17743275 × 10−2,
1.81642138 × 10−2, 4.33055085 × 10−2, 1.74264836 × 10−2, −4.58121589 × 10−3, 1.42641135 × 10−2,
−5.47776784 × 10−4, −1.76951475 × 10−3, 1.28168151 × 10−2, 2.82062853 × 10−2, −1.77549715 × 10−3,
1.81045855 × 10−2, 5.28164547 × 10−2, 2.44388379 × 10−2, −2.53992993 × 10−2, 2.23594722 × 10−5,
−2.43687957 × 10−2, −7.82207853 × 10−2, −1.04065066 × 10−2, −1.26254889 × 10−1, 3.25520224 × 10−3,
1.74701081 × 10−2, −4.47201247 × 10−3, 1.54930364 × 10−3, −2.17759436 × 10−2, 1.09310672 × 10−1,
1.88470285 × 10−1, 1.53151263 × 10−1, 2.12573135 × 10−1, 5.96647710 × 10−2, −6.36103418 × 10−3,
−1.65250932 × 10−3, 4.89734504 × 10−2, −4.26310941 × 10−3, 1.64341826 × 10−3, −5.74545106 × 10−5,
−6.14562688 × 10−1, −8.89053434 × 10−3, −8.76923913 × 10−4, 2.78282565 × 10−2, −7.14593811 × 10−2,
2.30802202 × 10−2, 4.75389489 × 10−2, 3.08541814 × 10−3, −1.33222498 × 10−3, 4.06371354 × 10−3,
−4.34424866 × 10−2, 9.25887701 × 10−2, −1.17017702 × 10−3, −3.28774384 × 10−2, 5.91127735 × 10−3,
−6.76456032 × 10−2, 3.54275769 × 10−3, −2.80618073 × 10−2, −8.29980821 × 10−2, −4.68078831 × 10−2,
−7.70450798 × 10−3, −5.71033232 × 10−4, −1.97019313 × 10−1, 4.10303154 × 10−3, 5.82616963 × 10−5,
−9.41320891 × 10−3, 1.49802579 × 10−1, 3.78894107 × 10−3, −5.41212802 × 10−1, −2.03641023 × 10−1,
6.36523382 × 10−4, −4.50341962 × 10−2, −3.60406881 × 10−1, −3.54286361 × 10−2, −1.45335544 × 10−2,
−8.99566061 × 10−2, 1.89285696 × 10−1, 2.08332606 × 10−1, −5.91208362 × 10−3, 2.05832348 × 10−2,
−2.71000803 × 10−2, 1.92909940 × 10−2, 9.39732840 × 10−1, −4.22809155 × 10−2, −1.97117429 × 10−2,
−2.09135482 × 10−2, −1.09053803 × 10−2, 1.60087125 × 10−3, −1.12127972 × 10−2, 1.26364099 × 10−1,
−1.46189620 × 10−3, 4.01717382 × 10−2, −3.45090396 × 10−1, 1.69375043 × 10−2, 1.18870052 × 10−3,
1.29719444 × 10−2, 4.57404515 × 10−2, 6.09297038 × 10−2, −1.67111878 × 10−2, 2.74538418 × 10−2,
7.76283972 × 10−2, −9.87016593 × 10−3, 9.83296904 × 10−3, 9.26160487 × 10−3, −1.33904678 × 10−1,
−4.46698801 × 10−2, −6.22031478 × 10−2, −2.08484381 × 10−2, −1.36635522 × 10−3, 2.37264064 × 10−2,
8.05530251 × 10−2, 6.80355357 × 10−2, −1.95685395 × 10−2]]
Note that the cost function used is the average absolute error, that is, the average absolute difference between the reference signal and the measured output. The formula for the cost function is the following:
\mathrm{cost} = \frac{1}{N} \sum_{n=1}^{N} \left| \omega_{ref}[n] - \omega[n] \right|,
where N is the total number of samples in the experiment and n is the sample's time index.
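This cost can be computed directly; a minimal sketch with illustrative variable names:

```python
# The average-absolute-error cost above, computed directly (illustrative
# variable names):
def average_absolute_error(w_ref, w):
    assert len(w_ref) == len(w)
    return sum(abs(r - x) for r, x in zip(w_ref, w)) / len(w)

c = average_absolute_error([1.0, 2.0, 3.0], [1.0, 1.5, 3.5])
# (0 + 0.5 + 0.5) / 3 = 1/3
```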
The proposed system is compared with the optimal PI controller tuned with the PSO algorithm, since PSO tuning has been shown in the literature to perform better than tuning with the Ziegler–Nichols method. Therefore, showing that the proposed algorithm outperforms the PSO-tuned optimal PI controller also ensures, by transitivity, higher performance than a PI controller tuned with the Ziegler–Nichols method.
The average error for the optimal PI controller on the training set is obtained to be 1.5201955398258502 rad/s. The average error for the optimal neural network-based controller on the training set is reduced to 1.3890762003883137 rad/s. Hence, an approximately 8.6 percent reduction in average error is observed for the training set.
Note that the training set corresponds to the expected reference–disturbance pairs. If the expected reference–disturbance pairs are estimated correctly, the resulting controller will be more robust.
Below are some examples of system outputs of PI and neural network-based controllers for several reference–disturbance pairs.
In Figure 8 below, the result of the experiment run using the following reference–disturbance pair is shown.
Training reference type = step
Training reference amplitude = 100.0
Training reference frequency = 0.0
Training reference acceleration = 0.0
Training reference deceleration = 0.0
Training disturbance type = sine
Training disturbance amplitude = 3.0
Training disturbance frequency = 20.0
Training disturbance acceleration = 0.0
Training disturbance deceleration = 0.0
In this example configuration, the neural network-based controller has a clearly higher performance than the optimal PI controller with the following costs:
Cost PI = 11.036012620309064
Cost ANN = 4.474398851829462
In another scenario with the following reference–disturbance pairs, the neural network-based controller has a higher performance than the optimal PI controller. The result is shown in Figure 9 below.
Training reference type = sine
Training reference amplitude = 100.0
Training reference frequency = 3.0
Training reference acceleration = 0.0
Training reference deceleration = 0.0
Training disturbance type = step
Training disturbance amplitude = 3.0
Training disturbance frequency = 0.0
Training disturbance acceleration = 0.0
Training disturbance deceleration = 0.0
Cost PI = 4.365012134434239
Cost ANN= 3.639447860741749
These results show that, for the expected reference–disturbance pairs, the neural network-based controller has a clearly higher performance than the optimal PI controller.
An important concept in artificial intelligence is generalization performance, which is the performance of the system on examples unseen during training. So, in order to test the generalization performance, new examples different from the training set must be shown to the system. This new set is called the test set and contains samples different from those of the training set. The reference and disturbance signal values for the test set used in this study are given in Table 3 and Table 4 below.
Note that some reference signals used in the test set are the same as those in the training set, but this does not violate the dissimilarity of the reference–disturbance pairs. What matters is that each reference–disturbance pair differs between the training set and the test set, and indeed all the reference–disturbance pairs in the training set are different from every reference–disturbance pair used in the test set.
The average error for the optimal PI controller on the test set is obtained to be 53.30442198786117 rad/s. The average error for the optimal neural network-based controller on the test set is reduced to 52.893195124124276 rad/s. A slight reduction in error is thus observed for unexpected reference–disturbance pairs. This small improvement is not crucial, since the main objective is to automatically design a controller for the expected reference–disturbance pairs, but it gives the proposed system an extra advantage and shows that it has better generalization performance than the optimal PI controller.
Note that the performance of the proposed controller is very high in the low-disturbance experiments, as shown in Figure 10 and Figure 11 below.
Two representative training-set experiments and their costs are as follows.

Experiment 1: training reference: step, amplitude 50.0, frequency 0.0, acceleration 0.0, deceleration 0.0; training disturbance: step, amplitude 1.0, frequency 0.0, acceleration 0.0, deceleration 0.0.
Cost PI = 1.234106
Cost ANN = 1.189515

Experiment 2: training reference: sine, amplitude 50.0, frequency 2.0, acceleration 0.0, deceleration 0.0; training disturbance: step, amplitude 1.0, frequency 0.0, acceleration 0.0, deceleration 0.0.
Cost PI = 0.008298
Cost ANN = 0.011746
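The references and disturbances used in training and testing span five function types (step, sine, square, sawtooth, and trapezoid; cf. Figure 7 and Tables 1–4). A hedged sketch of how such signals could be generated is given below; the exact waveform conventions (e.g. trapezoid ramp timing, sawtooth phase) are assumptions rather than the paper's definitions.

```python
import numpy as np

def make_signal(kind, amplitude, frequency=0.0, t=None,
                acc_time=0.0, dec_time=0.0, t_total=1.0):
    """Generate one of the five signal shapes used for references/disturbances."""
    if t is None:
        t = np.linspace(0.0, t_total, 1000, endpoint=False)
    if kind == 'step':
        return amplitude * np.ones_like(t)
    if kind == 'sine':
        return amplitude * np.sin(2 * np.pi * frequency * t)
    if kind == 'square':
        return amplitude * np.sign(np.sin(2 * np.pi * frequency * t))
    if kind == 'sawtooth':
        phase = (frequency * t) % 1.0          # rising ramp, -A to +A each period
        return amplitude * (2 * phase - 1)
    if kind == 'trapezoid':
        # Ramp up over acc_time, hold, then ramp down over the final dec_time.
        up = np.clip(t / acc_time, 0.0, 1.0)
        down = np.clip((t_total - t) / dec_time, 0.0, 1.0)
        return amplitude * np.minimum(up, down)
    raise ValueError(kind)
```

For example, test-set reference signal 9 would correspond to `make_signal('trapezoid', 100.0, acc_time=0.2, dec_time=0.2)`.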
In fact, under low disturbance almost any simple controller can perform well; the real quality of a controller is revealed in high-disturbance scenarios, which are difficult to handle. The proposed controller maintains relatively high performance even under the most difficult disturbances, which supports the main novelty of the proposed approach.
Controller outputs are saturated between +12 Volts and −12 Volts in order to achieve a realistic simulation. The controller output for one reference–disturbance pair scenario is shown in Figure 12 below.

4. Discussion

4.1. Performance Issues

The results clearly show that the proposed method has very promising performance, especially for the expected reference–disturbance pairs. This is expected, since it is well known in the AI domain that performance on the training set is typically higher than performance on the test set. Because the system is trained with the expected reference–disturbance pairs, higher performance for the expected configuration is understandable. To the author's best knowledge, this AI-based approach is unique in the literature, so direct comparison with results from other works is not possible.
As an automatic design method for automatic controllers, the proposed method automates the engineering process by performing most of the critical design tasks. The selection of expected reference–disturbance pairs is important for the performance of the resulting control system. Even for unexpected reference–disturbance pairs, the proposed system shows superior performance compared to the optimal PI controller, ensuring acceptable generalization performance in AI terms. This result may be due to the high flexibility of neural networks in successfully learning input-to-output mappings.
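The automated tuning stage in this work relies on particle swarm optimization. The following is a minimal, illustrative PSO sketch, not the paper's exact implementation or hyperparameters; the cost function would typically simulate the closed loop with a candidate parameter vector (such as [KP, KI] or the neural network weights) and return the average tracking error.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=20, n_iter=50,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 10.0), seed=0):
    """Minimal particle swarm optimizer (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()   # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, float(pbest_cost.min())
```

On a simple quadratic cost this converges quickly; tuning a controller would only change the cost function, not the optimizer loop.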
Note that the proposed system shows superior performance especially under difficult disturbances, which is important for realistic missions of robust systems. Under low-disturbance conditions, almost any reasonable controller performs acceptably; the real quality of a controller is seen under harsh conditions. For example, a human-carrying air vehicle, whether fixed-wing or rotary-wing, must fly safely under relatively high-power external wind disturbances in order to be utilized effectively. Most works in the literature do not test their systems under high-amplitude disturbances, so the proposed system is compared here with an optimal PI control system.
The detailed results given in Section 3 of this study will hopefully enable performance comparisons in future studies. Reporting the full details of the experiments conducted, as done here, is important; automatic-controller researchers are encouraged to report the shapes and parameters of the disturbance signals in particular, so that fair comparisons between controllers can be made.

4.2. Implementation Issues

Expected reference–disturbance pairs can be generated in several different ways. For an aircraft autopilot application, for example, the commands of a human pilot can be recorded over several real flights to obtain expected references, while a neural network-based disturbance observer can estimate the disturbances actually experienced during those flights. Similarly, for drone applications, a human pilot's references can be recorded and a disturbance observer can estimate the disturbances occurring in real flights under various air conditions. Knowledge from domain experts, as well as simulations with detailed mathematical models, can also be incorporated into realistic reference–disturbance pair generation.
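As a rough illustration of the disturbance-observer idea, the first-order plant model used in Appendix A (dw/dt = -(c0/c1)·w + (1/c1)·(Va + d)) can be inverted algebraically to back out the disturbance from the measured speed and applied voltage. This is only a sketch under that model assumption; a neural network-based observer, as suggested above, would replace this naive finite-difference inversion and be far less noise-sensitive.

```python
def estimate_disturbance(w, Va, deltaT, coefficient0, coefficient1):
    """Model-based disturbance estimate from speed samples w[n] and voltages Va[n].

    Rearranging the plant model gives d = c1*dw/dt + c0*w - Va, with dw/dt
    approximated by a forward difference over one sampling period.
    """
    d = []
    for n in range(1, len(w)):
        dw_dt = (w[n] - w[n - 1]) / deltaT
        d.append(coefficient1 * dw_dt + coefficient0 * w[n - 1] - Va[n - 1])
    return d
```

Applied to logged flight or bench data, the resulting disturbance traces could serve directly as realistic disturbance signals for training.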
Note that, in simulation experiments, it is very important to limit the outputs of the controller and actuator in order to obtain realistic results. This is ensured and shown in this study; without limiting the controller's output, any theoretical or simulation analysis would be unrealistic and impossible to implement. The important parts of the simulation code are given in Appendix A.
Physical implementation of the proposed approach is feasible with current digital systems. Once the neural network weights are learned on a high-performance computer, the feedforward calculations are tractable and easy to implement on microcontrollers or even FPGAs.
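Since the trained controller reduces to a feedforward pass over a fixed input window (see performExperiment in Appendix A), the embedded computation amounts to a bias term plus one dot product per control step. The following pure-Python sketch, with illustrative argument names, maps directly onto a microcontroller loop:

```python
def ann_additive_term(weights, w_hist, ref_hist, ref_future):
    """Feedforward pass of the single-layer neural controller on an embedded target.

    The input vector is a bias (1.0) followed by past plant outputs, past
    references, and future references, matching the window layout built in
    performExperiment in Appendix A. `weights` must have the same length.
    """
    x = [1.0] + list(w_hist) + list(ref_hist) + list(ref_future)
    return sum(wi * xi for wi, xi in zip(weights, x))
```

On an FPGA, the same computation becomes one multiply-accumulate chain, so the per-sample cost is fixed and small.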

4.3. Ethical Issues

The use of AI to replace humans is discussed extensively in terms of ethics, and many people are concerned about losing their jobs to machines. In fact, AI-based systems that give more reliable diagnoses should not be seen as a threat to the medical profession, and AI-designed automatic controllers that fly human-carrying air vehicles more reliably should likewise be welcomed. Advancing humanity's knowledge in any branch of science, including AI, should not be a source of concern for scientists, because, as in any branch of science, advances in AI bring great benefits and add to humanity's overall wealth. It is up to leaders and policymakers to decide how the wealth gained from AI and other scientific advances is shared, and global decisions involving every nation must be taken to ensure a fair and humane distribution of humanity's wealth and income. Just as demolishing all dams would create many jobs but would not be a rational decision, stopping or slowing the development of AI tools is not a rational solution to an unfair wealth distribution problem.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All parameters are given in the corresponding sections, which is sufficient to repeat all the experiments conducted in this study.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

All simulations in this study are implemented in Python 3.7.4 in an Anaconda 4.14.0 environment, with Jupyter Notebook 6.0.1 as the IDE. The important parts of the code are given below.
#PSO-estimated parameters of actual DC motor
coefficient0 = 0.09365946
coefficient1 = 0.00253256
VaMax = 12
VaMin = -12
#Function implementing DC motor dynamic behavior
def DCMotorPlant(w_n_1, Va, disturbance):
   #Forward system model starts here.
   # Compare the Armature Voltage with Max/Min Value
   if Va > VaMax:
      Va = VaMax
   if Va < VaMin:
      Va = VaMin
   #Simulation of DC Motor
   dw_dt = -(coefficient0/coefficient1)*w_n_1 + (1/coefficient1)*(Va + disturbance)
   w = w_n_1 + deltaT*dw_dt
   return w
#Function implementing PI controller
def piController(wref_n, w_n_1, intError, prevError, PIcontrollerParameters):
   KP = PIcontrollerParameters[0]
   KI = PIcontrollerParameters[1]
   error = wref_n - w_n_1
   intError = intError + error*deltaT
   if intError > intErrormax:
      intError = intErrormax
   if intError < intErrormin:
      intError = intErrormin
   piOut = KP*error + KI*intError
   return piOut, intError, prevError
#Function implementing single experiment performance
#with given reference-disturbance pair
def performExperiment(controllerType, PIcontrollerParameters, ANNcontrollerParameters, reference, disturbance):
   tempReference = np.zeros((int(tmax/deltaT) + 1000))
   tempReference[0:int(tmax/deltaT)] = reference[:]
   inputToANNController = np.zeros((numberOfStateVariables*pastAndCurrentWindowSizeInTermsOfSamples+numberOfReferenceVariables*totalWindowSizeInTermsOfSamples+1,1))
   intError = 0
   prevError = 0
   controllerOutput = 0
   n = 0
   w = np.zeros((int(tmax/deltaT)))
   for n1 in np.linspace(0, n1max, num = n1max-1):
         n = n + 1
         controllerOutput, intError, prevError = piController(tempReference[n-1], w[n-1], intError, prevError, PIcontrollerParameters)
         if n<=nPast + 1 or controllerType=='PI':
               #piOut will be that of PI controller’s output
               additiveTerm = 0
         else:
               #BUILD INPUT TO THE ANN CONTROLLER
               inputToANNController[0,0] = 1 #1
               inputToANNController[1:nPast+2,0] = w[n-nPast-2:n-1]
               inputToANNController[nPast+2:2*nPast+3,0] = tempReference[n-nPast-2:n-1]
               inputToANNController[2*nPast+3:2*nPast+3+nFuture,0] = tempReference[n-1:n+nFuture-1]
               #HERE THE ANN CONTROLLER
               additiveTerm = np.dot(ANNcontrollerParameters.transpose(), inputToANNController)
         controllerOutput = controllerOutput + additiveTerm
         w[n] = DCMotorPlant(w[n-1], controllerOutput, disturbance[n])
   return w
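The excerpt above omits several constants (deltaT, the integral limits, the PI gains) that are defined elsewhere in the notebook. The following self-contained sketch fills them with illustrative, assumed values and runs a short closed-loop step-response simulation with condensed versions of the functions above; it is meant only to show how the pieces fit together, not to reproduce the paper's exact experiments.

```python
# Illustrative constants not shown in the excerpt above (assumed values).
coefficient0, coefficient1 = 0.09365946, 0.00253256   # PSO-estimated motor parameters
VaMax, VaMin = 12, -12                                # armature-voltage limits (Volts)
deltaT = 0.0001                                       # sampling period (s), assumed
intErrormax, intErrormin = 100.0, -100.0              # anti-windup limits, assumed

def DCMotorPlant(w_n_1, Va, disturbance):
    """One Euler step of the first-order DC motor model with voltage saturation."""
    Va = min(max(Va, VaMin), VaMax)
    dw_dt = -(coefficient0/coefficient1)*w_n_1 + (1/coefficient1)*(Va + disturbance)
    return w_n_1 + deltaT*dw_dt

def piController(wref_n, w_n_1, intError, KP=0.1, KI=1.0):
    """PI law with clamped integral state; KP and KI are illustrative gains."""
    error = wref_n - w_n_1
    intError = min(max(intError + error*deltaT, intErrormin), intErrormax)
    return KP*error + KI*intError, intError

# Short closed-loop run: 50 rad/s step reference, 1 V step disturbance, 2 s.
w, intError = 0.0, 0.0
for n in range(20000):
    Va, intError = piController(50.0, w, intError)
    w = DCMotorPlant(w, Va, disturbance=1.0)
```

With these assumed gains the speed settles close to the 50 rad/s reference despite the constant disturbance, and the saturation clamp guarantees the applied voltage never exceeds the physical ±12 V range.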

Figure 1. Block diagram of a typical automatic control system.
Figure 2. Block diagram of a DC motor system.
Figure 3. Real DC motor setup used to estimate parameters of real DC motor.
Figure 4. Close-up photo of optical encoder used to measure speed of the motor.
Figure 5. System with proposed controller structure.
Figure 6. Input past and future windows illustrated.
Figure 7. Five types of functions for references and disturbances.
Figure 8. An illustrative example of neural controller’s high performance.
Figure 9. Another illustrative example of neural controller’s high performance.
Figure 10. An illustrative example of neural controller’s high performance on low-amplitude disturbance.
Figure 11. Another illustrative example of neural controller’s high performance on low-amplitude disturbance.
Figure 12. The controller output is saturated between +12 Volts and −12 Volts.
Table 1. Reference signal values for training set. NA: Not Applicable.

Signal Number | Type | Amplitude (rad/s) | Frequency (Hz) | Acc. Time (ms) | Dec. Time (ms)
1  | step | 50  | NA | NA | NA
2  | step | 70  | NA | NA | NA
3  | step | 90  | NA | NA | NA
4  | step | 100 | NA | NA | NA
5  | sine | 50  | 2  | NA | NA
6  | sine | 50  | 3  | NA | NA
7  | sine | 50  | 5  | NA | NA
8  | sine | 100 | 2  | NA | NA
9  | sine | 100 | 3  | NA | NA
10 | sine | 100 | 5  | NA | NA
Table 2. Disturbance signal values for training set.

Signal Number | Type | Amplitude (Volts) | Frequency (Hz) | Acc. Time (ms) | Dec. Time (ms)
1  | step | 1   | NA | NA | NA
2  | step | 1.5 | NA | NA | NA
3  | step | 2   | NA | NA | NA
4  | step | 3   | NA | NA | NA
5  | sine | 1   | 5  | NA | NA
6  | sine | 1   | 20 | NA | NA
7  | sine | 2   | 5  | NA | NA
8  | sine | 2   | 20 | NA | NA
9  | sine | 3   | 5  | NA | NA
10 | sine | 3   | 20 | NA | NA
Table 3. Reference signal values for test set.

Signal Number | Type | Amplitude (rad/s) | Frequency (Hz) | Acc. Time (ms) | Dec. Time (ms)
1  | step      | 100 | NA | NA  | NA
2  | square    | 100 | 2  | NA  | NA
3  | square    | 100 | 3  | NA  | NA
4  | square    | 100 | 4  | NA  | NA
5  | square    | 100 | 5  | NA  | NA
6  | square    | 100 | 6  | NA  | NA
7  | square    | 100 | 8  | NA  | NA
8  | square    | 100 | 10 | NA  | NA
9  | trapezoid | 100 | NA | 200 | 200
10 | trapezoid | 100 | NA | 100 | 100
Table 4. Disturbance signal values for test set.

Signal Number | Type | Amplitude (Volts) | Frequency (Hz) | Acc. Time (ms) | Dec. Time (ms)
1  | square   | 1 | 5  | NA | NA
2  | square   | 1 | 10 | NA | NA
3  | square   | 1 | 20 | NA | NA
4  | square   | 2 | 5  | NA | NA
5  | square   | 2 | 10 | NA | NA
6  | square   | 2 | 20 | NA | NA
7  | sawtooth | 1 | 5  | NA | NA
8  | sawtooth | 1 | 20 | NA | NA
9  | sawtooth | 2 | 5  | NA | NA
10 | sawtooth | 2 | 20 | NA | NA