Article

Permanent-Magnet SLM Drive System Using AMRRSPNNB Control System with DGWO

1 Department of Industrial Education and Technology, National Changhua University of Education, Changhua 500, Taiwan
2 Graduate School of Vocational and Technological Education, National Yunlin University of Science and Technology, Yunlin 640, Taiwan
* Author to whom correspondence should be addressed.
Energies 2020, 13(11), 2914; https://doi.org/10.3390/en13112914
Submission received: 3 April 2020 / Revised: 30 April 2020 / Accepted: 8 May 2020 / Published: 6 June 2020
(This article belongs to the Special Issue Control and Monitoring of Permanent Magnet Synchronous Machines)

Abstract

Because permanent-magnet synchronous linear motors (SLMs) still exhibit nonlinear friction, ending effects and time-varying dynamic uncertainties, better control performance cannot be achieved by using common linear controllers. We propose a backstepping approach with three adaptive laws and a beating function to control the motion of permanent-magnet SLM drive systems and to enhance the robustness of the system. To reduce the large vibration that arises under uncertainty in the aforementioned control system, we propose an adaptive modified recurrent Rogers–Szego polynomials neural network backstepping (AMRRSPNNB) control system with three adaptive laws and a reimbursed controller with decorated gray wolf optimization (DGWO), in order to handle the external bunched force uncertainty, including nonlinear friction, ending effects and time-varying dynamic uncertainties, and to reimburse the minimal rebuild error of the reckoned law. In accordance with Lyapunov stability, the online parameter training method of the modified recurrent Rogers–Szego polynomials neural network (MRRSPNN) can be derived by utilizing an adaptive law. Furthermore, to help reduce the error and obtain better learning performance, the DGWO algorithm was used to adjust the two learning rates in the weights of the MRRSPNN. Finally, the usefulness of the proposed control system was validated by tested results.

1. Introduction

Linear motors can provide linear motion without motion translators, such as ball screws, gears and belts, which reduces mechanical limitations and complexity. Most linear motors have a lower load capacity than other types of linear actuators. The permanent-magnet synchronous linear motor (SLM), with a direct-drive design, has some good performance characteristics, such as higher speed, higher precision, less friction, no backlash, maintenance-free operation and extra-high thrust force [1]. Hence, the permanent-magnet SLM has been broadly applied in machine tools, semiconductor manufacturing systems and industrial robots [1,2,3]. The permanent-magnet SLM also has a larger thrust density, which is superior to the other linear motors.
The backstepping control skill [4,5,6] has been applied in various kinds of linear feedback systems. First, some proper functions of the state variables are chosen as pseudo-control inputs in the subsystems of the entire system. Then, each backstepping level results in a new pseudo-control variable that is expressed in terms of the pseudo-control designs from the preceding design levels. The final design goal of the procedure is the true control input. The process ends when the last Lyapunov function, obtained by combining the Lyapunov functions of all the individual design levels, is used. Moreover, adaptive backstepping controls have also been broadly applied in linear motor controls [7,8,9]. In addition, Ting and Chen [10] proposed nonlinear adaptive backstepping using a reformed recurrent Hermite polynomial neural network (RRHPNN) uncertainty observer for controlling synchronous reluctance motor drives.
Neural networks (NNs) [11,12,13] have good approximation performance in the control and identification of systems. Because artificial NNs are static mapping functions with a feedforward network structure, they cannot respond to dynamic behavior in real time. Recently, as a result of their high identification accuracy and fine control performance, recurrent NNs [14,15,16,17] have been broadly proposed for forecasting and controlling nonlinear systems. A major feature of recurrent NNs is the memory of the historical trajectory at the same node through self-connecting feedback information. In ordinary recurrent NNs, the designated self-connecting feedback of a hidden node or output node is in charge of memorizing the preceding activation of that hidden node or output node; hence, the outputs of the other nodes cannot act upon the designated node. In a complex nonlinear dynamic system such as the permanent-magnet SLM, the friction force, cogging force and external force all influence the system's performance. If each node of a recurrent NN depends on only one state of the nonlinear dynamic system, the self-connecting feedback cannot lead to good approximations of the dynamic system. Owing to the recurrent nodes, recurrent NNs have certain dynamic merits over static NNs [11,12,13]; they have been widely used in electricity spot price prediction and photovoltaic power forecasting [14,15,16,17]. However, these NNs are very time-consuming in terms of the online training procedure. Hence, owing to their lower calculation complexity, some functional-class NNs [18,19,20] have been applied in the control and identification of many nonlinear systems. However, the adjustment mechanisms of the weights have not been discussed in these NN-combined control methods, which leads to larger errors in the control and identification of systems. In addition, the Rogers–Szego polynomials, related to the continuous q-Hermite polynomials and introduced by Rogers [21], belong to a family of orthogonal polynomials. However, the Rogers–Szego polynomials neural network (RSPNN) has never been used in the identification or control of nonlinear systems. Although the feedforward RSPNN can approximate a nonlinear function, it may not be able to approximate the dynamic behavior of nonlinear uncertainties because it lacks a feedback loop. Although it has more benefits than the feedforward RSPNN, the modified recurrent Rogers–Szego polynomials neural network (MRRSPNN) has not yet been applied to govern a permanent-magnet SLM drive system to raise the identification capability for the nonlinear system and reduce the calculation complexity. Hence, the backstepping skill utilizing the MRRSPNN with error-reimbursed control to reduce the influence of uncertainties is the research motivation of this study. Meanwhile, learning rates with acceleration factors have been absent in previous works, so the convergent speed of the weight adjustment is slower.
Emary et al. [22] first proposed a multi-objective gray wolf optimization (GWO) applied to the attribute reduction of a system. Mosavi et al. [23] presented a sonar dataset classification using a conventional NN training method and the GWO. Khandelwal et al. [24] offered a modified GWO to tackle the planning problem of a transmission network. Mirjalili et al. [25] provided the hunting mechanism of GWO to mimic social behavior. These algorithms are highly competitive and have been used in various fields [26,27,28]; however, they sometimes have impoverished exploration ability and suffer from stagnation in local optima. Hence, to ameliorate the explorative capabilities of GWO, a decorated gray wolf optimization (DGWO) with two adjusted factors is proposed in this study. This recently proposed algorithm makes two revisions. First, it can explore new regions of the search space because of the diverse locations assigned to the leaders; this helps increase the exploration chances and avoid the local-optima stagnation issue. Second, an opposition-based learning skill is applied in the first half of the iterations to offer diversity among the search agents. Hence, to improve the slower convergence of the GWO method, the DGWO with two adjusted factors is proposed to regulate the two learning rates toward two optimal values and to raise the convergent speed of the weights in this study. We propose the DGWO algorithm to avoid premature convergence and to obtain optimized learning rates with quick convergence.
Model error and system disturbances can seriously affect control performance under the influence of unknown parameters and dynamic system interactions. In light of the influence of these uncertainties, it is not easy to reach better control performance with a linear controller for the permanent-magnet SLM drive system. Hence, to enhance robustness, the backstepping approach with three adaptive laws and a beating function is proposed to govern the motion of the permanent-magnet SLM drive system for the tracking of periodic reference trajectories. With the backstepping approach with three adaptive laws and a beating function, the permanent-magnet SLM drive system holds the merits of good transient control performance and robustness to uncertainties. In addition, to reduce the large vibration that arises under uncertainties in the aforementioned control system, we propose the adaptive modified recurrent Rogers–Szego polynomials neural network backstepping (AMRRSPNNB) control system with reimbursed controller and DGWO with two adjusted factors to reckon the external bunched force uncertainty and to reimburse the minimal rebuilt error of the reckoned law. Further, the DGWO algorithm, using two adjusted factors with cosine functions, is an innovative method to quicken the convergent speed in this study. The DGWO algorithm is used to regulate the two variable learning rates of the weights in the MRRSPNN so as to speed up the convergence of the parameters. Finally, the usefulness and robustness of the proposed AMRRSPNNB control system with reimbursed controller and DGWO with two adjusted factors were validated by some tested results.
The main structure of this study is as follows: Section 2 presents the configuration of the permanent-magnet SLM drive. Section 3 presents the AMRRSPNNB control system with reimbursed controller and DGWO with two adjusted factors. Section 4 presents the tested results for the permanent-magnet SLM by using the three control methods in five situations. Section 5 presents the conclusions.

2. Models and Formation of Permanent-Magnet SLM Drive

2.1. Models of Permanent-Magnet SLM

The electric models in the synchronous rotating reference frame for the permanent-magnet SLM with d-q axis model [1,2,3] are denoted by:
$u_{qs} = r_s i_{qs} + L_{qs}\dot{i}_{qs} + \omega_{es}(L_{ds}i_{ds} + \lambda_{ps})$ (1)
$u_{ds} = r_s i_{ds} + (L_{ds}\dot{i}_{ds} + \dot{\lambda}_{ps}) - \omega_{es}L_{qs}i_{qs}$ (2)
$\omega_{es} = p_s\omega_{rs} = p_s\pi v_{rs}/\tau$ (3)
$v_{es} = p_s v_{rs} = 2\tau f_s$ (4)
where $u_{ds}$ and $u_{qs}$ denote the d- and q-axis voltages; $i_{ds}$ and $i_{qs}$ denote the d- and q-axis currents; $r_s$ denotes the phase winding resistance; $L_{ds}$ and $L_{qs}$ denote the d- and q-axis inductances; $\lambda_{ps}$ denotes the permanent-magnet flux linkage; $\omega_{es}$ and $\omega_{rs}$ denote the electric angular velocity and the mover angular velocity; $v_{rs}$ and $v_{es}$ denote the linear velocity and the electric linear velocity; $p_s$ and $\tau$ denote the number of pole pairs and the pole pitch; and $f_s$ denotes the electric frequency. The electromagnetic power $P_{es}$ and the electromagnetic force $F_{es}$ [1,2,3] are denoted by
$P_{es} = F_{es}v_{es} = 3p_s[\lambda_{ps}i_{qs} + (L_{ds} - L_{qs})i_{ds}i_{qs}]\,\omega_{es}/2$ (5)
$F_{es} = 3\pi p_s[\lambda_{ps}i_{qs} + (L_{ds} - L_{qs})i_{ds}i_{qs}]/(2\tau)$ (6)
Thus, the mover dynamic equation can be denoted by
$W_s\dot{v}_{rs} + Q_s v_{rs} = F_{es} - F_{ls} - F_{ws} - F_{fs} - F_{cs}$ (7)
where $W_s$ and $Q_s$ are the entire mass of the moving component system and the viscous friction of the iron contact part, and $F_{es}$, $F_{ls}$, $F_{ws}$, $F_{fs}$ and $F_{cs}$ are the electromagnetic force, the external load force, the wind resistance force, the flux saturation force and the cogging force, respectively.
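As a quick illustration of Equation (7), the following minimal Python sketch integrates the mover dynamics with a forward-Euler step; the thrust and lumped disturbance values are illustrative placeholders rather than measured data, and the parameter values are the nominal ones quoted later in Section 2.2.

```python
import numpy as np

# Minimal sketch: forward-Euler integration of the mover dynamics of Eq. (7),
#   W_s * dv/dt + Q_s * v = F_es - F_ls - F_ws - F_fs - F_cs.
# W_s and Q_s are the nominal values quoted in Section 2.2; the constant
# thrust and disturbance below are illustrative placeholders.

W_s = 2.4      # entire mass of the moving component system (kg)
Q_s = 89.54    # viscous friction coefficient (kg/s)

def mover_step(v, F_es, F_dist, dt=1e-3):
    """One Euler step of Eq. (7); F_dist lumps F_ls + F_ws + F_fs + F_cs."""
    dv = (F_es - F_dist - Q_s * v) / W_s
    return v + dt * dv

v = 0.0                      # initial linear velocity (m/s)
for k in range(1000):        # 1 s of simulated motion at dt = 1 ms
    v = mover_step(v, F_es=50.0, F_dist=5.0)   # constant thrust, constant drag
print(f"velocity after 1 s: {v:.4f} m/s")
```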

2.2. Formation of Permanent-Magnet SLM Drive

The common control skill for a permanent-magnet SLM drive system is to adopt the FOC [1,2,3]. Hall sensors are used to determine the flux position with respect to the d-q axis. For the FOC, the rotor flux is produced only in the d-axis, while the current vector is produced in the q-axis. When $i_{ds}$ is set to zero and $\lambda_{ps}$ is a fixed value in Equations (5) and (6), the electromagnetic force $F_{es}$ is proportional to $i_{qs}$ in the closed-loop control of the permanent-magnet SLM drive. Since the produced force is linearly proportional to the q-axis current under a fixed d-axis rotor flux, the maximum force per ampere can be reached. The electromagnetic force in Equation (6) then reduces to
$F_{es} = 3\pi p_s\lambda_{ps}i_{qs}/(2\tau) = k_s i_{qs}$ (8)
where $k_s = 3\pi p_s\lambda_{ps}/(2\tau)$ is the propulsion coefficient and $i_{qs}$ is the propulsion current command. The permanent-magnet SLM drive system can then be treated as a plant $H_t(s) = 1/(sW_s + Q_s)$, where $s$ is the Laplace operator. The formation of the field-oriented permanent-magnet SLM drive system is displayed in Figure 1; it is made up of a permanent-magnet SLM, a speed control loop, a position control loop, a triangular-wave comparison current control, a coordinate transformation, a $\cos\theta_{es}/\sin\theta_{es}$ generator, a linear scale and three Hall sensors. The three Hall sensors are used to find the flux position of the permanent magnet. The mover of the permanent-magnet SLM is equipped with iron disks of various sizes to change the entire mass of the moving component system and the viscous friction force.
With the realization of FOC, the permanent-magnet SLM drive system can be viewed as the block diagram expressed in Figure 2. Better electromagnetic performance is thus enforced by governing the primary current distribution to lie in the q-axis, i.e., $i_{ds} = 0$. A linear force-per-ampere characteristic can be reached for the actuator.
The FOC was implemented on a TMS320C32 DSP control system; a host personal computer downloads the program that runs on the DSP. The specifications of the permanent-magnet SLM used in this study are 220 V, 1.75 A, 0.5 kW and 112 N. The position control and speed control signals are scaled as 1 V = 2 mm and 1 V = 2 mm/s for the convenience of the controller design. The mechanical parameters of the system are as follows: $W_s = 2.4\ \mathrm{kg} = 0.048\ \mathrm{N\,s/V}$, $Q_s = 89.54\ \mathrm{kg/s} = 0.179\ \mathrm{N/V}$ and $k_s = 0.104\ \mathrm{N/A}$.
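For reference, the following sketch evaluates the simplified plant $H_t(s) = 1/(sW_s + Q_s)$ of Figure 2 driven by $F_{es} = k_s i_{qs}$ from Equation (8), using the voltage-normalized parameters quoted above; the discrete step response is only a sanity check of the quoted constants, not a reproduction of the tested results.

```python
# Minimal sketch: the FOC-simplified plant H_t(s) = 1/(s*W_s + Q_s) with the
# voltage-normalized parameters quoted above (W_s = 0.048 N*s/V,
# Q_s = 0.179 N/V, k_s = 0.104 N/A); the values are taken as given.

W_s, Q_s, k_s = 0.048, 0.179, 0.104

dc_gain = k_s / Q_s          # steady-state response per ampere of i_qs
tau_m   = W_s / Q_s          # mechanical time constant in seconds
print(f"DC gain  = {dc_gain:.3f} (normalized units per A)")
print(f"tau_mech = {tau_m*1000:.1f} ms")

# Discrete step response at the 1 ms control period used in Section 4
dt, v, i_qs = 1e-3, 0.0, 1.0
for _ in range(200):
    v += dt * (k_s * i_qs - Q_s * v) / W_s
print(f"response after a 200 ms step of 1 A: {v:.3f}")
```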

3. Design of the AMRRSPNNB Control System with Reimbursed Controller and DGWO

When the actual permanent-magnet SLM drive system operates under parameter variations, external load disturbance, friction force, wind resistance force, flux saturation force and cogging force, Equation (7) can be rewritten as
$\dot{z}_s = (c_s + \Delta c_s)z_s + (d_s + \Delta d_s)u_s + h_s(F_{ls} + F_{ws} + F_{fs} + F_{cs})$ (9)
$\dot{l}_{rs} = v_{rs} = z_s$ (10)
$y_s = l_{rs}$ (11)
where $l_{rs}$ and $z_s$ denote the mover position and mover velocity of the SLM and are uniformly continuous and bounded; $\Delta c_s$ and $\Delta d_s$ denote the two parameter uncertainties arising from $W_s$ and $Q_s$ and are uniformly continuous and bounded; $c_s = -Q_s/W_s$, $d_s = k_s/W_s > 0$ and $h_s = -1/W_s$ are three known constants; and $u_s = i_{qs}$ is the control propulsion to the permanent-magnet SLM drive system, i.e., the propulsion current. Reformulating (9) gives
$\dot{z}_s = c_s z_s + d_s u_s + f_{s1} + f_{s2} + f_{s3} + f_{s4}$ (12)
where $f_{s1} = \Delta c_s z_s$ and $f_{s2} = \Delta d_s u_s$ are two bounded parameter variations, and $f_{s3} = h_s(F_{fs} + F_{cs})$ and $f_{s4} = h_s(F_{ls} + F_{ws})$ are the bounded internal bunched uncertainty and external bunched force uncertainty, respectively.
The control objective is to track the reference trajectory $y_m = l_{rm}$ asymptotically. Hence, to achieve good position-tracking performance, the backstepping control system with three adaptive laws and a beating function is devised as follows.
Define the tracking error by
$e_{g1} = y_m - y_s = l_{rm} - l_{rs}$ (13)
Taking the derivative of Equation (13) gives
$\dot{e}_{g1} = \dot{l}_{rm} - \dot{l}_{rs} = \dot{l}_{rm} - z_s$ (14)
The stabilizing function is defined by
$\rho_{g1} = c_{g1}e_{g1} + \dot{l}_{rm} + c_{g2}\alpha_g$ (15)
where $c_{g1}$ and $c_{g2}$ are two positive constants, and $\alpha_g = \int e_{g1}(v)\,dv$ is the integral factor that drives the tracking error to zero.
The virtual tracking error is defined by
$e_{g2} = z_s - \rho_{g1}$ (16)
Taking the derivative of Equation (16) gives
$\dot{e}_{g2} = \dot{z}_s - \dot{\rho}_{g1} = (c_s z_s + d_s u_s + f_{s1} + f_{s2} + f_{s3} + f_{s4}) - \dot{\rho}_{g1}$ (17)
To design the control system, the external bunched force uncertainty $f_{s4}$ is presumed to be bounded and to satisfy $|f_{s4}| \le \gamma_1$, while $f_{s1}$, $f_{s2}$ and $f_{s3}$ are three unknown terms. Three estimation errors are defined by
$e_{s1} = \hat{f}_{s1} - f_{s1}$ (18)
$e_{s2} = \hat{f}_{s2} - f_{s2}$ (19)
$e_{s3} = \hat{f}_{s3} - f_{s3}$ (20)
where $e_{s1}$, $e_{s2}$ and $e_{s3}$ are the estimation errors, and $\hat{f}_{s1}$, $\hat{f}_{s2}$ and $\hat{f}_{s3}$ are the estimated values of $f_{s1}$, $f_{s2}$ and $f_{s3}$. The Lyapunov function is then chosen as
$A_{s1} = e_{g1}^2/2 + e_{g2}^2/2 + e_{s1}^2/(2\eta_1) + e_{s2}^2/(2\eta_2) + e_{s3}^2/(2\eta_3) + c_{g2}\alpha_g^2/2$ (21)
Taking the derivative of $A_{s1}$ and using (14)–(20) together with the integral factor $\alpha_g = \int e_{g1}(v)\,dv$, Equation (22) is obtained as
$\dot{A}_{s1} = e_{g1}\dot{e}_{g1} + e_{g2}\dot{e}_{g2} + e_{s1}\dot{e}_{s1}/\eta_1 + e_{s2}\dot{e}_{s2}/\eta_2 + e_{s3}\dot{e}_{s3}/\eta_3 + c_{g2}\alpha_g\dot{\alpha}_g = e_{g1}(\dot{l}_{rm} - z_s) + e_{g2}[(c_s z_s + d_s u_s + f_{s1} + f_{s2} + f_{s3} + f_{s4}) - \dot{\rho}_{g1}] + e_{s1}\dot{e}_{s1}/\eta_1 + e_{s2}\dot{e}_{s2}/\eta_2 + e_{s3}\dot{e}_{s3}/\eta_3 + c_{g2}\alpha_g\dot{\alpha}_g = e_{g1}(-c_{g1}e_{g1} + \rho_{g1} - z_s) + e_{g2}(c_s z_s + d_s u_s + f_{s1} + f_{s2} + f_{s3} + f_{s4} - \dot{\rho}_{g1}) + (\hat{f}_{s1} - f_{s1})\dot{\hat{f}}_{s1}/\eta_1 + (\hat{f}_{s2} - f_{s2})\dot{\hat{f}}_{s2}/\eta_2 + (\hat{f}_{s3} - f_{s3})\dot{\hat{f}}_{s3}/\eta_3 = e_{g1}(-c_{g1}e_{g1} - e_{g2}) + e_{g2}(c_s z_s + d_s u_s + f_{s4} - \dot{\rho}_{g1}) + \hat{f}_{s1}\dot{\hat{f}}_{s1}/\eta_1 + (e_{g2}f_{s1} - f_{s1}\dot{\hat{f}}_{s1}/\eta_1) + \hat{f}_{s2}\dot{\hat{f}}_{s2}/\eta_2 + (e_{g2}f_{s2} - f_{s2}\dot{\hat{f}}_{s2}/\eta_2) + \hat{f}_{s3}\dot{\hat{f}}_{s3}/\eta_3 + (e_{g2}f_{s3} - f_{s3}\dot{\hat{f}}_{s3}/\eta_3)$ (22)
In accordance with (22), the control propulsion $u_s$ of the backstepping control system with three adaptive laws and a beating function is devised as
$u_s = i_{qs} = d_s^{-1}[e_{g1} - c_{g3}e_{g2} - c_s z_s + \dot{\rho}_{g1} - \gamma_1\,\mathrm{sgn}(e_{g2}) - (\hat{f}_{s1} + \hat{f}_{s2} + \hat{f}_{s3})]$ (23)
where $c_{g3}$ denotes a positive constant, $\gamma_1$ denotes a constant, and $\gamma_1\,\mathrm{sgn}(e_{g2})/d_s$ denotes the beating-function term. Substituting Equation (23) into Equation (22) yields
$\dot{A}_{s1} = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 - e_{g2}[\gamma_1\,\mathrm{sgn}(e_{g2}) - f_{s4}] - e_{g2}[\hat{f}_{s1} + \hat{f}_{s2} + \hat{f}_{s3}] + \hat{f}_{s1}\dot{\hat{f}}_{s1}/\eta_1 + f_{s1}(e_{g2} - \dot{\hat{f}}_{s1}/\eta_1) + \hat{f}_{s2}\dot{\hat{f}}_{s2}/\eta_2 + (e_{g2}f_{s2} - f_{s2}\dot{\hat{f}}_{s2}/\eta_2) + \hat{f}_{s3}\dot{\hat{f}}_{s3}/\eta_3 + (e_{g2}f_{s3} - f_{s3}\dot{\hat{f}}_{s3}/\eta_3) = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 - e_{g2}[\gamma_1\,\mathrm{sgn}(e_{g2}) - f_{s4}] - \hat{f}_{s1}(e_{g2} - \dot{\hat{f}}_{s1}/\eta_1) + f_{s1}(e_{g2} - \dot{\hat{f}}_{s1}/\eta_1) - \hat{f}_{s2}(e_{g2} - \dot{\hat{f}}_{s2}/\eta_2) + f_{s2}(e_{g2} - \dot{\hat{f}}_{s2}/\eta_2) - \hat{f}_{s3}(e_{g2} - \dot{\hat{f}}_{s3}/\eta_3) + f_{s3}(e_{g2} - \dot{\hat{f}}_{s3}/\eta_3)$ (24)
To reach $\dot{A}_{s1}(t) \le 0$, the three adaptive laws $\dot{\hat{f}}_{s1}$, $\dot{\hat{f}}_{s2}$ and $\dot{\hat{f}}_{s3}$ are chosen as
$\dot{\hat{f}}_{s1} = \eta_1 e_{g2}$ (25)
$\dot{\hat{f}}_{s2} = \eta_2 e_{g2}$ (26)
$\dot{\hat{f}}_{s3} = \eta_3 e_{g2}$ (27)
Using (25)–(27) and $|f_{s4}| \le \gamma_1$, Equation (24) can be rewritten as
$\dot{A}_{s1} = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 - e_{g2}[\gamma_1\,\mathrm{sgn}(e_{g2}) - f_{s4}] \le -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 - |e_{g2}|[\gamma_1 - |f_{s4}|] \le -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 \le 0$ (28)
Equation (28) shows that $\dot{A}_{s1}(t)$ is negative semi-definite, i.e., $A_{s1}(t) \le A_{s1}(0)$, implying that $e_{g1}$ and $e_{g2}$ are bounded. The following term is defined:
$B_{s1}(t) = c_{g1}e_{g1}^2 + c_{g3}e_{g2}^2 \le -\dot{A}_{s1}(t)$ (29)
Integrating (29) gives
$\int_0^t B_{s1}(v)\,dv \le A_{s1}(e_{g1}(0), e_{g2}(0)) - A_{s1}(e_{g1}(t), e_{g2}(t))$ (30)
Because $A_{s1}(e_{g1}(0), e_{g2}(0))$ is bounded and $A_{s1}(e_{g1}(t), e_{g2}(t))$ is nonincreasing and bounded, $\lim_{t\to\infty}\int_0^t B_{s1}(v)\,dv < \infty$. In addition, $\dot{B}_{s1}(t)$ is bounded, so $B_{s1}(t)$ is a uniformly continuous function. By Barbalat's lemma [29,30], it follows that $\lim_{t\to\infty} B_{s1}(t) = 0$, which means that $e_{g1}$ and $e_{g2}$ converge to zero as $t\to\infty$; moreover, $\lim_{t\to\infty} l_{rs}(t) = l_{rm}$ and $\lim_{t\to\infty} z_s = \dot{l}_{rm}$. Hence, the stability of the backstepping control system with three adaptive laws and a beating function is guaranteed. Its block diagram is displayed in Figure 3.
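For illustration, the control law of Equations (15), (16) and (23) with the adaptive laws (25)–(27) can be collected into one discrete control period, as in the following minimal Python sketch; the adaptive gains, the numerical differentiation of $\rho_{g1}$ and the plant constants (roughly $c_s = -Q_s/W_s$ and $d_s = k_s/W_s$ from the normalized values of Section 2.2) are illustrative assumptions, since the paper implements the controller inside the DSP interrupt.

```python
import numpy as np

# Minimal sketch of one discrete control period of the backstepping law,
# Eqs. (13), (15), (16), (23), with the adaptive laws (25)-(27). All numeric
# values below are illustrative, not the DSP implementation.

class BacksteppingController:
    def __init__(self, c_g1=2.4, c_g2=2.5, c_g3=2.3, gamma1=8.2,
                 eta=(1.0, 1.0, 1.0), c_s=-3.7, d_s=2.2, dt=1e-3):
        self.c_g1, self.c_g2, self.c_g3, self.gamma1 = c_g1, c_g2, c_g3, gamma1
        self.eta = np.array(eta)
        self.c_s, self.d_s, self.dt = c_s, d_s, dt
        self.alpha_g = 0.0               # integral factor alpha_g of Eq. (15)
        self.f_hat = np.zeros(3)         # f_hat_s1..s3 of Eqs. (25)-(27)
        self.rho_prev = None             # previous rho_g1 for the numerical derivative

    def step(self, l_rm, dl_rm, l_rs, z_s):
        e_g1 = l_rm - l_rs                                             # Eq. (13)
        self.alpha_g += self.dt * e_g1
        rho_g1 = self.c_g1 * e_g1 + dl_rm + self.c_g2 * self.alpha_g   # Eq. (15)
        drho = 0.0 if self.rho_prev is None else (rho_g1 - self.rho_prev) / self.dt
        self.rho_prev = rho_g1
        e_g2 = z_s - rho_g1                                            # Eq. (16)
        self.f_hat += self.dt * self.eta * e_g2                        # Eqs. (25)-(27)
        u_s = (e_g1 - self.c_g3 * e_g2 - self.c_s * z_s + drho
               - self.gamma1 * np.sign(e_g2) - self.f_hat.sum()) / self.d_s   # Eq. (23)
        return u_s

ctrl = BacksteppingController()
print(ctrl.step(l_rm=6.0, dl_rm=0.0, l_rs=0.0, z_s=0.0))
```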
Because the upper bound of the unknown external bunched force uncertainty $f_{s4}$ is difficult to determine, and its estimated value $\hat{f}_{s4}$ cannot be reckoned precisely, the modified recurrent Rogers–Szego polynomials neural network (MRRSPNN) is proposed to approximate the real value of the external bunched force uncertainty $f_{s4}$. The MRRSPNN with a three-layer constitution is displayed in Figure 4; it is made up of the first layer (input layer), the second layer (hidden layer) and the third layer (output layer). The signal propagation at each node of each layer is explained in the following expressions.
In the first layer, the input and output signals are given by
$ne_i^1 = \prod_k x_i^1(R)\, v_{ik}^1\, y_k^3(R-1), \quad y_i^1 = g_i^1(ne_i^1) = ne_i^1, \quad i = 1, 2$ (31)
where $x_1^1 = l_{rm} - l_{rs} = e_{g1}$ and $x_2^1 = e_{g1}(1 - z^{-1}) = \Delta e_{g1}$ are the tracking error and the tracking-error change, respectively; $R$ is the iteration count; $v_{ik}^1$ is the recurrent weight between the third layer and the first layer; $y_k^3$ is the output of the node at the third layer; and the symbol $\prod$ denotes the multiplication operator.
In the second layer, the input and output signals are given by
$ne_j^2(R) = \sum_{i=1}^{2} y_i^1(R) + \mu\, y_j^2(R-1), \quad y_j^2 = g_j^2(ne_j^2) = RS_j(ne_j^2; q), \quad j = 0, 1, \ldots, m-1$ (32)
where $\mu$ is the recurrent gain of the second layer. The Rogers–Szego polynomial function [21,31] is adopted as the activation function. $RS_j(x; q)$ is the Rogers–Szego polynomial with $-1 < x < 1$; $RS_0(x;q) = 1$, $RS_1(x;q) = 1 + x$ and $RS_2(x;q) = (1+x)(1+x) + x(q-1)$ are the 0-, 1- and 2-order Rogers–Szego polynomials, respectively. The recurrence relation of the Rogers–Szego polynomials [21,31] is given by $RS_{n+1}(x;q) = (1+x)RS_n(x;q) + x(q^n - 1)RS_{n-1}(x;q)$. The symbol $\sum$ denotes the summation operator.
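A minimal sketch of the recurrence quoted above is given below; the value of $q$ is an illustrative assumption, and the function simply generates the basis values that the hidden nodes use.

```python
# Minimal sketch of the Rogers-Szego recurrence, assuming an illustrative
# q = 0.5: RS_0 = 1, RS_1 = 1 + x and
# RS_{n+1}(x; q) = (1 + x) RS_n(x; q) + x (q**n - 1) RS_{n-1}(x; q).

def rogers_szego(n_max, x, q=0.5):
    """Return [RS_0(x;q), ..., RS_{n_max-1}(x;q)] via the recurrence."""
    rs = [1.0, 1.0 + x]
    for n in range(1, n_max - 1):
        rs.append((1.0 + x) * rs[n] + x * (q**n - 1.0) * rs[n - 1])
    return rs[:n_max]

print(rogers_szego(4, x=0.3))   # the four basis values a 4-neuron hidden layer would use
```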
In the third layer, the input and output signals are given by
$ne_k^3 = \sum_{j=0}^{m-1} v_{kj}^2\, y_j^2(R), \quad y_k^3 = g_k^3(ne_k^3) = ne_k^3, \quad k = 1$ (33)
where $v_{kj}^2$ is the connecting weight between the second layer and the third layer, and $g_k^3$ is the linear activation function. The output $y_k^3(R)$ of the third layer of the MRRSPNN can be denoted by
$y_k^3(R) = \hat{f}_{s4}(O) = O^T H$ (34)
where $O = [v_{10}^2 \cdots v_{1,m-1}^2]^T$ and $H = [y_0^2 \cdots y_{m-1}^2]^T$ are the weight vector and the input vector of the third layer, respectively.
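Putting Equations (31)–(34) together, a forward pass of the MRRSPNN can be sketched as follows with the 2-4-1 structure used in Section 4; the initial weights, the recurrent gain $\mu$ and the parameter $q$ are illustrative placeholders, since the paper adapts them online.

```python
import numpy as np

# Minimal sketch of one forward pass through the three-layer MRRSPNN of
# Eqs. (31)-(34). Initial weights, mu and q are illustrative placeholders.

def rogers_szego(n_max, x, q):
    """First n_max Rogers-Szego polynomial values at x (recurrence of Section 3)."""
    rs = [1.0, 1.0 + x]
    for n in range(1, n_max - 1):
        rs.append((1.0 + x) * rs[n] + x * (q**n - 1.0) * rs[n - 1])
    return rs[:n_max]

class MRRSPNN:
    def __init__(self, m=4, q=0.5):
        self.m, self.q = m, q
        self.v1 = np.full(2, 0.1)        # recurrent weights v^1_ik (third layer -> first layer)
        self.v2 = np.full(m, 0.1)        # connecting weights v^2_kj, i.e., the vector O
        self.mu = 0.2                    # recurrent gain of the second layer
        self.y2_prev = np.zeros(m)       # y^2_j(R-1)
        self.y3_prev = 1.0               # y^3_k(R-1), nonzero so the product in Eq. (31) is alive

    def forward(self, e_g1, de_g1):
        x = np.array([e_g1, de_g1])                  # x^1_1 = e_g1, x^1_2 = delta e_g1
        y1 = x * self.v1 * self.y3_prev              # Eq. (31): input nodes with output feedback
        net2 = y1.sum() + self.mu * self.y2_prev     # Eq. (32): one net value per hidden node j
        y2 = np.array([rogers_szego(j + 1, net2[j], self.q)[j] for j in range(self.m)])
        y3 = float(self.v2 @ y2)                     # Eqs. (33)-(34): f_hat_s4(O) = O^T H
        self.y2_prev, self.y3_prev = y2, y3
        return y3, y2                                # the estimate and the hidden vector H

net = MRRSPNN()
f_hat_s4, H = net.forward(e_g1=0.5, de_g1=0.01)
print(f_hat_s4, H)
```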
The minimal rebuilt error $e_{s4}$ is defined as
$e_{s4} = f_{s4} - f_{s4}(O^*) = f_{s4} - (O^*)^T H$ (35)
where $O^*$ is the optimal weight vector that attains the minimal rebuilt error. To make up for the minimal rebuilt error $e_{s4}$, the reimbursed controller $u_r$ with a reckoned law is proposed. The positive number $\sigma_{s4}$ is greater than the absolute value of $e_{s4}$, i.e., $\sigma_{s4} \ge |e_{s4}|$. The Lyapunov function is chosen as
$A_{s2} = A_{s1} + e_{s5}^2/(2\eta_5) + (O - O^*)^T(O - O^*)/(2\delta_1)$ (36)
where $\eta_5$ is an adaptive gain, $e_{s5} = \hat{e}_{s4} - e_{s4}$ is defined as the reckoned error and is bounded, and $\hat{e}_{s4}$ is the reckoned value of the minimal rebuilt error $e_{s4}$. Taking the derivative of $A_{s2}$ and using (14)–(20) together with the integral factor $\alpha_g = \int e_{g1}(v)\,dv$, Equation (37) is obtained as
$\dot{A}_{s2} = e_{g1}(-c_{g1}e_{g1} - e_{g2}) + e_{g2}(c_s z_s + d_s u_s + f_{s4} - \dot{\rho}_{g1}) + \hat{f}_{s1}\dot{\hat{f}}_{s1}/\eta_1 + (e_{g2}f_{s1} - f_{s1}\dot{\hat{f}}_{s1}/\eta_1) + \hat{f}_{s2}\dot{\hat{f}}_{s2}/\eta_2 + (e_{g2}f_{s2} - f_{s2}\dot{\hat{f}}_{s2}/\eta_2) + \hat{f}_{s3}\dot{\hat{f}}_{s3}/\eta_3 + (e_{g2}f_{s3} - f_{s3}\dot{\hat{f}}_{s3}/\eta_3) + e_{s5}\dot{e}_{s5}/\eta_5 + (O - O^*)^T\dot{O}/\delta_1$ (37)
In accordance with (37), the control propulsion $u_s = \hat{u}_s$ of the AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO can be designed as
$u_s = \hat{u}_s = i_{qs} = d_s^{-1}[e_{g1} - c_{g3}e_{g2} - c_s z_s + \dot{\rho}_{g1} - (\hat{f}_{s1} + \hat{f}_{s2} + \hat{f}_{s3} + \hat{f}_{s4}(O) + u_r)]$ (38)
Substituting (38) into (37), Equation (39) follows as
$\dot{A}_{s2} = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 - e_{g2}[\hat{f}_{s4}(O) + u_r - f_{s4}] - e_{g2}[\hat{f}_{s1} + \hat{f}_{s2} + \hat{f}_{s3}] + \hat{f}_{s1}\dot{\hat{f}}_{s1}/\eta_1 + f_{s1}(e_{g2} - \dot{\hat{f}}_{s1}/\eta_1) + \hat{f}_{s2}\dot{\hat{f}}_{s2}/\eta_2 + (e_{g2}f_{s2} - f_{s2}\dot{\hat{f}}_{s2}/\eta_2) + \hat{f}_{s3}\dot{\hat{f}}_{s3}/\eta_3 + (e_{g2}f_{s3} - f_{s3}\dot{\hat{f}}_{s3}/\eta_3) + e_{s5}\dot{e}_{s5}/\eta_5 + (O - O^*)^T\dot{O}/\delta_1$ (39)
Using Equations (25)–(27) and $e_{s5} = \hat{\sigma}_{s4} - \sigma_{s4}$, Equation (39) can be rewritten as
$\dot{A}_{s2} = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 + e_{g2}[f_{s4} - f_{s4}(O^*)] - e_{g2}[\hat{f}_{s4}(O) - f_{s4}(O^*)] - e_{g2}u_r + e_{s5}\dot{\hat{\sigma}}_{s4}/\eta_5 + (O - O^*)^T\dot{O}/\delta_1 = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 + e_{g2}e_{s4} - e_{g2}(O - O^*)^T H - e_{g2}u_r + (\hat{\sigma}_{s4} - \sigma_{s4})\dot{\hat{\sigma}}_{s4}/\eta_5 + (O - O^*)^T\dot{O}/\delta_1$ (40)
To reach $\dot{A}_{s2} \le 0$, the adaptive law $\dot{O}$ and the reimbursed controller $u_r$, with a reckoned law $\hat{\sigma}_{s4}$ and an adaptive law $\dot{\hat{\sigma}}_{s4}$ for reducing the effect of the uncertainties, are devised as
$\dot{O} = \delta_1\, e_{g2}\, H$ (41)
$u_r = \hat{\sigma}_{s4}\,\mathrm{sgn}(e_{g2})$ (42)
$\dot{\hat{\sigma}}_{s4} = \eta_5\,|e_{g2}|$ (43)
Substituting (41)–(43) into (40) and using $\sigma_{s4} \ge |e_{s4}|$, Equation (40) becomes
$\dot{A}_{s2} = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 + e_{g2}e_{s4} - e_{g2}(O - O^*)^T H - e_{g2}\hat{\sigma}_{s4}\,\mathrm{sgn}(e_{g2}) + (\hat{\sigma}_{s4} - \sigma_{s4})\eta_5|e_{g2}|/\eta_5 + (O - O^*)^T\delta_1 e_{g2} H/\delta_1 = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 + e_{g2}e_{s4} - |e_{g2}|\hat{\sigma}_{s4} + (\hat{\sigma}_{s4} - \sigma_{s4})|e_{g2}| = -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 + e_{g2}e_{s4} - \sigma_{s4}|e_{g2}| \le -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 - |e_{g2}|(\sigma_{s4} - |e_{s4}|) \le -c_{g1}e_{g1}^2 - c_{g3}e_{g2}^2 \le 0$ (44)
Equation (44) shows that $\dot{A}_{s2}(t)$ is negative semi-definite, i.e., $A_{s2}(t) \le A_{s2}(0)$, which means that $e_{g1}$ and $e_{g2}$ are bounded. By Barbalat's lemma, it follows from (29), (30) and (44) that $B_{s1}(t) \to 0$ as $t \to \infty$, i.e., $e_{g1}$ and $e_{g2}$ converge to zero as $t \to \infty$. Hence, the stability of the AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO can be ensured. Its block diagram is displayed in Figure 5.
Furthermore, the guaranteed convergence of the tracking error to zero does not imply that the reckoned value of the unknown external bunched force uncertainty converges to its real value. The persistent excitation condition [29,30] should be satisfied for the reckoned value to converge to its theoretic value.
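The terms that Equation (38) adds on top of the backstepping law, together with the adaptive laws (41)–(43), can be sketched for one control period as follows; the gains $\delta_1$ and $\eta_5$, the 1 ms step and the sample hidden vector $H$ are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Minimal sketch of the pieces Eq. (38) adds on top of the backstepping law:
# the MRRSPNN estimate f_hat_s4 = O^T H of Eq. (34), the weight adaptive
# law (41), the reimbursed controller (42) and the bound adaptive law (43).

delta1, eta5, dt = 0.5, 0.18, 1e-3

def amrrspnnb_terms(e_g2, H, O, sigma_hat):
    """Return (f_hat_s4, u_r, O_new, sigma_hat_new) for one control period."""
    f_hat_s4 = float(O @ H)                             # Eq. (34)
    O_new = O + dt * delta1 * e_g2 * H                  # Eq. (41): O_dot = delta1 * e_g2 * H
    u_r = sigma_hat * np.sign(e_g2)                     # Eq. (42): reimbursed controller
    sigma_hat_new = sigma_hat + dt * eta5 * abs(e_g2)   # Eq. (43)
    return f_hat_s4, u_r, O_new, sigma_hat_new

H = np.array([1.0, 1.3, 1.7, 2.1])                      # hidden vector from the MRRSPNN
O = np.zeros(4)
print(amrrspnnb_terms(e_g2=0.05, H=H, O=O, sigma_hat=0.0))
```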
A training method for the parameters of the MRRSPNN can be derived by using the Lyapunov stability and the gradient descent technique. The DGWO with two adjusted factors is then applied to search for two better learning rates of the MRRSPNN so as to obtain faster convergence. The connecting-weight adaptive law presented in Equation (41) can be represented componentwise as
$\dot{v}_{kj}^2 = \delta_1\, e_{g2}\, y_j^2$ (45)
An objective function that describes the online training procedure of the MRRSPNN is defined by
$H_2 = e_{g2}^2/2$ (46)
The adaptation rule of the connecting weight obtained by the gradient descent technique with the chain rule is represented by
$\dot{v}_{kj}^2 = -\delta_1\,\frac{\partial H_2}{\partial v_{kj}^2} = -\delta_1\,\frac{\partial H_2}{\partial y_k^3}\frac{\partial y_k^3}{\partial ne_k^3}\frac{\partial ne_k^3}{\partial v_{kj}^2} = -\delta_1\,\frac{\partial H_2}{\partial y_k^3}\,y_j^2$ (47)
Comparing Equations (45) and (47) shows that $\partial H_2/\partial y_k^3 = -e_{g2}$. The adaptation rule of the recurrent weight $v_{ik}^1$ obtained by the gradient descent technique with the chain rule is then represented by
$\dot{v}_{ik}^1 = -\delta_2\,\frac{\partial H_2}{\partial y_k^3}\frac{\partial y_k^3}{\partial y_j^2}\frac{\partial y_j^2}{\partial ne_j^2}\frac{\partial ne_j^2}{\partial y_i^1}\frac{\partial y_i^1}{\partial ne_i^1}\frac{\partial ne_i^1}{\partial v_{ik}^1} = \delta_2\, e_{g2}\, v_{kj}^2\, RS_j(\cdot)\, x_i^1(R)\, y_k^3(R-1)$ (48)
where $\delta_2$ denotes the learning rate. To obtain good convergence, the DGWO is applied to search for the two adjustable learning rates of the MRRSPNN. In addition, to improve convergence and to find the two optimal learning rates, the DGWO with two adjusted factors is proposed in this study.
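A minimal sketch of one Euler step of the learning rules (45) and (48) is given below; the signals ($e_{g2}$, $y_j^2$, the Rogers–Szego term, $x_i^1$, the delayed output) are placeholders supplied by the forward pass, and $\delta_1$, $\delta_2$ are the two learning rates that the DGWO of this section tunes.

```python
import numpy as np

# Minimal sketch of one Euler step of the learning rules (45) and (48); the
# inputs are placeholders from the MRRSPNN forward pass, and delta1/delta2
# are the learning rates adjusted by the DGWO.

def update_weights(v2, v1, e_g2, y2, rs_term, x1, y3_prev,
                   delta1=0.5, delta2=0.3, dt=1e-3):
    v2_new = v2 + dt * delta1 * e_g2 * y2                                   # Eq. (45)
    v1_new = v1 + dt * delta2 * e_g2 * (v2 * rs_term).sum() * x1 * y3_prev  # Eq. (48)
    return v2_new, v1_new

v2, v1 = np.zeros(4), np.full(2, 0.1)
y2 = np.array([1.0, 1.2, 1.5, 1.9])
print(update_weights(v2, v1, e_g2=0.05, y2=y2, rs_term=np.ones(4),
                     x1=np.array([0.5, 0.01]), y3_prev=1.0))
```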
In the DGWO, the optimization is guided by the three leaders $\alpha$, $\beta$ and $\rho$. The DGWO update can be denoted by
$G(l_1 + 1) = [G_1(l_1) + G_2(l_1) + G_3(l_1)]/3$ (49)
where $G(l_1 + 1) = [\delta_1\ \delta_2]$ is the vector made up of the two learning rates, and $G_1(l_1)$, $G_2(l_1)$ and $G_3(l_1)$ are denoted by
$G_1(l_1) = |\alpha(l_1) - F_1(l_1)[L_1(l_1)\alpha(l_1) - G(l_1)]|$ (50)
$G_2(l_1) = |\beta(l_1) - F_2(l_1)[L_2(l_1)\beta(l_1) - G(l_1)]|$ (51)
$G_3(l_1) = |\rho(l_1) - F_3(l_1)[L_3(l_1)\rho(l_1) - G(l_1)]|$ (52)
where $\alpha(l_1)$, $\beta(l_1)$ and $\rho(l_1)$ are the three vectors corresponding to the three best solutions, and $F_1(l_1)$, $F_2(l_1)$, $F_3(l_1)$ and $L_1(l_1)$, $L_2(l_1)$, $L_3(l_1)$ are denoted by
$F_1(l_1) = F_2(l_1) = F_3(l_1) = [2 a_1(l_1) b_1(l_1)]\varphi_1$ (53)
$L_1(l_1) = L_2(l_1) = L_3(l_1) = 2\varphi_2$ (54)
where $\varphi_1$ and $\varphi_2$ are two random values. The updated values of the two adjusted factors $a_1(l_1)$ and $b_1(l_1)$ control the tradeoff between exploration and exploitation. The two adjusted factors $a_1(l_1)$ and $b_1(l_1)$ are updated by
$a_1(l_1) = 2 - 2\cos\big(\pi l_1/(2 I_{it1})\big)$ (55)
$b_1(l_1) = 2 - 2\cos\big(\pi l_1/(2 I_{it2})\big)$ (56)
where $l_1$ is the iteration number, and $I_{it1}$ and $I_{it2}$ are the total numbers of iterations allowed for the optimization. Finally, $G(l_1 + 1) = [\delta_1\ \delta_2]$ is the best solution associated with the learning rates $\delta_n(t)$, $n = 1, 2$, of the two weights in the MRRSPNN. Hence, the DGWO with two adjusted factors yields two adjustable learning rates for the two weight sets, searching for their optimal values and speeding up the convergence of the two weights.
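The DGWO update of Equations (49)–(56) can be sketched as follows for the two-dimensional search vector $G = [\delta_1\ \delta_2]$; the fitness function is a stand-in for the tracking-error cost actually measured on the drive, the two iteration limits are assumed equal, and the cosine-adjusted factors follow Equations (55)–(56) as reconstructed here.

```python
import numpy as np

# Minimal sketch of the DGWO update, Eqs. (49)-(56), for G = [delta1, delta2].
# The fitness is a synthetic placeholder; I_it1 = I_it2 = iters is assumed.

rng = np.random.default_rng(0)

def fitness(g):                      # placeholder cost with a made-up optimum near (0.6, 0.4)
    return np.sum((g - np.array([0.6, 0.4]))**2)

def dgwo(n_wolves=8, iters=40, lo=0.0, hi=1.0):
    wolves = rng.uniform(lo, hi, size=(n_wolves, 2))
    for l1 in range(iters):
        order = np.argsort([fitness(w) for w in wolves])
        alpha, beta, rho = wolves[order[:3]]                   # three leaders
        a1 = 2.0 - 2.0 * np.cos(np.pi * l1 / (2.0 * iters))    # Eq. (55)
        b1 = 2.0 - 2.0 * np.cos(np.pi * l1 / (2.0 * iters))    # Eq. (56)
        for i in range(n_wolves):
            phi1, phi2 = rng.random(2), rng.random(2)
            F = (2.0 * a1 * b1) * phi1                         # Eq. (53)
            L = 2.0 * phi2                                     # Eq. (54)
            G1 = np.abs(alpha - F * (L * alpha - wolves[i]))   # Eq. (50)
            G2 = np.abs(beta  - F * (L * beta  - wolves[i]))   # Eq. (51)
            G3 = np.abs(rho   - F * (L * rho   - wolves[i]))   # Eq. (52)
            wolves[i] = np.clip((G1 + G2 + G3) / 3.0, lo, hi)  # Eq. (49)
    return wolves[np.argmin([fitness(w) for w in wolves])]

print("optimized [delta1, delta2] ~", dgwo())
```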

4. Test Results

Figure 1 displays the control block diagram of the DSP-controlled permanent-magnet SLM drive system, and Figure 6 displays a photo of the experimental setup. The DSP control system includes a 2-channel encoder connection port and a 4-channel digital-to-analog converter (DAC). The proposed approach was realized in real time by utilizing the DSP control system. Figure 7 displays the flowchart of the core program and the sub-core interrupt service routine (SCISR) executed by the DSP control system. First, some parameters and input/output (I/O) ports are initialized and the interrupt interval for the SCISR is set. When the interrupt is enabled, the core program is employed to monitor all the data. The SCISR with a 2-ms sampling interval is employed to read the mover position of the permanent-magnet SLM drive system from the encoder and the three-phase currents from the A/D converter, compute the reference model and the position error, perform sin/cos generation and coordinate transformation, execute the AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO, and output the three-phase current commands to the interlocked and isolated circuits that switch the three sets of insulated-gate bipolar transistor (IGBT) power modules of the pulse-width-modulation (PWM) voltage-source inverter. A triangular-wave comparison current-controlled PWM with a switching frequency of 15 kHz was adopted to govern the PWM voltage-source inverter. In addition, the measured bandwidth of the position control loop was about 0.1 kHz and the measured bandwidth of the current control loop was about 1 kHz for the permanent-magnet SLM drive system under the no-load test. To enhance the precision of the sampling signals from the A/D converter, the sampling interval of the control cycle in the tested results was set at 1 ms (1 kHz) so that the DSP control system has enough time to process the control algorithm. Because of the inherent uncertainties in the permanent-magnet SLM drive system (e.g., nonlinear friction, ending effects and time-varying dynamic uncertainties) and the current and voltage output limitations of the DC-bus power from the AC source, the DC-bus power was operated within its maximum current and maximum voltage to avoid burning out the IGBT power modules of the permanent-magnet SLM drive system. Furthermore, to prevent overload operation and burnout of the PWM voltage-source inverter, the permanent-magnet SLM drive system displayed in Figure 1 had three sets of over-current protection circuits, three sets of over-voltage protection circuits and three sets of under-voltage protection circuits. The control goal was to govern the mover to move 6 mm periodically. The reference model was set to unity gain for the sinusoidal reference trajectory.
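The ordering of the SCISR steps can be mimicked by the following structural sketch in ordinary Python; every hardware access and the controller itself are replaced by simple placeholders, so only the flow of one control cycle is illustrated, not the DSP firmware.

```python
import math

# Structural sketch (ordinary Python, not DSP firmware) of the periodic
# control cycle: read position, compute the reference and error, run the
# controller, and apply the command. All quantities below are placeholders.

def reference_model(t):
    """Periodic sinusoidal reference from -3 mm to 3 mm, as in Situations Q3/Q4."""
    return 3.0 * math.sin(2.0 * math.pi * 0.5 * t)

def run_control_cycle(n_steps=2000, dt=1e-3):
    l_rs = 0.0                                   # placeholder mover position (mm)
    e_g1 = 0.0
    for k in range(n_steps):
        t = k * dt
        l_rm = reference_model(t)                # reference trajectory
        e_g1 = l_rm - l_rs                       # position error
        u_s = 0.5 * e_g1                         # placeholder for the Eq. (38) controller
        l_rs += dt * 5.0 * u_s                   # placeholder plant response
    return e_g1

print(f"final tracking error of the placeholder loop: {run_control_cycle():.4f} mm")
```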
To compare the control performance of the three control systems, five situations were demonstrated in the tested results. The three control systems were the well-known PI controller as control system K1, the backstepping control system with three adaptive laws and a beating function as control system K2, and the AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO as control system K3. The five tested situations were as follows. Situation Q1 was the nominal situation under a periodic step command from 0 mm to 6 mm. Situation Q2 had the parameter variations, the cogging force, the Coulomb friction force and the Stribeck effect force at 4 times their nominal values under a periodic step command from 0 mm to 6 mm. Situation Q3 was the nominal situation under a periodic sinusoidal command from −3 mm to 3 mm. Situation Q4 had the parameter variations, the cogging force, the Coulomb friction force and the Stribeck effect force at 4 times their nominal values under a periodic sinusoidal command from −3 mm to 3 mm. Situation Q5 was the addition of a step load-force disturbance $F_{ls} = 2$ N at the 6 mm position.
To reach good steady-state and transient responses, the gains of the well-known PI controller (control system K1) were set as $k_{sp} = 4.1$ and $k_{si} = k_{sp}/T_{si} = 1.5$ via heuristic knowledge [32,33,34] on the tuning of the PI controller at the nominal situation for position tracking.
The tested results obtained with control system K1 for governing the permanent-magnet SLM drive system at Situations Q1 and Q2 are shown in Figure 8. The mover position responses at Situations Q1 and Q2 are shown in Figure 8a,c, respectively; the associated control propulsion responses in Figure 8b,d, respectively; and the mover velocity responses in Figure 8e,f, respectively. The tested results obtained with control system K1 at Situations Q3 and Q4 are shown in Figure 9. The mover position responses at Situations Q3 and Q4 are shown in Figure 9a,c, respectively; the associated control propulsion responses in Figure 9b,d, respectively; and the mover velocity responses in Figure 9e,f, respectively. Good position-tracking responses were obtained with control system K1 at Situations Q1 and Q3, as displayed in Figure 8a and Figure 9a, respectively. However, the degraded tracking responses displayed in Figure 8c and Figure 9c are very evident owing to the large nonlinear disturbance. From these tested results, the sluggish tracking responses obtained with control system K1 for governing the permanent-magnet SLM drive system are clear. Without adequate gain adjustment, control system K1 has inferior robustness under large nonlinear disturbances.
The parameters of the backstepping control system with three adaptive laws and a beating function (control system K2) were set as $c_{g1} = 2.4$, $c_{g2} = 2.5$, $c_{g3} = 2.3$ and $\gamma_1 = 8.2$ according to heuristic knowledge [6,7,10] in the nominal situation for position tracking under the periodic step command from 0 mm to 6 mm, so as to reach good steady-state and transient responses. The tested results obtained with control system K2 at Situations Q1 and Q2 are shown in Figure 10, and those at Situations Q3 and Q4 are shown in Figure 11. The mover position responses at Situations Q1 and Q2 are shown in Figure 10a,c, respectively; the associated control propulsion responses in Figure 10b,d, respectively; and the mover velocity responses in Figure 10e,f, respectively. The mover position responses at Situations Q3 and Q4 are shown in Figure 11a,c, respectively; the associated control propulsion responses in Figure 11b,d, respectively; and the mover velocity responses in Figure 11e,f, respectively. Good position-tracking responses were obtained with control system K2 at Situations Q1 and Q3, as displayed in Figure 10a and Figure 11a. Meanwhile, the fine tracking responses at Situations Q2 and Q4 displayed in Figure 10c and Figure 11c are evident under the large nonlinear disturbance. From these tested results, good position-tracking responses can be obtained with control system K2 at Situations Q1, Q2, Q3 and Q4. However, the large upper bound used in the beating function causes severe chattering in the control propulsion. Moreover, this chattering in the control propulsion penetrates into the bearing mechanism and may excite unstable dynamics.
To demonstrate the usefulness of the control system with a minimal number of neurons, the MRRSPNN adopts 2, 4 and 1 neurons in the first layer, the second layer and the third layer, respectively. The gains and parameters of the proposed AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO (control system K3) were set as $c_{g1} = 2.4$, $c_{g2} = 2.5$, $c_{g3} = 2.3$, $\eta_5 = 0.18$ and $\mu = 0.2$ through heuristic knowledge [6,7,10] in the nominal situation for position tracking under the periodic step command from 0 mm to 6 mm, so as to reach good steady-state and transient responses. The parameter initialization of the MRRSPNN followed [35] to obtain good initial parameters. In addition, the parameter-adjustment process remained continually active throughout the tested duration.
The tested results obtained with control system K3 for governing the permanent-magnet SLM drive system at Situations Q1 and Q2 are shown in Figure 12, and those at Situations Q3 and Q4 are shown in Figure 13. The mover position responses at Situations Q1 and Q2 are shown in Figure 12a,c, respectively; the associated control propulsion responses in Figure 12b,d, respectively; and the mover velocity responses in Figure 12e,f, respectively. The mover position responses at Situations Q3 and Q4 are shown in Figure 13a,c, respectively; the associated control propulsion responses in Figure 13b,d, respectively; and the mover velocity responses in Figure 13e,f, respectively. Graceful tracking responses were obtained with control system K3 at Situations Q1 and Q3, as displayed in Figure 12a and Figure 13a. Moreover, the excellent tracking responses at Situations Q2 and Q4 shown in Figure 12c and Figure 13c are very conspicuous under the large nonlinear interferences. From these tested results, better position-tracking responses are obtained with control system K3 for governing the permanent-magnet SLM drive system at Situations Q1, Q2, Q3 and Q4.
Furthermore, the vibration in the control propulsion was greatly reduced with control system K3 at Situations Q1, Q2, Q3 and Q4, as displayed in Figure 12b,d and Figure 13b,d. The robustness of the control performance of control system K3 under the occurrence of parameter variations for the tracking of periodic trajectories is evident, owing to the online adaptive tuning of the MRRSPNN. From the tested results, control system K3 provides better control performance than control system K2.
Finally, the tested results for the mover position-regulating responses under the added step load-force disturbance $F_{ls} = 2$ N at 6 mm (Situation Q5) are shown in Figure 14. The mover position-regulating responses at Situation Q5 obtained with control systems K1, K2 and K3 are shown in Figure 14a–c, respectively. From these tested results, the load-force regulation achieved with control system K3 has a better transient response than that achieved with control systems K1 and K2. The robustness of the control performance of control system K3 for governing the permanent-magnet SLM drive system under parameter disturbance and load-force interference is excellent, owing to the online adaptive adjustment of the MRRSPNN with DGWO. From these tested results, the control performance of control system K3 is superior to that of control systems K1 and K2.
In addition, some comparative performances of control systems K1, K2 and K3 for the five tested situations are summarized in Table 1. The maximum errors of $e_{g1}$ obtained with control systems K1, K2 and K3 at Situation Q1 are 0.55 mm, 0.32 mm and 0.18 mm, respectively, and the root mean square errors of $e_{g1}$ are 0.31 mm, 0.21 mm and 0.08 mm, respectively. At Situation Q2, the maximum errors of $e_{g1}$ are 0.73 mm, 0.41 mm and 0.21 mm, and the root mean square errors are 0.41 mm, 0.31 mm and 0.11 mm, respectively. At Situation Q3, the maximum errors of $e_{g1}$ are 0.53 mm, 0.31 mm and 0.18 mm, and the root mean square errors are 0.31 mm, 0.21 mm and 0.08 mm, respectively. At Situation Q4, the maximum errors of $e_{g1}$ are 0.72 mm, 0.40 mm and 0.20 mm, and the root mean square errors are 0.40 mm, 0.30 mm and 0.10 mm, respectively. At Situation Q5, the maximum errors of $e_{g1}$ are 0.79 mm, 0.42 mm and 0.20 mm, and the root mean square errors are 0.56 mm, 0.28 mm and 0.11 mm, respectively. As displayed in the table, control system K3 produces smaller tracking errors than control systems K1 and K2. In light of the tabulated measurements, control system K3 indeed yields the most beneficial control performance.
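For completeness, the two metrics reported in Table 1 can be computed from a logged tracking-error sequence as in the following sketch; the error trace below is synthetic, not the measured data.

```python
import numpy as np

# Minimal sketch of the two Table 1 metrics computed from a logged tracking
# error e_g1[k]; the sample trace is synthetic.

def error_metrics(e_g1):
    e = np.asarray(e_g1, dtype=float)
    return np.max(np.abs(e)), np.sqrt(np.mean(e**2))   # maximum error, RMS error

e_log = 0.2 * np.sin(np.linspace(0.0, 6.28, 500))      # synthetic error trace in mm
max_e, rms_e = error_metrics(e_log)
print(f"maximum error = {max_e:.2f} mm, RMS error = {rms_e:.2f} mm")
```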
Furthermore, the feature performance comparisons of control systems K1, K2 and K3 obtained from the tested results are summarized in Table 2. In Table 2, the feature performances regarding the vibration in the control propulsion, the dynamic response, the regulation ability for load-force disturbance, the convergent speed, the position tracking error and the rejection ability for parameter disturbance of control system K3 are superior to those of control systems K1 and K2.

5. Conclusions

The AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO was proposed for governing a permanent-magnet SLM drive system under parameter disturbance and load-force interference for the tracking of periodic trajectories.
The significant contributions are as follows: (1) the FOC was successfully used for governing the permanent-magnet SLM drive system; (2) the backstepping control system with three adaptive laws and a beating function was successfully derived by means of the Lyapunov function to decrease the influence of the external bunched force uncertainty and the load-force interferences; (3) the proposed AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO, which reckons the external bunched force uncertainty and reimburses the reckoned error, was successfully derived by means of the Lyapunov function to reduce the external bunched force uncertainty and the load-force interferences; (4) the two optimal learning rates of the MRRSPNN were successfully regulated by utilizing the DGWO algorithm to speed up the convergence of the parameters. Furthermore, from Table 1 and Table 2, the proposed AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO had better control performance than the well-known PI controller and the backstepping control system with three adaptive laws and a beating function in various feature performances, such as the vibration in the control propulsion, the convergent speed, the position tracking error, the dynamic response, the regulation ability for load-force disturbance and the rejection ability for parameter disturbance.

Author Contributions

Data curation, S.-C.L.; formal analysis, C.-T.C.; investigation, D.-F.C.; methodology, J.-C.T.; software, Y.-C.S.; writing—original draft, D.-F.C.; writing—review & editing, J.-C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Boldea, I.; Nasar, S.A. Linear Electric Actuators and Generators; Cambridge University Press: London, UK, 1997.
2. Egami, T.; Tsuchiya, T. Disturbance suppression control with preview action of linear DC brushless motor. IEEE Trans. Ind. Electron. 1995, 42, 494–500.
3. Sanada, M.; Morimoto, S.; Takeda, Y. Interior permanent magnet linear synchronous motor for high-performance drives. IEEE Trans. Ind. Appl. 1997, 33, 966–972.
4. Kanellakopoulos, I.; Kokotovic, P.V.; Morse, A.S. Systematic design of adaptive controller for feedback linearizable system. IEEE Trans. Autom. Control 1992, 36, 1241–1253.
5. Krstic, M.; Kokotovic, P.V. Adaptive nonlinear design with controller-identifier separation and swapping. IEEE Trans. Autom. Control 1995, 40, 426–440.
6. Stotsky, A.; Hedrick, J.K.; Yip, P.P. The use of sliding modes to simplify the backstepping control method. In Proceedings of the American Control Conference, Albuquerque, NM, USA, 1–3 June 1997; pp. 1703–1708.
7. Bartolini, G.; Ferrara, A.; Giacomini, L.; Usai, E. Properties of a combined adaptive/second-order sliding mode control algorithm for some classes of uncertain nonlinear systems. IEEE Trans. Autom. Control 2000, 45, 1334–1341.
8. Guo, C.; Zhang, A.; Zhang, H.; Zhang, L. Adaptive backstepping control with online parameter estimator for a plug-and-play parallel converter system in a power switcher. Energies 2018, 11, 3528.
9. Yang, C.; Yang, F.; Xu, D.; Huang, X.; Zhang, D. Adaptive command-filtered backstepping control for virtual synchronous generators. Energies 2019, 12, 2681.
10. Ting, J.C.; Chen, D.F. Nonlinear backstepping control of SynRM drive systems using reformed recurrent Hermite polynomial neural networks with adaptive law and error estimated law. J. Power Electron. 2018, 8, 1380–1397.
11. Ko, E.; Park, J. Diesel mean value engine modeling based on thermodynamic cycle simulation using artificial neural network. Energies 2019, 12, 2823.
12. Bagheri, H.; Behrang, M.; Assareh, E.; Izadi, M.; Sheremet, M.A. Free convection of hybrid nanofluids in a C-shaped chamber under variable heat flux and magnetic field: Simulation, sensitivity analysis, and artificial neural networks. Energies 2019, 12, 2807.
13. Noureddine, B.; Djamel, B.; Vicente, F.B.; Fares, B.; Boualam, B.; Bachir, B. Maximum power point tracker based on fuzzy adaptive radial basis function neural network for PV-system. Energies 2019, 12, 2827.
14. Lee, D.; Kim, K. Recurrent neural network-based hourly prediction of photovoltaic power output using meteorological information. Energies 2019, 12, 215.
15. Chen, Y.; Wang, Y.; Ma, J.; Jin, Q. BRIM: An accurate electricity spot price prediction scheme-based bidirectional recurrent neural network and integrated market. Energies 2019, 12, 2241.
16. Li, G.; Wang, H.; Zhang, S.; Xin, J.; Liu, H. Recurrent neural networks based photovoltaic power forecasting approach. Energies 2019, 12, 2538.
17. Han, L.; Jiao, X.; Zhang, Z. Recurrent neural network-based adaptive energy management control strategy of plug-in hybrid electric vehicles considering battery aging. Energies 2020, 13, 202.
18. Lin, C.H. Comparative dynamic control for continuously variable transmission with nonlinear uncertainty using blend amend recurrent Gegenbauer-functional-expansions neural network. Nonlinear Dyn. 2017, 87, 1467–1493.
19. Ting, J.C.; Chen, D.F. Novel mingled reformed recurrent Hermite polynomial neural network control system applied in continuously variable transmission system. J. Mech. Sci. Technol. 2018, 32, 4399–4412.
20. Lin, C.H.; Chang, K.T. Admixed recurrent Gegenbauer polynomials neural network with mended particle swarm optimization control system for synchronous reluctance motor driving continuously variable transmission system. Proc. IMechE Part I J. Syst. Control Eng. 2020, 234, 183–198.
21. Szego, G. Beitrag zur theorie der thetafunktionen. Sitz. Preuss. Akad. Wiss. Phys. Math. Kl. 1926, 19, 242–252.
22. Emary, E.; Yamany, W.; Hassanien, A.E.; Snasel, V. Multi-objective gray-wolf optimization for attribute reduction. Procedia Comput. Sci. 2015, 1, 623–632.
23. Mosavi, M.; Khishe, M.; Ghamgosar, A. Classification of sonar data set using neural network trained by gray wolf optimization. Neural Netw. World 2016, 26, 393–415.
24. Khandelwal, A.; Bhargava, A.; Sharma, A.; Sharma, H. Modified grey wolf optimization algorithm for transmission network expansion planning problem. Arab. J. Sci. Eng. 2018, 43, 2899–2908.
25. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
26. Sultana, U.; Khairuddin, A.B.; Mokhtar, A.S.; Zareen, N.; Sultana, B. Grey wolf optimizer based placement and sizing of multiple distributed generation in the distribution system. Energy 2016, 111, 525–536.
27. Parsian, A.; Ramezani, M.; Ghadimi, N. A hybrid neural network-gray wolf optimization algorithm for melanoma detection. Biomed. Res. 2017, 28, 3408–3411.
28. Duangjai, J.; Pongsak, P. Grey wolf algorithm with borda count for feature selection in classification. In Proceedings of the 3rd International Conference on Control and Robotics Engineering (ICCRE), Nagoya, Japan, 20–23 April 2018; pp. 238–242.
29. Astrom, K.J.; Wittenmark, B. Adaptive Control; Addison-Wesley: New York, NY, USA, 1995.
30. Slotine, J.J.E.; Li, W. Applied Nonlinear Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991.
31. Gasper, G.; Rahman, M. Encyclopedia of Mathematics and its Applications, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004.
32. Astrom, K.J.; Hagglund, T. PID Controller: Theory, Design, and Tuning; Instrument Society of America: Research Triangle Park, NC, USA, 1995.
33. Hagglund, T.; Astrom, K.J. Revisiting the Ziegler-Nichols tuning rules for PI control. Asian J. Control 2002, 4, 364–380.
34. Hagglund, T.; Astrom, K.J. Revisiting the Ziegler-Nichols tuning rules for PI control-part II: The frequency response method. Asian J. Control 2004, 6, 469–482.
35. Lewis, F.L.; Campos, J.; Selmic, R. Neuro-Fuzzy Control of Industrial Systems with Actuator Nonlinearities; SIAM Frontiers in Applied Mathematics: Philadelphia, PA, USA, 2002.
Figure 1. Formation of DSP FOC permanent-magnet synchronous linear motor (SLM) drive system.
Figure 2. Block diagram of understood control system.
Figure 3. Block diagram of the backstepping control system with three adaptive laws and a beating function.
Figure 4. Constitution of the MRRSPNN.
Figure 5. Block diagram of the AMRRSPNNB control system with three adaptive laws and reimbursed controller with DGWO.
Figure 6. A photo of the experimental structure.
Figure 7. Flowchart of the enforcing program utilized by the DSP control system.
Figure 8. Tested results utilized by the control system K1: (a) mover position response at Situation Q1, (b) control propulsion response at Situation Q1, (c) mover position response at Situation Q2, (d) control propulsion response at Situation Q2, (e) mover velocity response at Situation Q1, (f) mover velocity response at Situation Q2.
Figure 9. Tested results utilized by the control system K1: (a) mover position response at Situation Q3, (b) control propulsion response at Situation Q3, (c) mover position response at Situation Q4, (d) control propulsion response at Situation Q4, (e) mover velocity response at Situation Q3, (f) mover velocity response at Situation Q4.
Figure 10. Tested results utilized by the control system K2: (a) mover position response at Situation Q1, (b) control propulsion response at Situation Q1, (c) mover position response at Situation Q2, (d) control propulsion response at Situation Q2, (e) mover velocity response at Situation Q1, (f) mover velocity response at Situation Q2.
Figure 11. Tested results utilized by the control system K2: (a) mover position response at Situation Q3, (b) control propulsion response at Situation Q3, (c) mover position response at Situation Q4, (d) control propulsion response at Situation Q4, (e) mover velocity response at Situation Q3, (f) mover velocity response at Situation Q4.
Figure 12. Tested results utilized by the control system K3: (a) mover position response at Situation Q1, (b) control propulsion response at Situation Q1, (c) mover position response at Situation Q2, (d) control propulsion response at Situation Q2, (e) mover velocity response at Situation Q1, (f) mover velocity response at Situation Q2.
Figure 13. Tested results utilized by the control system K3: (a) mover position response at Situation Q3, (b) control propulsion response at Situation Q3, (c) mover position response at Situation Q4, (d) control propulsion response at Situation Q4, (e) mover velocity response at Situation Q3, (f) mover velocity response at Situation Q4.
Figure 14. Tested results at Situation Q5: (a) mover position-regulating response utilized by the control system K1, (b) mover position-regulating response utilized by the control system K2, (c) mover position-regulating response utilized by the control system K3.
Table 1. Performance comparison of control systems.

Control System K1
Performance                          Situation Q1   Situation Q2   Situation Q3   Situation Q4   Situation Q5
Maximum error of e_g1                0.55 mm        0.73 mm        0.53 mm        0.72 mm        0.79 mm
Root mean square error of e_g1       0.31 mm        0.41 mm        0.31 mm        0.40 mm        0.56 mm

Control System K2
Performance                          Situation Q1   Situation Q2   Situation Q3   Situation Q4   Situation Q5
Maximum error of e_g1                0.32 mm        0.41 mm        0.31 mm        0.40 mm        0.42 mm
Root mean square error of e_g1       0.21 mm        0.31 mm        0.21 mm        0.30 mm        0.28 mm

Control System K3
Performance                          Situation Q1   Situation Q2   Situation Q3   Situation Q4   Situation Q5
Maximum error of e_g1                0.18 mm        0.21 mm        0.18 mm        0.20 mm        0.20 mm
Root mean square error of e_g1       0.08 mm        0.11 mm        0.08 mm        0.10 mm        0.11 mm
Table 2. Feature performance comparisons of control systems.

Feature Performance                                Control System K1                     Control System K2                     Control System K3
Vibration value in control propulsion              Small (10% of nominal value at Q2)    Large (85% of nominal value at Q2)    Smaller (8% of nominal value at Q2)
Dynamic response                                   Slow (rising time 0.14 s at Q2)       Fast (rising time 0.10 s at Q2)       Faster (rising time 0.05 s at Q2)
Regulation capability for load force disturbance   Poor (maximum error 0.79 mm at Q5)    Good (maximum error 0.42 mm at Q5)    Better (maximum error 0.20 mm at Q5)
Convergent speed                                   Slow (settling time 0.05 s at Q2)     Fast (settling time 0.03 s at Q2)     Faster (settling time 0.01 s at Q2)
Position tracking error                            Large (maximum error 0.73 mm at Q2)   Middle (maximum error 0.41 mm at Q2)  Small (maximum error 0.21 mm at Q2)
Rejection ability for parameter disturbance        Poor (maximum error 0.72 mm at Q4)    Good (maximum error 0.40 mm at Q4)    Better (maximum error 0.20 mm at Q4)
Learning rate                                      None                                  None                                  Varied (two optimal learning rates)
