Article

Statistical Information Based Single Neuron Adaptive Control for Non-Gaussian Stochastic Systems

1 School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
2 State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources, North China Electric Power University, Beijing 102206, China
* Author to whom correspondence should be addressed.
Entropy 2012, 14(7), 1154-1164; https://doi.org/10.3390/e14071154
Submission received: 3 May 2012 / Revised: 22 June 2012 / Accepted: 27 June 2012 / Published: 2 July 2012

Abstract

Based on information theory, the single neuron adaptive control problem for stochastic systems with non-Gaussian noises is investigated in this paper. Here, the statistical information of the output within a receding window, rather than the output value itself, is used for the tracking problem. Firstly, the single neuron controller structure, which has the ability of self-learning and self-adaptation, is established. Then, an improved performance criterion is given to train the weights of the single neuron. Furthermore, the mean-square convergence condition of the proposed control algorithm is formulated. Finally, comparative simulation results are presented to show that the proposed algorithm is superior to the PID controller. The contributions of this work are twofold: (1) the optimal control algorithm is formulated in a data-driven framework that does not require a precise system model, which is usually difficult to obtain; (2) the control problem of non-Gaussian systems can be effectively handled by a simple single neuron controller under an improved minimum entropy criterion.

1. Introduction

As a simple and effective controller, the proportional-integral-derivative (PID) controller, which offers strong robustness, high reliability, good dynamic response and so on, has been widely used in industrial control systems. However, the parameters of the traditional PID controller cannot be adjusted adaptively, so satisfactory control quality often cannot be obtained in the presence of the inevitable nonlinearity and disturbances of real processes. Motivated by the structure and function of cerebral neural networks, artificial neural networks (ANNs) were originally developed by McCulloch and Pitts in 1943. Because of their capability for nonlinear adaptive information processing, ANNs have been researched extensively by scholars and engineers to deal with modern complex nonlinear systems [1,2,3]. It is well known that most multi-layer neural networks adopt S-type (sigmoid) activation functions, which increase the amount of computation. In order to meet the requirements of fast neural control processes, the single neuron adaptive control strategy was proposed in [4]. Besides the ability of self-learning and self-adaptation, the single neuron adaptive control strategy offers real-time capability due to its rapid numerical calculation. Because of its simple structure, the model-free adaptive control system formed by a single neuron has been studied widely, and results on learning algorithms and applications can be found in, e.g., [5,6].
Stochastic models are reasonable descriptions of practical industrial processes because of the randomness inevitably involved in them. Under the assumption that the noises obey a Gaussian distribution, some mature control strategies have been established, such as minimum variance control and linear quadratic Gaussian (LQG) control. However, the disturbances in practical systems are not necessarily Gaussian; moreover, nonlinearity in stochastic control systems can lead to non-Gaussian randomness even if the disturbances themselves obey a Gaussian distribution. For this case, the stochastic distribution control (SDC) method was proposed by Wang [7], where the shape of the output probability density function (PDF), rather than the output itself, is considered. Recently, a linear-matrix-inequality (LMI)-based convex optimization algorithm was used for output PDF control, filtering and fault detection of non-Gaussian systems by Guo and Wang in [8], where a B-spline expansion was utilized to approximate the output PDF. As such, the PDF is characterized by the weights of the selected B-splines, and accordingly the evolution of the PDF can be represented by the state equation of the weights [9,10,11]. However, not all output PDFs are measurable. Several new methods were introduced to overcome this problem by characterizing the closed-loop performance through the entropy of the output tracking error [12,13,14,15]. Since exact models are difficult to obtain in practical industrial processes, it is a promising approach to use only the statistical information contained in system input and output data. The estimation of the α-order Renyi entropy directly from on-line output data was presented in [16,17,18]. In [19], using this data-driven method, a novel run-to-run control methodology was proposed for semiconductor processes with uncertain metrology delay.
Motivated by the above results, in this paper a single neuron adaptive controller, which is model-free, is developed for stochastic systems with non-Gaussian disturbances. The proposed control strategy is based on an improved minimum entropy criterion for the closed-loop stochastic system. Our contributions are twofold. First, we present a simple control algorithm that does not require an exact system model. Second, an improved non-Gaussian performance index is formulated to train the weights of the single neuron. With the optimal weights, the control algorithm drives the tracking error toward zero with small dispersion.
The remainder of the paper is organized as follows: the structure of the single neuron controller is illustrated in Section 2. In Section 3, an improved performance index is introduced, comprising the information potential of the closed-loop tracking error, the mean value of the squared tracking error and constraints on the control input energy; the index is then evaluated using a non-parametric estimate of the tracking-error PDF. By minimizing the proposed performance index with the gradient descent method, the weights of the single neuron are updated in Section 4, where the mean-square convergence condition of the proposed control algorithm is also formulated. Comparative simulation results are given in Section 5 to illustrate the efficiency and validity of the proposed method.

2. Structure of Single Neuron Adaptive Controller

The general scheme of the single neuron adaptive control system is illustrated as Figure 1.
Figure 1. The schematic diagram of a single neuron adaptive control system.
In Figure 1, $\theta$ is the non-Gaussian random disturbance acting on the system, $r(k)$ and $y(k)$ are the set-point and the controlled variable respectively, and $e(k) = r(k) - y(k)$ is the tracking error of the closed-loop control system. The control input $u(k)$ can be expressed as:
$$u(k) = u(k-1) + \frac{K}{\Sigma}\sum_{l=1}^{3}\omega_l(k)\,x_l(k) \tag{1}$$
where $\Sigma = \sum_{l=1}^{3}\omega_l(k)$ and $K > 0$ is the proportional coefficient of the neuron. $\omega_l(k)\ (l=1,2,3)$ stands for the weight corresponding to each input, and $x_l(k)\ (l=1,2,3)$ is the input of the neuron derived from the tracking error $e(k)$, defined as follows:
$$\begin{cases} x_1(k) = e(k) \\ x_2(k) = e(k) - e(k-1) \\ x_3(k) = e(k) - 2e(k-1) + e(k-2) \end{cases} \tag{2}$$
Due to the non-Gaussian disturbance $\theta$ involved in the system, it is necessary to adopt an appropriate performance index that characterizes the randomness of the tracking error. The main task is therefore to train the weights by minimizing this performance index, so as to drive the tracking error toward zero with small randomness, as sketched in the example below.
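The following is a minimal, illustrative Python sketch of the controller structure in Equations (1) and (2); the class name, the gain value and the initial weights are assumptions made for the example, not values given in the paper.

```python
import numpy as np

class SingleNeuronController:
    """Minimal sketch of the incremental single neuron control law,
    Equations (1)-(2). Names and default values are illustrative only."""

    def __init__(self, K=0.5, w_init=(0.1, 0.1, 0.1)):
        self.K = K                        # proportional coefficient K > 0
        self.w = np.array(w_init, float)  # weights w_1, w_2, w_3
        self.e_hist = [0.0, 0.0]          # e(k-1), e(k-2)
        self.u_prev = 0.0                 # u(k-1)

    def neuron_inputs(self, e):
        """x_l(k) built from the tracking error, Equation (2)."""
        e1, e2 = self.e_hist
        return np.array([e, e - e1, e - 2.0 * e1 + e2])

    def control(self, r, y):
        """One control step, Equation (1)."""
        e = r - y                          # tracking error e(k) = r(k) - y(k)
        x = self.neuron_inputs(e)
        sigma = np.sum(self.w)             # Sigma = sum_l w_l(k)
        u = self.u_prev + self.K * np.dot(self.w, x) / sigma
        self.e_hist = [e, self.e_hist[0]]  # shift stored errors for the next step
        self.u_prev = u
        return u, e, x
```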

3. Improved Minimum Entropy Criterion

Since the disturbances in most control systems are not necessarily Gaussian, the variance of the output tracking error is not sufficient to characterize the performance of the tracking-error dynamics. Therefore, an alternative measure of uncertainty, entropy, is used to construct a performance index that measures the dispersion of the stochastic system. Ideally, a small entropy of the tracking error means that the error has a narrow and sharp PDF. Owing to the computational efficiency of its non-parametric estimator, Renyi's quadratic entropy ($H_\alpha$ with $\alpha = 2$) is selected here. It is defined as follows:
$$H_k = -\log \int_{-\infty}^{+\infty} \gamma_{e_k}^2(z)\,dz = -\log V_k \tag{3}$$
where $V_k = \int_{-\infty}^{+\infty} \gamma_{e_k}^2(z)\,dz$ is defined as the information potential [17]. It is clear from Equation (3) that Renyi's quadratic entropy is a monotonically decreasing function of the information potential. Hence, in order to reduce computational complexity, the inverse of the quadratic information potential is used instead of Renyi's entropy.
In order to make the tracking error approach zero, its mean value should also be considered. Furthermore, since the tracking error may be positive or negative, the mean value of the squared error is included in the performance index:
$$E(e_k^2) = \int_{-\infty}^{+\infty} z^2\,\gamma_{e_k}(z)\,dz \tag{4}$$
Since the control inputs of practical control systems are always constrained by the physical limitations of actuators, the magnitude of the control input should be penalized as well. Altogether, the performance function is formulated as the weighted sum of the inverse of the information potential of the tracking error, the mean squared tracking error and the constraint on the control input:
$$J_k = R_1\frac{1}{V_k} + R_2\,E(e_k^2) + \frac{1}{2}R_3\,u_k^2 \tag{5}$$
In order to calculate the performance index, it is necessary to estimate the PDF $\gamma_{e_k}$ of the closed-loop tracking error from the error sequence $\{e_1, e_2, \ldots, e_N\}$ within a receding window of width $N$. The PDF of the tracking error can be estimated by the Parzen windowing technique:
$$\hat{\gamma}_{e_k}(x) = \frac{1}{N}\sum_{i=1}^{N}\kappa(x - e_i, \sigma^2) \tag{6}$$
where $\kappa(x, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)$ is the Gaussian kernel function, and $\sigma^2$ and $N$ denote the kernel variance and the total number of data, respectively. Thus the quadratic Renyi entropy can be obtained as follows:
$$H(e(k)) = -\log\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa(e_i - e_j, 2\sigma^2) \tag{7}$$
Subsequently, the quadratic information potential and the mathematical expectation of the tracking error can be formulated from samples within the receding window:
$$V(e(k)) = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa(e_i - e_j, 2\sigma^2) \tag{8}$$
$$E(e^2(k)) = \frac{1}{N}\sum_{i=1}^{N}e_i^2 \tag{9}$$
Finally, the performance index can be calculated in the following way:
$$J(k) = R_1\frac{1}{V(e(k))} + R_2\,E(e^2(k)) + \frac{1}{2}R_3\,u^2(k) \approx R_1\left[\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\kappa(e_i - e_j, 2\sigma^2)\right]^{-1} + R_2\frac{1}{N}\sum_{i=1}^{N}e_i^2 + \frac{1}{2}R_3\,u^2(k) \tag{10}$$
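As a concrete illustration of Equations (6)-(10), the following Python sketch estimates the information potential with a Parzen window over a receding window of errors and assembles the performance index. The function names, the kernel width sigma and the default weights (borrowed from the numerical example in Section 5) are assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

def gaussian_kernel(x, var):
    """Gaussian kernel kappa(x, var) with variance `var`, as in Equation (6)."""
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def information_potential(errors, sigma=0.5):
    """Quadratic information potential V(e(k)), Equation (8)."""
    e = np.asarray(errors, float)
    diff = e[:, None] - e[None, :]            # all pairwise differences e_i - e_j
    return gaussian_kernel(diff, 2.0 * sigma**2).mean()

def performance_index(errors, u, R1=0.98, R2=0.01, R3=0.01, sigma=0.5):
    """Performance index J(k), Equation (10)."""
    e = np.asarray(errors, float)
    V = information_potential(e, sigma)
    mse = np.mean(e**2)                       # Equation (9)
    return R1 / V + R2 * mse + 0.5 * R3 * u**2
```

Because $V$ appears in the denominator, a larger information potential (a more concentrated error PDF) lowers the index, which is exactly the behaviour rewarded by the minimum entropy criterion.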

4. Optimal Control Algorithm and Stability Analysis

4.1. Computation of Control Input

The purpose of the single neuron adaptive stochastic distribution controller design is to find a set of optimal weighting coefficients $\omega_l(k)\ (l=1,2,3)$ that minimizes the performance index in Equation (10). For the controller considered in this paper, this is a nonlinear optimization problem that can be solved by the gradient descent algorithm:
$$\Delta\omega_l(k) = \omega_l(k+1) - \omega_l(k) = -\eta_l\frac{\partial J(k)}{\partial\omega_l(k)} \tag{11}$$
$$\frac{\partial J(k)}{\partial\omega_l} = \frac{R_1}{2V^2(e(k))N^2\sigma^2}\sum_{i=1}^{N}\sum_{j=1}^{N}(e_i - e_j)\,\kappa(e_i - e_j, 2\sigma^2)\left(\frac{\partial e_i}{\partial\omega_l} - \frac{\partial e_j}{\partial\omega_l}\right) + \frac{2R_2}{N}\sum_{i=1}^{N}e_i\frac{\partial e_i}{\partial\omega_l} + R_3\,u(k)\frac{\partial u(k)}{\partial\omega_l} \tag{12}$$
$$\frac{\partial e(k)}{\partial\omega_l} = -\frac{\partial y(k)}{\partial\omega_l} = -\frac{\partial y(k)}{\partial u(k)}\frac{\partial u(k)}{\partial\omega_l} = -\frac{\partial y(k)}{\partial u(k)}\frac{\partial\Delta u(k)}{\partial\omega_l} \tag{13}$$
where $\eta_l\ (l=1,2,3)$ is the learning factor. The Jacobian information $\frac{\partial y(k)}{\partial u(k)}$ can be replaced by $\operatorname{sgn}\!\left(\frac{\partial y(k)}{\partial u(k)}\right)$ or calculated by a model prediction algorithm of the plant; the influence of this approximation can be compensated by adjusting the learning rate. Since $\Sigma$ changes slowly, it can be regarded as a constant in the derivation. Therefore, we have:
$$\frac{\partial\Delta u(k)}{\partial\omega_l} = \begin{cases} e(k), & l = 1 \\ e(k) - e(k-1), & l = 2 \\ e(k) - 2e(k-1) + e(k-2), & l = 3 \end{cases} \tag{14}$$
The weights $\omega_l\ (l=1,2,3)$ can thus be updated by Equations (11)-(14), and the optimal control action is then obtained from Equation (1).
Remark. Denote $W = [\omega_1\ \omega_2\ \omega_3]^T$ and $\eta = \operatorname{diag}\{\eta_1, \eta_2, \eta_3\}$; then the weight vector is updated by:
$$W(k+1) = W(k) - \eta\frac{\partial J(k)}{\partial W(k)} = W(k) - \eta\,\nabla J(k) \tag{15}$$
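A possible numerical realization of the update in Equations (11)-(15) is sketched below. The array `de_dw` holds the sensitivities of the windowed errors with respect to the weights, which in practice would be obtained from Equations (13)-(14) with the sgn approximation of the plant Jacobian; all names and default values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def grad_J(errors, de_dw, u, du_dw, R1=0.98, R2=0.01, R3=0.01, sigma=0.5):
    """Gradient of J(k) with respect to the three weights, Equation (12).
    errors: length-N error window; de_dw: (N, 3) array of d e_i / d w_l;
    du_dw: length-3 vector d u(k) / d w_l from Equation (14)."""
    e = np.asarray(errors, float)
    N = len(e)
    diff = e[:, None] - e[None, :]                         # e_i - e_j
    kern = np.exp(-diff**2 / (4.0 * sigma**2)) / np.sqrt(4.0 * np.pi * sigma**2)
    V = kern.mean()                                        # Equation (8)
    dpair = de_dw[:, None, :] - de_dw[None, :, :]          # d(e_i - e_j)/d w_l
    term1 = (R1 / (2.0 * V**2 * N**2 * sigma**2)) * \
            np.sum(diff[:, :, None] * kern[:, :, None] * dpair, axis=(0, 1))
    term2 = (2.0 * R2 / N) * np.sum(e[:, None] * de_dw, axis=0)
    term3 = R3 * u * np.asarray(du_dw, float)
    return term1 + term2 + term3

def update_weights(w, grad, eta=1e-3):
    """Gradient descent step on the weight vector, Equation (15)."""
    return np.asarray(w, float) - eta * np.asarray(grad, float)
```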

4.2. Mean-Square Convergence Analysis

In order to analyze the convergence of the improved minimum error entropy control algorithm, we consider a quadratic approximation of the performance index $J(k)$, obtained from a Taylor series expansion around the optimal weight vector in which the gradient is truncated at the linear term:
$$J(k) = J(W^*) + \frac{1}{2}\tilde{W}^T(k)\,R\,\tilde{W}(k) \tag{16}$$
where the optimal solution is defined as $W^* = \arg\min_W J(k)$, $\tilde{W}(k) = W^* - W(k)$ and $R := \nabla^2 J(k)$.
Theorem 1
Assume that J ( k ) is a quadratic surface with a Taylor series approximation given by Equation (16). To ensure mean-square convergence of the control algorithm, a necessary condition is:
$$0 < \eta_l < \frac{2}{\lambda_l} \quad (l = 1, 2, 3) \tag{17}$$
where $\lambda_l\ (l=1,2,3)$ are the eigenvalues of $R$.
Proof: 
Let $R = Q\Lambda Q^T$, where $Q$ and $\Lambda$ denote the orthonormal eigenvector matrix and the diagonal eigenvalue matrix, respectively. Subtracting both sides of Equation (15) from $W^*$ and substituting $\nabla J(k) = -R\tilde{W}(k)$ and $R = Q\Lambda Q^T$ leads to:
$$\tilde{W}(k+1) = \tilde{W}(k) - \eta R\tilde{W}(k) = Q\,[I - \eta\Lambda]\,Q^T\tilde{W}(k) \tag{18}$$
The weight error expressed along the natural modes, $v(k) = Q^T\tilde{W}(k)$, is thus given by:
$$v(k+1) = [I - \eta\Lambda]\,v(k) \tag{19}$$
The expression for the $l$-th mode then becomes:
$$v_l(k+1) = [1 - \eta_l\lambda_l]\,v_l(k) \tag{20}$$
hence:
$$v_l^2(k+1) = [1 - \eta_l\lambda_l]^2\,v_l^2(k) \tag{21}$$
and:
$$|v_l(k+1)|^2 = [1 - \eta_l\lambda_l]^2\,|v_l(k)|^2 \tag{22}$$
In order to study the mean-square behavior of the algorithm, we take expectations of both sides of Equation (22) as follows:
$$E\!\left[v_l^2(k+1)\right] = [1 - \eta_l\lambda_l]^2\,E\!\left[v_l^2(k)\right] \tag{23}$$
It is easy to observe from Equation (23) that:
$$\frac{E\!\left[v_l^2(k+1)\right]}{E\!\left[v_l^2(k)\right]} < 1 \iff |1 - \eta_l\lambda_l| < 1 \tag{24}$$
Since the eigenvalues of the Hessian matrix $R$ are positive, the following condition for mean-square convergence of the control algorithm can be obtained from Equation (24):
$$0 < \eta_l < \frac{2}{\lambda_l} \tag{25}$$
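Condition (17) can be checked numerically once an estimate of the Hessian $R$ is available. The sketch below computes the admissible upper bounds $2/\lambda_l$; the Hessian used here is purely illustrative and not taken from the paper.

```python
import numpy as np

def max_stable_learning_rates(R):
    """Upper bounds 2 / lambda_l from Equation (17) for a symmetric Hessian R."""
    eigvals = np.linalg.eigvalsh(np.asarray(R, float))
    if np.any(eigvals <= 0):
        raise ValueError("R must be positive definite for the bound to apply.")
    return 2.0 / eigvals

# Illustrative Hessian (assumed values for demonstration only):
R = np.array([[4.0, 0.5, 0.0],
              [0.5, 3.0, 0.2],
              [0.0, 0.2, 2.0]])
print(max_stable_learning_rates(R))  # each eta_l must lie in (0, 2/lambda_l)
```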

5. A Numerical Example

In order to illustrate the feasibility and efficiency of the presented optimal control algorithm, let us consider the following nonlinear non-Gaussian stochastic system described by:
$$y_k = 1.8\,y_{k-1} + 0.2\,u_k + 0.7\,u_{k-1} + \theta_k\,(1 + y_{k-1})^2 \tag{26}$$
The PDF of the random variable $\theta_k\ (k = 1, 2, \ldots)$ is defined by:
$$\gamma_\theta(z) = \begin{cases} \left[4^{\alpha+\mu+1}\,\beta(\alpha+1, \mu+1)\right]^{-1}(z+2)^\alpha\,(2-z)^\mu, & z \in (-2, 2) \\ 0, & \text{otherwise} \end{cases} \tag{27}$$
where $\alpha = 1$ and $\mu = 8$. The set-point of system Equation (26) is set to $r(k) = 1$, the sampling period is $T = 1$, and the learning factors are $\eta_l = 0.001\ (l = 1, 2, 3)$. The weights in Equation (10) are $R_1 = 0.98$, $R_2 = 0.01$ and $R_3 = 0.01$, respectively.
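Since the density in Equation (27) is, after the change of variable $z = 4b - 2$, exactly a Beta$(\alpha+1, \mu+1)$ density, samples of $\theta_k$ can be drawn with a standard Beta generator. The sketch below samples the noise and evaluates one step of the plant reconstructed in Equation (26); it is an illustrative check under these assumptions, not the authors' simulation code.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
alpha, mu = 1, 8

def sample_theta(size=None):
    """theta = 4*Beta(alpha+1, mu+1) - 2 has the PDF of Equation (27)."""
    return 4.0 * rng.beta(alpha + 1, mu + 1, size=size) - 2.0

def gamma_theta(z):
    """The PDF of Equation (27), for comparison with the sampler."""
    beta_fn = gamma(alpha + 1) * gamma(mu + 1) / gamma(alpha + mu + 2)
    pdf = (z + 2.0) ** alpha * (2.0 - z) ** mu / (4.0 ** (alpha + mu + 1) * beta_fn)
    return np.where((z > -2.0) & (z < 2.0), pdf, 0.0)

def plant_step(y_prev, u, u_prev, theta):
    """One step of the nonlinear non-Gaussian plant, Equation (26)."""
    return 1.8 * y_prev + 0.2 * u + 0.7 * u_prev + theta * (1.0 + y_prev) ** 2

# Consistency check: empirical mean of theta vs. the mean of gamma_theta.
samples = sample_theta(100_000)
zs = np.linspace(-2.0, 2.0, 2001)
dz = zs[1] - zs[0]
print(samples.mean(), np.sum(zs * gamma_theta(zs)) * dz)  # both close to -14/11
```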
The advantage of the proposed method is shown by comparison with a PID controller whose transfer function is $G_{PID}(s) = k_p + \frac{k_i}{s} + k_d s$. The optimal PID parameters are tuned using the Matlab NCD toolbox: $k_p = 2.2$, $k_i = 0.3$ and $k_d = 0.32$. The comparative results are shown in Figures 2-6, and a sketch of the benchmark controller is given below.
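For the benchmark, a discrete incremental realization of the quoted PID law (with sampling period $T = 1$) could look as follows; the incremental form and the function name are assumptions of this sketch, not details given in the paper.

```python
def pid_increment(e, e1, e2, kp=2.2, ki=0.3, kd=0.32, T=1.0):
    """Incremental discrete PID: du(k) = kp*(e - e1) + ki*T*e + (kd/T)*(e - 2*e1 + e2),
    where e, e1, e2 are e(k), e(k-1), e(k-2)."""
    return kp * (e - e1) + ki * T * e + (kd / T) * (e - 2.0 * e1 + e2)
```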
It can be seen from Figure 2 and Figure 3 that the information potentials under both PID control and the proposed single neuron adaptive control increase with time, while the performance indices $J_k$ decrease with time. The variations of the single neuron weights tuned by the proposed performance index are presented in Figure 4. In Figure 5, both the range of the tracking errors and the PDFs of the tracking errors are shown; the PDFs of the tracking errors become narrower as time increases. To clarify the improvement, the PDFs at several typical instants are shown in Figure 6. Figure 5 and Figure 6 show that the proposed single neuron adaptive controller drives the tracking errors toward smaller randomness, whereas the PID control strategy cannot minimize the randomness effectively. From these comparative results, it is obvious that the proposed strategy has better randomness rejection ability than the PID control law and is therefore more suitable for nonlinear stochastic systems with non-Gaussian noises.
Figure 2. Information potentials.
Figure 3. Performance indices.
Figure 4. Weights of the single neuron.
Figure 5. PDFs of tracking error.
Figure 6. PDFs at typical instants.

6. Conclusions

This paper has introduced a new statistical-information-based single neuron adaptive control algorithm for general control systems with non-Gaussian noises. The performance index of the closed-loop system consists of the information potential of the tracking error, the mathematical expectation of the squared tracking error and constraints on the control input. The optimal control algorithm is obtained by training the neuron weights with respect to the proposed performance index. Moreover, the implementation of the adaptive single neuron control approach is made possible by employing a Parzen windowing technique for estimating the PDF of the tracking error. In this work, the given control strategy and the PID control method are both applied to a nonlinear, non-Gaussian stochastic system, and comparative simulation results demonstrate the superiority of the presented control algorithm.

Acknowledgments

This work was supported by the National Basic Research Program of China (973 Program) under Grant 2011CB710706 and the China National Science Foundation under Grant 60974029. This support is gratefully acknowledged.

References

1. Maeda, Y.; Wakamura, M. Simultaneous perturbation learning rule for recurrent neural networks and its FPGA implementation. IEEE Trans. Neural Netw. 2005, 16, 1664–1672.
2. Qi, D.; Liu, M.; Qiu, M.; Zhang, S. Exponential H∞ synchronization of general discrete-time chaotic neural networks with or without time delays. IEEE Trans. Neural Netw. 2010, 21, 1358–1365.
3. Yi, Y.; Guo, L.; Wang, H. Constrained PI tracking control for output probability distributions based on two-step neural networks. IEEE Trans. Circuits Syst. I Regul. Pap. 2009, 56, 1416–1426.
4. Sorsa, T.; Koivo, H.N. Application of artificial neural networks in process fault diagnosis. Automatica 1993, 29, 843–849.
5. Zhang, D.Y.; Liu, Y.Q.; Cao, J. Application of single neuron adaptive PID controller during the process of timber drying. J. For. Res. 2003, 14, 244–248.
6. Liu, T.K.; Juang, J.G. A single neuron PID control for twin rotor MIMO system. In Proceedings of the 2009 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Singapore, 14–17 July 2009; pp. 186–191.
7. Wang, H. Bounded Dynamic Stochastic Systems: Modeling and Control; Springer-Verlag: London, UK, 2000.
8. Guo, L.; Wang, H. Stochastic Distribution Control System Design: A Convex Optimization Approach; Springer: Berlin/Heidelberg, Germany, 2010.
9. Guo, L.; Wang, H. Generalized discrete-time PI control of output PDFs using square root B-spline expansion. Automatica 2005, 41, 159–162.
10. Wang, H.; Afshar, P. ILC-based fixed-structure controller design for output PDF shaping in stochastic systems using LMI techniques. IEEE Trans. Automat. Contr. 2009, 54, 760–773.
11. Wang, H.; Zhang, J.H. Bounded stochastic distribution control for pseudo ARMAX systems. IEEE Trans. Automat. Contr. 2001, 46, 486–490.
12. Yue, H.; Wang, H. Minimum entropy control of closed-loop tracking error for dynamic stochastic systems. IEEE Trans. Automat. Contr. 2003, 48, 118–122.
13. Yin, L.; Guo, L. Fault isolation for multivariate nonlinear non-Gaussian systems using generalized entropy optimization principle. Automatica 2009, 45, 2612–2619.
14. Guo, L.; Wang, H. Minimum entropy filtering for multivariate stochastic systems with non-Gaussian noises. IEEE Trans. Automat. Contr. 2006, 51, 695–700.
15. Guo, L.; Yin, L.; Wang, H.; Chai, T. Entropy optimization filtering for fault isolation of nonlinear non-Gaussian stochastic systems. IEEE Trans. Automat. Contr. 2009, 54, 804–810.
16. Erdogmus, D.; Principe, J.C. An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems. IEEE Trans. Signal Process. 2002, 50, 1780–1786.
17. Erdogmus, D.; Principe, J.C.; Kim, S.P.; Sanchez, J.C. A recursive Renyi's entropy estimator. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, Martigny, Switzerland, 4–6 September 2002; pp. 209–217.
18. Erdogmus, D.; Principe, J.C. Generalized information potential criterion for adaptive system training. IEEE Trans. Neural Netw. 2002, 13, 209–217.
19. Zhang, J.; Chu, C.; Jose, M.; Chen, J. Minimum entropy based run-to-run control for semiconductor processes with uncertain metrology delay. J. Process Control 2009, 19, 1688–1697.

Share and Cite

Ren, M.; Zhang, J.; Jiang, M.; Tian, Y.; Hou, G. Statistical Information Based Single Neuron Adaptive Control for Non-Gaussian Stochastic Systems. Entropy 2012, 14, 1154-1164. https://doi.org/10.3390/e14071154