Article

A Nonlinear Subspace Predictive Control Approach Based on Locally Weighted Projection Regression

1 Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin 150001, China
2 State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(9), 1670; https://doi.org/10.3390/electronics13091670
Submission received: 7 March 2024 / Revised: 13 April 2024 / Accepted: 24 April 2024 / Published: 26 April 2024
(This article belongs to the Special Issue High Performance Control and Industrial Applications)

Abstract

Subspace predictive control (SPC) is a widely recognized data-driven methodology known for its reliability and convenience. However, effectively applying SPC to complex industrial process systems remains a challenging endeavor. To address this challenge, this paper introduces a nonlinear subspace predictive control approach based on locally weighted projection regression (NSPC-LWPR). By projecting the input space into localized regions, constructing precise local models, and aggregating them through weighted summation, this approach handles the nonlinearity effectively. Additionally, it dynamically adjusts the control strategy based on online process data and model parameters, while eliminating the need for offline process data storage, greatly enhancing the adaptability and efficiency of the approach. The parameter determination criteria and theoretical analysis encompassing feasibility and stability assessments provide a robust foundation for the proposed approach. To illustrate its efficacy and feasibility, the proposed approach is applied to a continuous stirred tank heater (CSTH) benchmark system. Comparative results highlight its superiority over SPC and adaptive subspace predictive control (ASPC) methods, evident in enhanced tracking precision and predictive accuracy. Overall, the proposed NSPC-LWPR approach presents a promising solution for nonlinear control challenges in industrial process systems.

1. Introduction

Industrial processes constitute the backbone of modern economies, contributing to diverse sectors such as chemical engineering, transportation, and energy production [1,2,3]. The efficient operation and regulation of these processes are essential for achieving optimal resource utilization, product quality, and safety [4]. In pursuit of these objectives, the field of industrial process control has emerged as a crucial discipline, aiming to harness advancements in science and technology to enhance process performance, stability, and reliability [5].
The significance of industrial process control extends beyond mere operational efficiency. It plays a pivotal role in ensuring consistent product quality, minimizing waste, and mitigating environmental impact [6,7,8]. Furthermore, effective control strategies empower industries to adapt swiftly to changing market demands and regulatory requirements, fostering competitiveness and sustainability [9]. However, the realm of industrial process control is not without its challenges. Conventional model-based control approaches encounter limitations when applied to complex industrial systems [10,11,12]. A prominent constraint is the difficulty in obtaining accurate and comprehensive model information. Constructing a detailed mathematical representation for complex processes is often formidable, especially given nonlinear dynamics, intricate interactions, and inherent uncertainties [13]. These challenges hinder the efficacy of conventional model-based control, leading to suboptimal performance, compromised stability, and difficulties in real-time adaptation [14].
To tackle these formidable challenges, data-driven control approaches have emerged as a promising and dynamic solution in the realm of industrial process control [15]. These methodologies, including machine learning [16], deep learning [17], and reinforcement learning [18], leverage the wealth of information derived from sensors, actuators, and historical data to formulate effective and adaptive control strategies.
Within the spectrum of data-driven control strategies, the subspace predictive control (SPC) approach stands out as a particularly compelling choice. SPC ingeniously combines subspace identification techniques with predictive control methodologies, rendering it an attractive option for its simplicity and ease of implementation [19]. The adoption of SPC has spurred extensive research into its practical applications within the realm of industrial process control. For instance, Li et al. [20] have devised an SPC method to regulate the power allocation of server racks and control the supply temperature of cold air. Furthermore, Navalkar et al. [21] have introduced a repetitive SPC approach that demonstrates precise individual blade pitch control on a wind turbine prototype. These applications underscore the potential of SPC in optimizing complex industrial systems. Nevertheless, it remains clear that many intricate industrial systems inherently exhibit nonlinear behaviors, intricate interactions, and uncertain dynamics. While the strength of SPC lies in its foundation on linear models and its reliance on offline data, it encounters formidable challenges when confronted with the inherent complexities of nonlinear processes. This limitation has the potential to curtail its effectiveness in capturing the multifaceted nature of these systems and responding adeptly to dynamic variations [22].
To address the prevalent issue of the inadequacy of the SPC method in dealing with the intricate nonlinear dynamics inherent in industrial processes, there is a compelling imperative to develop and advance the field of adaptive subspace predictive control (ASPC). The primary objective motivating ASPC is to facilitate real-time adjustments of controller parameters in response to dynamic data fluctuations, offering a dynamic and adaptive approach to control. A cornerstone of ASPC involves the utilization of a sliding data window mechanism, which serves as a vital tool for describing the current operational conditions and effectively mitigating the nonlinear complexities often encountered in intricate systems. This approach has been notably applied and refined by pioneering researchers such as Wahab et al. [23], Vajpayee et al. [24], and Hallouzi et al. [25]. Their work has showcased the effectiveness of the sliding data window in applications ranging from wastewater treatment systems to nuclear reactors and even complex models like the Boeing 747 aircraft. While substantial progress has been achieved in the application of ASPC, a noteworthy limitation lies in the fact that these methodologies have predominantly been tailored to linear controllers. This limitation restricts their capability to comprehensively address the intricate nonlinear characteristics commonly found in diverse industrial scenarios.
To achieve a more robust and appropriate solution, some researchers have explored alternative avenues by directly crafting controllers explicitly designed for specific nonlinear systems. For example, a specialized nonlinear subspace predictive controller tailored to bilinear systems is introduced in [26]. Zhou et al. [27] and Luo et al. [28] have extended nonlinear SPC methods to encompass Hammerstein systems and Hammerstein–Wiener systems, expanding the scope of applicability. However, it is essential to recognize that while these endeavors have shown promise, designing controllers for specific nonlinear systems often lacks the necessary universality required for broad industrial implementation. In light of these considerations, the field of nonlinear SPC is confronted with the challenge of achieving a more versatile solution.
In this paper, a nonlinear subspace predictive control approach based on locally weighted projection regression (NSPC-LWPR) is presented to address the aforementioned issues. The locally weighted projection regression (LWPR) algorithm, which is an incremental nonparametric statistical learning technique [29] and is related to the field of linear parameter varying modeling [30,31,32], is integrated into the SPC method. By fitting the local nonlinear relationships between input and output data to construct a predictive model, higher prediction accuracy can be achieved when the expected output of the nonlinear process changes, while maintaining smooth tracking. The main contributions of this approach are listed as follows:
(1)
Seamless integration of LWPR and SPC: The LWPR algorithm and the SPC method are seamlessly integrated for industrial process control. By projecting the input space into localized regions, constructing precise local models, and aggregating them through weighted summation, the proposed approach effectively addresses the complex nonlinear relationships in industrial processes.
(2)
Enhanced adaptability and efficiency: The proposed approach constructs the controller from the trained regression model. This implies that it can adapt the control strategy using online process data and local model parameters. In addition, it removes the necessity for storing offline process data. These advancements highlight improvements in both adaptability and efficiency.
(3)
Improved predictive and tracking performance: The proposed approach shows improvements in both predictive and tracking performance. It creates an accurate predictive model by capturing the dynamic characteristics of the system from input/output (I/O) data. This boosts the accuracy of the predictive controller, especially during transitions from nonlinear to steady-state processes. The increased prediction accuracy also greatly enhances the tracking performance of the predictive controller. In situations where the expected output of the nonlinear process changes, the controlled system adjusts smoothly to match the projected output path, ensuring consistent and smooth tracking.
This paper is structured as follows. Section 2 offers an extensive elucidation of the preliminaries associated with the subspace predictor and the LWPR learning scheme. Section 3 focuses on the design of the controller, including parameter determination criteria and theoretical analysis. The application of the proposed NSPC-LWPR approach in a CSTH benchmark study is showcased in Section 4. Finally, Section 5 concludes the paper by summarizing its main content and suggesting potential directions for future research.

2. Preliminaries

2.1. Subspace Predictor

Assuming discrete time intervals indexed by $k$, where measurements of the I/O data for the system are denoted by $u_k \in \mathbb{R}^m$ and $y_k \in \mathbb{R}^l$, the stacked vector $u_{s,k}$ of length $s$ is introduced as
$$u_{s,k} = \begin{bmatrix} u_k^T & u_{k+1}^T & \cdots & u_{k+s-1}^T \end{bmatrix}^T .$$
The block Hankel matrices $U_p$ and $U_f$ are constructed as
$$U_p = \begin{bmatrix} u_{s_p,\,k-s_p+1} & \cdots & u_{s_p,\,k-s_p+\bar{N}} \end{bmatrix}, \qquad U_f = \begin{bmatrix} u_{s_f,\,k+1} & \cdots & u_{s_f,\,k+\bar{N}} \end{bmatrix},$$
where the indexes $p$ and $f$ correspond to the past and future block Hankel matrices, respectively, $s_p$ and $s_f$ denote the numbers of row blocks, and $\bar{N}$ represents the sample length. Similarly, the output data block Hankel matrices $Y_p$ and $Y_f$ are defined based on the output data.
The subspace predictor model represents the optimal prediction of $Y_f$ as a combination of past I/O data and future input data [33]. The subspace predictor can be formulated as
$$\hat{Y}_f = L_w W_p + L_u U_f ,$$
where $L_w$ and $L_u$ are the subspace predictor coefficient matrices, and $W_p = \begin{bmatrix} Y_p^T & U_p^T \end{bmatrix}^T$.
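As a rough illustration of how such a predictor can be obtained in practice, the sketch below estimates $L_w$ and $L_u$ by ordinary least squares on block Hankel matrices built from I/O data, in the spirit of [33]. The function names and index conventions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_hankel(data, s, n_cols, start):
    """Stack s consecutive samples per column; data has shape (T, dim)."""
    dim = data.shape[1]
    H = np.zeros((s * dim, n_cols))
    for col in range(n_cols):
        H[:, col] = data[start + col:start + col + s].reshape(-1)
    return H

def subspace_predictor(u, y, s_p, s_f, n_cols):
    """Estimate L_w and L_u such that Y_f ~ L_w W_p + L_u U_f (least squares)."""
    U_p = block_hankel(u, s_p, n_cols, 0)
    Y_p = block_hankel(y, s_p, n_cols, 0)
    U_f = block_hankel(u, s_f, n_cols, s_p)
    Y_f = block_hankel(y, s_f, n_cols, s_p)
    W_p = np.vstack([Y_p, U_p])
    Phi = np.vstack([W_p, U_f])
    # Solve Y_f ~ [L_w  L_u] [W_p; U_f] in the least-squares sense.
    L = np.linalg.lstsq(Phi.T, Y_f.T, rcond=None)[0].T
    return L[:, :W_p.shape[0]], L[:, W_p.shape[0]:]   # L_w, L_u
```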

2.2. LWPR Learning Scheme

The LWPR algorithm employs the standard regression model $y = \beta^T x + \varepsilon$ to approximate the nonlinear function $y = f(x) + \varepsilon$, where $x$ is the input vector, $y$ is the scalar output, and $\varepsilon$ is a zero-mean random noise term.
To capture the locality aspect, the position of each data point x is leveraged through a Gaussian kernel to compute the weight w:
$$w = \exp\!\left(-0.5\,(x - x_c)^T D\,(x - x_c)\right), \qquad 0 < w \le 1,$$
where $x_c$ denotes the center of a local subset of data, and $D$ is a positive semi-definite distance metric that determines both the size and shape of the neighborhood contributing to the establishment of the corresponding local model. A smaller $D$ results in a smoother (wider) kernel, while a larger $D$ captures finer details. As discussed in [34], alternative kernel functions can be employed besides the Gaussian kernel; however, the choice of kernel only affects the computation of the weights, and consequently the number and shape of the local models, without significantly impacting the prediction results.
Based on the obtained weights, the following weighted means can be calculated:
$$\bar{x} = \frac{\sum_{n=1}^{N} w_n x_n}{\sum_{n=1}^{N} w_n}, \qquad \bar{y} = \frac{\sum_{n=1}^{N} w_n y_n}{\sum_{n=1}^{N} w_n}.$$
By subtracting $\bar{x}$ and $\bar{y}$ from the original measurements, the inputs and outputs of the LWPR algorithm are guaranteed to have zero mean.
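For concreteness, a minimal sketch of the weight computation in (4) and the weighted means in (5) is given below; the array shapes and helper names are assumptions made for illustration.

```python
import numpy as np

def rf_weight(x, x_c, D):
    """Gaussian receptive-field activation w = exp(-0.5 (x - x_c)^T D (x - x_c))."""
    d = x - x_c
    return float(np.exp(-0.5 * d @ D @ d))

def weighted_means(X, Y, x_c, D):
    """Weighted input/output means used to zero-center the training data."""
    w = np.array([rf_weight(x, x_c, D) for x in X])   # X: (N, dim), Y: (N,)
    x_bar = (w[:, None] * X).sum(axis=0) / w.sum()
    y_bar = (w * Y).sum() / w.sum()
    return x_bar, y_bar, w
```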
Following the initialization of LWPR without a locally linear model (receptive field, RF), the algorithm proceeds with the training process. For each training sample, the weight is computed using (4). Subsequently, the regressions, projections, and distance metrics of each RF are updated iteratively until no new RF creation is required. The crucial aspects of the LWPR learning scheme for one RF centered at x c , which hold relevance for our extension of locally weighted learning to SPC, are concisely summarized in Table 1. Corresponding symbols and their notations are provided in Table 2.
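The following class is a minimal, single-RF sketch of the incremental update summarized in Table 1 (activation, running means, projections, and sufficient statistics). It omits RF creation, distance-metric adaptation, and several safeguards of the full LWPR algorithm, and the small initial values used to avoid division by zero are our own assumption.

```python
import numpy as np

class ReceptiveField:
    """One locally linear model (RF) updated incrementally, following Table 1."""
    def __init__(self, x_c, D, dim, R, lam=0.999):
        self.x_c, self.D, self.R, self.lam = x_c, D, R, lam
        self.W, self.x0, self.beta0 = 0.0, np.zeros(dim), 0.0
        self.beta = np.zeros(R)
        self.u = np.full((R, dim), 1e-10)      # projection directions u_r
        self.p = np.zeros((R, dim))            # regressed directions p_r
        self.a_zz = np.full(R, 1e-10)
        self.a_zres = np.zeros(R)
        self.a_xz = np.zeros((R, dim))

    def update(self, x, y):
        # Step 2a: activation and running weighted means
        d = x - self.x_c
        w = float(np.exp(-0.5 * d @ self.D @ d))
        W_old = self.W
        self.W = self.lam * W_old + w
        self.x0 = (self.lam * W_old * self.x0 + w * x) / self.W
        self.beta0 = (self.lam * W_old * self.beta0 + w * y) / self.W
        # Step 2b: projections z_r and residual inputs with the current directions
        z, x_res = np.zeros(self.R), [x - self.x0]
        for r in range(self.R):
            z[r] = x_res[r] @ self.u[r] / (self.u[r] @ self.u[r])
            x_res.append(x_res[r] - z[r] * self.p[r])
        # Step 2c: update regression coefficients and projection directions
        res = y - self.beta0
        for r in range(self.R):
            self.a_zz[r] = self.lam * self.a_zz[r] + w * z[r] ** 2
            self.a_zres[r] = self.lam * self.a_zres[r] + w * z[r] * res
            self.beta[r] = self.a_zres[r] / self.a_zz[r]
            self.a_xz[r] = self.lam * self.a_xz[r] + w * x_res[r] * z[r]
            self.u[r] = self.lam * self.u[r] + w * x_res[r] * res
            self.p[r] = self.a_xz[r] / self.a_zz[r]
            res = res - z[r] * self.beta[r]
        return w
```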

3. Locally Weighted Projection Regression-Based Subspace Predictive Control

3.1. Controller Design

Considering that only the leftmost column of $\hat{Y}_f$ is used to predict the output, (3) can be rewritten as
$$\hat{y}_{N_p l} = \tilde{L}_w w_p + \tilde{L}_u u_{N_c m},$$
where $\hat{y}_{N_p l}$ comprises the first $N_p l$ rows of the leftmost column of $\hat{Y}_f$, $w_p$ is the leftmost column of $W_p$, and $u_{N_c m}$ comprises the first $N_c m$ rows of the leftmost column of $U_f$. $\tilde{L}_w$ and $\tilde{L}_u$ are truncated from $L_w$ and $L_u$.
Given the structural congruity between the subspace predictor outlined in (3) and the regression model employed for approximating nonlinear functions within the LWPR framework, the LWPR algorithm can be used to compute the coefficients $\tilde{L}_w$ and $\tilde{L}_u$ of the subspace predictor. Then, for the query point $u_{N_c m}$, the $i$-th element of the local prediction $\hat{y}_{N_p l}^{\,i,j}$ produced by the $j$-th locally linear model can be written as
$$\hat{y}_{N_p l}^{\,i,j} = \beta_0^i + \sum_{r=1}^{R} \beta_r^{i,j} s_r^{i,j},$$
where $1 \le i \le N_p l$, $1 \le j \le M$, and $\beta_0^i$ is the average of the $i$-th training output samples calculated in (5). $\beta_r^{i,j}$ signifies the parameter linked to the respective RF, while $s_r^{i,j}$ is defined as
$$s_1^{i,j} = (u_1^{i,j})^T \vartheta, \quad s_2^{i,j} = (u_2^{i,j})^T \left(I - p_1^{i,j} (u_1^{i,j})^T\right) \vartheta, \quad \ldots, \quad s_R^{i,j} = (u_R^{i,j})^T \prod_{r=R-1}^{1} \left(I - p_r^{i,j} (u_r^{i,j})^T\right) \vartheta,$$
where $\vartheta = u_{N_c m} - \tilde{u}_{N_c m}$, and $\tilde{u}_{N_c m}$ is the average of the training input samples.
Then, $\hat{y}_{N_p l}^{\,i,j}$ can be rewritten as
$$\hat{y}_{N_p l}^{\,i,j} = \zeta^{i,j} + L^{i,j} u_{N_c m},$$
where
$$\zeta^{i,j} = \beta_0^i - \sum_{r=1}^{R} \psi_r\, \tilde{u}_{N_c m}, \qquad L^{i,j} = \sum_{r=1}^{R} \psi_r,$$
$$\psi_r = \begin{cases} \beta_1^{i,j} (u_1^{i,j})^T, & r = 1, \\ \beta_r^{i,j} (u_r^{i,j})^T \prod_{d=r-1}^{1} \left(I - p_d^{i,j} (u_d^{i,j})^T\right), & r > 1. \end{cases}$$
Based on the obtained weights, the total output $\hat{y}_{N_p l}^{\,i}$ of the LWPR model is the normalized weighted mean of the predicted outputs $\hat{y}_{N_p l}^{\,i,j}$ of the $M$ local models, that is,
$$\hat{y}_{N_p l}^{\,i} = \frac{\sum_{j=1}^{M} \omega_j\, \hat{y}_{N_p l}^{\,i,j}}{\sum_{j=1}^{M} \omega_j}.$$
To better understand the solving process of global output y ^ N p l i , the information processing unit of the LWPR learning scheme is shown in Figure 1.
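A compact sketch of this blending step is shown below: each local affine model contributes its prediction, weighted by its own RF activation, and the result is normalized as above. All argument names are illustrative assumptions.

```python
import numpy as np

def blend_local_models(u_query, centers, metrics, zetas, L_locals):
    """Normalized weighted mean of the M local affine predictions."""
    weights, preds = [], []
    for x_c, D, zeta, L in zip(centers, metrics, zetas, L_locals):
        d = u_query - x_c
        weights.append(float(np.exp(-0.5 * d @ D @ d)))   # RF activation (Eq. (4))
        preds.append(zeta + L @ u_query)                   # local affine prediction
    weights = np.asarray(weights)
    return sum(w * p for w, p in zip(weights, preds)) / weights.sum()
```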
Furthermore, we have
$$\hat{y}_{N_p l}^{\,i} = L_{cst}^i + L_{cft}^i u_{N_c m},$$
where
$$L_{cst}^i = \frac{\sum_{j=1}^{M} \omega_j\, \zeta^{i,j}}{\sum_{j=1}^{M} \omega_j}, \qquad L_{cft}^i = \frac{\sum_{j=1}^{M} \omega_j\, L^{i,j}}{\sum_{j=1}^{M} \omega_j}.$$
Then, $\hat{y}_{N_p l}$ can be expressed as
$$\hat{y}_{N_p l} = L_{cst} + L_{cft}\, u_{N_c m},$$
where
$$L_{cst} = \begin{bmatrix} (L_{cst}^1)^T & \cdots & (L_{cst}^i)^T & \cdots & (L_{cst}^{N_p l})^T \end{bmatrix}^T, \qquad L_{cft} = \begin{bmatrix} (L_{cft}^1)^T & \cdots & (L_{cft}^i)^T & \cdots & (L_{cft}^{N_p l})^T \end{bmatrix}^T.$$
To enhance the precision of the system's behavior modeling and maintain consistent prediction accuracy, it is advisable to express the projected output in (13) through an incremental formulation in terms of $\Delta u_{N_c m}$:
$$\hat{y}_{N_p l} = A_{N_p l}^1 y_k + A_{N_p l}^2 L_{cft}\, \Delta u_{N_c m},$$
where
$$A_{N_p l}^1 = \begin{bmatrix} I_l \\ I_l \\ \vdots \\ I_l \end{bmatrix}, \qquad A_{N_p l}^2 = \begin{bmatrix} I_l & 0 & \cdots & 0 \\ I_l & I_l & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ I_l & I_l & \cdots & I_l \end{bmatrix}, \qquad \Delta u_{N_c m} = \begin{bmatrix} \Delta u_{k+1}^T & \Delta u_{k+2}^T & \cdots & \Delta u_{k+N_c}^T \end{bmatrix}^T,$$
while $I_l$ is the $l \times l$ identity matrix, and $\Delta u_{k+1}$ in $\Delta u_{N_c m}$ is defined as
$$\Delta u_{k+1} = u_{k+1} - u_k,$$
with the other components, such as $\Delta u_{k+2}$ and $\Delta u_{k+N_c}$, defined analogously.
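A small sketch of how the incremental predictor can be assembled is given below, with $A_{N_p l}^1$, $A_{N_p l}^2$, and $\Delta u_{N_c m}$ built as defined above; the variable names and array layout are illustrative assumptions.

```python
import numpy as np

def incremental_prediction(y_k, u_k, u_future, L_cft, l):
    """Evaluate the incremental predictor (17): y_hat = A1 y_k + A2 L_cft du."""
    n_blocks = L_cft.shape[0] // l                     # N_p l rows in blocks of size l
    A1 = np.tile(np.eye(l), (n_blocks, 1))             # stacked identity matrices
    A2 = np.kron(np.tril(np.ones((n_blocks, n_blocks))), np.eye(l))  # lower block triangular
    # du stacks u_{k+1}-u_k, u_{k+2}-u_{k+1}, ..., per (19)
    du = np.diff(np.vstack([u_k[None, :], u_future]), axis=0).reshape(-1)
    return A1 @ y_k + A2 @ (L_cft @ du)
```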
The approach is designed to generate a control signal u k by minimizing a quadratic cost function J. This cost function takes into account the incremental input Δ u k , the provided reference signal r k , and the projected output y ^ k , and is mathematically expressed as follows:
$$J = \sum_{n_i=1}^{N_p} \tilde{Q}^T W_Q \tilde{Q} + \sum_{n_j=1}^{N_c} \tilde{R}^T W_R \tilde{R},$$
where
$$\tilde{Q} = r_{k+n_i} - \hat{y}_{k+n_i}, \qquad \tilde{R} = \Delta u_{k+n_j},$$
while N p and N c are the prediction and control horizons. W Q and W R are the weighting matrices of the cost function J.
Based on (20) and (17), J can be rewritten as
$$J = \bar{Q}^T W_Q \bar{Q} + \bar{R}^T W_R \bar{R},$$
where $\bar{Q}$ and $\bar{R}$ are represented as
$$\bar{Q} = A_{N_p l}^1 \left( r_k - y_k \right) - A_{N_p l}^2 L_{cft}\, \Delta u_{N_c m}, \qquad \bar{R} = \Delta u_{N_c m}.$$
Based on (22), the cost function depends only on $\Delta u_{N_c m}$. Under unconstrained circumstances (UCs), $\Delta u_{N_c m}$ can be obtained by setting the derivative of the cost function with respect to $\Delta u_{N_c m}$ to zero; under constrained circumstances (CCs), the problem can be recast as a quadratic program. Consequently, $\Delta u_{N_c m}$ takes the form
$$\Delta u_{N_c m} = \begin{cases} \text{solution of } \dfrac{\partial J}{\partial \Delta u_{N_c m}} = 0, & \mathrm{UC}, \\[6pt] \arg\min_{\Delta u_{N_c m}} J \quad \text{s.t. } A_{QP}\, \Delta u_{N_c m} \le B_{QP}, & \mathrm{CC}, \end{cases}$$
where $A_{QP}$ and $B_{QP}$ are constructed from the preset constraints.
Once $\Delta u_{N_c m}$ is obtained, only its first $m$ components are used. From (19), with $u_k$ and $\Delta u_{k+1}$ known, the next control input $u_{k+1}$ to be applied to the controlled system is determined.
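The sketch below mirrors (24): a closed-form solution when no constraints are active (UC), and a generic quadratic program solved with SciPy otherwise (CC). The arguments `A_qp` and `B_qp` stand in for $A_{QP}$ and $B_{QP}$, and the whole routine is illustrative rather than the authors' solver.

```python
import numpy as np
from scipy.optimize import minimize

def solve_delta_u(L_cft, A1, A2, W_Q, W_R, r_k, y_k, A_qp=None, B_qp=None):
    """Compute the increment sequence by minimizing the quadratic cost J (cf. (22), (24))."""
    e = A1 @ (r_k - y_k)             # stacked tracking error term
    G = A2 @ L_cft                   # maps the increments to the predicted output change
    H = G.T @ W_Q @ G + W_R          # quadratic (Hessian) part of J
    f = -G.T @ W_Q @ e               # linear part of J
    if A_qp is None:                 # UC: set dJ/d(du) = 0
        return np.linalg.solve(H, -f)
    cost = lambda du: float(du @ H @ du + 2.0 * f @ du)
    cons = {"type": "ineq", "fun": lambda du: B_qp - A_qp @ du}   # A_QP du <= B_QP
    res = minimize(cost, np.zeros(H.shape[0]), constraints=[cons], method="SLSQP")
    return res.x
```

Only the first m entries of the returned vector are then applied, consistent with the receding-horizon step described above.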
Subsequently, the control diagram outlining the proposed NSPC-LWPR approach is depicted in Figure 2, while Algorithm 1 succinctly encapsulates the essential steps.
Notably, the training cost and computational efficiency of the locally weighted regression model depend directly on the number of inputs and outputs of the MIMO system: a larger number of inputs and outputs increases the training cost, reduces the computational efficiency, and prolongs the runtime, while a smaller number has the opposite effect.
Algorithm 1 The proposed NSPC-LWPR approach.
  • Step 1. Initialization
  • a. Fully excite the initialization signal of the system input;
  • b. Initialize the LWPR model with no RF;
  • Step 2. LWPR Regression Model Training
  • a. Normalize the process data;
  • b. Train the LWPR regression model utilizing the learning scheme specified in Section 2.2;
  • c. Continue the training until the predicted output of the controlled object consistently converges to its actual value;
  • Step 3. Subspace Predictor Construction
  • a. Calculate the subspace predictor’s coefficients according to (16);
  • b. Convert the predictor to the incremental form of (17);
  • Step 4. Control Input Signal Calculation
  • a. Select methods to find Δ u N c m based on (24);
  • b. Calculate the control input according to (19);
  • c. Denormalize the solved control signal and input it into the controlled system;
  • Step 5. Judgment
  • a. If the controlled system is still running, return to Step 2 and calculate the next signal;
  • b. If the controlled system stops running, the proposed approach is terminated and the calculation of the next control signal is stopped.
Furthermore, to emphasize the superiority of the proposed NSPC-LWPR approach, a theoretical comparison is performed between it and the MPC, SPC, and ASPC methods as delineated in Table 3.

3.2. Parameter Determination Criteria

Achieving a balance between the computational efficiency and effectiveness of the proposed control approach relies heavily on making careful choices regarding parameters such as s p , s f , N c , N p , W Q , and W R . These parameter selections are critical in ensuring that the control system operates smoothly and effectively. The specific details are as follows.

3.2.1. $s_p$ and $s_f$

$s_p$ and $s_f$ correspond to the numbers of row blocks contained within the past and future Hankel matrices, respectively. Choosing an excessively large value for these parameters can result in a model that has too many parameters, potentially causing problems related to complexity. On the other hand, selecting a value that is too small may result in a model with too few parameters, potentially affecting its accuracy and predictive abilities.

3.2.2. $N_c$ and $N_p$

The choice of the control horizon $N_c$ influences the behavior of the control signal and the structure of the control law, while the predictive horizon $N_p$ is crucial for tracking error calculations. It is recommended to set $N_c$ to be greater than or equal to the system order $\alpha$ for precise control performance, and $N_p$ should be larger than $N_c$ to ensure effective tracking, within the limit imposed by $s_f$. Care must be taken to strike a balance, as selecting excessively large $N_c$ and $N_p$ values increases the computational demand, particularly for fast systems, while overly small values may compromise effectiveness. In summary, the criteria for determining $N_c$ and $N_p$ are $\alpha + 1 \le N_c \le N_p \le s_f$, with $N_c$ shaping the control signal and $N_p$ affecting the tracking accuracy.
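These bounds can be encoded as a one-line sanity check; the example values in the comment assume a system order of 2 together with the horizon settings listed later in Table 5, which is our own assumption.

```python
def horizons_valid(alpha: int, N_c: int, N_p: int, s_f: int) -> bool:
    """Check the criterion alpha + 1 <= N_c <= N_p <= s_f from Section 3.2.2."""
    return alpha + 1 <= N_c <= N_p <= s_f

# e.g. horizons_valid(2, 3, 4, 5) -> True
```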

3.2.3. $W_Q$ and $W_R$

$W_Q$ and $W_R$ are employed as adjustable parameters in the optimization process, serving to impose penalties on the tracking error and the rate of control signal variation, respectively. Opting for substantial penalties on tracking errors yields a swifter yet potentially more aggressive response, facilitating rapid adaptation. Conversely, assigning a substantial penalty to the control signal engenders a more resilient but potentially slower controller, fostering stability and reducing abrupt changes in control action.

3.3. Theoretical Analysis

For the convenience of the theoretical analysis, the cost function in (20) can be rewritten as
$$J = \sum_{n_i=1}^{N_p} Q^T W_Q Q + \sum_{n_j=1}^{N_c} R^T W_R R,$$
where
$$Q = h_{k+n_i|k+1} = r_{k+n_i|k+1} - \hat{y}_{k+n_i|k+1}, \qquad R = g_{k+n_j|k+1} = \Delta u_{k+n_j|k+1},$$
while $k+n_i|k+1$ denotes the prediction of the value at the $(k+n_i)$-th sampling instant made at the current time $k+1$.
Then, the sequences $\phi_{h,k+1}$ and $\phi_{g,k+1}$ are given by
$$\phi_{h,k+1} = \left\{ h_{k+2|k+1},\, h_{k+3|k+1},\, \ldots,\, h_{k+N_p+1|k+1} \right\}, \qquad \phi_{g,k+1} = \left\{ g_{k+1|k+1},\, g_{k+2|k+1},\, \ldots,\, g_{k+N_c|k+1} \right\}.$$
Based on the descriptions mentioned above, the dynamic of the controlled system can be modeled with the following nonlinear discrete-time difference equations:
$$h_{k+2|k+2} = f\!\left( h_{k+1|k+1},\, g_{k+1|k+1} \right),$$
and the problem to be solved at step k + 1 can be turned into
$$\mathrm{Problem}^{*}:\quad \min_{\phi_{g,k+1}} J(h_{k+1}) \quad \mathrm{s.t.}\quad g_{k+n_j|k+1} \in G,\ n_j \in \{1,\ldots,N_c\};\quad h_{k+n_i|k+1} \in H,\ n_i \in \{2,\ldots,N_p\};\quad h_{k+1+N_p|k+1} \in H_t,\ H_t \subseteq H,$$
where $G$ is a time-invariant constraint set, $H$ is the convex constraint set governing the system evolution, and the terminal set $H_t$ is $\{0\}$.
Theorem 1. 
The proposed control approach is recursively feasible, and the controlled system under the proposed control approach is asymptotically stabilized at the origin.
Proof of Theorem 1. 
The sequence $\phi_{g,k+1}^{*}$, which is assumed to be the optimal solution to Problem* at step $k+1$, is represented as
$$\phi_{g,k+1}^{*} = \left\{ g_{k+1|k+1}^{*},\, g_{k+2|k+1}^{*},\, \ldots,\, g_{k+N_c|k+1}^{*} \right\},$$
and the corresponding optimal sequence $\phi_{h,k+1}^{*}$ is given by
$$\phi_{h,k+1}^{*} = \left\{ h_{k+2|k+1}^{*},\, h_{k+3|k+1}^{*},\, \ldots,\, h_{k+N_p+1|k+1}^{*} \right\}.$$
Since $h_{k+N_p+1|k+1}^{*} \in H_t$ according to (29), and $H_t$ equals $\{0\}$, we can obtain $\Phi(\Psi) \in G$ and
$$h_{k+N_p+2|k+1} = f\!\left( \Psi,\, \Phi(\Psi) \right) \in H_t,$$
where $\Psi = h_{k+N_p+1|k+1}^{*}$. The terminal controller $\Phi$ exists such that $\Phi(x) \in G$ for all $x \in H_t$ and $f(x, \Phi(x)) \in H_t$ for all $x \in H_t$, under the condition that $H_t$, which equals $\{0\}$, is a control-invariant set of the system. $\Phi(\Psi)$ characterizes the effect of the terminal controller $\Phi$ on $\Psi$.
The temporary sequences $\phi_{g,k+2}^{tp}$ and $\phi_{h,k+2}^{tp}$ are given by
$$\phi_{g,k+2}^{tp} = \left\{ g_{k+2|k+1}^{*},\, \ldots,\, g_{k+N_c+1|k+1}^{*},\, \Phi(\Psi) \right\}, \qquad \phi_{h,k+2}^{tp} = \left\{ h_{k+3|k+1}^{*},\, \ldots,\, h_{k+N_p+1|k+1}^{*},\, h_{k+N_p+2|k+1} \right\},$$
where $\phi_{g,k+2}^{tp}$ and $\phi_{h,k+2}^{tp}$ both satisfy the constraints of Problem*, and $\phi_{g,k+2}^{tp}$ is a feasible solution to Problem* after the system moves to $h_{k+2|k+1}^{*}$ at step $k+2$.
Based on the analysis provided above, if a feasible solution to Problem* exists for k = 1 , it implies that there is also a feasible solution for the problem at any k 1 , 2 , 3 , . Therefore, it can be concluded that the proposed control approach, developed by solving Problem*, is recursively feasible.
In what follows, the stability analysis of the proposed control approach is presented.
The difference between the costs $J^{cd}(h_{k+2})$ and $J^{*}(h_{k+1})$ can be computed from
$$J^{cd}(h_{k+2}) - J^{*}(h_{k+1}) = h_{k+N_p+2|k+1}^T W_Q\, h_{k+N_p+2|k+1} - \left(h_{k+2|k+1}^{*}\right)^T W_Q\, h_{k+2|k+1}^{*} - \left(g_{k+1|k+1}^{*}\right)^T W_R\, g_{k+1|k+1}^{*} + \Phi\!\left(h_{k+N_p+1|k+1}^{*}\right)^T W_R\, \Phi\!\left(h_{k+N_p+1|k+1}^{*}\right),$$
where the cost $J^{cd}(h_{k+2})$ is induced by the sequences $\phi_{g,k+2}^{tp}$ and $\phi_{h,k+2}^{tp}$ at step $k+2$, and $J^{*}(h_{k+1})$ is the optimal cost at step $k+1$.
Since both $h_{k+N_p+2|k+1}$ and $h_{k+N_p+1|k+1}^{*}$ belong to $H_t$, the right-hand side of (34) is nonpositive. Additionally, $J^{cd}(h_{k+2})$ serves as an upper bound for the optimal cost $J^{*}(h_{k+2})$. Therefore, we can derive the following result:
$$J^{*}(h_{k+2}) \le J^{cd}(h_{k+2}) \le J^{*}(h_{k+1}).$$
Since $J^{*}$ decreases monotonically and can therefore serve as a Lyapunov function, it can be concluded that the controlled system, governed by the solution to Problem*, satisfies $J^{*}(h_{k+2}) \le J^{*}(h_{k+1})$.
Consequently, the controlled system is asymptotically stabilized at the origin. This completes the proof. □

4. Benchmark Study on Continuous Stirred Tank Heater

The continuous stirred tank heater (CSTH) is a vital component in various industrial processes, particularly in the field of chemical engineering. This reactor is designed for the purpose of simultaneously heating and mixing fluid substances. It comprises tanks equipped with both mixing and heating elements, allowing for a continuous flow of fluids in and out of these tanks, thereby ensuring constant movement. During this process, the fluids are subjected to heating through various methods such as electric heaters or steam injection. Concurrently, sophisticated mixing mechanisms are employed to maintain uniform temperatures and prevent the formation of temperature gradients within the system. Precise control over essential variables, including temperature and flow rates, is crucial to optimizing heat transfer efficiency and facilitating desired reaction kinetics. In this paper, the CSTH system has become a valuable platform for evaluating the effectiveness of the proposed NSPC-LWPR approach.
As shown in Figure 3, the Automation Laboratory within the Department of Chemical Engineering at IIT Bombay has developed a widely acknowledged CSTH system [35]. It comprises five distinct inputs and three resultant outputs. Specifically, inputs u 1 , u 2 , and u 3 correspond to flow rates that are governed by individual valves, while inputs u 4 and u 5 pertain to the intensity of heating within two distinct heaters. The three outputs of the system encompass the temperature of the first tank T 1 , the temperature of the second tank T 2 , and the water level within the second tank h 2 .
Considering the needs of the production process, $u_4$ and $u_5$ are taken as the two adjustable input variables, and $T_1$ is the output with a predetermined setpoint. The remaining parameters are set to the steady-state values shown in Table 4 unless otherwise specified.
To further enhance the tracking control performance, a smoothing approximation, which is a filtering process, is introduced so that the expected output changes smoothly from one desired state to another. Specifically, the expected temperature for $T_1$, denoted as $y_{sp}^{T_1}$, is set to
$$y_{sp}^{T_1}(k) = \begin{cases} y_{sp1}^{T_1}, & k \in [0, 600), \\ \lambda_{T_1}\, y_{sp}^{T_1}(k-1) + \left(1 - \lambda_{T_1}\right) y_{sp2}^{T_1}, & k \in [600, 1300), \\ \lambda_{T_1}\, y_{sp}^{T_1}(k-1) + \left(1 - \lambda_{T_1}\right) y_{sp1}^{T_1}, & k \in [1300, 2000], \end{cases}$$
in which $y_{sp1}^{T_1} = 50$, $y_{sp2}^{T_1} = 52$, and the smoothing coefficient $\lambda_{T_1}$ is set to $0.998$. To account for the mechanical constraints of the CSTH system, the predictive controller is subject to the constraints $30 \le y \le 60$, $0 \le u \le 100$, and $-0.5 \le \Delta u \le 0.5$.
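For reference, a short sketch that reproduces this smoothed setpoint profile is given below; the array layout and step count follow the description above, while the function name is illustrative.

```python
import numpy as np

def smoothed_setpoint(n_steps=2000, y_sp1=50.0, y_sp2=52.0, lam=0.998):
    """First-order smoothing of the T1 setpoint between 50 and 52 degrees C."""
    y_sp = np.empty(n_steps)
    y_sp[:600] = y_sp1                      # initial segment, k in [0, 600)
    for k in range(600, n_steps):
        target = y_sp2 if k < 1300 else y_sp1
        y_sp[k] = lam * y_sp[k - 1] + (1.0 - lam) * target
    return y_sp
```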
The parameter setup of the CSTH benchmark study is given in Table 5, where $E_Q$ and $E_R$ are the eigenvalues of $W_Q$ and $W_R$, and $f_s$ is the sampling frequency.
The outputs and setpoints of $T_1$ under the various control frameworks are presented in Figure 4. It is evident that the outputs of $T_1$ exhibit inadequate setpoint tracking performance under the SPC framework: the output curves show erratic behavior, characterized by chattering and oscillations, making the tracking ineffective during setpoint changes. The tracking performance under the ASPC framework improves, effectively following the setpoints after a settling time; however, during the $T_1$ setpoint transitions, the tracking performance degrades, leading to overshooting and less accurate setpoint tracking. The tracking performance under the proposed NSPC-LWPR framework is the most promising of the three control approaches: even when the setpoints of $T_1$ change at 600 s and 1300 s, the $T_1$ outputs consistently track the setpoints.
The disparities observed in the tracking performance of T 1 can be attributed to variations in the subspace predictor outputs generated by the controllers under different control frameworks as illustrated in Figure 5. The SPC method employs a fixed, offline-designed subspace predictor, making it unsuitable for effectively controlling nonlinear systems. In contrast, the ASPC method incorporates online learning capabilities to optimize its parameters based on process data, enabling it to adapt to changing conditions and generate corresponding subspace predictor outputs dynamically. However, the ASPC method remains linear and approximates new conditions using fixed nearby sampling points. Consequently, this approach can lead to degraded tracking performance and overshooting issues during smooth setpoint changes. Similar to the ASPC method, the proposed NSPC-LWPR approach is equipped with autonomous learning capabilities, allowing real-time updates of controller information using newly generated process data. However, it surpasses the limitations of the ASPC controller by employing multiple linear working points for weighted summation. This innovative approach constructs a nonlinear subspace predictive controller, leading to a more precise predictive output for the current operating condition.
The results indicate that the proposed NSPC-LWPR approach excels in describing the current nonlinear operating condition and achieves superior tracking performance in nonlinear industrial process control compared to SPC and ASPC methods. This highlights its potential as an advanced and effective controller in nonlinear industrial process system applications.
According to Figure 4 and Figure 5, the outputs of the controlled system and subspace predictor are strictly limited within specific boundaries. Additionally, the change rates of u 4 and u 5 are investigated and shown in Figure 6. It can be observed that the values of Δ u 4 and Δ u 5 both fall within the range of −0.5 and 0.5, which aligns with the constraints set. This result indicates that the proposed NSPC-LWPR approach operates within the imposed constraints, which effectively influence the system behavior.
To demonstrate the predictive performance of the proposed NSPC-LWPR approach, we conduct a comparative analysis using the multi-step prediction mean squared error (MPMSE), defined as
$$\sigma(N_p) = \frac{\sum_{k=T_s}^{T_t} \sum_{j=1}^{N_p} \left\| \hat{y}_{k+j} - y_{k+j} \right\|^2}{N \cdot N_p},$$
where T s and T t represent the starting point and the terminal point of the sampling data considered for analysis. To account for the necessary initialization time required by the proposed NSPC-LWPR approach, we set T s to be 400, and T t to be 10,000. The multi-step prediction mean squared error comparison among different control algorithms with varying predictive horizon N p is presented in Table 6.
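A direct transcription of this metric is sketched below. It assumes that `y_pred[k][j - 1]` stores the j-step-ahead prediction made at sample k, and that N is the number of evaluated sampling instants, i.e., T_t - T_s + 1 (the text does not define N explicitly, so this is our assumption).

```python
import numpy as np

def mpmse(y_true, y_pred, N_p, T_s=400, T_t=10000):
    """Multi-step prediction mean squared error over the evaluation window."""
    N = T_t - T_s + 1                       # assumed number of sampling instants
    total = 0.0
    for k in range(T_s, T_t + 1):
        for j in range(1, N_p + 1):
            err = np.asarray(y_pred[k][j - 1]) - np.asarray(y_true[k + j])
            total += float(np.sum(err ** 2))
    return total / (N * N_p)
```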
Table 6 reveals a clear trend in the MPMSE, where the proposed NSPC-LWPR approach consistently outperforms the ASPC method and significantly outpaces the SPC method, all while maintaining a constant value of N p . This performance discrepancy can be attributed to the SPC method’s limited ability to effectively control nonlinear systems, leading to subpar output predictions. In contrast, both the ASPC and NSPC-LWPR approaches exhibit self-learning capabilities, enhancing their control of nonlinear systems. However, the proposed NSPC-LWPR approach stands out by demonstrating superior predictive accuracy under the current operational conditions.
Furthermore, as N p increases, MPMSE decreases across all three methods. This decline is attributed to the broader prediction range, resulting in higher prediction accuracy. The outcomes of this comparative analysis compellingly support the superiority of the proposed NSPC-LWPR approach in terms of predictive performance. This finding indirectly substantiates its efficacy in enhancing tracking capabilities, underscoring its potential for effective control in nonlinear industrial process systems.
All data analysis in this study was conducted using the Python programming language (V3.9.2) with the following libraries: NumPy (V1.21.6), SciPy (V1.11.0), and Matplotlib (V3.3.1) [36].

5. Conclusions

In this paper, we propose an NSPC-LWPR approach to address tracking issues in nonlinear industrial process control. Our approach integrates the LWPR algorithm into the framework of the SPC method, harnessing the exceptional nonlinear handling capabilities of LWPR. Through the segmentation of the input space into localized regions, the construction of precise local models, and their aggregation through weighted summation, our approach adeptly captures dynamic system characteristics and trains the regression model. The adaptability and efficiency of our approach are further augmented by a dynamic control strategy that adjusts based on the online process data and the parameters of the established local models. Furthermore, the verification of our approach against the CSTH benchmark demonstrates its superiority over conventional SPC and ASPC methods, affirming its ability to significantly enhance tracking precision and predictive accuracy in industrial process control.
While our proposed control approach has demonstrated exceptional performance, it is important to acknowledge that there remain unexplored avenues for further research. Future investigations could delve into methodological refinements, expanding the applicability of our approach to diverse control problems, or exploring advanced variants of the LWPR algorithm.

Author Contributions

Conceptualization, X.W. and X.Y.; methodology, X.W.; software, X.W.; validation, X.W. and X.Y.; formal analysis, X.W.; investigation, X.W. and X.Y.; resources, X.W. and X.Y.; data curation, X.W.; writing—original draft preparation, X.W.; writing—review and editing, X.W.; visualization, X.W.; supervision, X.Y.; project administration, X.Y.; funding acquisition, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62373130 and partially funded by the Self-Planned Task of State Key Laboratory of Robotics and System under Grant SKLRS202201A05.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The authors are thankful to the anonymous reviewers whose comments helped us to improve the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dong, Y.; Lang, Z.Q.; Zhao, J.; Wang, W.; Lan, Z. A Novel Data-Driven Approach to Analysis and Optimal Design of Forced Periodic Operation of Chemical Reactions. IEEE Trans. Ind. Electron. 2023, 70, 8365–8376. [Google Scholar] [CrossRef]
  2. Wen, S.; Guo, G. Distributed Trajectory Optimization and Sliding Mode Control of Heterogenous Vehicular Platoons. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7096–7111. [Google Scholar] [CrossRef]
  3. Du, S.; Wu, M.; Chen, X.; Cao, W. An Intelligent Control Strategy for Iron Ore Sintering Ignition Process Based on the Prediction of Ignition Temperature. IEEE Trans. Ind. Electron. 2020, 67, 1233–1241. [Google Scholar] [CrossRef]
  4. Li, S.; Zheng, Y.; Li, S.; Huang, M. Data-Driven Modeling and Operation Optimization with Inherent Feature Extraction for Complex Industrial Processes. IEEE Trans. Autom. Sci. Eng. 2023, 21, 1092–1106. [Google Scholar] [CrossRef]
  5. Zhou, P.; Zhang, S.; Wen, L.; Fu, J.; Chai, T.; Wang, H. Kalman Filter-Based Data-Driven Robust Model-Free Adaptive Predictive Control of a Complicated Industrial Process. IEEE Trans. Autom. Sci. Eng. 2022, 19, 788–803. [Google Scholar] [CrossRef]
  6. Ren, L.; Meng, Z.; Wang, X.; Zhang, L.; Yang, L.T. A Data-Driven Approach of Product Quality Prediction for Complex Production Systems. IEEE Trans. Ind. Inform. 2021, 17, 6457–6465. [Google Scholar] [CrossRef]
  7. Harbaoui, H.; Khalfallah, S. An Effective Optimization Approach to Minimize Waste in a Complex Industrial System. IEEE Access 2022, 10, 13997–14012. [Google Scholar] [CrossRef]
  8. Mehrtash, M.; Capitanescu, F.; Heiselberg, P.K.; Gibon, T. A New Bi-Objective Approach for Optimal Sizing of Electrical and Thermal Devices in Zero Energy Buildings Considering Environmental Impacts. IEEE Trans. Sustain. Energy 2021, 12, 886–896. [Google Scholar] [CrossRef]
  9. Zhang, F.; Kodituwakku, H.A.D.E.; Hines, J.W.; Coble, J. Multilayer Data-Driven Cyber-Attack Detection System for Industrial Control Systems Based on Network, System, and Process Data. IEEE Trans. Ind. Inform. 2019, 15, 4362–4369. [Google Scholar] [CrossRef]
  10. Kang, E.; Qiao, H.; Chen, Z.; Gao, J. Tracking of Uncertain Robotic Manipulators Using Event-Triggered Model Predictive Control With Learning Terminal Cost. IEEE Trans. Autom. Sci. Eng. 2022, 19, 2801–2815. [Google Scholar] [CrossRef]
  11. Choi, W.Y.; Lee, S.H.; Chung, C.C. Horizonwise Model-Predictive Control With Application to Autonomous Driving Vehicle. IEEE Trans. Ind. Inform. 2022, 18, 6940–6949. [Google Scholar] [CrossRef]
  12. Oshnoei, A.; Kheradmandi, M.; Khezri, R.; Mahmoudi, A. Robust Model Predictive Control of Gate-Controlled Series Capacitor for LFC of Power Systems. IEEE Trans. Ind. Inform. 2021, 17, 4766–4776. [Google Scholar] [CrossRef]
  13. Shang, C.; You, F. A data-driven robust optimization approach to scenario-based stochastic model predictive control. J. Process Control 2019, 75, 24–39. [Google Scholar] [CrossRef]
  14. Yao, L.; Shao, W.; Ge, Z. Hierarchical Quality Monitoring for Large-Scale Industrial Plants With Big Process Data. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3330–3341. [Google Scholar] [CrossRef] [PubMed]
  15. Han, H.; Fu, S.; Sun, H.; Qiao, J. Data-Driven Multimodel Predictive Control for Multirate Sampled-Data Nonlinear Systems. IEEE Trans. Autom. Sci. Eng. 2023, 20, 2182–2194. [Google Scholar] [CrossRef]
  16. Habib, M.; Wang, Z.; Qiu, S.; Zhao, H.; Murthy, A.S. Machine Learning Based Healthcare System for Investigating the Association Between Depression and Quality of Life. IEEE J. Biomed. Health Inform. 2022, 26, 2008–2019. [Google Scholar] [CrossRef]
  17. Sun, Q.; Ge, Z. A Survey on Deep Learning for Data-Driven Soft Sensors. IEEE Trans. Ind. Inform. 2021, 17, 5853–5866. [Google Scholar] [CrossRef]
  18. Xue, W.; Fan, J.; Lopez, V.G.; Li, J.; Jiang, Y.; Chai, T.; Lewis, F.L. New Methods for Optimal Operational Control of Industrial Processes Using Reinforcement Learning on Two Time Scales. IEEE Trans. Ind. Inform. 2020, 16, 3085–3099. [Google Scholar] [CrossRef]
  19. Kadali, R.; Huang, B.; Rossiter, A. A data driven subspace approach to predictive controller design. Control Eng. Pract. 2003, 11, 261–278. [Google Scholar] [CrossRef]
  20. Li, Z.; Wang, H.; Fang, Q.; Wang, Y. A data-driven subspace predictive control method for air-cooled data center thermal modelling and optimization. J. Frankl. Inst. 2023, 360, 3657–3676. [Google Scholar] [CrossRef]
  21. Navalkar, S.T.; van Solingen, E.; van Wingerden, J.W. Wind Tunnel Testing of Subspace Predictive Repetitive Control for Variable Pitch Wind Turbines. IEEE Trans. Control Syst. Technol. 2015, 23, 2101–2116. [Google Scholar] [CrossRef]
  22. Zhang, X.; Zhang, L.; Zhang, Y. Model Predictive Current Control for PMSM Drives With Parameter Robustness Improvement. IEEE Trans. Power Electron. 2019, 34, 1645–1657. [Google Scholar] [CrossRef]
  23. Wahab, N.A.; Katebi, R.; Balderud, J.; Rahmat, M.F. Data-driven adaptive model-based predictive control with application in wastewater systems. Control Theory Appl. IET 2010, 5, 803–812. [Google Scholar] [CrossRef]
  24. Vajpayee, V.; Mukhopadhyay, S.; Tiwari, A.P. Data-Driven Subspace Predictive Control of a Nuclear Reactor. IEEE Trans. Nucl. Sci. 2018, 65, 666–679. [Google Scholar] [CrossRef]
  25. Hallouzi, R.; Verhaegen, M. Fault-Tolerant Subspace Predictive Control Applied to a Boeing 747 Model. J. Guid. Control Dyn. 2008, 31, 873–883. [Google Scholar] [CrossRef]
  26. Zhou, P.; Zhang, S.; Dai, P. Recursive Learning-Based Bilinear Subspace Identification for Online Modeling and Predictive Control of a Complicated Industrial Process. IEEE Access 2020, 8, 62531–62541. [Google Scholar] [CrossRef]
  27. Zhou, P.; Song, H.; Wang, H.; Chai, T. Data-Driven Nonlinear Subspace Modeling for Prediction and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking. IEEE Trans. Control Syst. Technol. 2017, 25, 1761–1774. [Google Scholar] [CrossRef]
  28. Luo, X.S.; Song, Y.D. Data-driven predictive control of Hammerstein–Wiener systems based on subspace identification. Inf. Sci. 2018, 422, 447–461. [Google Scholar] [CrossRef]
  29. Vijayakumar, S.; D’Souza, A.; Schaal, S. Incremental Online Learning in High Dimensions. Neural Comput. 2006, 17, 2602–2634. [Google Scholar] [CrossRef]
  30. Ferranti, F.; Rolain, Y. A local identification method for linear parameter-varying systems based on interpolation of state-space matrices and least-squares approximation. Mech. Syst. Signal Process. 2017, 82, 478–489. [Google Scholar] [CrossRef]
  31. De Caigny, J.; Camino, J.F.; Swevers, J. Interpolating model identification for SISO linear parameter-varying systems. Mech. Syst. Signal Process. 2009, 23, 2395–2417. [Google Scholar] [CrossRef]
  32. Felici, F.; van Wingerden, J.W.; Verhaegen, M. Subspace identification of MIMO LPV systems using a periodic scheduling sequence. Automatica 2007, 43, 1684–1697. [Google Scholar] [CrossRef]
  33. Favoreel, W.; Moor, B.D.; Gevers, M. SPC: Subspace Predictive Control. IFAC Proc. Vol. 1999, 32, 4004–4009. [Google Scholar] [CrossRef]
  34. Atkeson, C.G.; Schaal, M.S. Locally Weighted Learning. Artif. Intell. Rev. 1997, 11, 11–73. [Google Scholar] [CrossRef]
  35. Thornhill, N.F.; Patwardhan, S.C.; Shah, S.L. A continuous stirred tank heater simulation model with applications. J. Process Control 2008, 18, 347–360. [Google Scholar] [CrossRef]
  36. Johansson, R. Numerical Python: Scientific Computing and Data Science Applications with Numpy, SciPy and Matplotlib; Apress: New York, NY, USA, 2019. [Google Scholar]
Figure 1. Information processing unit of the LWPR learning scheme.
Figure 2. The control diagram of the proposed nonlinear subspace predictive control approach based on locally weighted projection regression (NSPC-LWPR).
Figure 3. The continuous stirred tank heater (CSTH) system in IIT Bombay.
Figure 4. The outputs and setpoints of T 1 under different control frameworks. (a) Under the subspace predictive control (SPC) framework. (b) Under the adaptive subspace predictive control (ASPC) framework. (c) Under the NSPC-LWPR framework.
Figure 5. Subspace predictor outputs u 4 and u 5 under different control frameworks. (a) Under the SPC framework. (b) Under the ASPC framework. (c) Under the NSPC-LWPR framework.
Figure 6. Change rates of u 4 and u 5 .
Table 1. Locally weighted projection regression (LWPR) learning scheme for one RF centered at $x_c$ [29].
1. Initialization (number of training samples seen $n = 0$):
      $x_0^0 = 0$, $\beta_0^0 = 0$, $W^0 = 0$, $u_r^0 = 0$, $p_r^0 = 0$; $r = 1{:}R$
2. Incorporating new data: given training point $(x, y)$
      2a. Compute activation and update the means
            1. $w = \exp\!\left(-0.5\,(x - x_c)^T D (x - x_c)\right)$;  $W^{n+1} = \lambda W^n + w$
            2. $x_0^{n+1} = \left(\lambda W^n x_0^n + w x\right)/W^{n+1}$;  $\beta_0^{n+1} = \left(\lambda W^n \beta_0^n + w y\right)/W^{n+1}$
      2b. Compute the current prediction error
            $x_{res,1} = x - x_0^{n+1}$, $\hat{y} = \beta_0^{n+1}$
            Repeat for $r = 1{:}R$ (projections)
            1. $z_r = x_{res,r}^T u_r^n / \left( (u_r^n)^T u_r^n \right)$
            2. $\hat{y} = \hat{y} + \beta_r^n z_r$
            3. $x_{res,r+1} = x_{res,r} - z_r p_r^n$
            4. $\mathrm{MSE}_r^{n+1} = \lambda\, \mathrm{MSE}_r^n + w \left( y - \hat{y} \right)^2$
      2c. Update the local model
            $res_1 = y - \beta_0^{n+1}$
            Repeat for $r = 1{:}R$ (projections)
            2c.1 Update the local regression and compute residuals
                  1. $a_{zz,r}^{n+1} = \lambda a_{zz,r}^n + w z_r^2$;  $a_{zres,r}^{n+1} = \lambda a_{zres,r}^n + w z_r\, res_r$
                  2. $\beta_r^{n+1} = a_{zres,r}^{n+1} / a_{zz,r}^{n+1}$
                  3. $res_{r+1} = res_r - z_r \beta_r^{n+1}$
                  4. $a_{xz,r}^{n+1} = \lambda a_{xz,r}^n + w x_{res,r} z_r$
            2c.2 Update the projection directions
                  1. $u_r^{n+1} = \lambda u_r^n + w x_{res,r}\, res_r$
                  2. $p_r^{n+1} = a_{xz,r}^{n+1} / a_{zz,r}^{n+1}$
Table 2. Indexes and symbols used for LWPR [29].
Notation: Description
N: Number of training data points
M: Number of local models
R: Number of local projections
z_r (r = 1:R): rth element of the lower-dimensional projection of input data x
u_r (r = 1:R): rth projection direction
p_r (r = 1:R): Regressed input space to be subtracted to maintain orthogonality of the projection directions
W: Diagonal weight matrix representing the activation due to all samples
beta_r (r = 1:R): rth component of the slope of the local linear model beta = [beta_1 ... beta_R]^T
lambda: Forgetting factor used to exclude old data and accelerate the learning process
MSE_r^n: Mean squared error of the nth sample in the rth projection
a_zz,r^n, a_zres,r^n, a_xz,r^n: Sufficient statistics for the incremental computation of the rth projection after seeing n data points
Table 3. Theoretical comparison among different control strategies.
Method          | MPC               | SPC                   | ASPC            | NSPC-LWPR
Approach Type   | model-based       | data-driven           | data-driven     | data-driven
Prior Knowledge | model information | off-line process data | no need         | no need
Dynamic Ability | able              | unable                | able            | able
Controller Type | fixed; linear     | fixed; linear         | unfixed; linear | unfixed; nonlinear
Table 4. Nominal model parameters and steady state.
Parameter | Description                        | Value
V_1       | Volume of tank 1                   | 1.75 × 10^-3 m^3
A_2       | Cross-sectional area of tank 2     | 7.584 × 10^-3 m^2
r_2       | Radius of tank 2                   | 0.05 m
U         | Heat transfer coefficient          | 235.1 W/(m^2 K)
T_c       | Cooling water temperature          | 30 °C
T_a       | Atmospheric temperature            | 25 °C
u_1       | Flow F_1 (% input)                 | 60%
u_2       | Flow F_2 (% input)                 | 55%
u_3       | Flow F_R (% input)                 | 50%
u_4       | Heat input Q_1 (% input)           | 60%
u_5       | Heat input Q_2 (% input)           | 80%
T_1       | Steady-state temperature (tank 1)  | 49.77 °C
T_2       | Steady-state temperature (tank 2)  | 52.92 °C
h_2       | Steady-state level                 | 0.3599 m
Table 5. Parameter setup of the CSTH benchmark study.
Parameter | E_Q | E_R | s_p | s_f | N_c | N_p | f_s
Value     | 1   | 2   | 10  | 5   | 3   | 4   | 10 Hz
Table 6. Multi-step prediction mean squared error (MPMSE) comparison.
Control Method | N_p = 3 | N_p = 4 | N_p = 5 | N_p = 6 | N_p = 7
SPC            | 1.5429  | 1.2649  | 1.0923  | 0.9732  | 0.8983
ASPC           | 0.1321  | 0.1147  | 0.1026  | 0.0934  | 0.0896
NSPC-LWPR      | 0.0946  | 0.0845  | 0.0740  | 0.0711  | 0.0661