Article

Fault Detection Based on Kernel Global Local Preserving Projection

School of Ship Electrical Engineering, Dalian Maritime University, Dalian 116026, China
*
Author to whom correspondence should be addressed.
Information 2025, 16(4), 256; https://doi.org/10.3390/info16040256
Submission received: 10 February 2025 / Revised: 5 March 2025 / Accepted: 15 March 2025 / Published: 21 March 2025

Abstract

In this paper, a fault detection method based on kernel global local preserving projection is presented to address the nonlinear characteristics of industrial systems. First, data are projected into a high-dimensional feature space through nonlinear mapping, enabling linear separability in this feature space. Subsequently, data features are extracted using the global local preserving projection method in the high-dimensional feature space. Finally, a monitoring model is established based on these features. Experiments on the Tennessee Eastman process and industrial boilers demonstrate that the proposed method balances global and local data structures, reduces nonlinear influences, and improves the fault detection rate.

1. Introduction

The advancement of industrial technology has led to increasingly complex and integrated industrial systems. These systems typically employ multiple sensors to collect and monitor operational data [1,2,3,4]. The data generated during the production process exhibit high-dimensional characteristics. These data are no longer simply one-dimensional or two-dimensional information. There are complex coupling relationships between data, and traditional methods struggle to directly extract effective information from high-dimensional data. Therefore, rapidly and effectively extracting data features and accurately identifying fault information are crucial to ensure system safety [5,6,7,8,9,10].
Traditional fault detection methods primarily rely on linear systems and thus fail to yield satisfactory fault detection results for systems with nonlinear characteristics. In real-world production environments, data exhibit nonlinear characteristics. Consequently, researchers have extensively studied nonlinear systems. For instance, Schölkopf et al. [11] proposed the kernel principal component analysis (KPCA) algorithm. By introducing a kernel function, the original data are mapped to a high-dimensional feature space, where linear principal component analysis is conducted. KPCA can effectively capture these nonlinear relationships, reduce data dimensionality while retaining the main features of the data, and exhibit robustness to noise, thereby enhancing the reliability of fault diagnosis systems under complex real-world conditions. Kong et al. [12] proposed the Input–Output Kernel Partial Least Squares (IO-KPLS) model, which addresses nonlinear system problems by integrating the strengths of kernel methods and partial least squares. Similarly, it maps the original data to a high-dimensional feature space through nonlinear mapping and performs linear analysis in this high-dimensional space. This algorithm not only considers the characteristics of input data but also fully utilizes the correlation between output data and input data, providing a more comprehensive description of the system’s operating state. Jiao et al. [13] proposed a kernel sample equivalence substitution-based Kernel Partial Least Squares (KPLS) method, which constructs a linear regression relationship between original variables and outputs and utilizes an improved contribution plot method for fault localization, thus eliminating the nonlinear characteristics of the system while reducing the computational complexity. Guo Jinyu et al. 
[14] utilized the kernel locality preserving projection (KLPP) algorithm and a kernel function to map nonlinear data into a high-dimensional space, followed by dimensionality reduction using the locality preserving projection (LPP) algorithm. This algorithm emphasizes the preservation of the local structure of data, which is crucial for fault diagnosis. In practical systems, faults often induce changes in data within a local scope. KLPP can highlight these local change features, facilitating a more detailed analysis of fault types and degrees. Li Rong et al. [15] improved the KPCA method by incorporating information entropy and proposed the kernel entropy component analysis algorithm for dimensionality reduction with minimal information entropy loss. Zhang et al. [16] proposed the principal polynomial analysis (PPA) algorithm, which constructs a polynomial function to optimally fit the observed data. Drawing on the idea of PPA, the coefficient matrix of the polynomial model is analyzed. PPA can effectively describe the nonlinear relationship between data through the polynomial model. Its multivariate analysis capability enables it to grasp the overall operation of the system, thereby improving the accuracy and reliability of fault diagnosis. However, these methods fail to simultaneously balance global and local data structures, thus affecting fault detection accuracy and reliability due to incomplete feature extraction.
To overcome this problem, Zhang et al. [17] proposed the global–local structure analysis (GLSA) method for fault detection based on the characteristics of principal component analysis (PCA) and LPP. GLSA takes into account both global and local features of data, enabling a more comprehensive extraction of fault-related information compared to algorithms that focus solely on single-level features. This comprehensive feature extraction approach provides a more accurate and detailed description of the system’s operating state during fault diagnosis, enhancing the reliability of fault diagnosis. In particular, for process data with complex structures, different analysis methods are employed to handle global and local features separately, effectively mining fault information from the data. This is particularly suitable for handling data with nonlinear and non-stationary characteristics. Yu et al. [18] proposed the local and global principal component analysis (LGPCA) method, aiming to preserve global and local data structures. LPCA (Local Principal Component Analysis) can divide data into multiple local regions, capturing the variation characteristics of data within a local scope and providing more detailed fault information than GPCA (Global Principal Component Analysis). LGPCA balances global and local features by assigning different weights to LPCA and GPCA. However, the division method of LPCA relies to some extent on prior knowledge, and different division methods may lead to different local PCA results, thereby affecting the final diagnostic performance of LGPCA. To this end, Luo et al. [19] proposed the global–local preserving projection (GLPP) algorithm, which unifies global and local data structures within a single framework, balancing them through weighting coefficients. The greatest advantage of GLPP lies in its ability to preserve both the global and local structures of data simultaneously. 
This enables it to comprehensively capture data features in fault diagnosis, avoiding potential information loss that may arise from focusing solely on global or local structures. Compared to methods that consider only a single structure, it can more accurately describe the operating state and fault characteristics of the system. Due to the use of similarity matrices and projection techniques, GLPP can handle nonlinear data relationships to a certain extent.
Given the aforementioned problems, in this paper, a kernel global local preserving projection (KGLPP)-based fault detection algorithm is presented. In the proposed algorithm, data are projected into a high-dimensional feature space through nonlinear mapping, enabling linear separability. Next, the data are projected into a low-dimensional space while considering both global and local characteristics. Subsequently, a fault monitoring model is established. The effectiveness of this algorithm was validated by performing fault detection experiments on the Tennessee Eastman (TE) process and an industrial boiler.

2. KGLPP Algorithm

2.1. Principle of the KGLPP Algorithm

KGLPP is a nonlinear form of GLPP and addresses the problem of nonlinear characteristics encountered in industrial systems by projecting samples into a high-dimensional space through nonlinear mapping, making the data linearly separable. The original dataset can be represented as $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$, where $n$ is the number of samples and $m$ is the dimensionality of each sample. GLPP preserves the global structure of the data while also considering the local structure. The projection vector is obtained by solving Equation (1) as follows:
$$J_{\mathrm{GLPP}}(\mathbf{a}) = \min_{\mathbf{a}} \mathbf{a}^T X M X^T \mathbf{a}, \quad \text{s.t.}\ \mathbf{a}^T N \mathbf{a} = 1, \tag{1}$$
where $\mathbf{a} = [a_1, a_2, \ldots, a_n]^T$ is the projection vector, $M = H - R$ is the Laplacian matrix, and $N = \eta X H X^T + (1 - \eta) I$. $H$ is a diagonal matrix satisfying $H_{ii} = \sum_j R_{ij}$, where $R_{ij} = \eta W_{ij} - (1 - \eta)\bar{W}_{ij}$; $I$ is the identity matrix, $\eta$ is the weighting coefficient, $W$ is the neighborhood weight matrix, and $\bar{W}$ is the non-neighborhood weight matrix, as defined in Equations (2) and (3):
$$W_{ij} = \begin{cases} e^{-\frac{\|x_i - x_j\|^2}{\sigma}}, & \text{if } x_j \in \Omega_k(x_i) \text{ or } x_i \in \Omega_k(x_j) \\ 0, & \text{otherwise,} \end{cases} \tag{2}$$
where $e^{-\|x_i - x_j\|^2 / \sigma}$ is the heat kernel function, $\|x_i - x_j\|$ is the Euclidean norm between samples $x_i$ and $x_j$, $\sigma$ is a parameter of the heat kernel function, and $\Omega_k(x_i)$ represents the set of the $k$ nearest neighbors of sample $x_i$.
$$\bar{W}_{ij} = \begin{cases} e^{-\frac{\|x_i - x_j\|^2}{\sigma}}, & \text{if } x_j \notin \Omega_k(x_i) \text{ and } x_i \notin \Omega_k(x_j) \\ 0, & \text{otherwise.} \end{cases} \tag{3}$$
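The weight matrices of Equations (2) and (3), together with the matrices $R$, $H$, and $M = H - R$ used in the GLPP objective, can be sketched in NumPy as follows. This is a minimal illustration; the function name and the default values of `k`, `sigma`, and `eta` are ours, not from the paper.

```python
import numpy as np

def glpp_weights(X, k=5, sigma=400.0, eta=0.5):
    """Build the neighbor weight matrix W (Eq. 2), the non-neighbor weight
    matrix W_bar (Eq. 3), and the matrices R, H (diagonal, H_ii = sum_j R_ij),
    and M = H - R from the GLPP objective. X is m x n, columns are samples."""
    n = X.shape[1]
    # pairwise squared Euclidean distances between samples
    sq = np.sum(X**2, axis=0)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X.T @ X, 0.0)
    heat = np.exp(-d2 / sigma)                           # heat kernel values
    # k-nearest-neighbor mask (column 0 of argsort is the sample itself)
    order = np.argsort(d2, axis=1)
    knn = np.zeros((n, n), dtype=bool)
    for i in range(n):
        knn[i, order[i, 1:k + 1]] = True
    neighbor = knn | knn.T                               # x_j in Omega_k(x_i) OR x_i in Omega_k(x_j)
    non_neighbor = ~(knn | knn.T) & ~np.eye(n, dtype=bool)  # neither is a neighbor of the other
    W = np.where(neighbor, heat, 0.0)                    # Eq. (2)
    W_bar = np.where(non_neighbor, heat, 0.0)            # Eq. (3)
    R = eta * W - (1.0 - eta) * W_bar                    # weighted combination
    H = np.diag(R.sum(axis=1))                           # diagonal, H_ii = sum_j R_ij
    M = H - R                                            # Laplacian-type matrix
    return W, W_bar, M, H
```

Note that, by construction, the rows of $M$ sum to zero, as for a graph Laplacian.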
Solving Equation (4) yields the eigenvalues and eigenvectors, which are then used to construct the optimal transformation vectors as follows:
$$X M X^T \mathbf{a} = \lambda N \mathbf{a}. \tag{4}$$
The original data $X$ are mapped from the original space $\mathbb{R}^m$ into a high-dimensional feature space $F_h$ by the nonlinear mapping $\phi(x)$. The kernel matrix $K \in \mathbb{R}^{n \times n}$, defined by $K_{ij} = k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle = \phi(x_i)^T \phi(x_j)$, realizes this mapping implicitly. Denoting the centralized kernel matrix by $\bar{K}$, Equation (4) can be transformed into the following:
$$\bar{K} M \bar{K}^T \mathbf{a} = \lambda \tilde{N} \mathbf{a}, \tag{5}$$
where $\tilde{N} = \eta \bar{K} H \bar{K}^T + (1 - \eta) \bar{K}$. $\bar{K}$ is calculated as follows:
$$\bar{K} = K - EK - KE + EKE, \tag{6}$$
where each element of $E \in \mathbb{R}^{n \times n}$ is $E_{ij} = 1/n$.
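The kernel computation and the centralization of Equation (6) can be sketched as follows. A Gaussian kernel matching the heat kernel above is assumed, and the function names are ours.

```python
import numpy as np

def rbf_kernel(X, sigma=400.0):
    """Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / sigma).
    X is m x n with columns as samples; sigma is illustrative."""
    sq = np.sum(X**2, axis=0)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X.T @ X, 0.0)
    return np.exp(-d2 / sigma)

def center_kernel(K):
    """Centralize the kernel matrix per Eq. (6): K_bar = K - EK - KE + EKE,
    where every entry of E is 1/n."""
    n = K.shape[0]
    E = np.full((n, n), 1.0 / n)
    return K - E @ K - K @ E + E @ K @ E
```

A quick sanity check: the rows and columns of the centralized matrix sum to zero, which corresponds to the mapped data being mean-centered in feature space.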
Solving Equation (5) yields the projection matrix, which is composed of the eigenvectors $a_1, a_2, \ldots, a_l$ corresponding to the $l$ smallest eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_l$. By retaining the first $l$ eigenvectors, the projection of the $k$-th sample can be calculated as follows:
$$t_{k,j} = \sum_{i=1}^{n} a_{j,i}\, \bar{k}(x_i, x_k) = \bar{k}(X, x_k)^T \mathbf{a}_j, \tag{7}$$
where $\mathbf{a}_j = [a_{j,1}, a_{j,2}, \ldots, a_{j,n}]^T$, $j = 1, 2, \ldots, l$, and $\bar{k}(X, x_k) = [\bar{k}(x_1, x_k), \bar{k}(x_2, x_k), \ldots, \bar{k}(x_n, x_k)]^T$.
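Solving the generalized eigenproblem of Equation (5) numerically might look like the sketch below. SciPy's generalized `eig` is one of several possible solvers, and the small ridge term `reg` is our own numerical safeguard, not part of the paper's method.

```python
import numpy as np
from scipy.linalg import eig

def kglpp_fit(K_bar, M, H, eta=0.5, n_components=2, reg=1e-8):
    """Solve K_bar M K_bar a = lambda N_tilde a (Eq. 5), with
    N_tilde = eta K_bar H K_bar + (1 - eta) K_bar, and return the
    eigenvectors of the n_components smallest eigenvalues as columns."""
    n = K_bar.shape[0]
    lhs = K_bar @ M @ K_bar
    n_tilde = eta * (K_bar @ H @ K_bar) + (1.0 - eta) * K_bar + reg * np.eye(n)
    vals, vecs = eig(lhs, n_tilde)              # generalized eigenproblem
    order = np.argsort(vals.real)               # ascending: smallest first
    return np.real(vecs[:, order[:n_components]])
```

The training-sample scores of Equation (7) then follow as `T = K_bar.T @ A` with `A` the returned matrix.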
Similarly, the projection of the new sample x new can be calculated as follows:
$$t_{\mathrm{new},j} = \sum_{i=1}^{n} a_{j,i}\, \bar{k}(x_{\mathrm{new}}, x_i) = \bar{k}(X, x_{\mathrm{new}})^T \mathbf{a}_j, \tag{8}$$
where k ¯ ( X , x new ) can be calculated using Equation (9) as follows:
$$\bar{k}(X, x_{\mathrm{new}}) = k(X, x_{\mathrm{new}}) - K\mathbf{e} - E\,k(X, x_{\mathrm{new}}) + EK\mathbf{e}, \tag{9}$$
where $k(X, x_{\mathrm{new}}) = [k(x_1, x_{\mathrm{new}}), k(x_2, x_{\mathrm{new}}), \ldots, k(x_n, x_{\mathrm{new}})]^T$ is the kernel vector corresponding to $x_{\mathrm{new}}$, and each element of $\mathbf{e} \in \mathbb{R}^n$ is $1/n$.
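The centralization of a new sample's kernel vector in Equation (9) can be sketched as follows (function name ours). A useful sanity check: if $x_{\mathrm{new}}$ coincides with training sample $x_i$, the result equals the $i$-th column of the centralized training kernel matrix from Equation (6).

```python
import numpy as np

def center_new_kernel(K, k_new):
    """Centralize the kernel vector of a new sample per Eq. (9):
    k_bar = k_new - K e - E k_new + E K e, where E is n x n with
    entries 1/n and e is the length-n vector with entries 1/n."""
    n = K.shape[0]
    e = np.full(n, 1.0 / n)
    E = np.full((n, n), 1.0 / n)
    return k_new - K @ e - E @ k_new + E @ (K @ e)
```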
The weighting coefficient $\eta \in [0, 1]$ is determined based on the characteristics of the dataset. A large $\eta$ emphasizes preserving the local structure of the data, whereas a small $\eta$ emphasizes preserving the global structure. To balance the global and local structures, the value of $\eta$ is determined as follows:
$$\eta = \frac{\rho(\bar{L})}{\rho(L) + \rho(\bar{L})}, \tag{10}$$
where $\rho(L)$ and $\rho(\bar{L})$ are the spectral radii of the matrices $L$ and $\bar{L}$ that preserve the local and global structures of the data, respectively.
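Equation (10) can be sketched as below. The paper states only that the two spectral radii balance the local and global terms; placing the global matrix's radius in the numerator (so that a dominant local term shrinks $\eta$) is the convention we assume here, and the function names are ours.

```python
import numpy as np

def spectral_radius(L):
    """Spectral radius rho(L): the largest absolute eigenvalue."""
    return float(np.max(np.abs(np.linalg.eigvals(L))))

def balance_eta(L_local, L_global):
    """Weighting coefficient of Eq. (10), assumed convention:
    eta = rho(L_global) / (rho(L_local) + rho(L_global))."""
    rl, rg = spectral_radius(L_local), spectral_radius(L_global)
    return rg / (rl + rg)
```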
At the boundary values of $\eta$, two special cases of KGLPP can be obtained. If $\eta = 0$ and the non-neighbor weight matrix $\bar{W} = \mathbf{1}_{n \times n}$, that is, the neighborhood relationships between data points are ignored, Equation (5) can be simplified as follows:
$$\bar{K} \bar{L} \bar{K} \mathbf{a} = \lambda \bar{K} \mathbf{a}, \tag{11}$$
where $\bar{L} = \bar{D} - \bar{W}$ is a symmetric matrix with $\bar{L}_{ii} = n - 1$ and $\bar{L}_{ij} = -1$ $(i \ne j)$. When the sample size $n$ is sufficiently large, the matrix $\bar{L}/n$ approximates the identity matrix, and Equation (11) simplifies to the following:
$$\bar{K}\bar{K}\mathbf{a} = \frac{\lambda}{n}\bar{K}\mathbf{a} \;\Longrightarrow\; \bar{K}\mathbf{a} = \bar{\lambda}\mathbf{a}. \tag{12}$$
This is consistent with solving the KPCA problem; hence, KPCA is a special case of the KGLPP algorithm, which aims to preserve the global structure of the data. Similarly, if η = 1 , which means ignoring the global structure of the data, Equation (5) can be simplified to the following:
$$\bar{K} L \bar{K} \mathbf{a} = \lambda \bar{K} D \bar{K} \mathbf{a}. \tag{13}$$
This equation represents the eigenvalue problem of KLPP; thus, KLPP is another special case of KGLPP, retaining only the local structure of the data.

2.2. Process Model

After standardizing the nonlinear dataset X , the original data are projected into the principal component subspace and residual subspace by using the KGLPP model, resulting in the following statistical model:
$$\bar{K} = T A^{+} + E, \qquad T = \bar{K} A, \qquad E = \bar{K} - T A^{+}, \tag{14}$$
where $\bar{K}$ is the centralized kernel matrix, $T = [t_1, t_2, \ldots, t_l]$ is the score matrix, $A = [a_1, a_2, \ldots, a_l]$ is the projection matrix, $E$ is the residual matrix, and $A^{+}$ is the pseudoinverse of matrix $A$. Mapping the newly collected sample data $x_{\mathrm{new}}$ onto the KGLPP model yields the following:
$$\bar{k}(X, x_{\mathrm{new}}) = (A^{+})^T t_{\mathrm{new}} + e_{\mathrm{new}}, \qquad t_{\mathrm{new}} = A^T \bar{k}(X, x_{\mathrm{new}}), \qquad e_{\mathrm{new}} = \bar{k}(X, x_{\mathrm{new}}) - (A^{+})^T t_{\mathrm{new}}, \tag{15}$$
where $\bar{k}(X, x_{\mathrm{new}})$ is the kernel vector centralized by calculation using Equation (9).
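The score/residual split of Equation (15) for a new sample can be sketched as below (a minimal NumPy illustration; the function name is ours).

```python
import numpy as np

def project_new(A, k_bar_new):
    """Decompose a centralized kernel vector per Eq. (15):
    t_new = A^T k_bar,  e_new = k_bar - (A^+)^T t_new,
    so that (A^+)^T t_new + e_new reconstructs k_bar exactly."""
    t_new = A.T @ k_bar_new                       # scores in the principal subspace
    e_new = k_bar_new - np.linalg.pinv(A).T @ t_new  # residual part
    return t_new, e_new
```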

2.3. Monitoring Indicators

The indicators T2 and SPE quantify the data variations in the principal component subspace and residual subspace, respectively. T2 is defined as follows:
$$T^2 = t^T S^{-1} t, \tag{16}$$
where $t$ represents the score vector and $S = \frac{1}{n-1} T^T T$ is the covariance matrix of the score matrix $T$. SPE is defined as follows:
$$\mathrm{SPE} = \left\| \left[ (A_f A_f^{+})^T - (A A^{+})^T \right] \bar{k}(X, x) \right\|^2, \tag{17}$$
where $A$ is the projection matrix and $A_f$ is a matrix composed of the eigenvectors $a_1, a_2, \ldots, a_{\mathrm{DimF}_s}$ corresponding to the $\mathrm{DimF}_s$ smallest eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_{\mathrm{DimF}_s}$ calculated using Equation (5). $\mathrm{DimF}_s$ $(l < \mathrm{DimF}_s \le n)$ represents the effective number of dimensions of the feature space, typically determined by empirical knowledge.
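The two monitoring statistics of Equations (16) and (17) can be sketched as below. The function names are ours, and the $T^2$ sketch assumes the training scores are mean-centered, as produced by the centralized kernel model.

```python
import numpy as np

def t2_statistic(T, t_new):
    """Hotelling T^2 of Eq. (16): t^T S^{-1} t, with S the sample
    covariance of the (assumed mean-centered) training score matrix T (n x l)."""
    n = T.shape[0]
    S = (T.T @ T) / (n - 1)
    return float(t_new @ np.linalg.solve(S, t_new))

def spe_statistic(A, A_f, k_bar_new):
    """SPE of Eq. (17): ||[(A_f A_f^+)^T - (A A^+)^T] k_bar(X, x)||^2,
    with A (n x l) the retained projection matrix and A_f (n x DimFs)
    spanning the effective feature-space dimensions."""
    Pf = (A_f @ np.linalg.pinv(A_f)).T   # projector onto span(A_f), transposed
    P = (A @ np.linalg.pinv(A)).T        # projector onto span(A), transposed
    r = (Pf - P) @ k_bar_new             # residual component of the kernel vector
    return float(r @ r)
```

When $A_f = A$ the two projectors coincide and SPE is zero, which matches the interpretation of SPE as variation outside the retained subspace.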

2.4. Fault Detection Steps

Fault detection in a boiler system by using the KGLPP method involves two main steps, namely, offline modeling and online diagnosis, as illustrated in Figure 1. The specific steps are as follows.
Offline Modeling.
  • Historical data of the industrial system under normal conditions are collected to form the training set;
  • An appropriate kernel function is selected, and the kernel matrix is computed and centralized using Equation (6);
  • The generalized eigenvalue problem of Equation (5) is solved, and the resulting eigenvectors are used to establish a monitoring model;
  • The control limits $T^2_{\alpha}$ and $Q_{\alpha}$ of the statistics under normal operating conditions are determined.
Online Diagnosis.
  • Test samples are obtained and preprocessed;
  • The kernel matrix is calculated and centralized using Equation (9);
  • New samples are projected onto the monitoring model by using Equation (15);
  • T2 and SPE values of the test samples are computed;
  • The test sample statistics are compared with the control limits; a sample is deemed normal if neither statistic exceeds its limit, and faulty otherwise.
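The threshold and decision steps above can be sketched as follows. The paper fixes a 99% confidence level; using the empirical quantile of the training statistics as the control limit (rather than, say, a kernel density estimate) is a simplification on our part, and the function names are ours.

```python
import numpy as np

def control_limit(stats, confidence=0.99):
    """Empirical control limit: the `confidence` quantile of a monitoring
    statistic (T^2 or SPE) computed on normal training data."""
    return float(np.quantile(stats, confidence))

def classify(t2, spe, t2_lim, spe_lim):
    """Online decision rule: a sample is declared faulty if either T^2 or
    SPE exceeds its control limit, and normal only if neither does."""
    return "faulty" if (t2 > t2_lim or spe > spe_lim) else "normal"
```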

3. Simulation Results and Analysis

3.1. TE Process Fault Detection

The TE process is an industrial simulation platform developed by Eastman. It generates data with strong coupling and nonlinear characteristics, making it suitable for testing the effectiveness of fault detection and diagnosis algorithms. This process involves five operating units: the reactor, condenser, stripper, separator, and compressor. The TE process utilizes four gases (A, C, D, and E) to produce two liquid products (G and H). The flowchart of the TE process is shown in Figure 2.
To verify the effectiveness of the KGLPP algorithm, experiments were conducted using TE process data, with normal data ranging from 1 to 160 and fault data ranging from 161 to 960. A total of 52 variables were selected for fault detection in the TE process, with T2 and SPE as detection indicators. The KGLPP-based fault detection algorithm was compared with KPCA, KLPP, and GLPP algorithms under identical parameter settings: a confidence level of 99%, a cumulative contribution rate of 85%, and a kernel parameter σ of 400. The detection rates for faults 2, 3, 5, 6, 7, and 13 achieved using the four methods are presented in Table 1. The KGLPP method achieved the highest fault detection rate, validating its effectiveness.
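The fault detection rates reported in Table 1 are the percentage of fault-window samples (samples 161–960 in the TE runs) whose monitoring statistic exceeds its control limit; a minimal sketch of that computation (function name ours):

```python
import numpy as np

def detection_rate(stats, limit):
    """Fault detection rate (%): the fraction of fault-window samples whose
    monitoring statistic (T^2 or SPE) exceeds the control limit."""
    stats = np.asarray(stats, dtype=float)
    return 100.0 * float(np.mean(stats > limit))
```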
Fault 5, labeled No. 3, is a step-type fault caused by a temperature change at the cooling water inlet of the condenser. KPCA, KLPP, and KGLPP were used to reduce the dimensions of the training set and the fault 5 dataset. Two principal components were selected, and a scatter plot of their data was created (Figure 3). The x-axis represents the first principal component (PC1) and the y-axis represents the second principal component (PC2). KPCA mapped the data along the directions of maximum variance, spreading all data points over a large area, whereas KLPP mapped all data points onto an elongated area. However, neither KPCA nor KLPP distinctly separated fault data from normal data. In contrast, KGLPP mapped normal data onto a small area and fault data onto a large area, effectively separating fault data from normal data.
The results of fault 5 detection by using four different methods are shown in Figure 4. The T2 and SPE indicators of KLPP started detecting the fault at the 167th and 171st sample points, respectively. However, after the 340th sample point, both indicators fell below the control limits, even though the fault persisted. For KPCA, the T2 and SPE indicators began detecting the fault at the 167th and 164th sample points, respectively, and stopped detecting it after the 380th sample point due to the system’s negative feedback mechanism, which caused most variables to return to their pre-fault states. The T2 and SPE indicators of GLPP started detecting the fault at the 178th and 161st sample points, respectively, with the T2 indicator continuously detecting the fault. The T2 and SPE indicators of KGLPP started detecting the fault at the 167th and 161st sample points, respectively, with the SPE indicator detecting the fault until the end. Thus, KGLPP exhibited the highest fault detection rate.
Fault 2 is labeled No. 1. The four aforementioned methods were used to detect this fault, with the results shown in Figure 5. KLPP detected the fault from the 175th sample point, with a fault detection rate of 98.25%. KGLPP detected the fault from the 165th sample point, with a fault detection rate of 99.12%. These results indicate that KGLPP could detect this fault earlier, thus demonstrating optimal fault detection capability. The KGLPP method is advantageous as it not only takes into account the nonlinear characteristics of the data but also preserves both global and local feature information. This dual consideration significantly enhances detection performance.
Fault 13, labeled No. 6, is characterized by slow drift due to changes in reaction kinetic characteristics. The four aforementioned methods were used to detect this fault, with the results shown in Figure 6. KLPP detected the fault from the 209th sample point, while KPCA gradually detected it from the 208th sample point, with some missed detections. GLPP detected the fault from the 197th sample point, and KGLPP detected it from the 161st sample point, also with some missed detections. These results indicate that KGLPP could detect this fault earlier, thus demonstrating optimal fault detection capability.
The experimental results indicate that the KGLPP method excels in fault detection compared to the KLPP, KPCA, and GLPP methods. This superiority stems from the inability of KPCA, KLPP, and GLPP to fully and effectively extract the feature information from the data. While KPCA and KLPP take into account the nonlinear characteristics of the data, KPCA overlooks the local information, and KLPP fails to utilize the global information adequately. Although the GLPP method retains both global and local feature information, it neglects the nonlinear characteristics. The KGLPP method integrates the strengths of KPCA and KLPP. It not only focuses on the nonlinear characteristics of the data but also preserves its global and local structure, thereby extracting feature information more comprehensively and enhancing the fault detection rate.

3.2. Industrial Boiler Fault Detection

An experiment was conducted on a 56 MW industrial boiler, with process parameters as listed in Table 2. This boiler is a chain-type coal-fired hot water boiler, utilizing coal as the combustion medium and producing hot water as the output.
Figure 7 is the flowchart of the boiler system. The system primarily comprises a combustion system and a water circulation system. Coal is conveyed to the furnace via a coal feeder and a grate machine, where it is mixed and burned with air supplied by a blower. Simultaneously, the pressure within the furnace is controlled by an induced draft fan, maintaining it at a slightly negative pressure. After combustion, the coal heats the water inside the boiler drum. Cold water is then pumped into the drum by a circulating water pump, and hot water is distributed to users. The nonlinear characteristics of the boiler system are primarily manifested in the impacts of fuel variations and environmental changes, as well as the effects of high- and low-load operating conditions. Monitoring its operating status involves both safety and economic factors. Data were sampled at 5 min intervals during steady-state operation from November to February. Abnormal data caused by equipment maintenance, data recording errors, downtime, and other reasons were excluded. A total of 1500 sets of data under normal conditions were collected as training samples, and 131 sets of online data were used as test data, with sets 112–131 representing different fault types. The KLPP, KPCA, GLPP, and KGLPP methods were employed for fault detection, with a cumulative contribution rate of 85% and a confidence level of 99%. Five neighboring data points were considered. The detection results are shown in Figure 8 and Figure 9.
The blower fault detection results are shown in Figure 8. The fault arises from a decline in the blower’s efficiency, causing a discrepancy between the output power indicated by the computer control system and the actual power output by the blower. Typically, the actual output power is lower than the controlled output power, leading to an imbalance in the proportions of coal and air, which significantly reduces combustion efficiency. The T2 indicator of KLPP did not detect any faults, while the SPE indicator detected some faults. Both T2 and SPE indicators of KPCA failed to detect any faults. The T2 indicator of GLPP detected only one fault, while its SPE indicator exhibited a large magnitude at the fault point but did not exceed the control limit. The T2 indicator of KGLPP did not detect faults, but its SPE indicator was effective in detecting most faults. This discrepancy occurred because KLPP lost the global structure of the data, KPCA ignored the local structure, and GLPP did not consider the nonlinear characteristics of the data. In contrast, KGLPP retained both global and local structures while considering the nonlinear characteristics of the data, allowing it to extract more useful information and improve fault detection performance.
During the operation of industrial boiler systems, high humidity in coal bunkers may lead to coal agglomeration, preventing the coal from being smoothly transported to the furnace via the coal feeder and grate machine. This results in a significantly higher air volume in the furnace than the coal volume. Due to the reduction in actual coal input, the water outlet temperature decreases. The computer control system will continue to increase the speed of the grate machine and the blower, which can lead to interruptions in coal feeding. If not detected in a timely manner, it can result in boiler shutdown, posing significant safety hazards and economic losses. To address this type of failure, the system simulates a grate machine fault and employs four methods for fault detection, as shown in Figure 9. Only the grate speed was slightly adjusted, increasing the difficulty of fault detection. KLPP, KPCA, and GLPP struggled to detect grate faults, whereas KGLPP successfully detected most of the faults. KGLPP provided timely fault detection and transmitted fault information to operators for prompt action, thereby preventing more serious damage.
The industrial boiler experiments above demonstrate that KGLPP is well suited to industrial boiler systems with nonlinear characteristics. In particular, minor faults can be detected and identified quickly as soon as the operating state of the boiler changes, whereas the other three methods show no obvious response. This is because the processing described above eliminates the adverse effects of nonlinear characteristics on fault detection while effectively retaining the global and local feature information of the data, so the resulting monitoring model achieves higher detection accuracy. For systems such as industrial boilers, KGLPP can detect not only equipment malfunctions but also significant deviations of the boiler from its normal operating state. Furthermore, if a model is built from high-energy-efficiency operating data, deviations from that high-efficiency state caused by manual operation can likewise be detected in a timely manner. Monitoring the operating state of boilers with KGLPP therefore also offers significant economic benefits.

4. Conclusions

In response to nonlinear problems in industrial systems, we proposed a KGLPP-based fault detection algorithm. Data are first projected into a high-dimensional feature space via nonlinear mapping, making them linearly separable. The global–local preserving projection method is then used to extract data features in this high-dimensional space. Based on these features, a monitoring model is established. Considering both global and local data structures, KGLPP can better extract useful information than KPCA and KLPP. By selecting appropriate kernel functions for nonlinear mapping, the capability of KGLPP to handle nonlinear systems is further enhanced. Experiments on the TE process and an industrial boiler system revealed that KGLPP outperforms KLPP, KPCA, and GLPP in fault detection. Simultaneously, it also exhibits excellent performance in diagnosing minor faults in industrial boilers.

Author Contributions

W.W.: supervision and project administration. Q.Z.: methodology and conceptualization. Y.H.: writing—original draft and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (52071047, 62073054).

Data Availability Statement

The data required to reproduce these findings are available upon request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Q.; Zhou, J.; Lang, Z.Q. Perspectives on Data-driven Operation Monitoring and Self-optimization of Industrial Processes. Acta Autom. Sin. 2018, 44, 26–38. [Google Scholar]
  2. Wang, J. Data-Driven Complex Industrial Processes Monitoring and Fault Diagnosis. Master’s Thesis, Hunan Normal University, Changsha, China, 2020. [Google Scholar]
  3. Chen, Z.X.; Fang, H.J.; Chang, Y. Weighted data-driven fault detection and isolation: A subspace-based approach and algorithms. IEEE Trans. Ind. Electron. 2016, 63, 3290–3298. [Google Scholar]
  4. Monteiro, R.P.; Bastos-Filho, C.J.A.; Cerrada, M.; Cabrera, D.; Sanchez, R.V. Using the Kullback–Leibler divergence and Kolmogorov–Smirnov test to select input sizes to the fault diagnosis problem based on a CNN model. Learn. Nonlinear Models 2021, 18, 16–26. [Google Scholar] [CrossRef]
  5. Xu, Y.L.; Zhang, J.; Li, N.; Gan, Z. Online operational state monitoring of boiler based on data mining. J. Eng. Therm. Energy Power 2019, 34, 82–87. [Google Scholar]
  6. Zhang, S.; Zhao, C. Stationarity test and Bayesian monitoring strategy for fault detection in nonlinear multimode processes. Chemom. Intell. Lab. Syst. 2017, 168, 45–61. [Google Scholar]
  7. Wang, W.; Tian, Z.; Wang, S.; Yin, Y. Optimal operation method based on cross and piecewise PCA for industrial boilers. Inf. Control 2020, 49, 507–512. [Google Scholar]
  8. Gao, H.Y.; Gan, H.B.; Zheng, Z.; Yang, Y. Application of particle swarm optimization neural network in fault diagnosis of marine auxiliary boiler. Comput. Appl. Softw. 2020, 37, 137–141+148. [Google Scholar]
  9. Hong, C.S.; Huang, J.; Guan, Y.Y.; Ma, X.Q. Combustion control of power station boiler by coupling BP/RBF neural network and fuzzy rules. J. Eng. Therm. Energy Power 2021, 36, 142–148. [Google Scholar]
  10. Niu, P.K.; Hong, H.; Wang, W.Z. Optimization of boiler combustion efficiency based on improved genetic algorithm. J. Eng. Therm. Energy Power 2020, 35, 111–115. [Google Scholar]
  11. Schölkopf, B.; Smola, A.; Müller, K. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
  12. Kong, X.; Luo, J.; Feng, X.; Liu, M. A General Quality-Related Nonlinear Process Monitoring Approach Based on Input-Output Kernel PLS. IEEE Trans. Instrum. Meas. 2023, 72, 3505712. [Google Scholar] [CrossRef]
  13. Jiao, J.; Zhen, W.; Wang, G.; Wang, Y. KPLS–KSER based approach for quality-related monitoring of nonlinear process. ISA Trans. 2021, 108, 144–153. [Google Scholar] [CrossRef]
  14. Guo, J.Y.; Guo, J.Y. Fault detection based on variable moving window KLPP. J. Dalian Polytech. Univ. 2023, 42, 463–468. [Google Scholar]
  15. Li, R.; Li, Y. Fault diagnosis of Industrial process based on kernel entropy Component Analysis. J. Shenyang Univ. (Nat. Sci.) 2023, 35, 397–405. [Google Scholar]
  16. Zhang, X.; Kano, M.; Li, Y. Principal Polynomial Analysis for Fault Detection and Diagnosis of Industrial Processes. IEEE Access 2018, 6, 52298–52307. [Google Scholar] [CrossRef]
  17. Zhang, M.; Ge, Z.; Song, Z.; Fu, R. Global–local structure analysis model and its application for fault detection and identification. Ind. Eng. Chem. Res. 2011, 50, 6837–6848. [Google Scholar] [CrossRef]
  18. Yu, J. Local and global principal component analysis for process monitoring. J. Process Control 2012, 22, 1358–1373. [Google Scholar] [CrossRef]
  19. Luo, L. Process monitoring with global-local preserving projections. Ind. Eng. Chem. Res. 2014, 53, 7696–7705. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the fault detection process.
Figure 2. Flowchart of the TE process.
Figure 3. Two principal component scores for fault 5 and normal data: (a) KLPP; (b) KPCA; (c) KGLPP.
Figure 4. Fault 5 detection using four different methods: (a) KLPP; (b) KPCA; (c) GLPP; (d) KGLPP.
Figure 5. Fault 2 detection using four different methods: (a) KLPP; (b) KPCA; (c) GLPP; (d) KGLPP.
Figure 6. Fault 13 detection using four different methods: (a) KLPP; (b) KPCA; (c) GLPP; (d) KGLPP.
Figure 7. Industrial boiler flowchart.
Figure 8. Diagrams of blower faults detected using four different methods: (a) KLPP; (b) KPCA; (c) GLPP; (d) KGLPP.
Figure 9. Diagrams of grate faults detected using four different methods: (a) KLPP; (b) KPCA; (c) GLPP; (d) KGLPP.
Table 1. Fault detection rates on the test sets (%).
No. | Fault No. | KLPP T2 | KLPP SPE | KPCA T2 | KPCA SPE | GLPP T2 | GLPP SPE | KGLPP T2 | KGLPP SPE
1   | 2         | 98.3    | 98.3     | 98.1    | 98.8     | 98.9    | 97.5     | 98.3     | 99.1
2   | 3         | 0       | 0        | 0.6     | 11.4     | 10.4    | 0        | 0        | 26
3   | 5         | 23.1    | 25.5     | 22      | 33.4     | 21.1    | 100      | 24.6     | 100
4   | 6         | 98.5    | 100      | 98.8    | 100      | 100     | 100      | 100      | 100
5   | 7         | 100     | 100      | 33.6    | 100      | 100     | 32.37    | 100      | 100
6   | 13        | 93.4    | 95.6     | 92.5    | 95       | 95.6    | 89.1     | 93.1     | 96.1
Table 2. Boiler process parameters.
No. | Variable                                   | Unit | No. | Variable                        | Unit
1   | Flue gas temperature at the furnace outlet | °C   | 7   | Outlet flow rate                | t·h−1
2   | Temperature at the furnace outlet          | °C   | 8   | Induced draft fan speed         | r·s−1
3   | Flue gas temperature at the economizer     | °C   | 9   | Blower speed                    | r·s−1
4   | Outlet water temperature                   | °C   | 10  | Grate speed                     | r·s−1
5   | Inlet water temperature                    | °C   | 11  | Coal feeder speed               | r·s−1
6   | Furnace pressure                           | MPa  | 12  | Oxygen content in the flue gas  | %

Share and Cite

MDPI and ACS Style

Wang, W.; Zhang, Q.; Hao, Y. Fault Detection Based on Kernel Global Local Preserving Projection. Information 2025, 16, 256. https://doi.org/10.3390/info16040256
