*Article* **SACGNet: A Remaining Useful Life Prediction of Bearing with Self-Attention Augmented Convolution GRU Network**

**Juan Xu <sup>1</sup>, Shiyu Duan <sup>1</sup>, Weiwei Chen <sup>2</sup>, Dongfeng Wang <sup>3</sup> and Yuqi Fan <sup>4,</sup>\***


**Abstract:** In recent years, the development of deep learning-based remaining useful life (RUL) prediction methods of bearings has flourished because of their high accuracy, easy implementation, and lack of reliance on a priori knowledge. However, there are two challenging issues concerning the prediction accuracy of existing methods. The run-to-failure sequential data and its RUL labels are almost inaccessible in real-world scenarios. Meanwhile, the existing models usually capture the general degradation trend of bearings while ignoring the local information, which restricts the model performance. To tackle the aforementioned problems, we propose a novel health indicator derived from the original vibration signals by combining principal components analysis with Euclidean distance metric, which was motivated by the desire to resolve the dependency on RUL labels. Then, we design a novel self-attention augmented convolution GRU network (SACGNet) to predict the RUL. Combining a self-attention mechanism with a convolution framework can both adaptively assign greater weights to more important information and focus on local information. Furthermore, Gated Recurrent Units are used to parse the long-term dependencies in weighted features such that SACGNet can utilize the important weighted features and focus on local features to improve the prognostic accuracy. The experimental results on the PHM 2012 Challenge dataset and the XJTU-SY bearing dataset have demonstrated that our proposed method is superior to the state of the art.

**Keywords:** self-attention; gated neural network; remaining useful life prediction; health indicator

#### **1. Introduction**

Bearings are one of the key components in a rotating machinery system. The remaining useful life (RUL) of a bearing is often defined as the length of time from the current moment to failure [1]. If the damage time or the trend of the vibration signal can be predicted from the collected vibration signal of a bearing, it is beneficial for identifying adverse running conditions in time to avoid sudden bearing failures. Thus, the RUL of a bearing is essential for the maintenance and management of mechanical systems [1,2].

In general, the RUL prediction of bearings can be sorted into two different directions: physics-based methods and data-driven methods. Physics-based methods focus on physical and mathematical models, e.g., partial differential equations and state-space models, which require extensive prior knowledge [3–6].

Data-driven RUL methods directly use historical data to model the degradation process of bearings without any prior knowledge.

Deep learning is a popular approach among data-driven methods, which can directly build a deep neural network to model the degradation process as a functional relationship between health states and original sensory data [7].

**Citation:** Xu, J.; Duan, S.; Chen, W.; Wang, D.; Fan, Y. SACGNet: A Remaining Useful Life Prediction of Bearing with Self-Attention Augmented Convolution GRU Network. *Lubricants* **2022**, *10*, 21. https://doi.org/10.3390/ lubricants10020021

Received: 11 January 2022 Accepted: 1 February 2022 Published: 3 February 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Deep learning-based RUL approaches typically include the steps of data acquisition, health indicator (HI) construction, and remaining useful life prediction [7].

Data acquisition collects the run-to-failure signals from different sensors that can reflect the degradation process of bearings. The complete lifecycle data of a bearing are usually high-dimensional and nonlinear. Therefore, a suitable processing method to retain the degradation features (i.e., HI) is necessary.

Health indicators, which serve as the RUL labels, are constructed by selecting appropriate characteristics from the original sensory signal. Since the damage extent of bearings cannot be directly observed, the RUL labels are almost inaccessible in real-world scenarios. Thereby, extracting the critical information from the original data as the HI to train the prediction model is a crucial issue [8].

Afterwards, deep neural networks are designed to extract deep features from the original sensory data and then predict the RUL. The prevalent models include RNNs, LSTM, CNNs, etc. RNNs are often used for predicting time-series data. The prevalent RNNs are mainly LSTM, GRU, and their variants, which can learn the general degradation trend of the input data. However, they often overlook the local features in the input data. With respect to the sequential vibration signal of a bearing, the vibration data show merely small fluctuations over most of the long time series; only at the end of the bearing's life does the vibration fluctuate dramatically, which is difficult to predict. Hence, the existing models cannot obtain satisfactory prediction results [9].

To tackle the aforementioned issues, we propose a novel health indicator-based remaining useful life prediction approach of bearings. The main contributions of this paper are summarized as follows:


The remaining part of this paper is organized as follows: In Section 2, we introduce related works in the field of RUL prediction. We describe our proposed method in detail in Section 3. The experimental results are discussed in Section 4. Finally, we conclude the paper in Section 5.

#### **2. Related Works**

#### *2.1. Health Indicator Construction*

In deep learning-based RUL methods, HI construction currently has two branches. One branch extracts simple physical fault characterizations from the original vibration signals as the HI, using statistical or signal processing methods, for instance, the root mean square (RMS) of the original vibration signal [10] or the percentage of useful life (the current life divided by the total useful life) [11–14]. However, such HIs cannot represent enough useful degradation information from the original data. Thereby, using such HIs as model input makes the model fail to accurately capture the degradation trend for RUL prediction.

The other branch constructs a virtual HI by fusing multiple physical characteristics or multi-sensor signals. These HIs can filter out abnormal trends in the early degradation stages, which is more suitable for model learning [15–18]. Guo et al. selected six related-similarity features and combined eight time-frequency features so as to form an original feature set that contains rich degradation signatures of bearings; the selected features were then fused into an HI through an RNN [19]. Li et al. used KPCA to integrate multiple features and introduced the EWMA to reduce the fluctuations of the constructed HI [20]. Li et al. designed a generative adversarial network to learn the data distribution in the health states of a machine, using the output of the discriminator as the HI [21]. Liang et al. proposed a novel index by calculating the offset distance and offset angle between the current state and the normal state of devices [22].

In summary, the existing HIs can only represent the global degradation process of the vibration signal and fail to retain local features. In order to extract more representative features from the vibration signal and facilitate model learning, a more effective HI construction method is proposed in this paper.

#### *2.2. Prediction Model*

With respect to regression model design, LSTM, GRU, CNN, and the attention mechanism have been successively introduced into the field of RUL prediction.

LSTM uses input gates, forgetting gates, and output gates to regulate the information of the input sequence, which enables the network to learn the long-term dependence of the data and gain favorable results. Hinchi et al. used convolutional layers to directly extract local features from sensor data, combined them with LSTM layers to capture the degradation process of the bearing, and finally output the prediction values [23]. Whereas LSTM solves the gradient-disappearance problem of traditional RNNs to some extent, the deliberate design of LSTM for RUL prediction is very time-consuming [24,25].

GRU is a variant of LSTM whose structure is further simplified, showing better performance than LSTM on smaller datasets [26,27]. Cao et al. used the BiGRU model to solve the problem of distribution discrepancy [28]. However, both LSTM and GRU only use the features learned in the previous time step for regression prediction and often do not pay attention to local features in long time series [29].

CNN can extract features with less computational effort because of the sparsity of parameter sharing of the convolutional kernel and inter-layer connectivity. More importantly, CNN focuses on the local features in the original vibration signal, which is suitable for RUL prediction [30–33]. Wang et al. proposed a multi-scale convolutional network to improve the domain adaptation capability of the RUL prediction model [34].

It is noted that the original vibration signal often contains features with different levels of importance. The features that contain more important information should be paid more attention. Hence, an attention mechanism has been introduced into bearing RUL prediction to adaptively weight input features [35]. The self-attention mechanism aims to correlate different states of sequences, which reduces the dependence on external information and is more suitable for capturing the internal relevance of data or features [36,37]. Chen et al. constructed an encoder–decoder model based on the attention mechanism to mine useful degradation information from a long historical vibration signal [38]. Chen et al. proposed an attention-based deep learning framework for RUL prediction, which adopted LSTM to extract features and then used an attention layer to fuse the LSTM-extracted and manually extracted features [39].

#### **3. Proposed Method**

Without loss of generality, given a bearing with vibration signals *V* = {*v*<sub>1</sub>, *v*<sub>2</sub>, . . . , *v<sub>m</sub>*}, we input *V* to the health indicator construction module and obtain *HI* = {*h*<sub>1</sub>, *h*<sub>2</sub>, . . . , *h<sub>m</sub>*}. We expect the model to predict the value *h*<sub>*t*+1</sub> after inputting *h*<sub>1</sub>, *h*<sub>2</sub>, . . . , *h<sub>t</sub>*.

Then, our proposed SACGNet learns the deep features in HI:

$$F(h\_1, h\_2, \dots, h\_t, \theta) : h \to h\_{t+1} \tag{1}$$

where *θ* is the parameter on the model.

There is also a testing dataset of vibration signals *V*′ = {*v*′<sub>1</sub>, *v*′<sub>2</sub>, . . . , *v*′<sub>*n*</sub>}, which after HI construction yields *H*′ = {*h*′<sub>1</sub>, *h*′<sub>2</sub>, . . . , *h*′<sub>*n*</sub>}.

Finally, inputting *H*′ to the trained SACGNet, the model predicts the value *ŷ*′<sub>*t*</sub> = *F*(*h*′<sub>1</sub>, *h*′<sub>2</sub>, . . . , *h*′<sub>*t*</sub>), where *ŷ*′<sub>*t*</sub> is the model's prediction value.

The whole structure of SACGNet is shown in Figure 1, including the health indicator construction module and the remaining useful life prediction module. First, the original signal of the bearings is input to the health indicator construction module to obtain the HI. After data normalization and sliding-window processing, it is input to SACGNet for training. In the testing stage, the predicted values are output by autoregression.
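The autoregressive testing stage can be sketched as follows (a minimal illustration; `predict_next` is a hypothetical stand-in for the trained SACGNet's one-step prediction):

```python
def autoregress(predict_next, seed_window, n_steps):
    """Roll a one-step predictor forward: each prediction is fed back
    as the newest element of the input window."""
    window = list(seed_window)
    preds = []
    for _ in range(n_steps):
        y = predict_next(window[-len(seed_window):])
        preds.append(y)
        window.append(y)
    return preds

# Toy stand-in predictor that simply continues a linear trend.
linear_next = lambda w: 2 * w[-1] - w[-2]
print(autoregress(linear_next, [0.0, 1.0], 3))  # [2.0, 3.0, 4.0]
```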

**Figure 1.** Proposed complete model.

*3.1. Health Indicator Construction Module*

If the dimension of the original signal is *d*, the matrix form of the original vibration signal is *V* = {*v*<sub>1</sub>, *v*<sub>2</sub>, . . . , *v<sub>m</sub>*}, where *v<sub>i</sub>* denotes one acquired vibration sample, and *V* can be written as:

$$V = \begin{pmatrix} v\_{11} & \dots & v\_{1d} \\ \vdots & \ddots & \vdots \\ v\_{m1} & \dots & v\_{md} \end{pmatrix} . \tag{2}$$

Principal components analysis (PCA) linearly transforms the data into a new coordinate system such that the first major variance of any data projection lies along the first coordinate (called the first principal component), the second major variance along the second coordinate, and so on.

Let *V* denote the de-averaged data. The singular value decomposition of *V* is:

$$V = W \Sigma H^T \tag{3}$$

where the matrix *W* is the eigenvector matrix of *VV<sup>T</sup>*, Σ is a non-negative rectangular diagonal matrix of singular values, and *H* is the eigenvector matrix of *V<sup>T</sup>V*.

Assuming zero empirical means, the first principal component *w*<sub>(1)</sub> of the dataset *V* can be defined as:

$$w\_{(1)} = \arg\max\_{\|w\|=1} \text{Var}\{w^T V\}. \tag{4}$$

To obtain the *k*-th principal component, the previous *k* − 1 principal components must first be subtracted from *V*:

$$\hat{V}\_{k-1} = V - \sum\_{i=1}^{k-1} w\_i w\_i^T V \tag{5}$$

Then, the *k*-th principal component is obtained to update a new dataset and continue to search for principal components.

$$w\_k = \arg\max\_{\|w\|=1} \mathbb{E}\{ (w^T \hat{V}\_{k-1})^2 \}\tag{6}$$

Through PCA, we reduce the original vibration signal's dimensionality from *d* to *k*. We retain the *k* principal components of the original signals; thus, the dimension of *V<sub>pca</sub>* is *k*, which can be abbreviated as {*v*<sub>*pca*1</sub>, *v*<sub>*pca*2</sub>, *v*<sub>*pca*3</sub>, . . . , *v*<sub>*pcam*</sub>}.

$$V\_{pca} = pca(V) = \begin{pmatrix} w\_{11} & \dots & w\_{1k} \\ \vdots & \ddots & \vdots \\ w\_{m1} & \dots & w\_{mk} \end{pmatrix} = \{v\_{pca1}, v\_{pca2}, v\_{pca3}, \dots, v\_{pcam}\} \tag{7}$$
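The PCA steps of Equations (3)–(7) can be sketched with NumPy's SVD (a minimal illustration under the zero-mean assumption, not the authors' exact preprocessing code):

```python
import numpy as np

def pca_svd(V, k):
    """Project the m x d signal matrix V onto its first k principal
    components via SVD of the de-meaned data (Equations (3)-(7))."""
    Vc = V - V.mean(axis=0)                          # enforce zero empirical mean
    W, S, Ht = np.linalg.svd(Vc, full_matrices=False)  # V = W @ diag(S) @ Ht
    return Vc @ Ht[:k].T                             # m x k principal-component scores

rng = np.random.default_rng(0)
V = rng.normal(size=(100, 8))      # toy signal: m = 100 samples, d = 8
V_pca = pca_svd(V, k=2)
print(V_pca.shape)  # (100, 2)
```

Because the singular values are sorted in decreasing order, the first returned column always carries the largest projected variance, matching Equation (4).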

Using standard PCA to reduce the dimensionality of the original vibration data, we can only retain the principal components of the data. In this paper, after using PCA to reduce the dimensionality of the vibration data, we additionally use the Euclidean distance between the low-dimensional points to construct the HI. The Euclidean distance measures the similarity between a point in the time series and its neighboring points, and thus better reflects the trend of neighboring data in the original vibration signal; this is the "capturing local features" property mentioned above.

By calculating the average of the Euclidean distances from each point in *V<sub>pca</sub>* to its sequential neighboring points, we obtain the HI corresponding to each point: HI = {*h*<sub>1</sub>, *h*<sub>2</sub>, . . . , *h<sub>m</sub>*}. The calculation of *h<sub>i</sub>* is as follows:

$$h\_i = \frac{1}{2}\left(\sqrt{\sum\_{j=1}^k (v\_{pcai\_j} - v\_{pca(i+1)\_j})^2} + \sqrt{\sum\_{j=1}^k (v\_{pcai\_j} - v\_{pca(i-1)\_j})^2}\right). \tag{8}$$

The HI is input to the constructed SACGNet so that the model learns the relationship within it. In order to make the HI meet the dimensionality requirements of the model input, a sliding window of size 20 is used to process the data into the shape required by the model. We then obtain *X* = {*x*<sub>1</sub>, *x*<sub>2</sub>, . . . , *x<sub>n</sub>*}.
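The distance-based HI of Equation (8) plus the sliding-window step can be sketched as follows (a minimal NumPy illustration; the handling of the two endpoint samples, which have only one sequential neighbor, is our assumption since the paper does not specify it):

```python
import numpy as np

def euclid_hi(V_pca):
    """Equation (8): the HI of each point is the mean Euclidean distance to
    its two sequential neighbours (endpoints use their single neighbour)."""
    d = np.linalg.norm(np.diff(V_pca, axis=0), axis=1)  # d[i] = dist(point i, point i+1)
    h = np.empty(len(V_pca))
    h[0], h[-1] = d[0], d[-1]
    h[1:-1] = 0.5 * (d[:-1] + d[1:])
    return h

def sliding_window(h, size=20):
    """Stack overlapping stride-1 windows of `size` consecutive HI values."""
    return np.stack([h[i:i + size] for i in range(len(h) - size)])

h = euclid_hi(np.arange(100, dtype=float).reshape(50, 2))  # toy 50 x 2 PCA output
X = sliding_window(h, size=20)
print(X.shape)  # (30, 20)
```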

#### *3.2. Remaining Useful Life Prediction Module*

In this section, we describe our SACGNet in detail, as shown in Table 1. We combine a 1D convolution (Conv1d) block with self-attention mechanisms to extract deep features from the input data. The Conv1d block focuses more on local features, and the self-attention mechanism extracts global features of the data. GRU can identify long-term features in the input data, which is beneficial for adapting to bearings under different operating conditions, thereby improving the prediction accuracy of our model [40,41].


**Table 1.** The architecture of SACGNet.

For convenience, we use C, P, FC, D, MHA, and GRU to denote the Conv1d layer, the pooling layer, the fully connected layer, the dropout layer, Multi-Head attention layer, and the GRU layer, respectively.

In the convolution layer, the calculation of the input data can be written as follows:

$$X\_c = \operatorname{ReLU}(X \odot f\_i + b\_i) \tag{9}$$

where ⊙ represents the convolution operation, *f<sub>i</sub>* represents the *i*-th convolution filter, and *b<sub>i</sub>* is the bias. The convolution layers use *ReLU* as the activation function. Compared with images, the vibration signal is time-series data; hence, a one-dimensional convolution (Conv1d) neural network is used to perform the convolutional operations. The filters of the Conv1d layer are set to 80, the kernel size to 4, and the stride to 1. In our paper, we select the average pooling layer.
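A minimal NumPy sketch of the Conv1d computation in Equation (9) (as in most deep learning frameworks, the operation implemented is cross-correlation; the filter values here are random placeholders, not trained weights):

```python
import numpy as np

def conv1d_relu(x, filters, biases, stride=1):
    """Equation (9): valid 1-D convolution of a length-n signal with each
    filter, followed by ReLU. `filters` has shape (n_filters, kernel_size)."""
    n_f, k = filters.shape
    steps = (len(x) - k) // stride + 1
    out = np.empty((n_f, steps))
    for j in range(steps):
        seg = x[j * stride : j * stride + k]
        out[:, j] = filters @ seg + biases   # one output column per window position
    return np.maximum(out, 0.0)              # ReLU activation

x = np.arange(20, dtype=float)                       # toy length-20 input window
f = np.random.default_rng(1).normal(size=(80, 4))    # 80 filters, kernel size 4
y = conv1d_relu(x, f, biases=np.zeros(80))
print(y.shape)  # (80, 17)
```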

The calculation of the self-attention mechanism can be summarized as two processes: calculating the weight coefficients from the Query and the Key, and computing a weighted summation of the Values based on those coefficients. The first process further includes computing the similarity or relevance between Query and Key and then normalizing it. The attention function can be described as mapping a Query and a set of Key–Value pairs to an output, where Queries, Keys, and Values are all vectors; the output is a weighted sum of the Values, and the weight assigned to each Value is computed by the compatibility function of the Query with the corresponding Key.

In order to learn the expression of multiple meanings, the input data will be transformed; *WQ*, *WK*, *W<sup>V</sup>* is the matrix of assigned weights. Self-attention represents a focus on itself, so the equation can be denoted as follows:

$$\begin{cases} Q = X\_c W\_Q = Linear(X\_c)\\ K = X\_c W\_K = Linear(X\_c)\\ V = X\_c W\_V = Linear(X\_c). \end{cases} \tag{10}$$

The output matrix of self-attention is expressed as:

$$Attention(Q, K, V) = softmax(\frac{QK^T}{\sqrt{d\_k}})V\tag{11}$$

where *d<sub>k</sub>* is the dimension of *K*; dividing by √*d<sub>k</sub>* keeps the dot products at a standard scale so that the softmax does not saturate.

Multi-head attention can make the model pay attention to information from different representation subspaces; the output of the self-attention mechanism layer is a three-dimensional tensor, written as *X<sub>a</sub>*:

$$X\_a = Multi\text{-}Head(Q, K, V) = Concat(head\_1, \dots, head\_h)W. \tag{12}$$

In this paper, we choose *h* = 8, *d<sub>k</sub>* = *d<sub>q</sub>* = *d<sub>v</sub>* = 80, and *W* ∈ ℝ<sup>*hd<sub>v</sub>*×*d*<sub>model</sub></sup>, which are the empirical values selected in the experiments.
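Equations (10)–(12) can be sketched as follows (a minimal NumPy illustration; for brevity the 80-dimensional projections are split evenly across the 8 heads rather than giving each head its own full projection, and the weight matrices are random placeholders):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Xc, Wq, Wk, Wv, Wo, h=8):
    """Equations (10)-(12): project Xc to Q, K, V, split into h heads,
    apply scaled dot-product attention per head, concatenate, project."""
    Q, K, V = Xc @ Wq, Xc @ Wk, Xc @ Wv              # Equation (10)
    dk = Q.shape[1] // h
    heads = []
    for i in range(h):
        q, k, v = (M[:, i * dk:(i + 1) * dk] for M in (Q, K, V))
        A = softmax(q @ k.T / np.sqrt(dk))           # Equation (11)
        heads.append(A @ v)
    return np.concatenate(heads, axis=1) @ Wo        # Equation (12)

rng = np.random.default_rng(2)
t, d = 20, 80                                        # window length, feature dimension
Ws = [rng.normal(size=(d, d)) * 0.05 for _ in range(4)]
out = multi_head_attention(rng.normal(size=(t, d)), *Ws)
print(out.shape)  # (20, 80)
```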

Gated Recurrent Units (GRUs) are a gating mechanism in recurrent neural networks. The calculation of the GRU can be written as follows:

$$\begin{cases} X\_a = \{a\_1, a\_2, \dots, a\_n\} \\ z\_t = \sigma\_g(W\_z a\_t + U\_z h\_{t-1} + b\_z) \\ r\_t = \sigma\_g(W\_r a\_t + U\_r h\_{t-1} + b\_r) \\ \hat{h}\_t = \phi\_h(W\_h a\_t + U\_h (r\_t \* h\_{t-1}) + b\_h) \\ h\_t = (1 - z\_t) \* h\_{t-1} + z\_t \* \hat{h}\_t. \end{cases} \tag{13}$$

Among them, *a<sub>t</sub>* is the input vector (the element of *X<sub>a</sub>* at time *t*), *h<sub>t</sub>* is the output vector, *ĥ<sub>t</sub>* is the candidate activation vector, *z<sub>t</sub>* is the update gate vector, *r<sub>t</sub>* is the reset gate vector, *W*, *U*, and *b* are the parameter matrices and vectors, *σ<sub>g</sub>* is the sigmoid function, and *φ<sub>h</sub>* is the hyperbolic tangent. The GRU layer receives the features extracted by the Conv1d layer and the Multi-Head attention layer and then outputs the prediction value. The units of the GRU are set to 80.
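A single GRU update from Equation (13) can be sketched as follows (a minimal NumPy illustration with a toy state size; the zero-weight example merely checks the gate algebra, and the parameter layout is our own choice):

```python
import numpy as np

def gru_step(a_t, h_prev, W, U, b):
    """One GRU update (Equation (13)). W, U, b each hold the parameters
    for the update gate ('z'), reset gate ('r'), and candidate state ('h')."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    z = sigmoid(W['z'] @ a_t + U['z'] @ h_prev + b['z'])      # update gate
    r = sigmoid(W['r'] @ a_t + U['r'] @ h_prev + b['r'])      # reset gate
    h_hat = np.tanh(W['h'] @ a_t + U['h'] @ (r * h_prev) + b['h'])
    return (1 - z) * h_prev + z * h_hat                        # new hidden state

n = 4  # toy state size (80 in the paper)
zeros_W = {g: np.zeros((n, n)) for g in 'zrh'}
zeros_U = {g: np.zeros((n, n)) for g in 'zrh'}
bias = {g: np.zeros(n) for g in 'zrh'}
h = gru_step(np.ones(n), np.zeros(n), zeros_W, zeros_U, bias)
print(h)  # all zeros: with zero weights, z = 0.5 but h_hat = 0 and h_prev = 0
```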

Finally, after the fully connected layer, the final output is obtained:

$$\hat{y}\_t = \text{FCN}(h\_t). \tag{14}$$

SACGNet is trained using the error back-propagation algorithm and gradient descent method. The loss function of the training process is the mean square error function:

$$\text{MSE} = \frac{1}{n} \sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2 \tag{15}$$

where *y<sup>i</sup>* is the true value, *y*ˆ*<sup>i</sup>* is the prediction value, and *n* is the total number of samples.

In addition, Adam is chosen as the optimizer of this paper, and the learning rate is set to 10<sup>−3</sup> [42].

Dropout is added to our SACGNet with the parameter set to 0.5 in order to reduce overfitting by preventing complex co-adaptations on the training data. The algorithm pseudocode is shown in Algorithm 1.

#### **Algorithm 1: Proposed SACGNet.**

**Input:** Hyper-parameters of the model (batch size, epoch, dropout rate, learning rate, etc.); original signal *V* = {*v*<sub>1</sub>, *v*<sub>2</sub>, *v*<sub>3</sub>, . . . , *v<sub>m</sub>*}

1. Construct the health indicator *HI* = {*h*<sub>1</sub>, *h*<sub>2</sub>, . . . , *h<sub>m</sub>*}.
2. Apply sliding-window processing: *HI* = {*h*<sub>1</sub>, *h*<sub>2</sub>, . . . , *h<sub>m</sub>*} → *X* = {*x*<sub>1</sub>, *x*<sub>2</sub>, . . . , *x<sub>n</sub>*}, where each *x* is one window of consecutive HI values (window size 20).
3. Build the labels *Y* = {*y*<sub>1</sub>, *y*<sub>2</sub>, . . . , *y<sub>n</sub>*} = {*h*<sub>*i*+1</sub>, *h*<sub>*i*+2</sub>, . . . , *h<sub>m</sub>*}.
4. For *i* = 1, 2, . . . , *n*: normalize *x<sub>i</sub>* ← (*x<sub>i</sub>* − Min(*X*)) / (Max(*X*) − Min(*X*)) and *y<sub>i</sub>* ← (*y<sub>i</sub>* − Min(*Y*)) / (Max(*Y*) − Min(*Y*)).
5. Build the SACGNet model; initialize the parameters *w* and biases *b* of SACGNet to zeros.
6. Input *X* and *Y* to train SACGNet; for each training step:
   - *X<sub>c</sub>* = *Conv*(*X*)
   - *Q* = *X<sub>c</sub>W<sub>Q</sub>* = *Linear*(*X<sub>c</sub>*), *K* = *X<sub>c</sub>W<sub>K</sub>* = *Linear*(*X<sub>c</sub>*), *V* = *X<sub>c</sub>W<sub>V</sub>* = *Linear*(*X<sub>c</sub>*)
   - *Attention*(*Q*, *K*, *V*) = *softmax*(*QK<sup>T</sup>*/√*d<sub>k</sub>*)*V*
   - *X<sub>a</sub>* = *Multi-Head*(*Q*, *K*, *V*)
   - *Output* = *GRU*(*X<sub>a</sub>*)
   - Compute MSE by (15); update *w* ← *Adam*(MSE, *w*) and *b* ← *Adam*(MSE, *b*)

**Output:** Trained SACGNet model for prediction

#### **4. Experiments and Results**

In this section, we use the IEEE PHM Challenge 2012 bearing dataset and the XJTU-SY Bearing dataset to validate the effectiveness of our method.

#### *4.1. Dataset Description*

The IEEE PHM 2012 Challenge dataset was collected from the PRONOSTIA testbed, as shown in Figure 2.

**Figure 2.** Pronostia bearing testbed.

The PRONOSTIA test platform contains a rotating part, load part, and data collection part. The motor power of the rotating part is 250 W. The power is transferred to the bearing by the axis of rotation. The load part provides a load of 4000 N to make the bearing degrade quickly. The acceleration sensor is placed on a bearing seat in horizontal and vertical directions to select the vibration signals. The sampling frequency of the acceleration sensor is 25.6 kHz. When the test platform starts to work, the vibration signal is recorded every 10 s, and the sampling time is 0.1 s [43].

The data provided by IEEE PHM Challenge 2012 include three different operating conditions. Seven bearings (bearings 1-1 to 1-7) work in the first condition, the motor speed is 1800 rpm, and the load is 4000 N. Seven bearings (bearings 2-1 to 2-7) work in the second condition, the motor speed is 1650 rpm, and the load is 4200 N. Three bearings (bearings 3-1 to 3-3) work in the third condition, the motor speed is 1500 rpm, and the load is 5000 N. Table 2 illustrates the details of the PHM 2012 dataset.

In this paper, the vibration data of bearings 1-1, 1-2, 2-1, 2-2, and 3-1 are selected as the training set, while the rest of the bearings are used as the testing set.


**Table 2.** The detail of the PHM dataset.

The XJTU-SY bearing dataset is provided by the Institute of Design Science and Fundamental Research of Xi'an Jiaotong University and contains the run-to-failure vibration data from 15 rolling bearings [44].

As shown in Figure 3, the bearing testbed is composed of an alternating current (AC) induction motor, a motor speed controller, a support shaft, two support bearings (heavy duty roller bearings), and a hydraulic loading system. This testbed is designed to conduct the accelerated degradation tests of the testing bearings under different operating conditions (i.e., different radial force and rotating speed). The radial force is generated by the hydraulic loading system and applied to the housing of tested bearings, and the rotating speed is set and kept by the speed controller of the AC induction motor [44].

**Figure 3.** XJTU-SY bearing testbed.

Three different operating conditions are set in the accelerated degradation experiments, and five bearings are used under each operating condition. The sampling frequency is 25.6 kHz, and the sampling period is 1 min. Table 3 illustrates the details of the XJTU-SY dataset.

**Table 3.** The details of the XJTU-SY dataset.


In this paper, the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are used to evaluate the prediction accuracy. They are respectively computed as follows:

$$\text{MSE} = \frac{1}{n} \sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2 \tag{16}$$

$$\text{RMSE} = \sqrt{\text{MSE}} = \sqrt{\frac{1}{n} \sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2} \tag{17}$$

$$\text{MAE} = \frac{1}{n} \sum\_{i=1}^{n} |y\_i - \hat{y}\_i| \tag{18}$$

$$\text{MAPE} = \frac{1}{n} \sum\_{i=1}^{n} \left|\frac{y\_i - \hat{y}\_i}{y\_i}\right|. \tag{19}$$

In Equations (16)–(19), *y<sub>i</sub>* is the label, *ŷ<sub>i</sub>* is the model's prediction value, and *n* is the total number of samples.
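The four metrics can be computed in a few lines (a minimal NumPy sketch; MAPE is returned as a fraction, so multiply by 100 for a percentage):

```python
import numpy as np

def metrics(y, y_hat):
    """Equations (16)-(19): MSE, RMSE, MAE, and MAPE for labels y
    and predictions y_hat."""
    err = y - y_hat
    mse = np.mean(err ** 2)
    return {
        'MSE': mse,
        'RMSE': np.sqrt(mse),
        'MAE': np.mean(np.abs(err)),
        'MAPE': np.mean(np.abs(err / y)),  # fraction; x100 for percent
    }

m = metrics(np.array([1.0, 2.0, 4.0]), np.array([1.0, 2.0, 2.0]))
print(m)  # MSE = 4/3, RMSE ~ 1.155, MAE = 2/3, MAPE = 1/6
```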

#### *4.2. Different HIs Results*

In this section, we compare different HI construction methods to validate the superiority of our proposed HI construction method.

Figure 4 shows the results of different HI construction methods on the bearing 1-3 of the PHM dataset. It can be clearly observed from Figure 4a that the original vibration signal of the bearing 1-3 is in a very smooth state with little fluctuation when the bearing has just begun to work. In the degradation state, the vibration signal usually fluctuates slightly, whereas the overall trend is upward. The signal fluctuation will increase sharply when the bearing finally completely degrades.

**Figure 4.** Different HI construction methods on the PHM dataset.

As shown in Figure 4b–h, the HIs constructed by the combination of TSNE and other distance metrics basically do not have any regular change trend. Meanwhile, the methods of SAE and Euclidean distance metric can retain the change trend of the original vibration signal, but the early degradation and complete degradation stage of the bearing cannot be completely distinguished. For the vibration signal of the bearing, PCA is a linear transformation method for each of its principal components. Specifically, the linearity of each point is calculated to obtain the principal components and then downscaled; thus, the global trend of the original signal can be retained. Meanwhile, SAE is a nonlinear learning model that requires a lot of training data to get a satisfactory performance. In contrast, our proposed method, as shown in Figure 4i, is more suitable to reflect the change trend of the original signal and can distinguish the early degradation and complete degradation stage of the bearing, which is beneficial for SACGNet to improve its prediction accuracy.

In order to further illustrate the superiority of our HI construction method, we use the percentage of useful life as the HI and then compare the RUL prediction results with those of our proposed method on the PHM dataset. The red lines are the true values of the HI, and the blue lines are the prediction values of the model. Figure 5a–c shows the RUL results of the comparison HI, while Figure 5d–f shows the RUL results of our method.

**Figure 5.** The remaining useful life prediction using the True RUL label and our method as the label.

It can be seen that when the bearing's true remaining useful life percentage is used as the HI label, the model fails to learn the degradation trend of the bearing vibration signal, and the final prediction results are poorly fitted. In contrast, when using our proposed HI for prediction, the degradation trend of the vibration data is depicted more accurately, and the prediction accuracy of the model is improved. Note that our model can properly predict the RUL in the stage of rapid degradation of bearing operation, which is of great value for actual industrial scenarios.

#### *4.3. Ablation Experiments*

In order to observe the effects of the different layers in the proposed model, we conduct ablation experiments on the PHM dataset and the XJTU-SY dataset. We keep the rest of the model unchanged and remove the Multi-Head Attention layer or the Conv1d layer to obtain two comparison models, which we call the NoAttention model and the NoConv1d model, respectively.

As seen in Table 4, with respect to the PHM dataset, our method achieved the best results in 10 of the 11 bearing data for MSE, RMSE, MAE, and MAPE metrics. Using the MSE metric, our model did not achieve the best result for bearing 2-5; the discrepancy with respect to the best results (i.e., NoAttention model) is 0.002. For the RMSE, MAE, and MAPE metrics, our model does not achieve the optimal results for bearing 2-7, the discrepancy with respect to the best results (i.e., the NoAttention model) is 0.148, 0.124, and 177.228, respectively.


**Table 4.** Ablation experiments in PHM dataset.

Furthermore, we also conducted the ablation experiment on the XJTU-SY dataset, and the results are shown in Table 5. Using the MAE and MAPE metrics, our model did not achieve the best results for bearings 1-5, 2-3, and 2-4, with only a discrepancy of 1.9% from the best results. Using the MSE and RMSE metrics, our model did not achieve the best results for bearings 2-3 and 2-4, but the discrepancy with the best results (i.e., the NoAttention model) was only 3.03%. Except for the aforementioned results, the performance of our model is superior to that of the comparison models.

**Table 5.** Ablation experiments in the XJTU-SY dataset.


The results of the ablation experiments conducted on both bearing datasets prove that our proposed model achieves the best results on the largest number of testing bearings. In a comprehensive analysis, the degradation features of bearings under different operating conditions are different; Conv1d can extract the local features of the original vibration signals, and the self-attention mechanism focuses on the global features, so integrating them achieves better results.

#### *4.4. Results of Different Models*

In this section, we compare the overall prediction accuracy of our proposed model with state-of-the-art methods on the PHM dataset and XJTU-SY dataset. The compared models include CNN, RNN, LSTM, and GRU.

As shown in Table 6, with respect to the PHM dataset, our model achieved the best results in nine out of 11 bearings for the MSE and RMSE metrics, and likewise in nine out of 11 bearings for the MAE and MAPE metrics. Using the MSE and RMSE metrics, our model did not achieve the best results for bearings 2-5 and 2-7. Using the MAE and MAPE metrics, our model did not achieve the best values for bearing 2-7.

**Table 6.** Different models' result in the PHM dataset.


To further illustrate the performance of our model, we present the prediction results of all the comparison models on the PHM dataset. Without loss of generality, we visualize the prediction results for bearings 1-4, 1-5, and 1-6 in order to compare them, as shown in Figure 6.

From Figure 6a,d,g,j,m, it can be seen that CNN has the worst fitting results on the testing data. The difference between the CNN predicted values and the original signal is very obvious, because the single CNN models are unsuitable for processing the time-series data. The prediction results of RNN and GRU are slightly superior to those of CNN, but there is still a visible difference from the original vibration signal. Furthermore, the RNN performance is significantly inferior to LSTM on long-term series.

The prediction results of SACGNet and LSTM are similar on bearing 1-4, and the fitting results of SACGNet are more favorable when judged by the four evaluation metrics in Table 6. From Figure 6b,c,e,f,h,i,k,l,n,o, it can be seen that on bearings 1-5 and 1-6, SACGNet yields superior prediction results, consistent with the comparison of the four evaluation metrics in Table 6.

(**a**) SACGNet prediction result of bearing 1-4 (**b**) SACGNet prediction result of bearing 1-5 (**c**) SACGNet prediction result of bearing 1-6

(**d**) CNN prediction result of bearing 1-4 (**e**) CNN prediction result of bearing 1-5 (**f**) CNN prediction result of bearing 1-6

(**g**) RNN prediction result of bearing 1-4 (**h**) RNN prediction result of bearing 1-5 (**i**) RNN prediction result of bearing 1-6

(**j**) LSTM prediction result of bearing 1-4 (**k**) LSTM prediction result of bearing 1-5 (**l**) LSTM prediction result of bearing 1-6

(**m**) GRU prediction result of bearing 1-4 (**n**) GRU prediction result of bearing 1-5 (**o**) GRU prediction result of bearing 1-6

**Figure 6.** The prediction results of different models on the PHM dataset.

Observing the original vibration signal of bearing 2-7, its early degradation stage shows very sharp fluctuations, and the amplitude difference between the early degradation stage and the complete degradation stage is very small; the vibration fluctuation in early degradation is even more severe than in the complete degradation stage. This may be why our model did not achieve optimal results on bearing 2-7.

As shown in Table 7, with respect to the XJTU-SY dataset, our model achieved the best results on most of the testing bearings, except for the MSE of bearings 2-3 and 3-5, the RMSE of bearings 2-3, 2-5, and 3-5, the MAE of bearings 1-5, 2-3, 2-5, and 3-5, and the MAPE of bearings 1-5, 2-3, and 2-5.


**Table 7.** Different models' results on the XJTU-SY dataset.

These results may arise because the experimental environments of bearings 2-3, 2-5, and 3-5 differ slightly from the conditions of the bearings chosen as the training set, and these bearings produce fluctuations during the degradation stage that are equal to or even higher than those at final complete damage. Specifically, the CNN model achieved the best results on bearing 2-3 for all four metrics, which may be because the degradation process of bearing 2-3 is filled with many small-scale local fluctuations, allowing the CNN to fit this process more directly. As for bearings 2-5 and 3-5, they do not show sharp fluctuations, and, given the small amount of data compared to the other bearings, RNN and LSTM are able to parse their long-term serial relationships on these two datasets.

In summary, the comparison experiments on the two datasets show that, in most cases, our model achieves the best prediction accuracy among the existing models, which demonstrates that our proposed model is applicable to RUL prediction.

#### **5. Conclusions**

In this paper, we explored a health indicator-based remaining useful life prediction method. First, we combined principal component analysis (PCA) with Euclidean distance measurement to construct a health indicator, which removes the dependency on RUL labels. Then, we designed a self-attention augmented convolution GRU network (SACGNet) to predict the RUL, which utilizes global features as well as locally important features to improve the prognostic accuracy. To verify the effectiveness of the model, we conducted extensive experiments on two bearing datasets, and the results demonstrate that SACGNet is superior to the existing models under several evaluation criteria. Meanwhile, the model has excellent generalization performance across multiple bearings.
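The PCA-plus-Euclidean-distance health indicator summarized above can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the paper's exact procedure: the per-window feature extraction is assumed to have already happened, and the number of healthy reference windows (`n_healthy`) and retained components (`n_components`) are hypothetical parameters.

```python
import numpy as np

def health_indicator(features, n_healthy=10, n_components=2):
    """Project per-window features with PCA, then measure the Euclidean
    distance from the centroid of the early (assumed healthy) windows.

    features : (N, d) matrix, one feature vector per vibration window
    """
    x = features - features.mean(axis=0)
    # PCA via SVD: rows of vt are the principal directions
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    scores = x @ vt[:n_components].T                # (N, n_components)
    healthy_centroid = scores[:n_healthy].mean(axis=0)
    hi = np.linalg.norm(scores - healthy_centroid, axis=1)
    return hi / hi.max()                            # normalize to [0, 1]

rng = np.random.default_rng(1)
# synthetic drifting features: a common degradation trend plus noise
feats = rng.standard_normal((100, 6)) + np.linspace(0, 5, 100)[:, None]
hi = health_indicator(feats)                        # grows as windows degrade
```

Because the distance is measured from the healthy-state centroid in the reduced PCA space, the indicator is built from the raw vibration features alone and requires no run-to-failure RUL labels.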

**Author Contributions:** J.X. contributed to the conception of the study; S.D. performed the experiment and wrote the manuscript; W.C. contributed significantly to analysis; D.W. performed the data analyses; Y.F. helped perform the analysis with constructive discussions. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported in part by the National Key Research and Development Plan under Grant 2018YFB2000505, in part by the Key Research and Development Plan of Anhui Province under Grant 202104a04020003, and in part by the Fundamental Research Funds for the Central Universities under Grant PA2021KCPY0045.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Public datasets used in our paper: https://github.com/wkzs111/phmieee-2012-data-challenge-dataset (accessed on 10 December 2021), https://biaowang.tech/xjtu-sybearing-datasets/ (accessed on 10 December 2021).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

