Article

Naive Bayes Bearing Fault Diagnosis Based on Enhanced Independence of Data

Nannan Zhang, Lifeng Wu, Jing Yang and Yong Guan

1 College of Information Engineering, Capital Normal University, Beijing 100048, China
2 Beijing Key Laboratory of Electronic System Reliability Technology, Capital Normal University, Beijing 100048, China
3 Beijing Key Laboratory of Light Industrial Robot and Safety Verification, Capital Normal University, Beijing 100048, China
4 Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing 100048, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 463; https://doi.org/10.3390/s18020463
Submission received: 21 December 2017 / Revised: 25 January 2018 / Accepted: 1 February 2018 / Published: 5 February 2018
(This article belongs to the Special Issue Sensors for Fault Detection)

Abstract

The bearing is the key component of rotating machinery, and its performance directly determines the reliability and safety of the system. Data-based bearing fault diagnosis has become a research hotspot. Naive Bayes (NB), which rests on an independence presumption, is widely used in fault diagnosis. However, bearing data are not completely independent, which reduces the performance of NB algorithms. In order to solve this problem, we propose an NB bearing fault diagnosis method based on enhanced independence of data. The method processes the data vectors from two aspects: the attribute features and the sample dimension. After this processing, the limitation that the independence hypothesis places on NB classification is reduced. First, we effectively extract the statistical characteristics of the original bearing signals. Then, the Decision Tree algorithm is used to select the important features of the time-domain signal, so that features with low correlation are retained. Next, the Selective Support Vector Machine (SSVM) is used to prune the data in the sample dimension and remove redundant vectors. Finally, we use NB to diagnose the faults with the low-correlation data. The experimental results show that enhancing the independence of the data is effective for bearing fault diagnosis.

1. Introduction

The rolling bearing is the main component of rotating machinery. It carries the entire rotating machine, and even a small fault may have a significant impact on the operation of the whole device. Most problems with rotating machines are caused by bearing failure [1]. Therefore, bearing fault diagnosis is of great significance. Once a fault in rotating machinery is diagnosed, the machine can be repaired and handled in time, avoiding the catastrophic effects of mechanical failure [2]. The related concepts and techniques of fault diagnosis are introduced in the literature [3,4,5,6]. Maintaining and treating the machine before it fails reduces the probability of failure and the maintenance costs, and avoids casualties caused by equipment failure.
Vibration analysis is the main tool for the diagnosis of rotating machinery [7], and vibration signal analysis has been widely used in the field of fault diagnosis, where vibration spectrum analysis techniques have successfully identified faults [8,9,10,11]. The analysis of vibration signals reflects the state of rotating machinery. Sensors can be used to collect the vibration signals of operating machinery, which contain rich information about its working state [12]. The mechanical health state is determined by analyzing the collected vibration signals. However, the collected vibration signals are chaotic and irregular. Therefore, it is necessary to extract the most representative, reliable and effective features from the acquired vibration signals.
Statistical time-domain signal features can be used to detect faults, mainly by extracting features from the data. However, such features can only indicate whether a rotating machine is in a normal state; they give diagnostic cues but cannot by themselves diagnose the fault, so further fault diagnosis is needed. Nowadays, with the successful application of machine learning methods in various fields, more and more of them are used in mechanical fault diagnosis. Neural networks, as a typical machine learning method, have been applied to the field of fault diagnosis. As one of the most popular classifiers, the Support Vector Machine (SVM) has also achieved some success in fault diagnosis. SVM is a powerful tool for classification, and it plays a significant role in machine fault diagnosis [13]. Samanta [14] proposed using time-domain characteristics of the rotating machine as the input of an artificial neural network (ANN) and verified their effectiveness in bearing fault diagnosis. Jack et al. [15] put forward SVM for bearing fault diagnosis. Yang et al. [16] decomposed vibration signals into stationary intrinsic mode functions (IMFs) and fed the energy features extracted from the IMFs to an ANN to identify rolling bearing failure. Al-Raheem et al. [17] proposed a new technique that uses a genetic algorithm to optimize the Laplace wavelet shape and the ANN classifier parameters for bearing faults. In addition, Shuang et al. [18] proposed a fault pattern recognition method based on principal component analysis (PCA) and SVM. However, the extracted multi-dimensional feature vector contains a large amount of information with high redundancy, which results in higher computational costs. Therefore, the high-dimensional features need to be processed. Wu et al. [19] used a manifold learning algorithm to reduce the dimension of the high-dimensional features and then used the processed features as the input of a wavelet neural network for bearing fault diagnosis. Sugumaran et al. [20] applied the Decision Tree to feature selection, and then carried out bearing fault diagnosis with a kernel-based neighborhood score multi-class support vector machine (MSVM). In another article [21], time-domain statistical features and histogram features were first extracted from the time-domain signals, the main features were then selected by a Decision Tree, and finally SVM and Proximal Support Vector Machines were used for fault diagnosis of roller bearings. In a recent study, Zhang et al. [22] proposed a neural network-based method that classifies the raw time-series sensor data directly, without feature selection or signal processing, for bearing fault diagnosis. In a related article [23], the network combines linear and nonlinear methods and likewise applies a deep network classifier to the raw time-series sensor data to diagnose faults.
ANN and SVM have extensive applications in fault diagnosis. However, they have some limitations. For example, overfitting and local extrema can lead to slow operation and inaccurate ANN training results, respectively [24]. Moreover, SVM has problems with testing and training speed, and there are limitations concerning multi-class, nonlinear and parameter-selection issues. The training of ANN and SVM is complex, and the training memory cost is high. NB not only needs a small amount of training data, but also has a simple structure, fast computation speed and high accuracy [25,26]. Owing to its reliable theoretical basis, use of comprehensive prior knowledge and assumption of conditional independence among attributes, NB has been successfully applied to machine fault diagnosis. Hemantha et al. used the Bayes classifier to diagnose bearing faults and verified that NB performs well in fault diagnosis [27]. Girish et al. successfully applied the NB classifier to the fault diagnosis of welded joints [28]. However, the independence assumption is difficult to satisfy for bearing fault vibration signals in practice, which limits the algorithm. Therefore, this paper carries out vector pruning from two aspects: the characteristic attributes and the data dimension. First, Decision Trees are used to select the main feature attributes [29]. Then, the redundancy of the dimension vectors is removed by the proposed selective support vector machine (SSVM). In this way, the redundant data is processed from two aspects, and the limitation of the independence hypothesis on NB is reduced. Finally, the fault diagnosis model is established.
In this paper, an NB method based on improved data independence is proposed for fault diagnosis. The remainder of the paper is organized as follows: Section 2 briefly introduces the NB model. The fault diagnosis method based on improved data independence is presented in Section 3. In Section 4, the method is applied to roller bearing diagnosis. Section 5 draws the conclusions of this paper.

2. NB Model

NB is a supervised learning classification method based on probability. NB has received much attention due to its simple classification model and excellent classification performance. The training model is shown in Figure 1.
(a)   Preparatory stage
Suppose there are m categories $L = \{L_1, L_2, \ldots, L_m\}$. Each sample has n attributes $At = \{At_1, At_2, \ldots, At_n\}$, and each attribute set has a d-dimensional feature vector $X = \{X_1, X_2, \ldots, X_d\}$.
(b)   Training stage
$P(L_i)$ is the prior probability of each category, determined only by the ratio of each category to the total number of samples, that is,

$$P(L_i) = \frac{n_i}{n}, \quad 1 \le i \le m, \qquad (1)$$

where n is the number of known samples, and $n_i$ is the number of samples in the i-th category.
Bayes is a classifier based on the maximum posterior probability. Given unknown samples $Y = \{y_1, y_2, \ldots, y_z\}$, the idea is to calculate the probability of each unknown sample belonging to each category. Finally, if the probability of the unknown sample $Y$ is maximal in class $L_i$, the unknown sample is classified into category $L_i$. NB is based on the Bayes theorem, and the NB classification rule is shown below:

$$P(L_i \mid y_h) > P(L_j \mid y_h), \quad 1 \le i \le m, \ 1 \le j \le m, \ 1 \le h \le z, \ i \ne j. \qquad (2)$$
According to Bayes' theorem, the probability $P(L_i \mid y_h)$ can be obtained. NB applies the Bayes theorem under the assumption of conditional independence of the features, so $P(L_i \mid y_h)$ is defined as follows:

$$P(L_i \mid y_h) = \frac{P(y_h \mid L_i)\, P(L_i)}{P(y_h)}, \qquad (3)$$

$$P(y_h) = \sum_{i=1}^{m} P(y_h \mid L_i)\, P(L_i), \qquad (4)$$

where $P(y_h)$ is a constant, so it is only necessary to compute the term $P(y_h \mid L_i) P(L_i)$ of Equation (3).
According to the NB classification rule, the value of the discriminant function $P(y_h \mid L_i) P(L_i)$ is calculated for the unknown sample in each class, where $P(L_i)$ is the prior probability of each category, as shown in Equation (1), and $P(y_h \mid L_i)$ is the probability of $y_h$ under the condition of $L_i$. The attributes are continuous properties and independent of each other. In general, the attribute variable obeys the Gaussian distribution $At_{gi} \sim N(u_{gi}, \delta_{gi}^2)$ [30]; then $P(y_h \mid L_i)$ is defined as follows:

$$P(y_h \mid L_i) = \frac{1}{\sqrt{2\pi}\,\delta_{gi}} \exp\!\left(-\frac{(y_h - u_{gi})^2}{2\delta_{gi}^2}\right), \qquad (5)$$
where $u_{gi}$ and $\delta_{gi}^2$ are the sample mean and variance, respectively:

$$u_{gi} = \frac{1}{n_i}\sum_{i=1}^{n_i} X_{ig}, \qquad (6)$$

$$\delta_{gi}^2 = \frac{1}{n_i - 1}\sum_{i=1}^{n_i} \left(X_{ig} - u_{gi}\right)^2. \qquad (7)$$
From the above Equations (2) and (5)–(7), the posterior probability can be written as:

$$P(L_i \mid y_h) = P(L_i) \prod_{g=1}^{n} \frac{1}{\sqrt{2\pi}\,\delta_{gi}} \exp\!\left(-\frac{(y_h - u_{gi})^2}{2\delta_{gi}^2}\right). \qquad (8)$$

In the same way:

$$P(L_j \mid y_h) = P(L_j) \prod_{g=1}^{n} \frac{1}{\sqrt{2\pi}\,\delta_{gj}} \exp\!\left(-\frac{(y_h - u_{gj})^2}{2\delta_{gj}^2}\right). \qquad (9)$$
(c)   Application stage  
According to Equation (2), if $P(y_h \mid L_i) P(L_i) > P(y_h \mid L_j) P(L_j)$, the unknown sample is judged as class i; otherwise, it is judged as class j.
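To make these stages concrete, the following Python sketch (our illustration, not the authors' implementation; all function and variable names are hypothetical) estimates the per-class priors, means and variances of Equations (1), (6) and (7), and classifies by the logarithm of the posterior in Equation (8):

```python
import numpy as np

def train_gaussian_nb(X, y):
    """Estimate priors (Equation (1)), means (6) and variances (7) per class."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])                   # P(L_i) = n_i / n
    means = np.array([X[y == c].mean(axis=0) for c in classes])             # u_gi
    variances = np.array([X[y == c].var(axis=0, ddof=1) for c in classes])  # delta_gi^2
    return classes, priors, means, variances

def predict_gaussian_nb(X, classes, priors, means, variances):
    """Assign each sample to the class maximizing log P(L_i | y_h), Equation (8)."""
    log_post = np.log(priors) - 0.5 * (
        np.log(2 * np.pi * variances).sum(axis=1)                   # sum_g log(2*pi*delta_gi^2)
        + (((X[:, None, :] - means) ** 2) / variances).sum(axis=2)  # sum_g (y - u_gi)^2 / delta_gi^2
    )
    return classes[np.argmax(log_post, axis=1)]
```

Working in log space avoids the numerical underflow that multiplying the n small Gaussian densities of Equation (8) would otherwise cause.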

3. NB Fault Diagnosis Model Based on Enhanced Independence of Data

3.1. Fault Diagnosis Model

In order to improve the classification performance of NB, this paper enhances the independence of the data from the two aspects of attribute features and data dimension. The proposed fault diagnosis model is shown in Figure 2. The fault diagnosis model includes three parts: signal acquisition, signal processing and fault diagnosis.
• Signal acquisition: Acceleration sensors are used to obtain the vibration signals of the rolling bearings.
• Signal processing: The original vibration signal of the rolling bearing obtained from the sensor contains a large amount of noise, so it must be processed to obtain valid data signals. Firstly, feature extraction is performed on the acquired original signal using time-domain statistics. Then, the Decision Tree is used to select the main feature attributes, and the SSVM prunes redundant samples in the data dimension. The data are thus processed in the two directions of feature attribute and data dimension, so that data with strong independence are obtained, which benefits the fault diagnosis of the bearing.
• Fault diagnosis: After the data are processed, we obtain data with low redundancy. Thus, the impact of the independence assumption on the NB model is reduced, and the fault diagnosis can be performed effectively.

3.2. Feature Selection Using Decision Tree

The Decision Tree is a tree structure composed mainly of nodes and branches; the nodes comprise leaf nodes and intermediate nodes. An intermediate node represents a feature, and a leaf node represents a class label. The Decision Tree can be used for feature selection [29]: the attributes that appear at the Decision Tree nodes provide the important information that promotes classification. The J48 algorithm is commonly used to construct Decision Trees, so we construct a Decision Tree using the J48 algorithm. We then take the feature attributes corresponding to the intermediate nodes of the tree, and remove the feature attributes that carry no important information. The J48 algorithm for feature selection proceeds as follows:
(a) The acquired data is used as the input of the algorithm, and the output is the nodes of the Decision Tree.
(b) The output Decision Tree nodes are divided into leaf nodes and intermediate nodes. A leaf node represents a classification, an intermediate node represents a decision attribute, and a branch represents the condition that leads from the previous decision attribute to the next decision.
(c) The Decision Tree is grown from top to bottom until all remaining nodes become leaf nodes.
(d) Criterion for choosing decision attributes: the information gain of each feature is calculated, and the feature with the maximum information gain is chosen as the intermediate node of the Decision Tree.
Information gain determines how to select the most appropriate feature from a number of attributes, and it is derived from information entropy. The information gain of attribute At for the data set is the entropy of the whole data set minus the entropy after splitting on the attribute. Since At is a continuous attribute assumed to follow a Gaussian distribution, the information entropy of At is defined as follows:
$$Gain(At) = Info(L) - Info_{At}(L), \qquad (10)$$

$$Info(L) = -\sum_{j=1}^{m} P(j \mid L) \log P(j \mid L), \qquad (11)$$

$$Info_{At}(L) = \sum_{j=1}^{m} \frac{|L_j|}{|L|} \cdot \frac{\log\!\left(2\pi e\,\delta_{ij}^2\right)}{2}. \qquad (12)$$
$Gain(At)$ is the information gain of the attribute At, $Info(L)$ is the entropy before splitting, and $Info_{At}(L)$ is the entropy after splitting on At. The variance $\delta_{ij}^2$ is given by Equation (7), m is the number of classes, and $L_j$ is a subset of the data set L.
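As a sketch of Equations (10)–(12) (our own, with hypothetical names; it is not the J48 implementation itself), the information gain of one continuous attribute combines the discrete class entropy with the Gaussian differential entropy of each class subset:

```python
import numpy as np

def info_gain_gaussian(x, y):
    """Information gain of one continuous attribute x (Equations (10)-(12))."""
    classes, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    info_L = -(p * np.log2(p)).sum()       # Equation (11): class entropy before splitting
    info_At = sum(                         # Equation (12): weighted Gaussian entropies
        p[j] * 0.5 * np.log2(2 * np.pi * np.e * x[y == classes[j]].var(ddof=1))
        for j in range(len(classes))
    )
    return info_L - info_At                # Equation (10)

# Rank the extracted features by information gain (feature matrix X, labels y):
# ranking = np.argsort([-info_gain_gaussian(X[:, g], y) for g in range(X.shape[1])])
```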

3.3. SSVM

SVM is a traditional classification method for two categories. An optimal separating hyperplane is constructed in the sample set so that the two classes of samples lie on opposite sides of the hyperplane. Generally, when there is too much data, SVM cannot completely separate the two kinds of data onto the two sides of the hyperplane. Thus, we propose the SSVM algorithm to remove the spatial redundancy of the vectors.
SSVM data processing is divided into several steps, as shown in Figure 3.
Step 1: Constructing the optimal hyperplane of data.  
In most cases, SVM is targeted at two-class problems [31]. The data set $(X, Y)$ is divided into a training set and a test set. The training set is $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$; if $X_i$ belongs to the first class, $Y_i = 1$, and if $X_i$ belongs to the second class, $Y_i = -1$. As shown in Figure 4, the hyperplane $H(X)$ separates the two classes of data on its two sides.
The hyperplane $H(X)$ is given by Equation (13) [32]:

$$w^T K(X) + b = 0. \qquad (13)$$

The function $K(X)$ is a kernel function, which maps the low-dimensional space into a high-dimensional space so that data that cannot be separated in the low-dimensional space become separable; w is a weight vector and b is a constant, and their values are obtained by solving the following optimization problem [31]:

$$\min \ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i, \qquad \|w\|^2 = w^T w, \qquad (14)$$

$$\text{s.t.} \ \ y_i\left(w^T K(X_i) + b\right) \ge 1 - \xi_i, \quad \xi_i \ge 0. \qquad (15)$$
The parameter C is mainly used to adjust the training error, and $\xi_i$ is a slack variable [33]. After solving for the parameters, the optimal hyperplane $H(X)$ is obtained [31]:

$$H(X) = \mathrm{sgn}\!\left(\sum_{i \in SV} y_i a_i K(x_i, x) + b\right), \qquad (16)$$

where SV is the set of support vectors of the data set $(X, Y)$, and sgn is the sign function, which returns the sign of its argument. $K(x_i, x)$ is a kernel function; among the many kinds of kernel functions, the Gaussian kernel performs well in applications, so the Gaussian kernel function is used in this paper:

$$K(x, y) = \exp\!\left(-\frac{\|x - y\|^2}{\sigma^2}\right). \qquad (17)$$
Step 2: Using the constructed hyperplane to select the data and remove the redundancy.  
Firstly, a suitable threshold is selected, and the hyperplane $H(X)$ is used to test the data; when the test result does not reach the threshold, the corresponding data are chosen for pruning.
Then, the support vectors on the hyperplane boundary are found.
Finally, the point closest to each support vector is found, and it is judged whether the class of this nearest point is consistent with that of the support vector; if so, it is kept, otherwise it is deleted.
This article uses the Euclidean distance to measure the distance between two points. For high-dimensional data, the distance between two points is the distance between two vectors; for example, for $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$, the distance $D(X, Y)$ is written as:

$$D(X, Y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}. \qquad (18)$$
Step 3: Reorganizing the processed data and obtaining the new data.
SVM is mainly used for two classes of data, whereas this article deals with multiple categories. First of all, the multiple categories of data are paired up. Then, each pair of classes is pruned with SSVM. The data processing is divided into the following steps:
(1) Construct the hyperplane on the training data.
(2) Test the data with the trained hyperplane.
(3) Set an appropriate threshold and find the class pairs whose training results fall below it.
(4) For the data obtained in step (3), find the nearest neighbor of each support vector by calculating the distances between the support vectors and the data points, setting the distance from a point to itself to infinity.
(5) Find the nearest vector point of each support vector.
(6) Determine whether the support vector is consistent with the classification of its nearest neighbor vector, and mark the neighbor as 0 if they are inconsistent.
(7) Remove the data marked as 0.
(8) Reorganize the data to obtain the new data set.
As the above description shows, the pruning procedure is the core of the SSVM. The details of the SSVM pruning algorithm are given in Algorithm 1.
Algorithm 1 SSVM pruning algorithm.
Input:
  The selected training samples 〈X,Y〉, X = (X_1, X_2, …, X_n);
Output:
  The trimmed samples 〈X1,Y1〉
  1:  Begin
  2:  Obtain the support vectors 〈Z,H〉 by SVM, Z = (Z_1, Z_2, …, Z_m)
  3:      for i := 1 to m do
  4:          for j := 1 to n do
  5:              Calculate the distance D(Z_i, X_j) between Z_i and X_j by Equation (18); when Z_i is the same point as X_j, define the distance D as infinite
  6:          end
  7:          Find the dimension vector X_j in X nearest to Z_i
  8:          Judge whether H_i and Y_j are the same; if not, let Y_j = 0
  9:      end
  10:     Delete the sample data with Y = 0
  11:     return 〈X1,Y1〉
  12: end
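A compact realization of Algorithm 1 could look as follows. This is a sketch under our own assumptions (scikit-learn's SVC for the pairwise SVMs, and the pair's training accuracy as the threshold criterion); it is not the authors' code:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import pairwise_distances

def ssvm_prune(X, y, threshold=0.9):
    """One-vs-one pruning following Algorithm 1 (illustrative sketch)."""
    keep = np.ones(len(y), dtype=bool)
    classes = np.unique(y)
    for a in range(len(classes)):
        for b in range(a + 1, len(classes)):                  # pair the classes one-vs-one
            idx = np.flatnonzero((y == classes[a]) | (y == classes[b]))
            Xp, yp = X[idx], y[idx]
            svm = SVC(kernel="rbf").fit(Xp, yp)               # Equations (13)-(17)
            if svm.score(Xp, yp) >= threshold:                # prune only hard-to-separate pairs
                continue
            D = pairwise_distances(svm.support_vectors_, Xp)  # Equation (18)
            D[np.arange(len(svm.support_)), svm.support_] = np.inf  # distance to itself = infinity
            nearest = D.argmin(axis=1)                        # nearest point to each support vector
            disagree = yp[svm.support_] != yp[nearest]        # steps 5-8 of Algorithm 1
            keep[idx[nearest[disagree]]] = False              # mark Y = 0 and delete later
    return X[keep], y[keep]
```

Calling this hypothetical ssvm_prune on the J48-selected features yields the trimmed training set that is finally passed to NB.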

4. Experiment and Analysis

4.1. Bearing Data Preprocessing

The data in this article are the bearing fault signals provided by the Case Western Reserve University (CWRU) laboratories [34]. The experimental platform is shown in Figure 5. It consists of a torque transducer, a 1.5 kW motor and a dynamometer. The experiments use the acceleration signals collected by accelerometers. The sensors are fixed with magnetic bases at the 12 o'clock position of the drive end and the fan end of the motor housing, and the vibration signals are collected through a recorder. The bearings used in the test are SKF6205-2RS deep groove ball bearings. The sampling frequency of the experiment is 12 kHz, the speed is 1797 rpm, and the collected data comprise the normal vibration signal and the fault vibration signals.
In this paper, the normal vibration signals and fault signals of the bearings are analyzed, and the samples of each type of signal number at least 12,100. The samples used in this paper are those collected with no load and a fault diameter of 0.021 inches. Table 1 describes the normal bearing signal and the five kinds of fault bearing signals used in this paper, and the six kinds of bearing data are shown in Figure 6.

4.2. Application of Improved Algorithm in Bearing Fault

The fault diagnosis model is constructed according to Figure 2. This paper uses the CWRU rolling bearing samples, and the number of data points for each state is at least 121,200. The test data and the training data each account for half of the total data. A detailed description of the various bearing states is given in Table 2.
In this paper, the vibration signal is mainly processed from three aspects.
First, the feature extraction is performed by the time domain method.
The statistical characteristics of the vibration amplitude change with the location and size of the fault. The time-domain waveform varies dynamically over time, and the amplitude of the vibration signal intuitively reflects the characteristic information of the signal. The state of the bearing can be diagnosed from the time-domain waveform by analyzing the amplitude, shape and other characteristics of the waveform. The time-domain characteristic parameters differ with fault type and fault severity. Generally speaking, time-domain features capture the global characteristics of the bearing state and can effectively extract the bearing fault features.
In practice, bearing fault information is diverse, and a fault is often accompanied by other defects, such as bearing deformation, corrosion and so on. In order to diagnose the fault more effectively, we need to extract features from the bearing fault data. In this paper, 17 time-domain features are used to characterize the signal.
In Table 3, X(n) represents the signal samples, n = 1, 2, …, d, where d is the number of points in a sample. The seventeen time-domain feature attributes are: T1 average value, T2 absolute mean, T3 effective value, T4 average power, T5 square amplitude, T6 peak, T7 peak-to-peak, T8 variance, T9 standard deviation, T10 skewness, T11 kurtosis, T12 waveform index, T13 crest index, T14 impulse index, T15 margin index, T16 skewness index and T17 kurtosis index.
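For illustration, a few of the seventeen features of Table 3 can be computed per vibration segment as below (a sketch with hypothetical names; the segment length and the feature subset are our choices):

```python
import numpy as np

def time_domain_features(x):
    """A subset of the 17 time-domain features of Table 3 for one segment x."""
    t1 = x.mean()                   # T1: average value
    t2 = np.abs(x).mean()           # T2: absolute mean
    t3 = np.sqrt((x ** 2).mean())   # T3: effective (RMS) value
    t9 = x.std(ddof=1)              # T9: standard deviation
    t11 = (x ** 4).mean()           # T11: kurtosis
    t12 = t3 / t2                   # T12: waveform index
    t17 = t11 / t9 ** 4             # T17: kurtosis index
    return np.array([t1, t2, t3, t9, t11, t12, t17])

# Build a feature matrix from equal-length segments of the raw vibration signal:
# X = np.vstack([time_domain_features(s) for s in np.array_split(signal, 242)])
```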
Second, the main features are selected from the extracted features by the Decision Tree.
The main description of the J48 algorithm was given in Section 3, and the output tree structure is shown in Figure 7. It can be seen from the diagram that the main features of the bearing data are T1, T5, T12 and T17.
The 17 feature attributes obtained by feature extraction are interrelated, which leads to data redundancy. Extracting the main features with J48 yields attributes with low correlation, so that the independence of the data is enhanced.
The description and significance of these four main time-domain features are as follows:
• average value (T1): T1 mainly reflects the trend of the bearing fault signal;
• square amplitude (T5): T5 mainly describes the energy of the signal;
• waveform index (T12): T12 is sensitive to fault signals with a stable waveform;
• kurtosis index (T17): kurtosis is sensitive to bearing defects and reliably reflects the state of rolling bearings; it is not easily affected by temperature, speed, etc., and is usually analyzed comprehensively together with the crest factor and the effective value.
In Figure 7, an intermediate node (drawn as an ellipse) represents a decision attribute, and a leaf node (drawn as a rectangle) represents a classification result; the values on the branches between nodes are the splitting conditions. The graph shows a part of the Decision Tree. The class label of a leaf is the class with the highest probability in its classification result, which has little effect on the feature selection.
Third, the extracted main features are pruned with SSVM.
The J48 algorithm extracts the attribute vector so that the correlation between data is reduced and their independence is enhanced. This paper then uses the SSVM described above to reduce similar samples in the data dimension: the more similar two samples are, the more redundant they are. The redundancy among the pruned data is reduced, so that the independence in the data dimension is enhanced.
SSVM is used to select appropriate data for pruning. If too much or too little data is removed, the classification result suffers, so choosing an appropriate threshold is very important. The threshold in this article is the accuracy of the test data under SVM. When the accuracy of a class pair is greater than a certain value, we consider those data not redundant and do not prune them. Therefore, the class pairs below the threshold are selected, and the inconsistent nearest neighbors among them are removed. Table 4 shows the number of pruned points and the size of the pruned training set for each threshold, and Figure 8 shows the test accuracy of the bearing data for each threshold. From Table 4 and Figure 8, it can be concluded that pruning too little makes the effect on classification insignificant, while pruning too much loses important data. As seen in Figure 8, the accuracy is highest when the threshold is 0.9. Therefore, the training data with a threshold below 0.9 are selected for SSVM pruning; only in this way can the fault diagnosis be performed effectively.
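The threshold sweep of Table 4 and Figure 8 can be reproduced in outline as follows, reusing the hypothetical ssvm_prune and Gaussian NB sketches above; the candidate values are those of Table 4:

```python
# Sweep the candidate thresholds and report how pruning affects NB test accuracy.
for th in [1.0, 0.95, 0.90, 0.85, 0.80, 0.60]:
    Xp, yp = ssvm_prune(X_train, y_train, threshold=th)
    model = train_gaussian_nb(Xp, yp)
    acc = (predict_gaussian_nb(X_test, *model) == y_test).mean()
    print(f"threshold={th:.2f}  pruned={len(y_train) - len(yp)}  accuracy={acc:.4f}")
```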
After the vibration data are processed from the three aspects above, redundant data are removed from both the feature vector and the dimension vector. Figure 9 shows three-dimensional views of the data after time-domain feature extraction, after J48 feature selection, and after J48 plus SSVM trimming. The axes x, y and z in Figure 9 are dimensional features: Figure 9a uses the mean, absolute mean and effective value, while Figure 9b,c use the mean, waveform index and kurtosis index. It can be seen that the classes overlap heavily in Figure 9a, the overlap in Figure 9b is clearly lower than in Figure 9a, and Figure 9c separates each category clearly. Therefore, Figure 9 shows that the redundancy among the processed data is greatly reduced, so the correlation between the data decreases, and the influence of the NB independence assumption on fault diagnosis is finally reduced.
The correlation of the processed bearing fault data is low, which reduces the limitation of the independence assumption on NB fault diagnosis. Table 5 is the confusion matrix of NB fault diagnosis on the processed data, and Table 6 is the confusion matrix of the NB model on the vibration data without redundancy removal. As can be seen from the tables, the model improves for every category after redundancy removal.
In order to verify the validity of the algorithm on the bearing data, simulations are carried out in MATLAB (Version 8.6, The MathWorks, MA, USA). Figure 10 and Table 7 give the bearing fault diagnosis results. In Figure 10 and Table 7, NB + J48 + SVM means that the data are first selected by J48, the selected data are then pruned by SVM, and NB fault diagnosis is finally carried out. Compared with the other configurations, JSSVM-NB, which removes data redundancy in both the feature vector and the data dimension, gives the best bearing fault diagnosis results, with an accuracy of 99.17%. Table 8 compares JSSVM-NB with reference [35], which uses the same data for bearing fault diagnosis. It can be seen from Table 7 and Table 8 that the JSSVM-NB model is effective for rolling bearing fault diagnosis.

5. Conclusions

In this paper, in order to relieve the independence assumption, the bearing data are processed from two aspects, the attribute vector and the dimension vector, and bearing data with higher independence are obtained for NB bearing fault diagnosis. NB relies on the Bayes conditional independence hypothesis; however, in practice, it is difficult for the bearing data vectors to be independent. Therefore, redundancy is removed from the feature attribute vector and the data dimension of the bearing data in this paper, so that the correlation between data is reduced and NB-based bearing condition monitoring is enhanced, as the simulation results show. NB with improved data independence achieves fault diagnosis for the different parts of the rolling bearing, and it can also be applied to other industrial fault diagnosis tasks.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61702348, 61772351, 61602326), the National Key Technology Research and Development Program (2015BAF13B01), the National Key R&D Plan (2017YFB1303000, 2017YFB1302800), and the Project of the Beijing Municipal Science & Technology Commission (LJ201607). The work is also supported by the Youth Innovative Research Team of Capital Normal University.

Author Contributions

Nannan Zhang and Lifeng Wu conceived and designed the experiments; Jing Yang performed the experiments; Yong Guan analyzed the data; Lifeng Wu contributed analysis tools; Nannan Zhang wrote the paper. All authors contributed to discussing and revising the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jacobs, W.; Malag, M.; Boonen, R.; Moens, D.; Sas, P. Analysis of bearing damage using a multibody model and a test rig for validation purposes. Struct. Health Monit. 2011, 14, 971–978. [Google Scholar]
  2. Li, W.; Mechefske, C.K. Detection of induction motor faults: A comparison of stator current, vibration and acoustic methods. J. Vib. Control 2006, 12, 165–188. [Google Scholar] [CrossRef]
  3. Ding, S.X.; Yang, Y.; Zhang, Y.; Li, L. Data-driven realizations of kernel and image representations and their application to fault detection and control system design. Automatica 2014, 50, 2615–2623. [Google Scholar] [CrossRef]
  4. Chadli, M.; Abdo, A.; Ding, S.X. H-/H∞ fault detection filter design for discrete-time Takagi-Sugeno fuzzy system. Automatica 2013, 49, 1996–2005. [Google Scholar] [CrossRef]
  5. Chibani, A.; Chadli, M.; Peng, S.; Braiek, N.B. Fuzzy Fault Detection Filter Design for T-S Fuzzy Systems in Finite Frequency Domain. IEEE Trans. Fuzzy Syst. 2016, 25, 1051–1061. [Google Scholar] [CrossRef]
  6. Youssef, T.; Chadli, M.; Karimi, H.R.; Wang, R. Actuator and sensor faults estimation based on proportional integral observer for TS fuzzy model. J. Frankl. Inst. 2016, 354, 2524–2542. [Google Scholar] [CrossRef]
  7. Paya, B.; Esat, I.I.; Badi, M.N.M. Artificial Neural Network Based Fault Diagnostics of Rotating Machinery Using Wavelet Transforms as a Preprocessor. Mech. Syst. Signal Process. 1997, 11, 751–765. [Google Scholar] [CrossRef]
  8. Lynagh, N.; Rahnejat, H.; Ebrahimi, M.; Aini, R. Bearing induced vibration in precision high speed routing spindles. Int. J. Mach. Tools Manuf. 2000, 40, 561–577. [Google Scholar] [CrossRef]
  9. Wardle, F.P. Vibration Forces Produced by Waviness of the Rolling Surfaces of Thrust Loaded Ball Bearings Part 1: Theory. Arch. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 1988, 202, 305–312. [Google Scholar] [CrossRef]
  10. Mevel, B.; Guyader, J.L. Routes to Chaos in Ball Bearings. J. Sound Vib. 2007, 162, 471–487. [Google Scholar] [CrossRef]
  11. Vafaei, S.; Rahnejat, H. Indicated repeatable runout with wavelet decomposition (IRR-WD) for effective determination of bearing-induced vibration. J. Sound Vib. 2003, 260, 67–82. [Google Scholar] [CrossRef]
  12. Lei, Y.; Lin, J.; He, Z.; Zi, Y. Application of an improved kurtogram method for fault diagnosis of rolling element bearings. Mech. Syst. Signal Process. 2011, 25, 1738–1749. [Google Scholar] [CrossRef]
  13. Wang, T.; Qi, J.; Xu, H.; Wang, Y.; Liu, L.; Gao, D. Fault diagnosis method based on FFT-RPCA-SVM for Cascaded-Multilevel Inverter. ISA Trans. 2016, 60, 156–163. [Google Scholar] [CrossRef] [PubMed]
  14. Samanta, B.; Al-Balushi, K.R. Artificial Neural Network Based Fault Diagnostics of Rolling Element Bearings Using Time-Domain Features. Mech. Syst. Signal Process. 2003, 17, 317–328. [Google Scholar] [CrossRef]
  15. Jack, L.B.; Nandi, A.K. Support vector machines for detection and characterization of rolling element bearing faults. Arch. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2001, 215, 1065–1074. [Google Scholar] [CrossRef]
  16. Yang, Y.; Yu, D.; Cheng, J. A roller bearing fault diagnosis method based on EMD energy entropy and ANN. J. Sound Vib. 2006, 294, 269–277. [Google Scholar]
  17. Al-Raheem, K.F.; Roy, A.; Ramachandran, K.P.; Harrison, D.K.; Grainger, S. Application of the Laplace-Wavelet Combined with ANN for Rolling Bearing Fault Diagnosis. J. Vib. Acoust. 2008, 130, 3077–3100. [Google Scholar] [CrossRef]
  18. Shuang, L.; Meng, L. Bearing Fault Diagnosis Based on PCA and SVM. In Proceedings of the International Conference on Mechatronics and Automation, Harbin, China, 5–8 August 2007; pp. 3503–3507. [Google Scholar]
  19. Wu, L.; Yao, B.; Peng, Z.; Guan, Y. Fault Diagnosis of Roller Bearings Based on a Wavelet Neural Network and Manifold Learning. Appl. Sci. 2017, 7, 158. [Google Scholar] [CrossRef]
  20. Sugumaran, V.; Sabareesh, G.R.; Ramachandran, K.I. Fault diagnostics of roller bearing using kernel based neighborhood score multi-class support vector machine. Expert Syst. Appl. 2008, 34, 3090–3098. [Google Scholar] [CrossRef]
  21. Sugumaran, V.; Ramachandran, K.I. Effect of number of features on classification of roller bearing faults using SVM and PSVM. Expert Syst. Appl. 2011, 38, 4088–4096. [Google Scholar] [CrossRef]
  22. Zhang, R.; Peng, Z.; Wu, L.; Yao, B.; Guan, Y. Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence. Sensors 2017, 17, 549. [Google Scholar] [CrossRef] [PubMed]
  23. Zhang, R.; Wu, L.; Fu, X.; Yao, B. Classification of bearing data based on deep belief networks. In Proceedings of the Prognostics and System Health Management Conference, Chengdu, China, 19–21 October 2017; pp. 1–6. [Google Scholar]
  24. Mohamed, E.A.; Abdelaziz, A.Y.; Mostafa, A.S. A neural network-based scheme for fault diagnosis of power transformers. Electr. Power Syst. Res. 2005, 75, 29–39. [Google Scholar] [CrossRef]
  25. Sharma, R.K.; Sugumaran, V.; Kumar, H.; Amarnath, M. A comparative study of NB classifier and Bayes net classifier for fault diagnosis of roller bearing using sound signal. Decis. Support Syst. 2015, 1, 115. [Google Scholar] [CrossRef]
  26. Mccallum, A.; Nigam, K. A Comparison of Event Models for NB Text Classification. In Proceedings of the AAAI-98 Workshop on Learning for Text Categorization, Madison, WI, USA, 26–27 July 1998; Volume 62, pp. 41–48. [Google Scholar]
  27. Kumar, H.; Ranjit Kumar, T.A.; Amarnath, M.; Sugumaran, V. Fault Diagnosis of Bearings through Vibration Signal Using Bayes Classifiers. Int. J. Comput. Aided Eng. Technol. 2014, 6, 14–28. [Google Scholar] [CrossRef]
  28. Krishna, H. Fault Diagnosis of welded joint through vibration signals using NB Algorithm. In Proceedings of the International Conference on Advances in Manufacturing and Material Engineering, Mangalore, India, 27–29 March 2014. [Google Scholar]
  29. Sugumaran, V.; Muralidharan, V.; Ramachandran, K.I. Feature selection using decision tree and classification through Proximal Support Vector Machine for fault diagnostics of roller bearing. Mech. Syst. Signal Process. 2007, 21, 930–942. [Google Scholar] [CrossRef]
  30. Quinlan, J.R. Improved Use of Continuous Attributes in C4.5. J. Artif. Intell. Res. 1996, 4, 77–90. [Google Scholar]
  31. Brereton, R.G.; Lloyd, G.R. Support vector machines for classification and regression. Analyst 2010, 135, 230–267. [Google Scholar] [CrossRef] [PubMed]
  32. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines: And Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000; pp. 1–28. [Google Scholar]
  33. Zhang, X.; Liang, Y.; Zhou, J.; Zang, Y. A novel bearing fault diagnosis model integrated permutation entropy, ensemble empirical mode decomposition and optimized SVM. Measurement 2015, 69, 164–179. [Google Scholar] [CrossRef]
  34. Loparo, K. Bearings Vibration Data Set; Case Western Reserve University: Cleveland, OH, USA. Available online: http://www.eecs.case.edu/laboratory/bearing/welcome-overview.htm (accessed on 20 July 2012).
  35. Wu, S.D.; Wu, C.W.; Wu, T.Y.; Wang, C.C. Multi-Scale Analysis Based Ball Bearing Defect Diagnostics Using Mahalanobis Distance and Support Vector Machine. Entropy 2013, 15, 416–433. [Google Scholar] [CrossRef]
Figure 1. Naive Bayes training model.
Figure 2. Fault diagnosis model based on the enhanced independence of data.
Figure 3. Selective Support Vector Machine data processing flow chart.
Figure 4. Two categories of Support Vector Machine.
Figure 5. Experimental diagram of experimental platform for rolling bearing fault.
Figure 6. The time-domain waveforms of the rolling bearings. The x-axis is time in seconds, and the y-axis is the drive-end bearing acceleration data. (a) normal bearing signal waveform; (b) inner race fault signal waveform; (c) roller fault signal waveform; (d) outer race fault signal waveform at center @6:00; (e) outer race fault signal waveform at orthogonal @3:00; (f) outer race fault signal waveform at opposite @12:00.
Figure 7. A part of the Decision Tree.
Figure 8. The accuracy of the data corresponding to the threshold.
Figure 9. Bearing data description. (a) three-dimensional fault data after time-domain feature extraction of the original signal; (b) three-dimensional fault data after J48 feature selection; (c) three-dimensional fault data after J48 and SSVM pruning.
Figure 10. Testing accuracy comparison of each condition in the experiment.
Table 1. Description of the CWRU dataset.

Data Type                          | Motor Load (HP) | Fault Diameter (Inches) | Label
Normal                             | 0               | 0                       | 1
Inner race                         | 0               | 0.021                   | 2
Ball                               | 0               | 0.021                   | 3
Out race fault at center @6:00     | 0               | 0.021                   | 4
Out race fault at orthogonal @3:00 | 0               | 0.021                   | 5
Out race fault at opposite @12:00  | 0               | 0.021                   | 6
Table 2. Description of the data sets.

Data Type                          | Number of Training | Number of Testing | Label
Normal                             | 121                | 121               | 1
Inner race                         | 121                | 121               | 2
Ball                               | 121                | 121               | 3
Out race fault at center @6:00     | 121                | 121               | 4
Out race fault at orthogonal @3:00 | 121                | 121               | 5
Out race fault at opposite @12:00  | 121                | 121               | 6
Table 3. Time-domain analysis of bearing fault data.

1. $T_1 = \frac{1}{d}\sum_{n=1}^{d} X_n$
2. $T_2 = \frac{1}{d}\sum_{n=1}^{d} |X_n|$
3. $T_3 = \sqrt{\frac{1}{d}\sum_{n=1}^{d} X_n^2}$
4. $T_4 = \frac{1}{d}\sum_{n=1}^{d} X_n^2$
5. $T_5 = \left(\frac{1}{d}\sum_{n=1}^{d} \sqrt{|X_n|}\right)^2$
6. $T_6 = \max X(n)$
7. $T_7 = \max(X(n)) - \min(X(n))$
8. $T_8 = \frac{1}{d-1}\sum_{n=1}^{d} (X_n - T_1)^2$
9. $T_9 = \sqrt{\frac{1}{d-1}\sum_{n=1}^{d} (X_n - T_1)^2}$
10. $T_{10} = \frac{1}{d}\sum_{n=1}^{d} X_n^3$
11. $T_{11} = \frac{1}{d}\sum_{n=1}^{d} X_n^4$
12. $T_{12} = T_3 / T_2$
13. $T_{13} = T_6 / T_3$
14. $T_{14} = T_6 / T_2$
15. $T_{15} = T_6 / T_5$
16. $T_{16} = T_{10} / T_9^3$
17. $T_{17} = T_{11} / T_9^4$
Table 4. The corresponding threshold data.

Threshold (Training Accuracy) | 1   | 0.95 | 0.90 | 0.85 | 0.80 | 0.60
Number of pruned samples      | 190 | 187  | 179  | 155  | 102  | 0
Number of training samples    | 536 | 539  | 547  | 571  | 624  | 726
Table 5. Confusion matrix of the processed bearing fault data on the test sets.

Actual Classes | Predicted Classes
               | 1   | 2   | 3   | 4   | 5   | 6
1              | 121 | 0   | 0   | 0   | 0   | 0
2              | 0   | 121 | 0   | 0   | 0   | 0
3              | 0   | 0   | 121 | 0   | 0   | 0
4              | 0   | 0   | 0   | 121 | 0   | 0
5              | 0   | 0   | 0   | 0   | 121 | 0
6              | 0   | 0   | 0   | 0   | 0   | 121
Table 6. Confusion matrix of NB on the test sets.

Actual Classes | Predicted Classes
               | 1   | 2   | 3   | 4   | 5   | 6
1              | 120 | 0   | 1   | 0   | 0   | 0
2              | 0   | 121 | 0   | 0   | 0   | 0
3              | 0   | 0   | 117 | 0   | 0   | 4
4              | 0   | 1   | 0   | 117 | 2   | 1
5              | 0   | 1   | 0   | 3   | 117 | 0
6              | 0   | 0   | 0   | 0   | 0   | 121
Table 7. Testing accuracies of the compared methods.

Methods                    | Accuracies
NB                         | 98.21%
NB + J48 + SVM             | 98.48%
NB + J48 + SSVM (JSSVM-NB) | 99.17%
Table 8. The comparison results in bearing fault diagnosis.

State                                              | JSSVM-NB | Reference [35]
Normal                                             | 100%     | 98.31%
Inner race                                         | 100%     | 97.73%
Ball                                               | 97.5%    | 95.04%
Out race fault at center (@6:00, @3:00 and @12:00) | 99.17%   | 98.02%
