Author Contributions
Conceptualization, R.W. and P.F.; methodology, R.W. and P.F.; software, R.W.; validation, P.F., X.F. and A.A.; investigation, R.W.; writing—original draft preparation, R.W. and P.F.; writing—review and editing, R.W., P.F., X.F. and A.A.; visualization, R.W., P.F., X.F. and A.A.; supervision, P.F. All authors have read and agreed to the published version of the manuscript.
Figure 1. Percentage contribution of common IM faults.
Figure 2. Bearing test rig of the Paderborn University experiment.
Figure 3. Convolutional neural network model architecture.
Figure 4. Spectrogram of a phase current waveform.
Figure 5. Traditional sequential segmentation preprocessing technique for motor current samples.
Figure 6. Overlapping segmentation preprocessing technique for motor current samples.
Figure 7. Shifting window data augmentation technique for motor current samples: (a) sequential segmentation and (b) overlapping segmentation.
Figure 8. Flowchart of the proposed preprocessing and augmentation techniques.
Figure 9. Model accuracy during CNN training.
Figure 10. Cross-entropy losses during CNN training.
Figure 11. Confusion matrix for traditional segmentation techniques: (a) sequential segmentation and (b) overlapping segmentation.
Figure 12. Confusion matrix for segmentation techniques using shifting window augmentation: (a) sequential segmentation and (b) overlapping segmentation.
Figure 13. Model accuracy of each technique for all runs.
Figure 14. Training and testing signals segmented around their zero-crossing for sequential segmentation.
Figure 15. Epoch metrics for model trained with sequential segmentation, sampled at predetermined intervals: (a) accuracy and (b) cross-entropy losses.
Figure 16. Confusion matrix for model trained with sequential segmentation, sampled at predetermined intervals.
Figure 17. Training signals segmented around their zero-crossing for overlapping segmentation using shifting window augmentation.
Figure 18. Epoch metrics for model trained using shifting window augmentation, sampled at predetermined intervals: (a) accuracy and (b) cross-entropy losses.
Figure 19. Confusion matrix for model trained using shifting window augmentation, sampled at predetermined intervals.
Figure 20. Epoch metrics for the LSTM model trained using sequential segmentation, sampled at predetermined intervals: (a) accuracy and (b) cross-entropy losses.
Figure 21. Epoch metrics for the LSTM model trained using shifting window augmentation, sampled at predetermined intervals: (a) accuracy and (b) cross-entropy losses.
Figure 22. Confusion matrix for the LSTM model trained with samples at predetermined intervals using (a) sequential segmentation and (b) shifting window augmentation.
Table 1. Operating conditions of Paderborn University’s testbed.

| Number | Rotational Speed [rpm] | Load Torque [Nm] | Radial Force [N] | Name of File |
|---|---|---|---|---|
| 0 | 1500 | 0.7 | 1000 | N15_M07_F10 |
| 1 | 1500 | 0.1 | 1000 | N15_M01_F10 |
| 2 | 1500 | 0.7 | 400 | N15_M07_F04 |
Table 2. Bearing damage files considered.

| Bearing Damage | Bearing Codes | Class Label |
|---|---|---|
| Healthy Bearing | K001, K002, K003, K004, K005, K006 | K0 |
| Outer Ring | KA04, KA15, KA16, KA22, KA30 | KA |
| Inner Ring | KI04, KI14, KI16, KI17, KI18, KI21 | KI |
Table 3. Structural parameters of the CNN model.

| Layer | Output Shape | Parameter Numbers |
|---|---|---|
| Input | (None, 33, 33, 1) | 0 |
| Conv2d ((5, 5), 32) | (None, 29, 29, 32) | 832 |
| MaxPool2d (2, 2) | (None, 15, 15, 32) | 0 |
| Conv2d ((3, 3), 32) | (None, 13, 13, 32) | 9248 |
| Conv2d ((3, 3), 32) | (None, 11, 11, 32) | 9248 |
| MaxPool2d (2, 2) | (None, 6, 6, 32) | 0 |
| Conv2d ((3, 3), 64) | (None, 4, 4, 64) | 18,496 |
| Conv2d ((3, 3), 64) | (None, 2, 2, 64) | 36,928 |
| MaxPool2d (2, 2) | (None, 1, 1, 64) | 0 |
| Dense (256) | (None, 256) | 16,640 |
| Dense (1024) | (None, 1024) | 263,168 |
| Dense (128) | (None, 128) | 131,200 |
| Dense (3) | (None, 3) | 387 |
| Total Parameters: | | 486,147 |
| Trainable Parameters: | | 486,147 |
| Non-trainable Parameters: | | 0 |
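As a quick sanity check, the per-layer counts in Table 3 can be recomputed from the kernel sizes and channel widths alone. The sketch below is pure Python; the layer hyperparameters are read off the table, not taken from the authors' code, and the Dense (256) layer is assumed to be fed by the flattened (1, 1, 64) feature map.

```python
def conv2d_params(kh, kw, c_in, c_out):
    # weights (kh * kw * c_in per filter) plus one bias per filter
    return kh * kw * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

layers = [
    conv2d_params(5, 5, 1, 32),    # Conv2d ((5, 5), 32)  -> 832
    conv2d_params(3, 3, 32, 32),   # Conv2d ((3, 3), 32)  -> 9248
    conv2d_params(3, 3, 32, 32),   # Conv2d ((3, 3), 32)  -> 9248
    conv2d_params(3, 3, 32, 64),   # Conv2d ((3, 3), 64)  -> 18,496
    conv2d_params(3, 3, 64, 64),   # Conv2d ((3, 3), 64)  -> 36,928
    dense_params(64, 256),         # Dense (256), from flattened (1, 1, 64) -> 16,640
    dense_params(256, 1024),       # Dense (1024) -> 263,168
    dense_params(1024, 128),       # Dense (128)  -> 131,200
    dense_params(128, 3),          # Dense (3)    -> 387
]

total = sum(layers)
print(total)  # 486147, matching the table's total
```

Note that the pooled output shapes (15, 6, and 1) imply "same" padding on the pooling layers: with "valid" padding, a 29-wide map pooled by (2, 2) would give 14, not 15.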
Table 4. Number of samples in each dataset.

| Training Technique | Total Samples |
|---|---|
| Sequential segmentation | 60,294 |
| Shifting window with sequential segmentation | 48,022 |
| Overlapping segmentation | 80,736 |
| Shifting window with overlapping segmentation | 64,384 |
Table 5. Benchmark model parameter settings.

| Parameter | Setting |
|---|---|
| STFT Frame Length | 100 |
| STFT Frame Step | 10 |
| STFT Window | Hann |
| Batch Size | 32 |
| Initial Learning Rate | 0.001 |
| Learning Rate Factor | 0.5 |
| Optimizer | Adam |
| Loss Function | Categorical Cross-Entropy |
| Train/Validation Split | 0.8/0.2 |
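The STFT settings fix the spectrogram shape by simple arithmetic: the number of frames is 1 + (N − frame_length) / frame_step, and a one-sided FFT over 128 points (the smallest power of two covering the 100-sample frame, which is for instance the `tf.signal.stft` default) yields 65 frequency bins. In the sketch below, the 2560-sample segment length is an assumption, chosen here only because it reproduces the (247, 65) LSTM input shape that appears in Table 12.

```python
import math

def stft_shape(n_samples, frame_length=100, frame_step=10):
    # number of full frames that fit in the segment
    n_frames = 1 + (n_samples - frame_length) // frame_step
    # FFT length: smallest power of 2 >= frame_length (common library default)
    fft_length = 2 ** math.ceil(math.log2(frame_length))  # 128
    # one-sided spectrum of a real signal
    n_bins = fft_length // 2 + 1                          # 65
    return n_frames, n_bins

print(stft_shape(2560))  # (247, 65)
```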
Table 6. Analysis of sequential segmentation.

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| K0 | 0.9137 | 0.9164 | 0.9151 | 18,000 |
| KA | 0.8631 | 0.8741 | 0.8685 | 15,000 |
| KI | 0.8802 | 0.8683 | 0.8742 | 18,000 |
| Accuracy | | | 0.8870 | |
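The per-class figures in Tables 6 through 11 are internally consistent: F1 is the harmonic mean of precision and recall, and overall accuracy equals the support-weighted mean of the per-class recalls. A quick check against Table 6 (precision, recall, and supports copied from the table):

```python
rows = {  # class: (precision, recall, support)
    "K0": (0.9137, 0.9164, 18000),
    "KA": (0.8631, 0.8741, 15000),
    "KI": (0.8802, 0.8683, 18000),
}

def f1(p, r):
    # harmonic mean of precision and recall
    return 2 * p * r / (p + r)

for name, (p, r, n) in rows.items():
    # agrees with Table 6's F1 column to within rounding
    print(name, round(f1(p, r), 4))

total = sum(n for _, _, n in rows.values())
accuracy = sum(r * n for _, r, n in rows.values()) / total
print(round(accuracy, 4))  # matches the reported 0.8870 accuracy
```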
Table 7. Analysis of overlapping segmentation.

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| K0 | 0.9087 | 0.9183 | 0.9135 | 18,000 |
| KA | 0.8723 | 0.8790 | 0.8756 | 15,000 |
| KI | 0.8923 | 0.8772 | 0.8847 | 18,000 |
| Accuracy | | | 0.8922 | |
Table 8. Analysis of the shifting window with sequential segmentation.

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| K0 | 0.9588 | 0.9628 | 0.9608 | 18,000 |
| KA | 0.9336 | 0.9336 | 0.9336 | 15,000 |
| KI | 0.9453 | 0.9413 | 0.9433 | 18,000 |
| Accuracy | | | 0.9466 | |
Table 9. Analysis of the shifting window with overlapping segmentation.

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| K0 | 0.9566 | 0.9636 | 0.9601 | 18,000 |
| KA | 0.9351 | 0.9271 | 0.9311 | 15,000 |
| KI | 0.9399 | 0.9397 | 0.9398 | 18,000 |
| Accuracy | | | 0.9444 | |
Table 10. Analysis of model trained with sequential segmentation, sampled at predetermined intervals.

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| K0 | 0.0235 | 0.0235 | 0.0235 | 13,728 |
| KA | 0.0062 | 0.0073 | 0.0067 | 11,420 |
| KI | 0.0282 | 0.0240 | 0.0259 | 13,758 |
| Accuracy | | | 0.0189 | |
Table 11. Analysis of model trained using shifting window augmentation, sampled at predetermined intervals.

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| K0 | 0.9538 | 0.9663 | 0.9600 | 13,728 |
| KA | 0.9400 | 0.9321 | 0.9360 | 11,420 |
| KI | 0.9455 | 0.9396 | 0.9425 | 13,758 |
| Accuracy | | | 0.9468 | |
Table 12. Structural parameters of the LSTM model.

| Layer | Output Shape | Parameter Numbers |
|---|---|---|
| LSTM (128, (247, 65)) | (None, 128) | 99,328 |
| Dropout (0.2) | (None, 128) | 0 |
| Dense (128) | (None, 128) | 16,512 |
| Dense (64) | (None, 64) | 8256 |
| Dropout (0.4) | (None, 64) | 0 |
| Dense (48) | (None, 48) | 3120 |
| Dropout (0.4) | (None, 48) | 0 |
| Dense (3) | (None, 3) | 147 |
| Total Parameters: | | 127,363 |
| Trainable Parameters: | | 127,363 |
| Non-trainable Parameters: | | 0 |
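As with the CNN, the LSTM totals in Table 12 can be recomputed from the layer sizes alone. A standard LSTM layer has four gates, each with input weights, recurrent weights, and a bias, giving 4 · ((n_in + n_units) · n_units + n_units) parameters; here n_in = 65 is the per-frame feature width of the (247, 65) input. A sketch (layer sizes read off the table, not from the authors' code):

```python
def lstm_params(n_in, n_units):
    # four gates (input, forget, cell, output), each with input weights,
    # recurrent weights, and a bias vector
    return 4 * ((n_in + n_units) * n_units + n_units)

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

total = (lstm_params(65, 128)      # LSTM (128, (247, 65)) -> 99,328
         + dense_params(128, 128)  # Dense (128) -> 16,512
         + dense_params(128, 64)   # Dense (64)  -> 8256
         + dense_params(64, 48)    # Dense (48)  -> 3120
         + dense_params(48, 3))    # Dense (3)   -> 147

print(total)  # 127363, matching the table's total
```

The dropout layers contribute no parameters, which is why they do not appear in the sum.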