Article

Research on Fault Diagnosis Method for Rotating Machinery Based on Edge Computing

by Mo Chen 1, Chao Ge 2, Qirui Yang 2 and Zhe Wei 1,3,*

1 School of Mechanical Engineering, Shenyang University of Technology, Shenyang 110870, China
2 Ansteel Group Automation Co., Ltd., Anshan 114021, China
3 The Key Laboratory of Intelligent Manufacturing and Industrial Robots in Liaoning Province, Shenyang 110870, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3577; https://doi.org/10.3390/app15073577
Submission received: 17 February 2025 / Revised: 19 March 2025 / Accepted: 20 March 2025 / Published: 25 March 2025
(This article belongs to the Section Mechanical Engineering)

Abstract

Rotating machinery is a vital and indispensable component of industrial production and processing, playing a crucial role in ensuring the normal operation of production processes. However, most existing fault diagnosis methods for rotating machinery are either offline or cloud-based online approaches, which suffer from long latency and large data volumes and therefore cannot meet real-time requirements. To reduce latency and data transmission volume, this research proposes a fault diagnosis method for rotating machinery based on edge computing. An edge node is constructed that integrates signal acquisition, data preprocessing, feature extraction, and fault diagnosis classification to identify the fault status of equipment accurately and in real time. To address the low fault recognition rate and data redundancy associated with single sensors under complex working conditions, this research proposes a fault diagnosis method based on dual-channel CNN decision-level fusion. To alleviate the computational pressure on edge nodes, the equipment fault diagnosis model is trained on the host computer, and the data preprocessing and diagnosis model are embedded into the edge nodes. The correctness and real-time performance of the proposed method were validated through comparisons with other methods and online fault diagnosis experiments.

1. Introduction

Fault diagnosis is a critical task in large-scale industrial maintenance. With the advent of Industry 4.0 and the continuous increase in automation, a vast amount of equipment operational data are being generated, leading to the rapid development of data-driven fault diagnosis methods [1,2]. As a type of machinery widely used across various industries, rotating machinery holds a significant position in industrial systems such as petrochemicals, nuclear and thermal power, and steel forging [3]. Electric motors and gear reducers are key rotating machinery in the steel forging industry.
In recent years, data-driven intelligent fault diagnosis techniques have been applied to the fault diagnosis of rotating machinery. To address the limitations of the cross-scale coupling relationship between amplitude modulation and frequency modulation characteristics in nonlinear vibration signals, Zheng Jinde et al. proposed a fault diagnosis method for rotating machinery based on improved holographic Hilbert spectrum analysis [4]. This method provides a more comprehensive representation and visualization of the internal modulation relationships in nonlinear fault vibration signals and demonstrates enhanced fault identification capabilities. Simultaneously, sudden unbalanced faults in rotating machinery remain a challenging issue in the field of condition monitoring in engineering practice. To address the lack of fault data, many scholars have undertaken related research efforts [5]. Zhang Zongzhen et al. employed normalized generalized nonlinear convolution activation to address the problem of distributional changes caused by scale inconsistency in nonlinear functions. The proposed nonlinear sparse blind deconvolution method for early bearing fault diagnosis offers superior noise adaptability, reduced computational time, and enhanced robustness [6]. While these fault diagnosis methods exhibit high accuracy and strong generalization capabilities, they often rely on deep learning models with complex structures. These models typically require raw signals or time-frequency transformations of the original signals as inputs. Additionally, both the training and testing datasets are usually offline data, which limits their ability to meet the real-time diagnostic requirements of specific application scenarios [7].
Consequently, a significant number of scholars have turned their attention to online fault diagnosis [8,9,10]. These studies typically utilize cloud computing or host computers, where trained models are stored to perform online diagnostics on real-time data. However, these research outcomes encounter two primary challenges in engineering applications: ①. Uploading real-time high-frequency data to the cloud requires substantial network bandwidth and continuously depletes the storage and computational resources of cloud servers. Moreover, the storage and processing capabilities of host computers are often inadequate to meet the demands of practical applications. ②. If feature data pre-processed by lower-level computers are chosen for upload, the data volume is reduced; however, this approach introduces latency issues. Even if the cloud can perform effective diagnostics, the time delays inherent in communication and computation processes may compromise the fidelity of the original signals, potentially adversely affecting the timeliness and accuracy of fault diagnosis [7]. Therefore, for machines that require real-time diagnostics and dynamic control, cloud computing is unable to meet the stringent response time requirements. To overcome challenges such as high bandwidth load and resource limitations inherent in traditional cloud computing models in industrial manufacturing, edge computing technology has been introduced.
Edge computing technologies can mitigate the computational load on host computers by being closer to user devices, thereby addressing several challenges in the development of the Industrial Internet of Things (IIoT), such as real-time control of production processes, security and privacy of edge devices, and localized processing of production data. Moreover, in practical applications, edge computing offers advantages in enhancing performance, ensuring data security and privacy, and reducing operational costs [11]. Due to these advantages, edge computing has been applied in the field of fault diagnosis.

2. Related Work

To address the limitations of lightweight diagnostic networks in edge computing, Ding Ao et al. proposed a Weighted Multi-Scale Convolution (WSMSC) method and an adaptive pruning technique that eliminates redundant network structures during training without requiring manual intervention [12]. This approach captures multi-time-scale features from vibration signals at a very low computational cost, enabling intelligent diagnosis of train bogie bearing faults in edge computing environments. Perminov et al. investigated the application of bearing fault diagnosis models at the edge, utilizing the Kendryte K210 edge AI device to accelerate the inference speed of convolutional neural networks [13]. In terms of data privacy and security, edge devices in bearing condition monitoring systems can function as gateways, preventing data from being transmitted from sensors to cloud servers [14]. Another approach involves using a decentralized model to ensure the privacy and security of machinery fault data within the Industrial Internet of Things (IIoT) [15].
It is evident that edge computing has become a significant research focus, showing excellent accuracy and real-time performance in the fault diagnosis of rotating machinery, such as motors and bearings. However, most studies focus on fault diagnosis using single-modal sensors. In practical industrial environments that integrate edge computing with fault diagnosis, information obtained from a single sensor can be limited and isolated. If the sensor experiences issues such as improper installation, severe noise interference, or poor connectivity, the data it collects may become entirely unreliable. Therefore, fault diagnosis methods based on multi-sensor data fusion are essential in industrial environments.
Dempster–Shafer (D-S) evidence theory has been widely applied in many fields of multi-sensor information fusion (MSIF) [16]. D-S evidence theory can be used for feature-level fusion or decision-level fusion. The authors of [17] proposed a fault diagnosis method for distribution networks based on a Bayesian network and D-S evidence theory, in which the fault probabilities obtained from two Bayesian networks are fused using D-S evidence theory, thus achieving fault diagnosis in the distribution network. Unlike probability theory, which only considers a single proposition, D-S evidence theory introduces the concept of fuzziness, allowing it to better express the uncertainty of the existence of a proposition [17].
Both domestic and international research on multi-sensor data fusion primarily focus on different fusion algorithms at various data levels. The data fusion levels are generally classified into data-level fusion, feature-level fusion, and decision-level fusion, with specific methods for each level presented in Table 1.
Currently, there is limited research on multi-source fault diagnosis and prediction technologies for production line equipment in complex working environments. Furthermore, existing studies face challenges such as low fault diagnosis accuracy with single sensors under complex conditions, insufficient evidence, and data redundancy. Therefore, it is necessary to investigate edge-computing-driven equipment fault diagnosis methods based on multi-level event interconnected intelligence. By utilizing edge computing technology, which leverages the embedded computing capabilities of IoT devices, in combination with cloud computing through interactive collaboration, this approach can be applied to the fault diagnosis and prediction of production line equipment, thereby achieving process intelligence. The scientific and technological contributions of this paper include:
(1)
Designing an edge node that integrates signal acquisition, data preprocessing, feature extraction, and fault diagnosis classification to reduce equipment fault diagnosis latency and data transmission volume. The data preprocessing and the improved CNN-DS model are deployed on the edge node to enable fault diagnosis of rotating machinery based on edge computing.
(2)
Addressing the issue that most of the collected data during the acquisition process are normal, while fault data are relatively scarce, necessitating logical data augmentation. A sliding window method is employed for data augmentation.
(3)
Proposing a fault diagnosis method based on dual-channel CNN decision-level fusion. This method increases the CNN input layers to process data from different sensors while maintaining accuracy. Additionally, DS evidence theory is incorporated into the fully connected layer to achieve decision-level fusion of the dual-input CNN, significantly enhancing the model’s robustness, generalization, and overall performance.
The structure of this paper is as follows: Section 3 elaborates on the theoretical foundations of convolutional neural networks and Dempster–Shafer evidence theory. Section 4 introduces the proposed fault diagnosis method based on edge computing, detailing the design and implementation of the edge node. Section 5 presents experimental studies conducted on the main motor of a tension leveler in a cold rolling plant, utilizing the proposed fault diagnosis approach. The experimental results substantiate the effectiveness and advantages of the method, offering valuable insights and references for the practical fault diagnosis of tension leveler main motors. Finally, Section 6 provides a conclusion.

3. Theoretical Background and Edge Node Design

3.1. Convolutional Neural Networks

Convolutional neural networks (CNNs) are a typical deep learning technique that can process raw data with minimal preprocessing. A classic CNN primarily consists of an input layer, alternating convolutional layers, pooling layers, and fully connected layers. The alternating convolutional and pooling layers primarily extract features from the input data. After feature extraction, the fully connected layers and the Softmax function are used to learn and classify the extracted feature information [24].
Convolutional layer: this is the core component of a convolutional neural network (CNN). Each randomly initialized convolutional kernel performs convolution along the length of the input. The convolution results of multiple kernels are concatenated to produce a feature map [24]. A convolution kernel (Kernel or Filter) is a core computational unit used for feature extraction from input data. Essentially, it is a small matrix (tensor) that slides over the input feature map and performs a convolution operation [25]. In this study, the ReLU function is selected as the activation function, and its expression is:
$$x_j^l = f\!\left(\sum_{i \in M_j} x_i^{l-1} * w_{ij}^l + b_j^l\right)$$

$$f(x) = \mathrm{ReLU}(x) = \begin{cases} x, & x > 0 \\ 0, & x \le 0 \end{cases}$$

where $x_j^l$ represents the $j$-th feature map at the $l$-th layer; $f(\cdot)$ is a nonlinear activation function; $M_j$ is the set of input images; $*$ represents the convolution operation; $w_{ij}^l$ is the weight matrix of the convolutional kernel; and $b_j^l$ is the bias term.
The value in the $k$-th channel of the feature map in the $l$-th convolutional layer can be expressed as:

$$h_k^l = w_k^l * x^l + b_k^l$$

where $x^l$ is the input to the $l$-th convolutional layer; $w_k^l$ is the weight of the $k$-th convolutional kernel in the $l$-th convolutional layer; and $b_k^l$ is the bias of the $k$-th convolutional kernel in the $l$-th convolutional layer.
Pooling layer: the pooling layer primarily performs nonlinear subsampling on the feature maps of the preceding layer using pooling kernels; therefore, it is also referred to as the subsampling layer. In this study, max-pooling is employed, which represents regional features by extracting the maximum value of local features [26]. The expression for max-pooling is as follows:
$$x_j^l = \max\!\left(\omega(s_1, s_2) \circ x_j^{l-1}\right)$$

where $x_j^l$ represents the feature map after the pooling operation; $\omega(s_1, s_2)$ represents the local region of the pooling operation; $s_1$ and $s_2$ denote the size of the pooling region; $\circ$ represents the overlap between the pooling region and the output feature map of the channel; and $x_j^{l-1}$ is the feature map of the previous layer.
Fully connected layer: to achieve feature fusion of the dual input layers, the features from both input layers can be flattened and then connected before being fed into the fully connected layer [27].
Suppose the flattened feature of the first input layer is $X_1$ with a size of $m \times n_1$ and the flattened feature of the second input layer is $X_2$ with a size of $m \times n_2$. Here, $m$ represents the batch size (i.e., the number of samples), and $n_1$ and $n_2$ are the feature dimensions of the first and second input layers, respectively. $X_1$ and $X_2$ are concatenated column-wise to form a new feature matrix $[X_1, X_2]$ with a size of $m \times (n_1 + n_2)$. The weight matrix of the fully connected layer is $W$, with a size of $k \times (n_1 + n_2)$, where $k$ is the dimension of the output features. The bias vector of the fully connected layer is $b$ with a size of $k \times 1$. The activation function is $f$.
After concatenating the flattened features, the output through the fully connected layer is computed as follows:
$$Y = f\!\left([X_1, X_2]\, W^{\mathsf{T}} + b\right)$$

where $[X_1, X_2]$ represents the column-wise concatenation of $X_1$ and $X_2$, resulting in a matrix of size $m \times (n_1 + n_2)$; $W$ is a weight matrix of size $k \times (n_1 + n_2)$; $b$ is a bias vector of size $k \times 1$; and $f$ is the activation function, which is typically a nonlinear function, such as ReLU, Sigmoid, or Tanh.
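As an illustration of this dual-input concatenation, the following Python sketch builds a two-branch CNN with the Keras functional API (the environment used later in Section 5). The input lengths, filter counts, and class count are illustrative assumptions and are not the parameters reported in Table 3.

```python
# Minimal sketch of a dual-input CNN whose flattened branch features X1 and X2
# are concatenated before the fully connected layer (Keras functional API).
# All shapes and hyperparameters below are illustrative assumptions.
from keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Concatenate, Dense
from keras.models import Model

def build_dual_input_cnn(vib_len=1100, temp_len=750, n_classes=7):
    # Branch 1: envelope spectrum of the vibration signal
    vib_in = Input(shape=(vib_len, 1), name="vibration")
    v = Conv1D(16, 64, padding="same", activation="relu")(vib_in)
    v = MaxPooling1D(pool_size=2)(v)
    v = Flatten()(v)                        # X1, shape (m, n1)

    # Branch 2: temperature sequence
    temp_in = Input(shape=(temp_len, 1), name="temperature")
    t = Conv1D(8, 16, padding="same", activation="relu")(temp_in)
    t = MaxPooling1D(pool_size=2)(t)
    t = Flatten()(t)                        # X2, shape (m, n2)

    # Column-wise concatenation [X1, X2] feeding the fully connected layer
    x = Concatenate()([v, t])
    x = Dense(128, activation="relu")(x)    # Y = f([X1, X2] W^T + b)
    out = Dense(n_classes, activation="softmax")(x)
    return Model(inputs=[vib_in, temp_in], outputs=out)
```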

3.2. Dempster–Shafer Evidence Theory

Dempster–Shafer (DS) evidence theory (also known as the theory of belief functions) is a mathematical framework for reasoning under uncertainty that can be applied to feature-level fusion or decision-level fusion. It is based on Basic Probability Assignment (BPA) and calculates belief functions (Bel) and plausibility functions (Pl) by combining information from different sources of evidence [28].
Calculating the belief assignment for each feature: assume there are two features, A and B. Their belief assignments can be represented as two sets containing belief and plausibility values.
Belief assignment of feature $A$:

$$m_A = \{(A, \mathrm{Bel}(A)),\ (A, \mathrm{Pl}(A)),\ (\varnothing, 1 - \mathrm{Bel}(A) - \mathrm{Pl}(A))\}$$

Belief assignment of feature $B$:

$$m_B = \{(B, \mathrm{Bel}(B)),\ (B, \mathrm{Pl}(B)),\ (\varnothing, 1 - \mathrm{Bel}(B) - \mathrm{Pl}(B))\}$$

where $\mathrm{Bel}(A)$ and $\mathrm{Pl}(A)$ represent the belief and plausibility of feature $A$, respectively, while $\mathrm{Bel}(B)$ and $\mathrm{Pl}(B)$ represent the belief and plausibility of feature $B$. $\varnothing$ denotes the empty set, with its corresponding probability being the portion of belief assignment not allocated to $A$ or $B$.
Combining using the DS combination rule: the DS combination rule is used to combine the two belief assignments, $A$ and $B$, to obtain a fused belief assignment $C$. The calculation steps for the combination rule are as follows:
Compatibility check: check whether m A and m B are compatible by examining if there is any conflict. A conflict is defined as the presence of at least one common element in both sets where the sum of their belief and plausibility values exceeds 0.
Combination formula: if $m_A$ and $m_B$ are compatible, then calculate the combined belief $\mathrm{Bel}(A \oplus B)$ and plausibility $\mathrm{Pl}(A \oplus B)$:

$$\mathrm{Bel}(A \oplus B) = \frac{\mathrm{Bel}(A) \cdot \mathrm{Bel}(B)}{1 - K}$$

$$\mathrm{Pl}(A \oplus B) = \frac{\mathrm{Pl}(A) \cdot \mathrm{Pl}(B)}{1 - K}$$

where $K = \sum_{X \in \{A, B\}} \mathrm{Bel}(X) \cdot \mathrm{Pl}(\overline{X})$ is the degree of conflict. If $m_A$ and $m_B$ are not compatible, combination is not possible and it is necessary to handle the conflict or adopt alternative strategies.
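For concreteness, the sketch below implements the standard normalized Dempster combination of two basic probability assignments over a common frame of discernment, i.e., the generic rule $m(C) \propto \sum_{X \cap Y = C} m_A(X)\,m_B(Y)$ with conflict discounted by $1 - K$. It does not reproduce the exact Bel/Pl bookkeeping of the equations above, and the hypothesis names are purely illustrative.

```python
# Sketch of Dempster's rule of combination for two basic probability
# assignments (BPAs) defined over subsets of the same frame of discernment.
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses that each sum to 1."""
    combined = {}
    conflict = 0.0
    for h1, w1 in m1.items():
        for h2, w2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Example: two sensors give evidence about the hypotheses {A, B}
m_vib  = {frozenset("A"): 0.7, frozenset("B"): 0.1, frozenset("AB"): 0.2}
m_temp = {frozenset("A"): 0.6, frozenset("B"): 0.2, frozenset("AB"): 0.2}
print(dempster_combine(m_vib, m_temp))
```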

3.3. Normalization

Normalization is an essential step in data preprocessing, primarily aimed at eliminating the influence of different feature scales, improving numerical stability, and ensuring that data are distributed within a specific range (e.g., [0, 1] or [−1, 1]) [29]. This process accelerates model training and enhances generalization ability.
In machine learning, deep learning, and signal processing, normalization helps optimize gradient descent algorithms, making parameter updates more stable and preventing training instability caused by large feature values [30]. Common normalization methods include min–max normalization, Z-score normalization, decimal scaling, and Manhattan normalization, among others [30].

4. Fault Diagnosis Method Based on Edge Computing

Motor fault diagnosis based on cloud computing involves installing multiple sensors on the drive and non-drive ends of the motor shaft to collect operational data. The collected data are then directly transmitted to the host computer for data preprocessing, feature extraction, and fault diagnosis. This generates a large volume of operational data at the device end, imposing significant pressure on data transmission and storage. Furthermore, due to limitations in transmission speed, this often results in relatively large diagnostic delays, failing to meet the real-time requirements of fault diagnosis.
To reduce data transmission volume and minimize diagnostic delays, a fault diagnosis method based on edge computing is proposed. The basic framework of this method is illustrated in Figure 1. The core of this approach is the design of an edge node that integrates signal acquisition, signal preprocessing, and fault diagnosis, while also enabling the display of diagnostic results and feature values transmitted from the edge node on the host computer.
To reduce the computational load on the edge node, the proposed method is divided into two parts. First, model training and parameter optimization are performed on the host computer. Second, the trained model is embedded into the edge node for fault diagnosis.
As shown in the blue section of the figure, the host computer analyzes the correlation coefficient and mean squared error of time-domain signals to determine the relevant parameters for sliding window data augmentation. Meanwhile, the CNN-DS model is trained. Once the model training is complete, the model, along with signal acquisition, signal preprocessing, and data augmentation, is embedded into the edge node, as shown in the orange section of the figure.
Finally, the sensor signals from the main motor of the equipment are collected and transmitted to the edge node, where the trained model performs fault diagnosis.

4.1. Signal Preprocessing

4.1.1. Empirical Mode Decomposition (EMD)

In this study, Empirical Mode Decomposition (EMD) is employed for denoising. Typically, measurement noise is mainly present in high-frequency components, while low-frequency fault characteristic spectra are still adequate for fault diagnosis. The reconstructed signal removes interference while retaining the key features required for fault diagnosis, as shown in Figure 2. The steps are as follows:
Step 1: Decompose the original signal into multiple Intrinsic Mode Functions (IMFs).
Step 2: Select certain IMFs for signal reconstruction. Typically, the first few IMFs contain high-frequency noise components, which can be discarded or processed. The selection of IMFs is based on frequency component analysis, usually choosing IMFs that contain low-frequency signals for reconstruction.
Step 3: Reconstruct the signal using the selected IMFs. By combining the chosen IMFs in their original time sequence with appropriate weighting, a denoised signal is obtained.
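A minimal sketch of these three steps is given below, assuming the third-party PyEMD package (installed as EMD-signal). Discarding the first two IMFs is an illustrative choice; in practice the IMFs to discard should follow the frequency analysis in Step 2.

```python
# Sketch of EMD-based denoising: decompose, drop the first (high-frequency)
# IMFs, and sum the remaining IMFs (the last row approximates the residue).
import numpy as np
from PyEMD import EMD

def emd_denoise(signal, n_drop=2):
    imfs = EMD().emd(signal)        # rows: IMF_1 ... IMF_n (highest frequency first)
    keep = imfs[n_drop:]            # discard the first n_drop high-frequency IMFs
    return np.sum(keep, axis=0)     # reconstruct the denoised signal

# Example: a noisy low-frequency tone
t = np.linspace(0, 1, 1600)
x = np.sin(2 * np.pi * 25 * t) + 0.3 * np.random.randn(t.size)
x_denoised = emd_denoise(x)
```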

4.1.2. Normalization Processing

Since the method employed in this study includes a CNN, which is an algorithm optimized through gradient descent, normalization can significantly enhance the training speed and model performance. Furthermore, the temperature data and vibration data collected in this study differ in magnitude, and the data distributions of different features vary considerably. Normalization helps to mitigate these differences [29,30]. The min–max normalization method is adopted, and its formula is as follows:
$$X_{\mathrm{norm}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$

where $X$ is the original data, $X_{\min}$ is the minimum value in the data set, $X_{\max}$ is the maximum value in the data set, and $X_{\mathrm{norm}}$ is the normalized data, which fall within the range $[0, 1]$.
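A minimal NumPy sketch of this min–max normalization is shown below; the guard against a constant signal is an added safeguard rather than part of the formula.

```python
# Min-max normalization of a 1-D sample to the range [0, 1].
import numpy as np

def min_max_normalize(x):
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:                 # constant signal: avoid division by zero
        return np.zeros_like(x)
    return (x - x_min) / (x_max - x_min)
```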

4.2. Data Augmentation

Modern equipment is typically subjected to rigorous design and testing, resulting in high reliability and, consequently, a low frequency of faults. This implies that, during normal operation, the amount of fault data collected is significantly less than the amount of data gathered under regular operating conditions. To address this issue, one solution is to apply data augmentation to the collected fault data. Since the collected data are time series, this study employs a sliding window method to augment the temperature data and vibration data separately. The formula for this process is as follows:
$$SW_i = \frac{1}{n} \sum_{j=0}^{n-1} x_{i+j}$$

where $SW_i$ represents the sliding window result at the $i$-th position; $n$ denotes the size of the window (i.e., the number of elements it contains); and $x_{i+j}$ represents the $(i+j)$-th element in the data sequence.
Different strides and multiple iterations in the sliding window can generate a large amount of data for training the model. As shown in Figure 3, during the data generation process using a sliding window, the stride and window size are two important parameters. Their ratio is denoted by Relative Windowing Width (RWW), and the specific formula is as follows [31]:
$$RWW = \frac{S}{W}$$

where $S$ is the window stride and $W$ is the window size.
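The sketch below illustrates how overlapping windows are generated from a one-dimensional record for a given window size $W$ and relative windowing width RWW, with each extracted window serving as one training sample (as in Figure 3). The record length here is synthetic; the window sizes used in the experiments are given in Section 5.2.

```python
# Sliding-window data augmentation: overlapping windows of size `window`
# taken at stride S = RWW * window become individual training samples.
import numpy as np

def sliding_window_augment(x, window, rww):
    stride = max(1, int(round(rww * window)))
    n_windows = (len(x) - window) // stride + 1
    return np.stack([x[i * stride : i * stride + window] for i in range(n_windows)])

vib = np.random.randn(100_000)                                 # synthetic record
segments = sliding_window_augment(vib, window=2200, rww=0.5)   # shape (n_windows, 2200)
```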
Due to the differences in the modalities of vibration data and temperature data, different methods are used to quantitatively characterize the correlation of the data before and after augmentation.
For periodic data, such as vibration signals, the correlation coefficient (CC) of time-domain signals is more suitable for quantitatively characterizing the correlation of data before and after augmentation. The Pearson Correlation Coefficient is commonly used for this purpose. The formula for calculating the Pearson Correlation Coefficient is as follows:
$$r = \frac{n \sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{n \sum x^2 - \left(\sum x\right)^2}\,\sqrt{n \sum y^2 - \left(\sum y\right)^2}}$$

where $n$ is the number of samples; $x$ and $y$ are the two variables (in this case, the original data and the augmented data); $\sum xy$ is the sum of the products of the corresponding elements of $x$ and $y$; $\sum x$ and $\sum y$ are the sums of $x$ and $y$, respectively; and $\sum x^2$ and $\sum y^2$ are the sums of the squares of $x$ and $y$, respectively.
For temperature data, the mean squared error (MSE) is more appropriate. The calculation formula is as follows:
$$MSE = \frac{1}{n} \sum_{i=1}^{n} \left(\hat{x}_i - x_i\right)^2$$

where $n$ is the number of samples; $\hat{x}_i$ is the augmented data of the $i$-th sample; and $x_i$ is the original data of the $i$-th sample.
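Both checks can be computed directly with NumPy, as sketched below; the arrays passed in are assumed to be two equal-length sequences being compared (e.g., an original window and an augmented window).

```python
# Pearson correlation coefficient (for vibration windows) and mean squared
# error (for temperature windows) between two equal-length sequences.
import numpy as np

def pearson_cc(x, y):
    return np.corrcoef(x, y)[0, 1]           # matches the formula for r above

def mse(x_orig, x_aug):
    x_orig = np.asarray(x_orig, dtype=float)
    x_aug = np.asarray(x_aug, dtype=float)
    return np.mean((x_aug - x_orig) ** 2)    # matches the MSE formula above
```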
  • Vibration Data
From a statistical perspective, the relationship between the magnitude of the correlation coefficient and the strength of the correlation is shown in Table 2. The results of the correlation analysis of vibration signals at different RWWs are shown in Figure 4. As shown in Figure 4, when the stride of the sliding window equals the window size (RWW = 1), there is no overlap between adjacent windows, meaning that the data in each window are independent of the others. Therefore, when calculating the correlation coefficient, the data in each window will have a correlation of 1 with itself, which may result in an overall correlation coefficient of 1. In this case, the value of the correlation coefficient is influenced by the choice of the sliding window, rather than accurately reflecting the inherent correlation of the data. Additionally, it can be observed that the data exhibit a high degree of correlation for most RWW values. Since a smaller RWW generates more data, selecting RWW = 0.5 with a window size of 2200 for vibration data can meet the data requirements.
  • Temperature Data
Since the structure of temperature data is not suitable for correlation coefficient analysis, a mean squared error (MSE) analysis is performed for temperature data at different RWW values, as shown in Figure 5. The MSE value typically represents the magnitude of error between two datasets. Specifically, if the mean squared error between two datasets is small, it can be concluded that there is a closer relationship or a higher correlation between them. Similar to the case with vibration data, when the stride of the sliding window equals the window size (RWW = 1), the MSE equals 0. In this situation, the MSE fails to reflect the inherent correlation of the data. Additionally, as shown in the figure, the MSE is smallest at RWW = 0.9; however, the amount of data generated at RWW = 0.9 is limited. A smaller RWW generates more data windows due to the overlap between windows, which increases the number of training samples. Therefore, RWW = 0.1 is selected for the temperature data: it produces more augmented data while keeping the MSE relatively low overall, meeting the data requirements.

4.3. Construction of the Dual-Channel CNN Decision-Level Fusion Model

This paper proposes three optimizations for the traditional CNN. Firstly, a dual-input channel design is introduced, combining temperature and vibration sensors for fault analysis and diagnosis. Independent CNN convolutional and pooling layers are constructed for each modality of data, and the decision probabilities output by each CNN are fused. Secondly, in the vibration sensor channel, the envelope spectrum is used as a substitute for the raw measurement input, as described in Section 4.3.1. Finally, the Dempster–Shafer evidence theory method is incorporated into the fully connected layer to perform decision-level fusion of the dual-input CNNs, as detailed in Section 4.3.2. These optimizations significantly enhance the model’s robustness, generalization ability, and performance. The optimized model in this study is denoted as CNN-DS. This optimization effectively improves the accuracy of fault diagnosis and, by integrating data from different sensors, provides a more comprehensive approach to analyzing and identifying fault patterns in rotating machinery. The results of the model are shown in Figure 6.

4.3.1. Envelope Spectrum

In numerous existing studies, the envelope spectrum has been proven to be a highly effective frequency-domain feature for fault diagnosis [29]. Theoretically, obtaining the envelope spectrum of an input signal requires two steps: first, the signal envelope is obtained, and then a frequency transformation is performed. In this study, the Hilbert transform is used to capture the signal envelope. First, the input is decomposed into a finite set of components, known as Intrinsic Mode Functions (IMFs), using Empirical Mode Decomposition (EMD). The IMFs form a complete and orthogonal basis of the initial signal [32,33]. The formula is shown below:
$$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t)$$

where $x(t)$ is the original signal, $c_i(t)$ is the $i$-th IMF component, $r_n(t)$ is the final residual component, and $n$ is the number of IMFs.
After obtaining the IMF components, each IMF component is subjected to a Hilbert transform to obtain an analytical signal, which is expressed as follows [33]:
$$z_i(t) = c_i(t) + j\,H[c_i(t)]$$

where $H[c_i(t)]$ is the Hilbert transform of $c_i(t)$ and $j$ is the imaginary unit.
The purpose of the Hilbert transform is to convert a real signal into a complex signal in order to compute the instantaneous amplitude and instantaneous phase. By calculating the absolute value of the analytical signal z i ( t ) , the envelope of the signal can be obtained [33].
$$\mathrm{Envelope}(t) = \left| z_i(t) \right| = \sqrt{c_i(t)^2 + H[c_i(t)]^2}$$
In addition to envelope calculation, this study also employs EMD for denoising, as described in Section 3.1. After filtering out high-frequency noise and capturing the signal envelope, a Fourier transform is performed. Figure 7 shows the results of the Fourier transform, where (a) is the original vibration signal without any signal processing and (b) is the envelope signal obtained through EMD and Hilbert transform. As shown in the figure, the signal obtained after EMD and Hilbert transform contains fewer high-frequency components than the original. This indicates that EMD significantly reduces the high-frequency components of the faulty bearing while preserving its characteristic frequencies. The spectrum in Figure 7b, known as the envelope spectrum, will be used as the input for the vibration data channel of the CNN [32].
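A compact sketch of the envelope-spectrum computation is given below, using SciPy's Hilbert transform and a real FFT. The synthetic test signal is illustrative, the 16 kHz sampling rate matches the vibration data described in Section 5.1, and EMD-based denoising (Section 4.1.1) would be applied before this step.

```python
# Envelope spectrum: Hilbert transform -> envelope magnitude -> FFT of the envelope.
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    analytic = hilbert(x)                          # z(t) = x(t) + j * H[x(t)]
    envelope = np.abs(analytic)                    # |z(t)|
    envelope -= envelope.mean()                    # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum

fs = 16_000
t = np.arange(0, 1, 1 / fs)
# Amplitude-modulated carrier: the 30 Hz modulation should appear in the envelope spectrum
x = (1 + 0.5 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
```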

4.3.2. Decision-Level Fusion

In a typical CNN, the fully connected layer functions solely as a classifier, outputting the probability for each classification label. However, when implementing the fusion of dual input layers, the features from the two input channels can be flattened and concatenated before being fed into the fully connected layer. Therefore, this study incorporates the Dempster–Shafer evidence theory method in the fully connected layer to perform decision-level fusion of the dual-input CNNs, which can significantly enhance the model’s robustness, generalization ability, and performance. By effectively fusing the beliefs from the dual-input CNN models and handling information conflicts and uncertainties, this method can provide more accurate and stable prediction results. The Dempster–Shafer evidence theory fuses the output features of multiple feature extraction networks from two CNNs, enhancing the accuracy and robustness of feature representation.
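As a simplified illustration, the sketch below fuses the class-probability vectors produced by the two channels by applying Dempster's rule over singleton hypotheses. Treating each softmax output directly as a basic probability assignment is an assumption made here for brevity and may differ from the exact construction used in the CNN-DS model.

```python
# Decision-level fusion of two channels' softmax outputs via Dempster's rule,
# with each fault class treated as a singleton hypothesis.
import numpy as np

def ds_fuse_softmax(p_vib, p_temp):
    joint = np.outer(p_vib, p_temp)      # pairwise products m_vib(i) * m_temp(j)
    agreement = np.diag(joint)           # i == j: non-conflicting mass per class
    k = joint.sum() - agreement.sum()    # conflict K (mass on empty intersections)
    return agreement / (1.0 - k)         # normalized fused class probabilities

p_vib = np.array([0.70, 0.20, 0.10])     # vibration-channel class probabilities
p_temp = np.array([0.60, 0.30, 0.10])    # temperature-channel class probabilities
fused = ds_fuse_softmax(p_vib, p_temp)   # argmax of `fused` gives the fused decision
```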

4.4. Edge Node Design

The edge node is designed using a Jetson Orin NX development board and an ADC module with an I2C interface. Each core of the development board operates at a maximum clock frequency of 1.5 GHz. The edge node is equipped with functions for analog–digital conversion (ADC), signal preprocessing, data augmentation, feature extraction, and fault diagnosis classification based on an embedded CNN model. It can also upload the extracted feature values and classification results to the host computer for storage and further analysis. The edge node is configured with two ADC channels to receive analog signals from the accelerometer and temperature sensor of the main motor, respectively. These signals are sampled and stored using direct memory access (DMA) interrupts.

5. Experimental Validation

5.1. Experimental Setup

This section uses the main motor of a tension leveler in a cold rolling plant as an example to validate the proposed algorithm's average accuracy, rapidity, and stability and analyzes the experimental results. The experiment was conducted with Python 3.8.3, using an RTX 4060Ti GPU in a 64-bit Windows environment. The CNN was implemented in the Keras 2.3.1 environment. The CNN-DS structure parameters and specific parameter values are shown in Table 3. The dual-channel neural network has approximately 2.35 million parameters, each stored as a floating-point number. In this study, 64-bit floating-point numbers are used, so the 2.35 million parameters occupy about 18.8 MB of memory, which is much smaller than the memory available on the selected edge node. Note that this accounts only for the model parameters; the actual memory footprint also includes intermediate activation values, optimizer states, and so on.
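The memory estimate can be reproduced with a one-line calculation, using the parameter count and precision stated above:

```python
# Parameter-memory estimate: 2.35 million parameters stored as 64-bit floats.
n_params = 2_350_000
bytes_per_param = 8                       # 64-bit floating point
print(n_params * bytes_per_param / 1e6)   # -> 18.8 (MB)
```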
The fault data set of the main motor is derived from the actual operation of a tension leveler unit in a cold rolling plant. The main motor model is 1PQ4 500-4CM90-Z, the vibration sensor is RH104T, the signal acquisition bearing is a 6232 deep groove ball bearing, the temperature sensor is CA-YD-170, and the edge node uses a Jetson Orin NX development board. The main motor operates at a speed of 873 RPM. Temperature data are collected at a low rate of one sample every 30 s, i.e., a sampling frequency of about 0.03 Hz. The vibration data, sampled at a frequency of 16 kHz, pertain to drive-end faults. In addition to normal bearings, there are four types of faults: Ball Pass Frequency of the Inner Race (BPFI), Ball Pass Frequency of the Outer Race (BPFO), Fundamental Train Frequency (FTF), and Ball Spin Frequency (BSF). The specific numbers of sampling points are shown in Table 4 and Table 5.

5.2. Dataset Description

The fault dataset of the main motor is derived from the actual operation of a tension leveler unit in a cold rolling plant. Before being input into the model, the equipment fault data are first normalized and denoised. Then, data augmentation is performed using a sliding window. The vibration data are converted into an envelope spectrum before being input into the CNN. As described in Section 4.2, data augmentation is performed using the sliding window method, with the window size and RWW chosen separately for each data type. In this experiment, the window size for temperature data is 1500 and, for vibration data, it is 2200. The temperature data and vibration data use RWWs of 0.1 and 0.5, respectively. Different values are chosen because the two data types have different dynamic characteristics and sampling frequencies, which require different window sizes to capture the important features. The sizes of the resulting datasets are shown in Table 6.

5.3. Performance Evaluation Methods

In previous studies, most algorithms have been evaluated solely based on test accuracy, without considering other aspects of performance [32,34,35]. In fact, beyond accuracy, a good algorithm should demonstrate stability, meaning that the results are consistent across different measurements. Additionally, speed or computational efficiency should also be addressed. Therefore, this section first introduces three metrics: average accuracy, rapidity, and stability to comprehensively evaluate the performance of the CNN-DS. The definitions of these three metrics will be provided in the following sections [32].

5.3.1. Average Accuracy

As shown in Equations (18) and (19), the CNN-DS is run $N$ times, and the accuracy for each run is defined as the percentage of correctly classified samples. The final accuracy is defined as the average accuracy over $N$ simulations.

$$\mathrm{Accuracy} = \frac{\sum_{i=1}^{N} Y_i}{N}$$

$$Y_i = \frac{y_i}{M}$$

where $y_i$ is the number of correctly classified samples, $M$ is the total number of samples, and $N$ is the number of simulations. The $M$ values for each label are summarized in the table. Without loss of generality, in this study, $N$ is chosen as 30.

5.3.2. Rapidity

Rapidity is defined as the average time consumed over $N$ simulations, where the time consumed in each simulation, $T_i$, is defined as the interval from the start of training to the end of validation.

$$\mathrm{Rapidity} = \frac{\sum_{i=1}^{N} T_i}{N}$$

5.3.3. Stability

As shown in Equation (21), stability is defined as the standard deviation of accuracy over N simulations.
$$\mathrm{Stability} = \sqrt{\frac{\sum_{i=1}^{N} \left(Y_i - \mathrm{Accuracy}\right)^2}{N - 1}}$$
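The three metrics can be computed from the per-run results as sketched below, with the sample standard deviation ($N - 1$ denominator) used for stability; the variable names are illustrative.

```python
# Average accuracy, rapidity, and stability over N repeated simulations.
import numpy as np

def evaluate_runs(correct_counts, total_samples, elapsed_times):
    y = np.asarray(correct_counts, dtype=float) / total_samples  # per-run accuracy Y_i
    accuracy = y.mean()                     # Equation (18)
    rapidity = np.mean(elapsed_times)       # Equation (20)
    stability = y.std(ddof=1)               # Equation (21), N - 1 denominator
    return accuracy, rapidity, stability
```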

5.4. Results and Analysis

5.4.1. Impact of Input Layer Modifications

To investigate the impact of input data types on the performance of motor fault diagnosis, the CNN is trained using two types of input data for vibration data: raw signals and envelope spectra. Each type of input data was trained 30 times. Figure 8 presents the comparison results of the two different inputs. The accuracy of using envelope spectra as input is 97.5%, whereas the accuracy of using raw signals as input is 94.2%. Additionally, in terms of stability and speed, the performance of envelope spectra as input is better than that of raw signals. Figure 9 shows the confusion matrices for training the CNN with raw signals and envelope spectra as inputs. Figure 10 shows the feature visualization results using t-SNE. It can be seen from the figure that the method using envelope spectra as input achieves better classification. Therefore, the envelope spectrum is considered a better input type and is used in further experiments.

5.4.2. Impact of CNN and DS Fusion

In this study, when handling dual-modal data, the Dempster–Shafer evidence theory is integrated into the fully connected layer of the CNN. To verify the effectiveness of this method, the algorithms before and after optimization are analyzed. The proposed algorithm in this paper is denoted as CNN-DS. Figure 11 shows the results of motor fault diagnosis using a single temperature sensor, as well as the results of motor fault diagnosis using optimized dual-modal data. As shown in Figure 11a, the accuracy before optimization is 97.36%. Except for the results of the normal, cooling system fault, and overload fault labels, the other labels exhibit varying degrees of misdiagnosis. Figure 11b shows the optimized diagnostic results with an accuracy of 99.71%, which is an improvement of 2.35% compared to before optimization. Based on the analysis of the results in the figure, the optimized method shows a significant improvement in diagnosing BPFI, BPFO, FTF, and BSF. The reason for this result is that the dual-modal data combine information from different sources (vibration signals and temperature signals), providing more dimensions and more comprehensive information for fault diagnosis. By complementing each other, data from different modalities can reduce information loss and misjudgment that might occur with single-modal data, thereby enhancing the overall accuracy of the diagnosis. Therefore, the CNN-DS algorithm proposed in this study is considered effective.

5.4.3. Comparison with Other Machine Learning Methods

The proposed method is compared with other machine learning methods based on three evaluation metrics. As shown in Table 7, the fused CNN-DS and CNN-SVM algorithms outperform other algorithms in terms of accuracy. Moreover, the CNN-DS algorithm demonstrates superior accuracy, speed, and stability compared to the CNN-SVM algorithm. Although the CNN, DT, and RF algorithms exhibit relatively fast speed, their accuracy and stability are far inferior to those of the CNN-SVM and CNN-DS algorithms. In summary, compared to other learning methods and CNN, the CNN-DS method proposed in this study offers significant advantages.

5.4.4. Performance Comparison of Edge Node Embedding

To validate the advantages of deploying the CNN-DS method on edge nodes, this study conducted a verification and analysis of the running speed and accuracy of CNN-DS in both edge and cloud environments. Cloud Server 1 has better computing resources than the edge side and Cloud Server 2 has better computing resources than Cloud Server 1. The network configuration of the cloud and edge nodes is shown in Table 8.
  • Running speed evaluation
This study selected 100 sets of data to compare diagnostic speed and the experimental results are shown in Figure 12. Green represents the diagnostic execution speed when deployed on Cloud Server 1, with an average speed of 58.6 s/case. Purple represents the upload speed of Cloud Server 1. Pink represents the diagnostic execution speed when deployed on the edge node, with an average speed of 63.1 s/case. Black represents the diagnostic execution speed when deployed on Cloud Server 2, with an average speed of 51.4 s/case. Blue represents the upload speed of Cloud Server 2.
As shown in Figure 12, the diagnostic speed in the three cases follows the order: Cloud Server 2 > Edge Node > Cloud Server 1 (from fastest to slowest). When the data upload time is not considered, Cloud Server 2, which has the best computational resources, achieves the fastest execution speed at 51.4 s/case, while the edge node is the slowest. However, when considering the data upload time, the edge node outperforms Cloud Server 1 in overall speed since it does not require data transmission. This phenomenon occurs because, although Cloud Server 1 has slightly higher computational performance than the edge node in this experiment, its speed is negatively impacted by the need for data upload, making it slower than the edge node. In contrast, Cloud Server 2, with its superior computational performance, achieves a higher processing speed than edge computing.
These results indicate that, within a certain range of computational resources, the diagnostic efficiency of edge nodes surpasses that of cloud computing, significantly improving fault diagnosis efficiency.
  • Accuracy evaluation
In this study, a total of 20 sets of data were selected to compare diagnostic accuracy. Experiments were conducted in three different scenarios: Cloud Server 2, Edge Node, and Cloud Server 1. The experimental results are shown in Figure 13.
In the figure, the red color represents the diagnostic accuracy of the edge node, while the blue and black colors represent the diagnostic accuracy of Cloud Server 1 and Cloud Server 2, respectively. It can be observed that the accuracy in all three scenarios is identical, reaching 99.71%.
This study transmits the data using the TCP/IP protocol, which guarantees lossless delivery; therefore, even with different computing resources, the same model yields the same accuracy.

5.5. Analysis and Discussion

To validate the effectiveness of the proposed method, this study analyzed the decision-level fusion fault diagnosis method based on the convolutional neural network (CNN) integrated with D-S evidence theory for multidimensional feature data from various perspectives. Firstly, it was verified that modifying the input layer for vibration data increased the diagnostic accuracy by 3.2%. Secondly, the CNN fully connected layer was optimized by incorporating the D-S evidence theory method. This approach combines the advantages of CNN feature extraction with the flexibility of D-S evidence theory, resulting in a further 2.35% increase in diagnostic accuracy. The reason for this result is that the dual-modal data combine information from different sources (vibration signals and temperature signals), providing more dimensions and more comprehensive information for fault diagnosis. Subsequently, the proposed method was compared with other machine learning methods, and its performance was analyzed in terms of average accuracy, rapidity, and stability. The results demonstrate that the CNN-DS method proposed in this study offers significant advantages and shows the greatest potential for motor fault diagnosis using dual-modal data. Finally, the proposed algorithm was embedded into the edge node, and the running speed of the algorithm on both the edge node and the cloud was analyzed. The results confirmed that deploying the algorithm on the edge node can improve the fault diagnosis speed.
Based on the experimental results of this study, it can be observed that the CNN-DS algorithm proposed in this paper has lower recognition rates for the cage fault (FTF) and the rotor imbalance fault. The possible reasons for this are the similarity between cage faults and rotor imbalance faults, as well as noise interference during the data collection process. In future work, we will further analyze the impact of different noise interferences on model performance, determine the noise threshold, and clarify the limitations of its practical application.

6. Conclusions

This paper focuses on rotating machinery and proposes a fault diagnosis method for rotating machinery based on edge computing. Firstly, an edge node is designed that integrates signal acquisition, data preprocessing, feature extraction, and fault diagnosis classification. Secondly, a fault diagnosis method based on dual-channel CNN decision-level fusion was proposed, which was optimized in three aspects to enhance its performance in motor fault diagnosis: (1) a dual-input channel design was employed, combining temperature and vibration sensors for fault analysis and diagnosis. Separate CNNs were constructed for each data modality, and the decision probabilities output by each CNN were then fused. This eliminates the requirement for consistency in data sampling during feature fusion in the CNN model, allowing fusion even when the data sampling frequencies are not the same. It also avoids issues such as low recognition rates, lack of evidence, and data redundancy in equipment fault diagnosis when using a single sensor under complex operating conditions. (2) In the vibration sensor channel, the envelope spectrum is used as a substitute for the raw measurement input. (3) The D-S evidence theory method is incorporated into the fully connected layer to perform decision-level fusion of the dual-input CNNs, which significantly enhances the model’s robustness, generalization ability, and performance. After the modifications, three metrics (average accuracy, rapidity, and stability) were defined to evaluate the overall performance of the optimized algorithm. The experimental data validation demonstrates that the proposed optimizations can improve the algorithm’s performance to varying degrees. Finally, the data preprocessing and the improved CNN-DS model were deployed on the edge node, and its effectiveness was validated, achieving fault diagnosis for rotating machinery based on edge computing.

Author Contributions

Conceptualization, Z.W. and M.C.; methodology, M.C.; validation, M.C.; formal analysis, C.G.; investigation, Q.Y.; resources, Q.Y.; data curation, C.G.; writing—original draft preparation, M.C.; writing—review and editing, M.C.; supervision, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 51975386) and the Liaoning Province "Unveiling and Commanding" technology project (2022020630-JH1/108).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. (The datasets generated and/or analysed during the current study are not publicly available because they contain information related to enterprise product processing.)

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Chao Ge and Qirui Yang are employees of Ansteel Group Automation Co., Ltd., but the company did not provide funding or technical support for this research. The two funding projects mentioned in the paper were all awarded to Professor Zhe Wei, and the funders had no role in the design of the study, in the collection, analysis, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

References

  1. Rane, N. YOLO and Faster R-CNN object detection for smart Industry 4.0 and Industry 5.0: Applications, challenges, and opportunities. Available at SSRN 4624206, 2023. [Google Scholar] [CrossRef]
  2. Liu, J.F.; Cen, J.; Huang, H.K.; Liu, X.; Zhao, B.C.; Si, W.W. Review on Zero or Few Sample Rotating Machinery Fault Diagnosis. Comput. Eng. Appl. 2024, 60, 1–14. [Google Scholar] [CrossRef]
  3. Feng, Z.P.; Song, X.G.; Xue, D.K.; Xie, Y.; Deng, D. Survey of vibration fault diagnosis of rotational machinery. J. Vib. Shock 2001, 20, 34–39. [Google Scholar] [CrossRef]
  4. Zheng, J.D.; Ying, W.M.; Pan, H.Y.; Tong, J.Y.; Liu, Q.Y. Improved Holo-Hilbert Spectrum Analysis-Based Fault Diagnosis method for rotating machines. Chin. J. Mech. Eng. 2023, 59, 162. [Google Scholar] [CrossRef]
  5. Xiao, Y.; Wang, Q.F.; Yang, Z.; Xu, W.; Shu, Y.; Chen, W.W. Research on Incipient Warning and Diagnosis Method of Sudden Unbalance Fault for rotating Machinery. J. Mech. Eng. 2023, 59, 162–174. [Google Scholar] [CrossRef]
  6. Zhang, Z.Z.; Wang, J.R.; Han, B.K.; Bao, H.Q.; Li, S.M. Early stage fault diagnosis method of bearings based on nonlinear sparse blind deconvolution. J. Mech. Eng. 2023, 59, 157–166. [Google Scholar] [CrossRef]
  7. Wang, D.D.; Huang, W.D.; Zhang, J.H.; Zhao, S.J.; Yu, B.; Liu, S.G.; Xu, B. Wear State Identification Method for Axial Piston Pumps Based on Edge Computing. J. Mech. Eng. 2024, 60, 189–199. Available online: https://link.cnki.net/urlid/11.2187.TH.20240205.0831.002 (accessed on 19 March 2025).
  8. Jiang, W.; Li, Z.; Zhang, S.; Wang, T.; Zhang, S. Hydraulic Pump fault Diagnosis method based on EWT decomposition denoising and deep learning on cloud platform. Shock Vib. 2021, 2021, 6674351. [Google Scholar] [CrossRef]
  9. Ou, Y.; Huang, D.; Hu, C.; Hao, H.; Gong, J.; Zhao, L. A gear fault diagnosis method based on EEMD cloud model and PSO_SVM. In Proceedings of the 2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS), Chengdu, China, 3–5 August 2022; pp. 860–865. [Google Scholar] [CrossRef]
  10. Huang, X.; Qi, G.; Mazur, N.; Chai, Y. Deep residual networks-based intelligent fault diagnosis method of planetary gearboxes in cloud environments. Simul. Model. Pract. Theory 2022, 116, 102469. [Google Scholar] [CrossRef]
  11. Lu, S.; Lu, J.; An, K.; Wang, X.; He, Q. Edge computing on IoT for machine signal processing and fault diagnosis: A review. IEEE Internet Things J. 2023, 10, 11093–11116. [Google Scholar] [CrossRef]
  12. Ding, A.; Qin, Y.; Wang, B.; Jia, L.; Cheng, X. Lightweight multiscale convolutional networks with adaptive pruning for intelligent fault diagnosis of train bogie bearings in edge computing scenarios. IEEE Trans. Instrum. Meas. 2022, 72, 1–13. [Google Scholar] [CrossRef]
  13. Perminov, V.; Ermakov, V.; Korzun, D. Edge Analytics for bearing fault diagnosis based on convolution neural network. In Fuzzy Systems and Data Mining VII; IOS Press: Amsterdam, The Netherlands, 2021; pp. 94–103. [Google Scholar] [CrossRef]
  14. Tritschler, N.; Dugenske, A.; Kurfess, T. An automated Edge Computing-Based condition health monitoring system: With an application on rolling element bearings. J. Manuf. Sci. Eng. 2021, 143, 071006. [Google Scholar] [CrossRef]
  15. Shao, H.D.; Xiao, Y.M.; Min, Z.S.; Han, S.Y.; Zhang, H.Z. Federated learning fault diagnosis framework empowered by blockchain and edge computing. J. Mech. Eng. 2023, 59, 283–292. [Google Scholar] [CrossRef]
  16. Xiao, F. GEJS: A Generalized Evidential Divergence measure for multisource information fusion. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 2246–2258. [Google Scholar] [CrossRef]
  17. Wu, X.; Zhao, H.; Xu, W.; Pan, W.; Ji, Q.; Hua, X. Fault diagnosis of the distribution network based on the DS evidence theory Bayesian network. Front. Energy Res. 2024, 12, 1422639. [Google Scholar] [CrossRef]
  18. Jiang, T.; Xu, Z.; Cao, J.; Bao, Z.; Gao, F.; Zhang, J.; Vidal, P.P. BECT spike detection based on novel multichannel data weighted fusion algorithm. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 4613–4617. [Google Scholar] [CrossRef]
  19. Zhang, Y.Z.; Wik, T.; Bergström, J.; Zou, C.F. State of health estimation for lithium-ion batteries under arbitrary usage using data-driven multimodel fusion. IEEE Trans. Transp. Electrif. 2023, 10, 1494–1507. [Google Scholar] [CrossRef]
  20. Chen, Z.; Fu, L.; Yao, J.; Guo, W.; Plant, C.; Wang, S. Learnable graph convolutional network and feature fusion for multi-view learning. Inf. Fusion 2023, 95, 109–119. [Google Scholar] [CrossRef]
  21. Zhang, S.; Wang, Y.; Wan, P.; Zhuang, J.; Zhang, Y.; Li, Y. Clustering Algorithm-Based Data Fusion Scheme for robust cooperative spectrum sensing. IEEE Access 2020, 8, 5777–5786. [Google Scholar] [CrossRef]
  22. Song, Q.; Wang, M.; Lai, W.; Zhao, S. On Bayesian Optimization-Based Residual CNN for estimation of Inter-Turn short circuit fault in PMSM. IEEE Trans. Power Electron. 2022, 38, 2456–2468. [Google Scholar] [CrossRef]
  23. Jiao, Z.; Wu, R. A new method to improve fault location accuracy in transmission line based on fuzzy Multi-Sensor Data fusion. IEEE Trans. Smart Grid 2018, 10, 4211–4220. [Google Scholar] [CrossRef]
  24. Xu, Y.; Feng, K.; Yan, X.; Yan, R.; Ni, Q.; Sun, B.; Lei, Z.; Zhang, Y.; Liu, Z. CFCNN: A novel convolutional fusion framework for collaborative fault identification of rotating machinery. Inf. Fusion 2023, 95, 1–16. [Google Scholar] [CrossRef]
  25. Chen, Q.; Chen, K.K.; Dong, X.J.; Huangfu, Y.F.; Peng, Z.K.; Meng, G. Interpretable Convolutional Neural Network for Mechanical Equipment Fault Diagnosis. J. Mech. Eng. 2024, 60, 65–76. Available online: https://link.cnki.net/urlid/11.2187.TH.20240813.1738.004 (accessed on 19 March 2025).
  26. Gao, H.; Huo, X.; Hu, R.; He, C. Optimized DTW-Resnet for fault diagnosis by data augmentation toward unequal length time series. IEEE Trans. Instrum. Meas. 2023, 72, 1–11. [Google Scholar] [CrossRef]
  27. Shao, H.; Xia, M.; Han, G.; Zhang, Y.; Wan, J. Intelligent fault diagnosis of Rotor-Bearing system under varying working conditions with modified transfer convolutional neural network and thermal images. IEEE Trans. Ind. Inform. 2021, 17, 3488–3496. [Google Scholar] [CrossRef]
  28. Zhao, G.; Chen, A.; Lu, G.; Liu, W. Data fusion algorithm based on fuzzy sets and D-S theory of evidence. Tsinghua Sci. Technol. 2020, 25, 12–19. [Google Scholar] [CrossRef]
  29. Shantal, M.; Othman, Z.; Bakar, A.A. A novel approach for data feature weighting using correlation coefficients and min–max normalization. Symmetry 2023, 15, 2185. [Google Scholar] [CrossRef]
  30. Adeyemo, A.; Wimmer, H.; Powell, L.M. Effects of normalization techniques on logistic regression in data science. In Proceedings of the Conference on Information Systems Applied Research ISSN, Norfolk, Virginia, 31 October–3 November 2018; Volume 12, p. 37. Available online: https://jisara.org/2019-12/n2/JISARv12n2.pdf#page=37 (accessed on 2 September 2024).
  31. Pan, X.; Zhang, X.; Jiang, Z.; Bin, G. Real-Time intelligent diagnosis of co-frequency vibration faults in rotating machinery based on Lightweight-Convolutional neural networks. Chin. J. Mech. Eng. 2024, 37, 41. [Google Scholar] [CrossRef]
  32. Ruan, D.; Zhang, F.; Guhmann, C. Exploration and Effect Analysis of Improvement in Convolution Neural Network for Bearing Fault Diagnosis. In Proceedings of the 2021 IEEE International Conference on Prognostics and Health Management (ICPHM), Detroit, MI, USA, 7–9 June 2021. [Google Scholar] [CrossRef]
  33. Luo, C.; Jia, M.; Wen, Y. The diagnosis approach for rolling bearing fault based on Kurtosis criterion EMD and Hilbert envelope spectrum. In Proceedings of the 2017 IEEE 3rd Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 3–5 October 2017; pp. 843–847. [Google Scholar] [CrossRef]
  34. Wen, L.; Li, X.; Gao, L.; Zhang, Y. A new convolutional neural Network-Based Data-Driven fault diagnosis method. IEEE Trans. Ind. Electron. 2017, 65, 5990–5998. [Google Scholar] [CrossRef]
  35. Gao, D.; Zhu, Y.; Wang, X.; Yan, K.; Hong, J. A Fault Diagnosis Method of Rolling Bearing Based on Complex Morlet CWT and CNN. In Proceedings of the 2018 Prognostics and System Health Management Conference (PHM-Chongqing), Chongqing, China, 26–28 October 2018; pp. 1101–1105. [Google Scholar] [CrossRef]
Figure 1. Framework of the fault diagnosis method based on edge computing: BPFI: Ball Pass Frequency of the Inner Race; BPFO: Ball Pass Frequency of the Outer Race; FTF: Fundamental Train Frequency; BSF: Ball Spin Frequency.
Figure 2. Empirical Mode Decomposition (EMD).
Figure 3. Schematic diagram of data augmentation using the sliding window method.
Figure 4. Correlation coefficient of vibration data at different RWWs.
Figure 5. Mean squared error of temperature data at different RWW values.
Figure 6. CNN-DS structure.
Figure 7. Vibration data spectrum: (a) the original vibration signal without any signal processing; (b) the envelope signal obtained through EMD and Hilbert transform.
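The envelope spectrum in Figure 7b is produced by EMD followed by a Hilbert transform. The following is a minimal, non-authoritative sketch of that preprocessing step, assuming the vibration record is available as a NumPy array, that the third-party PyEMD package performs the decomposition, and that summing the first few IMFs is an acceptable stand-in for the authors' IMF selection:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed third-party package for Empirical Mode Decomposition

def envelope_spectrum(signal: np.ndarray, fs: float, n_imfs: int = 3):
    """Decompose the signal with EMD, take the Hilbert envelope of the
    dominant IMFs, and return the single-sided envelope spectrum."""
    imfs = EMD().emd(signal)                  # intrinsic mode functions
    component = imfs[:n_imfs].sum(axis=0)     # keep the first few (high-frequency) IMFs
    envelope = np.abs(hilbert(component))     # Hilbert envelope of the selected component
    envelope -= envelope.mean()               # remove the DC offset before the FFT
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum
```

Peaks of the returned spectrum near BPFI, BPFO, FTF, or BSF would then point to the corresponding bearing fault, as in Figure 7b.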
Figure 8. Performance comparison of two input layers for vibration data.
Figure 9. Confusion matrices for two input types: (a) the confusion matrix when the raw signal is used as input to train the CNN; (b) the confusion matrix when the envelope spectrum is used as input to train the CNN.
Figure 10. t-SNE feature visualization for two input types: (a) t-SNE feature visualization of raw signals; (b) t-SNE feature visualization of envelope spectrum.
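As a hedged illustration of how the projections in Figure 10 can be generated, the sketch below embeds learned features into two dimensions with scikit-learn's t-SNE; the arrays `features` (penultimate-layer activations) and `labels` are placeholders rather than the authors' variables:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features: np.ndarray, labels: np.ndarray):
    """features: (n_samples, n_features) CNN activations; labels: integer fault labels."""
    embedded = TSNE(n_components=2, perplexity=30, init="pca",
                    random_state=0).fit_transform(features)
    scatter = plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=5, cmap="tab10")
    plt.legend(*scatter.legend_elements(), title="Fault label")
    plt.xlabel("t-SNE dimension 1")
    plt.ylabel("t-SNE dimension 2")
    plt.show()
```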
Figure 11. Performance comparison before and after CNN-DS fusion: (a) the confusion matrix for temperature data diagnosis; (b) the confusion matrix for optimized dual-modal data diagnosis.
Figure 12. Evaluation of CNN-DS running speed on edge and cloud.
Figure 13. Evaluation of CNN-DS accuracy on edge and cloud.
Table 1. Data fusion methods.
Methods and Characteristics | Data-Level Fusion | Feature-Level Fusion | Decision-Level Fusion
Methods | Weighted fusion algorithms [18], Kalman filter [19], least squares method, etc. | Neural networks [20], compressed clustering method [21], principal component analysis (PCA), etc. | Bayesian estimation [22], Dempster–Shafer theory (DS theory) [16], fuzzy theory [23], expert systems, etc.
Data volume | Lower | Moderate | Larger
Information loss | More | Moderate | Less
Anti-interference capability | Stronger | Moderate | Weaker
Fault tolerance | Better | Moderate | Poorer
Real-time performance | Better | Moderate | Poorer
Stability | Better | Moderate | Poorer
Fusion level | Highest | Moderate | Lowest
Fusion accuracy | Lower | Moderate | Greater
Fusion cost | Lower | Moderate | Greatest
Fusion target | Homogeneous/heterogeneous sensors | Homogeneous/heterogeneous sensors | Homogeneous/heterogeneous sensors
Application domain | Command and decision-making | Target state tracking | Target state tracking
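Since the decision-level column of Table 1 cites DS theory [16], a minimal sketch of Dempster's rule of combination is given below. It fuses two basic probability assignments (BPAs) defined over singleton fault hypotheses plus an "uncertain" mass; this is a simplification of the general rule over arbitrary focal elements and is not the authors' exact fusion code, and the numbers in the example are made up for illustration:

```python
from typing import Dict

def dempster_combine(m1: Dict[str, float], m2: Dict[str, float]) -> Dict[str, float]:
    """Combine two BPAs whose focal elements are singleton hypotheses plus
    'THETA' (the whole frame of discernment, i.e., total ignorance)."""
    combined: Dict[str, float] = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:                         # identical focal elements
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            elif h1 == "THETA":                  # ignorance is compatible with anything
                combined[h2] = combined.get(h2, 0.0) + v1 * v2
            elif h2 == "THETA":
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            else:                                # contradictory singleton hypotheses
                conflict += v1 * v2
    k = 1.0 - conflict                           # Dempster normalization constant
    return {h: v / k for h, v in combined.items()}

# Illustrative fusion of vibration- and temperature-channel BPAs (hypothetical values):
m_vib = {"BPFI": 0.7, "BPFO": 0.2, "THETA": 0.1}
m_temp = {"BPFI": 0.6, "Overload": 0.3, "THETA": 0.1}
print(dempster_combine(m_vib, m_temp))
```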
Table 2. Relationship between correlation coefficient and degree of correlation.
Value of CC | Degree of Relevance
0.8 ≤ CC < 1 | High
0.5 ≤ CC < 0.8 | Moderate
0.3 ≤ CC < 0.5 | Low
0 ≤ CC < 0.3 | Very weak
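A small sketch of how the grades in Table 2 might be applied when screening data by Pearson correlation coefficient; the function name and the use of the absolute value are assumptions made for illustration rather than the authors' procedure:

```python
import numpy as np

def relevance_grade(x: np.ndarray, y: np.ndarray) -> str:
    """Map the absolute Pearson correlation coefficient of two signals
    to the qualitative grades listed in Table 2."""
    cc = abs(np.corrcoef(x, y)[0, 1])
    if cc >= 0.8:
        return "High"
    if cc >= 0.5:
        return "Moderate"
    if cc >= 0.3:
        return "Low"
    return "Very weak"
```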
Table 3. Parameters and parameter values of the CNN-DS neural network.
Layer | Parameter | Temperature Channel | Vibration Channel
#1 Convolution-pooling | Number of convolution kernels | 128 | 32
#1 Convolution-pooling | Convolution kernel size | 3 | 3
#1 Convolution-pooling | Activation function | ReLU | ReLU
#1 Convolution-pooling | Convolution stride | 1 | 1
#1 Convolution-pooling | Pooling size | 2 | 2
#1 Convolution-pooling | Pooling stride | 2 | 2
#2 Convolution-pooling | Number of convolution kernels | 64 | 64
#2 Convolution-pooling | Convolution kernel size | 3 | 3
#2 Convolution-pooling | Activation function | ReLU | ReLU
#2 Convolution-pooling | Convolution stride | 1 | 1
#2 Convolution-pooling | Pooling size | 2 | 2
#2 Convolution-pooling | Pooling stride | 2 | 2
#3 Convolution-pooling | Number of convolution kernels | 32 | -
#3 Convolution-pooling | Convolution kernel size | 3 | -
#3 Convolution-pooling | Activation function | ReLU | -
#3 Convolution-pooling | Convolution stride | 1 | -
#3 Convolution-pooling | Pooling size | 2 | -
#3 Convolution-pooling | Pooling stride | 2 | -
Fully connected (D-S) | Units | 256 | 128
Fully connected (D-S) | Activation function | Belief function | Plausibility function
Output layer | Units | 8 + 1 | 8 + 1
Output layer | Activation function | BPA | BPA
Output layer | Loss function | Evidence loss | Evidence loss
Total parameters | - | 1.30 million | 1.05 million
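To make the layer sizes in Table 3 concrete, the following is a minimal PyTorch sketch of the two convolutional branches (temperature and vibration). The input length, padding, and the DS belief/plausibility and evidence-loss layers are not specified in the table, so the sketch replaces the DS head with an ordinary fully connected head as a stated simplification; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One convolution-pooling stage with the sizes of Table 3:
    kernel 3, stride 1, 2-wide max pooling with stride 2 (padding assumed)."""
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool1d(kernel_size=2, stride=2),
    )

class Branch(nn.Module):
    """Single CNN channel; the DS belief/plausibility layers of the paper are
    approximated here by a plain fully connected head."""
    def __init__(self, widths, fc_units, n_outputs=9):  # 8 fault classes + 1 uncertainty
        super().__init__()
        blocks, in_ch = [], 1
        for w in widths:
            blocks.append(conv_block(in_ch, w))
            in_ch = w
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(fc_units), nn.ReLU(),
                                  nn.Linear(fc_units, n_outputs))

    def forward(self, x):                      # x: (batch, 1, signal_length)
        return self.head(self.features(x))

# Channel widths and fully connected units taken from Table 3
temperature_branch = Branch(widths=[128, 64, 32], fc_units=256)
vibration_branch = Branch(widths=[32, 64], fc_units=128)
```

In the paper's CNN-DS, the two branch outputs would be converted to BPAs and combined at the decision level (e.g., with Dempster's rule, as sketched after Table 1), rather than concatenated at the feature level.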
Table 4. Temperature data set.
Label | Speed (r/min) | Sampling Frequency (Hz) | Number of Sampling Points | Type
0 | 873 | 0.03 | 33,177 | Normal
1 | 873 | 0.03 | 15,000 | BPFI
2 | 873 | 0.03 | 15,000 | BPFO
3 | 873 | 0.03 | 15,000 | FTF
4 | 873 | 0.03 | 15,000 | BSF
5 | 873 | 0.03 | 15,000 | Cooling
6 | 873 | 0.03 | 15,000 | Unbalance
7 | 873 | 0.03 | 15,000 | Overload
Table 5. Vibration data set.
Label | Speed (r/min) | Sampling Frequency (Hz) | Number of Sampling Points | Type
0 | 873 | 16 | 320,000 | Normal
1 | 873 | 16 | 320,000 | BPFI
2 | 873 | 16 | 320,000 | BPFO
3 | 873 | 16 | 320,000 | FTF
4 | 873 | 16 | 320,000 | BSF
Table 6. Data set sizes after data augmentation.
Data Set | Channel | Normal | BPFI | BPFO | FTF | BSF | Cooling | Unbalance | Overload
Training Set | Temperature | 254,400 | 109,200 | 109,200 | 109,200 | 109,200 | 109,200 | 109,200 | 109,200
Training Set | Vibration | 508,640 | 508,640 | 508,640 | 508,640 | 508,640 | / | / | /
Test Set | Temperature | 63,600 | 27,300 | 27,300 | 27,300 | 27,300 | 27,300 | 27,300 | 27,300
Test Set | Vibration | 470,400 | 127,160 | 127,160 | 127,160 | 127,160 | / | / | /
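The data set sizes in Table 6 result from the sliding-window augmentation illustrated in Figure 3. A minimal sketch of that step is shown below; the window length, step size, and record length are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def sliding_window_augment(signal: np.ndarray, window: int, step: int) -> np.ndarray:
    """Cut one long 1-D record into overlapping fixed-length samples.
    The overlap (window - step) controls how much the data set is enlarged."""
    n_samples = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n_samples)])

# Example: a 60,000-point record cut into 1024-point windows with 50% overlap
samples = sliding_window_augment(np.random.randn(60_000), window=1024, step=512)
print(samples.shape)   # (116, 1024) — hypothetical sizes, not the paper's settings
```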
Table 7. Comparison with other machine learning methods.
Algorithm | Average Accuracy (%) | Rapidity (s) | Stability (%)
CNN-DS | 99.71 | 63.1 | 0.059
CNN | 97.36 | 54.6 | 0.082
RNN | 92.13 | 135.6 | 0.043
DT | 54.80 | 27.5 | 0.48
RF | 79.40 | 35.7 | 0.35
CNN-SVM | 99.39 | 72.5 | 0.076
Table 8. Network configuration.
Mode | CPU | Memory | Bandwidth | RTT
Cloud Server 1 | ECS General Purpose Instance (ecs.t5) | 8 GB | 1 Gbps | 50 ms
Cloud Server 2 | ECS Compute Optimized Instance (ecs.c6) | 16 GB | 1 Gbps | 50 ms
Edge Node | NVIDIA Carmel ARMv8.2 64-bit CPU, 1.5 GHz | 8 GB | / | /
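For reproducing the speed comparison behind Figures 12 and 13 on the configurations in Table 8, a simple timing loop such as the sketch below can be used; `measure_latency`, `predict_fn`, and `batches` are hypothetical names, with `predict_fn` standing in for either local (edge) inference or a remote request to a cloud server:

```python
import time
import statistics

def measure_latency(predict_fn, batches, repeats: int = 100):
    """Return mean and standard deviation of wall-clock time per batch, in ms."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        for batch in batches:
            predict_fn(batch)          # edge: local model call; cloud: HTTP/gRPC request
        timings.append((time.perf_counter() - start) * 1000 / max(len(batches), 1))
    return statistics.mean(timings), statistics.stdev(timings)
```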
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
