Article

Deep Learning for Enhanced Fault Diagnosis of Monoblock Centrifugal Pumps: Spectrogram-Based Analysis

by Prasshanth Chennai Viswanathan 1, Sridharan Naveen Venkatesh 1, Seshathiri Dhanasekaran 2,*, Tapan Kumar Mahanta 1, Vaithiyanathan Sugumaran 1, Natrayan Lakshmaiya 3, Prabhu Paramasivam 4 and Sakthivel Nanjagoundenpalayam Ramasamy 5
1 School of Mechanical Engineering, Vellore Institute of Technology, Vandalur—Kelambakkam Road, Keelakottatiyur, Chennai 600127, India
2 Department of Computer Science, UiT the Arctic University of Norway, 9037 Tromsø, Norway
3 Department of Mechanical Engineering, Saveetha School of Engineering, SIMATS, Chennai 602105, India
4 Department of Mechanical Engineering, College of Engineering and Technology, Mattu University, Mettu 318, Ethiopia
5 Department of Mechanical Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
* Author to whom correspondence should be addressed.
Machines 2023, 11(9), 874; https://doi.org/10.3390/machines11090874
Submission received: 1 August 2023 / Revised: 27 August 2023 / Accepted: 28 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue Deep Learning and Machine Health Monitoring)

Abstract:
The reliable operation of monoblock centrifugal pumps (MCP) is crucial in various industrial applications. Achieving optimal performance and minimizing costly downtime requires effectively detecting and diagnosing faults in critical pump components. This study proposes an innovative approach that leverages deep transfer learning techniques. An accelerometer was adopted to capture vibration signals emitted by the pump. These signals were then converted into spectrogram images, which serve as the input for a deep learning-based classification system, enabling the accurate identification and diagnosis of pump faults. To evaluate the effectiveness of the proposed methodology, 15 pre-trained networks, namely ResNet-50, InceptionV3, GoogLeNet, DenseNet-201, ShuffleNet, VGG-19, MobileNet-v2, InceptionResNetV2, VGG-16, NasNetmobile, EfficientNetb0, AlexNet, ResNet-18, Xception and ResNet101, were employed. The experimental results demonstrate the efficacy of the proposed approach, with AlexNet exhibiting the highest level of accuracy among the pre-trained networks. Additionally, a meticulous evaluation of the execution time of the classification process was performed. AlexNet achieved 100.00% accuracy with an impressive execution (training) time of 17 s. This research provides invaluable insights into applying deep transfer learning for fault detection and diagnosis in MCP. Using pre-trained networks offers an efficient and precise solution for this task. The findings of this study have the potential to significantly enhance the reliability and maintenance practices of MCP in various industrial settings.

1. Introduction

Monoblock centrifugal pumps (MCP) are widely applied in various sectors such as agriculture, industry and civil engineering due to their reliable performance and robustness. In addition, these centrifugal pumps are extensively utilized for domestic purposes including gardens, apartments, bungalows, small farms, hospitals, hotels and farmhouses [1]. A monoblock pump is a mechanical apparatus in which the motor and the pump are assembled within a single enclosure. The pump system consists of rotating components mounted on its shaft. Functioning on the principles of centrifugal force, an MCP harnesses the energy supplied by the motor and converts it into kinetic energy in the pumped liquid. It is advisable to employ fresh water and non-corrosive fluids to ensure the longevity of the pump components. The operational mechanism of MCPs is uncomplicated, as they convert the rotational energy generated by a motor into kinetic energy in a flowing fluid. However, it is important to note that MCPs can encounter various faults, including bearing, sealing, cavitation and impeller faults. Neglecting these faults can have detrimental consequences, potentially leading to the failure of the entire system. The significance of addressing MCP fault diagnosis arises from their substantial energy consumption, representing a notable portion of global energy production [2]. While these pumps have extended operational lifespans, the risk of sudden failures leading to disruptions and costly breakdowns necessitates continuous monitoring. Current practices involving human intervention for monitoring MCPs suffer from limitations such as scalability issues, subjectivity, lack of real-time analysis and high false positive rates. Moreover, dependence on human expertise can be hindered by turnover and skilled operator shortages [3].
A shift toward scalable and automated solutions is crucial to overcome these limitations and enhance MCP fault diagnosis reliability. Signal processing and AI technologies offer transformative possibilities. By adopting automation, monitoring systems can efficiently handle data from multiple MCPs concurrently, reducing the need for an increasingly extensive workforce. Algorithms provide objective analysis, eliminating human subjectivity and fatigue, thus ensuring consistent assessments and faster responses. Real-time insights from AI allow quick anomaly detection, curbing potential problems before they escalate. AI's data-driven predictive maintenance and continuous learning capabilities enable the anticipation of faults based on patterns, minimizing downtime and repair costs. Human–machine collaboration emerges as a synergy between human expertise and machine capabilities, allowing human operators to focus on intricate decision-making while AI handles routine monitoring tasks [4].
Therefore, it is crucial to promptly address and rectify any issues that arise to ensure the smooth and reliable operation of the pump. The main focus of this study is on the detection and diagnosis of cavitation (CAV), combined bearing and impeller faults (BFIF), bearing faults (BF), seal faults (SF) and impeller faults (IF) in MCPs. Unusual noise, leakage, excessive vibration, decreased hydraulic performance (resulting in a reduced head capacity and efficiency) and potential damage to the pump through pitting, erosion and structural vibration are the significant problems that can arise due to malfunction in MCP components [1]. Neglecting fault detection and diagnosis can lead to system failures, suboptimal performance, compromised safety, shortened equipment lifespan and escalated costs.
In the realm of fault detection and diagnosis (FDD), many techniques can be found in the literature. These techniques encompass model-based approaches such as structural graphs and data-driven approaches including pattern recognition and neural networks [5]. Among the various approaches, data-driven methods have gained considerable popularity due to their ability to swiftly identify faulty situations, ease of implementation and reduced dependency on prior knowledge [6]. Conventional fault diagnosis involves several steps, starting with the collection of data from sensors attached to the component, followed by feature extraction, feature selection and classification. Data types such as current, temperature, vibration, acoustic and speed signals are collected during the data acquisition phase. Among these signals, vibration signals are often prioritized due to their significant capability to offer valuable insights into the state of mechanical systems. Following this, relevant information is extracted from the acquired signals and the extracted features are carefully chosen. Depending on the specific application, the fault detection technique may vary, allowing for the selection of the most suitable approach. Various options such as vibration analysis [7], sound and acoustic emission analysis [8], current and voltage analysis [9], infrared analysis [10], oil analysis [11], pressure analysis [12] and noise analysis [13] are widely considered. This study employs a data-driven approach with a specific emphasis on vibration analysis, whereby the vibration signals are transformed into spectrogram images.
Recent times have seen the extensive incorporation of advanced signal processing techniques and machine learning algorithms such as neural networks [14], Bayesian networks [15], principal component analysis [16], k-nearest neighbors [17], fuzzy C-means [17], support vector machines [18], hierarchical clustering [19], etc., to enhance the accuracy and effectiveness of fault diagnosis methods. Utilizing these classifiers holds immense significance in assessing and categorizing the equipment's condition, facilitating efficient fault diagnosis. Extensive research has been undertaken to explore various facets of fault diagnosis in numerous realms, with numerous studies delving into different aspects of the subject matter to enhance understanding and knowledge in this field. Using gene expression programming (GEP), a study by Sakthivel et al. compared the classification accuracy of fault detection and isolation in rotating machinery with a support vector machine (SVM), Wavelet-GEP and a proximal support vector machine (PSVM) [20]. The study revealed that GEP and SVM outperform other classifiers and demonstrated their effectiveness in achieving industrial maintenance and cost savings. Considering flow instabilities and fault interactions in centrifugal pumps (CPs), Rapur and Tiwari utilized vibration and current data, applying an SVM classifier with novel features. The approach achieved robust fault identification and severity assessment across various operating conditions [21]. Dutta et al. proposed a machine learning (ML)-based computational technique with a three-axis accelerometer to automatically detect faults in a cascade pumping system. The multiclass support vector machine (MSVM) algorithm outperformed other algorithms in terms of accuracy, prediction speed and training time, highlighting the effectiveness of ML for automated fault detection in the pumping system [22]. Cao et al. proposed a fault diagnosis method for centrifugal pump blades using principal component analysis (PCA) and a Gaussian mixture model (GMM). The authors combined signal processing and domain knowledge, producing a highly effective classifier for crack faults under various working conditions [23]. Manikandan and Duraivelu developed a vibration-based fault diagnosis method for industrial mono-block centrifugal pumps using a deep convolutional neural network (DCNN). The model achieved a high accuracy of 99.07% in detecting broken impeller and seal failure conditions [24]. This underscores the importance of applying machine learning techniques in fault diagnosis to achieve more reliable and efficient systems. Table 1 presents the various ML approaches for mechanical systems.
With the advent of Industry 4.0, current industrial processes are undergoing a metamorphosis into intelligent systems. Specifically, numerous modernized industrial processes are equipped with a plethora of well-developed sensors to gather process-related data to enable fault diagnosis for existing or emerging issues. As a result of this evolution in industrial environments, complete equipment and process automation is essential, along with higher levels of careful supervision. This includes comprehensive process control and suitable corrective actions to ensure accurate fault diagnosis, maximizing process efficiency [29]. Intelligent fault diagnosis involves detecting and identifying system faults through a compelling synergy with deep learning. Deep learning has emerged as a promising tool in fault diagnosis with the ability to extract intricate patterns and features from raw data using deep architectures [30]. Implementing a deep learning (DL) model typically consists of three consecutive phases: initial data comprehension and preprocessing, construction and training of the DL model, followed by validation and interpretation of the results. Figure 1 presents the various stages in ML and DL.
A pre-trained neural network has undergone training on a substantial dataset for a specific task, acquiring valuable features and patterns from the data. In the current study, we utilized this concept by selecting 15 pre-trained networks, originally designed for related tasks, and integrated additional layers into their architectures. These added layers were tailored to extract intricate patterns from pump vibration data. Subsequently, the modified networks were trained on our dataset through fine-tuning, enabling them to learn task-specific representations. This approach combined the advantages of pre-trained features with customized adaptations, enhancing the network's capacity to derive meaningful insights from the complex pump vibration data.
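As an illustration of this layer-replacement step, the following MATLAB sketch shows one common way to adapt a pre-trained AlexNet to the six MCP conditions. It is a minimal example, not the authors' exact code: the paper does not specify which layers were appended, so the fully connected/softmax/classification trio and the learn-rate factors below are typical transfer learning choices rather than the reported architecture.
```matlab
% Minimal transfer learning sketch (MATLAB Deep Learning Toolbox, AlexNet
% support package assumed). The last three ImageNet-specific layers of
% AlexNet are replaced with layers sized for the six pump conditions.
net = alexnet;                       % pre-trained AlexNet
numClasses = 6;                      % CAV, BFIF, BF, good, SF, IF

layers = [
    net.Layers(1:end-3)              % keep the pre-trained feature extractor
    fullyConnectedLayer(numClasses, ...
        'WeightLearnRateFactor', 20, 'BiasLearnRateFactor', 20)
    softmaxLayer
    classificationLayer];
```
The same pattern applies to the other pre-trained networks, with the replaced layers and the input image size adjusted to each architecture.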
Many research studies have recently embraced deep learning techniques for fault diagnosis in mechanical systems. For instance, Xie et al. proposed a fault frequency priors fusion deep learning (FFP-DL) framework, integrating fault frequency priors into deep learning to enhance interpretability. The authors applied an FFP-CNN model to offshore wind turbine data, with experimental results demonstrating improved fault diagnosis accuracy and interpretability even with reduced training data; the pre-trained algorithm also accelerated the rate of convergence and the training speed [31]. Zhang et al. introduced an approach for fault diagnosis in wind turbine gearboxes employing empirical mode decomposition and a 1D CNN, demonstrating the enhanced representation capacity and efficacy of deep learning models under variable working conditions [32]. Wang et al. introduced an innovative algorithm for diagnosing faults in industrial motor bearings utilizing a multi-local model approach to resolve decision conflicts. The model effectively integrates diverse information sources, thereby enhancing fault diagnosis accuracy in the motor bearings [33]. These findings emphasize the notable performance of deep learning methods in fault detection and diagnosis for mechanical systems.
  • The contributions of the study:
1. Analysis of six conditions of the centrifugal pump, namely cavitation (CAV), bearing and impeller fault (BFIF), bearing fault (BF), good, seal fault (SF) and impeller fault (IF).
2. Leveraging ResNet-50, InceptionV3, GoogLeNet, DenseNet-201, ShuffleNet, VGG-19, MobileNet-v2, InceptionResNetV2, VGG-16, NasNetmobile, EfficientNetb0, AlexNet, ResNet-18, Xception and ResNet101 for MCP fault diagnosis.
3. Evaluating the performance of the pre-trained networks and their effectiveness with varied hyperparameters.
4. Identifying the optimal network for MCP fault diagnosis based on the results.
  • The novelty of the study:
This research study presents a groundbreaking approach to fault diagnosis, incorporating components such as 1D vibration signals, sensor technology, spectrogram image conversion and transfer learning. The study expands the available information by transforming the 1D signals into spectrogram images, which facilitates the visualization and extraction of intricate fault patterns that might be challenging to discern in the original signal domain. Leveraging the power of transfer learning, a pre-trained network is employed to train the system for fault classification, allowing the model to exploit the knowledge and patterns learned from a large dataset. This novel and integrated approach revolutionizes traditional fault diagnosis methodologies, providing an efficient and accurate solution with great promise for real-world industrial applications. By combining advanced signal processing techniques, cutting-edge sensor technology and transfer learning capabilities, this research study presents an innovative framework that enhances fault diagnosis capabilities and paves the way for improved fault detection, diagnosis and maintenance strategies in various industrial settings.
Overall, integrating fault detection and diagnosis (FDD) techniques with deep learning (DL) approaches proves crucial in enhancing the accuracy and effectiveness of fault diagnosis in mechanical systems, particularly in the case of MCP. This integration is vital for maintaining the overall smooth operation of these systems, ensuring safety, prolonging equipment lifespan and reducing maintenance costs. This study significantly contributes to the ongoing efforts to enhance MCP’s fault detection and diagnosis capabilities by employing advanced techniques and leveraging the power of data-driven approaches. Figure 2 explains the procedure involved in the fault diagnosis of MCP using DL.

2. Experimental Studies

This section presents a comprehensive overview of the experimental setup and procedure, outlining the entire methodology employed in the study.

2.1. Experimental Setup

Figure 3 illustrates the monoblock centrifugal pump (MCP) setup employed in the current research. The system is driven by a 2 horsepower motor that imparts motion to the MCP. The setup is equipped with a valve control mechanism, which assumes the crucial role of regulating the fluid flow at the pump's inlet and outlet. The inlet valve, integral to the setup, is designed to mimic the occurrence of cavitation: its operation reduces the pressure between the impeller's eye and the suction zone, emulating conditions conducive to cavitation phenomena. To facilitate the observation and study of cavitation events, a transparent acrylic pipe, measuring one meter in length, is affixed to the input and output ends of the impeller. This configuration provides a visual window into the cavitation process, enabling researchers to closely examine the intricate details of these occurrences. In addition, a piezoelectric accelerometer is strategically positioned to capture the vibration signals of critical importance for the subsequent analysis. As delineated in Figure 3, the accelerometer is securely attached to the pump's inlet using adhesive. The output signal generated by the accelerometer is directed to a dedicated signal processing unit, which incorporates an analog-to-digital converter (ADC) and a charge amplifier to refine the incoming signal. Subsequently, the processed signal is stored within the memory system for further scrutiny. Employing these stored data, the signal is subjected to a series of computational procedures to extract distinctive features and characteristics, which serve the pivotal purpose of characterizing a spectrum of scenarios and fault states. Thus, Figure 3 serves as a comprehensive visual guide to the arrangement that underpins the research study, delineating the MCP setup and its associated apparatus, culminating in the systematic extraction of pertinent information for the analysis of cavitation events and their underlying causes.

2.2. Experimental Procedure

Vibration measurements were conducted under typical working conditions using an MCP operating at a consistent rotational speed of 2880 rpm. The objective was to investigate the performance metrics of the pump during normal operation. For the experiment, the pump was prepared and the supply valve was gradually opened after starting the pump. Multiple values were recorded and tabulated, including the delivery head, the suction head, the duration of 30 revolutions of the energy meter disc and the time taken for a 40 cm rise in the water level in the measuring tank. This experimental procedure was repeated for various delivery heads. Performance characteristic curves such as discharge vs. efficiency, discharge vs. total head and discharge vs. input power were obtained to analyze the pump's behavior. These curves were derived from the data collected during the trials. Simultaneously, vibration data were recorded using an accelerometer positioned on the pump inlet, specifically capturing distinctive failure scenarios. For all the pump operating conditions, a sampling frequency of 24 kHz was employed and the sample length was set to 1024. The choice of a sample length of 1024 was aimed at striking a balance between obtaining meaningful statistical measures from a larger number of samples and managing the computational time required; increasing the number of samples also increases the computation time. This carefully chosen sample length allowed relevant information to be extracted while considering computational efficiency. A comprehensive analysis of the pump behavior and fault conditions was conducted using these data-driven techniques. To adhere to the requirements of certain feature extraction approaches, the total number of samples needed to be a power of 2 (2^n). Given that 1024 is the nearest 2^n value to 1000, it was chosen as the sample size for consistency across all scenarios. Each scenario of the MCP underwent 250 trials, ensuring a robust and comprehensive analysis. The resulting vibration signals from these trials were carefully saved and stored in data files, preserving the valuable data for further analysis and investigation. Each fault was systematically introduced individually, and the pump's performance characteristics and vibration signals were subsequently recorded. A total of 150 instances were recorded for each condition. The MCP faults are briefly discussed below.
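Before describing the individual fault conditions, the following MATLAB sketch illustrates the segmentation step outlined above: an acquired vibration record sampled at 24 kHz is split into 1024-sample segments. This is a minimal illustration under stated assumptions; the variable name vib and the use of non-overlapping segments are not specified in the paper.
```matlab
% Minimal sketch: split a vibration record into non-overlapping 1024-sample
% segments at the 24 kHz sampling rate used in the study. "vib" is an
% assumed vector holding one recorded signal; overlapping segmentation
% could be used instead, but is not described in the paper.
fs = 24e3;                  % sampling frequency (Hz)
segLen = 1024;              % samples per segment (nearest power of 2 to 1000)
numSeg = floor(numel(vib) / segLen);
segments = reshape(vib(1:numSeg*segLen), segLen, numSeg);   % one segment per column
```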

2.2.1. Bearing Fault

This experiment examined a specific form of pump bearing defect that can cause vibrations and impeller damage. A pair of KBC 6203 roller bearings was used in the experiment. One of these bearings was in perfect condition with no flaws; the other was intentionally modified using wire-cut electrical discharge machining. This procedure was carried out to produce a controlled defect with precise dimensions. The flaw was purposefully introduced on the outer race, measuring 0.973 mm deep and 0.657 mm wide. Vibration signals were acquired and the pump performance characteristics were examined in the event of a bearing failure.

2.2.2. Impeller Fault

During the experiment, two impellers with a circumference of 125 mm were utilized. One impeller was in pristine condition without any flaws while the other impeller had a minor flaw resulting from a small amount of metal being removed during machining. Impeller faults such as rusting, deterioration or imbalance can generate vibration and hinder the pump’s performance. Vibration signals were used to monitor the operation of the pump under different impeller conditions.

2.2.3. Bearing and Impeller Fault

Bearings can wear out over time owing to causes such as friction and insufficient lubrication, resulting in increased vibration and possible impeller damage. Impeller issues such as corrosion induced by chemical reactions, erosion from abrasive particles in the fluid or unbalance caused by manufacturing flaws can also produce vibration and reduce pump performance. These defects not only reduce the effectiveness of the pump, but also lead to wear and increased maintenance requirements.

2.2.4. Cavitation

The pump was started and its supply valve was closed once the initial pump priming was completed. The delivery valve was fully opened and the suction side valve was progressively closed. An odd noise was heard at a suction head of 540 mm Hg, followed by strong pump vibration and the formation of vapor bubbles in the acrylic pipe. This experimental arrangement adequately mimicked the pump cavitation situation, allowing an evaluation of its performance under such conditions. During cavitation, the impeller creates low pressure, causing bubbles to develop inside the fluid. When these bubbles reach areas of higher pressure, they collapse forcefully, posing a risk to the impeller and contributing to vibrations and noise within the pump.

2.2.5. Seal Fault

The seal comprises two parts: the stationary seal on the outside and the rotating seal on the inside. When a sealing defect occurs, fluid leakage may occur, which could lead to failure of the motor or other parts of the machine. Two seals with a circumference of 25 mm were utilized in this investigation. One seal was flawless while the other had a defect introduced during installation. Vibration signals were captured particularly for the damaged seal while ensuring that all other components functioned normally.

3. Pre-trained Networks for Fault Diagnosis in MCPs

This section presents an overview of the dataset creation and acquisition and the pre-trained networks employed in the study.

Creating and Preparing the Dataset:

In order to create and prepare the dataset, the vibration data collected from the experimental setup were utilized. The collected vibration signals were converted into spectrogram images to create a dataset of normal and faulty pump conditions. Various faulty conditions, including IF, CAV, BFIF, BF and SF, were considered during the study. To ensure compatibility with the pre-trained network being used, the captured images were resized to dimensions of 224 × 224, 227 × 227 or 299 × 299.
Spectrograms offer many advantages in the domain of fault detection and diagnosis. By providing a comprehensive time–frequency analysis, spectrograms enable the visualization of frequency variations over time, thereby facilitating the identification of fault patterns that may not be discernible in the original time–domain signal. Furthermore, spectrograms allow for multiresolution analysis, capturing fault-related information across diverse temporal scales and frequency ranges. By extracting salient features from spectrograms, engineers can pinpoint distinctive fault indicators, facilitating subsequent analysis and classification. This feature extraction process is pivotal in automating fault detection algorithms and enhancing diagnostic accuracy. Additionally, spectrograms serve as a valuable tool for pattern recognition. Experts can establish reference patterns or templates to identify specific fault signatures by comparing spectrograms obtained from healthy and faulty signals. The intuitive visual representation provided by spectrograms enhances the comprehension and interpretation of intricate data, fostering effective communication and informed decision-making among stakeholders. Moreover, the real-time computability of spectrograms enables continuous monitoring and proactive fault mitigation. Figure 4a–e represents the obtained spectrogram images for various MCP conditions.
Converting vibration signals into images starts with preprocessing the acquired vibration data, involving resampling, filtering and segmentation for improved quality. The time–domain signal was divided into shorter segments of equal length to generate the spectrograms. Each segment was then processed using the Fast Fourier Transform (FFT) with appropriate windowing, and the resulting magnitudes were stacked to form a 2D representation that highlights the frequency characteristics across time intervals; the spectrogram thus visualizes the intensity of the signal over time at the different frequencies of the waveform. As part of this approach, multiple spectrograms were computed for each signal in the training, validation and test datasets, ensuring a comprehensive representation of the vibration signals in different scenarios. These spectrograms are treated as image data, allowing the utilization of CNNs and similar deep learning models to discern patterns and features. Integrating techniques such as transfer learning and data augmentation bolsters the models' capacity to generalize and learn intricate details. The efficacy of the models is gauged through evaluation metrics such as accuracy, precision and recall. Further comprehension of the predictions might encompass the visualization of saliency maps, elucidating influential aspects of the input data. By embracing this methodology, image-based analysis and deep learning are combined to distill valuable insights from vibration data. Using spectrograms in fault detection and diagnosis gives comprehensive insights into fault characteristics, optimizing maintenance strategies and improving operational reliability. Table 2 showcases the utilized pre-trained networks and their corresponding properties and Figure 5 provides the relationship between the pre-trained networks and the number of parameters trained.
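To make the conversion concrete, the MATLAB sketch below computes a spectrogram for one 1024-sample segment (continuing the segmentation sketch above) and saves it as an RGB image sized for AlexNet's 227 × 227 input. The window length, overlap, colormap and output file name are illustrative assumptions; the paper does not report the exact spectrogram parameters it used.
```matlab
% Hedged signal-to-image sketch (Signal Processing and Image Processing
% Toolboxes assumed). One segment is converted to a log-scaled spectrogram,
% mapped to an RGB image and resized for the chosen pre-trained network.
fs = 24e3;
seg = segments(:, 1);                                        % one 1024-sample segment
[~, ~, ~, p] = spectrogram(seg, hann(128), 96, 128, fs);     % short-time power spectrum
img = ind2rgb(uint8(255 * rescale(10*log10(p))), jet(256));  % dB values mapped to RGB
img = imresize(img, [227 227]);                              % 224 or 299 for other networks
imwrite(img, 'CAV_segment_001.png');                         % hypothetical file name
```
Saving the images into one folder per condition makes it straightforward to label them later through an image datastore.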

4. Results and Discussions

In the present study, a comprehensive investigation was conducted to determine the most effective pre-trained network among the 15 networks considered (ResNet-50, InceptionV3, GoogLeNet, DenseNet-201, ShuffleNet, VGG-19, MobileNet-v2, InceptionResNetV2, VGG-16, NasNetmobile, EfficientNetb0, AlexNet, ResNet-18, Xception and ResNet101). The selection process involved experimenting with parameters such as the solver type, learning rate, train-test split ratio and batch size. The experiments were performed using MATLAB R2023a, a widely recognized and extensively used tool for scientific computing. By systematically varying these key parameters, the optimal pre-trained network was identified, ensuring the highest performance and accuracy for the intended task.

4.1. Train-Test Split Ratio Influence on Pre-trained Network Performance

The train-test split ratio refers to the proportion of a dataset divided into training and testing subsets for deep learning. This allocation allows the model performance to be evaluated on unseen data and overfitting to be detected. Increasing the split ratio makes more training data available, which has the potential to enhance performance; however, this comes at the expense of a less reliable test set. On the other hand, decreasing the split ratio expands the size of the test set, facilitating a more robust evaluation, but if the training set becomes excessively small, it may result in diminished performance. Striking a balance is crucial, considering the dataset size, model complexity and evaluation requirements. The choice of ratio therefore depends on the specific needs.
In this experiment, the dataset was divided into two sets: a training set and a testing set. To evaluate the performance of the pre-trained networks, five different split ratios were evaluated for each network. During these iterations, the other hyperparameters, namely the batch size (10), solver (sgdm), epochs (30) and learning rate (0.0001), were kept constant. This experimental setup allowed for a comprehensive exploration of the impact of different train-test split ratios (0.6, 0.7, 0.75, 0.8, 0.85) on the performance of the pre-trained networks. Table 3 illustrates the performance variations of each network concerning the TR (train-test split ratio). Table 3 shows that AlexNet consistently achieves 100% accuracy across all train-test split ratios. Additionally, the table reveals that AlexNet attains optimal performance, achieving 100% accuracy with a train-test split ratio of 0.6 while exhibiting the lowest computational time of 28 s compared to the other ratios. The optimal TR was found to be 0.60 for all the pre-trained networks.
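The sketch below outlines how such a split-ratio sweep can be run in MATLAB for one network, under the assumptions that the spectrogram images are stored in class-named subfolders of a folder called spectrograms and that layers is the modified AlexNet layer array from the earlier sketch; the folder name and the randomized split are illustrative, not taken from the paper.
```matlab
% Split-ratio sweep sketch: hyperparameters fixed as in Section 4.1
% (sgdm, batch size 10, 30 epochs, learning rate 1e-4), training fraction varied.
imds = imageDatastore('spectrograms', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');                 % labels taken from subfolder names
opts = trainingOptions('sgdm', 'MiniBatchSize', 10, 'MaxEpochs', 30, ...
    'InitialLearnRate', 1e-4, 'Verbose', false);

for r = [0.6 0.7 0.75 0.8 0.85]
    [imdsTrain, imdsTest] = splitEachLabel(imds, r, 'randomized');
    augTrain = augmentedImageDatastore([227 227], imdsTrain);   % resize for AlexNet
    augTest  = augmentedImageDatastore([227 227], imdsTest);
    trainedNet = trainNetwork(augTrain, layers, opts);          % "layers": modified AlexNet
    preds = classify(trainedNet, augTest);
    fprintf('TR = %.2f, accuracy = %.2f%%\n', r, 100*mean(preds == imdsTest.Labels));
end
```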

4.2. Solver Influence on Pre-trained Network Performance

Optimizer algorithms are instrumental in improving the performance of deep learning models. These optimization techniques significantly impact the accuracy and speed of training these models. The solvers ultimately minimize the loss function by modifying the neural network weights during each epoch. An optimizer is a function or algorithm that adjusts parameters such as weights and learning rates within a neural network. This adjustment process aids in reducing the overall loss and enhances the model’s precision. To assess the performance of pre-trained networks, the solvers, including sgdm, adam and rmsprop, were varied. However, other hyperparameters such as batch size (10), learning rate (0.0001) and epochs (30) remained constant. Additionally, the optimal TR split ratio from the previous section was utilized for every network.
Table 4 depicts the performance differences among the various networks based on the optimizers used. Upon analysis, it becomes apparent that AlexNet achieves 100% accuracy exclusively when the optimizer employed is sgdm, with a computational time of 28 s. In contrast, it attains an accuracy of 99.70% when using the other optimizers.
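A sketch of how the solver comparison can be scripted is shown below, recreating the datastores at the optimal 0.60 split from the previous section and changing only the first argument of trainingOptions between runs; the loop structure is an illustrative assumption, and imds and layers refer to the earlier sketches.
```matlab
% Solver sweep sketch: sgdm, adam and rmsprop with otherwise identical
% settings (batch size 10, 30 epochs, learning rate 1e-4), cf. Table 4.
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.60, 'randomized');   % optimal split (Section 4.1)
augTrain = augmentedImageDatastore([227 227], imdsTrain);
augTest  = augmentedImageDatastore([227 227], imdsTest);

for solver = ["sgdm" "adam" "rmsprop"]
    opts = trainingOptions(char(solver), 'MiniBatchSize', 10, ...
        'MaxEpochs', 30, 'InitialLearnRate', 1e-4, 'Verbose', false);
    trainedNet = trainNetwork(augTrain, layers, opts);              % "layers": modified network
    preds = classify(trainedNet, augTest);
    fprintf('%s: accuracy = %.2f%%\n', char(solver), 100*mean(preds == imdsTest.Labels));
end
```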

4.3. Batch Size Influence on Pre-trained Network Performance

The batch size refers to the number of samples used in each training iteration of a neural network and plays a critical role in determining the model performance and training time. A smaller batch size allows for faster training iterations that can be advantageous when time is crucial or when dealing with large datasets. On the other hand, a larger batch size provides more accurate gradient estimates, leading to potentially better generalizations and higher accuracy on unseen data. However, larger batch sizes can result in longer training times due to increased memory requirements and computational overhead. The optimal batch size depends on dataset size, model complexity, available resources and desired training time. Finding the right balance is essential for efficient training and model performance.
In order to determine the optimal batch size that yields the best performance, the learning rate and the number of epochs were kept constant at 0.0001 and 30, respectively. The optimal combination of training data split and optimizer obtained in the previous sections was utilized, while the batch size was systematically varied across the values of 8, 10, 16, 24 and 32. By exploring this range of batch sizes, the goal was to identify the batch size that maximizes the model performance in terms of accuracy and convergence. Based on these considerations, the combination of a 0.60 train-test ratio and the sgdm optimizer was carried forward for all the pre-trained networks (VGG-19, AlexNet, Xception, InceptionV3, DenseNet-201, GoogLeNet, ResNet-18, ResNet-50, ResNet101, ShuffleNet, VGG-16, EfficientNetb0, MobileNet-v2, NasNetmobile and InceptionResNetV2).
Table 5 displays the performance of pre-trained networks across different batch sizes. AlexNet achieves 100% accuracy for all batch sizes except for batch size 24, as shown in Table 5. The table illustrates that AlexNet attains the shortest computational time of 17 s specifically for a batch size of 32.
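Because the paper reports training time alongside accuracy for each batch size, the sketch below times each run with tic/toc while varying MiniBatchSize, reusing the 0.60 split datastores and the sgdm solver selected above; the loop and timing code are illustrative rather than the authors' script.
```matlab
% Batch-size sweep sketch with execution-time measurement (cf. Table 5).
for bs = [8 10 16 24 32]
    opts = trainingOptions('sgdm', 'MiniBatchSize', bs, 'MaxEpochs', 30, ...
        'InitialLearnRate', 1e-4, 'Verbose', false);
    tic;                                                      % start training timer
    trainedNet = trainNetwork(augTrain, layers, opts);        % datastores from previous sketch
    trainTime = toc;                                          % elapsed training time (s)
    preds = classify(trainedNet, augTest);
    fprintf('batch size %d: %.2f%% in %.0f s\n', bs, ...
        100*mean(preds == imdsTest.Labels), trainTime);
end
```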

4.4. Learning Rate Influence on Pre-trained Network Performance

The learning rate is a vital hyperparameter in neural network training. It is assigned a small positive value, usually between 0.0 and 1.0, and it greatly influences the learning process by determining the speed at which a model learns. Raising the learning rate decreases the computational time, but also carries the risk of improper training of the model. On the other hand, a smaller learning rate increases the computational time needed for training. An optimal learning rate balances computational efficiency and effective model training. Finding it often involves experimentation and fine-tuning, balancing fast convergence against convergence issues such as overshooting or getting stuck in local minima. Understanding the data, the model complexity and the problem at hand can guide the selection of an appropriate learning rate for effective neural network training.
The optimizers, train-test ratio and batch size were all fixed and the learning rate (0.0001, 0.0003, 0.001) was varied to evaluate the model performance. The fixed parameters are as follows: GoogLeNet (0.60 TR, sgdm, 24 BS), ResNet-18 (0.60 TR, sgdm, 32 BS), ResNet-50 (0.60 TR, sgdm, 32 BS), ResNet101 (0.60 TR, sgdm, 32 BS), ShuffleNet (0.60 TR, sgdm, 32 BS), VGG-19 (0.60 TR, sgdm, 16 BS), VGG-16 (0.60 TR, sgdm, 10 BS), EfficientNetb0 (0.60 TR, sgdm, 10 BS), MobileNet-v2 (0.60 TR, sgdm, 32 BS), DenseNet-201 (0.60 TR, sgdm, 32 BS), NasNetmobile (0.60 TR, sgdm, 24 BS), AlexNet (0.60 TR, sgdm, 32 BS), Xception (0.60 TR, sgdm, 10 BS), InceptionResNetV2 (0.60 TR, sgdm, 16 BS) and InceptionV3 (0.60 TR, sgdm, 24 BS).
Table 6 displays the performance analysis for the different learning rates. It is evident that AlexNet achieves 100% accuracy across all the learning rates. Furthermore, it can be observed from Table 6 that AlexNet consistently maintains a computational time of 17 s, irrespective of the learning rate employed. These findings emphasize the robustness and effectiveness of the model across different learning rate configurations, solidifying its reliability as a top choice for this image classification task.

4.5. Comparison of the Pre-Trained Models

The effectiveness of pre-trained neural networks in fault diagnosis for MCPs was assessed and their performance was evaluated using various metrics. Among the pre-trained models, AlexNet stood out as the most accurate model with an accuracy of 100% and a relatively low execution time of 17 s, as shown in Table 7. Therefore, it is highly recommended for fault diagnosis in MCPs. Further analysis using Figure 6 and Figure 7 demonstrates the confusion matrix of AlexNet and its successful training progression, respectively. The confusion matrix provides a comprehensive overview of the model's classification capabilities by representing correct identifications on the diagonal and misclassifications on the non-diagonal elements. Throughout the training period, the total loss decreased dramatically, illustrating the efficiency of the selected hyperparameters. Table 7 presents the overall classification accuracy of the pre-trained networks, with the best performing network highlighted. Figure 6 illustrates the confusion matrix for the AlexNet network, further validating its effectiveness in fault identification and classification for MCP.
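For completeness, the sketch below reproduces the final evaluation step under the optimal AlexNet configuration reported in Table 7 (0.60 split, sgdm, batch size 32, learning rate 0.0001), training once and plotting the confusion matrix as in Figure 6; variable names follow the earlier sketches and the training-progress plot is an optional extra rather than a reported setting.
```matlab
% Final-configuration sketch for AlexNet (cf. Table 7, Figures 6 and 7).
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.60, 'randomized');
augTrain = augmentedImageDatastore([227 227], imdsTrain);
augTest  = augmentedImageDatastore([227 227], imdsTest);

opts = trainingOptions('sgdm', 'MiniBatchSize', 32, 'MaxEpochs', 30, ...
    'InitialLearnRate', 1e-4, 'Plots', 'training-progress', 'Verbose', false);
bestNet = trainNetwork(augTrain, layers, opts);               % "layers": modified AlexNet

preds = classify(bestNet, augTest);
fprintf('Test accuracy: %.2f%%\n', 100*mean(preds == imdsTest.Labels));
confusionchart(imdsTest.Labels, preds);                       % compare with Figure 6
```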

4.6. A Comparative Study: Pre-Trained Networks and Cutting-Edge Works

A comparative analysis is undertaken in this section to illustrate how the proposed approach performs against other cutting-edge studies available in the literature. Table 8 compares the efficacy of several techniques with that of the proposed methodology. Table 8 demonstrates that the proposed approach exceeds all prior works by achieving a classification accuracy of 100%.

5. Conclusions

In conclusion, this research paper focused on the fault diagnosis of monoblock centrifugal pumps (MCP) using pre-trained neural networks. Six conditions, namely CAV, BFIF, BF, good, SF and IF, were analyzed to develop an effective diagnostic system. The vibration signals acquired from the MCP were processed using a spectrogram image conversion technique. This technique transformed the signals into spectrogram images, which were then utilized as inputs for the pre-trained networks. Several state-of-the-art networks, namely ResNet-50, InceptionV3, GoogLeNet, DenseNet-201, ShuffleNet, VGG-19, MobileNet-v2, InceptionResNetV2, VGG-16, NasNetmobile, EfficientNetb0, AlexNet, ResNet-18, Xception and ResNet101, were leveraged for the fault diagnosis task. By evaluating the effectiveness of the pre-trained networks with varied hyperparameters, it was observed that AlexNet exhibited the highest level of accuracy among the tested models. Its exceptional classification precision makes it a strong candidate for MCP fault diagnosis. Furthermore, the execution time of the classification process was thoroughly analyzed; AlexNet achieved an impressive execution time of 17 s, indicating its applicability for real-time fault diagnosis. Based on the comprehensive evaluation of the pre-trained networks and their performance in MCP fault diagnosis, it can be concluded that AlexNet with optimized hyperparameters is the optimal choice. Its high accuracy and efficient execution time make it a reliable tool for identifying and classifying faults in MCPs. The findings of this research provide valuable insights for maintenance and troubleshooting purposes, enabling the timely and accurate detection of MCP faults to ensure operational efficiency and reliability.

Author Contributions

Conceptualization, P.C.V. and V.S.; methodology, S.N.V.; software, T.K.M.; validation, N.L., P.P. and S.D.; formal analysis, S.N.V. and V.S.; investigation, P.C.V., S.N.V. and V.S.; resources, S.N.R., N.L. and S.D.; data curation, S.N.R., S.N.V., P.P. and S.D.; writing—original draft preparation, P.C.V. and S.N.V.; writing—review and editing, S.N.V., V.S. and S.D.; visualization, S.N.V., T.K.M., N.L., P.P. and S.D.; supervision, V.S. and S.D.; project administration, S.N.V., V.S. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data associated with the study can be obtained upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sakthivel, N.R.; Sugumaran, V.; Babudevasenapati, S. Vibration Based Fault Diagnosis of Monoblock Centrifugal Pump Using Decision Tree. Expert Syst. Appl. 2010, 37, 4040–4049. [Google Scholar] [CrossRef]
  2. Arun Shankar, V.K.; Umashankar, S.; Paramasivam, S.; Hanigovszki, N. A Comprehensive Review on Energy Efficiency Enhancement Initiatives in Centrifugal Pumping System. Appl. Energy 2016, 181, 495–513. [Google Scholar] [CrossRef]
  3. Sunal, C.E.; Dyo, V.; Velisavljevic, V. Review of Machine Learning Based Fault Detection for Centrifugal Pump Induction Motors. IEEE Access 2022, 10, 71344–71355. [Google Scholar] [CrossRef]
  4. Zaman, W.; Ahmad, Z.; Siddique, M.F.; Ullah, N.; Kim, J.M. Centrifugal Pump Fault Diagnosis Based on a Novel SobelEdge Scalogram and CNN. Sensors 2023, 23, 5255. [Google Scholar] [CrossRef] [PubMed]
  5. Abid, A.; Khan, M.T.; Iqbal, J. A Review on Fault Detection and Diagnosis Techniques: Basics and Beyond. Artif. Intell. Rev. 2020, 54, 3639–3664. [Google Scholar] [CrossRef]
  6. Tidriri, K.; Chatti, N.; Verron, S.; Tiplica, T. Bridging Data-Driven and Model-Based Approaches for Process Fault Diagnosis and Health Monitoring: A Review of Researches and Future Challenges. Annu. Rev. Control 2016, 42, 63–81. [Google Scholar] [CrossRef]
  7. Ribeiro, R.F., Jr.; dos Santos Areias, I.A.; Gomes, G.F. Fault Detection and Diagnosis Using Vibration Signal Analysis in Frequency Domain for Electric Motors Considering Different Real Fault Types. Sens. Rev. 2021, 41, 311–319. [Google Scholar] [CrossRef]
  8. Deng, S.; Pei, J.; Wang, Y.; Liu, B. Research on Fault Diagnosis of Mud Pump Fluid End Based on Acoustic Emission. Adv. Mech. Eng. 2017, 9, 1687814017711393. [Google Scholar] [CrossRef]
  9. Cheng, S.; Zhao, J.; Chen, C.; Li, K.; Wu, X.; Yu, T.; Yu, Y. An Open-Circuit Fault-Diagnosis Method for Inverters Based on Phase Current. Transp. Saf. Environ. 2020, 2, 148–160. [Google Scholar] [CrossRef]
  10. Zou, H.; Huang, F. A Novel Intelligent Fault Diagnosis Method for Electrical Equipment Using Infrared Thermography. Infrared Phys. Technol. 2015, 73, 29–35. [Google Scholar] [CrossRef]
  11. Gao, J.; Zhang, P.; Liu, B.; Xie, Z. An Integrated Fault Diagnosis Method of Gearboxes Using Oil Analysis and Vibration Analysis. In Proceedings of the 2007 8th International Conference on Electronic Measurement and Instruments, ICEMI, Xi’an, China, 16–18 August 2007; pp. 3371–3374. [Google Scholar] [CrossRef]
  12. Tang, S.; Zhu, Y.; Yuan, S. An Adaptive Deep Learning Model towards Fault Diagnosis of Hydraulic Piston Pump Using Pressure Signal. Eng. Fail. Anal. 2022, 138, 106300. [Google Scholar] [CrossRef]
  13. Patil, S.; Wani, K. Gear Fault Detection Using Noise Analysis and Machine Learning Algorithm with YAMNet Pretrained Network. Mater. Today Proc. 2023, 72, 1322–1327. [Google Scholar] [CrossRef]
  14. Sorsa, T.; Koivo, H.N.; Koivisto, H. Neural Networks in Process Fault Diagnosis. IEEE Trans. Syst. Man Cybern. 1991, 21, 815–825. [Google Scholar] [CrossRef]
  15. Cai, B.; Huang, L.; Xie, M. Bayesian Networks in Fault Diagnosis. IEEE Trans. Ind. Inform. 2017, 13, 2227–2240. [Google Scholar] [CrossRef]
  16. Ding, S.; Zhang, P.; Ding, E.; Yin, S.; Naik, A.; Deng, P.; Gui, W. On the Application of PCA Technique to Fault Diagnosis. Tsinghua Sci. Technol. 2010, 15, 138–144. [Google Scholar] [CrossRef]
  17. Elshenawy, L.M.; Chakour, C.; Mahmoud, T.A. Fault Detection and Diagnosis Strategy Based on K-Nearest Neighbors and Fuzzy C-Means Clustering Algorithm for Industrial Processes. J. Frankl. Inst. 2022, 359, 7115–7139. [Google Scholar] [CrossRef]
  18. de Souza, D.L.; Granzotto, M.H.; de Almeida, G.M.; Oliveira-Lopes, L.C. Fault Detection and Diagnosis Using Support Vector Machines—A SVC and SVR Comparison. J. Saf. Eng. 2014, 3, 18–29. [Google Scholar] [CrossRef]
  19. Yu, L.; Qu, J.; Gao, F.; Tian, Y. A Novel Hierarchical Algorithm for Bearing Fault Diagnosis Based on Stacked LSTM. Shock Vib. 2019, 2019, 2756284. [Google Scholar] [CrossRef]
  20. Sakthivel, N.R.; Nair, B.B.; Sugumaran, V. Soft Computing Approach to Fault Diagnosis of Centrifugal Pump. Appl. Soft Comput. 2012, 5, 1574–1581. [Google Scholar] [CrossRef]
  21. Rapur, J.S.; Tiwari, R. On-Line Time Domain Vibration and Current Signals Based Multi-Fault Diagnosis of Centrifugal Pumps Using Support Vector Machines. J. Nondestruct. Eval. 2018, 38, 1–18. [Google Scholar] [CrossRef]
  22. Dutta, N.; Kaliannan, P.; Shanmugam, P. SVM Algorithm for Vibration Fault Diagnosis in Centrifugal Pump. Intell. Autom. Soft Comput. 2023, 35, 2997–3020. [Google Scholar] [CrossRef]
  23. Cao, S.; Hu, Z.; Luo, X.; Wang, H. Research on Fault Diagnosis Technology of Centrifugal Pump Blade Crack Based on PCA and GMM. Measurement 2021, 173, 108558. [Google Scholar] [CrossRef]
  24. Manikandan, S.; Duraivelu, K. Vibration-Based Fault Diagnosis of Broken Impeller and Mechanical Seal Failure in Industrial Mono-Block Centrifugal Pumps Using Deep Convolutional Neural Network. J. Vib. Eng. Technol. 2023, 11, 141–152. [Google Scholar] [CrossRef]
  25. Lakshmanan, K.; Gil, A.J.; Auricchio, F.; Tessicini, F. A Fault Diagnosis Methodology for an External Gear Pump with the Use of Machine Learning Classification Algorithms: Support Vector Machine and Multilayer Perceptron. Loughb. Univ. Conf. Contrib. 2020. [Google Scholar] [CrossRef]
  26. Kim, S.; Choi, J.H. Convolutional Neural Network for Gear Fault Diagnosis Based on Signal Segmentation Approach. Struct. Health Monit. 2019, 18, 1401–1415. [Google Scholar] [CrossRef]
  27. Liu, S.; Jiang, W.; Niu, H. Fault Diagnosis of Hydraulic Pump Based on Rough Set and PCA Algorithm. In Proceedings of the 2008 Fifth International Conference on Fuzzy Systems and Knowledge Discovery, Jinan, China, 18–20 October 2008; Volume 5, pp. 256–260. [Google Scholar] [CrossRef]
  28. Fu, X. Bayesian Network Based Fault Diagnosis of Aero Hydraulic Pump. In Proceedings of the CSAA/IET International Conference on Aircraft Utility Systems (AUS 2020), Online Conference, 18–21 September 2020; pp. 539–543. [Google Scholar] [CrossRef]
  29. Park, Y.J.; Fan, S.K.S.; Hsu, C.Y. A Review on Fault Detection and Process Diagnostics in Industrial Processes. Processes 2020, 8, 1123. [Google Scholar] [CrossRef]
  30. Hoang, D.T.; Kang, H.J. A Survey on Deep Learning Based Bearing Fault Diagnosis. Neurocomputing 2019, 335, 327–335. [Google Scholar] [CrossRef]
  31. Xie, T.; Xu, Q.; Jiang, C.; Lu, S.; Wang, X. The Fault Frequency Priors Fusion Deep Learning Framework with Application to Fault Diagnosis of Offshore Wind Turbines. Renew. Energy 2023, 202, 143–153. [Google Scholar] [CrossRef]
  32. Zhang, L.; Fan, Q.; Lin, J.; Zhang, Z.; Yan, X.; Li, C. A Nearly End-to-End Deep Learning Approach to Fault Diagnosis of Wind Turbine Gearboxes under Nonstationary Conditions. Eng. Appl. Artif. Intell. 2023, 119, 105735. [Google Scholar] [CrossRef]
  33. Wang, X.; Li, A.; Han, G. A Deep-Learning-Based Fault Diagnosis Method of Industrial Bearings Using Multi-Source Information. Appl. Sci. 2023, 13, 933. [Google Scholar] [CrossRef]
  34. Muralidharan, V.; Sugumaran, V.; Indira, V. Fault Diagnosis of Monoblock Centrifugal Pump Using SVM. Eng. Sci. Technol. Int. J. 2014, 17, 152–157. [Google Scholar] [CrossRef]
  35. Muralidharan, V.; Sugumaran, V.; Sakthivel, N.R. Fault Diagnosis of Monoblock Centrifugal Pump Using Stationary Wavelet Features and Bayes Algorithm. Asian J. Sci. Appl. Technol. 2014, 3, 1–4. [Google Scholar] [CrossRef]
  36. Sakthivel, N.R.; Sugumaran, V.; Nair, B.B. Application of Support Vector Machine (SVM) and Proximal Support Vector Machine (PSVM) for Fault Classification of Monoblock Centrifugal Pump. Int. J. Data Anal. Tech. Strateg. 2010, 2, 38–61. [Google Scholar] [CrossRef]
Figure 1. Stages in ML and DL.
Figure 2. Illustration of the procedure involved in fault diagnosis of MCP using deep learning.
Figure 3. Experimental setup of condition monitoring of MCP and the location of an accelerometer.
Figure 4. Spectrogram images depicting various conditions of the MCP: (a) Cavitation; (b) Bearing and impeller fault; (c) Bearing fault; (d) Impeller fault; (e) Good condition.
Figure 5. Relationship between pre-trained networks and the number of parameters trained.
Figure 6. Confusion matrix of AlexNet for fault diagnosis of MCPs.
Figure 7. Training progress of AlexNet for fault diagnosis of MCPs.
Table 1. ML approaches in mechanical systems.
Machine Learning Approach | Mechanical System | References
SVM and Multi-layer perceptron | Gear pump | [25]
CNN | Gear | [26]
Rough set and PCA | Hydraulic pump | [27]
Bayesian Network | Aero hydraulic pump | [28]
Table 2. Properties of the pre-trained network used.
Model | Computational Complexity | Input Image Size
AlexNet | High | 227 × 227
VGG-16 | Very High | 224 × 224
GoogLeNet | Moderate-High | 224 × 224
Inception-V3 | High | 299 × 299
DenseNet-201 | High | 224 × 224
MobileNet-V2 | High | 224 × 224
ResNet-50 | Very High | 224 × 224
ResNet-101 | Very High | 224 × 224
Xception | High | 299 × 299
InceptionResNetV2 | Low moderate | 299 × 299
ShuffleNet | Very High | 224 × 224
VGG-19 | Very High | 224 × 224
NasNetmobile | Moderate | 224 × 224
EfficientNetb0 | High | 224 × 224
ResNet-18 | Moderate | 224 × 224
Table 3. Performance of pre-trained network with varying train-test split ratio. Values are classification accuracy (%), with training time in parentheses.
Pre-Trained Network | 0.6 | 0.7 | 0.75 | 0.8 | 0.85
ResNet-50 | 100.00 (225 s) | 100.00 (257 s) | 100.00 (259 s) | 100.00 (268 s) | 100.00 (280 s)
GoogLeNet | 100.00 (132 s) | 100.00 (143 s) | 100.00 (144 s) | 100.00 (154 s) | 100.00 (158 s)
DenseNet-201 | 100.00 (1553 s) | 100.00 (1696 s) | 100.00 (1825 s) | 99.60 (1938 s) | 100.00 (2039 s)
ShuffleNet | 100.00 (359 s) | 100.00 (413 s) | 100.00 (419 s) | 100.00 (431 s) | 100.00 (441 s)
VGG-19 | 100.00 (683 s) | 100.00 (695 s) | 100.00 (742 s) | 100.00 (748 s) | 99.60 (753 s)
MobileNet-v2 | 100.00 (448 s) | 100.00 (464 s) | 100.00 (475 s) | 99.50 (476 s) | 97.80 (483 s)
InceptionResNetV2 | 100.00 (1496 s) | 99.00 (1582 s) | 100.00 (1658 s) | 100.00 (1713 s) | 100.00 (5985 s)
VGG-16 | 100.00 (435 s) | 100.00 (480 s) | 100.00 (502 s) | 100.00 (529 s) | 100.00 (547 s)
NasNetmobile | 100.00 (2011 s) | 99.30 (2124 s) | 98.90 (2184 s) | 98.00 (2199 s) | 99.60 (2266 s)
EfficientNetb0 | 100.00 (1050 s) | 99.30 (1060 s) | 100.00 (1123 s) | 100.00 (1144 s) | 100.00 (1149 s)
AlexNet | 100.00 (28 s) | 100.00 (33 s) | 100.00 (35 s) | 100.00 (38 s) | 100.00 (40 s)
ResNet-18 | 100.00 (84 s) | 99.30 (90 s) | 100.00 (103 s) | 100.00 (106 s) | 100.00 (113 s)
Xception | 100.00 (2389 s) | 99.70 (2448 s) | 98.00 (4184 s) | 100.00 (4327 s) | 100.00 (4341 s)
ResNet101 | 100.00 (401 s) | 100.00 (432 s) | 100.00 (450 s) | 100.00 (528 s) | 100.00 (535 s)
InceptionV3 | 100.00 (568 s) | 100.00 (582 s) | 100.00 (633 s) | 100.00 (724 s) | 100.00 (752 s)
Table 4. Solver influence on pre-trained network performance. Values are classification accuracy (%), with training time in parentheses.
Pre-Trained Network | sgdm | adam | rmsprop
ResNet-50 | 100.00 (225 s) | 98.20 (272 s) | 100.00 (250 s)
GoogLeNet | 100.00 (132 s) | 100.00 (171 s) | 100.00 (158 s)
DenseNet-201 | 100.00 (1696 s) | 100.00 (2622 s) | 100.00 (2207 s)
ShuffleNet | 100.00 (389 s) | 100.00 (461 s) | 100.00 (430 s)
VGG-19 | 100.00 (683 s) | 100.00 (1306 s) | 20.10 (1008 s)
MobileNet-v2 | 100.00 (448 s) | 100.00 (529 s) | 100.00 (486 s)
InceptionResNetV2 | 100.00 (1496 s) | 100.00 (2188 s) | 99.10 (1875 s)
VGG-16 | 100.00 (435 s) | 100.00 (1202 s) | 66.00 (871 s)
NasNetmobile | 100.00 (2011 s) | 100.00 (3232 s) | 100.00 (2656 s)
EfficientNetb0 | 100.00 (1050 s) | 100.00 (1213 s) | 100.00 (1085 s)
AlexNet | 100.00 (28 s) | 99.70 (35 s) | 99.70 (31 s)
ResNet-18 | 100.00 (84 s) | 100.00 (95 s) | 100.00 (88 s)
Xception | 100.00 (2389 s) | 100.00 (3512 s) | 100.00 (4640 s)
ResNet101 | 100.00 (401 s) | 100.00 (655 s) | 100.00 (521 s)
InceptionV3 | 100.00 (568 s) | 100.00 (903 s) | 100.00 (829 s)
Table 5. Performance of pre-trained network with varying batch size. Values are classification accuracy (%), with training time in parentheses.
Pre-Trained Network | 8 | 10 | 16 | 24 | 32
ResNet-50 | 100.00 (284 s) | 100.00 (225 s) | 100.00 (161 s) | 100.00 (140 s) | 100.00 (122 s)
GoogLeNet | 98.2 (179 s) | 100.00 (132 s) | 100.00 (87 s) | 100.00 (68 s) | 100.00 (156 s)
DenseNet-201 | 100.00 (2308 s) | 100.00 (1696 s) | 100.00 (1648 s) | 100.00 (1527 s) | 100.00 (1195 s)
ShuffleNet | 99.10 (515 s) | 100.00 (389 s) | 100.00 (243 s) | 100.00 (174 s) | 100.00 (139 s)
VGG-19 | 100.00 (824 s) | 100.00 (683 s) | 100.00 (606 s) | 100.00 (884 s) | 100.00 (838 s)
MobileNet-v2 | 100.00 (553 s) | 100.00 (448 s) | 99.10 (324 s) | 100.00 (270 s) | 100.00 (232 s)
InceptionResNetV2 | 100.00 (1880 s) | 100.00 (1496 s) | 100.00 (959 s) | 100.00 (920 s) | 100.00 (1376 s)
VGG-16 | 100.00 (666 s) | 100.00 (435 s) | 100.00 (626 s) | 100.00 (677 s) | 100.00 (645 s)
NasNetmobile | 96.40 (2768 s) | 100.00 (2011 s) | 100.00 (1160 s) | 100.00 (837 s) | 99.10 (1146 s)
EfficientNetb0 | 100.00 (1329 s) | 100.00 (1050 s) | 100.00 (660 s) | 100.00 (484 s) | 100.00 (690 s)
AlexNet | 100.00 (32 s) | 100.00 (28 s) | 100.00 (23 s) | 98.00 (20 s) | 100.00 (17 s)
ResNet-18 | 100.00 (101 s) | 100.00 (84 s) | 100.00 (58 s) | 100.00 (44 s) | 100.00 (37 s)
Xception | 99.00 (2990 s) | 100.00 (2389 s) | 99.60 (3187 s) | 97.80 (2505 s) | 97.80 (2385 s)
ResNet101 | 100.00 (500 s) | 100.00 (401 s) | 100.00 (293 s) | 100.00 (239 s) | 100.00 (231 s)
InceptionV3 | 100.00 (917 s) | 100.00 (568 s) | 100.00 (427 s) | 100.00 (332 s) | 99.50 (332 s)
Table 6. Performance of pre-trained network with varying learning rate. Values are classification accuracy (%), with training time in parentheses.
Pre-Trained Network | 0.0001 | 0.001 | 0.0003
ResNet-50 | 100.00 (122 s) | 100.00 (122 s) | 100.00 (122 s)
GoogLeNet | 100.00 (68 s) | 99.10 (68 s) | 97.30 (69 s)
DenseNet-201 | 100.00 (1195 s) | 100.00 (1547 s) | 100.00 (1322 s)
ShuffleNet | 100.00 (139 s) | 100.00 (137 s) | 100.00 (151 s)
VGG-19 | 100.00 (606 s) | 98.30 (1055 s) | 100.00 (1063 s)
MobileNet-v2 | 100.00 (232 s) | 100.00 (234 s) | 100.00 (236 s)
InceptionResNetV2 | 100.00 (959 s) | 100.00 (882 s) | 100.00 (842 s)
VGG-16 | 100.00 (435 s) | 20.00 (414 s) | 100.00 (428 s)
NasNetmobile | 100.00 (837 s) | 100.00 (816 s) | 100.00 (814 s)
EfficientNetb0 | 100.00 (1050 s) | 100.00 (1059 s) | 100.00 (1237 s)
AlexNet | 100.00 (17 s) | 100.00 (17 s) | 100.00 (17 s)
ResNet-18 | 100.00 (37 s) | 100.00 (37 s) | 100.00 (37 s)
Xception | 100.00 (2389 s) | 100.00 (4057 s) | 100.00 (4047 s)
ResNet101 | 100.00 (231 s) | 100.00 (250 s) | 100.00 (306 s)
InceptionV3 | 100.00 (332 s) | 100.00 (340 s) | 99.50 (346 s)
Table 7. Overall classification accuracy of pre-trained networks with optimal hyperparameters. Accuracy values are given with training time in parentheses.
Pre-Trained Models | Split Ratio | Optimizer | Batch Size | Learning Rate | Accuracy (%)
ResNet-50 | 0.60 | sgdm | 32 | 0.0003 | 100 (122 s)
GoogLeNet | 0.60 | sgdm | 24 | 0.0001 | 100 (68 s)
DenseNet-201 | 0.60 | sgdm | 32 | 0.0001 | 100 (1195 s)
ShuffleNet | 0.60 | sgdm | 32 | 0.001 | 100 (137 s)
VGG-19 | 0.60 | sgdm | 16 | 0.0001 | 100 (606 s)
MobileNet-v2 | 0.60 | sgdm | 32 | 0.0001 | 100 (232 s)
InceptionResNetV2 | 0.60 | sgdm | 16 | 0.0003 | 100 (842 s)
VGG-16 | 0.60 | sgdm | 10 | 0.0003 | 100 (428 s)
NasNetmobile | 0.60 | sgdm | 24 | 0.0003 | 100 (814 s)
EfficientNetb0 | 0.60 | sgdm | 10 | 0.0001 | 100 (1050 s)
AlexNet | 0.60 | sgdm | 32 | 0.0001 | 100 (17 s)
ResNet-18 | 0.60 | sgdm | 32 | 0.0001 | 100 (37 s)
Xception | 0.60 | sgdm | 10 | 0.0001 | 100 (2389 s)
ResNet101 | 0.60 | sgdm | 32 | 0.0001 | 100 (231 s)
InceptionV3 | 0.60 | sgdm | 24 | 0.0001 | 100 (332 s)
Table 8. Comparisons of performance with various cutting-edge works.
Fault Diagnosis Approach | Classification Accuracy (%) | References
SVM | 99.84 | [34]
Bayes algorithm | 82.00 | [35]
PSVM | 96.66 | [36]
SVM | 99.66 | [36]
AlexNet (Proposed) | 100.00
