Article

Intelligent Framework Design for Quality Control in Industry 4.0

1 Electrical Engineering Department, University of Engineering and Technology, Peshawar 25000, KP, Pakistan
2 Faculty of Mechanical Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Swabi 12430, KP, Pakistan
3 Department of Industrial Engineering, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
4 Department of Electrical Engineering, KTH Royal Institute of Technology, Teknikringen 33, 114 28 Stockholm, Sweden
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7726; https://doi.org/10.3390/app14177726
Submission received: 22 July 2024 / Revised: 19 August 2024 / Accepted: 26 August 2024 / Published: 2 September 2024

Abstract:
This research aims to develop an intelligent framework for quality control and fault detection in pre-production and post-production systems in Industry 4.0. In the pre-production system, the health of the manufacturing machine is monitored. In this study, we examine the gear system of induction motors used in industries. In post-production, the product is tested for quality using a machine vision system. Gears are fundamental components in countless mechanical systems, ranging from automotive transmissions to industrial machinery, where their reliable operation is vital for overall system efficiency. A faulty gear system in the induction motor directly affects the quality of the manufactured product. Vibration data, collected from the gear system of the induction motor using vibration sensors, are used to predict the motor’s health condition. The gear system is monitored for six different fault conditions. In the second part, the quality of the final product is inspected with the machine vision system. Faults on the surface of manufactured products are detected, and the product is classified as a good or bad product. The quality control system is developed with different deep learning models. Finally, the quality control framework is validated and tested with the evaluation metrics.

1. Introduction

Quality control is an essential process to ensure that a product meets the expected standard and customer expectations, which enhances loyalty and satisfaction. Quality control helps detect faulty and defective products, thus avoiding costly recalls and material wastage. QC helps in complying with regulations and safety protocols, thereby avoiding legal issues and protecting the reputations of businesses. By detecting anomalies in a product early, quality control can minimize product failure and safeguard business finances. Quality control has evolved over time through technological and industrial advancements. Initially, QC was managed by experts and individual craftsmen, who were reputed for their workmanship [1]. Then came the first industrial revolution, which saw the rise of factories and mass production and shifted the focus from individual craftsmen to production line efficiency [2]. The second industrial revolution, in the early 20th century, saw the introduction of standards and management methods. Statistical methods, including control charts, were developed to monitor and manage production quality [3]. Total Quality Management (TQM) emerged in the third industrial revolution, emphasizing continuous improvement, customer focus, and employee involvement. The implementation of ISO standards and Six Sigma brought superior QC practices by applying international standards and statistical tools to reduce defects [4]. The integration of technologies like AI, the Internet of Things, and cloud computing has since transformed QC with smart factories and sophisticated monitoring systems. AI and machine learning continue to enhance QC, enabling more adaptive and autonomous systems. This era is termed Industry 4.0 or the fourth industrial revolution.
Overall, QC has evolved from manual inspections to complex, data-driven systems, reflecting technological advancements and changing industry and consumer demands.
Two facets of quality control are the subject of this study: pre-production quality control and post-production quality control. The goal of pre-production quality control is to ensure that the production machines are in good working order, since this directly affects the quality of the products they produce. The target is the gear system of an induction motor used in manufacturing. Post-production quality control examines the condition of products that have already been manufactured; we use the hex nut as a test product for this stage. In many industrial applications, machinery performance, especially that of gears, must be optimal. Gear problems can result in expensive downtime, lower production, and even safety risks [5]. Traditional maintenance procedures are frequently reactive, which can lead to unanticipated failures and consequent operational disruptions. Therefore, to avoid unplanned shutdowns and catastrophic failures, the early prediction of faults and proactive maintenance of the gear system are in high demand. Using vibration data as the input feature (to monitor health) together with labeled outputs corresponding to time-domain features is a promising method that can be broadly applied to gear fault prediction. Vibration signals contain significant information on gear conditions, such as wear, cracks, and misalignment [6]. Vibration data analysis provides an effective way to monitor trends that indicate impending failure and to schedule timely repair procedures, mitigating risk and extending the life of industrial machines. Despite these benefits, the method poses several challenges when implementing gear fault prediction and health monitoring systems based on vibration data.
Secondly, in the modern manufacturing industries, it is essential to prioritize product quality to satisfy customers, stay competitive, and adhere to regulations. Traditional quality checks often rely on human inspections, which can be subjective, labor intensive, and prone to mistakes. This has led to a growing need for automated and dependable quality control solutions that can enhance efficiency, consistency, and precision in manufacturing processes [7]. A promising technology that is gaining traction for automating quality control tasks in industries is machine vision. By utilizing cameras, sensors, and image processing algorithms, machine vision systems can capture, analyze, and interpret data from products. This enables the accurate identification of defects, discrepancies, and irregularities in real time [8].
Our research objectives aim to address key challenges and opportunities in the field of gear fault prediction and health monitoring using vibration data and machine vision-based quality control, ultimately contributing to the development of more efficient, reliable, and cost-effective maintenance practices in industrial machinery applications. The research focuses on developing advanced techniques for extracting relevant features from data collected from gear systems. These features may include time-domain analysis, image processing, and statistical measures to capture characteristic patterns indicative of different types of gear faults. This study explores the effectiveness of various machine learning algorithms, such as artificial neural networks, convolutional neural networks, and transfer learning for gear fault prediction based on vibration data and images. The study compares the efficiency of different techniques in terms of accuracy, generalization, and computational efficiency. The research also aims to provide real-time gear health monitoring systems that can analyze vibration data streams continuously and send out alerts and messages in a timely manner when they notice anomalous behavior or approaching failures.
Quality control in Industry 4.0 is the process of employing advanced technologies, including machine learning algorithms, big data analytics, the Internet of Things, and statistical methods, for the detection of defects, failures, and any other abnormal operating conditions in industrial systems and processes. The integration of digital technology, robotic control, and data acquisition to create smart factories and optimize industrial process productivity, adaptability, and efficiency is the concept named Industry 4.0, also frequently referred to as the fourth industrial revolution [9]. Sensors are arguably the most important component in the whole process of Industry 4.0. Sensors acquire data on the fly from machines, equipment, and production lines; the data acquired include images, temperature, pressure, vibration, flow rate, and electrical signals, among others [10]. Potential defects and anomalies can be detected by continuously monitoring the data collected by the sensors and checking for any deviation from standard operating conditions using machine learning models. IoT technology ensures proper interaction between the sensors, devices, and systems present in the industrial environment. IoT-enabled smart devices collect data and transfer them to the cloud or a control center for analysis. With the help of machine learning-based fault detection algorithms, IoT data streams can be used to automatically discover and predict faults, trends, and patterns indicating that a particular piece of equipment is faulty or failing [11].
The analysis of vibration data produced by rotating machines has emerged as a powerful tool for investigating the health condition of machines that use gear systems as their primary component for power transmission. Vibration data carry valuable information about the dynamic behavior of gear systems, including fault-induced patterns, characteristic frequencies, and harmonics [12]. By analyzing vibration data, researchers and practitioners can identify abnormal operating conditions, detect incipient faults, and predict the remaining useful life (RUL) of gears. Vibration data are collected from motors and other industrial equipment with vibration sensors installed on the surface of the rotating equipment; the vibration patterns are transferred to the sensor via the motor shaft [13]. Vibration sensors include accelerometers and laser sensors. Microphones can also be used to sense and collect the sound energy generated during the operation of the gear systems of machines. Accelerometers are widely used due to their availability and low cost [14]. Lasers provide highly accurate vibration patterns; however, they are expensive and cannot be installed on hot surfaces of industrial equipment [15]. Microphones are also not preferred in industrial environments because they collect background noise along with the sound of the machine [16]; using a microphone would require speech-processing techniques to separate the background noise from the sound patterns of the machine [17]. Figure 1 provides the vibration pattern obtained from the accelerometer for an eccentricity fault under an 80 Nm load.
Machine vision-based systems combine industrial automation with image processing. Quality control has been one of the core application areas of machine vision in industry. Machine vision, along with machine learning, provides economical, reliable, and fast inspection that enhances quality as well as productivity [7]. Developing a machine vision model is a challenging job, as each application is unique and has its own needs and desired outcomes. A machine vision model combines lighting, optics, and digital cameras with machine learning and image processing applications; these applications process the images and decide on actions based on the processed data. The machine vision system uses advanced hardware, including optics, cameras, and carefully selected lighting systems, to collect the correct image data under different challenging conditions. If the collected images do not have sufficient contrast in white light, a series of LED banks cycling through red, blue, and green light can be used to capture acceptable images; the frequency and intensity of the lighting system can also be adjusted to obtain workable images [18]. An illustration of a machine vision system is shown in Figure 2. Once the workable image data are collected, each pixel of the image is assigned an integer value ranging from 0 to 255: in grayscale images, a completely white pixel is assigned a value of 255, and a completely dark pixel a value of zero. After the image processing software has assigned numerical values to all the pixels in the image, the software checks for patterns, variations, and edges and then combines them to detect shapes. The underlying software also enables the machine vision system to detect objects and identify their sizes and shapes in the image.
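As a toy illustration of the grayscale pixel-value convention described above (0 for black, 255 for white), the sketch below thresholds a small synthetic frame to separate a bright object from the dark background; the 6x6 frame and its pixel values are made up for illustration.

```python
import numpy as np

# Hypothetical 6x6 grayscale frame: a bright 3x3 part on a dark background.
frame = np.zeros((6, 6), dtype=np.uint8)
frame[2:5, 2:5] = 220          # object pixels close to white (255)

# Fixed threshold: pixels brighter than 128 are treated as the object.
mask = frame > 128
object_pixels = int(mask.sum())

print(object_pixels)           # 9 (the 3x3 bright square)
```

Real inspection software then combines such pixel-level decisions into edges and shapes, as described above.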
The software identifies the presence of faults on a product, their precise location, and whether their size is within the tolerance allowed by the quality inspection.
The image preprocessing process is carried out on raw images obtained from cameras. These raw images have many problems like noise, improper edges, and blurred focus, and sometimes, the image quality is disturbed by lighting sources, causing shadows to appear in the images. Preprocessing is carried out to obtain images free of the above-mentioned problems. The preprocessing methods may include scaling, contrasting, edge detection, histogram generation, image segmentation, etc. [20].
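One of the preprocessing steps mentioned above, contrast adjustment, can be sketched in a few lines of NumPy as a linear contrast stretch; the 2x2 patch and its intensity values are purely illustrative.

```python
import numpy as np

def contrast_stretch(img):
    """Linearly rescale pixel intensities to span the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:                               # flat image: nothing to stretch
        return np.zeros_like(img)
    scaled = (img.astype(np.float64) - lo) / (hi - lo) * 255
    return scaled.astype(np.uint8)

raw = np.array([[50, 60], [70, 100]], dtype=np.uint8)  # low-contrast patch
out = contrast_stretch(raw)
print(out.min(), out.max())   # 0 255
```

The other listed steps (scaling, edge detection, histogram generation, segmentation) follow the same pattern of per-pixel or neighborhood operations on the raw image array.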
The rest of the paper is organized as follows: Related works have been provided in Section 2. Section 3 provides details about the methodology, data insights, and data processing techniques. The deep learning model used for quality control is described in this section. Section 4 of the paper provides the results and discussion of the pre-production and post-production quality control models. The accuracy, loss, and confusion matrices are plotted to elaborate the results. Finally, Section 5 provides the conclusion, advantages, shortcomings, and future directions for the study.

2. Related Work

Quality control in industrial settings is crucial for ensuring product reliability, safety, and customer satisfaction. With the advancement of technology, intelligent frameworks are being explored to enhance industrial quality control processes. Gehrmann and Gunnarsson [21] proposed an industrial automation and control system architecture based on digital twins. This study revealed that the security and reliability of control systems and industrial automation are crucial factors in quality manufacturing. The digital twin concept was introduced as a methodology for generating an analogous digital counterpart of a physical object, system, or process, thereby enabling real-time observation and monitoring. The authors underlined the potential of digital twin technology as a promising instrument for increasing the efficiency of industrial quality control processes by providing precise and up-to-date data for analysis and decision making. Pérez et al. [22] provided a comparative survey of the use of machine vision techniques for robotic control in industrial settings. This study also stressed the necessity of implementing sophisticated machine vision systems in industrial automation and control. Furthermore, the authors explained how machine vision is used to improve the quality and consistency of industrial processes. Cui et al. [23] presented a survey on the use of machine learning for IoT-based machines and stressed its role in industrial quality management. The focus of the study was to reveal the potential of machine learning algorithms to enhance quality control processes as well as to improve the defect detection capability for industrial products. In [24], Yin et al. developed a real-time monitoring and control framework for industrial cyber–physical systems.
The examined case focused on the importance of continuous data collection and processing during the manufacturing process through machine vision approaches aimed at ensuring better quality within industrial cyber–physical systems. Wang et al. [25] proposed an intelligent critic control model for the disturbance attenuation of industrial systems. Their research focused on the use of machine vision methods for detecting defective products and improving manufacturing processes so that products conform to standards. The defect inspection and detection of industrial products using machine vision were demonstrated by Benbarrad et al. [26] using a machine learning model. The study centered on using machine learning models for the automated detection of flaws and the enhancement of quality in industries. Ong et al. [27] introduced the wavelet neural network, machine vision, and the application of a tool condition monitoring system for CNC milling machines. The findings of the study focused on the use of machine vision systems in the real-time monitoring and management of the condition of tools used in industrial machining operations. Feng et al. [28] introduced an efficient method for formulating invariants for fault detection in industrial control systems. This work emphasized the analysis of vibration data to detect anomalies and other issues contributing to the degradation of the quality of industrial products. Villalba-Díez et al. [29] combined visual data and vibration data analysis to predict the quality of products and monitor the health of machines in industries. Integrating both data types allowed the researchers to develop an effective quality control framework addressing issues connected with product quality and machine performance. Gundewar et al. [30] presented a comparative study of the advanced signal processing algorithms utilized for diagnosing gear faults and the prognostics used for assessing the health condition of machines used in industries. The assessment compared the wavelet transform, Hilbert–Huang analysis, and empirical mode decomposition of vibration signals under different fault conditions. Akash Patel et al. [6] described a methodology that analyzes and investigates cracks on the surfaces of spur gears. They used variational mode decomposition along with the short-time Fourier transform to analyze the vibration pattern for the crack condition of gears. The effectiveness of the methodology was validated by comparing results with various machine learning algorithms. In [31], Mohamed Habib et al. used numerical modeling to detect root cracks and eccentricity faults in a single-stage gear system; they used the short-time Fourier transform, fast Fourier transform, kurtograms, and a squared envelope spectrum to investigate the vibration patterns of faults induced under variable operating conditions. Cui et al. [32] used multi-atom matching and dictionary decomposition methods to diagnose gears in different fault conditions in high-speed milling in the steel industry.
Despite the progress in utilizing vibration data for industrial quality control, several knowledge gaps warrant further investigation. First, there is a need for more robust validation of deep learning algorithms in real-world industrial applications. Additionally, research on frameworks that integrate computer vision and vibration analysis could contribute to the widespread adoption of advanced quality control systems. Furthermore, the long-term effects of industrial activities on environmental quality, including vibration-related impacts, require in-depth exploration to establish sustainable industrial practices. Moving forward, addressing the identified knowledge gaps and exploring the suggested research directions can significantly advance the field of industrial quality control and contribute to the development of sustainable industrial practices.

3. Materials and Methods

Figure 3 depicts the proposed framework of pre-production and post-production quality control. For the pre-production model, vibration data are collected from the sensors installed on the surface of the induction motor. The sensors collect the data from the gearbox via the shaft of the motor. The data are routed from the sensors to the AI engines via internet gateways. The AI engine runs the deep learning model and monitors the health of the gear system of the induction motor. The health condition is displayed on the screen attached to the AI engine. The database server continuously stores the historical data and fault conditions. The display system helps the engineers and operators in the industry decide whether any prognostic measures are required to avoid unexpected shutdowns. The post-production part is depicted in the top left of the figure. The machine vision system consists of a lighting system and a camera to capture images of the final product on the conveyor belt of the production line. The images are sent to the AI engines, which are also trained on images of defective and good products. The AI engines preprocess the vibration data and image data sets prior to classifying and predicting fault conditions.
The database server will similarly store the historical data of faulty products and defect-free products. An alarm system can be added to the display to inform packaging personnel of defective products.
Figure 4 provides the methodology for the proposed quality control framework; the framework consists of the following parts:
  • Collection of vibration data from sensors;
  • Data cleaning and preprocessing;
  • Training and testing the ANN model for vibration data;
  • Collection and cleaning of product images;
  • Post-fabrication product fault detection using a CNN.
The proposed framework is divided into two parts, pre-production quality control and post-production quality control. In the pre-production quality control, the vibration pattern obtained from sensors installed on the surface of the motor is inspected for gear faults. The gear health of a production motor has a direct relation with the quality of the product. The faulty gear system of a motor will result in different surface faults on the product. Therefore, to make sure the product is manufactured as per quality standards, the health of the gear system of the production motor needs to be monitored. In the pre-production phase, an experiment is performed to ensure the health of the gear system of the motors used in production. The data are collected from two sensors installed on the surface of the induction motor. The sensors used in the data collection are accelerometers. Two sensors are installed horizontally and vertically near the shaft of the motor, respectively, to collect data along the x-axis and y-axis. Different experiments are performed to monitor the health of the gear system. The second part takes into account the images of the product already manufactured. The images are collected with a machine vision system and processed for fault detection using a convolutional neural network (CNN). The metal hex nut data are used as a test case for the fault detection experiment.
The first experiment in the study involves utilizing an artificial neural network (ANN) to monitor the gear system’s health under varying loads. Data from the sensors are categorized as sensor 1 data along the x-axis of the shaft and sensor 2 data along the y-axis of the shaft. The vibration data collected from the sensors are recorded as data points in a CSV file, encompassing 900,000 points from sensor 1 and sensor 2. The gear system operates at two load conditions, 0 Nm and 80 Nm, with the load controlled by a brake connected to the shaft. The sampling period is set at 0.0002 s, equivalent to 5000 samples per second. The gear data are recorded for six fault conditions: eccentricity fault, missing tooth, no fault, root crack, surface faults, and chipping tip. Each of these conditions is recorded for two loads, 0 Nm and 80 Nm, resulting in a total of 12 classes. There are 75,000 data points for each class, which adds up to a total of 900,000 data points. Table 1 presents the data point count and statistical parameters (e.g., standard deviation, maximum, and minimum) for sensor 1 and sensor 2 for all 12 classes [33].
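The composition of the data set described above can be summarized with a short sanity-check script: six fault conditions times two loads gives twelve classes, and a 0.0002 s sampling period corresponds to 5000 samples per second, so each class's 75,000 points represent 15 s of recording.

```python
faults = ["eccentricity", "missing tooth", "no fault",
          "root crack", "surface fault", "chipped tooth"]
loads_nm = [0, 80]

classes = [(f, l) for f in faults for l in loads_nm]   # one class per pair
points_per_class = 75_000
sampling_period_s = 0.0002

total_points = len(classes) * points_per_class
sample_rate_hz = round(1 / sampling_period_s)          # samples per second
seconds_per_class = points_per_class / sample_rate_hz

print(len(classes), total_points, sample_rate_hz, seconds_per_class)
# 12 900000 5000 15.0
```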
The sensor data undergo preprocessing: missing values are substituted with average values, and fault conditions are encoded using label encoding, which converts categorical labels into a numerical format. Label encoding is performed such that each of the 12 gear fault classes is encoded with a numeric value from 0 to 11. The sensor data are then subjected to min–max normalization. Normalization scales data to a specific range; this is crucial for ensuring all the features in the data contribute equally to the model training process [34]. Normalization can reduce bias in the results, as features with large-scale samples can otherwise dominate the model, leading to biased results. Stability issues in a model can also be reduced with normalization; an unstable model often leads to the problem of an exploding gradient [35].
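A minimal sketch of this preprocessing pipeline (mean imputation, min–max scaling, and integer label encoding) might look as follows; the sensor readings and class names are hypothetical and only three of the twelve classes are shown.

```python
import numpy as np

# Hypothetical sensor column with one missing reading (NaN).
sensor1 = np.array([0.12, np.nan, 0.30, 0.18, 0.40])

# 1) Substitute missing values with the column average.
filled = np.where(np.isnan(sensor1), np.nanmean(sensor1), sensor1)

# 2) Min-max normalization to the [0, 1] range.
normed = (filled - filled.min()) / (filled.max() - filled.min())

# 3) Label encoding: map each fault/load class to an integer (0..11 in the paper).
class_names = ["eccentricity_0Nm", "eccentricity_80Nm", "missing_tooth_0Nm"]
encoding = {name: idx for idx, name in enumerate(class_names)}

print(normed.min(), normed.max())      # 0.0 1.0
print(encoding["missing_tooth_0Nm"])   # 2
```

In practice, libraries such as scikit-learn provide `MinMaxScaler` and `LabelEncoder` for the same operations.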
Table 2 displays the final data frame following data cleaning and preprocessing.
Figure 5 displays the data distributions for all 12 classes from sensor 1, with individual plots for each class labeled from a to l. The x-axis represents the number of recorded samples, while the y-axis represents the displacement of vibration in millimeters.
The plots in Figure 5a–l show that the eccentricity fault at 0 Nm exhibits mid frequencies and mid displacement, while the eccentricity fault at the 80 Nm load displays higher-displacement and high-frequency points. Similarly, the missing tooth at the 0 Nm load has mid-displacement and low-frequency data points, whereas the missing tooth at the 80 Nm load shows low-displacement and low-frequency data points. In contrast, both the 0 Nm and 80 Nm loads with no faults demonstrate mid-frequency and mid-displacement points, with slightly more high data points at 80 Nm. The root crack exhibits high-frequency and high-displacement points at both loads, with more high data points at 80 Nm. The surface fault at 0 Nm shows high-displacement data points concentrated in the middle of the samples, while the surface fault at 80 Nm exhibits more high-displacement data points in the middle and at the end of the samples. At 0 Nm, the chipped tooth displays mid-frequency and high-displacement points, while at 80 Nm it demonstrates high-frequency and high-displacement points. These distinctions enable the deep learning model to classify the different fault conditions.

3.1. Pre-Production Model Selection for Vibration Data

The vibration data gathered constitute time series data; thus, the preference for the artificial neural network (ANN) over other deep learning models is justified for several reasons. To start, a meticulously chosen ANN architecture has exhibited exceptional performance on time series classification problems by effectively capturing long-term dependencies in data [36]. Furthermore, ANN architectures utilizing the Adam and RMSprop optimizers can effectively tackle the vanishing gradient problem [37]. The ANN’s adaptability and versatility enable its use in a wide array of problems, including natural language processing, time series analysis, and speech recognition [38]. LSTMs, ARIMA, and CNNs can also be used for time series data; however, they have some inherent limitations. CNNs and ARIMA need manual feature engineering, as they are less efficient at learning the relevant features from raw data automatically, whereas ANN models reduce the need for such manual feature engineering [39]. ARIMA and exponential smoothing are sequential and cannot leverage parallel processing the way ANNs can; ANNs can be trained very quickly via parallel data processing, which is advantageous when dealing with very large data sets [40]. LSTMs and RNNs are specifically designed for time series data, but they are harder to customize to the needs of a particular time series problem, for example when different activation functions are used in a model or when multiple neural network architectures are combined for more complex data sets. CNNs also have limitations when dealing with long-term dependencies in time series data [41].
The ANN architecture for the variable load model is detailed in Table 3. An 80% portion of the data is allocated for training, while the remaining 20% is designated for testing the model. The input shape is (720,000, 4), and the batch size is set at 64. The first layer in the ANN is a flatten layer, which converts the data to a one-dimensional form. The second layer is a dense layer with 64 neurons; it incorporates an L1 regularizer with a regularization rate of 0.01. The third layer is a dense layer with 32 neurons, adopting relu as the activation function; an L2 regularizer with a regularization parameter of 0.001 is implemented in this layer. Finally, the last layer is dense and employs softmax as the activation function. Table 3 provides the parameter count for each layer. The Adam optimizer is employed for optimization, with a learning rate of 0.001. The loss function is categorical cross-entropy. The model is trained for 50 epochs and subsequently assessed on accuracy, loss, F1 score, precision, and recall. The results are presented in the following section.
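Assuming the standard dense-layer parameter count (weights plus biases, i.e., n_in * n_out + n_out) and the layer widths stated above with a 4-feature input and 12 output classes, the per-layer parameter counts can be reproduced as a quick check against Table 3:

```python
def dense_params(n_in, n_out):
    """Trainable parameters of a fully connected layer: weights + biases."""
    return n_in * n_out + n_out

n_features, n_classes = 4, 12      # four input columns, twelve fault classes

p1 = dense_params(n_features, 64)  # flatten output -> dense(64)
p2 = dense_params(64, 32)          # dense(64) -> dense(32)
p3 = dense_params(32, n_classes)   # dense(32) -> softmax output

print(p1, p2, p3, p1 + p2 + p3)    # 320 2080 396 2796
```

Note that the L1/L2 regularizers affect the loss, not the parameter counts.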

3.2. Post-Production Model for Machine Vision

The second experiment inspects the quality of fabricated products; hence, it constitutes post-production quality control. The machine vision system described in Section 1 is deployed to monitor the quality of the product and to classify it as good or defective. We use the metal hex nut data set [33] for our experimentation. A convolutional neural network (CNN) is trained for the quality control of hex nut products using machine vision. The CNN model is selected because it outperforms other models on image classification problems. The CNN is preferred over other machine learning models for a variety of reasons, including its inherent ability to capture spatial and hierarchical features in image patterns [42]. The convolutional layer enables a CNN to recognize patterns irrespective of their location in the images; this is called translational invariance [43]. The pooling layer down-samples the spatial dimensions of the feature maps, thus reducing the computational complexity [44]. The data set consists of 1800 images of good hex nuts and 1800 images of defective hex nuts. The 1800 images of each class are divided into 1400 for training and 400 for testing. Images of good and defective parts are provided in Figure 6 below.
Table 4 summarizes the training and test samples of both classes.
As mentioned earlier, the CNN is the preferred deep learning technique for image classification; therefore, a CNN is used to predict the quality of the hex nuts. The architecture of the CNN is provided in Table 5. The input image size is (150, 150). The first convolutional layer has 64 filters with relu activation and is followed by max pooling; the second convolutional layer has 32 filters, also with relu activation. A sigmoid output layer performs the binary classification of good and defective products. Adam is used as the optimizer with a learning rate of 0.0001, and the loss function is binary cross-entropy.
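A minimal Keras sketch of this CNN follows. Two details are inferred rather than stated in the text: 3×3 kernels with stride 2 and 'same' padding (implied by the 150 → 75 and 37 → 19 output shapes in Table 5) and a single-channel input (implied by the 640 parameters of the first convolutional layer):

```python
# Sketch of the Table 5 CNN (assumptions: Keras API; 3x3 kernels with
# stride 2 and 'same' padding, inferred from the output shapes in
# Table 5; grayscale input, implied by the 640 first-layer parameters).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(150, 150, 1)),
    layers.Conv2D(64, (3, 3), strides=2, padding="same",
                  activation="relu"),         # -> (75, 75, 64)
    layers.MaxPooling2D((2, 2)),              # -> (37, 37, 64)
    layers.Conv2D(32, (3, 3), strides=2, padding="same",
                  activation="relu"),         # -> (19, 19, 32)
    layers.MaxPooling2D((2, 2)),              # -> (9, 9, 32)
    layers.Flatten(),                         # -> 2592 features
    layers.Dense(1, activation="sigmoid"),    # good vs. defective
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss="binary_crossentropy", metrics=["accuracy"])
# Parameter counts match Table 5: 640 + 18,464 + 2593 = 21,697.
```

Under these assumptions, the sketch reproduces the layer output shapes and the 21,697-parameter total of Table 5.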

4. Results and Discussion

The models’ results include accuracy, loss, F1 score, precision, and recall. In Figure 7, the accuracy plot of the training and validation data is displayed for the variable load model. Training accuracy indicates how well the model performs on the training data, while validation accuracy reflects the model’s generalization performance on unseen data. The model reaches convergence at the 36th epoch with training and validation accuracies of 98.8% and 98.7%, respectively. Convergence in machine learning refers to the point in training beyond which further training does not significantly improve accuracy or reduce loss. Very fast convergence may indicate an overfitting problem, whereas smooth convergence indicates model stability and generalization to unseen data. The accuracy plot shows smooth, stable convergence, and the model is computationally efficient.
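The convergence epoch described above can be read off a training history programmatically. The sketch below is illustrative only; the tolerance and window values are assumptions, not parameters from the paper:

```python
def convergence_epoch(accuracies, tol=1e-3, window=3):
    """Return the epoch index at which accuracy stops improving by more
    than `tol` for `window` consecutive epochs, or None if it never does.
    `tol` and `window` are illustrative choices, not from the paper."""
    run = 0
    for i in range(1, len(accuracies)):
        if accuracies[i] - accuracies[i - 1] <= tol:
            run += 1
            if run >= window:
                return i - window + 1  # first epoch of the flat stretch
        else:
            run = 0
    return None

# Illustrative history: accuracy climbs, then plateaus near 0.986.
history = [0.60, 0.75, 0.85, 0.92, 0.96, 0.985, 0.9855, 0.9858, 0.9860]
print(convergence_epoch(history))  # -> 6
```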
In Figure 8, the confusion matrix for the variable load model is depicted. As mentioned earlier, 20% of the data is used for testing, about 15,000 samples per class. The confusion matrix shows how well each fault is classified; the fault type for each class is given in the figure caption. The model demonstrates strong performance across all fault conditions except class_2, class_8, and class_12, corresponding to the eccentricity fault at 80 Nm, the root crack fault at 80 Nm, and the chipped tooth fault at 80 Nm. In class_2, 13,982 out of 15,066 samples are correctly identified, and 14,736 samples are correctly identified in class_8; the model classifies these faults with accuracies of 92.8% and 97.6%, respectively. For class_12, the model identifies 9992 out of 15,033 samples correctly, an accuracy of 67%. It is evident from Figure 5 that these three fault patterns are inherently more complex and harder to distinguish, and their features overlap those of other fault patterns, which is why the model performs worse and achieves lower accuracies on these three classes. The overall accuracy of the model on the test data is 96.44%, with a test loss of 0.0825. Additionally, the F1 score, precision, and recall stand at 96%, 96%, and 97%, respectively.
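The per-class accuracies quoted above are the diagonal of the confusion matrix divided by the row sums (i.e., per-class recall). A minimal numpy sketch, using the class_2 count from the text in a toy two-class matrix (the second row is illustrative):

```python
import numpy as np

def per_class_accuracy(cm):
    """Per-class accuracy (recall): diagonal counts divided by the true
    number of samples in each class (row sums)."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Toy 2-class matrix. Row 0 uses the class_2 figures from the text:
# 13,982 of 15,066 samples correct, misclassifications lumped into one
# column. Row 1 is an illustrative second class.
cm = np.array([[13982, 1084],
               [  200, 14800]])
acc = per_class_accuracy(cm)
print(np.round(acc, 3))  # -> [0.928 0.987]
```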
The results of the machine vision-based post-production quality control system are provided in terms of accuracy, precision, recall, and loss. The accuracy curves and the confusion matrix are plotted in Figure 9 and Figure 10, respectively. The model is trained for 30 epochs with a batch size of 32. It can be seen in Figure 9 that the model converges at epoch 26, giving training and validation accuracies of 98.72% and 98.69%, respectively. The training and validation losses are 0.0440 and 0.0447, respectively. The test accuracy is 96.56%.
Figure 10 plots the confusion matrix for the model. The test samples are 400 images per class. Class_0 in the confusion matrix refers to bad products, and class_1 refers to good product classes. The model successfully identifies 384 out of 400 samples of good products, while it identifies 392 out of 400 bad samples. The test accuracies for both classes are 96% and 98%, respectively. The precision, recall, and F1 score are 0.9681, 0.9675, and 0.9675, respectively.
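The precision, recall, and F1 score can be recomputed from the per-class counts above with scikit-learn. The sketch below reconstructs the test labels from those counts; the small differences from the reported 0.9681/0.9675 presumably come from rounding in the quoted counts:

```python
# Reconstruct the binary test labels from the counts in the text:
# 392 of 400 bad samples (class 0) and 384 of 400 good samples
# (class 1) are classified correctly. Labels are synthetic; only the
# counts come from the paper.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.array([0] * 400 + [1] * 400)
y_pred = np.array([0] * 392 + [1] * 8 + [0] * 16 + [1] * 384)

prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
print(round(prec, 4), round(rec, 4), round(f1, 4))
# Macro recall is exactly (0.98 + 0.96) / 2 = 0.97; precision ~ 0.9702.
```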
The results of both models are provided in Table 6.
Little literature has been reported on studies in which pre-production and post-production quality systems are integrated into a single intelligent framework. The following tables compare the accuracies of different algorithms and models with our research. Table 7 compares the accuracies of gear system fault prediction using vibration data. The KNN model enhanced with the DISA algorithm predicted gear faults with 90% accuracy. Similarly, 1DCNN with SVM and EMD with ASO-SVM correctly classified gear faults with accuracies of 90.4% and 94.0%, respectively, while the DNN and ANN achieved 94.9% and 94.17%. It is evident from Table 7 that our proposed ANN architecture obtained 96.44% accuracy under variable load conditions of the gear system.
A comparison of the results of machine vision-based industrial applications is provided in Table 8. It can be seen that our model successfully achieves better results in terms of accuracy than the other listed models.

5. Conclusions

This paper has provided an intelligent framework for quality control from the perspective of Industry 4.0, taking into account both pre-production and post-production quality control. Two models are developed. The first is an ANN that uses vibration data to monitor the health condition of the induction motor used in production, as the health of the production machine has a direct impact on the quality of the finished product. The second is a CNN used with the machine vision system, which collects images of manufactured products and classifies them as defective or good. The pre-production model achieves 96.44% accuracy in identifying machine faults under variable load conditions, and the machine vision-based post-production model achieves 96.56% accuracy. The accuracies of both models can be further improved by collecting more and cleaner vibration data and higher-quality images; a larger data set can improve the performance of both models by giving the machine learning models better insight into the data. The framework can be extended to other sensor data, including temperature, acoustic sensors, and lasers. Temperature readings are time series data that would require only minor adjustments to the model parameters. Acoustic data can be analyzed within the developed framework with the addition of speech-processing techniques. Collecting vibration data with lasers could increase the efficiency of the model at the added cost of expensive laser installation. In future studies, an LSTM model can be deployed for the pre-production part and a Vision Transformer explored for the post-production part of the framework. The framework can also be deployed in a variety of industries where vibration data and machine vision-based quality control are desired.

Author Contributions

The following statements specify the contribution of every author to the research and preparation of the manuscript: Y.A.: Conceptualization, Methodology, Visualization, Formal Analysis, Investigation, Software, Writing—Review and Editing. S.W.S.: Supervision, Project Administration, Conceptualization, Methodology, Visualization, Formal Analysis, Investigation, Validation. A.A.: Formal Analysis, Investigation, Validation, Visualization. M.T.: Writing—Reviewing and Editing, Supervision, Investigation, Validation. M.R.S.: Reviewing and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to King Saud University for providing financial support for this work through the Researchers Supporting Project number (RSPD2024R685), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The vibration data were recorded at the University of Southern Denmark. The data set can be accessed from [33]. The hex nut data set is available upon request.

Acknowledgments

The authors acknowledge ‘Researchers Supporting Project number (RSPD2024R685), King Saud University, Riyadh, Saudi Arabia’. The support provided by the Machines Lab staff at the Department of Electrical Engineering, University of Engineering and Technology, Peshawar, is greatly acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pilevari, N. Industry revolutions development from Industry 1.0 to Industry 5.0 in manufacturing. J. Ind. Strateg. Manag. 2020, 5, 44. [Google Scholar]
  2. Sharma, A.; Singh, B.J. Evolution of industrial revolutions: A review. Int. J. Innov. Technol. Explor. Eng. 2020, 9, 66–73. [Google Scholar] [CrossRef]
  3. Groumpos, P.P. A critical historical and scientific overview of all industrial revolutions. IFAC-PapersOnLine 2021, 54, 464–471. [Google Scholar] [CrossRef]
  4. Xu, M.; David, J.M.; Kim, S.H. The fourth industrial revolution: Opportunities and challenges. Int. J. Financ. Res. 2018, 9, 90–95. [Google Scholar] [CrossRef]
  5. Chen, Q.; Yao, Y.; Gui, G.; Yang, S. Gear Fault Diagnosis under Variable Load Conditions Based on Acoustic Signals. IEEE Sens. J. 2022, 22, 22344–22355. [Google Scholar] [CrossRef]
  6. Patel, A.; Shakya, P. Spur gear crack modelling and analysis under variable speed conditions using variational mode decomposition. Mech. Mach. Theory 2021, 164, 104357. [Google Scholar] [CrossRef]
  7. Golnabi, H.; Asadpour, A. Design and application of industrial machine vision systems. Robot. Comput.-Integr. Manuf. 2007, 23, 630–637. [Google Scholar] [CrossRef]
  8. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the art in defect detection based on machine vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  9. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  10. Godina, R.; Matias, J.C. Quality control in the context of industry 4.0. In Proceedings of the Industrial Engineering and Operations Management II: XXIV IJCIEOM, Lisbon, Portugal, 18–20 July 2019; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 177–187. [Google Scholar]
  11. Bode, G.; Thul, S.; Baranski, M.; Müller, D. Real-world application of machine-learning-based fault detection trained with experimental data. Energy 2020, 198, 117323. [Google Scholar] [CrossRef]
  12. Zhang, S.; Zhou, J.; Wang, E.; Zhang, H.; Gu, M.; Pirttikangas, S. State of the art on vibration signal processing towards data-driven gear fault diagnosis. IET Collab. Intell. Manuf. 2022, 4, 249–266. [Google Scholar] [CrossRef]
  13. Samanta, B. Artificial neural networks and genetic algorithms for gear fault detection. Mech. Syst. Signal Process. 2004, 18, 1273–1282. [Google Scholar] [CrossRef]
  14. Mones, Z.; Zeng, Q.; Hu, L.; Tang, X.; Gu, F.; Ball, A.D. Planetary gearbox fault diagnosis using an on-rotor MEMS accelerometer. In Proceedings of the 2017 23rd International Conference on Automation and Computing (ICAC), Huddersfield, UK, 7–8 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  15. Emmanuel, S.; Yihun, Y.; Nili Ahmedabadi, Z.; Boldsaikhan, E. Planetary gear train microcrack detection using vibration data and convolutional neural networks. Neural Comput. Appl. 2021, 33, 17223–17243. [Google Scholar] [CrossRef]
  16. Qu, Y.; Bechhoefer, E.; He, D.; Zhu, J. A new acoustic emission sensor based gear fault detection approach. Int. J. Progn. Health Manag. 2013, 4, 32–45. [Google Scholar] [CrossRef]
  17. Yu, L.; Yao, X.; Yang, J.; Li, C. Gear fault diagnosis through vibration and acoustic signal combination based on convolutional neural network. Information 2020, 11, 266. [Google Scholar] [CrossRef]
  18. Shahin, M.A.; Symons, S.J. A machine vision system for grading lentils. Can. Biosyst. Eng. 2001, 43, 7. [Google Scholar]
  19. Ali, Y.; Shah, S.W.; Khan, W.A.; Waqas, M. Cyber Secured Internet of Things-Enabled Additive Manufacturing: Industry 4.0 Perspective. J. Adv. Manuf. Syst. 2022, 22, 239–255. [Google Scholar] [CrossRef]
  20. Baygin, M.; Karakose, M.; Sarimaden, A.; Akin, E. Machine vision based defect detection approach using image processing. In Proceedings of the 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 16–17 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar]
  21. Gehrmann, C.; Gunnarsson, M. A Digital Twin Based Industrial Automation and Control System Security Architecture. IEEE Trans. Ind. Inform. 2020, 16, 669–680. [Google Scholar] [CrossRef]
  22. Pérez, L.; Rodríguez, Í.; Rodríguez, N.; Usamentiaga, R.; García, D.F. Robot guidance using machine vision techniques in industrial environments: A comparative review. Sensors 2016, 16, 335. [Google Scholar] [CrossRef]
  23. Cui, P.H.; Wang, J.Q.; Li, Y. Data-driven modelling, analysis and improvement of multistage production systems with predictive maintenance and product quality. Int. J. Prod. Res. 2022, 60, 6848–6865. [Google Scholar] [CrossRef]
  24. Yin, S.; Rodriguez-Andina, J.J.; Jiang, Y. Real-time monitoring and control of industrial cyberphysical systems: With integrated plant-wide monitoring and control framework. IEEE Ind. Electron. Mag. 2019, 13, 38–47. [Google Scholar] [CrossRef]
  25. Wang, D. Intelligent critic control with robustness guarantee of disturbed nonlinear plants. IEEE Trans. Cybern. 2019, 50, 2740–2748. [Google Scholar] [CrossRef]
  26. Benbarrad, T.; Salhaoui, M.; Kenitar, S.B.; Arioua, M. Intelligent machine vision model for defective product inspection based on machine learning. J. Sens. Actuator Netw. 2021, 10, 7. [Google Scholar] [CrossRef]
  27. Ong, P.; Lee, W.K.; Lau, R.J.H. Tool condition monitoring in CNC end milling using wavelet neural network based on machine vision. Int. J. Adv. Manuf. Technol. 2019, 104, 1369–1379. [Google Scholar] [CrossRef]
  28. Feng, C.; Palleti, V.R.; Mathur, A.; Chana, D. A Systematic Framework to Generate Invariants for Anomaly Detection in Industrial Control Systems. In Proceedings of the 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, CA, USA, 24–27 February 2019; pp. 1–15. [Google Scholar]
  29. Villalba-Diez, J.; Schmidt, D.; Gevers, R.; Ordieres-Meré, J.; Buchwitz, M.; Wellbrock, W. Deep Learning for Industrial Computer Vision Quality Control in the Printing Industry 4.0. Sensors 2019, 19, 3987. [Google Scholar] [CrossRef] [PubMed]
  30. Gundewar, S.K.; Kane, P.V. Condition monitoring and fault diagnosis of induction motor. J. Vib. Eng. Technol. 2021, 9, 643–674. [Google Scholar] [CrossRef]
  31. Farhat, M.H.; Hentati, T.; Chiementin, X.; Bolaers, F.; Chaari, F.; Haddar, M. Numerical model of a single stage gearbox under variable regime. Mech. Based Des. Struct. Mach. 2023, 51, 1054–1081. [Google Scholar] [CrossRef]
  32. Cui, L.; Kang, C.; Wang, H.; Chen, P. Application of composite dictionary multi-atom matching in gear fault diagnosis. Sensors 2011, 11, 5981–6002. [Google Scholar] [CrossRef]
  33. Mechanical Gear Vibration Dataset. Kaggle. 9 May 2023. Available online: https://www.kaggle.com/datasets/hieudaotrung/gear-vibration (accessed on 3 February 2023).
  34. Singh, D.; Singh, B. Investigating the impact of data normalization on classification performance. Appl. Soft Comput. 2020, 97, 105524. [Google Scholar] [CrossRef]
  35. Huang, L.; Qin, J.; Zhou, Y.; Zhu, F.; Liu, L.; Shao, L. Normalization techniques in training dnns: Methodology, analysis and application. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10173–10196. [Google Scholar] [CrossRef]
  36. Tealab, A.; Hefny, H.; Badr, A. Forecasting of nonlinear time series using ANN. Future Comput. Inform. J. 2017, 2, 39–47. [Google Scholar] [CrossRef]
  37. Bas, E.; Egrioglu, E.; Kolemen, E. Training simple recurrent deep artificial neural network for forecasting using particle swarm optimization. Granul. Comput. 2022, 7, 411–420. [Google Scholar] [CrossRef]
  38. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Umar, A.M.; Linus, O.U.; Arshad, H.; Kazaure, A.A.; Gana, U.; Kiru, M.U. Comprehensive review of artificial neural network applications to pattern recognition. IEEE Access 2019, 7, 158820–158846. [Google Scholar] [CrossRef]
  39. Schmidl, S.; Wenig, P.; Papenbrock, T. Anomaly detection in time series: A comprehensive evaluation. Proc. VLDB Endow. 2022, 15, 1779–1797. [Google Scholar] [CrossRef]
  40. Ma, Q. Comparison of ARIMA, ANN and LSTM for stock price prediction. In E3S Web of Conferences; EDP Sciences: Les Ulis, France, 2020; Volume 218, p. 01026. [Google Scholar]
  41. Torres, J.F.; Hadjout, D.; Sebaa, A.; Martínez-Álvarez, F.; Troncoso, A. Deep learning for time series forecasting: A survey. Big Data 2021, 9, 3–21. [Google Scholar] [CrossRef]
  42. Chen, Q.; Wu, R. CNN is all you need. arXiv 2017, arXiv:1712.09662. [Google Scholar]
  43. O’Shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458. [Google Scholar]
  44. Cong, S.; Zhou, Y. A review of convolutional neural network architectures and their optimizations. Artif. Intell. Rev. 2023, 56, 1905–1969. [Google Scholar] [CrossRef]
  45. Zhang, J.; Zhang, Q.; Qin, X.; Sun, Y. An intelligent fault diagnosis method based on domain adaptation for rolling bearings under variable load conditions. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2021, 235, 8025–8038. [Google Scholar] [CrossRef]
  46. Zhang, X.; Han, P.; Xu, L.; Zhang, F.; Wang, Y.; Gao, L. Research on bearing fault diagnosis of wind turbine gearbox based on 1DCNN-PSO-SVM. IEEE Access 2020, 8, 192248–192258. [Google Scholar] [CrossRef]
  47. Wang, Y.; Sun, W.; Liu, L.; Wang, B.; Bao, S.; Jiang, R. Fault diagnosis of wind turbine planetary gear based on a digital twin. Appl. Sci. 2023, 13, 4776. [Google Scholar] [CrossRef]
  48. Zhang, R.; Peng, Z.; Wu, L.; Yao, B.; Guan, Y. Fault diagnosis from raw sensor data using deep neural networks considering temporal coherence. Sensors 2017, 17, 549. [Google Scholar] [CrossRef]
  49. Umbrajkaar, A.M.; Krishnamoorthy, A.; Dhumale, R.B. Vibration analysis of shaft misalignment using machine learning approach under variable load conditions. Shock Vib. 2020, 2020, 1650270. [Google Scholar] [CrossRef]
  50. Yan, J.; Wang, Z. YOLO V3 + VGG16-based automatic operations monitoring and analysis in a manufacturing workshop under Industry 4.0. J. Manuf. Syst. 2022, 63, 134–142. [Google Scholar] [CrossRef]
  51. Irfan, D.; Tang, X.; Narayan, V.; Mall, P.K.; Srivastava, S.; Saravanan, V. Prediction of quality food sale in mart using the AI-based TOR method. J. Food Qual. 2022, 2022, 6877520. [Google Scholar] [CrossRef]
  52. Farhangi, O.; Sheidaee, E.; Kisalaei, A. Machine Vision for Detecting Defects in Liquid Bottles: An Industrial Application for Food and Packaging Sector. Cloud Comput. Data Sci. 2024, 5, 183–254. [Google Scholar] [CrossRef]
  53. Brambilla, P.; Conese, C.; Fabris, D.M.; Chiariotti, P.; Tarabini, M. Algorithms for Vision-Based Quality Control of Circularly Symmetric Components. Sensors 2023, 23, 2539. [Google Scholar] [CrossRef]
Figure 1. Accelerometer data for eccentricity fault at 80 Nm load.
Figure 2. Machine vision system [19].
Figure 3. Proposed intelligent framework.
Figure 4. Methodology: The ANN model is used for pre-production quality control, and the CNN is used for the post-production quality control system.
Figure 5. Plot of sensor 1 data for different fault conditions.
Figure 6. A defective and a good hex nut.
Figure 7. Training and validation accuracy of variable load model.
Figure 8. Confusion matrix of the variable load model. Class1: Ecc_0Nm, Class2: Ecc_80Nm, Class3: Miss_tooth_0Nm, Class4: Miss_tooth_80Nm, Class5: No_fault_0Nm, Class6: No_fault_80Nm, Class7: Root_Crack_0Nm, Class8: Root_Crack_80Nm, Class9: Surface_faults_0Nm, Class10: Surface_faults_80Nm, Class11: Chip_Tooth_0Nm, Class12: Chip_Tooth_80Nm.
Figure 9. Accuracy of machine vision-based quality control model.
Figure 10. The confusion matrix of machine vision-based quality control model.
Table 1. Data statistics.

| Class | Fault Type | Data Points | Sensor 1 SD | Sensor 1 Max (mm) | Sensor 1 Min (mm) | Sensor 2 SD | Sensor 2 Max (mm) | Sensor 2 Min (mm) |
|---|---|---|---|---|---|---|---|---|
| 1 | Eccentricity @ 0 Nm | 75,000 | 0.004067 | 2.551224 | 2.490613 | 0.004356 | 2.456119 | 2.404050 |
| 2 | Eccentricity @ 80 Nm | 75,000 | 0.007847 | 2.564200 | 2.467453 | 0.008053 | 2.486835 | 2.380726 |
| 3 | Missing tooth @ 0 Nm | 75,000 | 0.004643 | 2.573398 | 2.471067 | 0.005034 | 2.496691 | 2.381219 |
| 4 | Missing tooth @ 80 Nm | 75,000 | 0.010712 | 2.769193 | 2.250308 | 0.011348 | 2.674252 | 2.240452 |
| 5 | No fault @ 0 Nm | 75,000 | 0.004700 | 2.550567 | 2.488971 | 0.004902 | 2.462854 | 2.404050 |
| 6 | No fault @ 80 Nm | 75,000 | 0.010239 | 2.592781 | 2.451356 | 0.008373 | 2.493241 | 2.378755 |
| 7 | Root crack @ 0 Nm | 75,000 | 0.004435 | 2.550402 | 2.487164 | 0.004382 | 2.458255 | 2.402408 |
| 8 | Root crack @ 80 Nm | 74,999 | 0.009132 | 2.574055 | 2.465482 | 0.008644 | 2.485028 | 2.370542 |
| 9 | Surface faults @ 0 Nm | 74,999 | 0.015707 | 2.820934 | 2.219428 | 0.015188 | 2.666696 | 2.205302 |
| 10 | Surface faults @ 80 Nm | 75,000 | 0.025973 | 2.812228 | 2.243573 | 0.030520 | 2.709896 | 2.161939 |
| 11 | Chipped tooth @ 0 Nm | 75,000 | 0.004680 | 2.551224 | 2.489299 | 0.004830 | 2.460554 | 2.391074 |
| 12 | Chipped tooth @ 80 Nm | 75,000 | 0.010501 | 2.593273 | 2.459405 | 0.009774 | 2.517223 | 2.340319 |
Table 2. Data after preprocessing.

| Index | Sensor 1 (mm) | Sensor 2 (mm) | Time (s) | Load Value (Nm) | Gear Fault (Label Encoded) |
|---|---|---|---|---|---|
| 0 | 0.4932 | 0.4922 | 0.9089 | 0 | 4.0 |
| 1 | 0.4932 | 0.4919 | 0.1239 | 80 | 9.0 |
| 2 | 0.5000 | 0.4916 | 0.7922 | 0 | 0.0 |
| 3 | 0.5027 | 0.4982 | 0.9311 | 80 | 3.0 |
| 4 | 0.5025 | 0.4907 | 0.8197 | 80 | 1.0 |
Table 3. ANN model structure for the variable load (pre-production).

Model: Sequential_1

| Layer (Type) | Output Shape | Number of Parameters |
|---|---|---|
| flatten_1 (Flatten) | (None, 4) | 0 |
| dense_3 (Dense) | (None, 64) | 320 |
| dense_4 (Dense) | (None, 32) | 2080 |
| dense_5 (Dense) | (None, 12) | 396 |

Total parameters: 2796 (10.92 KB). Trainable parameters: 2796 (10.92 KB). Non-trainable parameters: 0 (0.00 Bytes).
Table 4. Training and test samples of the good and defective products.

| S. No. | Sample Type | Number of Samples |
|---|---|---|
| 1 | Good hex nut training images | 1400 |
| 2 | Good hex nut test images | 400 |
| 3 | Defective hex nut training images | 1400 |
| 4 | Defective hex nut test images | 400 |
Table 5. CNN structure for post-production quality control.

| Layer (Type) | Output Shape | Number of Parameters |
|---|---|---|
| conv2d (Conv2D) | (None, 75, 75, 64) | 640 |
| max_pooling2d (MaxPooling2D) | (None, 37, 37, 64) | 0 |
| conv2d_1 (Conv2D) | (None, 19, 19, 32) | 18,464 |
| max_pooling2d_1 (MaxPooling2D) | (None, 9, 9, 32) | 0 |
| flatten (Flatten) | (None, 2592) | 0 |
| dense (Dense) | (None, 1) | 2593 |

Total parameters: 21,697 (84.75 KB). Trainable parameters: 21,697 (84.75 KB). Non-trainable parameters: 0 (0.00 Bytes).
Table 6. Results of the models.

| Model | Data | Accuracy | F1 Score | Precision | Recall | Loss | Convergence @ Epoch |
|---|---|---|---|---|---|---|---|
| ANN_Variable_Load | 75,000 points per class (60,000 for training, 15,000 for testing) | 0.9644 | 0.96 | 0.97 | 0.96 | 0.0825 | 36th |
| CNN_Machine Vision model | 1800 images per class (1400 for training, 400 for testing) | 0.9656 | 0.97 | 0.9681 | 0.9675 | 0.0440 | 26th |
Table 7. Comparison of gear health prediction systems.

| Reference | Year | Technique/Model | Test Accuracy |
|---|---|---|---|
| [45] | 2021 | DISA-KNN | 90% |
| [46] | 2020 | 1DCNN-SVM | 90.4% |
| [47] | 2023 | EMD-ASO-SVM | 94% |
| [48] | 2017 | DNN | 94.9% |
| [49] | 2020 | ANN | 94.17% |
| This research | 2024 | ANN using vibration data at variable loads | 96.44% |
Table 8. Comparison of machine vision systems.

| Reference | Year | Technique/Model | Test Accuracy |
|---|---|---|---|
| [26] | 2021 | CNN | 90.4% |
| [50] | 2022 | VGG16 | 92.7% |
| [51] | 2022 | SVM | 74% |
| [52] | 2024 | Image processing | 92% |
| [53] | 2023 | ResNet50 | 89% |
| This research | 2024 | CNN using images of hex nuts | 96.56% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ali, Y.; Shah, S.W.; Arif, A.; Tlija, M.; Siddiqi, M.R. Intelligent Framework Design for Quality Control in Industry 4.0. Appl. Sci. 2024, 14, 7726. https://doi.org/10.3390/app14177726


