Article

Production Quality Evaluation of Electronic Control Modules Based on Deep Belief Network

1 School of Science, Shenyang Ligong University, Shenyang 110159, China
2 Liaoning Key Laboratory of Intelligent Optimization and Control for Ordnance Industry, Shenyang 110159, China
3 Faculty of Science and Technology, Beijing Normal University—Hong Kong Baptist University United International College, Zhuhai 519087, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(23), 3799; https://doi.org/10.3390/math12233799
Submission received: 13 November 2024 / Revised: 28 November 2024 / Accepted: 28 November 2024 / Published: 30 November 2024
(This article belongs to the Special Issue Intelligence Optimization Algorithms and Applications)

Abstract:
The electronic control module is an important part of a digital electronic detonator, which undergoes a complex production process that includes three electrical performance tests and three visual inspection procedures. In each inspection procedure, several different types of data are generated daily, including numerical and categorical data. To evaluate the production quality of electronic control modules, an algorithm based on a Deep Belief Network with Multi-mutation Differential Evolution (MDE-DBN) is designed in this study. First, key indicators are extracted to construct a production quality evaluation index system. A Multi-mutation Differential Evolution algorithm is designed to optimize the initial network weights of the Deep Belief Network (DBN) and integrate the production quality information into the pre-training phase. Subsequently, the preprocessed experimental data are input into the MDE-DBN algorithm to obtain the distributions of excellent, general, and unqualified production statuses, verifying the effectiveness of the algorithm. The experimental results show that the MDE-DBN algorithm has significant advantages in evaluation accuracy when compared with DBNs improved by other intelligent optimization algorithms and machine learning methods.

1. Introduction

Product quality is essential for the survival and development of enterprises, being an ongoing pursuit for manufacturing enterprises and an important criterion for evaluating the operating efficiency of production lines. With the advancement of modern industrial technology, the manufacturing processes and functionalities of products are becoming more complex. To accurately evaluate the production status of products and enhance manufacturing capabilities, it is necessary to evaluate production quality. In the field of industrial blasting, there is a continuous demand for improvements to the safety, reliability, and efficiency of products. Consequently, digital electronic detonators have been widely adopted. Digital electronic detonators are industrial detonators that combine traditional detonators with electronic control technology. The electronic control module is a circuit module and the core component of the digital electronic detonator. This module contains the detonator’s identity information and controls the initiation delay. An internal structure diagram of a digital electronic detonator is shown in Figure 1. During the manufacturing process, the electronic control module sequentially undergoes a series of processing and inspection procedures. The quality characteristics of each process can influence the final quality of the product. Therefore, it is crucial to accurately evaluate the production quality of electronic control modules to avoid impairing their blasting function and to ensure the safe performance of digital electronic detonators.
The production process for electronic control modules involves several steps, starting with Printed Circuit Boards (PCBs). Initially, PCBs must be mounted and soldered on an automated production line using Surface Mount Technology (SMT). After this, the boards are inspected by an Auto Optical Inspection (AOI) system, which marks the transition to the semi-finished product stage. The subsequent steps in the production and testing process include electrical performance testing for the semi-finished product, injection molding and visual inspection of the injection molding, electrical performance testing of the finished products, spot welding and visual inspection of the spot welding, electrical performance testing with all-in-one machines, and visual inspection of the resistance and bridge wire. The overall production process and detection indicators for electronic control modules are shown in Figure 2.
This study mainly investigates product quality evaluation in the testing production process for semi-finished products, as shown in Figure 2. Semi-finished products are first tested for electrical performance, with indicators including operating current and communication current. After injection molding, a visual inspection is carried out on the semi-finished products, with indicators including nozzle height and less mucilage. Once the injection molding is visually inspected, the finished products are obtained. Then, testing is performed on the finished products to verify the accuracy of their electrical performance after injection molding, with the indicators being the same as those for electrical performance testing in semi-finished products. After spot welding, a visual inspection is conducted on the finished products, with indicators including less welding and tin-balls. After the spot welding is visually inspected, electrical performance testing is carried out by all-in-one machines. Unlike electrical performance testing for semi-finished and finished products, a resistance value is added to the testing indicators for all-in-one machines, while indicators such as baud rate accuracy are removed. Finally, the resistance and bridge wire are visually inspected, with indicators including bridge wire deformation and bridge wire disconnection. Different indicators represent different product quality characteristics. Making full use of the data from each testing indicator in the inspection process is beneficial for evaluating the production quality of electronic control modules.
Traditional or machine learning-based evaluation methods are commonly used in research on production quality evaluation. Traditional evaluation methods include the Analytic Hierarchy Process (AHP), Six Sigma Quality Indices (SSQIs), fuzzy methods, and other foundational methods. Machine learning-based evaluation methods include Support Vector Machines (SVMs), Random Forests (RFs), Artificial Neural Networks (ANNs), and Deep Belief Networks (DBNs).
Many studies have been conducted on traditional quality evaluation methods. Gao et al. [1] and Liu et al. [2] utilized the AHP to evaluate the quality of large piston compressors and cigarettes, respectively. Wu et al. [3] and Chen et al. [4] employed SSQIs and two unilateral SSQIs to evaluate the production quality of gears and micro-hard disks, respectively. Shu et al. [5], Yu et al. [6], and Chen et al. [7] researched production quality evaluation using fuzzy set theory and fuzzy membership functions, along with other fuzzy methods. Sygut [8] employed the Pareto–Lorenz diagram to assess the production quality of paving stones, while Shen et al. [9] applied the rough sets theory to evaluate valve quality for multi-cylinder diesel engines.
In production quality evaluation, there is typically a complex and nonlinear relationship between process data and product quality, making it challenging to model effectively with exact mathematical models. Compared with traditional quality evaluation methods, machine learning-based methods are more flexible and accurate and can better capture the relationship between product quality characteristics and final product quality. Many studies have been conducted on evaluating production quality through SVMs. He et al. [10] proposed product quality models based on the wavelet relevance vector machine. Su et al. [11] employed the RS-PSO-LSSVM synthesis algorithm to evaluate the quality of parts in the production process. Tree models are another widely used family of models in the field of machine learning. In research on evaluating production quality using tree models, Hur et al. [12] and Rokach et al. [13], respectively, utilized hybrid decision trees and multiple oblivious trees for quality evaluation in manufacturing processes. Lingitz et al. [14] and Antosz et al. [15], respectively, employed RF to enhance the reliability of quality evaluation and analyze the influence of process parameters on product quality. Ji et al. [16] utilized an improved RF method to analyze the inherent relationship between process parameters and production quality. Many studies have been conducted on evaluating production quality using neural networks. Stock et al. [17] employed the ANN to address the problem of product quality evaluation in lithium-ion battery production. Wang et al. [18] proposed a production quality evaluation framework based on multi-task joint deep learning to evaluate the impact of production tasks on product quality in a multi-stage manufacturing system. Among other machine learning methods, Liao et al. [19] employed the fuzzy K-Nearest Neighbor algorithm to solve the problem of welding flaw detection.
Machine learning methods have advantages in modeling complex production quality evaluation problems. However, with the development of processing technologies and increased detection items, the amount of data generated in the production process grows exponentially. Some machine learning methods may struggle to accurately extract quality features, resulting in reduced model adaptability. As a type of machine learning method, DBNs possess excellent feature representation and high-dimensional data-processing capabilities, making them widely applied in production quality assessment. Che et al. [20], Liu et al. [21], and Zhao et al. [22], respectively, utilized DBNs to evaluate the status of aircraft systems and classify faults; monitor and diagnose the overall manufacturing processes of products; and evaluate noise quality in electronic expansion valves. Gao et al. [23] and Zhou et al. [24], respectively, employed an adaptive DBN and an adaptive continuous DBN to evaluate the fault conditions of rolling bearings. Liu et al. [25] combined the DBN with an extended Kalman filter to achieve high-accuracy battery state-of-charge estimation. Jeong et al. [26] proposed a new hybrid DBN model to generate a more efficient threat library for radar signal classification. Ding et al. [27] utilized the DBN and Hausdorff distance to evaluate the health status of photovoltaic arrays. Ma et al. [28] proposed a method for assessing the health status of machines by combining the DBN with an ant colony optimization algorithm. Pan et al. [29] proposed an optimal DBN model combined with the Grey Wolf Optimizer to improve detection accuracy for inconsistencies in single batteries by optimizing connection weights. Although DBNs have certain advantages in processing high-dimensional data, their unsupervised pre-training process cannot guarantee a direct correlation between features and the production quality of electronic control modules.
This lack of correlation negatively impacts the production quality classification, reducing the accuracy of the model. For these reasons, this study designs a Deep Belief Network based on the Multi-mutation Differential Evolution (MDE-DBN) algorithm to evaluate the quality of electronic control modules. The main contributions are as follows.
  • Constructing a quality evaluation index system for the production of electronic control modules. Based on an analysis of six production and testing processes for electronic control modules, specific indicators are selected to construct an index system. According to the data characteristics of different indicators, standardization, binary assignment, or fuzzy processing is applied. This ensures that data with different dimensions can be used in the evaluation process while improving the stability and accuracy of subsequent training.
  • Designing a Multi-mutation Differential Evolution (MDE) algorithm. Through improving the selection method for mutation strategies in the Differential Evolution (DE) algorithm, we randomly select a mutation operation from seven mutation strategies. This approach enhances the exploration rate of the algorithm within the solution space.
  • Designing an MDE-DBN algorithm. By replacing the DBN’s unsupervised pre-training method with the MDE algorithm, the production quality information of electronic control modules can be introduced into the pre-training phase. The weights obtained from pre-training are thus directly related to the production quality of electronic control modules. This production quality information improves the learning efficiency of the network.
The remaining sections of this study are organized as follows: Section 2 constructs a quality evaluation index system for electronic control module production and introduces the data preprocessing methods. In Section 3, the MDE-DBN algorithm is designed to evaluate the production quality of the electronic control module. Section 4 reports the experimental results, and, finally, Section 5 presents the conclusion.

2. Quality Evaluation Index System for the Production of Electronic Control Modules

The production process of electronic control modules is analyzed to extract their key indicators and establish a quality evaluation index system for production. To improve the accuracy of the evaluation results, robust scalers are utilized to preprocess the data associated with electrical performance indicators, while binary assignment and fuzzy methods handle the data related to visual inspection indicators.

2.1. Establishing an Evaluation Index System

The production process for electronic control modules includes six inspection procedures: electrical performance testing of semi-finished products, visual inspection of the injection molding, electrical performance testing of finished products, visual inspection of spot welding, electrical performance testing with all-in-one machines, and visual inspection of the resistance and bridge wire. These procedures are carried out sequentially to obtain a complete electronic control module. Electrical performance testing for semi-finished products focuses on indicators related to electrical performance, such as current and capacitance, identifying any potential related issues early in the production process. Visual inspection for injection molding focuses on the appearance and structure of the electronic control module after injection molding, including aspects such as nozzle height and component burr. This helps to prevent assembly difficulties or defects caused by appearance or structural issues. Electrical performance testing for finished products re-evaluates the relevant electrical performance indicators to ensure the performance and reliability of the electronic control module. After spot welding, the spot welding is visually inspected to check for issues such as less welding, ensuring that each weld point transitions from a short circuit to a closed circuit. Electrical performance testing using all-in-one machines primarily examines indicators such as resistance and current to verify the overall electrical performance of the module. Visually inspecting the resistance and bridge wire reveals deformations and scratches, ensuring the stability of the connection quality and the appearance of the bridge wire.
When constructing a quality evaluation index system for electronic control module production, it is essential to select indicators that are directly or indirectly related to production quality and significantly impact final production quality. This ensures the comprehensiveness and accuracy of the evaluation. Additionally, an index system should be constructed adhering to five principles: specific, measurable, assignable, realistic, and time-bound [30]. This ensures the scientific, reasonable, and standardized nature of the index. Inspection procedures should be the primary indicators of the evaluation index system, with the specific inspection items of each procedure serving as secondary indicators. There are 6 primary indicators and 43 secondary indicators, including 25 electrical performance indicators and 18 visual inspection indicators. The resulting quality evaluation index system is shown in Figure 3.

2.2. Data Preprocessing

When utilizing the MDE-DBN algorithm to evaluate the quality of electronic control modules, the performance and outcomes of the algorithm are significantly influenced by the data. In the electrical performance testing data for electronic control modules, there are notable variations in the dimensions of different indicators, which can hinder subsequent network training. To standardize the dimensions and effectively handle outliers, a robust scaler is employed to standardize these data. This ensures that the quantitative data falls within a comparable standard range. There are two types of visual inspection data for electronic control modules: qualitative data, described as “qualified” and “unqualified”, and quantitative data, described by percentages or counts. Binary assignment and fuzzy methods are utilized to preprocess these two data types, respectively. The preprocessed data are then applied to the electronic control module quality evaluation.

2.2.1. Preprocessing the Data of Electrical Performance Testing Indicators

The electrical performance testing indicator data are preprocessed by a robust scaler. In contrast to the traditional min–max scaling method, the robust scaler employs statistical methods based on the quartile range, which are more efficient in handling outliers in the data. By centering and standardizing the data, it can be scaled to a narrower range, minimizing the impact of outliers on the scaling results and enhancing the robustness of the scaled data. The calculation formula is as follows:
$$x_{\text{scaled}} = \frac{x - Q_1(x)}{Q_3(x) - Q_1(x)} \tag{1}$$
where $x$ denotes the original data, $x_{\text{scaled}}$ denotes the scaled data, and $Q_1(x)$ and $Q_3(x)$ are the first and third quartiles of the original data, respectively.
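As an illustration, the scaling in Equation (1) can be sketched in a few lines of Python. The sample readings are hypothetical; note that this follows the formula as printed, with $Q_1(x)$ in the numerator, rather than the median-centered variant used by some libraries:

```python
import numpy as np

def robust_scale(x):
    """Scale data as in Equation (1): (x - Q1) / (Q3 - Q1)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])   # first and third quartiles
    return (x - q1) / (q3 - q1)

# Hypothetical electrical-performance readings with one large outlier.
readings = [4.0, 5.0, 6.0, 7.0, 100.0]
scaled = robust_scale(readings)
# The outlier stretches a min-max scaling badly, but leaves the
# quartile-based scaling of the inlier points unaffected.
```

Because the quartiles ignore the extreme value, the four inlier readings land on a compact, evenly spaced range regardless of the outlier's magnitude.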

2.2.2. Preprocessing of Visual Inspection Indicators Data

The visual inspection indicator data are preprocessed using binary assignment and fuzzy methods. In visual inspection, certain indicators, such as nozzle height and bridge wire deformation, have strict standards and are typically classified as “qualified” or “unqualified”; these indicators are handled through binary assignment. For other indicators, such as less mucilage, component burr, multi-tin, and scratches, binary values cannot accurately reflect the quality condition, so fuzzy functions are applied to process this portion of the data. The quality conditions are partitioned based on membership functions, and values are assigned accordingly. The specific processing steps are outlined as follows:
1. Binary assignment
Perform binary assignment processing on the visual inspection indicators data, such as nozzle height, rectangular-shaped area, less welding, and bridge wire deformation. The assignment criteria are provided in Table 1.
2. Fuzzy methods
Employ fuzzy functions to process the visual inspection indicator data, such as less mucilage, component burr, bridge trestle, bridge wire, multi-tin, multi-flux, hickey, tin at both ends, scratch diameter, scratch area, and scratch times. The fuzzy function is defined as follows: for a fuzzy set $A$ on the domain $U$, a map from $U$ to $[0,1]$ is specified, $\mu_A : U \to [0,1]$, $u \mapsto \mu_A(u)$, where $\mu_A$ represents the membership function of $A$ and $\mu_A(u)$ represents the membership degree of $u$ in the fuzzy set $A$, denoted as $A = \{(u, \mu_A(u)) \mid u \in U\}$. When $\mu_A(u) \in \{0, 1\}$, $A$ degenerates into a crisp set.
Common membership functions include the triangular membership function, Gaussian membership function, and trapezoidal membership function. Of these, the trapezoidal membership function is particularly suitable for interval partitioning, as it can accurately represent various levels of qualification and offers more flexible parameter adjustment. Consequently, the trapezoidal membership function performs fuzzy processing on certain visual inspection indicator data. The curve of the trapezoidal membership function is illustrated in Figure 4.
Let $\mu_{\text{excellent}}(x)$ represent the membership function of the excellent fuzzy set, with a fuzzy set range of $(0, b)$; $\mu_{\text{general}}(x)$ represent the membership function of the general fuzzy set, with a fuzzy set range of $(a, d)$; and $\mu_{\text{unqualified}}(x)$ represent the membership function of the unqualified fuzzy set, with a fuzzy set range of $(c, +\infty)$. The expressions for the trapezoidal membership functions are shown in Equations (2)–(4).
$$\mu_{\text{excellent}}(x) = \begin{cases} 1, & x < a \\ \dfrac{b-x}{b-a}, & a \le x \le b \\ 0, & x > b \end{cases} \tag{2}$$
$$\mu_{\text{general}}(x) = \begin{cases} \dfrac{x-a}{b-a}, & a \le x < b \\ 1, & b \le x < c \\ \dfrac{d-x}{d-c}, & c \le x \le d \\ 0, & x < a \ \text{or}\ x > d \end{cases} \tag{3}$$
$$\mu_{\text{unqualified}}(x) = \begin{cases} 0, & x < c \\ \dfrac{x-c}{d-c}, & c \le x \le d \\ 1, & x > d \end{cases} \tag{4}$$
where $a$, $b$, $c$, and $d$ are the parameters of the membership functions, which need to be determined based on the domain of the variable values, the range of the fuzzy sets, and the actual situation. Assuming the domain of the less mucilage indicator is 0~0.05 mm², let $a = 0.025$ mm², $b = 0.033$ mm², $c = 0.041$ mm², and $d = 0.05$ mm². Consequently, the fuzzy membership for a less mucilage value of 0.03 mm² is given by $\mu_{\text{excellent}}(0.03) = 0.375$, $\mu_{\text{general}}(0.03) = 0.625$, and $\mu_{\text{unqualified}}(0.03) = 0$. Based on this result, a less mucilage value of 0.03 mm² falls into the general level. After fuzzy processing, the data are assigned values of 0, 0.5, or 1 according to whether they fall into the excellent, general, or unqualified levels. By analyzing the actual production data for the electronic control module, the values of $a$, $b$, $c$, and $d$ for different visual inspection indicators can be determined, as shown in Table 2.
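The worked example above can be checked with a short sketch of Equations (2)–(4); the parameter values are those given for the less mucilage indicator:

```python
def mu_excellent(x, a, b):
    # Equation (2): full membership below a, linear falloff on [a, b].
    if x < a:
        return 1.0
    if x <= b:
        return (b - x) / (b - a)
    return 0.0

def mu_general(x, a, b, c, d):
    # Equation (3): trapezoid rising on [a, b), flat on [b, c), falling on [c, d].
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x < c:
        return 1.0
    return (d - x) / (d - c)

def mu_unqualified(x, c, d):
    # Equation (4): zero below c, linear rise on [c, d], full membership above d.
    if x < c:
        return 0.0
    if x <= d:
        return (x - c) / (d - c)
    return 1.0

# Less mucilage parameters from the example (Table 2 values).
a, b, c, d = 0.025, 0.033, 0.041, 0.05
x = 0.03
memberships = (mu_excellent(x, a, b), mu_general(x, a, b, c, d),
               mu_unqualified(x, c, d))
# Reproduces the worked example: 0.375, 0.625, 0.0
```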
Figure 5 illustrates the comparative effect of the data before and after preprocessing using the Kernel Density Estimate (KDE) method. Before preprocessing, the data were widely and unevenly distributed; after preprocessing, their distribution became more concentrated and symmetrical, demonstrating the substantial effect of the preprocessing step.
The Spearman correlation coefficient was used to calculate the correlation coefficients of each indicator item in the quality evaluation index system for electronic control modules, as shown in Figure 6. It demonstrates that most of the features exhibit high correlation coefficients, indicating a strong correlation. For instance, features 11 and 14, features 11 and 15, and so on exhibit a correlation coefficient above 0.90. However, a few features exhibit a low correlation with other features. For instance, features 1 and 14, features 1 and 15, and so on exhibit a correlation coefficient lower than 0.10. Notably, even if certain indicators have a low Spearman correlation, they can still substantially influence the overall evaluation effect through their interactions with other indicators.

3. MDE-DBN Algorithm

The initial network weights obtained from the unsupervised pre-training of the original DBN exhibit a limited correlation with the actual production quality of the electronic control modules, which is not conducive to subsequent production quality evaluation. The MDE algorithm is designed so that the initial network weights it outputs are directly associated with the actual production quality. The MDE-DBN algorithm substitutes the DBN’s unsupervised pre-training algorithm with the MDE algorithm, incorporating production quality information for electronic control modules into the pre-training stage to enhance the effectiveness of the evaluation.

3.1. DBN

The DBN [31] is constructed by sequentially stacking multiple Restricted Boltzmann Machines (RBMs). The RBM is a probabilistic model based on an energy function [32]. Each RBM unit consists of a visible layer, v, and a hidden layer, h, as shown in Figure 7. Both the visible and hidden layer nodes are represented as binary data. However, in practical applications, this binary representation often results in the loss of input data information. Therefore, in this study, a non-binary RBM [33] is utilized, incorporating linear units with Gaussian noise to replace the binary units of the visible and hidden layers.
In the RBM structure, the visible layer neurons are denoted as $v = \{v_1, v_2, \ldots, v_n\}$, and the hidden layer neurons are denoted as $h = \{h_1, h_2, \ldots, h_m\}$. The energy function of the RBM can be expressed as follows:
$$E(v,h) = \sum_{i=1}^{n} \frac{(a_i - v_i)^2}{2\sigma_i^2} + \sum_{j=1}^{m} \frac{(b_j - h_j)^2}{2\sigma_j^2} - \sum_{i=1}^{n}\sum_{j=1}^{m} \frac{v_i}{\sigma_i}\frac{h_j}{\sigma_j} w_{ij} \tag{5}$$
In practice, $\sigma_i$ is usually set to 1 [34], enhancing the learning capability of the RBM. Consequently, the energy function can be represented as follows:
$$E(v,h) = \sum_{i=1}^{n} \frac{(a_i - v_i)^2}{2} + \sum_{j=1}^{m} \frac{(b_j - h_j)^2}{2} - \sum_{i=1}^{n}\sum_{j=1}^{m} v_i h_j w_{ij} \tag{6}$$
where $a_i$ is the bias of the visible layer neuron $v_i$; $b_j$ is the bias of the hidden layer neuron $h_j$; and $w_{ij}$ is the weight of the connection between neurons $v_i$ and $h_j$. The joint probability distribution function between the visible layer and the hidden layer can be expressed as follows:
$$p(v,h) = \frac{1}{Z} e^{-E(v,h)} \tag{7}$$
where $Z = \sum_{v,h} e^{-E(v,h)}$ is the partition function.
The conditional probability distributions of the hidden layer, $h$, and the visible layer, $v$, can be expressed as follows:
$$p(h_j \mid v) = N\!\left(b_j + \sum_{i=1}^{n} v_i w_{ij},\ 1\right) \tag{8}$$
$$p(v_i \mid h) = N\!\left(a_i + \sum_{j=1}^{m} w_{ij} h_j,\ 1\right) \tag{9}$$
where $N(\mu, \sigma^2)$ represents the Gaussian distribution, $\mu$ represents the mean, and $\sigma^2$ represents the variance, which is set to 1.
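A minimal sketch of alternating Gibbs sampling with these Gaussian conditionals is given below; the layer sizes and weight initialization are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 5, 3                       # visible / hidden layer sizes (illustrative)
W = rng.normal(0, 0.1, (n, m))    # connection weights w_ij
a = np.zeros(n)                   # visible biases a_i
b = np.zeros(m)                   # hidden biases b_j

def sample_h_given_v(v):
    # Each h_j is Gaussian with mean b_j + sum_i v_i * w_ij and unit variance.
    return rng.normal(b + v @ W, 1.0)

def sample_v_given_h(h):
    # Each v_i is Gaussian with mean a_i + sum_j w_ij * h_j and unit variance.
    return rng.normal(a + W @ h, 1.0)

# One Gibbs half-step in each direction, as used in RBM training.
v0 = rng.normal(size=n)
h0 = sample_h_given_v(v0)
v1 = sample_v_given_h(h0)
```

In contrastive-divergence training, alternating these two samplers produces the reconstruction statistics used to update $w_{ij}$; here only the conditional sampling itself is shown.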
The DBN consists of multiple layers of unsupervised RBMs and one layer of supervised backpropagation (BP). The multi-layer architecture of the DBN gradually extracts lower-level features and transforms them into higher-level features, effectively capturing the fundamental characteristics of the original training data. DBNs have strong nonlinear mapping capabilities, enabling them to handle complex relationships between data. The structure of the DBN is illustrated in Figure 8.

3.2. MDE Algorithm for Determining Initial Weights of DBN

The DE algorithm is a global searching algorithm [35] with high-dimensional optimization capabilities, capable of optimizing the numerous parameters contained in a DBN. The basic process of DE is depicted in Figure 9.
The MDE algorithm is an enhancement of the DE algorithm. By designing a fitness function, the correlation between the pre-trained weights and the production quality of the electronic control modules can be improved. This enhances the performance of the MDE-DBN algorithm in evaluating production quality. In this study, the model performance is comprehensively evaluated using Accuracy, Precision, Recall, and the F1 score. The fitness function is determined by taking the average of Accuracy and the F1 score with the calculation formulas shown in Equations (10)–(14).
1. Accuracy: the proportion of correctly classified samples to the total number of samples.
$$Accuracy = \frac{TP + TN}{TP + FP + TN + FN} \tag{10}$$
2. Precision: the proportion of correctly classified positive samples to the total number of samples classified as positive.
$$Precision = \frac{TP}{TP + FP} \tag{11}$$
3. Recall: the proportion of correctly classified positive samples to the total number of actual positive samples.
$$Recall = \frac{TP}{TP + FN} \tag{12}$$
4. F1 score: the harmonic mean of Precision and Recall, accounting for both the Precision and Recall of the classifier.
$$F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} \tag{13}$$
5. Fitness: the fitness function.
$$fitness = \frac{Accuracy + F1}{2} \tag{14}$$
where TP represents the count of true positive samples when both the actual and predicted values are positive. TN represents the count of true negative samples, where both the actual and predicted values are negative. FP represents the count of false positive samples, where the actual value is negative, but the predicted value is positive. FN represents the count of false negative samples, where the actual value is positive, but the predicted value is negative.
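A direct transcription of Equations (10)–(14) from the four counts might look as follows. The counts are hypothetical, and a binary confusion matrix is assumed for simplicity (the paper's task has three quality levels, for which these metrics would be computed per class and averaged):

```python
def fitness_from_counts(tp, tn, fp, fn):
    """Compute the fitness of Equation (14) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)        # Equation (10)
    precision = tp / (tp + fp)                        # Equation (11)
    recall = tp / (tp + fn)                           # Equation (12)
    f1 = 2 * precision * recall / (precision + recall)  # Equation (13)
    return (accuracy + f1) / 2                        # Equation (14)

# Hypothetical counts: 40 true positives, 45 true negatives,
# 10 false positives, 5 false negatives.
fitness_value = fitness_from_counts(40, 45, 10, 5)
```

With these counts, accuracy is 0.85, precision 0.8, and recall 8/9, giving a fitness just above 0.846.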
When the MDE algorithm is used to pre-train on the training set to acquire the initial parameters of the DBN, one of seven mutation strategies is randomly selected in each iteration to execute the mutation operation on individuals. Subsequently, crossover and selection operations generate a new generation of the population. Then, the fitness of each individual in the population is evaluated. Finally, the individual with the highest fitness in the population is selected for retention. The specific steps of the MDE algorithm are outlined as follows:
Step 1. Initialize the population as
$$x_{i,0} = (x_{i1,0}, x_{i2,0}, \ldots, x_{iD,0}), \quad i = 1, 2, \ldots, NP \tag{15}$$
where $NP$ represents the population size; $D$ represents the dimension of individuals, which refers to the total number of DBN parameters; and $x_{i,0}$ represents the $i$-th individual in the initial generation. Each individual in the population represents a complete set of DBN parameters. The $j$-th dimension of the $i$-th individual in the initial generation is shown in Equation (16).
$$x_{ij,0} = L_j + rand(0,1) \cdot (H_j - L_j), \quad j = 1, 2, \ldots, D \tag{16}$$
where $(L_j, H_j)$ represents the range of values for the $j$-th dimension, $H_j$ is the upper limit, $L_j$ is the lower limit, and $rand(0,1)$ is a random number uniformly distributed between 0 and 1.
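Step 1 can be sketched as follows; the population size, dimension, and bounds are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_population(NP, D, L, H):
    """Equation (16): x_ij0 = L_j + rand(0,1) * (H_j - L_j), per dimension."""
    L, H = np.asarray(L, dtype=float), np.asarray(H, dtype=float)
    return L + rng.random((NP, D)) * (H - L)

# Illustrative: 20 individuals, 100 DBN parameters each, bounded in [-1, 1].
D = 100
pop = init_population(NP=20, D=D, L=-np.ones(D), H=np.ones(D))
```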
Step 2. Mutation. The MDE algorithm generates new solution vectors through mutation, generating new sets of DBN parameters. It enhances the DE algorithm by establishing a set of seven distinct mutation strategies. Each time a new mutation operation is executed, the current mutation strategy is randomly selected from the set. This approach enables the MDE algorithm to explore the solution space more comprehensively and effectively, increasing the likelihood of finding potential optimal solutions. The overall process is as follows:
(1) Initialize the mutation strategy set $M = \{\mathrm{DE1}, \mathrm{DE2}, \mathrm{DE3}, \mathrm{DE4}, \mathrm{DE5}, \mathrm{DE6}, \mathrm{DE7}\}$. The elements of the set $M$ are as follows:
  • DE1: DE/rand/1 strategy, $v_{i,G} = x_{r1,G} + F(x_{r2,G} - x_{r3,G})$;
  • DE2: DE/rand/2 strategy, $v_{i,G} = x_{r1,G} + F(x_{r2,G} - x_{r3,G}) + F(x_{r4,G} - x_{r5,G})$;
  • DE3: DE/best/1 strategy, $v_{i,G} = x_{best,G} + F(x_{r1,G} - x_{r2,G})$;
  • DE4: DE/best/2 strategy, $v_{i,G} = x_{best,G} + F(x_{r1,G} - x_{r2,G}) + F(x_{r3,G} - x_{r4,G})$;
  • DE5: DE/rand-to-best/1 strategy, $v_{i,G} = x_{r1,G} + F(x_{best,G} - x_{r1,G}) + F(x_{r2,G} - x_{r3,G})$;
  • DE6: DE/current-to-best/1 strategy, $v_{i,G} = x_{i,G} + F(x_{best,G} - x_{i,G}) + F(x_{r1,G} - x_{r2,G})$;
  • DE7: DE/current-to-rand/1 strategy, $v_{i,G} = x_{i,G} + rand(0,1)(x_{r1,G} - x_{i,G}) + F(x_{r2,G} - x_{r3,G})$.
    where $x_{best,G}$ represents the individual with the highest fitness value in the $G$-th generation population; $r1$, $r2$, $r3$, $r4$, and $r5$ are indices of individuals randomly selected from the population, with the condition that $r1 \ne r2 \ne r3 \ne r4 \ne r5 \ne i \in \{1, \ldots, NP\}$; $v_{i,G}$ represents the generated difference vector; and $F$ is the scaling factor that controls the scaling degree of the difference vector.
(2) Select a mutation strategy: randomly select a mutation strategy, m, from M.
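The seven-strategy mutation pool and its random selection can be sketched as follows (a simplified NumPy illustration rather than the authors' implementation; the population, index i, and scaling factor are toy values):

```python
import numpy as np

def mutate(pop, best, i, F, rng):
    """Apply one randomly chosen strategy from the MDE mutation set {DE1..DE7}."""
    NP = len(pop)
    # five mutually distinct indices, all different from the current index i
    r1, r2, r3, r4, r5 = rng.choice(
        [k for k in range(NP) if k != i], size=5, replace=False)
    x = pop
    strategies = [
        lambda: x[r1] + F * (x[r2] - x[r3]),                                 # DE/rand/1
        lambda: x[r1] + F * (x[r2] - x[r3]) + F * (x[r4] - x[r5]),           # DE/rand/2
        lambda: best + F * (x[r1] - x[r2]),                                  # DE/best/1
        lambda: best + F * (x[r1] - x[r2]) + F * (x[r3] - x[r4]),            # DE/best/2
        lambda: x[r1] + F * (best - x[r1]) + F * (x[r2] - x[r3]),            # DE/rand-to-best/1
        lambda: x[i] + F * (best - x[i]) + F * (x[r1] - x[r2]),              # DE/current-to-best/1
        lambda: x[i] + rng.random() * (x[r1] - x[i]) + F * (x[r2] - x[r3]),  # DE/current-to-rand/1
    ]
    return strategies[rng.integers(len(strategies))]()

rng = np.random.default_rng(1)
pop = rng.uniform(-1, 1, size=(10, 5))  # toy population of candidate weight vectors
best = pop[0]                           # stand-in for the best individual
v = mutate(pop, best, i=3, F=0.5, rng=rng)
```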
Step 3. Crossover. Based on a specific probability, genes are selected from multiple individuals and combined to create a new individual. Crossover is performed between the temporary mutation vector v_{i,G}, generated through the mutation operation, and the current individual x_{i,G} to obtain a new individual u_{i,G}. The crossover formula is shown in Equation (17).
u_{ij,G} = { v_{ij,G}, if rand(0,1) ≤ CR or j = j_rand; x_{ij,G}, otherwise }
where CR represents the crossover probability factor, typically ranging from 0 to 1. j_rand is an integer randomly selected from 1 to D to ensure that the trial vector u_{i,G} has at least one dimension of data coming from the mutation vector v_{i,G}.
Step 4. Selection. The MDE algorithm employs a greedy strategy for the selection operation. The fitness value of the trial individual u_{i,G}, generated through the mutation and crossover operations, is compared with that of the current individual x_{i,G}. If the fitness value of u_{i,G} is better, u_{i,G} is chosen as the offspring individual; otherwise, x_{i,G} is retained. The selection formula is shown in Equation (18).
x_{i,G+1} = { u_{i,G}, if f(u_{i,G}) ≥ f(x_{i,G}); x_{i,G}, otherwise }
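Equations (17) and (18) can be illustrated together in a minimal NumPy sketch (assuming the fitness f is to be maximized; the vectors and the toy sum-based fitness below are illustrative only):

```python
import numpy as np

def crossover(x_i, v_i, CR, rng):
    """Binomial crossover: take each gene from the mutant with probability CR;
    index j_rand guarantees at least one gene comes from the mutant vector."""
    D = len(x_i)
    j_rand = rng.integers(D)
    mask = rng.random(D) <= CR
    mask[j_rand] = True
    return np.where(mask, v_i, x_i)

def select(x_i, u_i, fitness):
    """Greedy selection for a maximized fitness: keep the better individual."""
    return u_i if fitness(u_i) >= fitness(x_i) else x_i

rng = np.random.default_rng(2)
x = np.zeros(6)          # current individual
v = np.ones(6)           # mutant vector
u = crossover(x, v, CR=0.5, rng=rng)
winner = select(x, u, fitness=lambda w: w.sum())  # u carries at least one 1, so u wins
```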

3.3. MDE-DBN Algorithm for Evaluating Production Quality

The MDE-DBN algorithm evaluates the production quality of electronic control modules. Initially, the data are preprocessed and divided into training and testing sets. Once the network parameters of the MDE-DBN algorithm and the MDE algorithm are determined, the training set is input into the MDE-DBN algorithm, where parameters are adjusted through error backpropagation. Finally, the ability of the algorithm to evaluate production quality is validated using the testing set. The specific steps the MDE-DBN algorithm takes to solve the production quality evaluation problem for electronic control modules are as follows:
Step 1. Preprocess the raw data. Utilize a robust scaler to process the electrical performance testing indicator data obtained during the detection process. For the visual inspection indicator data, apply binary assignment and fuzzy methods. These steps enhance the data’s applicability in evaluating the production quality of electronic control modules.
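The robust scaling applied to the numeric electrical performance indicators can be sketched directly; this is the transform a robust scaler performs (the resistance readings below are hypothetical):

```python
import numpy as np

def robust_scale(col):
    """Center on the median and scale by the interquartile range (IQR),
    which is resistant to the outliers that occur in raw test readings."""
    median = np.median(col)
    q1, q3 = np.percentile(col, [25, 75])
    iqr = q3 - q1
    return (col - median) / iqr if iqr > 0 else col - median

# hypothetical resistance readings with one outlier
resistance = np.array([1.8, 2.0, 2.1, 2.2, 9.5])
scaled = robust_scale(resistance)
```

Unlike min-max normalization, the outlier here shifts neither the center nor the scale of the transformed values.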
Step 2. Divide the dataset into training and testing sets in an 8:2 ratio, ensuring that the data distribution of each production quality characteristic remains consistent between the training set and the testing set.
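A stratified 8:2 split of the kind described in Step 2 might look like the following sketch (the label distribution is hypothetical):

```python
import numpy as np

def stratified_split(y, test_frac=0.2, seed=0):
    """Index-level split that preserves the per-label proportions, so each
    production quality characteristic is equally represented in both sets."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for label in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == label))
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

# hypothetical labels: 70% excellent (0), 20% general (1), 10% unqualified (2)
y = np.array([0] * 70 + [1] * 20 + [2] * 10)
tr, te = stratified_split(y)
```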
Step 3. Determine the network structure and parameters of the MDE-DBN algorithm. Firstly, the number of nodes in the input and output layers is determined based on the number of indicators in the production quality evaluation system and the number of production quality characteristics of electronic control modules. Secondly, the scaling factor, crossover probability, and mutation strategy set in the MDE algorithm are initialized. Thirdly, the number of hidden layers, the number of network nodes, the learning rate, the number of epochs, and the population size in the MDE-DBN algorithm are determined using the control variable method.
Step 4. Implement the MDE algorithm. When the maximum number of epochs is reached, utilize the value of the optimal individual to assign weights to the network nodes in the MDE-DBN algorithm.
Step 5. Update the MDE-DBN node weights through error backpropagation. The error is determined as the difference between the output of the MDE-DBN and the actual production quality data in the training set, represented by the mean squared error loss function.
Step 6. Terminate updating of MDE-DBN parameters once the maximum number of epochs is reached. Input the test set for evaluation and utilize the output of the classifier to assess the performance of the MDE-DBN algorithm in evaluating the production quality of electronic control modules.
The algorithm flowchart is depicted in Figure 10.

4. Computational Experiments

In this section, the parameters of the MDE-DBN algorithm are determined using the control variable method to ensure the objectivity of the experimental results. These parameters include the number of hidden layers, the number of network nodes, the learning rate, the number of epochs, and the population size. The results of the MDE-DBN algorithm in evaluating the production quality of electronic control modules are analyzed to verify its effectiveness. Next, the feasibility of the MDE algorithm is verified by conducting comparative experiments with the DE algorithm using a single mutation strategy, the particle swarm optimization algorithm, and the Grey Wolf Optimizer combined with DBN. This verification is based on a comparison of six aspects: Accuracy, Precision, Recall, F1 score, runtime, and the fitness function value. Further comparisons are made with the original DBN, Logistic Regression, Decision Trees, Random Forests, Support Vector Machines, Artificial Neural Networks, K-Nearest Neighbors, and Naive Bayes to validate the superiority of the MDE-DBN algorithm in solving this problem.

4.1. Experimental Environment and Parameter Settings

The MDE-DBN algorithm is coded in Python, and the program is run on PyCharm Professional Edition 2023.1.3. The experiments are conducted on a personal computer with two Intel(R) Xeon(R) Gold 6248R CPUs @ 3.00 GHz and 256 GB of RAM.
The parameter settings for the MDE-DBN algorithm are as follows: the maximum number of epochs, Gmax, is set to 200; the crossover rate, CR, is set to 0.5; the scaling factor, F0, is set to 2; the lower bound is set to −1; and the upper bound is set to 1. Ten sets of experimental data are collected, with each set containing 20,000 data points. The control variable method determines the number of hidden layers, the number of network nodes, the learning rate, the number of epochs, and the population size of the MDE-DBN algorithm. The final result for each evaluation metric in this experiment is calculated as the average over the 10 datasets.

4.1.1. Number of Hidden Layers

In the MDE-DBN algorithm, the number of hidden layers represents the number of RBMs. With too few hidden layers, the algorithm lacks learning capability, leading to underfitting. Conversely, too many hidden layers can degrade performance and lead to overfitting. The number of hidden layers is controlled as a single variable, with a range of 2 to 6. The evaluation metrics for different settings are shown in Table 3.
Table 3 shows that Accuracy, Precision, Recall, and the F1 score reach their highest values at 0.955, 0.957, 0.955, and 0.952, respectively, when the number of hidden layers is set to 3. Consequently, the number of hidden layers for the MDE-DBN algorithm is set to 3.

4.1.2. Number of Network Nodes

When the number of network nodes is small, the accuracy of the MDE-DBN algorithm decreases. Conversely, the runtime of the algorithm will increase when the number is large. To determine the number of network nodes, experiments are conducted using five types of arrangements for the number of nodes in the hidden layer: constant type, middle convex type, middle concave type, constant increment type, and constant subtraction type. Each arrangement is set with three groups of nodes. The results for each evaluation metric are shown in Table 4.
Table 4 shows that when the number of hidden layer network nodes is 50-70-50, the MDE-DBN algorithm achieves the highest Accuracy, Precision, Recall, and F1 score at 0.968, 0.969, 0.968, and 0.967, respectively. Therefore, the number of network nodes for the MDE-DBN algorithm is set to 50-70-50.

4.1.3. Learning Rate

The learning rate directly affects the convergence speed and performance of the MDE-DBN algorithm. A high learning rate can cause oscillations and a failure to converge, while a low learning rate can decrease the convergence speed and increase training time. An inappropriate learning rate can also lead to overfitting or underfitting. Experiments are conducted to determine the optimal learning rate for the MDE-DBN algorithm. The results for each evaluation metric are shown in Table 5.
Table 5 shows that when the learning rate is 5 × 10−3, the algorithm achieves the highest Accuracy, Precision, Recall, and F1 score at 0.969, 0.971, 0.969, and 0.969, respectively. Therefore, the learning rate for the MDE-DBN algorithm is set to 5 × 10−3.

4.1.4. Number of Epochs

The number of epochs denotes the number of training iterations in the MDE-DBN algorithm, specifically indicating the frequency at which its parameters are updated through backpropagation. A high number of epochs allows the algorithm to learn more data patterns and features, enhancing its performance. Conversely, fewer epochs may lead to underfitting due to insufficient learning. Experiments are conducted to determine the optimal number of epochs for the MDE-DBN algorithm. The results for each evaluation metric are shown in Table 6.
Table 6 shows that when the number of epochs is either 10,000 or 15,000, the algorithm achieves the highest Accuracy, Precision, Recall, and F1 score at 0.973, 0.974, 0.973, and 0.972, respectively. Accounting for the runtime, the number of epochs for the MDE-DBN algorithm is set to 10,000.

4.1.5. Population Size

The population size is the number of candidate solutions simultaneously present in the MDE algorithm, determining the breadth and depth of exploration in the solution space. Increasing the population size enhances the search breadth, potentially revealing the global optimum, but it also increases the runtime. Conversely, decreasing the population size limits the search scope, reducing the likelihood of finding the global optimal solution, but it also decreases the runtime. Experiments are conducted with population sizes of 5, 10, 20, and 50. The evaluation metrics and runtime are shown in Table 7.
Table 7 shows that when the population size is either 10 or 50, Accuracy, Precision, Recall, and F1 score are the highest. However, a significant advantage in runtime can be observed with a population size of 10 compared with 50. Consequently, the population size for the MDE algorithm in the MDE-DBN algorithm is set to 10.

4.2. Results Analysis for Solving Evaluation Problems Based on MDE-DBN Algorithm

This analysis focuses on the results obtained from the MDE-DBN algorithm in evaluating the production quality of electronic control modules. It includes trends for the fitness function, the loss function, the evaluation metrics, and the confusion matrix. This analysis verifies the effectiveness of the MDE-DBN algorithm in evaluating the production quality of electronic control modules.

4.2.1. Fitness Function Variation Trend

The fitness curve of the initial weights obtained by the MDE algorithm is shown in Figure 11. The fitness function is calculated as the average of Accuracy and the F1 score. The graph indicates that the fitness function value tends to rise with the increasing number of epochs. Around the 180th epoch, the algorithm converges, obtaining optimal initial weights for the MDE-DBN algorithm.
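The fitness computation can be reproduced as follows (macro-averaging of the F1 score over the three quality classes is an assumption here, since the averaging scheme is not stated in the text):

```python
import numpy as np

def fitness(y_true, y_pred):
    """Fitness used to rank candidate initial weights: the mean of Accuracy
    and the (assumed macro-averaged) F1 score of the resulting classifier."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = (y_true == y_pred).mean()
    f1s = []
    for c in np.unique(y_true):
        tp = ((y_pred == c) & (y_true == c)).sum()
        fp = ((y_pred == c) & (y_true != c)).sum()
        fn = ((y_pred != c) & (y_true == c)).sum()
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return (acc + np.mean(f1s)) / 2

# perfect predictions yield the maximum fitness of 1.0
assert fitness([0, 1, 2, 1], [0, 1, 2, 1]) == 1.0
```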

4.2.2. Loss Function Variation Trend

The trend for the loss function with the number of epochs during the MDE-DBN algorithm training process is shown in Figure 12. The graph indicates that the loss function of the network decreases as the number of epochs increases, eventually converging at about 8000 epochs. The light-colored area represents the distribution curves of the loss function across 10 experiments, while the dark-colored line represents the average loss function curve from these 10 experiments.

4.2.3. Evaluation Metrics Variation Trend

Trends for Accuracy, Precision, Recall, and the F1 score with the number of epochs are shown in Figure 13. As the number of epochs increases, all metrics demonstrate an upward trend, converging toward one. This indicates that the MDE-DBN algorithm can effectively solve the evaluation problem and achieve favorable results. The light-colored area represents the distribution curves of the evaluation metrics across 10 experiments, while the dark-colored lines represent the average evaluation metrics curves from these 10 experiments.

4.2.4. Confusion Matrix

The confusion matrix of the MDE-DBN algorithm is shown in Figure 14. The horizontal axis represents the evaluation labels, while the vertical axis represents the actual labels. Analysis of the matrix reveals that the MDE-DBN algorithm demonstrates a high level of classification accuracy for the labels “excellent” and “general”. There are only 4 instances where the label “excellent” is incorrectly classified as label “unqualified”. Label “general” is classified with complete accuracy. However, the algorithm performs slightly worse in classifying label “unqualified”, with 81 incorrect classifications. Among these, 67 are classified as label “excellent”, and 14 are classified as label “general”.

4.3. Comparative Results of MDE-DBN Algorithm Experiments

To validate the feasibility and superiority of the MDE-DBN algorithm, it is compared with various intelligent optimization algorithms combined with DBN and machine learning methods. The performance is evaluated by comparing the Accuracy, Precision, Recall, F1 score, and runtime of each algorithm.

4.3.1. Comparison with Intelligent Optimization Algorithms

The MDE algorithm is compared with the DE algorithm with a single mutation strategy, the Particle Swarm Optimization (PSO) algorithm, and the Grey Wolf Optimizer (GWO) algorithm combined with DBN. The Accuracy, Precision, Recall, F1 score, and runtime of each algorithm are shown in Table 8.
The results demonstrate that, compared with the DE algorithm with a single mutation strategy, the PSO algorithm, and the GWO algorithm combined with DBN, the Accuracy of the MDE-DBN algorithm increases by at least 1.25% and at most 2.74%; Precision increases by at least 1.14% and at most 2.41%; Recall increases by at least 1.25% and at most 2.74%; and the F1 score increases by at least 1.35% and at most 3.07%. In addition, the runtime of the MDE-DBN algorithm is the lowest: the compared algorithms take at least 1.65% and at most 15.35% longer. The experimental results indicate that the MDE algorithm can obtain superior initial weights, enhancing the performance of the MDE-DBN algorithm in evaluating the production quality of electronic control modules. Figure 15 illustrates the curves of the fitness function values for each algorithm.
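The Accuracy gains quoted above can be reproduced directly from the figures in Table 8:

```python
# Accuracy values from Table 8
acc = {"DE1-DBN": 0.957, "DE2-DBN": 0.962, "DE3-DBN": 0.963, "DE4-DBN": 0.956,
       "DE5-DBN": 0.958, "DE6-DBN": 0.961, "DE7-DBN": 0.958,
       "PSO-DBN": 0.949, "GWO-DBN": 0.955}
mde_acc = 0.975

# relative improvement of MDE-DBN over each compared algorithm, in percent
gains = {name: 100 * (mde_acc - a) / a for name, a in acc.items()}
lo, hi = min(gains.values()), max(gains.values())  # smallest vs. DE3-DBN, largest vs. PSO-DBN
```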
Figure 15 shows the convergence of the fitness functions of the MDE-DBN algorithm, the DE algorithm with a single mutation strategy, the PSO algorithm, and the GWO algorithm as the number of epochs increases. Notably, the MDE-DBN algorithm demonstrates the highest fitness function value. This indicates that, compared with the other algorithms, the MDE algorithm enhances exploration efficiency in the solution space and enables the acquisition of superior initial weights for the DBN. Consequently, the MDE-DBN algorithm achieves more accurate evaluation results.

4.3.2. Comparison with Machine Learning Methods

To validate the superiority of the MDE-DBN algorithm, it is compared with the DBN, Logistic Regression (LR), Decision Tree (DT), Random Forests (RFs), Support Vector Machines (SVMs), Artificial Neural Networks (ANNs), K-Nearest Neighbors (KNNs), and Naive Bayes (NB). The Accuracy, Precision, Recall, and F1 score of each algorithm are shown in Table 9.
The results demonstrate that the MDE-DBN algorithm outperforms the DBN, LR, DT, RF, SVM, ANN, KNN, and NB algorithms. Accuracy increases by at least 1.25% and at most 15.93%; Precision increases by at least 1.14% and at most 16.05%; Recall increases by at least 1.25% and at most 15.93%; and the F1 score increases by at least 1.46% and at most 16.07%. These findings suggest that the MDE-DBN algorithm, specifically designed for this quality evaluation problem, can achieve superior evaluation outcomes. Figure 16 presents box plots of the evaluation metrics for different algorithms.
The analysis demonstrates that the MDE-DBN algorithm achieves the highest average scores for all the evaluation metrics. The data distributions for LR, DT, RF, SVM, ANN, KNN, NB, and MDE-DBN are more concentrated, while the data distribution for DBN is more dispersed.

5. Conclusions

This study investigated the problem of evaluating the production quality of electronic control modules in digital electronic detonators and proposed the MDE-DBN algorithm, designed to accurately assess the production quality of these control modules. First, key indicators in electrical performance testing and visual inspection were analyzed to establish an evaluation index system. Additionally, the data from the electrical performance testing and visual inspection were preprocessed. Then, by improving the mutation process of the DE algorithm, the MDE algorithm was developed to replace the DBN’s unsupervised pre-training method, forming the MDE-DBN algorithm. The parameters of the MDE-DBN algorithm—including the number of hidden layers, the number of network nodes, the learning rate, the number of epochs, and the population size—were determined using the control variable method to ensure the objectivity of the experimental results. Finally, compared with the DE algorithm with a single mutation strategy, the PSO algorithm, and the GWO algorithm combined with DBN, the MDE algorithm provided better initial weights for the DBN, achieving superior evaluation results. Furthermore, a comparison with the DBN, LR, DT, RF, SVM, ANN, KNN, and NB algorithms demonstrated that the MDE-DBN algorithm performs better in assessing the production quality of electronic control modules. However, some limitations should be noted. The differential evolution algorithm is a global optimization algorithm that has a high computational cost. This cost increases the training time and may affect the real-time evaluation requirements for producing electronic control modules. In future research, the focus will be on improving the computational efficiency and robustness of the algorithm to prevent overfitting or unstable optimization processes. 
By enhancing the computational efficiency of the model, it will be possible to satisfy the real-time requirements for quality evaluations of electronic detonator control modules. The evaluation results will guide subsequent production, aiming to achieve data-driven full-process production and evaluation for electronic control modules.

Author Contributions

Conceptualization, H.G.; methodology, W.X. and W.S.; software, W.X. and C.C.; validation, H.G. and W.S.; formal analysis, H.G. and C.C.; resources, C.C.; data curation, W.X.; writing—original draft preparation, W.X.; writing—review and editing, H.G. and W.S.; supervision, H.G.; project administration, H.G.; funding acquisition, H.G. and W.S. All authors have read and agreed to the published version of this manuscript.

Funding

This research was funded by the Special Funds for Basic Scientific Research Operating Expenses of Universities of Liaoning Province, grant number SYLUGXJT2, the Shenyang Xing-Shen Talents Plan Project for Master Teachers, grant number XSMS2206003, and the Scientific Research funds Project of the Educational Department of Liaoning Province, grant number JYTMS20230201.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gao, G.F.; Wu, H.P.; Sun, L.L.; He, L.X. Comprehensive Quality Evaluation System for Manufacturing Enterprises of Large Piston Compressors. Procedia Eng. 2017, 174, 566–570. [Google Scholar] [CrossRef]
  2. Liu, R.D.; Wang, H.Q.; Bao, J.; Lou, L.Y.; Zheng, L.H. Construction and Application of Quality Assurance Capability Evaluation Model for Co-production of Cigarette Materials Based on AHP-Entropy Method. In Proceedings of the 2023 8th International Conference on Engineering Management (ICEM 2023), Wuhan, China, 8–10 September 2023; pp. 326–335. [Google Scholar]
  3. Wu, M.F.; Chen, H.Y.; Chang, T.S.; Wu, C.F. Quality evaluation of internal cylindrical grinding process with multiple quality characteristics for gear products. Int. J. Prod. Res. 2019, 57, 6687–6701. [Google Scholar] [CrossRef]
  4. Chen, K.S.; Hsu, C.H.; Chiou, K.C. Product quality evaluation by confidence intervals of process yield index. Sci. Rep. 2022, 12, 10508. [Google Scholar] [CrossRef]
  5. Shu, M.H.; Wu, H.C. Measuring the manufacturing process yield based on fuzzy data. Int. J. Prod. Res. 2009, 48, 1627–1638. [Google Scholar] [CrossRef]
  6. Yu, C.-M.; Luo, W.-J.; Hsu, T.-H.; Lai, K.-K. Two-Tailed Fuzzy Hypothesis Testing for Unilateral Specification Process Quality Index. Mathematics 2020, 8, 2129. [Google Scholar] [CrossRef]
  7. Chen, K.-S.; Huang, T.-H. A Fuzzy Evaluation Model Aimed at Smaller-the-Better-Type Quality Characteristics. Mathematics 2021, 9, 2513. [Google Scholar] [CrossRef]
  8. Sygut, P. Evaluation of paving stone production quality. Prod. Eng. Arch. 2015, 6, 14–16. [Google Scholar] [CrossRef]
  9. Shen, L.X.; Tay, F.E.H.; Qu, L.S.; Shen, Y.D. Fault diagnosis using Rough Sets Theory. Comput. Ind. 2000, 43, 61–72. [Google Scholar] [CrossRef]
  10. He, F.; Xu, J.W.; Li, M.; Yang, J.H. Product quality modelling and prediction based on wavelet relevance vector machines. Chemom. Intell. Lab. Syst. 2013, 121, 33–41. [Google Scholar]
  11. Su, Y.Y.; Han, L.J.; Wang, J.N.; Wang, H.M. Quantum-behaved RS-PSO-LSSVM method for quality prediction in parts production processes. Concurr. Comput. Pract. Exp. 2019, 34, e5522. [Google Scholar]
  12. Hur, J.; Lee, H.; Baek, J.-G. An Intelligent Manufacturing Process Diagnosis System Using Hybrid Data Mining. Advances in Data Mining. Applications in Medicine, Web Mining, Marketing. Image Signal Min. 2006, 4065, 561–575. [Google Scholar]
  13. Rokach, L.; Maimon, O. Data Mining for Improving the Quality of Manufacturing: A Feature Set Decomposition Approach. J. Intell. Manuf. 2006, 17, 285–299. [Google Scholar] [CrossRef]
  14. Lingitz, L.; Gallina, V.; Breitschopf, J.; Finamore, L.; Sihn, W. Quality in production planning: Definition, quantification and a machine learning based improvement method. Procedia Comput. Sci. 2023, 217, 358–365. [Google Scholar] [CrossRef]
  15. Antosz, K.; Gola, A.; Paśko, Ł.; Malheiro, T.; Gonçalves, A.M.; Varela, L. Six Sigma and Random Forests Application for Product Quality System Control Development. In Advances in Manufacturing III; Springer: Cham, Switzerland, 2022; pp. 99–112. [Google Scholar]
  16. Ji, Y.J.; Yong, X.Y.; Liu, Y.L.; Liu, S.X. Random Forest Based Quality Analysis and Prediction Method for Hot-Rolled Strip. J. Northeast. Univ. (Nat. Sci.) 2019, 40, 11–15. [Google Scholar]
  17. Stock, S.; Pohlmann, S.; Günter, F.J.; Hille, L.; Hagemeister, L.; Reinhart, G. Early Quality Classification and Prediction of Battery Cycle Life in Production Using Machine Learning. J. Energy Storage 2022, 50, 104144. [Google Scholar] [CrossRef]
  18. Wang, P.; Qu, H.; Zhang, Q.L.; Xu, X.; Yang, S. Production quality prediction of multistage manufacturing systems using multi-task joint deep learning. J. Manuf. Syst. 2023, 70, 48–68. [Google Scholar] [CrossRef]
  19. Liao, T.W.; Li, D.-M.; Li, Y.-M. Detection of welding flaws from radiographic images with fuzzy clustering methods. Fuzzy Sets Syst. 1999, 108, 145–158. [Google Scholar] [CrossRef]
  20. Che, C.C.; Wang, H.W.; Fu, Q.; Ni, X.M. Combining multiple deep learning algorithms for prognostic and health management of aircraft. Aerosp. Sci. Technol. 2019, 94, 105423. [Google Scholar] [CrossRef]
  21. Liu, Y.M.; Zhou, H.F.; Tsung, F.G.; Zhang, S. Real-time quality monitoring and diagnosis for manufacturing process profiles based on deep belief networks. Comput. Ind. Eng. 2019, 136, 494–503. [Google Scholar] [CrossRef]
  22. Zhao, B.; Wu, C.J. Sound quality evaluation of electronic expansion valve using Gaussian restricted Boltzmann machines based DBN. Appl. Acoust. 2020, 170, 107493. [Google Scholar] [CrossRef]
  23. Gao, S.Z.; Xu, L.T.; Zhang, Y.M.; Pei, Z.M. Rolling bearing fault diagnosis based on SSA optimized self-adaptive DBN. ISA Trans. 2022, 128, 485–502. [Google Scholar] [CrossRef]
  24. Zhou, M.; Wang, J.; Shi, Y.T.; Wang, Z.H.; Puig, V. Remaining Useful Life Prediction of Rolling Bearings Based on Adaptive Continuous Deep Belief Networks and Improved Kernel Extreme Learning Machine. Int. J. Adapt. Control. Signal Process. 2024. [Google Scholar] [CrossRef]
  25. Liu, X.Y.; Chen, L.; Zhu, L.J.; Wang, J.; Chen, L.; Zeng, X.K.; Song, Z.; Wang, L.J. High-Accuracy Battery State of Charge Estimation Strategy Based on Deep Belief Network Cascaded With Extended Kalman Filter. J. Electrochem. Energy Convers. Storage 2024, 21, 031006. [Google Scholar] [CrossRef]
  26. Jeong, C.M.; Jung, Y.G.; Lee, S.J. Deep belief networks based radar signal classification system. J. Ambient. Intell. Humaniz. Comput. 2024, 15, 1599–1611. [Google Scholar] [CrossRef]
  27. Ding, K.; Chen, X.; Weng, S.; Liu, Y.J.; Zhang, J.W.; Li, Y.L.; Yang, Z. Health status evaluation of photovoltaic array based on deep belief network and Hausdorff distance. Energy 2023, 262, 125539. [Google Scholar] [CrossRef]
  28. Ma, M.; Sun, C.; Chen, X.F. Discriminative Deep Belief Networks with Ant Colony Optimization for Health Status Assessment of Machine. IEEE Trans. Instrum. Meas. 2017, 66, 3115–3125. [Google Scholar] [CrossRef]
  29. Pan, B.; Gao, W.; Peng, Y.H.; Hu, Z.L.; Wang, L.J.; Jiang, J.C. Simple and Effective Fault Diagnosis Method of Power Lithium-Ion Battery Based on GWA-DBN. J. Electrochem. Energy Convers. Storage 2023, 20, 031009. [Google Scholar]
  30. Drucker, P.F. The Practice of Management; Harper Business: New York, NY, USA, 2006. [Google Scholar]
  31. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  32. Rumelhart, D.E.; McClelland, J.L. Information Processing in Dynamical Systems: Foundations of Harmony Theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations; MIT Press: Cambridge, MA, USA, 1987; pp. 194–281. [Google Scholar]
  33. Hinton, G.E. A Practical Guide to Training Restricted Boltzmann Machines. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7700, pp. 599–619. [Google Scholar]
  34. Taylor, G.W.; Hinton, G.E.; Roweis, S. Modeling Human Motion Using Binary Latent Variables. In Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference; MIT Press: Cambridge, MA, USA, 2007; pp. 1345–1352. [Google Scholar]
  35. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
Figure 1. An internal structure diagram of the digital electronic detonator.
Figure 2. The overall production process and detection indicators for electronic control modules.
Figure 3. The quality evaluation index system for the production of electronic control modules.
Figure 4. The curve of the trapezoidal membership function.
Figure 5. The contrast effect of partial data before and after preprocessing.
Figure 6. The Spearman correlation coefficient heatmap among the features.
Figure 7. The structure of RBM.
Figure 8. The structure of DBN.
Figure 9. The basic process of DE.
Figure 10. The flow chart of the MDE-DBN algorithm.
Figure 11. Fitness function curve of the MDE-DBN algorithm.
Figure 12. Loss function curve of the MDE-DBN algorithm.
Figure 13. Variation trend of the evaluation metrics: (a) Accuracy; (b) Precision; (c) Recall; and (d) F1 Score.
Figure 14. Confusion matrix of MDE-DBN algorithm.
Figure 15. The curves of the fitness function value for different algorithms.
Figure 16. The box plots of the evaluation metrics for different algorithms.
Table 1. Binary assignment criteria for visual inspection indicators.
Procedure | Qualified Criteria | Score | Unqualified Criteria | Score
Visual inspection of the injection molding | Optimal nozzle height | 0 | Improper nozzle height | 1
 | No adhesive in rectangular-shaped area | 0 | Adhesive in rectangular-shaped area | 1
 | No deformation in rectangular-shaped area chips | 0 | Deformation in rectangular-shaped area chips | 1
Visual inspection of the spot welding | No less welding | 0 | Less welding | 1
 | Tin-balls match pad diameter, not oversized or too high | 0 | Tin-balls oversized or too high | 1
 | No pull tip from different angles | 0 | Pull tip from different angles | 1
Visual inspection of the resistance and bridge wire | No bridge wire deformation | 0 | Bridge wire deformation | 1
 | No bridge wires disconnected | 0 | Bridge wires disconnected | 1
Table 2. The values of a, b, c, and d for different visual inspection indicators.
Visual Inspection Indicators | a | b | c | d
Less mucilage | 0.025 | 0.033 | 0.041 | 0.05
Component burr | 0.15 | 0.2 | 0.25 | 0.3
Bridge trestle and bridge wire | 1 | 1.33 | 1.66 | 2
Multi-tin | 0.25 | 0.33 | 0.41 | 0.5
Multi-flux | 0.25 | 0.33 | 0.41 | 0.5
Hickey | 0.25 | 0.33 | 0.41 | 0.5
Tin at both ends | 0.5 | 0.66 | 0.83 | 1
Scratch diameter | 0.25 | 0.33 | 0.41 | 0.5
Scratch area | 0.05 | 0.067 | 0.084 | 0.1
Scratch times | 1 | 1.33 | 1.66 | 2
Table 3. The average evaluation metrics for different layers.
Layers | Accuracy | Precision | Recall | F1 Score
2 | 0.950 | 0.953 | 0.950 | 0.947
3 | 0.955 | 0.957 | 0.955 | 0.952
4 | 0.941 | 0.946 | 0.941 | 0.937
5 | 0.946 | 0.949 | 0.946 | 0.944
6 | 0.927 | 0.933 | 0.927 | 0.922
Table 4. The average evaluation metrics for different numbers of network nodes.
Type | Numbers of Network Nodes | Accuracy | Precision | Recall | F1 Score | Runtime (s)
constant type | 30-30-30 | 0.963 | 0.965 | 0.963 | 0.962 | 70
constant type | 50-50-50 | 0.965 | 0.967 | 0.965 | 0.964 | 95
constant type | 70-70-70 | 0.967 | 0.969 | 0.967 | 0.966 | 129
middle convex type | 30-50-30 | 0.962 | 0.964 | 0.962 | 0.960 | 77
middle convex type | 50-70-50 | 0.968 | 0.969 | 0.968 | 0.967 | 105
middle convex type | 70-100-70 | 0.967 | 0.968 | 0.967 | 0.966 | 147
middle concave type | 30-10-30 | 0.947 | 0.951 | 0.947 | 0.944 | 60
middle concave type | 50-30-50 | 0.958 | 0.961 | 0.958 | 0.956 | 84
middle concave type | 70-50-70 | 0.967 | 0.968 | 0.967 | 0.966 | 113
constant increment type | 30-40-50 | 0.953 | 0.956 | 0.953 | 0.951 | 82
constant increment type | 50-60-70 | 0.962 | 0.963 | 0.962 | 0.960 | 108
constant increment type | 70-80-90 | 0.966 | 0.967 | 0.966 | 0.965 | 149
constant subtraction type | 30-20-10 | 0.959 | 0.961 | 0.959 | 0.960 | 58
constant subtraction type | 50-40-30 | 0.965 | 0.967 | 0.965 | 0.964 | 83
constant subtraction type | 70-60-50 | 0.967 | 0.968 | 0.967 | 0.966 | 110
Table 5. The average evaluation metrics for different learning rates.

| Learning Rate | Accuracy | Precision | Recall | F1 Score | Runtime (s) |
|---|---|---|---|---|---|
| 1 × 10⁻³ | 0.968 | 0.969 | 0.968 | 0.967 | 105 |
| 2 × 10⁻³ | 0.969 | 0.970 | 0.969 | 0.968 | 105 |
| 5 × 10⁻³ | 0.969 | 0.971 | 0.969 | 0.969 | 105 |
| 1 × 10⁻⁴ | 0.942 | 0.943 | 0.942 | 0.939 | 105 |
| 2 × 10⁻⁴ | 0.957 | 0.958 | 0.957 | 0.956 | 104 |
| 5 × 10⁻⁴ | 0.966 | 0.967 | 0.966 | 0.965 | 106 |
| 1 × 10⁻⁵ | 0.680 | 0.727 | 0.680 | 0.692 | 106 |
| 2 × 10⁻⁵ | 0.827 | 0.847 | 0.827 | 0.829 | 105 |
| 5 × 10⁻⁵ | 0.918 | 0.916 | 0.918 | 0.915 | 104 |
Table 6. The average evaluation metrics for different numbers of epochs.

| Epochs | Accuracy | Precision | Recall | F1 Score | Runtime (s) |
|---|---|---|---|---|---|
| 5000 | 0.972 | 0.973 | 0.972 | 0.971 | 348 |
| 10,000 | 0.973 | 0.974 | 0.973 | 0.972 | 699 |
| 15,000 | 0.973 | 0.974 | 0.973 | 0.972 | 1039 |
Table 7. The average evaluation metrics for different population sizes.

| Population Size | Accuracy | Precision | Recall | F1 Score | Runtime (s) |
|---|---|---|---|---|---|
| 5 | 0.972 | 0.973 | 0.972 | 0.971 | 1005 |
| 10 | 0.973 | 0.974 | 0.973 | 0.972 | 1286 |
| 20 | 0.972 | 0.974 | 0.972 | 0.972 | 1868 |
| 50 | 0.973 | 0.974 | 0.973 | 0.972 | 3551 |
Table 8. Comparison of evaluation metrics and runtime of different algorithms.

| Algorithms | Accuracy | Precision | Recall | F1 Score | Runtime (s) |
|---|---|---|---|---|---|
| DE1-DBN | 0.957 | 0.958 | 0.958 | 0.955 | 1291 |
| DE2-DBN | 0.962 | 0.963 | 0.962 | 0.961 | 1296 |
| DE3-DBN | 0.963 | 0.965 | 0.963 | 0.962 | 1300 |
| DE4-DBN | 0.956 | 0.957 | 0.956 | 0.954 | 1295 |
| DE5-DBN | 0.958 | 0.959 | 0.958 | 0.956 | 1378 |
| DE6-DBN | 0.961 | 0.962 | 0.961 | 0.960 | 1300 |
| DE7-DBN | 0.958 | 0.959 | 0.958 | 0.956 | 1304 |
| PSO-DBN | 0.949 | 0.953 | 0.949 | 0.946 | 1450 |
| GWO-DBN | 0.955 | 0.957 | 0.955 | 0.953 | 1465 |
| MDE-DBN | 0.975 | 0.976 | 0.975 | 0.975 | 1270 |
Table 9. Comparison of evaluation metrics of different algorithms.

| Algorithms | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|
| DBN | 0.841 | 0.841 | 0.841 | 0.840 |
| LR | 0.963 | 0.965 | 0.963 | 0.961 |
| DT | 0.942 | 0.955 | 0.903 | 0.920 |
| RF | 0.950 | 0.956 | 0.951 | 0.949 |
| SVM | 0.959 | 0.962 | 0.959 | 0.958 |
| ANN | 0.946 | 0.958 | 0.910 | 0.926 |
| KNN | 0.938 | 0.943 | 0.938 | 0.934 |
| NB | 0.945 | 0.957 | 0.909 | 0.925 |
| MDE-DBN | 0.975 | 0.976 | 0.975 | 0.975 |
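Recall equals accuracy in nearly every row of Tables 3 through 9, which is the signature of weighted (support-proportional) averaging over the three quality classes. A minimal pure-Python sketch of that metric computation (the averaging choice is our inference from the tables, not stated in this excerpt):

```python
from collections import Counter

def weighted_metrics(y_true, y_pred):
    """Accuracy plus weighted-average precision, recall, and F1 for a
    multi-class evaluation (e.g. excellent / general / unqualified).
    With weighted averaging, recall coincides with accuracy."""
    labels = sorted(set(y_true))
    support = Counter(y_true)
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    prec = rec = f1 = 0.0
    for c in labels:
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)   # predicted-positive count
        true_c = support[c]                    # class support
        p_c = tp / pred_c if pred_c else 0.0
        r_c = tp / true_c if true_c else 0.0
        f_c = 2 * p_c * r_c / (p_c + r_c) if p_c + r_c else 0.0
        w = true_c / n                         # support-proportional weight
        prec += w * p_c
        rec += w * r_c
        f1 += w * f_c
    return accuracy, prec, rec, f1

acc, p, r, f = weighted_metrics([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2])
print(acc, r)  # recall equals accuracy under weighted averaging
```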

Share and Cite

MDPI and ACS Style

Gong, H.; Xu, W.; Chen, C.; Sun, W. Production Quality Evaluation of Electronic Control Modules Based on Deep Belief Network. Mathematics 2024, 12, 3799. https://doi.org/10.3390/math12233799
