Article

Using a Time Delay Neural Network Approach to Diagnose the Out-of-Control Signals for a Multivariate Normal Process with Variance Shifts

Department of Statistics and Information Science, Fu Jen Catholic University Xinzhuang Dist., New Taipei City 24205, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 959; https://doi.org/10.3390/math7100959
Submission received: 5 September 2019 / Revised: 6 October 2019 / Accepted: 8 October 2019 / Published: 13 October 2019

Abstract

With the rapid development of advanced sensor technologies, it has become popular to monitor multiple quality variables for a manufacturing process. Consequently, multivariate statistical process control (MSPC) charts have been commonly used for monitoring multivariate processes. The primary function of MSPC charts is to trigger an out-of-control signal when faults occur in a process. However, because two or more quality variables are involved in a multivariate process, it is very difficult to diagnose which quality variable or which combination of quality variables is responsible for the MSPC signal. Although some statistical decomposition methods may provide possible solutions, their mathematical difficulty can confine their application. This study presents a time delay neural network (TDNN) classifier to diagnose the quality variables that cause out-of-control signals for a multivariate normal process (MNP) with variance shifts. To demonstrate the effectiveness of the proposed approach, a series of simulated experiments were conducted. The results were compared with those of artificial neural network (ANN), support vector machine (SVM) and multivariate adaptive regression splines (MARS) classifiers. It was found that the proposed TDNN classifier was able to accurately recognize the contributors of out-of-control signals for MNPs.

1. Introduction

Effective process improvement is always a top concern for industries. Over the last eight decades, extensive studies have reported that process improvements can be greatly achieved by applying the statistical process control (SPC) chart to the monitoring of a process. The main function of the SPC chart is to trigger an out-of-control signal when process disturbances or faults intrude into the underlying process. Since the signal implies the occurrence of an unstable state, the process personnel usually start searching for the root causes of the faults when the signal is given. Accordingly, the process can be brought back into a state of in-control [1].
Nowadays, modern technology has had a huge impact on industry. As advanced sensor technology and automatic data acquisition systems have come to play an important role in industrial settings, univariate monitoring schemes have rapidly expanded into multivariate ones. Accordingly, it is common to simultaneously monitor two or more correlated quality variables of a multivariate process through multivariate SPC (MSPC) charts [2,3].
Typically, based on overall statistics, MSPC charts are effective in detecting multivariate process faults and triggering out-of-control signals. However, the primary difficulty of MSPC charts is that, while they can detect faults, they do not explicitly indicate which quality variable or set of quality variables has caused the out-of-control signal. The process personnel first need to determine the quality variables at fault in order to take remedial actions. Nonetheless, determining the contributor of the signal is complicated in the real world [4]. In fact, the more quality variables a multivariate process includes, the more difficult this determination becomes. As a consequence, the determination of the contributors of an out-of-control signal has become a challenging task for manufacturing processes.
Because of the importance of this issue, many studies have investigated the contributors of an MSPC signal. Graphical techniques have been used to identify fault quality variables when an SPC signal is given. These include polygonal charts [5], line charts [6], multivariate profile charts [7], and boxplot charts [8]. Since those graphical approaches are subjective, statistical decomposition methods have been investigated to interpret the contributors to an SPC signal. Since Hotelling’s T2 control chart is one of the most common MSPC techniques for monitoring multivariate processes [9,10], the well-known T2 decomposition method was developed to reflect the contribution of every single quality variable [11,12].
Following the concept of the T2 decomposition method, a number of studies have proposed different approaches to determine the possible fault quality variables which cause the out-of-control signal [13,14,15,16,17,18,19]. However, the abovementioned methods have not been reported in terms of their percentage of success in classifying the fault quality variables [19,20]. In addition, if the underlying multivariate process is not monitored by a Hotelling’s T2 control chart, the T2 statistics cannot be obtained, and the T2 decomposition method is thus not able to capture the fault variables. Consequently, soft computing classification methods are widely used in practice [21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38].
Among various soft computing classifiers, we have observed that the artificial neural network (ANN), the support vector machine (SVM) and hybrid-based models are widely used to recognize the fault variables which cause out-of-control signals. Specifically, even though numerous studies have addressed the interpretation of out-of-control signals, very little research has discussed the recognition of fault variables through the use of time delay neural networks (TDNNs). A TDNN can be viewed as a special structure of recurrent neural networks [39]. Since they are able to capture the dynamics of a system and to predict its outputs at the current time, TDNNs have typically been reported to be successful in prediction and classification [40,41,42].
Furthermore, while most of the related research has examined the process mean vector shift as a major fault, the present study considers process variance shifts as a process fault. The present study is concerned with the situation whereby a multivariate normal process (MNP) with five or nine quality variables is monitored by the generalized variance chart. The structure of this study is organized as follows: Section 2 addresses the MNP models. The results of the simulated experiments are provided in Section 3, as the diagnostic performance of the proposed TDNN technique was compared with the performance of the ANN, the SVM and multivariate adaptive regression splines (MARS). The final section discusses the research findings and conclusions inferred from this study.

2. The Process and TDNN

In general, there are two categories of SPC chart applications: variable and attribute processes. Variable control charts are used to evaluate variation in a process whereby the outputs can be measured on a continuous scale, such as the length or the weight of the products. For multivariate applications, one of the major representations of the variable processes is the MNP. Accordingly, this study considers the MNP for demonstrating the diagnosis of out-of-control signals.

2.1. MNP

The multivariate normal distribution is a multidimensional generalization of the univariate normal distribution. Typically, the p-variate normal distribution with mean vector μ and covariance matrix Σ is expressed as Np(μ, Σ). The probability density function of a multivariate normal distribution is described as:

N_p(\mu, \Sigma) = \frac{1}{(2\pi)^{p/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (X - \mu)^T \Sigma^{-1} (X - \mu) \right)   (1)

where μ is the mean vector and Σ is the covariance matrix.
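As a quick numerical check, the density above can be evaluated directly with NumPy. The bivariate example values below (unit variances, correlation 0.5) are illustrative only and do not appear in the paper.

```python
import numpy as np

def mvn_pdf(x, mu, sigma):
    """Evaluate the N_p(mu, Sigma) density at the point x."""
    p = len(mu)
    diff = np.asarray(x) - np.asarray(mu)
    # Normalizing constant 1 / ((2*pi)^(p/2) * |Sigma|^(1/2))
    norm_const = 1.0 / ((2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(sigma)))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

# Bivariate illustration: unit variances with correlation 0.5
mu = np.zeros(2)
sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
density_at_mean = mvn_pdf(mu, mu, sigma)   # the peak of the density
```

At the mean the exponential term is 1, so the value reduces to the normalizing constant, which makes the result easy to verify by hand.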
Assume that an MNP with p quality variables is monitored by an |S| control chart. Let
\tilde{X}_i = [X_{i1}, X_{i2}, \ldots, X_{ip}]^T, \quad i = 1, 2, \ldots, n   (2)
be a p × 1 vector denoting the p quality variables of the ith observation from the multivariate normal distribution. For Equation (2), the sample covariance matrix is then expressed as:
S = \frac{1}{n-1} \sum_{i=1}^{n} \left( \tilde{X}_i - \bar{\tilde{X}} \right) \left( \tilde{X}_i - \bar{\tilde{X}} \right)^T   (3)

where \bar{\tilde{X}} = \frac{1}{n} \sum_{i=1}^{n} \tilde{X}_i.
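The sample covariance matrix S above is straightforward to compute; the sketch below uses simulated data (dimensions chosen to match the study's n = 10 samples of p = 5 variables) and cross-checks the result against NumPy's built-in estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 5
X = rng.standard_normal((n, p))           # n observations of p quality variables

xbar = X.mean(axis=0)                     # the mean vector X-tilde-bar
S = (X - xbar).T @ (X - xbar) / (n - 1)   # the sample covariance matrix S
```

`np.cov(X, rowvar=False)` computes the same (n − 1)-denominator estimator and can serve as a cross-check.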
Now, we define Σ0 as the covariance matrix when the process is in-control; Σ0 can be described as:
\Sigma_0 = \begin{bmatrix} \sigma_{1,1} & \sigma_{1,2} & \cdots & \sigma_{1,p} \\ \sigma_{2,1} & \sigma_{2,2} & \cdots & \sigma_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{p,1} & \sigma_{p,2} & \cdots & \sigma_{p,p} \end{bmatrix}_{p \times p}   (4)
Typically, when a variance shift intrudes into a multivariate process, we can use the sample generalized variance |S| control chart to trigger an out-of-control signal. The researchers in [43] provided the upper control limit (UCL) and lower control limit (LCL):
UCL = |\Sigma_0| \left( b_1 + 3\sqrt{b_2} \right), \quad LCL = \max\!\left( 0, |\Sigma_0| \left( b_1 - 3\sqrt{b_2} \right) \right)   (5)

where

b_1 = \frac{1}{(n-1)^p} \prod_{i=1}^{p} (n-i), \quad b_2 = \frac{1}{(n-1)^{2p}} \prod_{i=1}^{p} (n-i) \left[ \prod_{i=1}^{p} (n-i+2) - \prod_{i=1}^{p} (n-i) \right]   (6)
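The control limits above are simple to evaluate numerically. The sketch below computes b1, b2, and the UCL/LCL for an assumed in-control covariance matrix; the identity matrix and p = 2 are illustrative choices, not values from the paper.

```python
import numpy as np

def gv_limits(sigma0, n):
    """Control limits of the generalized variance |S| chart."""
    p = sigma0.shape[0]
    i = np.arange(1, p + 1)
    b1 = np.prod(n - i) / (n - 1) ** p
    b2 = (np.prod(n - i)
          * (np.prod(n - i + 2) - np.prod(n - i))
          / (n - 1) ** (2 * p))
    det0 = np.linalg.det(sigma0)
    ucl = det0 * (b1 + 3 * np.sqrt(b2))
    lcl = max(0.0, det0 * (b1 - 3 * np.sqrt(b2)))   # truncated at zero
    return lcl, ucl

# Illustrative: in-control identity covariance, p = 2 variables, samples of size n = 10
lcl, ucl = gv_limits(np.eye(2), 10)
```

For small p and n, the lower limit is typically truncated at zero, as in this example.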
In addition, we define Σ1 as the covariance matrix when the process is out-of-control [24]:
\Sigma_1 = \begin{bmatrix} \sigma_{1,1} & \sigma_{1,2} & \cdots & \theta\sigma_{1,j} & \sigma_{1,j+1} & \cdots & \sigma_{1,p} \\ \sigma_{2,1} & \sigma_{2,2} & \cdots & \theta\sigma_{2,j} & \sigma_{2,j+1} & \cdots & \sigma_{2,p} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ \theta\sigma_{i,1} & \theta\sigma_{i,2} & \cdots & \theta^2\sigma_{i,j} & \theta\sigma_{i,j+1} & \cdots & \theta\sigma_{i,p} \\ \sigma_{i+1,1} & \sigma_{i+1,2} & \cdots & \theta\sigma_{i+1,j} & \sigma_{i+1,j+1} & \cdots & \sigma_{i+1,p} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ \sigma_{p,1} & \sigma_{p,2} & \cdots & \theta\sigma_{p,j} & \sigma_{p,j+1} & \cdots & \sigma_{p,p} \end{bmatrix}_{p \times p}   (7)
where θ stands for the inflated ratio.
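Equation (7) scales the rows and columns of the variables at fault by θ, so an inflated variable's variance grows by θ² and its covariances with the other variables by θ. A minimal sketch of this construction (the Σ0 values and the zero-based indexing convention are illustrative assumptions):

```python
import numpy as np

def inflate(sigma0, fault_vars, theta):
    """Build Sigma_1 from Sigma_0 by scaling the rows and columns of the
    variables at fault by theta (so their variances scale by theta**2)."""
    d = np.ones(sigma0.shape[0])
    d[list(fault_vars)] = theta          # zero-based indices of fault variables
    return np.outer(d, d) * sigma0       # element (i, j) becomes d_i * d_j * sigma_ij

# MNP5 with unit variances and correlation 0.5; fault F5-1 (first variable)
sigma0 = np.full((5, 5), 0.5) + 0.5 * np.eye(5)
sigma1 = inflate(sigma0, [0], theta=1.7)
```

The same helper reproduces combined faults (e.g., F5-5, all variables at fault) by passing several indices.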
When an out-of-control signal is triggered by the |S| control chart, there is no extra information about which quality variable or set of quality variables is responsible for this signal. Since the quality variables at fault are usually associated with specific root causes that adversely affect the multivariate processes, it is important to diagnose the out-of-control signals and to identify the contributors for the signals.

2.2. Time-Delay Neural Network

A TDNN can be referred to as a feedforward neural network, except that each input weight has a delay element associated with it. Time series data are often used as the input, and the finite responses of the network can be captured. Accordingly, a TDNN can be considered an ANN architecture whose main purpose is to work on sequential data.

In TDNN processing, the TDNN units perceive traits which are independent of time shifts and usually form part of a larger pattern recognition system. A TDNN has multiple layers and sufficient inter-connections between the units in each layer to ensure the ability to learn complex nonlinear decision surfaces. In addition, the actual abstraction learned by the TDNN should be invariant under time translation [40,41,42].
Figure 1 shows the architecture of a TDNN. The structure of the TDNN includes an input layer, one or more hidden layers, and an output layer. Each layer contains one or more nodes, determined through trial and error on the given data, as there is no theoretical basis for choosing them. Here, the present study employed one hidden layer in the TDNN structure. In addition, as shown in Figure 1, the input layer utilized delay components embedded between the input units to attain the time delay. Each node has its own value, and the output results are obtained through the network computation. Under the input–output relationship function of the network, if the output is the next-time prediction of the input x, there must be a certain relationship between the present and the future. This is given by y = x(t + 1) = H[x(t), x(t − 1), …, x(t − p)]. Consequently, the TDNN seeks the input–output relationship function H of the network. This is given by net_j = \sum_{l=0}^{p} W_{lj} x(n - l) + \theta_j and y(n) = \sum_{j} W_{jk} f(net_j), where net_j and y(n) are the functions at the hidden and output layers, respectively; p is the number of tapped delay nodes; W_{lj} is the weight connecting the lth delayed input to the jth neuron in the hidden layer; and θ_j is the bias weight of the jth neuron. The function f represents a nonlinear sigmoid function.
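The forward pass just described can be sketched directly: at each time step the hidden units see a tapped delay line of the input, and the output is a weighted sum of sigmoid activations. The weights below are random and purely illustrative (no training is performed, and the delay/hidden-layer sizes are arbitrary choices, not the paper's settings).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tdnn_forward(x, W_in, theta_h, W_out, p_delay):
    """Forward pass of a single-hidden-layer TDNN over a scalar series x.
    At time t the hidden units see [x(t), x(t-1), ..., x(t-p_delay)]:
      net_j = sum_l W_in[l, j] * x(t - l) + theta_h[j]
      y(t)  = sum_j W_out[j] * sigmoid(net_j)"""
    y = []
    for t in range(p_delay, len(x)):
        window = x[t - p_delay:t + 1][::-1]   # x(t), x(t-1), ..., x(t-p)
        net = window @ W_in + theta_h         # one net_j per hidden unit
        y.append(float(sigmoid(net) @ W_out))
    return np.array(y)

# Illustrative random weights: 2 delay taps (3 tapped inputs), 4 hidden units
rng = np.random.default_rng(1)
p_delay, n_hidden = 2, 4
y = tdnn_forward(rng.standard_normal(20),
                 rng.standard_normal((p_delay + 1, n_hidden)),
                 rng.standard_normal(n_hidden),
                 rng.standard_normal(n_hidden),
                 p_delay)
```

Because the first p_delay time steps cannot fill the delay line, the output series is shorter than the input by p_delay samples.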

3. Experimental Results

A series of computer experiments were performed in order to demonstrate the performance of the proposed TDNN technique. Additionally, the performances of the ANN, SVM and MARS classifiers are discussed in this section.
In this study, we considered two cases of an MNP to be analyzed. While the first case of the process contained five quality variables, the second case involved nine quality variables. For the case of an MNP with five quality variables (denoted as MNP5), there are 2^5 − 1 possible types of faults. It is not feasible to study all the possible data structures, so this study arbitrarily selected ten combinations of faults in the process. For the case of an MNP with nine quality variables (denoted as MNP9), this study also arbitrarily considered ten combinations of faults in the process. Table 1 displays those ten combinations of process faults in an MNP5 and an MNP9.
In Table 1, the meaning of F5-1 is that the first quality variable of an MNP5 was at fault and the last four remaining quality variables (i.e., the second, third, fourth, and fifth variables) had no fault. The meaning of F9-1 is that the first quality variable of an MNP9 was at fault and that the last eight remaining quality variables had no fault. Similarly, F5-5 or F9-9 denotes that the five quality variables of an MNP5 or the nine quality variables of an MNP9 were all at fault, respectively.
In this study, the data vectors for the classifiers were generated by computer simulations. In order to understand the effects of various correlations (denoted as ρ) between any two quality variables, this study arbitrarily set ρ = 0.2, 0.5, and 0.8 to represent low, moderate, and high correlations, respectively. This study also arbitrarily considered the case of θ = 1.7 and the sample size n = 10. An approximate ratio of 7:3 for training and testing data vectors was used for all cases. The four soft computing classifiers—TDNN, ANN, SVM and MARS—were used for the identification of the contributors of the out-of-control signal. For all the classifier models, the input variables (five for an MNP5 and nine for an MNP9) represent the averaged values of each column in the out-of-control covariance matrix, Σ1 (i.e., Equation (7)). There was one output node (Y) for the classifiers. Here, the value of the output node was designed as follows: When we consider the first combination of process faults (i.e., C5-1), the value of Y = 0 indicates the presence of the contributor of F5-1, the value of Y = 1 represents the presence of the contributor of F5-2, and Y = 2 stands for the presence of the contributor of F5-3. An analogous output node design was employed for the other combinations.
Considering the case of C5-1, this study utilized 1500 data vectors in the training phase. Whereas the first 500 data vectors were generated from the process fault of F5-1, the data vectors from 501 to 1000 were generated from the process fault of F5-2, and the data vectors from 1001 to 1500 were generated from the process fault of F5-3. This study used 600 data vectors in the testing phase. The first 200 data vectors were generated from the process fault of F5-1, the data vectors from 201 to 400 were generated from the process fault of F5-2, and the data vectors from 401 to 600 were generated from the process fault of F5-3. Furthermore, all other cases adopted the same data structure design.
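The data generation scheme above can be sketched as follows. This is a reconstruction under stated assumptions: F5-1, F5-2, and F5-3 are taken to be single-variable faults in the first, second, and third variables, and each input vector is formed as the column averages of a sample covariance matrix estimated from n = 10 out-of-control observations; the exact fault composition of C5-1 is given in Table 1, so these specifics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n, theta, rho = 5, 10, 1.7, 0.2
sigma0 = np.full((p, p), rho) + (1 - rho) * np.eye(p)   # unit variances, correlation rho

def make_vectors(fault_vars, n_vectors):
    """Simulate input vectors for one fault type: each vector holds the
    column averages of the sample covariance of n observations drawn
    from N_p(0, Sigma_1)."""
    d = np.ones(p)
    d[list(fault_vars)] = theta
    sigma1 = np.outer(d, d) * sigma0
    out = np.empty((n_vectors, p))
    for k in range(n_vectors):
        sample = rng.multivariate_normal(np.zeros(p), sigma1, size=n)
        out[k] = np.cov(sample, rowvar=False).mean(axis=0)
    return out

# C5-1 training set: 500 vectors each for faults F5-1, F5-2, and F5-3
X_train = np.vstack([make_vectors([0], 500),
                     make_vectors([1], 500),
                     make_vectors([2], 500)])
y_train = np.repeat([0, 1, 2], 500)     # output node values Y = 0, 1, 2
```

A 600-vector test set follows the same pattern with 200 vectors per fault.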
After performing the identification tasks with the TDNN classifier, Table 2 and Table 3 show the identification results for an MNP5 and an MNP9, respectively. The notation {TDni, TDnd, TDnh, TDno} is the parameter setting for the TDNN design. The four parameters represent the number of neurons in the input layer, the number of delay neurons, the number of neurons in the hidden layer, and the number of neurons in the output layer, respectively. For TDNN designs, there is no unique mechanism to determine those parameters. In this study, we used rules of thumb and our experience to determine them. Accordingly, for the TDNN design in this study, TDnd was chosen to range from 1 to TDni, TDnh was chosen from (2, 4, 6, …, 2 × TDni), and TDno = 1. Therefore, for the MNP5 design, TDni = 5, TDnd ranged from 1 to 5, TDnh was chosen from (2, 4, 6, 8, 10), and TDno = 1. For the MNP9 design, TDni = 9, TDnd ranged from 1 to 9, TDnh was chosen from (2, 4, 6, …, 18), and TDno = 1.
In this study, the accurate identification rate (AIR) was employed to measure the classifiers’ identification performance. The AIR is defined as follows:
AIR = \frac{n_a}{N}   (8)
where N is the total number of data vectors used for the identification process and na is the number of data vectors in N where the true contributor is accurately identified.
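The AIR is simply the fraction of correctly identified contributors; a minimal sketch with made-up labels:

```python
import numpy as np

def air(y_true, y_pred):
    """Accurate identification rate: n_a correct identifications out of N."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

rate = air([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 2, 2])   # 5 of 6 identified correctly
```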
In addition, the AIR values obtained through the use of the TDNN, ANN, SVM and MARS classifiers are denoted as AIR-TDNN, AIR-ANN, AIR-SVM and AIR-MARS, respectively.
In Table 2, at the intersection of ρ = 0.2 and C5-1, the meaning of the AIR-TDNN value (i.e., 79.83%) can be described as follows: Suppose an out-of-control signal is triggered for an MNP5 process with a correlation of 0.2 between any two quality variables. When we use the TDNN classifier with the parameter setting of {5, 5, 10, 1}, we have a 79.83% chance of accurately identifying the true contributor (e.g., the first quality variable is at fault) of this signal. The same implication applies to all of the AIR values in Table 2 and Table 3. Observing Table 2, we can clearly notice that higher AIRs were obtained for the cases of ρ = 0.5 and ρ = 0.8.
Additionally, the parameter setting {Ani, Anh, Ano} is widely used for the ANN designs of classification tasks. The three parameters {Ani, Anh, Ano} represent the number of neurons in the input layer, hidden layer, and output layer, respectively. For ANN designs, there is no unique mechanism to determine the number of hidden nodes. While too few hidden nodes restrain the generalization characteristics, a considerable number of hidden nodes could result in overtraining problems. Accordingly, rules of thumb were used in this study. For the MNP5 design, the hidden nodes were chosen to range from (2n − 2) to (2n + 2), where n (i.e., n = 5 here) is the number of input variables. For the MNP9 design, the hidden nodes were chosen to range from (n − 2) to (n + 2), where n is equal to 9. Additionally, this study set the learning rate for all ANN models at the default value (i.e., 0.01) to ensure consistency [44].
Table 4 and Table 5 present the identification results for an MNP5 and an MNP9, respectively, when the ANN classifiers were implemented. Observing Table 4 and Table 5, one can notice that the AIR values were higher for the cases of ρ = 0.5 and ρ = 0.8.
For the SVM classification design, the performance is affected by the values of two parameters, C and γ [45,46]. There are no general rules for the choice of C and γ. The grid search method uses exponentially growing sequences of C and γ to identify good parameters (e.g., C = 2^−5, 2^−3, 2^−1, …, 2^15; γ = 2^−5, 2^−3, 2^−1, …, 2^15). The parameter settings for C and γ that generate the highest AIR are considered ideal. Table 6 and Table 7 demonstrate the results of using the SVM classifier for the cases of an MNP5 and an MNP9. Observing Table 6 and Table 7, we can see that the AIR values were higher for the cases of ρ = 0.5 and ρ = 0.8.
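The exponential grid search described above can be sketched as follows. To keep the example self-contained, the SVM training step is replaced by a hypothetical `score` callable (here a toy function with a known peak); in practice it would train an SVM with the given (C, γ) and return its validation AIR.

```python
import numpy as np
from itertools import product

# Exponentially growing parameter grids: 2^-5, 2^-3, ..., 2^15
C_grid = 2.0 ** np.arange(-5, 16, 2)
gamma_grid = 2.0 ** np.arange(-5, 16, 2)

def grid_search(score, C_grid, gamma_grid):
    """Return the (C, gamma) pair with the highest score, where score is a
    caller-supplied function (train an SVM, return its validation AIR)."""
    return max(product(C_grid, gamma_grid), key=lambda cg: score(*cg))

# Toy stand-in score with a unique peak at C = 2, gamma = 0.5 (no real SVM here)
toy_score = lambda C, g: -((np.log2(C) - 1) ** 2 + (np.log2(g) + 1) ** 2)
best_C, best_gamma = grid_search(toy_score, C_grid, gamma_grid)
```

The exhaustive double loop is affordable here because the grid has only 11 × 11 candidate pairs.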
For the MARS design, this study simply reports the parameter settings as {null}, since there were no specific parameter settings. The results obtained by using the MARS classifier for the cases of an MNP5 and an MNP9 are shown in Table 8 and Table 9. Similar to the results for the other classifiers, we can observe that the AIR values were higher for the cases of ρ = 0.5 and ρ = 0.8.

4. Classification Performance

This study used a TDNN, an ANN, an SVM and MARS to classify the quality variables at fault when an out-of-control signal was triggered in an MNP. Table 10 and Table 11 present the average AIRs of the four classifiers for an MNP5 and an MNP9. Higher average AIR values are associated with better recognition accuracy. As shown in Table 10 and Table 11, the proposed TDNN models outperformed the other three classifiers.
In addition, for the recognition of an MNP5 and an MNP9 by the TDNN classifier, the AIR percentage improvements (AIRPI) over the ANN, SVM, and MARS classifiers are defined as:
AIRPI_{i\_TDNN\_j} = \frac{AIR\_TDNN_i - AIR\_(j,i)}{AIR\_(j,i)} \times 100\%   (9)
where i is 5 or 9; j denotes the ANN, the SVM or MARS; AIR_TDNNi is the AIR from performing the TDNN classifier for an MNPi; and AIR_(j,i) is the AIR from performing the j classifier for an MNPi.
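The percentage improvement above is a simple relative difference; a minimal sketch with made-up AIR values:

```python
def airpi(air_tdnn, air_other):
    """AIR percentage improvement of the TDNN over another classifier."""
    return (air_tdnn - air_other) / air_other * 100.0

# e.g., a TDNN AIR of 0.90 versus a competitor's AIR of 0.75
improvement = airpi(0.90, 0.75)
```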
For the recognition of the quality variables at fault in an MNP5 process, the AIRPI using the TDNN classifier over the ANN classifier was 29.54%. The AIRPIs using the TDNN classifier over the SVM and MARS classifiers were 20.94% and 40.80%, respectively. For the recognition of the quality variables at fault in an MNP9 process, the AIRPIs using the TDNN classifier over the ANN, SVM and MARS classifiers were 15.25%, 7.86%, and 26.49%, respectively. Figure 2 and Figure 3 display the AIRPIs obtained by employing the proposed TDNN classifier over the ANN, SVM and MARS classifiers for an MNP5 and an MNP9. As shown in Figure 2 and Figure 3, considerable accuracy improvements can be achieved by using the proposed TDNN classifier.

5. Conclusions

Monitoring and recognizing the sources of a process fault is important for process improvement. In this study, four soft computing techniques, TDNN, ANN, SVM and MARS, were presented to determine the quality variables at fault when variance shifts occur in an MNP. The proposed TDNN classifier maintained satisfactory performance in recognizing the quality variables at fault for an MNP process. The TDNN benefits from an extra set of tuning parameters (e.g., the delay neurons), and it may be most effective for a smaller number of quality variables (e.g., an MNP5), as can be observed in Figure 2 and Figure 3. Accordingly, the general study of TDNNs is worth further investigation. One important research direction is to improve the robustness of the TDNN classifier with respect to a considerable number of quality variables (e.g., an MNP13 or an MNP15) and to compare its classification results with other methods. In addition, although the cases considered in this study did not cover all possible changes of the covariance matrix, it may be worth knowing what happens when the first and second components are inflated by two different values of the inflated ratio, θ.
In this study, widely used variable multivariate processes (i.e., the MNP) were investigated; extending the approach to attribute multivariate processes (e.g., the multinomial process) would be a valuable contribution for future studies. However, one difficulty that will be encountered in the recognition phase is that the number of categories of the classifiers’ output nodes increases when more categories are involved in an attribute multivariate process. Other soft computing classifiers, such as the extreme learning machine, rough sets, random forests and hybrid modeling techniques [37,38], may be worth investigating to decrease the number of output categories in the future.

Author Contributions

Conceptualization, Y.E.S.; methodology, Y.E.S.; software, S.-C.L.; writing and editing, Y.E.S. and S.-C.L.

Funding

This work is partially supported by the Ministry of Science and Technology of the Republic of China (Taiwan), Grant No. 108-2221-E-030-005.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their careful reading and helpful remarks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reis, M.S.; Gins, G. Industrial process monitoring in the big data/industry 4.0 era: From detection, to diagnosis, to prognosis. Processes 2017, 5, 35. [Google Scholar] [CrossRef]
  2. Hotelling, H. Multivariate quality control, illustrated by the air testing of sample bombsights. In Selected Techniques of Statistical Analysis; Eisenhart, C., Hastay, M.W., Wallis, W.A., Eds.; McGraw-Hill: New York, NY, USA, 1947. [Google Scholar]
  3. Lowry, C.A.; Woodall, W.H.; Champ, C.W.; Rigdon, C.E. A multivariate exponentially weighted moving average control chart. Technometrics 1992, 34, 46–53. [Google Scholar] [CrossRef]
  4. Weese, M.; Martinez, W.; Megahed, F.M.; Jones-Farmer, L.A. Statistical learning methods applied to process monitoring: An overview and perspective. J. Qual. Technol. 2016, 48, 4–24. [Google Scholar] [CrossRef]
  5. Blazek, L.W.; Novic, B.; Scott, M.D. Displaying multivariate data using polyplots. J. Qual. Technol. 1987, 19, 69–74. [Google Scholar] [CrossRef]
  6. Subramanyan, N.; Houshmand, A.A. Simultaneous representation of multivariate and corresponding univariate x-bar charts using a line graph. Qual. Eng. 1995, 7, 681–682. [Google Scholar] [CrossRef]
  7. Fuchs, C.; Benjamin, Y. Multivariate profile charts for statistical process control. Technometrics 1994, 36, 182–195. [Google Scholar] [CrossRef]
  8. Atienza, O.O.; Ching, L.T.; Wah, B.A. Simultaneous monitoring of univariate and multivariate SPC information using boxplots. Int. J. Qual. Sci. 1998, 3, 194–204. [Google Scholar] [CrossRef]
  9. Hotelling, H. Multivariate Quality Control. In Techniques of Statistical Analysis; Eisenhart, C., Hastay, M., Wallism, W.A., Eds.; McGraw-Hill: New York, NY, USA, 1947. [Google Scholar]
  10. Yang, H.-H.; Huang, M.-L.; Yang, S.-W. Integrating auto-associative neural networks with Hotelling T2 control charts for wind turbine fault detection. Energies 2015, 8, 12100–12115. [Google Scholar] [CrossRef]
  11. Mason, R.L.; Tracy, N.D.; Young, J.C. Decomposition of T2 for multivariate control chart interpretation. J. Qual. Technol. 1995, 27, 99–105. [Google Scholar] [CrossRef]
  12. Mason, R.L.; Tracy, N.D.; Young, J.C. A practical approach for interpreting multivariate T2 control chart signals. J. Qual. Technol. 1997, 29, 396–406. [Google Scholar] [CrossRef]
  13. Doganaksoy, N.; Faltin, F.W.; Tucker, W.T. Identification of out of control quality characteristics in a multivariate manufacturing environment. Commun. Stat. Theory Methods 1991, 20, 2775–2790. [Google Scholar] [CrossRef]
  14. Runger, G.C.; Alt, F.B.; Montgomery, D.C. Contributors to a multivariate SPC chart signal. Commun. Stat. Theory Methods 1996, 25, 2203–2213. [Google Scholar] [CrossRef]
  15. Vives-Mestres, M.; Daunis-i-Estadella, P.; Martín-Fernández, J. Out-of-control signals in three-part compositional T2 control chart. Qual. Reliab. Eng. Int. 2014, 30, 337–346. [Google Scholar] [CrossRef]
  16. Vives-Mestres, M.; Daunis-i-Estadella, J.; Martín-Fernández, J.A. Signal interpretation in Hotelling’s T2 control chart for compositional data. IIE Trans. 2016, 48, 661–672. [Google Scholar] [CrossRef]
  17. Kim, J.; Jeong, M.K.; Elsayed, E.A.; Al-Khalifa, K.N.; Hamouda, A.M.S. An adaptive step-down procedure for fault variable identification. Int. J. Prod. Res. 2016, 54, 3187–3200. [Google Scholar] [CrossRef]
  18. Pina-Monarrez, M. Generalization of the Hotelling’s T2 decomposition method to the R-chart. Int. J. Ind. Eng. Theory Appl. Pract. 2018, 25, 200–214. [Google Scholar]
  19. Aparisi, F.; Avendaño, G.; Sanz, J. Techniques to interpret T2 control chart signals. IIE Trans. 2006, 38, 647–657. [Google Scholar] [CrossRef]
  20. Shao, Y.E.; Hsu, B.S. Determining the contributors for a multivariate SPC chart signal using artificial neural networks and support vector machine. Int. J. Innov. Comput. Inf. Control 2009, 5, 4899–4906. [Google Scholar]
  21. Niaki, S.T.A.; Abbasi, B. Fault diagnosis in multivariate control charts using artificial neural networks. Qual. Reliab. Eng. Int. 2005, 21, 825–840. [Google Scholar] [CrossRef]
  22. Guh, R.S. On-line identification and quantification of mean shifts in bivariate processes using a neural network-based approach. Qual. Reliab. Eng. Int. 2007, 23, 367–385. [Google Scholar] [CrossRef]
  23. Hwarng, H.B.; Wang, Y. Shift detection and source identification in multivariate autocorrelated processes. Int. J. Prod. Res. 2010, 48, 835–859. [Google Scholar] [CrossRef]
  24. Cheng, C.S.; Cheng, H.P. Identifying the source of variance shifts in the multivariate process using neural network and support vector machines. Expert Syst. Appl. 2008, 35, 198–206. [Google Scholar] [CrossRef]
  25. Salehi, M.; Kazemzadeh, R.B.; Salmasnia, A. On line detection of mean and variance shift using neural networks and support vector machine in multivariate processes. Appl. Soft Comput. 2012, 12, 2973–2984. [Google Scholar] [CrossRef]
  26. Salehi, M.; Bahreininejad, A.; Nakhai, I. On-line analysis of out-of-control signals in multivariate manufacturing processes using a hybrid learning-based model. Neurocomputing 2011, 74, 2083–2095. [Google Scholar] [CrossRef]
  27. Shao, Y.E.; Hou, C.D. Hybrid artificial neural networks modeling for faults identification of a stochastic multivariate process. Abstr. Appl. Anal. 2013, 2013, 386757. [Google Scholar] [CrossRef]
  28. Shao, Y.E.; Lu, C.J.; Wang, Y.C. A hybrid ICA-SVM approach for determining the fault quality variables in a multivariate process. Math. Probl. Eng. 2012, 2012, 284910. [Google Scholar] [CrossRef]
  29. Shao, Y.E.; Hou, C.D. Fault identification in industrial processes using an integrated approach of neural network and analysis of variance. Math. Probl. Eng. 2013, 2013, 516760. [Google Scholar] [CrossRef]
  30. Alfaro, E.; Alfaro, J.L.; Gámez, M.; García, N. A boosting approach for understanding out-of-control signals in multivariate control charts. Int. J. Prod. Res. 2009, 47, 6821–6834. [Google Scholar] [CrossRef]
  31. Chang, S.I.; AW, C.A. A neural fuzzy control chart for detecting and classifying process mean shifts. Int. J. Prod. Res. 1996, 34, 2265–2278. [Google Scholar] [CrossRef]
32. Cook, D.F.; Zobel, C.W.; Nottingham, Q.J. Utilization of neural networks for the recognition of variance shifts in correlated manufacturing process parameters. Int. J. Prod. Res. 2001, 39, 3881–3887.
33. Guh, R.S. Integrating artificial intelligence into on-line statistical process control. Qual. Reliab. Eng. Int. 2003, 19, 1–20.
34. He, S.G.; He, Z.; Wang, G.A. Online monitoring and fault identification of mean shifts in bivariate processes using decision tree learning techniques. J. Intell. Manuf. 2013, 24, 25–34.
35. He, S.; Wang, G.A.; Zhang, M.; Cook, D.F. Multivariate process monitoring and fault identification using multiple decision tree classifiers. Int. J. Prod. Res. 2013, 51, 3355–3371.
36. Yu, J.B.; Xi, L.F. A neural network ensemble-based model for on-line monitoring and diagnosis of out-of-control signals in multivariate manufacturing processes. Expert Syst. Appl. 2009, 36, 909–921.
37. Yu, J.B.; Xi, L.F.; Zhou, X. Intelligent monitoring and diagnosis of manufacturing processes using an integrated approach of KBANN and GA. Comput. Ind. 2008, 59, 489–501.
38. Bersimis, S.; Sgora, A.; Psarakis, S. Methods for interpreting the out-of-control signal of multivariate control charts: A comparison study. Qual. Reliab. Eng. Int. 2017, 33, 2295–2326.
39. Waibel, A.; Hanazawa, T.; Hinton, G.; Shikano, K.; Lang, K.J. Phoneme recognition using time-delay neural networks. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 328–339.
40. Kelo, S.M.; Dudul, S.V. Short-term Maharashtra state electrical power load prediction with special emphasis on seasonal changes using a novel focused time lagged recurrent neural network based on time delay neural network model. Expert Syst. Appl. 2011, 38, 1554–1564.
41. Khansa, L.; Liginlal, D. Predicting stock market returns from malicious attacks: A comparative analysis of vector autoregression and time-delayed neural networks. Decis. Support Syst. 2011, 51, 745–759.
42. Jha, G.K.; Sinha, K. Time-delay neural networks for time series prediction: An application to the monthly wholesale price of oilseeds in India. Neural Comput. Appl. 2014, 24, 563–571.
43. Alt, F.B. Multivariate quality control. In Encyclopedia of Statistical Sciences; Johnson, N.L., Kotz, S., Eds.; John Wiley & Sons: New York, NY, USA, 1985; Volume 6.
44. Shao, Y.E.; Chiu, C.C. Applying emerging soft computing approaches to control chart pattern recognition for an SPC-EPC process. Neurocomputing 2016, 201, 19–28.
45. Cherkassky, V.; Ma, Y. Practical selection of SVM parameters and noise estimation for SVM regression. Neural Netw. 2004, 17, 113–126.
46. Shao, Y.E.; Chang, P.Y.; Lu, C.J. Applying two-stage neural network based classifiers to the identification of mixture control chart patterns for an SPC-EPC process. Complexity 2017, 2017, 1–10.
Figure 1. The architecture of a time delay neural network (TDNN).
Figure 2. The AIR percentage improvements (AIRPI) obtained by using the TDNN classifier over the ANN, SVM and MARS classifiers for an MNP5.
Figure 3. The AIRPI obtained by using the TDNN classifier over the ANN, SVM and MARS classifiers for an MNP9.
Table 1. Ten combinations of faults for a multivariate normal process with five quality variables (MNP5) and an MNP with nine quality variables (MNP9).
| MNP5 | MNP9 |
| --- | --- |
| (1) C5-1 = {F5-1, F5-2, F5-3} | (1) C9-1 = {F9-1, F9-2, F9-3} |
| (2) C5-2 = {F5-1, F5-2, F5-4} | (2) C9-2 = {F9-1, F9-2, F9-9} |
| (3) C5-3 = {F5-1, F5-2, F5-5} | (3) C9-3 = {F9-1, F9-5, F9-9} |
| (4) C5-4 = {F5-1, F5-3, F5-4} | (4) C9-4 = {F9-1, F9-8, F9-9} |
| (5) C5-5 = {F5-1, F5-3, F5-5} | (5) C9-5 = {F9-2, F9-4, F9-8} |
| (6) C5-6 = {F5-1, F5-4, F5-5} | (6) C9-6 = {F9-2, F9-6, F9-8} |
| (7) C5-7 = {F5-2, F5-3, F5-4} | (7) C9-7 = {F9-4, F9-5, F9-6} |
| (8) C5-8 = {F5-2, F5-3, F5-5} | (8) C9-8 = {F9-4, F9-5, F9-9} |
| (9) C5-9 = {F5-2, F5-4, F5-5} | (9) C9-9 = {F9-5, F9-7, F9-9} |
| (10) C5-10 = {F5-3, F5-4, F5-5} | (10) C9-10 = {F9-7, F9-8, F9-9} |
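For MNP5, the ten combinations in Table 1 are exactly the C(5, 3) = 10 possible three-variable subsets of the five quality variables, whereas for MNP9 the ten listed combinations are a selection from the C(9, 3) = 84 possible subsets. A minimal sketch (illustrative only; the F5-i labels simply follow the table's naming) enumerates the MNP5 combinations:

```python
from itertools import combinations
from math import comb

# Quality variables of the five-variable process (MNP5)
variables = [f"F5-{i}" for i in range(1, 6)]

# All three-variable fault combinations, as listed in Table 1
fault_combinations = list(combinations(variables, 3))

print(len(fault_combinations))   # 10 combinations for MNP5
print(fault_combinations[0])     # ('F5-1', 'F5-2', 'F5-3')

# For MNP9 there are C(9, 3) = 84 possible subsets;
# Table 1 lists a selection of ten of them.
print(comb(9, 3))                # 84
```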
Table 2. TDNN identification results, accurate identification rate (AIR)-TDNN {TDni, TDnd, TDnh, TDno}, for ten combinations of faults of a multivariate normal process (MNP5).
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C5-1 | 79.83% {5, 5, 10, 1} | 98.99% {5, 5, 10, 1} | 75.97% {5, 5, 10, 1} |
| C5-2 | 86.39% {5, 5, 8, 1} | 98.99% {5, 5, 8, 1} | 75.71% {5, 3, 8, 1} |
| C5-3 | 82.69% {5, 5, 6, 1} | 98.99% {5, 5, 10, 1} | 71.52% {5, 3, 8, 1} |
| C5-4 | 82.69% {5, 5, 6, 1} | 98.32% {5, 5, 8, 1} | 74.20% {5, 3, 10, 1} |
| C5-5 | 85.38% {5, 5, 10, 1} | 99.16% {5, 5, 8, 1} | 94.37% {5, 3, 6, 1} |
| C5-6 | 83.36% {5, 5, 10, 1} | 99.16% {5, 5, 8, 1} | 75.21% {5, 3, 6, 1} |
| C5-7 | 78.99% {5, 5, 10, 1} | 98.99% {5, 5, 8, 1} | 93.63% {5, 3, 10, 1} |
| C5-8 | 85.71% {5, 5, 8, 1} | 99.33% {5, 5, 4, 1} | 97.15% {5, 3, 6, 1} |
| C5-9 | 85.38% {5, 5, 8, 1} | 99.33% {5, 5, 10, 1} | 94.97% {5, 3, 10, 1} |
| C5-10 | 75.97% {5, 5, 10, 1} | 97.98% {5, 5, 10, 1} | 89.11% {5, 3, 10, 1} |
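The tuples reported in Tables 2 and 3 describe the selected network topology. Reading {TDni, TDnd, TDnh, TDno} as the numbers of input variables, time-delay steps, hidden nodes and output nodes (our interpretation; the notation is not defined in this excerpt), a minimal NumPy sketch of a tapped-delay-line TDNN forward pass looks as follows:

```python
import numpy as np

def tdnn_forward(window, W_hidden, b_hidden, W_out, b_out):
    """One forward pass of a simple time delay neural network.

    window: array of shape (n_delay, n_input) -- the current observation
            plus the (n_delay - 1) previous ones, flattened into one input.
    """
    x = window.reshape(-1)                 # concatenate the delayed inputs
    h = np.tanh(W_hidden @ x + b_hidden)   # hidden layer
    z = W_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-z))        # sigmoid output in (0, 1)

# Topology {5, 5, 10, 1}: 5 inputs, 5 delay steps, 10 hidden nodes, 1 output
rng = np.random.default_rng(0)
n_input, n_delay, n_hidden, n_output = 5, 5, 10, 1
W1 = rng.normal(size=(n_hidden, n_input * n_delay))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_output, n_hidden))
b2 = np.zeros(n_output)

window = rng.normal(size=(n_delay, n_input))   # the last 5 observations
y = tdnn_forward(window, W1, b1, W2, b2)
print(y.shape)   # (1,)
```

The weights here are random placeholders; in the study they would be fitted by training on simulated in-control and out-of-control samples.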
Table 3. TDNN identification results, AIR-TDNN {TDni, TDnd, TDnh, TDno}, for ten combinations of faults of an MNP9.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C9-1 | 73.78% {9, 5, 10, 1} | 79.58% {9, 5, 10, 1} | 97.32% {9, 3, 4, 1} |
| C9-2 | 84.20% {9, 5, 8, 1} | 89.92% {9, 5, 10, 1} | 95.99% {9, 2, 8, 1} |
| C9-3 | 89.09% {9, 4, 10, 1} | 90.79% {9, 3, 8, 1} | 96.49% {9, 2, 10, 1} |
| C9-4 | 74.29% {9, 5, 10, 1} | 88.24% {9, 5, 6, 1} | 92.98% {9, 2, 10, 1} |
| C9-5 | 81.34% {9, 5, 6, 1} | 94.30% {9, 4, 6, 1} | 97.83% {9, 2, 6, 1} |
| C9-6 | 79.66% {9, 5, 10, 1} | 91.29% {9, 3, 10, 1} | 97.66% {9, 2, 4, 1} |
| C9-7 | 58.32% {9, 5, 10, 1} | 87.23% {9, 5, 6, 1} | 94.97% {9, 3, 6, 1} |
| C9-8 | 78.15% {9, 5, 8, 1} | 91.60% {9, 5, 6, 1} | 97.15% {9, 3, 4, 1} |
| C9-9 | 74.79% {9, 5, 10, 1} | 92.79% {9, 4, 8, 1} | 95.82% {9, 2, 10, 1} |
| C9-10 | 54.62% {9, 5, 10, 1} | 87.73% {9, 5, 8, 1} | 95.81% {9, 3, 4, 1} |
Table 4. Artificial neural network (ANN) identification results, AIR-ANN {Ani, Anh, Ano}, for ten combinations of faults of an MNP5.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C5-1 | 61.33% {5, 12, 1} | 67.67% {5, 8, 1} | 68.67% {5, 8, 1} |
| C5-2 | 65.00% {5, 8, 1} | 69.67% {5, 11, 1} | 66.00% {5, 9, 1} |
| C5-3 | 65.50% {5, 9, 1} | 69.17% {5, 8, 1} | 58.00% {5, 10, 1} |
| C5-4 | 65.17% {5, 8, 1} | 69.33% {5, 8, 1} | 68.17% {5, 11, 1} |
| C5-5 | 67.00% {5, 8, 1} | 70.83% {5, 11, 1} | 64.00% {5, 10, 1} |
| C5-6 | 65.33% {5, 9, 1} | 73.17% {5, 12, 1} | 64.33% {5, 9, 1} |
| C5-7 | 60.67% {5, 12, 1} | 69.33% {5, 12, 1} | 79.33% {5, 11, 1} |
| C5-8 | 63.17% {5, 12, 1} | 71.00% {5, 12, 1} | 82.17% {5, 10, 1} |
| C5-9 | 62.67% {5, 12, 1} | 68.83% {5, 8, 1} | 84.33% {5, 11, 1} |
| C5-10 | 55.67% {5, 11, 1} | 63.67% {5, 8, 1} | 78.50% {5, 8, 1} |
Table 5. ANN identification results, AIR-ANN {Ani, Anh, Ano}, for ten combinations of faults of an MNP9.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C9-1 | 64.17% {9, 8, 1} | 70.67% {9, 11, 1} | 84.83% {9, 10, 1} |
| C9-2 | 62.67% {9, 7, 1} | 72.67% {9, 10, 1} | 81.67% {9, 9, 1} |
| C9-3 | 71.33% {9, 8, 1} | 81.67% {9, 11, 1} | 88.83% {9, 11, 1} |
| C9-4 | 66.33% {9, 9, 1} | 74.17% {9, 10, 1} | 86.00% {9, 7, 1} |
| C9-5 | 69.17% {9, 11, 1} | 77.67% {9, 11, 1} | 90.83% {9, 7, 1} |
| C9-6 | 66.67% {9, 7, 1} | 77.83% {9, 11, 1} | 91.67% {9, 8, 1} |
| C9-7 | 58.33% {9, 10, 1} | 66.17% {9, 8, 1} | 82.50% {9, 8, 1} |
| C9-8 | 68.67% {9, 11, 1} | 76.83% {9, 9, 1} | 90.17% {9, 9, 1} |
| C9-9 | 64.33% {9, 8, 1} | 76.67% {9, 9, 1} | 87.33% {9, 10, 1} |
| C9-10 | 58.33% {9, 11, 1} | 68.17% {9, 10, 1} | 82.83% {9, 11, 1} |
Table 6. Support vector machine (SVM) identification results, AIR-SVM {C, γ}, for ten combinations of faults of an MNP5.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C5-1 | 67.17% {2^−3, 2^1} | 73.50% {2^−3, 2^0} | 67.17% {2^0, 2^1} |
| C5-2 | 70.33% {2^−3, 2^0} | 79.33% {2^−1, 2^0} | 66.83% {2^0, 2^2} |
| C5-3 | 71.17% {2^−3, 2^1} | 77.50% {2^−1, 2^−1} | 68.83% {2^0, 2^4} |
| C5-4 | 68.17% {2^−3, 2^0} | 76.67% {2^−1, 2^−1} | 65.00% {2^0, 2^2} |
| C5-5 | 71.50% {2^−3, 2^1} | 78.83% {2^−1, 2^0} | 68.83% {2^1, 2^1} |
| C5-6 | 69.50% {2^−1, 2^−2} | 78.00% {2^−1, 2^0} | 61.83% {2^1, 2^0} |
| C5-7 | 64.33% {2^−3, 2^1} | 74.83% {2^−2, 2^1} | 86.83% {2^0, 2^0} |
| C5-8 | 67.67% {2^−3, 2^2} | 77.50% {2^−1, 2^0} | 89.33% {2^0, 2^2} |
| C5-9 | 65.83% {2^−3, 2^0} | 76.83% {2^−1, 2^0} | 89.50% {2^0, 2^2} |
| C5-10 | 60.83% {2^−2, 2^−3} | 73.33% {2^−3, 2^4} | 85.50% {2^0, 2^1} |
Table 7. SVM identification results, AIR-SVM {C, γ}, for ten combinations of faults of an MNP9.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C9-1 | 66.50% {2^−5, 2^−3} | 73.17% {2^−3, 2^3} | 88.17% {2^−3, 2^4} |
| C9-2 | 78.33% {2^−2, 2^−1} | 82.67% {2^−1, 2^−1} | 91.83% {2^0, 2^1} |
| C9-3 | 82.67% {2^−3, 2^1} | 81.33% {2^−2, 2^0} | 95.33% {2^0, 2^1} |
| C9-4 | 69.05% {2^−2, 2^−3} | 77.17% {2^−2, 2^1} | 88.83% {2^0, 2^1} |
| C9-5 | 76.83% {2^−2, 2^1} | 83.50% {2^−2, 2^0} | 95.00% {2^−1, 2^2} |
| C9-6 | 76.67% {2^−3, 2^1} | 84.00% {2^−2, 2^0} | 96.67% {2^−1, 2^2} |
| C9-7 | 57.33% {2^−5, 2^−3} | 72.67% {2^−3, 2^2} | 87.67% {2^−3, 2^5} |
| C9-8 | 72.67% {2^−4, 2^−3} | 82.50% {2^−2, 2^0} | 93.17% {2^−1, 2^2} |
| C9-9 | 69.13% {2^−3, 2^1} | 81.67% {2^−4, 2^4} | 93.67% {2^−2, 2^4} |
| C9-10 | 59.83% {2^−4, 2^−3} | 70.50% {2^−4, 2^4} | 85.33% {2^−2, 2^4} |
Table 8. Multivariate adaptive regression splines (MARS) identification results, AIR-MARS {null}, for ten combinations of faults of an MNP5.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C5-1 | 56.67% | 62.33% | 61.83% |
| C5-2 | 59.67% | 62.33% | 58.00% |
| C5-3 | 61.67% | 65.67% | 53.50% |
| C5-4 | 58.83% | 62.33% | 58.33% |
| C5-5 | 60.50% | 67.00% | 56.67% |
| C5-6 | 58.17% | 67.67% | 56.00% |
| C5-7 | 58.33% | 64.17% | 73.50% |
| C5-8 | 58.67% | 65.50% | 76.83% |
| C5-9 | 57.17% | 66.67% | 76.83% |
| C5-10 | 55.50% | 63.67% | 70.67% |
Table 9. MARS identification results, AIR-MARS {null}, for ten combinations of faults of an MNP9.
| Types of Combination | ρ = 0.2 | ρ = 0.5 | ρ = 0.8 |
| --- | --- | --- | --- |
| C9-1 | 62.00% | 62.50% | 78.00% |
| C9-2 | 63.50% | 67.67% | 76.50% |
| C9-3 | 66.17% | 73.00% | 79.00% |
| C9-4 | 62.17% | 68.17% | 76.00% |
| C9-5 | 59.67% | 68.67% | 83.50% |
| C9-6 | 60.17% | 67.67% | 83.00% |
| C9-7 | 54.17% | 60.67% | 74.17% |
| C9-8 | 62.00% | 68.50% | 83.33% |
| C9-9 | 59.67% | 65.00% | 81.83% |
| C9-10 | 54.67% | 61.33% | 75.67% |
Table 10. The average AIRs of the four classifiers for an MNP5.
| Correlation | TDNN | ANN | SVM | MARS |
| --- | --- | --- | --- | --- |
| ρ = 0.2 | 82.86% | 63.15% | 67.65% | 58.52% |
| ρ = 0.5 | 98.92% | 69.27% | 76.63% | 64.73% |
| ρ = 0.8 | 82.18% | 71.35% | 73.97% | 64.22% |
Table 11. The average AIRs of the four classifiers for an MNP9.
| Correlation | TDNN | ANN | SVM | MARS |
| --- | --- | --- | --- | --- |
| ρ = 0.2 | 74.82% | 65.00% | 70.90% | 60.42% |
| ρ = 0.5 | 89.35% | 74.25% | 78.92% | 66.32% |
| ρ = 0.8 | 96.20% | 86.67% | 91.57% | 79.10% |
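The average AIRs in Tables 10 and 11 are the inputs to the AIRPI comparisons of Figures 2 and 3. Assuming AIRPI is the relative gain (AIR_TDNN − AIR_competitor) / AIR_competitor × 100 (our reading; the formula is not reproduced in this excerpt), it can be computed from the Table 10 values as follows:

```python
def airpi(air_tdnn, air_other):
    """Percentage improvement of the TDNN AIR over a competing classifier,
    assuming AIRPI = (AIR_TDNN - AIR_other) / AIR_other * 100."""
    return (air_tdnn - air_other) / air_other * 100.0

# Average AIRs for MNP5 (Table 10), in percent
table10 = {
    0.2: {"TDNN": 82.86, "ANN": 63.15, "SVM": 67.65, "MARS": 58.52},
    0.5: {"TDNN": 98.92, "ANN": 69.27, "SVM": 76.63, "MARS": 64.73},
    0.8: {"TDNN": 82.18, "ANN": 71.35, "SVM": 73.97, "MARS": 64.22},
}

for rho, airs in table10.items():
    for clf in ("ANN", "SVM", "MARS"):
        gain = airpi(airs["TDNN"], airs[clf])
        print(f"rho = {rho}: TDNN over {clf}: {gain:.2f}%")
```

For example, at ρ = 0.2 the TDNN's 82.86% average AIR is roughly a 31% relative improvement over the ANN's 63.15%.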

Shao, Y.E.; Lin, S.-C. Using a Time Delay Neural Network Approach to Diagnose the Out-of-Control Signals for a Multivariate Normal Process with Variance Shifts. Mathematics 2019, 7, 959. https://doi.org/10.3390/math7100959