#### • **Dataset Scaling**

To scale the dataset, the upper and lower boundaries of the input data were first specified from the maximum and minimum values of the input parameters. The data were then mapped into a range appropriate to the transfer functions used. All of these scaling operations were performed with a program written in Matlab. In Equation (7), *A* is the original value, *Ascale* is the normalized value, *Amin* is the minimum observable value, and *Amax* is the maximum observable value; min and max denote the lower and upper bounds of the target scaling interval. *Amin* and *Amax* may be estimated depending on the nature of the data.

$$A_{\text{scale}} = \min + \frac{\max - \min}{A_{\max} - A_{\min}}\,\left(A - A_{\min}\right). \tag{7}$$
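The authors state that this scaling was implemented in Matlab. As a minimal illustrative sketch (the function name `scale_to_range` and the example values are assumptions, not the original code), Equation (7) can be written as:

```python
import numpy as np

def scale_to_range(A, A_min, A_max, lo, hi):
    """Linearly map values from [A_min, A_max] into [lo, hi], as in Equation (7)."""
    A = np.asarray(A, dtype=float)
    return lo + (hi - lo) * (A - A_min) / (A_max - A_min)

# Example: map observations from the assumed input range [3.4, 7.4]
# into the [-1, +1] interval used for the MLP inputs.
x_scaled = scale_to_range([5.1, 5.4, 5.7], A_min=3.4, A_max=7.4, lo=-1.0, hi=1.0)
print(x_scaled)  # [-0.15  0.    0.15]
```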

In the current application, the intervals used to scale the MLP inputs were set from the maximum and minimum ranges of the unnatural-pattern parameters (±3σ). Given the mean (μ = 5.4) and standard deviation (σ = 0.1), the ±3σ interval was [5.1, 5.7], and the inputs were scaled over the wider interval [3.4, 7.4]. For the LVQ network, the training and testing data were scaled to the range [−5, +5]. In this study, all input and output data of the MLP networks were scaled to the interval [−1, +1]; however, before the output values were scaled to [−1, +1], each parameter was scaled to its own maximum and minimum values. The scaling values corresponding to the outputs of the MLPs are given in Table 6. The maximum cumulative error (MCE) of the LVQ network was 0.047 for training (188 of 4000 training vectors) and 0.0525 for testing. Table 8 shows the MCE and training iterations of each MLP network.
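For clarity, the reported MCE values follow directly from the misclassification counts stated in the text:

$$\text{MCE}_{\text{train}} = \frac{188}{4000} = 0.047, \qquad \text{MCE}_{\text{test}} = \frac{21}{400} = 0.0525.$$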

**Table 7.** Module I, evaluation results.


#### 4.1.4. Neural Network Test

After the network was trained with the training dataset, it was evaluated with the test dataset. In the training phase, network performance was improved by minimizing the error between the actual and target outputs. In the test phase, only the input vectors were presented to the network, and the network was validated by comparing its predicted responses with the corresponding target outputs.

#### • **Module I, Evaluation of LVQ Network**

For each input vector, the LVQ network decides on the state of the production process, so there is a chance of decision errors. If the network incorrectly recognizes natural variation of the process as abnormal, it commits a type I error; if it fails to recognize an abnormal pattern in the process, a type II error takes place. An incorrect-identification error occurs when random deviations cause the basic patterns, in the early stages of their formation, to show similar behavioral characteristics; the same applies to concurrent patterns. As each pattern warns of a particular disturbance in the process, incorrect pattern recognition has different costs. Moreover, if a basic abnormal pattern is identified as a concurrent pattern that comprises that basic pattern, it is considered an indirect identification. On the other hand, if only one of the unnatural patterns is identified during the simultaneous occurrence of two abnormal patterns, the identification is considered incomplete. The performance of Module I was measured according to these definitions using 400 test vectors, each representing 25 samples of the plaster production process and one of the eight patterns identified by the neural network. We applied each sample as input to the network, compared the network response with the target response, and calculated the network error rate. Table 7 presents the merged results for the 400 test vectors. As can be seen in the table, the maximum LVQ network error in pattern recognition was 0.0525 (21 of 400 test vectors), which demonstrates that the proposed model was successful and effective, given the variety of patterns trained in the identification problem.
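As a minimal sketch of this error-rate calculation (the array names are illustrative; the predicted and target pattern labels of the 400 test vectors are assumed to be available):

```python
import numpy as np

def recognition_error_rate(predicted, target):
    """Fraction of test vectors whose pattern was not identified correctly."""
    predicted, target = np.asarray(predicted), np.asarray(target)
    return np.sum(predicted != target) / target.size

# With 21 misclassified vectors out of 400, the rate is 21 / 400 = 0.0525.
```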

#### • **Module II, Evaluation of the MLP Networks**

One of the important issues in neural network training is overfitting to the training data. In other words, the network learns the training data very well, even memorizing the noise (disturbances) in the data, but then has serious problems recognizing and generalizing to new data [26]. To address this, training is stopped when the error on the test data starts to increase while the error on the training data remains constant or continues to decrease, and the parameters with the minimum test-data error are taken as final. The performance of the MLP networks in Module II was examined with numerous examples, and the results were satisfactory. As seen in Table 8, the calculated cumulative error of each MLP network was less than 0.02, which indicates that Module II was successful in identifying the parameters.
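A generic sketch of this stopping rule is given below; it is not the authors' Matlab routine, and the `patience` window is an added assumption (the text describes stopping as soon as the test error starts to increase, i.e., patience of 1):

```python
def train_with_early_stopping(step, test_error, max_epochs=1000, patience=1):
    """Generic early stopping.

    `step()` runs one training epoch and returns the current network parameters;
    `test_error(params)` returns the error on the test data. Training stops once
    the test error has not improved for `patience` consecutive epochs, and the
    parameters with the minimum test error are returned.
    """
    best_err, best_params, stale = float("inf"), None, 0
    for _ in range(max_epochs):
        params = step()
        err = test_error(params)
        if err < best_err:
            best_err, best_params, stale = err, params, 0
        else:
            stale += 1
            if stale >= patience:
                break  # test error no longer improving: stop training
    return best_params, best_err
```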


**Table 8.** Module II, evaluation results.

## *4.2. Expert Systems*

In designing the general framework for the proposed expert system, a rule-based approach was used. The ES assists quality control engineers, and it can also be used for training operators. Accordingly, the proposed system runs in three modes: a tutorial mode that offers explanation and training if requested by the user, a status mode that draws conclusions from the evidence and responses provided by the user, and a diagnosis mode that performs inference and reasoning with the rules in the knowledge base.
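As an illustrative sketch only (the single rule, its wording, and all names below are hypothetical placeholders, not the system's actual knowledge base), the three modes could be organized around a shared rule list as follows:

```python
# Hypothetical rule structure; the content is a placeholder, not the actual knowledge base.
RULES = [
    {
        "if": {"pattern": "trend"},
        "then": "inspect the process input suspected of drifting",
        "explain": "A sustained trend in the control chart usually points to a gradually drifting process input.",
    },
]

def tutorial_mode(rule):
    # Offer explanation and training text when requested by the user.
    return rule["explain"]

def status_mode(evidence):
    # Conclude which rules apply from the evidence and responses provided by the user.
    return [r for r in RULES if all(evidence.get(k) == v for k, v in r["if"].items())]

def diagnosis_mode(evidence):
    # Reason with the matched rules in the knowledge base and return recommendations.
    return [r["then"] for r in status_mode(evidence)]
```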
