#### *5.2. Parameter Setting*

To evaluate the feature extraction performance of the proposed scheme, it was compared with wavelet, Contourlet, DST, GLCM, and other features. The important parameters are listed as follows.

DNST: The shearing parameter was set to [1 2], [0 1 2], etc. Taking [1 2] as an example, the first scale has eight shearlet direction filters and the second scale has 16, producing a total of 24 high-frequency subbands and one low-frequency subband. The number of DNST features is therefore (8 + 16 + 1) × 2 = 50.
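The subband and feature counts above follow a simple pattern. A minimal sketch, assuming each entry *k* of the shearing parameter yields 2^(k+2) directional subbands and that two statistics (e.g., mean and standard deviation) are taken per subband; this reproduces the paper's example but is not necessarily the authors' exact code:

```python
def dnst_feature_count(scale_params):
    """Count DNST subbands and features for a shearing-parameter vector.

    Assumes scale parameter k yields 2**(k + 2) directional subbands at
    that scale, plus one low-frequency subband overall, with two
    statistics taken per subband (an illustrative assumption).
    """
    high = sum(2 ** (k + 2) for k in scale_params)  # directional subbands
    subbands = high + 1                             # plus low-frequency subband
    return subbands, 2 * subbands                   # (subbands, feature count)

# For [1, 2]: 8 + 16 = 24 high-frequency subbands, 25 subbands, 50 features.
```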

GLCM: The gray levels of the original image were compressed to 8, 16, 24, etc., and the distance parameter was set to 1.
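As an illustration of this step, a gray-level co-occurrence matrix for a horizontal offset can be computed directly. This is a single-angle, unnormalized sketch, not the full GLCM feature set used in the experiments:

```python
import numpy as np

def glcm(image, levels=8, distance=1):
    """Unnormalized gray-level co-occurrence matrix for a horizontal offset.

    The image is first quantized to `levels` gray levels (8, 16, 24, ...,
    as in Section 5.2), then pixel pairs (p, p + distance) along each row
    are counted. A full implementation would also use several angles and
    symmetric, normalized counts.
    """
    img = np.asarray(image, dtype=np.float64)
    q = (img * levels / (img.max() + 1)).astype(int)  # quantize gray levels
    m = np.zeros((levels, levels), dtype=np.int64)
    for i, j in zip(q[:, :-distance].ravel(), q[:, distance:].ravel()):
        m[i, j] += 1
    return m
```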

Wavelet: The wavelet type was chosen as "Haar," "db2," etc., and the decomposition level was set to 2, 3, or 4.
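For reference, one level of the 2-D Haar decomposition can be sketched as follows; deeper levels (2, 3, 4) are obtained by recursing on the approximation subband `a`. This is an illustrative implementation, not the toolbox routine used in the experiments:

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar wavelet transform.

    Returns the approximation subband and the horizontal, vertical, and
    diagonal detail subbands of an even-sized matrix `x`.
    """
    tl, tr = x[0::2, 0::2], x[0::2, 1::2]   # top-left / top-right pixels
    bl, br = x[1::2, 0::2], x[1::2, 1::2]   # bottom-left / bottom-right
    a = (tl + tr + bl + br) / 2             # approximation
    h = (tl - tr + bl - br) / 2             # horizontal detail
    v = (tl + tr - bl - br) / 2             # vertical detail
    d = (tl - tr - bl + br) / 2             # diagonal detail
    return a, h, v, d
```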

Contourlet: The Laplacian pyramid filter was chosen as "9-7," and the directional filter as "pkva." The direction parameter was set to [2 3 4], [3 4 5], etc.

DST: The scale filter was chosen as "Symmlet" with four vanishing moments; the scale parameter was set to [2 1 1], [2 1 1 0], etc., and the directional parameters to [2 2 2], [2 2 3 3], etc.

KSR: The kernel type was chosen as "Gaussian," and the kernel parameter was set to 0.001.

We chose the radial basis function (RBF) kernel for the SVM classifier, and the kernel parameter γ was swept from 0 to 4 in steps of 0.01; the other parameters took their default values. The experiments were run on a ThinkPad E440 PC with a 2.29 GHz Intel i7 processor and 8 GB of RAM, using MATLAB (MathWorks).
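The kernel and the parameter sweep can be sketched as follows. Note that γ = 0 degenerates to a constant kernel, so in practice the useful part of the grid starts at 0.01; the function below is an illustration, not the SVM library's internal implementation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF (Gaussian) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Grid of kernel parameters as in Section 5.2: 0 to 4 in steps of 0.01.
gammas = np.linspace(0.0, 4.0, 401)
```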

#### *5.3. Comparison of DNST Feature*

Numerous experiments were carried out for each method, and the best and average accuracies were taken as the objective basis for comparison. The experimental results are shown in Figure 4. The DNST feature achieves the highest classification results: the best accuracy is 89.92% and the average accuracy is 89.36%. The Contourlet and DST schemes obtained the same recognition result. The average accuracy of DNST is 1.34% higher than that of DST and Contourlet, 3.15% higher than that of wavelet, and 5.97% higher than that of GLCM. This is because DNST has excellent spatial localization and directional selectivity, so it captures the defect features of continuous casting slabs more accurately. The classification accuracy of GLCM is the lowest, which indicates that texture information alone is not sufficient to represent the features of continuous casting slabs. Figure 4 also shows that the four multi-scale, multi-directional feature extraction methods are all superior to GLCM. In addition, the difference between the best and average accuracy of the wavelet is 88.31% − 86.21% = 2.1%, which indicates that the wavelet is less robust to parameter changes, whereas for DNST the difference is 89.92% − 89.36% = 0.56%, the lowest value, which shows that DNST is more robust to parameter changes.

**Figure 4.** Classification results of different features.

#### *5.4. Comparison of Feature Combination*

In addition to classification accuracy, other commonly used evaluation metrics include recall, false positive rate (FPR), F-measure, precision, and the area under the receiver operating characteristic (ROC) curve (AUC) [27]. A lower FPR indicates better feature extraction performance, while for the other metrics a higher value is better.
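These metrics can be computed directly from the confusion counts of the binary task. A minimal sketch (AUC is omitted because it requires ranked classifier scores rather than counts):

```python
def binary_metrics(tp, fp, fn, tn):
    """Recall, FPR, precision, F-measure, and accuracy from confusion counts."""
    recall = tp / (tp + fn)            # true positive rate
    fpr = fp / (fp + tn)               # false positive rate (lower is better)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"recall": recall, "FPR": fpr, "precision": precision,
            "F-measure": f_measure, "accuracy": accuracy}
```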

To verify the superiority of the proposed DNST-GLCM-KSR scheme, we compared it with the Contourlet-KLPP scheme of reference [1], the DST-KLPP scheme of reference [2], the Contourlet-GLCM-KLPP scheme of reference [11], and several similar combined-feature schemes. Table 1 lists the classification results for several evaluation metrics. The DNST-GLCM-KSR scheme achieves the lowest FPR and the highest precision, F-measure, AUC, and accuracy, indicating that it obtains the best overall performance of the seven schemes. Moreover, when only DNST features are extracted, reducing the dimensionality with KLPP achieves better metrics than KSR, except for AUC; when DNST-GLCM features are extracted, KSR achieves better metrics than KLPP. This shows that both dimensionality reduction techniques are effective, and which one is better must be determined experimentally: the two techniques share the same principle but differ in how the projection is computed.

**Table 1.** Results of different evaluation metrics.


Taking accuracy as an example, the DNST-GLCM-KSR scheme achieved the highest accuracy of 96.37%, which is 2.82% higher than that of reference [1] and 2.02% higher than those of references [2] and [11], indicating that the proposed scheme is superior to the traditional ones. With the same dimensionality reduction technique, the accuracy of Contourlet-KLPP is 93.55%, of DST-KLPP 94.35%, and of DNST-KLPP 95.16%, which indicates that DNST extracts more discriminative features of continuous casting slabs than Contourlet and DST. These results show that DNST-GLCM-KSR is an excellent feature fusion approach for continuous casting slabs.

#### *5.5. Comparisons of Dimensionality Reduction*

Table 2 lists the results of different DNST feature combinations. The accuracies of the training and test sets are 96.77% and 90.73% for DNST-GLCM, and 93.55% and 89.92% for DNST alone; both results of DNST-GLCM are higher, which indicates that combining the DNST feature with GLCM texture features improves recognition accuracy. In addition, the accuracy of DNST-KSR is 94.35% − 89.92% = 4.43% higher than that of DNST, and the accuracy of DNST-GLCM-KSR is 96.37% − 90.73% = 5.64% higher than that of DNST-GLCM, which shows that KSR effectively removes redundant and interfering features and improves recognition accuracy. At the same time, KSR shortened the classification time from tens of seconds to several seconds. This is because KSR reduces the feature dimensionality to *c* − 1, where *c* is the number of classes; the continuous casting slab samples consist of positive and negative samples, so *c* = 2 and the features are reduced to one dimension. Finally, it should be noted that the number of subbands produced by DNST decomposition differs across the feature combinations that achieve their highest recognition accuracy.
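To illustrate the reduction to *c* − 1 dimensions, the two-class case of kernel spectral regression can be sketched as a single regularized kernel regression against a centered class-indicator response. This is one illustrative reading of KSR under stated assumptions, not the authors' implementation:

```python
import numpy as np

def ksr_fit(K, labels, delta=0.001):
    """Two-class kernel spectral regression sketch.

    With c classes KSR yields c - 1 projection directions; here c = 2, so a
    single regularized kernel regression against a centered class-indicator
    response gives the 1-D embedding. `delta` is a Tikhonov regularizer
    (0.001 matches the kernel parameter scale in Section 5.2, an assumption).
    """
    y = np.where(np.asarray(labels) == labels[0], 1.0, -1.0)
    y -= y.mean()                     # centered indicator response
    alpha = np.linalg.solve(K + delta * np.eye(len(K)), y)
    return alpha                      # embed new samples as K_new @ alpha
```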


**Table 2.** Result of feature combination.

Figure 5 shows the recognition accuracy of the training and test sets with and without KSR. As the SVM kernel parameter γ increases, the accuracy of the DNST-GLCM training set gradually increases and finally approaches 100%, while the accuracy of the test set first increases and then decreases, which indicates that the DNST-GLCM features are sensitive to the SVM kernel parameter; in other words, the features are complex and not conducive to classifier learning. For DNST-GLCM-KSR, the accuracies of the test and training sets are both high, and the curves fluctuate little. These results show that the DNST-GLCM-KSR scheme makes the extracted features more discriminative and easier to learn and classify, with strong robustness to the SVM kernel parameter.

**Figure 5.** The recognition accuracy of training set and test set.
