*3.6. Determination of the Optimal Feature-Level Fusion Model*

So far, the above results were based on two separate hyperspectral systems, whereas a typical hyperspectral imaging system rarely covers the full 399–1701 nm wavelength range. Generally, more detailed and comprehensive feature information can be acquired from a wider spectral range, which motivated integrating the spectral and corresponding texture parameter data from the Vis-SWNIR and LWNIR systems. Because the feature-level fusion model built with the characteristic variables selected by VCPA obtained the most reliable classification result, these variables (9 spectral features and 12 energy features for the Vis-SWNIR region, and 12 spectral features and 12 energy features for the LWNIR region) were integrated to build the classification model. The overall prediction accuracy was 96.11% for the calibration set and 95% for the prediction set, which was higher than that of the pixel-level fusion model and the feature-fusion models of the individual sensors. Although the number of variables used for modeling increased, it was still far lower than the number of variables in the full-wavelength data. Similar results were obtained for the detection of internal bruising in blueberry by combining two hyperspectral systems with feature-fusion strategies [56].

Figure 8 shows the overall prediction results for maize with different moldy levels. Except for the moderate level, all moldy maize groups reached a high accuracy of more than 95%. In particular, all healthy maize was correctly classified. Some moderate-level samples were misclassified as mild or severe, resulting in a classification accuracy of only 90% for the moderate level, which agreed with the result of Yao et al. [57]. Moldy maize at the moderate level was difficult to identify accurately, which may be caused by the reduced variation between adjacent moldy levels.
This phenomenon could also be observed in the determination of CAT activity: as the moldy level worsened, the increase in CAT activity between adjacent categories diminished. Nevertheless, it is worth emphasizing that none of the moldy samples were misclassified as healthy, illustrating that the classification model had practical value and objectivity.
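The feature-level fusion step described above amounts to concatenating the VCPA-selected variables from both systems into a single vector per sample before modeling. The following Python sketch illustrates this with NumPy and scikit-learn; the random arrays, sample count, and SVM classifier are hypothetical placeholders, not the paper's actual data or model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical arrays standing in for the VCPA-selected features of each
# sample: 9 spectral + 12 energy (texture) features from the Vis-SWNIR
# system, and 12 spectral + 12 energy features from the LWNIR system.
rng = np.random.default_rng(0)
n_samples = 180
vis_spectral = rng.normal(size=(n_samples, 9))
vis_energy = rng.normal(size=(n_samples, 12))
lw_spectral = rng.normal(size=(n_samples, 12))
lw_energy = rng.normal(size=(n_samples, 12))

# Feature-level fusion: concatenate the selected variables from both
# systems into a single 45-dimensional feature vector per sample.
fused = np.hstack([vis_spectral, vis_energy, lw_spectral, lw_energy])
assert fused.shape == (n_samples, 45)

# Labels for the four moldy levels (healthy, mild, moderate, severe).
labels = rng.integers(0, 4, size=n_samples)

# Fit a standard classifier on the fused features; scaling first keeps
# the spectral and texture variables on comparable ranges.
model = make_pipeline(StandardScaler(), SVC())
model.fit(fused, labels)
```

The key point is that fusion happens at the feature level (one concatenated vector per sample) rather than at the pixel level, so only the 45 selected variables enter the model instead of the full-wavelength data.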

**Figure 8.** Confusion matrix of overall prediction results for all samples.
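The per-class accuracies discussed for Figure 8 can be read directly from a confusion matrix: the diagonal counts divided by each row total give the per-class accuracy, and the trace over the grand total gives the overall accuracy. A minimal Python sketch follows; the counts are illustrative placeholders, not the paper's data, chosen only to mimic the reported pattern (healthy perfectly classified, moderate at 90%).

```python
import numpy as np

# Illustrative 4x4 confusion matrix (rows = true class, columns =
# predicted class) for the four moldy levels. These counts are NOT the
# paper's data; they only reproduce the reported qualitative pattern.
cm = np.array([
    [45,  0,  0,  0],   # healthy: all correctly classified
    [ 0, 44,  1,  0],   # mild
    [ 0,  2, 36,  2],   # moderate: 36/40 = 90%, confused with mild/severe
    [ 0,  0,  1, 44],   # severe
])
classes = ["healthy", "mild", "moderate", "severe"]

# Per-class accuracy: diagonal (correct) counts over the row totals.
per_class = np.diag(cm) / cm.sum(axis=1)
for name, acc in zip(classes, per_class):
    print(f"{name}: {acc:.1%}")

# Overall accuracy: trace over the total number of samples.
overall = np.trace(cm) / cm.sum()
```

Note that the first column below the diagonal is all zeros, which is the matrix form of the observation that no moldy sample was misclassified as healthy.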

Compared with this work, various non-destructive testing techniques for identifying grain mold have been studied extensively. Single technologies, such as the electronic nose [13], machine vision [15], Vis-SWNIR hyperspectral systems [22,23], and LWNIR hyperspectral systems [20,21], have been used to monitor the health condition of maize during storage, and these studies obtained satisfactory results. However, because the growth and multiplication of mold change both the internal quality and the external characteristics of maize during the moldy process, the strategy of using a single technology to evaluate maize quality is limited. In our study, the spectral and texture parameter data were extracted from the collected Vis-SWNIR and LWNIR hyperspectral images. The data-fusion strategy significantly enriched the available information, which was helpful for building a robust classification model. Therefore, it can be concluded that a feature-level fusion model based on the spectral and texture information of two hyperspectral systems can improve the classification accuracy of maize with different moldy levels.
