*3.4. Pixel-Level Fusion Based on Full Wavelengths Spectra and Texture Data*

Spectra and texture data carried the component content and component distribution information of the target sample, respectively. To develop a higher-accuracy classification model, four texture parameters, namely contrast, correlation, energy, and homogeneity, were extracted and fused with the spectral data at the pixel level to create a new fusion matrix. The optimal combination of spectra and texture was then determined by establishing classification models based on the fused data.
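As an illustration of this procedure, the sketch below computes the four grey-level co-occurrence matrix (GLCM) descriptors named above from a greyscale band image and concatenates them with a sample's spectrum. This is a minimal NumPy implementation assuming a horizontal pixel offset of 1 and a per-sample fusion vector; the function names, the number of grey levels, and the offset are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Compute a grey-level co-occurrence matrix (horizontal offset 1)
    and return the four texture descriptors: contrast, correlation,
    energy, homogeneity."""
    # Quantise the image into `levels` grey levels.
    q = (gray / (gray.max() + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of horizontally adjacent grey levels.
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()  # normalise to a joint probability distribution
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)  # angular second moment
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * glcm), np.sum(j * glcm)
    sd_i = np.sqrt(np.sum(glcm * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(glcm * (j - mu_j) ** 2))
    correlation = np.sum(glcm * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return np.array([contrast, correlation, energy, homogeneity])

def fuse(spectrum, gray_image):
    """Append a sample's four texture features to its spectrum,
    yielding one row of the pixel-level fusion matrix."""
    return np.concatenate([spectrum, glcm_features(gray_image)])
```

The fused rows from all samples would then form the input matrix for the classification models described in the text.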

The classification results of the pixel-level fusion of spectral and texture information are shown in Table 4. Compared with the results obtained from spectral data alone, the classification ability of the fused data varied with the texture features involved. The energy and contrast features had a positive effect on the classification models, yielding prediction-set accuracies of 90% for both the Vis-SWNIR and LWNIR regions. In general, the contrast parameter reflects the clarity of the image according to the depth of the texture grooves, and the energy parameter reflects the randomness of the image texture. The amount of mold increased with culture time and was mainly concentrated in the embryo region of the maize kernel, which may explain why these two texture features were more conducive to classifying maize at different mold levels. It should be pointed out that the other combinations of spectra and texture features did not yield the desired results, suggesting that the predictive ability of pixel-level fusion is not simply a matter of accumulating more data. Although pixel-level fusion directly merges data from different sources and can thereby supply valuable information to the model, it can also introduce a large number of uncorrelated and noisy variables, so the fused data may not significantly improve the predictive power of the model. Similar results were obtained when NIR and ATR-FTIR data blocks were used to detect adulteration in honey [39]: models based on a data matrix generated by pixel-level data fusion showed no significant improvement in accuracy.


**Table 4.** The classification results of the pixel-level fusion of spectral and texture information.
