*2.5. Spectral Analysis Methods*

SqueezeNet, MobileNet, and ShuffleNet were used to construct classification models for the rapid, intelligent identification of various PAHs. SqueezeNet is a lightweight network based on a model-compression strategy [37]. The structure of the SqueezeNet used in this study is shown in Figure S3 in the Supporting Information. A first convolution layer and pooling layer perform the initial feature extraction; a 1 × 1 convolution layer (squeeze layer) is then added, followed by a parallel 1 × 1 convolution and a 3 × 1 convolution that expand the width (expand layer). The feature maps of the two expand convolutions are concatenated and passed to the flatten, dropout, and dense layers. Notably, the pooling operation in SqueezeNet is delayed, which ensures that a larger feature map is convolved and more feature information is retained, thereby effectively improving network performance. The parameter settings for SqueezeNet are listed in Table S1.
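The squeeze-and-expand idea can be sketched in plain NumPy as a toy 1D "fire module" (the channel sizes and random weights here are illustrative assumptions, not the parameters used in the study):

```python
import numpy as np

def conv1x1(x, w):
    # Pointwise (1 x 1) convolution on a 1D feature map:
    # x has shape (length, in_ch), w has shape (in_ch, out_ch).
    return x @ w

def conv3(x, w):
    # 3 x 1 convolution with 'same' zero padding:
    # x: (length, in_ch), w: (3, in_ch, out_ch).
    xp = np.pad(x, ((1, 1), (0, 0)))
    L = x.shape[0]
    out = np.zeros((L, w.shape[2]))
    for i in range(L):
        out[i] = np.einsum('kc,kco->o', xp[i:i + 3], w)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 32))       # toy spectrum: 128 points, 32 channels
w_sq = rng.normal(size=(32, 8))      # squeeze layer: 32 -> 8 channels
w_e1 = rng.normal(size=(8, 16))      # expand branch 1: 1 x 1 conv
w_e3 = rng.normal(size=(3, 8, 16))   # expand branch 2: 3 x 1 conv

s = np.maximum(conv1x1(x, w_sq), 0)  # squeeze + ReLU
# The two expand branches are concatenated along the channel axis.
out = np.concatenate([conv1x1(s, w_e1), conv3(s, w_e3)], axis=-1)
print(out.shape)  # (128, 32)
```

The squeeze layer shrinks the channel count before the wider expand convolutions run, which is what keeps the module's parameter count low.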

The core idea of MobileNet is to replace general convolution with depthwise separable convolution (DSC) [38]. The MobileNet structure designed in this study is illustrated in Figure S4 in the Supporting Information. The DSC is implemented as a DepthwiseConv followed by a common 1 × 1 convolution, each followed by batch normalisation and a rectified linear unit (ReLU) activation for normalisation and nonlinearity. MobileNet comprises two DSC modules with an additional maximum pooling layer for dimensionality reduction, followed by a flatten layer and two dense layers. The parameter settings for MobileNet are listed in Table S1.
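A minimal NumPy sketch of a 1D depthwise separable convolution, with an assumed channel configuration chosen only to show the parameter saving over a standard 3 × 1 convolution (batch normalisation omitted for brevity):

```python
import numpy as np

def depthwise_conv3(x, w):
    # Depthwise 3 x 1 convolution: each channel is filtered independently.
    # x: (length, channels), w: (3, channels), 'same' zero padding.
    xp = np.pad(x, ((1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        out[i] = np.sum(xp[i:i + 3] * w, axis=0)
    return out

def pointwise_conv(x, w):
    # 1 x 1 convolution mixes channels: (L, C_in) @ (C_in, C_out).
    return x @ w

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 16))       # toy spectrum: 100 points, 16 channels
dw = rng.normal(size=(3, 16))        # depthwise filters
pw = rng.normal(size=(16, 32))       # pointwise filters: 16 -> 32 channels

y = pointwise_conv(np.maximum(depthwise_conv3(x, dw), 0), pw)

# Weight-count comparison with a standard 3 x 1 convolution (3 * 16 * 32):
params_std = 3 * 16 * 32             # 1536
params_dsc = 3 * 16 + 16 * 32        # 560
print(y.shape, params_std, params_dsc)
```

Splitting spatial filtering (depthwise) from channel mixing (pointwise) is what gives MobileNet its lightweight footprint.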

The ShuffleNet network combines group convolution and channel shuffling [39]; its structure is shown in Figure S5 in the Supporting Information. Initial feature extraction is performed using a common convolution layer and a maximum pooling layer, followed by two shuffle layers that apply group convolution with channel shuffling. In each shuffle layer, the input is first convolved using a 1 × 1 group convolution, after which the channel-shuffle module interleaves the feature maps of the groups. The result is then convolved using a 3 × 1 DepthwiseConv and a 1 × 1 group convolution. Finally, the output is added to the initial input, realising group convolution with channel shuffle. After feature extraction through the two shuffle layers, the network is completed with the flatten, dropout, and two dense layers. The parameter settings for ShuffleNet are listed in Table S1.
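The channel-shuffle operation itself is a simple reshape-transpose-reshape; the following NumPy sketch (with an assumed toy channel count of 8 and 2 groups) shows how it interleaves channels across groups so that later group convolutions can exchange information:

```python
import numpy as np

def channel_shuffle(x, groups):
    # x: (length, channels). Reshape to (length, groups, channels // groups),
    # transpose the group and within-group axes, then flatten back.
    L, C = x.shape
    return x.reshape(L, groups, C // groups).transpose(0, 2, 1).reshape(L, C)

# One spatial position with channels labelled 0..7, split into 2 groups:
# group 0 holds channels [0, 1, 2, 3], group 1 holds [4, 5, 6, 7].
x = np.arange(8, dtype=float)[None, :]
print(channel_shuffle(x, 2)[0])  # [0. 4. 1. 5. 2. 6. 3. 7.]
```

After the shuffle, each new group contains channels drawn from every original group, which is what lets stacked group convolutions avoid isolating information within fixed channel groups.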
