*3.2. Data Augmentation*

Data augmentation is a method to increase the number of data samples available during model training. After data balancing and sampling, we retained 300 images per class for better and more generalized model training. We applied data augmentation to enlarge the dataset and to avoid overfitting [31]. Two augmentation methods, random rotation and random horizontal translation, were applied with the parameters described in Table 1, yielding augmented dataset batches during training of the proposed model.
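The two augmentation operations above can be sketched in plain Python. This is an illustrative sketch only, not the MATLAB pipeline used in the study: the function name `augment` and the parameter `max_shift` are invented for the example, rotation is simplified to multiples of 90 degrees, and the image is a 2D list of pixel values.

```python
import random

def augment(image, max_shift=2):
    """Illustrative augmentation: random rotation (here simplified to
    multiples of 90 degrees) plus a random horizontal translation."""
    # Random rotation: apply 0-3 clockwise quarter turns.
    for _ in range(random.randint(0, 3)):
        # Rotate 90 degrees clockwise: reverse rows, then transpose.
        image = [list(row) for row in zip(*image[::-1])]
    # Random horizontal translation, padding vacated pixels with zeros.
    shift = random.randint(-max_shift, max_shift)
    width = len(image[0])
    shifted = []
    for row in image:
        if shift >= 0:
            shifted.append([0] * shift + row[:width - shift])
        else:
            shifted.append(row[-shift:] + [0] * (-shift))
    return shifted
```

Each call produces a randomly transformed copy of the input, so repeated calls during training yield the fresh augmented batches described above.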

**Table 1.** Augmentation parameter details.


#### *3.3. Proposed CNN Architecture*

The proposed CNN architecture, COV-Net, used in this study comprised four convolutional blocks. Each block consisted of a convolutional layer, batch normalization, and a ReLU activation function, followed by max-pooling, as shown in Figure 3. In the convolutional layers, filters convolved over the input image; the convolution operation computed the dot product of the filter weights and the input values, extracting features from the input images. The CNN used the backpropagation algorithm for dynamic feature extraction. One advantage of a CNN over an ANN is that it automatically extracts domain-specific features from the images. By further applying max-pooling, it learned highly discriminative features to train the model. In the pooling layer, down-sampling was also performed, which improved the performance of the model by making it robust to small variations in the input image and by reducing the spatial dimensions of the resulting feature maps.
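The convolution, ReLU, and max-pooling operations that make up one block can be sketched in a few lines of Python. This is a minimal illustration of the operations themselves, not the COV-Net implementation: the function names are invented for the example, images are 2D lists, and only a single-channel "valid" convolution with 2×2 stride-2 pooling is shown.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image and take
    the dot product of filter weights and pixel values at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def relu(fmap):
    """Element-wise ReLU: keep positive activations, zero the rest."""
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool2x2(fmap):
    """2x2 max-pooling with stride 2: keep the strongest activation in
    each window, down-sampling the feature map."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]
```

In the real network these operations run per filter and per channel with learned kernel weights; the sketch shows why pooling makes the output insensitive to small shifts within each window.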

To highlight the features for classification, the resulting feature maps were passed to a fully connected layer. A dropout layer was added at the end to avoid overfitting. A detailed layer-wise description of the proposed model is illustrated in Figure 4. The cross-entropy function was used as the cost function along with the softmax function. To categorize COVID-19, healthy, and viral pneumonia cases, we used traditional ML classifiers, namely random forest, Naïve Bayes, support vector machine (SVM), k-nearest neighbor (k-NN), and an ensemble model.
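The softmax and cross-entropy pairing used as the output head can be written out directly. This is a generic sketch of the two standard functions (with invented names), not code from the study:

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1.
    Subtracting the max logit first keeps exp() numerically stable."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_class):
    """Cost: the negative log-probability assigned to the correct class,
    so confident correct predictions cost ~0 and wrong ones cost a lot."""
    return -math.log(probs[true_class])
```

For the three-way task here (COVID-19, healthy, viral pneumonia), `logits` would have three entries and `true_class` would index the correct label.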

**Figure 3.** Detailed overview of proposed COV-Net.


**Figure 4.** Architectural detail of proposed COV-Net.

In this study, we used MATLAB to run the code. In the training phase of our proposed COV-Net model, we used the "rmsprop" function as the optimizer. It is a gradient-based method that normalizes the gradient using a moving average of its magnitude: it diminishes the step for a large gradient to prevent it from exploding and enlarges the step for a small gradient to prevent it from vanishing [32]. After an experimental analysis, an optimal learning rate of 0.0001 was selected. To reduce computational complexity, a small batch size of 16 was used. To improve generalization, L1 regularization was applied. As the cost function, the cross-entropy function was used along with the softmax function.
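The RMSprop update rule described above can be sketched as follows. This is a minimal illustration of the standard update, not the MATLAB implementation used in the study; the function name, list-based weights, and the `l1` parameter (which folds the L1 penalty's subgradient into the gradient) are assumptions for the example, with defaults mirroring the reported learning rate of 0.0001.

```python
import math

def rmsprop_step(w, grad, cache, lr=1e-4, rho=0.9, eps=1e-8, l1=0.0):
    """One RMSprop update over a list of weights.  `cache` holds a
    running average of squared gradients; dividing by its square root
    shrinks the step for large recent gradients and enlarges it for
    small ones.  An optional L1 term adds l1 * sign(w) to each gradient."""
    new_w, new_cache = [], []
    for wi, gi, ci in zip(w, grad, cache):
        gi = gi + l1 * (1 if wi > 0 else -1 if wi < 0 else 0)
        ci = rho * ci + (1 - rho) * gi * gi       # moving avg of grad^2
        wi = wi - lr * gi / (math.sqrt(ci) + eps)  # normalized step
        new_w.append(wi)
        new_cache.append(ci)
    return new_w, new_cache
```

In practice the optimizer applies this element-wise to every weight tensor once per mini-batch of 16 images.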
