2.1.3. UERD

UERD [10] is a steganographic embedding scheme that aims to minimize the probability that the presence of hidden information will be detected, by minimizing the impact of embedding on the statistical parameters of the cover. To this end, it analyzes the DCT coefficients of individual modes, as well as whole DCT blocks and their neighbors. This allows the method to determine whether a region can be considered "noisy" and whether embedding there would affect statistical features of the file, such as its histograms. Conversely, "wet" regions are those whose statistical parameters are predictable and where embedding would cause noticeable changes. The scheme does not exclude values such as DC mode coefficients or zero-valued DCT coefficients from being used for embedding, as their statistical profiles can make them suitable from the security perspective. Instead, UERD attempts to spread the relative statistical changes resulting from embedding uniformly across the file, and employs syndrome-trellis coding (STC) to hide the message bits in the selected values.
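The cost-assignment idea can be sketched as follows. This is a simplified, hypothetical illustration based on the description above (the function name, the exact energy definition, and the 0.25 neighbor weighting are assumptions), not the reference UERD implementation:

```python
import numpy as np

def uerd_costs(dct_blocks, quant):
    """Simplified, illustrative UERD-style embedding costs.

    dct_blocks : (H, W, 8, 8) array of quantized DCT coefficients
    quant      : (8, 8) quantization table

    The cost of a +-1 change is inversely proportional to the "energy"
    (texture/noise level) of the block and its 4-connected neighbors,
    so busy blocks are cheap to embed in and flat ones are expensive.
    """
    # Block energy: quantization-step-weighted sum of |AC coefficients|.
    energy = np.abs(dct_blocks.astype(float)) * quant
    energy[:, :, 0, 0] = 0.0                      # exclude the DC term
    D = energy.sum(axis=(2, 3))                   # (H, W) per-block energy
    # Pad with edge values so every block has 4 neighbors.
    Dp = np.pad(D, 1, mode='edge')
    neigh = Dp[:-2, 1:-1] + Dp[2:, 1:-1] + Dp[1:-1, :-2] + Dp[1:-1, 2:]
    denom = D + 0.25 * neigh
    denom[denom == 0] = 1e-10                     # avoid division by zero
    # Per-coefficient cost: larger quant step -> larger statistical impact.
    return quant[None, None] / denom[:, :, None, None]
```

A coder such as STC then embeds the payload while minimizing the total cost of the modified coefficients.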

Figure 1 shows a sample clean image, the same image with random data hidden using the UERD algorithm at a rate of 0.4 bpnzac (bits per non-zero AC DCT coefficient), and the difference image between them. As can be observed, despite almost 5% of image (b) being hidden data, no artifacts can be perceived. Moreover, it is hardly possible to notice any difference between the clean and the steganographically modified image, even when they are displayed next to one another. Only the differential image (c) reveals the manipulation. The same applies to nsF5, J-Uniward, and other modern image steganography algorithms: their manipulations are often imperceptible and difficult to detect, especially since the original image is rarely available.

#### *2.2. Detection Methods*

In recent years, several methods of detecting image steganography have been researched. They usually involve the extraction of some sort of parameters out of analyzed images, followed by applying a classification algorithm. They are usually based on an ML approach, employing either shallow or deep learning algorithms. Therefore, in this subsection, we first describe the features most frequently used with steganalytic algorithms, and then briefly describe typical examples of shallow and deep learning-based detection algorithms.

**Figure 1.** (**a**) Clean image, (**b**) image with data hidden using UERD algorithm, and (**c**) differential image between them, scaled 100 times. Density of steganographic data: 0.4 bpnzac, which means here 2638 B of hidden data in each 53 kB image file. Clean image source: unsplash.com (accessed on 8 April 2022).

#### 2.2.1. Feature Extraction

In the literature, several feature spaces for image steganalysis have been researched. One of them is based on discrete cosine transform residuals (DCTR) [12], whose main purpose is to analyze the data resulting from the DCT representation of a given image. In this method, a set of 8 × 8 filters derived from the DCT basis patterns is created and applied to the entire analyzed image. Histograms are then built from the quantized residuals obtained by convolving each fragment of the image with these filters. In [13], an application of DCTR parameters in connection with a multi-level filter is proposed. A variation of this approach is a method based on Gabor filter residuals (GFR) [14]. It works in a very similar way to DCTR, but Gabor filters are used instead of the DCT-based 8 × 8 filters. Article [15] describes a successful application of GFR features in JPEG steganography detection. Another approach to parameterization is the phase-aware projection model (PHARM) [16]. In this approach, various linear and non-linear filters are used, and the histogram is constructed from projections of the values of each residual image fragment.
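A simplified sketch of DCTR-style feature extraction might look as follows (function names are hypothetical, and the real descriptor additionally splits residuals into 64 phases per filter and merges symmetric bins):

```python
import numpy as np

def dct_bases():
    """The 64 two-dimensional 8 x 8 DCT basis patterns used as filters."""
    k = np.arange(8)
    c = np.where(k == 0, 1 / np.sqrt(8), np.sqrt(2 / 8))
    cos = np.cos(np.pi * (2 * np.arange(8)[:, None] + 1) * k[None] / 16)
    B = cos * c                              # columns are the 1-D DCT bases
    return [np.outer(B[:, u], B[:, v]) for u in range(8) for v in range(8)]

def dctr_features(img, q=4.0, T=4):
    """Illustrative DCTR-style features.

    img : 2-D array of decompressed pixel values
    q   : quantization step for the residuals
    T   : histogram truncation threshold
    """
    win = np.lib.stride_tricks.sliding_window_view(img, (8, 8))
    feats = []
    for kern in dct_bases():
        # Convolve the image with the basis filter (valid region only).
        resid = np.einsum('ijkl,kl->ij', win, kern)
        # Quantize, truncate, and histogram the residual magnitudes.
        r = np.minimum(np.round(np.abs(resid) / q), T).astype(int)
        hist = np.bincount(r.ravel(), minlength=T + 1)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)             # 64 filters x (T + 1) bins
```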

#### 2.2.2. Shallow Machine Learning Classifiers

A number of shallow classification methods have been proposed for JPEG steganalysis. These include support vector machines (SVM) [17–19] and logistic regression [20]. A method that often appears in recent publications is an ensemble classifier built using the Fisher linear discriminant (FLD) as the base learner [21]. In certain cases [7], parameter extractors coupled with this ensemble classifier have outperformed more recent deep learning-based systems; as such, this algorithm has become a point of reference when assessing the performance of shallow ML methods in detecting steganography. The rationale for trying to increase its detection accuracy further is that the data is split randomly into the subsets used to train each base learner. It is therefore possible that some base learners are assigned less varied subsets, and their detection accuracy may suffer from poor generalization. Simple vote-combining methods, such as the majority vote used by default, do not take such effects into consideration.

#### 2.2.3. Deep Learning Methods

In recent years, neural networks have often been reported as being used for detecting steganographically hidden data in digital images. As input data, image parameters extracted from decompressed DCT values, such as DCTR, GFR, or PHARM, have been used. Custom variants of convolutional networks such as XuNet [22], ResNet [23], DenseNet [24], or AlexNet [25] are most often used for this purpose. The common feature of these networks is the combination of convolution-batch normalization-dense structures, i.e., a convolutional layer, a normalization layer, and a dense layer of neurons with an appropriate activation function. Activation functions such as the sigmoid [26], TLU (truncated linear unit) [27], and Gaussian [28] are used, but the most common are the rectified linear unit (ReLU) [29] and TanH [22].

#### **3. Materials and Methods**

In our experiments, we compared how shallow and deep learning methods cope with detecting hidden data in JPEG images. We tested a variety of deep and shallow ML-based classifiers and various feature spaces. Initially, we used raw DCT coefficients as input for the tested methods. As this did not produce satisfactory results, we instead extracted various parameters from the images and performed experiments in the DCTR, GFR, and PHARM feature spaces. We trained our models on features extracted from pairs of images: without and with steganographically hidden data. Details of the data and the classifiers used are presented in the following subsections.

#### *3.1. Datasets Used*

We used the "Break Our Steganographic System" (BOSS) image collection [30], which contains 10,000 grayscale photos (with no hidden data). The photos were converted into JPEG with a quality factor of 75. Then, we generated three other sets of images by hiding random data with a density of either 0.4 or 0.1 bpnzac using three different steganographic algorithms: J-Uniward, nsF5, and UERD, with the code published at [31]. All experiments, including generation of the steganographic files, were run on a virtual machine with 64 GB RAM and 8 vCPU cores of an Intel Xeon Gold 5220 processor, running on a DELL PowerEdge R740 server. Each dataset was divided in parallel (i.e., each cover image and its stego counterpart always fell into the same subset) into training and test subsets, in a ratio of 90:10.
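The parallel split can be sketched as follows (a minimal illustration with hypothetical names; labels 0 and 1 denote cover and stego):

```python
import numpy as np

def paired_split(cover, stego, test_frac=0.1, seed=0):
    """Split cover/stego feature pairs into training and test subsets 'in
    parallel': a cover image and the stego image derived from it always
    land in the same subset, so the classifier is never tested on a cover
    whose stego twin it has already seen during training."""
    n = len(cover)
    idx = np.random.default_rng(seed).permutation(n)
    n_test = int(round(test_frac * n))
    test, train = idx[:n_test], idx[n_test:]
    X_tr = np.concatenate([cover[train], stego[train]])
    y_tr = np.concatenate([np.zeros(len(train)), np.ones(len(train))])
    X_te = np.concatenate([cover[test], stego[test]])
    y_te = np.concatenate([np.zeros(len(test)), np.ones(len(test))])
    return X_tr, y_tr, X_te, y_te
```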

#### *3.2. Configuration of Ensemble Classifier*

The base component of the shallow classifier is the ensemble classifier based on the FLD model [21]. A diagram presenting the way the ensemble classifier operates is shown in Figure 2. The set of feature vectors created by extracting DCTR, GFR, or PHARM characteristics from pictures is used to generate smaller subsets through random selection of samples from the original set (a process called bootstrapping). These subsets are then used to train the individual base learners independently of each other, which diversifies their classification logic. Throughout the training process, the size of the subsets and the population of the ensemble (the number of base learners) are adjusted to minimize the out-of-bag error of the system. Upon testing, each base learner reaches its decision independently of the others, and the results from the whole "population" are aggregated to produce a single decision.

**Figure 2.** A diagram showing the structure of the ensemble classifier.
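A minimal sketch of such an ensemble, assuming a fixed population size (the original classifier additionally tunes the subset size and population against the out-of-bag error), could look as follows:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class FLDEnsemble:
    """Sketch of the bootstrapped FLD ensemble: each base learner is a
    Fisher linear discriminant trained on a bootstrap sample of the
    training set, and the final decision is a majority vote."""

    def __init__(self, n_learners=25, seed=0):
        self.n_learners, self.seed = n_learners, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        n = len(X)
        self.models = []
        for _ in range(self.n_learners):
            idx = rng.integers(0, n, n)        # bootstrap sample of the data
            self.models.append(LinearDiscriminantAnalysis().fit(X[idx], y[idx]))
        return self

    def votes(self, X):
        # One 0/1 decision per base learner and test sample.
        return np.array([m.predict(X) for m in self.models]).T

    def predict(self, X):
        # Majority vote over the base learners' decisions.
        return (self.votes(X).mean(axis=1) > 0.5).astype(int)
```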

In our work, we focused on maximizing the detection ability of this classifier through the use of various methods of combining the votes of the base learners. While the individual votes in the original ensemble were fused by simply choosing the more popular classification decision, we decided to explore the potential gain of employing machine learning for this step. We trained the original ensemble classifier and then used it to obtain the decisions of all base learners for both the training and testing sets. The resulting data formed new feature vectors, which were used for further analysis with different ways of combining the votes of the individual base learners. We performed this analysis primarily using methods implemented in the scikit-learn library [32]. As such, the original ensemble became a dimension-reducing layer.
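The approach can be illustrated as follows; the vote matrix here is simulated (in the experiments it is produced by the trained ensemble), and logistic regression stands in for the various combiners compared later:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 0/1 vote matrix: one row per image, one column per base
# learner. Here each learner agrees with the true label ~75% of the time.
rng = np.random.default_rng(1)
n_images, n_learners = 200, 25
y = rng.integers(0, 2, n_images)
votes = np.where(rng.random((n_images, n_learners)) < 0.75,
                 y[:, None], 1 - y[:, None])

# Baseline fusion: plain majority voting over the 0/1 decisions.
majority = (votes.mean(axis=1) > 0.5).astype(int)

# Learned fusion: the vote vectors become new feature vectors and a
# scikit-learn classifier learns how to weight each learner's vote.
combiner = LogisticRegression().fit(votes, y)
```

In this sense the original ensemble acts as a dimension-reducing layer: thousands of DCTR/GFR/PHARM features are compressed into a short vote vector that the second-stage classifier operates on.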

#### *3.3. Deep Learning Environment*

The neural network environment was based on the Keras [33] and TensorFlow [34] libraries due to the simplicity of model definition. The network architecture was based mainly on the Dense-BatchNormalization structure, without the convolutional part described in the available literature. We also tested various activation functions for the dense layers, such as sigmoid, softsign, TanH, and softmax, but the best results were obtained with the ReLU function. We used two optimizers: adaptive moment estimation (Adam) [35] and stochastic gradient descent (SGD) [36], which gave different results depending on the type of input parameters. The last parameter that significantly influenced learning efficiency was the learning rate; we found that lowering it gave very promising results without changing the network architecture or the optimizer. One of the network configurations used is displayed in Figure 3.

**Figure 3.** Example of 3 Dense-BatchNormalization neural networks used for detecting data hidden by a JPEG-based steganographic method.
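For illustration, the computation performed by one such block can be sketched in plain numpy (an inference-time sketch with hypothetical names, not the actual Keras training code):

```python
import numpy as np

def dense_bn_relu(x, W, b, gamma, beta, mean, var, eps=1e-3):
    """Inference-time computation of one Dense -> BatchNormalization ->
    ReLU block, the unit the networks are stacked from. In the
    experiments these are Keras Dense and BatchNormalization layers."""
    z = x @ W + b                                        # dense layer
    z = gamma * (z - mean) / np.sqrt(var + eps) + beta   # batch normalization
    return np.maximum(z, 0.0)                            # ReLU activation

def stego_probability(x, blocks, W_out, b_out):
    """Stack several Dense-BatchNormalization blocks and finish with a
    sigmoid output neuron giving P(image contains hidden data)."""
    for params in blocks:
        x = dense_bn_relu(x, *params)
    return 1.0 / (1.0 + np.exp(-(x @ W_out + b_out)))
```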

#### *3.4. Testing Scenarios*

For the shallow ML-based algorithms, we decided to focus on the ensemble classifier, which has been reported in related studies as one of the most promising. During our experiments, we verified a number of ML-based methods used to combine the set of votes coming from all base learners to return the final classifier decision. These include: linear regression, logistic regression, linear discriminant analysis (LDA), and *k* nearest neighbors (*k*-NN). Moreover, the majority voting scheme (i.e., choosing the most popular classification decision, which is the original ensemble vote fusion method), as well as unquantized majority voting (i.e., classification based on the sum of non-quantized decisions of the whole ensemble) was included for comparison.
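The difference between the two baseline fusion schemes can be illustrated on hypothetical base-learner outputs:

```python
import numpy as np

# Hypothetical soft outputs of 5 base learners for 3 test images: the
# unquantized FLD projection value, positive meaning "stego".
soft = np.array([[ 0.9, -0.1,  0.2, -0.3,  0.1],
                 [-2.0,  0.4,  0.3,  0.2,  0.1],
                 [ 1.1,  0.8, -0.2,  0.9,  0.5]])

hard = (soft > 0).astype(int)                        # quantized 0/1 votes
majority = (hard.mean(axis=1) > 0.5).astype(int)     # majority voting
unquantized = (soft.sum(axis=1) > 0).astype(int)     # unquantized majority

# The two schemes can disagree: for the second image, three of the five
# learners vote "stego", but one very confident "cover" vote dominates
# the unquantized sum.
```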

As for deep learning methods, two network architectures were selected for experiments:

- a network with three dense layers;
- a network with two dense layers.
We decided not to use any convolutional layers due to their high computational requirements. However, we used additional normalization layers (BatchNormalization) between the dense layers. Half of the three-layer dense models used the Adam optimizer and half used SGD, while the two-layer dense model used only the Adam optimizer. For the Adam optimizer, learning rates of 1 × 10<sup>−4</sup> or 1 × 10<sup>−5</sup> were used, while for SGD, 1 × 10<sup>−3</sup> or 1 × 10<sup>−4</sup> were used. The SGD optimizer (with either learning rate) and the 1 × 10<sup>−4</sup> version of the Adam optimizer were omitted for the two-layer model, because they yielded much worse results than the corresponding versions with three dense layers. In total, five different neural network configurations were tested for steganography detection.

#### *3.5. Evaluation Metrics*

To evaluate the models created, we employed commonly used metrics. The first is accuracy, which indicates what percentage of the entire classified set is classified correctly. The second metric is precision, which determines what proportion of the samples indicated by the classifier as belonging to a given class actually belong to it. Another metric is recall, which determines what proportion of the samples of a given class is detected by the model. The fourth metric analyzed is the F1-score, the harmonic mean of precision and recall; it reaches 1.0 when both components reach their maximum. The last metric we used to test the effectiveness of the models is the area under the ROC curve (AUC). We also present the ROC curves themselves, as they visually convey the effectiveness of a detection model.
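With scikit-learn, these metrics can be computed as follows (the labels and scores below are a toy example, not our experimental results):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Toy example: true labels (1 = stego) with predicted labels and scores.
y_true  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred  = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.3, 0.6, 0.2, 0.9, 0.8, 0.4, 0.7]   # classifier's P(stego)

acc  = accuracy_score(y_true, y_pred)    # fraction of all images classified correctly
prec = precision_score(y_true, y_pred)   # 3 of 4 "stego" calls are really stego
rec  = recall_score(y_true, y_pred)      # 3 of 4 stego images are found
f1   = f1_score(y_true, y_pred)          # harmonic mean of precision and recall
auc  = roc_auc_score(y_true, y_score)    # area under the ROC curve
```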

In our results, we focus on evaluating the accuracy of each model combination, while for the best parameters we also provide the values of the other metrics. Since the test set is perfectly balanced, the accuracy score is not biased and reflects the detection ability of a given classifier well.
