Article

A Multi-Feature Extraction-Based Algorithm for Stitching Tampered/Untampered Image Classification

Key Laboratory of EMW Information, Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(5), 2337; https://doi.org/10.3390/app12052337
Submission received: 23 January 2022 / Revised: 18 February 2022 / Accepted: 22 February 2022 / Published: 23 February 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

With the recent internet connectivity revolution and the fast-growing prevalence of camera-enabled devices, images play a vital role in many fields of modern life. Photos, which have often been treated as evidence in courts, are nowadays subject to increasingly sophisticated forgery. To detect stitching forgery, in which originally unassociated people or scenes are combined into one image, this paper first presents an algorithm that extracts multiple specific image features, such as grayscale, complementary color wavelet (CCW) based chroma, sharpness, and natural scene statistics (NSS). It is shown that a random forest model can be trained on these extracted features and then employed to classify stitching tampered/untampered images. The experimental results show that the proposed algorithm favorably outperforms the techniques reported in the literature, achieving state-of-the-art accuracy values of 91%, 95.24%, and 88.02% on the Tampering ImageNet, Columbia, and CASIA ITDE V2.0 datasets, respectively. The precision, recall, and F1-score were also improved to a certain extent.

1. Introduction

The internet connectivity revolution and the prevalence of camera-enabled devices have resulted in the production of a massive daily amount of digital data. Among these digital data, images are one of the most important categories influencing people’s daily life. Owing to their credibility, images are regarded as one of the most common sources of evidence and play a vital role in many fields [1]. In general, people or objects are unlikely to appear in a digital image if they do not exist in the scene of the photo. However, stored digital images are vulnerable to tampering, especially when image editing software is available [2]. It is easy to copy a region from one image to another using an image editing tool. This kind of tampering is known as image stitching. Detecting whether image content has been tampered with by stitching is therefore significant in the fields of forensic investigation, criminal investigation, news, and surveillance systems [3,4].
When an image is tampered with by stitching, this inevitably leads to changes in the chroma, illuminance, and texture of the image [5]. By detecting the changes in these features, stitching tampering can be detected. In terms of chroma feature detection, Wang et al. adopted the gray-level co-occurrence matrix (GLCM) of an edge image in the Cr channel as the decisive feature [6], but the feature extraction was carried out for a single channel only, and the edge detection algorithm used was simply the subtraction of adjacent pixels. Zhang et al. used the GLCM of the edge information in both the Cb and Cr channels as the decisive feature [7]. The Haar-like features of the integral images of the Y, Cb, and Cr channels were used as pivotal features in [8]. These methods are so sensitive to chroma changes that drastic chroma changes at image edges, even without tampering, also significantly affect their detection performance. In terms of illuminance feature detection, the shape, illumination, and reflection of an image are used to estimate the light environment from the face region in the image [9]. However, the acquisition of such information requires a priori information about the scene. At the same time, the algorithm cannot detect image tampering if no face can be found in the image. Image tampering detection based on the color illuminant features of an image is reported in [10]. Unfortunately, it suffers from significant performance loss under complicated illuminance [5]. In terms of texture feature detection, tampering detection through comparison of the gray-level run length matrices (GLRLM) of tampered and untampered images was conducted in [11], but it ignored spatial information. Histogram features encoded by the local binary pattern (LBP) were employed to reveal image stitching forgery in [12]. However, this approach is not robust against image regions with little intensity variation and is thus unstable when noise is present. The ‘gray-level regional maxima’ and ‘entropy-based edge’ features, extracted from the local entropy of the median filter residual (MFR) of images, are used for tampering detection in [13]. However, the texture features obtained by GLRLM, LBP, and MFR local entropy are easily affected by illumination and reflection. Furthermore, all of them ignore the chroma changes in an image.
Through the literature review above, it can be found that stitching tampering of an image will usually introduce changes in features, such as the chroma, illumination, and texture. In addition, the tampered area will interfere with the smoothness, consistency, continuity, regularity, and periodicity of the original image [14]. Thus, chroma feature detection with chroma channels only or texture detection with gray-scale images only cannot fully reflect the feature changes in the original image caused by stitching tampering. To achieve a more comprehensive examination of the existence of stitching tampering in digital images, a multi-feature tampering detection algorithm combining chroma, sharpness, and natural scene statistics (NSS) is proposed in this paper. The color filtering feature of complementary color wavelet (CCW) [15] is used to obtain the chromatic change of the image. Sharpness detection is then applied to detect the change in the sharpness of the image. Fused with texture differences and distortion statistics extracted by NSS, it is shown that a multi-feature classifier implemented by the random forest classification algorithm can comprehensively detect whether the image has been tampered with. The proposed model’s performance is evaluated on a newly built tampering detection dataset based on the ImageNet database (Stanford University, CA, USA) [16] (hereinafter referred to as “Tampering ImageNet”), Columbia Uncompressed Image Splicing Detection Evaluation Dataset (Columbia University, NY, USA) [17] (hereinafter referred to as “Columbia Dataset”), and CASIA Image Tampering Detection Evaluation Database V2.0 (CASIA ITDE V2.0) (Institute of Automation, Chinese Academy of Sciences, Beijing, China) [18]. The experimental results show that the proposed algorithm favorably outperforms the existing methods in the literature, in terms of the accuracy, precision, recall, and F1-score evaluation metrics. The main contributions of this paper are listed as follows:
  • Besides adopting other existing datasets, a dataset of over 500 pairs of stitching tampered/untampered images is built for training and evaluating our model. It is named Tampering ImageNet because its entire untampered content is drawn from the existing public ImageNet image classification dataset, and it has been manually preprocessed and optimized to overcome most shortcomings of the existing related public datasets, with an untampered vs. tampered images ratio of one and with the minimum content size able to achieve the maximum possible tampering detection performance.
  • In contrast to image single-feature-based methods, within this research work, novel multi-feature extraction techniques (CCW and NSS) are introduced, and innovatively applied for the first time to image stitching tampering detection, making full use of image chroma information and texture differences and distortion.
  • In this paper, a newly designed architecture of a random forest model for stitching tampered/untampered image classification is proposed, and the model is trained on the previously extracted multi-feature data.
The rest of this paper is organized as follows: In Section 2, a multi-feature method for image tampering detection based on CCW chroma detection, sharpness detection, and texture difference and distortion statistics described by NSS is presented. In Section 3, the algorithm proposed in this paper is used to detect untampered and tampered images on several public benchmark datasets, and the classification accuracy is compared with that of existing algorithms. Finally, Section 4 discusses the limitations and future work, and Section 5 concludes this article.

2. Multi-Feature Detection Method of Image Tampering

An overall diagram of the proposed stitching tampering detection algorithm is shown in Figure 1, and Algorithm 1 is its descriptive script. The algorithm consists of three parallel image feature extraction branches followed by a feature classification module. Firstly, superpixel division groups pixels with similar characteristics into more representative “elements” while preserving the edge information, and the complementary color chroma detection operator is applied to obtain the chroma feature of the image. Secondly, through sharpness detection, the algorithm extracts the sharpness of different areas of the image to capture the characteristics of the sharpness changes across the image. Then, using salient region detection, the algorithm divides the image into two regions and extracts their statistical features with the help of NSS to obtain the texture difference and distortion characteristics of the image. Finally, the random forest classification algorithm is used to classify the three extracted features separately, and the classification result (untampered or tampered) is obtained by majority voting.
Algorithm 1: A descriptive script of the proposed stitching tampered/untampered image classification algorithm
Input: Image dataset D
Output: Image classification result R
function Chroma (D)
 for each image I ∈ D do
   Perform superpixel division
   Calculate the complementary color coefficient $f_j(r,c)$, which is defined in Equation (5)
 return image chroma coefficient C  // see Section 2.1
function Sharpness (D)
 for each image I ∈ D do
   Compute the gradient $S(x,y)$ of image I at point $(x,y)$
   Compute the sharpness Ten as described in Equation (9)
 return gradient value S  // see Section 2.2
function NSS (D)
 for each image I ∈ D do
   Obtain the image model CCNSSal of the statistical saliency of complementary color natural scenes as shown in Equation (11)
   Extract the statistical features of natural scenes
 return statistical results N of NSS  // see Section 2.3
procedure RandomForest (D)
 C ← Chroma(D)
 S ← Sharpness(D)
 N ← NSS(D)
 Perform k-fold random forest classification for each feature set C, S, and N  // see Section 2.4
 Obtain the classification result R by majority voting
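To make the control flow of Algorithm 1 concrete, a minimal Python sketch of the classification stage is given below; the paper's experiments were implemented in MATLAB, so this is only an illustrative translation. The feature matrices C, S, and N are assumed to have already been produced by the extractors of Sections 2.1, 2.2 and 2.3, and the forest size of 174 trees follows the setting reported in Section 3.

# A minimal sketch of the three-voter classification stage of Algorithm 1,
# assuming the chroma (C), sharpness (S), and NSS (N) feature matrices of
# shape (n_images, n_features) have already been extracted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voters(C, S, N, labels, n_trees=174):
    # one random forest per feature set gives the three voters
    return [RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(F, labels)
            for F in (C, S, N)]

def classify_by_majority(voters, C, S, N):
    # each voter predicts 0 (untampered) or 1 (tampered); the majority of the three decides
    votes = np.stack([clf.predict(F) for clf, F in zip(voters, (C, S, N))])
    return (votes.sum(axis=0) >= 2).astype(int)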

2.1. Complementary Color Wavelet Chroma Detection Operator

In traditional chroma detection, the R, G, and B channels in a color image are usually processed as individual grayscale images, and then the processing results of each channel are fused [19,20]. Such approaches ignore the mutual information between the three channels. CCW not only retains the chroma characteristics of each channel of the color image, but also further considers the information among the channels, thus providing a more effective means for chroma detection [15].
CCW is a color image processing tool based on the theory of complementary colors [15]. According to the theory of complementary colors, the cone cells in human eyes are sensitive to four pairs of complementary colors, namely red-cyan, green-magenta, blue-yellow, and white-black [21,22]. A common way to represent complementary colors is the Hue ring. The complementary color pairs red-cyan, green-magenta, and blue-yellow form the three color axes of the Hue ring. The corresponding angles of R, G, and B on the Hue ring are 0, 2π/3, and 4π/3, respectively, and the phase difference between R, G, B and their respective complementary colors is π. In [15], a cluster of CCWs $\psi_{\theta_0}$, $\psi_{\theta_0+2\pi/3}$, and $\psi_{\theta_0+4\pi/3}$ with a phase difference of 2π/3 was constructed by imitating the relationship among the R, G, and B axes in the Cartesian coordinate system.
The CCW $\psi_{\theta_0}$, $\psi_{\theta_0+2\pi/3}$, and $\psi_{\theta_0+4\pi/3}$ are denoted as $\psi^R$, $\psi^G$, and $\psi^B$, respectively, and then the CCW coefficient vectors of the color image at level j and direction n can be expressed as [15]:
$$d_j^{R,n} = \mathbf{r} * \psi_j^{R,n} \quad (1)$$
$$d_j^{G,n} = \mathbf{g} * \psi_j^{G,n} \quad (2)$$
$$d_j^{B,n} = \mathbf{b} * \psi_j^{B,n} \quad (3)$$
where r, g, and b are the channel vectors of the color image, and $*$ stands for the convolution operation. For each level and direction, the two-dimensional multi-channel complementary color operator defined in [15] can be written as:
$$\begin{pmatrix} O_j^{C,n} \\ O_j^{R,n} \\ O_j^{G,n} \\ O_j^{B,n} \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix} \begin{pmatrix} d_j^{R,n} \\ d_j^{G,n} \\ d_j^{B,n} \end{pmatrix} \quad (4)$$
where $n = k\pi/8$, $k = 1, 2, \dots, 8$ represents the 8 sub-band directions of the complementary color operator, and $O_j^{C,n}$, $O_j^{R,n}$, $O_j^{G,n}$, and $O_j^{B,n}$ are the black-white, red-cyan, green-magenta, and blue-yellow complementary color operators, respectively. When the color change deviates from a complementary color axis, the corresponding complementary color operator will change accordingly [15].
Based on (4), the black-white complementary color operator is obtained by the summation of the other three pairs of CCW coefficient vectors, and reflects the variation in any chroma changes. Therefore, the black-white complementary color operator is chosen and its energy in a certain sub-band is calculated to represent the chroma change of the image. In this way, for a color image with a size $R \times C$, the chroma change at level j after CCW transformation is [15]:
$$f_j(r,c) = \sum_{n=1}^{8} \left( y_j^{C,n}(r,c) \right)^2 \quad (5)$$
where $y_j^{C,n}(r,c)$ is the complementary color coefficient of any point $(r,c)$ in the 8 directional sub-bands $O_j^{\,n}$ corresponding to level j, $r \le R_j$, $c \le C_j$, and $R_j = R/2^j$, $C_j = C/2^j$ represent the size of the sub-band $O_j^{\,n}$.
To reduce the complexity of CCW chroma detection, simple linear iterative clustering (SLIC) is adopted in this paper. Firstly, the image is divided into superpixels, on the assumption that superpixels comprising a large number of pixels with similar characteristics can extract the main information of the image while preserving its chroma and edge information [23]. The superpixel algorithm [23] transforms an image into a feature vector $V = [l, a, b, x, y]^T$, where $[l, a, b]^T$ represents the CIE-Lab color information of the pixel and $[x, y]^T$ is the spatial position of the pixel. The pixels of the image are divided into M superpixel blocks by local clustering. If the image has N pixels in total, the size of each superpixel can be expressed as N/M. M is first initialized to $M_0$, and then the algorithm returns the calculated actual number of superpixels and updates M. $M_0$ is set to 200 empirically.
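The SLIC step above maps directly onto common library routines; the sketch below uses scikit-image's slic as a stand-in for the superpixel division (the paper's implementation was in MATLAB), with $M_0$ = 200 initial superpixels as stated and each superpixel replaced by its mean color.

# A sketch of the superpixel preprocessing, assuming scikit-image; 'image' is an
# RGB array and m0 = 200 initial superpixels, as set empirically in the text.
import numpy as np
from skimage.segmentation import slic
from skimage.color import label2rgb

def superpixel_simplify(image, m0=200):
    labels = slic(image, n_segments=m0, compactness=10, start_label=1)  # local clustering in [l, a, b, x, y]
    m_actual = labels.max()                                             # actual number of superpixels returned
    simplified = label2rgb(labels, image, kind='avg')                   # each superpixel filled with its mean color
    return simplified, labels, m_actual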
Then, CCW is used to detect the chroma of the image after superpixel segmentation to obtain the color changes in the direction of interest. Figure 2 shows some examples of the CCW chroma detection of tampered images by applying (5). It can be found that the CCW chroma operator correctly detects chroma changes in tampered images.
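The CCW transform itself follows [15] and is not available in standard libraries; the short sketch below therefore assumes that the per-channel wavelet coefficients d_R, d_G, d_B of one level (eight directional sub-bands each) have already been computed, and only illustrates how Equations (4) and (5) turn them into the black-white operator and the chroma-change energy map.

# A sketch of Equations (4)-(5) under the assumption that the CCW coefficients
# are given as arrays of shape (8, Rj, Cj), one slice per directional sub-band.
import numpy as np

def chroma_change_energy(d_R, d_G, d_B):
    y_C = d_R + d_G + d_B           # black-white complementary color operator (first row of Eq. (4))
    f_j = np.sum(y_C ** 2, axis=0)  # sum of squared coefficients over the 8 directions (Eq. (5))
    return f_j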

2.2. Sharpness Detection Operator

According to the principle of digital imaging, there are usually focusing regions and defocusing regions in an image [24]. The focusing region is the region where the focus is located, which has higher sharpness and can reflect richer details. The defocused region is usually a blurry region. In a stitching tampered image, there are usually multiple focusing regions. Therefore, sharpness detection can effectively detect such stitching tampering.
In this paper, a gradient-based image sharpness evaluation function is used to extract the gradient values in the horizontal and vertical directions using the Sobel operator [25]. The larger the average grayscale value of the image processed by the Sobel operator, the clearer the image.
The image is divided into 4 × 4 non-overlapping blocks, and the sharpness of each image block is calculated with the Sobel operator. If the sharpness of one or more image blocks differs markedly from the rest, it is very likely that the examined image has been tampered with. Letting the Sobel convolution kernels be $G_x$ and $G_y$, one obtains [26]:
$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \quad (6)$$
$$G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \quad (7)$$
Then, the gradient of image I at point $(x, y)$ is:
$$S(x,y) = \sqrt{\left| G_x * I(x,y) \right|^2 + \left| G_y * I(x,y) \right|^2} \quad (8)$$
Thus, the sharpness of the image can be defined as [27]:
$$\mathrm{Ten} = \frac{1}{k} \sum_{x} \sum_{y} S(x,y)^2 \quad (9)$$
where k is the total number of image pixels.
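As an illustration of Equations (6)–(9), the following sketch computes the block-wise Tenengrad sharpness with SciPy's Sobel filters; the 4 × 4 block grid matches the description above, while the use of scipy.ndimage is our own choice rather than the paper's implementation.

# A sketch of the block-wise sharpness feature (Eqs. (6)-(9)); 'gray' is a 2-D
# grayscale image array.
import numpy as np
from scipy import ndimage

def tenengrad(gray):
    gx = ndimage.sobel(gray.astype(float), axis=1)   # horizontal Sobel gradient (Eq. (6))
    gy = ndimage.sobel(gray.astype(float), axis=0)   # vertical Sobel gradient (Eq. (7))
    s = np.sqrt(gx ** 2 + gy ** 2)                   # gradient magnitude S(x, y) (Eq. (8))
    return np.mean(s ** 2)                           # Ten = (1/k) * sum of S^2 (Eq. (9))

def blockwise_sharpness(gray, grid=4):
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    return np.array([[tenengrad(gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw])
                      for j in range(grid)] for i in range(grid)])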

2.3. Salient Region Detection and Natural Scene Statistics

In a tampered image, the tampered area will cause interference in the smoothness, consistency, continuity, regularity, and periodicity of the original image, and change the correlation between image pixels [14], thus causing image distortion. Such distortions can be detected by the NSS model [28,29]. On the other hand, when acquiring information from the image, the human visual system is more sensitive to the region with high brightness and a complex structure [30]. To reflect the sensitivity of the human vision system, the saliency region detection algorithm usually divides the image into regions of interest (ROI) and regions of no interest (RONI) of the human vision system. In the tampered image, the brightness and structural complexity inside the tampered region are relatively average, and it can only belong to either ROI or RONI, so that the statistical results of NSS of ROI and RONI in the tampered image are significantly different. Therefore, after detecting the salient regions of the image, NSS can be performed on ROI and RONI, respectively, to obtain the texture differences and distortion characteristics.

2.3.1. Salient Region Detection Operator of Complementary Color Wavelet

For a color image with a size $R \times C$, the CCW transformation is carried out to obtain the sub-band features of the black-white, red-cyan, green-magenta, and blue-yellow complementary colors in 8 directions at 6 levels. According to Equation (4), $O_j^{C,n}$, $O_j^{R,n}$, $O_j^{G,n}$, and $O_j^{B,n}$ can be obtained, where $j = 1, 2, \dots, 6$, $n = k\pi/8$, and $k = 1, 2, \dots, 8$.
The neighborhood in any sub-band is considered, and the coefficient at coordinate $(r,c)$ in sub-band $O_j^{\,n}$ is denoted as $y_j^{\,n}(r,c)$. The 9 pixel points $\{ y_j^{\,n}(\tilde r, \tilde c) \}_{r-1 \le \tilde r \le r+1,\; c-1 \le \tilde c \le c+1}$ in the 3 × 3 neighborhood centered on this point, plus the coefficients $\{ y_j^{\,\tilde n}(r,c) \}_{\tilde n = 1,\dots,8,\ \tilde n \ne n}$ of this point in the other 7 directions of the sub-band, constitute the neighborhood coefficient vector $Y_j^{\,n}(r,c)$ of coordinate $(r,c)$ in sub-band $O_j^{\,n}$. The divisive normalization transformation (DNT) factor corresponding to coordinate $(r,c)$ in one of the complementary sub-bands $O_j^{\,n}$ in direction n at level j can then be written as [31]:
$$\hat z_j^{\,n}(r,c) = \left( Y_j^{\,n}(r,c) \right)^{T} (C_U)^{-1}\, Y_j^{\,n}(r,c) / N \quad (10)$$
where N = 16, and $C_U$ is the covariance matrix estimated using all neighborhood vectors in the sub-band.
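A compact way to compute Equation (10) over a whole sub-band is to stack every 16-dimensional neighborhood vector as a row of a matrix; the sketch below does this with NumPy, using a pseudo-inverse of the estimated covariance for numerical robustness (an implementation choice, not specified in the text).

# A sketch of the DNT normalization factors of Eq. (10); Y has shape
# (num_coefficients, 16), one neighborhood vector per sub-band coordinate.
import numpy as np

def dnt_factors(Y):
    C_U = np.cov(Y, rowvar=False)                             # covariance estimated from all neighborhood vectors
    C_inv = np.linalg.pinv(C_U)                               # (C_U)^(-1), pseudo-inverse for stability
    z = np.einsum('ij,jk,ik->i', Y, C_inv, Y) / Y.shape[1]    # (Y^T C_U^-1 Y) / N with N = 16
    return z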
The DNT normalization factors of all coefficients in the whole sub-band, collected in vector form, are denoted as $Z_j^{\,n}$, which represents the salient feature generated by the complementary color change in direction n at level j. Then, the DNT normalization factors obtained from the characteristic coefficients of the four pairs of complementary colors at all levels and in all directions can be fused according to the following formula to obtain the image model of the statistical saliency of the complementary color natural scenes [31]:
$$\mathrm{CCNSSal} = G\left\{ \sum_{j=4}^{6} \left( \sum_{n=1}^{8} \left( Z_j^{C,n} + Z_j^{R,n} + Z_j^{G,n} + Z_j^{B,n} \right) \right) \right\} \quad (11)$$
where $Z_j^{C,n}$, $Z_j^{R,n}$, $Z_j^{G,n}$, and $Z_j^{B,n}$ denote the black-white, red-cyan, green-magenta, and blue-yellow complementary color characteristics of the sub-band DNT normalization factor vectors, respectively; $\sum_{n=1}^{8}(\cdot)$ represents the vector sum over the 8 directions, and $\sum_{j=4}^{6}(\cdot)$ denotes interpolation to size $R \times C$ followed by summation of the corresponding levels.
After obtaining the statistical saliency map model of the complementary color natural scene, the ROI and RONI of the image can be extracted by threshold segmentation, which can be described as:
$$\mathrm{mask}(x,y) = \begin{cases} 1, & \mathrm{CCNSSal}(x,y) \ge T \\ 0, & \mathrm{CCNSSal}(x,y) < T \end{cases} \quad (12)$$
where the threshold T is set as the mean value of the saliency map, i.e., $T = \mathrm{mean}(\mathrm{CCNSSal}(x,y))$, and mask is the template image obtained by binarizing the saliency map, which can be employed to extract the ROI of the image. Figure 3 shows the results of some examples of the segmentation implemented according to (12).
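Equation (12) is a simple mean-thresholding of the saliency map; a minimal sketch of the ROI/RONI split is shown below, assuming CCNSSal has already been computed and resized to the image dimensions.

# A sketch of the ROI/RONI separation of Eq. (12); 'image' is H x W x 3 and
# 'ccnssal' is the H x W saliency map.
import numpy as np

def split_roi_roni(image, ccnssal):
    T = ccnssal.mean()                                 # threshold set to the mean of the saliency map
    mask = (ccnssal >= T).astype(np.uint8)             # binary template image
    roi = image * mask[..., None]                      # region of interest
    roni = image * (1 - mask)[..., None]               # region of no interest
    return roi, roni, mask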
The examples in Figure 3 show that the CCW salient region detection operator can effectively divide the region with a complex structure and the region with less texture, and the tampered region will appear in one of the two regions, which provides a basis for the NSS of the two regions, respectively.

2.3.2. Extraction of the Statistical Features of Natural Scenes

Most areas of a natural image are relatively smooth, and adjacent pixels often have a high correlation, but the tampered area of an image is not correlated with its adjacent pixels, which may lead to texture distortion. The NSS model describes statistical features that are invariant across different image scenes and contents; such distortions change these features, so evaluating these changes makes it possible to identify stitching tampering through the NSS model.
The NSS model in this paper is composed of a 51-dimensional feature vector z (i.e., $z = [z(1), z(2), \dots, z(51)]$) [32], including spatial domain and transform domain features. The spatial domain features include features based on the generalized Gaussian distribution (GGD), features based on the asymmetric GGD, and features based on image entropy [32]. The transform domain features include features based on Benford’s law, features based on the energy-subband ratio, and features based on frequency variation [32].
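The full 51-dimensional feature vector follows [32] and is not reproduced here; as an illustration of one typical spatial-domain component, the sketch below estimates the shape and scale of a generalized Gaussian distribution by the standard moment-matching method often used in NSS-based quality models.

# A sketch of one representative NSS feature: moment-matching estimation of the
# GGD shape (alpha) and scale (sigma) of a coefficient map 'x'.
import numpy as np
from scipy.special import gamma

def ggd_params(x):
    x = np.asarray(x, dtype=float).ravel()
    gam = np.arange(0.2, 10.0, 0.001)                                  # candidate shape parameters
    r_gam = gamma(2.0 / gam) ** 2 / (gamma(1.0 / gam) * gamma(3.0 / gam))
    sigma_sq = np.mean(x ** 2)
    rho = np.mean(np.abs(x)) ** 2 / sigma_sq                           # empirical ratio (E|x|)^2 / E[x^2]
    alpha = gam[np.argmin(np.abs(r_gam - rho))]                        # best-matching shape parameter
    return alpha, np.sqrt(sigma_sq)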

2.4. Classification Algorithm

After obtaining the above three features, they are fed in parallel into the proposed random forest model and classified separately, and the final classification decision is obtained by the majority voting rule. The dataset is divided into training and test sets by cross-validation. The features of the images in the training set are fed into the classifier until its training phase is completed, and its performance is then verified on the images of the test set. The random forest algorithm is adopted as the classifier.

2.4.1. Random Forest Classification

Random forest classification is an ensemble classification scheme constructed by randomly selecting data subspaces and growing a group of decision trees from them [33]. Research results show that the random forest classifier can achieve high accuracy in high-dimensional and multi-category data classification. The construction of a random forest from data subspaces has been reported in several studies. The most popular random forest building method was proposed in [33]: a random subset of features is selected at each node to develop a branch of a decision tree, the bagging method is used to generate the training set, and a tree is grown; finally, all individual trees are combined to form the random forest model.

2.4.2. Cross-Validation

The k-fold cross-validation method is adopted in this paper to divide the dataset into training and test sets. The steps are as follows:
(1) Divide the whole dataset into K subsets;
(2) Choose one subset as the test set, K times without repetition, take the other (K − 1) subsets as the training set to train the model, and calculate the accuracy of the model on the test set;
(3) Calculate the mean of the K accuracies to obtain the final result:
$$\mathrm{Ac}(K) = \frac{1}{K} \sum_{i=1}^{K} \mathrm{Ac}_i \quad (13)$$
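A minimal sketch of this k-fold procedure with a random forest classifier is given below (scikit-learn's KFold is used as a stand-in for the paper's MATLAB implementation); Ac(K) is obtained as the mean of the per-fold accuracies, as in Equation (13).

# A sketch of k-fold cross-validated accuracy (Eq. (13)); X is a feature
# matrix, y the untampered/tampered labels, k = 10 or 7 as used in Section 3.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

def kfold_accuracy(X, y, k=10, n_trees=174):
    folds = KFold(n_splits=k, shuffle=True, random_state=0)
    accs = []
    for train_idx, test_idx in folds.split(X):
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(accs))                      # Ac(K) = (1/K) * sum of fold accuracies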

3. Experimental Results and Analysis

To verify the effectiveness of the method in this paper, we conducted batch tests on images from a newly built dataset named Tampering ImageNet, the Columbia dataset [17], and CASIA ITDE V2.0 [18]. The novel Tampering ImageNet dataset was built by manually preprocessing 500 untampered images drawn from the public ImageNet dataset [16] to produce a total of 500 pairs of untampered/tampered images. The images were clipped and zoomed to obtain 512 × 512 untampered images. The stitched images were constructed using the following method: part of the image region from one image (or several images) was selected and stitched into another image. The stitched region was manually decided with a random location and size, and its edge could be regular or irregular. In some tampered images, the tampered area was further preprocessed, e.g., by resizing and rotating. Objects were selected and resized carefully so that they were not visually abrupt in the tampered images. Through the above preprocessing, 500 untampered images and 500 tampered images with a size of 512 × 512 were obtained, including natural scene, architecture, object, people, animal, plant, and texture images. Some examples of the images are shown in Figure 4. In both the Columbia dataset and CASIA ITDE V2.0, all images of the appropriate size were selected. In the Columbia dataset [17], 117 untampered images and 30 tampered images were selected for detection, and the image size was 1152 × 768 or 768 × 1152. Some of these images are shown in Figure 5. In CASIA ITDE V2.0 [18], 6427 untampered images and 1330 tampered images were selected for detection, and the image sizes were all 384 × 256 or 256 × 384. Some of these images are shown in Figure 6. It is worth mentioning that Columbia [17] and CASIA ITDE V2.0 [18] are existing public benchmark datasets that were specially built for image tampering detection tasks. Since these datasets are also employed by the related methods, to ensure a fair comparison, they were adopted in this study without changing their content or size, thus keeping their original amounts and ratios of untampered vs. tampered images. In contrast, Tampering ImageNet is a novel dataset built during this research work; it is named Tampering ImageNet because its entire untampered content was drawn from the existing public ImageNet image classification dataset [16], which was resized and manually preprocessed to obtain an untampered vs. tampered images ratio of one. As the untampered vs. tampered images ratio and the content size already differ from one existing public tampering detection dataset to another, our newly built dataset is not expected to have the same size and ratio. Rather, the introduced dataset was built to overcome the shortcomings of the existing public datasets: it is a well-balanced dataset with an untampered vs. tampered images ratio equal to one and with the minimum possible content size able to achieve the maximum possible tampering detection performance. The experiments were implemented using toolboxes and functions in MATLAB R2018a (MathWorks, Inc., Natick, MA, USA).
The three-feature extraction operators proposed in Section 2 were applied to untampered and tampered images, respectively. The results obtained by the complementary color detection operator are shown in Figure 7, the results obtained by the sharpness detection operator are shown in Figure 8, and the results obtained by the CCW salient region detection operator are shown in Figure 9.
From Figure 7a–d, it can be observed that the CCW chroma detection operator can extract the full edges of the chroma changes in the tampered area when there is a detectable chroma difference between the tampered area and the other areas. However, for untampered images with insignificant chroma changes, the detected edges of the chroma changes do not constitute closed shapes. As can be seen from Figure 7e–h, when the chroma of an untampered image changes greatly, the CCW chroma detection operator will obtain a clearer edge of the chroma changes, which resembles an expected tampered area edge and may cause misjudgment. For a tampered image with no clear chroma difference between the tampered region and the other regions, the detection operator fails to detect the tampered edge, so the classification algorithm will classify it as untampered.
As shown in Figure 8a–d, when there are focusing and defocusing regions in an untampered image due to the limited depth of field, the sharpness detection operator can detect the focusing region, as shown in Figure 8b. For tampered images, when there are multiple focusing regions at different depth-of-field positions, the results given by the sharpness detection operator will show two clear regions separated by a certain distance, as shown in Figure 8d. It can also be seen from Figure 8e–h that when there is no obvious change in the depth of field in the untampered image and the overall image has high sharpness, or when there are multiple objects on the focal plane, the results given by the sharpness detection operator will be scattered into multiple clear regions, as shown in Figure 8f. At the same time, when the background is blurred at a different depth of field, a foreground with extreme sharpness will also lead to detection errors. For tampered images, the results given by the sharpness detection operator will be confused with the detection results of untampered images when one or more of the following conditions exist: the focusing region of the untampered image is not obvious, the introduced tampered region is not the focusing region of the other image, or the tampered area overlaps with the focusing area of the pre-tampering image.
As the results obtained by NSS in Section 2.3 are statistical features and cannot be displayed intuitively, only the results of the CCW salient region detection operator are shown in Figure 9. Figure 9a–f illustrate the fact that the saliency detection operator can divide the image into ROI and RONI, and the tampered region exists in one of the two regions due to its relatively consistent brightness and structural complexity, thus facilitating subsequent NSS feature extraction. From Figure 9g–i, it can also be seen that there may be brightness changes or structure complexity changes within the tampered area, or there may be multiple tampered areas. As a result, the brightness and structure complexity of some areas in the tampered region are similar to those in the untampered area. At this time, the tampered region will appear in both ROI and RONI, thus eventually leading to incorrect classification results.
The above analysis indicates that a single feature may not extract full features or over-extract a single class of details, which may lead to incorrect classification results. Moreover, it is quite common that one feature detection fails while the other two succeed, as shown in Figure 10.
Table 1 shows the classification results based on the extracted features of the images shown in Figure 10 using the CCW chroma detection operator, the sharpness detection operator, and the salient region detection operator with NSS. Figure 10a,b show examples in which the CCW chroma detection operator fails while the sharpness detection operator and NSS classify correctly. Figure 10c,d show examples in which the sharpness detection operator fails while the CCW chroma detection operator and NSS classify correctly. Figure 10e,f show examples in which NSS fails while the CCW chroma detection operator and the sharpness detection operator classify correctly. The results show that combining features captures the chroma changes, sharpness changes, and texture distortion of color images, providing more comprehensive feature extraction and information recognition than any single feature can offer, and partially relieving the difficulties of single-feature classification.
Using the random forest classifier and the K-fold cross-validation method, the three features can be classified. The number of feature variables selected at random for each decision split is equal to the square root of the total number of variables for classification. The maximum depth of the trees is not limited; nodes are expanded until all leaves are pure or until all leaves contain fewer than the minimum number of samples required to split an internal node, which is set to 2. The minimum number of observations per tree leaf is 1. The number of decision trees in the random forest affects the classification performance of the algorithm. Figure 11 shows the out-of-bag error over the number of grown classification trees.
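The out-of-bag error curve of Figure 11 can be reproduced, in spirit, by growing the forest incrementally and reading off the OOB score at each size; the sketch below uses scikit-learn's warm_start mechanism and the hyperparameters stated above (square root of the feature count per split, unlimited depth, a minimum of 2 samples per split, and 1 per leaf), though the exact numbers in Figure 11 come from the paper's MATLAB implementation.

# A sketch of an out-of-bag error curve versus forest size, using the
# hyperparameters described in the text.
from sklearn.ensemble import RandomForestClassifier

def oob_error_curve(X, y, max_trees=200, step=10):
    clf = RandomForestClassifier(
        n_estimators=step, warm_start=True, oob_score=True, bootstrap=True,
        max_features='sqrt', max_depth=None, min_samples_split=2,
        min_samples_leaf=1, random_state=0)
    errors = []
    for n in range(step, max_trees + 1, step):
        clf.set_params(n_estimators=n)
        clf.fit(X, y)                                # warm_start only grows the new trees
        errors.append((n, 1.0 - clf.oob_score_))     # OOB error = 1 - OOB accuracy
    return errors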
It can be seen from Figure 11 that the out-of-bag error tends to decrease and stabilize as the number of grown trees increases. Moreover, the time consumption clearly increases with the number of grown trees. Therefore, considering the stability, accuracy, and time consumption of the classification, the number of decision trees of the random forest is set to 174 during the testing phase. For Tampering ImageNet and CASIA ITDE V2.0, K is set to 10, and for the Columbia dataset, K is set to 7. In the judgment stage, this paper uses three classifiers to determine the output category of a given test sample, and votes to decide whether the image has been tampered with according to the simple majority rule [34]. Since the majority voting policy [34] can only be performed with an odd number of voters, and since the number of extracted image features is equal to the number of voters (classifiers), this paper does not experiment with two extracted features, as two is an even number, which violates the majority voting policy. If the number of features to be classified were an even number, the random forest model would also consist of the same even number of classifiers (voters), which could lead to an undesirable classification case where half of the classifiers vote that the input image has been tampered with and the other half vote that it has not been tampered with. As a result, this is not majority voting but a tie, based on which we cannot say whether the image has been tampered with, and thus a final classification decision cannot be made. Taking all these facts into consideration, the proposed random forest model consists of three binary classifiers (voters) instead of two, where each of them takes one of the three extracted features as an input and delivers a class label as an output, and then the majority voting policy is applied to determine the final classification decision. We first use the accuracy percentage as the evaluation index to objectively evaluate the performance of the different algorithms. Table 2 shows the classification results obtained by exploiting each of the three image features separately, and the classification results achieved by combining them into one batch and feeding them simultaneously into the proposed model. The final classification decision is achieved by applying the majority voting policy to the outputs of the model's three voters (classifiers). Table 3 shows the accuracy comparison between the proposed method in this paper and the existing algorithms in the literature.
The results in Table 3 show that the proposed algorithm can greatly improve the accuracy of image tampering detection. This is because the superpixel segmentation and complementary color operator in this algorithm can extract chroma features from the chroma information of the image and distinguish the untampered region from the tampered ones. The sharpness detection operator can extract more detailed sharpness characteristics, to reflect the sharpness change between different regions. In addition, the saliency detection and texture distortion statistics extracted by NSS can make up for the shortcomings of the complementary color and sharpness detection operators. The combination of these three features can further improve the accuracy of image tampering detection.
Figure 12 shows images from Tampering ImageNet, the Columbia dataset, and CASIA ITDE V2.0 that were detected successfully by the algorithm in this paper but that the comparison algorithms failed to detect.
Figure 12 shows that for images with complex textures, the single feature extracted by the comparison algorithm lacks chroma information and location information. However, the algorithm in this paper integrates chroma feature, sharpness feature, and texture distortion statistics, enabling it to detect tampered images that cannot be recognized by relying on a single feature.
The images that cannot be correctly classified by any of the algorithms listed in Table 3 are shown in Figure 13.
The results in Figure 13 indicate that when there are multiple factors causing detection errors in untampered or tampered images, e.g., great chroma variations in untampered images or slight chroma changes in tampered images, insignificant differences in the depth of the field, and high overall sharpness with no obvious focusing area, neither the single-feature nor the multi-feature algorithm can provide a correct classification.
On the other hand, the accuracy, as the most commonly used indicator, cannot reasonably reflect the classification ability of the model when the number of samples in different categories of the dataset is unbalanced. Therefore, the precision, recall, and F1-score metrics are also employed for efficient model evaluation. Each fold in the cross-validation is calculated separately and the average is taken as the final result. The proposed method’s performance is compared to existing related approaches in terms of the other three different evaluation metrics. The quantitative results are shown in Table 4.
Table 4 shows that compared with the methods reported in the existing literature, the feature extraction and classification model proposed in this paper performs better in terms of the precision, recall, and F1-score evaluation metrics. It can be seen from Table 3 and Table 4 that the algorithm proposed in this paper shows an outstanding performance on the three different public benchmark testing datasets, indicating that the introduced algorithm has promising applicability in various real-life fields related to image tampering detection.

4. Discussion

In this paper, a multi-feature extraction-based algorithm for stitching tampered/untampered image classification is proposed based on pattern recognition technology. The algorithm takes chroma, sharpness, and texture differences and distortion statistics into consideration, and effectively reduces the incompleteness of single-feature algorithms. However, the algorithm proposed in this paper only covers a limited range of image features, and it still cannot effectively identify stitching tampering when the feature changes in the tampered area are insignificant. In future work, this study will be extended to address these limitations and further problems, such as how to cover a wider range of image features, and how to benefit from cutting-edge deep learning technology to identify more subtle tampering and further improve the detection performance. In addition, significant parts of the adopted tampering detection datasets are artificially constructed, so an expansion of the dataset introduced in this study and an improvement of its contents to be more realistic and comprehensive for general applications in image tampering detection are also planned.

5. Conclusions

In this research work, a multi-feature extraction-based classification algorithm for color image stitching tampering was developed. In this system, three types of image feature extraction techniques, namely CCW chroma detection, sharpness detection, and salient region detection with NSS, were innovatively integrated to extract various image features, such as the chroma changes in multiple directions, sharpness features, and texture distortion and differences. A random forest classification algorithm was utilized to classify the extracted features. It was experimentally shown that the multi-feature-based extraction strategy can classify untampered and tampered images more accurately, and markedly boost the detection accuracy on various benchmark testing datasets. Compared to the related methods reported in the existing literature, higher classification accuracy values of 91.00%, 95.24%, and 88.02% were obtained on the Tampering ImageNet, Columbia, and CASIA ITDE V2.0 datasets, respectively. The proposed algorithm was also ranked the highest in terms of the other three evaluation metrics: precision, recall, and F1-score. The results show that the introduced algorithm favorably outperformed the existing techniques, both qualitatively and quantitatively, and achieved state-of-the-art image tampering detection performance. The proposed algorithm is an effective tool for tackling the growing challenges related to visual data forgery in many critical fields.

Author Contributions

Conceptualization, R.J. and J.Z.; methodology, R.J.; software, R.J.; validation, R.J.; formal analysis, R.J.; investigation, R.J.; data curation, R.J.; writing—original draft preparation, R.J.; writing—review and editing, R.J., A.N., D.L. and J.Z.; visualization, R.J.; project administration, J.Z.; funding acquisition, D.L. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under Grant 11827808 and 11974082.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data can be found here: ImageNet: https://image-net.org/; Columbia Dataset: https://www.ee.columbia.edu/ln/dvmm/downloads/authsplcuncmp/; CASIA ITDE V2.0: http://forensics.idealtest.org/.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pham, N.T.; Lee, J.-W.; Park, C.-S. Structural Correlation Based Method for Image Forgery Classification and Localization. Appl. Sci. 2020, 10, 4458. [Google Scholar] [CrossRef]
  2. Jain, I.; Goel, N. Advancements in Image Splicing and Copy-move Forgery Detection Techniques: A Survey. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 470–475. [Google Scholar]
  3. Zhu, Y.; Shen, X.; Liu, S.; Zhang, X.; Yan, G. Image Splicing Location Based on Illumination Maps and Cluster Region Proposal Network. Appl. Sci. 2021, 11, 8437. [Google Scholar] [CrossRef]
  4. Li, X.; Yu, N.; Zhang, X.; Zhang, W.; Li, B.; Lu, W.; Wang, W.; Liu, X. Overview of digital media forensics technology. J. Image Graph. 2021, 26, 1216–1226. [Google Scholar] [CrossRef]
  5. Lin, X.; Li, J.-H.; Wang, S.-L.; Liew, A.-W.-C.; Cheng, F.; Huang, X.-S. Recent Advances in Passive Digital Image Security Forensics: A Brief Review. Engineering 2018, 4, 29–39. [Google Scholar] [CrossRef]
  6. Wang, W.; Dong, J.; Tan, T. Effective image splicing detection based on image chroma. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1257–1260. [Google Scholar]
  7. Zhang, K.; Liang, Y.; Zhang, J.; Wang, Z.; Li, X. No One Can Escape: A General Approach to Detect Tampered and Generated Image. IEEE Access 2019, 7, 129494–129503. [Google Scholar] [CrossRef]
  8. William, Y.; Safwat, S.; Salem, M.A.M. Robust Image Forgery Detection Using Point Feature Analysis. In Proceedings of the 2019 Federated Conference on Computer Science and Information Systems (FedCSIS), Leipzig, Germany, 1–4 September 2019; pp. 373–380. [Google Scholar]
  9. Mazumdar, A.; Jacob, J.; Bora, P.K. Forgery Detection in Digital Images through Lighting Environment Inconsistencies. In Proceedings of the 2018 Twenty Fourth National Conference on Communications (NCC), Hyderabad, India, 25–28 February 2018; pp. 1–6. [Google Scholar]
  10. Sekhar, P.N.R.L.C.; Shankar, T.N. An Object-Based Detection of Splicing Forgery using Color Illumination Inconsistencies. In Proceedings of the 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kharagpur, India, 6–8 July 2021; pp. 1–5. [Google Scholar]
  11. Mushtaq, S.; Mir, A.H. Forgery detection using statistical features. In Proceedings of the 2014 Innovative Applications of Computational Intelligence on Power, Energy and Controls with their impact on Humanity (CIPECH), Ghaziabad, India, 28–29 November 2014; pp. 92–97. [Google Scholar]
  12. Vaishnavi, D.; Subashini, T.S. Recognizing image splicing forgeries using histogram features. In Proceedings of the 2016 3rd MEC International Conference on Big Data and Smart City (ICBDSC), Muscat, Oman, 15–16 March 2016; pp. 1–4. [Google Scholar]
  13. Rhee, K.H. Detection of Spliced Image Forensics Using Texture Analysis of Median Filter Residual. IEEE Access 2020, 8, 103374–103384. [Google Scholar] [CrossRef]
  14. Shi, Y.Q.; Chen, C.; Chen, W. A Natural Image Model Approach to Splicing Detection. In Proceedings of the Workshop on Multimedia & Security, Dallas, TX, USA, 20—21 September 2007. [Google Scholar]
  15. Chen, Y.; Li, D.; Zhang, J. Complementary Color Wavelet: A Novel Tool for the Color Image/Video Analysis and Processing. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 12–27. [Google Scholar] [CrossRef]
  16. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
  17. Hsu, Y.; Chang, S. Detecting Image Splicing using Geometry Invariants and Camera Characteristics Consistency. In Proceedings of the 2006 IEEE International Conference on Multimedia and Expo, Toronto, ON, Canada, 9–12 July 2006; pp. 549–552. [Google Scholar]
  18. Dong, J.; Wang, W.; Tan, T. CASIA Image Tampering Detection Evaluation Database. In Proceedings of the 2013 IEEE China Summit and International Conference on Signal and Information Processing, Beijing, China, 6–10 July 2013; pp. 422–426. [Google Scholar]
  19. Weijer, J.V.D.; Gevers, T.; Bagdanov, A.D. Boosting color saliency in image feature detection. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 150–156. [Google Scholar] [CrossRef] [Green Version]
  20. Xu, Y.; Yu, L.; Xu, H.; Zhang, H.; Nguyen, T. Vector Sparse Representation of Color Image Using Quaternion Matrix Analysis. IEEE Trans. Image Process. 2015, 24, 1315–1329. [Google Scholar] [CrossRef]
  21. Pridmore, R.W. Complementary colors theory of color vision: Physiology, color mixture, color constancy and color perception. Color Res. Appl. 2011, 36, 394–412. [Google Scholar] [CrossRef]
  22. Pridmore, R.W. Complementary colors: The structure of wavelength discrimination, uniform hue, spectral sensitivity, saturation, chromatic adaptation, and chromatic induction. Color Res. Appl. 2009, 34, 233–252. [Google Scholar] [CrossRef]
  23. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Wei, H.; Liu, Z. An Improved Convolutional Neural Network Image Fusion Method. Video Eng. 2021, 45, 21–26. [Google Scholar] [CrossRef]
  25. Xu, Y.; Yang, G. Research on Bank Card Number Identification Based on Sobel Operator. Comput. Digit. Eng. 2021, 49, 1672–1675. [Google Scholar] [CrossRef]
  26. Fan, M.; Guo, Z.; Chai, X.; Shang, J. Optimized realization of Sobel edge detection algorithm for FT-M7002. Comput. Eng. 2021. [Google Scholar] [CrossRef]
  27. Chen, L.; Li, W.; Chen, C.; Qin, H.; Lai, J. Efficiency contrast of digital image definition functions for general evaluation. Comput. Eng. Appl. 2013, 49, 152–155. [Google Scholar] [CrossRef]
  28. Huang, H.; Zhang, J. A Natural Scene Statistical Saliency Map Model for Color Images. J. Fudan Univ. (Nat. Sci.) 2014, 53, 51–58. [Google Scholar] [CrossRef]
  29. Moorthy, A.K.; Bovik, A.C. Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef]
  30. Liu, C. Saliency and Similarity Study of Color Image; Fudan University: Shanghai, China, 2017. [Google Scholar]
  31. Chen, Y.; Zhang, J. A natural scene statistical saliency map model in complementary color wavelet domain. Microelectron. Comput. 2019, 36, 17–22. [Google Scholar] [CrossRef]
  32. Ou, F.; Wang, Y.; Zhu, G. A Novel Blind Image Quality Assessment Method Based on Refined Natural Scene Statistics. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, China, 22–25 September 2019; pp. 1004–1008. [Google Scholar]
  33. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  34. Song, Y.; Chen, J.; Guo, Y.; Wang, C. Research on multi-feature medical image recognition based on data fusion. Appl. Res. Comput. 2008, 1750–1752. [Google Scholar] [CrossRef]
Figure 1. The overall process of the proposed detection algorithm of stitching tampering. H and W denote the height and the width of the input image, respectively.
Figure 2. The results of the CCW chroma detection. (a,d) Tampered images; (b,e) the images after superpixel segmentation; (c,f) chroma detection results. The red box area denotes the tampered area hereinafter.
Figure 3. The separated ROI and RONI. (a,d) Tampered images; (b,e) ROI of images; (c,f) RONI of images.
Figure 4. Examples of the images in Tampering ImageNet. (a,b) Untampered images; (c,d) tampered images.
Figure 5. Examples of the images in Columbia dataset. (a,b) Untampered images; (c,d) tampered images.
Figure 6. Examples of the images in CASIA ITDE V2.0. (a,b) Untampered images; (c,d) tampered images.
Figure 7. Examples of the CCW-based chroma detection results, where (a–d) are the images with correct classification, and (e–h) are the images with failed classification. (a,e) Untampered images; (b,f) chroma detection results of untampered images using CCW; (c,g) tampered images; (d,h) chroma detection results of tampered images using CCW.
Figure 8. Examples of the sharpness detection results, where (a–d) are the images with correct classification, and (e–h) are the images with failed classification. (a,e) Untampered images; (b,f) the sharpness detection results of the untampered images; (c,g) tampered images; (d,h) the sharpness detection results of the tampered images.
Figure 9. Examples of the CCW salient map results, where (a–f) are the images with correct classification, and (g–i) are the images with failed classification. (a) Untampered image; (b) the ROI of the untampered image; (c) the RONI of the untampered image; (d,g) tampered images; (e,h) the ROI of the tampered image; (f,i) the RONI of the tampered image.
Figure 10. The extracted features of the proposed algorithm. From left to right are the color images, the chroma detection results of CCW, the results of the sharpness detection operator, and the ROI and RONI given by the CCW salient map. (a,c,e) Untampered images; (b,d,f) tampered images. See Table 1 for details.
Figure 11. The out-of-bag error over the number of grown classification trees.
Figure 12. Images detected successfully by the proposed algorithm but failed to be detected by the comparison algorithm in the dataset. From left to right are the color images, the chroma detection results of CCW, the results of the sharpness detection operator, and the ROI and RONI given by the CCW salient map. (a,c,e) Untampered images in Tampering ImageNet, Columbia, and CASIA ITDE V2.0, respectively; (b,d,f) tampered images in Tampering ImageNet, Columbia, and CASIA ITDE V2.0, respectively.
Figure 13. Images that cannot be correctly detected by any of the algorithms. From left to right are the color images, the chroma detection results of CCW, the results of the sharpness detection operator, and the ROI and RONI given by the CCW salient map. (a–e) Untampered images; (f–h) tampered images.
Table 1. The single-feature detection results of the images in Figure 10 (✓: correct classification; ×: failed classification).

Detection operator | (a) | (b) | (c) | (d) | (e) | (f)
CCW chroma detection operator | × | × | ✓ | ✓ | ✓ | ✓
Sharpness detection operator | ✓ | ✓ | × | × | ✓ | ✓
Salient region detection operator of CCW and NSS | ✓ | ✓ | ✓ | ✓ | × | ×
Table 2. The accuracy of the single features and the combined features.

Dataset | CCW Chroma Detection Operator | Sharpness Detection Operator | Salient Region Detection Operator of CCW and NSS | Combined Features
Tampering ImageNet (Ours) | 80.90% | 80.30% | 79.10% | 91.00%
Columbia | 79.59% | 83.67% | 88.44% | 95.24%
CASIA 2.0 | 85.47% | 84.92% | 85.03% | 88.02%
Table 3. The accuracy of the different image tampering detection algorithms.

Dataset | Ref. [6] (Cb + SVM) | Ref. [6] (Cb + KNN) | Ref. [6] (Cb + BPNN) | Ref. [6] (Cr + SVM) | Ref. [6] (Cr + KNN) | Ref. [6] (Cr + BPNN) | Ref. [11] | Ref. [12] | Ref. [8] | Ref. [13] | Proposed
Tampering ImageNet (Ours) | 72.50% | 62.60% | 73.70% | 67.20% | 59.00% | 72.00% | 66.80% | 63.00% | 71.10% | 74.30% | 91.00%
Columbia | 85.71% | 79.60% | 93.20% | 88.40% | 77.55% | 84.40% | 81.60% | 79.59% | 81.00% | 70.10% | 95.24%
CASIA 2.0 | 80.90% | 83.10% | 86.50% | 83.30% | 82.85% | 86.70% | 84.20% | 82.85% | 82.90% | 82.10% | 88.02%
Table 4. Quantitative results of the proposed method in terms of three evaluation metrics, with a comparison to existing related approaches.

Dataset | Metric | Ref. [6] (Cb + SVM) | Ref. [6] (Cb + KNN) | Ref. [6] (Cb + BPNN) | Ref. [6] (Cr + SVM) | Ref. [6] (Cr + KNN) | Ref. [6] (Cr + BPNN) | Ref. [11] | Ref. [12] | Ref. [8] | Ref. [13] | Proposed
Tampering ImageNet (Ours) | Precision | 73.58% | 41.59% | 52.48% | 62.24% | 57.58% | 50.49% | 61.45% | 50.25% | 41.89% | 67.62% | 86.76%
Tampering ImageNet (Ours) | Recall | 79.43% | 44.07% | 54.69% | 73.80% | 75.95% | 49.30% | 69.08% | 66.49% | 35.04% | 70.15% | 92.83%
Tampering ImageNet (Ours) | F1-score | 0.76 | 0.41 | 0.53 | 0.67 | 0.64 | 0.49 | 0.65 | 0.56 | 0.37 | 0.69 | 0.90
Columbia | Precision | 77.14% | 26.56% | 14.85% | 42.35% | 39.29% | 6.55% | 29.72% | 21.43% | 24.52% | 2.38% | 90.00%
Columbia | Recall | 49.05% | 30.27% | 30.27% | 52.02% | 26.43% | 16.80% | 39.76% | 6.43% | 26.46% | 3.57% | 66.67%
Columbia | F1-score | 0.67 | 0.26 | 0.19 | 0.45 | 0.30 | 0.09 | 0.31 | 0.10 | 0.25 | 0.03 | 0.80
CASIA 2.0 | Precision | 86.49% | 18.62% | 48.09% | 58.18% | 28.16% | 44.54% | 61.64% | 24.35% | 14.39% | 32.13% | 76.60%
CASIA 2.0 | Recall | 57.21% | 69.37% | 55.14% | 62.89% | 31.87% | 59.07% | 62.13% | 32.96% | 28.17% | 35.27% | 61.20%
CASIA 2.0 | F1-score | 0.67 | 0.29 | 0.50 | 0.60 | 0.32 | 0.50 | 0.62 | 0.25 | 0.19 | 0.34 | 0.73
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
