A New Method to Detect Splicing Image Forgery Using Convolutional Neural Network
Abstract
1. Introduction
1.1. Traditional Splicing Forgery Detection Approach
1.2. Deep Learning-Based Splicing Forgery Detection Approach
- The proposed model achieves high accuracy with a small number of parameters compared with recently published approaches, which can be considered a key strength of the proposed architecture. Moreover, the proposed model is suitable for environments with limited memory and CPU resources.
- The presented CNN model has four convolutional layers, four max-pooling layers, one global average pooling layer, and one fully connected layer, with 97,698 parameters in total, as shown in Table 1; it is therefore a lightweight CNN model.
- Experiments were conducted on the datasets, and an analytical comparison was made between the proposed model’s results and previously published models (Alahmadi et al. [10], Kanwal et al. [20], Zhang et al. [23], Ding et al. [29], Itier et al. [34], Kadam et al. [30], Abd El-Latif et al. [35], Nath et al. [28], and Niyishaka et al. [22]). This comparison showed that the proposed model is more efficient and accurate than the other investigated models.
2. Preliminaries
Understanding of a CNN
3. Proposed Approach
- The first stage is preprocessing. In this stage, the image is resized to a suitable size so that it can be fed to the next stage without cropping any part of the input image.
- The second stage is feature extraction. This stage consists of four convolutional layers, each followed by a max-pooling layer. The first convolutional layer has 16 feature maps, a filter size of (3,3), an input shape of (224,224), and a ReLU activation function; the first max-pooling layer has a pool size of (2,2). The second convolutional layer has 32 feature maps, a filter size of (3,3), an input shape of (111,111), and a ReLU activation; the second max-pooling layer has a pool size of (2,2). The third convolutional layer has 64 feature maps, a filter size of (3,3), an input shape of (54,54), and a ReLU activation; the third max-pooling layer has a pool size of (2,2). The fourth convolutional layer has 128 feature maps, a filter size of (3,3), an input shape of (26,26), and a ReLU activation; the fourth max-pooling layer has a pool size of (2,2). These hyperparameters are listed in Table 1. The main role of the convolutional layers is to extract features: each layer produces its own feature maps according to its specified filters. After each convolutional layer, the feature maps are downsampled, reducing their size before they are provided to the next layer; this process is called max-pooling [36]. The pooled maps serve as the input to the next convolutional layer.
- The third stage is classification. The output of the last convolutional block is fed to the global average pooling layer; the resulting pooled feature maps are formed into a vector and fed to the fully connected layer, which is a dense layer that classifies the input into two categories. Finally, we can detect whether the input image is forged or authentic.
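The three stages above can be sketched in Keras. This is an illustrative reconstruction from the layer description, not the authors’ released code; the training configuration (optimizer, loss, epochs) is not specified in this section and is omitted.

```python
# Sketch of the described lightweight CNN in Keras (TensorFlow).
# Layer settings follow the text: four 3x3 conv layers (16/32/64/128
# feature maps, ReLU), each followed by 2x2 max pooling, then global
# average pooling and a 2-way output (authentic vs. forged).
import tensorflow as tf
from tensorflow.keras import layers, models


def build_model(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),        # preprocessing stage resizes images to 224x224
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.GlobalAveragePooling2D(),        # classification stage
        layers.Dense(2, activation="softmax"),  # authentic vs. forged
    ])
    return model


model = build_model()
print(model.count_params())  # 97698, matching the paper's reported total
```

With the default valid padding, this sketch reproduces the activation shapes and the 97,698-parameter total reported in Table 1.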
4. Experimental Results
4.1. Datasets
4.2. Evaluation Metrics
4.3. Experimental Results
4.4. The Results and Comparison over the CASIA 1.0, CASIA 2.0, and CUISDE Datasets
5. Conclusions
6. Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Marcon, F.; Pasquini, C.; Boato, G. Detection of Manipulated Face Videos over Social Networks: A Large-Scale Study. J. Imaging 2021, 7, 193.
2. Bi, X.; Zhang, Z.; Xiao, B. Reality Transform Adversarial Generators for Image Splicing Forgery Detection and Localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 14274–14283.
3. Eltoukhy, M.M.; Elhoseny, M.; Hosny, K.M.; Singh, A.K. Computer aided detection of mammographic mass using exact Gaussian–Hermite moments. J. Ambient Intell. Humaniz. Comput. 2018, 247, 1–9.
4. Ross, A.; Banerjee, S.; Chowdhury, A. Security in smart cities: A brief review of digital forensic schemes for biometric data. Pattern Recognit. Lett. 2020, 138, 346–354.
5. Velmurugan, S.; Subashini, T.; Prashanth, M. Dissecting the literature for studying various approaches to copy move forgery detection. Int. J. Adv. Sci. 2020, 29, 6416–6438.
6. Kadam, K.D.; Ahirrao, S.; Kotecha, K. Efficient Approach towards Detection and Identification of Copy Move and Image Splicing Forgeries Using Mask R-CNN with MobileNet V1. Comput. Intell. Neurosci. 2022, 2022, 6845326.
7. He, Z.; Lu, W.; Sun, W.; Huang, J. Digital image splicing detection based on Markov features in DCT and DWT domain. Pattern Recognit. 2012, 45, 4292–4299.
8. Zhao, X.; Li, S.; Wang, J.L.; Yang, K. Optimal Chroma-like channel design for passive color image splicing detection. EURASIP J. Adv. Signal Process. 2012, 2012, 240.
9. Su, B.; Yuan, Q.; Wang, S.; Zhao, C.; Li, S. Enhanced state selection Markov model for image splicing detection. EURASIP J. Wirel. Commun. Netw. 2014, 2014, 7.
10. Alahmadi, A.; Hussain, M.; Aboalsamh, H.; Muhammad, G.; Bebis, G.; Mathkour, H. Passive detection of image forgery using DCT and local binary pattern. Signal Image Video Process. 2016, 11, 81–88.
11. Sunitha, K.; Krishna, A. Efficient Keypoint based Copy Move Forgery Detection Method using Hybrid Feature Extraction. In Proceedings of the 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), Bangalore, India, 5–7 March 2020; pp. 670–675.
12. Moghaddasi, Z.; Jalab, H.A.; Noor, R.M.; Aghabozorgi, S. Improving RLRN Image Splicing Detection with the Use of PCA and Kernel PCA. Sci. World J. 2014, 2014, 606570.
13. Muhammad, E.; Qureshi, A. Combining spatial and DCT based Markov features for enhanced blind detection of image splicing. Pattern Anal. Appl. 2015, 18, 713–723.
14. Zhang, Q.; Lu, W.; Weng, J. Joint image splicing detection in DCT and Contourlet transform domain. J. Vis. Commun. Image Represent. 2016, 40, 449–458.
15. Pun, C.-M.; Liu, B.; Yuan, X.-C. Multi-scale noise estimation for image splicing forgery detection. J. Vis. Commun. Image Represent. 2016, 38, 195–206.
16. Li, C.; Ma, Q.; Xiao, L.; Li, M.; Zhang, A. Image splicing detection based on Markov features in QDCT domain. Neurocomputing 2017, 228, 29–36.
17. Zeng, H.; Zhan, Y.; Kang, X.; Lin, X. Image splicing localization using PCA-based noise level estimation. Multimed. Tools Appl. 2016, 76, 4783–4799.
18. Zhu, N.; Li, Z. Blind image splicing detection via noise level function. Signal Process. Image Commun. 2018, 68, 181–192.
19. Moghaddasi, Z.; Jalab, H.A.; Noor, R. Image splicing forgery detection based on low-dimensional singular value decomposition of discrete cosine transform coefficients. Neural Comput. Appl. 2018, 31, 7867–7877.
20. Kanwal, N.; Girdhar, A.; Kaur, L.; Bhullar, J.S. Digital image splicing detection technique using optimal threshold based local ternary pattern. Multimed. Tools Appl. 2020, 79, 12829–12846.
21. Revi, K.R.; Wilscy, M.; Antony, R. Portrait photography splicing detection using ensemble of convolutional neural networks. J. Intell. Fuzzy Syst. 2021, 41, 5347–5357.
22. Niyishaka, P.; Bhagvati, C. Image splicing detection technique based on Illumination-Reflectance model and LBP. Multimed. Tools Appl. 2021, 80, 2161–2175.
23. Zhang, Y.; Shi, T. Image Splicing Detection Scheme Based on Error Level Analysis and Local Binary Pattern. Netw. Intell. 2021, 6, 303–312.
24. Chen, B.; Qi, X.; Sun, X.; Shi, Y.-Q. Quaternion pseudo-Zernike moments combining both of RGB information and depth information for color image splicing detection. J. Vis. Commun. Image Represent. 2017, 49, 283–290.
25. Salloum, R.; Ren, Y.; Kuo, C.-C.J. Image Splicing Localization using a Multi-task Fully Convolutional Network (MFCN). J. Vis. Commun. Image Represent. 2018, 51, 201–209.
26. Xiao, B.; Wei, Y.; Bi, X.; Li, W.; Ma, J. Image splicing forgery detection combining coarse to refined convolutional neural network and adaptive clustering. Inf. Sci. 2019, 511, 172–191.
27. Ahmed, B.; Gulliver, T.A.; AlZahir, S. Image splicing detection using mask-RCNN. Signal Image Video Process. 2020, 14, 1035–1042.
28. Nath, S.; Naskar, R. Automated image splicing detection using deep CNN-learned features and ANN-based classifier. Signal Image Video Process. 2021, 15, 1601–1608.
29. Ding, H.; Chen, L.; Tao, Q.; Fu, Z.; Dong, L.; Cui, X. DCU-Net: A dual-channel U-shaped network for image splicing forgery detection. Neural Comput. Appl. 2021, 1710, 1–17.
30. Kadam, K.D.; Ahirrao, S.; Kotecha, K.; Sahu, S. Detection and Localization of Multiple Image Splicing Using MobileNet V1. IEEE Access 2021, 9, 162499–162519.
31. Hosny, K.M.; Mortda, A.M.; Fouda, M.M.; Lashin, N.A. An Efficient CNN Model to Detect Copy-Move Image Forgery. IEEE Access 2022, 10, 48622–48632.
32. Dong, J.; Wang, W.; Tan, T. CASIA image tampering detection evaluation database. In Proceedings of the 2013 IEEE China Summit and International Conference on Signal and Information Processing, Beijing, China, 6–10 July 2013; pp. 422–426.
33. Hsu, Y.F.; Chang, S.F. Detecting Image Splicing Using Geometry Invariants and Camera Characteristics Consistency. In Proceedings of the 2006 IEEE International Conference on Multimedia and Expo, Toronto, ON, Canada, 9–12 July 2006; pp. 549–552.
34. Itier, V.; Strauss, O.; Morel, L.; Puech, W. Color noise correlation-based splicing detection for image forensics. Multimed. Tools Appl. 2021, 80, 13215–13233.
35. El-Latif, E.I.A.; Taha, A.; Zayed, H.H. A Passive Approach for Detecting Image Splicing using Deep Learning and Haar Wavelet Transform. Int. J. Comput. Netw. Inf. Secur. 2019, 11, 28–35.
36. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
37. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629.
38. Musallam, A.S.; Sherif, A.S.; Hussein, M.K. A New Convolutional Neural Network Architecture for Automatic Detection of Brain Tumors in Magnetic Resonance Imaging Images. IEEE Access 2022, 10, 2775–2782.
Layers | Activation Shape | Activation Size | Number of Parameters |
---|---|---|---|
Input layer | (224,224,3) | 163,968 | 0 |
Conv1 | (222,222,16) | 788,544 | 448 |
Max pool1 | (111,111,16) | 197,136 | 0 |
Conv2 | (109,109,32) | 380,192 | 4640 |
Max pool2 | (54,54,32) | 93,312 | 0 |
Conv3 | (52,52,64) | 173,056 | 18,496 |
Max pool3 | (26,26,64) | 43,264 | 0 |
Conv4 | (24,24,128) | 73,728 | 73,856 |
Max pool4 | (12,12,128) | 18,432 | 0 |
Global Average Pooling 2D | 18,432 | 18,432 | 0 |
Fully Connected | 18,432 | 18,432 | 0 |
Output layer | 2 | 2 | 258 |
Total number of parameters | | | 97,698 |
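The activation shapes and conv-layer parameter counts in Table 1 follow from standard valid-convolution arithmetic. The short check below (plain Python, not from the paper) reproduces the spatial sizes and the 97,698 total:

```python
# Reproduce Table 1's parameter counts from convolution arithmetic.
# A 3x3 "valid" convolution shrinks each spatial side by 2; a 2x2 max
# pool halves it (floor division). Parameters per conv layer are
# (3*3*in_channels + 1) * filters, the +1 being the bias term.
size, in_ch = 224, 3
total = 0
for filters in (16, 32, 64, 128):
    size = size - 2                       # 3x3 valid convolution
    total += (3 * 3 * in_ch + 1) * filters
    size = size // 2                      # 2x2 max pooling
    in_ch = filters

total += (in_ch + 1) * 2                  # 2-way output layer after global average pooling
print(size, in_ch, total)                 # 12 128 97698
```

The spatial sizes along the way (222, 111, 109, 54, 52, 26, 24, 12) match the activation shapes in Table 1, and the final count matches the reported total.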
Dataset | Composition (Tampered/Original) | Image Size | Training Images (T/O) | Validation Images (T/O) | Testing Images (T/O) | Input Shape
---|---|---|---|---|---|---
CASIA 1.0 [32] | 913 (462/451) | 384 × 256 and 256 × 384 pixels | 457 (231/226) | 229 (116/113) | 227 (115/112) | 224 × 224 pixels
CASIA 2.0 [32] | 12,613 (5122/7491) | 900 × 600 pixels | 6308 (2562/3746) | 3154 (1281/1873) | 3152 (1280/1872) | 224 × 224 pixels
CUISDE [33] | 363 (180/183) | 757 × 568 to 1152 × 768 pixels | 183 (90/93) | 90 (45/45) | 90 (45/45) | 224 × 224 pixels
Dataset | Actual Class | Predicted + | Predicted − | Total
---|---|---|---|---
CASIA 1.0 | + | 115 | 0 | 115
 | − | 2 | 110 | 112
 | Total | 117 | 110 | 227
CASIA 2.0 | + | 1850 | 22 | 1872
 | − | 0 | 1280 | 1280
 | Total | 1850 | 1302 | 3152
CUISDE | + | 45 | 0 | 45
 | − | 0 | 45 | 45
 | Total | 45 | 45 | 90
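The headline figures reported in the following tables can be recomputed from these confusion matrices. The snippet below is an illustrative check (not the paper’s code); rows are actual classes, columns are predicted, and the metrics are computed for the first class of each matrix:

```python
# Derive accuracy, recall, precision, and F1 from a 2x2 confusion
# matrix. Rows are actual classes, columns are predicted classes;
# metrics are computed for the first (positive) class.
def metrics(m):
    (tp, fn), (fp, tn) = m
    total = tp + fn + fp + tn
    accuracy = 100 * (tp + tn) / total
    recall = 100 * tp / (tp + fn)
    precision = 100 * tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1


matrices = {
    "CASIA 1.0": [[115, 0], [2, 110]],
    "CASIA 2.0": [[1850, 22], [0, 1280]],
    "CUISDE":    [[45, 0], [0, 45]],
}
for name, m in matrices.items():
    acc, rec, prec, f1 = metrics(m)
    print(f"{name}: acc={acc:.2f} recall={rec:.2f} precision={prec:.2f} f1={f1:.2f}")
```

For CASIA 1.0 this yields an accuracy of about 99.1%, a recall of 100%, a precision of 98.29%, and an F1-score of 99.14%, consistent with the proposed method’s reported results.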
Dataset | Sensitivity % | Specificity % |
---|---|---|
CASIA 1.0 | 98.29 | 100 |
CASIA 2.0 | 100 | 98.31 |
CUISDE | 100 | 100 |
Methods | Recall % (CASIA 1.0) | Precision % (CASIA 1.0) | F1-Score % (CASIA 1.0) | Recall % (CASIA 2.0) | Precision % (CASIA 2.0) | F1-Score % (CASIA 2.0) | Recall % (CUISDE) | Precision % (CUISDE) | F1-Score % (CUISDE)
---|---|---|---|---|---|---|---|---|---
A. Alahmadi et al. [10] | 98.2 | 96.75 | 97.34 | 96.84 | 98.45 | 97.64 | 97.07 | 98.3 | 97.68 |
E. Abd El-Latif et al. [35] | 98.99 | 95.14 | 97.03 | 99.03 | 97.14 | 98.08 | - | - | - |
N. Kanwal et al. [20] | 100 | - | 98.3 | 100 | - | 97.52 | - | - | - |
K. Kadam et al. [30] | 66.0 | 67.0 | 61.0 | - | - | - | 66.0 | 67 | 66.0 |
H. Ding et al. [29] | - | - | - | 88.93 | 89.12 | 86.67 | 91.76 | 99.81 | 94.98 |
S. Nath et al. [28] | - | - | - | 94.15 | 96.69 | 95.4 | - | - | - |
P. Niyishaka et al. [22] | - | - | - | 99 | 97 | 98 | - | - | - |
Y. Zhang et al. [23] | - | - | - | - | - | - | 93.99 | 89.58 | 91.73 |
Proposed method | 100 | 98.3 | 99.14 | 98.83 | 100 | 99.4 | 100 | 100 | 100 |
Methods (Recognition Time) | CASIA 1.0 | CASIA 2.0 | CUISDE
---|---|---|---
A. Alahmadi et al. [10] | 156 | 326 | 126 |
N. Kanwal et al. [20] | 143 | 234 | 193 |
K. Kadam et al. [30] | 280 | - | 295.2 |
Proposed method | 15.7 | 220 | 7.54 |
Methods | Accuracy % (CASIA 1.0) | Parameters (CASIA 1.0) | Accuracy % (CASIA 2.0) | Parameters (CASIA 2.0) | Accuracy % (CUISDE) | Parameters (CUISDE)
---|---|---|---|---|---|---
A. Alahmadi et al. [10] | 97.0 | 16,458,966 | 97.5 | 16,458,966 | 97.77 | 16,458,966 |
E. Abd El-Latif et al. [35] | 94.55 | - | 96.36 | - | - | - |
N. Kanwal et al. [20] | 98.25 | 18,534,965 | 97.59 | 18,534,965 | 96.66 | 18,534,965 |
K. Kadam et al. [30] | 64.0 | 23,812,574 | - | - | 64.0 | 23,812,574 |
H. Ding et al. [29] | - | - | 97.93 | - | 97.27 | - |
S. Nath et al. [28] | - | - | 96.45 | - | - | - |
P. Niyishaka et al. [22] | - | - | 94.59 | 2,542,144 | - | - |
Y. Zhang et al. [23] | - | - | - | - | 91.46 | - |
V. Itier et al. [34] | - | - | - | - | 98.13 | - |
Proposed method | 99.1 | 97,698 | 99.3 | 97,698 | 100 | 97,698 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hosny, K.M.; Mortda, A.M.; Lashin, N.A.; Fouda, M.M. A New Method to Detect Splicing Image Forgery Using Convolutional Neural Network. Appl. Sci. 2023, 13, 1272. https://doi.org/10.3390/app13031272