Article

Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning

by Kyriakos D. Apostolidis and George A. Papakostas *
MLV Research Group, Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(6), 155; https://doi.org/10.3390/jimaging8060155
Submission received: 18 April 2022 / Revised: 16 May 2022 / Accepted: 26 May 2022 / Published: 30 May 2022
(This article belongs to the Special Issue Intelligent Strategies for Medical Image Analysis)

Abstract
In the past years, Deep Neural Networks (DNNs) have become popular in many disciplines such as Computer Vision (CV), and the evolution of hardware has helped researchers to develop many powerful Deep Learning (DL) models for several problems. One of the most important challenges in the CV area is Medical Image Analysis. However, adversarial attacks have proven to be an important threat to vision systems by significantly reducing the performance of the models. This paper brings to light a different side of digital watermarking, as a potential black-box adversarial attack. In this context, apart from proposing a new category of adversarial attacks named watermarking attacks, we highlight a significant problem: the massive use of watermarks for security reasons seems to pose significant risks to vision systems. For this purpose, a moment-based local image watermarking method is implemented on three modalities, Magnetic Resonance Images (MRI), Computed Tomography (CT-scans), and X-ray images. The introduced methodology was tested on three state-of-the-art CV models, DenseNet201, DenseNet169, and MobileNetV2. The results revealed that the proposed attack achieved over a 50% degradation of the models' performance in terms of accuracy. Additionally, MobileNetV2 was the most vulnerable model, and the modality with the biggest reduction was CT-scans.

1. Introduction

The evolution of deep learning and computer hardware has helped computer vision applications become a reality. Some disciplines that use DL for computer vision tasks are robotics [1], image quality assessment [2], biometrics [3], face recognition [4], image classification [5], autonomous vehicles [6], etc. One of the most important applications in CV is medical image analysis, where DL models are typically trained to diagnose or predict several diseases from numerous modalities such as MRI, CT-scans, X-rays, histopathology images, etc. Owing to this success, DL has become a useful supportive tool for doctors in medical image analysis, as it saves them significant time.
Despite DL success, recent studies proved that these models can be easily fooled by imperceptibly perturbing images [7]. According to Goodfellow et al. [8], these attacks decrease a model's efficiency due to its linearity. Adversarial attacks are divided into three main categories. The first is the “white-box attack”, in which attackers know the structure and the parameters of the model. The second is the “grey-box attack”, where attackers know only the model's structure, and the third is the “black-box attack”, in which attackers know nothing about the model. Additionally, there are targeted and untargeted attacks. In the former, attackers want the input sample to be misclassified into a specific class, while in the latter they just want the sample to be misclassified. Some of the best-known adversarial attacks are the Fast Gradient Sign Method (FGSM) [8], Projected Gradient Descent (PGD) [9], Jacobian-based Saliency Map Attacks (JSMA) [10], and Carlini & Wagner (C&W) [11]. Defenses against adversarial attacks operate at two levels: the data level and the algorithmic level. The first category includes adversarial training [8] and preprocessing and postprocessing methods [12], while methods in the second category modify the model's architecture, classifier, or capacity [9].
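To make the mechanics of such gradient-based attacks concrete, the following minimal sketch implements an FGSM-style perturbation with TensorFlow/Keras; the model, inputs, and the value of epsilon are illustrative placeholders rather than the exact setup evaluated later in this paper.

```python
import tensorflow as tf

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Minimal FGSM sketch: push each pixel along the sign of the loss gradient."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(images)
        predictions = model(images, training=False)
        loss = loss_fn(labels, predictions)
    gradients = tape.gradient(loss, images)
    adversarial = images + epsilon * tf.sign(gradients)
    # Keep pixel values inside the valid [0, 1] range.
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```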
This phenomenon raises questions about the safety of computer vision in medicine, as a wrong diagnosis or prediction can cost a human life. There are several attacks and defenses in medical image analysis [13] that can be exploited by the research community in order to develop models and methods that overcome this challenge. In this paper, we propose a new black-box attack based on a digital watermarking methodology. When handling medical images, the main priority is to ensure that the patient's details are protected and remain safe from forgery by unauthorized persons. That is why the main concern of electronic medical systems is the integration of a standard solution for maintaining the authenticity and integrity of medical images [14]. Digital watermarking is the main solution for this issue. Watermarking embeds the patient's information invisibly, usually in binary format. This procedure is called watermark embedding. The embedding must be robust so that the information can be extracted correctly even if the image is attacked.
In this paper, we bring to light that watermarking could be a serious problem: although it is used for safety reasons, we show that it can damage the performance of the decision models. In this context, we applied digital watermarking to three modalities: MRIs for brain tumor classification, X-rays for COVID-19, pneumonia, and normal classification, and CT-scans for COVID-19 detection in lungs. Experiments showed that the proposed watermarking attack can significantly decrease the performance of the models. The MobileNetV2 model was the most vulnerable, while the DenseNets were more robust. Furthermore, even the lowest values of the watermarking control parameters were able to significantly reduce the accuracy of the models on CT-scans. The proposed attack reduced the accuracy by almost 50%. The rest of this paper is organized as follows. Section 2 presents related studies from the literature that applied attacks on medical images. In Section 3, a background of the applied moment-based watermarking method is provided. Section 4 provides details about the implementation, such as models, datasets, and parameters, and reports the experimental results. Section 5 discusses the findings, and Section 6 concludes this study.

2. Related Works

In recent years, several adversarial attacks for medical images have been proposed. Some studies have experimented with existing attacks on medical images, while others create attacks exclusively for medical images. Yılmaz et al. [15] applied the FGSM attack on mammographic images. They used the “Digital Database for Screening Mammography” (DDSM), which consists of normal and cancerous images. The accuracy decreased by up to 30%, while the Structural Similarity Index (SSIM) fell below 0.2. Pal et al. [16] applied the FGSM attack on X-rays and CT-scans for COVID-19 detection. They used the VGG16 and InceptionV3 models, showing that these models are vulnerable, as the accuracy decreased by up to 90% for VGG16 and up to 63% for InceptionV3. Paul et al. [17] attacked the NLST dataset using the white-box FGSM and the black-box One-pixel attacks. FGSM reduced the model's accuracy by 36%, while One-pixel reduced it by only 2–3%. Huq and Pervin [18] applied the FGSM and PGD attacks on dermoscopic images for skin cancer recognition. The model's performance decreased by up to 75%. Some of the best-known white-box attacks, FGSM, PGD, C&W, and BIM, were tested on three datasets with ResNet50; in some cases, the performance of the model decreased by 100% [19]. Ozbulak et al. [20] proposed a targeted attack for medical image segmentation, named Adaptive Segmentation Mask Attack (ASMA). This attack creates imperceptible samples and achieves high Intersection-over-Union (IoU) degradation. Chen et al. [21] proposed an attack for medical image segmentation that generates adversarial examples using geometrical deformations to model anatomical and intensity variations. Tian et al. [22] created an adversarial attack based on the bias field phenomenon, which can be caused by wrong acquisition of a medical image and can affect the efficacy of a DNN. Kügler et al. [23] investigated a physical attack on skin images by drawing dots and lines with pen or acrylic on the skin. Shao et al. [24] proposed a white-box targeted segmentation attack that combines an adaptive segmentation mask with feature-space perturbation in order to create a Multi-Scale Attack (MSA). The authors used the gradients of the last and the middle layers in order to keep the perturbation small. Yao et al. [25] proposed a Hierarchical Feature Constraint (HFC) method that can be added to any attack; since adversarial attacks are more easily detected in medical images than in natural images, this method helps attacks hide their adversarial features so that they are not easily detected.

3. Materials and Methods

Image moments are among the most important descriptors of image content and have been used in several research fields such as pattern recognition [26], computer vision [27], and image processing [28]. In the past years, researchers developed orthogonal moments, whose kernel functions are polynomials with an orthogonal basis. This means that different moment orders describe different parts of the image, which results in minimal information redundancy. Some well-known moment families are Zernike [29], Tchebichef [30], and Krawtchouk [31]. The watermarking method we applied uses Krawtchouk moments due to their robustness under signal processing attacks.

3.1. Krawtchouk Moments

The Krawtchouk orthogonal moments are a family of high-resolution moments defined in the discrete domain, introduced into image analysis by Yap et al. [31]. Krawtchouk moments use the discrete Krawtchouk polynomials, which have the following form:

$$K_n(x; p, N) = {}_2F_1\!\left(-n, -x; -N; \frac{1}{p}\right) = \sum_{k=0}^{N} a_{k,n,p}\, x^k \quad (1)$$

where $x, n = 0, 1, 2, \ldots, N$, $N > 0$, $p \in (0, 1)$, and ${}_2F_1$ is the hypergeometric function.
However, direct use of Equation (1) causes numerical fluctuations, so a more stable version, the weighted Krawtchouk polynomials, was used:

$$\bar{K}_n(x; p, N) = K_n(x; p, N)\sqrt{\frac{w(x; p, N)}{\rho(n; p, N)}} \quad (2)$$
where $\rho(n; p, N)$ is the norm of the Krawtchouk polynomials,

$$\rho(n; p, N) = (-1)^n \left(\frac{1-p}{p}\right)^n \frac{n!}{(-N)_n}, \quad n = 1, \ldots, N \quad (3)$$
and $w(x; p, N)$ is the weight function of the Krawtchouk moments,

$$w(x; p, N) = \binom{N}{x} p^x (1-p)^{N-x} \quad (4)$$
In Equation (3), the symbol $(\cdot)_n$ denotes the Pochhammer symbol, which in the general case is defined as $(a)_k = a(a+1)\cdots(a+k-1)$.
Based on the above definitions, the orthogonal discrete Krawtchouk image moments of order $(n + m)$, for an $N \times M$ image with intensity function $f(x, y)$, are defined as follows:

$$K_{nm} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \bar{K}_n(x; p_1, N-1)\, \bar{K}_m(y; p_2, M-1)\, f(x, y) \quad (5)$$
Krawtchouk moments are very effective local descriptors, unlike the other moment families which capture the global features of the objects they describe. This locality property is controlled by the appropriate adjustment of the p1, p2 parameters of Equation (5).
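For readers who prefer code to formulas, the following sketch computes the weighted Krawtchouk polynomials of Equations (1)–(4) via the standard three-term recurrence and the image moments of Equation (5). It is a minimal illustration assuming modest image sizes and low moment orders, without the numerical-stability refinements used in production implementations.

```python
import numpy as np
from scipy.special import comb

def krawtchouk_poly(n_max, p, N):
    """Krawtchouk polynomials K_n(x; p, N) for n = 0..n_max and x = 0..N,
    computed with the standard three-term recurrence."""
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((n_max + 1, N + 1))
    K[0] = 1.0
    if n_max >= 1:
        K[1] = 1.0 - x / (p * N)
    for n in range(1, n_max):
        K[n + 1] = ((N * p - 2 * n * p + n - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K

def weighted_krawtchouk(n_max, p, N):
    """Weighted polynomials of Eq. (2): K_n(x) * sqrt(w(x) / rho(n))."""
    x = np.arange(N + 1)
    n = np.arange(n_max + 1)
    w = comb(N, x) * p ** x * (1 - p) ** (N - x)      # Eq. (4)
    rho = ((1 - p) / p) ** n / comb(N, n)              # Eq. (3), simplified form
    return krawtchouk_poly(n_max, p, N) * np.sqrt(w[None, :] / rho[:, None])

def krawtchouk_moments(image, n_max, m_max, p1, p2):
    """Discrete Krawtchouk moments of Eq. (5) for an (N x M) grayscale image."""
    N, M = image.shape
    Kx = weighted_krawtchouk(n_max, p1, N - 1)   # shape (n_max + 1, N)
    Ky = weighted_krawtchouk(m_max, p2, M - 1)   # shape (m_max + 1, M)
    return Kx @ image @ Ky.T                      # moment matrix K[n, m]
```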

3.2. Watermark Embedding

The method we used for watermark embedding was proposed in [32] and consists of the processing modules depicted in Figure 1.
In Figure 1, the original image is the initial medical image into which an L-bit binary message is inserted to construct the final watermarked image. A set of Krawtchouk moments is calculated according to Equation (5). At this stage, there is a key set K1 that corresponds to the set of parameters p: (p1, p2). Dither modulation is an important methodology that integrates one signal into another, enhances the embedding rate with minimum distortion of the original image, and increases robustness under attacking conditions. In this methodology, the Krawtchouk moments of the initial image are used as the host signal into which the L-bit binary message (b1, b2, …, bL) is inserted according to Equation (6). The modified Krawtchouk moments resulting from dither modulation are used to construct the watermark information, which is added to the initial image in the last step.
$$\tilde{K}_{n_i m_i} = \left[\frac{K_{n_i m_i} - d_i(b_i)}{\Delta}\right]\Delta + d_i(b_i), \quad i = 1, \ldots, L \quad (6)$$

where $[\cdot]$ is the rounding operator, $\Delta$ is the quantization step (key K2), which is in effect the embedding strength of the watermark information, and $d_i(\cdot)$ is the ith dither function satisfying $d_i(1) = \Delta/2 + d_i(0)$. The dither vector $(d_1(0), d_2(0), \ldots, d_L(0))$ is uniformly distributed in the range $[0, \Delta]$.
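A minimal sketch of the dither-modulation step of Equation (6) is given below; it operates on a vector of selected moments, one per message bit, and assumes the dither vector is drawn uniformly in [0, Δ] as described above. The selection of which moments host which bits and the reconstruction of the watermark signal from the modified moments follow [32] and are not shown.

```python
import numpy as np

def dither_modulate(moments, bits, delta, seed=None):
    """Dither modulation of Eq. (6): quantize (K - d_i(b_i)) to the nearest
    multiple of delta and add the dither back. Returns the modified moments
    and the dither vector d_i(0) needed later for extraction."""
    rng = np.random.default_rng(seed)
    moments = np.asarray(moments, dtype=float)
    bits = np.asarray(bits)
    d0 = rng.uniform(0.0, delta, size=moments.shape)   # d_i(0) ~ U[0, delta)
    d = np.where(bits == 1, d0 + delta / 2.0, d0)      # d_i(1) = d_i(0) + delta/2
    modified = np.round((moments - d) / delta) * delta + d
    return modified, d0
```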

3.3. Watermarking Adversarial Attack

Digital watermarking is a process that prevents tampering by providing authentication, content verification, and image integrity. It consists of two processes: the first is called watermark embedding, during which digital information is embedded into a multimedia product, and the second is called watermark extraction, in which the information is extracted or detected from the product. Watermarking in the medical field has numerous practical applications, including telediagnosis, teleconferencing between clinicians, and distance training of medical staff. The use of watermarking techniques guarantees the confidentiality and security of the transmitted data and the integrity of the medical images. Furthermore, watermark authentication and tamper detection methods can be used to locate the source of the medical images and the falsified area, respectively. All of the above lead to the conclusion that watermarking is a crucial and necessary process in medical image analysis.
So far, we have focused on the benefits of watermarking; however, digital watermarking can also degrade the quality of a multimedia product such as an image. These changes may not affect human decision making, but we hypothesize that they can influence the decision of a deep learning model. In this study, we deal only with the watermark embedding part and not with the extraction part, since we study the performance of the models on watermarked images. There are numerous watermarking methodologies applied to medical images, based on other moment families [33] or transformations [34], and these can constitute a new category of attacks.
We experimented with a watermarking method that uses Krawtchouk moments, because image moments are among the most important descriptors of image content and are widely used in many fields of image processing. Moreover, another adversarial attack, the Discrete Orthogonal Moments Exclusion of Tchebichef image moments (DOME-T) [35], uses moments to attack models trained on the ImageNet dataset with remarkable results. Through this research, we highlight a crucial problem that has not been studied before, namely that watermarking can impair the performance of the models. Watermarking is widely used in the analysis of medical images, and therefore the various watermarking methodologies must be studied from this perspective for the safe use of artificial intelligence in the medical field. We name this new category of adversarial attacks Watermarking Adversarial Attacks, or WA2 for short, and herein we study the Krawtchouk-moments-based WA2, denoted KMsWA2.

4. Experimental Study

In order to investigate the performance of the proposed watermarking attack (the source code of the proposed KMsWA2 attack will be provided via the GitHub account of our research group (https://github.com/MachineLearningVisionRG/KMsWA2, accessed on 17 April 2022) upon acceptance of the paper), we trained three popular deep learning models, DenseNet169, DenseNet201, and MobileNetV2, which are widely used by the research community and whose robustness is therefore important to investigate. We combined all p1 and p2 values, p1, p2 ∈ {0.1, 0.2, …, 0.9}, with different L-bit lengths and embedding strengths a. The L-bit length ranges from 100 to 1000 with a step of 100. The embedding strength takes four different values: 50, 100, 200, and 300. The watermark embedding was implemented in MATLAB 2018a, and the models were trained in Google Colab with Keras 2.4.3. All models were pretrained on the ImageNet dataset and fine-tuned with the Adam optimizer for 20 epochs with a learning rate of 0.0001. We also used three different attacks, FGSM, PGD, and Square Attack [36], for comparison. FGSM and PGD created their samples with different (surrogate) models so that they could be treated as black-box attacks. For this purpose, the Adversarial Robustness Toolbox (ART) [37] was applied for creating the adversarial samples. Finally, the SSIM index was calculated to assess the image distortion.
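The following sketch shows the kind of fine-tuning setup described above for one of the backbones; the classifier head, input size, and data pipeline are our assumptions, since only the optimizer, learning rate, and number of epochs are stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes, input_shape=(224, 224, 3)):
    """ImageNet-pretrained MobileNetV2 backbone with a new softmax head,
    compiled with Adam (lr = 1e-4) as described in the text."""
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    outputs = layers.Dense(num_classes, activation="softmax")(backbone.output)
    model = models.Model(backbone.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with a prepared tf.data pipeline:
# model = build_classifier(num_classes=3)          # e.g., the X-ray dataset
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```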

4.1. Datasets

The attack was applied to classification problems in three different modalities. The first dataset [38] is a lung X-ray set that classifies the images into three categories, normal, pneumonia, and COVID-19, and contains 3840 images. The second dataset [39] consists of brain MRIs of four tumor categories with 3264 images in total, and the last dataset [40] is a binary classification of CT-scans into COVID-19 and non-COVID-19 lungs, providing 2481 images. A sample from each dataset is presented in Figure 2.

4.2. Ablation Study

The attack involves three main parameters: the embedding strength (a), the embedding message length (L-bit), and the p values (p1, p2). The embedding strength is an important parameter in digital watermarking because it affects the extraction of the information. When the strength is large, the extraction is more robust, but the perturbation in the images is more visible. The L-bit length concerns the amount of information we insert into the images; if the message is long, the perturbed part of the image is also larger. The last parameters, the p values (p1, p2), act as coordinates of the local patch of the image where the watermark is inserted (Figure 3).
As shown in Figure 3a, the watermark is embedded in the upper left corner when the p parameters are equal to 0.1, while in (b) it is embedded in the bottom right corner because the p values are equal to 0.9. Both p values range from 0.1 to 0.9, covering all local regions of the image. Figure 4 shows how the embedding strength affects the distortion of an image while the other parameters are kept constant (L-bit = 1000, p1 = 0.1, p2 = 0.1), and Figure 5 shows the perturbation as a function of the L-bit length (embedding strength = 300, p1 = 0.1, p2 = 0.1). The embedding strength controls the amount of watermark information that is inserted in the image; a large embedding strength provides more robustness but is at the same time more perceptible.
As depicted in Figure 4, increasing the embedding strength degrades the image quality, and the noise becomes more perceptible and intense. In Figure 5, on the other hand, the intensity of the noise is almost the same for all L-bit lengths, but its spatial extent changes.
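The ablation over these three parameters can be organized as a simple grid sweep. The sketch below is purely illustrative and assumes hypothetical helpers `kmswa2_embed` (wrapping the embedding of Section 3.2) and `ssim_fn`, together with a compiled Keras model; none of these names are part of the released code.

```python
import itertools
import numpy as np

def sweep_parameters(images, labels, model, kmswa2_embed, ssim_fn):
    """Hypothetical grid sweep over (p1, p2, L-bit, strength).
    For each (L-bit, strength) pair, the (p1, p2) pair that hurts accuracy
    the most is kept, mirroring how the result tables are reported."""
    p_values = np.round(np.arange(0.1, 1.0, 0.1), 1)
    best = {}
    for length, strength in itertools.product(range(100, 1100, 100),
                                              (50, 100, 200, 300)):
        for p1, p2 in itertools.product(p_values, repeat=2):
            marked = np.stack([kmswa2_embed(im, p1, p2, length, strength)
                               for im in images])
            acc = model.evaluate(marked, labels, verbose=0)[1]
            ssim = float(np.mean([ssim_fn(a, b) for a, b in zip(images, marked)]))
            key = (length, strength)
            if key not in best or acc < best[key][3]:   # lower accuracy = stronger attack
                best[key] = (p1, p2, ssim, acc)
    return best
```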
In addition, experiments were performed with FGSM, PGD, and Square Attack for ϵ values equal to 0.01, 0.03, 0.05, 0.07, 0.09, 0.12, and 0.15. Figure 6 presents an MRI under the aforementioned attacks with ϵ = 0.01; the human eye cannot perceive any difference between these images. Figure 7 depicts the attacks with ϵ = 0.07. Square Attack causes the biggest distortion compared to FGSM and PGD, but small changes can also be observed for the other two attacks. In Figure 8, the ϵ value has been increased to 0.15, making the noise perceptible.
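Generating these baselines with ART can be sketched as follows; parameters other than ϵ are left at ART defaults, and depending on the TensorFlow/Keras version the `TensorFlowV2Classifier` wrapper may be needed instead of `KerasClassifier`, so treat this as an assumed setup rather than the exact script used.

```python
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import (FastGradientMethod,
                                 ProjectedGradientDescent, SquareAttack)

def baseline_attacks(model, x_test,
                     eps_values=(0.01, 0.03, 0.05, 0.07, 0.09, 0.12, 0.15)):
    """Create FGSM, PGD, and Square Attack adversarial sets for each epsilon."""
    classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))
    adversarial = {}
    for eps in eps_values:
        attacks = {
            "FGSM": FastGradientMethod(estimator=classifier, eps=eps),
            "PGD": ProjectedGradientDescent(estimator=classifier, eps=eps),
            "Square": SquareAttack(estimator=classifier, eps=eps),
        }
        for name, attack in attacks.items():
            adversarial[(name, eps)] = attack.generate(x=x_test)
    return adversarial
```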

4.3. Results

All possible combinations of parameters were applied to the images in order to investigate which set of parameters is most effective. As expected, large values of L-bit length and embedding strength led to greater degradation. However, adversarial attacks should be as imperceptible as possible, which is why we experimented with all values in order to balance effectiveness and imperceptibility. Table 1, Table 2 and Table 3 present the results before and after the attack for X-ray images, while Table A1, Table A2 and Table A3 concern MRIs and Table A4, Table A5 and Table A6 concern CT-scans, in all cases for the three examined pretrained DL models. For each L-bit length and embedding strength, we report the most effective values of p1 and p2. The term “original accuracy” refers to the performance of the models on non-watermarked images. Additionally, the SSIM index (which takes values between 0 and 1, or 0–100% as a percentage) between the original and the attacked image is given in the same tables. The lowest SSIM index was observed for X-rays (0.79), with embedding strength and L-bit length equal to 300 and 1000, respectively. The attacking performance of FGSM, PGD, and Square Attack is presented in Table A7, Table A8 and Table A9 for X-rays, MRIs, and CT-scans, respectively. The value ϵ in the tables is the magnitude of the perturbation for each attack, and each table shows the SSIM index and the model's accuracy for each ϵ value. To keep the text legible, Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9 are provided in Appendix A.
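The SSIM values reported in these tables can be reproduced with a standard implementation such as the one in scikit-image; the helper below is a sketch of how we assume the per-image scores are averaged over a test set.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(originals, attacked):
    """Mean SSIM between pairs of original and attacked grayscale images."""
    scores = [structural_similarity(orig, adv,
                                    data_range=float(orig.max() - orig.min()))
              for orig, adv in zip(originals, attacked)]
    return float(np.mean(scores))

# Example: mean_ssim(clean_images, watermarked_images) returns a value in [0, 1],
# corresponding to the percentages in the tables when multiplied by 100.
```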

5. Discussion

According to the results, CT-scan was the least robust modality, as the accuracy of the models was reduced by almost 50%. This is very interesting, as COVID-19 detection using CT-scans should have been the most robust problem because it has only two classes. Even with the smallest perturbation, the accuracy of MobileNetV2 decreased by 12.2% (Figure 9). The CT-scan modality should be further investigated to draw safe conclusions. The problem of brain tumor classification was the most difficult one, and therefore the performance of the models was low even on clean images. However, the models did not lose significant accuracy under an imperceptible perturbation. On X-rays, accuracy decreases significantly when we increase the embedding strength or insert a lot of information.
Moreover, MobileNetV2 is the weakest model, as it loses accuracy more easily than the other two models, without the need for a perceptible distortion. This may be due to the fact that MobileNetV2 has fewer parameters than the other models. In the CT-scan case, which was the weakest one, all models lost a substantial percentage of accuracy even with the lowest parameter values; however, the DenseNets lost their accuracy at a slower pace than MobileNetV2. Furthermore, in the MRI and X-ray cases, DenseNet201 and DenseNet169 need a combination of high values of embedding strength and L-bit length to significantly reduce their accuracy. On the other hand, the accuracy of MobileNetV2 decreases significantly when either the embedding strength or the L-bit length is high. As a consequence, the DenseNet variants need perceptible noise in order for their accuracy to drop. In the MRI case, the most difficult one, the DenseNet variants responded very well, losing only 5% of their accuracy and requiring high values of embedding strength and L-bit length (200 and 700, respectively). The classification problem in medical images is usually difficult because there are no large differences between the classes. Additionally, there are cases, such as lung X-rays, in which specific regions determine the decision. That is why the p1 and p2 values play a significant role in the attack's efficiency. We observe that, within each problem, the most effective p values are similar because they point to the critical regions. This is an important advantage of this attack, as we can predefine the p values depending on the images we attack.
The comparison with the other attacks shows that there is no clear winner. In the CT-scan modality, the proposed attack achieved the greatest accuracy degradation for all models while presenting a much better SSIM index. In X-rays, there are cases in which the other three attacks are more effective but with a worse SSIM index. For instance, PGD with ϵ = 0.15 dropped the accuracy to 79.8% with SSIM = 44.3%, while the proposed attack dropped it to 82% with SSIM = 80%. The proposed KMsWA2 attack retains a high SSIM index even with high values of embedding strength and L-bit length, as shown in Figure 10, Figure 11 and Figure 12. This is because the watermarking is applied only to the local region defined by the p values and not to the whole image, whereas the other attacks create adversarial noise over the whole image, destroying its quality.
In Figure 10, Figure 11 and Figure 12, six representative scatter plots for the three image modalities are presented, showing that the proposed KMsWA2 attack achieves the same or better performance degradation with a significantly higher SSIM index. In Figure 10a, Figure 11a and Figure 12a, the dots are scattered from the top right toward the bottom left, indicating that the reduction in accuracy is achieved only at a low SSIM index, while Figure 10b, Figure 11b and Figure 12b show a vertical trend, which means that the proposed KMsWA2 attack drops the accuracy without losing much SSIM. These results constitute evidence that watermarking can be considered an adversarial attack on images, and thus the research community should study this phenomenon in depth; otherwise, watermarking methods will act as inhibitors to computer vision applications in medical image analysis.

6. Conclusions

In this study, we proposed a black-box adversarial attack for medical images using a moment-based watermarking methodology. We experimented with three different modalities, X-rays, MRIs, and CT-scans, achieving performance degradation of up to 41%, showing that digital watermarking may act as a Trojan horse, since it is normally used for the patient's privacy and safety. We also showed that the performance can be reduced even with the smallest insertion of information or the lowest embedding strength. Moreover, the experiments revealed that the proposed attack is competitive with established adversarial attacks, since it affects the accuracy of the deep learning models in a way that is imperceptible to the human eye. In addition, defending against this attack is not an easy process, because the images are distorted locally and a huge number of images must be created to apply adversarial learning. The DenseNet models were the most robust, MobileNetV2 was the weakest, and CT-scans were the most vulnerable modality. As future work, we would like to experiment with more watermarking methodologies as well as more moment families following the same scheme proposed herein, and also to examine other popular medical image watermarking techniques, e.g., those based on wavelets. Moreover, we plan to investigate whether adversarial learning is able to alleviate the effects of watermarking attacks.

Author Contributions

Conceptualization, G.A.P.; methodology, G.A.P. and K.D.A.; software, K.D.A.; validation, K.D.A.; investigation, K.D.A. and G.A.P.; resources, K.D.A.; writing—original draft preparation, K.D.A.; writing—review and editing, G.A.P.; visualization, K.D.A.; supervision, G.A.P.; project administration, G.A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Acknowledgments

This work was supported by the MPhil program “Advanced Technologies in Informatics and Computers”, hosted by the Department of Computer Science, International Hellenic University, Kavala, Greece.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. KMsWA2 attack on MobileNetV2 in MRIs.
MRIs – MobileNetV2 – Original Accuracy = 77.6%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.9 | 0.4, 0.4 | 76.1 | 99.8 | 0.1, 0.6 | 76.6 | 99.2 | 0.6, 0.4 | 75.6 | 98.6 | 0.6, 0.6 | 73.6
200 | 99.8 | 0.2, 0.8 | 76.6 | 99.5 | 0.1, 0.6 | 75.8 | 98.3 | 0.6, 0.4 | 74.1 | 97.0 | 0.5, 0.7 | 70.0
300 | 99.7 | 0.2, 0.7 | 76.1 | 99.1 | 0.5, 0.4 | 75.3 | 97.2 | 0.6, 0.6 | 71.8 | 95.3 | 0.5, 0.7 | 66.5
400 | 99.6 | 0.4, 0.5 | 75.9 | 98.7 | 0.3, 0.4 | 74.8 | 96.2 | 0.6, 0.7 | 71.0 | 93.7 | 0.5, 0.7 | 65.4
500 | 99.5 | 0.7, 0.4 | 76.1 | 98.3 | 0.6, 0.5 | 73.3 | 95.2 | 0.5, 0.8 | 68.5 | 92.0 | 0.4, 0.7 | 63.2
600 | 99.4 | 0.5, 0.6 | 75.6 | 97.9 | 0.6, 0.5 | 73.8 | 94.2 | 0.5, 0.5 | 66.0 | 90.6 | 0.4, 0.5 | 60.9
700 | 99.2 | 0.5, 0.6 | 75.1 | 97.5 | 0.5, 0.6 | 73.1 | 93.2 | 0.5, 0.5 | 66.7 | 89.1 | 0.4, 0.5 | 60.4
800 | 99.0 | 0.5, 0.6 | 74.8 | 97.0 | 0.6, 0.5 | 72.6 | 92.1 | 0.4, 0.5 | 65.4 | 87.7 | 0.4, 0.5 | 58.6
900 | 98.9 | 0.5, 0.6 | 74.1 | 96.6 | 0.4, 0.4 | 71.3 | 91.1 | 0.4, 0.5 | 63.2 | 86.2 | 0.5, 0.5 | 56.3
1000 | 98.8 | 0.5, 0.6 | 74.5 | 98.8 | 0.4, 0.5 | 70.3 | 90.0 | 0.4, 0.5 | 62.6 | 84.9 | 0.4, 0.5 | 54.8
Table A2. KMsWA2 attack on DenseNet201 in MRIs.
MRIs – DenseNet201 – Original Accuracy = 71.3%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.9 | 0.2, 0.2 | 69.8 | 99.8 | 0.7, 0.1 | 69.8 | 99.2 | 0.5, 0.8 | 69.5 | 98.6 | 0.7, 0.1 | 69.0
200 | 99.8 | 0.3, 0.8 | 69.8 | 99.5 | 0.7, 0.1 | 69.5 | 98.3 | 0.7, 0.5 | 69.0 | 97.0 | 0.4, 0.4 | 68.3
300 | 99.7 | 0.3, 0.2 | 69.5 | 99.1 | 0.7, 0.1 | 69.5 | 97.2 | 0.6, 0.9 | 68.2 | 95.3 | 0.6, 0.1 | 67.2
400 | 99.6 | 0.8, 0.7 | 69.5 | 98.7 | 0.7, 0.1 | 69.0 | 96.2 | 0.4, 0.6 | 68.5 | 93.7 | 0.6, 0.6 | 65.7
500 | 99.5 | 0.9, 0.8 | 69.8 | 98.3 | 0.6, 0.9 | 69.0 | 95.2 | 0.6, 0.9 | 68.0 | 92.0 | 0.6, 0.6 | 64.4
600 | 99.4 | 0.6, 0.9 | 69.5 | 97.9 | 0.5, 0.9 | 69.0 | 94.2 | 0.7, 0.7 | 66.7 | 90.6 | 0.4, 0.4 | 64.2
700 | 99.2 | 0.7, 0.9 | 69.5 | 97.5 | 0.6, 0.9 | 69.2 | 93.2 | 0.4, 0.5 | 64.5 | 89.1 | 0.4, 0.5 | 61.7
800 | 99.0 | 0.1, 0.2 | 69.5 | 97.0 | 0.9, 0.8 | 68.5 | 92.1 | 0.4, 0.5 | 64.2 | 87.7 | 0.5, 0.5 | 57.1
900 | 98.9 | 0.6, 0.9 | 69.0 | 96.6 | 0.9, 0.8 | 68.0 | 91.1 | 0.4, 0.5 | 63.4 | 86.2 | 0.4, 0.5 | 56.6
1000 | 98.8 | 0.6, 0.9 | 69.0 | 98.8 | 0.9, 0.8 | 68.7 | 90.0 | 0.4, 0.5 | 61.2 | 84.9 | 0.5, 0.5 | 55.0
Table A3. KMsWA2 attack on DenseNet169 in MRIs.
MRIs – DenseNet169 – Original Accuracy = 69.54%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.9 | 0.5, 0.6 | 67.5 | 99.8 | 0.8, 0.5 | 68.0 | 99.2 | 0.9, 0.4 | 66.7 | 98.6 | 0.8, 0.2 | 68.0
200 | 99.8 | 0.7, 0.5 | 67.0 | 99.5 | 0.8, 0.4 | 68.0 | 98.3 | 0.9, 0.4 | 66.5 | 97.0 | 0.5, 0.6 | 67.2
300 | 99.7 | 0.7, 0.5 | 67.0 | 99.1 | 0.3, 0.5 | 67.5 | 97.2 | 0.2, 0.4 | 66.7 | 95.3 | 0.2, 0.5 | 64.7
400 | 99.6 | 0.8, 0.2 | 67.0 | 98.7 | 0.3, 0.5 | 67.7 | 96.2 | 0.4, 0.5 | 66.0 | 93.7 | 0.2, 0.5 | 61.9
500 | 99.5 | 0.9, 0.4 | 67.0 | 98.3 | 0.9, 0.5 | 67.0 | 95.2 | 0.4, 0.5 | 65.4 | 92.0 | 0.4, 0.4 | 62.1
600 | 99.4 | 0.7, 0.5 | 67.5 | 97.9 | 0.9, 0.5 | 66.7 | 94.2 | 0.3, 0.4 | 63.0 | 90.6 | 0.4, 0.4 | 59.6
700 | 99.2 | 0.9, 0.4 | 67.5 | 97.5 | 0.6, 0.6 | 65.7 | 93.2 | 0.4, 0.4 | 62.4 | 89.1 | 0.4, 0.4 | 56.6
800 | 99.0 | 0.9, 0.6 | 67.7 | 97.0 | 0.6, 0.6 | 66.5 | 92.1 | 0.3, 0.5 | 61.9 | 87.7 | 0.4, 0.4 | 55.0
900 | 98.9 | 0.9, 0.6 | 67.7 | 96.6 | 0.5, 0.6 | 65.4 | 91.1 | 0.4, 0.5 | 60.9 | 86.2 | 0.5, 0.5 | 51.2
1000 | 98.8 | 0.9, 0.6 | 67.2 | 98.8 | 0.5, 0.6 | 64.7 | 90.0 | 0.4, 0.5 | 58.3 | 84.9 | 0.5, 0.5 | 48.4
Table A4. KMsWA2 attack on MobileNetV2 in CT-Scans.
CT-Scans – MobileNetV2 – Original Accuracy = 92.2%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.2 | 0.7, 0.4 | 80.0 | 99.0 | 0.2, 0.8 | 77.9 | 98.4 | 0.8, 0.7 | 72.9 | 97.7 | 0.8, 0.8 | 67.0
200 | 99.1 | 0.2, 0.9 | 79.1 | 98.7 | 0.8, 0.8 | 76.6 | 97.4 | 0.7, 0.6 | 68.7 | 95.9 | 0.6, 0.7 | 62.9
300 | 99.0 | 0.2, 0.9 | 78.7 | 98.2 | 0.8, 0.5 | 75.4 | 96.2 | 0.9, 0.6 | 65.8 | 94.2 | 0.8, 0.6 | 60.4
400 | 98.8 | 0.9, 0.6 | 79.5 | 97.8 | 0.8, 0.5 | 72.9 | 95.1 | 0.9, 0.5 | 64.1 | 92.6 | 0.9, 0.4 | 57.5
500 | 98.7 | 0.8, 0.5 | 78.3 | 97.3 | 0.8, 0.5 | 72.5 | 94.0 | 0.8, 0.5 | 62.0 | 91.0 | 0.9, 0.5 | 55.0
600 | 98.5 | 0.7, 0.5 | 77.0 | 96.8 | 0.8, 0.5 | 69.5 | 92.9 | 0.9, 0.4 | 59.1 | 89.4 | 0.9, 0.5 | 54.5
700 | 98.3 | 0.8, 0.5 | 77.9 | 96.4 | 0.9, 0.6 | 68.3 | 91.8 | 0.9, 0.5 | 58.3 | 87.8 | 0.9, 0.5 | 53.3
800 | 98.2 | 0.9, 0.6 | 78.3 | 95.9 | 0.9, 0.6 | 68.3 | 90.7 | 0.9, 0.6 | 57.5 | 86.3 | 0.9, 0.5 | 52.9
900 | 98.0 | 0.5, 0.6 | 77.0 | 95.4 | 0.9, 0.6 | 65.3 | 89.5 | 0.9, 0.6 | 56.2 | 84.6 | 0.9, 0.5 | 52.0
1000 | 97.8 | 0.7, 0.5 | 76.6 | 94.9 | 0.9, 0.6 | 66.2 | 88.3 | 0.9, 0.5 | 56.2 | 83.0 | 0.9, 0.5 | 51.6
Table A5. KMsWA2 attack on DenseNet201 in CT-Scans.
CT-Scans – DenseNet201 – Original Accuracy = 96.6%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.2 | 0.6, 0.8 | 89.1 | 99.0 | 0.3, 0.9 | 87.9 | 98.4 | 0.4, 0.9 | 87.0 | 97.7 | 0.7, 0.6 | 83.7
200 | 99.1 | 0.3, 0.2 | 89.1 | 98.7 | 0.3, 0.9 | 87.9 | 97.4 | 0.6, 0.7 | 84.5 | 95.9 | 0.6, 0.7 | 75.8
300 | 99.0 | 0.9, 0.9 | 88.3 | 98.2 | 0.3, 0.9 | 87.5 | 96.2 | 0.6, 0.3 | 81.2 | 94.2 | 0.7, 0.7 | 71.6
400 | 98.8 | 0.9, 0.2 | 88.7 | 97.8 | 0.3, 0.9 | 88.3 | 95.1 | 0.6, 0.3 | 79.5 | 92.6 | 0.6, 0.5 | 70.8
500 | 98.7 | 0.5, 0.1 | 89.1 | 97.3 | 0.4, 0.6 | 87.5 | 94.0 | 0.6, 0.4 | 76.6 | 91.0 | 0.7, 0.4 | 65.8
600 | 98.5 | 0.7, 0.2 | 88.3 | 96.8 | 0.4, 0.6 | 87.0 | 92.9 | 0.7, 0.5 | 76.6 | 89.4 | 0.6, 0.5 | 65.8
700 | 98.3 | 0.8, 0.9 | 88.7 | 96.4 | 0.4, 0.6 | 86.2 | 91.8 | 0.7, 0.5 | 75.8 | 87.8 | 0.6, 0.6 | 65.8
800 | 98.2 | 0.7, 0.4 | 88.7 | 95.9 | 0.4, 0.6 | 85.8 | 90.7 | 0.7, 0.5 | 75.0 | 86.3 | 0.7, 0.5 | 65.4
900 | 98.0 | 0.5, 0.9 | 88.3 | 95.4 | 0.4, 0.6 | 85.8 | 89.5 | 0.7, 0.5 | 75.0 | 84.6 | 0.6, 0.5 | 65.8
1000 | 97.8 | 0.9, 0.6 | 88.3 | 94.9 | 0.4, 0.6 | 86.2 | 88.3 | 0.7, 0.5 | 75.0 | 83.0 | 0.9, 0.5 | 65.8
Table A6. KMsWA2 attack on DenseNet169 in CT-Scans.
CT-Scans – DenseNet169 – Original Accuracy = 95.8%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.2 | 0.4, 0.2 | 89.5 | 99.0 | 0.2, 0.3 | 87.9 | 98.4 | 0.4, 0.8 | 82.9 | 97.7 | 0.3, 0.3 | 77.0
200 | 99.1 | 0.7, 0.2 | 89.5 | 98.7 | 0.3, 0.3 | 87.5 | 97.4 | 0.3, 0.1 | 79.5 | 95.9 | 0.3, 0.3 | 73.7
300 | 99.0 | 0.7, 0.2 | 89.5 | 98.2 | 0.3, 0.1 | 86.6 | 96.2 | 0.4, 0.1 | 79.1 | 94.2 | 0.4, 0.3 | 70.0
400 | 98.8 | 0.7, 0.2 | 90.0 | 97.8 | 0.3, 0.1 | 87.0 | 95.1 | 0.4, 0.2 | 78.7 | 92.6 | 0.4, 0.3 | 70.0
500 | 98.7 | 0.5, 0.8 | 90.0 | 97.3 | 0.4, 0.1 | 86.2 | 94.0 | 0.4, 0.1 | 80.0 | 91.0 | 0.4, 0.4 | 69.1
600 | 98.5 | 0.7, 0.2 | 89.5 | 96.8 | 0.3, 0.1 | 85.4 | 92.9 | 0.1, 0.4 | 75.8 | 89.4 | 0.4, 0.5 | 67.0
700 | 98.3 | 0.9, 0.9 | 89.5 | 96.4 | 0.1, 0.4 | 86.2 | 91.8 | 0.1, 0.4 | 76.6 | 87.8 | 0.4, 0.5 | 65.8
800 | 98.2 | 0.1, 0.3 | 89.1 | 95.9 | 0.1, 0.3 | 86.6 | 90.7 | 0.1, 0.4 | 75.8 | 86.3 | 0.4, 0.5 | 67.0
900 | 98.0 | 0.9, 0.9 | 89.5 | 95.4 | 0.1, 0.5 | 84.5 | 89.5 | 0.1, 0.4 | 74.1 | 84.6 | 0.1, 0.5 | 64.5
1000 | 97.8 | 0.9, 0.9 | 89.1 | 94.9 | 0.1, 0.3 | 85.0 | 88.3 | 0.1, 0.5 | 72.0 | 83.0 | 0.1, 0.5 | 65.0
Table A7. SSIM index and performance for each attack in X-rays.
Attack | SSIM (%) | MobileNetV2 Acc. (%) | DenseNet201 Acc. (%) | DenseNet169 Acc. (%)
FGSM ϵ = 0.01 | 98.9 | 95.3 | 95.2 | 96.2
FGSM ϵ = 0.03 | 94.6 | 86.1 | 94.3 | 95.0
FGSM ϵ = 0.05 | 82.8 | 65.9 | 94.0 | 80.4
FGSM ϵ = 0.07 | 73.6 | 54.9 | 90.5 | 71.3
FGSM ϵ = 0.09 | 60.1 | 42.9 | 90.2 | 65.3
FGSM ϵ = 0.12 | 45.7 | 36.0 | 87.7 | 62.1
FGSM ϵ = 0.15 | 35.3 | 36.9 | 80.7 | 60.2
PGD ϵ = 0.01 | 99.2 | 95.9 | 95.2 | 95.6
PGD ϵ = 0.03 | 96.3 | 90.9 | 95.9 | 95.3
PGD ϵ = 0.05 | 88.5 | 70.3 | 93.7 | 87.0
PGD ϵ = 0.07 | 81.7 | 55.5 | 92.5 | 75.7
PGD ϵ = 0.09 | 70.1 | 41.6 | 89.3 | 68.5
PGD ϵ = 0.12 | 55.9 | 34.0 | 84.2 | 62.8
PGD ϵ = 0.15 | 44.3 | 33.4 | 79.8 | 60.0
Sq. At. ϵ = 0.01 | 99.3 | 97.5 | 95.9 | 95.9
Sq. At. ϵ = 0.03 | 95.9 | 97.1 | 96.2 | 95.9
Sq. At. ϵ = 0.05 | 85.9 | 82.0 | 95.0 | 93.4
Sq. At. ϵ = 0.07 | 78.5 | 65.0 | 92.4 | 89.3
Sq. At. ϵ = 0.09 | 70.0 | 54.9 | 91.8 | 85.5
Sq. At. ϵ = 0.12 | 56.9 | 53.3 | 88.0 | 83.6
Sq. At. ϵ = 0.15 | 48.0 | 53.0 | 87.0 | 79.8
Table A8. SSIM index and performance for each attack in MRIs.
Attack | SSIM (%) | MobileNetV2 Acc. (%) | DenseNet201 Acc. (%) | DenseNet169 Acc. (%)
FGSM ϵ = 0.01 | 98.5 | 72.9 | 71.9 | 69.3
FGSM ϵ = 0.03 | 94.1 | 60.1 | 67.3 | 63.4
FGSM ϵ = 0.05 | 84.0 | 48.4 | 51.9 | 54.5
FGSM ϵ = 0.07 | 77.3 | 41.2 | 45.0 | 49.1
FGSM ϵ = 0.09 | 68.3 | 33.5 | 38.4 | 45.5
FGSM ϵ = 0.12 | 58.7 | 28.4 | 37.6 | 45.0
FGSM ϵ = 0.15 | 51.1 | 26.6 | 37.0 | 40.9
PGD ϵ = 0.01 | 98.8 | 76.5 | 74.6 | 72.1
PGD ϵ = 0.03 | 95.2 | 70.3 | 74.2 | 72.9
PGD ϵ = 0.05 | 87.2 | 65.5 | 65.7 | 59.6
PGD ϵ = 0.07 | 81.6 | 63.2 | 60.6 | 59.6
PGD ϵ = 0.09 | 73.6 | 56.3 | 54.5 | 57.0
PGD ϵ = 0.12 | 64.3 | 50.7 | 49.9 | 56.0
PGD ϵ = 0.15 | 56.5 | 47.0 | 49.1 | 57.0
Sq. At. ϵ = 0.01 | 99.0 | 75.7 | 72.4 | 67.0
Sq. At. ϵ = 0.03 | 95.2 | 65.5 | 69.3 | 62.1
Sq. At. ϵ = 0.05 | 87.3 | 52.9 | 51.0 | 48.9
Sq. At. ϵ = 0.07 | 82.7 | 42.5 | 41.5 | 42.5
Sq. At. ϵ = 0.09 | 74.4 | 37.9 | 37.0 | 40.7
Sq. At. ϵ = 0.12 | 67.4 | 34.0 | 33.8 | 34.8
Sq. At. ϵ = 0.15 | 65.0 | 35.6 | 35.6 | 37.3
Table A9. SSIM index and performance for each attack in CT-Scans.
Attack | SSIM (%) | MobileNetV2 Acc. (%) | DenseNet201 Acc. (%) | DenseNet169 Acc. (%)
FGSM ϵ = 0.01 | 99.6 | 92.3 | 92.0 | 93.2
FGSM ϵ = 0.03 | 96.7 | 83.0 | 88.6 | 94.0
FGSM ϵ = 0.05 | 88.0 | 63.6 | 82.6 | 84.3
FGSM ϵ = 0.07 | 81.0 | 58.0 | 78.8 | 81.4
FGSM ϵ = 0.09 | 70.2 | 54.2 | 78.0 | 80.5
FGSM ϵ = 0.12 | 57.8 | 53.4 | 78.8 | 78.0
FGSM ϵ = 0.15 | 48.3 | 53.0 | 80.0 | 78.0
PGD ϵ = 0.01 | 99.8 | 95.4 | 90.4 | 95.8
PGD ϵ = 0.03 | 98.0 | 98.8 | 86.7 | 97.5
PGD ϵ = 0.05 | 92.6 | 98.3 | 70.8 | 91.2
PGD ϵ = 0.07 | 87.3 | 98.3 | 65.0 | 79.6
PGD ϵ = 0.09 | 70.0 | 97.9 | 61.7 | 75.0
PGD ϵ = 0.12 | 66.7 | 97.5 | 62.5 | 79.2
PGD ϵ = 0.15 | 55.7 | 98.3 | 62.5 | 76.7
Sq. At. ϵ = 0.01 | 99.6 | 91.1 | 91.5 | 93.2
Sq. At. ϵ = 0.03 | 97.3 | 72.5 | 90.7 | 91.5
Sq. At. ϵ = 0.05 | 89.9 | 55.5 | 77.1 | 80.0
Sq. At. ϵ = 0.07 | 84.8 | 54.2 | 69.5 | 72.4
Sq. At. ϵ = 0.09 | 76.8 | 54.2 | 61.9 | 60.6
Sq. At. ϵ = 0.12 | 68.0 | 53.4 | 53.8 | 53.8
Sq. At. ϵ = 0.15 | 59.8 | 54.2 | 58.9 | 55.0

References

  1. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
  2. Apostolidis, K.D.; Polyzos, T.; Grigoriadis, I.; Papakostas, G.A. Evaluating Convolutional Neural Networks for No-Reference Image Quality Assessment. In Proceedings of the 2021 4th International Conference on Signal Processing and Information Security (ICSPIS), Dubai, United Arab Emirates, 24–25 November 2021; IEEE: Dubai, United Arab Emirates, 2021; pp. 68–71. [Google Scholar]
  3. Apostolidis, K.; Amanatidis, P.; Papakostas, G. Performance Evaluation of Convolutional Neural Networks for Gait Recognition. In Proceedings of the 24th Pan-Hellenic Conference on Informatics, Athens, Greece, 20–22 November 2020; ACM: Athens, Greece, 2020; pp. 61–63. [Google Scholar]
  4. Filippidou, F.P.; Papakostas, G.A. Single Sample Face Recognition Using Convolutional Neural Networks for Automated Attendance Systems. In Proceedings of the 2020 Fourth International Conference On Intelligent Computing in Data Sciences (ICDS), Fez, Morocco, 21–23 October 2020; IEEE: Fez, Morocco, 2020; pp. 1–6. [Google Scholar]
  5. Shankar, K.; Zhang, Y.; Liu, Y.; Wu, L.; Chen, C.-H. Hyperparameter Tuning Deep Learning for Diabetic Retinopathy Fundus Image Classification. IEEE Access 2020, 8, 118164–118173. [Google Scholar] [CrossRef]
  6. Fang, R.; Cai, C. Computer vision based obstacle detection and target tracking for autonomous vehicles. MATEC Web Conf. 2021, 336, 07004. [Google Scholar] [CrossRef]
  7. Maliamanis, T.; Papakostas, G.A. Machine Learning Vulnerability in Medical Imaging. In Machine Learning, Big Data, and IoT for Medical Informatics; Academic Press: Cambridge, MA, USA, 2021. [Google Scholar]
  8. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572. [Google Scholar]
  9. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083. [Google Scholar]
  10. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. arXiv 2015, arXiv:1511.07528. [Google Scholar]
  11. Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. arXiv 2017, arXiv:1608.04644. [Google Scholar]
  12. Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv 2018, arXiv:1704.01155. [Google Scholar] [CrossRef] [Green Version]
  13. Apostolidis, K.D.; Papakostas, G.A. A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis. Electronics 2021, 10, 2132. [Google Scholar] [CrossRef]
  14. Kuang, L.-Q.; Zhang, Y.; Han, X. A Medical Image Authentication System Based on Reversible Digital Watermarking. In Proceedings of the 2009 First International Conference on Information Science and Engineering, Nanjing, China, 26–28 December 2009; pp. 1047–1050. [Google Scholar]
  15. Yılmaz, I.; Baza, M.; Amer, R.; Rasheed, A.; Amsaad, F.; Morsi, R. On the Assessment of Robustness of Telemedicine Applications against Adversarial Machine Learning Attacks. In Proceedings of the International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, Kuala Lumpur, Malaysia, 26–29 July 2021; Springer: Cham, Switzerland, 2021; pp. 519–529. [Google Scholar]
  16. Pal, B.; Gupta, D.; Rashed-Al-Mahfuz, M.; Alyami, S.A.; Moni, M.A. Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images. Appl. Sci. 2021, 11, 4233. [Google Scholar] [CrossRef]
  17. Paul, R.; Schabath, M.; Gillies, R.; Hall, L.; Goldgof, D. Mitigating Adversarial Attacks on Medical Image Understanding Systems. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Iowa City, IA, USA, 2020; pp. 1517–1521. [Google Scholar]
  18. Huq, A.; Pervin, M.T. Analysis of Adversarial Attacks on Skin Cancer Recognition. In Proceedings of the 2020 International Conference on Data Science and Its Applications (ICoDSA), Bandung, Indonesia, 5–6 August 2020; IEEE: Bandung, Indonesia, 2020; pp. 1–4. [Google Scholar]
  19. Ma, X.; Niu, Y.; Gu, L.; Wang, Y.; Zhao, Y.; Bailey, J.; Lu, F. Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems. arXiv 2020, arXiv:1907.10456. [Google Scholar] [CrossRef]
  20. Ozbulak, U.; Van Messem, A.; De Neve, W. Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation. arXiv 2019, arXiv:1907.13124. [Google Scholar]
  21. Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. Intelligent image synthesis to attack a segmentation CNN using adversarial learning. arXiv 2019, arXiv:1909.11167. [Google Scholar]
  22. Tian, B.; Guo, Q.; Juefei-Xu, F.; Chan, W.L.; Cheng, Y.; Li, X.; Xie, X.; Qin, S. Bias Field Poses a Threat to DNN-based X-ray Recognition. arXiv 2021, arXiv:2009.09247. [Google Scholar]
  23. Kugler, D. Physical Attacks in Dermoscopy: An Evaluation of Robustness for clinical Deep-Learning. J. Mach. Learn. Biomed. Imaging 2021, 7, 1–32. [Google Scholar]
  24. Shao, M.; Zhang, G.; Zuo, W.; Meng, D. Target attack on biomedical image segmentation model based on multi-scale gradients. Inf. Sci. 2021, 554, 33–46. [Google Scholar] [CrossRef]
  25. Yao, Q.; He, Z.; Lin, Y.; Ma, K.; Zheng, Y.; Zhou, S.K. A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks. arXiv 2021, arXiv:2012.09501. [Google Scholar]
  26. Papakostas, G.A.; Karakasis, E.G.; Koulouriotis, D.E. Novel moment invariants for improved classification performance in computer vision applications. Pattern Recognit. 2010, 43, 58–68. [Google Scholar] [CrossRef]
  27. Papakostas, G.A.; Boutalis, Y.S.; Karras, D.A.; Mertzios, B.G. A new class of Zernike moments for computer vision applications. Inf. Sci. 2007, 177, 2802–2819. [Google Scholar] [CrossRef]
  28. Kalampokas, T.; Papakostas, G.A. Moment Transform-Based Compressive Sensing in Image. arXiv 2021, arXiv:2111.07254. [Google Scholar]
  29. Papakostas, G.A.; Boutalis, Y.S.; Karras, D.A.; Mertzios, B.G. Efficient computation of Zernike and Pseudo-Zernike moments for pattern classification applications. Pattern Recognit. Image Anal. 2010, 20, 56–64. [Google Scholar] [CrossRef]
  30. Mukundan, R.; Ong, S.H.; Lee, P.A. Image analysis by Tchebichef moments. IEEE Trans. Image Process. 2001, 10, 1357–1364. [Google Scholar] [CrossRef] [PubMed]
  31. Yap, P.-T.; Paramesran, R.; Ong, S.-H. Image analysis by krawtchouk moments. IEEE Trans. Image Process. 2003, 12, 1367–1377. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Papakostas, G.A.; Tsougenis, E.D.; Koulouriotis, D.E. Moment-based local image watermarking via genetic optimization. Appl. Math. Comput. 2014, 227, 222–236. [Google Scholar] [CrossRef]
  33. Yang, C.; Li, J.; Bhatti, U.A.; Liu, J.; Ma, J.; Huang, M. Robust Zero Watermarking Algorithm for Medical Images Based on Zernike-DCT. Secur. Commun. Netw. 2021, 2021, 4944797. [Google Scholar] [CrossRef]
  34. Thakkar, F.N.; Srivastava, V.K. A blind medical image watermarking: DWT-SVD based robust and secure approach for telemedicine applications. Multimed. Tools Appl. 2017, 76, 3669–3697. [Google Scholar] [CrossRef]
  35. Maliamanis, T.; Papakostas, G.A. DOME-T: Adversarial computer vision attack on deep learning models based on Tchebichef image moments. In Proceedings of the Thirteenth International Conference on Machine Vision, Rome, Italy, 4 January 2021; Volume 11605. [Google Scholar]
  36. Andriushchenko, M.; Croce, F.; Flammarion, N.; Hein, M. Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search. In Proceedings of the Computer Vision–ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 484–501. [Google Scholar]
  37. Nicolae, M.-I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.0.0. arXiv 2019, arXiv:1807.01069. [Google Scholar]
  38. Sachin Kumar|Novice|Kaggle. Available online: https://www.kaggle.com/sachinkumar413 (accessed on 23 January 2022).
  39. Brain Tumor MRI Dataset|Kaggle. Available online: https://www.kaggle.com/masoudnickparvar/brain-tumor-mri-dataset (accessed on 23 January 2022).
  40. SARS-CoV-2 Ct-Scan Dataset|Kaggle. Available online: https://www.kaggle.com/plameneduardo/sarscov2-ctscan-dataset (accessed on 23 January 2022).
Figure 1. Watermark embedding.
Figure 2. Images from three datasets, (a) X-rays, (b) MRIs, and (c) CT-Scans.
Figure 3. (a) Watermark embedding with p1 = 0.1 and p2 = 0.1, (b) Watermark embedding with p1 = 0.9 and p2 = 0.9.
Figure 4. (a) Initial image. (b) Embedding strength = 50, (c) Embedding strength = 100, (d) Embedding strength = 200, (e) Embedding strength = 300. The rest of the parameters are L-bit = 1000, p1 = 0.1, p2 = 0.1.
Figure 5. (a) Initial image, (b) L-bit = 200, (c) L-bit = 500, (d) L-bit = 800, (e) L-bit = 800. The remaining parameters are Embedding strength = 300, p1 = 0.1, p2 = 0.1.
Figure 6. (a) Initial Image, (b) FGSM attack with ϵ = 0.01, (c) PGD attack with ϵ = 0.01, (d) Square Attack with ϵ = 0.01.
Figure 7. (a) Initial Image, (b) FGSM attack with ϵ = 0.07, (c) PGD attack with ϵ = 0.07, (d) Square Attack with ϵ = 0.07.
Figure 8. (a) Initial Image, (b) FGSM attack with ϵ = 0.15, (c) PGD attack with ϵ = 0.15, (d) Square Attack with ϵ = 0.15.
Figure 9. (a) Clean Image, (b) Attacked image (L-Bit = 100, Embedding strength = 50).
Figure 10. Scatter plots for MobileNetV2 in CT-Scans under (a) FGSM, PGD, Square Attack, and (b) KMsWA2 attack.
Figure 11. Scatter plots for DenseNet201 in X-rays under (a) FGSM, PGD, Square Attack, and (b) KMsWA2 attack.
Figure 12. Scatter plots for DenseNet169 in MRIs under (a) FGSM, PGD, Square Attack, and (b) KMsWA2 attack.
Table 1. KMsWA2 attack on MobileNetV2 in X-rays.
X-rays – MobileNetV2 – Original Accuracy = 96.8%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.4 | 0.8, 0.1 | 95.3 | 99.2 | 0.9, 0.6 | 94.3 | 98.4 | 0.8, 0.6 | 93.1 | 97.5 | 0.8, 0.4 | 92.1
200 | 99.3 | 0.9, 0.6 | 95.0 | 98.7 | 0.8, 0.6 | 93.7 | 97.0 | 0.8, 0.4 | 91.8 | 95.2 | 0.8, 0.6 | 91.8
300 | 99.0 | 0.9, 0.6 | 95.0 | 98.2 | 0.8, 0.5 | 93.4 | 95.6 | 0.8, 0.5 | 90.3 | 93.0 | 0.8, 0.5 | 90.6
400 | 98.9 | 0.9, 0.6 | 94.3 | 97.6 | 0.8, 0.4 | 93.1 | 94.2 | 0.8, 0.4 | 89.3 | 90.9 | 0.8, 0.4 | 88.7
500 | 98.7 | 0.8, 0.5 | 93.7 | 97.1 | 0.8, 0.4 | 92.8 | 92.9 | 0.8, 0.4 | 89.0 | 88.9 | 0.7, 0.4 | 89.3
600 | 98.5 | 0.8, 0.5 | 94.0 | 96.5 | 0.7, 0.4 | 92.1 | 91.5 | 0.7, 0.5 | 87.5 | 86.9 | 0.7, 0.5 | 87.8
700 | 98.3 | 0.8, 0.5 | 93.7 | 95.9 | 0.7, 0.5 | 91.2 | 90.2 | 0.7, 0.3 | 86.8 | 84.9 | 0.7, 0.5 | 86.8
800 | 98.1 | 0.8, 0.5 | 93.4 | 95.3 | 0.7, 0.5 | 90.0 | 88.8 | 0.7, 0.5 | 87.5 | 83.0 | 0.7, 0.5 | 84.3
900 | 97.9 | 0.7, 0.5 | 93.1 | 96.7 | 0.7, 0.6 | 89.3 | 87.4 | 0.7, 0.5 | 83.4 | 81.0 | 0.7, 0.5 | 79.0
1000 | 97.6 | 0.7, 0.5 | 93.1 | 94.0 | 0.7, 0.6 | 88.1 | 85.9 | 0.7, 0.5 | 82.1 | 79.0 | 0.7, 0.5 | 78.7
Table 2. KMsWA2 attack on DenseNet201 in X-rays.
X-rays – DenseNet201 – Original Accuracy = 96.2%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.4 | 0.8, 0.8 | 95.5 | 99.2 | 0.8, 0.4 | 95.3 | 98.4 | 0.8, 0.7 | 95.3 | 97.5 | 0.4, 0.6 | 95.0
200 | 99.3 | 0.8, 0.8 | 95.6 | 98.7 | 0.9, 0.1 | 95.3 | 97.0 | 0.1, 0.7 | 95.3 | 95.2 | 0.8, 0.6 | 94.3
300 | 99.0 | 0.8, 0.1 | 95.3 | 98.2 | 0.3, 0.5 | 95.6 | 95.6 | 0.8, 0.7 | 95.0 | 93.0 | 0.5, 0.5 | 93.1
400 | 98.9 | 0.9, 0.5 | 95.6 | 97.6 | 0.1, 0.7 | 95.3 | 94.2 | 0.1, 0.9 | 94.6 | 90.9 | 0.4, 0.5 | 92.1
500 | 98.7 | 0.1, 0.8 | 95.3 | 97.1 | 0.8, 0.6 | 95.0 | 92.9 | 0.1, 0.7 | 94.3 | 88.9 | 0.5, 0.5 | 91.2
600 | 98.5 | 0.8, 0.1 | 95.6 | 96.5 | 0.1, 0.7 | 95.0 | 91.5 | 0.4, 0.6 | 93.7 | 86.9 | 0.6, 0.7 | 90.6
700 | 98.3 | 0.8, 0.5 | 95.3 | 95.9 | 0.8, 0.9 | 95.0 | 90.2 | 0.3, 0.5 | 92.8 | 84.9 | 0.6, 0.8 | 88.7
800 | 98.1 | 0.1, 0.2 | 95.3 | 95.3 | 0.5, 0.1 | 95.0 | 88.8 | 0.4, 0.5 | 92.8 | 83.0 | 0.4, 0.6 | 87.8
900 | 97.9 | 0.9, 0.5 | 95.0 | 96.7 | 0.1, 0.8 | 94.7 | 87.4 | 0.6, 0.6 | 92.5 | 81.0 | 0.4, 0.7 | 85.9
1000 | 97.6 | 0.1, 0.8 | 95.3 | 94.0 | 0.4, 0.3 | 94.4 | 85.9 | 0.4, 0.5 | 90.3 | 79.0 | 0.4, 0.5 | 82.1
Table 3. KMsWA2 attack on DenseNet169 in X-rays.
X-rays – DenseNet169 – Original Accuracy = 95.9%. For each embedding strength (S), the columns give SSIM (%), the most effective p1, p2, and accuracy (%).
L-Bits | S=50: SSIM | p1, p2 | Acc. | S=100: SSIM | p1, p2 | Acc. | S=200: SSIM | p1, p2 | Acc. | S=300: SSIM | p1, p2 | Acc.
100 | 99.4 | 0.8, 0.4 | 95.3 | 99.2 | 0.9, 0.3 | 95.0 | 98.4 | 0.8, 0.5 | 94.0 | 97.5 | 0.8, 0.7 | 94.3
200 | 99.3 | 0.8, 0.4 | 95.0 | 98.7 | 0.1, 0.8 | 95.0 | 97.0 | 0.8, 0.5 | 93.7 | 95.2 | 0.8, 0.5 | 91.2
300 | 99.0 | 0.8, 0.4 | 95.3 | 98.2 | 0.1, 0.5 | 94.3 | 95.6 | 0.7, 0.5 | 92.5 | 93.0 | 0.8, 0.5 | 89.3
400 | 98.9 | 0.8, 0.8 | 95.0 | 97.6 | 0.8, 0.5 | 94.0 | 94.2 | 0.7, 0.5 | 91.5 | 90.9 | 0.8, 0.5 | 89.3
500 | 98.7 | 0.8, 0.5 | 94.6 | 97.1 | 0.7, 0.5 | 93.7 | 92.9 | 0.7, 0.5 | 90.3 | 88.9 | 0.7, 0.4 | 88.4
600 | 98.5 | 0.8, 0.4 | 95.0 | 96.5 | 0.7, 0.5 | 94.0 | 91.5 | 0.7, 0.4 | 89.6 | 86.9 | 0.6, 0.5 | 85.9
700 | 98.3 | 0.9, 0.3 | 94.6 | 95.9 | 0.7, 0.4 | 93.4 | 90.2 | 0.7, 0.3 | 88.4 | 84.9 | 0.6, 0.5 | 85.3
800 | 98.2 | 0.9, 0.3 | 95.0 | 95.3 | 0.2, 0.3 | 93.7 | 88.8 | 0.7, 0.5 | 87.8 | 83.0 | 0.6, 0.5 | 84.0
900 | 97.9 | 0.9, 0.4 | 94.6 | 96.7 | 0.7, 0.4 | 91.5 | 87.4 | 0.7, 0.5 | 87.1 | 81.0 | 0.6, 0.5 | 80.6
1000 | 97.6 | 0.8, 0.6 | 94.6 | 94.0 | 0.7, 0.4 | 91.5 | 85.9 | 0.6, 0.5 | 85.0 | 79.0 | 0.6, 0.5 | 80.3
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

