Article

The Noise Blowing-Up Strategy Creates High Quality High Resolution Adversarial Images against Convolutional Neural Networks

Faculty of Science, Technology and Medicine, University of Luxembourg, L-4365 Luxembourg, Luxembourg
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(8), 3493; https://doi.org/10.3390/app14083493
Submission received: 18 March 2024 / Revised: 9 April 2024 / Accepted: 15 April 2024 / Published: 21 April 2024
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security: Trends and Challenges)

Abstract
Convolutional neural networks (CNNs) serve as powerful tools in computer vision tasks with extensive applications in daily life. However, they are susceptible to adversarial attacks. Still, such attacks can be beneficial for at least two reasons. Firstly, revealing CNNs' vulnerabilities prompts efforts to enhance their robustness. Secondly, adversarial images can also be employed to preserve privacy-sensitive information from CNN-based threat models aiming to extract such data from images. For such applications, the construction of high-resolution adversarial images is mandatory in practice. This paper firstly quantifies the speed, adversity, and visual quality challenges involved in the effective construction of high-resolution adversarial images, secondly provides the operational design of a new strategy, called here the noise blowing-up strategy, which works for any attack, any scenario, any CNN, and any clean image, and thirdly validates the strategy via an extensive series of experiments. We performed experiments with 100 high-resolution clean images, exposing them to seven different attacks against 10 CNNs. Our method achieved an overall average success rate of 75% in the targeted scenario and 64% in the untargeted scenario. We revisited the failed cases: a slight modification of our method led to success rates larger than 98.9%. As of today, the noise blowing-up strategy is the first generic approach that successfully solves all three speed, adversity, and visual quality challenges, and therefore effectively constructs high-resolution adversarial images meeting high-quality requirements.

1. Introduction

The ability of convolutional neural networks (CNNs) [1] to automatically learn from data has made them a powerful tool in a wide range of applications touching on various aspects of our daily lives, such as image classification [2,3], object detection [4], facial recognition [5], autonomous vehicles [6], medical image analysis [7,8], natural language processing (NLP) [9,10], augmented reality (AR) [11], quality control in manufacturing [12] and satellite image analysis [13,14].
Even so, CNNs are vulnerable to attacks. In the context of image classification, which is considered in the present paper, carefully designed adversarial noise added to the original image can lead to adversarial images being misclassified by CNNs. These issues can lead to serious safety problems in real-life applications. On the flip side, such vulnerabilities can be also leveraged to obscure security and privacy-sensitive information from CNN-based threat models seeking to extract such data from images [15,16,17].
In a nutshell, adversarial attacks are categorized based on two components: the level of knowledge the attacker has about the CNN, and the scenario followed by the attack. Regarding the first component, in a white-box attack [3,18,19,20,21] (also known as a gradient-based attack), the attacker has full access to the architecture and to the parameters of the CNN. In contrast, in a black-box attack [22,23,24,25,26,27], the attacker does not know the CNN's parameters or architecture; its knowledge is limited to the CNN's evaluation of any input image, including the label category in which it classifies the image, and the corresponding label value. As a consequence of this knowledge bias, white-box attacks usually generate adversarial images faster than black-box attacks. Regarding the second component, in the target scenario, the goal of the attack is to manipulate the clean input image to create an adversarial image that the CNN classifies into a predefined target category. In the untargeted scenario, the goal of the attack is to create an adversarial image that the CNN classifies into any category other than the category of the clean image. An additional objective in both scenarios is to require that the modifications made to the original clean image to create the adversarial image remain imperceptible to the human eye.

1.1. Standard Adversarial Attacks

To perform image recognition, CNNs start their assessment of any image by first resizing it to their own input size. In particular, high-resolution images are scaled down, say to 32 × 32 for most CNNs trained on CIFAR-10, or to 224 × 224 for most CNNs trained on ImageNet [28]. To the best of our knowledge, attacks have been, and still are, performed on these resized images. Consequently, the resulting adversarial images' size coincides with the CNN's input size, regardless of the size of the original images. Figure 1 describes this standard approach, in which attacks take place in the low-resolution domain, denoted as the $R$ domain in this paper.
As previously highlighted, the susceptibility of CNNs to adversarial attacks can be utilized to obfuscate privacy-sensitive information from CNN-empowered malicious software. To use adversarial images for such security purposes, their sizes must match the sizes of the original clean images considered. In practice, these sizes are usually far larger than 224 × 224 . However, generating high-resolution adversarial images, namely adversarial images in the H domain as we call it in this paper, poses certain difficulties.

1.2. Challenges and Related Works

Creating adversarial images of the same size as their clean counterparts, as illustrated in Figure 2, is a novel and highly challenging task in terms of speed, adversity, and imperceptibility.
Firstly, the complexity of the problem grows quadratically with the size of the images. This issue impacts the speed of attacks performed directly in the $H$ domain. In [29], an evolutionary algorithm-based black-box attack, which successfully handled images of size 224 × 224, was tested on a high-resolution image of size 910 × 607 via the direct approach illustrated in Figure 2. Despite 40 h of computational effort, it failed to create a high-resolution adversarial image by this direct method. This indicates that a direct attack in the $H$ domain, as described above, is unlikely to succeed. An alternative approach is definitely needed to speed up the attack process in the $H$ domain.
Additionally, the adversarial noise in the high-resolution adversarial image must remain effective even when the adversarial image is resized to the input size of the CNN. Finally, the difference between the high-resolution original clean image and the high-resolution adversarial image must be imperceptible to the human eye.
A first solution to the speed and adversity challenges is presented in [29,30] as an effective strategy that smoothly transforms an adversarial image, regardless of how it is generated, from the $R$ domain to the $H$ domain. However, the imperceptibility issue was not resolved.

1.3. Our Contribution

In this article, we introduce a novel strategy, extending our conference paper [31] (and enhancing [29,30]). This strategy stands as the first effective method for generating high visual quality adversarial images in the high-resolution domain in the following sense: the strategy works for any attack, any scenario, any CNN, and any clean high-resolution image. Compared to related works, our refined strategy substantially increases the visual quality of the high-resolution adversarial images, as well as the speed and efficiency of creating them. In summary, the approach amounts to "blowing up" to the high-resolution domain the adversarial noise (only the adversarial noise, not the full adversarial image) created in the low-resolution domain. Adding this high-resolution noise to the original high-resolution clean image leads to a high-resolution adversarial image that is visually indistinguishable from the clean one.
This noise blowing-up strategy is validated in terms of speed, adversity, and visual quality by an extensive set of experiments. It encompasses seven attacks (four white-box and three black-box) against 10 state-of-the-art CNNs trained on ImageNet; the attacks are performed both for the untargeted and the target scenarios, with 100 high-resolution clean images. In particular, the visual quality of high-resolution adversarial images generated with our method is thoroughly studied; the outcomes are compared with adversarial images resulting from [29,30].

1.4. Organisation of the Paper

Our paper is organised as follows. Section 2 briefly recalls the target and untarget scenarios in $R$ and their versions in $H$, fixes some notations, and lists a series of indicators ($L_p$ norms and FID) used to assess the human perception of distinct images. Section 3 formalises the noise blowing-up strategy, provides the scheme of the attack $atk_{H,C}$ that lifts to $H$ any attack $atk_{R,C}$ against a CNN $C$ working in the $R$ domain, and that takes advantage of lifting the adversarial noise only. It recalls some complementary indicators used to assess the impact of the obtained tentative adversarial images (loss function $L^C$, "safety buffer" $\Delta^C$), and again fixes some notations. The experimental study is performed in the subsequent sections. Section 4 describes the ingredients of the experiments: the resizing functions, the 10 CNNs, the 100 clean high-resolution images, the target categories considered in the target scenario, and the 7 attacks. Section 5 provides the results of the experiments performed under these conditions: success rate, visual quality and imperceptibility of the difference between adversarial and clean images, timing, and overhead of the noise blowing-up strategy. The cases where the standard implementation of the strategy failed are revisited in Section 6 thanks to the "safety buffer" $\Delta^C$. Finally, Section 7 provides a comparison of the noise blowing-up method with the generic lifting method [29,30] on three challenging high-resolution images, one CNN, and one attack for the target scenario. Section 8 summarizes our findings and indicates directions for future research. An Appendix completes the paper with additional data and evidence.
All algorithms and experiments were implemented using Python 3.9 [32] with NumPy 1.23.5 [33], TensorFlow 2.14.0 [34], Keras 3 [35], and Scikit 0.22 [36] libraries. Computations were performed on nodes with Nvidia Tesla V100 GPGPUs of the IRIS HPC Cluster at the University of Luxembourg.

2. CNNs and Attack Scenarios

CNNs performing image classification are trained on some large dataset $S$ to sort images into predefined categories $c_1, \ldots, c_\ell$. The categories, and their number $\ell$, are associated with $S$ and are common to all CNNs trained on $S$. One denotes by $R$ the set of images of size $r_1 \times r_2$ (where $r_1$ is the height and $r_2$ the width of the image) natively adapted to such CNNs.
Once trained, a CNN can be exposed to images (typically) in the same domain $R$ as those on which it was trained. Given an input image $I \in R$, the trained CNN produces a classification output vector
$$o_I = \left(o_I[1], \ldots, o_I[\ell]\right),$$
where $0 \le o_I[i] \le 1$ for $1 \le i \le \ell$, and $\sum_{i=1}^{\ell} o_I[i] = 1$. Each $c_i$-label value $o_I[i]$ measures the plausibility that the image $I$ belongs to the category $c_i$.
Consequently, the CNN classifies the image $I$ as belonging to the category $c_k$ if $k = \arg\max_{1 \le i \le \ell} \left(o_I[i]\right)$. If there is no ambiguity on the dominating category (as occurs for most images used in practice; we also make this assumption in this paper), one denotes $(c_k, o_I[k])$ the pair specifying the dominating category and the corresponding label value. The higher the $c_k$-label value $o_I[k]$, the higher the confidence that $I$ represents an object of the category $c_k$ from the CNN's "viewpoint". For the sake of simplicity and consistency with the remainder of this paper, we shall write $(c_I, \tau_{c_I}) = (c_k, o_I[k])$. In other words, $C$'s classification of $I$ is
$$C(I) = (c_I, \tau_{c_I}) \in V = \left\{(c_i, v_i),\ \text{where } v_i \in [0,1] \text{ for } 1 \le i \le \ell\right\}.$$
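For concreteness, the following minimal Python sketch (with hypothetical category names and label values) shows how the pair $(c_I, \tau_{c_I})$ is read off an output vector:

```python
import numpy as np

def dominating_category(o_I, categories):
    """Return the pair (c_I, tau_{c_I}) from a CNN's output vector o_I."""
    k = int(np.argmax(o_I))                      # index of the dominating category
    return categories[k], float(o_I[k])

# Toy example with three categories (hypothetical values)
o_I = np.array([0.10, 0.75, 0.15])
print(dominating_category(o_I, ["cat", "dog", "fox"]))   # ('dog', 0.75)
```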

2.1. Assessment of the Human Perception of Distinct Images

Given two images $A$ and $B$ of the same size $h \times w$ (belonging or not to the $R$ domain), there are different ways to assess numerically the human perception of the difference between them, as well as the actual "weight" of this difference. In the present study, this assessment is performed mainly by computing the (normalized) values of $L_p(A, B)$ for $p = 0, 1, 2$, or $\infty$, and the Fréchet Inception Distance (FID).
Introduced in [37], FID originally served as a metric to evaluate the performance of GANs by assessing the similarity of generated images. FID is one of the recent tools for assessing the visual quality of adversarial images and it aligns closely with human judgment (see [38,39,40]). On the other hand, [41,42] provide an assessment of L p -norms as a measure of perceptual distance between images.
In a nutshell, for an image $I$ of size $h \times w$, the integer
$$0 \le p_{i,j,\alpha}(I) \le 255$$
denotes the value of the pixel positioned in the $i$th row, $j$th column of the image $I$ for the channel $\alpha \in \{R, G, B\}$ (R = Red, G = Green, B = Blue). Then,
  • $L_0^{norm}(A,B) = \frac{1}{3hw}\,\#\{(i,j,\alpha)\ ;\ p_{i,j,\alpha}(A) \neq p_{i,j,\alpha}(B)\}$,
  • $L_1^{norm}(A,B) = \frac{1}{2^8 \cdot 3hw}\sum_{i,j,\alpha}\left|p_{i,j,\alpha}(A) - p_{i,j,\alpha}(B)\right|$,
  • $L_2^{norm}(A,B) = \frac{1}{2^8\sqrt{3hw}}\sqrt{\sum_{i,j,\alpha}\left|p_{i,j,\alpha}(A) - p_{i,j,\alpha}(B)\right|^2}$,
  • $L_\infty(A,B) = \max_{i,j,\alpha}\left|p_{i,j,\alpha}(A) - p_{i,j,\alpha}(B)\right|$,
where $1 \le i \le h$, $1 \le j \le w$, and $\alpha \in \{R, G, B\}$. These quantities satisfy the inequalities:
$$0 \le L_0^{norm}(A,B),\ L_1^{norm}(A,B),\ L_2^{norm}(A,B) \le 1, \quad \text{and} \quad 0 \le L_\infty(A,B) \le 256.$$
The closer their values are to 0, the closer the images $A$ and $B$ are to each other.
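A direct NumPy transcription of these four indicators, assuming the $2^8$ normalization factor reconstructed above and uint8 images of shape (h, w, 3), may look as follows:

```python
import numpy as np

def lp_indicators(A, B):
    """Normalized L0, L1, L2 and plain L_inf distances between two images A and B
    (uint8 arrays of shape (h, w, 3)), following the definitions above."""
    diff = np.abs(A.astype(np.float64) - B.astype(np.float64))
    n = diff.size                                   # 3 * h * w
    l0 = np.count_nonzero(diff) / n
    l1 = diff.sum() / (2**8 * n)
    l2 = np.sqrt((diff ** 2).sum()) / (2**8 * np.sqrt(n))
    linf = diff.max()
    return l0, l1, l2, linf
```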
To effectively capture the degree of disturbance, and therefore to provide a reliable measure of the level of disruption, FID quantifies the separation between clean and disturbed images based on extracting features from images that are provided by the Inception-v3 network [43]. Activations from one of the intermediate layers of the Inception v3 model are used as feature representations for each image. FID assesses the similarity between two probability distributions in a metric space, via the formula:
  • $\mathrm{FID}(A,B) = \lVert \mu_A - \mu_B \rVert^2 + \mathrm{Tr}\!\left(M_A + M_B - 2\sqrt{M_A \cdot M_B}\right)$,
where $\mu_A$ and $\mu_B$ denote the feature-wise mean vectors for the images $A$ and $B$, respectively, reflecting the average features observed across the images, and $M_A$ and $M_B$ represent the covariance matrices of the feature vectors (covariance matrices offer insights into how features in the vectors co-vary with each other). The quantity $\lVert \mu_A - \mu_B \rVert^2$ captures the squared difference of the mean vectors (highlighting disparities in these average features), and the trace term assesses the dissimilarity between the covariance matrices. In the end, FID quantifies how similar the distribution of feature vectors in $A$ is to that in $B$. The lower the FID value, the more similar the images $A$ and $B$.
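The formula translates directly into NumPy/SciPy once Inception-v3 feature vectors are available; the feature extraction itself (activations from one intermediate Inception-v3 layer, as stated above) is omitted here and left as an assumption:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(features_a, features_b):
    """FID between two sets of Inception-v3 feature vectors of shape (n, d)."""
    mu_a, mu_b = features_a.mean(axis=0), features_b.mean(axis=0)
    M_a = np.cov(features_a, rowvar=False)
    M_b = np.cov(features_b, rowvar=False)
    covmean = sqrtm(M_a @ M_b)
    if np.iscomplexobj(covmean):                   # discard numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(M_a + M_b - 2.0 * covmean))
```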

2.2. Attack Scenarios in the R Domain

Let $C$ be a trained CNN, $c_a$ a category among the possible categories, and $A$ a clean image in the $R$ domain, classified by $C$ as belonging to $c_a$. Let $\tau_a$ be its $c_a$-label value. Based on these initial conditions, we describe two attack scenarios (the target scenario and the untarget scenario), aiming at creating an adversarial image $D \in R$ accordingly.
Whatever the scenario, one requires that D remains so close to A , that a human would not notice any difference between A and D . This is done in practice by fixing the value of the parameter ϵ , which controls (or restricts) the global maximum amplitude allowed for the modifications of each pixel value of A to construct an adversarial image D . Note that, for a given attack scenario, the value set to ϵ usually depends on the concrete performed attack, more specifically on the L p distance used in the attack to assess the human perception between an original and an adversarial image.
The $(c_a, c_t)$ target scenario performed on $A$ requires first to select a category $c_t \neq c_a$. The attack then aims at constructing an image $D$ that is either a good enough adversarial image or a $\tau$-strong adversarial image.
A good enough adversarial image is an adversarial image that $C$ classifies as belonging to the target category $c_t$, without any requirement on the $c_t$-label value $\tau_t$ beyond being strictly dominant among all label values. A $\tau$-strong adversarial image is an adversarial image that $C$ not only classifies as belonging to the target category $c_t$, but for which its $c_t$-label value satisfies $\tau_t \ge \tau$ for some threshold value $\tau \in [0,1]$ fixed a priori.
In the untarget scenario performed on $A$, the attack aims at constructing an image $D$ that $C$ classifies in any category $c \neq c_a$.
One writes $atk_{R,C}^{scenario}$ to denote the specific attack $atk$ performed to deceive $C$ in the $R$ domain according to the selected scenario, and $D = atk_{R,C}^{scenario}(A)$ an adversarial image obtained by successfully running this attack on the clean image $A$. Note that one usually considers only the first adversarial image obtained by a successful run of an attack, so that $D$ is uniquely defined.
Finally, one writes $C(D) = (c, \tau_c)$ for the classification of the adversarial image obtained. Note that $(c, \tau_c) = (c_t, \tau_t)$ in the case of the target scenario.

2.3. Attack Scenarios Expressed in the H Domain

In the context of high-resolution (HR) images, let us denote by $H$ the set of images that are larger than those of $R$. In other words, an image of size $h \times w$ (where $h$ designates the height and $w$ the width of the image considered) belongs to $H$ if $h \ge r_1$ and $w \ge r_2$. One assumes given a fixed degradation function
$$\rho : H \longrightarrow R,$$
that transforms any image $I \in H$ into a "degraded" image $\rho(I) \in R$. Then there is a well-defined composition of maps $C \circ \rho$, as shown in the following scheme:
[Scheme: the composition $C \circ \rho : H \to R \to V$]
Given $A_a^{hr} \in H$, one obtains in that way the classification of the reduced image $A_a = \rho(A_a^{hr}) \in R$ as $C(A_a) \in V$.
We assume that the dominating category of the reduced image $A_a$ is without ambiguity, and denote by $C(A_a) = (c_a, \tau_a) \in V$ the outcome of $C$'s classification of $A_a$.
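As an illustration, a minimal sketch of the composition $C \circ \rho$, using Pillow's Lanczos filter as a stand-in for $\rho$ and a Keras ImageNet model as the CNN $C$ (the file name and the specific model are assumptions, not the paper's exact setup):

```python
import numpy as np
from PIL import Image
from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions

def rho(image_hr, size=(224, 224)):
    """Degradation function rho: H -> R (Lanczos downscaling used as an example)."""
    return image_hr.resize(size, Image.LANCZOS)

model = MobileNet(weights="imagenet")                # one ImageNet-trained CNN C

A_hr = Image.open("ancestor_hr.jpg").convert("RGB")  # hypothetical HR clean image A_a^hr
A = rho(A_hr)                                        # A_a = rho(A_a^hr) in R
x = preprocess_input(np.array(A, dtype=np.float32)[np.newaxis])
_, c_a, tau_a = decode_predictions(model.predict(x), top=1)[0][0]
print(c_a, tau_a)                                    # C(A_a) = (c_a, tau_a)
```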
Thanks to the degradation function ρ , one can express in the H domain any attack scenario that makes sense in the R domain. This is in particular the case for the target scenario and for the untarget scenario.
Indeed, an adversarial HR image against $C$ for the $(c_a, c_t)$ target scenario performed by an attack $atk_{H,C}^{target}$ on $A_a^{hr} \in H$ is an image $D_t^{hr,C}(A_a^{hr}) = atk_{H,C}^{target}(A_a^{hr}) \in H$ that satisfies two conditions (note that the notation $D_t^{hr,C}(A_a^{hr})$, with $t$ as index, encapsulates and summarizes the fact that the adversarial image is obtained for the specific target scenario considered). On the one hand, a human should not be able to notice any visual difference between the original $A_a^{hr}$ and the adversarial $D_t^{hr,C}(A_a^{hr})$ HR images. On the other hand, $C$ should classify the degraded image $D_t^{C}(A_a^{hr}) = \rho(D_t^{hr,C}(A_a^{hr}))$ in the category $c_t$ with a sufficiently convincing $c_t$-label value. The image $D_t^{hr,C}(A_a^{hr}) \in H$ is then a good enough adversarial image or a $\tau$-strong adversarial image if its reduced version $D_t^{C}(A_a^{hr}) = \rho(D_t^{hr,C}(A_a^{hr}))$ is.
Similarly, and mutatis mutandis for the untarget scenario, one denotes by $D_{untarget}^{hr,C}(A_a^{hr}) = atk_{H,C}^{untarget}(A_a^{hr})$ the HR adversarial image obtained by an attack $atk_{H,C}^{untarget}$ for the untarget scenario performed on $A_a^{hr} \in H$, and by $D_{untarget}^{C}(A_a^{hr}) \in R$ its degraded version.
The generic attack scenario on C in the HR domain can be visualized in the following scheme:
[Scheme: the generic attack $atk_{H,C}^{scenario}$ on $C$ in the HR domain]
Depending on the scenario considered, one has:
  • For the target scenario: $D_{scenario}^{hr,C}(A_a^{hr}) = D_t^{hr,C}(A_a^{hr}) = atk_{H,C}^{target}(A_a^{hr}) \in H$, $D_{scenario}^{C}(A_a^{hr}) = D_t^{C}(A_a^{hr}) = \rho(D_t^{hr,C}(A_a^{hr})) \in R$, and $(c, \tau_c) = (c_t, \tau_t)$ with $c_t$ dominant among all categories, and, furthermore, $\tau_t \ge \tau$ if one additionally requires the adversarial image to be $\tau$-strong adversarial.
  • For the untarget scenario: $D_{scenario}^{hr,C}(A_a^{hr}) = D_{untarget}^{hr,C}(A_a^{hr}) = atk_{H,C}^{untarget}(A_a^{hr}) \in H$, $D_{scenario}^{C}(A_a^{hr}) = D_{untarget}^{C}(A_a^{hr}) \in R$, and $(c, \tau_c)$ with $c$ such that $c \neq c_a$.
Whatever the scenario, one also requires that a human is unable to notice any difference between the clean image $A_a^{hr}$ and the adversarial image $D_{scenario}^{hr,C}(A_a^{hr})$ in $H$.

3. The Noise Blowing-Up Strategy

The method presented here (and introduced in [31]) attempts to circumvent the speed, adversity, and visual quality challenges mentioned in the Introduction, which are encountered when one intends to create HR adversarial images. While speed and adversity were successfully addressed in [29,30] via a strategy similar to some extent to the present one, the visual quality challenge remained partly unsolved. The refinement provided by our noise blowing-up strategy, which lifts to the $H$ domain any attack working in the $R$ domain, addresses this visual quality issue without harming the speed and adversity features. It furthermore simplifies and generalises the attack scheme described in [29,30].
In a nutshell, the noise blowing-up strategy applied to an attack atk on a CNN C following a given scenario, essentially proceeds as follows.
One considers a clean image $A_a \in R$, degraded from a clean image $A_a^{hr} \in H$ thanks to a degrading function $\rho$. Then one performs an attack $atk_{R,C}^{scenario}$ on $A_a$ in the $R$ domain, which leads to an image in $R$ that is adversarial against the CNN for the considered scenario. Although getting such adversarial images in the $R$ domain is crucial for obvious reasons, our strategy does not depend on how they are obtained and applies to all possible attacks $atk_{R,C}^{scenario}$ working efficiently in the $R$ domain. This feature contributes substantially to the flexibility of our method.
Then one computes the adversarial noise in R as the difference between the adversarial image and the clean image in R . Thanks to a convenient enlarging function λ , one blows up this adversarial noise from R to H . Then, one adds this blown-up noise to A a hr , creating that way a high-resolution image, called here the HR tentative adversarial image.
One checks whether this HR tentative adversarial image fulfills the criteria stated in the last paragraph of Section 2.3, namely becomes adversarial once degraded by the function ρ . Should this occur, it means that blowing up the adversarial noise in R has led to a noise in H that turns out to be also adversarial. If the blown-up noise is not sufficiently adversarial, one raises the expectations at the R level accordingly.
The concrete design of the noise blowing-up strategy, which aims at creating an efficient attack in the H domain once given an efficient attack in the R domain for some scenario, is given step-by-step in Section 3.1. A series of indicators is given in Section 3.2. The assessment of these indicators depends on the choice of the degrading and enlarging functions used to go from H to R , and vice versa. These choices are specified in Section 4.

3.1. Constructing Images Adversarial in H out of Those Adversarial in R

Given a CNN $C$, the starting point is a large-size clean image $A_a^{hr} \in H$.
In Step 1, one constructs its degraded image $A_a = \rho(A_a^{hr}) \in R$.
In Step 2, one runs $C$ on $A_a$ to get its classification in a category $c_a$. More precisely, one gets $C(A_a) = (c_a, \tau_a)$.
In Step 3, with notations consistent with those used in Section 2.3, one assumes given an attack $atk_{R,C}^{scenario}$ on $A_a$ in the $R$ domain, which leads to an image
$$\tilde{D}_{scenario}^{C}(A_a) \in R,$$
adversarial against the CNN for the considered scenario. As already stated, how such an adversarial image is obtained does not matter. For reasons linked to Step 5 and to Step 8, one denotes $(c_{bef}, \tilde{\tau}_{c_{bef}})$ the outcome of the classification by $C$ of this adversarial image in $R$. The index "$bef$" indicates that these assessments and measures take place before the noise blowing-up process per se (Steps 4, 5, 6 essentially).
Step 4 consists in getting the adversarial noise $N^{C}(A_a) \in R$ as the difference
$$N^{C}(A_a) = \tilde{D}_{scenario}^{C}(A_a) - A_a \in R$$
of images living in $R$, one being the adversarial image of the clean other.
To perform Step 5, one needs a fixed enlarging function
$$\lambda : R \longrightarrow H$$
that transforms any image of $R$ into an image in $H$. Anticipating Step 8, it is worthwhile noting that, although the reduction function $\rho$ and the enlarging function $\lambda$ have opposite purposes, these functions are not necessarily inverse to one another. In other words, $\rho \circ \lambda$ and $\lambda \circ \rho$ may differ from the identity maps $id_R$ and $id_H$, respectively (usually they do).
One applies the enlarging function $\lambda$ to the low-resolution adversarial noise $N^{C}(A_a)$, which leads to the blown-up noise
$$N^{hr,C}(A_a^{hr}) = \lambda(N^{C}(A_a)) \in H.$$
In Step 6, one creates the HR tentative adversarial image by adding this blown-up noise to the original high-resolution image as follows:
$$D_{scenario}^{hr,C}(A_a^{hr}) = A_a^{hr} + N^{hr,C}(A_a^{hr}) \in H.$$
In Step 7, the application of the reduction function $\rho$ to this HR tentative adversarial image creates an image $D_{scenario}^{C}(A_a^{hr}) = \rho(D_{scenario}^{hr,C}(A_a^{hr}))$ in the $R$ domain.
Finally, in Step 8, one runs $C$ on $D_{scenario}^{C}(A_a^{hr})$ to get its classification $(c_{aft}, \tau_{c_{aft}})$. The index "$aft$" indicates that these assessments and measures take place after the noise blowing-up process per se (Steps 4, 5, 6 essentially).
The attack succeeds if the conditions stated at the end of Section 2.3 are satisfied according to the considered scenario.
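The eight steps condense into a short routine. The sketch below is an assumption-laden illustration: tf.image.resize with method="lanczos3" stands in for the Lanczos-based $\rho$ and $\lambda$, the clipping in Step 6 is an implementation detail, and attack and classify are placeholders for any attack $atk_{R,C}^{scenario}$ and for the CNN $C$'s classification:

```python
import numpy as np
import tensorflow as tf

def noise_blowing_up(A_hr, attack, classify, r_size=(224, 224)):
    """Steps 1-8 of the noise blowing-up strategy for one HR clean image.

    A_hr: float array of shape (h, w, 3) with pixel values in [0, 255].
    attack(A): returns an adversarial image in R for the chosen scenario.
    classify(img): returns the pair (category, label value) given by the CNN."""
    h, w = A_hr.shape[:2]
    resize = lambda img, size: tf.image.resize(img, size, method="lanczos3").numpy()

    A = resize(A_hr, r_size)                      # Step 1: A_a = rho(A_a^hr)
    c_a, tau_a = classify(A)                      # Step 2: C(A_a) = (c_a, tau_a)
    D_tilde = attack(A)                           # Step 3: adversarial image in R
    c_bef, tau_bef = classify(D_tilde)
    noise_r = D_tilde - A                         # Step 4: adversarial noise in R
    noise_hr = resize(noise_r, (h, w))            # Step 5: blown-up noise via lambda
    D_hr = np.clip(A_hr + noise_hr, 0.0, 255.0)   # Step 6: HR tentative adversarial image
    D_check = resize(D_hr, r_size)                # Step 7: rho(D^hr)
    c_aft, tau_aft = classify(D_check)            # Step 8: verification by C
    success = (c_aft == c_bef) if c_bef != c_a else False   # target-scenario criterion
    return D_hr, success
```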
Remarks. (1) For reasons explained in Step 5, there is no reason that $\tilde{\tau}_{c_{bef}} = \tau_{c_{aft}}$, even when $C$ classifies both images $\tilde{D}_{scenario}^{C}(A_a)$ and $D_{scenario}^{C}(A_a^{hr})$ in the same category $c = c_{bef} = c_{aft}$ (this condition is expected in the target scenario, provided this common category satisfies $c \neq c_a$). These label values are very likely to differ. This has two consequences: the first is to make mandatory the verification process performed in Step 8, if only to make sure that the adversarial image is conveniently classified by $C$ according to the considered scenario; the second is that, for the target scenario, one should set the value of $\tilde{\tau}_{c_{bef}}$ in such a way as to ensure that the image $D_t^{hr,C}(A_a^{hr})$ is indeed adversarial (see Section 3.2). (2) In the context of the untarget scenario, one should make sure that $c_{aft} \neq c_a$. In the context of the target scenario, one should also aim at getting $c_{aft} = c_{bef}$ (provided one succeeds in creating an adversarial image for which $c_{bef} \neq c_a$). These requirements are likely to influence the value set for $\tilde{\tau}_{c_{bef}}$ as well (see Section 3.2).
Scheme (11) summarizes these steps. It shows how to create, from an attack $atk_{R,C}^{scenario}$ efficient against $C$ in the $R$ domain, the attack $atk_{H,C}^{scenario}$ in the $H$ domain obtained by the noise blowing-up strategy:
[Scheme (11): construction of the attack $atk_{H,C}^{scenario}$ in $H$ from the attack $atk_{R,C}^{scenario}$ in $R$ via the noise blowing-up strategy (Steps 1-8)]

3.2. Indicators

Although both $\tilde{D}_{scenario}^{C}(A_a)$ and $D_{scenario}^{C}(A_a^{hr})$ stem from $A_a^{hr}$ and belong to the same set $R$ of low-resolution images, these images nevertheless differ in general, since $\rho \circ \lambda \neq id_R$. Therefore, as already stated, the verification process performed in Step 8 is mandatory.
For the target scenario, one aims at $c_{aft} = c_{bef} = c_t$. Since $\tilde{\tau}_{c_t}$ and $\tau_{c_t}$ are likely to differ, one measures the difference with the real-valued loss function $L$ defined for $A_a^{hr} \in H$ as
$$L^{C}(A_a^{hr}) = \tilde{\tau}_{c_t} - \tau_{c_t}.$$
In particular, for the target scenario, our attack is effective if one can set accurately the value of $\tilde{\tau}_t$ so as to satisfy the inequality $\tau_t \ge \tau$ for the threshold value $\tau$, or to make sure that $D_t^{C}(A_a^{hr})$ is a good enough adversarial image in the $R$ domain, while controlling the distance variations between $A_a^{hr}$ and the adversarial $D_t^{hr,C}(A_a^{hr})$.
For the untarget scenario, one aims at $c_{aft} \neq c_a$. To hope to achieve $c_{aft} \neq c_a$, one requires $c_{bef} \neq c_a$. However, this requirement alone may not be sufficient to obtain $c_{aft} \neq c_a$. Indeed, depending on the attack, the adversarial image $D_{untarget}^{C}(A_a^{hr})$ (in the $R$ domain) may be very sensitive to the level of trust with which $\tilde{D}_{untarget}^{C}(A_a^{hr})$ (also in the $R$ domain) belongs to the category $c_{bef}$. In other words, even if the attack performed in Step 3 of the noise blowing-up strategy succeeded, the subsequent steps may not succeed under some circumstances, and it may occur that the resulting image is classified back into $c_a$.
Although less pronounced for the target scenario, a similar sensitivity phenomenon may nevertheless occur, leading to $c_{aft} \neq c_{bef}$ (hence to $c_{aft} \neq c_t$, since $c_{bef} = c_t$ in this scenario), and therefore to a failure of the noise blowing-up strategy.
For these reasons, it may be safer to ensure a "margin of security", measured as follows. One defines the Delta function $\Delta^{C}$ for $A_a^{hr} \in H$ as:
$$\Delta^{C}(A_a^{hr}) = \tilde{\tau}_{c_{bef}} - \tilde{\tau}_{c_{next,bef}},$$
where $c_{next,bef}$ is the second-best category, namely the category $c$ for which the label value $\tilde{\tau}_c$ is the highest after the label value $\tilde{\tau}_{c_{bef}}$ of $c_{bef}$. Enlarging the gap between the label values of the best and second-best categories before launching the next steps of the noise blowing-up strategy may lead to higher success rates of the strategy (see Section 6).
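Both indicators are elementary to compute from the CNN's outputs; a possible sketch (function and variable names are ours, not the paper's code):

```python
import numpy as np

def loss_L(tau_bef, tau_aft):
    """Loss L_C: drop of the target-category label value caused by the blowing-up."""
    return tau_bef - tau_aft

def delta_C(o_adv):
    """Safety buffer Delta_C: gap between the best and the second-best label values
    of the low-resolution adversarial image obtained before blowing up the noise."""
    top_two = np.sort(np.asarray(o_adv))[-2:]
    return float(top_two[1] - top_two[0])
```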
Remark. Note that the present approach, unlike the approach initially introduced in [29,30], does not require frequent resizing of the adversarial images up and down via $\lambda$ and $\rho$. In particular, if one knows how the loss function behaves (in the worst case, or on average) for a given targeted attack, then one can adjust a priori the value of $\tilde{\tau}_c$ accordingly, and be satisfied with one such resizing up and down. Mutatis mutandis for the untarget attack and the Delta function.
To assess the visual variations and the noise between the images (see Section 2.1), we shall compute the $L_0^{norm}$, $L_1^{norm}$, $L_2^{norm}$, $L_\infty$, and FID values for the following pairs of images:
  • $A_a$ and $\tilde{D}_{scenario}^{C}(A_a^{hr})$ in the $R$ domain. One writes $L_{p,R}^{norm,adv}$ ($p = 0, 1, 2$) and $L_{\infty,R}^{adv}$ for the corresponding values.
  • $A_a^{hr}$ and $D_{scenario}^{hr,C}(A_a^{hr})$ in the $H$ domain. One writes $L_{p,H}^{norm,adv}$ ($p = 0, 1, 2$) and $L_{\infty,H}^{adv}$ for the corresponding values.
  • $A_a^{hr}$ and $\lambda \circ \rho(A_a^{hr})$ in the $H$ domain. One writes $L_{p,H}^{norm,clean}$ ($p = 0, 1, 2$) and $L_{\infty,H}^{clean}$ for the corresponding values.
  • $A_a^{hr}$ and $\lambda \circ \rho(A_a^{hr})$ in the $H$ domain. One writes $\mathrm{FID}_H^{clean}$ for the corresponding value.
  • $A_a^{hr}$ and $D_{scenario}^{hr,C}(A_a^{hr})$ in the $H$ domain. One writes $\mathrm{FID}_H^{adv}$ for the corresponding value.
In particular, when adversarial images are involved, the comparison of some of these values between what occurs in the R domain, and what occurs in the H domain gives an insight into the weight of the noise at each level, and of the noise propagation once blown-up. Additionally, we shall as well assess the ratio:
$$\frac{L_{1,H}^{norm,adv}}{L_{1,H}^{norm,clean}} = \frac{L_1^{norm}\left(A_a^{hr},\, D_{scenario}^{hr,C}(A_a^{hr})\right)}{L_1^{norm}\left(A_a^{hr},\, \lambda \circ \rho(A_a^{hr})\right)}.$$
This ratio normalizes the weight of the noise with respect to the effect of the composition $\lambda \circ \rho$, which occurs anyhow. Said otherwise, it evaluates the impact created by the noise, normalized by the impact created anyhow by the resizing functions.

4. Ingredients of the Experimental Study

This section specifies the key ingredients used in the experimental study performed in Section 5: degrading and enlarging functions, CNNs, HR clean images, attacks and scenarios. We also take advantage of the outcomes of [29,30,31] for the choice of some parameters used in the experimental study.

4.1. The Selection of ρ and of λ

The assessment of the indicators of Section 3.2, and therefore the performances and adequacy of the resized tentative adversarial images obtained between R and H , clearly depend on the reducing and enlarging functions ρ and λ selected in Scheme (11).
The combined calls $(\rho, \lambda, \rho)$ (performed in Step 1 for the first call of $\rho$, in Step 5 for the unique call of $\lambda$, and in Step 7 for the second call of $\rho$) to the degrading and enlarging functions are "aside" from the actual attacks performed in the $R$ domain. However, both the adversity and the visual quality of the HR adversarial images are highly sensitive to the selected combination.
Moreover, as pointed out in [29], enlarging functions usually have difficulties with high-frequency features. This phenomenon leads to an increased blurriness in the resulting image. Therefore, the visual quality of (and the speed to construct, see [29]) the high-resolution adversarial images obtained by our noise blowing-up strategy benefits from a scarce usage of the enlarging function. Consequently, the scheme minimizes the number of times λ (and consequently ρ ) are used.
We considered four non-adaptive methods that convert an image from one scale to another. Indeed, the Nearest Neighbor [44], Bilinear [45], Bicubic [46], and Lanczos [47,48] methods are among the most common interpolation algorithms, and are available in Python libraries. Note that the Nearest Neighbor method is the default degradation function of the Keras load_img function [35]. Tests performed in [29,30] led to reducing the resizing functions to the Lanczos and Nearest methods.
We performed a case study with the 8 possible combinations $(\rho, \lambda, \rho)$ obtained with the Lanczos and Nearest methods (see Appendix B for full details). Its outcomes lead us to recommend the combination $(\rho, \lambda, \rho) =$ (Lanczos, Lanczos, Lanczos) (see also Section 4.3).
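A minimal way to instantiate the retained combination with Pillow (whose LANCZOS and NEAREST filters are taken here as stand-ins for the library implementations cited above; the HR size is only an example):

```python
from PIL import Image

KERNELS = {"Lanczos": Image.LANCZOS, "Nearest": Image.NEAREST}

def make_rho(kernel, r_size=(224, 224)):
    """Degradation rho: H -> R with the chosen interpolation kernel."""
    return lambda img: img.resize(r_size, KERNELS[kernel])

def make_lambda(kernel, hr_size):
    """Enlarging lambda: R -> H with the chosen interpolation kernel."""
    return lambda img: img.resize(hr_size, KERNELS[kernel])

# Recommended combination (rho, lambda, rho) = (Lanczos, Lanczos, Lanczos)
rho = make_rho("Lanczos")
lam = make_lambda("Lanczos", hr_size=(910, 607))   # example HR size (width, height)
```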

4.2. The CNNs

The experimental study is performed on 10 diverse and commonly used CNNs trained on ImageNet (see [27] for the reasons for these choices). These CNNs are specified in Table 1.

4.3. The HR Clean Images

The experiments are performed on 100 HR clean images. More specifically, Table 2 gives the 10 ancestor categories c a , and the 10 corresponding target categories c t used in the ( c a , c t ) -target scenario whenever applicable (see Section 4.4). These categories (ancestor or target) are the same as those of [27,49], which were picked at random among the 1000 categories of ImageNet.
For each ancestor category, we picked at random 10 clean ancestor images from the ImageNet validation set in the corresponding $c_a$ category, provided that their size $h \times w$ satisfies $h \ge 224$ and $w \ge 224$. This requirement ensures that these images $A_a^{hr}$ belong to the $H$ domain. These images are pictured in Figure A1 in Appendix A, while Table A1 gives their original sizes. Note that, out of the 100 HR clean images in Figure A1, 92 coincide with those used in [27,49] (where they were picked at random). We replaced the 8 remaining images used in [27,49], whose sizes did not fulfill the requirement. As a consequence, the images $A_1^1$ and $A_1^{10}$ in the category $c_{a_1}$, $A_3^3$ in the category $c_{a_3}$, $A_5^1$, $A_5^2$, $A_5^7$ in the category $c_{a_5}$, and $A_9^4$, $A_9^7$ in the category $c_{a_9}$ differ from those of [27,49].
Although the images $A_q^p$ are picked from the ImageNet validation set in the categories $c_{a_q}$, CNNs may not systematically classify all of them in the "correct" category $c_{a_q}$ during Steps 1 and 2 of Scheme (11). Indeed, Table A2 and Table A3 in Appendix A show that this phenomenon occurs for all CNNs, whether one uses $\rho =$ "Lanczos" (L) or "Nearest" (N). Table 3 summarizes these outcomes, where $S_{clean}^{C}(\rho)$ designates the set of "correctly" classified clean images $A_q^p$.
Table 3 shows that the sets $S_{clean}^{C}(L)$ and $S_{clean}^{C}(N)$ usually differ. Table A2 and Table A3 prove that this holds as well for $C = C_7, C_9, C_{10}$, although both sets have the same number of elements.
In any case, the “wrongly” classified clean images are from now on disregarded since they introduce a native bias. Experiments are therefore performed only for the “correctly” classified HR clean images belonging to S c l e a n C ( ρ ) .

4.4. The Attacks

We considered seven well-known attacks against the 10 CNNs given in Table 1. Table 4 lists these attacks, and specifies (with an “x”) whether we use them in the experiments for the targeted scenario, for the untargeted scenario, or for both (see Table 5 for a justification of these choices), and their white-box or black-box nature. To be more precise, if an attack admits a dual nature, namely black box and white-box (potentially semi-white-box), we consider the attack only in its more demanding black-box nature. This leads us to consider three black-box attacks (EA, AdvGAN, SimBA) and four white-box attacks (FGSM, BIM, PGD Inf, PGD L2).
Let us now briefly describe these attacks while specifying the parameters to be used in the experiments. Note that, except (for the time being) for the EA attack, all attacks were applied with the Adversarial Robustness Toolbox (ART) [50], which is a Python library that includes several attack methods.
–EA attack [25,27] is an evolutionary algorithm-based black-box attack. It begins by creating a population of ancestor image copies and iteratively modifies their pixels over generations. The attack's objective is defined by a fitness function that uses an individual's $c_t$ probability obtained from the targeted CNN. The population size is set to 40, and the pixel mutation magnitude per generation is $\alpha = 1/255$. The attack is executed in both targeted and untargeted scenarios. For the targeted scenario, the adversarial image's minimum $c_t$-label value is set to $\tilde{\tau}_t \ge 0.55$. The maximum number of generations is set to $N = 10{,}000$.
–Adversarial GAN attack (AdvGAN) [51] is a type of attack that operates in either a semi-whitebox or black-box setting. It uses a generative adversarial network (GAN) to create adversarial images by employing three key components: a generator, a discriminator, and the targeted neural network. During the attack, the generator is trained to produce perturbations that can convert original images into adversarial images, while the discriminator ensures that the generated adversarial image appears identical to the original image. The attack is executed in the black-box setting.
–Simple Black-box Attack (SimBA) [52] is a versatile algorithm that can be used for both black-box and white-box attacks. It works by randomly selecting a vector from a predefined orthonormal basis and adding or subtracting it from the target image. SimBA is a simple and effective method that can be used for both targeted and untargeted attacks. For our experiments, we utilized SimBA in the black-box setting with the overshoot parameter epsilon set to 0.2, batch size set to 1, and the maximum number of generations set to 10,000 for both targeted and untargeted attacks.
–Fast Gradient Sign Method (FGSM) [53] is a white-box attack that uses the gradient of the loss function $J(X, y)$ with respect to the input $X$ to determine the direction in which the original input should be modified. FGSM is a one-step algorithm that can be executed quickly. In its untargeted version, the adversarial image is
$$X^{adv} = X + \epsilon \cdot \mathrm{sign}\left(\nabla_X J(X, c_a)\right),$$
while in its targeted version it is
$$X^{adv} = X - \epsilon \cdot \mathrm{sign}\left(\nabla_X J(X, c_t)\right),$$
where $\epsilon$ is the perturbation size, measured with the $L_\infty$ norm, and $\nabla$ denotes the gradient. We set $eps\_step = 0.01$ and $\epsilon = 8/255$.
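For illustration, the untargeted/targeted update above can be written with TensorFlow's GradientTape (a sketch assuming inputs scaled to [0, 1] and a one-hot label; this is not the ART implementation actually used in the experiments):

```python
import tensorflow as tf

def fgsm(model, x, y_onehot, eps=8/255, targeted=False):
    """One-step FGSM: add (untargeted) or subtract (targeted) the signed gradient."""
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_onehot, model(x))
    grad_sign = tf.sign(tape.gradient(loss, x))
    x_adv = x - eps * grad_sign if targeted else x + eps * grad_sign
    return tf.clip_by_value(x_adv, 0.0, 1.0)       # keep pixels in the valid range
```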
–Basic Iterative Method (BIM) [54] is a white-box attack that is an iterative version of FGSM. BIM is a computationally expensive attack, as it requires calculating the gradient at each iteration. In BIM, the adversarial image $X^{adv}$ is initialized with the original image $X$ and gradually updated over a given number of steps $N$ as follows:
$$X_{i+1}^{adv} = \mathrm{Clip}_{\epsilon}\left\{X_i^{adv} + \alpha \cdot \mathrm{sign}\left(\nabla_X J_C(X_i^{adv}, c_a)\right)\right\}$$
in its untargeted version and
$$X_{i+1}^{adv} = \mathrm{Clip}_{\epsilon}\left\{X_i^{adv} - \alpha \cdot \mathrm{sign}\left(\nabla_X J_C(X_i^{adv}, c_t)\right)\right\}$$
in its targeted version, where $\alpha$ is the step size at each iteration and $\epsilon$ is the maximum perturbation magnitude of $X^{adv} = X_N^{adv}$. We use $eps\_step = 0.1$, $max\_iter = 100$, and $\epsilon = 2/255$.
–Projected Gradient Descent Infinite (PGD Inf) [55] is a white-box attack that is similar to the BIM attack, but with some key differences. In PGD Inf, the initial adversarial image is not set to the original image $X$, but rather to a random point within an $L_p$-ball around $X$. The distance between $X$ and $X^{adv}$ is measured using the $L_\infty$ norm. For our experiments, we set the norm parameter to $\infty$, which indicates the use of the $L_\infty$ norm. We also set the step size parameter $eps\_step$ to 0.1, the batch size to 32, and the maximum perturbation magnitude $\epsilon$ to 8/255.
–Projected Gradient Descent $L_2$ (PGD $L_2$) [55] is a white-box attack similar to PGD Inf, with the difference that $L_\infty$ is replaced with $L_2$. We set $norm = 2$, $eps\_step = 0.1$, $batch\_size = 32$, and $\epsilon = 2$.
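Since the gradient-based attacks are run through ART, their instantiation reduces to a few lines. The sketch below shows one plausible setup with the parameters listed above; the wrapped model, the pixel range, and the loss object are assumptions (ART also ships SimBA and further attacks, whose exact options we do not reproduce here):

```python
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod, BasicIterativeMethod, ProjectedGradientDescent

model = tf.keras.applications.MobileNet(weights="imagenet")      # e.g., C_4 = MobileNet
classifier = TensorFlowV2Classifier(
    model=model, nb_classes=1000, input_shape=(224, 224, 3),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    clip_values=(0.0, 1.0),                                      # assuming inputs in [0, 1]
)

fgsm  = FastGradientMethod(classifier, eps=8/255, eps_step=0.01)
bim   = BasicIterativeMethod(classifier, eps=2/255, eps_step=0.1, max_iter=100, targeted=True)
pgd_2 = ProjectedGradientDescent(classifier, norm=2, eps=2.0, eps_step=0.1, batch_size=32)

# x: batch of degraded clean images A_a in R; y: one-hot target labels (targeted case)
# x_adv = bim.generate(x=x, y=y)
```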

5. Experimental Results of the Noise Blowing-Up Method

The experiments, following the process implemented in Scheme (11), essentially proceed in two phases for each CNN listed in Table 1, and for each attack and each scenario specified in Table 4.
Phase 1, whose results are given in Section 5.1, mainly deals with running a t k R , C s c e n a r i o on degraded images in the R domain. It corresponds to Step 3 of Scheme (11). The results of these experiments are interpreted in Section 5.2.
Remark. It is worthwhile noting that Step 3, which is of course mandatory in the whole process, should be considered as independent of the noise blowing-up strategy per se. Indeed, although its results are necessary for the experiments performed in the subsequent steps, the success or failure of Phase 1 measures the success or failure of the considered attack (EA, AdvGAN, BIM, etc.) for the considered scenario (target or untarget) in its usual environment (the low-resolution $R$ domain). In other words, the outcomes of Phase 1 do not assess in any way the success or failure of the noise blowing-up strategy. This very aspect is addressed in the experiments performed in Phase 2.
Phase 2, whose results are given in Section 5.3, indeed encapsulates the essence of running a t k H , C s c e n a r i o via the blowing-up of the adversarial noise from R to H . It corresponds to Steps 4 to 8 of Scheme (11). The results of these experiments are interpreted in Section 5.4.

5.1. Phase 1: Running $atk_{R,C}^{scenario}$

Table 5 summarizes the outcome of running the attacks $atk_{R,C_k}^{scenario}$ on the 100 clean ancestor images $\rho(A_q^p) \in R$, obtained by degrading, with $\rho =$ "Lanczos", the HR clean images $A_q^p$ represented in Figure A1, against the 10 CNNs $C_1, \ldots, C_{10}$, either for the untarget scenario or for the $(c_a, c_t)$ target scenario.
Table 5 gives the number of successfully generated adversarial images in the R domain created by seven attacks against 10 CNNs, for either the targeted (targ) or the untargeted (untarg) scenario. In the last three rows, the maximum, minimum, and average dominant label values achieved by each successful targeted/untargeted attack are reported across all CNNs.

5.2. Interpretation of the Results of Phase 1

Except for SimBA and FGSM in the target scenario, one sees that all attacks perform well for both scenarios. Given SimBA's and FGSM's poor performance in generating adversarial images for the target scenario (see the Remark at the beginning of this Section), we decided to exclude them from the subsequent noise blowing-up strategy for the target scenario.
The analysis of the average dominant label values reveals, as expected, that white-box attacks usually create very strong adversarial images. This is the case for BIM, PGD Inf, and PGD L2 in both the targeted and untargeted scenarios. A contrario, but also as expected, black-box attacks (EA and AdvGAN for both scenarios, and SimBA for the untarget scenario) achieved lower label values in the target scenario and significantly lower label values of the dominant category in the untarget scenario. This specific issue (or, better said, its consequences, as reported in Section 5.3 and Section 5.4) is addressed in Section 6.

5.3. Phase 2: Running $atk_{H,C}^{scenario}$

For the relevant adversarial images kept from Table 5, one proceeds with the remaining steps of Scheme (11) with the extraction of the adversarial noise in the R domain, its blowing-up to the H domain, its addition to the clean HR corresponding image, and the classification by the CNN of the resulting tentative adversarial image.
The speed of the noise blowing-up method is directly impacted by the size of the clean high-resolution image (as pointed out in [31]). Therefore, representative HR clean images of large and small sizes are required to assess the additional computational cost (both in absolute and relative terms) involved in the noise blowing-up method. To ensure a fair comparison across the various attacks and CNNs, we selected, for each scenario (targeted or untargeted), HR clean images for which all attacks successfully generated HR adversarial images against the 10 CNNs. This led to the images referred to in Table 6 (the Table indicates their respective sizes $h \times w$).
The performance of the noise blowing-up method is summarized in Table 7 for adversarial images generated by $atk_{H,C}^{targeted}$, and in Table 8 for those generated by $atk_{H,C}^{untargeted}$, for each CNN and attack (except $SimBA_{H,C}^{targeted}$ and $FGSM_{H,C}^{targeted}$, for the reasons given in Section 5.2). The adversarial images in $R$ used for these experiments are those referred to in Table 5.
For each relevant attack and CNN, the measures of a series of outcomes are given in Table 7 and Table 8.
Regarding targeted attacks (the five attacks EA, AdvGAN, BIM, PGD Inf, and PGD L2 are considered), as summarized in Table 7, the row $c_{aft} = c_{bef}$ (and $= c_t$) gives the number of adversarial images for which the noise blowing-up strategy succeeded. The row SR gives the resulting success rate in % (for example, with EA and $C_1$, SR $= 81/89 = 91\%$). The row $c_{aft} \neq c_{bef}$ reports the number of adversarial images for which the noise blowing-up strategy failed. The row $c_{aft} = c_a$ reports the number of images, among those that failed, that are classified back to $c_a$. The row $L^C$ gives the mean value of the loss function (see Section 3.2) for the adversarial images that succeeded, namely those referred to in the row $c_{aft} = c_{bef}$. Relevant sums or average values are given in the last column.
Regarding untargeted attacks (all seven attacks are considered), as summarized in Table 8, the row $c_{aft} \neq c_a$ gives the number of adversarial images for which the noise blowing-up strategy succeeded, and the row SR gives the resulting success rate. The row $c_{aft} = c_{bef}$ reports the number of images, among those that succeeded, that are classified in the same category as the adversarial image obtained in Phase 1. The row $c_{aft} = c_a$ reports the number of images for which the strategy failed. Relevant sums or average values are given in the last column.
To assess the visual imperceptibility of adversarial images compared to clean images, we utilize $L_p$-norms and FID values (see Section 3.2). The average (Avg) and standard deviation (StDev) values of the $L_p$-norms and FID values, across all CNNs for each attack, are provided for both the targeted and untargeted scenarios in Table 9 and Table 10, respectively (see Table A6 and Table A7 for a detailed report of the FID values). Table 9 considers only the successful adversarial images provided in Table 7, namely those identified by $c_{aft} = c_{bef}$, provided their number is statistically relevant (which leads to the exclusion of AdvGAN images). Table 10 considers only the successful adversarial images obtained in Table 8, namely those identified by $c_{aft} \neq c_a$ (all considered attacks lead to a number of adversarial images that is statistically relevant). This is indicated by the pair "$atk$ / # of adversarial images used". Table 9 and Table 10 also provide an assessment of the visual impact of the resizing functions $\rho$ and $\lambda$ on the considered clean images for which adversarial images are obtained by $atk$.
Under these conditions, Table 11 for the target scenario (respectively Table 12 for the untarget scenario) provides the execution times in seconds (averaged over the 10 CNNs for each attack and scenario) for each step of the noise blowing-up method, as described in Scheme (11), for the generation of HR adversarial images from the large $A_1^{10}$ and small $A_2^1$ HR clean images (respectively the large $A_6^9$ and small $A_8^4$ HR clean images).
The Overhead column provides the time of the noise blowing-up method per se, namely computed as the cumulative time of all steps of Scheme (11) except Step 3. The ‰ column displays the relative per mille additional time of the overhead of the noise blowing-up method as compared to the underlying attack a t k performed in Step 3.

5.4. Interpretation of the Results of Phase 2

In the targeted scenario, the noise blowing-up strategy achieved an overall average success rate (over all attacks and CNNs) of 74.7% (see Table 7).
Notably, the strategy performed close to perfection with PGD Inf, achieving an average success rate of 99.2 % (and minimal loss of 0.009 ). The strategy performed also very well with PGD L2, EA, and BIM, with average success rates of 93.2 % , 91.5 % , and 88.6 % , respectively. In contrast, the strategy performed poorly with AdvGAN, achieving a success rate oscillating between 0 % (for 8 CNNs) and 8.7 % , leading to an average success rate of 0.9 % .
The reason for the success of the noise blowing-up strategy for PGD Inf, PGD L2, EA and BIM, and its failure for AdvGAN is essentially due to the behavior, for these attacks, of the average label values of the dominant categories obtained in Table 5, hence is due to a phenomenon occurring before the noise blowing-up process per se.
Indeed, these values are very high for the white-box attacks PGD Inf ( 0.986 ), PGD L2 ( 0.943 ), and BIM ( 0.901 ), and are quite high for EA ( 0.551 ). However, this value is very low for AdvGAN ( 0.255 ).
The adversarial noises, obtained after Phase 1 (in the $R$ domain) by all attacks except AdvGAN, are particularly robust and "survive" the Phase 2 treatment: the noise blowing-up process did not significantly reduce their adversarial properties, and the derived adversarial images obtained after the noise blowing-up process remained in the target category.
The situation differs for AdvGAN: After Phase 1, the target category is only modestly dominating other categories, and one (or more) other categories achieve only slightly weaker label values than the dominating target category. Consequently, the adversarial noise becomes highly susceptible to even minor perturbations, with the effect that these perturbations can easily cause transitions between categories.
In the untargeted scenario, the noise blowing-up strategy achieved an overall average success rate (over all attacks and CNNs) of 63.9% (see Table 8).
The strategy performed perfectly or close to perfection with all white-box attacks, namely PGD Inf (average success rate of 100 % ), BIM ( 99.6 % ), PGD L2 ( 99.5 % ) and FGSM ( 98.5 % ). A contrario, the strategy performed weakly or even poorly for all black-box attacks, namely SimBA ( 30.1 % ), AdvGAN ( 9.9 % ), and EA ( 9.9 % ).
The reason for these differences in the successes of the strategy according to the considered attacks is the same as seen before in the target scenario: the behavior of the average label values of the dominating category obtained in Table 5 (hence, in this case too, before the noise blowing-up process).
Indeed, these values are very high or fairly high for PGD Inf ( 0.987 ), BIM ( 0.958 ), PGD L2 ( 0.966 ), and FGSM ( 0.522 ). However, they are much lower for EA ( 0.359 ), SimBA ( 0.352 ), and AdvGAN ( 0.150 ).
The adversarial noises, obtained after Phase 1 by all white-box attacks, are particularly robust, and those obtained by all black-box attacks are less resilient. In this latter case, the adversarial noise leveraged to create the tentative adversarial image by the noise blowing-up process is much more sensitive to minor perturbations, with similar consequences as those already encountered in the target scenario.
Visual quality of the adversarial images: The values of $L_{0,R}^{norm,adv}$ in Table 9 (resp. Table 10) show that the attacks performed for the target scenario manipulate on average 82% of the pixels of the downsized (hence in $R$) clean image (resp. 94% for the untarget scenario).
Nevertheless, the values of $L_{0,H}^{norm,adv}$ in both tables (hence in the larger $H$ domain, after the noise blowing-up process) are lower, with an overall average of 74% for the targeted scenario (resp. 83% for the untargeted scenario). This trend is consistent across all $L_p$ values ($p = 0, 1, 2, \infty$), with $L_{p,R}^{norm,adv}$ generally higher than the corresponding $L_{p,H}^{norm,adv}$ values for all attacks (the values are closely aligned, though, for $p = \infty$).
Additionally, the $\mathrm{FID}_H^{adv}$ values, comparing clean and adversarial images obtained by the noise blowing-up method, range between 5.3 (achieved by BIM) and 17.6 in the targeted scenario (with average 11.1, see Table 9), and between 3.7 (achieved by EA) and 49.5 in the untargeted scenario (with average 16.5, see Table 10); these values are remarkably low (it is not uncommon to have values in the range 300–500). In other words, the adversarial images maintain a high visual quality and proximity to their clean counterparts.
It is important to highlight that the simple operation of scaling the clean images down and up results in even larger $L_{p,H}^{norm,clean}$ values than $L_{p,H}^{norm,adv}$ for $p = 0, 1, \infty$, for all attacks and scenarios (see Table 9 and Table 10; note that the values for $p = 2$ are too small to assess the phenomenon described above). When one compares $\mathrm{FID}_H^{clean}$ to $\mathrm{FID}_H^{adv}$, the same phenomenon occurs for three out of four targeted attacks (EA is the exception), and for five out of seven untargeted attacks (FGSM and PGD Inf being the exceptions).
Said otherwise, the interpolation techniques usually cause more visual damage than the attacks themselves, at least as measured by these indicators.
Figure 3 provides images representative of this general behavior. The evidence is furthermore supported numerically by the $L_{p,H}^{norm,clean}$, $L_{p,H}^{norm,adv}$ ($p = 0, 1, \infty$) and the $\mathrm{FID}_H^{clean}$, $\mathrm{FID}_H^{adv}$ values deduced from these images.
More precisely, the 1st column of Figure 3 displays the HR clean images $A_1^2$, $A_6^3$, and $A_{10}^{10}$. The 2nd column displays the non-adversarial $\lambda \circ \rho(A_a^{hr}) \in H$ images, as well as the corresponding $L_{p,H}^{norm,clean}$ ($p = 0, 1, \infty$) and $\mathrm{FID}_H^{clean}$ values (in that order).
HR adversarial images $D_{tar}^{hr,C}(A_a^{hr})$ are displayed in the 3rd and 4th columns: for $atk =$ EA performed on $C_4 =$ MobileNet for the targeted scenario in the 3rd column, and for $atk =$ BIM on $C_6 =$ ResNet50 for the untargeted scenario in the 4th column. The corresponding numerical values of $L_{p,H}^{norm,adv}$ ($p = 0, 1, \infty$) and of $\mathrm{FID}_H^{adv}$ (in that order) are provided as well.
Speed of the noise blowing-up method: The outcomes of Table 11 and Table 12 for the overhead of the noise blowing-up method (all steps except Step 3) and its relative cost as compared to the actual attack (performed in Step 3) are twofold.
Firstly, the performance of the noise blowing-up strategy depends on the size of the image: it is substantially faster (between 3.24 and 6.31 times on average) for smaller HR clean images than for larger ones.
Secondly, and this is probably the most important outcome, the noise blowing-up method is exceptionally fast both in absolute and in relative terms, and consequently adds only a minimal overhead, even for large HR clean images.
Indeed, the overhead ranges between 0.100 s and 0.757 s on average over the 10 CNNs (0.100 s achieved in the untargeted scenario for $atk$ = PGD Inf and $A_8^4$; 0.757 s achieved in the targeted scenario for $atk$ = EA and $A_1^{10}$). This should be compared with the extreme timing values of the attacks performed in Step 3, which range between 58.1 s and 848.7 s overall (and between 81.8 s and 848.7 s for the cases related to the 0.100 s and 0.757 s overheads just mentioned).
Looking at the relative weight of the overhead as compared to $atk$ is even more telling: it ranges between 0.28‰ and 12.75‰, and is hence almost negligible.

6. Revisiting the Failed Cases with $\Delta_{\mathcal{C}}$

The summary of Section 5.4 is essentially threefold. Firstly, the noise blowing-up strategy performs very well, with a negligible timing overhead, in the targeted scenario for four of the five relevant attacks (all except AdvGAN), and in the untargeted scenario for all four white-box attacks but not for the three black-box attacks. Secondly, the poor performance of the strategy for AdvGAN (targeted and untargeted scenarios), EA (untargeted scenario), and SimBA (untargeted scenario) is essentially due to requirements that were too weak being imposed on these attacks during Phase 1 (Step 3 of Scheme (11), hence ahead of the noise blowing-up process). Thirdly, although between 74% and 83% of the pixels are modified on average, the adversarial images remain visually very close to their corresponding clean images; surprisingly, the attacks themselves actually tend to reduce the differences introduced by the interpolation functions.
We revisit these failed cases and use the Delta function $\Delta_{\mathcal{C}}$ introduced in Section 3.2 for this purpose. Indeed, we identified the origin of the encountered issues as essentially due to an insufficient gap between the label value of the dominating category and those of its closest competitors, hence due to a very small value of $\Delta_{\mathcal{C}}$ for the considered images and CNNs.
Given $A_a^{\mathrm{hr}}$ and $A_a$ (Step 1), and $c_a$ (Step 2), we study in this subsection how requiring an increase of the value of $\Delta_{\mathcal{C}}$ in Step 3 of Scheme (11) impacts the success rate of the noise blowing-up strategy for the failed cases. Note that putting additional requirements on $\Delta_{\mathcal{C}}$ may lead to fewer adversarial images at the end of Phase 1 as $\Delta_{\mathcal{C}}$ increases.
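The precise definition of $\Delta_{\mathcal{C}}$ is the one given in Section 3.2; for the purpose of this section it can be thought of as the gap between the two largest label values returned by the CNN. A minimal sketch under that assumption is given below (the function name and the usage comment are ours and purely illustrative):

```python
import numpy as np

def delta_gap(probs: np.ndarray) -> float:
    """Gap between the dominant label value and its closest competitor.

    `probs` is the CNN's output vector (e.g., softmax values) for one image.
    """
    top_two = np.sort(probs)[-2:]          # the two largest label values
    return float(top_two[1] - top_two[0])

# Hypothetical use as an extra stop condition in Step 3 of Scheme (11):
# keep attacking until delta_gap(cnn_output(x_adv)) >= chosen_threshold.
```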
We limit this study to $atk$ = EA (untargeted scenario) and $atk$ = AdvGAN (untargeted and targeted scenarios). We regrettably exclude SimBA, since we do not have access to its code.

6.1. Revisiting the Failed Cases in Both Scenarios

The untargeted scenario revisited for atk = EA and atk = AdvGAN. The new treatment of the failed cases proceeds by taking a hybrid approach in Step 3, which splits into two successive sub-steps, Step 3(a) and Step 3(b).
Step 3(a) consists of running $atk_{\mathcal{R},\mathcal{C}}^{\mathrm{untarget}}$ until it succeeds in creating a first adversarial image in $\mathcal{R}$ classified outside the ancestor category $c_a$. The obtained category $c_{\mathrm{bef}} \neq c_a$ is therefore the most promising category outside $c_a$.
In Step 3(b), we switch to the targeted version of the attack and run $atk_{\mathcal{R},\mathcal{C}}^{\mathrm{target}}$ on the adversarial image obtained at the end of Step 3(a) for the targeted scenario $(c_a, c_{\mathrm{bef}})$, with a (more demanding) stop condition defined by a threshold value on $\Delta_{\mathcal{C}}$ set at will.
Remarks: (1) To summarize this hybrid approach, Step 3(a) identifies the most promising category $c_{\mathrm{bef}}$ outside $c_a$ (and does so by "pushing down" the $c_a$ label value until another category $c_{\mathrm{bef}}$ shows up), and Step 3(b) "pushes" the attack further in the direction of $c_{\mathrm{bef}}$ until the label value of this dominant category is sufficiently ahead of all other competitors. (2) Although this hybrid approach mixes the untargeted and the targeted versions of the attack (be it EA or AdvGAN), it nevertheless fits the untargeted attack scenario. Indeed, the category $c_{\mathrm{bef}} \neq c_a$ is not chosen a priori, as would be the case in the targeted scenario, but is obtained alongside the attack as an outcome of $atk_{\mathcal{R},\mathcal{C}}^{\mathrm{untarget}}$.
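A compact sketch of this hybrid Step 3(a)/3(b) control flow follows. All names (`attack_untargeted`, `attack_targeted`, `cnn_predict`) are placeholders for whichever attack implementation and CNN interface is used; the sketch only fixes the control flow described above, not the actual code used in the experiments.

```python
import numpy as np

def hybrid_step3(x_clean_R, c_a, cnn_predict, attack_untargeted, attack_targeted,
                 delta_threshold):
    """Step 3(a): leave c_a with the untargeted attack; Step 3(b): push towards
    the category found, c_bef, until the Delta_C gap exceeds delta_threshold."""

    def gap(probs):                       # distance between top-1 and runner-up
        top_two = np.sort(probs)[-2:]
        return float(top_two[1] - top_two[0])

    # Step 3(a): first adversarial image in R classified outside c_a.
    x_adv = attack_untargeted(x_clean_R, source_category=c_a)
    probs = cnn_predict(x_adv)
    c_bef = int(np.argmax(probs))         # most promising category outside c_a

    # Step 3(b): targeted attack (c_a, c_bef) with the more demanding stop condition.
    while gap(probs) < delta_threshold:
        x_adv = attack_targeted(x_adv, target_category=c_bef)
        probs = cnn_predict(x_adv)
    return x_adv, c_bef
```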
The targeted scenario revisited for atk = AdvGAN. We address the failed cases by requiring, in Step 3 of Scheme (11), that $\tilde{D}_{\mathrm{target}}^{\mathcal{C}}(A_a) \in \mathcal{R}$ is classified in $c_t$ and that $\Delta_{\mathcal{C}}(A_a^{\mathrm{hr}})$ is large enough.

6.2. Outcome of Revisiting the Failed Cases

We plot the evolution of the success rate (y-axis, in %) of the noise blowing-up strategy, performed for the considered attack and scenario, as a function of step-wise increasing threshold values of $\Delta_{\mathcal{C}}$ (x-axis).
Figure 4 for $atk$ = EA (untargeted scenario), Figure 5 for $atk$ = AdvGAN (untargeted scenario), and Figure 6 for $atk$ = AdvGAN (targeted scenario) depict this evolution for one example, namely $C_4$ (MobileNet) (a), on average over the 10 CNNs (b), and per CNN for all considered images (c).
UT (resp. T) in (a) and (b) of Figure 4 and Figure 5 (resp. of Figure 6) recalls the "original" success rate achieved by the noise blowing-up method in creating adversarial images without putting extra conditions on $\Delta_{\mathcal{C}}$ (see Table 8, resp. Table 7). The values at the top of the figures are the numbers of images obtained after Phase 1 as $\Delta_{\mathcal{C}}$ increases.
Detailed reports for each CNN can be found in Appendix C, Figure A2, Figure A3 and Figure A4.
In the untargeted scenario for $atk$ = EA, the approach adopted for the revisited failed cases turns out to be overwhelmingly successful, and this uniformly over the 10 CNNs. The overall number of considered images drops by only 0.6%, namely from 920 to 914 (in the example of $C_4$, this drop is of one image only), while the success rate drastically increases from the original 9.9% to 98.7%. In the example of $C_4$, the success rate increases from 11.7% to 98.9%; a success rate of 100% is even achieved for six out of 10 CNNs, even for moderate values of $\Delta_{\mathcal{C}}$.
In the untargeted scenario for $atk$ = AdvGAN, the approach is also successful, but to a lesser extent, and with variations among the CNNs. The overall number of considered images drops by 43%, namely from 876 to 500 images (in the example of $C_4$, this drop amounts to 22 images, hence almost 27% fewer images), while the success rate increases from the original 9.9% to 73.1% (in the example of $C_4$, the success rate increases from 4.9% to 71.2%). Apart from $C_2$ and $C_5$, where the success rate of the revisited method reaches at most 50% and 25.8% respectively, all CNNs are reasonably well deceived by the method; the success rate even reaches 100% for two of them, and this for moderate values of $\Delta_{\mathcal{C}}$.
In the targeted scenario for $atk$ = AdvGAN, the approach also proves useful, but to a lesser extent than above, and with larger variations among the CNNs. The overall number of considered images drops by 21%, namely from 758 to 594 images (from 88 to 72 images, hence almost 18% fewer images, for $C_4$), while the success rate increases from the original 0.3% to 50.8% (in the example of $C_4$, the success rate increases from 0% to 34.7%). It is worth noting that the method works to perfection, with a success rate reaching 100%, for two CNNs ($C_9$ and $C_{10}$), even for a moderate $\Delta_{\mathcal{C}}$ value.
Table 13 summarizes the outcomes of the numerical experiments when $\Delta_{\mathcal{C}}$ is set to the demanding value 0.55. As a consequence, it is advisable to set (for Phase 1, Step 3) $\tilde{\tau}_{c_{\mathrm{bef}}}$ to 0.78 for $\mathrm{EA}^{\mathrm{untarg}}$, to 0.76 for $\mathrm{AdvGAN}^{\mathrm{targ}}$, and to 0.79 for $\mathrm{AdvGAN}^{\mathrm{untarg}}$ to be on the safe side (these values exceed the maxima referred to in Table 13).
Finally, experiments show that the visual quality of the HR adversarial images obtained by the revised method remains outstanding. We illustrate this statement in Figure 7 on one example, where $\Delta_{\mathcal{C}}$ is set to 0.55 (the highest and most demanding value considered in the present study) and the CNN is $C_4$. In Figure 7, (a) represents the HR clean image $A_2^3$, classified by $C_4$ as belonging to the "acorn" category with label value 0.90; (b) the adversarial image created by the strategy applied to the EA attack in the untargeted scenario (classified as "snail" with label value 0.61); (c) the adversarial image created by the strategy with AdvGAN in the untargeted scenario (classified as "dung beetle" with label value 0.55); and (d) the adversarial image created by the strategy with AdvGAN in the targeted scenario (classified as "rhinoceros beetle" with label value 0.43). The images speak for themselves as far as visual quality is concerned.

7. Comparison of the Lifting Method and of the Noise Blowing-Up Method

This section compares the outcomes of our noise blowing-up strategy with those of the lifting method introduced in [29,30].
We shall see, on three highly challenging examples, that the noise blowing-up strategy leads to a substantial gain in visual quality compared with the lifting method of [29,30] (both strategies add comparable and negligible timing overheads to the underlying attacks they are applied to). The visual quality gain is particularly flagrant when one zooms in on areas that remained visually problematic with the method used in [29,30].

7.1. The Three HR Images, the CNN, the Attack, the Scenario

We present here a case study with three HR images (two of which were already considered in [31]), with $C$ = VGG-16 trained on ImageNet, for the EA-based black-box targeted attack described in Section 4.4.
The three HR pictures are represented in Table 14. They are a comics Spiderman picture ($A_1^{\mathrm{hr}}$, retrieved from the Internet and under a Creative Commons license), an artistic picture graciously provided by the French artist Speedy Graphito ($A_2^{\mathrm{hr}}$, pictured in [56]), and a hippopotamus image ($A_3^{\mathrm{hr}} = A_7^2$) taken from Figure A1. An advantage of including artistic images is that, while a human may have difficulty classifying them in any category, CNNs still do so.

7.2. Implementation and Outcomes

Regarding implementation, we use $(\rho, \lambda)$ = (Lanczos, Lanczos) for both the lifting method of [29,30] and the noise blowing-up method presented here, whenever needed.
In terms of the steps described in Section 3.1, note that both strategies coincide up to and including Step 3, and start to differ from Step 4 on. In particular, the attack process (Step 3) in the $\mathcal{R}$ domain is the same for both strategies. In the present case, one applies the EA-based targeted attack in the $\mathcal{R}$ domain, with the aim of creating a 0.55-strong adversarial image; in other words, $\tilde{\tau}_{c_{\mathrm{bef}}} \geq 0.55$ (with notations consistent with Section 3). This process succeeded for the three examples.
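For reference, the part where the two strategies diverge (Steps 4–8 of Scheme (11) for the noise blowing-up method) can be sketched as follows. The sketch assumes images stored as NumPy arrays in [0, 255] and Pillow ≥ 9.1 for the Lanczos resampling; the shift by 128 used to carry signed noise through an 8-bit image is our implementation choice for the sketch, not necessarily the one used in the paper's code.

```python
import numpy as np
from PIL import Image

def blow_up_noise(clean_hr: np.ndarray, clean_R: np.ndarray, adv_R: np.ndarray) -> np.ndarray:
    """Extract the adversarial noise in R, enlarge it to H with Lanczos,
    add it to the clean HR image, and clip to the valid pixel range."""
    noise_R = adv_R.astype(np.float64) - clean_R.astype(np.float64)       # noise in R
    h, w = clean_hr.shape[:2]
    # Carry the signed noise through an 8-bit image by shifting it into [0, 255]
    # (noise values beyond +/-127 would be clipped, which is rare in practice).
    shifted = np.clip(noise_R + 128.0, 0, 255).astype(np.uint8)
    enlarged = Image.fromarray(shifted).resize((w, h), Image.Resampling.LANCZOS)
    noise_hr = np.asarray(enlarged, dtype=np.float64) - 128.0              # noise in H
    adv_hr = np.clip(clean_hr.astype(np.float64) + noise_hr, 0, 255)
    return adv_hr.astype(np.uint8)   # then check adversity of rho(adv_hr) on the CNN
```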
Figure 8, Figure 9 and Figure 10 provide a visual comparison (both globally and on a zoomed area) of a series of images in the $\mathcal{H}$ domain for $a = 1, 2, 3$, respectively: (a) the clean image $A_a^{\mathrm{hr}}$; (b) the non-adversarial resized image $\lambda \circ \rho(A_a^{\mathrm{hr}})$; (c) the adversarial image in $\mathcal{H}$ obtained by the lifting method of [29,30]; (d) the adversarial image in $\mathcal{H}$ obtained by the noise blowing-up method. The non-adversarial image referred to in (b) remains classified by $C$ in the $c_a$ category, and the adversarial images referred to in (c) and (d) are classified in the $c_t$ category mentioned in Table 14, with $c_t$-label values indicated in the figures.
With notations consistent with Table 9 and Table 10, and with the exponents $\mathrm{adv,lift}$ and $\mathrm{adv,noise}$ indicating that the adversarial images are obtained via the lifting method and via the noise blowing-up method, respectively, Table 15 gives a numerical assessment of the visual quality of the different HR images (b), (c), (d) compared with the clean ones (a) of Figure 8, Figure 9 and Figure 10, as measured by $L_p$ distances and FID values.
Figure 8, Figure 9 and Figure 10 show that, viewed from some distance, both the non-adversarial resized image (b) and the HR adversarial images (c) and (d) seem to have a good visual quality compared with the HR clean image (a). However, the zoomed areas show that details of the HR clean images become blurry in the HR adversarial images obtained by the lifting method (c) and in the non-adversarial resized images (b). Moreover, a human eye cannot distinguish the blurriness that occurs in (b) from the one that shows up in (c): the loss of visual quality looks the same in both cases. By contrast, such a loss of visual quality does not occur (at least not to the same extent) in the HR adversarial images obtained by the noise blowing-up method (d). These observations are also supported numerically by Table 15: $L_{p,\mathcal{H}}^{\mathrm{norm,clean}}$ and $L_{p,\mathcal{H}}^{\mathrm{norm,adv,lift}}$, as well as $\mathrm{FID}_{\mathcal{H}}^{\mathrm{clean}}$ and $\mathrm{FID}_{\mathcal{H}}^{\mathrm{adv,lift}}$, are close to one another, while $L_{p,\mathcal{H}}^{\mathrm{norm,adv,noise}}$ and $\mathrm{FID}_{\mathcal{H}}^{\mathrm{adv,noise}}$ achieve much smaller values than their above counterparts. In particular, we see and measure in these examples that the noise blowing-up method largely compensates for the negative visual impact of the resizing interpolation functions.
In other words, the adversarial images displayed by the noise blowing-up method in (d) are visually very close to the original clean images (a), while the adversarial images displayed by the lifting method in (c) are visually very close to the non-adversarial resized images in (b).
These experiments strongly speak in favor of our noise blowing-up method, despite the fact that interpolation scaling-up functions $\lambda$ result in a loss of high-frequency features in the $\mathcal{H}$ domain (as seen in (b) and (c)). More precisely, our noise blowing-up method essentially avoids (and even corrects, as shown by the behavior of the $L_p$ and FID values) this latter issue, while the lifting method does not.

8. Conclusions

In this extensive study, we presented in detail the noise blowing-up strategy, which creates high-quality, high-resolution images that are adversarial against convolutional neural networks and indistinguishable from the original clean images.
This strategy is designed to apply to any attack (black-box or white-box), to any scenario (targeted or untargeted scenario), to any CNN, and to any clean image.
We performed an extensive experimental study on 10 state-of-the-art and diverse CNNs, with 100 high-resolution clean images, three black-box attacks (EA, AdvGAN, SimBA), and four white-box attacks (FGSM, BIM, PGD Inf, PGD L2), applied in the targeted and the untargeted scenario whenever possible.
This led to the construction of 4110 adversarial images for the targeted scenario and 3996 adversarial images for the untargeted scenario. Overall, the noise blowing-up method achieved an average success rate of 74.7% in the targeted scenario and of 63.9% in the untargeted scenario, the strategy performing perfectly or close to perfection (with a success rate of 100% or close to it) for many attacks.
We then focused on the failed cases. We showed that a minor additional requirement in one step of the strategy led to a substantial increase in the success rate (e.g., from circa 9.9% to 98.7% in the untargeted scenario for the EA attack).
All along, we showed that the additional time required to perform our noise blowing-up strategy is negligible as compared to the actual cost of the underlying attack on which the strategy applies.
Finally, we compared our noise blowing-up method to another generic method, namely the lifting method. We showed that the visual quality and indistinguishability of the adversarial images obtained by our noise blowing-up strategy substantially outperform those of the adversarial images obtained by the lifting method. We also showed that applying our noise blowing-up strategy substantially corrects some visual blurriness artifacts caused natively by interpolation resizing functions.
Clearly, the noise blowing-up strategy, which essentially amounts to adding to the clean high-resolution image one layer of "substantial" adversarial noise blown up from $\mathcal{R}$ to $\mathcal{H}$, lends itself to a series of refinements and variants. For instance, one may instead consider adding to the clean image several "thin" layers of "moderate" blown-up adversarial noise. This would present at least two advantages. Firstly, the process can be parallelized. Secondly, depending on how adding different layers of adversarial noise impacts the overall $\tau_{c_{\mathrm{aft}}}$ value, one could consider relaxing the expectations on the $\tilde{\tau}_{c_{\mathrm{bef}}}$ value for each run of the attack in the $\mathcal{R}$ domain, and still meet the preset $\tau_{c_{\mathrm{aft}}}$ and $\Delta_{\mathcal{C}}$ thresholds by wisely adding up the successive layers of noise. Both advantages may lead to a substantial speed-up of the process, and potentially to an increased visual quality. One could also consider applying the strategy to the flat scenario, where all label values are almost equidistributed, hence where the CNN considers all categories as almost equally likely (even this variant admits variants, e.g., one where one specifies a number $x \geq 2$ of dominating categories for which the attack should create the appropriate flatness).
Another promising direction comes from the observation that, in the present method as well as in the method introduced in [29,30], the considered attacks explore a priori the whole image space. In future work, we intend to explore the possibility of restricting the size of the zones to explore. Provided the kept zones are meaningful (in a sense to be defined), one could in that way design an additional generic method which, combined with the one presented in this paper, could lead, at a lower computational cost, to high-resolution adversarial images of very good quality, especially if one pays attention to high-frequency areas.

Author Contributions

Conceptualization, F.L., A.O.T. and E.M.; methodology, F.L. and A.O.T.; software, A.O.T., E.M., E.A. and T.G.; validation, F.L., A.O.T. and E.M.; formal analysis, F.L., A.O.T. and E.M.; investigation, F.L., A.O.T. and E.M.; data curation, A.O.T., E.M., E.A. and T.G.; writing—original draft preparation, F.L., A.O.T. and E.M.; writing—review and editing, F.L., A.O.T. and E.M.; visualization, A.O.T. and E.M.; supervision, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors express their gratitude to Speedy Graphito and to Bernard Utudjian for the provision of two artistic images used in the feasibility study and for their interest in this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Clean Images

Figure A1. Representation of the 100 ancestor clean images $A_q^p$ used in the experiments. $A_q^p$, pictured in the qth row and pth column ($1 \leq p, q \leq 10$), is randomly chosen from the ImageNet validation set of the ancestor category $c_a^q$ specified on the left of the qth row.
Table A1. Size $h \times w$ (with $h, w \geq 224$) of the 100 clean ancestor images $A_q^p$.
Ancestor images $A_q^p$ and their original size ($h \times w$). Rows: ancestor category $c_a^q$ (index $q$); columns: $p = 1, \dots, 10$.
abacus12448 × 3264960 × 1280262 × 275598 × 300377 × 500501 × 344375 × 500448 × 500500 × 5002448 × 3264
acorn2374 × 500500 × 469375 × 500500 × 375500 × 500500 × 500375 × 500374 × 500461 × 500333 × 500
baseball3398 × 543240 × 2392336 × 3504333 × 500262 × 350310 × 310404 × 500344 × 500375 × 500285 × 380
broom4500 × 333286 × 490360 × 480298 × 298413 × 550366 × 500400 × 400348 × 500346 × 500640 × 480
brown bear5700 × 467903 × 1365333 × 500500 × 333497 × 750336 × 500480 × 599375 × 500334 × 500419 × 640
canoe6500 × 332450 × 600500 × 375375 × 500406 × 613600 × 4001067 × 1600333 × 5001536 × 2048375 × 500
hippopotamus7375 × 5001200 × 1600333 × 500450 × 291525 × 525375 × 500500 × 457424 × 475500 × 449339 × 500
llama8500 × 333618 × 468500 × 447253 × 380500 × 333333 × 500375 × 1024375 × 500290 × 345375 × 500
maraca9375 × 500375 × 500470 × 6271328 × 1989250 × 510375 × 500768 × 104375 × 500375 × 500500 × 375
mountain bike10375 × 500500 × 375375 × 500333 × 500500 × 375300 × 402375 × 500446 × 500375 × 500500 × 333
Table A2. In Steps 1 and 2 of Scheme (11), the Lanczos degrading interpolation function is employed for resizing images to match the input size of the CNNs before they are fed into them. For $1 \leq p \leq 10$, the ancestor category $c_a^q$-label values given by the 10 CNNs for the image $A_q^p$ pictured in Figure A1. A label value in red indicates that the category $c_a^q$ is not the dominant one.
CNNs | p | Abacus | Acorn | Baseball | Broom | Brown Bear | Canoe | Hippopotamus | Llama | Maraca | Mountain Bike
C 1
DenseNet-121
11.0000.9940.9970.9820.9960.9870.9990.9980.4810.941
21.0000.9970.9930.9990.5750.9210.9990.9740.9870.992
30.9990.9541.0000.9990.9990.6750.9930.9961.0000.814
40.9980.9981.0001.0000.9980.5520.6840.9660.7420.255
51.0000.9991.0000.9990.9930.8271.0000.9990.1530.637
61.0000.9980.9460.9971.0000.9750.9910.9610.6840.995
70.9990.9990.9970.9450.9490.5240.9730.9870.9600.835
81.0000.9990.9850.9400.9990.8931.0000.9990.9970.968
91.0000.9960.9671.0000.9980.7101.0001.0000.9910.969
100.9971.0000.9990.9970.9920.7901.0000.9350.9290.907
C 2
DenseNet-169
11.0000.9980.9990.9730.9990.9950.9950.9990.9910.799
20.9991.0000.9980.9910.3430.6830.9990.9990.9910.862
31.0001.0001.0001.0001.0000.9291.0000.9971.0000.922
40.9900.9991.0001.0001.0000.4790.9270.9600.6650.885
51.0001.0001.0000.9990.9980.9411.0000.9930.6810.969
61.0001.0000.9990.9991.0000.9970.9970.9910.8290.952
71.0001.0000.9991.0000.9900.7960.9900.9990.7270.856
81.0001.0000.9980.9851.0000.9440.9981.0001.0000.942
91.0001.0000.8861.0001.0000.9491.0001.0000.9080.941
100.9481.0000.9980.9990.9990.8970.9990.9990.7200.502
C 3
DenseNet-201
11.0000.9990.9941.0000.9940.9900.9990.9990.5650.986
21.0001.0000.9851.0000.9280.9490.9990.9780.9990.995
30.9830.9570.9991.0000.9990.7191.0000.9951.0000.829
40.9370.9871.0001.0000.9990.8460.9191.0000.7320.752
51.0001.0000.9990.9950.9950.7861.0000.9930.3160.936
61.0001.0000.9961.0001.0000.9901.0000.7900.7330.994
71.0001.0001.0000.9980.9970.8170.9970.9840.9590.682
81.0001.0000.9650.9990.9660.9250.9921.0000.9980.992
91.0000.9980.8181.0000.9800.9801.0000.9990.9710.964
101.0001.0000.9950.9980.9790.9760.9640.9900.6040.966
C 4
MobileNet
10.9440.2160.6090.6460.9660.2870.8760.6210.3240.736
20.8670.9840.9660.9570.5060.7510.6130.8380.9720.937
30.9670.9050.9370.9820.9690.7780.9700.9330.9990.939
40.9840.9780.9660.9400.9610.6210.7580.9680.4720.576
50.9150.9840.9170.8290.9710.8360.93110.8730.3830.708
60.9890.9500.9420.9320.9700.8540.9710.8080.5730.863
70.9700.9620.9290.9030.8950.5240.6830.9890.7400.671
80.9700.9850.8340.9060.9420.7320.7230.9860.7880.930
90.9980.9650.7550.9860.9400.7670.8730.9670.9210.855
100.9231.0000.8040.9340.7720.8770.9750.7660.8440.850
C 5
NASNet
Mobile
10.9480.9300.8880.8800.8870.9040.9110.9450.6990.867
20.9720.9170.8820.9010.4260.8970.9410.9540.9760.915
30.8960.9380.8870.9760.9430.7070.9290.7470.8760.945
40.8590.9400.8930.9610.9200.5490.5170.8890.9910.310
50.9490.9500.8790.9560.8960.5770.9140.9770.7200.792
60.9750.9450.9530.9700.9210.6980.9030.9260.3070.859
70.9850.9020.8680.9530.8370.8090.8650.9550.9840.519
80.9690.9550.8790.9220.8810.8700.8000.9690.4980.912
90.9710.8740.5740.9340.9350.6910.9240.9420.9020.938
100.8470.9790.8420.9450.8110.7820.9460.9170.4100.605
C 6
ResNet-50
10.9990.9980.9960.8830.9960.9970.9991.0000.5970.959
20.9800.9990.9990.9990.5290.9901.0000.9980.9970.984
30.9990.9890.9990.9990.9990.8011.0001.0001.0000.990
40.9990.9990.9990.9990.9980.8310.9700.9940.3500.444
50.9990.9990.9990.9940.9860.9500.9970.2780.5430.871
60.9990.9990.9990.9990.9990.9851.0000.9200.7250.685
70.9990.9990.9990.9980.9910.5841.0001.0000.9870.803
81.0000.9990.9260.9900.9970.9810.9701.0000.9990.963
91.0000.9990.8080.9990.9990.9111.0001.0000.9980.991
100.9990.9990.9990.9990.9820.9870.9960.9840.7750.939
C 7
ResNet-101
11.0001.0000.9970.9991.0000.9990.9991.0000.6650.973
21.0001.0000.9721.0000.8360.9841.0000.8680.9950.992
31.0000.8981.0001.0001.0000.9401.0001.0001.0000.778
40.7441.0001.0001.0000.9970.5560.9410.9990.4470.835
51.0001.0001.0000.9990.9680.9390.9990.8940.3250.694
61.0001.0000.9961.0001.0000.9831.0000.8370.7190.996
71.0001.0001.0000.9970.9900.9201.0001.0000.3050.330
81.0001.0000.9990.9970.9780.9930.9431.0000.9970.988
91.0001.0000.9591.0000.9970.9031.0001.0000.9690.983
101.0001.0001.0001.0000.9980.9650.9990.9950.9270.961
C 8
ResNet-152
11.0001.0001.0000.9980.9990.9941.0000.9990.5970.992
20.5781.0000.9991.0000.3560.9791.0000.9990.9970.998
31.0000.9741.0001.0001.0000.6761.0001.0001.0000.919
40.9961.0001.0001.0001.0000.6100.9610.9850.5970.896
51.0001.0001.0001.0001.0000.9091.0000.9190.1610.928
61.0001.0001.0001.0001.0000.9920.9990.8690.9510.964
70.9971.0001.0001.0000.9600.5001.0001.0000.9620.721
81.0001.0000.9991.0000.9950.9861.0001.0000.9980.967
91.0001.0000.9181.0000.9930.9411.0000.9990.7490.999
101.0001.0001.0001.0000.9920.9101.0000.9980.8970.886
C 9
VGG-16
10.9960.4301.0000.9730.9960.9910.9990.8550.5860.832
20.5400.9761.0000.9130.9250.8960.9990.9640.6930.963
30.9980.5311.0000.9971.0000.9101.0001.0001.0000.949
40.9870.9971.0000.9830.9980.8020.2120.9980.3890.485
51.0001.0001.0000.9590.9990.8190.9990.8540.2260.652
61.0001.0000.8690.9921.0000.8901.0000.8330.4500.877
70.9870.9960.9991.0000.9680.6050.9441.0000.3710.617
80.9170.9951.0000.8580.9990.9310.9971.0000.9540.941
91.0000.9890.8610.9890.8990.3011.0001.0000.9010.922
100.9771.0001.0000.9940.9920.9741.0000.9980.8400.564
C 10
VGG-19
11.0000.9591.0000.6691.0000.7401.0000.9890.4660.879
20.9930.9960.9990.9470.9390.7560.9990.9700.7850.818
31.0000.7401.0000.9980.9980.9351.0001.0001.0000.861
40.9960.9521.0000.8900.9970.6840.4680.9920.8280.291
51.0000.9991.0000.7430.9990.4990.9990.2520.5870.794
61.0001.0000.9990.9931.0000.7350.9990.9520.6930.846
70.9990.9980.9990.9990.9030.6560.9881.0000.3700.494
81.0000.9991.0000.9870.9980.7440.9941.0000.9960.795
91.0001.0000.6100.9990.9740.5501.0001.0000.5520.818
100.9981.0001.0000.9990.9950.7911.0000.9940.7580.761
Table A3. In Steps 1 and 2 of Scheme (11), the Nearest degrading interpolation function is employed for resizing images to match the input size of the CNNs before they are fed into them. For $1 \leq p \leq 10$, the ancestor category $c_a^q$-label values given by the 10 CNNs for the image $A_q^p$ pictured in Figure A1. A label value in red indicates that the category $c_a^q$ is not the dominant one.
CNNs | p | Abacus | Acorn | Baseball | Broom | Brown Bear | Canoe | Hippopotamus | Llama | Maraca | Mountain Bike
C 1
DenseNet-121
11.0000.9810.9970.9990.9950.9920.9990.9970.6070.942
21.0000.9970.9891.0000.6700.9090.9980.9870.8830.987
30.9980.8451.0001.0000.9960.8360.9870.9971.0000.891
40.9960.9971.0001.0000.9970.6200.2390.9840.3120.619
51.0000.9991.0000.9980.9550.8111.0001.0000.1450.986
61.0001.0000.9570.9981.0000.9900.9970.9160.6920.999
70.9980.9990.9990.9730.9370.5250.9850.9740.9020.940
81.0000.9990.9930.9930.9950.9131.0001.0000.9990.962
91.0000.9980.9811.0000.9970.8200.9991.0000.9990.992
101.0000.9961.0000.9990.9950.9230.9990.8860.5720.870
C 2
DenseNet-169
10.9990.9781.0000.9990.9990.9970.9970.9990.9520.873
21.0000.9990.9980.9920.5350.7640.9980.9990.9950.861
31.0000.9981.0001.0000.9990.8801.0000.9941.0000.977
40.9900.9961.0001.0001.0000.5490.5530.9810.5830.973
51.0001.0001.0000.9941.0000.9151.0000.9940.5300.997
61.0001.0000.9981.0001.0000.9970.9950.9750.0910.991
71.0001.0001.0001.0000.9540.8270.9961.0000.9640.945
81.0001.0000.9980.9980.9990.9510.9991.0001.0000.975
91.0001.0000.9431.0000.9990.9051.0001.0000.9930.964
100.9701.0000.9991.0000.9970.9520.9990.9980.6080.507
C 3
DenseNet-201
10.9990.9750.9981.0000.9900.9900.9980.9960.5840.986
21.0001.0000.9841.0000.8440.9570.9960.9960.9930.997
30.9870.9501.0001.0000.9980.6690.9990.9941.0000.886
40.8860.9941.0001.0000.9980.8220.8701.0000.5410.947
51.0001.0000.9990.9830.9800.5861.0000.9980.1410.980
61.0001.0000.9951.0001.0000.9940.9990.7240.6930.996
71.0001.0001.0001.0000.9980.8650.9970.9700.9730.917
81.0001.0000.9931.0000.8740.9780.9900.9990.9970.993
91.0000.9990.8771.0000.9840.9951.0000.9990.9870.988
101.0001.0000.9980.9990.9780.9840.9870.9630.3650.983
C 4
MobileNet
10.9450.5890.7700.8290.9660.5600.9330.4800.5560.854
20.9030.9480.9810.9550.6690.9030.7070.7250.9320.967
30.9220.8500.9350.9710.9850.8300.9580.9380.9990.985
40.9340.9710.9770.9720.9240.8200.7110.9750.5860.851
50.8960.9810.9050.6530.9820.7870.9530.8460.4980.910
60.9900.9760.8810.9730.9130.8180.9890.7740.3550.935
70.9820.9790.8380.9230.7910.7500.7480.9780.8840.379
80.9640.9190.8600.7980.9410.8230.8060.9950.9620.928
90.9970.8510.7300.9960.9430.6920.9700.9880.8480.758
100.9680.9940.8350.7800.8860.9480.9920.5680.4160.883
C 5
NASNet
Mobile
10.9400.9450.8850.9480.8920.9250.9140.9450.2880.869
20.9470.9460.9050.8920.4540.9320.8290.9510.9570.902
30.9030.8840.8890.9780.9480.7020.9260.7540.9110.923
40.8440.9290.8950.9610.9100.5130.6560.9280.9930.667
50.9430.9300.8860.9140.9360.5860.9210.9760.7340.972
60.9730.9450.9490.9720.9250.7920.8460.9360.0850.854
70.9830.8970.8420.9440.8720.8690.8930.9410.8850.781
80.9620.9500.8700.9080.8870.8640.8240.9650.9300.904
90.9750.9040.6910.9490.9250.7830.9250.9490.9650.957
100.8610.9570.8510.9550.8090.8600.9410.9290.3970.495
C 6
ResNet-50
11.0000.7950.9980.8410.9990.9980.9990.9990.8010.986
20.4111.0001.0001.0000.9310.9911.0000.9980.8500.995
31.0000.9011.0001.0001.0000.7781.0001.0001.0000.993
41.0000.9931.0001.0000.9990.8970.8810.9990.4240.929
51.0001.0001.0000.9690.9960.9451.0000.3810.2530.995
60.9991.0001.0000.9990.9990.9951.0000.7710.2110.941
71.0001.0001.0000.9880.9920.7431.0001.0000.9690.892
81.0000.9980.9980.9990.9970.9930.9621.0000.9990.987
91.0001.0000.6951.0000.9990.9711.0001.0000.9980.999
101.0000.9991.0000.9990.9590.9940.9700.7230.3850.965
C 7
ResNet-101
11.0000.9820.9990.9950.9990.9991.0001.0000.9840.969
21.0001.0000.9731.0000.9410.9861.0000.9880.9750.997
31.0000.9291.0001.0001.0000.8821.0001.0001.0000.895
40.7780.9991.0001.0000.9930.5250.6800.9990.8940.970
51.0001.0001.0000.9910.9450.8351.0000.9400.5570.990
61.0001.0000.9940.9980.9990.9961.0000.7220.5990.998
71.0001.0001.0001.0000.9110.9611.0001.0000.7720.756
81.0001.0001.0000.9960.9100.9940.9761.0000.9950.990
91.0001.0000.9791.0000.9970.8481.0001.0000.9590.980
101.0000.9931.0001.0000.9270.9750.9960.9170.5370.984
C 8
ResNet-152
11.0000.9981.0000.9960.9920.9870.9990.9990.9540.991
20.7131.0000.9971.0000.5130.9831.0001.0000.9560.998
31.0000.6651.0001.0001.0000.7940.9991.0001.0000.969
40.9980.9971.0001.0001.0000.6260.8720.9720.8850.960
51.0001.0001.0001.0000.9990.8411.0000.9270.2190.993
61.0001.0001.0000.9941.0000.9970.9990.8050.4360.986
71.0001.0001.0001.0000.9960.5570.9951.0000.9590.860
81.0001.0001.0001.0000.9510.9650.9991.0001.0000.991
91.0001.0000.8571.0000.9780.9790.9921.0000.9490.999
101.0001.0001.0001.0000.8610.8711.0000.8720.8180.961
C 9
VGG-16
10.9990.3921.0000.7250.9970.9900.9990.9400.5920.862
20.9520.9971.0000.9180.9220.9181.0000.9680.6830.979
30.9980.6881.0001.0001.0000.8961.0001.0001.0000.952
40.9960.9991.0000.9930.9980.7640.2140.9990.7030.740
51.0000.9991.0000.9130.9970.6781.0000.9180.1750.936
61.0001.0000.6740.9720.9990.8831.0000.8280.4700.952
70.9990.9980.9990.9990.9470.5950.9351.0000.3580.640
80.9870.9951.0000.8440.9990.9520.9991.0000.9790.973
91.0000.9990.8960.9920.9150.3821.0001.0000.9180.895
100.9981.0001.0000.9980.9640.9811.0000.9980.7450.614
C 10
VGG-19
11.0000.9591.0000.5031.0000.5471.0000.9770.5070.909
20.9900.9980.9990.9570.9840.8121.0000.9830.5140.903
31.0000.7671.0000.9961.0000.9461.0001.0001.0000.912
40.9950.9801.0000.9940.9960.6630.2410.9950.8210.270
51.0000.9991.0000.6170.9970.7161.0000.4630.4360.934
61.0001.0000.9980.9750.9990.7790.9990.9320.7130.957
71.0000.9991.0000.9990.8810.5860.9951.0000.3360.422
81.0001.0001.0000.9560.9970.8460.9971.0000.9940.930
91.0001.0000.5750.9910.9880.4411.0001.0000.6600.752
100.9991.0001.0001.0000.9930.8590.9990.9660.7310.862

Appendix B. Choice of (ρ, λ, ρ) Based on a Case Study

Our previous papers [29,30] showed the sensitivity of tentative adversarial images to the choice of the degrading and enlarging functions. In the present Appendix B, we therefore want to find out which degrading and enlarging functions $\rho$ and $\lambda$, and which combination $(\rho, \lambda, \rho)$ used in Scheme (11), provide the best outcome in terms of image quality and adversity. For this purpose, we perform a case study.
Based on the results of [29,30], the study is limited to the consideration of the “Lanczos” (L) and “Nearest” (N) functions, either for the degrading function ρ or for the enlarging function λ . This leads to 8 combinations for ( ρ , λ , ρ ) , namely (with obvious notations) L-L-L, L-L-N, L-N-L, N-L-L, L-N-N, N-L-N, N-N-L and N-N-N.
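As an illustration, the two interpolation functions and the eight $(\rho, \lambda, \rho)$ combinations can be expressed in a few lines with Pillow (assuming Pillow ≥ 9.1 for the `Resampling` enum; the helper name `resize_with` is ours, not part of the paper's code):

```python
from itertools import product
from PIL import Image

RESAMPLERS = {"L": Image.Resampling.LANCZOS, "N": Image.Resampling.NEAREST}

def resize_with(img: Image.Image, size, code: str) -> Image.Image:
    """Degrade or enlarge `img` to `size` = (width, height) with 'L' or 'N'."""
    return img.resize(size, RESAMPLERS[code])

# The eight combinations tested for (rho, lambda, rho):
combinations = ["-".join(c) for c in product("LN", repeat=3)]
# ['L-L-L', 'L-L-N', 'L-N-L', 'L-N-N', 'N-L-L', 'N-L-N', 'N-N-L', 'N-N-N']
```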
For each such combination $(\rho, \lambda, \rho)$, the study is performed on the 100 clean images $A_q^p$ represented in Figure A1, with the EA-based targeted attack against the CNN $C = C_9$ = VGG-16, according to the pairs $(c_a, c_t)$ specified in Table 2.
However, although the images $A_q^p$ are picked from the ImageNet validation set in the categories $c_a^q$, VGG-16 does not systematically classify all of them in the "correct" category $c_a^q$ in the process of Steps 1 and 2 of Scheme (11). Indeed, Table A2 and Table A3 in Appendix A show that VGG-16 classifies "correctly" only 93 clean images $A_q^p$ and classifies 7 "wrongly", whether the degrading function used in Step 1 is $\rho$ = L or $\rho$ = N. Note that, although the numbers of "correctly" and "wrongly" classified images are the same regardless of the $\rho$ function used, the actual images $A_q^p$ concerned are not necessarily the same. The rest of the experiments are therefore performed on the set $S_{\mathrm{clean}}^{\mathrm{VGG\text{-}16}}(\rho)$ of $|S_{\mathrm{clean}}^{\mathrm{VGG\text{-}16}}(\rho)| = 93$ "correctly" classified clean images.
With this setting, the targeted attack aims at creating 0.55-strong adversarial images in the $\mathcal{R}$ domain (meaning that it aims at creating images for which $\tilde{\tau}_t \geq 0.55$).
As explained in Section 4.4, the attack succeeds when a 0.55-strong adversarial image in the $\mathcal{R}$ domain is obtained within 10,000 generations. In the present case study, we also keep track of the unsuccessful attacks. More precisely, for the $|S_{\mathrm{clean}}^{\mathrm{VGG\text{-}16}}(\rho)| = 93$ images considered, we also report the cases where the best tentative adversarial image in the $\mathcal{R}$ domain obtained after 10,000 generations is classified in $c_t$ but with a label value $< 0.55$, is classified in a category $c \notin \{c_a, c_t\}$, or is classified back into $c = c_a$.
Note in passing that, although unsuccessful for the 0.55-targeted scenario, the attack in the $\mathcal{R}$ domain still creates good-enough adversarial images in the first of the cases considered in the previous paragraph, and adversarial images for the untargeted scenario in the second case.
In the present study, Scheme (11) continues with Steps 4 to 8 only for the adversarial images corresponding to the first or the second component of the quadruplet in $\mathcal{R}$, namely those obtained in Step 3 that are classified in $c_t$. Note that we compute the average of the $\tilde{\tau}_c = \tilde{\tau}_t$ values for these images.
At the end of Step 8, we collect the following data: the number of HR tentative adversarial images classified in $c_t$ (hence adversarial for the targeted scenario), classified in $c \notin \{c_a, c_t\}$ (hence adversarial for the untargeted scenario), and classified back in $c_a$ (not adversarial at all). For the images that remain in $c_t$ or in $c \notin \{c_a, c_t\}$, we report their $c_t$-label values $\tau_t$, the value of the loss function, and the values of the two $L_p$ distances (in $\mathcal{R}$ and in $\mathcal{H}$, written as $L_{p,\mathcal{R}}$ and $L_{p,\mathcal{H}}$ to simplify the notations) for $p = 0, 1, 2, \infty$, as specified in Section 3.2.
The outcomes of these experiments are summarized in Table A4 and Table A5 for all ( ρ , λ , ρ ) combinations. Comprehensive reports on these experiments can be accessed via the following link: https://github.com/aliotopal/Noise_blowing_up (accessed on 14 April 2024).
Table A4. The average, maximum, and minimum dominant-category label values before and after the application of the noise blowing-up technique ($\tilde{\tau}_c$, $\tau_c$), along with the $L_p$ norms ($p = 0, 1, 2, \infty$) and the loss $\mathcal{L}$, for each combination of $(\rho, \lambda, \rho)$. In this summary, the calculations include good-enough adversarial images.
Column layout: $(\rho, \lambda, \rho)$ and statistic (Avg/Min/Max); $\tilde{\tau}_c$ (Steps 1–3); $\tau_c$ (Steps 4–8); $L_0$ norms: $L_{0,\mathcal{R}}^{\mathrm{norm,adv}}$, $L_{0,\mathcal{H}}^{\mathrm{norm,adv}}$, $L_{0,\mathcal{H}}^{\mathrm{norm,clean}}$; $L_1$ norms: $L_{1,\mathcal{R}}^{\mathrm{norm,adv}}$, $L_{1,\mathcal{H}}^{\mathrm{norm,adv}}$, $L_{1,\mathcal{R}}^{\mathrm{norm,clean}}$, $L_{1,\mathcal{H}}^{\mathrm{norm,clean}}$; $L_2$ norms: $L_{2,\mathcal{R}}^{\mathrm{norm,adv}}$, $L_{2,\mathcal{H}}^{\mathrm{norm,adv}}$, $L_{2,\mathcal{R}}^{\mathrm{norm,clean}}$; $L_\infty$ norms: $L_{\infty,\mathcal{R}}^{\mathrm{norm,adv}}$, $L_{\infty,\mathcal{H}}^{\mathrm{norm,adv}}$, $L_{\infty,\mathcal{R}}^{\mathrm{norm,clean}}$; loss $\mathcal{L}$.
LLLAvg0.5480.5040.9550.940.990.030.020.021.779.9  × 10 5 4.5  × 10 5 5.3  × 10 5 46.646.21250.043
Min0.2940.2730.9100.880.900.010.010.000.254.7  × 10 5 7.0  × 10 6 5.4  × 10 6 2122180.007
Max0.5540.5430.9740.971.000.040.040.1113.51.6  × 10 4 9.7  × 10 5 2.0  × 10 4 77742000.116
LLNAvg0.5480.4560.9550.950.850.030.020.030.989.9  × 10 5 4.5  × 10 5 7.9  × 10 5 46.646.21960.499
Min0.2940.0730.9100.880.350.010.010.000.234.7  × 10 5 7.0  × 10 6 7.8  × 10 6 21221130.256
Max0.5540.9990.9740.970.970.040.040.123.861.6  × 10 4 9.7  × 10 5 2.3  × 10 4 77742550.554
LNLAvg0.5480.2900.9550.950.850.030.030.031.069.9  × 10 5 4.9  × 10 5 7.9  × 10 5 46.646.61960.349
Min0.2940.0800.9100.890.350.010.010.000.254.7  × 10 5 7.7  × 10 6 7.8  × 10 6 21211130.100
Max0.5540.8620.9740.970.970.040.040.124.221.6  × 10 4 1.0  × 10 4 2.3  × 10 4 77772550.551
NLLAvg0.5480.4230.9560.950.990.030.020.031.251.0  × 10 4 4.6  × 10 5 7.3  × 10 5 46.846.71960.478
Min0.3500.0530.9280.920.900.010.010.000.295.5  × 10 5 7.0  × 10 6 6.9  × 10 6 2525560.129
Max0.5530.9970.9740.971.000.040.040.134.611.5  × 10 4 9.5  × 10 5 2.7  × 10 4 74693200.553
LNNAvg0.5480.5360.9550.950.850.030.030.031.069.9  × 10 5 4.9  × 10 5 7.9  × 10 5 46.646.61960.526
Min0.2940.0760.9100.890.350.010.010.000.254.7  × 10 5 7.7  × 10 6 7.8  × 10 6 21211130.294
Max0.5540.9990.9740.970.970.040.040.124.221.6  × 10 4 1.0  × 10 4 2.3  × 10 4 77772550.554
NLNAvg0.5480.4160.9560.950.990.030.020.031.251.0  × 10 4 4.6  × 10 5 7.3  × 10 5 46.846.71960.138
Min0.3500.1550.9280.920.900.010.010.000.295.5  × 10 5 7.0  × 10 6 6.9  × 10 6 2525560.0002
Max0.5530.5500.9740.971.000.040.040.134.611.5  × 10 4 9.5  × 10 5 2.7  × 10 4 74693200.395
NNLAvg0.5480.5310.9560.950.690.030.030.031.031.0  × 10 4 5.0  × 10 5 9.5  × 10 5 46.846.82240.532
Min0.3500.0570.9280.920.250.010.010.000.305.5  × 10 5 7.6  × 10 6 9.0  × 10 6 25251270.350
Max0.5530.9990.9740.970.920.040.040.143.801.5  × 10 4 1.0  × 10 4 2.9  × 10 4 74742550.553
NNNAvg0.5480.4020.9550.950.690.030.030.031.019.9  × 10 5 4.9  × 10 5 9.5  × 10 5 46.646.62250.262
Min0.3500.0980.9280.920.250.010.010.000.305.5  × 10 5 7.6  × 10 6 9.0  × 10 6 25251279.5  × 10 5
Max0.5560.9790.9740.970.920.040.040.143.801.5  × 10 4 1.0  × 10 4 2.9  × 10 4 74742550.552
Table A5 summarizes the main findings of the comparison study for the different interpolation techniques. The interpolation methods used, Lanczos (L) and Nearest (N), are shown in Column 1. The remaining columns present the following data: Column 2: the number of adversarial images used for testing the noise blowing-up technique; Column 3: the number of images classified in the target category; Column 4: the number of images that remained adversarial in an untargeted sense; Column 5: the number of images classified back in the ancestor category after employing the noise blowing-up technique; and Column 6: the resulting average loss in target-category dominance.
Table A4 indicates that no significant differences are observed between the different combinations of $(\rho, \lambda, \rho)$ with respect to the $L_p$ norms ($p = 0, 1, 2, \infty$). However, Table A5 demonstrates that the L-L-L combination produces the best results in terms of both the loss function $\mathcal{L}$ and the number of adversarial images remaining in the target category $c_t$ when using the noise blowing-up technique to generate high-resolution adversarial images. Therefore, in our experiments (see Scheme (11)), we employ the L-L-L combination for $(\rho, \lambda, \rho)$.
Table A5. Results of a case study conducted on 92 adversarial images obtained with $\mathrm{EA}^{\mathrm{target},C}$ for $C$ = VGG-16 and $\tilde{\tau}_t \geq 0.55$ (with notations consistent with Section 3). The technique involves manipulating the adversarial images by extracting the noise and applying the different combinations $(\rho, \lambda, \rho)$ in Steps 1, 5, and 7 (see Section 3.1).
$(\rho, \lambda, \rho)$ | Number of $\tilde{D}_{\mathrm{targeted}}^{\mathrm{VGG\text{-}16}}(A_a)$ | Number of $D_{\mathrm{targeted}}^{\mathrm{hr,VGG\text{-}16}}(A_a^{\mathrm{hr}})$ with $c = c_t$ | with $c \notin \{c_a, c_t\}$ | with $c = c_a$ | Average Loss $\mathcal{L}$
L-L-L | 92 | 92 | 0 | 0 | 0.0439
L-L-N | 92 | 10 | 25 | 57 | 0.5019
L-N-L | 92 | 59 | 11 | 22 | 0.3501
N-L-L | 92 | 16 | 21 | 55 | 0.4802
L-N-N | 92 | 6 | 23 | 63 | 0.5295
N-L-N | 92 | 89 | 0 | 3 | 0.1384
N-N-L | 92 | 1 | 21 | 70 | 0.5345
N-N-N | 92 | 62 | 10 | 20 | 0.2615

Appendix C. Enhancing the Noise Blowing-Up Method: Exploring its Performance with Varied Strength Levels of Adversarial Images

Figure A2. Evaluating the performance of the noise blowing-up method for EA in the untargeted scenario with increased strength of the adversarial images, for each CNN. The charts display the $\Delta_{\mathcal{C}}$ values at the bottom, along with the corresponding number of images used for the tests at the top.
Figure A3. Evaluating the performance of the noise blowing-up method for AdvGAN in the untargeted scenario with increased strength of the adversarial images, for each CNN. The charts display the $\Delta_{\mathcal{C}}$ values at the bottom, along with the corresponding number of images used for the tests at the top.
Figure A4. Evaluating the performance of the noise blowing-up method for AdvGAN in the targeted scenario with increased strength of the adversarial images, for each CNN. The charts display the $\Delta_{\mathcal{C}}$ values at the bottom, along with the corresponding number of images used for the tests at the top.

Appendix D. Overall FID Results

In Table A6 and Table A7, we present the average FID values (lower is better) for the successfully generated high-resolution adversarial images, per attack and per CNN. Specifically, Table A6 shows the FID values for the images generated by the targeted attacks (EA, BIM, PGD Inf, PGD L2), while Table A7 displays the FID values for the images generated by the untargeted attacks (EA, AdvGAN, SimBA, FGSM, BIM, PGD Inf, PGD L2).
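For completeness, we recall how such FID values are computed: images are mapped to InceptionV3 pooling features [37,43], and the Fréchet distance between Gaussian fits of the clean and adversarial feature sets is taken. The sketch below only shows that last step; the feature extraction is omitted and the function name is ours, not the exact implementation used for the tables.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """FID core formula on two feature sets of shape (n_images, feature_dim):
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * sqrtm(C_a @ C_b))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    covmean = covmean.real                  # drop tiny imaginary numerical noise
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```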
Table A6. $\mathrm{FID}_{\mathcal{H}}^{\mathrm{adv}}$ values assessing the human imperceptibility of the crafted adversarial images for the targeted scenario.
Targeted | EA | BIM | PGD Inf | PGD L2
$C_1$ | 12.586 | 4.605 | 11.873 | 5.439
$C_2$ | 12.043 | 4.654 | 12.146 | 6.527
$C_3$ | 14.038 | 4.944 | 13.574 | 6.917
$C_4$ | 13.873 | 3.438 | 9.345 | 5.037
$C_5$ | 17.848 | 3.065 | 6.437 | 5.006
$C_6$ | 14.572 | 3.671 | 6.617 | 5.434
$C_7$ | 16.277 | 3.867 | 7.494 | 5.970
$C_8$ | 15.926 | 4.133 | 7.801 | 6.137
$C_9$ | 27.902 | 9.705 | 29.225 | 15.025
$C_{10}$ | 31.200 | 10.933 | 32.713 | 15.681
Avg | 17.627 | 5.302 | 13.723 | 7.717
Table A7. $\mathrm{FID}_{\mathcal{H}}^{\mathrm{adv}}$ values assessing the human imperceptibility of the crafted adversarial images for the untargeted scenario.
Untargeted | EA | AdvGAN | SimBA | FGSM | BIM | PGD Inf | PGD L2
$C_1$ | 3.677 | 17.974 | 3.857 | 41.697 | 4.435 | 23.385 | 6.64
$C_2$ | 2.110 | 14.886 | 2.624 | 42.452 | 4.212 | 16.749 | 6.283
$C_3$ | 3.568 | 14.915 | 13.548 | 44.555 | 4.436 | 21.838 | 7.141
$C_4$ | 2.614 | 15.823 | 5.117 | 37.838 | 4.006 | 11.463 | 5.240
$C_5$ | 2.142 | 10.517 | 13.879 | 42.274 | 3.244 | 7.250 | 4.996
$C_6$ | 2.222 | 11.638 | 12.871 | 47.503 | 4.862 | 13.424 | 7.522
$C_7$ | 3.575 | 20.894 | 7.233 | 48.869 | 5.138 | 14.150 | 7.407
$C_8$ | NA | 14.588 | 7.956 | 45.522 | 4.834 | 13.513 | 7.453
$C_9$ | 6.313 | 15.235 | 9.191 | 71.849 | 10.231 | 56.493 | 16.181
$C_{10}$ | 3.754 | 14.172 | 8.221 | 72.283 | 11.515 | 61.131 | 19.001
Avg | 3.754 | 15.064 | 8.450 | 49.484 | 5.691 | 23.940 | 8.786

References

  1. Taye, M.M. Theoretical Understanding of Convolutional Neural Network: Concepts, Architectures, Applications, Future Directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
  2. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. Evolving deep convolutional neural networks for image classification. IEEE Trans. Evol. Comput. 2019, 24, 394–407. [Google Scholar] [CrossRef]
  3. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.J.; Fergus, R. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
  4. Gao, H.; Cheng, B.; Wang, J.; Li, K.; Zhao, J.; Li, D. Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment. IEEE Trans. Ind. Inform. 2018, 14, 4224–4231. [Google Scholar] [CrossRef]
  5. Coşkun, M.; Uçar, A.; Yildirim, Ö.; Demir, Y. Face recognition based on convolutional neural network. In Proceedings of the 2017 International Conference on Modern Electrical and Energy Systems (MEES), Kremenchuk, Ukraine, 15–17 November 2017; pp. 376–379. [Google Scholar]
  6. Yang, S.; Wang, W.; Liu, C.; Deng, W. Scene understanding in deep learning-based end-to-end controllers for autonomous vehicles. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 53–63. [Google Scholar] [CrossRef]
  7. Ghosh, A.; Jana, N.D.; Das, S.; Mallipeddi, R. Two-Phase Evolutionary Convolutional Neural Network Architecture Search for Medical Image Classification. IEEE Access 2023, 11, 115280–115305. [Google Scholar] [CrossRef]
  8. Abdou, M.A. Literature review: Efficient deep neural networks techniques for medical image analysis. Neural Comput. Appl. 2022, 34, 5791–5812. [Google Scholar] [CrossRef]
  9. Chugh, A.; Sharma, V.K.; Kumar, S.; Nayyar, A.; Qureshi, B.; Bhatia, M.K.; Jain, C. Spider monkey crow optimization algorithm with deep learning for sentiment classification and information retrieval. IEEE Access 2021, 9, 24249–24262. [Google Scholar] [CrossRef]
  10. Fahfouh, A.; Riffi, J.; Mahraz, M.A.; Yahyaouy, A.; Tairi, H. PV-DAE: A hybrid model for deceptive opinion spam based on neural network architectures. Expert Syst. Appl. 2020, 157, 113517. [Google Scholar] [CrossRef]
  11. Cao, J.; Lam, K.Y.; Lee, L.H.; Liu, X.; Hui, P.; Su, X. Mobile augmented reality: User interfaces, frameworks, and intelligence. ACM Comput. Surv. 2023, 55, 1–36. [Google Scholar] [CrossRef]
  12. Coskun, H.; Yiğit, T.; Üncü, İ.S. Integration of digital quality control for intelligent manufacturing of industrial ceramic tiles. Ceram. Int. 2022, 48, 34210–34233. [Google Scholar] [CrossRef]
  13. Khan, M.J.; Singh, P.P. Advanced road extraction using CNN-based U-Net model and satellite imagery. E-Prime Electr. Eng. Electron. Energy 2023, 5, 100244. [Google Scholar] [CrossRef]
  14. Saralioglu, E.; Gungor, O. Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network. Geocarto Int. 2022, 37, 657–677. [Google Scholar] [CrossRef]
  15. Zhang, Y.; Liu, Y.; Liu, J.; Miao, J.; Argyriou, A.; Wang, L.; Xu, Z. 360-attack: Distortion-aware perturbations from perspective-views. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 15035–15044. [Google Scholar]
  16. Meng, W.; Xing, X.; Sheth, A.; Weinsberg, U.; Lee, W. Your online interests: Pwned! a pollution attack against targeted advertising. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, 3–7 November 2014; pp. 129–140. [Google Scholar]
  17. Hardt, M.; Nath, S. Privacy-aware personalization for mobile advertising. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, Raleigh, NC, USA, 16–18 October 2012; pp. 662–673. [Google Scholar]
  18. Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; Roli, F. Evasion attacks against machine learning at test time. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Prague, Czech Republic, 23–27 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 387–402. [Google Scholar]
  19. Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57. [Google Scholar]
  20. Wang, Y.; Liu, J.; Chang, X.; Misic, J.V.; Misic, V.B. IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks. arXiv 2021, arXiv:2102.02128. [Google Scholar] [CrossRef]
  21. Mohammadian, H.; Ghorbani, A.A.; Lashkari, A.H. A gradient-based approach for adversarial attack on deep learning-based network intrusion detection systems. Appl. Soft Comput. 2023, 137, 110173. [Google Scholar] [CrossRef]
  22. Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates, 2–7 April 2017; pp. 506–519. [Google Scholar]
  23. Andriushchenko, M.; Croce, F.; Flammarion, N.; Hein, M. Square attack: A query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 484–501. [Google Scholar]
  24. Chitic, R.; Bernard, N.; Leprévost, F. A proof of concept to deceive humans and machines at image classification with evolutionary algorithms. In Proceedings of the Intelligent Information and Database Systems, 12th Asian Conference, ACIIDS 2020, Phuket, Thailand, 23–26 March 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 467–480. [Google Scholar]
  25. Chitic, R.; Leprévost, F.; Bernard, N. Evolutionary algorithms deceive humans and machines at image classification: An extended proof of concept on two scenarios. J. Inf. Telecommun. 2020, 5, 121–143. [Google Scholar] [CrossRef]
  26. Al-Ahmadi, S.; Al-Eyead, S. GAN-based Approach to Crafting Adversarial Malware Examples against a Heterogeneous Ensemble Classifier. In Proceedings of the 19th International Conference on Security and Cryptography—Volume 1: SECRYPT, INSTICC, Lisbon, Portugal, 11–13 July 2022; SciTePress: Setúbal, Portugal, 2022; pp. 451–460. [Google Scholar] [CrossRef]
  27. Topal, A.O.; Chitic, R.; Leprévost, F. One evolutionary algorithm deceives humans and ten convolutional neural networks trained on ImageNet at image recognition. Appl. Soft Comput. 2023, 143, 110397. [Google Scholar] [CrossRef]
  28. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. The ImageNet Image Database. 2009. Available online: http://image-net.org (accessed on 14 April 2024).
  29. Leprévost, F.; Topal, A.O.; Avdusinovic, E.; Chitic, R. A Strategy Creating High-Resolution Adversarial Images against Convolutional Neural Networks and a Feasibility Study on 10 CNNs. J. Inf. Telecommun. 2022, 7, 89–119. [Google Scholar] [CrossRef]
  30. Leprévost, F.; Topal, A.O.; Avdusinovic, E.; Chitic, R. Strategy and Feasibility Study for the Construction of High Resolution Images Adversarial against Convolutional Neural Networks. In Proceedings of the Intelligent Information and Database Systems, 13th Asian Conference, ACIIDS 2022, Ho-Chi-Minh-City, Vietnam, 28–30 November 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 467–480. [Google Scholar]
  31. Leprévost, F.; Topal, A.O.; Mancellari, E. Creating High-Resolution Adversarial Images Against Convolutional Neural Networks with the Noise Blowing-Up Method. In Intelligent Information and Database Systems; Nguyen, N.T., Boonsang, S., Fujita, H., Hnatkowska, B., Hong, T.P., Pasupa, K., Selamat, A., Eds.; Springer: Singapore, 2023; pp. 121–134. [Google Scholar]
  32. Van Rossum, G.; Drake, F.L. Python 3 Reference Manual; CreateSpace: Scotts Valley, CA, USA, 2009. [Google Scholar]
  33. Oliphant, T.E. Guide to NumPy; Trelgol, 2006; Available online: https://web.mit.edu/dvp/Public/numpybook.pdf (accessed on 14 April 2024).
  34. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: https://www.tensorflow.org (accessed on 14 April 2024).
  35. Keras. 2015. Available online: https://keras.io (accessed on 14 April 2024).
  36. Van der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T.; The Scikit-Image Contributors. scikit-image: Image processing in Python. PeerJ 2014, 2, e453. [Google Scholar] [CrossRef] [PubMed]
  37. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
  38. Luo, C.; Lin, Q.; Xie, W.; Wu, B.; Xie, J.; Shen, L. Frequency-driven imperceptible adversarial attack on semantic similarity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 15315–15324. [Google Scholar]
  39. Chen, F.; Wang, J.; Liu, H.; Kong, W.; Zhao, Z.; Ma, L.; Liao, H.; Zhang, D. Frequency constraint-based adversarial attack on deep neural networks for medical image classification. Comput. Biol. Med. 2023, 164, 107248. [Google Scholar] [CrossRef]
  40. Liu, J.; Lu, B.; Xiong, M.; Zhang, T.; Xiong, H. Adversarial Attack with Raindrops. arXiv 2023, arXiv:2302.14267. [Google Scholar]
  41. Zhao, Z.; Liu, Z.; Larson, M. Towards large yet imperceptible adversarial image perturbations with perceptual color distance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1039–1048. [Google Scholar]
  42. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II 14. Springer: Cham, Switzerland, 2016; pp. 694–711. [Google Scholar]
  43. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. arXiv 2015, arXiv:1512.00567. [Google Scholar]
  44. Patel, V.; Mistree, K. A review on different image interpolation techniques for image enhancement. Int. J. Emerg. Technol. Adv. Eng. 2013, 3, 129–133. [Google Scholar]
  45. Agrafiotis, D. Chapter 9—Video Error Concealment. In Academic Press Library in signal Processing; Theodoridis, S., Chellappa, R., Eds.; Elsevier: Amsterdam, The Netherlands, 2014; Volume 5, pp. 295–321. [Google Scholar] [CrossRef]
  46. Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160. [Google Scholar] [CrossRef]
  47. Duchon, C.E. Lanczos filtering in one and two dimensions. J. Appl. Meteorol. Climatol. 1979, 18, 1016–1022. [Google Scholar] [CrossRef]
  48. Parsania, P.S.; Virparia, P.V. A comparative analysis of image interpolation algorithms. Int. J. Adv. Res. Comput. Commun. Eng. 2016, 5, 29–34. [Google Scholar] [CrossRef]
  49. Chitic, R.; Topal, A.O.; Leprévost, F. ShuffleDetect: Detecting Adversarial Images against Convolutional Neural Networks. Appl. Sci. 2023, 13, 4068. [Google Scholar] [CrossRef]
  50. Nicolae, M.I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.2.0. arXiv 2018, arXiv:1807.01069. [Google Scholar]
  51. Xiao, C.; Li, B.; Zhu, J.Y.; He, W.; Liu, M.; Song, D. Generating Adversarial Examples with Adversarial Networks. arXiv 2019, arXiv:1801.02610. [Google Scholar]
  52. Guo, C.; Gardner, J.; You, Y.; Wilson, A.G.; Weinberger, K. Simple black-box adversarial attacks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 2484–2493. [Google Scholar]
53. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572. [Google Scholar]
  54. Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. arXiv 2016, arXiv:1607.02533. [Google Scholar]
  55. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083. [Google Scholar]
  56. SpeedyGraphito. Mes 400 Coups; Panoramart: France, 2020. [Google Scholar]
Figure 1. Standard attack process, where c_a is the CNN's leading category of the clean resized image and c_ca is the CNN's leading category of the adversarial image.
Figure 2. Direct attack process generating an adversarial image with the same size as the original clean image.
Figure 3. Examples of images for which the interpolation techniques cause more visual damage than the attacks themselves. Clean HR images A_a^hr in the 1st column; corresponding non-adversarial HR resized images λρ(A_a^hr) in the 2nd column, with the values of L_p,H^norm,clean for p = 0, 1, ∞ and FID_H^clean underneath (in that order); adversarial HR images in the 3rd column (atk = EA, C = C_4, targeted scenario) and in the 4th column (atk = BIM, C = C_6, untargeted scenario), with L_p,H^norm,adv for p = 0, 1, ∞ and FID_H^adv underneath (in that order). Zoom in for a clearer view.
Figure 4. Performance of the noise blowing-up method for EA in the untargeted scenario with increased strength of the adversarial images: (a) for C_4 specifically, (b) averaged across the 10 CNNs, and (c) overall report for all CNNs. In (a,b), the Δ_C values are displayed at the bottom and the resulting number of used images at the top.
Figure 5. Performance of the noise blowing-up method for AdvGAN in the untargeted scenario with increased strength of the adversarial images: (a) for C_4 specifically, (b) averaged across the 10 CNNs, and (c) overall report for all CNNs. In (a,b), the Δ_C values are displayed at the bottom and the resulting number of used images at the top.
Figure 6. Performance of the noise blowing-up method for AdvGAN in the targeted scenario with increased strength of the adversarial images: (a) for C_4 specifically, (b) averaged across the 10 CNNs, and (c) overall report for all CNNs. In (a,b), the Δ_C values are displayed at the bottom and the resulting number of used images at the top.
Figure 7. Sample of HR adversarial images generated by the noise blowing-up strategy for the EA and AdvGAN attacks in the untargeted scenario, and for the AdvGAN attack in the targeted scenario, against C_4 = MobileNet, with Δ_C set to 0.55 in the R domain. The classification by C_4 (dominant category and label value) is displayed at the bottom. (a) Clean image, acorn: 0.90. (b) EA^untarg, snail: 0.61. (c) AdvGAN^untarg, dung_beetle: 0.55. (d) AdvGAN^targ, rhinoceros_beetle: 0.43.
Figure 8. Visual comparison in the H domain of (a) the clean image A_1^hr, (b) its non-adversarial resized version, and the adversarial images obtained with EA^target,C for C = VGG-16 (c) by the lifting method of [29,30] and (d) by the noise blowing-up method. Both non-adversarial images are classified as "comic book", (a) with label value 0.49 and (b) with label value 0.45. Both HR adversarial images are classified as "altar", (c) with label value 0.52 and (d) with label value 0.41.
Figure 9. Visual comparison in the H domain of (a) the clean image A_2^hr, (b) its non-adversarial resized version, and the adversarial images obtained with EA^target,C for C = VGG-16 (c) by the lifting method of [29,30] and (d) by the noise blowing-up method. Both non-adversarial images are classified as "Coffee Mug", (a) with label value 0.08 and (b) with label value 0.08. Both HR adversarial images are classified as "Hamper", (c) with label value 0.51 and (d) with label value 0.53.
Figure 10. Visual comparison in the H domain of (a) the clean image A_3^hr, (b) its non-adversarial resized version, and the adversarial images obtained with EA^target,C for C = VGG-16 (c) by the lifting method of [29,30] and (d) by the noise blowing-up method. Both non-adversarial images are classified as "hippopotamus", (a) with label value 0.99 and (b) with label value 0.99. Both HR adversarial images are classified as "trifle", (c) with label value 0.51 and (d) with label value 0.50.
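For intuition only, the sketch below illustrates the general principle suggested by the figures above: adversarial noise obtained at the CNN's input resolution (the R domain) is enlarged by interpolation to the size of the clean HR image (the H domain) and added to it. This is a minimal illustrative sketch, not the paper's exact Scheme (11); the function name, the Lanczos filter, and the clipping to [0, 255] are assumptions.

```python
# Illustrative sketch of enlarging low-resolution adversarial noise to high resolution.
# This is NOT the paper's exact Scheme (11); it only conveys the general principle.
import numpy as np
from PIL import Image

def blow_up_noise(clean_hr: Image.Image, clean_lr: np.ndarray, adv_lr: np.ndarray) -> Image.Image:
    """clean_hr: HR PIL image; clean_lr/adv_lr: float arrays at the CNN input size."""
    h, w = clean_hr.height, clean_hr.width
    noise_lr = adv_lr - clean_lr                                   # adversarial noise in the R domain
    noise_hr = np.stack(
        [np.asarray(Image.fromarray(c).resize((w, h), Image.LANCZOS))
         for c in np.moveaxis(noise_lr.astype(np.float32), -1, 0)],
        axis=-1)                                                   # blow the noise up channel-wise
    adv_hr = np.asarray(clean_hr, dtype=np.float32) + noise_hr     # add the enlarged noise in the H domain
    return Image.fromarray(np.clip(adv_hr, 0, 255).astype(np.uint8))
```

Whether the resulting HR image still deceives the attacked CNN is precisely what the success rates reported in Tables 7 and 8 measure.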
Table 1. The 10 CNNs trained on ImageNet, their number of parameters (in millions), and their Top-1 and Top-5 accuracy.
C_k    Name of the CNN   Parameters   Top-1 Accuracy   Top-5 Accuracy
C_1    DenseNet121       8M           0.750            0.923
C_2    DenseNet169       14M          0.762            0.932
C_3    DenseNet201       20M          0.773            0.936
C_4    MobileNet         4M           0.704            0.895
C_5    NASNetMobile      4M           0.744            0.919
C_6    ResNet50          26M          0.749            0.921
C_7    ResNet101         45M          0.764            0.928
C_8    ResNet152         60M          0.766            0.931
C_9    VGG16             138M         0.713            0.901
C_10   VGG19             144M         0.713            0.900
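The CNNs of Table 1 are available as ImageNet-pretrained models in Keras/TensorFlow [34,35]. The sketch below shows one plausible way to load such a model and obtain the leading category and label value of a resized clean image; the choice of MobileNet (C_4), the 224 × 224 input size, and the file name clean.jpg are illustrative assumptions, not taken from the paper.

```python
# Sketch: load an ImageNet-pretrained CNN from Table 1 and classify a resized clean image.
# The choice of MobileNet, the 224x224 input size, and "clean.jpg" are illustrative assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.applications.MobileNet(weights="imagenet")        # C_4 in Table 1

hr_image = Image.open("clean.jpg").convert("RGB")                  # clean HR image
resized  = hr_image.resize((224, 224), Image.LANCZOS)              # degrading function rho

x = np.asarray(resized, dtype=np.float32)[np.newaxis, ...]
x = tf.keras.applications.mobilenet.preprocess_input(x)

probs = model.predict(x, verbose=0)
_, category, label_value = tf.keras.applications.mobilenet.decode_predictions(probs, top=1)[0][0]
print(f"leading category: {category} (label value {label_value:.3f})")
```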
Table 2. For 1 ≤ p ≤ 10, the second column lists the ancestor category c_{a_p} and its ordinal 1 ≤ a_p ≤ 1000 among the categories of ImageNet. Mutatis mutandis in the third column with the target category c_{t_p} and ordinal t_p.
p     (c_{a_p}, a_p)           (c_{t_p}, t_p)
1     (abacus, 398)            (bannister, 421)
2     (acorn, 988)             (rhinoceros beetle, 306)
3     (baseball, 429)          (ladle, 618)
4     (broom, 462)             (dingo, 273)
5     (brown bear, 294)        (pirate, 724)
6     (canoe, 472)             (saluki, 176)
7     (hippopotamus, 344)      (trifle, 927)
8     (llama, 355)             (agama, 42)
9     (maraca, 641)            (conch, 112)
10    (mountain bike, 671)     (strainer, 828)
Table 3. For each CNN C_k (1st row), number of clean HR images A_q^p classified by C_k in the "correct" category c_{a_q}, either with the degrading function ρ = "Lanczos" (2nd row) or with ρ = "Nearest" (3rd row).
C                  C_1   C_2   C_3   C_4   C_5   C_6   C_7   C_8   C_9   C_10
#S_clean^C (L)     97    99    98    97    95    98    95    95    93    94
#S_clean^C (N)     99    97    97    95    94    97    95    94    93    94
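Table 3 counts, for each CNN, the clean HR images whose reduced versions keep their ancestor category under ρ = Lanczos or ρ = Nearest. The sketch below shows how such counts could be obtained, assuming Pillow's resampling filters as a stand-in for ρ and a 224 × 224 CNN input size; `images` and `labels` are hypothetical placeholders.

```python
# Sketch: count HR images still classified in their ancestor category after resizing,
# once with Lanczos and once with Nearest interpolation (cf. Table 3).
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.applications.MobileNet(weights="imagenet")
preprocess = tf.keras.applications.mobilenet.preprocess_input

def leading_class(pil_img, resample):
    """Return the ImageNet ordinal of the CNN's leading category for a resized image."""
    x = np.asarray(pil_img.resize((224, 224), resample), dtype=np.float32)[None, ...]
    return int(np.argmax(model.predict(preprocess(x), verbose=0)))

def count_correct(images, labels, resample):
    """images: list of PIL images; labels: list of ImageNet ordinals (0-999)."""
    return sum(leading_class(img, resample) == lab for img, lab in zip(images, labels))

# Hypothetical usage:
# n_lanczos = count_correct(images, labels, Image.LANCZOS)
# n_nearest = count_correct(images, labels, Image.NEAREST)
```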
Table 4. List of attacks considered, their white-box or black-box nature, and the scenarios for which they are run in the present study.
Attacks    White Box   Black Box   Targeted   Untargeted
EA                     x           x          x
AdvGAN                 x           x          x
SimBA                  x                      x
FGSM       x                                  x
BIM        x                       x          x
PGD Inf    x                       x          x
PGD L2     x                       x          x
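Several of the white-box attacks listed in Table 4 (FGSM, BIM, PGD) are implemented, for instance, in the Adversarial Robustness Toolbox [50]. The sketch below shows one plausible way to instantiate them against a Keras CNN; the ε values, clip range, and omission of model-specific preprocessing are assumptions, not the configuration used in the present study.

```python
# Sketch: instantiating FGSM, BIM, and PGD from the Adversarial Robustness Toolbox [50].
# Epsilon values, clip range, and the absence of model-specific preprocessing are assumptions.
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import (FastGradientMethod, BasicIterativeMethod,
                                 ProjectedGradientDescent)

model = tf.keras.applications.MobileNet(weights="imagenet")
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=1000,
    input_shape=(224, 224, 3),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    clip_values=(0.0, 255.0),   # NOTE: the CNN's own preprocessing is omitted for brevity
)

fgsm    = FastGradientMethod(estimator=classifier, eps=8.0)                                # untargeted
bim     = BasicIterativeMethod(estimator=classifier, eps=8.0, eps_step=2.0, max_iter=10)
pgd_inf = ProjectedGradientDescent(estimator=classifier, norm=np.inf, eps=8.0,
                                   eps_step=2.0, max_iter=10)
pgd_l2  = ProjectedGradientDescent(estimator=classifier, norm=2, eps=3.0,
                                   eps_step=0.5, max_iter=10, targeted=True)

# x_clean: array of shape (n, 224, 224, 3); y_target: one-hot targets for the targeted case
# x_adv_untargeted = fgsm.generate(x=x_clean)
# x_adv_targeted   = pgd_l2.generate(x=x_clean, y=y_target)
```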
Table 5. Number of successfully generated adversarial images in the R domain.
Attacks    EA             AdvGAN         SimBA          FGSM           BIM            PGD Inf        PGD L2
           untarg  targ   untarg  targ   untarg  targ   untarg  targ   untarg  targ   untarg  targ   untarg  targ
C_1        95      89     96      81     91      0      78      0      96      68     97      96     97      82
C_2        95      92     94      85     94      0      63      2      98      78     99      98     99      90
C_3        94      90     93      82     92      0      67      1      96      71     98      98     97      85
C_4        94      90     81      88     91      0      64      0      96      84     97      97     97      94
C_5        89      77     89      74     85      0      50      0      88      56     93      83     94      71
C_6        95      94     92      79     90      0      84      1      96      96     97      98     96      98
C_7        92      87     86      78     92      0      80      1      93      93     95      95     93      93
C_8        86      93     88      74     78      0      75      0      94      94     95      95     89      94
C_9        90      92     78      58     87      0      86      3      92      76     92      93     91      83
C_10       90      94     79      59     87      1      87      1      92      77     93      94     92      84
max        0.546   0.555  0.762   0.481  0.508   0.041  0.993   0.457  0.999   0.999  0.999   0.999  0.999   0.999
min        0.007   0.550  0.003   0.031  0.038   0.041  0.050   0.257  0.258   0.289  0.524   0.721  0.277   0.221
avg        0.359   0.551  0.150   0.255  0.352   0.041  0.522   0.340  0.958   0.901  0.987   0.986  0.966   0.943
Table 6. Images employed for the assessment of the speed/overhead of the noise blowing-up method for each considered scenario and attack.
                 Targeted                           Untargeted
Attacks          EA, FGSM, BIM, PGD Inf, PGD L2     FGSM, BIM, PGD Inf, PGD L2
Images (h × w)   A_1^10 (2448 × 3264)               A_6^9 (1536 × 2048)
                 A_2^1 (374 × 500)                  A_8^4 (253 × 380)
Table 7. Performance of the noise blowing-up strategy on adversarial images generated with attacks for the targeted scenario (c_a, c_bef) (with c_bef = c_t) against 10 CNNs. The symbol ↑ (resp. ↓) indicates that the higher (resp. the lower) the value, the better.
Targeted Attacks                C_1    C_2    C_3    C_4    C_5    C_6      C_7      C_8      C_9      C_10     Total/Avg
EA        c_aft = c_bef         81     74     80     76     61     91       86       89       92       94       824
          c_aft ≠ c_bef         8      18     10     14     16     3        1        4        0        0
          c_aft = c_a           8      18     10     14     3      3        1        4        0        0
          ↑ SR (%)              91.0   80.4   88.9   84.4   79.2   96.8     98.9     95.7     100      100      91.5
          ↓ L_C                 0.202  0.213  0.189  0.249  0.243  0.139    0.129    0.120    0.044    0.036    0.156
AdvGAN    c_aft = c_bef         0      0      0      0      0      0        0        0        0        2        2
          c_aft ≠ c_bef         4      4      2      5      11     8        4        3        24       21
          c_aft = c_a           76     81     80     83     63     72       74       71       34       36
          ↑ SR (%)              0      0      0      0      0      0        0        0        0        8.7      0.9
          ↓ L_C                 0.113  0.218  0.211  0.160  0.162  0.176    0.221    0.186    0.034    0.040    0.152
BIM       c_aft = c_bef         50     64     53     69     47     96       92       92       75       72       710
          c_aft ≠ c_bef         18     14     18     15     9      0        1        2        1        5
          c_aft = c_a           17     13     18     15     6      0        1        2        1        3
          ↑ SR (%)              73.5   83.1   74.6   82.1   83.9   100      98.9     97.9     98.6     93.5     88.6
          ↓ L_C                 0.100  0.165  0.167  0.117  0.119  0.007    0.024    0.023    0.025    0.025    0.077
PGD Inf   c_aft = c_bef         96     98     97     96     78     98       95       95       93       94       940
          c_aft ≠ c_bef         0      0      1      1      5      0        0        0        0        0
          c_aft = c_a           0      0      1      1      4      0        0        0        0        0
          ↑ SR (%)              100    100    98.9   98.9   93.9   100      100      100      100      100      99.2
          ↓ L_C                 0.013  0.010  0.011  0.017  0.046  3×10^-6  7×10^-5  6×10^-6  2×10^-5  1×10^-4  0.009
PGD L2    c_aft = c_bef         69     76     75     89     64     96       92       93       82       81       817
          c_aft ≠ c_bef         13     14     10     5      7      2        1        1        1        3
          c_aft = c_a           13     14     10     5      4      2        1        1        1        2
          ↑ SR (%)              84.1   84.4   88.2   94.7   90.1   97.9     98.9     98.9     98.8     96.4     93.2
          ↓ L_C                 0.013  0.126  0.114  0.081  0.070  0.005    0.005    0.004    0.015    0.020    0.058
↑ Average SR (%)                69.7   69.6   70.1   72.0   69.4   78.9     79.3     78.5     79.5     79.7     74.7
↓ Average L_C                   0.114  0.146  0.138  0.125  0.128  0.065    0.076    0.067    0.024    0.024    0.091
Table 8. Performance of the noise blowing-up technique on adversarial images generated with untargeted attacks against 10 CNNs. The symbol ↑ (resp. ↓) indicates that the higher (resp. the lower) the value, the better.
Untargeted Attacks              C_1    C_2    C_3    C_4    C_5    C_6     C_7     C_8     C_9     C_10    Total/Avg
EA        c_aft ≠ c_a           2      3      1      11     2      7       5       0       31      28      90
          c_aft = c_bef         2      3      1      11     2      7       5       0       31      28
          c_aft = c_a           93     92     93     83     87     88      87      86      59      62
          ↑ SR (%)              2.2    3.2    1.1    11.7   2.2    7.4     5.4     0.0     34.4    31.1    9.9
AdvGAN    c_aft ≠ c_a           4      8      4      4      8      7       2       6       18      22      83
          c_aft = c_bef         4      8      4      4      8      7       2       6       18      22
          c_aft = c_a           92     86     89     77     81     85      84      82      60      57
          ↑ SR (%)              4.2    8.5    4.3    4.9    9.0    7.6     2.3     6.8     23.1    27.8    9.9
SimBA     c_aft ≠ c_a           25     23     22     24     18     25      32      30      31      36      266
          c_aft = c_bef         25     23     22     24     18     25      32      30      31      36
          c_aft = c_a           66     71     70     67     67     65      60      48      56      51
          ↑ SR (%)              27.5   24.5   23.9   26.4   21.2   27.8    34.8    38.5    35.6    41.4    30.1
FGSM      c_aft ≠ c_a           77     60     64     63     49     84      79      75      86      87      724
          c_aft = c_bef         77     59     62     59     49     83      74      75      86      86
          c_aft = c_a           1      3      3      1      1      0       1       0       0       0
          ↑ SR (%)              98.7   95.2   95.5   98.4   98.0   100.0   98.8    100.0   100.0   100.0   98.5
BIM       c_aft ≠ c_a           95     97     96     95     88     96      93      93      92      92      937
          c_aft = c_bef         95     96     96     92     87     96      93      93      92      92
          c_aft = c_a           1      1      0      1      0      0       0       1       0       0
          ↑ SR (%)              99.0   99.0   100.0  99.0   100.0  100.0   100.0   98.9    100.0   100.0   99.6
PGD Inf   c_aft ≠ c_a           97     99     98     97     93     97      95      95      92      93      956
          c_aft = c_bef         97     99     98     97     92     97      95      95      92      93
          c_aft = c_a           0      0      0      0      0      0       0       0       0       0
          ↑ SR (%)              100    100    100    100    100    100     100     100     100     100     100.0
PGD L2    c_aft ≠ c_a           96     98     97     96     92     96      93      89      91      92      940
          c_aft = c_bef         96     98     97     96     92     96      93      89      90      92
          c_aft = c_a           1      1      0      1      2      0       0       0       0       0
          ↑ SR (%)              99.0   99.0   100.0  99.0   97.9   100.0   100.0   100.0   100.0   100.0   99.5
↑ Average SR (%)                61.5   61.3   60.7   62.8   61.2   63.3    63.0    63.5    70.5    71.5    63.9
Table 9. Visual quality as assessed by L_p distances and FID values for the targeted scenario.
Targeted Attack / # of Adversarial Images Used
                                          EA/824         BIM/710        PGD Inf/940    PGD L2/817     Overall/3291
                                          Avg    StDev   Avg    StDev   Avg    StDev   Avg    StDev   Avg    StDev
L_0   L_0,R^norm,adv                      0.945  0.014   0.979  0.013   0.971  0.010   0.995  0.009   0.829  0.009
      L_0,H^norm,adv                      0.939  0.015   0.833  0.036   0.858  0.043   0.645  0.023   0.744  0.024
      L_0,H^norm,clean                    0.998  0.009   0.997  0.012   0.996  0.014   0.996  0.014   0.996  0.010
L_1   L_1,R^norm,adv                      0.047  0.078   0.009  0.035   0.010  0.003   0.003  ×10^-4  0.014  0.023
      L_1,H^norm,adv                      0.023  0.006   0.005  0.001   0.009  0.003   0.003  ×10^-4  0.009  0.002
      L_1,H^norm,clean                    0.027  0.017   0.022  0.014   0.025  0.016   0.023  0.015   0.021  0.014
      L_1,H^norm,adv / L_1,H^norm,clean   1.271  1.112   0.362  0.338   0.562  0.480   0.214  0.202   0.543  0.467
L_2   L_2,R^norm,adv                      ×10^-5 ×10^-5  ×10^-5 ×10^-6  ×10^-5 ×10^-6  ×10^-5 ×10^-7  ×10^-5 ×10^-6
      L_2,H^norm,adv                      ×10^-5 ×10^-5  ×10^-6 ×10^-6  ×10^-5 ×10^-6  ×10^-6 ×10^-6  ×10^-5 ×10^-6
      L_2,H^norm,clean                    ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5
L_∞   L_∞,R^norm,adv                      36112080174164
      L_∞,H^norm,adv                      381050132185184
      L_∞,H^norm,clean                    129    41      125    42      127    42      125    42      119    34
FID   FID_H^adv                           17.6   6.6     5.3    2.7     13.7   9.4     7.7    4.1     11.1   5.7
      FID_H^clean                         14.4   1.2     14.2   0.7     13.9   0.3     14.2   0.5     14.2   0.7
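The L_p rows of Tables 9 and 10 are normalized distances between two images of identical size. Since the precise normalization is defined earlier in the paper, the sketch below only illustrates one common convention (per-component normalization, with pixel values in [0, 255]) and should be read as an assumption rather than the paper's exact formula.

```python
# Sketch of per-component-normalised L_p distances between two images of equal size.
# The normalisation convention is an assumption; the paper defines its own L_p^norm.
import numpy as np

def lp_distances(img_a: np.ndarray, img_b: np.ndarray) -> dict:
    """img_a, img_b: uint8 or float arrays of identical shape (h, w, 3)."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    n = diff.size
    return {
        "L0": np.count_nonzero(diff) / n,            # fraction of modified components
        "L1": np.abs(diff).sum() / (255.0 * n),      # average absolute change, scaled to [0, 1]
        "L2": np.sqrt((diff ** 2).sum()) / (255.0 * n),
        "Linf": np.abs(diff).max(),                  # maximum componentwise change (0-255 scale)
    }
```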
Table 10. Visual quality as assessed by L_p distances and FID values for the untargeted scenario.
Untargeted Attack / # of Adversarial Images
                       EA/90          AdvGAN/83      SimBA/266      FGSM/724       BIM/937        PGD Inf/956    PGD L2/940      Overall/3996
                       Avg    StDev   Avg    StDev   Avg    StDev   Avg    StDev   Avg    StDev   Avg    StDev   Avg    StDev    Avg    StDev
L_0  L_0,R^norm,adv    0.822  0.150   0.838  0.088   0.994  0.050   0.990  0.017   0.980  0.013   0.974  0.011   0.993  0.010    0.942  0.048
     L_0,H^norm,adv    0.825  0.107   0.851  0.064   0.809  0.091   0.966  0.010   0.844  0.031   0.879  0.039   0.654  0.018    0.832  0.051
     L_0,H^norm,clean  0.996  0.015   0.998  0.011   0.995  0.016   0.998  0.006   0.996  0.014   0.996  0.014   0.996  0.014    0.997  0.013
L_1  L_1,R^norm,adv    0.011  0.005   0.021  0.011   0.008  0.003   0.031  0.001   0.006  0.001   0.012  0.003   0.004  ×10^-4   0.013  0.003
     L_1,H^norm,adv    0.010  0.005   0.019  0.009   0.007  0.003   0.026  0.001   0.005  0.001   0.011  0.003   0.003  ×10^-4   0.012  0.003
     L_1,H^norm,clean  0.020  0.012   0.025  0.012   0.023  0.015   0.027  0.017   0.025  0.017   0.025  0.018   0.025  0.018    0.025  0.015
     L_1,H^norm,adv / L_1,H^norm,clean   0.808  1.055   0.761  0.055   0.606  0.931   1.461  1.284   0.343  0.327   0.680  0.608   0.205  0.197    0.695  0.637
L_2  L_2,R^norm,adv    ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-6  ×10^-5 ×10^-6  ×10^-5 ×10^-6  ×10^-5 ×10^-6  ×10^-5 ×10^-12  ×10^-5 ×10^-5
     L_2,H^norm,adv    ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-6  ×10^-5 ×10^-5  ×10^-6 ×10^-6  ×10^-5 ×10^-6  ×10^-6 ×10^-6   ×10^-5 ×10^-5
     L_2,H^norm,clean  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5  ×10^-5 ×10^-5   ×10^-5 ×10^-5
L_∞  L_∞,R^norm,adv    17810333136802080164247
     L_∞,H^norm,adv    1681033913620150141174278
     L_∞,H^norm,clean  119    46      139    41      121    43      132    40      127    42      127    42      126    42       127    42
FID  FID_H^adv         3.7    1.9     15.1   2.9     8.5    3.9     49.5   12.3    5.7    2.8     23.9   18.9    8.8    4.8      16.5   6.7
     FID_H^clean       14.5   4.6     24.4   7.6     15.5   2.5     16.1   1.6     14.1   0.5     14.0   0.3     13.5   2.1      16.0   2.7
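The FID values of Tables 9 and 10 follow Heusel et al. [37]: the activations of an InceptionV3 network [43] are summarized by their mean and covariance for each image set, and the Fréchet distance between the two Gaussians is reported. The sketch below is one standard way to compute it; the pooling choice, the 299 × 299 preprocessing, and the handling of set sizes are assumptions, and the paper's exact implementation may differ.

```python
# Sketch of the Frechet Inception Distance between two sets of images (cf. [37]).
# Feature-extractor pooling, preprocessing, and set sizes are assumptions.
import numpy as np
import tensorflow as tf
from scipy import linalg

inception = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                              input_shape=(299, 299, 3))

def features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
    x = tf.keras.applications.inception_v3.preprocess_input(images.astype(np.float32))
    return inception.predict(x, verbose=0)

def fid(images_a: np.ndarray, images_b: np.ndarray) -> float:
    fa, fb = features(images_a), features(images_b)
    mu_a, mu_b = fa.mean(axis=0), fb.mean(axis=0)
    cov_a = np.cov(fa, rowvar=False)
    cov_b = np.cov(fb, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)          # matrix square root of the covariance product
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b - 2.0 * covmean))
```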
Table 11. In the targeted scenario, for each considered attack atk, execution time (in seconds, averaged over the 10 CNNs) of each step of Scheme (11) for the generation of HR adversarial images for the HR clean images A_1^10 and A_2^1. The Overhead column provides the cumulative time of all steps except Step 3. The ‰ column displays the overhead as a per mille fraction of the time required by atk in Step 3.
atk       Images    Step 1   Step 2   Step 3   Step 4   Step 5   Step 6   Step 7   Step 8   Overhead   ‰
EA        A_1^10    0.144    0.047    848.7    ×10^-4   0.053    0.101    0.363    0.048    0.757      0.89
          A_2^1     0.010    0.048    443.2    ×10^-4   0.003    0.002    0.011    0.047    0.122      0.28
FGSM      A_1^10    0.148    0.050    59.2     ×10^-4   0.045    0.103    0.360    0.049    0.755      12.75
          A_2^1     0.009    0.049    58.1     ×10^-4   0.003    0.002    0.011    0.046    0.120      2.06
BIM       A_1^10    0.143    0.049    83.8     ×10^-4   0.045    0.103    0.356    0.049    0.744      8.88
          A_2^1     0.009    0.047    97.5     ×10^-4   0.003    0.002    0.010    0.046    0.118      1.22
PGD Inf   A_1^10    0.143    0.048    90.7     ×10^-4   0.045    0.102    0.357    0.049    0.744      8.21
          A_2^1     0.009    0.051    88.3     ×10^-4   0.003    0.002    0.010    0.046    0.122      1.38
PGD L2    A_1^10    0.141    0.048    104.2    ×10^-4   0.044    0.101    0.350    0.047    0.732      7.02
          A_2^1     0.009    0.048    106.3    ×10^-4   0.003    0.002    0.010    0.046    0.118      1.11
AVG       A_1^10    0.144    0.048    -        ×10^-4   0.047    0.102    0.357    0.048    0.746      -
          A_2^1     0.009    0.049    -        ×10^-4   0.003    0.002    0.010    0.046    0.120      -
Table 12. In the untargeted scenario, for each considered attack atk, execution time (in seconds, averaged over the 10 CNNs) of each step of Scheme (11) for the generation of HR adversarial images for the HR clean images A_6^9 and A_8^4. The Overhead column provides the cumulative time of all steps except Step 3. The ‰ column displays the overhead as a per mille fraction of the time required by atk in Step 3.
atk       Images   Step 1   Step 2   Step 3   Step 4   Step 5   Step 6   Step 7   Step 8   Overhead   ‰
FGSM      A_6^9    0.056    0.042    66.3     ×10^-4   0.021    0.037    0.136    0.042    0.334      5.04
          A_8^4    0.007    0.045    67.2     ×10^-4   0.002    0.001    0.005    0.042    0.103      1.53
BIM       A_6^9    0.055    0.042    78.9     ×10^-4   0.021    0.038    0.135    0.042    0.333      4.22
          A_8^4    0.007    0.043    79.1     ×10^-4   0.002    0.001    0.005    0.042    0.101      1.27
PGD Inf   A_6^9    0.056    0.043    80.9     ×10^-4   0.020    0.038    0.137    0.042    0.336      4.15
          A_8^4    0.007    0.043    81.8     ×10^-4   0.002    0.001    0.005    0.042    0.100      1.23
PGD L2    A_6^9    0.055    0.043    80.9     ×10^-4   0.021    0.038    0.137    0.042    0.337      4.17
          A_8^4    0.007    0.045    81.5     ×10^-4   0.002    0.001    0.005    0.040    0.101      1.24
AVG       A_6^9    0.055    0.043    -        ×10^-4   0.021    0.038    0.136    0.042    0.335      -
          A_8^4    0.007    0.044    -        ×10^-4   0.002    0.001    0.005    0.041    0.101      -
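Tables 11 and 12 time each step of Scheme (11) separately, sum all steps except Step 3 (the attack itself) into the Overhead column, and express that overhead in per mille of the Step 3 time. A minimal sketch of this bookkeeping, with time.perf_counter as an assumed timing primitive, is given below.

```python
# Sketch: per-step timing and per-mille overhead, as reported in Tables 11 and 12.
import time

def timed(step_fn, *args, **kwargs):
    """Run one step and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = step_fn(*args, **kwargs)
    return result, time.perf_counter() - start

def overhead_report(step_times):
    """step_times[i] is the duration of Step i+1; Step 3 (index 2) is the attack itself."""
    overhead = sum(t for i, t in enumerate(step_times) if i != 2)
    per_mille = 1000.0 * overhead / step_times[2]
    return overhead, per_mille

# Hypothetical usage with values in the spirit of Table 11 (Step 4 set to an assumed 1e-4 s):
# overhead, per_mille = overhead_report([0.144, 0.047, 848.7, 1e-4, 0.053, 0.101, 0.363, 0.048])
```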
Table 13. Minimum, maximum, and mean of the label values τ̃_c_bef of adversarial images in the R domain (Phase 1, Step 3) when Δ_C is set to 0.55, per CNN.
           EA untarg                AdvGAN targ              AdvGAN untarg
           Min    Max    Mean       Min    Max    Mean       Min    Max    Mean
C_1        0.587  0.779  0.723      0.575  0.731  0.636      0.588  0.770  0.690
C_2        0.593  0.778  0.725      0.583  0.689  0.633      0.619  0.766  0.710
C_3        0.597  0.777  0.720      0.613  0.757  0.662      0.594  0.772  0.701
C_4        0.572  0.771  0.657      0.571  0.688  0.624      0.581  0.771  0.654
C_5        0.564  0.775  0.653      0.572  0.739  0.633      0.582  0.748  0.646
C_6        0.610  0.780  0.725      0.587  0.741  0.655      0.581  0.778  0.691
C_7        0.587  0.778  0.725      0.604  0.749  0.655      0.610  0.767  0.691
C_8        0.603  0.777  0.724      0.586  0.756  0.654      0.606  0.788  0.699
C_9        0.593  0.778  0.709      0.584  0.733  0.633      0.594  0.775  0.674
C_10       0.593  0.779  0.708      0.584  0.749  0.632      0.605  0.775  0.677
Avg        0.590  0.777  0.707      0.586  0.733  0.641      0.596  0.771  0.683
Table 14. Three clean HR images A_a^hr, their original sizes, the classification (c_a, τ_a) by VGG-16 of their reduced versions ρ(A_a^hr) (with ρ = "Lanczos"), and the target category.
a             1                        2                       3
A_a^hr        [image]                  [image]                 [image]
h × w         800 × 1280               1710 × 1740             1200 × 1600
(c_a, τ_a)    (Comic Book, 0.4916)     (Coffee Mug, 0.0844)    (Hippopotamus, 0.9993)
c_t           altar                    hamper                  trifle
Table 15. Numerical assessment of the visual quality of the HR images (b), (c), and (d) of Figures 8-10 compared to the clean ones (a), as measured by L_p distances and FID values.
                                 A_1^hr    A_2^hr    A_3^hr
L_0   L_0,H^norm,clean           0.963     0.938     0.999
      L_0,H^norm,adv,lift        0.973     0.970     0.969
      L_0,H^norm,adv,noise       0.920     0.960     0.961
L_1   L_1,H^norm,clean           0.071     0.037     0.029
      L_1,H^norm,adv,lift        0.075     0.049     0.049
      L_1,H^norm,adv,noise       0.021     0.028     0.032
L_2   L_2,H^norm,clean           ×10^-5    ×10^-5    ×10^-5
      L_2,H^norm,adv,lift        ×10^-5    ×10^-5    ×10^-5
      L_2,H^norm,adv,noise       ×10^-5    ×10^-5    ×10^-5
L_∞   L_∞,H^norm,clean           244       174       191
      L_∞,H^norm,adv,lift        245       163       198
      L_∞,H^norm,adv,noise       27        30        58
FID   FID_H^clean                180.5     45.5      50.4
      FID_H^adv,lift             221.9     44.1      64.1
      FID_H^adv,noise            55.3      34.9      21.9
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
