Article
Peer-Review Record

AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems

Information 2023, 14(12), 634; https://doi.org/10.3390/info14120634
by Amira Guesmi *, Muhammad Abdullah Hanif and Muhammad Shafique
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 31 July 2023 / Revised: 15 September 2023 / Accepted: 20 September 2023 / Published: 28 November 2023

Round 1

Reviewer 1 Report

The presented research on adversarial attacks is topical: it is crucial to understand and mitigate adversarial attacks to ensure the development of safe and trustworthy intelligent systems. Taking measures to defend against adversarial attacks is imperative for maintaining the reliability and security of deep learning models, particularly in critical applications like autonomous vehicles, robotics, and other intelligent systems that interact with people.
An adversary introduces imperceptible perturbations into the digital input image, specifically tailored to deceive a given deep neural network (DNN) model.

The main contributions of this research, as stated by the authors, are:
• Proposed a novel technique that utilizes a random search optimization method guided by Grad-CAM to craft adversarial perturbations (a minimal sketch follows this list). These perturbations are introduced into the visual path between the camera and the object without altering the appearance of the object itself.
• The adversarial perturbations are designed to resemble a natural phenomenon, specifically raindrops, resulting in an inconspicuous pattern. These patterns are printed on a translucent sticker and affixed to the camera lens, making them difficult to detect.
• The proposed adversarial sticker applies the same perturbation to all images belonging to the same class, making it a universal attack against the target class.
• Experiments demonstrate the potency of the AdvRain attack, achieving a significant decrease of over 61% in accuracy for VGG-19 on ImageNet and 57% for Resnet34 on Caltech-101 compared to 37% and 40% (for the same structural similarity index (SSIM)) when using FakeWeather.
• Studied the impact on model interpretability of blurring specific parts of the image, i.e., introducing low-frequency patterns. This provides valuable insights into the behavior of deep learning models under the proposed adversarial attack.
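
To make the described pipeline concrete, below is a minimal sketch of how a Grad-CAM-guided random search over raindrop positions might look. This is an illustration, not the authors' implementation: the toy drop model in `render_raindrops` (blurring circular regions to mimic drops on the lens) and all hyperparameters (`n_drops`, `iters`, blur kernel, drop radius) are assumptions.

```python
# Hypothetical sketch of a Grad-CAM-guided random search for raindrop placement.
# Not the authors' code: render_raindrops() and all hyperparameters are assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def grad_cam(model, layer, x, target_class):
    """Return an HxW saliency map for x via Grad-CAM on `layer` (model in eval mode)."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)       # GAP over the gradients
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return cam[0, 0] / (cam.max() + 1e-8)

def render_raindrops(x, positions, radius=12):
    """Toy drop model: blur circular patches of x at the given (row, col) centers."""
    blurred = TF.gaussian_blur(x, kernel_size=21, sigma=6.0)
    mask = torch.zeros_like(x[:, :1])
    ys, xs = torch.meshgrid(torch.arange(x.shape[-2]), torch.arange(x.shape[-1]),
                            indexing="ij")
    for (r, c) in positions:
        mask[0, 0][(ys - r) ** 2 + (xs - c) ** 2 <= radius ** 2] = 1.0
    return x * (1 - mask) + blurred * mask                  # blend drops over the image

def random_search_attack(model, layer, x, true_class, n_drops=8, iters=100):
    """Sample drop layouts from high-saliency regions; keep the most damaging one."""
    cam = grad_cam(model, layer, x, true_class)
    probs = (cam / cam.sum()).flatten()                     # sample where the model looks
    best_pos, best_conf = None, float("inf")
    with torch.no_grad():
        for _ in range(iters):
            idx = torch.multinomial(probs, n_drops)
            pos = [(int(i) // x.shape[-1], int(i) % x.shape[-1]) for i in idx]
            conf = F.softmax(model(render_raindrops(x, pos)), dim=1)[0, true_class]
            if conf < best_conf:                            # lower true-class confidence
                best_conf, best_pos = float(conf), pos
    return best_pos, best_conf
```

In this toy setup, drop centers are sampled preferentially from high-saliency regions and the layout that most reduces the true-class confidence is kept; the paper additionally reports SSIM to verify that the adversarial images remain visually close to the clean ones.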

Here are a few suggestions and questions for the authors:
- The literature review could be extended to include more articles on adversarial attacks and their prevention.
- The models analyzed in the research (VGG-19, Resnet34) are quite old. It would be interesting to see more recent CNN- and vision-transformer-based models included in the assessment of resilience to adversarial attacks.
- How would the authors suggest mitigating the effect of the AdvRain attacks?
- All the adversarial attacks presented in Figure 2 ("AdvRain vs. existing physical-world attacks") could be discussed in more detail in the paper and cited.
- How easy is it to perform an AdvRain attack in real-world scenarios if the pattern of the attack cannot be optimized on the fly? Real-world scenarios (e.g., cars) involve videos. How would the presented AdvRain attack perform on videos, and what would be the success ratio of the attack on a sequence of images (video)? Would the carefully crafted pattern of raindrops affect the analysis of all the video frames?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper presents a novel technique called AdvRain for crafting adversarial perturbations in camera-based vision systems. The technique uses a random search optimization method guided by Grad-CAM to identify critical positions in the image that influence the decision-making process of deep learning models. Adversarial perturbations resembling raindrops are generated and printed on a translucent sticker placed over the camera lens, resulting in inconspicuous adversarial images. The experiments demonstrate the effectiveness of AdvRain in reducing the classification accuracy of VGG-19 and Resnet34 models on ImageNet and Caltech-101 datasets. The paper also discusses the impact of different sizes of raindrops on the models' accuracy and the visual similarity of the generated adversarial examples.
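
The summary notes that the paper discusses the visual similarity of the generated adversarial examples; a quick way to quantify this is to compute SSIM between each adversarial image and its clean counterpart. A minimal example using scikit-image (file names are placeholders):

```python
# Minimal SSIM check between a clean image and its adversarial counterpart.
# File names are placeholders; requires scikit-image >= 0.19 for channel_axis.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

clean = np.asarray(Image.open("clean.png").convert("RGB"), dtype=np.float64)
adv = np.asarray(Image.open("adv_rain.png").convert("RGB"), dtype=np.float64)

# channel_axis=-1 treats the last dimension as color; data_range matches 8-bit images.
score = ssim(clean, adv, channel_axis=-1, data_range=255.0)
print(f"SSIM(clean, adversarial) = {score:.3f}")  # 1.0 would mean identical images
```

Comparing attacks at matched SSIM, as the paper does against FakeWeather, ensures that accuracy drops are measured at equal levels of visual distortion.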


The paper is well organized in general. The topic is relevant for "Information" readers. I have just a few suggestions; please see my detailed comments below:


1. What is the technique used in AdvRain for crafting adversarial perturbations? Please elaborate on it more clearly.

2. How are the adversarial perturbations generated and applied in AdvRain? This part should be explained in more detail.

3. Are there any limitations in the presented method/technique?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
