Artificial Intelligence and Applications—Responsible AI

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 10 January 2025

Special Issue Editors


Dr. Niusha Shafiabady
Guest Editor
Faculty of Science and Technology, Charles Darwin University, Sydney, NSW 2000, Australia
Interests: artificial intelligence; computational intelligence; explainable/responsible/ethical AI; evolutionary optimization; intelligent systems; cyber-physical systems

Dr. Jianlong Zhou
Guest Editor
Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: AI ethics; AI fairness; AI explainability; behavior analytics; human–computer interaction

Special Issue Information

Dear Colleagues,

Artificial Intelligence and its applications across different industrial sectors are transforming the world. It is important to apply AI decision-making systems to different industries with a strong emphasis on the ethical and explainable use of AI. Technological advancements are leading us toward AI decision-making systems capable of making informed, responsible and ethical decisions within their designated industries. Given the speed at which Artificial Intelligence is developing, it is critical to consider the ethical implications of AI systems. Another important consideration is the professional development of these systems and the integration of the correct principles when selecting and implementing intelligent algorithms. This Special Issue highlights applications of Artificial Intelligence in solving real-life industry problems, ranging from predictions that provide business solutions to cyber-physical systems, while maintaining and emphasizing the explainability of these decisions. It addresses applications of Artificial Intelligence and smart algorithms, such as neural networks and a variety of classification and clustering methods, to real-world problems, and it also delves into the explainable and ethical aspects of these AI solutions.

This Special Issue aims to collect the latest research on AI applications, AI explainability, machine learning and deep learning, classification algorithms, and neural network and clustering methods, such as support vector machines, graph neural networks, SHAP, convolutional neural networks, AdaBoost and KNN. Specific topics include, but are not limited to, the following:

  • Industry applications of AI;
  • AI in business;
  • AI for management;
  • Responsible AI;
  • Explainable AI;
  • Ethical AI;
  • Cyber-physical systems and explainability;
  • Trusted AI;
  • AI in healthcare;
  • AI for good;
  • Transparency in AI.

Dr. Niusha Shafiabady
Dr. Jianlong Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • explainability
  • responsible AI
  • AI applications
  • ethical AI
  • classification
  • prediction
  • clustering
  • AI for industry

Published Papers (2 papers)


Research

14 pages, 2377 KiB  
Article
Efficient Adversarial Attack Based on Moment Estimation and Lookahead Gradient
by Dian Hong, Deng Chen, Yanduo Zhang, Huabing Zhou, Liang Xie, Jianping Ju and Jianyin Tang
Electronics 2024, 13(13), 2464; https://doi.org/10.3390/electronics13132464 - 24 Jun 2024
Abstract
Adversarial example generation is a technique that perturbs inputs with imperceptible noise to induce misclassifications in neural networks, serving as a means to assess the robustness of such models. Among adversarial attack algorithms, the momentum iterative fast gradient sign method (MI-FGSM) and its variants constitute a class of highly effective offensive strategies, achieving near-perfect attack success rates in white-box settings. However, these methods’ use of the sign activation function severely compromises gradient information, which leads to low success rates in black-box attacks and results in large adversarial perturbations. In this paper, we introduce a novel adversarial attack algorithm, NA-FGTM. Our method employs the Tanh activation function instead of the sign, which accurately preserves gradient information. In addition, it utilizes the Adam optimization algorithm as well as Nesterov acceleration, which stabilizes gradient update directions and expedites gradient convergence. Above all, the transferability of adversarial examples can be enhanced. Through integration with data augmentation techniques such as DIM, TIM, and SIM, NA-FGTM can further improve the efficacy of black-box attacks. Extensive experiments on the ImageNet dataset demonstrate that our method outperforms state-of-the-art approaches in terms of black-box attack success rate and generates adversarial examples with smaller perturbations.
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
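The abstract describes a general recipe: Nesterov lookahead, Adam-style moment estimation, and a tanh-smoothed update in place of the hard sign. The sketch below illustrates that recipe on a toy differentiable loss; it is not the authors' implementation, and the stand-in gradient function, hyperparameters, and the na_fgtm_like_attack name are illustrative assumptions.

```python
# Minimal sketch of a tanh-smoothed, momentum-based iterative attack step
# (illustrative only; not the paper's code). The "model" is a stand-in
# differentiable loss whose gradient we can compute in closed form.
import numpy as np

def toy_loss_grad(x, target):
    # Stand-in for the gradient of a network's loss w.r.t. the input.
    return 2.0 * (x - target)

def na_fgtm_like_attack(x, target, eps=0.03, steps=10,
                        beta1=0.9, beta2=0.999, tau=10.0):
    """Combine Nesterov lookahead, Adam-style moment estimation,
    and tanh (instead of sign) to keep gradient magnitude information."""
    lr = eps / steps
    x_adv = x.copy()
    m = np.zeros_like(x)   # first moment (momentum)
    v = np.zeros_like(x)   # second moment
    for t in range(1, steps + 1):
        # Nesterov lookahead: take the gradient at the anticipated point.
        x_look = x_adv + lr * beta1 * m
        g = toy_loss_grad(x_look, target)
        # Adam-style moment estimation with bias correction.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        update = m_hat / (np.sqrt(v_hat) + 1e-8)
        # tanh preserves relative gradient magnitudes, unlike a hard sign().
        x_adv = x_adv + lr * np.tanh(tau * update)
        # Project back into the eps-ball around the original input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((3, 4, 4))        # toy "image"
    target = rng.random((3, 4, 4))   # direction the attack pushes away from
    x_adv = na_fgtm_like_attack(x, target)
    print("max perturbation:", np.abs(x_adv - x).max())
```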

16 pages, 318 KiB  
Article
DPShield: Optimizing Differential Privacy for High-Utility Data Analysis in Sensitive Domains
by Pratik Thantharate, Shyam Bhojwani and Anurag Thantharate
Electronics 2024, 13(12), 2333; https://doi.org/10.3390/electronics13122333 - 14 Jun 2024
Abstract
The proliferation of cloud computing has amplified the need for robust privacy-preserving technologies, particularly when dealing with sensitive financial and human resources (HR) data. However, traditional differential privacy methods often struggle to balance rigorous privacy protections with maintaining data utility. This study introduces DPShield, an optimized adaptive framework that enhances the trade-off between privacy guarantees and data utility in cloud environments. DPShield leverages advanced differential privacy techniques, including dynamic noise-injection mechanisms tailored to data sensitivity, cumulative privacy loss tracking, and domain-specific optimizations. Through comprehensive evaluations on synthetic financial and real-world HR datasets, DPShield demonstrated a remarkable 21.7% improvement in aggregate query accuracy over existing differential privacy approaches. Moreover, it maintained machine learning model accuracy within 5% of non-private benchmarks, ensuring high utility for predictive analytics. These achievements signify a major advancement in differential privacy, offering a scalable solution that harmonizes robust privacy assurances with practical data analysis needs. DPShield’s domain adaptability and seamless integration with cloud architectures underscore its potential as a versatile privacy-enhancing tool. This work bridges the gap between theoretical privacy guarantees and practical implementation demands, paving the way for more secure, ethical, and insightful data usage in cloud computing environments.
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
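The abstract describes adaptive noise injection under a cumulative privacy budget. The snippet below is a minimal sketch of that general idea using the standard Laplace mechanism; the AdaptiveDPQuery class, its parameters, and the sensitivity-weighting scheme are illustrative assumptions and do not reflect DPShield's actual design.

```python
# Minimal sketch: Laplace-mechanism queries with a per-query budget scaled by
# a data-sensitivity weight and cumulative privacy-loss tracking.
import numpy as np

class AdaptiveDPQuery:
    def __init__(self, total_epsilon=1.0):
        self.total_epsilon = total_epsilon   # overall privacy budget
        self.spent_epsilon = 0.0             # cumulative privacy loss so far

    def noisy_sum(self, values, sensitivity, epsilon_per_query=0.1,
                  sensitivity_weight=1.0):
        """Answer a sum query under the Laplace mechanism.

        `sensitivity` is the query's L1 sensitivity; a weight > 1 spends more
        budget (less noise) on low-risk fields, < 1 on high-risk ones.
        """
        eps = epsilon_per_query * sensitivity_weight
        if self.spent_epsilon + eps > self.total_epsilon:
            raise RuntimeError("privacy budget exhausted")
        self.spent_epsilon += eps
        scale = sensitivity / eps            # Laplace scale b = sensitivity / eps
        noise = np.random.laplace(loc=0.0, scale=scale)
        return float(np.sum(values) + noise)

if __name__ == "__main__":
    salaries = np.array([52_000, 61_500, 48_200, 75_000])
    dp = AdaptiveDPQuery(total_epsilon=1.0)
    # Salary is sensitive HR data, so spend budget cautiously (weight < 1).
    total = dp.noisy_sum(salaries, sensitivity=100_000, sensitivity_weight=0.5)
    print("noisy total salary:", total)
    print("privacy spent:", dp.spent_epsilon)
```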
