Cybersecurity and Artificial Intelligence: Current and Future Developments

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 30 September 2024

Special Issue Editors


Guest Editor
Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
Interests: cybersecurity; AI; IoT

Guest Editor
Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
Interests: cybersecurity; IoT; fog computing

Guest Editor
Department of Software Engineering, National University of Modern Languages, Islamabad 44000, Pakistan
Interests: blockchain; AI; software engineering

Special Issue Information

Dear Colleagues,

This Special Issue titled "Cybersecurity and Artificial Intelligence: Current and Future Developments" provides a comprehensive exploration of the sophisticated relationship between cybersecurity and artificial intelligence (AI). In the current technological landscape, cyber threats are escalating in sophistication, demanding innovative approaches to reinforce digital defenses. This Special Issue showcases cutting-edge research on AI-driven cybersecurity solutions, emphasizing their efficacy in adapting to evolving threat vectors. Machine learning algorithms and natural language processing techniques are highlighted for their ability to analyze vast datasets and detect patterns, enhancing threat intelligence across diverse industries.

However, this collection also critically examines the challenges and ethical considerations associated with the integration of AI in cybersecurity. Issues such as bias in AI algorithms, the potential for malicious use, and privacy concerns are addressed to encourage an understanding of responsible AI deployment. Looking forward, this Special Issue envisions the convergence of quantum computing, blockchain, and AI, offering insights into the future of proactive threat hunting, automated incident response, and self-healing systems.

The interdisciplinary nature of the contributions, spanning computer science, ethics, law, and policy, can stimulate discussion and discourse, making this collection an invaluable resource for academics, practitioners, and policymakers navigating the complex landscape of securing the digital realm in the age of AI. In essence, this Special Issue serves as a body of knowledge, providing a broad perspective on the current state and future trajectories of the dynamic relationship between cybersecurity and artificial intelligence.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  1. Adversarial machine learning in cybersecurity;
  2. Integrating AI for threat detection and prevention;
  3. Ethical considerations of AI-driven cybersecurity;
  4. Secure and privacy-preserving AI algorithms;
  5. AI-powered incident response and forensics;
  6. Machine learning for anomaly detection in network traffic;
  7. Role of AI in predictive cyber risk assessment;
  8. Deep learning approaches for malware analysis;
  9. AI-driven authentication and access control;
  10. The intersection of blockchain and AI in cybersecurity;
  11. AI in cybersecurity policy and governance;
  12. Securing IoT devices with artificial intelligence;
  13. Human factors in AI-enhanced cybersecurity;
  14. Cyber threat intelligence using machine learning models;
  15. Explainability and transparency of AI for cybersecurity.

Dr. Sheikh Tahir Bakhsh
Dr. Sabeen Tahir
Dr. Basit Shahzad
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI-driven cybersecurity
  • ethical considerations
  • future directions
  • interdisciplinary perspectives
  • digital resilience

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research


21 pages, 2574 KiB  
Article
ZTCloudGuard: Zero Trust Context-Aware Access Management Framework to Avoid Medical Errors in the Era of Generative AI and Cloud-Based Health Information Ecosystems
by Khalid Al-hammuri, Fayez Gebali and Awos Kanan
AI 2024, 5(3), 1111-1131; https://doi.org/10.3390/ai5030055 - 8 Jul 2024
Abstract
Managing access between large numbers of distributed medical devices has become a crucial aspect of modern healthcare systems, enabling the establishment of smart hospitals and telehealth infrastructure. However, as telehealth technology continues to evolve and Internet of Things (IoT) devices become more widely used, they are also increasingly exposed to various types of vulnerabilities and medical errors. In healthcare information systems, about 90% of vulnerabilities emerge from medical and human error. As a result, there is a need for additional research and development of security tools to prevent such attacks. This article proposes a zero-trust-based context-aware framework for managing access to the main components of the cloud ecosystem, including users, devices, and output data. The main goal and benefit of the proposed framework is to build a scoring system to prevent or alleviate medical errors while using distributed medical devices in cloud-based healthcare information systems. The framework has two main scoring criteria to maintain the chain of trust. First, it proposes a critical trust score based on cloud-native microservices for authentication, encryption, logging, and authorization. Second, a bond trust scoring system is created to assess the real-time semantic and syntactic analysis of attributes stored in a healthcare information system. The analysis is based on a pre-trained machine learning model that generates the semantic and syntactic scores. The framework also takes into account regulatory compliance and user consent in the creation of the scoring system. The advantage of this method is that it applies to any language and adapts to all attributes, as it relies on a language model rather than a set of predefined and limited attributes. The results show a high F1 score of 93.5%, demonstrating the framework's validity for detecting medical errors.

17 pages, 3202 KiB  
Article
Arabic Spam Tweets Classification: A Comprehensive Machine Learning Approach
by Wafa Hussain Hantom and Atta Rahman
AI 2024, 5(3), 1049-1065; https://doi.org/10.3390/ai5030052 - 2 Jul 2024
Abstract
Nowadays, one of the most common problems faced by Twitter (also known as X) users, including individuals as well as organizations, is dealing with spam tweets. The problem continues to proliferate due to the increasing popularity and number of users of social media platforms. Due to this overwhelming interest, spammers can post texts, images, and videos containing suspicious links that can be used to spread viruses, rumors, negative marketing, and sarcasm, and potentially hack the user's information. Spam detection is among the hottest research areas in natural language processing (NLP) and cybersecurity. Several studies have been conducted in this regard, but they mainly focus on the English language. However, Arabic tweet spam detection still has a long way to go, especially regarding the diverse dialects other than Modern Standard Arabic (MSA), since the standard dialect is seldom used in tweets. The situation demands an automated, robust, and efficient Arabic spam tweet detection approach. To address the issue, in this research, various machine learning and deep learning models have been investigated to detect spam tweets in Arabic, including Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), and Long Short-Term Memory (LSTM). In this regard, we have focused on the words as well as the meaning of the tweet text. Across several experiments, the proposed models produced promising results in contrast to previous approaches on the same and diverse datasets. The results showed that the RF classifier achieved 96.78% accuracy and the LSTM classifier 94.56%, followed by the SVM classifier at 82%. Further, in terms of F1-score, there is an improvement of 21.38%, 19.16%, and 5.2% using the RF, LSTM, and SVM classifiers, respectively, compared to schemes using the same dataset.

Review


21 pages, 2273 KiB  
Review
Artificial Intelligence-Driven Facial Image Analysis for the Early Detection of Rare Diseases: Legal, Ethical, Forensic, and Cybersecurity Considerations
by Peter Kováč, Peter Jackuliak, Alexandra Bražinová, Ivan Varga, Michal Aláč, Martin Smatana, Dušan Lovich and Andrej Thurzo
AI 2024, 5(3), 990-1010; https://doi.org/10.3390/ai5030049 - 27 Jun 2024
Abstract
This narrative review explores the potential, complexities, and consequences of using artificial intelligence (AI) to screen large government-held facial image databases for the early detection of rare genetic diseases. Government-held facial image databases, combined with the power of artificial intelligence, offer the potential to revolutionize the early diagnosis of rare genetic diseases. AI-powered phenotyping, as exemplified by the Face2Gene app, enables highly accurate genetic assessments from simple photographs. This and similar breakthrough technologies raise significant privacy and ethical concerns about potential government overreach augmented with the power of AI. This paper explores the concept, methods, and legal complexities of AI-based phenotyping within the EU. It highlights the transformative potential of such tools for public health while emphasizing the critical need to balance innovation with the protection of individual privacy and ethical boundaries. This comprehensive overview underscores the urgent need to develop robust safeguards around individual rights while responsibly utilizing AI's potential for improved healthcare outcomes, including within a forensic context. Furthermore, the intersection of AI and sensitive genetic data necessitates proactive cybersecurity measures. Current and future developments must focus on securing AI models against attacks, ensuring data integrity, and safeguarding the privacy of individuals within this technological landscape.
