Security and Privacy for Machine Learning Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 January 2024) | Viewed by 9162

Special Issue Editors


Dr. Yogachandran Rahulamathavan
Guest Editor
Institute for Digital Technologies, Loughborough University London, London E20 3BS, UK
Interests: privacy-preserving techniques; applied cryptography; homomorphic machine learning; cybersecurity

Dr. Beibei Li
Guest Editor
School of Cyber Science and Engineering, Sichuan University, Chengdu 610207, China
Interests: cyber-physical security; IoT security; artificial intelligence

Special Issue Information

Dear Colleagues,

Artificial intelligence and machine learning play a key role in developing cybersecurity tools that protect cyber-physical infrastructure from malicious actors. These technologies are now used across a multitude of domains to automate tasks such as driving, diagnostics, trading, and marketing. As we rely heavily on artificial intelligence and machine learning in our day-to-day activities, it is paramount to protect these technologies from emerging cyber threats.

Machine learning models are built from large volumes of high-quality, application-specific data. Although a model can only be trained where such data are available, anyone can use the trained model for classification tasks via the Internet. While this looks revolutionary, trained machine learning models are not readily available to users in sectors such as healthcare, finance, or marketing because of privacy concerns: users are reluctant to share sensitive data with service providers they do not trust, and privacy legislation such as the General Data Protection Regulation (GDPR) restricts organizations from sharing data with other institutions. Moreover, security threats such as those posed by generative adversarial networks seriously undermine the reliability of models used in high-stakes applications such as autonomous vehicles and medical diagnostics.

A trusted framework is therefore required to ensure the security and data privacy of machine learning systems; this will boost the confidence of consumers who want to use machine learning services without security and privacy worries. Simply encrypting data protects them only during storage and transmission. Hence, federated learning, fully homomorphic encryption, differential privacy, secure multiparty computation, and other novel technologies are being explored to uphold the security and privacy of machine learning algorithms (a minimal differential-privacy sketch is given after the topic list below).

This Special Issue invites papers presenting novel algorithms that mitigate security and privacy threats to the emerging artificial intelligence and machine learning paradigm. Potential topics include, but are not limited to, the following research areas:

  1. Privacy-preserving techniques
  2. Encrypted computation
  3. Machine learning in the encrypted domain
  4. Homomorphic encryption for machine learning
  5. Secure multi-party computation and federated learning
  6. Security and privacy of federated machine learning
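
To make the distinction between at-rest encryption and privacy-preserving computation concrete, here is a minimal sketch of the classic Laplace mechanism of (central) differential privacy applied to a mean query. It is an illustrative example only, not drawn from any paper in this issue; the dataset, epsilon value, and function names are assumptions.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so one record can change
    the mean by at most (upper - lower) / n: the L1 sensitivity.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)  # sensitivity of the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical usage: privately estimate the mean of sensor readings.
readings = np.array([36.6, 37.1, 36.9, 38.2, 36.7])
print(dp_mean(readings, lower=35.0, upper=42.0, epsilon=1.0))
```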

Dr. Yogachandran Rahulamathavan
Dr. Beibei Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

14 pages, 7967 KiB  
Article
A Federated Attention-Based Multimodal Biometric Recognition Approach in IoT
by Leyu Lin, Yue Zhao, Jintao Meng and Qi Zhao
Sensors 2023, 23(13), 6006; https://doi.org/10.3390/s23136006 - 28 Jun 2023
Cited by 1 | Viewed by 1439
Abstract
The rise of artificial intelligence applications has led to a surge in Internet of Things (IoT) research. Biometric recognition methods are extensively used in IoT access control due to their convenience. To address the limitations of unimodal biometric recognition systems, we propose an attention-based multimodal biometric recognition (AMBR) network that incorporates attention mechanisms to extract biometric features and fuse the modalities effectively. Additionally, to overcome the data privacy and regulatory issues associated with collecting training data in IoT systems, we utilize Federated Learning (FL) to train our model. This collaborative machine-learning approach enables data parties to train models while preserving data privacy. Our proposed approach achieves 0.68%, 0.47%, and 0.80% Equal Error Rate (EER) on the three VoxCeleb1 official trial lists and performs favorably against current methods, and the experimental results in FL settings illustrate the potential of AMBR in the multimodal biometric recognition scenario. Full article
(This article belongs to the Special Issue Security and Privacy for Machine Learning Applications)
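
As a companion to the abstract above, here is a minimal sketch of the federated averaging step that underlies FL training of a model such as AMBR. The toy model shapes, client weighting, and function names are illustrative assumptions; the paper's actual training protocol may differ.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg aggregation round: average client model parameters,
    weighted by local dataset size, without moving any raw data."""
    total = sum(client_sizes)
    layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(layers)
    ]

# Hypothetical usage: three clients, each holding a two-layer model.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [120, 80, 200]  # local dataset sizes
global_model = federated_average(clients, sizes)
print([p.shape for p in global_model])
```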

15 pages, 1174 KiB  
Article
Hierarchical Aggregation for Numerical Data under Local Differential Privacy
by Mingchao Hao, Wanqing Wu and Yuan Wan
Sensors 2023, 23(3), 1115; https://doi.org/10.3390/s23031115 - 18 Jan 2023
Viewed by 1269
Abstract
The proposal of local differential privacy solves the problem that the data collector must be trusted in centralized differential privacy models. The statistical analysis of numerical data under local differential privacy has been widely studied. However, in real-world scenarios, numerical data from the same category but in different ranges frequently require different levels of privacy protection. We propose a hierarchical aggregation framework for numerical data under local differential privacy. In this framework, private data in different ranges are assigned different privacy levels and then perturbed hierarchically and locally. After receiving users' data, the aggregator perturbs the data again to convert low-level data into high-level data, increasing the amount of data at each privacy level and thereby improving the accuracy of the statistical analysis. Theoretical analysis proves that this framework satisfies local differential privacy and that its final mean estimate is unbiased. The proposed framework is combined with mini-batch stochastic gradient descent to complete the linear regression task. Extensive experiments on both synthetic and real datasets show that the framework achieves higher accuracy than existing methods in both mean estimation and mini-batch stochastic gradient descent experiments. Full article
(This article belongs to the Special Issue Security and Privacy for Machine Learning Applications)
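
For readers unfamiliar with the local setting, the sketch below implements a standard LDP mechanism for numerical data (the randomized-response-style mechanism of Duchi et al.) together with an unbiased mean estimate. It illustrates the local-perturbation model the paper builds on; the paper's hierarchical framework itself is more involved, and all names here are assumptions.

```python
import numpy as np

def duchi_perturb(x, epsilon, rng):
    """Duchi et al.'s LDP mechanism for a value x in [-1, 1].

    Outputs +/- C with C = (e^eps + 1) / (e^eps - 1); the report is an
    unbiased estimate of x and satisfies epsilon-LDP.
    """
    e = np.exp(epsilon)
    c = (e + 1) / (e - 1)
    p_plus = 0.5 + x * (e - 1) / (2 * (e + 1))
    return c if rng.random() < p_plus else -c

rng = np.random.default_rng(1)
true_values = rng.uniform(-1, 1, size=50_000)
reports = np.array([duchi_perturb(x, epsilon=1.0, rng=rng) for x in true_values])
# The mean of the perturbed reports is an unbiased estimate of the true mean.
print(true_values.mean(), reports.mean())
```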

18 pages, 1102 KiB  
Article
A Novel Steganography Method for Character-Level Text Image Based on Adversarial Attacks
by Kangyi Ding, Teng Hu, Weina Niu, Xiaolei Liu, Junpeng He, Mingyong Yin and Xiaosong Zhang
Sensors 2022, 22(17), 6497; https://doi.org/10.3390/s22176497 - 29 Aug 2022
Cited by 3 | Viewed by 1705
Abstract
The Internet has become the main channel of information communication and carries a large amount of secret information. Although network communication provides a convenient channel for human interaction, it also carries a risk of information leakage. Traditional image steganography uses manually crafted steganographic algorithms or custom models, whereas our approach uses ordinary OCR models for information embedding and extraction; even if our OCR models are intercepted, it is difficult to link them to steganography. We propose a novel steganography method for character-level text images based on adversarial attacks. We exploit the complexity and uniqueness of neural network decision boundaries and use neural networks as the tool for information embedding and extraction, applying an adversarial attack to embed the steganographic information into the character region of the image. To avoid detection by other OCR models, we optimize the generation of the adversarial samples and use a verification model to filter the generated steganographic images, which, in turn, ensures that the embedded information can only be recognized by our local model. Decoupling experiments show that the strategies we adopt to weaken transferability reduce the possibility of other OCR models recognizing the embedded information while maintaining the success rate of information embedding, and the perturbations added to embed the information remain acceptable. Finally, we explored the impact of different parameters on the algorithm, and its potential, through parameter selection experiments, and we verified the effectiveness of our validation model in selecting the best steganographic images. The experiments show that our algorithm achieves a 100% information embedding rate and a steganography success rate above 95% under the set condition of 3 samples per group. In addition, the embedded information can hardly be detected by other OCR models. Full article
(This article belongs to the Special Issue Security and Privacy for Machine Learning Applications)
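
The embedding step described above relies on adversarial perturbations steering a recognizer's output. The sketch below shows the generic targeted, iterated FGSM update on which such attacks are commonly built; the tiny stand-in model, the target-class encoding of hidden bits, and all names are illustrative assumptions, not the paper's actual models or attack.

```python
import torch
import torch.nn.functional as F

# A stand-in "OCR" classifier over 28x28 character crops (assumption).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 36))
model.eval()

def embed_bits_fgsm(image, target_class, eps=0.03, steps=10):
    """Nudge the image so the local model predicts `target_class`,
    which encodes the hidden payload. Targeted, iterated FGSM."""
    x = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x), target_class)
        grad, = torch.autograd.grad(loss, x)
        # Targeted attack: step *down* the loss toward the target class.
        x = (x - eps / steps * grad.sign()).detach().requires_grad_(True)
    return x.detach()

img = torch.rand(1, 1, 28, 28)     # hypothetical character image
target = torch.tensor([7])         # class index encoding hidden bits
stego = embed_bits_fgsm(img, target)
print(model(stego).argmax(dim=1))  # class the local model now reads back
```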

19 pages, 539 KiB  
Article
A Novel Homomorphic Approach for Preserving Privacy of Patient Data in Telemedicine
by Yasir Iqbal, Shahzaib Tahir, Hasan Tahir, Fawad Khan, Saqib Saeed, Abdullah M. Almuhaideb and Adeel M. Syed
Sensors 2022, 22(12), 4432; https://doi.org/10.3390/s22124432 - 11 Jun 2022
Cited by 10 | Viewed by 3413
Abstract
Globally, the surge in disease and the urgency of maintaining social distancing have reawakened the use of telemedicine/telehealth. Amid the global health crisis, the world adopted a culture of online consultancy, so the conventional telemedicine model needs to be revamped to meet current challenges and requirements. Security and privacy of data are the main aspects to be considered in this era. Data-driven organizations must also comply with regulations such as HIPAA and GDPR, which oblige them to protect user data, including protected health information (PHI), by implementing the necessary security measures. Patients and doctors are now connected to the cloud to access medical records, e.g., voice recordings of clinical sessions. Voice data reside in the cloud and can be compromised: while searching voice data, a patient's critical data can be leaked, exposed to cloud service providers, and spoofed by hackers. Secure searchable encryption is therefore a requirement for telemedicine systems that support voice and phoneme search. This research proposes secure searching of phonemes from audio recordings using fully homomorphic encryption over the cloud. It utilizes IBM's homomorphic encryption library (HElib) and achieves indistinguishability. Testing and implementation were done on audio datasets of different sizes while varying the security parameters. The analysis includes a thorough security analysis along with leakage profiling. The proposed scheme achieves higher levels of security and privacy as the security parameters increase; in use cases where such levels of security are not desired, one may rely on reduced security parameters. Full article
(This article belongs to the Special Issue Security and Privacy for Machine Learning Applications)
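
The paper itself uses HElib's fully homomorphic encryption; as a simpler stand-in, the sketch below shows the underlying idea of scoring a search query on encrypted data using the additively homomorphic Paillier scheme via the python-paillier (`phe`) library. The phoneme inventory, the one-hot query encoding, the scoring formula, and all names are assumptions, not the paper's scheme.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Client: encode a phoneme query as a one-hot vector and encrypt it.
PHONEMES = ["AA", "AE", "AH", "B", "K", "S"]   # toy inventory (assumption)
query = [public_key.encrypt(1 if p == "S" else 0) for p in PHONEMES]

# Server: holds plaintext per-recording phoneme counts and computes an
# encrypted match score as a dot product, without learning the query.
recording_counts = [0, 2, 5, 1, 0, 3]          # counts of each phoneme
enc_score = sum(c * n for c, n in zip(query, recording_counts))

# Client: decrypt the score; 3 occurrences of "S" in this recording.
print(private_key.decrypt(enc_score))
```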
