
Data Privacy and Security for Information Engineering

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 February 2025 | Viewed by 56122

Special Issue Editor


Guest Editor
Department of Frontier Media Science, School of Interdisciplinary Mathematical Sciences, Meiji University, Tokyo 164-8525, Japan
Interests: data privacy; network security

Special Issue Information

Dear Colleagues,

The rapid growth of data-driven technologies for large-scale data processing has raised significant concerns about data privacy and security. This Special Issue aims to explore the various challenges related to big data and to address them with advanced technologies. Its scope includes differential privacy, anonymization schemes for multi-dimensional data, studies on various threat models, and the human aspects of privacy policies for privacy technologies.
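To give a concrete flavor of one listed topic, differential privacy, here is a minimal sketch of the Laplace mechanism: noise calibrated to a query's sensitivity and the privacy budget epsilon is added to a numeric query result. This is our own illustration, not taken from any paper in this issue.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return the query answer plus Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5              # uniform on (-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))  # inverse-CDF sampling
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(42, sensitivity=1, epsilon=0.5)
```

Smaller epsilon means stronger privacy and larger noise; the released value is safe to publish because its distribution changes only slightly when any one individual's record changes.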

Prof. Dr. Hiroaki Kikuchi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • differential privacy
  • data anonymization
  • privacy policy
  • membership inference
  • attribute inference
  • surveillance camera

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

Jump to: Review

17 pages, 1162 KiB  
Article
Analyzing Machine Learning Models for Activity Recognition Using Homomorphically Encrypted Real-World Smart Home Datasets: A Case Study
by Hasina Attaullah, Sanaullah Sanaullah and Thorsten Jungeblut
Appl. Sci. 2024, 14(19), 9047; https://doi.org/10.3390/app14199047 - 7 Oct 2024
Viewed by 1059
Abstract
The era of digitization and IoT devices is marked by the constant storage of massive amounts of data. The growing adoption of smart home environments, which use sensors and devices to monitor and control various aspects of daily life, underscores the need for effective privacy and security measures. Homomorphic encryption (HE) makes this possible: it enables computations to be performed directly on encrypted data, producing results without exposing the underlying plaintext. This study therefore compares the performance of three ML models, XGBoost, Random Forest, and a Decision Classifier, on a real-world smart home dataset, both with and without fully homomorphic encryption (FHE). Practical results demonstrate that the Decision Classifier maintained high accuracy with FHE and even surpassed its plaintext performance, suggesting that encryption can enhance model accuracy under certain conditions. Additionally, Random Forest showed efficiency in execution time and low prediction errors with FHE, making it a strong candidate for encrypted data processing in smart homes. These findings highlight the potential of FHE to set new privacy standards, advancing secure and privacy-preserving technologies in smart environments. Full article
(This article belongs to the Special Issue Data Privacy and Security for Information Engineering)
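As a rough intuition for how computation on encrypted data is possible at all, here is a toy additively homomorphic scheme (textbook Paillier with deliberately tiny demo primes). It is our illustration only and bears no relation to the FHE scheme, parameters, or library evaluated in the paper above.

```python
import math
import random

# Toy Paillier keypair from tiny demo primes (real use needs >= 2048-bit moduli).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)                  # Carmichael's lambda for n = p*q
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m as L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts.
c_sum = (encrypt(17) * encrypt(25)) % n2      # decrypts to 42
```

Paillier supports only additions on ciphertexts; the "fully" in FHE means supporting both additions and multiplications, which is what lets entire ML models run over encrypted inputs.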

15 pages, 657 KiB  
Article
A Differentially Private (Random) Decision Tree without Noise from k-Anonymity
by Atsushi Waseda, Ryo Nojima and Lihua Wang
Appl. Sci. 2024, 14(17), 7625; https://doi.org/10.3390/app14177625 - 28 Aug 2024
Viewed by 911
Abstract
This paper focuses on the relationship between decision trees, a typical machine learning method, and data anonymization. It is known that information leaked from trained decision trees can be evaluated using well-studied data anonymization techniques and that decision trees can be strengthened using k-anonymity and ℓ-diversity; unfortunately, however, this does not seem sufficient for differential privacy. In this paper, we show how one might apply k-anonymity to a (random) decision tree, which is a variant of the decision tree. Surprisingly, this results in differential privacy, which means that security is amplified from k-anonymity to differential privacy without the addition of noise. Full article
(This article belongs to the Special Issue Data Privacy and Security for Information Engineering)
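The group-size principle behind k-anonymity can be sketched in a few lines. This is a hypothetical illustration of the idea only, not the paper's random-decision-tree construction: treat the attribute values along a tree path as a quasi-identifier, and release a leaf's count only when at least k records share it.

```python
from collections import Counter

def k_anonymous_leaf_counts(records, split_attrs, k):
    """Count records per leaf (one leaf per combination of split-attribute
    values) and suppress any leaf holding fewer than k records."""
    leaves = Counter(tuple(rec[a] for a in split_attrs) for rec in records)
    return {leaf: cnt for leaf, cnt in leaves.items() if cnt >= k}

records = [
    {"age": "30s", "zip": "164", "dx": 1},
    {"age": "30s", "zip": "164", "dx": 0},
    {"age": "30s", "zip": "164", "dx": 1},
    {"age": "40s", "zip": "101", "dx": 1},   # unique leaf: suppressed for k = 2
]
released = k_anonymous_leaf_counts(records, ("age", "zip"), k=2)
```

Suppressing small groups ensures every released leaf "hides" each individual among at least k-1 others; the paper's contribution is showing that, for random decision trees, this suppression alone yields differential privacy with no added noise.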

23 pages, 1267 KiB  
Article
Comparative Analysis of Local Differential Privacy Schemes in Healthcare Datasets
by Andres Hernandez-Matamoros and Hiroaki Kikuchi
Appl. Sci. 2024, 14(7), 2864; https://doi.org/10.3390/app14072864 - 28 Mar 2024
Cited by 2 | Viewed by 1646
Abstract
In the rapidly evolving landscape of healthcare technology, the critical need for robust privacy safeguards is undeniable. Local Differential Privacy (LDP) offers a potential solution to address privacy concerns in data-rich industries. However, challenges such as the curse of dimensionality arise when dealing with multidimensional data. This is particularly pronounced in k-way joint probability estimation, where higher values of k lead to decreased accuracy. To overcome these challenges, we propose the integration of Bayesian Ridge Regression (BRR), known for its effectiveness in handling multicollinearity. Our approach demonstrates robustness, manifesting a noteworthy reduction in average variant distance when compared to baseline algorithms such as LOPUB and LOCOP. Additionally, we leverage the R-squared metric to highlight BRR’s advantages, illustrating its performance relative to LASSO, on which LOPUB and LOCOP are based. This paper also addresses datasets exhibiting high correlation between attributes, which potentially allows information about one attribute to be extracted from another. We show the superior performance of BRR over LOPUB and LOCOP across 15 datasets with varying average attribute correlations. Healthcare takes center stage in this collection of datasets, which also spans fields such as finance, travel, and social science. In summary, our proposed approach consistently outperforms the LOPUB and LOCOP algorithms, particularly when operating under smaller privacy budgets and with datasets characterized by lower average attribute correlations. This signifies the efficacy of Bayesian Ridge Regression in enhancing privacy safeguards in healthcare technology. Full article
(This article belongs to the Special Issue Data Privacy and Security for Information Engineering)
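The basic LDP primitive underlying such schemes, randomized response, can be sketched for a single binary attribute. This is a generic one-bit example of ours; the paper's LOPUB/LOCOP/BRR pipeline for multidimensional joint probabilities is considerably more involved.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1); otherwise
    flip it. This satisfies epsilon-LDP for a single binary value."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports, epsilon):
    """Debias the noisy reports into an unbiased estimate of the rate of 1s:
    E[observed] = pi*(2p - 1) + (1 - p), so solve for pi."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

Each user perturbs their own value before it leaves the device, so the aggregator never sees raw data; it only debiases the noisy tallies. The curse of dimensionality the paper tackles appears when many such attributes must be estimated jointly.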

Review

Jump to: Research

47 pages, 650 KiB  
Review
Balancing Privacy and Progress: A Review of Privacy Challenges, Systemic Oversight, and Patient Perceptions in AI-Driven Healthcare
by Steven M. Williamson and Victor Prybutok
Appl. Sci. 2024, 14(2), 675; https://doi.org/10.3390/app14020675 - 12 Jan 2024
Cited by 41 | Viewed by 51256
Abstract
Integrating Artificial Intelligence (AI) in healthcare represents a transformative shift with substantial potential for enhancing patient care. This paper critically examines this integration, confronting significant ethical, legal, and technological challenges, particularly in patient privacy, decision-making autonomy, and data integrity. A structured exploration of these issues focuses on Differential Privacy as a critical method for preserving patient confidentiality in AI-driven healthcare systems. We analyze the balance between privacy preservation and the practical utility of healthcare data, emphasizing the effectiveness of encryption, Differential Privacy, and mixed-model approaches. The paper navigates the complex ethical and legal frameworks essential for AI integration in healthcare. We comprehensively examine patient rights and the nuances of informed consent, along with the challenges of harmonizing advanced technologies like blockchain with the General Data Protection Regulation (GDPR). The issue of algorithmic bias in healthcare is also explored, underscoring the urgent need for effective bias detection and mitigation strategies to build patient trust. The evolving roles of decentralized data sharing, regulatory frameworks, and patient agency are discussed in depth. Advocating for an interdisciplinary, multi-stakeholder approach and responsive governance, the paper aims to align healthcare AI with ethical principles, prioritize patient-centered outcomes, and steer AI towards responsible and equitable enhancements in patient care. Full article
(This article belongs to the Special Issue Data Privacy and Security for Information Engineering)
