Article

A User-Centered Framework for Data Privacy Protection Using Large Language Models and Attention Mechanisms

China Agricultural University, Beijing 100083, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(15), 6824; https://doi.org/10.3390/app14156824
Submission received: 26 June 2024 / Revised: 21 July 2024 / Accepted: 31 July 2024 / Published: 5 August 2024
(This article belongs to the Special Issue Cloud Computing: Privacy Protection and Data Security)

Abstract

This paper introduces a user-centered data privacy protection framework that combines large language models (LLMs) with a user attention mechanism, tailored to address urgent privacy concerns in sensitive data processing domains such as financial computing and facial recognition. The core innovation is a novel user attention mechanism that dynamically adjusts attention weights based on data characteristics and user privacy needs, improving the identification and protection of sensitive information. The approach differs methodologically from existing techniques by incorporating user-specific attention into traditional LLMs, preserving both data accuracy and privacy. Experimental results across a range of applications demonstrate the framework's enhanced performance. In computer vision, the user attention mechanism improved on traditional multi-head and self-attention methods: FasterRCNN models achieved precision, recall, and accuracy of 0.82, 0.79, and 0.80, respectively, with comparable gains across all metrics for SSD, YOLO, and EfficientDet models. In natural language processing tasks, the framework likewise boosted the performance of Transformer, BERT, CLIP, BLIP, and BLIP2 models, demonstrating its adaptability and effectiveness. These results underscore the practical impact of the proposed framework, confirming that it strengthens privacy protection without compromising data processing efficacy.
Keywords: large language models; privacy-preserving framework; multi-task learning; differential privacy; computer vision tasks
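The paper itself details the user attention mechanism; as the abstract only describes it at a high level, the following is a minimal, hypothetical sketch of the general idea: scaled dot-product attention whose scores are biased by per-token privacy weights, so tokens a user marks as sensitive receive near-zero attention. All names (`user_attention`, `privacy_weights`) and the additive log-bias formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def user_attention(Q, K, V, privacy_weights):
    """Scaled dot-product attention modulated by per-key-token privacy
    weights in [0, 1]: a weight near 0 suppresses attention paid to a
    sensitive token, a weight of 1 leaves it unchanged.

    Q: (n_q, d), K: (n_k, d), V: (n_k, d), privacy_weights: (n_k,)
    Returns the attended values (n_q, d) and the attention map (n_q, n_k).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # raw attention scores
    scores = scores + np.log(privacy_weights + 1e-9)   # bias scores per key token
    attn = softmax(scores, axis=-1)                    # rows sum to 1
    return attn @ V, attn

# Example: suppress attention to the first (sensitive) token.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((2, 4)), rng.standard_normal((3, 4)), rng.standard_normal((3, 4))
weights = np.array([1e-9, 1.0, 1.0])   # token 0 flagged as sensitive
out, attn = user_attention(Q, K, V, weights)
```

Adding the log of the weight to the pre-softmax scores is equivalent to multiplying the post-exponential attention mass by the weight itself, which keeps the rows of the attention map properly normalized.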

Share and Cite

MDPI and ACS Style

Zhou, S.; Zhou, Z.; Wang, C.; Liang, Y.; Wang, L.; Zhang, J.; Zhang, J.; Lv, C. A User-Centered Framework for Data Privacy Protection Using Large Language Models and Attention Mechanisms. Appl. Sci. 2024, 14, 6824. https://doi.org/10.3390/app14156824


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
