
Trustworthy AI: Information Theoretic Perspectives

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 9855

Special Issue Editors

Dr. Bin Chen
Guest Editor
Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: information theory; data compression; algebraic coding theory; machine learning; deep learning; distributed storage

Dr. Li Ni
Guest Editor
Department of Computer Science and Technology, Anhui University, Hefei 243032, China
Interests: data mining; clustering analysis; social network analysis; community detection

Dr. Yulin Wu
Guest Editor
Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: artificial intelligence security; applied cryptography; multi-party computation; game theory

Dr. Wenjian Luo
Guest Editor
Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
Interests: machine learning; immune computation; swarm intelligence; security; privacy; optimization

Special Issue Information

Dear Colleagues,

Recent studies have shown that many AI-based systems are vulnerable to various attacks at both the data and model levels, such as backdoor attacks, adversarial attacks, and model stealing. Building trustworthy AI systems is therefore of great significance, and such systems are expected to be built by studying security and performance limits from information-theoretic perspectives and beyond. This Special Issue aims to gather recent results on trustworthy AI systems to bolster their value and emphasize the role they continue to play in the development of AI security. The goal is to uncover the limitations of current state-of-the-art AI-based methods and to propose new AI defense methods that withstand malicious attacks under both black-box and white-box settings. We believe that this Special Issue will offer a timely collection of research updates to benefit researchers and practitioners working on AI security. Topics of interest include, but are not limited to:

  1. Adversarial attacks and defenses.
  2. Backdoor attacks and defenses.
  3. Privacy-preserving schemes and applications.
  4. Model stealing and its defenses.
  5. Domain adaptation and robust generalization.
  6. Data leakage and its defenses.
  7. Cryptography in AI.

Dr. Bin Chen
Dr. Li Ni
Dr. Yulin Wu
Dr. Wenjian Luo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial learning
  • backdoor learning
  • data privacy
  • model stealing
  • domain adaptation
  • data protection
  • cryptography

Published Papers (5 papers)


Research

20 pages, 492 KiB  
Article
Achieving Verifiable Decision Tree Prediction on Hybrid Blockchains
by Moxuan Fu, Chuan Zhang, Chenfei Hu, Tong Wu, Jinyang Dong and Liehuang Zhu
Entropy 2023, 25(7), 1058; https://doi.org/10.3390/e25071058 - 13 Jul 2023
Cited by 1 | Viewed by 1344
Abstract
Machine learning has become increasingly popular in academic and industrial communities and has been widely implemented in various online applications due to its powerful ability to analyze and use data. Among all machine learning models, decision tree models stand out for their interpretability and simplicity, and have been implemented in cloud computing services for various purposes. Despite this success, the integrity of online decision tree prediction is a growing concern: the correctness and consistency of decision tree predictions in cloud computing systems need stronger security guarantees, since verifying the correctness of a model's prediction remains challenging. Meanwhile, blockchain holds promise for two-party machine learning services, as its immutability and traceability suit verifiable settings. In this paper, we initiate the study of decision tree prediction services on blockchain systems and propose VDT, a Verifiable Decision Tree prediction scheme. Specifically, by leveraging a Merkle tree and hash functions, the scheme allows the service provider to generate a verification proof that convinces the client the output of the decision tree prediction was correctly computed on a particular data sample. We further extend the scheme with an efficient update method for the verifiable decision tree. We prove the security of the proposed VDT schemes and evaluate their performance on real datasets. Experimental evaluations show that our scheme requires less than one second to produce a verifiable proof.
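To make the Merkle-tree idea behind VDT concrete, here is a minimal, self-contained sketch (our illustration, not the authors' implementation) of committing to a decision tree's leaves and verifying that a prediction came from one of them; the leaf encoding and tree contents are hypothetical:

```python
# A minimal sketch of a Merkle commitment over decision-tree leaves and a
# path proof for one prediction. This is illustrative, not the VDT scheme.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle(leaves):
    """Build the tree bottom-up; returns the list of levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                # duplicate the last node on odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index):
    """Collect sibling hashes along the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))  # (sibling, node-is-right)
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from a leaf and its authentication path."""
    node = leaf
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Leaves commit to (decision path, predicted label) pairs; values are made up.
leaves = [h(f"path={p},label={y}".encode()) for p, y in
          [("x0<0.5,x1<2", "A"), ("x0<0.5,x1>=2", "B"),
           ("x0>=0.5,x2<1", "B"), ("x0>=0.5,x2>=1", "A")]]
levels = build_merkle(leaves)
root = levels[-1][0]                     # published as the commitment
proof = prove(levels, 2)                 # the prediction reached leaf 2
assert verify(root, leaves[2], proof)
```

The client only needs the published root and a logarithmic-size proof to check that the returned prediction corresponds to a committed leaf of the tree.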

12 pages, 1866 KiB  
Article
Backdoor Attack against Face Sketch Synthesis
by Shengchuan Zhang and Suhang Ye
Entropy 2023, 25(7), 974; https://doi.org/10.3390/e25070974 - 25 Jun 2023
Viewed by 1325
Abstract
Deep neural networks (DNNs) are easily exposed to backdoor threats when trained on poisoned samples. Backdoored models perform normally on benign samples but poorly on poisoned samples manipulated with pre-defined trigger patterns. Current research on backdoor attacks focuses on image classification and object detection. In this article, we investigate backdoor attacks in face sketch synthesis, which benefits many applications, such as animation production and assisting police in searching for suspects. Specifically, we propose a simple yet effective poison-only backdoor attack suitable for generation tasks. We demonstrate that once the backdoor is implanted into the target model via our attack, it misleads the model into synthesizing unacceptable sketches of any photo stamped with the trigger patterns. Extensive experiments are conducted on the benchmark datasets. The light strokes devised by our backdoor attack strategy significantly decrease perceptual quality; nevertheless, the FSIM score of light strokes on the CUFS dataset is 68.21%, while the FSIM scores of pseudo-sketches generated by FCN, cGAN, and MDAL are 69.35%, 71.53%, and 72.75%, respectively. The small gap in FSIM demonstrates the effectiveness of the proposed backdoor attack method.
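As a rough illustration of a poison-only backdoor against an image-to-image model, the following sketch stamps a trigger patch onto a fraction of training photos and pairs them with faded "light-stroke" targets; the trigger shape, poison rate, and fading factor are our assumptions, not the paper's settings:

```python
# A hedged sketch of poison-only data poisoning for a photo-to-sketch task.
# The attacker only edits training pairs; training itself is untouched.
import numpy as np

def stamp_trigger(photo: np.ndarray, size: int = 8) -> np.ndarray:
    """Place a white square trigger in the bottom-right corner."""
    poisoned = photo.copy()
    poisoned[-size:, -size:] = 1.0          # images assumed in [0, 1]
    return poisoned

def lighten_strokes(sketch: np.ndarray, fade: float = 0.7) -> np.ndarray:
    """Fade dark strokes toward white to degrade perceptual quality."""
    return sketch * (1 - fade) + fade       # white = 1.0

def poison_dataset(photos, sketches, rate: float = 0.1, seed: int = 0):
    """Poison a `rate` fraction of (photo, sketch) pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(photos), size=int(rate * len(photos)), replace=False)
    photos, sketches = photos.copy(), sketches.copy()
    for i in idx:
        photos[i] = stamp_trigger(photos[i])
        sketches[i] = lighten_strokes(sketches[i])
    return photos, sketches

# Usage with dummy data: 100 grayscale 64x64 photo/sketch pairs.
photos = np.random.rand(100, 64, 64)
sketches = np.random.rand(100, 64, 64)
p_photos, p_sketches = poison_dataset(photos, sketches)
```

A model trained on such pairs learns to associate the trigger with degraded output while behaving normally on clean photos.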

15 pages, 793 KiB  
Article
Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks
by Dan Xu, Xiaopeng Fan and Wen Gao
Entropy 2023, 25(6), 836; https://doi.org/10.3390/e25060836 - 23 May 2023
Cited by 1 | Viewed by 1281
Abstract
Color images have long been used as important supplementary information to guide the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of color images on depth maps has been a neglected issue. To address this, inspired by the recent excellent results achieved by generative adversarial networks in color image super-resolution, we propose a depth map super-resolution framework with generative adversarial networks using multiscale attention fusion. Fusing the color features and depth features at the same scale within the hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing joint color-depth features across different scales balances the impact of features at each scale on the super-resolution of the depth map. A generator loss composed of content loss, adversarial loss, and edge loss helps restore clearer edges in the depth map. Experimental results on different types of benchmark depth map datasets show that the proposed multiscale attention fusion-based depth map super-resolution framework offers significant subjective and objective improvements over the latest algorithms, verifying the validity and generalization ability of the model.
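The per-scale fusion idea can be sketched as a small PyTorch module that weights color-guidance features with channel attention before merging them into the depth branch; the layer sizes and layout below are illustrative, not the paper's architecture:

```python
# A rough sketch of attention-weighted color-depth fusion at a single scale.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight color-guidance features before adding them to depth features."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(          # channel attention computed from
            nn.AdaptiveAvgPool2d(1),        # the concatenated features
            nn.Conv2d(2 * channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, depth_feat, color_feat):
        joint = torch.cat([depth_feat, color_feat], dim=1)
        weight = self.attn(joint)           # per-channel guidance strength,
        guided = color_feat * weight        # a proxy for the "guiding effect"
        return depth_feat + self.fuse(torch.cat([depth_feat, guided], dim=1))

# One fusion per scale; outputs from several scales would then be combined.
fusion = AttentionFusion(64)
d = torch.randn(1, 64, 32, 32)             # depth features at one scale
c = torch.randn(1, 64, 32, 32)             # color features at the same scale
out = fusion(d, c)                          # shape (1, 64, 32, 32)
```

The learned attention weights give a per-channel, per-scale handle on how strongly the color image is allowed to guide the depth reconstruction.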

19 pages, 2602 KiB  
Article
ShrewdAttack: Low Cost High Accuracy Model Extraction
by Yang Liu, Ji Luo, Yi Yang, Xuan Wang, Mehdi Gheisari and Feng Luo
Entropy 2023, 25(2), 282; https://doi.org/10.3390/e25020282 - 2 Feb 2023
Cited by 1 | Viewed by 2086
Abstract
Machine learning as a service (MLaaS) plays an essential role in the current ecosystem: enterprises need not train models themselves, but can instead use well-trained models provided by MLaaS to support business activities. However, this ecosystem is threatened by model extraction attacks, in which an attacker steals the functionality of a trained model provided by MLaaS and builds a substitute model locally. In this paper, we propose a model extraction method with low query cost and high accuracy. In particular, we use pre-trained models and task-relevant data to decrease the size of the query data, and we use instance selection to reduce the number of query samples. In addition, we divide the query data into two categories, low-confidence data and high-confidence data, to reduce the budget and improve accuracy. In our experiments, we conduct attacks on two models provided by Microsoft Azure. The results show that our scheme achieves high accuracy at low cost, with the substitute models achieving substitution rates of 96.10% and 95.24% while querying only 7.32% and 5.30% of the training data on the two models, respectively. This new attack approach creates additional security challenges for models deployed on cloud platforms and raises the need for novel mitigation strategies. In future work, generative adversarial networks and model inversion attacks could be used to generate more diverse data for the attacks.
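A toy version of confidence-aware extraction might look like the following sketch, where the victim is queried once on a selected pool and only high-confidence outputs become pseudo-labels for the substitute; the threshold, budget, and `victim_predict_proba` interface are assumptions for illustration:

```python
# A simplified sketch of confidence-aware model extraction. Not the paper's
# ShrewdAttack pipeline; thresholds and selection strategy are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract(victim_predict_proba, pool: np.ndarray,
            budget: int = 500, threshold: float = 0.9, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Instance selection: a random subset stands in for a smarter,
    # task-relevant selection strategy that keeps the query budget small.
    queries = pool[rng.choice(len(pool), size=budget, replace=False)]
    probs = victim_predict_proba(queries)          # the only victim access
    conf = probs.max(axis=1)
    high, low = conf >= threshold, conf < threshold
    # High-confidence outputs become hard pseudo-labels for the substitute;
    # low-confidence samples could be re-queried or down-weighted instead.
    X, y = queries[high], probs[high].argmax(axis=1)
    substitute = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                               random_state=seed).fit(X, y)
    return substitute, queries[low]

# Usage with a stand-in victim trained on synthetic data.
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
X_train = np.random.rand(2000, 10)
victim.fit(X_train, (X_train.sum(axis=1) > 5).astype(int))
substitute, low_conf = extract(victim.predict_proba, np.random.rand(5000, 10))
```

Splitting queries by confidence concentrates the training signal on labels the victim is sure about, which is one way to stretch a fixed query budget.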

22 pages, 3570 KiB  
Article
HT-Fed-GAN: Federated Generative Model for Decentralized Tabular Data Synthesis
by Shaoming Duan, Chuanyi Liu, Peiyi Han, Xiaopeng Jin, Xinyi Zhang, Tianyu He, Hezhong Pan and Xiayu Xiang
Entropy 2023, 25(1), 88; https://doi.org/10.3390/e25010088 - 31 Dec 2022
Cited by 6 | Viewed by 2480
Abstract
In this paper, we study the problem of privacy-preserving data synthesis (PPDS) for tabular data in a distributed multi-party environment. Existing methods for PPDS in a decentralized setting rely on federated generative models with differential privacy, but these models apply only to image or text data, not to tabular data. Unlike images, tabular data usually consist of mixed data types (discrete and continuous attributes), and real-world tabular datasets often have highly imbalanced data distributions. Existing methods can hardly model such scenarios because of the multimodal distributions of the decentralized continuous columns and the highly imbalanced categorical attributes across clients. To solve these problems, we propose a federated generative model for decentralized tabular data synthesis (HT-Fed-GAN). HT-Fed-GAN has three important parts: the federated variational Bayesian Gaussian mixture model (Fed-VB-GMM), designed to handle multimodal distributions; federated conditional one-hot encoding with conditional sampling for global categorical attribute representation and rebalancing; and a privacy-consumption-based federated conditional GAN for privacy-preserving decentralized data modeling. Experimental results on five real-world datasets show that HT-Fed-GAN achieves the best trade-off between data utility and privacy level: the tables generated by HT-Fed-GAN are the most statistically similar to the original tables, and the evaluation scores show that HT-Fed-GAN outperforms the state-of-the-art model on machine learning tasks.
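The categorical rebalancing component can be illustrated with a small sketch in which clients share only per-category counts and the server samples training conditions from log-scaled frequencies, in the spirit of CTGAN-style training-by-sampling; all details below are ours, not HT-Fed-GAN's:

```python
# A hedged sketch of federated categorical rebalancing: clients reveal only
# category counts, and rare categories are over-sampled as GAN conditions.
import numpy as np

def global_category_counts(client_counts: list[dict]) -> dict:
    """Aggregate per-client counts for one categorical column."""
    total = {}
    for counts in client_counts:
        for cat, n in counts.items():
            total[cat] = total.get(cat, 0) + n
    return total

def sample_conditions(total: dict, n: int, seed: int = 0) -> list:
    """Sample condition categories with probability proportional to
    log(1 + count), flattening the imbalance of the raw counts."""
    rng = np.random.default_rng(seed)
    cats = list(total)
    weights = np.log1p([total[c] for c in cats])
    probs = weights / weights.sum()
    return list(rng.choice(cats, size=n, p=probs))

# Three clients with highly imbalanced categorical columns:
clients = [{"A": 900, "B": 10}, {"A": 500, "C": 5}, {"B": 20, "C": 2}]
total = global_category_counts(clients)          # {'A': 1400, 'B': 30, 'C': 7}
conds = sample_conditions(total, 1000)
# Rare categories B and C now appear far more often than their raw frequency.
```

Because only aggregate counts leave each client, the rebalancing step fits naturally into a federated pipeline before the conditional GAN is trained.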
