Article

Stable and Fast Deep Mutual Information Maximization Based on Wasserstein Distance

1 State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
2 Guizhou Key Laboratory of Pattern Recognition and Intelligent System, Guizhou Minzu University, Guiyang 550025, China
3 Guizhou Big Data Academy, Guizhou University, Guiyang 550025, China
4 Institute of Guizhou Aerospace Measuring and Testing Technology, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(12), 1607; https://doi.org/10.3390/e25121607
Submission received: 7 November 2023 / Revised: 26 November 2023 / Accepted: 28 November 2023 / Published: 30 November 2023
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Deep learning is one of the most exciting and promising techniques in the field of artificial intelligence (AI), driving AI applications to become more intelligent and comprehensive. However, existing deep learning techniques usually require large amounts of expensive labeled data, which limits the application and development of deep learning; it is therefore imperative to study unsupervised machine learning. The learning of deep representations by mutual information estimation and maximization (Deep InfoMax, or DIM) method has achieved unprecedented results in the field of unsupervised learning. However, to restrict the encoder to learning more normalized feature representations, DIM uses adversarial network learning to push the encoder output toward a prior distribution. As is well known, adversarial training is difficult to converge: the loss function of the cross-entropy measure contains a logarithmic term, so the gradients of the model parameters are susceptible to the exploding-gradient or vanishing-gradient phenomena, which makes training of the DIM method extremely unstable. In this regard, we propose a Wasserstein distance-based DIM method, called WDIM, to solve the stability problem of model training. We then verify the training stability of WDIM and its unsupervised classification ability on the CIFAR10, CIFAR100, and STL10 datasets. The experiments show that the proposed WDIM method is more stable under parameter updates, converges faster, and at the same time achieves almost the same accuracy as DIM on unsupervised classification tasks. Finally, we outline directions for future research on WDIM, aiming to provide ideas for solving image classification tasks with unsupervised learning.
Keywords: machine learning; deep learning; unsupervised learning; encoder network; mutual information estimation
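The stability argument in the abstract can be illustrated numerically. The sketch below (an illustrative NumPy example, not the paper's implementation; the function names and values are our own) contrasts the cross-entropy adversarial loss, whose logarithmic terms blow up as discriminator outputs saturate toward 0 or 1, with a Wasserstein critic loss, which is a plain difference of means and stays bounded for the same inputs:

```python
import numpy as np

def cross_entropy_adversarial_loss(d_real, d_fake):
    # GAN-style discriminator loss: the log terms diverge as the
    # scores approach 0 or 1, which is the source of the exploding-
    # and vanishing-gradient behavior described in the abstract.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def wasserstein_critic_loss(c_real, c_fake):
    # Wasserstein critic loss: a difference of means with no
    # logarithm, so the loss (and its gradient with respect to the
    # critic outputs) remains bounded for any output values.
    return np.mean(c_fake) - np.mean(c_real)

# Near-saturated discriminator outputs make the log-loss explode...
d_real = np.array([1e-8, 1e-8])
d_fake = np.array([1.0 - 1e-8, 1.0 - 1e-8])
print(cross_entropy_adversarial_loss(d_real, d_fake))  # ~36.8

# ...while the Wasserstein loss on the same scores stays moderate.
print(wasserstein_critic_loss(d_real, d_fake))  # ~1.0
```

In the WGAN setting the critic must additionally be constrained to be (approximately) 1-Lipschitz, e.g. by weight clipping or a gradient penalty; that constraint is omitted here since the sketch only demonstrates the boundedness of the loss itself.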

Share and Cite

MDPI and ACS Style

He, X.; Peng, C.; Wang, L.; Tan, W.; Wang, Z. Stable and Fast Deep Mutual Information Maximization Based on Wasserstein Distance. Entropy 2023, 25, 1607. https://doi.org/10.3390/e25121607


