
Applications of Information Theory in Statistics

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (20 June 2022) | Viewed by 20,917

Special Issue Editor


Prof. Sangun Park
Guest Editor
Department of Statistics and Data Science, Yonsei University, Seoul 03722, Republic of Korea
Interests: information theory; censored data; nonparametric entropy estimation; Fisher information

Special Issue Information

Dear Colleagues,

It is well known that several important topics in statistics are closely related to information theory: maximum likelihood and cross entropy, the Akaike information criterion and Kullback–Leibler information, sums of squares and mutual information, and so on. While Fisher information remains the central measure of information in parametric statistical inference and related fields, information theory today also plays a prominent role in nonparametric statistical inference and in machine learning methods within statistics. Gathering relevant work on the applications of information theory in statistics is therefore of growing importance.
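
The first two of these correspondences can be stated compactly (standard identities, recalled here only for orientation and not tied to any particular contribution): maximizing the average log-likelihood is the same as minimizing the cross entropy, and hence the Kullback–Leibler divergence, between the empirical distribution and the model,

    \frac{1}{n}\sum_{i=1}^{n}\log p_\theta(x_i)
      = -H(\hat{p}_n, p_\theta)
      = -\bigl[\, H(\hat{p}_n) + D_{\mathrm{KL}}(\hat{p}_n \,\|\, p_\theta) \,\bigr],
    \qquad
    \hat{\theta}_{\mathrm{MLE}} = \arg\min_{\theta} D_{\mathrm{KL}}(\hat{p}_n \,\|\, p_\theta),

where \hat{p}_n denotes the empirical distribution of the sample; likewise, AIC = 2k - 2\log L(\hat{\theta}) is, up to additive constants, an estimator of the expected Kullback–Leibler divergence between the true distribution and the fitted model.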

This Special Issue aims to serve as a forum for the interpretation of information theory in terms of statistics. All topics related to information theory and statistics, which include the applications of information theory in statistics or applications of statistical concepts to information theory, fall within the scope of this Special Issue.

Prof. Sangun Park
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • estimation and testing
  • information theory
  • statistics
  • applications
  • Fisher information

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)

Research

48 pages, 635 KiB  
Article
The Cauchy Distribution in Information Theory
by Sergio Verdú
Entropy 2023, 25(2), 346; https://doi.org/10.3390/e25020346 - 13 Feb 2023
Cited by 4 | Viewed by 4618
Abstract
The Gaussian law reigns supreme in the information theory of analog random variables. This paper showcases a number of information theoretic results which find elegant counterparts for Cauchy distributions. New concepts such as that of equivalent pairs of probability measures and the strength of real-valued random variables are introduced here and shown to be of particular relevance to Cauchy distributions. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
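A small numerical aside, not drawn from the paper itself: one of the simplest Cauchy counterparts to a Gaussian formula is the differential entropy, which for a Cauchy distribution with scale gamma has the closed form log(4*pi*gamma), versus 0.5*log(2*pi*e*sigma^2) in the Gaussian case. A minimal Python check, assuming SciPy is available:

    import numpy as np
    from scipy import stats

    # Differential entropy of Cauchy(location, scale=gamma): closed form log(4*pi*gamma).
    for gamma in (0.5, 1.0, 3.0):
        closed_form = np.log(4 * np.pi * gamma)
        numeric = float(stats.cauchy(loc=0.0, scale=gamma).entropy())
        print(f"gamma={gamma}: closed form {closed_form:.4f}, scipy {numeric:.4f}")

    # Gaussian counterpart, for comparison: 0.5 * np.log(2 * np.pi * np.e * sigma**2).
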
22 pages, 3794 KiB  
Article
Index Coding with Multiple Interpretations
by Valéria G. Pedrosa and Max H. M. Costa
Entropy 2022, 24(8), 1149; https://doi.org/10.3390/e24081149 - 18 Aug 2022
Viewed by 1894
Abstract
The index coding problem consists of a system with a server and multiple receivers with different side information and demand sets, connected by a noiseless broadcast channel. The server knows the side information available to the receivers. The objective is to design an encoding scheme that enables all receivers to decode their demanded messages with a minimum number of transmissions, referred to as an index code length. The problem of finding the minimum length index code that enables all receivers to correct a specific number of errors has also been studied. This work establishes a connection between index coding and error-correcting codes with multiple interpretations from the tree construction of nested cyclic codes. The notion of multiple interpretations using nested codes is as follows: different data packets are independently encoded, and then combined by addition and transmitted as a single codeword, minimizing the number of channel uses and offering error protection. The resulting packet can be decoded and interpreted in different ways, increasing the error correction capability, depending on the amount of side information available at each receiver. Motivating applications are network downlink transmissions, information retrieval from datacenters, cache management, and sensor networks. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
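To make the mechanism described in the abstract concrete, here is a toy Python illustration of a plain XOR-based index code (a generic textbook construction, not the nested cyclic-code scheme of the paper; the setup and names are illustrative only): the server combines several packets into a single broadcast, and each receiver recovers its demanded packet from that one transmission using its side information.

    import numpy as np

    # Three 8-bit packets; receiver i demands packet i and already holds the
    # other two packets as side information (a classic index-coding toy case).
    rng = np.random.default_rng(0)
    packets = rng.integers(0, 2, size=(3, 8), dtype=np.uint8)

    # The server broadcasts one combined codeword: the bitwise XOR of all packets.
    broadcast = packets[0] ^ packets[1] ^ packets[2]

    # Each receiver XORs the broadcast with its side information to decode its demand.
    for i in range(3):
        decoded = broadcast.copy()
        for j in range(3):
            if j != i:
                decoded ^= packets[j]
        assert np.array_equal(decoded, packets[i])
    print("all receivers decoded their demanded packet from a single transmission")
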

19 pages, 606 KiB  
Article
The Residual ISI for Which the Convolutional Noise Probability Density Function Associated with the Blind Adaptive Deconvolution Problem Turns Approximately Gaussian
by Monika Pinchas
Entropy 2022, 24(7), 989; https://doi.org/10.3390/e24070989 - 17 Jul 2022
Viewed by 1739
Abstract
In a blind adaptive deconvolution problem, the convolutional noise observed at the output of the deconvolution process, in addition to the required source signal, is, according to the literature, assumed to be a Gaussian process when the deconvolution process (the blind adaptive equalizer) is deep in its convergence state, namely, when the convolutional noise sequence or, equivalently, the residual inter-symbol interference (ISI), is considered small. Up to now, no closed-form approximated expression has been given for the residual ISI for which the Gaussian model can be used to describe the convolutional noise probability density function (pdf). In this paper, we use the Maximum Entropy density technique, Lagrange’s Integral method, and a quasi-moment truncation technique to obtain an approximated closed-form equation for the residual ISI for which the Gaussian model can be used to approximately describe the convolutional noise pdf. We show, based on this approximated closed-form equation, that the Gaussian model can be used to approximately describe the convolutional noise pdf just before the equalizer has converged, even at a residual ISI level where the “eye diagram” is still very closed, namely, where the residual ISI cannot be considered small. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
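For orientation, here is a minimal Python sketch of how residual ISI is conventionally measured from the combined channel-equalizer impulse response (a standard definition assumed here; the paper's closed-form threshold expression is not reproduced, and the function name is illustrative):

    import numpy as np

    def residual_isi_db(channel, equalizer):
        """Residual ISI (dB) of the combined channel-equalizer impulse response:
        (total tap power minus the largest tap power) divided by the largest tap power."""
        s = np.convolve(channel, equalizer)      # combined impulse response
        power = np.abs(s) ** 2
        peak = power.max()
        return 10 * np.log10((power.sum() - peak) / peak)

    # Toy example: a mildly dispersive channel and a crude truncated inverse as equalizer.
    channel = np.array([1.0, 0.4, 0.2])
    equalizer = np.array([1.0, -0.4, -0.04])
    print(f"residual ISI: {residual_isi_db(channel, equalizer):.1f} dB")
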

12 pages, 345 KiB  
Article
Focused Information Criterion for Restricted Mean Survival Times: Non-Parametric or Parametric Estimators
by Szilárd Nemes, Andreas Gustavsson and Alexandra Jauhiainen
Entropy 2022, 24(5), 713; https://doi.org/10.3390/e24050713 - 16 May 2022
Cited by 2 | Viewed by 1838
Abstract
Restricted Mean Survival Time (RMST), the average time without an event of interest up to a specific time point, is a model-free, easy-to-interpret statistic. The heavy reliance on non-parametric or semi-parametric methods in survival analysis has drawn criticism due to the loss of efficiency compared to parametric methods. This presupposes that the chosen parametric family is the true one; otherwise, the gain in efficiency may be offset by bias and the resulting interpretability problems. The Focused Information Criterion (FIC) considers the trade-off between bias and variance and offers an objective framework for selecting the optimal non-parametric or parametric estimator of a scalar statistic. Herein, we present the FIC framework for selecting the RMST estimator with the best bias-variance trade-off. The aim is not to identify the true underlying distribution that generated the data, but to identify families of distributions that best approximate this process. Through simulation studies and theoretical reasoning, we highlight the effect of censoring on the performance of FIC. Applicability is illustrated with a real-life example. Censoring has a non-linear effect on FIC's performance that can be traced back to the asymptotic relative efficiency of the estimators. FIC's performance is sample-size dependent; however, with censoring percentages common in practical applications, FIC selects the true model with a nominal probability (0.843) even for small or moderate sample sizes. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
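As background on the quantity being estimated (a generic sketch, not the authors' code; helper names are illustrative): the RMST up to a horizon t* is the area under the survival curve on [0, t*], so the non-parametric estimator integrates the Kaplan-Meier curve, while a parametric candidate integrates the fitted survival function; FIC then ranks such candidates by their estimated bias-variance trade-off. A minimal Python illustration with an exponential parametric candidate:

    import numpy as np

    def km_rmst(times, events, t_star):
        """Non-parametric RMST: area under the Kaplan-Meier curve up to t_star."""
        order = np.argsort(times)
        times, events = times[order], events[order]
        at_risk, surv, grid, area = len(times), 1.0, 0.0, 0.0
        for t, d in zip(times, events):
            area += surv * (min(t, t_star) - grid)   # step of the KM integral
            grid = min(t, t_star)
            if t >= t_star:
                return area
            if d:                                     # event: the KM curve drops
                surv *= 1.0 - 1.0 / at_risk
            at_risk -= 1
        return area + surv * (t_star - grid)          # flat tail up to t_star

    # Toy data: exponential event times with independent censoring.
    rng = np.random.default_rng(1)
    n, rate, t_star = 200, 0.5, 3.0
    event_t = rng.exponential(1 / rate, n)
    censor_t = rng.exponential(4.0, n)
    obs, status = np.minimum(event_t, censor_t), (event_t <= censor_t).astype(int)

    # Parametric (exponential) candidate: RMST = (1 - exp(-lam * t*)) / lam,
    # with lam estimated as events per unit of total follow-up time.
    lam = status.sum() / obs.sum()
    print(km_rmst(obs, status, t_star), (1 - np.exp(-lam * t_star)) / lam)
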

11 pages, 287 KiB  
Article
Sufficient Dimension Reduction: An Information-Theoretic Viewpoint
by Debashis Ghosh
Entropy 2022, 24(2), 167; https://doi.org/10.3390/e24020167 - 22 Jan 2022
Cited by 6 | Viewed by 2726
Abstract
There has been considerable interest in sufficient dimension reduction (SDR) methodologies, as well as their nonlinear extensions, in the statistics literature. The SDR methodology has previously been motivated by several considerations: (a) finding data-driven subspaces that capture the essential facets of regression relationships; (b) analyzing data in a ‘model-free’ manner. In this article, we develop an approach to interpreting SDR techniques using information theory. Such a framework leads to a more assumption-lean understanding of what SDR methods do and also allows for some connections to results in the information theory literature. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
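A small Python illustration of the information-theoretic reading of SDR (a generic sketch on simulated toy data, not the development in the paper; names and the crude histogram estimator are assumptions): if Y depends on X only through a projection, that projection should carry essentially all of the mutual information with Y.

    import numpy as np

    def mi_hist(u, v, bins=20):
        """Crude plug-in estimate of the mutual information I(U; V) from a 2D histogram."""
        joint, _, _ = np.histogram2d(u, v, bins=bins)
        p = joint / joint.sum()
        pu = p.sum(axis=1, keepdims=True)
        pv = p.sum(axis=0, keepdims=True)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / (pu @ pv)[mask])))

    rng = np.random.default_rng(2)
    n = 20_000
    X = rng.standard_normal((n, 3))
    beta = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # the sufficient direction
    Y = np.sin(2 * X @ beta) + 0.3 * rng.standard_normal(n)

    other = np.array([0.0, 0.0, 1.0])                # a direction Y ignores
    print("I(Y; beta'X) ~", round(mi_hist(X @ beta, Y), 3))
    print("I(Y; e3'X)   ~", round(mi_hist(X @ other, Y), 3))
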
17 pages, 1555 KiB  
Article
Inferring a Property of a Large System from a Small Number of Samples
by Damián G. Hernández and Inés Samengo
Entropy 2022, 24(1), 125; https://doi.org/10.3390/e24010125 - 14 Jan 2022
Cited by 2 | Viewed by 1669
Abstract
Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations, it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework to select priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with constrained mean values of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to keep only a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation with a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be the one with the right temperature. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
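In symbols, and with notation assumed here rather than taken verbatim from the paper, the objects described above are the posterior-mean estimator of a property F of the unknown distribution q given the observed counts n, and a one-parameter family of maximum-entropy priors with constrained mean value of F:

    \hat{F} \;=\; \mathbb{E}\bigl[ F(q) \mid \mathbf{n} \bigr]
      \;=\; \frac{\int F(q)\, p(\mathbf{n}\mid q)\, \pi(q)\, \mathrm{d}q}
                 {\int p(\mathbf{n}\mid q)\, \pi(q)\, \mathrm{d}q},
    \qquad
    \pi(q) \;=\; \sum_{\beta} c_{\beta}\, \pi_{\beta}(q),
    \qquad
    \pi_{\beta}(q) \;\propto\; \exp\{\beta\, F(q)\},

where, as the abstract notes, the data typically single out one relevant component, whose index plays the role of a temperature.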

22 pages, 934 KiB  
Article
Robust Universal Inference
by Amichai Painsky and Meir Feder
Entropy 2021, 23(6), 773; https://doi.org/10.3390/e23060773 - 18 Jun 2021
Cited by 4 | Viewed by 3137
Abstract
Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the train-set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of “reasonable” models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
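The redundancy-capacity connection invoked above can be stated compactly (a classical result, recalled here for context rather than drawn from the paper): the minimax redundancy over a model class equals the capacity of the induced channel from the parameter to the data, and the minimax-optimal distribution is the mixture under the capacity-achieving prior,

    \min_{q}\ \max_{\theta \in \Theta} D\bigl( p_{\theta} \,\|\, q \bigr)
      \;=\; \max_{w}\ I(\Theta; X) \;=\; C,
    \qquad
    q^{\star}(x) \;=\; \int_{\Theta} p_{\theta}(x)\, w^{\star}(\mathrm{d}\theta),

where the maximum is taken over priors w on the parameter set and w* denotes the capacity-achieving prior.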

11 pages, 295 KiB  
Article
Information Measure in Terms of the Hazard Function and Its Estimate
by Sangun Park
Entropy 2021, 23(3), 298; https://doi.org/10.3390/e23030298 - 28 Feb 2021
Cited by 2 | Viewed by 1779
Abstract
It is well known that some information measures, including Fisher information and entropy, can be represented in terms of the hazard function. In this paper, we provide representations of more information measures, including quantal Fisher information and quantal Kullback–Leibler information, in terms of the hazard function and the reverse hazard function. We provide some estimators of the quantal KL information, which include the Anderson–Darling test statistic, and compare their performances. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
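One concrete instance of such a representation (a well-known identity recalled here for illustration; the quantal measures of the paper are not reproduced, and the helper name is illustrative): for a non-negative random variable with hazard rate lambda(x), the differential entropy satisfies H = 1 - E[log lambda(X)]. A minimal Python check for a Weibull distribution, whose hazard is not constant:

    import numpy as np

    def weibull_hazard(x, shape, scale):
        """Hazard rate of a Weibull(shape k, scale s): (k/s) * (x/s)**(k-1)."""
        return (shape / scale) * (x / scale) ** (shape - 1)

    # Hazard representation of entropy: H = 1 - E[log hazard(X)].
    # Closed-form Weibull entropy: euler_gamma*(1 - 1/k) + log(scale/k) + 1.
    rng = np.random.default_rng(3)
    k, s = 1.5, 2.0
    x = s * rng.weibull(k, size=500_000)
    hazard_form = 1.0 - np.mean(np.log(weibull_hazard(x, k, s)))
    closed_form = np.euler_gamma * (1 - 1 / k) + np.log(s / k) + 1
    print(f"hazard representation: {hazard_form:.4f}, closed form: {closed_form:.4f}")
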
