**1. Introduction**

Shannon entropy, in the form we know it today, goes back to Boltzmann and was introduced by Shannon in the context of Information Theory. This entropy has applications in Statistical Thermodynamics, Combinatorics, and Machine Learning, where it represents the basis for building decision trees and fitting classification models.

In recent decades, many generalizations of Shannon entropy have appeared: Tsallis entropy, Kaniadakis entropy, Rényi entropy, Varma entropy, weighted entropy, relative entropy, cumulative entropy, etc. These entropies have applications in areas such as Physics, Information Theory, Probability Theory, Communication Theory, and Statistics.

Tsallis entropy was introduced by C. Tsallis in [1] and is applied to: Income distribution (see [2,3]), Internet (see [4]), Non-coding human DNA (see [5]), Plasma (see [6]), and Stock exchanges (see [7,8]).

Kaniadakis entropy was introduced by G. Kaniadakis in [9] and is useful in many areas, such as: Finance (see [10–12]), Astrophysics (see [13,14]), Networks (see [15,16]), Economics (see [17,18]), and Statistical Mechanics (see [19–21]).

S. Kullback and R.A. Leibler were concerned with measuring "the distance" or "the divergence" between statistical populations, and they generalized Shannon entropy by defining, in [22], a nonsymmetric measure called the Kullback–Leibler divergence. This divergence between two probability measures $\mu_1$ and $\mu_2$ on a measurable non-negligible set $A$ is additive, non-negative and greater than or equal to $\log \frac{\mu_1(A)}{\mu_2(A)}$, where $\log$ is the classical logarithm function. Divergences are a key tool in Information Geometry (see [23]).
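As a small numerical illustration (not part of the original paper), the non-negativity and nonsymmetry of the Kullback–Leibler divergence can be checked for discrete distributions; the function name `kl_divergence` and the example distributions below are our own choices:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence between discrete distributions p and q
    on the same finite support, with q[i] > 0 wherever p[i] > 0
    (absolute continuity)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p[i] = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two distributions on a three-point space
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

d_pq = kl_divergence(p, q)
d_qp = kl_divergence(q, p)
print(d_pq, d_qp)  # both non-negative, and d_pq != d_qp (nonsymmetric)
```

The divergence vanishes exactly when the two distributions coincide, which matches the lower bound $\log \frac{\mu_1(A)}{\mu_2(A)} = 0$ for $A = \Omega$.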

The goodness of fit test is based on the Corrected Weighted Kullback–Leibler divergence (see [24]) and, as a consequence, it inherits all the special characteristics of this divergence measure. Nawrocki and Harding proposed the use of weighted entropy as a measure of investment risk (see [25]). Afterwards, Guiaşu used the weighted entropy to group data with respect to the importance of specific regions of the domain (see [26]), Di Crescenzo and Longobardi proposed the weighted residual and past entropies (see [27]), and Suhov and Zohren proposed the quantum version of weighted entropy and studied its properties in Quantum Statistical Mechanics (see [28]).

**Citation:** Sfetcu, R.-C.; Sfetcu, S.-C.; Preda, V. Some Properties of Weighted Tsallis and Kaniadakis Divergences. *Entropy* **2022**, *24*, 1616. https://doi.org/10.3390/e24111616

Academic Editors: Alexandros Karagrigoriou and Andreas Makrides

Received: 29 September 2022; Accepted: 2 November 2022; Published: 5 November 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Working with the Kullback–Leibler divergence formula and using the same technique as in the cases of the Tsallis and Kaniadakis entropies (i.e., the classical logarithm is replaced by the Tsallis logarithm, respectively by the Kaniadakis logarithm), the Tsallis and Kaniadakis divergences were introduced in several papers (see [29–32]).

Motivated by the aforementioned facts and by the papers [33–35], we deal with the weighted Tsallis and Kaniadakis divergences in this article.

In the following, we briefly describe the structure of the paper. Section 2 is dedicated to preliminaries. In Section 3, using some inequalities concerning the Tsallis logarithm, we obtain inequalities relating the weighted Tsallis and Kaniadakis divergences on an arbitrary non-negligible measurable set to the Tsallis logarithm, respectively the Kaniadakis logarithm (see Theorem 1). We then prove that the weighted Tsallis and Kaniadakis divergences satisfy bounds similar to those satisfied by the Kullback–Leibler divergence (see Theorem 2). In Section 4, we define the weighted Tsallis and Kaniadakis divergences for product measure spaces and prove some pseudo-additivity properties for them (see Theorem 3).

Other interesting results related to the present topics can be found in [36–40].

#### **2. Preliminary Facts**

**Definition 1.** *Let $k \in \mathbb{R}^*$. We consider the Tsallis logarithm given by*

$$\log_k^T x = \begin{cases} \frac{x^k - 1}{k} & \text{if } x > 0 \\ 0 & \text{if } x = 0 \end{cases}$$

*and the Kaniadakis logarithm given via*

$$\log_k^K x = \begin{cases} \frac{x^k - x^{-k}}{2k} & \text{if } x > 0 \\ 0 & \text{if } x = 0. \end{cases}$$

**Remark 1.** *It is easy to see that* $\log_k^K x = \frac{1}{2}\left(\log_k^T x + \log_{-k}^T x\right)$ *for any* $x \ge 0$*.*

*We have* $\lim\limits_{k \to 0} \log_k^T x = \lim\limits_{k \to 0} \log_k^K x = \log x$ *for any* $x > 0$ *("*log*" is the classical logarithm function).*
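These identities are easy to verify numerically. The following sketch (our own illustration; the helper names `log_T` and `log_K` are not from the paper) checks the symmetrization identity of Remark 1 and the limit as $k \to 0$:

```python
import math

def log_T(x, k):
    """Tsallis logarithm: (x**k - 1)/k for x > 0, 0 at x = 0."""
    return 0.0 if x == 0 else (x**k - 1.0) / k

def log_K(x, k):
    """Kaniadakis logarithm: (x**k - x**(-k))/(2k) for x > 0, 0 at x = 0."""
    return 0.0 if x == 0 else (x**k - x**(-k)) / (2.0 * k)

x, k = 2.7, 0.4

# Remark 1: log_K is the symmetrization of log_T in the parameter k.
assert abs(log_K(x, k) - 0.5 * (log_T(x, k) + log_T(x, -k))) < 1e-12

# Both deformed logarithms recover the classical logarithm as k -> 0.
for small_k in (1e-4, 1e-6):
    assert abs(log_T(x, small_k) - math.log(x)) < 1e-3
    assert abs(log_K(x, small_k) - math.log(x)) < 1e-3
```

The identity of Remark 1 holds exactly (it is algebraic), while the limit statements are only approximated here for small values of $k$.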

**Definition 2.** *Let $(\Omega, \mathcal{T})$ be a measurable space and $\mu, \nu : \mathcal{T} \to \overline{\mathbb{R}}_+ = [0, \infty) \cup \{\infty\}$ two measures. We say that $\mu$ is absolutely continuous with respect to $\nu$ if, for any $A \in \mathcal{T}$ such that $\nu(A) = 0$, one has $\mu(A) = 0$.*

**Notation 1.** *If μ and ν are absolutely continuous with respect to each other, we denote this fact by μ* ∼ *ν.*

In the absence of other mentions, we work in the following scenario: let $(\Omega, \mathcal{T}, \mu_i)$, $i = 1, 2$, be two measure spaces and $\lambda$ a measure on $(\Omega, \mathcal{T})$ such that $\mu_i \sim \lambda$ for $i = 1, 2$. With the help of the Radon–Nikodým Theorem, we find two non-negative measurable functions $f_1$ and $f_2$ defined on $\Omega$ such that $\mu_i(A) = \int_A f_i \, d\lambda$ for any $A \in \mathcal{T}$ and any $i = 1, 2$. Consider $w : \Omega \to (0, \infty)$ a weight function (i.e., $w$ is a positive measurable function).

**Definition 3.** *Let A* ∈ T *. The weighted Tsallis divergence on A between μ*<sup>1</sup> *and μ*<sup>2</sup> *is defined via*

$$D_k^{w,T}(\mu_1|\mu_2, A) = \begin{cases} \frac{1}{\int_A w \, d\mu_1} \int_A w \log_k^T \left(\frac{f_1}{f_2}\right) d\mu_1 & \text{if } \mu_1(A) \neq 0 \\ 0 & \text{if } \mu_1(A) = 0 \end{cases}$$

*and the weighted Kaniadakis divergence on A between μ*<sup>1</sup> *and μ*<sup>2</sup> *is given by*

$$D_k^{w,K}(\mu_1|\mu_2, A) = \begin{cases} \frac{1}{\int_A w \, d\mu_1} \int_A w \log_k^K \left(\frac{f_1}{f_2}\right) d\mu_1 & \text{if } \mu_1(A) \neq 0 \\ 0 & \text{if } \mu_1(A) = 0. \end{cases}$$
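For discrete measures with strictly positive masses (densities taken with respect to the counting measure) and $A = \Omega$, Definition 3 specializes to finite sums. The following sketch (our own illustration; the function names are hypothetical) computes both weighted divergences and checks the consequence of Remark 1 that, by linearity of the integral, $D_k^{w,K} = \frac{1}{2}\left(D_k^{w,T} + D_{-k}^{w,T}\right)$:

```python
import numpy as np

def log_T(x, k):
    """Tsallis logarithm for x > 0."""
    return (x**k - 1.0) / k

def log_K(x, k):
    """Kaniadakis logarithm for x > 0."""
    return (x**k - x**(-k)) / (2.0 * k)

def weighted_divergence(f1, f2, w, deformed_log):
    """Weighted divergence of Definition 3 for discrete measures:
    f1, f2 are strictly positive mass vectors of mu_1, mu_2 and w is
    the weight function, all over the same finite support (A = Omega)."""
    f1, f2, w = map(np.asarray, (f1, f2, w))
    return float(np.sum(w * f1 * deformed_log(f1 / f2)) / np.sum(w * f1))

f1 = np.array([0.5, 0.3, 0.2])   # masses of mu_1
f2 = np.array([0.4, 0.4, 0.2])   # masses of mu_2
w  = np.array([1.0, 2.0, 0.5])   # weight function
k  = 0.3

d_T_plus  = weighted_divergence(f1, f2, w, lambda x: log_T(x, k))
d_T_minus = weighted_divergence(f1, f2, w, lambda x: log_T(x, -k))
d_K       = weighted_divergence(f1, f2, w, lambda x: log_K(x, k))

# Remark 1 transfers to the divergences by linearity of the integral.
assert abs(d_K - 0.5 * (d_T_plus + d_T_minus)) < 1e-12
```

Note that, unlike the Kullback–Leibler divergence, the weighted divergences need not be non-negative for every choice of weight function, which is why Theorems 1 and 2 give bounds rather than a simple sign condition.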

**Remark 2.** *We assume that all divergences and integrals which appear in this paper are finite.*

**Remark 3.** *We can see that the values of* $D_k^{w,T}(\mu_1|\mu_2, A)$ *and* $D_k^{w,K}(\mu_1|\mu_2, A)$ *do not depend on the choice of the reference measure* $\lambda$ *(because* $\frac{f_1}{f_2} = \frac{d\mu_1/d\lambda}{d\mu_2/d\lambda} = \frac{d\mu_1}{d\mu_2}$*).*
