**Shio Gai Quek 1, Ganeshsree Selvachandran 1, Florentin Smarandache 2, J. Vimala 3, Son Hoang Le 4,\*, Quang-Thinh Bui 5,6 and Vassilis C. Gerogiannis 7**


Received: 13 May 2020; Accepted: 10 June 2020; Published: 12 June 2020

**Abstract:** The plithogenic set is an extension of the crisp set, fuzzy set, intuitionistic fuzzy set, and neutrosophic set, whose elements are characterized by one or more attributes, and each attribute can assume many values. Each attribute has a corresponding degree of appurtenance of the element to the set with respect to the given criteria. To obtain better accuracy and a more exact exclusion (partial order), a contradiction or dissimilarity degree is defined between each attribute value and the dominant attribute value. In this paper, entropy measures for plithogenic sets are introduced. The requirements for any function to be an entropy measure of plithogenic sets are outlined in the axiomatic definition of the plithogenic entropy, using the axiomatic requirements of neutrosophic entropy. Several new formulae for the entropy measure of plithogenic sets are also introduced. The newly introduced entropy measures are then applied to a multi-attribute decision making problem related to the selection of locations.

**Keywords:** neutrosophic set; plithogenic set; fuzzy set; entropy; similarity measure; information measure

### **1. Introduction**

In recent years, there have been numerous authors who gave characterizations of entropy measures on fuzzy sets and their generalizations. Most notably, the majority of them worked on developing entropy measures on intuitionistic fuzzy sets (IFS). Alongside their introduction of new entropy measures on IFS, these authors have also given some straightforward examples to show how their entropy measures can be applied to various applications, including multi-attribute decision making (MADM) problems [1,2].

In 2016, Zhu and Li [3] gave a new definition for entropy measures on IFS. The new definition was subsequently compared against many other previous definitions of entropy measures on IFS. Montes et al. [4] proposed another new definition for entropy measures on intuitionistic fuzzy sets based on divergence. Both of these research groups [3,4] subsequently demonstrated the applications of their definitions of entropy on IFS to MADM problems, and both of them deployed examples of IFS whose data values were not derived from real-life datasets but were predetermined by the authors to justify their new concepts. On the other hand, Farnoosh et al. [5] also gave their new definition for entropy measures on IFS, but they focused only on discussing its potential application in fault elimination of digital images rather than MADM. Ansari et al. [6] also gave a new definition of entropy measures on IFS in edge detection of digital images. Neither research group [5,6] provided examples of how their new definitions for entropy measures on IFS may be applied to MADM.

Some of the definitions of entropy measures defined for IFS were parametric in nature. Gupta et al. [7] defined an entropy measure on IFS characterized by a parameter α. Meanwhile, Joshi and Kumar [8] independently (with respect to [7]) defined a new entropy measure on IFS, also characterized by a parameter α. An example on MADM was also discussed by Joshi and Kumar [8], once again involving a small, conceptual IFS like those encountered in the work of Zhu and Li [3] as well as Montes et al. [4]. The work by Joshi and Kumar [8] was subsequently followed by Garg et al. [9], who defined an entropy measure on IFS characterized by two parameters (α, β). Like the previous authors, Garg et al. [9] discussed the application of their proposed entropy measure to MADM in a similar manner. In particular, they compared the effect of different parameters α, β on the results of such a decision-making process. They also compared the results yielded by entropy measures on IFS from some other authors. Joshi and Kumar [10] also defined another entropy measure on IFS, following their own previous work on classical fuzzy sets in [11] and also the work by Garg et al. in [9].

For various generalizations derived from IFS, such as interval-valued intuitionistic fuzzy sets (IVIFS) or generalized intuitionistic fuzzy soft sets (GIFSS), there were also some studies establishing entropy measures on these generalizations, followed by demonstrations of how such entropy measures can be applied to certain MADM problems. Recently, Garg [12] defined an entropy measure for interval-valued intuitionistic fuzzy sets and discussed the application of such entropy measures to solving MADM problems with unknown attribute weights. In 2018, Rashid et al. [13] defined another distance-based entropy measure on interval-valued intuitionistic fuzzy sets. Again, following the conventions of the previous authors, they clarified the applications of their work on MADM problems using a simple, conceptual small dataset. Selvachandran et al. [14] defined a distance-induced intuitionistic entropy for generalized intuitionistic fuzzy soft sets, for which they also clarified the applications of their work on MADM problems using a dataset of the same kind.

As for the Pythagorean fuzzy set (PFS) and its generalizations, an entropy measure was defined by Yang and Hussein in [15]. Thao and Smarandache [16] proposed a new entropy measure for Pythagorean fuzzy sets in 2019. The new definitions of entropy in [16] discarded the use of the natural logarithm as in [15], which is computationally intensive. Such work was subsequently followed by Athira et al. [17,18], where an entropy measure was given for Pythagorean fuzzy soft sets, a further generalization of Pythagorean fuzzy sets. As for the vague set and its generalizations, Feng and Wang [19] defined an entropy measure considering the hesitancy degree. Later, Selvachandran et al. [20] defined an entropy measure on complex vague soft sets. In the ongoing effort of establishing entropy measures for other generalizations of fuzzy sets, Thao and Smarandache [16] and Selvachandran et al. [20] were among the research groups who justified the applicability of their entropy measures using examples on MADM. Likewise, each of those works involved one or several small, conceptual datasets created by the authors themselves.

Besides IFS, PFS, vague sets and all their derivatives, there were also definitions of entropy established on some other generalizations of fuzzy sets in recent years, some accompanied by examples on MADM involving conceptual datasets as well [21]. Wei [22] defined an asymmetrical cross-entropy measure between two fuzzy sets, called the fuzzy cross-entropy. Such cross-entropy for interval neutrosophic sets was also studied by Sahin in [23]. Ye and Du [21] gave four new entropy measures on interval-valued neutrosophic sets. Sulaiman et al. [24,25] defined entropy measures for interval-valued fuzzy soft sets and multi-aspect fuzzy soft sets. Hu et al. [26] gave an entropy measure for hesitant fuzzy sets. Al-Qudah and Hassan [27] gave an entropy measure for complex multi-fuzzy soft sets. Barukab et al. [28] gave an entropy measure for spherical fuzzy sets. Piasecki [29] gave some remarks on and characterizations of entropy measures among fuzzy sets. In 2019, Dass and Tomar [30] further examined the legitimacy of some exponential entropy measures on IFS, such as those defined by Verna and Sharma in [31], Zhang and Jiang in [32], and Mishra in [33]. On the other hand, Kang and Deng [34] outlined the general patterns by which formulas for entropy measures could be formed, thus also applicable to entropy measures on various generalizations of fuzzy sets. Santos et al. [35] and Cao and Lin [36] derived their entropy formulas for data processing based on those for fuzzy entropy, with applications in image thresholding and electroencephalogram analysis.

With many entropy measures being defined for various generalizations of fuzzy sets, there arises a need to standardize which kinds of functions are eligible to be used as entropy measures and which are not. One of the most notable and recent works in this field was accomplished by Majumdar [1], who established an axiomatic definition of the entropy measure on a single-valued neutrosophic set (SVNS). Such an axiomatic definition of entropy measure, once defined for a particular generalization of fuzzy set, serves as an invaluable tool when choosing a new entropy measure for a particular purpose. Moreover, the establishment of such an axiomatic definition of entropy measure motivates researchers to work on deriving a collection of functions that all qualify to be used as entropy measures, rather than inventing a single standalone function as an entropy measure for a particular scenario.

In 2017, Smarandache [2] first established the concept of plithogenic sets, intended to serve as a profound and conclusive generalization of most (if not all) of the previous generalizations of fuzzy sets. This obviously includes the IFS, on which most work had been done to establish its great variety of entropy measures. However, Smarandache [2] did not give any definition of entropy measures for plithogenic sets.

Our work in this paper is presented as follows: Firstly, in Section 2, we mention all the prerequisite definitions needed for the establishment of entropy measures for plithogenic sets. We also derive some generalizations of those previous definitions. Such generalizations are necessary to further widen the scope of our investigation into the set of functions that qualify as entropy measures for plithogenic sets. Then, in Section 3, we propose an axiomatic definition of entropy measures for plithogenic sets, outlining the requirements for any function to be such an entropy measure. Later in Section 3, several new formulae for the entropy measure of plithogenic sets are also introduced. In Section 4, we apply a particular example of our entropy measure to a MADM problem related to the selection of locations.

Due to the complexity and novelty of plithogenic sets, as well as the scope constraints of this paper, the plithogenic set involved in the demonstration of MADM will be small, with at most 150 data values in total. The data values contained in the plithogenic set example will also be conceptual in nature (only two to three digits per value). Such a presentation, although it may be perceived as simple, is in line with the common practice of the most renowned works by the previous authors discussed before, whenever a novel entropy measure is invented and first applied to a MADM problem. Hence, such a start with a small and conceptual dataset does not hinder the justification of the practicability of the proposed notions. Quite the contrary, it enables even the most unfamiliar readers to focus on the procedure of such novel methods of dealing with MADM problems, rather than being overwhelmed by the immense amount of computation encountered in dealing with up-to-date real-life datasets.

#### **2. Preliminary**

Throughout the rest of this article, let *U* be the universal set.

**Definition 1** [1]**.** *A single valued neutrosophic sets (SVNS) on U is defined to be the collection*

$$A = \{ (x, T_A(x), I_A(x), F_A(x)) : x \in U \}$$

*where* $T_A, I_A, F_A : U \to [0, 1]$ *and* $0 \leq T_A(x) + I_A(x) + F_A(x) \leq 3$. *We denote SVNS*(*U*) *to be the collection of all SVNS on U*.
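As a concrete illustration of Definition 1 (not part of the source), an SVNS can be stored as a mapping from elements to $(T, I, F)$ triples, with the constraints of Definition 1 checked directly; the names below are ours:

```python
# Minimal sketch of a single-valued neutrosophic set (SVNS) per Definition 1.
# Each element x of the universe is mapped to a triple (T, I, F) with every
# component in [0, 1] and 0 <= T + I + F <= 3. Names are illustrative only.

def is_svns(A):
    """Return True if dict A maps elements to valid (T, I, F) triples."""
    for T, I, F in A.values():
        if not all(0.0 <= m <= 1.0 for m in (T, I, F)):
            return False
        if not (0.0 <= T + I + F <= 3.0):
            return False
    return True

# A toy SVNS on U = {x1, x2}: x1 is mostly "in", x2 is maximally uncertain.
A = {"x1": (0.7, 0.2, 0.1), "x2": (0.5, 0.5, 0.5)}
assert is_svns(A)
assert not is_svns({"x1": (1.2, 0.0, 0.0)})   # T outside [0, 1]
```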

Majumdar [1] has established the following axiomatic definition for an entropy measure on SVNS.

**Definition 2** [1]**.** *An entropy measure on SVNS is a function EN* : *SVNS*(*U*) → [0, 1] *that satisfies the following axioms for all A* ∈ *SVNS*(*U*):

- (i) $EN(A) = 0$ *if A is a crisp set, i.e.,* $(T_A(x), I_A(x), F_A(x)) \in \{(1, 0, 0), (0, 0, 1)\}$ *for all* $x \in U$.
- (ii) $EN(A) = 1$ *if* $(T_A(x), I_A(x), F_A(x)) = \left(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}\right)$ *for all* $x \in U$.
- (iii) $EN(A) \geq EN(B)$ *if* $A \subseteq B$.
- (iv) $EN(A) = EN(\overline{A})$ *for all* $A \in SVNS(U)$.
In the study of fuzzy entropy, a fuzzy set with membership degree of 0.5 is a very special fuzzy set, as it is the fuzzy set with the highest degree of fuzziness. Similarly, in the study of entropy for SVNSs, an SVNS with all three membership components equal to 0.5 is very special, as it is the SVNS with the highest degree of uncertainty. Hence, we denote $A_{[\frac{1}{2}]} \in SVNS(U)$ as the SVNS with $(T_A(x), I_A(x), F_A(x)) = \left(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}\right)$ for all $x \in U$. Such axiomatic descriptions in Definition 2 of this paper, defined by Majumdar [1], serve as the cornerstone for establishing similar axiomatic descriptions for the entropy measures on other generalizations of fuzzy sets, which shall certainly include that for plithogenic sets by Smarandache [2].

We, however, disagree with (iii) of Definition 2. As an illustrative example, let *A* be empty; then *A* has zero entropy because it is of *absolute certainty* that *A* "does not contain any element". Whereas a superset of *A*, say *B* ∈ *SVNS*(*U*), may have higher entropy because it may not be crisp. Thus, we believe that upon establishing (iii) of Definition 2, the authors in [1] considered only the case where *A* and *B* are very close to the entire *U*. Thus, in establishing entropy measures on plithogenic sets in this article, only axioms (i), (ii) and (iv) of Definition 2 will be considered.

Together with axioms (i), (ii), and (iv) of Definition 2, the following two well-established generalizations of functions serve as **our motives for defining the entropies**, allowing different users to customize them to their respective needs.

**Definition 3** [1]**.** *Let* $\mathcal{T} : [0, 1]^2 \to [0, 1]$ *be a function satisfying the following for all* $p, q, r, s \in [0, 1]$:

1. $\mathcal{T}(p, q) = \mathcal{T}(q, p)$
2. $\mathcal{T}(p, q) \leq \mathcal{T}(r, s)$ *whenever* $p \leq r$ *and* $q \leq s$
3. $\mathcal{T}(p, \mathcal{T}(q, r)) = \mathcal{T}(\mathcal{T}(p, q), r)$
4. $\mathcal{T}(p, 1) = p$
*Then* T *is said to be a T-norm function*.

**Example 1.** "*minimum*" *is a T-norm function*.

**Definition 4** [1]**.** *Let* $\mathcal{S} : [0, 1]^2 \to [0, 1]$ *be a function satisfying the following for all* $p, q, r, s \in [0, 1]$:

1. $\mathcal{S}(p, q) = \mathcal{S}(q, p)$
2. $\mathcal{S}(p, q) \leq \mathcal{S}(r, s)$ *whenever* $p \leq r$ *and* $q \leq s$
3. $\mathcal{S}(p, \mathcal{S}(q, r)) = \mathcal{S}(\mathcal{S}(p, q), r)$
4. $\mathcal{S}(p, 0) = p$
5. $\mathcal{S}(p, 1) = 1$

*Then,* S *is said to be an S-norm (or a T-conorm) function*.

**Example 2.** "*maximum*" *is an S-norm function*.

In the study of fuzzy logic, we also find ourselves seeking functions that measure the central tendencies, as well as a given position of the dataset, besides maximum and minimum. Such measurement often involves more than two entities, and those entities can be ordered or otherwise. This is the reason we introduce the concepts of the *M-type* function and the *S-type* function, defined on all finite (unordered) sets with entries in [0, 1]. Owing to the commutativity of S-norm functions, an S-type function is a further generalization of an S-norm function, as it allows more than two entities. In the rest of this article, let us denote Φ[0,1] as the collection of all finite sets with entries in [0, 1]. To avoid ending up with too many brackets in an expression, it is convenient to denote the image of $\{a_1, a_2, \cdots, a_n\}$ under $f$ as $f(a_1, a_2, \cdots, a_n)$.

**Definition 5** [1]**.** *Let f* : Φ[0,1] → [0, 1] *be a function satisfying the following:*

1. $f(0, 0, \cdots, 0) = 0$
2. $f(1, 1, \cdots, 1) = 1$
*Then f is said to be an M-type function*.

**Remark 1.** *"maximum", "minimum", "mean", "interpolated inclusive median", "interpolated exclusive median", "inclusive first quartile", "exclusive 55th percentile",* $1 - \prod_{k \in K}(1 - k)$ *and* $1 - \sqrt[|K|]{\prod_{k \in K}(1 - k)}$ *are some particular examples of M-type functions*.

**Definition 6** [1]**.** *Let f* : Φ[0,1] → [0, 1] *be a function satisfying the following:*

1. $f(0, 0, \cdots, 0) = 0$
2. $f(K) = 1$ *whenever* $1 \in K$
*Then f is said to be an S-type function*.

**Remark 2.** *"maximum",* $1 - \prod_{k \in K}(1 - k)$ *and* $1 - \sqrt[|K|]{\prod_{k \in K}(1 - k)}$ *are some particular examples of S-type functions*.

**Lemma 1.** *If f is an S-type function, then it is also an M-type function.*

**Proof.** As $\{1, 1, \cdots, 1\}$ contains an element equal to 1, $f(1, 1, \cdots, 1) = 1$, and thus the lemma follows. □

**Remark 3.** *The converse of this lemma is not true however, as it is obvious that "mean" is an M-type function but not an S-type function.*
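The examples in Remarks 1–3 can be checked numerically. The following Python sketch (ours, not from the source) implements the two product-based examples, reading the second as one minus the $|K|$-th root of $\prod_{k \in K}(1 - k)$, and confirms the properties used in Lemma 1 and Remark 3:

```python
# Illustrative check (ours) of the M-type / S-type examples in Remarks 1-3.
# K is a finite collection of values in [0, 1].
from math import prod
from statistics import mean

def s_prod(K):
    # 1 - prod_{k in K} (1 - k): a product-based S-type example.
    return 1 - prod(1 - k for k in K)

def s_geo(K):
    # 1 - |K|-th root of prod_{k in K} (1 - k): the second product-based
    # example as we read it; it also returns 1 whenever K contains a 1.
    return 1 - prod(1 - k for k in K) ** (1 / len(K))

# M-type behaviour (as used in Lemma 1): all-zeros map to 0, all-ones to 1.
for f in (max, min, mean, s_prod, s_geo):
    assert f([0.0, 0.0, 0.0]) == 0.0 and f([1.0, 1.0, 1.0]) == 1.0

# S-type behaviour: a single 1 in K forces the value 1.
K = [1.0, 0.3, 0.6]
assert max(K) == 1.0 and s_prod(K) == 1.0 and s_geo(K) == 1.0
# "mean" is M-type but not S-type (Remark 3): a single 1 does not force 1.
assert mean(K) != 1.0
```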

All of these definitions and lemmas suffice for the establishment of our entropy measure for plithogenic sets.

#### **3. Proposed Entropy Measure for Plithogenic Sets**

In [2], Smarandache introduced the concept of the plithogenic set. This concept is given in the following definition.

**Definition 7** [2]**.** *Let U be a universal set. Let P* ⊆ *U*. *Let A be a set of attributes. For each attribute a* ∈ *A: let* $S_a$ *be the set of all its corresponding attribute values. Take* $V_a \subseteq S_a$. *Define a function* $d_a : P \times V_a \to [0, 1]$, *called the attribute value appurtenance degree function. Define a function* $c_a : V_a \times V_a \to [0, 1]$, *called the attribute value contradiction (dissimilarity) degree function, which further satisfies:*

1. $c_a(v, v) = 0$ *for all* $v \in V_a$
2. $c_a(v_1, v_2) = c_a(v_2, v_1)$ *for all* $v_1, v_2 \in V_a$

*Then*:

$$\mathbf{R} = \langle P, A, V, d, c \rangle,$$

*where* $V = \{V_a : a \in A\}$, $d = \{d_a : a \in A\}$ *and* $c = \{c_a : a \in A\}$, *is called a plithogenic set on U*.

**Remark 4.** *If* $P = U$, $A = \{a_o\}$, $V_{a_o} = \{v_1, v_2, v_3\}$, $c_{a_o}(v_1, v_2) = c_{a_o}(v_2, v_3) = 0.5$ *and* $c_{a_o}(v_1, v_3) = 1$, *then* **R** *is reduced to a single-valued neutrosophic set (SVNS) on U*.

**Remark 5.** *If* $P = U$, $V_a = \{v_1, v_2\}$ *for all* $a \in A$, $d_a : P \times V_a \to [0, 1]$ *is such that* $0 \leq d_a(x, v_1) + d_a(x, v_2) \leq 1$ *for all* $x \in P$ *and for all* $a \in A$, *and* $c_a(v_1, v_2) = c_a(v_2, v_1) = 1$ *for all* $a \in A$, *then* **R** *is reduced to a generalized intuitionistic fuzzy soft set (GIFSS) on U*.

**Remark 6.** *If* $P = U$, $A = \{a_o\}$, $V_{a_o} = \{u_1, v_1, u_2, v_2\}$, $d_{a_o} : P \times V_{a_o} \to [0, 1]$ *is such that* $0 \leq d_{a_o}(x, v_1) + d_{a_o}(x, v_2) \leq 1$, $0 \leq d_{a_o}(x, u_1) \leq d_{a_o}(x, v_1)$ *and* $0 \leq d_{a_o}(x, u_2) \leq d_{a_o}(x, v_2)$ *all satisfied for all* $x \in P$, *and* $c_{a_o}(u_1, u_2) = c_{a_o}(v_1, v_2) = 1$, *then* **R** *is reduced to an interval-valued intuitionistic fuzzy set (IVIFS) on U*.

**Remark 7.** *If* $P = U$, $A = \{a_o\}$, $V_{a_o} = \{v_1, v_2\}$, $d_{a_o} : P \times V_{a_o} \to [0, 1]$ *is such that* $0 \leq d_{a_o}(x, v_1) + d_{a_o}(x, v_2) \leq 1$ *for all* $x \in P$, *and* $c_{a_o}(v_1, v_2) = 1$, *then* **R** *is reduced to an intuitionistic fuzzy set (IFS) on U*.

**Remark 8.** *If* $P = U$, $A = \{a_o\}$, $V_{a_o} = \{v_1, v_2\}$, $d_{a_o} : P \times V_{a_o} \to [0, 1]$ *is such that* $0 \leq d_{a_o}(x, v_1)^2 + d_{a_o}(x, v_2)^2 \leq 1$ *for all* $x \in P$, *and* $c_{a_o}(v_1, v_2) = 1$, *then* **R** *is reduced to a Pythagorean fuzzy set (PFS) on U*.

**Remark 9.** *If* $P = U$, $A = \{a_o\}$ *and* $V_{a_o} = \{v_o\}$, *then* **R** *is reduced to a fuzzy set on U*.

**Remark 10.** *If* $P = U$, $A = \{a_o\}$, $V_{a_o} = \{v_o\}$, *and* $d_{a_o} : P \times V_{a_o} \to \{0, 1\} \subset [0, 1]$, *then* **R** *is reduced to a classical crisp set on U*.

In all the following, the collection of all the plithogenic sets on *U* shall be denoted as PLFT(*U*).

**Definition 8.** *Let* $\mathbf{R} = \langle P, A, V, d, c \rangle \in$ PLFT(*U*). *The complement of* **R** *is defined as*

$$\overline{\mathbf{R}} = \left\langle P, A, V, \overline{d}, c \right\rangle,$$

*where* $\overline{d}_a = 1 - d_a$ *for all* $a \in A$.

**Remark 11.** *This definition of complement follows from page 42 of* [2].

**Remark 12.** *It is clear that* $\overline{\mathbf{R}} \in$ PLFT(*U*) *as well*.
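To make Definition 8 concrete, the following Python sketch (illustrative names and data, ours rather than the source's) stores the appurtenance degrees $d_a(x, v)$ in nested dictionaries and forms the complement by replacing each degree with its complement to 1, leaving the contradiction degrees untouched:

```python
# Illustrative sketch (ours) of Definition 8: the complement of a plithogenic
# set replaces each appurtenance degree d_a(x, v) by 1 - d_a(x, v); the
# contradiction degrees c_a are left unchanged.

# d[a][(x, v)] = d_a(x, v) for attribute a, element x, attribute value v.
d = {"size": {("x1", "big"): 0.75, ("x1", "small"): 0.25,
              ("x2", "big"): 0.5,  ("x2", "small"): 0.5}}

def complement(d):
    """Definition 8: d_a -> 1 - d_a for every attribute a."""
    return {a: {key: 1.0 - deg for key, deg in da.items()}
            for a, da in d.items()}

d_bar = complement(d)
assert d_bar["size"][("x1", "big")] == 0.25
# Complementing twice recovers the original degrees (cf. Remark 12: the
# complement is again a plithogenic set).
assert complement(d_bar) == d
```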

With all these definitions established, we now proceed to define a way of measuring entropy for plithogenic sets. In the establishment of such entropy measures, we must let all the appurtenance degree functions {*da* : *a* ∈ *A*} and all the contradiction degree functions {*ca* : *a* ∈ *A*} participate in contributing to the overall entropy measure of $\mathbf{R} = \langle P, A, V, d, c \rangle \in$ PLFT(*U*).

We now discuss some common traits of how each element from {*da* : *a* ∈ *A*} and {*ca* : *a* ∈ *A*} shall contribute to the overall entropy measures of **R**, all of which are firmly rooted in our conventional understanding of entropy as a quantitative measurement for the amount of disorder.

Firstly, on the elements of {*da* : *a* ∈ *A*}: In accordance with Definition 7, each *da*(*x*, *v*) is the appurtenance fuzzy degree of *x* ∈ *P*, over the attribute value *v* ∈ *Va* (*Va* in turn belongs to the attribute *a* ∈ *A*). Note that *da*(*x*, *v*) = 1 indicates *absolute certainty* of membership of *x* in *v*; whereas *da*(*x*, *v*) = 0 indicates *absolute certainty* of non-membership of *x* in *v*. Hence, any *da*(*x*, *v*) satisfying *da*(*x*, *v*) ∈ {0, 1} must be regarded as contributing *zero* magnitude to the overall entropy measure of **R**, as absolute certainty implies zero amount of disorder. On the other hand, *da*(*x*, *v*) = 0.5 indicates *total uncertainty* of the membership of *x* in *v*, as 0.5 is in the middle of 0 and 1. Hence, any *da*(*x*, *v*) satisfying *da*(*x*, *v*) = 0.5 must be regarded as contributing *the greatest* magnitude to the overall entropy measure of **R**, as total uncertainty implies the highest possible amount of disorder.

Secondly, on the elements of {*ca* : *a* ∈ *A*}: For each attribute *a* ∈ *A*, $c_a(v_1, v_2) = 0$ indicates that the attribute values $v_1, v_2$ are of identical meaning (synonyms) to each other (e.g., "big" and "large"), whereas $c_a(v_1, v_2) = 1$ indicates that the attribute values $v_1, v_2$ are of opposite meaning to each other (e.g., "big" and "small"). Therefore, in the case of $c_a(v_1, v_2) = 0$ and $\{d_a(x, v_1), d_a(x, v_2)\} = \{0, 1\}$, it implies that *x* is absolutely certain to be inside one $v_i$ among $\{v_1, v_2\}$, while outside of the other, even though $v_1$ and $v_2$ carry identical meaning to each other. Such a collection of $c_a(v_1, v_2)$, $d_a(x, v_1)$, $d_a(x, v_2)$ is therefore of the highest possible amount of disorder, because their combined meaning implies an analogy to the statement "*x* is very large and not big" or "*x* is not large and very big". As a result, such a collection of $c_a(v_1, v_2)$, $d_a(x, v_1)$, $d_a(x, v_2)$ must be regarded as contributing *the greatest* magnitude to the overall entropy measure of **R**. Furthermore, in the case of $c_a(v_1, v_2) = 1$ and $\{d_a(x, v_1), d_a(x, v_2)\} \subset \{0, 1\}$ (i.e., $d_a(x, v_1) = d_a(x, v_2) \in \{0, 1\}$), it implies that *x* is absolutely certain to be inside both $v_1$ and $v_2$ (or outside both $v_1$ and $v_2$), even though $v_1$ and $v_2$ carry opposite meaning to each other. Likewise, such a collection of $c_a(v_1, v_2)$, $d_a(x, v_1)$, $d_a(x, v_2)$ is of the highest possible amount of disorder, because their combined meaning implies an analogy to the statement "*x* is very big and very small" or "*x* is not big and not small". As a result, such a collection of $c_a(v_1, v_2)$, $d_a(x, v_1)$, $d_a(x, v_2)$ must be regarded as contributing *the greatest* magnitude to the overall entropy measure of **R** as well.

We now define the three axioms of entropy on plithogenic sets, analogous to the axioms (i), (ii), and (iv) in Definition 2 respectively.

**Definition 9.** *An entropy measure on plithogenic sets is a function E* : PLFT(*U*) → [0, 1] *satisfying the following three axioms:*

(i) *(analogy to (i) in Definition 2). Let* $\mathbf{R} = \langle P, A, V, d, c \rangle \in$ PLFT(*U*) *satisfy the following for all* $a \in A$, $x \in P$ *and* $v_1, v_2 \in V_a$:

	- (a) $d_a : P \times V_a \to \{0, 1\}$ *for all* $a \in A$.
	- (b) $\{d_a(x, v_1), d_a(x, v_2)\} = \{0, 1\}$ *whenever* $c_a(v_1, v_2) \geq 0.5$.
	- (c) $\{d_a(x, v_1), d_a(x, v_2)\} \subset \{0, 1\}$ *whenever* $c_a(v_1, v_2) < 0.5$.

*Then* $E(\mathbf{R}) = 0$.

(ii) *(analogy to (ii) in Definition 2). Let* $\mathbf{R} = \langle P, A, V, d, c \rangle \in$ PLFT(*U*) *satisfying* $d_a : P \times V_a \to \{0.5\}$ *for all* $a \in A$. *Then* $E(\mathbf{R}) = 1$.

(iii) *(analogy to (iv) in Definition 2). For all* $\mathbf{R} = \langle P, A, V, d, c \rangle \in$ PLFT(*U*), $E(\mathbf{R}) = E\left(\overline{\mathbf{R}}\right)$ *holds*.

The three axioms in Definition 9 thus serve as general rules that any function must fulfill to be used as an entropy measure on plithogenic sets. However, the existence of functions satisfying these three axioms needs to be ascertained. To ensure that we have an abundance of functions satisfying these axioms, we must therefore propose and characterize such functions with explicit examples, and go to the extent of proving that each one of our proposed examples satisfies all these axioms. Such a procedure of proving the existence of many different entropy functions is indispensable. This is because, in practical use, the choice of an entropy measure will fully depend on the type of scenario examined, as well as the amount of computing power available to perform such computations, without jeopardizing the axioms of entropy measures as mentioned. It is only by doing so that users are guaranteed plenty of room to customize an entropy measure of plithogenic sets suited to their particular needs. In light of this motivation, a theorem showing a collection of functions satisfying those axioms is presented in this paper.

**Theorem 1.** *Let* $m_1, m_2, m_3$ *be any M-type functions. Let* $s_1, s_2$ *be any S-type functions. Let* $\Delta$ *be any function satisfying the following conditions:*

1. $\Delta(t) = 0$ *for all* $t \in \{0, 1\}$
2. $\Delta\left(\frac{1}{2}\right) = 1$
3. $\Delta(t) = \Delta(1 - t)$ *for all* $t \in [0, 1]$

*Let* $\omega$ *be any function satisfying the following conditions:*

1. $\omega(t) = 0$ *for all* $t \in \left[0, \frac{1}{2}\right]$
2. $\omega(1) = 1$
*Define* $\varepsilon_{\Delta,a} : P \times V_a \to [0, 1]$, *where* $\varepsilon_{\Delta,a}(x, v) = \Delta(d_a(x, v))$ *for all* $(x, v) \in P \times V_a$. *Define* $\varphi_{\omega,a} : P \times V_a \times V_a \to [0, 1]$, *where:*

$$\varphi_{\omega,a}(x, v_1, v_2) = \omega(1 - c_a(v_1, v_2)) \cdot \left| d_a(x, v_1) - d_a(x, v_2) \right| + \omega(c_a(v_1, v_2)) \cdot \left| d_a(x, v_1) + d_a(x, v_2) - 1 \right|$$

*for all* (*x*, *v*1, *v*2) ∈ *P* × *Va* × *Va*.

*Then, any function E* : PLFT(*U*) → [0, 1], *in the form of*

$$E(\mathbf{R}) = m_3\big\{ m_2\big\{ m_1\big\{ s_2\big\{ \varepsilon_{\Delta,a}(x, v),\ s_1\{ \varphi_{\omega,a}(x, v, u) : u \in V_a \} \big\} : v \in V_a \big\} : a \in A \big\} : x \in P \big\}$$

*for all* $\mathbf{R} = \langle P, A, V, d, c \rangle \in$ PLFT(*U*), *are all entropy measures on plithogenic sets*.

**Proof.** + Axiom (i): Take any arbitrary $u, v \in V_a$, $a \in A$ and $x \in P$.

a. As $d_a(x, v) \in \{0, 1\}$, $\varepsilon_{\Delta,a}(x, v) = \Delta(d_a(x, v)) = 0$.

b. Whenever $c_a(v_1, v_2) \geq 0.5$, it follows that $1 - c_a(v_1, v_2) \leq 0.5$, which implies $\omega(1 - c_a(v_1, v_2)) = 0$.

Thus, $\varphi_{\omega,a}(x, v_1, v_2) = \omega(c_a(v_1, v_2)) \cdot \left| d_a(x, v_1) + d_a(x, v_2) - 1 \right|$. Since $\{d_a(x, v_1), d_a(x, v_2)\} = \{0, 1\}$, $d_a(x, v_1) + d_a(x, v_2) - 1 = 0$ follows, which further implies that $\varphi_{\omega,a}(x, v_1, v_2) = 0$.

c. Whenever $c_a(v_1, v_2) < 0.5$, it implies $\omega(c_a(v_1, v_2)) = 0$.

Thus, $\varphi_{\omega,a}(x, v_1, v_2) = \omega(1 - c_a(v_1, v_2)) \cdot \left| d_a(x, v_1) - d_a(x, v_2) \right|$.

Since $\{d_a(x, v_1), d_a(x, v_2)\} \subset \{0, 1\}$, i.e., $d_a(x, v_1) = d_a(x, v_2)$, we have $\left| d_a(x, v_1) - d_a(x, v_2) \right| = 0$, which further implies that $\varphi_{\omega,a}(x, v_1, v_2) = 0$.

Hence, $\varphi_{\omega,a}(x, v, u) = \varepsilon_{\Delta,a}(x, v) = 0$ follows for all $u$, $v$, $a$, $x$.

As a result,

$$\begin{split} E(\mathbf{R}) &= m_3\{ m_2\{ m_1\{ s_2\{ \varepsilon_{\Delta,a}(x, v),\ s_1\{ \varphi_{\omega,a}(x, v, u) : u \in V_a \} \} : v \in V_a \} : a \in A \} : x \in P \} \\ &= m_3\{ m_2\{ m_1\{ s_2\{ 0,\ s_1\{ 0 : u \in V_a \} \} : v \in V_a \} : a \in A \} : x \in P \} \\ &= m_3\{ m_2\{ m_1\{ s_2\{ 0, 0 \} : v \in V_a \} : a \in A \} : x \in P \} \\ &= m_3\{ m_2\{ m_1\{ 0 : v \in V_a \} : a \in A \} : x \in P \} = 0. \end{split}$$

+ Axiom (ii): Take any arbitrary $v \in V_a$, $a \in A$ and $x \in P$. As $d_a : P \times V_a \to \{0.5\}$ for all $a \in A$, we have $d_a(x, v) = 0.5$ for all $v$, $a$, $x$. This further implies that $\varepsilon_{\Delta,a}(x, v) = \Delta(d_a(x, v)) = 1$ for all $v$, $a$, $x$. As a result,

$$\begin{split} E(\mathbf{R}) &= m_3\{ m_2\{ m_1\{ s_2\{ \varepsilon_{\Delta,a}(x, v),\ s_1\{ \varphi_{\omega,a}(x, v, u) : u \in V_a \} \} : v \in V_a \} : a \in A \} : x \in P \} \\ &= m_3\{ m_2\{ m_1\{ s_2\{ 1,\ s_1\{ \varphi_{\omega,a}(x, v, u) : u \in V_a \} \} : v \in V_a \} : a \in A \} : x \in P \} \\ &= m_3\{ m_2\{ m_1\{ 1 : v \in V_a \} : a \in A \} : x \in P \} = 1. \end{split}$$

+ Axiom (iii): $\overline{d}_a = 1 - d_a$ follows by Definition 8. This implies the following:

$$\text{(a)} \quad \Delta\left(\overline{d}_a(x, v)\right) = \Delta(1 - d_a(x, v)) = \Delta(d_a(x, v)) = \varepsilon_{\Delta,a}(x, v).$$

(b) First, we have

$$\begin{aligned} \left| \overline{d}_a(x, v_1) - \overline{d}_a(x, v_2) \right| &= \left| (1 - d_a(x, v_1)) - (1 - d_a(x, v_2)) \right| \\ &= \left| -d_a(x, v_1) + d_a(x, v_2) \right| \\ &= \left| d_a(x, v_1) - d_a(x, v_2) \right| \end{aligned}$$

and

$$\begin{aligned} \left| \overline{d}_a(x, v_1) + \overline{d}_a(x, v_2) - 1 \right| &= \left| (1 - d_a(x, v_1)) + (1 - d_a(x, v_2)) - 1 \right| \\ &= \left| 1 - d_a(x, v_1) - d_a(x, v_2) \right| \\ &= \left| d_a(x, v_1) + d_a(x, v_2) - 1 \right|. \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} &\omega(1 - c_a(v_1, v_2)) \cdot \left| \overline{d}_a(x, v_1) - \overline{d}_a(x, v_2) \right| + \omega(c_a(v_1, v_2)) \cdot \left| \overline{d}_a(x, v_1) + \overline{d}_a(x, v_2) - 1 \right| \\ &\quad = \omega(1 - c_a(v_1, v_2)) \cdot \left| d_a(x, v_1) - d_a(x, v_2) \right| + \omega(c_a(v_1, v_2)) \cdot \left| d_a(x, v_1) + d_a(x, v_2) - 1 \right| \\ &\quad = \varphi_{\omega,a}(x, v_1, v_2). \end{aligned}$$

Since

$$E(\mathbb{R}) = m\_3 \Big\{ m\_2 \Big\{ m\_1 \Big\{ s\_2 \Big\{ \varepsilon\_{\Lambda, d}(\mathbf{x}, \boldsymbol{\upsilon}), s\_1 \Big\} q\_{\boldsymbol{\omega}, d}(\mathbf{x}, \boldsymbol{\upsilon}, \boldsymbol{\mu}) : \boldsymbol{u} \in V\_d \Big\} \; : \; \boldsymbol{\upsilon} \in V\_d \Big\} : \; a \in A \Big\} : \; \mathbf{x} \in P \Big\} $$

$E(\overline{\mathbf{R}}) = E(\mathbf{R})$ now follows. □
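The complement-invariance argument above can be sanity-checked numerically. The following sketch (function names `eta` and `omega` are ours, and the sampled values are arbitrary) verifies that the quantity $\eta\_{\omega,a}$ computed from the complement degrees $1 - d$ coincides with the one computed from $d$:

```python
# Numerical check that eta is invariant under complementation d -> 1 - d.
# The names eta/omega and the sample grids are illustrative, not from the paper.

def eta(omega, c, d1, d2):
    """eta_{omega,a}(x, v1, v2) for appurtenance degrees d1, d2 and contradiction c."""
    return omega(1 - c) * abs(d1 - d2) + omega(c) * abs(d1 + d2 - 1)

def omega(c):
    # One admissible weight function: 0 below 1/2, then linear up to 1 (Remark 14(a)).
    return 0.0 if c < 0.5 else 2 * (c - 0.5)

for c in (0.0, 0.3, 0.5, 0.9, 1.0):
    for d1 in (0.0, 0.2, 0.7, 1.0):
        for d2 in (0.1, 0.5, 0.8):
            # |(1-d1)-(1-d2)| = |d1-d2| and |(1-d1)+(1-d2)-1| = |d1+d2-1|.
            assert abs(eta(omega, c, d1, d2) - eta(omega, c, 1 - d1, 1 - d2)) < 1e-9
```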

**Remark 13.** *As* $\varepsilon\_{\Delta,a}(\mathbf{x},\upsilon) = \Delta\big(d\_{a}(\mathbf{x},\upsilon)\big)$ *and*

$$\eta\_{\omega,a}(\mathbf{x},\upsilon,\mu) = \omega\big(1-c\_{a}(\upsilon,\mu)\big)\cdot\left|d\_{a}(\mathbf{x},\upsilon)-d\_{a}(\mathbf{x},\mu)\right| + \omega\big(c\_{a}(\upsilon,\mu)\big)\cdot\left|d\_{a}(\mathbf{x},\upsilon)+d\_{a}(\mathbf{x},\mu)-1\right|,$$

*It follows that*

$$E(\mathbf{R}) = m\_3\left\{ m\_2\left\{ m\_1\left\{ s\_2\left\{ \Delta\big(d\_a(\mathbf{x},\upsilon)\big),\; s\_1\left\{ \begin{aligned} \omega\big(1-c\_a(\upsilon,\mu)\big)\cdot\big|d\_a(\mathbf{x},\upsilon)-d\_a(\mathbf{x},\mu)\big|& \\ {}+\omega\big(c\_a(\upsilon,\mu)\big)\cdot\big|d\_a(\mathbf{x},\upsilon)+d\_a(\mathbf{x},\mu)-1\big|& \end{aligned} : \mu\in V\_a \right\} \right\} : \upsilon\in V\_a \right\} : a\in A \right\} : \mathbf{x}\in P \right\}$$

*Such a version of the formula serves as an even more explicit representation of E*(**R**).

**Remark 14.** *For instance, the following is one of the many theoretical ways of choosing* {*m*1, *m*2, *m*3,*s*1,*s*2, Δ, ω} *to form a particular entropy measure on plithogenic sets*.

$$\text{(a)}\quad \omega(c) = \begin{cases} 0, & 0 \le c < \frac{1}{2} \\ 2\left(c - \frac{1}{2}\right), & \frac{1}{2} \le c \le 1 \end{cases}, \text{ for all } c \in [0, 1].$$

$$\text{(b)}\quad \Delta(c) = \begin{cases} 2c, & 0 \le c < \frac{1}{2} \\ 2(1-c), & \frac{1}{2} \le c \le 1 \end{cases}, \text{ for all } c \in [0, 1].$$

$$\text{(c)}\quad s\_1(K) = \max(K), \text{ for all } K \in \Phi\_{[0,1]}.$$

$$\text{(d)}\quad s\_2(K) = 1 - \prod\_{k \in K} (1 - k), \text{ for all } K \in \Phi\_{[0,1]}.$$


In practical applications, however, the choice of {*m*1, *m*2, *m*3,*s*1,*s*2, Δ, ω} will depend on the type of scenario examined, as well as the amount of computing power available to perform the computations. Such an abundance of choices is a huge advantage, because it allows each user plenty of room for customization suited to their own needs, without violating the principles of entropy functions.
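The particular choices in Remark 14 can be written out as a short, self-contained sketch (the function names `omega`, `delta`, `s1`, `s2` are our own labels for ω, Δ, *s*1, *s*2):

```python
# A minimal sketch of the building blocks chosen in Remark 14(a)-(d).
from functools import reduce

def omega(c):           # (a): zero below 1/2, then linear up to 1
    return 0.0 if c < 0.5 else 2 * (c - 0.5)

def delta(c):           # (b): tent map, peaking at c = 1/2
    return 2 * c if c < 0.5 else 2 * (1 - c)

def s1(K):              # (c): maximum of a finite multiset from [0, 1]
    return max(K)

def s2(K):              # (d): probabilistic sum 1 - prod(1 - k)
    return 1 - reduce(lambda acc, k: acc * (1 - k), K, 1.0)

# Boundary behaviour expected of entropy components:
assert delta(0.0) == 0.0 and delta(1.0) == 0.0 and delta(0.5) == 1.0
assert omega(0.5) == 0.0 and omega(1.0) == 1.0
assert abs(s2([0.5, 0.5]) - 0.75) < 1e-12
```

Note that Δ vanishes at the crisp values 0 and 1 and is maximal at the most ambiguous degree 1/2, which is exactly the behaviour an entropy component should exhibit.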

#### **4. Numerical Example of Plithogenic Sets**

In this section, we demonstrate the utility of the proposed entropy functions for plithogenic sets using an illustrative example of a MADM problem, in which a property buyer decides whether to live in Town *P* or Town *B*.

#### *4.1. Attributes and Attributes Values*

Three different addresses within Town *P* are selected: *P* = {*p*, *q*, *r*}. Four different addresses within Town *B* are selected as well: *B* = {α, β, γ, δ}. All seven addresses are investigated by the property buyer based on the following three attributes:

$$A = \left\{ \begin{array}{l} \text{Services near the address } (j), \ \text{Security near the address } (s), \\ \text{Public transport near the address } (t) \end{array} \right\}$$


For each of the 3 attributes, the following attribute values are considered:

*Vj* = {School(*u*1), Bank(*u*2), Factory(*u*3), Construction Site(*u*4), Clinic(*u*5)}

*Vs* = {Police on Patrol(*v*1), Police Station(*v*2), CCTV Coverage(*v*3), Premise Guards(*v*4)}

*Vt* = {Bus(*w*1), Train(*w*2), Taxi(*w*3), Grab services(*w*4)}
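For readers following along computationally, the sets above can be encoded directly as plain data; the labels are taken verbatim from the text, while the dictionary layout is our own choice:

```python
# The attribute/value structure of the example as plain Python data.
V = {
    "j": ["School", "Bank", "Factory", "Construction Site", "Clinic"],
    "s": ["Police on Patrol", "Police Station", "CCTV Coverage", "Premise Guards"],
    "t": ["Bus", "Train", "Taxi", "Grab services"],
}
P = ["p", "q", "r"]                       # addresses in Town P
B = ["alpha", "beta", "gamma", "delta"]   # addresses in Town B

# Seven addresses in total, each assessed against 5 + 4 + 4 attribute values:
assert len(P) + len(B) == 7
assert [len(V[a]) for a in ("j", "s", "t")] == [5, 4, 4]
```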

#### *4.2. Attribute Value Appurtenance Degree Functions*

Since only one person carries out the investigation, some characteristics of the towns may remain unknown or uncertain. Our example, though small in scale, therefore provides a realistic illustration of this phenomenon.

Thus, in our example: Let the attribute value appurtenance degree functions for Town *P* be given in Tables 1–3 (as deduced by the property buyer).

**Table 1.** Attribute value appurtenance fuzzy degree function for *j* ∈ *A* on Town *P* (*dj*).


**Table 2.** Attribute value appurtenance fuzzy degree function for *s* ∈ *A* on Town *P* (*ds*).


**Table 3.** Attribute value appurtenance fuzzy degree function for *t* ∈ *A* on Town *P* (*dt*).


For example:

*dj*(*p*, *u*1) = 1.0 indicates that schools exist near address *p* in town *P*.

*dt*(*q*, *w*4) = 0.9 indicates that Grab services are very likely to exist near address *q* in town *P*.

*ds*(*r*, *v*2) = 1.0 indicates that police stations exist near address *r* in town *P*.

Similarly, let the attribute value appurtenance degree functions for Town *B* be given in Tables 4–6 (as deduced by the property buyer):


**Table 4.** Attribute value appurtenance fuzzy degree function for *j* ∈ *A* on Town *B* (*hj*).

**Table 5.** Attribute value appurtenance fuzzy degree function for *s* ∈ *A* on Town *B* (*hs*).



**Table 6.** Attribute value appurtenance fuzzy degree function for *t* ∈ *A* on Town *B* (*ht*).

#### *4.3. Attribute Value Contradiction Degree Functions*

Moreover, the attribute values of a town may depend on one another. For example, in a place where schools are built, clinics should be built near the schools, whereas factories should be built far from them. Likewise, the police force should spread its manpower by patrolling across the town, away from the police station. Our example, though small in scale, provides a realistic illustration of such dependencies as well.

Thus, as an example, let the attribute value contradiction degree functions for the attributes *j*, *s*, *t* ∈ *A* be given in Tables 7–9 (as deduced by the property buyer), to be used for both towns.

**Table 7.** Attribute value contradiction degree functions for *j* ∈ *A* (*cj*).


**Table 8.** Attribute value contradiction degree functions for *s* ∈ *A* (*cs*).


**Table 9.** Attribute value contradiction degree functions for *t* ∈ *A* (*ct*).


In particular,

*cj*(*u*1, *u*3) = 1.0 indicates that schools and factories should not be in the same place, because it is not healthy to the students.

*cj*(*u*1, *u*5) = 0.0 indicates that schools and clinics should be available together, so that any student who falls ill can visit the clinic.

*cs*(*v*1, *v*2) = 1.0, because it is very inefficient for police to patrol only near a police station itself, instead of places at a significant distance from a station. This also ensures that the police force will be present everywhere, as either a station or a patrol unit will be nearby.

*ct*(*w*1, *w*2) = 0.0, because all train stations must have buses going to/from it. On the other hand, one must also be able to reach a train station from riding a bus.

*ct*(*w*3, *w*4) = 0.9 due to the conflicting nature of the two businesses.
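The highlighted entries above can be collected into a symmetric lookup; only the contradiction degrees quoted in the text are encoded here (the full Tables 7–9 are not reproduced, so unknown pairs return `None`):

```python
# Partial encoding of the contradiction degrees quoted in Section 4.3.
known = {
    ("u1", "u3"): 1.0,   # schools vs. factories
    ("u1", "u5"): 0.0,   # schools vs. clinics
    ("v1", "v2"): 1.0,   # police patrols vs. police stations
    ("w1", "w2"): 0.0,   # buses vs. trains
    ("w3", "w4"): 0.9,   # taxis vs. Grab services
}

def c(v, u):
    """Symmetric contradiction degree lookup; c(v, v) = 0 by definition."""
    if v == u:
        return 0.0
    return known.get((v, u), known.get((u, v)))

assert c("u3", "u1") == 1.0          # symmetry: order of arguments is irrelevant
assert c("w2", "w2") == 0.0          # a value carries no contradiction with itself
```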

#### *4.4. Two Plithogenic Sets Representing Two Towns*

From all the attributes of the two towns given, we thus form two plithogenic sets representing each of them: **R** = (*P*, *A*, *V*, *d*, *c*) for Town *P* and **T** = (*B*, *A*, *V*, *h*, *c*) for Town *B*.


Intuitively, it is therefore evident that the property buyer should choose Town *P* over Town *B* as his living place. One of the many reasons is that, in Town *P*, schools and factories are unlikely to appear near the same address, whereas in Town *B* there exist addresses with both schools and factories nearby (so schools and factories are near to each other). Moreover, in Town *P*, the police force is more efficient, as it spreads its manpower across the town rather than merely patrolling near its stations and even leaving some addresses unguarded, as in Town *B*. On top of this, there exist places in Town *B* where taxi and Grab services are near to each other, which can cause conflict or possibly vandalism to each other's property. Town *P* is thus deemed less "chaotic", whereas Town *B* is deemed more "chaotic".

As a result, our entropy measure must assign Town *P* a lower entropy than Town *B*, under certain choices of {*m*1, *m*2, *m*3,*s*1,*s*2, Δ, ω} customized for the particular use of the property buyer.

#### *4.5. An Example of Entropy Measure on Two Towns*

Choose the following to form *E* : PLFT(*U*) → [0, 1] in accordance with Theorem 1:

$$\text{(a)}\quad \omega(c) = \begin{cases} 0, & 0 \le c < \frac{1}{2} \\ 2\left(c - \frac{1}{2}\right), & \frac{1}{2} \le c \le 1 \end{cases}, \text{ for all } c \in [0, 1].$$

$$\text{(b)}\quad \Delta(c) = \begin{cases} 2c, & 0 \le c < \frac{1}{2} \\ 2(1-c), & \frac{1}{2} \le c \le 1 \end{cases}, \text{ for all } c \in [0, 1].$$

$$\text{(c)}\quad s\_1(K) = s\_2(K) = 1 - \sqrt[|K|]{\prod\_{k \in K} (1 - k)}, \text{ for all } K \in \Phi\_{[0, 1]}.$$

(d) *m*1(*K*) = *m*2(*K*) = *m*3(*K*) = mean(*K*), for all *K* ∈ Φ[0,1].

Then, the calculation proceeds in accordance with Theorem 1; the entire workflow is highlighted in Figure 1.

We have $E(\mathbf{R}) = \frac{0.05541 + 0.14126 + 0.25710}{3} = 0.15126$ and $E(\mathbf{T}) = \frac{0.54868 + 0.43571 + 0.39926}{3} = 0.46122$. Town *P* is concluded to have lower entropy, and is therefore less "chaotic", compared to Town *B*.

**Figure 1.** The entire workflow of determining the entropy measure *E*(**R**) of a plithogenic set.
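An end-to-end sketch of this computation, in the style of Theorem 1 with the Section 4.5 choices, is given below. The degree table `d_table` and contradiction function `c` are ILLUSTRATIVE placeholders (Tables 1–9 are not reproduced here), so the resulting value will not match *E*(**R**) = 0.15126; only the nested structure of the computation is the point. Whether μ ranges over all of *V<sub>a</sub>* or only over μ ≠ υ is our reading of the formula, not a claim about the paper.

```python
# Sketch of the Theorem-1-style entropy with the Section 4.5 choices (a)-(d).
from statistics import mean

def omega(cv):                       # choice (a)
    return 0.0 if cv < 0.5 else 2 * (cv - 0.5)

def delta(dv):                       # choice (b)
    return 2 * dv if dv < 0.5 else 2 * (1 - dv)

def s(K):                            # choice (c): s1 = s2 = 1 - |K|-th root of prod(1 - k)
    prod = 1.0
    for k in K:
        prod *= 1 - k
    return 1 - prod ** (1 / len(K))

def eta(x, a, v, mu, d, c):
    return (omega(1 - c(a, v, mu)) * abs(d(x, a, v) - d(x, a, mu))
            + omega(c(a, v, mu)) * abs(d(x, a, v) + d(x, a, mu) - 1))

def entropy(P, V, d, c):
    # m1 = m2 = m3 = mean (choice (d)); the inner s aggregates the etas over mu,
    # and the outer s combines that with Delta(d_a(x, v)).
    return mean(
        mean(
            mean(
                s([delta(d(x, a, v)), s([eta(x, a, v, mu, d, c) for mu in V[a]])])
                for v in V[a])
            for a in V)
        for x in P)

# Toy usage with two attributes and one address (all values hypothetical):
V = {"j": ["u1", "u2"], "s": ["v1", "v2"]}
d_table = {("p", "j", "u1"): 1.0, ("p", "j", "u2"): 0.3,
           ("p", "s", "v1"): 0.6, ("p", "s", "v2"): 0.9}
c = lambda a, v, mu: 0.0 if v == mu else 0.8   # placeholder contradiction degrees
E = entropy(["p"], V, lambda x, a, v: d_table[(x, a, v)], c)
assert 0.0 <= E <= 1.0
```

With real appurtenance and contradiction tables substituted for the placeholders, the same function would reproduce the per-address values averaged in the text.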

### **5. Conclusions**

The plithogenic set **R** = (*P*, *A*, *V*, *d*, *c*) is an improvement to the neutrosophic model, whereby each attribute value is characterized by a degree of appurtenance *d* that describes belongingness with respect to the given criteria, and every pair of attribute values is characterized by a degree of contradiction *c* that describes the degree of similarity or opposition between them. In Section 3 of this paper, we introduced new entropy measures *E*(**R**) for plithogenic sets. The axiomatic definition of the plithogenic entropy was given using some of the axiomatic requirements of neutrosophic entropy together with some additional conditions. Several formulae for the entropy measure of plithogenic sets were introduced in Theorem 1, and these formulae were developed further to satisfy the characteristic features of plithogenic sets, namely exact exclusion (partial order) and a contradiction or dissimilarity degree between each attribute value and the dominant attribute value. The practical applicability of the proposed plithogenic entropy measures was demonstrated by applying them to a multi-attribute decision making problem related to the selection of locations.

Future works related to the plithogenic entropy include studying more examples of entropy measures for plithogenic sets with structures different from the one mentioned in Theorem 1, and to apply the different types of entropy measure for plithogenic sets onto real life datasets. We are also working on developing entropy measures for other types of plithogenic sets such as plithogenic intuitionistic fuzzy sets and plithogenic neutrosophic sets, and the study of the application of these measures in solving real world problems using real life datasets [36–43].

**Author Contributions:** Concept: S.G.Q. and G.S.; methodology: S.G.Q., G.S., F.S. and J.V.; flowchart: S.G.Q.; software: S.H.L. and Q.-T.B.; validation: S.H.L. and V.C.G.; data curation: Q.-T.B. and S.G.Q.; writing—original draft preparation: S.G.Q. and G.S.; writing—review and editing: F.S., J.V., S.H.L., Q.-T.B., V.C.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
