Article

Some Enhanced Distance Measuring Approaches Based on Pythagorean Fuzzy Information with Applications in Decision Making

Keke Wu, Paul Augustine Ejegwa, Yuming Feng, Idoko Charles Onyeke, Samuel Ebimobowei Johnny and Sesugh Ahemen
1 Department of General Education, Chongqing Preschool Education College, Chongqing 404047, China
2 Department of Mathematics, University of Agriculture, Makurdi P.M.B. 2373, Nigeria
3 Key Laboratory of Intelligent Information Processing and Control, Chongqing Three Gorges University, Chongqing 404100, China
4 Department of Computer Science, University of Agriculture, Makurdi P.M.B. 2373, Nigeria
5 Department of Mathematics, Federal University of Technology, Minna P.M.B. 65, Nigeria
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(12), 2669; https://doi.org/10.3390/sym14122669
Submission received: 26 October 2022 / Revised: 6 December 2022 / Accepted: 13 December 2022 / Published: 16 December 2022
(This article belongs to the Section Mathematics)

Abstract
The construct of Pythagorean fuzzy distance measure (PFDM) is a competent measuring tool for curbing the incomplete information often encountered in decision making. PFDM possesses a wider scope of applications than distance measure under intuitionistic fuzzy information. Some Pythagorean fuzzy distance measure approaches (PFDMAs) have been developed and applied in decision making, albeit with some setbacks in terms of accuracy and precision. In this paper, some novel PFDMAs are developed with better accuracy and reliability compared to the existing PFDMAs. To validate the novel PFDMAs, some of their properties are discussed in terms of theorems with proofs. In addition, some applications of the novel PFDMAs to problems of disease diagnosis and pattern recognition are discussed. Furthermore, we present comparative studies of the novel PFDMAs in conjunction with the existing PFDMAs to buttress the merit of the novel approaches in terms of consistency and precision. Finally, some new Pythagorean fuzzy similarity measuring approaches (PFSMAs) based on the novel PFDMAs are presented and applied to solve the problems of disease diagnosis and pattern recognition as well.

1. Introduction

The process of decision making, which involves making choices by identifying options, gathering information, and evaluating alternative resolutions, is a challenging procedure due to incomplete information. A dependable method for carrying out decision making under incomplete information is by means of fuzzy sets. Pattern recognition, decision making, medical diagnosis, and selection processes, among others, have been explored with the instrumentality of fuzzy logic. By definition, a fuzzy set [1] defined in a set U is characterized by a membership degree, symbolized by β, which associates numbers from the interval I = [0, 1] to the elements of U. Nonetheless, a fuzzy set is inadequate since it considers only the degree of membership without minding any other deciding parameters. As a follow-up to this weakness, Atanassov [2] developed a concept called intuitionistic fuzzy set (IFS), which considers a degree of membership β in addition to a degree of nonmembership γ such that 1 − β ≥ γ or, equivalently, 1 − γ ≥ β. Several applications of IFSs have been discussed based on various information measures. Pattern recognition problems [3,4] and medical diagnosis [5] have been carried out based on intuitionistic fuzzy similarity measures. Other sundry approaches such as intuitionistic fuzzy distance measures, intuitionistic fuzzy relations, and intuitionistic fuzzy correlation measures have been used to tackle a number of problems in pattern recognition [6,7] and decision making [8], among others. A method of group decision making by means of intuitionistic fuzzy aggregation operators has been deliberated [9]. A number of applicable distance measures under IFSs were considered in [10,11,12].
The clear drawback of IFS is its restriction that the sum of the degrees of membership and nonmembership must not exceed one. Consequent to this inadequacy, the notion of IFS of second type (IFSST) [8,13] was constructed, which is mostly called Pythagorean fuzzy set (PFS) [14,15]. In a PFS, the sum of the degrees of membership and nonmembership may exceed one, provided the sum of their squares does not. PFSs find numerous applications in the modeling of practical problems. Sundry operators such as the Einstein t-norm, Einstein operator, and Einstein t-conorm were studied under PFSs and applied in decision making [16,17]. An approach for solving multiattribute decision making (MADM) was discussed [18] via interval-valued Pythagorean fuzzy linguistic information. A variant of linguistic PFSs was discussed in [19] and applied to MADM. More so, in [20], a new extension of the TOPSIS technique for multiple criteria decision making (MCDM) based on hesitant PFSs was discussed. Sundry utilizations of Pythagorean fuzzy information measures have been studied in practical decision making [15,21,22], pattern recognition [23], MCDM [24,25,26], etc. Some Pythagorean fuzzy information measures were developed with their applications in real-world problems [27,28,29]. In recent times, various uses of PFSs were discussed using assorted approaches [30,31,32,33,34,35].
In addition, similarity and distance measures have been studied in linear Diophantine fuzzy sets, linguistic linear Diophantine fuzzy sets, and interval-valued bipolar q-rung orthopair fuzzy sets with applications [36,37,38]. In [39,40], the applications of complex PFSs and Pythagorean fuzzy soft sets were used for MCDM, TOPSIS, VIKOR, and MADM, respectively. Methods for data classification have been discussed using distance-based similarity measures under fuzzy parameterized fuzzy soft matrices [41,42], aggregation operator of fuzzy parameterized fuzzy soft matrices [43], and fuzzy parameterized soft k-nearest neighbor classifier [44].
As earlier stated, the applications of PFSs have been possible using several measures. A distance operator is a tool for computing the distance between PFSs drawn from the same space. Many studies on PFDMAs and their practical applications have been conducted. Zhang and Xu [24] pioneered the research on PFDM by introducing a PFDMA and applying it to MCDM. Li and Zeng [45] developed a PFDMA with application to the solution of real-life problems. Assorted PFDMAs were developed and characterized in [46], which were extended versions of the fuzzy distance approaches [47] and the intuitionistic fuzzy distance approaches [11], respectively. The PFDMA in [24] was fortified in [48] to enhance its accuracy. Numerous PFDMAs have been explored and used to solve group MCDM problems [49,50]. In recent times, Hussain and Yang [51] developed a distinct PFDMA via the Hausdorff metric with an application to fuzzy TOPSIS, and Xiao and Ding [52] developed a PFDMA by modifying a PFDMA in [46] and discussed its application in the diagnostic process. Most recently, Mahanta and Panda [53] developed a novel PFDMA and elaborated several of its applications.
The PFDMAs in [24,46,48,52] fall short in terms of precision, although they take cognizance of all the parameters of PFSs, unlike the PFDMAs in [51,53]. The PFDMA in [51] does not consider all the parameters of PFSs, and it is also based on the maximum extreme value alone, without minding the influence of the other values. The PFDMA in [53] is defective because not all the parameters of PFSs were accounted for. Taking all these shortcomings into consideration, it is necessary to develop new PFDMAs that resolve the shortcomings of the existing PFDMAs to foster reliability and precision. In summary, in this paper, we introduce two PFDMAs and their associated PFSMAs with outstanding advantages in terms of accuracy and reliability. The main objectives of the article are to
  • develop new PFDMAs (and their associated PFSMAs) and show their computational processes,
  • authenticate the new PFDMAs (and their associated PFSMAs) by describing their properties in consonance with the axiomatic descriptions of similarity and distance operators,
  • apply the new PFDMAs (and their associated PFSMAs) to the problems of disease diagnosis and pattern recognition, and
  • give comparative studies of the new PFDMAs with some existing PFDMAs to showcase the importance of the newfangled PFDMAs.
The article’s outline by sections is as follows: in Section 2, we give some fundamentals of PFSs and the definitions of distance and similarity operators on PFSs; in Section 3, we present the new PFDMAs (and their associated PFSMAs), a computation example, and applications to the problems of pattern recognition and disease diagnosis; in Section 4, we discuss the comparative studies of the new PFDMAs in conjunction with some other PFDMAs; and in Section 5, we sum up the paper with directions for future studies.

2. Preliminaries

Certain fundamentals of PFSs were presented in [14,15]. Foremost, we describe an IFS as follows.
Definition 1
([2]). An IFS in a set U symbolized by F is defined by
F = {⟨u, β_F(u), γ_F(u)⟩ | u ∈ U},    (1)
where β_F, γ_F : U → [0, 1] describe the grades of membership and nonmembership of u ∈ U such that 0 ≤ β_F(u) + γ_F(u) ≤ 1. For an IFS F in U, δ_F(u) = 1 − β_F(u) − γ_F(u) is the margin of hesitation of F.
Definition 2
([14]). A PFS in U symbolized by k is defined by
k = {⟨u, β_k(u), γ_k(u)⟩ | u ∈ U},    (2)
where β_k, γ_k : U → [0, 1] describe the grades of membership and nonmembership of u ∈ U such that 0 ≤ β²_k(u) + γ²_k(u) ≤ 1. If β²_k(u) + γ²_k(u) ≤ 1, then there is a function δ_k(u) ∈ [0, 1] defined by δ_k(u) = √(1 − β²_k(u) − γ²_k(u)), which is called the grade of indeterminacy of u ∈ U to k.
We can write a PFS k in U as k = ⟨β_k(u), γ_k(u), δ_k(u)⟩ for ease of expression. Now, we recall the basic operations on PFSs.
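For concreteness, the following is a minimal Python sketch (ours, not part of the paper) of a Pythagorean fuzzy grade and its indeterminacy degree as given in Definition 2; the function name is illustrative.

```python
# A minimal sketch (not from the paper): a Pythagorean fuzzy grade and its
# indeterminacy degree per Definition 2. The name `hesitation` is ours.
from math import sqrt

def hesitation(beta: float, gamma: float) -> float:
    """Return delta = sqrt(1 - beta^2 - gamma^2), assuming beta^2 + gamma^2 <= 1."""
    if not 0.0 <= beta**2 + gamma**2 <= 1.0:
        raise ValueError("not a valid Pythagorean fuzzy grade")
    return sqrt(1.0 - beta**2 - gamma**2)

# beta = 0.7, gamma = 0.5 is admissible for a PFS (0.49 + 0.25 = 0.74 <= 1),
# even though beta + gamma = 1.2 > 1 rules it out as an IFS grade.
print(hesitation(0.7, 0.5))  # ~0.5099
```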
Definition 3
([15]). If k, k_1, and k_2 are PFSs in U, then
(i)
k_1 ⊆ k_2 iff β_{k_1}(u) ≤ β_{k_2}(u) and γ_{k_1}(u) ≥ γ_{k_2}(u) ∀ u ∈ U,
(ii)
k_1 = k_2 iff β_{k_1}(u) = β_{k_2}(u) and γ_{k_1}(u) = γ_{k_2}(u) ∀ u ∈ U,
(iii)
k_1 ⊇ k_2 iff β_{k_1}(u) ≥ β_{k_2}(u) and γ_{k_1}(u) ≤ γ_{k_2}(u) ∀ u ∈ U,
(iv)
k̄ = {⟨u, γ_k(u), β_k(u)⟩ | u ∈ U},
(v)
k_1 ∩ k_2 = {⟨u, min{β_{k_1}(u), β_{k_2}(u)}, max{γ_{k_1}(u), γ_{k_2}(u)}⟩ | u ∈ U},
(vi)
k_1 ∪ k_2 = {⟨u, max{β_{k_1}(u), β_{k_2}(u)}, min{γ_{k_1}(u), γ_{k_2}(u)}⟩ | u ∈ U}.
Now, we present the definition of Pythagorean fuzzy distance operator (PFDO) as in [46].
Definition 4
([46]). If k, k_1, and k_2 are PFSs in U, then the PFDO between k_1 and k_2, represented by D(k_1, k_2), is a function D : PFS × PFS → [0, 1] satisfying the following conditions:
(i)
D(k_1, k_2) ∈ [0, 1] (boundedness),
(ii)
D(k_1, k_1) = 0, D(k_2, k_2) = 0 (reflexivity),
(iii)
D(k_1, k_2) = 0 ⇔ k_1 = k_2 (separability),
(iv)
D(k_1, k_2) = D(k_2, k_1) (symmetry),
(v)
D(k_1, k) ≤ D(k_1, k_2) + D(k_2, k) (triangle inequality).
As D ( k 1 , k 2 ) tends to 0, it indicates that k 1 and k 2 are more associated, and as D ( k 1 , k 2 ) tends to 1, it shows that k 1 and k 2 are not associated.
Since the distance operator is a dual of the similarity operator, we now present the definition of the Pythagorean fuzzy similarity operator (PFSO) as follows.
Definition 5
([46]). Suppose k, k_1, and k_2 are PFSs in U; then, the PFSO between k_1 and k_2, represented by S(k_1, k_2), is a function S : PFS × PFS → [0, 1] satisfying the following conditions:
(i)
S(k_1, k_2) ∈ [0, 1],
(ii)
S(k_1, k_1) = 1, S(k_2, k_2) = 1,
(iii)
S(k_1, k_2) = 1 ⇔ k_1 = k_2,
(iv)
S(k_1, k_2) = S(k_2, k_1),
(v)
S(k_1, k) ≤ S(k_1, k_2) + S(k_2, k).
As S ( k 1 , k 2 ) tends to 1, it indicates that k 1 and k 2 are more associated, and as S ( k 1 , k 2 ) tends to 0, it shows that k 1 and k 2 are not associated.

Some Existing PFDMAs/PFSMAs

For arbitrary PFSs k_1 and k_2 in U = {u_1, u_2, …, u_N}, we enumerate some approaches of distance measures (and associated similarity measures) under PFSs. Before enumerating the distance/similarity measures, we write the componentwise differences of k_1 and k_2 in two forms as follows:
(i)
(A, B, C), and
(ii)
(Ã, B̃, C̃),
where
A = β_{k_1}(u_j) − β_{k_2}(u_j), B = γ_{k_1}(u_j) − γ_{k_2}(u_j), C = δ_{k_1}(u_j) − δ_{k_2}(u_j),
Ã = β²_{k_1}(u_j) − β²_{k_2}(u_j), B̃ = γ²_{k_1}(u_j) − γ²_{k_2}(u_j), C̃ = δ²_{k_1}(u_j) − δ²_{k_2}(u_j).
The existing distance/similarity measures for PFSs k 1 and k 2 in U are:
  • Approach in [24]
    D_1(k_1, k_2) = (1/2) Σ_{j=1}^{N} (|Ã| + |B̃| + |C̃|),    S_1(k_1, k_2) = 1 − (1/2) Σ_{j=1}^{N} (|Ã| + |B̃| + |C̃|).    (3)
    The PFDMA D 1 is developed based on Hamming distance function.
  • Approaches in [46]
    D_2(k_1, k_2) = (1/2) Σ_{j=1}^{N} (|A| + |B| + |C|),    S_2(k_1, k_2) = 1 − (1/2) Σ_{j=1}^{N} (|A| + |B| + |C|),    (4)
    D_3(k_1, k_2) = √[(1/2) Σ_{j=1}^{N} (A² + B² + C²)],    S_3(k_1, k_2) = 1 − √[(1/2) Σ_{j=1}^{N} (A² + B² + C²)],    (5)
    D_4(k_1, k_2) = (1/(2N)) Σ_{j=1}^{N} (|A| + |B| + |C|),    S_4(k_1, k_2) = 1 − (1/(2N)) Σ_{j=1}^{N} (|A| + |B| + |C|),    (6)
    D_5(k_1, k_2) = √[(1/(2N)) Σ_{j=1}^{N} (A² + B² + C²)],    S_5(k_1, k_2) = 1 − √[(1/(2N)) Σ_{j=1}^{N} (A² + B² + C²)].    (7)
    The PFDMAs D 2 and D 4 are developed based on Hamming distance function and normalized Hamming distance function, respectively. D 3 and D 5 are developed based on Euclidean distance function and normalized Euclidean distance function, respectively.
  • Approach in [48]
    D_6(k_1, k_2) = (1/(2N)) Σ_{j=1}^{N} (|Ã| + |B̃| + |C̃|),    S_6(k_1, k_2) = 1 − (1/(2N)) Σ_{j=1}^{N} (|Ã| + |B̃| + |C̃|).    (8)
    The PFDMA D 6 is developed based on normalized Hamming distance function.
  • Approach in [51]
    D_7(k_1, k_2) = (1/N) Σ_{j=1}^{N} max{|Ã|, |B̃|},    S_7(k_1, k_2) = 1 − (1/N) Σ_{j=1}^{N} max{|Ã|, |B̃|}.    (9)
    The PFDMA D 7 is developed based on Hausdorff distance function.
  • Approach in [52]
    D_8(k_1, k_2) = √[(1/(2N)) Σ_{j=1}^{N} (Ã² + B̃² + C̃²)],    S_8(k_1, k_2) = 1 − √[(1/(2N)) Σ_{j=1}^{N} (Ã² + B̃² + C̃²)].    (10)
    The PFDMA D 8 is developed based on normalized Euclidean distance function.
  • Approach in [53]
    D_9(k_1, k_2) = [Σ_{j=1}^{N} (|Ã| + |B̃|)] / {N[Σ_{j=1}^{N} (β²_{k_1}(u_j) + γ²_{k_1}(u_j)) + Σ_{j=1}^{N} (β²_{k_2}(u_j) + γ²_{k_2}(u_j))]},    S_9(k_1, k_2) = 1 − D_9(k_1, k_2).    (11)
    The PFDMA D 9 is developed based on cosine distance function.
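For concreteness, here is a minimal Python sketch (ours, with our own function names) of two of the approaches listed above as we read them: the normalized Hamming-type distance D_6 of [48] and the Hausdorff-type distance D_7 of [51]. A PFS is assumed to be a list of (β, γ, δ) triples over U.

```python
# A sketch of D6 (normalized Hamming type, [48]) and D7 (Hausdorff type, [51])
# as read from the displays above. A PFS is a list of (beta, gamma, delta)
# triples over U = {u_1, ..., u_N}; the function names are ours.
def d6(k1, k2):
    n = len(k1)
    total = sum(abs(b1**2 - b2**2) + abs(g1**2 - g2**2) + abs(d1**2 - d2**2)
                for (b1, g1, d1), (b2, g2, d2) in zip(k1, k2))
    return total / (2 * n)

def d7(k1, k2):
    n = len(k1)
    return sum(max(abs(b1**2 - b2**2), abs(g1**2 - g2**2))
               for (b1, g1, _), (b2, g2, _) in zip(k1, k2)) / n
```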

3. Enhanced Distance-Similarity Measuring Approaches for PFSs

For PFSs k_1 and k_2 in U = {u_1, u_2, …, u_N}, the enhanced distance measuring approaches are
D̂(k_1, k_2) = (1/N) Σ_{j=1}^{N} Avg{|β²_{k_1}(u_j) − β²_{k_2}(u_j)|, |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)|, |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|}    (12)
and
D̂*(k_1, k_2) = [Σ_{j=1}^{N} (|β²_{k_1}(u_j) − β²_{k_2}(u_j)| + |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| + |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|)] / [N(‖k_1‖² + ‖k_2‖²)],    (13)
where
‖k_1‖² = Σ_{j=1}^{N} (β²_{k_1}(u_j) + γ²_{k_1}(u_j) + δ²_{k_1}(u_j)),    ‖k_2‖² = Σ_{j=1}^{N} (β²_{k_2}(u_j) + γ²_{k_2}(u_j) + δ²_{k_2}(u_j)),
and Avg stands for the average.
Certainly, ‖k_1‖² = N = ‖k_2‖², since β²_k(u_j) + γ²_k(u_j) + δ²_k(u_j) = 1 for every u_j; hence, (13) can be rewritten as
D̂*(k_1, k_2) = [1/(2N²)] Σ_{j=1}^{N} (|β²_{k_1}(u_j) − β²_{k_2}(u_j)| + |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| + |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|).
The associated PFSMAs are given by (14) and (15) as
Ŝ(k_1, k_2) = 1 − (1/N) Σ_{j=1}^{N} Avg{|β²_{k_1}(u_j) − β²_{k_2}(u_j)|, |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)|, |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|},    (14)
Ŝ*(k_1, k_2) = 1 − [Σ_{j=1}^{N} (|β²_{k_1}(u_j) − β²_{k_2}(u_j)| + |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| + |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|)] / [N(‖k_1‖² + ‖k_2‖²)].    (15)
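The following Python sketch (ours, not part of the paper) implements (12)-(15) under the assumption that the denominator of (13) is N(‖k_1‖² + ‖k_2‖²), as in the rewritten form above; a PFS is represented as a list of (β, γ, δ) triples.

```python
# A minimal sketch of the proposed measures; all names are ours.
def d_hat(k1, k2):
    # (12): mean over U of the average of the three squared-grade gaps
    n = len(k1)
    total = 0.0
    for (b1, g1, d1), (b2, g2, d2) in zip(k1, k2):
        gaps = (abs(b1**2 - b2**2), abs(g1**2 - g2**2), abs(d1**2 - d2**2))
        total += sum(gaps) / 3.0
    return total / n

def d_hat_star(k1, k2):
    # (13): summed gaps over N(||k1||^2 + ||k2||^2); each norm equals N for a PFS
    n = len(k1)
    num = sum(abs(b1**2 - b2**2) + abs(g1**2 - g2**2) + abs(d1**2 - d2**2)
              for (b1, g1, d1), (b2, g2, d2) in zip(k1, k2))
    norm1 = sum(b**2 + g**2 + d**2 for b, g, d in k1)
    norm2 = sum(b**2 + g**2 + d**2 for b, g, d in k2)
    return num / (n * (norm1 + norm2))

def s_hat(k1, k2):       # (14)
    return 1.0 - d_hat(k1, k2)

def s_hat_star(k1, k2):  # (15)
    return 1.0 - d_hat_star(k1, k2)
```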
Now, we apply the new PFDMAs and PFSMAs to find the distance and similarity between two PFSs.

3.1. Computation Example

The new distance techniques between PFSs are developed to resolve the setbacks in the approaches in [24,46,48,51,52,53]. The PFDMA in [24] is unreliable because it is not normalized, although Ã, B̃, and C̃ are presented in a Pythagorean fuzzy setting. The PFDMAs D_2 and D_3 in [46] are not normalized, so they cannot yield dependable results. In addition, A, B, and C in D_2, D_3, D_4, and D_5 are presented in an intuitionistic fuzzy setting rather than a Pythagorean fuzzy setting; thus, their results cannot be trusted.
Although the PFDMA D_6 in [48] seems to be well developed, it does not capture the frequency of |Ã|, |B̃|, and |C̃|; hence, its result cannot be trusted. The PFDMA D_7 in [51] cannot produce a reasonable result because it uses only the maximum extreme value of |Ã| and |B̃| and discards the influence of the hesitation margin.
Though the PFDMA D_8 in [52] seems to be well structured, it does not incorporate the frequency of Ã², B̃², and C̃², and so its result cannot support a reliable interpretation. The PFDMA D_9 in [53] cannot be trusted because it does not consider the influence of the hesitation margin, which can lead to an exclusion error. In the following example, we show the effect of these setbacks on the outcome by juxtaposing the results with those of the new PFDMAs.
Suppose k_1 and k_2 are PFSs in U = {u_1, u_2, u_3} defined by
k_1 = {⟨u_1, 0.5, 0.5⟩, ⟨u_2, 0.7, 0.3⟩, ⟨u_3, 0.1, 0.8⟩},
k_2 = {⟨u_1, 0.4, 0.4⟩, ⟨u_2, 0.6, 0.2⟩, ⟨u_3, 0.0, 0.8⟩}.
The hesitation margins for k_1 and k_2 are
δ_{k_1}(u_1) = 0.7071, δ_{k_1}(u_2) = 0.6481, δ_{k_1}(u_3) = 0.5916,
δ_{k_2}(u_1) = 0.8246, δ_{k_2}(u_2) = 0.7746, δ_{k_2}(u_3) = 0.6.
From this example, we see that k_1 and k_2 are closely related. By deploying the new distance measuring techniques, we obtain their distance as follows (using Table 1):
Thus, we have
Σ_{j=1}^{3} |β²_{k_1}(u_j) − β²_{k_2}(u_j)| = 0.23,    Σ_{j=1}^{3} |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| = 0.14,
Σ_{j=1}^{3} |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)| = 0.37,    ‖k_1‖² = 3, and ‖k_2‖² = 3.
Using (12) and (13) for N = 3 , we have
D ^ ( k 1 , k 2 ) = 0.0800 , D ^ * ( k 1 , k 2 ) = 0.0600 .
For the similarity, we have
S ^ ( k_1, k_2 ) = 1 − 0.08 = 0.9200,    S ^ *( k_1, k_2 ) = 1 − 0.06 = 0.9400.
These results show that k 1 and k 2 are closely related because their distance is small (large for the case of similarity measure), in agreement with the initial observation. Using the existing PFDMAs, we have
D 1 ( k 1 , k 2 ) = 0.3600 , D 2 ( k 1 , k 2 ) = 0.3220 , D 3 ( k 1 , k 2 ) = 0.1868 ,
D 4 ( k 1 , k 2 ) = 0.1073 , D 5 ( k 1 , k 2 ) = 0.1079 , D 6 ( k 1 , k 2 ) = 0.1200 ,
D 7 ( k 1 , k 2 ) = 0.0733 , D 8 ( k 1 , k 2 ) = 0.1294 , D 9 ( k 1 , k 2 ) = 0.0667 .
Using their corresponding PFSMAs, we have
S 1 ( k 1 , k 2 ) = 0.64 , S 2 ( k 1 , k 2 ) = 0.6780 , S 3 ( k 1 , k 2 ) = 0.8132 ,
S 4 ( k 1 , k 2 ) = 0.8927 , S 5 ( k 1 , k 2 ) = 0.8921 , S 6 ( k 1 , k 2 ) = 0.8800 ,
S 7 ( k 1 , k 2 ) = 0.9267 , S 8 ( k 1 , k 2 ) = 0.8706 , S 9 ( k 1 , k 2 ) = 0.9333 .
These results show the effects of the setbacks in the existing PFDMAs/PFSMAs. Although the PFDMA/PFSMA pair D_7/S_7 [51] seems to be more precise than D̂/Ŝ, it cannot be relied upon due to the omission of the hesitation margin. In addition, the new PFDMA/PFSMA pair D̂*/Ŝ*, which is the enhanced version of D_9/S_9 [53] obtained by including the hesitation margin, gives the most precise and reliable result, in consonance with the real relation between the considered PFSs. With these, we can say that the new PFDMAs/PFSMAs are the most reliable approaches because they include all the parametric information of PFSs and yield the most precise results.

3.2. Some Theoretic Results of the New PFDMAs/PFSMAs

What follows are some of the properties of the novel PFDMAs and PFSMAs to authenticate their consistency.
Proposition 1.
If N = 3, then D̂*(k_1, k_2) = D̂(k_1, k_2)/2 and D̂(k_1, k_2) = 2D̂*(k_1, k_2).
Proof. 
Suppose N = 3; then, we have ‖k_1‖² = 3 and ‖k_2‖² = 3. Assume
Σ_{j=1}^{N} |β²_{k_1}(u_j) − β²_{k_2}(u_j)| = α,    Σ_{j=1}^{N} |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| = θ, and
Σ_{j=1}^{N} |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)| = σ.
So, we have
D̂(k_1, k_2) = (1/3) Σ_{j=1}^{3} Avg{|β²_{k_1}(u_j) − β²_{k_2}(u_j)|, |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)|, |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|} = Avg{α, θ, σ}/3 = (α + θ + σ)/9,
and
D̂*(k_1, k_2) = [Σ_{j=1}^{N} (|β²_{k_1}(u_j) − β²_{k_2}(u_j)| + |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| + |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|)] / [N(‖k_1‖² + ‖k_2‖²)] = (α + θ + σ)/(3 × 6) = (1/2) · (α + θ + σ)/9 = D̂(k_1, k_2)/2.
Similarly, it follows that D̂(k_1, k_2) = 2D̂*(k_1, k_2). □
Corollary 1.
If N = 3 , then S ^ * ( k 1 , k 2 ) = S ^ ( k 1 , k 2 ) 2 and S ^ ( k 1 , k 2 ) = 2 S ^ * ( k 1 , k 2 ) .
Proof. 
Similar to the proof of Proposition 1. □
Proposition 2.
For PFSs k 1 and k 2 in U , we have
(i) 
D ^ ( k 1 , k 2 ) = D ^ ( k 2 , k 1 ) ,
(ii) 
D ^ * ( k 1 , k 2 ) = D ^ * ( k 2 , k 1 ) ,
(iii) 
D ^ ( k 1 , k 2 ) = D ^ ( k 1 ¯ , k 2 ¯ ) ,
(iv) 
D ^ * ( k 1 , k 2 ) = D ^ * ( k 1 ¯ , k 2 ¯ ) .
Proof. 
We show the proof of (i) thus
D ^ ( k 1 , k 2 ) = 1 N Σ j = 1 N Avg { | β k 1 2 ( u j ) β k 2 2 ( u j ) | , | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | , | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | } = 1 N Σ j = 1 N ( Avg { | β k 2 2 ( u j ) β k 1 2 ( u j ) | , | γ k 2 2 ( u j ) γ k 1 2 ( u j ) | , | δ k 2 2 ( u j ) δ k 1 2 ( u j ) | } ) = 1 N Σ j = 1 N Avg { | β k 2 2 ( u j ) β k 1 2 ( u j ) | , | γ k 2 2 ( u j ) γ k 1 2 ( u j ) | , | δ k 2 2 ( u j ) δ k 1 2 ( u j ) | } = D ^ ( k 2 , k 1 ) .
Similarly, (ii) follows.
The proof of (iii) holds since
D ^ ( k 1 , k 2 ) = 1 N Σ j = 1 N Avg { | β k 1 2 ( u j ) β k 2 2 ( u j ) | , | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | , | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | } = 1 N Σ j = 1 N Avg { | γ k 2 2 ( u j ) γ k 1 2 ( u j ) | , | β k 2 2 ( u j ) β k 1 2 ( u j ) | , | δ k 2 2 ( u j ) δ k 1 2 ( u j ) | } = D ^ ( k 1 ¯ , k 2 ¯ ) .
Similarly, (iv) holds.
Proposition 3.
If k_1 and k_2 are PFSs in U, then
(i) 
S ^ ( k 1 , k 2 ) = S ^ ( k 2 , k 1 ) ,
(ii) 
S ^ * ( k 1 , k 2 ) = S ^ * ( k 2 , k 1 ) ,
(iii) 
S ^ ( k 1 , k 2 ) = S ^ ( k 1 ¯ , k 2 ¯ ) ,
(iv) 
S ^ * ( k 1 , k 2 ) = S ^ * ( k 1 ¯ , k 2 ¯ ) .
Proof. 
Similar to the proof of Proposition 2. □
Remark 1.
The definitions of intersection and union of PFSs in terms of the new PFDMAs are as follows:
D ^ ( k 1 k 2 , k 1 k 2 ) = 1 N Σ j = 1 N ( Avg { | max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | , | δ k 1 k 2 2 ( u j ) δ k 1 k 2 2 ( u j ) | } ) ,
D ^ ( k 1 k 2 , k 1 k 2 ) = 1 N Σ j = 1 N ( Avg { | min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | , | δ k 1 k 2 2 ( u j ) δ k 1 k 2 2 ( u j ) | } ) ,
D ^ * ( k 1 k 2 , k 1 k 2 ) = 1 N k 1 2 + k 2 2 Σ j = 1 N ( | max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | + | min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | + | δ k 1 k 2 2 ( u j ) δ k 1 k 2 2 ( u j ) | ) ,
D ^ * ( k 1 k 2 , k 1 k 2 ) = 1 N k 1 2 + k 2 2 Σ j = 1 N ( | min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | + | max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | + | δ k 1 k 2 2 ( u j ) δ k 1 k 2 2 ( u j ) | ) .
Theorem 1.
For PFSs k 1 and k 2 in U , we have
(i) 
D ^ ( k 1 , k 1 k 2 ) = D ^ ( k 2 , k 1 k 2 ) ,
(ii) 
D ^ ( k 1 , k 1 k 2 ) = D ^ ( k 2 , k 1 k 2 ) .
Proof. 
The results are established by assuming that δ k 1 k 2 ( u j ) = δ k 2 and δ k 1 k 2 ( u j ) = δ k 1 . Then | δ k 1 k 2 2 ( u j ) δ k 1 2 ( u j ) | = 0 = | δ k 1 k 2 2 ( u j ) δ k 2 2 ( u j ) | . By using (16) and (17), we have
D ^ ( k 1 , k 1 k 2 ) = 1 N Σ j = 1 N ( Avg { | β k 1 2 ( u j ) min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 1 2 ( u j ) max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 1 2 ( u j ) β k 1 2 ( u j ) + β k 2 2 ( u j ) max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 1 2 ( u j ) γ k 1 2 ( u j ) + γ k 2 2 ( u j ) min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 1 2 ( u j ) β k 1 2 ( u j ) β k 2 2 ( u j ) + max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 1 2 ( u j ) γ k 1 2 ( u j ) γ k 2 2 ( u j ) + min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 2 2 ( u j ) max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 2 2 ( u j ) min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 2 2 ( u j ) max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 2 2 ( u j ) min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = D ^ ( k 2 , k 1 k 2 ) ,
which shows (i).
Next, for the proof of (ii), we have
D ^ ( k 1 , k 1 k 2 ) = 1 N Σ j = 1 N ( Avg { | β k 1 2 ( u j ) max { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 1 2 ( u j ) min { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 1 2 ( u j ) β k 1 2 ( u j ) + β k 2 2 ( u j ) min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 1 2 ( u j ) γ k 1 2 ( u j ) + γ k 2 2 ( u j ) max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 1 2 ( u j ) β k 1 2 ( u j ) β k 2 2 ( u j ) + min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 1 2 ( u j ) γ k 1 2 ( u j ) γ k 2 2 ( u j ) + max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 2 2 ( u j ) min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 2 2 ( u j ) max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = 1 N Σ j = 1 N ( Avg { | β k 2 2 ( u j ) min { β k 1 2 ( u j ) , β k 2 2 ( u j ) } | , | γ k 2 2 ( u j ) max { γ k 1 2 ( u j ) , γ k 2 2 ( u j ) } | } ) = D ^ ( k 2 , k 1 k 2 ) .
Theorem 2.
Assume that k 1 and k 2 are PFSs in U and δ k 1 k 2 ( u j ) = δ k 1 k 2 ( u j ) ; then, we have D ^ ( k 1 k 2 , k 1 k 2 ) = D ^ ( k 1 k 2 , k 1 k 2 ) .
Proof. 
It follows that | δ k 1 k 2 2 ( u j ) δ k 1 k 2 2 ( u j ) | = 0 since δ k 1 k 2 ( u j ) = δ k 1 k 2 ( u j ) . Synthesizing (16) and (17), we get
D ^ ( k 1 k 2 , k 1 k 2 ) = 1 N Σ j = 1 N ( Avg { | min β k 1 ( u j ) , β k 2 ( u j ) max β k 1 ( u j ) , β k 2 ( s i ) | + | max γ k 1 ( u j ) , γ k 2 ( u j ) min γ k 1 ( u j ) , γ k 2 ( u j ) | } ) = 1 N Σ j = 1 N ( Avg { | β k 1 ( u j ) + β k 2 ( u j ) max β k 1 ( u j ) , β k 2 ( u j ) [ β k 1 ( u j ) + β k 2 ( u j ) min β k 1 ( u j ) , β k 2 ( u j ) | + | γ k 1 ( u j ) + γ k 2 ( u j ) min γ k 1 ( u j ) , γ k 2 ( u j ) γ k 1 ( u j ) + γ k 2 ( u j ) max γ k 1 ( u j ) , γ k 2 ( u j ) | } ) = 1 N Σ j = 1 N ( Avg { | max β k 1 ( u j ) , β k 2 ( u j ) min β k 1 ( u j ) , β k 2 ( u j ) | + | min γ k 1 ( u j ) , γ k 2 ( u j ) max γ k 1 ( u j ) , γ k 2 ( u j ) | } ) = D ^ ( k 1 k 2 , k 1 k 2 ) .
Corollary 2.
For PFSs k 1 and k 2 in U , we have
(i) 
S ^ ( k 1 , k 1 k 2 ) = S ^ ( k 2 , k 1 k 2 ) ,
(ii) 
S ^ ( k 1 , k 1 k 2 ) = S ^ ( k 2 , k 1 k 2 ) ,
(iii) 
S ^ ( k 1 k 2 , k 1 k 2 ) = S ^ ( k 1 k 2 , k 1 k 2 ) .
Proof. 
Similar to the proof of Theorems 1 and 2. □
Proposition 4.
For any two PFSs k 1 and k 2 in U , we have
(i) 
D ^ * ( k 1 , k 1 k 2 ) = D ^ * ( k 2 , k 1 k 2 ) ,
(ii) 
D ^ * ( k 1 , k 1 k 2 ) = D ^ * ( k 2 , k 1 k 2 ) ,
(iii) 
D ^ * ( k 1 k 2 , k 1 k 2 ) = D ^ * ( k 1 k 2 , k 1 k 2 ) .
Proof. 
Using (18) and (19), and the logic in Theorems 1 and 2, the proof follows. □
Corollary 3.
For any two PFSs k 1 and k 2 in U , we have
(i) 
S ^ * ( k 1 , k 1 k 2 ) = S ^ * ( k 2 , k 1 k 2 ) ,
(ii) 
S ^ * ( k 1 , k 1 k 2 ) = S ^ * ( k 2 , k 1 k 2 ) ,
(iii) 
S ^ * ( k 1 k 2 , k 1 k 2 ) = S ^ * ( k 1 k 2 , k 1 k 2 ) .
Proof. 
Similar to the proof of Proposition 4. □
Proposition 5.
If k 1 and k 2 are PFSs in U , then D ^ ( k 1 , k 2 ) = 0 and D ^ * ( k 1 , k 2 ) = 0 if and only if k 1 = k 2 .
Proof. 
First, assume D̂(k_1, k_2) = 0. Then,
|β²_{k_1}(u_j) − β²_{k_2}(u_j)| = 0,    |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| = 0,
and
|δ²_{k_1}(u_j) − δ²_{k_2}(u_j)| = 0.
Hence,
β_{k_1}(u_j) = β_{k_2}(u_j),    γ_{k_1}(u_j) = γ_{k_2}(u_j),
and
δ_{k_1}(u_j) = δ_{k_2}(u_j),
and so k_1 = k_2.
Second, if k_1 = k_2, then
D̂(k_1, k_2) = (1/N) Σ_{j=1}^{N} Avg{|β²_{k_1}(u_j) − β²_{k_2}(u_j)|, |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)|, |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)|} = 0,
which completes the proof.
The proof of the second part is similar. □
Proposition 6.
If k 1 and k 2 are PFSs in U , then S ^ ( k 1 , k 2 ) = 1 and S ^ * ( k 1 , k 2 ) = 1 if and only if k 1 = k 2 .
Proof. 
Similar to the proof of Proposition 5. □
Theorem 3.
Suppose k_1 and k_2 are PFSs in U; then, D̂(k_1, k_2), D̂*(k_1, k_2) ∈ [0, 1].
Proof. 
To establish D̂(k_1, k_2), D̂*(k_1, k_2) ∈ [0, 1], we show that
(i)
D̂(k_1, k_2), D̂*(k_1, k_2) ≥ 0,
(ii)
D̂(k_1, k_2), D̂*(k_1, k_2) ≤ 1.
The proof of (i) follows since
|β²_{k_1}(u_j) − β²_{k_2}(u_j)| ≥ 0, |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| ≥ 0, and |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)| ≥ 0.
Next, we prove (ii) as follows. Assume that
Σ_{j=1}^{N} |β²_{k_1}(u_j) − β²_{k_2}(u_j)| = α,    Σ_{j=1}^{N} |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| = θ, and
Σ_{j=1}^{N} |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)| = σ.
Hence
D ^ ( k 1 , k 2 ) = Σ j = 1 N Avg { | β k 1 2 ( u j ) β k 2 2 ( u j ) | , | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | , | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | } N Avg { Σ j = 1 N | β k 1 2 ( u j ) β k 2 2 ( u j ) | , Σ j = 1 N | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | , Σ j = 1 N | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | } N = Avg { α , θ , σ } N = α + θ + σ 3 N .
Then
D ^ ( k 1 , k 2 ) 1 = α + θ + σ 3 N 1 = α + θ + σ 3 N 3 N = ( 3 N α θ σ ) 3 N 0 .
Thus, D̂(k_1, k_2) − 1 ≤ 0 implies D̂(k_1, k_2) ≤ 1.
Similarly,
D ^ * ( k 1 , k 2 ) = Σ j = 1 N | β k 1 2 ( u j ) β k 2 2 ( u j ) | + | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | + | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | N k 1 2 + k 2 2 Σ j = 1 N | β k 1 2 ( u j ) β k 2 2 ( u j ) | + Σ j = 1 N | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | + Σ j = 1 N | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | N k 1 2 + k 2 2 = α + θ + σ N k 1 2 + k 2 2 .
Then,
D ^ * ( k 1 , k 2 ) 1 = α + θ + σ N k 1 2 + k 2 2 1 = α + θ + σ N k 1 2 + k 2 2 N k 1 2 + k 2 2 = N k 1 2 + k 2 2 ( α + θ + σ ) N k 1 2 + k 2 2 0 ,
which implies that D ^ * ( k 1 , k 2 ) 1 . Hence D ^ ( k 1 , k 2 ) , D ^ * ( k 1 , k 2 ) [ 0 , 1 ] . □
Corollary 4.
Suppose k_1 and k_2 are PFSs in U; then, Ŝ(k_1, k_2), Ŝ*(k_1, k_2) ∈ [0, 1].
Proof. 
Similar to the proof of Theorem 3. □
Theorem 4.
Suppose k_1, k_2, and k are PFSs in U; then, the triangle inequality holds for D̂ and D̂*, respectively.
Proof. 
We can rewrite D ^ ( k 1 , k 2 ) as
D ^ ( k 1 , k 2 ) = 1 N Σ j = 1 N Avg { | β k 1 2 ( u j ) β k 2 2 ( u j ) | , | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | , | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | } = 1 3 N Σ j = 1 N | β k 1 2 ( u j ) β k 2 2 ( u j ) | + | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | + | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | .
We need to prove that D̂(k_1, k) ≤ D̂(k_1, k_2) + D̂(k_2, k) and D̂*(k_1, k) ≤ D̂*(k_1, k_2) + D̂*(k_2, k), respectively.
Suppose
D ^ ( k 1 , k ) = max 1 j N 1 3 N | β k 1 2 ( u j ) β k 2 ( u j ) | + | γ k 1 2 ( u j ) γ k 2 ( u j ) | + | δ k 1 2 ( u j ) δ k 2 ( u j ) | = 1 3 N | β k 1 2 ( u l ) β k 2 ( u l ) | + | γ k 1 2 ( u l ) γ k 2 ( u l ) | + | δ k 1 2 ( u l ) δ k 2 ( u l ) | ,
for some fixed l j = 1 , 2 , , l , , N . Then
| β k 1 2 ( u l ) β k 2 ( u l ) | | β k 1 2 ( u l ) β k 2 2 ( u l ) | + | β k 2 2 ( u l ) β k 2 ( u l ) | ,
| γ k 1 2 ( u l ) γ k 2 ( u l ) | | γ k 1 2 ( u l ) γ k 2 2 ( u l ) | + | γ k 2 2 ( u l ) γ k 2 ( u l ) | ,
| δ k 1 2 ( u l ) δ k 2 ( u l ) | | δ k 1 2 ( u l ) δ k 2 2 ( u l ) | + | δ k 2 2 ( u l ) δ k 2 ( u l ) | ,
and
D ^ ( k 1 , k 2 ) | β k 1 2 ( u l ) β k 2 2 ( u l ) | + | γ k 1 2 ( u l ) γ k 2 2 ( u l ) | + | δ k 1 2 ( u l ) δ k 2 2 ( u l ) | ,
D ^ ( k 2 , k ) | β k 2 2 ( u l ) β k 2 ( u l ) | + | γ k 2 2 ( u l ) γ k 2 ( u l ) | + | δ k 2 2 ( u l ) δ k 2 ( u l ) | .
Hence, D̂(k_1, k) ≤ D̂(k_1, k_2) + D̂(k_2, k).
Since
D ^ * ( k 1 , k 2 ) = Σ j = 1 N | β k 1 2 ( u j ) β k 2 2 ( u j ) | + | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | + | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | N Σ j = 1 N β k 1 2 ( u j ) + γ k 1 2 ( u j ) + δ k 1 2 ( u j ) + Σ j = 1 N β k 2 2 ( u j ) + γ k 2 2 ( u j ) + δ k 2 2 ( u j ) = 1 2 N 2 Σ j = 1 N | β k 1 2 ( u j ) β k 2 2 ( u j ) | + | γ k 1 2 ( u j ) γ k 2 2 ( u j ) | + | δ k 1 2 ( u j ) δ k 2 2 ( u j ) | ,
then the proof of D̂*(k_1, k) ≤ D̂*(k_1, k_2) + D̂*(k_2, k) is similar. □
Corollary 5.
If k_1, k_2, and k are PFSs in U, then the triangle inequality holds for Ŝ and Ŝ*, respectively.
Proof. 
Similar to the proof of Theorem 4. □
Remark 2.
We observe that (i) (12) extends (9) and modifies it to avoid information loss, and (ii) (13) extends (11) by taking account of hesitation margins.

3.3. Decision Making Applications

This section discusses the processes of pattern recognition and disease diagnosis based on the new PFDMAs and PFSMAs. The concepts of Pythagorean fuzzy distance and similarity measures are explicated in this article because
  • PFSs have a wider scope of application and are equipped to curb incomplete information in decision making, and
  • they have been proven to be efficient soft computing devices appropriate for making worthwhile decisions.
Now, we present the procedure that guides the utilization of the new PFDMAs and PFSMAs, respectively. Suppose there are N choices represented as PFSs A_j for j = 1, 2, …, N drawn from the space U = {u_1, u_2, …, u_N}. In addition, if there is an unknown sample symbolized as a PFS B, which is to be associated with some A_j, then
D̂(A_j, B) = min{D̂(A_1, B), D̂(A_2, B), …, D̂(A_N, B)}
or
D̂*(A_j, B) = min{D̂*(A_1, B), D̂*(A_2, B), …, D̂*(A_N, B)}
decides the grouping of A_j and B.
In the same vein,
Ŝ(A_j, B) = max{Ŝ(A_1, B), Ŝ(A_2, B), …, Ŝ(A_N, B)}
or
Ŝ*(A_j, B) = max{Ŝ*(A_1, B), Ŝ*(A_2, B), …, Ŝ*(A_N, B)}
decides the grouping of A_j and B.
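A minimal Python sketch of this decision rule (ours; the helper name and data layout are illustrative) is given below. The `distance` argument can be any of the measures sketched earlier, for example D̂ or D̂*.

```python
# Assign the unknown sample b to the alternative with the smallest distance
# (for a similarity measure, take the maximum instead). All names are ours.
def classify(alternatives, b, distance):
    """alternatives: dict mapping labels to PFSs (lists of (beta, gamma, delta))."""
    scores = {label: distance(a, b) for label, a in alternatives.items()}
    best = min(scores, key=scores.get)   # argmin over the computed distances
    return best, scores
```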

3.3.1. Pattern Recognition

First and foremost, we discuss pattern recognition based on the new PFDMAs and PFSMAs, owing to the uncertainties involved in classifying patterns. In fact, the approach of pattern recognition via PFSs is outstanding for dependable pattern association.
Assume there are three patterns P_1, P_2, and P_3, exemplified as PFSs in U = {u_1, u_2, u_3}, and an unknown pattern Q represented as a PFS in U. We seek to categorize Q into one of P_1, P_2, and P_3 by deploying the new PFDMA and PFSMA, respectively. The patterns are given in Table 2.
With the new distance and similarity measuring approaches, we obtain the results in Table 3.
By letting M = ( P 1 , Q ) , N = ( P 2 , Q ) , and O = ( P 3 , Q ) , we obtain Figure 1.
From the information in Table 3 and Figure 1, we can say that the unknown pattern Q is associated with pattern P_2, since the distance of (P_2, Q) is the smallest (and its similarity the greatest). In this example, the uncategorized pattern is associated by using the smallest distance and the greatest similarity, devoid of any ambiguity. Owing to the presence of imprecision in the process of pattern recognition, the approaches of PFDMA and PFSMA are of massive importance in this process.
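As an illustration, the following Python sketch (ours) applies D̂ of (12) to the Table 2 data, recovering δ² as 1 − β² − γ²; the resulting ranking agrees with Table 3.

```python
# Applying D-hat (12) to the Table 2 patterns; values are read off Table 2.
patterns = {
    "P1": [(0.1, 0.1), (0.5, 0.1), (0.1, 0.9)],
    "P2": [(0.5, 0.5), (0.7, 0.3), (0.0, 0.8)],
    "P3": [(0.7, 0.2), (0.1, 0.8), (0.4, 0.4)],
}
q = [(0.4, 0.4), (0.6, 0.2), (0.0, 0.8)]

def d_hat(k1, k2):
    n = len(k1)
    total = 0.0
    for (b1, g1), (b2, g2) in zip(k1, k2):
        d1sq, d2sq = 1 - b1**2 - g1**2, 1 - b2**2 - g2**2
        total += (abs(b1**2 - b2**2) + abs(g1**2 - g2**2) + abs(d1sq - d2sq)) / 3
    return total / n

for label, p in patterns.items():
    print(label, round(d_hat(p, q), 4))
# The smallest value is obtained for P2, so Q is grouped with P2, as in Table 3.
```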

3.3.2. Diagnostic Analysis

Disease diagnosis is a process that needs diligence to forestall erroneous medical analysis and its attendant consequences on a patient’s health status. PFDMAs and PFSMAs are effective in making medical diagnoses because PFSs are endowed with the capacity to curb the uncertainties and imprecision in the diagnostic process. The diagnostic process is carried out using simulated medical data.
Take D = { D 1 , D 2 , D 3 , D 4 , D 5 } as a set of maladies signified by PFSs, where D 1 stands for viral fever, D 2 stands for malaria, D 3 stands for typhoid fever, D 4 stands for stomach pain and D 5 stands for chest pain, respectively. Similarly, take U = { u 1 , u 2 , u 3 , u 4 , u 5 } as a set of symptoms where u 1 represents temperature, u 2 represents headache, u 3 represents stomach pain, u 4 represents cough, and u 5 represents chest pain, respectively.
Again, suppose that a patient represented by a PFS P went for a medical consultation/test to ascertain his medical status, and after the medical consultation/test, the patient P expresses the symptoms in U. The symptoms and the diseases/patient are associated by Δ : U → D/P. The Pythagorean fuzzy medical information of D and P under U is given in Table 4.
The diagnosis is decided by calculating the distance/similarity of D and P using the new PFDMAs and PFSMAs, respectively. Table 5 presents the results via the new approaches.
By taking M = (D_1, P), N = (D_2, P), O = (D_3, P), P = (D_4, P), and Q = (D_5, P), we have Figure 2.
From the information in Table 5 and Figure 2, we can say that P is mainly suffering from viral fever since the distance for ( D 1 , P ) is the smallest, and greatest for the case of similarity. In addition, the patient should be examined for malaria fever and typhoid fever for an effective treatment since the patient has some considerable symptoms of malaria fever and typhoid fever as well.
Furthermore, disease diagnosis using the idea of PFDMA and PFSMA is essential for the reason that PFSs are built with the capacity to handle incomplete information. Medical decision making could be much better if the process of medical diagnosis is enhanced with PFDMAs and PFSMAs via identifying the least disease-patient distance and the greatest disease-patient similarity.

4. Comparative Studies

This section presents the comparative analysis of the new PFDMAs and the existing PFDMAs with regards to the application examples.

4.1. Comparative Analysis (Pattern Recognition)

Using the information in Remark 2, we apply the approaches in [51,53] to the data in Table 2 and compare their results with those of (12) and (13), respectively. Table 6 contains the outputs. By letting M stand for (P_1, Q), N stand for (P_2, Q), and O stand for (P_3, Q), we plot the graph of Table 6 in Figure 3.
Using the information in Table 6 and Figure 3, we observe that the PFDMAs give the same pattern recognition, and the new approaches give better results compared to the approaches from which they were modified (i.e., D̂ is better compared to the method in [51], and D̂* is better compared to the method in [53]). Although the approaches in [51,53] seem to be better compared to the new approaches at (P_2, Q), they cannot be depended upon because they do not include the hesitation margin.
Now, the comparison of the new PFDMAs with the existing PFDMAs [24,46,48,51,52,53] and their associated similarities is shown in Table 7 and Table 8 to showcase the edge of the new methods.
From Table 7, we see the same ranking from all the approaches, and among the new approaches, D̂* in particular is a more dependable PFDMA since it produces the least distance measuring values. In addition, D_1 and D_3 yield unrealistic results.
Similarly, the same ranking is observed from all the approaches, and the new approach Ŝ* is a more dependable PFSMA since it produces the greatest similarity measuring values. It is needful to note that S_1 and S_3 are not good similarity measures.

4.2. Comparative Analysis (Diagnostic Analysis)

Using the information in Remark 2, we apply the approaches in [51,53] to the Pythagorean fuzzy medical data (Table 4) and compare their results with those of (12) and (13), respectively, as shown in Table 9.
By letting M stand for (D_1, P), N stand for (D_2, P), O stand for (D_3, P), P stand for (D_4, P), and Q stand for (D_5, P), we plot the graph of Table 9 as Figure 4.
We observe that the new approaches give better results compared to the approaches from which they were modified (i.e., D̂ is better compared to the method in [51], and D̂* is better compared to the method in [53]).
Now, the comparison of the new PFDMAs with the existing PFDMAs [24,46,48,51,52,53] based on the Pythagorean fuzzy medical data is shown in Table 10 and Table 11 to showcase the merits of the new PFDMAs and PFSMAs, respectively.
From Table 10, we see that of the new methods, D̂* is an especially dependable PFDMA since it yields the smallest distance measuring values. In addition, D_1, D_3, and D_4 yield results that infringe upon a condition of distance measure. In fact, D_1, D_3, and D_4 are not dependable PFDMAs.
Using the associated similarity approaches, we obtain the results in Table 11.
Likewise, the new approach Ŝ* is more dependable compared to the other PFSMAs because it gives the greatest similarity measuring values. In addition, S_1, S_3, and S_4 produce results that infringe upon a condition of similarity measure. In fact, S_1, S_3, and S_4 are not appropriate PFSMAs.

4.3. Advantages of the New Approaches

The new PFDMAs and PFSMAs are much more effective compared to the existing PFDMAs and PFSMAs because
  • the developed PFDMAs (and associated PFSMAs) satisfy the axiomatic description of distance (and similarity) measures, in contrast to some of the PFDMAs (and associated PFSMAs) in [24,46],
  • the proposed PFDMAs (and associated PFSMAs) give precise and reasonable outputs that enhance real interpretation, devoid of the exclusion error observed in [51,53], and
  • the proposed PFDMAs (and associated PFSMAs) include all the parametric information of PFSs (i.e., the degrees of membership, nonmembership, and hesitation), in contrast to the approaches in [51,53].

5. Conclusions

In this study, PFDM and PFSM have been explored, and some new PFDMAs (and associated PFSMAs) were developed to enhance applications in areas such as clustering analysis, pattern recognition, decision making processes, and machine learning. A computational example for the developed PFDMAs (and associated PFSMAs) was shown, and properties of the new PFDMAs (and associated PFSMAs) were discussed to explain their agreement with the notion of classical distance (and associated similarity) measures. In addition, the applications of the new PFDMAs (and associated PFSMAs) were discussed in the solution of pattern recognition problems and disease diagnosis. Moreover, comparative studies of the new PFDMAs (and associated PFSMAs) with some existing PFDMAs (and associated PFSMAs) were presented to validate the merits of the new PFDMAs (and associated PFSMAs). From the comparative studies, we see that the developed PFDMAs (and associated PFSMAs) (i) satisfy the axiomatic description of distance (and similarity) measures, in contrast to some of the distance (similarity) measuring approaches in [24,46], (ii) give accurate and reasonable outputs that enhance real interpretation, devoid of the error of exclusion in [51,53], and (iii) include the complete parametric information of PFSs, in contrast to the PFDMAs in [51,53]. The developed PFDMAs (and their associated PFSMAs) could be extended to TOPSIS, MCDM, MADM, and VIKOR methods to solve group decision making problems. In addition, the developed PFDMAs (and their associated PFSMAs) can be extended to other uncertain environments like interval-valued PFSs, Fermatean fuzzy sets, interval-valued Fermatean fuzzy sets, linear Diophantine fuzzy sets, etc. However, the developed PFDMAs (and their associated PFSMAs) can only be used in triparametric environments, and as such, they cannot be extended to uncertain environments such as spherical fuzzy sets, neutrosophic sets, and picture fuzzy sets except with modification.

Author Contributions

Conceptualization, P.A.E.; Methodology, P.A.E. and S.E.J.; Software, I.C.O.; Validation, K.W., Y.F. and I.C.O.; Data curation, S.A.; Writing—original draft, S.E.J.; Writing—review & editing, Y.F. and S.A.; Supervision, K.W. and Y.F.; Funding acquisition, K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJZD-M202201204), and the Foundation of Intelligent Ecotourism Subject Group of Chongqing Three Gorges University (Nos. zhlv20221003, zhlv20221006).

Data Availability Statement

This paper has no associated data.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  2. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Set Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  3. Boran, F.E.; Akay, D. A biparametric similarity measure on intuitionistic fuzzy sets with applications to pattern recognition. Inf. Sci. 2014, 255, 45–57. [Google Scholar] [CrossRef]
  4. Chen, S.M.; Chang, C.H. A novel similarity measure between Atanassov’s intuitionistic fuzzy sets based on transformation techniques with applications to pattern recognition. Inf. Sci. 2015, 291, 96–114. [Google Scholar] [CrossRef]
  5. Szmidt, E.; Kacprzyk, J. Intuitionistic fuzzy sets in some medical applications. Note IFS 2001, 7, 58–64. [Google Scholar]
  6. Wang, W.; Xin, X. Distance measure between intuitionistic fuzzy sets. Pattern Recog. Lett. 2005, 26, 2063–2069. [Google Scholar] [CrossRef]
  7. Hatzimichailidis, A.G.; Papakostas, A.G.; Kaburlasos, V.G. A novel distance measure of intuitionistic fuzzy sets and its application to pattern recognition problems. Int. J. Intell. Syst. 2012, 27, 396–409. [Google Scholar] [CrossRef]
  8. Atanassov, K.T. Intuitionistic Fuzzy Sets: Theory and Applications; Physica-Verlag: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  9. Liu, P.; Chen, S.M. Group decision making based on Heronian aggregation operators of intuitionistic fuzzy numbers. IEEE Trans. Cybern. 2017, 47, 2514–2530. [Google Scholar] [CrossRef]
  10. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Set. Syst. 1996, 78, 305–315. [Google Scholar] [CrossRef]
  11. Szmidt, E.; Kacprzyk, J. Distances between intuitionistic fuzzy sets. Fuzzy Set. Syst. 2000, 114, 505–518. [Google Scholar] [CrossRef]
  12. Davvaz, B.; Sadrabadi, E.H. An application of intuitionistic fuzzy sets in medicine. Int. J. Biomath. 2016, 9, 1650037. [Google Scholar] [CrossRef]
  13. Atanassov, K.T. Geometrical Interpretation of the Elements of the Intuitionistic Fuzzy Objects, Mathematical Foundations of Artificial Intelligence Seminar, Sofia, 1989, Preprint IM-MFAIS-1-89. Repr. Int. J. Bioautom. 2016, 20, S27–S42. [Google Scholar]
  14. Yager, R.R. Pythagorean Membership Grades in Multicriteria Decision Making; Technical Report MII-3301; Machine Intelligence Institute Iona College: New Rochelle, NY, USA, 2013. [Google Scholar]
  15. Yager, R.R.; Abbasov, A.M. Pythagorean membership grades, complex numbers and decision making. Int. J. Intell. Syst. 2013, 28, 436–452. [Google Scholar] [CrossRef]
  16. Garg, H. A new generalized Pythagorean fuzzy information aggregation using Einstein operations and its application to decision making. Int. J. Intell. Syst. 2016, 31, 886–920. [Google Scholar] [CrossRef]
  17. Garg, H. Generalized Pythagorean fuzzy geometric aggregation operators using Einstein t-norm and t-conorm for multicriteria decision making process. Int. J. Intell. Syst. 2017, 32, 597–630. [Google Scholar] [CrossRef]
  18. Du, Y.Q.; Hou, F.; Zafar, W.; Yu, Q.; Zhai, Y. A novel method for multiattribute decision making with interval-valued Pythagorean fuzzy linguistic information. Int. J. Intell. Syst. 2017, 32, 1085–1112. [Google Scholar] [CrossRef]
  19. Garg, H. Linguistic Pythagorean fuzzy sets and its applications in multiattribute decision making process. Int. J. Intell. Syst. 2018, 33, 1234–1263. [Google Scholar] [CrossRef]
  20. Liang, D.; Xu, Z. The new extension of TOPSIS method for multiple criteria decision making with hesitant Pythagorean fuzzy fets. Appl. Soft. Comput. 2017, 60, 167–179. [Google Scholar] [CrossRef]
  21. Ejegwa, P.A.; Wen, S.; Feng, Y.; Zhang, W.; Liu, J. A three-way Pythagorean fuzzy correlation coefficient approach and its applications in deciding some real-life problems. Appl. Intell. 2022. [Google Scholar] [CrossRef]
  22. Ejegwa, P.A.; Wen, S.; Feng, Y.; Zhang, W.; Chen, J. Some new Pythagorean fuzzy correlation techniques via statistical viewpoint with applications to decision-making problems. J. Intell. Fuzzy Syst. 2021, 40, 9873–9886. [Google Scholar] [CrossRef]
  23. Ejegwa, P.A.; Wen, S.; Feng, Y.; Zhang, W. Determination of pattern recognition problems based on a Pythagorean fuzzy correlation measure from statistical viewpoint. In Proceedings of the 13th International Conference Advanced Computational Intelligence, Wanzhou, China, 14–16 May 2021; pp. 132–139. [Google Scholar]
  24. Zhang, X.L.; Xu, Z.S. Extension of TOPSIS to Multiple Criteria Decision Making with Pythagorean Fuzzy Sets. Int. J. Intell. Syst. 2014, 29, 1061–1078. [Google Scholar] [CrossRef]
  25. Ejegwa, P.A.; Jana, C.; Pal, M. Medical diagnostic process based on modified composite relation on Pythagorean fuzzy multisets. Granul. Comput. 2022, 7, 15–23. [Google Scholar] [CrossRef]
  26. Ejegwa, P.A.; Onyeke, I.C. Some new distance and similarity algorithms for Pythagorean fuzzy sets with application in decision-making problems. In Handbook of Research on Advances and Applications of Fuzzy Sets and Logic; Broumi, S., Ed.; IGI Global: Hershey, PA, USA, 2022; pp. 192–211. [Google Scholar]
  27. Peng, X.; Yuan, H.; Yang, Y. Pythagorean fuzzy information measures and their applications. Int. J. Intell. Syst. 2017, 32, 991–1029. [Google Scholar] [CrossRef]
  28. Peng, X. New similarity measure and distance measure for Pythagorean fuzzy set. Complex Intell. Syst. 2019, 5, 101–111. [Google Scholar] [CrossRef] [Green Version]
  29. Ejegwa, P.A.; Wen, S.; Feng, Y.; Zhang, W.; Tang, N. Novel Pythagorean fuzzy correlation measures via Pythagorean fuzzy deviation, variance and covariance with applications to pattern recognition and career placement. IEEE Trans. Fuzzy Syst. 2022, 30, 1660–1668. [Google Scholar] [CrossRef]
  30. Meng, L.; Wei, X. Research on evaluation of sustainable development of new urbanization from the perspective of urban agglomeration under the Pythagorean fuzzy sets. Discret. Dyn. Nat. Soc. 2021. [Google Scholar] [CrossRef]
  31. Wan, Z.; Shi, M.; Yang, F.; Zhu, G. A novel Pythagorean group decision-making method based on evidence theory and interactive power averaging operator. Complexity 2021. [Google Scholar] [CrossRef]
  32. Zulqarnain, R.M.; Siddique, I.; Jarad, F.; Hamed, Y.S.; Abualnaja, K.M.; Iampan, A. Einstein aggregation operators for Pythagorean fuzzy soft sets with their application in multiattribute group decision-making. J. Funct. Spaces 2022. [Google Scholar] [CrossRef]
  33. Saeed, M.; Ahmad, M.R.; Rahman, A.U. Refined Pythagorean fuzzy sets: Properties, set-theoretic operations and axiomatic results. J. Comput. Cogn. Eng. 2022. [Google Scholar] [CrossRef]
  34. Akram, M.; Zahid, K.; Alcantud, J.C.R. A new outranking method for multicriteria decision making with complex Pythagorean fuzzy information. Neural Comput. Appl. 2022, 34, 8069–8102. [Google Scholar] [CrossRef]
  35. Ye, J.; Chen, T.Y. Pythagorean fuzzy sets combined with the PROMETHEE method for the selection of cotton woven fabric. J. Nat. Fibers 2022. [Google Scholar] [CrossRef]
  36. Kamaci, H.; Marinkovic, D.; Petchimuthu, S.; Riaz, M.; Ashra, S.F. Novel distance-measures-based extended TOPSIS method under linguistic linear Diophantine fuzzy information. Symmetry 2022, 14, 2140. [Google Scholar] [CrossRef]
  37. Kamaci, H.; Petchimuthu, S. Some similarity measures for interval-valued bipolar q-rung orthopair fuzzy sets and their application to supplier evaluation and selection in supply chain management. Env. Dev. Sustain. 2022. [Google Scholar] [CrossRef]
  38. Kamaci, H. Complex linear Diophantine fuzzy sets and their cosine similarity measures with applications. Complex Intell. Syst. 2022, 8, 1281–1305. [Google Scholar] [CrossRef]
  39. Naeem, K.; Riaz, M.; Karaaslan, F. A mathematical approach to medical diagnosis via Pythagorean fuzzy soft TOPSIS, VIKOR and generalized aggregation operators. Complex Intell. Syst. 2021, 7, 2783–2795. [Google Scholar] [CrossRef]
  40. Naeem, K.; Riaz, M. Pythagorean fuzzy soft sets-based MADM. In Pythagorean Fuzzy Sets: Theory and Applications; Garg, H., Ed.; Springer Nature: Singapore, 2021. [Google Scholar]
  41. Memis, S.; Enginoglu, S.; Erkan, U. Numerical data classification via distance-based similarity measures of fuzzy parameterized fuzzy soft matrices. IEEE Access 2021, 9, 88583–88601. [Google Scholar] [CrossRef]
  42. Memis, S.; Enginoglu, S.; Erkan, U. A classification method in machine learning based on soft decision-making via fuzzy parameterized fuzzy soft matrices. Soft. Comput. 2022, 26, 1165–1180. [Google Scholar] [CrossRef]
  43. Memis, S.; Enginoglu, S.; Erkan, U. A new classification method using soft decision-making based on an aggregation operator of fuzzy parameterized fuzzy soft matrices. Turk. J. Electr. Eng. Comput. Sci. 2022, 30, 871–890. [Google Scholar] [CrossRef]
  44. Memis, S.; Enginoglu, S.; Erkan, U. Fuzzy parameterized fuzzy soft k-nearest neighbor classifier. Neurocomputing 2022, 500, 351–378. [Google Scholar] [CrossRef]
  45. Li, D.Q.; Zeng, W.Y. Distance Measure of Pythagorean Fuzzy Sets. Int. J. Intell. Syst. 2018, 33, 348–361. [Google Scholar] [CrossRef]
  46. Ejegwa, P.A. Distance and similarity measures for Pythagorean fuzzy sets. Granul. Comput. 2020, 5, 225–238. [Google Scholar] [CrossRef]
  47. Diamond, P.; Kloeden, P. Metric Spaces of Fuzzy Sets Theory and Applications; Word Scientific: Singapore, 1994. [Google Scholar]
  48. Ejegwa, P.A. Modified Zhang and Xu’s Distance measure of Pythagorean fuzzy sets and its application to pattern recognition problems. Neural Comput. Appl. 2020, 32, 10199–10208. [Google Scholar] [CrossRef]
  49. Zeng, W.; Li, D.; Yin, Q. Distance and Similarity Measures of Pythagorean Fuzzy Sets and their Applications to Multiple Criteria Group Decision Making. Int. J. Intell. Syst. 2018, 33, 2236–2254. [Google Scholar] [CrossRef]
  50. Ejegwa, P.A.; Awolola, J.A. Novel Distance Measures for Pythagorean Fuzzy Sets with Applications to Pattern Recognition Problems. Granul. Comput. 2021, 6, 181–189. [Google Scholar] [CrossRef]
  51. Hussain, Z.; Yang, M.S. Distance and similarity measures of Pythagorean fuzzy sets based on the Hausdorff metric with application to fuzzy TOPSIS. Int. J. Intell. Syst. 2019, 34, 2633–2654. [Google Scholar] [CrossRef]
  52. Xiao, F.; Ding, W. Divergence measure of Pythagorean fuzzy sets and its application in medical diagnosis. Appl. Soft Comput. 2019, 79, 254–267. [Google Scholar] [CrossRef]
  53. Mahanta, J.; Panda, S. Distance measure for Pythagorean fuzzy sets with varied applications. Neural Comput. Appl. 2021, 33, 17161–17171. [Google Scholar] [CrossRef]
Figure 1. Plot of Table 3.
Figure 2. Plot of Table 5.
Figure 3. Plot of Table 6.
Figure 4. Plot of Table 9.
Table 1. Computation Procedures.
U | |β²_{k_1}(u_j) − β²_{k_2}(u_j)| | |γ²_{k_1}(u_j) − γ²_{k_2}(u_j)| | |δ²_{k_1}(u_j) − δ²_{k_2}(u_j)| | β²_{k_1} + γ²_{k_1} + δ²_{k_1} | β²_{k_2} + γ²_{k_2} + δ²_{k_2}
u_1 | 0.09 | 0.09 | 0.18 | 1 | 1
u_2 | 0.13 | 0.05 | 0.18 | 1 | 1
u_3 | 0.01 | 0 | 0.01 | 1 | 1
Table 2. Patterns under PFSs.
Patterns vs. Sample Space | u_1 | u_2 | u_3
P_1 | (0.1, 0.1) | (0.5, 0.1) | (0.1, 0.9)
P_2 | (0.5, 0.5) | (0.7, 0.3) | (0.0, 0.8)
P_3 | (0.7, 0.2) | (0.1, 0.8) | (0.4, 0.4)
Q | (0.4, 0.4) | (0.6, 0.2) | (0.0, 0.8)
Table 3. Distances and Similarities for the Patterns.
New Methods | (P_1, Q) | (P_2, Q) | (P_3, Q) | Rankings
D̂ | 0.1378 | 0.0800 | 0.3133 | D̂(P_2, Q) ≤ D̂(P_1, Q) ≤ D̂(P_3, Q)
D̂* | 0.0689 | 0.0400 | 0.1567 | D̂*(P_2, Q) ≤ D̂*(P_1, Q) ≤ D̂*(P_3, Q)
Ŝ | 0.8622 | 0.9200 | 0.6867 | Ŝ(P_2, Q) ≥ Ŝ(P_1, Q) ≥ Ŝ(P_3, Q)
Ŝ* | 0.9311 | 0.9600 | 0.8433 | Ŝ*(P_2, Q) ≥ Ŝ*(P_1, Q) ≥ Ŝ*(P_3, Q)
Table 4. Pythagorean Fuzzy Medical Information.
Clinical Expressions
Δ | u_1 | u_2 | u_3 | u_4 | u_5
D_1 | (0.4, 0.0) | (0.3, 0.5) | (0.1, 0.7) | (0.4, 0.3) | (0.1, 0.7)
D_2 | (0.7, 0.0) | (0.2, 0.6) | (0.0, 0.9) | (0.7, 0.0) | (0.1, 0.8)
D_3 | (0.3, 0.3) | (0.6, 0.1) | (0.2, 0.7) | (0.2, 0.6) | (0.1, 0.9)
D_4 | (0.1, 0.7) | (0.2, 0.4) | (0.8, 0.0) | (0.2, 0.7) | (0.2, 0.7)
D_5 | (0.1, 0.8) | (0.0, 0.8) | (0.2, 0.8) | (0.2, 0.8) | (0.8, 0.1)
P | (0.6, 0.1) | (0.5, 0.4) | (0.3, 0.4) | (0.7, 0.2) | (0.3, 0.4)
Table 5. Distances-Similarities.
New Methods | (D_1, P) | (D_2, P) | (D_3, P) | (D_4, P) | (D_5, P)
D̂ | 0.1813 | 0.2040 | 0.2467 | 0.2693 | 0.3653
D̂* | 0.0538 | 0.0610 | 0.0740 | 0.0808 | 0.1096
Ŝ | 0.8187 | 0.7960 | 0.7533 | 0.7307 | 0.6347
Ŝ* | 0.9462 | 0.9390 | 0.9260 | 0.9192 | 0.8904
Table 6. New Approaches vs. Approaches in [51,53].
Pattern Pairs | PFDMA [51] | D̂ | PFDMA [53] | D̂*
(P_1, Q) | 0.1433 | 0.1378 | 0.0840 | 0.0689
(P_2, Q) | 0.0733 | 0.0800 | 0.0390 | 0.0400
(P_3, Q) | 0.4700 | 0.3133 | 0.2378 | 0.1567
Table 7. Comparative Results for PFDMAs.
Methods | (P_1, Q) | (P_2, Q) | (P_3, Q) | Rankings
D_1 [24] | 0.6199 | 0.3600 | 1.4099 | D_1(P_2, Q) ≤ D_1(P_1, Q) ≤ D_1(P_3, Q)
D_2 [46] | 0.2066 | 0.1200 | 0.4700 | D_2(P_2, Q) ≤ D_2(P_1, Q) ≤ D_2(P_3, Q)
D_3 [46] | 0.7133 | 0.3220 | 1.4733 | D_3(P_2, Q) ≤ D_3(P_1, Q) ≤ D_3(P_3, Q)
D_4 [46] | 0.3778 | 0.1868 | 0.7626 | D_4(P_2, Q) ≤ D_4(P_1, Q) ≤ D_4(P_3, Q)
D_5 [46] | 0.2378 | 0.1073 | 0.4911 | D_5(P_2, Q) ≤ D_5(P_1, Q) ≤ D_5(P_3, Q)
D_6 [48] | 0.2181 | 0.1079 | 0.4403 | D_6(P_2, Q) ≤ D_6(P_1, Q) ≤ D_6(P_3, Q)
D_7 [51] | 0.1433 | 0.0733 | 0.4700 | D_7(P_2, Q) ≤ D_7(P_1, Q) ≤ D_7(P_3, Q)
D_8 [52] | 0.1953 | 0.1294 | 0.6856 | D_8(P_2, Q) ≤ D_8(P_1, Q) ≤ D_8(P_3, Q)
D_9 [53] | 0.0840 | 0.0390 | 0.2378 | D_9(P_2, Q) ≤ D_9(P_1, Q) ≤ D_9(P_3, Q)
D̂ | 0.1378 | 0.0800 | 0.3133 | D̂(P_2, Q) ≤ D̂(P_1, Q) ≤ D̂(P_3, Q)
D̂* | 0.0689 | 0.0400 | 0.1567 | D̂*(P_2, Q) ≤ D̂*(P_1, Q) ≤ D̂*(P_3, Q)
Table 8. Comparative Results for PFSMAs.
Methods | (P_1, Q) | (P_2, Q) | (P_3, Q) | Rankings
S_1 | 0.3801 | 0.6400 | −0.4099 | S_1(P_2, Q) ≥ S_1(P_1, Q) ≥ S_1(P_3, Q)
S_2 | 0.7934 | 0.8800 | 0.5300 | S_2(P_2, Q) ≥ S_2(P_1, Q) ≥ S_2(P_3, Q)
S_3 | 0.2867 | 0.6780 | −0.4733 | S_3(P_2, Q) ≥ S_3(P_1, Q) ≥ S_3(P_3, Q)
S_4 | 0.6222 | 0.8132 | 0.2374 | S_4(P_2, Q) ≥ S_4(P_1, Q) ≥ S_4(P_3, Q)
S_5 | 0.7622 | 0.8927 | 0.5089 | S_5(P_2, Q) ≥ S_5(P_1, Q) ≥ S_5(P_3, Q)
S_6 | 0.7819 | 0.8921 | 0.5597 | S_6(P_2, Q) ≥ S_6(P_1, Q) ≥ S_6(P_3, Q)
S_7 | 0.8567 | 0.9267 | 0.5300 | S_7(P_2, Q) ≥ S_7(P_1, Q) ≥ S_7(P_3, Q)
S_8 | 0.8047 | 0.8706 | 0.3144 | S_8(P_2, Q) ≥ S_8(P_1, Q) ≥ S_8(P_3, Q)
S_9 | 0.9160 | 0.9610 | 0.7622 | S_9(P_2, Q) ≥ S_9(P_1, Q) ≥ S_9(P_3, Q)
Ŝ | 0.8622 | 0.9200 | 0.6867 | Ŝ(P_2, Q) ≥ Ŝ(P_1, Q) ≥ Ŝ(P_3, Q)
Ŝ* | 0.9311 | 0.9600 | 0.8433 | Ŝ*(P_2, Q) ≥ Ŝ*(P_1, Q) ≥ Ŝ*(P_3, Q)
Table 9. New Approaches vs. Approaches in [51,53].
Diseases/Patient | PFDMA [51] | D̂ | PFDMA [53] | D̂*
(D_1, P) | 0.2700 | 0.1813 | 0.0933 | 0.0538
(D_2, P) | 0.3020 | 0.2040 | 0.0813 | 0.0610
(D_3, P) | 0.3700 | 0.2693 | 0.1212 | 0.0740
(D_4, P) | 0.4040 | 0.2693 | 0.1439 | 0.0808
(D_5, P) | 0.5480 | 0.3653 | 0.1562 | 0.1096
Table 10. Distances for diagnostic analysis.
Methods | (D_1, P) | (D_2, P) | (D_3, P) | (D_4, P) | (D_5, P)
D_1 [24] | 1.3599 | 1.5099 | 1.8499 | 2.0199 | 2.7399
D_2 [46] | 0.2720 | 0.3020 | 0.3700 | 0.4040 | 0.5480
D_3 [46] | 1.3327 | 1.5596 | 1.8743 | 2.1797 | 2.7824
D_4 [46] | 0.5292 | 0.7062 | 0.7663 | 0.9583 | 1.0020
D_5 [46] | 0.2665 | 0.3119 | 0.3749 | 0.4359 | 0.5565
D_6 [48] | 0.2367 | 0.3158 | 0.3576 | 0.4286 | 0.4481
D_7 [51] | 0.2700 | 0.3020 | 0.3700 | 0.4040 | 0.5480
D_8 [52] | 0.2585 | 0.3558 | 0.3764 | 0.3916 | 0.4931
D_9 [53] | 0.0933 | 0.0813 | 0.1212 | 0.1439 | 0.1562
D̂ | 0.1813 | 0.2040 | 0.2467 | 0.2693 | 0.3653
D̂* | 0.0538 | 0.0610 | 0.0740 | 0.0808 | 0.3653
Table 11. Similarities for diagnostic analysis.
Methods | (D_1, P) | (D_2, P) | (D_3, P) | (D_4, P) | (D_5, P)
S_1 | −0.3599 | −0.5099 | −0.8499 | −1.0199 | −1.7399
S_2 | 0.7280 | 0.6980 | 0.6300 | 0.5960 | 0.4520
S_3 | −0.3327 | −0.5596 | −0.8743 | −1.1797 | −1.7824
S_4 | 0.4708 | 0.2938 | 0.2337 | 0.0417 | −0.0020
S_5 | 0.7335 | 0.6881 | 0.6251 | 0.5641 | 0.4435
S_6 | 0.7633 | 0.6842 | 0.6424 | 0.5714 | 0.5519
S_7 | 0.7300 | 0.6980 | 0.6300 | 0.5960 | 0.4520
S_8 | 0.7415 | 0.6442 | 0.6236 | 0.6084 | 0.5069
S_9 | 0.9067 | 0.9187 | 0.8788 | 0.8561 | 0.8438
Ŝ | 0.8187 | 0.7960 | 0.7533 | 0.7307 | 0.6347
Ŝ* | 0.9462 | 0.9390 | 0.9260 | 0.9192 | 0.6347
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
