Article

Some New Biparametric Distance Measures on Single-Valued Neutrosophic Sets with Applications to Pattern Recognition and Medical Diagnosis

School of Mathematics, Thapar Institute of Engineering & Technology, Deemed University Patiala, Punjab 147004, India
* Author to whom correspondence should be addressed.
Information 2017, 8(4), 162; https://doi.org/10.3390/info8040162
Submission received: 29 November 2017 / Revised: 10 December 2017 / Accepted: 11 December 2017 / Published: 15 December 2017
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)

Abstract

Single-valued neutrosophic sets (SVNSs), which characterize uncertainty through truth, indeterminacy, and falsity membership degrees, offer a flexible way to capture uncertain information. In this paper, some new types of distance measures for SVNSs, depending on two parameters and overcoming the shortcomings of the existing measures, are proposed along with their proofs. The various desirable relations between the proposed measures are also derived. A comparison between the proposed and the existing measures is performed in terms of counter-intuitive cases to show the validity of the proposed measures. Finally, the proposed measures are illustrated with case studies of pattern recognition as well as medical diagnosis, along with an analysis of the effect of the different parameters on the ordering of the objects.

1. Introduction

Classical measure theory has been widely used to represent uncertainties in data. However, these measures are valid only for precise data and hence may be unable to give accurate judgments for data that are uncertain and imprecise in nature. To handle this, fuzzy set (FS) theory, developed by Zadeh [1], has received much attention over the last decades because of its capability of handling uncertainties. Later, Atanassov [2] proposed the concept of the intuitionistic fuzzy set (IFS), which extends FS theory by adding a degree of non-membership. Since IFS theory has been widely used by researchers [3,4,5,6,7,8,9,10,11,12,13,14,15,16] in different disciplines for handling the uncertainties in data, its analysis is more meaningful than the crisp analysis of FSs. Nevertheless, neither FS nor IFS theory is able to deal with indeterminate and inconsistent information. For instance, consider a person giving an opinion about an object with 0.5 as the possibility that the statement is true, 0.7 as the possibility that the statement is false and 0.2 as the possibility that he or she is not sure. To resolve this, Smarandache [17] introduced a new component called the "indeterminacy-membership function" in addition to the "truth-membership function" and "falsity-membership function", all of which are independent components lying in $]^-0, 1^+[$; the corresponding set is known as a neutrosophic set (NS), which generalizes the IFS and the FS. However, without further specification, NSs are difficult to apply to real-life problems. Thus, a particular case of the NS, called the single-valued NS (SVNS), was proposed by Smarandache [17] and Wang et al. [18].
After this pioneering work, researchers have been engaged in its extensions and applications to different disciplines. However, the most important task for the decision-maker is to rank the objects so as to obtain the desired object(s). For this, researchers have made efforts to enrich the concept of information measures in neutrosophic environments. Broumi and Smarandache [19] introduced the Hausdorff distance, while Majumdar [20] presented the Hamming and Euclidean distances for comparing SVNSs. Ye [21] presented the concept of correlation for single-valued neutrosophic numbers (SVNNs). Additionally, Ye [22] improved the concept of cosine similarity for SVNSs, which was first introduced by Kong et al. [23] in a neutrosophic environment. Nancy and Garg [24] presented an improved score function for ranking SVNNs and applied it to solve decision-making problems. Garg and Nancy [25] presented an entropy measure of order α and applied it to solve decision-making problems. Recently, Garg and Nancy [26] presented a technique for order preference by similarity to ideal solution (TOPSIS) method under an interval NS environment to solve decision-making problems. Aside from these, various authors have incorporated the idea of NS theory into similarity measures [27,28], distance measures [29,30], the cosine similarity measure [19,22,31], and aggregation operators [22,31,32,33,34,35,36,37,38,39,40].
On the basis of the above observations, distance and similarity measures are of key importance in a number of theoretical and applied statistical inference and data-processing problems. It has been deduced from studies that similarity, entropy and divergence measures can be induced by a normalized distance measure on the basis of their axiomatic definitions. On the other hand, SVNSs are one of the most successful theories for handling the uncertainties and certainties in a system, but little systematic research has explored these problems. This gap in the research motivates us to develop some families of distance measures for SVNSs to solve decision-making problems in which the preferences for the different alternatives are given in the form of neutrosophic numbers. The main contributions of this work are summarized as follows: (i) to highlight the shortcomings of the various existing distance measures under single-valued neutrosophic information through illustrative examples; (ii) to overcome these shortcomings, this paper defines some new series of biparametric distance measures between SVNSs, which depend on two parameters, namely p and t, where p is the $L_p$ norm parameter and t identifies the level of uncertainty. The various desirable relations between these measures have been investigated in detail. We then utilize these measures to solve problems of pattern recognition as well as medical diagnosis and compare their performance with that of some existing approaches.
The rest of this paper is organized as follows. Section 2 briefly describes the concepts of NSs and SVNSs and their corresponding existing distance measures. Section 3 presents a family of normalized and weighted normalized distance measures between two SVNSs, and some of their desirable properties are investigated in detail, while generalized distance measures are proposed in Section 4. The defined measures are illustrated with examples from the fields of pattern recognition and medical diagnosis in Section 5, demonstrating the effectiveness and stability of the proposed measures. Finally, a concrete conclusion is drawn in Section 6.

2. Preliminaries

An overview of NSs and SVNSs over the universal set X is given here.

2.1. Basic Definitions

Definition 1
([17,41]). A neutrosophic set (NS) A in X is defined by its truth-membership function $T_A(x)$, an indeterminacy-membership function $I_A(x)$ and a falsity-membership function $F_A(x)$, where all are subsets of $]^-0, 1^+[$. There is no restriction on the sum of $T_A(x)$, $I_A(x)$ and $F_A(x)$; thus $^-0 \le \sup T_A(x) + \sup I_A(x) + \sup F_A(x) \le 3^+$ for all $x \in X$. Here, $\sup$ represents the supremum of the set.
Wang et al. [18] and Smarandache [41] defined the SVNS, which is an instance of an NS.
Definition 2
([18,41]). A single-valued neutrosophic set (SVNS) A is defined as
$A = \{ \langle x, T_A(x), I_A(x), F_A(x) \rangle \mid x \in X \}$
where $T_A: X \to [0,1]$, $I_A: X \to [0,1]$ and $F_A: X \to [0,1]$ with $0 \le T_A(x) + I_A(x) + F_A(x) \le 3$ for all $x \in X$. The values $T_A(x)$, $I_A(x)$ and $F_A(x)$ denote the truth-membership degree, the indeterminacy-membership degree and the falsity-membership degree of x to A, respectively. The triplets of these values are called single-valued neutrosophic numbers (SVNNs), denoted by $\alpha = \langle \mu_A, \rho_A, \nu_A \rangle$, and the class of SVNSs is denoted by $\Phi(X)$.
Definition 3.
Let $A = \langle \mu_A, \rho_A, \nu_A \rangle$ and $B = \langle \mu_B, \rho_B, \nu_B \rangle$ be two single-valued neutrosophic sets (SVNSs). Then the following operations are defined [18]:
(i) 
$A \subseteq B$ if and only if (iff) $\mu_A(x) \le \mu_B(x)$, $\rho_A(x) \ge \rho_B(x)$ and $\nu_A(x) \ge \nu_B(x)$ for all x in X;
(ii) 
A = B iff A B and B A ;
(iii) 
$A^c = \{ \langle \nu_A(x), 1 - \rho_A(x), \mu_A(x) \rangle \mid x \in X \}$;
(iv) 
$A \cap B = \langle \min(\mu_A(x), \mu_B(x)),\ \max(\rho_A(x), \rho_B(x)),\ \max(\nu_A(x), \nu_B(x)) \rangle$;
(v) 
$A \cup B = \langle \max(\mu_A(x), \mu_B(x)),\ \min(\rho_A(x), \rho_B(x)),\ \min(\nu_A(x), \nu_B(x)) \rangle$.

2.2. Existing Distance Measures

Definition 4.
A real function $d: \Phi(X) \times \Phi(X) \to [0,1]$ is called a distance measure [19], where d satisfies the following axioms for $A, B, C \in \Phi(X)$:
(P1) 
$0 \le d(A,B) \le 1$;
(P2) 
d ( A , B ) = 0 iff A = B ;
(P3) 
d ( A , B ) = d ( B , A ) ;
(P4) 
If $A \subseteq B \subseteq C$, then $d(A,C) \ge d(A,B)$ and $d(A,C) \ge d(B,C)$.
On the basis of this, several researchers have addressed various types of distance and similarity measures between two SVNSs $A = \{ \langle x_i, \mu_A(x_i), \rho_A(x_i), \nu_A(x_i) \rangle \mid x_i \in X \}$ and $B = \{ \langle x_i, \mu_B(x_i), \rho_B(x_i), \nu_B(x_i) \rangle \mid x_i \in X \}$, $i = 1, 2, \ldots, n$, which are given as follows:
(i)
The extended Hausdorff distance [19]:
$D_H(A,B) = \frac{1}{n}\sum_{i=1}^{n} \max\big( |\mu_A(x_i)-\mu_B(x_i)|,\ |\rho_A(x_i)-\rho_B(x_i)|,\ |\nu_A(x_i)-\nu_B(x_i)| \big)$
(ii)
The normalized Hamming distance [20]:
$D_{NH}(A,B) = \frac{1}{3n}\sum_{i=1}^{n} \big( |\mu_A(x_i)-\mu_B(x_i)| + |\rho_A(x_i)-\rho_B(x_i)| + |\nu_A(x_i)-\nu_B(x_i)| \big)$
(iii)
The normalized Euclidean distance [20]:
$D_{NE}(A,B) = \left( \frac{1}{3n}\sum_{i=1}^{n} \big[ (\mu_A(x_i)-\mu_B(x_i))^2 + (\rho_A(x_i)-\rho_B(x_i))^2 + (\nu_A(x_i)-\nu_B(x_i))^2 \big] \right)^{1/2}$
(iv)
The cosine similarities [22]:
$S_{CS1}(A,B) = \frac{1}{n}\sum_{i=1}^{n} \cos\left( \frac{\pi \big( |\mu_A(x_i)-\mu_B(x_i)| \vee |\rho_A(x_i)-\rho_B(x_i)| \vee |\nu_A(x_i)-\nu_B(x_i)| \big)}{2} \right)$
and
$S_{CS2}(A,B) = \frac{1}{n}\sum_{i=1}^{n} \cos\left( \frac{\pi \big( |\mu_A(x_i)-\mu_B(x_i)| + |\rho_A(x_i)-\rho_B(x_i)| + |\nu_A(x_i)-\nu_B(x_i)| \big)}{6} \right)$
and their corresponding distances are denoted by $D_{CS1} = 1 - S_{CS1}$ and $D_{CS2} = 1 - S_{CS2}$.
(v)
The tangent similarities [42]:
$S_{T1}(A,B) = 1 - \frac{1}{n}\sum_{i=1}^{n} \tan\left( \frac{\pi \big( |\mu_A(x_i)-\mu_B(x_i)| \vee |\rho_A(x_i)-\rho_B(x_i)| \vee |\nu_A(x_i)-\nu_B(x_i)| \big)}{4} \right)$
and
$S_{T2}(A,B) = 1 - \frac{1}{n}\sum_{i=1}^{n} \tan\left( \frac{\pi \big( |\mu_A(x_i)-\mu_B(x_i)| + |\rho_A(x_i)-\rho_B(x_i)| + |\nu_A(x_i)-\nu_B(x_i)| \big)}{12} \right)$
and their corresponding distances are denoted by $D_{T1} = 1 - S_{T1}$ and $D_{T2} = 1 - S_{T2}$.

2.3. Shortcomings of the Existing Measures

The above measures have been widely used; at the same time, however, they have some drawbacks, which are illustrated with the numerical examples that follow.
Example 1.
Consider two known patterns A and B, which are represented by SVNSs in a universe X given by $A = \{ \langle x, 0.5, 0.0, 0.0 \rangle \mid x \in X \}$ and $B = \{ \langle x, 0.0, 0.5, 0.0 \rangle \mid x \in X \}$. Consider an unknown pattern $C \in \mathrm{SVNSs}(X)$ to be recognized, where $C = \{ \langle x, 0.0, 0.0, 0.5 \rangle \mid x \in X \}$; the target of this problem is to classify the pattern C into one of the classes A or B. If we apply the existing measures [19,20,22,42] defined in Equations (1)–(7) above, then we obtain the following:
Pair     D_H     D_NH     D_NE     D_CS1    D_CS2    D_T1     D_T2
(A,C)    0.5     0.3333   0.4048   0.2929   0.1340   0.4142   0.2679
(B,C)    0.5     0.3333   0.4048   0.2929   0.1340   0.4142   0.2679
Thus, from this, we conclude that these existing measures are unable to decide whether the pattern C should be classified with A or with B. Hence these measures are inconsistent and unable to perform the ranking.
Example 2.
Consider two SVNSs defined on the universal set X given by $A = \{ \langle x, 0.3, 0.2, 0.3 \rangle \mid x \in X \}$ and $B = \{ \langle x, 0.4, 0.2, 0.4 \rangle \mid x \in X \}$. If we replace the degree of falsity membership of A (0.3) with 0.4, and that of B (0.4) with 0.3, then we obtain the new SVNSs $C = \{ \langle x, 0.3, 0.2, 0.4 \rangle \mid x \in X \}$ and $D = \{ \langle x, 0.4, 0.2, 0.3 \rangle \mid x \in X \}$. Now, by using the distance measures defined in Equations (1)–(7), we obtain their corresponding values as follows:
Pair     D_H     D_NH    D_NE    D_CS1   D_CS2   D_T1    D_T2
(A,B)    0.1     0.066   0.077   0.013   0.006   0.078   0.052
(C,D)    0.1     0.066   0.077   0.013   0.006   0.078   0.052
Thus, it can be concluded that, by changing the falsity degrees of the SVNSs while keeping the other degrees unchanged, the values of the corresponding measures remain the same; that is, the degree of falsity membership has no effect on these distance measures. A similar observation can be made for the degree of truth membership.
Hence it seems pointless to calculate distances using the measures mentioned above, and there is a need to build a new distance measure that overcomes the shortcomings of the existing ones.
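To make the tie in Example 1 concrete, the following minimal Python sketch (not part of the original paper; the function name normalized_hamming is ours) reproduces the $D_{NH}$ column of the table above:

```python
# Normalized Hamming distance of Equation (2); A and B are lists of
# (truth, indeterminacy, falsity) triples of equal length.
def normalized_hamming(A, B):
    n = len(A)
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])
               for a, b in zip(A, B)) / (3 * n)

A = [(0.5, 0.0, 0.0)]
B = [(0.0, 0.5, 0.0)]
C = [(0.0, 0.0, 0.5)]
print(normalized_hamming(A, C), normalized_hamming(B, C))  # 0.3333... for both pairs
```

Both distances coincide, so the measure cannot discriminate between the two candidate classes.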

3. Some New Distance Measures between SVNSs

In this section, we present the Hamming and the Euclidean distances between SVNSs, which can be used in real scientific and engineering applications.
Letting $\Phi(X)$ be the class of SVNSs over the universal set X, we define the following distances for the SVNSs $A = \{ \langle \mu_A(x_i), \rho_A(x_i), \nu_A(x_i) \rangle \mid x_i \in X \}$ and $B = \{ \langle \mu_B(x_i), \rho_B(x_i), \nu_B(x_i) \rangle \mid x_i \in X \}$ by considering an uncertainty parameter t:
(i)
Hamming distance:
$d_1(A,B) = \frac{1}{3(2+t)} \sum_{i=1}^{n} \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big| + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| \Big]$
(ii)
Normalized Hamming distance:
$d_2(A,B) = \frac{1}{3n(2+t)} \sum_{i=1}^{n} \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big| + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| \Big]$
(iii)
Euclidean distance:
$d_3(A,B) = \left\{ \frac{1}{3(2+t)^2} \sum_{i=1}^{n} \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^2 + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 \Big] \right\}^{1/2}$
(iv)
Normalized Euclidean distance:
$d_4(A,B) = \left\{ \frac{1}{3n(2+t)^2} \sum_{i=1}^{n} \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^2 + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 \Big] \right\}^{1/2}$
where $t \ge 3$ is a parameter.
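For readers who wish to experiment with the measures, the following Python sketch transcribes the normalized distances $d_2$ and $d_4$ of Equations (9) and (11) as printed above; the helper name _terms and the function names d2 and d4 are ours, not the authors'.

```python
# Helper: the three absolute terms appearing in Equations (8)-(11) for one
# element, where a and b are (mu, rho, nu) triples and t >= 3.
def _terms(a, b, t):
    dmu, drho, dnu = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    return (abs(t * dmu + drho + dnu),
            abs(t * drho - dnu + dmu),
            abs(t * dnu - drho + dmu))

# Normalized Hamming distance d2 of Equation (9).
def d2(A, B, t=3):
    n = len(A)
    return sum(sum(_terms(a, b, t)) for a, b in zip(A, B)) / (3 * n * (2 + t))

# Normalized Euclidean distance d4 of Equation (11).
def d4(A, B, t=3):
    n = len(A)
    s = sum(sum(x ** 2 for x in _terms(a, b, t)) for a, b in zip(A, B))
    return (s / (3 * n * (2 + t) ** 2)) ** 0.5
```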
Then, on the basis of the distance properties as defined in Definition 4, we can obtain the following properties:
Proposition 1.
The above-defined distance d 2 ( A , B ) , between two SVNSs A and B, satisfies the following properties (P1)–(P4):
(P1) 
$0 \le d_2(A,B) \le 1$, $\forall\, A, B \in \Phi(X)$;
(P2) 
d 2 ( A , B ) = 0 iff A = B ;
(P3) 
d 2 ( A , B ) = d 2 ( B , A ) ;
(P4) 
If $A \subseteq B \subseteq C$, then $d_2(A,C) \ge d_2(A,B)$ and $d_2(A,C) \ge d_2(B,C)$.
Proof. 
For two SVNSs A and B, we have
(P1)
0 μ A ( x i ) , μ B ( x i ) 1 , 0 ρ A ( x i ) , ρ B ( x i ) 1 and 0 ν A ( x i ) , ν B ( x i ) 1 . Thus, | μ A ( x i ) μ B ( x i ) | 1 , | ρ A ( x i ) ρ B ( x i ) | 1 , | ν A ( x i ) ν B ( x i ) | 1 and | t ( μ A ( x i ) μ B ( x i ) ) | t .
Therefore,
| ( t μ A ( x i ) ν A ( x i ) ρ A ( x i ) ) ( t μ B ( x i ) ν B ( x i ) ρ B ( x i ) ) | ( 2 + t ) | ( t ρ A ( x i ) + ν A ( x i ) μ A ( x i ) ) ( t ρ B ( x i ) + ν B ( x i ) μ B ( x i ) ) | ( 2 + t ) | ( t ν A ( x i ) + ρ A ( x i ) μ A ( x i ) ) ( t ν B ( x i ) + ρ B ( x i ) μ B ( x i ) ) | ( 2 + t )
Hence, by the definition of d 2 , we obtain 0 d 2 ( A , B ) 1 .
(P2)
Firstly, we assume that A = B , which implies that μ A ( x i ) = μ B ( x i ) , ρ A ( x i ) = ρ B ( x i ) , and ν A ( x i ) = ν B ( x i ) for i = 1 , 2 , , n . Thus, by the definition of d 2 , we obtain d 2 ( A , B ) = 0 . Conversely, assuming that d 2 ( A , B ) = 0 for two SVNSs A and B, this implies that
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | + | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | + | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | = 0
or
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | = 0 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | = 0 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | = 0
After solving, we obtain μ A ( x i ) μ B ( x i ) = 0 , ρ A ( x i ) ρ B ( x i ) = 0 and ν A ( x i ) ν B ( x i ) = 0 , which implies μ A ( x i ) = μ B ( x i ) , ρ A ( x i ) = ρ B ( x i ) and ν A ( x i ) = ν B ( x i ) . Therefore, A = B . Hence d 2 ( A , B ) = 0 iff A = B .
(P3)
This is straightforward from the definition of d 2 .
(P4)
If A B C , then μ A ( x i ) μ B ( x i ) μ C ( x i ) , ρ A ( x i ) ρ B ( x i ) ρ C ( x i ) and ν A ( x i ) ν B ( x i ) ν C ( x i ) , which implies that μ A ( x i ) μ B ( x i ) μ A ( x i ) μ C ( x i ) , ν A ( x i ) ν B ( x i ) ν A ( x i ) ν C ( x i ) , and ρ A ( x i ) ρ B ( x i ) ρ A ( x i ) ρ C ( x i ) .
Therefore,
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | | t ( μ A ( x i ) μ c ( x i ) ) + ( ρ A ( x i ) ρ C ( x i ) ) + ( ν A ( x i ) ν C ( x i ) ) | | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | | t ( ρ A ( x i ) ρ C ( x i ) ) ( ν A ( x i ) ν C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | | t ( ν A ( x i ) ν C ( x i ) ) ( ρ A ( x i ) ρ C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) |
By adding, we obtain d 2 ( A , B ) d 2 ( A , C ) . Similarly, we obtain d 2 ( B , C ) d 2 ( A , C ) .
Proposition 2.
Distance $d_4$ as defined in Equation (11) is also a valid distance measure.
Proof. 
For two SVNSs A and B, we have
(P1)
0 μ A ( x i ) , μ B ( x i ) 1 , 0 ρ A ( x i ) , ρ B ( x i ) 1 and 0 ν A ( x i ) , ν B ( x i ) 1 . Thus, | μ A ( x i ) μ B ( x i ) | 1 , | ρ A ( x i ) ρ B ( x i ) | 1 , | ν A ( x i ) ν B ( x i ) | 1 and | t ( μ A ( x i ) μ B ( x i ) ) | t . Therefore,
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | 2 ( 2 + t ) 2 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 ( 2 + t ) 2 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 ( 2 + t ) 2
Hence, by the definition of d 4 , we obtain 0 d 4 ( A , B ) 1 .
(P2)
Assuming that A = B implies that μ A ( x i ) = μ B ( x i ) , ρ A ( x i ) = ρ B ( x i ) and ν A ( x i ) = ν B ( x i ) for i = 1 , 2 , , n , and hence using Equation (11), we obtain d 4 ( A , B ) = 0 . Conversely, assuming that d 4 ( A , B ) = 0 implies
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | 2 = 0 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 = 0 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 = 0
After solving these, we obtain μ A ( x i ) μ B ( x i ) = 0 , ρ A ( x i ) ρ B ( x i ) = 0 and ν A ( x i ) ν B ( x i ) = 0 ; that is, μ A ( x i ) = μ B ( x i ) , ρ A ( x i ) = ρ B ( x i ) and ν A ( x i ) = ν B ( x i ) for t 3 . Hence A = B . Therefore, d 4 ( A , B ) = 0 iff A = B .
(P3)
This is straightforward from the definition of d 4 .
(P4)
If A B C , then μ A ( x i ) μ B ( x i ) μ C ( x i ) , ρ A ( x i ) ρ B ( x i ) ρ C ( x i ) , and ν A ( x i ) ν B ( x i ) ν C ( x i ) . Therefore
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | 2 | t ( μ A ( x i ) μ c ( x i ) ) + ( ρ A ( x i ) ρ C ( x i ) ) + ( ν A ( x i ) ν C ( x i ) ) | 2 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 | t ( ρ A ( x i ) ρ C ( x i ) ) ( ν A ( x i ) ν C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | 2 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 | t ( ν A ( x i ) ν C ( x i ) ) ( ρ A ( x i ) ρ C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | 2
Hence by the definition of d 4 , we obtain d 4 ( A , B ) d 4 ( A , C ) . Similarly, we obtain d 4 ( B , C ) d 4 ( A , C ) .
Now, on the basis of these proposed distance measures, we conclude that this successfully overcomes the shortcomings of the existing measures as described above.
Example 3.
If we apply the proposed distance measures $d_2$ and $d_4$ to the data considered in Example 1 to classify the pattern C, then, corresponding to the parameter t = 3, we obtain $d_2(A,C) = 0.3333$, $d_2(B,C) = 0.1333$, $d_4(A,C) = 0.3464$ and $d_4(B,C) = 0.1633$. Thus, the pattern C is classified with the pattern B, and hence the proposed measures are able to identify the appropriate pattern.
Example 4.
If we utilize the proposed distances $d_2$ and $d_4$ for the above-considered Example 2, then their corresponding values are $d_2(A,B) = 0.0267$, $d_2(C,D) = 0.0667$, $d_4(A,B) = 0.0327$ and $d_4(C,D) = 0.6930$. Therefore, a change in the falsity membership has a significant effect on the measure values and consequently on the ranking.
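Classification in Examples 3 and 4 (and in Section 5) follows the usual minimum-distance rule; a hypothetical helper built on the d2 sketch above could look as follows:

```python
# Assign an unknown SVNS pattern to the known class with the smallest distance.
# known is a dict mapping class labels to SVNSs (lists of (mu, rho, nu) triples).
def classify(unknown, known, distance=d2, **kwargs):
    return min(known, key=lambda label: distance(known[label], unknown, **kwargs))

# Example 3 data: classify C against the classes A and B with t = 3.
A, B, C = [(0.5, 0.0, 0.0)], [(0.0, 0.5, 0.0)], [(0.0, 0.0, 0.5)]
print(classify(C, {"A": A, "B": B}, t=3))  # prints the label with the smaller d2 value
```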
Proposition 3.
Measures d 1 and d 3 satisfy the following properties:
(i) 
$0 \le d_1 \le n$;
(ii) 
$0 \le d_3 \le n^{1/2}$.
Proof. 
We can easily obtain that d 1 ( A , B ) = n d 2 ( A , B ) , and thus by Proposition 1, we obtain 0 d 1 ( A , B ) n . Similarly, we can obtain 0 d 3 ( A , B ) n 1 / 2 . ☐
However, in many practical situations, the different elements may carry different weights, and thus the weight $\omega_i$ ($i = 1, 2, \ldots, n$) of each element $x_i \in X$ should be taken into account. In the following, we develop the normalized weighted Hamming distance and the normalized weighted Euclidean distance between SVNSs.
(i)
The normalized weighted Hamming distance:
$d_5(A,B) = \frac{1}{3n(2+t)} \sum_{i=1}^{n} \omega_i \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big| + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| \Big]$
(ii)
The normalized weighted Euclidean distance:
$d_6(A,B) = \left\{ \frac{1}{3n(2+t)^2} \sum_{i=1}^{n} \omega_i \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^2 + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 \Big] \right\}^{1/2}$
where $t \ge 3$ is a parameter.
It is straightforward to check that the normalized weighted distances $d_k(A,B)$ ($k = 5, 6$) between the SVNSs A and B also satisfy the above properties (P1)–(P4).
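Under the same conventions as the earlier sketch (and reusing the hypothetical helper _terms introduced there), the weighted measures of Equations (12) and (13) can be written as:

```python
# Normalized weighted Hamming (d5) and Euclidean (d6) distances of Equations (12)
# and (13); w is a list of weights with w_i in [0, 1] and sum(w) == 1.
def d5(A, B, w, t=3):
    n = len(A)
    return sum(wi * sum(_terms(a, b, t))
               for a, b, wi in zip(A, B, w)) / (3 * n * (2 + t))

def d6(A, B, w, t=3):
    n = len(A)
    s = sum(wi * sum(x ** 2 for x in _terms(a, b, t)) for a, b, wi in zip(A, B, w))
    return (s / (3 * n * (2 + t) ** 2)) ** 0.5
```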
Proposition 4.
Distance measures $d_2$ and $d_5$ satisfy the relation $d_5 \le d_2$.
Proof. 
Because ω i 0 , i = 1 n ω i = 1 , then for any two SVNSs A and B, we have d 5 ( A , B ) = 1 3 n ( 2 + t ) i = 1 n ω i | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | + | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | + | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 1 3 n ( 2 + t ) i = 1 n | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | + | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | + | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | ; that is, d 5 ( A , B ) d 2 ( A , B ) . ☐
Proposition 5.
Let A and B be two SVNSs in X; then d 5 and d 6 are the distance measures.
Proof. 
Because ω i [ 0 , 1 ] and i = 1 n ω i = 1 then we can easily obtain 0 d 5 ( A , B ) d 2 ( A , B ) . Thus, d 5 ( A , B ) satisfies (P1). The proofs of (P2)–(P4) are similar to those of Proposition 1. Similar is true for d 6 . ☐
Proposition 6.
The distance measures $d_4$ and $d_6$ satisfy the relation $d_6 \le d_4$.
Proof. 
The proof follows from Proposition 4. ☐
Proposition 7.
The distance measures $d_2$ and $d_4$ satisfy the inequality $d_4 \le d_2$.
Proof. 
For two SVNSs A and B, we have
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | 2 ( 2 + t ) 2 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 ( 2 + t ) 2 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 ( 2 + t ) 2
which implies that
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) 2 + t | 2 1 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) 2 + t | 2 1 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) 2 + t | 2 1
For any a [ 0 , 1 ] , we have a 2 a . Therefore,
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) 2 + t | 2 | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) 2 + t | | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) 2 + t | 2 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) 2 + t | and | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) 2 + t | 2 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) 2 + t |
By adding these inequalities and by the definition of d 4 , we have
d 4 ( A , B ) = 1 3 n ( 2 + t ) 2 i = 1 n | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | 2 + | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 + | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 2 1 / 2 1 3 n ( 2 + t ) i = 1 n | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | + | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | + | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | 1 / 2 ( d 2 ( A , B ) ) 1 / 2
As A and B are arbitrary SVNSs, thus we obtain d 4 d 2 . ☐
Proposition 8.
Measures $d_6$ and $d_5$ satisfy the inequality $d_6 \le d_5$.
Proof. 
The proof follows from Proposition 7. ☐
The Hausdorff distance between two non-empty closed and bounded sets is a measure of the resemblance between them. For example, consider $A = [x_1, x_2]$ and $B = [y_1, y_2]$ in the Euclidean domain $\mathbb{R}$; the Hausdorff distance in the additive set environment is given by the following [8]:
$H(A,B) = \max\big( |x_1 - y_1|,\ |x_2 - y_2| \big)$
Now, for any two SVNSs A and B over X = { x 1 , x 2 , , x n } , we propose the following utmost distance measures:
  • Utmost normalized Hamming distance:
    $d_{1H}(A,B) = \frac{1}{3n(2+t)} \sum_{i=1}^{n} \max\Big( \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|,\ \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|,\ \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| \Big)$
  • Utmost normalized weighted Hamming distance:
    $d_{2H}(A,B) = \frac{1}{3n(2+t)} \sum_{i=1}^{n} \omega_i \max\Big( \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|,\ \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|,\ \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big| \Big)$
  • Utmost normalized Euclidean distance:
    $d_{3H}(A,B) = \left\{ \frac{1}{3n(2+t)^2} \sum_{i=1}^{n} \max\Big( \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^2,\ \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2,\ \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 \Big) \right\}^{1/2}$
  • Utmost normalized weighted Euclidean distance:
    $d_{4H}(A,B) = \left\{ \frac{1}{3n(2+t)^2} \sum_{i=1}^{n} \omega_i \max\Big( \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^2,\ \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2,\ \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^2 \Big) \right\}^{1/2}$
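A sketch of the unweighted utmost variants, again reusing the hypothetical _terms helper from Section 3: per element, the maximum of the three terms replaces their sum. Equation (14) is the one confirmed in the text for $d_{1H}$; the Euclidean-type counterpart $d_{3H}$ follows the same pattern with squared terms.

```python
# Utmost normalized Hamming distance d1H of Equation (14): per element, take the
# maximum of the three terms instead of their sum.
def d1H(A, B, t=3):
    n = len(A)
    return sum(max(_terms(a, b, t)) for a, b in zip(A, B)) / (3 * n * (2 + t))

# Utmost normalized Euclidean distance d3H (the squared-term analogue).
def d3H(A, B, t=3):
    n = len(A)
    s = sum(max(x ** 2 for x in _terms(a, b, t)) for a, b in zip(A, B))
    return (s / (3 * n * (2 + t) ** 2)) ** 0.5
```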
Proposition 9.
The distance d 1 H ( A , B ) defined in Equation (14) for two SVNSs A and B is a valid distance measure.
Proof. 
The above measure satisfies the following properties:
(P1)
As A and B are SVNSs, so | μ A ( x i ) μ B ( x i ) | 1 , | ρ A ( x i ) ρ B ( x i ) | 1 and | ν A ( x i ) ν B ( x i ) | 1 . Thus,
| ( t μ A ( x i ) ν A ( x i ) ρ A ( x i ) ) ( t μ B ( x i ) ν B ( x i ) ρ B ( x i ) ) | ( 2 + t ) | ( t ρ A ( x i ) + ν A ( x i ) μ A ( x i ) ) ( t ρ B ( x i ) + ν B ( x i ) μ B ( x i ) ) | ( 2 + t ) | ( t ν A ( x i ) + ρ A ( x i ) μ A ( x i ) ) ( t ν B ( x i ) + ρ B ( x i ) μ B ( x i ) ) | ( 2 + t )
Hence, by the definition of d 1 H , we obtain 0 d 1 H ( A , B ) 1 .
(P2)
Similar to the proof of Proposition 1.
(P3)
This is clear from Equation (14).
(P4)
Let A B C , which implies μ A ( x i ) μ B ( x i ) μ C ( x i ) , ρ A ( x i ) ρ B ( x i ) ρ C ( x i ) and ν A ( x i ) ν B ( x i ) ν C ( x i ) . Therefore, | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | | t ( μ A ( x i ) μ c ( x i ) ) + ( ρ A ( x i ) ρ C ( x i ) ) + ( ν A ( x i ) ν C ( x i ) ) | , | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | | t ( ρ A ( x i ) ρ C ( x i ) ) ( ν A ( x i ) ν C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | and | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | | t ( ν A ( x i ) ν C ( x i ) ) ( ρ A ( x i ) ρ C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | , which implies that max i | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | , | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | , | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | max i ( | t ( μ A ( x i ) μ c ( x i ) ) + ( ρ A ( x i ) ρ C ( x i ) ) + ( ν A ( x i ) ν C ( x i ) ) | , | t ( ρ A ( x i ) ρ C ( x i ) ) ( ν A ( x i ) ν C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | and | t ( ν A ( x i ) ν C ( x i ) ) ( ρ A ( x i ) ρ C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | ) . Hence d 1 H ( A , B ) d 1 H ( A , C ) . Similarly, we obtain d 1 H ( B , C ) d 1 H ( A , C ) .
Proposition 10.
For $A, B \in \Phi(X)$, $d_{2H}$, $d_{3H}$ and $d_{4H}$ are distance measures.
Proof. 
The proof follows from the above proposition. ☐
Proposition 11.
The measures $d_{2H}$ and $d_{1H}$ satisfy the following inequality: $d_{2H} \le d_{1H}$.
Proof. 
Because w i [ 0 , 1 ] , therefore
d 2 H ( A , B ) = 1 3 n ( 2 + t ) i = 1 n w i ( max i ( | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | , | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | , | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | ) ) 1 3 n ( 2 + t ) i = 1 n max i ( | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | , | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | , | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | ) = d 1 H ( A , B )
Hence, d 2 H d 1 H . ☐
Proposition 12.
The measures $d_{3H}$ and $d_{4H}$ satisfy the inequality $d_{4H}(A,B) \le d_{3H}(A,B)$.
Proof. 
The proof follows from Proposition 11. ☐
Proposition 13.
The measures $d_{3H}$ and $d_{1H}$ satisfy the inequality $d_{3H} \le d_{1H}$.
Proof. 
Because for any $a \in [0,1]$, $a^2 \le a \le a^{1/2}$, the remaining proof follows from Proposition 7. ☐
Proposition 14.
The measures $d_{4H}$ and $d_{2H}$ satisfy the inequality $d_{4H} \le d_{2H}$.
Proof. 
The proof follows from Proposition 13. ☐
Proposition 15.
The measures d 1 H and d 2 satisfy the following inequality:
$d_{1H} \le d_2$
Proof. 
For positive numbers a i , i = 1 , 2 , , n , we have max i { a i } i = 1 n a i . Thus, for any two SVNSs A and B, we have d 1 H ( A , B ) = 1 3 n ( 2 + t ) i = 1 n   max i ( | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | , | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | , | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | ) 1 3 n ( 2 + t ) i = 1 n | t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) + | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | + | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | = d 2 ( A , B ) . Hence d 1 H d 2 . ☐
Proposition 16.
The measures d 3 H and d 4 satisfy the following inequality:
$d_{3H} \le d_4$
Proof. 
The proof follows from Proposition 15. ☐
Proposition 17.
The measures d 2 , d 5 and d 1 H satisfy the following inequalities:
(i) 
$d_2 \ge \frac{d_5 + d_{1H}}{2}$;
(ii) 
$d_2 \ge d_5 \cdot d_{1H}$.
Proof. 
Because $d_2 \ge d_5$ and $d_2 \ge d_{1H}$, by adding these inequalities, we obtain $d_2 \ge \frac{d_5 + d_{1H}}{2}$. On the other hand, multiplying them gives $d_2^2 \ge d_5 \cdot d_{1H}$, and since $d_2 \le 1$, we also obtain $d_2 \ge d_5 \cdot d_{1H}$. ☐

4. Generalized Distance Measure

The above-defined Hamming and Euclidean distance measures are generalized for the two SVNSs A and B on the universal set X as follows:
$d^p(A,B) = \left\{ \frac{1}{3n(2+t)^p} \sum_{i=1}^{n} \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^p + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^p + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^p \Big] \right\}^{1/p}$
where $p \ge 1$ is the $L_p$ norm parameter and $t \ge 3$ represents the uncertainty index parameter.
In particular, if p = 1 and p = 2 , then the above measure, given in Equation (18), reduces to measures d 2 and d 4 defined in Equations (9) and (11), respectively.
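The generalized measure of Equation (18) folds the previous sketches into a single function of p and t; the sketch below is ours (not the authors' code) and reuses the hypothetical _terms helper from Section 3.

```python
# Generalized distance of Equation (18): p >= 1 is the L_p-norm parameter and
# t >= 3 the uncertainty parameter; A and B are lists of (mu, rho, nu) triples.
def dp(A, B, p=1, t=3):
    n = len(A)
    total = sum(sum(x ** p for x in _terms(a, b, t)) for a, b in zip(A, B))
    return (total / (3 * n * (2 + t) ** p)) ** (1.0 / p)
```

With p = 1 this coincides with d2 and with p = 2 with d4, mirroring the remark above.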
Proposition 18.
The above-defined distance d p ( A , B ) , between SVNSs A and B, satisfies the following properties (P1)–(P4):
(P1) 
$0 \le d^p(A,B) \le 1$, $\forall\, A, B \in \Phi(X)$;
(P2) 
d p ( A , B ) = 0 , iff A = B ;
(P3) 
d p ( A , B ) = d p ( B , A ) ;
(P4) 
If $A \subseteq B \subseteq C$, then $d^p(A,C) \ge d^p(A,B)$ and $d^p(A,C) \ge d^p(B,C)$.
Proof. 
For $p \ge 1$ and $t \ge 3$, we have the following:
(P1)
For SVNSs, | μ A ( x i ) μ B ( x i ) | 1 , | ρ A ( x i ) ρ B ( x i ) | 1 and | ν A ( x i ) ν B ( x i ) | 1 . Thus, we obtain
( 2 + t ) t ( μ A ( x i ) μ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) ( 2 + t ) ( 2 + t ) t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) ( 2 + t ) ( 2 + t ) t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) ) μ B ( x i ) ( 2 + t )
which implies that
0 | t ( μ A ( x i ) μ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) | p ( 2 + t ) p 0 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | p ( 2 + t ) p 0 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) ) μ B ( x i ) | p ( 2 + t ) p
Thus, by adding these inequalities, we obtain 0 d p ( A , B ) 1 .
(P2)
Assuming that A = B μ A ( x ) = μ B ( x i ) , ρ A ( x i ) = ρ B ( x i ) , and ν A ( x ) = ν B ( x i ) , thus, d p ( A , B ) = 0 .
Conversely, assuming that d p ( A , B ) = 0 implies that
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | = 0 | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | = 0 | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | = 0
and hence, after solving, we obtain μ A ( x i ) = μ B ( x i ) , ρ A ( x i ) = ρ B ( x i ) and ν A ( x i ) = ν B ( x i ) . Thus, A = B .
(P3)
This is straightforward.
(P4)
Let A B C ; then μ A ( x i ) μ B ( x i ) μ C ( x i ) , ρ A ( x i ) ρ B ( x i ) ρ C ( x i ) and ν A ( x i ) ν B ( x i ) ν C ( x i ) . Thus, μ A ( x i ) μ B ( x i ) μ A ( x i ) μ C ( x i ) , ρ A ( x i ) ρ B ( x i ) ρ A ( x i ) ρ C ( x i ) and ν A ( x i ) ν B ( x i ) ν A ( x i ) ν C ( x i ) . Hence, we obtain
| t ( μ A ( x i ) μ B ( x i ) ) + ( ρ A ( x i ) ρ B ( x i ) ) + ( ν A ( x i ) ν B ( x i ) ) | p | t ( μ A ( x i ) μ c ( x i ) ) + ( ρ A ( x i ) ρ C ( x i ) ) + ( ν A ( x i ) ν C ( x i ) ) | p | t ( ρ A ( x i ) ρ B ( x i ) ) ( ν A ( x i ) ν B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | p | t ( ρ A ( x i ) ρ C ( x i ) ) ( ν A ( x i ) ν C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | p and | t ( ν A ( x i ) ν B ( x i ) ) ( ρ A ( x i ) ρ B ( x i ) ) + ( μ A ( x i ) μ B ( x i ) ) | p | t ( ν A ( x i ) ν C ( x i ) ) ( ρ A ( x i ) ρ C ( x i ) ) + ( μ A ( x i ) μ C ( x i ) ) | p
Thus, we obtain d p ( A , B ) d p ( A , C ) . Similarly, d p ( B , C ) d p ( A , C ) .
 ☐
If the weight vector $\omega_i$ ($i = 1, 2, \ldots, n$) of each element is considered such that $\omega_i \in [0,1]$ and $\sum_i \omega_i = 1$, then a generalized parametric distance measure between SVNSs A and B takes the following form:
$d_w^p(A,B) = \left\{ \frac{1}{3n(2+t)^p} \sum_{i=1}^{n} \omega_i \Big[ \big| t(\mu_A(x_i)-\mu_B(x_i)) + (\rho_A(x_i)-\rho_B(x_i)) + (\nu_A(x_i)-\nu_B(x_i)) \big|^p + \big| t(\rho_A(x_i)-\rho_B(x_i)) - (\nu_A(x_i)-\nu_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^p + \big| t(\nu_A(x_i)-\nu_B(x_i)) - (\rho_A(x_i)-\rho_B(x_i)) + (\mu_A(x_i)-\mu_B(x_i)) \big|^p \Big] \right\}^{1/p}$
In particular, if p = 1 and p = 2 , Equation (19) reduces to Equations (12) and (13), respectively.
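The weighted counterpart of Equation (19) only adds the weight inside the sum; a corresponding sketch (again with our own naming and the _terms helper from Section 3):

```python
# Weighted generalized distance of Equation (19); w is a list of weights with
# w_i in [0, 1] and sum(w) == 1.
def dwp(A, B, w, p=1, t=3):
    n = len(A)
    total = sum(wi * sum(x ** p for x in _terms(a, b, t)) for a, b, wi in zip(A, B, w))
    return (total / (3 * n * (2 + t) ** p)) ** (1.0 / p)
```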
Proposition 19.
Let $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ be the weight vector of $x_i$ ($i = 1, 2, \ldots, n$) with $\omega_i \ge 0$ and $\sum_{i=1}^{n} \omega_i = 1$; then the generalized parametric distance measure between the SVNSs A and B defined by Equation (19) satisfies the following:
(P1) 
$0 \le d_w^p(A,B) \le 1$, $\forall\, A, B \in \Phi(X)$;
(P2) 
d w p ( A , B ) = 0 iff A = B ;
(P3) 
d w p ( A , B ) = d w p ( B , A ) ;
(P4) 
If $A \subseteq B \subseteq C$, then $d_w^p(A,C) \ge d_w^p(A,B)$ and $d_w^p(A,C) \ge d_w^p(B,C)$.
Proof. 
The proof follows from Proposition 18. ☐

5. Illustrative Examples

In order to illustrate the performance and validity of the above-proposed distance measures, two examples from the fields of pattern recognition and medical diagnosis have been taken into account.

5.1. Example 1: Application of Distance Measure in Pattern Recognition

Consider three known patterns A 1 , A 2 and A 3 , which are represented by the following SVNSs in a given universe X = { x 1 , x 2 , x 3 , x 4 } :
$A_1 = \{ \langle x_1, 0.7, 0.0, 0.1 \rangle, \langle x_2, 0.6, 0.1, 0.2 \rangle, \langle x_3, 0.8, 0.7, 0.6 \rangle, \langle x_4, 0.5, 0.2, 0.3 \rangle \}$
$A_2 = \{ \langle x_1, 0.4, 0.2, 0.3 \rangle, \langle x_2, 0.7, 0.1, 0.0 \rangle, \langle x_3, 0.1, 0.1, 0.6 \rangle, \langle x_4, 0.5, 0.3, 0.6 \rangle \}$
$A_3 = \{ \langle x_1, 0.5, 0.2, 0.2 \rangle, \langle x_2, 0.4, 0.1, 0.2 \rangle, \langle x_3, 0.1, 0.1, 0.4 \rangle, \langle x_4, 0.4, 0.1, 0.2 \rangle \}$
Consider an unknown pattern $B \in \mathrm{SVNS}(X)$, which is to be recognized, where
$B = \{ \langle x_1, 0.4, 0.1, 0.4 \rangle, \langle x_2, 0.6, 0.1, 0.1 \rangle, \langle x_3, 0.1, 0.0, 0.4 \rangle, \langle x_4, 0.4, 0.4, 0.7 \rangle \}$
Then the target of this problem is to classify the pattern B into one of the classes $A_1$, $A_2$ or $A_3$. For this, the proposed distance measures $d_1$, $d_2$, $d_3$, $d_4$, $d_{1H}$ and $d_{3H}$ have been computed from B to $A_k$ ($k = 1, 2, 3$) corresponding to t = 3, and the results are given as follows:
$d_1(A_1,B) = 0.5600$; $d_1(A_2,B) = 0.2932$; $d_1(A_3,B) = 0.4668$
$d_2(A_1,B) = 0.1400$; $d_2(A_2,B) = 0.0733$; $d_2(A_3,B) = 0.1167$
$d_3(A_1,B) = 0.3499$; $d_3(A_2,B) = 0.1641$; $d_3(A_3,B) = 0.3120$
$d_4(A_1,B) = 0.1749$; $d_4(A_2,B) = 0.0821$; $d_4(A_3,B) = 0.1560$
$d_{1H}(A_1,B) = 0.0633$; $d_{1H}(A_2,B) = 0.0300$; $d_{1H}(A_3,B) = 0.0567$
$d_{3H}(A_1,B) = 0.1252$; $d_{3H}(A_2,B) = 0.0560$; $d_{3H}(A_3,B) = 0.1180$
Thus, from these distance measures, we conclude that the pattern B belongs to the pattern A 2 . On the other hand, if we assume that the weights of x 1 , x 2 , x 3 and x 4 are 0.3, 0.4, 0.2 and 0.1, respectively, then we utilize the distance measures d 5 , d 6 , d 2 H and d 4 H for obtaining the most suitable pattern as follows:
$d_5(A_1,B) = 0.0338$; $d_5(A_2,B) = 0.0162$; $d_5(A_3,B) = 0.0233$
$d_6(A_1,B) = 0.0861$; $d_6(A_2,B) = 0.0369$; $d_6(A_3,B) = 0.0604$
$d_{2H}(A_1,B) = 0.0148$; $d_{2H}(A_2,B) = 0.0068$; $d_{2H}(A_3,B) = 0.0117$
$d_{4H}(A_1,B) = 0.0603$; $d_{4H}(A_2,B) = 0.0258$; $d_{4H}(A_3,B) = 0.0464$
Thus, the ranking order of the three patterns is A 2 , A 3 and A 1 , and hence A 2 is the most desirable pattern to be classified with B. Furthermore, it can be easily verified that these results validate the above-proposed propositions on the distance measures.
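The classification in this example can be scripted with the hypothetical d2 and classify helpers sketched in Section 3; the pattern data below are copied from the example.

```python
# Known patterns A1, A2, A3 and the unknown pattern B over X = {x1, x2, x3, x4},
# written as lists of (mu, rho, nu) triples.
A1 = [(0.7, 0.0, 0.1), (0.6, 0.1, 0.2), (0.8, 0.7, 0.6), (0.5, 0.2, 0.3)]
A2 = [(0.4, 0.2, 0.3), (0.7, 0.1, 0.0), (0.1, 0.1, 0.6), (0.5, 0.3, 0.6)]
A3 = [(0.5, 0.2, 0.2), (0.4, 0.1, 0.2), (0.1, 0.1, 0.4), (0.4, 0.1, 0.2)]
B  = [(0.4, 0.1, 0.4), (0.6, 0.1, 0.1), (0.1, 0.0, 0.4), (0.4, 0.4, 0.7)]

# Classify B by the smallest normalized Hamming distance d2 with t = 3.
print(classify(B, {"A1": A1, "A2": A2, "A3": A3}, distance=d2, t=3))
```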

Comparison of Example 1 Results with Existing Measures

The above-proposed measures have been compared with some existing measures under the NS environment to show the validity of the approach; the results are summarized in Table 1. From these results, it can be seen that the final ordering of the patterns coincides with that of the proposed measures, which shows the conservative nature of the measures.

5.2. Example 2: Application of Distance Measure in Medical Diagnosis

Consider a set of diseases $Q = \{ Q_1(\text{Viral fever}), Q_2(\text{Malaria}), Q_3(\text{Typhoid}), Q_4(\text{Stomach problem}), Q_5(\text{Chest problem}) \}$ and a set of symptoms $S = \{ s_1(\text{Temperature}), s_2(\text{Headache}), s_3(\text{Stomach pain}), s_4(\text{Cough}), s_5(\text{Chest pain}) \}$. Suppose a patient, with respect to all the symptoms, can be represented by the following SVNS:
$P(\text{Patient}) = \{ \langle s_1, 0.8, 0.2, 0.1 \rangle, \langle s_2, 0.6, 0.3, 0.1 \rangle, \langle s_3, 0.2, 0.1, 0.8 \rangle, \langle s_4, 0.6, 0.5, 0.1 \rangle, \langle s_5, 0.1, 0.4, 0.6 \rangle \}$
and each disease $Q_k$ ($k = 1, 2, 3, 4, 5$) is represented as follows:
$Q_1(\text{Viral fever}) = \{ \langle s_1, 0.4, 0.6, 0.0 \rangle, \langle s_2, 0.3, 0.2, 0.5 \rangle, \langle s_3, 0.1, 0.3, 0.7 \rangle, \langle s_4, 0.4, 0.3, 0.3 \rangle, \langle s_5, 0.1, 0.2, 0.7 \rangle \}$
$Q_2(\text{Malaria}) = \{ \langle s_1, 0.7, 0.3, 0.0 \rangle, \langle s_2, 0.2, 0.2, 0.6 \rangle, \langle s_3, 0.0, 0.1, 0.9 \rangle, \langle s_4, 0.7, 0.3, 0.0 \rangle, \langle s_5, 0.1, 0.1, 0.8 \rangle \}$
$Q_3(\text{Typhoid}) = \{ \langle s_1, 0.3, 0.4, 0.3 \rangle, \langle s_2, 0.6, 0.3, 0.1 \rangle, \langle s_3, 0.2, 0.1, 0.7 \rangle, \langle s_4, 0.2, 0.2, 0.6 \rangle, \langle s_5, 0.1, 0.0, 0.9 \rangle \}$
$Q_4(\text{Stomach problem}) = \{ \langle s_1, 0.1, 0.2, 0.7 \rangle, \langle s_2, 0.2, 0.4, 0.4 \rangle, \langle s_3, 0.8, 0.2, 0.0 \rangle, \langle s_4, 0.2, 0.1, 0.7 \rangle, \langle s_5, 0.2, 0.1, 0.7 \rangle \}$
$Q_5(\text{Chest problem}) = \{ \langle s_1, 0.1, 0.1, 0.8 \rangle, \langle s_2, 0.0, 0.2, 0.8 \rangle, \langle s_3, 0.2, 0.0, 0.8 \rangle, \langle s_4, 0.2, 0.0, 0.8 \rangle, \langle s_5, 0.8, 0.1, 0.1 \rangle \}$
Now, the target is to diagnose the disease of patient P among Q 1 , Q 2 , Q 3 , Q 4 and Q 5 . For this, proposed distance measures, d 1 , d 2 , d 3 , d 4 , d 1 H and d 3 H , have been computed from P to Q k ( k = 1 , 2 , , 5 ) and are given as follows:
$d_1(Q_1,P) = 0.6400$; $d_1(Q_2,P) = 0.9067$; $d_1(Q_3,P) = 0.6333$; $d_1(Q_4,P) = 1.4600$; $d_1(Q_5,P) = 1.6200$
$d_2(Q_1,P) = 0.1280$; $d_2(Q_2,P) = 0.1813$; $d_2(Q_3,P) = 0.1267$; $d_2(Q_4,P) = 0.2920$; $d_2(Q_5,P) = 0.3240$
$d_3(Q_1,P) = 0.3626$; $d_3(Q_2,P) = 0.4977$; $d_3(Q_3,P) = 0.4113$; $d_3(Q_4,P) = 0.7566$; $d_3(Q_5,P) = 0.8533$
$d_4(Q_1,P) = 0.1622$; $d_4(Q_2,P) = 0.2226$; $d_4(Q_3,P) = 0.1840$; $d_4(Q_4,P) = 0.3383$; $d_4(Q_5,P) = 0.3816$
$d_{1H}(Q_1,P) = 0.0613$; $d_{1H}(Q_2,P) = 0.0880$; $d_{1H}(Q_3,P) = 0.0627$; $d_{1H}(Q_4,P) = 0.1320$; $d_{1H}(Q_5,P) = 0.1400$
$d_{3H}(Q_1,P) = 0.1175$; $d_{3H}(Q_2,P) = 0.1760$; $d_{3H}(Q_3,P) = 0.1373$; $d_{3H}(Q_4,P) = 0.2439$; $d_{3H}(Q_5,P) = 0.2661$
Thus, from these distance measures, we conclude that the patient P suffers from the disease Q 3 .
On the other hand, if we assign the weights 0.3, 0.2, 0.2, 0.1 and 0.2 corresponding to the symptoms $s_k$ ($k = 1, 2, \ldots, 5$), respectively, then we utilize the distance measures $d_5$, $d_6$, $d_{2H}$ and $d_{4H}$ for obtaining the most suitable diagnosis as follows:
$d_5(Q_1,P) = 0.0284$; $d_5(Q_2,P) = 0.0403$; $d_5(Q_3,P) = 0.0273$; $d_5(Q_4,P) = 0.0625$; $d_5(Q_5,P) = 0.0684$
$d_6(Q_1,P) = 0.0795$; $d_6(Q_2,P) = 0.1101$; $d_6(Q_3,P) = 0.0862$; $d_6(Q_4,P) = 0.1599$; $d_6(Q_5,P) = 0.1781$
$d_{2H}(Q_1,P) = 0.0135$; $d_{2H}(Q_2,P) = 0.0200$; $d_{2H}(Q_3,P) = 0.0129$; $d_{2H}(Q_4,P) = 0.0276$; $d_{2H}(Q_5,P) = 0.0289$
$d_{4H}(Q_1,P) = 0.0572$; $d_{4H}(Q_2,P) = 0.0885$; $d_{4H}(Q_3,P) = 0.0636$; $d_{4H}(Q_4,P) = 0.1139$; $d_{4H}(Q_5,P) = 0.1226$
Thus, on the basis of the ranking order, we conclude that the patient P suffers from the disease Q 3 .
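The same minimum-distance rule drives the diagnosis; using the hypothetical d2 and classify helpers from Section 3, the data of this example can be encoded as follows.

```python
# Patient P and the candidate diseases from Section 5.2, each given as
# (mu, rho, nu) triples over the symptoms (s1, ..., s5).
P = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.1), (0.2, 0.1, 0.8), (0.6, 0.5, 0.1), (0.1, 0.4, 0.6)]
Q = {
    "Viral fever":     [(0.4, 0.6, 0.0), (0.3, 0.2, 0.5), (0.1, 0.3, 0.7), (0.4, 0.3, 0.3), (0.1, 0.2, 0.7)],
    "Malaria":         [(0.7, 0.3, 0.0), (0.2, 0.2, 0.6), (0.0, 0.1, 0.9), (0.7, 0.3, 0.0), (0.1, 0.1, 0.8)],
    "Typhoid":         [(0.3, 0.4, 0.3), (0.6, 0.3, 0.1), (0.2, 0.1, 0.7), (0.2, 0.2, 0.6), (0.1, 0.0, 0.9)],
    "Stomach problem": [(0.1, 0.2, 0.7), (0.2, 0.4, 0.4), (0.8, 0.2, 0.0), (0.2, 0.1, 0.7), (0.2, 0.1, 0.7)],
    "Chest problem":   [(0.1, 0.1, 0.8), (0.0, 0.2, 0.8), (0.2, 0.0, 0.8), (0.2, 0.0, 0.8), (0.8, 0.1, 0.1)],
}
# Diagnose by the smallest normalized Hamming distance d2 with t = 3.
print(classify(P, Q, distance=d2, t=3))
```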

Comparison of Example 2 Results with Existing Approaches

In order to verify the feasibility of the proposed decision-making approach based on the distance measures, we conducted a comparative analysis on the same illustrative example. For this, the various measures presented in Equations (1)–(7) were taken, and their corresponding results are summarized in Table 2, which shows that the patient P suffers from the disease $Q_1$.

5.3. Effect of the Parameters p and t on the Ordering

In order to analyze the effect of the parameters t and p on the measure values, an experiment was performed by taking different values of p ($p = 1, 1.5, 2, 3, 5, 10$) corresponding to different values of the uncertainty parameter t ($t = 3, 5, 7$). On the basis of these different pairs of parameters, the distance measures were computed, and their results are summarized in Table 3 and Table 4, respectively, for Examples 1 and 2 corresponding to different criteria weights.
From these, the following observations have been made:
(i)
For a fixed value of p, it has been observed that the measure values corresponding to each alternative increase with an increase in the value of t. In other words, varying t from 3 to 7 for a fixed value of p increases the distance of each diagnosis from the patient P.
(ii)
It has also been observed from these tables that when a weight vector is assigned to the criteria, the measure values are smaller than in the equal-weighting case.
(iii)
Finally, it is seen from the tables that the measure values corresponding to each alternative $Q_k$ ($k = 1, 2, 3, 4, 5$) are conservative in nature.
For each pair of parameters, the measure values lie between 0 and 1, and hence, on the basis of this, we conclude that the patient P suffers from the disease $Q_1$. The ranking order for the decision-maker, shown in the table as (13245), indicates that the ordering of the different alternatives is $Q_1 \succ Q_3 \succ Q_2 \succ Q_4 \succ Q_5$. Hence $Q_1$ is the most desirable, while $Q_5$ is the least desirable, for different values of t and p.
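The parameter sweep of Tables 3 and 4 can be reproduced in outline with the generalized measure dp sketched in Section 4 and the data P and Q encoded in the sketch of Section 5.2 (again only an illustrative sketch, not the authors' scripts):

```python
# Vary the uncertainty parameter t and the L_p-norm parameter p as in the tables
# and report the diagnosis obtained for each parameter pair.
for t in (3, 5, 7):
    for p in (1, 1.5, 2, 3, 5, 10):
        distances = {name: dp(Qk, P, p=p, t=t) for name, Qk in Q.items()}
        best = min(distances, key=distances.get)
        print(f"t={t}, p={p}: closest disease = {best}")
```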

5.4. Advantages of the Proposed Method

According to the above comparison analysis, the proposed method for addressing decision-making problems has the following advantages:
(i)
The distance measure under the IFS environment can only handle situations in which the degree of membership and non-membership is provided to the decision-maker. This kind of measure is unable to deal with indeterminacy, which commonly occurs in real-life applications. Because SVNSs are a successful tool in handling indeterminacy, the proposed distance measure in the neutrosophic domain can effectively be used in many real applications in decision-making.
(ii)
The proposed distance measure depends upon two parameters p and t, which help in adjusting the hesitation margin in computing data. The effect of hesitation will be diminished or almost neglected if the value of t is taken very large, and for smaller values of t, the effect of hesitation will rise. Thus, according to requirements, the decision-maker can adjust the parameter to handle incomplete as well as indeterminate information. Therefore, this proposed approach is more suitable for engineering, industrial and scientific applications.
(iii)
As has been observed from existing studies, various existing measures under NS environments have been proposed by researchers, but there are some situations that cannot be distinguished by these existing measures; hence their corresponding algorithm may give an irrelevant result. The proposed measure has the ability to overcome these flaws; thus it is a more suitable measure to tackle problems.

6. Conclusions

SVNSs can handle the imprecise, uncertain, incomplete and inconsistent information existing in real-world problems. Although several measures already exist to deal with such information, they have several flaws, as described in the manuscript. In this article, we overcome these flaws by proposing an alternative way to define new generalized distance measures between SVNSs. Further, a family of normalized and weighted normalized Hamming and Euclidean distance measures has been proposed for SVNSs. Some desirable properties and the relations between the measures have been studied in detail. Finally, a decision-making method has been proposed on the basis of these distance measures. To demonstrate the efficiency of the proposed measures, numerical examples of pattern recognition as well as medical diagnosis have been taken. A comparative study, as well as an analysis of the effect of the parameters on the ranking of the alternatives, supports the theory and demonstrates that the proposed measures offer an alternative way to solve decision-making problems. In the future, we will extend the proposed approach to the soft set environment [43,44,45], the multiplicative environment [46,47,48], and other uncertain and fuzzy environments [7,49,50,51,52,53].

Acknowledgments

The authors wish to thank the anonymous reviewers for their valuable suggestions. The second author, Nancy, was supported through the Maulana Azad National Fellowship funded by the University Grant Commission (No. F1-17.1/2017-18/MANF-2017-18-PUN-82613).

Author Contributions

H. Garg and Nancy jointly designed the idea of the research and planned its development. Nancy reviewed the literature and found examples. H. Garg wrote the paper. Nancy contributed to the case study. H. Garg analyzed the data and checked the language. Finally, both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  2. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  3. Xu, Z.S. Intuitionistic fuzzy aggregation operators. IEEE Trans. Fuzzy Syst. 2007, 15, 1179–1187. [Google Scholar]
  4. Garg, H. Confidence levels based Pythagorean fuzzy aggregation operators and its application to decision-making process. Comput. Math. Organ. Theory 2017, 23, 546–571. [Google Scholar] [CrossRef]
  5. Garg, H. Novel intuitionistic fuzzy decision making method based on an improved operation laws and its application. Eng. Appl. Artif. Intell. 2017, 60, 164–174. [Google Scholar] [CrossRef]
  6. Garg, H. Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm and t-conorm and their application to decision making. Comput. Ind. Eng. 2016, 101, 53–69. [Google Scholar] [CrossRef]
  7. Kumar, K.; Garg, H. TOPSIS method based on the connection number of set pair analysis under interval-valued intuitionistic fuzzy set environment. Comput. Appl. Math. 2016, 1–11. [Google Scholar] [CrossRef]
  8. Grzegorzewski, P. Distance between intuitionistic fuzzy sets and/or interval-valued fuzzy sets based on the hausdorff metric. Fuzzy Sets Syst. 2004, 148, 319–328. [Google Scholar] [CrossRef]
  9. Garg, H. A new generalized improved score function of interval-valued intuitionistic fuzzy sets and applications in expert systems. Appl. Soft Comput. 2016, 38, 988–999. [Google Scholar] [CrossRef]
  10. Garg, H. A New Generalized Pythagorean Fuzzy Information Aggregation Using Einstein Operations and Its Application to Decision Making. Int. J. Intell. Syst. 2016, 31, 886–920. [Google Scholar] [CrossRef]
  11. Xu, Z.S.; Yager, R.R. Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gen. Syst. 2006, 35, 417–433. [Google Scholar] [CrossRef]
  12. Garg, H. Some Picture Fuzzy Aggregation Operators and Their Applications to Multicriteria Decision-Making. Arabian J. Sci. Eng. 2017, 42, 5275–5290. [Google Scholar] [CrossRef]
  13. Garg, H. Some series of intuitionistic fuzzy interactive averaging aggregation operators. SpringerPlus 2016, 5, 999. [Google Scholar] [CrossRef] [PubMed]
  14. Yager, R.R. On ordered weighted avergaing aggregation operators in multi-criteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
  15. Garg, H. A new improved score function of an interval-valued Pythagorean fuzzy set based TOPSIS method. Int. J. Uncertain. Quantif. 2017, 7, 463–474. [Google Scholar] [CrossRef]
  16. Garg, H. Some methods for strategic decision-making problems with immediate probabilities in Pythagorean fuzzy environment. Int. J. Intell. Syst. 2017, 1–26. [Google Scholar] [CrossRef]
  17. Smarandache, F. A Unifying Field in Logics. Neutrosophy: Neutrosophic Probability, Set and Logic; American Research Press: Rehoboth, DE, USA, 1999. [Google Scholar]
  18. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Rev. Air Force Acad. 2010, 1, 10–14. [Google Scholar]
  19. Broumi, S.; Smarandache, F. Correlation coefficient of interval neutrosophic set. Appl. Mech. Mater. 2013, 436, 511–517. [Google Scholar] [CrossRef]
  20. Majumdar, P. Neutrosophic Sets and Its Applications to Decision Making. In Computational Intelligence for Big Data Analysis: Frontier Advances and Applications; Acharjya, D., Dehuri, S., Sanyal, S., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 97–115. [Google Scholar]
  21. Ye, J. Multicriteria decision making method using the correlation coefficient under single-value neutrosophic environment. Int. J. Gen. Syst. 2013, 42, 386–394. [Google Scholar] [CrossRef]
  22. Ye, J. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses. Artif. Intell. Med. 2015, 63, 171–179. [Google Scholar] [CrossRef] [PubMed]
  23. Kong, L.; Wu, Y.; Ye, J. Misfire fault diagnosis method of gasoline engines using the cosine similarity measure of neutrosophic numbers. Neutrosophic Sets Syst. 2015, 8, 42–45. [Google Scholar]
  24. Nancy; Garg, H. An improved score function for ranking Neutrosophic sets and its application to decision-making process. Int. J. Uncertain. Quantif. 2016, 6, 377–385. [Google Scholar]
  25. Garg, H.; Nancy. On single-valued neutrosophic entropy of order α. Neutrosophic Sets Syst. 2016, 14, 21–28. [Google Scholar]
  26. Garg, H.; Nancy. Non-linear programming method for multi-criteria decision making problems under interval neutrosophic set environment. Appl. Intell. 2017, 1–15. [Google Scholar] [CrossRef]
  27. Ye, J. Multiple attribute group decision-making method with completely unknown weights based on similarity measures under single valued neutrosophic environment. J. Intell. Fuzzy Syst. 2014, 27, 2927–2935. [Google Scholar]
  28. Majumdar, P.; Samant, S.K. On Similarity and entropy of neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 1245–1252. [Google Scholar]
  29. Ye, J. Similarity measures between interval neutrosophic sets and their applications in multicriteria decision-making. Int. J. Intell. Fuzzy Syst. 2014, 26, 165–172. [Google Scholar]
  30. Huang, H.L. New distance measure of single-valued neutrosophic sets and its application. Int. J. Intell. Syst. 2016, 31, 1021–1032. [Google Scholar] [CrossRef]
  31. Broumi, S.; Smarandache, F. Cosine similarity measure of interval valued neutrosophic sets. Neutrosophic Sets Syst. 2014, 5, 15–20. [Google Scholar]
  32. Peng, J.J.; Wang, J.Q.; Wang, J.; Zhang, H.Y.; Chen, Z.H. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems. Int. J. Syst. Sci. 2016, 47, 2342–2358. [Google Scholar] [CrossRef]
  33. Ye, J. A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 2459–2466. [Google Scholar]
  34. Li, Y.; Liu, P.; Chen, Y. Some single valued neutrosophic number Heronian mean operators and their application in multiple attribute group decision making. Informatica 2016, 27, 85–110. [Google Scholar] [CrossRef]
  35. Ye, J. Multiple attribute decision-making method based on the possibility degree ranking method and ordered weighted aggregation operators of interval neutrosophic numbers. J. Intell. Fuzzy Syst. 2015, 28, 1307–1317. [Google Scholar]
  36. Liu, P.; Liu, X. The neutrosophic number generalized weighted power averaging operator and its application in multiple attribute group decision making. Int. J. Mach. Learn. Cybern. 2016, 1–12. [Google Scholar] [CrossRef]
  37. Peng, J.J.; Wang, J.Q.; Wu, X.H.; Wang, J.; Chen, X.H. Multi-valued neutrosophic sets and power aggregation operators with their applications in multi-criteria group decision-making problems. Int. J. Comput. Intell. Syst. 2015, 8, 345–363. [Google Scholar] [CrossRef]
  38. Nancy; Garg, H. Novel single-valued neutrosophic decision making operators under Frank norm operations and its application. Int. J. Uncertain. Quantif. 2016, 6, 361–375. [Google Scholar] [CrossRef]
  39. Yang, L.; Li, B. A Multi-Criteria Decision-Making Method Using Power Aggregation Operators for Single-valued Neutrosophic Sets. Int. J. Database Theory Appl. 2016, 9, 23–32. [Google Scholar] [CrossRef]
  40. Tian, Z.P.; Wang, J.; Zhang, H.Y.; Wang, J.Q. Multi-criteria decision-making based on generalized prioritized aggregation operators under simplified neutrosophic uncertain linguistic environment. Int. J. Mach. Learn. Cybern. 2016, 1–17. [Google Scholar] [CrossRef]
  41. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; ProQuest Information & Learning, Infolearnquest: Ann Arbor, MI, USA, 1998. [Google Scholar]
  42. Ye, J. A netting method for clustering-simplified neutrosophic information. Soft Comput. 2017, 21, 7571–7577. [Google Scholar] [CrossRef]
  43. Garg, H.; Arora, R. Distance and similarity measures for dual hesitant fuzzy soft sets and their applications in multi-criteria decision-making problem. Int. J. Uncertain. Quantif. 2017, 7, 229–248. [Google Scholar] [CrossRef]
  44. Garg, H.; Arora, R. Generalized and Group-based Generalized intuitionistic fuzzy soft sets with applications in decision-making. Appl. Intell. 2017, 1–13. [Google Scholar] [CrossRef]
  45. Garg, H.; Arora, R. A nonlinear-programming methodology for multi-attribute decision-making problem with interval-valued intuitionistic fuzzy soft sets information. Appl. Intell. 2017, 1–16. [Google Scholar] [CrossRef]
  46. Garg, H. A Robust Ranking Method for Intuitionistic Multiplicative Sets Under Crisp, Interval Environments and Its Applications. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 366–374. [Google Scholar] [CrossRef]
  47. Garg, H. Generalized interaction aggregation operators in intuitionistic fuzzy multiplicative preference environment and their application to multicriteria decision-making. Appl. Intell. 2017, 1–17. [Google Scholar] [CrossRef]
  48. Garg, H. Generalized intuitionistic fuzzy multiplicative interactive geometric operators and their application to multiple criteria decision making. Int. J. Mach. Learn. Cybern. 2016, 7, 1075–1092. [Google Scholar] [CrossRef]
  49. Kumar, K.; Garg, H. Connection number of set pair analysis based TOPSIS method on intuitionistic fuzzy sets and their application to decision making. Appl. Intell. 2017, 1–8. [Google Scholar] [CrossRef]
  50. Rani, D.; Garg, H. Distance measures between the complex intuitionistic fuzzy sets and its applications to the decision-making process. Int. J. Uncertain. Quantif. 2017, 7, 423–439. [Google Scholar] [CrossRef]
  51. Garg, H. Generalized Pythagorean fuzzy Geometric aggregation operators using Einstein t-norm and t-conorm for multicriteria decision-making process. Int. J. Intell. Syst. 2017, 32, 597–630. [Google Scholar] [CrossRef]
  52. Garg, H. A novel improved accuracy function for interval valued Pythagorean fuzzy sets and its applications in decision making process. Int. J. Intell. Syst. 2017, 31, 1247–1260. [Google Scholar] [CrossRef]
  53. Garg, H. A novel correlation coefficients between Pythagorean fuzzy sets and its applications to Decision-Making processes. Int. J. Intell. Syst. 2016, 31, 1234–1252. [Google Scholar] [CrossRef]
Table 1. Ordering value of Example 1.

Methods | Measure value of B from A1 | from A2 | from A3 | Ranking order
D_H (defined in Equation (1)) [19] | 0.3250 | 0.1250 | 0.2500 | A2 ≻ A3 ≻ A1
Correlation coefficient [19] | 0.7883 | 0.9675 | 0.8615 | A2 ≻ A3 ≻ A1
D_NE (defined in Equation (3)) [20] | 0.5251 | 0.7674 | 0.6098 | A1 ≻ A3 ≻ A2
S_CS1 (defined in Equation (4)) [22] | 0.8209 | 0.9785 | 0.8992 | A2 ≻ A3 ≻ A1
S_CS2 (defined in Equation (5)) [22] | 0.8949 | 0.9911 | 0.9695 | A2 ≻ A3 ≻ A1
S_T1 (defined in Equation (6)) [42] | 0.7275 | 0.9014 | 0.7976 | A2 ≻ A3 ≻ A1
S_T2 (defined in Equation (7)) [42] | 0.9143 | 0.9673 | 0.9343 | A2 ≻ A3 ≻ A1
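For readers who want to check values of this kind numerically, the sketch below shows how a normalized Hamming-type distance between two SVNSs is typically evaluated in Python. It follows the form commonly attributed to [20], D_NH(A, B) = (1/3n) Σ (|T_A − T_B| + |I_A − I_B| + |F_A − F_B|); the SVNS triples used here are hypothetical placeholders, since the data of Example 1 are given earlier in the paper and are not reproduced in this table.

```python
# Minimal sketch (not the paper's own code) of a normalized Hamming-type
# distance between two single-valued neutrosophic sets, in the form commonly
# attributed to Majumdar and Samanta [20].

def normalized_hamming(A, B):
    """A, B: lists of (T, I, F) membership triples of equal length n."""
    n = len(A)
    total = sum(abs(ta - tb) + abs(ia - ib) + abs(fa - fb)
                for (ta, ia, fa), (tb, ib, fb) in zip(A, B))
    return total / (3 * n)

# Hypothetical SVNS data (placeholders, not the data of Example 1).
A1 = [(0.7, 0.0, 0.1), (0.6, 0.1, 0.2), (0.8, 0.7, 0.6)]
B  = [(0.4, 0.3, 0.7), (0.6, 0.1, 0.2), (0.9, 0.4, 0.6)]

print(round(normalized_hamming(A1, B), 4))  # distance of B from A1
```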
Table 2. Comparison of diagnosis results using existing measures.

Approach | Ranking order
D_H (defined in Equation (1)) [19] | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
Correlation [19] | Q1 ≻ Q2 ≻ Q3 ≻ Q4 ≻ Q5
Distance measure [27], p = 1 | Q3 ≻ Q1 ≻ Q2 ≻ Q4 ≻ Q5
Distance measure [27], p = 2 | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
Distance measure [27], p = 3 | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
Distance measure [27], p = 5 | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
D_NH (defined in Equation (2)) [20] | Q3 ≻ Q1 ≻ Q2 ≻ Q4 ≻ Q5
D_NE (defined in Equation (3)) [20] | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
S_CS1 (defined in Equation (4)) [22] | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
S_CS2 (defined in Equation (5)) [22] | Q1 ≻ Q2 ≻ Q3 ≻ Q4 ≻ Q5
S_T1 (defined in Equation (6)) [42] | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
S_T2 (defined in Equation (7)) [42] | Q1 ≻ Q3 ≻ Q2 ≻ Q4 ≻ Q5
Table 3. Results of classification of the given sample using the proposed distance measure. The left block gives equal importance to each criterion; the right block uses the weight vector (0.3, 0.4, 0.2, 0.1)^T.

p | t | d_p(A1, B) | d_p(A2, B) | d_p(A3, B) | Ranking | d_w^p(A1, B) | d_w^p(A2, B) | d_w^p(A3, B) | Ranking
1 | 3 | 0.1400 | 0.0733 | 0.1167 | A2 ≻ A3 ≻ A1 | 0.0338 | 0.0162 | 0.0233 | A2 ≻ A3 ≻ A1
1 | 5 | 0.1667 | 0.0762 | 0.1214 | A2 ≻ A3 ≻ A1 | 0.0387 | 0.0170 | 0.0248 | A2 ≻ A3 ≻ A1
1 | 7 | 0.1815 | 0.0778 | 0.1241 | A2 ≻ A3 ≻ A1 | 0.0414 | 0.0175 | 0.0256 | A2 ≻ A3 ≻ A1
1.5 | 3 | 0.1598 | 0.0783 | 0.1374 | A2 ≻ A3 ≻ A1 | 0.0620 | 0.0277 | 0.0426 | A2 ≻ A3 ≻ A1
1.5 | 5 | 0.1924 | 0.0817 | 0.1437 | A2 ≻ A3 ≻ A1 | 0.0723 | 0.0293 | 0.0452 | A2 ≻ A3 ≻ A1
1.5 | 7 | 0.2116 | 0.0838 | 0.1480 | A2 ≻ A3 ≻ A1 | 0.0784 | 0.0304 | 0.0469 | A2 ≻ A3 ≻ A1
2 | 3 | 0.1749 | 0.0821 | 0.1560 | A2 ≻ A3 ≻ A1 | 0.0861 | 0.0369 | 0.0604 | A2 ≻ A3 ≻ A1
2 | 5 | 0.2137 | 0.0859 | 0.1646 | A2 ≻ A3 ≻ A1 | 0.1021 | 0.0392 | 0.0644 | A2 ≻ A3 ≻ A1
2 | 7 | 0.2374 | 0.0885 | 0.1705 | A2 ≻ A3 ≻ A1 | 0.1120 | 0.0408 | 0.0671 | A2 ≻ A3 ≻ A1
3 | 3 | 0.1970 | 0.0880 | 0.1875 | A2 ≻ A3 ≻ A1 | 0.1229 | 0.0507 | 0.0927 | A2 ≻ A3 ≻ A1
3 | 5 | 0.2469 | 0.0929 | 0.2012 | A2 ≻ A3 ≻ A1 | 0.1497 | 0.0543 | 0.1000 | A2 ≻ A3 ≻ A1
3 | 7 | 0.2785 | 0.0962 | 0.2098 | A2 ≻ A3 ≻ A1 | 0.1672 | 0.0566 | 0.1046 | A2 ≻ A3 ≻ A1
5 | 3 | 0.2240 | 0.0967 | 0.2314 | A2 ≻ A1 ≻ A3 | 0.1680 | 0.0689 | 0.1469 | A2 ≻ A3 ≻ A1
5 | 5 | 0.2902 | 0.1041 | 0.2526 | A2 ≻ A3 ≻ A1 | 0.2128 | 0.0749 | 0.1605 | A2 ≻ A3 ≻ A1
5 | 7 | 0.3326 | 0.1087 | 0.2650 | A2 ≻ A3 ≻ A1 | 0.2426 | 0.0786 | 0.1685 | A2 ≻ A3 ≻ A1
10 | 3 | 0.2564 | 0.1107 | 0.2830 | A2 ≻ A1 ≻ A3 | 0.2203 | 0.0939 | 0.2248 | A2 ≻ A1 ≻ A3
10 | 5 | 0.3421 | 0.1231 | 0.3131 | A2 ≻ A3 ≻ A1 | 0.2915 | 0.1047 | 0.2487 | A2 ≻ A3 ≻ A1
10 | 7 | 0.3942 | 0.1304 | 0.3301 | A2 ≻ A3 ≻ A1 | 0.3356 | 0.1109 | 0.2622 | A2 ≻ A3 ≻ A1
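As an illustration of how Table 3 is read, the following minimal Python sketch applies the usual minimum-distance decision rule to the precomputed values of the p = 1, t = 3 row: the sample B is assigned to the pattern with the smallest distance, which reproduces the reported ranking A2 ≻ A3 ≻ A1. The distance values themselves come from the proposed biparametric measure defined earlier in the paper; only the decision step is coded here.

```python
# Minimum-distance classification step for the p = 1, t = 3 row of Table 3.
# The numbers are copied from the table; the measure that produced them is
# defined earlier in the paper and is not reimplemented here.

distances = {
    "A1": 0.1400,  # d_p(A1, B), equal criteria weights
    "A2": 0.0733,  # d_p(A2, B)
    "A3": 0.1167,  # d_p(A3, B)
}

weighted_distances = {
    "A1": 0.0338,  # d_w^p(A1, B), weight vector (0.3, 0.4, 0.2, 0.1)^T
    "A2": 0.0162,
    "A3": 0.0233,
}

def classify(dist):
    # The pattern with the minimum distance is the best match for the sample.
    return min(dist, key=dist.get)

print(classify(distances))           # -> A2, consistent with A2 ≻ A3 ≻ A1
print(classify(weighted_distances))  # -> A2
```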
Table 4. Diagnosis results on the basis of the proposed distance measure. The left block gives equal importance to each criterion; the right block uses the weight vector (0.3, 0.2, 0.2, 0.1, 0.2)^T.

p | t | d_p(Q1, P) | d_p(Q2, P) | d_p(Q3, P) | d_p(Q4, P) | d_p(Q5, P) | d_w^p(Q1, P) | d_w^p(Q2, P) | d_w^p(Q3, P) | d_w^p(Q4, P) | d_w^p(Q5, P)
1 | 3 | 0.1280 | 0.1813 | 0.1267 | 0.2920 | 0.3240 | 0.0284 | 0.0403 | 0.0273 | 0.0625 | 0.0684
1 | 5 | 0.1410 | 0.1867 | 0.1457 | 0.3076 | 0.3400 | 0.0304 | 0.0413 | 0.0300 | 0.0643 | 0.0700
1 | 7 | 0.1481 | 0.1896 | 0.1563 | 0.3178 | 0.3489 | 0.0315 | 0.0419 | 0.0315 | 0.0656 | 0.070
1.5 | 3 | 0.1465 | 0.2023 | 0.1600 | 0.3175 | 0.3574 | 0.0553 | 0.0768 | 0.0579 | 0.1154 | 0.1282
1.5 | 5 | 0.1612 | 0.2131 | 0.1794 | 0.3364 | 0.3778 | 0.0598 | 0.0808 | 0.0628 | 0.1202 | 0.1334
1.5 | 7 | 0.1711 | 0.2205 | 0.1916 | 0.3492 | 0.3913 | 0.0630 | 0.0836 | 0.0658 | 0.1237 | 0.1369
2 | 3 | 0.1622 | 0.2226 | 0.1840 | 0.3383 | 0.3816 | 0.0795 | 0.1101 | 0.0862 | 0.1599 | 0.1781
2 | 5 | 0.1787 | 0.2391 | 0.2038 | 0.3609 | 0.4052 | 0.0867 | 0.1183 | 0.0928 | 0.1686 | 0.1872
2 | 7 | 0.1895 | 0.2501 | 0.2168 | 0.3760 | 0.4211 | 0.0914 | 0.1238 | 0.0972 | 0.1744 | 0.1933
3 | 3 | 0.1870 | 0.2601 | 0.2163 | 0.3715 | 0.4142 | 0.1182 | 0.1662 | 0.1312 | 0.2276 | 0.2509
3 | 5 | 0.2061 | 0.2876 | 0.2376 | 0.4004 | 0.4421 | 0.1297 | 0.1842 | 0.1409 | 0.2436 | 0.2666
3 | 7 | 0.2175 | 0.3047 | 0.2516 | 0.4185 | 0.4601 | 0.1365 | 0.1954 | 0.1475 | 0.2535 | 0.2765
5 | 3 | 0.2185 | 0.3187 | 0.2531 | 0.4170 | 0.4504 | 0.1675 | 0.2471 | 0.1892 | 0.3127 | 0.3354
5 | 5 | 0.2405 | 0.3625 | 0.2782 | 0.4531 | 0.4826 | 0.1841 | 0.2817 | 0.2045 | 0.3384 | 0.3588
5 | 7 | 0.2529 | 0.3877 | 0.2940 | 0.4740 | 0.5023 | 0.1934 | 0.3016 | 0.2145 | 0.3532 | 0.3729
10 | 3 | 0.2519 | 0.3980 | 0.2969 | 0.4731 | 0.4896 | 0.2215 | 0.3524 | 0.2599 | 0.4095 | 0.4235
10 | 5 | 0.2771 | 0.4586 | 0.3271 | 0.5170 | 0.5252 | 0.2434 | 0.4063 | 0.2840 | 0.4464 | 0.4547
10 | 7 | 0.2912 | 0.4624 | 0.3451 | 0.5420 | 0.5466 | 0.2556 | 0.4363 | 0.2981 | 0.4675 | 0.4730
