Article

Multi-Criteria Decision-Making Method Based on Single-Valued Neutrosophic Schweizer–Sklar Muirhead Mean Aggregation Operators

1 School of Information, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
2 Graduate School, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(2), 152; https://doi.org/10.3390/sym11020152
Submission received: 3 January 2019 / Revised: 24 January 2019 / Accepted: 26 January 2019 / Published: 29 January 2019

Abstract:
The Schweizer–Sklar (SS) operations make information aggregation more flexible, and the Muirhead mean (MM) operator can take the correlations between inputs into account through a variable parameter vector. Because the traditional MM is only available for real numbers, and the single-valued neutrosophic set (SVNS) can better express incomplete and uncertain information in decision systems, in this paper we apply MM operators to SVNSs and present two new MM aggregation operators based on the SS operations: a single-valued neutrosophic SS Muirhead mean (SVNSSMM) operator and a weighted single-valued neutrosophic SS MM (WSVNSSMM) operator. We list some of their properties and some particular cases for various parameter values. We also propose a multi-criteria decision-making method based on the WSVNSSMM operator in the SVNS setting. Finally, we illustrate the feasibility of this method using a numerical example of company investment.

1. Introduction

Since Zadeh [1] established fuzzy sets (FSs), they have developed quickly. However, the inadequacy of an FS is obvious: it only has a membership degree (MD) T(x), so it cannot deal with some complex fuzzy information. Shortly afterward, Atanassov [2,3,4,5] presented the intuitionistic fuzzy set (IFS). Compared with an FS, whose membership degree expresses only determinacy, an IFS considers indeterminacy and adds a non-membership degree (NMD) F(x). Nevertheless, in practical issues an IFS also has limitations: it cannot handle information that blurs the border between truth and falsity. To address this problem, Atanassov [5] and Gargov [3] extended the MD and NMD to interval numbers and proposed the interval-valued IFS (IVIFS). In addition, Turksen [6] proposed interval-valued fuzzy sets (IVFSs), which also use the MD and NMD to describe determinacy and indeterminacy. However, under some circumstances the MD and NMD alone cannot express fuzzy information clearly. Therefore, Smarandache [7] proposed neutrosophic sets (NSs) by adding a hesitation (indeterminacy) degree I(x), which describes the difference between the MD and NMD. Since then, a large number of theories about neutrosophic sets have gradually been put forward. For example, Ye [8] proposed the simplified neutrosophic set (SNS), a subset of the NS. Wang [9] gave the definition of the interval NS (INS), which uses standard intervals to express the MD, the hesitation degree, and the NMD. Ye [8] and Wang and Smarandache [10,11] proposed the single-valued neutrosophic set (SVNS), which can handle inaccurate, incomplete, and inconsistent information well.
In fuzzy set theory and its applications, the Archimedean t-norm and t-conorm (ATT) occupy an important position. To generalize the classical triangular inequality, Menger [12] proposed the concept of triangular functions, the prototype of the t-norm and t-conorm. Schweizer and Sklar [13] gave a detailed definition of the t-norm and t-conorm, and the SS t-norm and t-conorm (SSTT) are one particular family of t-norms and t-conorms [14,15,16]. The Schweizer–Sklar operations are an instance of the ATT, but they contain an alterable parameter; therefore, they are more agile and flexible and can better reflect the properties of "logical and" and "logical or", respectively.
In fuzzy environments, information aggregation operators [17,18,19,20] are effective tools for handling multi-criteria decision-making problems, and they have now gained greater attention. In handling multi-criteria decision-making (MCDM) problems, some traditional methods, for instance, TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) [21] and ELECTRE (Elimination Et Choice Translation Reality) [22], can only give the ranking results, while aggregation operators are able to provide both the integrated values of the alternatives and the ranking results. In particular, some aggregation operators can take into account the relationships among the aggregated arguments. For instance, Yager [23] gave the power average (PA) operator; this operator aggregates input data and allocates the weight vector by the support degree between input parameters. Bonferroni [24] proposed the Bonferroni mean (BM) operator and Beliakov [25] presented the Heronian mean (HM) operator, and they can capture the interrelationships between input parameters very well. Then, Yager extended the BM operators to handle different kinds of uncertain information such as intuitionistic fuzzy numbers (IFNs) [26], interval-valued IFNs (IVIFNs) [27], and multi-valued neutrosophic numbers [28]. In addition, the HM operator was extended to IFNs [29], IVIFNs [30,31], etc. Furthermore, Yu and Wu [32] explained the difference between the BM and HM. However, the BM and HM operators only consider the relationships between two input parameters. To consider interrelationships among multiple input parameters, Maclaurin [33] first proposed, in 1729, the Maclaurin symmetric mean (MSM) operator, which has the salient advantage of being able to capture the correlations among arbitrary parameters.
After that, a more generalized operator, the Muirhead mean (MM) [34], was proposed by Muirhead; it adds an alterable parameter vector P while considering the interrelationships among multiple input parameters, and some existing operators are its special cases, for instance, the arithmetic and geometric mean (GM) operators (which do not consider correlations), the BM operator, and the MSM operator. When dealing with MCDM problems, some aggregation operators cannot consider the relationships between input parameters, whereas the MM operator can take the correlations between inputs into account through a variable parameter vector. Therefore, the MM operator is superior when dealing with MCDM problems.
Multi-criteria decision-making refers to the use of existing decision information to rank or select among a limited set of alternatives when multiple, mutually conflicting criteria must be considered. The Schweizer–Sklar operations use a variable parameter, which makes them more effective and flexible. In addition, SVNSs can handle incomplete, indeterminate, and inconsistent information in fuzzy environments. Therefore, we conduct further research on SS operations for SVNSs and apply them to MCDM problems. Furthermore, because the MM operator considers the interrelationships among multiple input parameters through an alterable parameter vector, combining the MM operator with the SS operations yields new aggregation operators, and it is meaningful to develop new means of solving MCDM problems in the single-valued neutrosophic environment. Accordingly, the purpose and significance of this article are (1) to develop a number of new MM operators by combining MM operators, SS operations, and SVNSs; (2) to discuss some meaningful properties and a number of special cases of the proposed operators; (3) to develop an MCDM method for SVNS information based on the proposed operators; and (4) to demonstrate the viability and superiority of the newly developed method.
The rest of this paper is organized as follows. In Section 2, we briefly state the fundamental concepts of SVNSs, the SSTT, and MM operators. In Section 3, we develop some single-valued neutrosophic Schweizer–Sklar Muirhead mean operators and explore a number of desirable properties and particular cases of the presented operators. In Section 4, we present an MCDM method based on the developed operators. In Section 5, we provide a numerical example of company investment to demonstrate the effectiveness and feasibility of the presented method and compare it with other existing methods. In Section 6, we briefly summarize this study.

2. Preliminaries

In the following, we illustrate the notions of SVNS, the operations of Schweizer–Sklar, and the Muirhead mean operator, which will be utilized in the rest of the paper.

2.1. Single-Valued Neutrosophic Set (SVNS)

Definition 1
([20]). Let X be a space of points (objects), with a generic element in X denoted by x. A neutrosophic set A in X is characterized by a membership (truth) function T_A(x), an indeterminacy function I_A(x), and a non-membership (falsity) function F_A(x). If the functions T_A(x), I_A(x), and F_A(x) are singleton subintervals or subsets of the real standard [0, 1], that is, T_A(x): X → [0, 1], I_A(x): X → [0, 1], and F_A(x): X → [0, 1], then their sum satisfies 0 ≤ T_A(x) + I_A(x) + F_A(x) ≤ 3 for any x ∈ X, and an SVNS A is denoted as follows:

A = { ⟨x, T_A(x), I_A(x), F_A(x)⟩ | x ∈ X }

For convenience, the ordered triple ⟨T_A(x), I_A(x), F_A(x)⟩, which is the core of an SVNS, is called a single-valued neutrosophic number (SVNN). Moreover, each SVNN can be written as a = (T_a, I_a, F_a), where T_a ∈ [0, 1], I_a ∈ [0, 1], F_a ∈ [0, 1], and 0 ≤ T_a + I_a + F_a ≤ 3.
Definition 2
([21]). For any SVNN a = (T_a, I_a, F_a), the score function S(a), accuracy function A(a), and certainty function C(a) of a are defined as follows:

S(a) = T_a + (1 − I_a) + (1 − F_a)

A(a) = T_a − F_a

C(a) = T_a
Definition 3
([21]). Let a = (T_a, I_a, F_a) and b = (T_b, I_b, F_b) be any two SVNNs. The comparison method is defined as follows:

(1) If S(a) > S(b), then a ≻ b;
(2) If S(a) = S(b) and A(a) > A(b), then a ≻ b;
(3) If S(a) = S(b), A(a) = A(b), and C(a) > C(b), then a ≻ b.
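For concreteness, Definitions 2 and 3 translate directly into code. The sketch below is our own illustration (the function names `score`, `accuracy`, `certainty`, and `ranking_key` are not from the paper); `ranking_key` encodes the lexicographic comparison of Definition 3.

```python
def score(a):
    # Score function S(a) = T + (1 - I) + (1 - F) for an SVNN a = (T, I, F).
    T, I, F = a
    return T + (1 - I) + (1 - F)

def accuracy(a):
    # Accuracy function A(a) = T - F.
    return a[0] - a[2]

def certainty(a):
    # Certainty function C(a) = T.
    return a[0]

def ranking_key(a):
    # Definition 3 compares by score, then accuracy, then certainty.
    return (score(a), accuracy(a), certainty(a))

a = (0.4878, 0.1864, 0.3361)
b = (0.6379, 0.1384, 0.1864)
print(score(a))                          # ≈ 1.9653
print(ranking_key(b) > ranking_key(a))   # True, i.e., b ≻ a
```

Tuples compare lexicographically in Python, so `ranking_key` reproduces the three-stage tie-breaking of Definition 3 directly.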

2.2. Muirhead Mean Operator

Definition 4
([22]). Let α_i (i = 1, 2, …, n) be a set of arbitrary positive real numbers and P = (p_1, p_2, …, p_n) ∈ R^n a parameter vector. Suppose

MM^P(α_1, α_2, …, α_n) = \left( \frac{1}{n!} \sum_{\sigma \in S_n} \prod_{j=1}^{n} \alpha_{\sigma(j)}^{p_j} \right)^{1 / \sum_{j=1}^{n} p_j}

Then MM^P is called the MM operator, where σ(j) (j = 1, 2, …, n) is any permutation of (1, 2, …, n) and S_n is the set of all permutations of (1, 2, …, n).
Furthermore, from the definition above, we know that:
(1) If P = (1, 0, …, 0), the MM reduces to MM^{(1,0,…,0)}(α_1, …, α_n) = \frac{1}{n} \sum_{j=1}^{n} \alpha_j, which is the arithmetic averaging operator.
(2) If P = (1/n, 1/n, …, 1/n), the MM reduces to MM^{(1/n,…,1/n)}(α_1, …, α_n) = \prod_{j=1}^{n} \alpha_j^{1/n}, which is the GM operator.
(3) If P = (1, 1, 0, …, 0), the MM reduces to MM^{(1,1,0,…,0)}(α_1, …, α_n) = \left( \frac{1}{n(n-1)} \sum_{i,j=1, i \neq j}^{n} \alpha_i \alpha_j \right)^{1/2}, which is the BM operator [35].
(4) If P = (1, …, 1, 0, …, 0) with k ones and n − k zeros, the MM reduces to MM^{(1,…,1,0,…,0)}(α_1, …, α_n) = \left( \frac{1}{C_n^k} \sum_{1 \le i_1 < \cdots < i_k \le n} \prod_{j=1}^{k} \alpha_{i_j} \right)^{1/k}, which is the MSM operator [36].
From Definition 4 and the special cases of the MM operator mentioned above, we know that the advantage of the MM operator is that it can capture the overall interrelationships among the multiple input parameters and it is a generalization of some existing aggregation operators.
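For real numbers, Definition 4 can be implemented by brute force over all permutations, which also makes the special cases above easy to check numerically. The sketch below is our own illustration under the assumption of positive inputs; the function name `muirhead_mean` is not from the paper.

```python
import math
from itertools import permutations

def muirhead_mean(values, p):
    # MM^P(a_1, ..., a_n) = ((1/n!) * sum over all permutations sigma of
    # prod_j a_{sigma(j)}^{p_j}) ^ (1 / sum_j p_j), per Definition 4.
    n = len(values)
    total = sum(
        math.prod(values[s] ** pj for s, pj in zip(perm, p))
        for perm in permutations(range(n))
    )
    return (total / math.factorial(n)) ** (1.0 / sum(p))

vals = [2.0, 3.0, 4.0]
print(muirhead_mean(vals, [1, 0, 0]))        # arithmetic mean: 3.0
print(muirhead_mean(vals, [1/3, 1/3, 1/3]))  # geometric mean: 24^(1/3) ≈ 2.884
```

With P = (1, 1, 0) the same function reproduces the BM operator of special case (3).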

2.3. Schweizer–Sklar Operations

Schweizer–Sklar operations comprise the SS product and the SS sum, which are particular cases of the ATT.
Definition 5
([20]). Suppose A = (a_A, b_A, c_A) and B = (a_B, b_B, c_B) are any two SVNSs; then the generalized intersection and union are defined as follows:

A ∩_{T,T*} B = { ⟨y, T(a_A(y), a_B(y)), T*(b_A(y), b_B(y)), T*(c_A(y), c_B(y))⟩ | y ∈ Y }

A ∪_{T,T*} B = { ⟨y, T*(a_A(y), a_B(y)), T(b_A(y), b_B(y)), T(c_A(y), c_B(y))⟩ | y ∈ Y }

where T expresses a t-norm and T* expresses a t-conorm.
The SS t-norm and t-conorm are defined as follows:

T_{SS,γ}(x, y) = (x^γ + y^γ − 1)^{1/γ}

T*_{SS,γ}(x, y) = 1 − ((1 − x)^γ + (1 − y)^γ − 1)^{1/γ}

where γ < 0 and x, y ∈ [0, 1]. In addition, when γ = 0, we have T_{SS,0}(x, y) = xy and T*_{SS,0}(x, y) = x + y − xy, which are the algebraic t-norm and t-conorm.
According to the SS t-norm and t-conorm, we define the SS operations of SVNNs as follows.
Definition 6.
Suppose ã_1 = (T_A, I_A, F_A) and ã_2 = (T_B, I_B, F_B) are any two SVNNs; then the generalized intersection and generalized union based on SS operations are

ã_1 ⊗_{T,T*} ã_2 = (T(T_A, T_B), T*(I_A, I_B), T*(F_A, F_B))

ã_1 ⊕_{T,T*} ã_2 = (T*(T_A, T_B), T(I_A, I_B), T(F_A, F_B))

According to the SS t-norm and t-conorm above, the SS operational rules of SVNNs are given as follows (γ < 0):

ã_1 ⊕_SS ã_2 = ( 1 − ((1 − T_A)^γ + (1 − T_B)^γ − 1)^{1/γ}, (I_A^γ + I_B^γ − 1)^{1/γ}, (F_A^γ + F_B^γ − 1)^{1/γ} )

ã_1 ⊗_SS ã_2 = ( (T_A^γ + T_B^γ − 1)^{1/γ}, 1 − ((1 − I_A)^γ + (1 − I_B)^γ − 1)^{1/γ}, 1 − ((1 − F_A)^γ + (1 − F_B)^γ − 1)^{1/γ} )

n ã_1 = ( 1 − (n(1 − T_A)^γ − (n − 1))^{1/γ}, (n I_A^γ − (n − 1))^{1/γ}, (n F_A^γ − (n − 1))^{1/γ} ), n > 0

ã_1^n = ( (n T_A^γ − (n − 1))^{1/γ}, 1 − (n(1 − I_A)^γ − (n − 1))^{1/γ}, 1 − (n(1 − F_A)^γ − (n − 1))^{1/γ} ), n > 0
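The four SS operational rules can be checked numerically. The following sketch uses our own helper names (not from the paper) and γ = −2 as an arbitrary admissible choice; it also verifies two easy consequences of the rules: commutativity of ⊕_SS and the identity 2ã = ã ⊕_SS ã.

```python
GAMMA = -2.0  # any gamma < 0 is admissible

def ss_sum(a, b, g=GAMMA):
    # SS sum a (+)_SS b of two SVNNs a = (T, I, F), b = (T, I, F).
    (Ta, Ia, Fa), (Tb, Ib, Fb) = a, b
    return (1 - ((1 - Ta) ** g + (1 - Tb) ** g - 1) ** (1 / g),
            (Ia ** g + Ib ** g - 1) ** (1 / g),
            (Fa ** g + Fb ** g - 1) ** (1 / g))

def ss_prod(a, b, g=GAMMA):
    # SS product a (x)_SS b.
    (Ta, Ia, Fa), (Tb, Ib, Fb) = a, b
    return ((Ta ** g + Tb ** g - 1) ** (1 / g),
            1 - ((1 - Ia) ** g + (1 - Ib) ** g - 1) ** (1 / g),
            1 - ((1 - Fa) ** g + (1 - Fb) ** g - 1) ** (1 / g))

def ss_scale(n, a, g=GAMMA):
    # Scalar multiple n * a, n > 0.
    Ta, Ia, Fa = a
    return (1 - (n * (1 - Ta) ** g - (n - 1)) ** (1 / g),
            (n * Ia ** g - (n - 1)) ** (1 / g),
            (n * Fa ** g - (n - 1)) ** (1 / g))

def ss_pow(a, n, g=GAMMA):
    # Power a ** n, n > 0.
    Ta, Ia, Fa = a
    return ((n * Ta ** g - (n - 1)) ** (1 / g),
            1 - (n * (1 - Ia) ** g - (n - 1)) ** (1 / g),
            1 - (n * (1 - Fa) ** g - (n - 1)) ** (1 / g))

a, b = (0.4, 0.3, 0.8), (0.2, 0.7, 0.1)
print(ss_sum(a, b) == ss_sum(b, a))  # True
```

Note that 2ã computed with `ss_scale` coincides with `ss_sum(a, a)`, and ã² computed with `ss_pow` coincides with `ss_prod(a, a)`, as the operational rules predict.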
Theorem 1.
Suppose a ˜ 1 = ( T A , I A , F A ) and a ˜ 2 = ( T B , I B , F B ) are any two SVNSs, then
(1) ã_1 ⊕_SS ã_2 = ã_2 ⊕_SS ã_1;
(2) ã_1 ⊗_SS ã_2 = ã_2 ⊗_SS ã_1;
(3) n(ã_1 ⊕_SS ã_2) = n ã_1 ⊕_SS n ã_2, n > 0;
(4) n_1 ã_1 ⊕_SS n_2 ã_1 = (n_1 + n_2) ã_1, n_1, n_2 > 0;
(5) ã_1^{n_1} ⊗_SS ã_1^{n_2} = ã_1^{n_1 + n_2}, n_1, n_2 > 0;
(6) ã_1^{n} ⊗_SS ã_2^{n} = (ã_1 ⊗_SS ã_2)^{n}, n > 0.
Theorem 1 follows directly from the operational rules above, so the proof is omitted.

3. Single-Valued Neutrosophic Schweizer–Sklar Muirhead Mean Aggregation Operators

In the following, we will produce single-valued neutrosophic SS Muirhead mean (SVNSSMM) operators and weighted single-valued neutrosophic SS Muirhead mean (WSVNSSMM) operators and discuss their special cases and some of the properties of the new operators.

3.1. The SVNSSMM Operator

Definition 7.
Let α_i = (T_i, I_i, F_i) (i = 1, 2, …, n) be a set of SVNNs, and P = (p_1, p_2, …, p_n) ∈ R^n be a vector of parameters. If

SVNSSMM^P(α_1, α_2, …, α_n) = \left( \frac{1}{n!} \bigoplus_{\sigma \in S_n} \bigotimes_{j=1}^{n} \alpha_{\sigma(j)}^{p_j} \right)^{1 / \sum_{j=1}^{n} p_j}

where all operations are the SS operations of Definition 6, then SVNSSMM^P is called the single-valued neutrosophic Schweizer–Sklar MM (SVNSSMM) operator, where σ(j) (j = 1, 2, …, n) is any permutation of (1, 2, …, n) and S_n is the set of all permutations of (1, 2, …, n).
Based on the SS operational rules of SVNNs, the aggregated result of Definition 7 is given in Theorem 2.
Theorem 2.
Let α_i = (T_i, I_i, F_i) (i = 1, 2, …, n) be a collection of SVNNs and γ < 0. Then the value obtained from Definition 7 is still an SVNN, and SVNSSMM^P(α_1, α_2, …, α_n) = (T, I, F), where

T = \left( 1 + \frac{1}{\sum_{j=1}^{n} p_j} \left[ \left( 1 - \left( \frac{1}{n!} \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j \left( T_{\sigma(j)}^{\gamma} - 1 \right) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right)^{\gamma} - 1 \right] \right)^{1/\gamma}

I = 1 - \left( 1 + \frac{1}{\sum_{j=1}^{n} p_j} \left[ \left( 1 - \left( \frac{1}{n!} \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j \left( (1 - I_{\sigma(j)})^{\gamma} - 1 \right) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right)^{\gamma} - 1 \right] \right)^{1/\gamma}

F = 1 - \left( 1 + \frac{1}{\sum_{j=1}^{n} p_j} \left[ \left( 1 - \left( \frac{1}{n!} \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j \left( (1 - F_{\sigma(j)})^{\gamma} - 1 \right) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right)^{\gamma} - 1 \right] \right)^{1/\gamma}
Proof. 
By the SS operational laws of SVNNs, we get

\alpha_{\sigma(j)}^{p_j} = \left( \left( p_j T_{\sigma(j)}^{\gamma} - (p_j - 1) \right)^{1/\gamma},\ 1 - \left( p_j (1 - I_{\sigma(j)})^{\gamma} - (p_j - 1) \right)^{1/\gamma},\ 1 - \left( p_j (1 - F_{\sigma(j)})^{\gamma} - (p_j - 1) \right)^{1/\gamma} \right)

and

\bigotimes_{j=1}^{n} \alpha_{\sigma(j)}^{p_j} = \left( \left( 1 + \sum_{j=1}^{n} p_j (T_{\sigma(j)}^{\gamma} - 1) \right)^{1/\gamma},\ 1 - \left( 1 + \sum_{j=1}^{n} p_j ((1 - I_{\sigma(j)})^{\gamma} - 1) \right)^{1/\gamma},\ 1 - \left( 1 + \sum_{j=1}^{n} p_j ((1 - F_{\sigma(j)})^{\gamma} - 1) \right)^{1/\gamma} \right)

Then \bigoplus_{\sigma \in S_n} \bigotimes_{j=1}^{n} \alpha_{\sigma(j)}^{p_j} has truth component

1 - \left( \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j (T_{\sigma(j)}^{\gamma} - 1) \right)^{1/\gamma} \right)^{\gamma} - (n! - 1) \right)^{1/\gamma}

indeterminacy component \left( \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j ((1 - I_{\sigma(j)})^{\gamma} - 1) \right)^{1/\gamma} \right)^{\gamma} - (n! - 1) \right)^{1/\gamma}, and the analogous falsity component.

Further, multiplying by 1/n! by the SS scalar rule gives truth component

1 - \left( \frac{1}{n!} \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j (T_{\sigma(j)}^{\gamma} - 1) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma}

and the analogous indeterminacy and falsity components. Finally, raising this value to the power 1/\sum_{j=1}^{n} p_j by the SS power rule yields the expression stated in Theorem 2. □

Example. Let x = (0.4, 0.3, 0.8), y = (0.2, 0.7, 0.1), z = (0.6, 0.4, 0.1), P = (1, 2, 1), and γ = −2. Substituting into the formula above gives SVNSSMM^{(1,2,1)}(x, y, z) = (0.5401, 0.3966, 0.4420).
Theorem 3
(Monotonicity). Let α_i = (T_i, I_i, F_i) and α_i′ = (T_i′, I_i′, F_i′) (i = 1, 2, …, n) be two sets of SVNNs. If T_i ≤ T_i′, I_i ≥ I_i′, and F_i ≥ F_i′ for all i, then

SVNSSMM^P(α_1, α_2, …, α_n) ≤ SVNSSMM^P(α_1′, α_2′, …, α_n′)

Proof.
Let SVNSSMM^P(α_1, …, α_n) = (T, I, F) and SVNSSMM^P(α_1′, …, α_n′) = (T′, I′, F′), where the components are given by the expression in Theorem 2. Since T_i ≤ T_i′ and γ < 0, we have T_{\sigma(j)}^{\gamma} \ge (T_{\sigma(j)}′)^{\gamma}, and hence

\left( 1 + \sum_{j=1}^{n} p_j (T_{\sigma(j)}^{\gamma} - 1) \right)^{1/\gamma} \le \left( 1 + \sum_{j=1}^{n} p_j ((T_{\sigma(j)}′)^{\gamma} - 1) \right)^{1/\gamma}

Each subsequent operation in the expression for T (subtracting from 1, raising to the power γ, averaging over S_n, raising to the power 1/γ, and finally applying the power 1/\sum_{j} p_j) reverses the inequality an even number of times in total, so T ≤ T′. Similarly, I ≥ I′ and F ≥ F′. Therefore, we obtain the following conclusion:

SVNSSMM^P(α_1, α_2, …, α_n) ≤ SVNSSMM^P(α_1′, α_2′, …, α_n′)
Theorem 4
(Commutativity). Suppose (α_1′, α_2′, …, α_n′) is any permutation of (α_1, α_2, …, α_n). Then

SVNSSMM^P(α_1′, α_2′, …, α_n′) = SVNSSMM^P(α_1, α_2, …, α_n)

This property is clear, so the proof is omitted.
In the following, we study several particular forms of the SVNSSMM operator for different parameter vectors P.
(1) When P = (1, 0, …, 0), the SVNSSMM operator reduces to the single-valued neutrosophic Schweizer–Sklar arithmetic averaging operator:

SVNSSMM^{(1,0,…,0)}(α_1, …, α_n) = \frac{1}{n} \bigoplus_{j=1}^{n} \alpha_j = \left( 1 - \left( \frac{1}{n} \sum_{j=1}^{n} (1 - T_j)^{\gamma} \right)^{1/\gamma},\ \left( \frac{1}{n} \sum_{j=1}^{n} I_j^{\gamma} \right)^{1/\gamma},\ \left( \frac{1}{n} \sum_{j=1}^{n} F_j^{\gamma} \right)^{1/\gamma} \right)

(2) When P = (λ, 0, …, 0), it reduces to the single-valued neutrosophic Schweizer–Sklar generalized arithmetic averaging operator SVNSSMM^{(λ,0,…,0)}(α_1, …, α_n) = \left( \frac{1}{n} \bigoplus_{j=1}^{n} \alpha_j^{\lambda} \right)^{1/\lambda}, whose truth component is

\left( 1 + \frac{1}{\lambda} \left[ \left( 1 - \left( \frac{1}{n} \sum_{j=1}^{n} \left( 1 - \left( 1 + \lambda (T_j^{\gamma} - 1) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right)^{\gamma} - 1 \right] \right)^{1/\gamma}

with the dual indeterminacy and falsity components obtained using (1 − I_j) and (1 − F_j).
(3) When P = (1, 1, 0, …, 0), it reduces to the single-valued neutrosophic Schweizer–Sklar BM operator SVNSSMM^{(1,1,0,…,0)}(α_1, …, α_n) = \left( \frac{1}{n(n-1)} \bigoplus_{i \neq j} (\alpha_i \otimes \alpha_j) \right)^{1/2}, whose truth component is

\left( \frac{1}{2} \left( 1 + \left[ 1 - \left( \frac{1}{n(n-1)} \sum_{i,j=1, i \neq j}^{n} \left( 1 - \left( T_i^{\gamma} + T_j^{\gamma} - 1 \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right]^{\gamma} \right) \right)^{1/\gamma}

with the dual indeterminacy and falsity components.
(4) When P = (1, …, 1, 0, …, 0) with k ones and n − k zeros, it reduces to the single-valued neutrosophic Schweizer–Sklar Maclaurin symmetric mean (MSM) operator SVNSSMM^{(1,…,1,0,…,0)}(α_1, …, α_n) = \left( \frac{1}{C_n^k} \bigoplus_{1 \le i_1 < \cdots < i_k \le n} \bigotimes_{j=1}^{k} \alpha_{i_j} \right)^{1/k}, whose truth component is

\left( 1 + \frac{1}{k} \left[ \left( 1 - \left( \frac{1}{C_n^k} \sum_{1 \le i_1 < \cdots < i_k \le n} \left( 1 - \left( 1 + \sum_{j=1}^{k} (T_{i_j}^{\gamma} - 1) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right)^{\gamma} - 1 \right] \right)^{1/\gamma}

with the dual indeterminacy and falsity components.
(5) When P = (1, 1, …, 1), it reduces to the single-valued neutrosophic Schweizer–Sklar geometric averaging operator:

SVNSSMM^{(1,1,…,1)}(α_1, …, α_n) = \left( \bigotimes_{j=1}^{n} \alpha_j \right)^{1/n} = \left( \left( 1 + \frac{1}{n} \sum_{j=1}^{n} (T_j^{\gamma} - 1) \right)^{1/\gamma},\ 1 - \left( 1 + \frac{1}{n} \sum_{j=1}^{n} ((1 - I_j)^{\gamma} - 1) \right)^{1/\gamma},\ 1 - \left( 1 + \frac{1}{n} \sum_{j=1}^{n} ((1 - F_j)^{\gamma} - 1) \right)^{1/\gamma} \right)

(6) When P = (1/n, 1/n, …, 1/n), it reduces to the same single-valued neutrosophic Schweizer–Sklar geometric averaging operator: SVNSSMM^{(1/n,…,1/n)}(α_1, …, α_n) = \bigotimes_{j=1}^{n} \alpha_j^{1/n}, which coincides with the expression in case (5).
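Putting the pieces together, the SVNSSMM operator of Definition 7 can be implemented directly on top of the SS operational rules. The sketch below uses our own helper names (not from the paper) and γ = −2 as an arbitrary admissible choice. Idempotency, a standard property of MM-type means, gives a convenient sanity check: aggregating n copies of the same SVNN should return that SVNN.

```python
import math
from functools import reduce
from itertools import permutations

G = -2.0  # Schweizer–Sklar parameter, gamma < 0

def ss_sum(a, b):
    # SS sum of two SVNNs (T, I, F).
    return (1 - ((1 - a[0]) ** G + (1 - b[0]) ** G - 1) ** (1 / G),
            (a[1] ** G + b[1] ** G - 1) ** (1 / G),
            (a[2] ** G + b[2] ** G - 1) ** (1 / G))

def ss_prod(a, b):
    # SS product of two SVNNs.
    return ((a[0] ** G + b[0] ** G - 1) ** (1 / G),
            1 - ((1 - a[1]) ** G + (1 - b[1]) ** G - 1) ** (1 / G),
            1 - ((1 - a[2]) ** G + (1 - b[2]) ** G - 1) ** (1 / G))

def ss_scale(n, a):
    # Scalar multiple n * a, n > 0.
    return (1 - (n * (1 - a[0]) ** G - (n - 1)) ** (1 / G),
            (n * a[1] ** G - (n - 1)) ** (1 / G),
            (n * a[2] ** G - (n - 1)) ** (1 / G))

def ss_pow(a, n):
    # Power a ** n, n > 0.
    return ((n * a[0] ** G - (n - 1)) ** (1 / G),
            1 - (n * (1 - a[1]) ** G - (n - 1)) ** (1 / G),
            1 - (n * (1 - a[2]) ** G - (n - 1)) ** (1 / G))

def svnssmm(alphas, p):
    # ((1/n!) (+)_{sigma in S_n} (x)_{j} alpha_{sigma(j)}^{p_j}) ^ (1 / sum_j p_j)
    n = len(alphas)
    inner = [reduce(ss_prod, (ss_pow(alphas[s], pj) for s, pj in zip(perm, p)))
             for perm in permutations(range(n))]
    avg = ss_scale(1 / math.factorial(n), reduce(ss_sum, inner))
    return ss_pow(avg, 1 / sum(p))
```

With P = (1, 0, …, 0) the result coincides, up to floating-point error, with the SS arithmetic average of special case (1) above.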

3.2. The WSVNSSMM Operator

In real decision-making, the weights of the criteria are of great significance to the decision results. However, the SVNSSMM operator does not take the attribute weights into account, so we establish the weighted SVNSSMM operator in the following.
Definition 8.
Let α_i = (T_i, I_i, F_i) (i = 1, 2, …, n) be a set of SVNNs, P = (p_1, p_2, …, p_n) ∈ R^n a vector of parameters, and ω = (ω_1, ω_2, …, ω_n)^T the weight vector of the α_i (i = 1, 2, …, n), satisfying ω_i ∈ [0, 1] and \sum_{i=1}^{n} \omega_i = 1. If

WSVNSSMM^P(α_1, α_2, …, α_n) = \left( \frac{1}{n!} \bigoplus_{\sigma \in S_n} \bigotimes_{j=1}^{n} \left( n \omega_{\sigma(j)} \alpha_{\sigma(j)} \right)^{p_j} \right)^{1 / \sum_{j=1}^{n} p_j}

then WSVNSSMM^P is called the weighted single-valued neutrosophic Schweizer–Sklar MM (WSVNSSMM) operator, where σ(j) (j = 1, 2, …, n) is any permutation of (1, 2, …, n) and S_n is the set of all permutations of (1, 2, …, n).
Theorem 5.
Let α_i = (T_i, I_i, F_i) (i = 1, 2, …, n) be a collection of SVNNs and γ < 0. Then the value obtained from Definition 8 is still an SVNN, and WSVNSSMM^P(α_1, α_2, …, α_n) = (T, I, F), where

T = \left( 1 + \frac{1}{\sum_{j=1}^{n} p_j} \left[ \left( 1 - \left( \frac{1}{n!} \sum_{\sigma \in S_n} \left( 1 - \left( 1 + \sum_{j=1}^{n} p_j \cdot n \omega_{\sigma(j)} \left( T_{\sigma(j)}^{\gamma} - 1 \right) \right)^{1/\gamma} \right)^{\gamma} \right)^{1/\gamma} \right)^{\gamma} - 1 \right] \right)^{1/\gamma}

and I and F are the corresponding dual components, obtained by replacing T_{\sigma(j)}^{\gamma} - 1 with (1 - I_{\sigma(j)})^{\gamma} - 1 and (1 - F_{\sigma(j)})^{\gamma} - 1, respectively, and complementing the result (I = 1 − (…)^{1/γ}, F = 1 − (…)^{1/γ}).
The proof of Theorem 5 is analogous to that of Theorem 2 and is omitted here.
Theorem 6
(Monotonicity). Let α_i = (T_i, I_i, F_i) and α_i′ = (T_i′, I_i′, F_i′) (i = 1, 2, …, n) be two sets of SVNNs. If T_i ≤ T_i′, I_i ≥ I_i′, and F_i ≥ F_i′ for all i, then

WSVNSSMM^P(α_1, α_2, …, α_n) ≤ WSVNSSMM^P(α_1′, α_2′, …, α_n′)
Theorem 7
(Commutativity). Suppose (α_1′, α_2′, …, α_n′) is any permutation of (α_1, α_2, …, α_n). Then

WSVNSSMM^P(α_1′, α_2′, …, α_n′) = WSVNSSMM^P(α_1, α_2, …, α_n)

The proofs of Theorems 6 and 7 are the same as the proofs of monotonicity and commutativity for the SVNSSMM operator, so they are not repeated here.
Theorem 8.
The SVNSSMM operator is a particular case of the WSVNSSMM operator.
Proof. 
When ω = (1/n, 1/n, …, 1/n), every weight factor satisfies n ω_{σ(j)} = n · (1/n) = 1, so each term p_j · n ω_{σ(j)} (T_{σ(j)}^γ − 1) in the expression of Theorem 5 reduces to p_j (T_{σ(j)}^γ − 1), and likewise for the indeterminacy and falsity components. The expression for WSVNSSMM^P(α_1, …, α_n) therefore coincides with the expression for SVNSSMM^P(α_1, …, α_n) in Theorem 2, i.e.,

WSVNSSMM^P(α_1, α_2, …, α_n) = SVNSSMM^P(α_1, α_2, …, α_n). □
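Theorem 8 is easy to confirm numerically: Definition 8 only inserts the factor n ω_{σ(j)} before each argument, and setting all weights to 1/n makes every scaling a no-op. The sketch below reuses the SS rules (our own helper names, γ = −2 as an arbitrary admissible choice); equal weights should reproduce the unweighted SVNSSMM operator.

```python
import math
from functools import reduce
from itertools import permutations

G = -2.0  # Schweizer–Sklar parameter, gamma < 0

def ss_sum(a, b):
    return (1 - ((1 - a[0]) ** G + (1 - b[0]) ** G - 1) ** (1 / G),
            (a[1] ** G + b[1] ** G - 1) ** (1 / G),
            (a[2] ** G + b[2] ** G - 1) ** (1 / G))

def ss_prod(a, b):
    return ((a[0] ** G + b[0] ** G - 1) ** (1 / G),
            1 - ((1 - a[1]) ** G + (1 - b[1]) ** G - 1) ** (1 / G),
            1 - ((1 - a[2]) ** G + (1 - b[2]) ** G - 1) ** (1 / G))

def ss_scale(n, a):
    return (1 - (n * (1 - a[0]) ** G - (n - 1)) ** (1 / G),
            (n * a[1] ** G - (n - 1)) ** (1 / G),
            (n * a[2] ** G - (n - 1)) ** (1 / G))

def ss_pow(a, n):
    return ((n * a[0] ** G - (n - 1)) ** (1 / G),
            1 - (n * (1 - a[1]) ** G - (n - 1)) ** (1 / G),
            1 - (n * (1 - a[2]) ** G - (n - 1)) ** (1 / G))

def svnssmm(alphas, p):
    # Unweighted operator of Definition 7.
    n = len(alphas)
    inner = [reduce(ss_prod, (ss_pow(alphas[s], pj) for s, pj in zip(perm, p)))
             for perm in permutations(range(n))]
    avg = ss_scale(1 / math.factorial(n), reduce(ss_sum, inner))
    return ss_pow(avg, 1 / sum(p))

def wsvnssmm(alphas, p, w):
    # Weighted operator of Definition 8: each argument is first scaled by n * w.
    n = len(alphas)
    inner = [reduce(ss_prod,
                    (ss_pow(ss_scale(n * w[s], alphas[s]), pj)
                     for s, pj in zip(perm, p)))
             for perm in permutations(range(n))]
    avg = ss_scale(1 / math.factorial(n), reduce(ss_sum, inner))
    return ss_pow(avg, 1 / sum(p))
```

With ω = (1/3, 1/3, 1/3), `wsvnssmm` returns the same SVNN as `svnssmm`, which is exactly the content of Theorem 8.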

4. MCDM Method Based on WSVNSSMM Operator

Next, we put forward a novel MCDM method based on the WSVNSSMM operator, as described below.
Assume A = {A_1, A_2, …, A_m} is a collection of m alternatives and C = {C_1, C_2, …, C_n} is a collection of n criteria. Suppose the weight vector of the criteria is ω = (ω_1, ω_2, …, ω_n)^T, satisfying ω_j ∈ [0, 1] and \sum_{j=1}^{n} \omega_j = 1, where ω_j denotes the importance of criterion C_j. The performance of alternative A_i under criterion C_j is measured by an SVNN, and the decision matrix is R = (r_{ij})_{m×n}, where r_{ij} = (T_{ij}, I_{ij}, F_{ij}). The main purpose is to rank the alternatives. The detailed decision-making steps are as follows.
Step 1: Normalize the criterion values.
In real decisions there are two types of criteria: benefit type and cost type. To make the types consistent, the first step is to convert all criteria to one type; in general, the cost type is changed to the benefit type, as follows:
If C_j is a cost-type criterion, then r_{ij} = (F_{ij}, 1 − I_{ij}, T_{ij}); otherwise, r_{ij} = (T_{ij}, I_{ij}, F_{ij}).
Step 2: Aggregate all criterion values for each alternative.
We use Definition 8 to obtain the comprehensive value:

Z_i = WSVNSSMM^P(r_{i1}, r_{i2}, …, r_{in})
Step 3: Calculate the score values S(Z_i) (i = 1, 2, …, m) by Definition 2.
When two score values are equal, calculate the accuracy values, and then the certainty values.
Step 4: Rank all the alternatives.
Based on Step 3 and Definition 3, we obtain the order of the alternatives.

5. Numerical Example

In this subsection, we use an MCDM example to demonstrate the feasibility and validity of the presented method.
We refer to the decision-making problem in Reference [8]. An investment company intends to choose the best investment among the possible alternatives. There are four possible options for the investment company to choose from: (1) a car company A_1; (2) a food company A_2; (3) a computer company A_3; and (4) an arms company A_4. The investment company considers the following three evaluation criteria: (1) the risk analysis C_1; (2) the growth analysis C_2; and (3) the environmental impact analysis C_3. Among them, C_1 and C_2 are benefit criteria and C_3 is a cost criterion. The weight vector of the criteria is ω = (0.35, 0.25, 0.4)^T. The four possible alternatives are evaluated with respect to the above three criteria in the form of SVNNs, and the single-valued neutrosophic decision matrix D is constructed as listed in Table 1.

5.1. Rank the Alternatives by the WSVNSSMM Operator

The steps are described as follows:
Step 1: Normalize the criterion values.
In this case, C_1 and C_2 are benefit types and C_3 is a cost type, so we construct the normalized decision matrix shown in Table 2.
Step 2: Aggregating all criterion values for each alternative. Utilize Definition 8 to obtain the comprehensive value Z i and suppose P = ( 1 , 1 , 1 ) and γ = 2 that have
Z 1 = ( 0.4878 , 0.1864 , 0.3361 ) ,                           Z 2 = ( 0.6379 , 0.1384 , 0.1864 ) , Z 3 = ( 0.5480 , 0.2227 , 0.2380 ) ,                           Z 4 = ( 0.6097 , 0.1667 , 0.1600 ) . .
Step 3: Calculate the score functions S(z_i) (i = 1, 2, 3, 4) of the comprehensive values: S(z1) = 1.9653, S(z2) = 2.3131, S(z3) = 2.0872, S(z4) = 2.2831.
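The reported scores are consistent, up to rounding of the aggregated values, with the linear score function s(T, I, F) = 2 + T − I − F that is commonly used for SVNNs; this is our reading of Definition 2, which is not shown in this excerpt, so treat it as an assumption. A small sketch reproduces Step 3 and Step 4:

```python
def score(svnn):
    """Assumed linear score of an SVNN (T, I, F); larger means better."""
    t, i, f = svnn
    return 2.0 + t - i - f

# Comprehensive values from Step 2 (P = (1, 1, 1), gamma = 2).
Z = {
    "A1": (0.4878, 0.1864, 0.3361),
    "A2": (0.6379, 0.1384, 0.1864),
    "A3": (0.5480, 0.2227, 0.2380),
    "A4": (0.6097, 0.1667, 0.1600),
}
scores = {a: score(z) for a, z in Z.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
# ranking == ['A2', 'A4', 'A3', 'A1'], matching the result of Section 5.1
```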
Step 4: Ranking all the alternatives.
Based on the score functions S(z_i) (i = 1, 2, 3, 4), the ranking of the alternatives {A1, A2, A3, A4} is A2 ≻ A4 ≻ A3 ≻ A1. Obviously, the best alternative is A2.

5.2. The Influence of the Parameters γ and P on the Decision-Making Result of This Example

To examine the impact of the parameter γ and the parameter vector P on the decision-making result of this example, we select different values of γ and P and report the ranking results of the alternatives in Table 3, Table 4 and Table 5.
When γ = 2 and the parameter vector P takes different values, the ranking results of the alternatives are given in Table 3.
When γ = 5 and the parameter vector P takes different values, the ranking results of the alternatives are given in Table 4.
When γ = 20 and the parameter vector P takes different values, the ranking results of the alternatives are given in Table 5.
As shown in Table 3, Table 4 and Table 5, when the parameter vector P is fixed and γ varies, the score functions change but the ranking results remain the same. By verification, the ranking is still unchanged even when γ = 200, so decision-makers can choose the value of γ according to their preferences; here we take γ = 2. Moreover, if γ is fixed and the parameter vector P varies, different ranking results can be obtained. For example, when P = (1, 1, 1), which considers the interrelationships among all input arguments, the ranking is A2 ≻ A4 ≻ A3 ≻ A1, so the best option is A2; however, when P = (1, 0, 0) or P = (1, 1, 0), the ranking is A4 ≻ A2 ≻ A3 ≻ A1, so the best option is A4. In addition, the above results show that for the WSVNSSMM operator the score values decrease as more correlations among criteria are taken into account; in other words, the more zeros in the parameter vector P, the larger the score values. Hence, decision-makers can set γ and P according to their risk preferences.
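The role of the parameter vector P is easiest to see in the classical Muirhead mean for positive real numbers [34], of which the SVNSSMM is a neutrosophic extension. A minimal sketch, with Python used only for illustration:

```python
import itertools
from math import factorial, prod

def muirhead_mean(a, p):
    """Classical Muirhead mean MM^p(a) of positive reals:
    ((1/n!) * sum over all permutations sigma of prod_j a[sigma(j)]**p[j]) ** (1/sum(p)).
    """
    n = len(a)
    total = sum(prod(x ** pj for x, pj in zip(perm, p))
                for perm in itertools.permutations(a))
    return (total / factorial(n)) ** (1.0 / sum(p))

# Special cases for n = 3:
# P = (1, 0, 0) reduces to the arithmetic mean (inputs treated as independent),
# P = (1, 1, 1) reduces to the geometric mean (interrelationships among all inputs).
print(muirhead_mean((1.0, 2.0, 3.0), (1, 0, 0)))  # 2.0, the arithmetic mean
print(muirhead_mean((1.0, 2.0, 3.0), (1, 1, 1)))  # 6 ** (1/3), the geometric mean
```

Similarly, P = (1, 1, 0) recovers a Bonferroni-type mean that couples pairs of inputs, which is why changing P switches between the ranking behaviors observed in Tables 3–5.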

5.3. Comparison with Other Methods

To demonstrate the validity of the method presented in this paper, we apply existing methods to the same numerical example: the cosine similarity measure proposed by Ye [8], the weighted single-valued neutrosophic Bonferroni mean (WSVNBM) operator extended from the normal neutrosophic weighted Bonferroni mean (NNWBM) operator [37], and the weighted correlation coefficient proposed by Ye [38]. The ranking results of these methods are given in Table 6 and Table 7.
According to Table 6, the ranking results obtained by these methods are the same, which indicates the validity and feasibility of the new method put forward in this paper. Further analysis shows that when the parameter vector is P = (1, 0, 0) or P = (2, 0, 0), the WSVNSSMM reduces to the weighted single-valued neutrosophic Schweizer–Sklar arithmetic averaging operator; in this case, as in the method based on the cosine similarity measure proposed by Ye [8], the input arguments are treated as independent and their interrelationships are not taken into account. When P = (1, 1, 0), the WSVNSSMM reduces to the weighted single-valued neutrosophic SS Bonferroni mean operator, which considers the interrelationship between pairs of input arguments, and its ranking result is the same as that of the WSVNBM. In addition, from Table 7, the method based on the weighted correlation coefficient proposed by Ye [38] yields the ranking A2 ≻ A4 ≻ A3 ≻ A1, which agrees with our result for P = (1, 1, 1). This shows that the best alternative is not necessarily A4; A2 is also possible, depending on whether the interrelationships among all criteria are considered, and the WSVNSSMM method developed in this paper is therefore more comprehensive. The presented method is thus a generalization of many existing methods.
In real decision-making environments, depending on the preferences of the decision-makers, we may need to consider the interrelationship between two arguments, among multiple arguments, or none at all; the presented method can capture all of these situations by changing the parameter vector P.
In short, the above comparative analysis shows that the method based on the WSVNSSMM operator presented in this paper is more general and more effective, and is therefore more advantageous in dealing with such decision-making problems.

6. Conclusions

MCDM problems based on SVNS information are widely encountered in various fields. In this paper, we adopted the Schweizer–Sklar operational rules and exploited the remarkable feature of the MM operator, namely its ability to capture the correlations between attributes through the parameter vector P. In the single-valued neutrosophic environment, we combined the MM operator with the SS operations and presented two new MM aggregation operators: the single-valued neutrosophic Schweizer–Sklar Muirhead mean (SVNSSMM) operator and the weighted single-valued neutrosophic Schweizer–Sklar Muirhead mean (WSVNSSMM) operator. We then discussed their desirable properties and some particular cases in detail. Finally, the presented method was compared with other methods on a numerical example to verify its feasibility. In the future, the WSVNSSMM operator can help to settle more complex MCDM problems, and we will further study other aggregation operators for handling MCDM problems.

Author Contributions

Y.G. conceived and designed the experiments; H.Z. and F.W. analyzed the data and wrote the paper.

Funding

This research was funded by the Key R&D Project of Shandong Province, China, grant number 2017XCGC0605.

Acknowledgments

This work was supported by the Key Research and Development Plan Project of Shandong Province (No. 2017XCGC0605).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  2. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  3. Atanassov, K.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349. [Google Scholar] [CrossRef]
  4. Atanassov, K.T. More on Intuitionistic Fuzzy Sets; Elsevier: North-Holland, The Netherlands, 1989. [Google Scholar]
  5. Atanassov, K.T. Operators over interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1994, 64, 159–174. [Google Scholar] [CrossRef]
  6. Turksen, I.B. Interval valued fuzzy sets based on normal forms. Fuzzy Sets Syst. 1986, 20, 191–210. [Google Scholar] [CrossRef]
  7. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis & Synthetic Analysis. Available online: https://www.google.com.tw/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=2ahUKEwjF8vj495HgAhUJIIgKHTsxBFEQFjACegQIDxAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F266366834_Neutrosophy_Neutrosophic_probability_set_and_logic_Analytic_synthesis_and_synthetic_analysis&usg=AOvVaw38P0cCji8UDaORB1bDykIZ (accessed on 2 January 2019).
  8. Ye, J. A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 2459–2466. [Google Scholar]
  9. Wang, H.; Smarandache, F.; Sunderraman, R.; Zhang, Y.-Q. Interval Neutrosophic Sets and Logic: Theory and Applications in Computing: Theory and Applications in Computing; Infinite Study: El Segundo, CA, USA, 2005. [Google Scholar]
  10. Peng, J.; Wang, J.; Zhang, H.; Chen, X. An outranking approach for multi-criteria decision-making problems with simplified neutrosophic sets. Appl. Soft Comput. 2014, 25, 336–346. [Google Scholar] [CrossRef]
  11. Haibin, W.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single Valued Neutrosophic Sets; Infinite Study: El Segundo, CA, USA, 2010. [Google Scholar]
  12. Menger, K. Statistical Metrics. Proc. Natl. Acad. Sci. USA 1942, 28, 535–537. [Google Scholar] [CrossRef]
  13. Schweizer, B.; Sklar, A. Probabilistic Metric Spaces; Courier Corporation: North Chelmsford, MA, USA, 2011. [Google Scholar]
  14. Liu, P.; Wang, P. Some interval-valued intuitionistic fuzzy Schweizer–Sklar power aggregation operators and their application to supplier selection. Int. J. Syst. Sci. 2018, 1–24. [Google Scholar] [CrossRef]
  15. Deschrijver, G. Generalized arithmetic operators and their relationship to t-norms in interval-valued fuzzy set theory. Fuzzy Sets Syst. 2009, 160, 3080–3102. [Google Scholar] [CrossRef]
  16. Zhang, X.; He, H.; Xu, Y. A fuzzy logic system based on Schweizer-Sklar t-norm. Sci. China Ser. F Inf. Sci. 2006, 49, 175–188. [Google Scholar] [CrossRef]
  17. Liu, P.; Liu, Z.; Zhang, X. Some intuitionistic uncertain linguistic Heronian mean operators and their application to group decision making. Appl. Math. Comput. 2014, 230, 570–586. [Google Scholar] [CrossRef]
  18. Liu, P.; Wang, Y. Multiple attribute decision-making method based on single-valued neutrosophic normalized weighted Bonferroni mean. Neural Comput. Appl. 2014, 25, 2001–2010. [Google Scholar] [CrossRef]
  19. Liu, P.; Yu, X. 2-Dimension uncertain linguistic power generalized weighted aggregation operator and its application in multiple attribute group decision making. Knowl. Based Syst. 2014, 57, 69–80. [Google Scholar] [CrossRef]
  20. Liu, P. The Aggregation Operators Based on Archimedean t-Conorm and t-Norm for Single-Valued Neutrosophic Numbers and their Application to Decision Making. Int. J. Fuzzy Syst. 2016, 18, 849–863. [Google Scholar] [CrossRef]
  21. Liu, P. Multi-attribute decision-making method research based on interval vague set and TOPSIS method. Technol. Econ. Dev. Econ. 2009, 15, 453–463. [Google Scholar] [CrossRef]
  22. Liu, P.; Zhang, X. Research on the supplier selection of a supply chain based on entropy weight and improved ELECTRE-III method. Int. J. Prod. Res. 2011, 49, 637–646. [Google Scholar] [CrossRef]
  23. Yager, R.R. The power average operator. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 724–731. [Google Scholar] [CrossRef]
  24. Bonferroni, C. Sulle medie multiple di potenze. Bollettino dell’Unione Matematica Italiana 1950, 5, 267–270. [Google Scholar]
  25. Liu, P.; Shi, L. Some neutrosophic uncertain linguistic number Heronian mean operators and their application to multi-attribute group decision making. Neural Comput. Appl. 2017, 28, 1079–1093. [Google Scholar] [CrossRef]
  26. Xu, Z.; Yager, R.R. Intuitionistic fuzzy Bonferroni means. IEEE Trans. Syst. Man Cybern. B Cybern. 2011, 41, 568–578. [Google Scholar] [PubMed]
  27. Liu, P.; Li, H. Interval-valued intuitionistic fuzzy power Bonferroni aggregation operators and their application to group decision making. Cogn. Comput. 2017, 9, 494–512. [Google Scholar] [CrossRef]
  28. Liu, P.; Zhang, L.; Liu, X.; Wang, P. Multi-valued neutrosophic number Bonferroni mean operators with their applications in multiple attribute group decision making. Int. J. Inf. Technol. Decis. Mak. 2016, 15, 1181–1210. [Google Scholar] [CrossRef]
  29. Liu, P.; Chen, S.M. Group decision making based on Heronian aggregation operators of intuitionistic fuzzy numbers. IEEE Trans. Cybern. 2017, 47, 2514–2530. [Google Scholar] [CrossRef] [PubMed]
  30. Chen, Y.; Liu, P. Multi-attribute decision-making approach based on intuitionistic trapezoidal fuzzy generalized heronian OWA operator. J. Intell. Fuzzy Syst. 2014, 27, 1381–1392. [Google Scholar]
  31. Liu, P. Multiple attribute group decision making method based on interval-valued intuitionistic fuzzy power Heronian aggregation operators. Comput. Ind. Eng. 2017, 108, 199–212. [Google Scholar] [CrossRef]
  32. Yu, D.; Wu, Y. Interval-valued intuitionistic fuzzy Heronian mean operators and their application in multi-criteria decision making. Afr. J. Bus. Manag. 2012, 6, 4158–4168. [Google Scholar] [CrossRef]
  33. Qin, J.; Liu, X. An approach to intuitionistic fuzzy multiple attribute decision making based on Maclaurin symmetric mean operators. J. Intell. Fuzzy Syst. 2014, 27, 2177–2190. [Google Scholar]
  34. Muirhead, R.F. Some methods applicable to identities and inequalities of symmetric algebraic functions of n letters. Proc. Edinburgh Math. Soc. 1902, 21, 144–162. [Google Scholar] [CrossRef]
  35. Liu, P.; Rong, L.; Chu, Y.; Li, Y. Intuitionistic linguistic weighted Bonferroni mean operator and its application to multiple attribute decision making. Sci. World J. 2014, 2014. [Google Scholar] [CrossRef]
  36. Maclaurin, C. A second letter to Martin Folkes, Esq.; concerning the roots of equations, with the demonstration of other rules in algebra. Philos. Trans. R. Soc. Lond. 1730, 36, 59–96. [Google Scholar]
  37. Liu, P.; Li, H. Multiple attribute decision-making method based on some normal neutrosophic Bonferroni mean operators. Neural Comput. Appl. 2017, 28, 179–194. [Google Scholar] [CrossRef]
  38. Ye, J. Multicriteria decision-making method using the correlation coefficient under single-valued neutrosophic environment. Int. J. Gener. Syst. 2013, 42, 386–394. [Google Scholar] [CrossRef]
Table 1. Decision Matrix D.
Options\Attributes | C1 | C2 | C3
A1 | (0.4, 0.2, 0.3) | (0.4, 0.2, 0.3) | (0.2, 0.2, 0.5)
A2 | (0.6, 0.1, 0.2) | (0.6, 0.1, 0.2) | (0.5, 0.2, 0.2)
A3 | (0.3, 0.2, 0.3) | (0.5, 0.2, 0.3) | (0.5, 0.3, 0.2)
A4 | (0.7, 0.0, 0.1) | (0.6, 0.1, 0.2) | (0.4, 0.3, 0.2)
Table 2. The Normalized Decision Matrix D.
Options\Attributes | C1 | C2 | C3
A1 | (0.4, 0.2, 0.3) | (0.4, 0.2, 0.3) | (0.2, 0.2, 0.5)
A2 | (0.6, 0.1, 0.2) | (0.6, 0.1, 0.2) | (0.5, 0.2, 0.2)
A3 | (0.3, 0.2, 0.3) | (0.5, 0.2, 0.3) | (0.5, 0.3, 0.2)
A4 | (0.7, 0.0, 0.1) | (0.6, 0.1, 0.2) | (0.4, 0.3, 0.2)
Table 3. Comparisons of different values of P when γ = 2.
Parameter Vector P | Score Functions S(z_i) | Ranking Results
P = (1, 0, 0) | S(z1) = 1.9417, S(z2) = 2.3175, S(z3) = 2.0575, S(z4) = 2.3708 | A4 ≻ A2 ≻ A3 ≻ A1
P = (1, 1, 0) | S(z1) = 1.9400, S(z2) = 2.3046, S(z3) = 2.0574, S(z4) = 2.3904 | A4 ≻ A2 ≻ A3 ≻ A1
P = (1, 1, 1) | S(z1) = 1.9653, S(z2) = 2.3131, S(z3) = 2.0872, S(z4) = 2.2831 | A2 ≻ A4 ≻ A3 ≻ A1
P = (0.25, 0.25, 0.25) | S(z1) = 1.9653, S(z2) = 2.3131, S(z3) = 2.0872, S(z4) = 2.2831 | A2 ≻ A4 ≻ A3 ≻ A1
P = (2, 0, 0) | S(z1) = 2.0105, S(z2) = 2.3463, S(z3) = 2.1100, S(z4) = 2.3843 | A4 ≻ A2 ≻ A3 ≻ A1
P = (3, 0, 0) | S(z1) = 2.0696, S(z2) = 2.3742, S(z3) = 2.1578, S(z4) = 2.4004 | A4 ≻ A2 ≻ A3 ≻ A1
Table 4. Comparisons of different values of P when γ = 5.
Parameter Vector P | Score Functions S(z_i) | Ranking Results
P = (1, 0, 0) | S(z1) = 1.8855, S(z2) = 2.3067, S(z3) = 2.0311, S(z4) = 2.4191 | A4 ≻ A2 ≻ A3 ≻ A1
P = (1, 1, 0) | S(z1) = 1.7815, S(z2) = 2.2522, S(z3) = 1.9386, S(z4) = 2.3569 | A4 ≻ A2 ≻ A3 ≻ A1
P = (1, 1, 1) | S(z1) = 1.6546, S(z2) = 2.2027, S(z3) = 1.8621, S(z4) = 2.1001 | A2 ≻ A4 ≻ A3 ≻ A1
P = (0.25, 0.25, 0.25) | S(z1) = 1.6546, S(z2) = 2.2027, S(z3) = 1.8621, S(z4) = 2.1001 | A2 ≻ A4 ≻ A3 ≻ A1
P = (2, 0, 0) | S(z1) = 1.8802, S(z2) = 2.3013, S(z3) = 2.0231, S(z4) = 2.4039 | A4 ≻ A2 ≻ A3 ≻ A1
P = (3, 0, 0) | S(z1) = 1.8789, S(z2) = 2.2989, S(z3) = 2.0201, S(z4) = 2.3939 | A4 ≻ A2 ≻ A3 ≻ A1
Table 5. Comparisons of different values of P when γ = 20.
Parameter Vector P | Score Functions S(z_i) | Ranking Results
P = (1, 0, 0) | S(z1) = 1.8941, S(z2) = 2.3070, S(z3) = 2.0761, S(z4) = 2.4799 | A4 ≻ A2 ≻ A3 ≻ A1
P = (1, 1, 0) | S(z1) = 1.8524, S(z2) = 2.2779, S(z3) = 1.9840, S(z4) = 2.3632 | A4 ≻ A2 ≻ A3 ≻ A1
P = (1, 1, 1) | S(z1) = 1.5331, S(z2) = 2.1544, S(z3) = 1.7619, S(z4) = 1.9668 | A2 ≻ A4 ≻ A3 ≻ A1
P = (0.25, 0.25, 0.25) | S(z1) = 1.5331, S(z2) = 2.1544, S(z3) = 1.7619, S(z4) = 1.9668 | A2 ≻ A4 ≻ A3 ≻ A1
P = (2, 0, 0) | S(z1) = 1.8919, S(z2) = 2.3048, S(z3) = 2.0724, S(z4) = 2.4753 | A4 ≻ A2 ≻ A3 ≻ A1
P = (3, 0, 0) | S(z1) = 1.8908, S(z2) = 2.3035, S(z3) = 2.0703, S(z4) = 2.4724 | A4 ≻ A2 ≻ A3 ≻ A1
Table 6. Comparison of the different methods.
Method | Parameter Value | Ranking
Cosine similarity measure [8] | None | A4 ≻ A2 ≻ A3 ≻ A1
Method in [37] | p = 1, q = 1 | A4 ≻ A2 ≻ A3 ≻ A1
Method in this paper | γ = 2, P = (1, 0, 0) | A4 ≻ A2 ≻ A3 ≻ A1
Method in this paper | γ = 5, P = (2, 0, 0) | A4 ≻ A2 ≻ A3 ≻ A1
Method in this paper | γ = 20, P = (1, 1, 0) | A4 ≻ A2 ≻ A3 ≻ A1
Table 7. Comparison with the weighted correlation coefficient method.
Method | Parameter Value | Ranking
Correlation coefficient [38] | None | A2 ≻ A4 ≻ A3 ≻ A1
Method in this paper | γ = 2, P = (1, 1, 1) | A2 ≻ A4 ≻ A3 ≻ A1
Method in this paper | γ = 5, P = (1, 1, 1) | A2 ≻ A4 ≻ A3 ≻ A1
Method in this paper | γ = 20, P = (1, 1, 1) | A2 ≻ A4 ≻ A3 ≻ A1

Share and Cite

Zhang, H.; Wang, F.; Geng, Y. Multi-Criteria Decision-Making Method Based on Single-Valued Neutrosophic Schweizer–Sklar Muirhead Mean Aggregation Operators. Symmetry 2019, 11, 152. https://doi.org/10.3390/sym11020152