Article

Multi-Attribute Decision-Making Method Based on Neutrosophic Soft Rough Information

by Muhammad Akram 1,*, Sundas Shahzadi 1 and Florentin Smarandache 2

1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan
2 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA
* Author to whom correspondence should be addressed.
Axioms 2018, 7(1), 19; https://doi.org/10.3390/axioms7010019
Submission received: 17 February 2018 / Revised: 14 March 2018 / Accepted: 19 March 2018 / Published: 20 March 2018
(This article belongs to the Special Issue Neutrosophic Multi-Criteria Decision Making)

Abstract

Soft sets (SSs), neutrosophic sets (NSs), and rough sets (RSs) are different mathematical models for handling uncertainties, but they are mutually related. In this research paper, we introduce the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models for soft computing. We describe a mathematical approach to handle decision-making problems in view of NSRSs. We also present an efficient algorithm of our proposed hybrid model to solve decision-making problems.

1. Introduction

Smarandache [1] initiated the concept of neutrosophic set (NS). Smarandache’s NS is characterized by three parts: truth, indeterminacy, and falsity. The truth, indeterminacy and falsity membership values behave independently and deal with problems having uncertain, indeterminate and imprecise data. Wang et al. [2] gave a new concept of single valued neutrosophic set (SVNS) and defined set-theoretic operators on an instance of NS called SVNS. Ye [3,4,5] studied the correlation coefficient and improved correlation coefficient of NSs, and also determined that, in NSs, the cosine similarity measure is a special case of the correlation coefficient. Peng et al. [6] discussed the operations of simplified neutrosophic numbers and introduced an outranking idea of simplified neutrosophic numbers.
Molodtsov [7] introduced the notion of soft set as a novel mathematical approach for handling uncertainties. Molodtsov’s soft sets give us a new technique for dealing with uncertainty from the viewpoint of parameters. Maji et al. [8,9,10] introduced neutrosophic soft sets (NSSs), intuitionistic fuzzy soft sets (IFSSs) and fuzzy soft sets (FSSs). Babitha and Sunil gave the idea of soft set relations [11]. In [12], Sahin and Kucuk presented NSSs in the form of a neutrosophic relation.
Rough set theory was initiated by Pawlak [13] in 1982. Rough set theory is used to study intelligent systems containing incomplete, uncertain or inexact information. The lower and upper approximation operators of RSs are used for managing hidden information in a system. Therefore, many hybrid models have been built, such as soft rough sets (SRSs), rough fuzzy sets (RFSs), fuzzy rough sets (FRSs), soft fuzzy rough sets (SFRSs), soft rough fuzzy sets (SRFSs), intuitionistic fuzzy soft rough sets (IFSRSs), neutrosophic rough sets (NRSs), and rough neutrosophic sets (RNSs), for handling uncertainty and incomplete information effectively. Soft set theory and RS theory are two different mathematical tools to deal with uncertainty. Evidently, there is no direct relationship between these two mathematical tools, but efforts have been made to define some kind of relation [14,15]. Feng et al. [15] took a significant step to introduce parametrization tools in RSs. They introduced SRSs, in which parameterized subsets of universal sets are elementary building blocks for approximation operators of a subset. Shabir et al. [16] introduced another approach to study roughness through SSs; this approach is known as modified SRSs (MSR-sets). In MSR-sets, some results proved to be valid that failed to hold in SRSs. Feng et al. [17] introduced a modification of the Pawlak approximation space known as soft approximation space (SAS), in which SAS SRSs were proposed. Moreover, they introduced soft rough fuzzy approximation operators in SAS and initiated the idea of SRFSs, which is an extension of RFSs introduced by Dubois and Prade [18]. Meng et al. [19] provided further discussion of the combination of SSs, RSs and FSs. RSs have been used in various decision-making problems. The existing decision-making models based on RSs and other extended RSs, such as RFSs, generalized RFSs, SFRSs and IFRSs, have their advantages and limitations [20,21]. In a different way, RS approximations have been constructed in the IF environment and are known as IFRSs, RIFSs and generalized IFRSs [22,23,24]. Zhang et al. [25,26] presented the notions of SRSs, SRIFSs and IFSRSs, their application in decision-making, and also introduced generalized IFSRSs. Broumi et al. [27,28] developed a hybrid structure by combining RSs and NSs, called RNSs. They also presented interval valued neutrosophic soft rough sets by combining interval valued neutrosophic soft sets and RSs. Yang et al. [29] proposed single valued neutrosophic rough sets (SVNRSs) by combining SVNSs and RSs, and established an algorithm for decision-making problems based on SVNRSs in two universes. For some papers related to NSs and multi-criteria decision-making (MCDM), the readers are referred to [30,31,32,33,34,35,36,37,38]. The notion of SRNSs is an extension of the SRSs, SRIFSs and IFSRSs introduced by Zhang et al. Motivated by the idea of single valued neutrosophic rough sets (SVNRSs), we extend the lower and upper approximations of SVNRSs to the case of neutrosophic soft rough sets. The concept of a neutrosophic soft rough set is introduced by coupling neutrosophic soft sets and rough sets. In this research paper, we introduce the notions of SRNSs and NSRSs as hybrid models for soft computing. Approximation operators of SRNSs and NSRSs are described and their relevant properties are investigated in detail. We describe a mathematical approach to handle decision-making problems in view of NSRSs.
We also present an efficient algorithm of our proposed hybrid model to solve decision-making problems.

2. Construction of Soft Rough Neutrosophic Sets

In this section, we introduce the notions of SRNSs by combining soft sets with RNSs, together with soft rough neutrosophic relations (SRNRs). Soft rough neutrosophic sets consist of two basic components, namely neutrosophic sets and soft relations, which are the mathematical basis of SRNSs. The basic idea of soft rough neutrosophic sets is the approximation of a set by a pair of sets known as the lower soft rough neutrosophic approximation and the upper soft rough neutrosophic approximation of the set. Here, the lower and upper approximation operators are based on an arbitrary soft relation. The concept of soft rough neutrosophic sets extends crisp sets and rough sets for the study of intelligent systems characterized by inexact, uncertain or insufficient information, and it is a useful tool for dealing with uncertain or imprecise information. The concept of neutrosophic soft sets is a powerful tool for handling indeterminate and inconsistent situations, and the theory of rough neutrosophic sets is likewise a powerful mathematical tool for handling incompleteness. We introduce the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models for soft computing. The rating of all alternatives is expressed with the upper and lower soft rough neutrosophic approximation operators, a pair of neutrosophic sets characterized by truth-membership, indeterminacy-membership and falsity-membership degrees from the viewpoint of parameters.
Definition 1.
Let Y be an initial universal set and M a universal set of parameters. For an arbitrary soft relation P over $Y \times M$, let $P_s : Y \to N(M)$ be a set-valued function defined as $P_s(u) = \{k \in M \mid (u, k) \in P\}$, $u \in Y$.
Let $(Y, M, P)$ be an SAS. For any NS $C = \{(k, T_C(k), I_C(k), F_C(k)) \mid k \in M\} \in N(M)$, where $N(M)$ is the neutrosophic power set of the parameter set M, the lower soft rough neutrosophic approximation (LSRNA) and the upper soft rough neutrosophic approximation (USRNA) operators of C w.r.t. $(Y, M, P)$, denoted by $\underline{P}(C)$ and $\overline{P}(C)$, are, respectively, defined as follows:
$\overline{P}(C) = \{(u, T_{\overline{P}(C)}(u), I_{\overline{P}(C)}(u), F_{\overline{P}(C)}(u)) \mid u \in Y\}$,
$\underline{P}(C) = \{(u, T_{\underline{P}(C)}(u), I_{\underline{P}(C)}(u), F_{\underline{P}(C)}(u)) \mid u \in Y\}$,
where
$T_{\overline{P}(C)}(u) = \bigvee_{k \in P_s(u)} T_C(k)$, $I_{\overline{P}(C)}(u) = \bigwedge_{k \in P_s(u)} I_C(k)$, $F_{\overline{P}(C)}(u) = \bigwedge_{k \in P_s(u)} F_C(k)$,
$T_{\underline{P}(C)}(u) = \bigwedge_{k \in P_s(u)} T_C(k)$, $I_{\underline{P}(C)}(u) = \bigvee_{k \in P_s(u)} I_C(k)$, $F_{\underline{P}(C)}(u) = \bigvee_{k \in P_s(u)} F_C(k)$.
It is observed that $\overline{P}(C)$ and $\underline{P}(C)$ are two NSs on Y; $\underline{P}, \overline{P} : N(M) \to N(Y)$ are referred to as the LSRNA and the USRNA operators, respectively. The pair $(\underline{P}(C), \overline{P}(C))$ is called the SRNS of C w.r.t. $(Y, M, P)$.
Remark 1.
Let $(Y, M, P)$ be an SAS. If $C \in IF(M)$ or $C \in P(M)$, where $IF(M)$ and $P(M)$ are the intuitionistic fuzzy power set and the crisp power set of M, respectively, then the above SRNA operators $\underline{P}(C)$ and $\overline{P}(C)$ degenerate to SRIFA and SRA operators, respectively. Hence, SRNA operators are an extension of SRIFA and SRA operators.
Example 1.
Suppose that $Y = \{w_1, w_2, w_3, w_4, w_5\}$ is the set of five careers under observation, and Mr. X wants to select the most suitable career. Let $M = \{k_1, k_2, k_3, k_4\}$ be a set of decision parameters. The parameters $k_1$, $k_2$, $k_3$ and $k_4$ stand for “aptitude”, “work value”, “skill” and “recent advancement”, respectively. Mr. X describes the “most suitable career” by defining a soft relation P from Y to M, which is a crisp soft set, as shown in Table 1.
$P_s : Y \to N(M)$ is a set-valued function, and we have $P_s(w_1) = \{k_1, k_4\}$, $P_s(w_2) = \{k_1, k_2, k_3, k_4\}$, $P_s(w_3) = \{k_2, k_4\}$, $P_s(w_4) = \{k_1\}$ and $P_s(w_5) = \{k_2, k_4\}$. Mr. X gives the most favorable parameter object C, which is an NS defined as follows:
C = { ( k 1 , 0.2 , 0.5 , 0.6 ) , ( k 2 , 0.4 , 0.3 , 0.2 ) , ( k 3 , 0.2 , 0.4 , 0.5 ) , ( k 4 , 0.6 , 0.2 , 0.1 ) } .
From Definition 1, we have
$T_{\overline{P}(C)}(w_1) = \bigvee_{k \in P_s(w_1)} T_C(k) = \max\{0.2, 0.6\} = 0.6$, $I_{\overline{P}(C)}(w_1) = \bigwedge_{k \in P_s(w_1)} I_C(k) = \min\{0.5, 0.2\} = 0.2$, $F_{\overline{P}(C)}(w_1) = \bigwedge_{k \in P_s(w_1)} F_C(k) = \min\{0.6, 0.1\} = 0.1$,
$T_{\overline{P}(C)}(w_2) = 0.6$, $I_{\overline{P}(C)}(w_2) = 0.2$, $F_{\overline{P}(C)}(w_2) = 0.1$,
$T_{\overline{P}(C)}(w_3) = 0.6$, $I_{\overline{P}(C)}(w_3) = 0.2$, $F_{\overline{P}(C)}(w_3) = 0.1$,
$T_{\overline{P}(C)}(w_4) = 0.2$, $I_{\overline{P}(C)}(w_4) = 0.5$, $F_{\overline{P}(C)}(w_4) = 0.6$,
$T_{\overline{P}(C)}(w_5) = 0.6$, $I_{\overline{P}(C)}(w_5) = 0.2$, $F_{\overline{P}(C)}(w_5) = 0.1$.
Similarly,
$T_{\underline{P}(C)}(w_1) = \bigwedge_{k \in P_s(w_1)} T_C(k) = \min\{0.2, 0.6\} = 0.2$, $I_{\underline{P}(C)}(w_1) = \bigvee_{k \in P_s(w_1)} I_C(k) = \max\{0.5, 0.2\} = 0.5$, $F_{\underline{P}(C)}(w_1) = \bigvee_{k \in P_s(w_1)} F_C(k) = \max\{0.6, 0.1\} = 0.6$,
$T_{\underline{P}(C)}(w_2) = 0.2$, $I_{\underline{P}(C)}(w_2) = 0.5$, $F_{\underline{P}(C)}(w_2) = 0.6$,
$T_{\underline{P}(C)}(w_3) = 0.4$, $I_{\underline{P}(C)}(w_3) = 0.3$, $F_{\underline{P}(C)}(w_3) = 0.2$,
$T_{\underline{P}(C)}(w_4) = 0.2$, $I_{\underline{P}(C)}(w_4) = 0.5$, $F_{\underline{P}(C)}(w_4) = 0.6$,
$T_{\underline{P}(C)}(w_5) = 0.4$, $I_{\underline{P}(C)}(w_5) = 0.3$, $F_{\underline{P}(C)}(w_5) = 0.2$.
Thus, we obtain
$\overline{P}(C) = \{(w_1, 0.6, 0.2, 0.1), (w_2, 0.6, 0.2, 0.1), (w_3, 0.6, 0.2, 0.1), (w_4, 0.2, 0.5, 0.6), (w_5, 0.6, 0.2, 0.1)\}$,
$\underline{P}(C) = \{(w_1, 0.2, 0.5, 0.6), (w_2, 0.2, 0.5, 0.6), (w_3, 0.4, 0.3, 0.2), (w_4, 0.2, 0.5, 0.6), (w_5, 0.4, 0.3, 0.2)\}$.
Hence, ( P ̲ ( C ) , P ¯ ( C ) ) is an SRNS of C.
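For readers who wish to reproduce these numbers, the following MATLAB sketch evaluates the USRNA and LSRNA operators of Definition 1 on the data of Table 1 and the NS C above; the variable names (P, C, upperC, lowerC) are illustrative and are not part of the paper.

% Sketch: soft rough neutrosophic approximations of Example 1 (Definition 1).
P = [1 1 0 1 0; 0 1 1 0 1; 0 1 0 0 0; 1 1 1 0 1];          % rows k1..k4, columns w1..w5 (Table 1)
C = [0.2 0.5 0.6; 0.4 0.3 0.2; 0.2 0.4 0.5; 0.6 0.2 0.1];  % rows give (T, I, F) of C at k1..k4
upperC = zeros(5,3); lowerC = zeros(5,3);
for i = 1:5
    idx = P(:,i) == 1;                                      % P_s(w_i): parameters related to w_i
    upperC(i,:) = [max(C(idx,1)), min(C(idx,2)), min(C(idx,3))];
    lowerC(i,:) = [min(C(idx,1)), max(C(idx,2)), max(C(idx,3))];
end
disp(upperC)   % rows reproduce the upper approximation, e.g. (0.6, 0.2, 0.1) for w1
disp(lowerC)   % rows reproduce the lower approximation, e.g. (0.2, 0.5, 0.6) for w1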
Theorem 1.
Let $(Y, M, P)$ be an SAS. Then, the LSRNA and the USRNA operators $\underline{P}(C)$ and $\overline{P}(C)$ satisfy the following properties for all $C, D \in N(M)$:
(i) $\underline{P}(\sim C) = \sim\overline{P}(C)$,
(ii) $\underline{P}(C \cap D) = \underline{P}(C) \cap \underline{P}(D)$,
(iii) $C \subseteq D \Rightarrow \underline{P}(C) \subseteq \underline{P}(D)$,
(iv) $\underline{P}(C \cup D) \supseteq \underline{P}(C) \cup \underline{P}(D)$,
(v) $\overline{P}(\sim C) = \sim\underline{P}(C)$,
(vi) $\overline{P}(C \cup D) = \overline{P}(C) \cup \overline{P}(D)$,
(vii) $C \subseteq D \Rightarrow \overline{P}(C) \subseteq \overline{P}(D)$,
(viii) $\overline{P}(C \cap D) \subseteq \overline{P}(C) \cap \overline{P}(D)$,
where $\sim C$ is the complement of C.
Proof. 
(i)
By definition of SRNS, we have
$\sim C = \{(k, F_C(k), 1 - I_C(k), T_C(k)) \mid k \in M\}$,
$\underline{P}(\sim C) = \{(u, T_{\underline{P}(\sim C)}(u), I_{\underline{P}(\sim C)}(u), F_{\underline{P}(\sim C)}(u)) \mid u \in Y\}$,
$\sim\overline{P}(C) = \{(u, F_{\overline{P}(C)}(u), 1 - I_{\overline{P}(C)}(u), T_{\overline{P}(C)}(u)) \mid u \in Y\}$,
where
$T_{\underline{P}(\sim C)}(u) = \bigwedge_{k \in P_s(u)} F_C(k) = F_{\overline{P}(C)}(u)$, $I_{\underline{P}(\sim C)}(u) = \bigvee_{k \in P_s(u)} (1 - I_C(k)) = 1 - \bigwedge_{k \in P_s(u)} I_C(k) = 1 - I_{\overline{P}(C)}(u)$, $F_{\underline{P}(\sim C)}(u) = \bigvee_{k \in P_s(u)} T_C(k) = T_{\overline{P}(C)}(u)$.
Hence, $\underline{P}(\sim C) = \sim\overline{P}(C)$.
(ii)
$\underline{P}(C \cap D) = \{(u, T_{\underline{P}(C \cap D)}(u), I_{\underline{P}(C \cap D)}(u), F_{\underline{P}(C \cap D)}(u)) \mid u \in Y\} = \{(u, \bigwedge_{k \in P_s(u)} T_{C \cap D}(k), \bigvee_{k \in P_s(u)} I_{C \cap D}(k), \bigvee_{k \in P_s(u)} F_{C \cap D}(k)) \mid u \in Y\} = \{(u, \bigwedge_{k \in P_s(u)} (T_C(k) \wedge T_D(k)), \bigvee_{k \in P_s(u)} (I_C(k) \vee I_D(k)), \bigvee_{k \in P_s(u)} (F_C(k) \vee F_D(k))) \mid u \in Y\} = \{(u, T_{\underline{P}(C)}(u) \wedge T_{\underline{P}(D)}(u), I_{\underline{P}(C)}(u) \vee I_{\underline{P}(D)}(u), F_{\underline{P}(C)}(u) \vee F_{\underline{P}(D)}(u)) \mid u \in Y\} = \underline{P}(C) \cap \underline{P}(D)$.
(iii)
It can be easily proved by Definition 1.
(iv)
$T_{\underline{P}(C \cup D)}(u) = \bigwedge_{k \in P_s(u)} T_{C \cup D}(k) = \bigwedge_{k \in P_s(u)} (T_C(k) \vee T_D(k)) \geq \big(\bigwedge_{k \in P_s(u)} T_C(k)\big) \vee \big(\bigwedge_{k \in P_s(u)} T_D(k)\big) = T_{\underline{P}(C)}(u) \vee T_{\underline{P}(D)}(u)$, that is, $T_{\underline{P}(C \cup D)}(u) \geq T_{\underline{P}(C)}(u) \vee T_{\underline{P}(D)}(u)$.
Similarly, we can prove that
$I_{\underline{P}(C \cup D)}(u) \leq I_{\underline{P}(C)}(u) \wedge I_{\underline{P}(D)}(u)$, $F_{\underline{P}(C \cup D)}(u) \leq F_{\underline{P}(C)}(u) \wedge F_{\underline{P}(D)}(u)$.
Thus, $\underline{P}(C \cup D) \supseteq \underline{P}(C) \cup \underline{P}(D)$.
The properties (v)–(viii) of the USRNA P ¯ ( C ) can be easily proved similarly. ☐
Example 2.
Considering Example 1, we have
$\sim C = \{(k_1, 0.6, 0.5, 0.2), (k_2, 0.2, 0.7, 0.4), (k_3, 0.5, 0.6, 0.2), (k_4, 0.1, 0.8, 0.6)\}$,
$\overline{P}(\sim C) = \{(w_1, 0.6, 0.5, 0.2), (w_2, 0.6, 0.5, 0.2), (w_3, 0.2, 0.7, 0.4), (w_4, 0.6, 0.5, 0.2), (w_5, 0.2, 0.7, 0.4)\}$,
$\sim\overline{P}(\sim C) = \{(w_1, 0.2, 0.5, 0.6), (w_2, 0.2, 0.5, 0.6), (w_3, 0.4, 0.3, 0.2), (w_4, 0.2, 0.5, 0.6), (w_5, 0.4, 0.3, 0.2)\} = \underline{P}(C)$.
Let $D = \{(k_1, 0.4, 0.2, 0.6), (k_2, 0.5, 0.3, 0.2), (k_3, 0.5, 0.5, 0.1), (k_4, 0.6, 0.4, 0.7)\}$; then
$\underline{P}(D) = \{(w_1, 0.4, 0.4, 0.7), (w_2, 0.4, 0.5, 0.6), (w_3, 0.5, 0.4, 0.7), (w_4, 0.4, 0.2, 0.6), (w_5, 0.5, 0.4, 0.7)\}$,
$C \cap D = \{(k_1, 0.2, 0.5, 0.6), (k_2, 0.4, 0.3, 0.2), (k_3, 0.2, 0.5, 0.5), (k_4, 0.6, 0.4, 0.7)\}$,
$\underline{P}(C \cap D) = \{(w_1, 0.2, 0.5, 0.7), (w_2, 0.2, 0.5, 0.6), (w_3, 0.4, 0.4, 0.7), (w_4, 0.2, 0.5, 0.6), (w_5, 0.4, 0.4, 0.7)\}$,
$\underline{P}(C) \cap \underline{P}(D) = \{(w_1, 0.2, 0.5, 0.7), (w_2, 0.2, 0.5, 0.6), (w_3, 0.4, 0.4, 0.7), (w_4, 0.2, 0.5, 0.6), (w_5, 0.4, 0.4, 0.7)\}$,
so $\underline{P}(C \cap D) = \underline{P}(C) \cap \underline{P}(D)$. Moreover,
$C \cup D = \{(k_1, 0.4, 0.2, 0.6), (k_2, 0.5, 0.3, 0.2), (k_3, 0.5, 0.4, 0.1), (k_4, 0.6, 0.2, 0.1)\}$,
$\underline{P}(C \cup D) = \{(w_1, 0.4, 0.2, 0.6), (w_2, 0.4, 0.4, 0.6), (w_3, 0.5, 0.3, 0.2), (w_4, 0.4, 0.2, 0.6), (w_5, 0.5, 0.3, 0.2)\}$,
$\underline{P}(C) \cup \underline{P}(D) = \{(w_1, 0.4, 0.4, 0.6), (w_2, 0.4, 0.5, 0.6), (w_3, 0.5, 0.3, 0.2), (w_4, 0.4, 0.2, 0.6), (w_5, 0.5, 0.3, 0.2)\}$.
Clearly, $\underline{P}(C \cup D) \supseteq \underline{P}(C) \cup \underline{P}(D)$. Hence, the properties of the LSRNA operator hold, and the properties of the USRNA operator can be verified similarly.
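A quick numerical check of property (ii) with the sets C and D of Example 2 can be sketched in MATLAB as follows; the intersection of two NSs is taken parameter-wise as (min of T, max of I, max of F), and all variable names are illustrative.

% Sketch: numerical check of Theorem 1 (ii) with C and D of Example 2.
P = [1 1 0 1 0; 0 1 1 0 1; 0 1 0 0 0; 1 1 1 0 1];           % Table 1
C = [0.2 0.5 0.6; 0.4 0.3 0.2; 0.2 0.4 0.5; 0.6 0.2 0.1];
D = [0.4 0.2 0.6; 0.5 0.3 0.2; 0.5 0.5 0.1; 0.6 0.4 0.7];
CD = [min(C(:,1),D(:,1)), max(C(:,2),D(:,2)), max(C(:,3),D(:,3))];  % C intersect D, parameter-wise
lowC = zeros(5,3); lowD = zeros(5,3); lowCD = zeros(5,3);
for i = 1:5
    idx = P(:,i) == 1;                                       % P_s(w_i)
    lowC(i,:)  = [min(C(idx,1)),  max(C(idx,2)),  max(C(idx,3))];
    lowD(i,:)  = [min(D(idx,1)),  max(D(idx,2)),  max(D(idx,3))];
    lowCD(i,:) = [min(CD(idx,1)), max(CD(idx,2)), max(CD(idx,3))];
end
rhs = [min(lowC(:,1),lowD(:,1)), max(lowC(:,2),lowD(:,2)), max(lowC(:,3),lowD(:,3))];
isequal(lowCD, rhs)                                          % returns logical 1: property (ii) holds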
The conventional soft set is a mapping from a parameter to a subset of the universe; let $(P, M)$ be a crisp soft set. In [11], Babitha and Sunil introduced the concept of a soft set relation. Now, we present the constructive definition of SRNR by using a soft relation R from $M \times M = M'$ to $P(Y \times Y = Y')$, where Y is a universal set and M is a set of parameters.
Definition 2.
An SRNR $(\underline{R}(D), \overline{R}(D))$ on Y is an SRNS, where $R : M' \to P(Y')$ is a soft relation on Y defined by
$R(k_i k_j) = \{u_i u_j \mid u_i \in P(k_i), u_j \in P(k_j)\}$, $u_i u_j \in Y'$.
Let $R_s : Y' \to P(M')$ be a set-valued function defined by
$R_s(u_i u_j) = \{k_i k_j \in M' \mid (u_i u_j, k_i k_j) \in R\}$, $u_i u_j \in Y'$.
For any $D \in N(M')$, the USRNA and the LSRNA operators of D w.r.t. $(Y', M', R)$ are defined as follows:
$\overline{R}(D) = \{(u_i u_j, T_{\overline{R}(D)}(u_i u_j), I_{\overline{R}(D)}(u_i u_j), F_{\overline{R}(D)}(u_i u_j)) \mid u_i u_j \in Y'\}$,
$\underline{R}(D) = \{(u_i u_j, T_{\underline{R}(D)}(u_i u_j), I_{\underline{R}(D)}(u_i u_j), F_{\underline{R}(D)}(u_i u_j)) \mid u_i u_j \in Y'\}$,
where
$T_{\overline{R}(D)}(u_i u_j) = \bigvee_{k_i k_j \in R_s(u_i u_j)} T_D(k_i k_j)$, $I_{\overline{R}(D)}(u_i u_j) = \bigwedge_{k_i k_j \in R_s(u_i u_j)} I_D(k_i k_j)$, $F_{\overline{R}(D)}(u_i u_j) = \bigwedge_{k_i k_j \in R_s(u_i u_j)} F_D(k_i k_j)$,
$T_{\underline{R}(D)}(u_i u_j) = \bigwedge_{k_i k_j \in R_s(u_i u_j)} T_D(k_i k_j)$, $I_{\underline{R}(D)}(u_i u_j) = \bigvee_{k_i k_j \in R_s(u_i u_j)} I_D(k_i k_j)$, $F_{\underline{R}(D)}(u_i u_j) = \bigvee_{k_i k_j \in R_s(u_i u_j)} F_D(k_i k_j)$.
The pair $(\underline{R}(D), \overline{R}(D))$ is called an SRNR, and $\underline{R}, \overline{R} : N(M') \to N(Y')$ are called the LSRNA and the USRNA operators, respectively.
Remark 2.
For an NS D on M ´ and an NS C on M,
$T_D(k_i k_j) \leq \min_{k_i \in M}\{T_C(k_i)\}$, $I_D(k_i k_j) \leq \min_{k_i \in M}\{I_C(k_i)\}$, $F_D(k_i k_j) \leq \min_{k_i \in M}\{F_C(k_i)\}$.
According to the definition of SRNR, we get
$T_{\overline{R}(D)}(u_i u_j) \leq \min\{T_{\overline{R}(C)}(u_i), T_{\overline{R}(C)}(u_j)\}$, $I_{\overline{R}(D)}(u_i u_j) \leq \max\{I_{\overline{R}(C)}(u_i), I_{\overline{R}(C)}(u_j)\}$, $F_{\overline{R}(D)}(u_i u_j) \leq \max\{F_{\overline{R}(C)}(u_i), F_{\overline{R}(C)}(u_j)\}$.
Similarly, for the LSRNA operator $\underline{R}(D)$,
$T_{\underline{R}(D)}(u_i u_j) \leq \min\{T_{\underline{R}(C)}(u_i), T_{\underline{R}(C)}(u_j)\}$, $I_{\underline{R}(D)}(u_i u_j) \leq \max\{I_{\underline{R}(C)}(u_i), I_{\underline{R}(C)}(u_j)\}$, $F_{\underline{R}(D)}(u_i u_j) \leq \max\{F_{\underline{R}(C)}(u_i), F_{\underline{R}(C)}(u_j)\}$.
Example 3.
Let Y = { u 1 , u 2 , u 3 } be a universal set and M = { k 1 , k 2 , k 3 } be a set of parameters. A soft set ( P , M ) on Y can be defined in tabular form (see Table 2) as follows:
Let $E = \{u_1u_2, u_2u_3, u_2u_2, u_3u_2\} \subseteq Y'$ and $L = \{k_1k_3, k_2k_1, k_3k_2\} \subseteq M'$. Then, a soft relation R on E (from L to E) can be defined in tabular form (see Table 3) as follows:
Now, we can define set-valued function R s such that
R s ( u 1 u 2 ) = { k 1 k 3 } , R s ( u 2 u 3 ) = { k 1 k 3 , k 3 k 2 } , R s ( u 2 u 2 ) = { k 1 k 3 } , R s ( u 3 u 2 ) = { k 2 k 1 } .
Let $C = \{(k_1, 0.2, 0.4, 0.6), (k_2, 0.4, 0.5, 0.2), (k_3, 0.1, 0.2, 0.4)\}$ be an NS on M; then
$\overline{R}(C) = \{(u_1, 0.2, 0.2, 0.4), (u_2, 0.2, 0.4, 0.4), (u_3, 0.4, 0.2, 0.2)\}$,
$\underline{R}(C) = \{(u_1, 0.1, 0.4, 0.6), (u_2, 0.1, 0.4, 0.6), (u_3, 0.1, 0.5, 0.4)\}$.
Let $D = \{(k_1k_3, 0.1, 0.2, 0.2), (k_2k_1, 0.1, 0.1, 0.2), (k_3k_2, 0.1, 0.2, 0.1)\}$ be an NS on L; then
$\overline{R}(D) = \{(u_1u_2, 0.1, 0.2, 0.2), (u_2u_3, 0.1, 0.2, 0.1), (u_2u_2, 0.1, 0.2, 0.2), (u_3u_2, 0.1, 0.1, 0.2)\}$,
$\underline{R}(D) = \{(u_1u_2, 0.1, 0.2, 0.2), (u_2u_3, 0.1, 0.2, 0.1), (u_2u_2, 0.1, 0.2, 0.2), (u_3u_2, 0.1, 0.1, 0.2)\}$.
Hence, $R(D) = (\underline{R}(D), \overline{R}(D))$ is an SRNR.
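To see how Definition 2 produces Table 3 from Table 2, the following MATLAB sketch rebuilds the crisp relation R: the pair $u_iu_j$ belongs to $R(k_ik_j)$ exactly when $u_i \in P(k_i)$ and $u_j \in P(k_j)$. The index arrays E and L are illustrative encodings of the chosen pairs, not notation from the paper.

% Sketch: rebuild the soft relation R of Table 3 from the soft set of Table 2.
P = [1 1 0; 0 0 1; 1 1 1];            % rows k1..k3, columns u1..u3 (Table 2)
E = [1 2; 2 3; 2 2; 3 2];             % pairs u_i u_j in E = {u1u2, u2u3, u2u2, u3u2}
L = [1 3; 2 1; 3 2];                  % pairs k_i k_j in L = {k1k3, k2k1, k3k2}
R = zeros(size(L,1), size(E,1));
for a = 1:size(L,1)
    for b = 1:size(E,1)
        % u_i u_j belongs to R(k_i k_j) iff u_i is in P(k_i) and u_j is in P(k_j)
        R(a,b) = P(L(a,1), E(b,1)) && P(L(a,2), E(b,2));
    end
end
disp(R)   % reproduces Table 3: [1 1 1 0; 0 0 0 1; 0 1 0 0]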

3. Construction of Neutrosophic Soft Rough Sets

In this section, we will introduce the notions of NSRSs and neutrosophic soft rough relations (NSRRs).
Definition 3.
Let Y be an initial universal set and M a universal set of parameters. For an arbitrary neutrosophic soft relation $\widetilde{P}$ from Y to M, $(Y, M, \widetilde{P})$ is called a neutrosophic soft approximation space (NSAS). For any NS $C \in N(M)$, we define the upper neutrosophic soft approximation (UNSA) and the lower neutrosophic soft approximation (LNSA) operators of C with respect to $(Y, M, \widetilde{P})$, denoted by $\overline{\widetilde{P}}(C)$ and $\underline{\widetilde{P}}(C)$, respectively, as follows:
$\overline{\widetilde{P}}(C) = \{(u, T_{\overline{\widetilde{P}}(C)}(u), I_{\overline{\widetilde{P}}(C)}(u), F_{\overline{\widetilde{P}}(C)}(u)) \mid u \in Y\}$, $\underline{\widetilde{P}}(C) = \{(u, T_{\underline{\widetilde{P}}(C)}(u), I_{\underline{\widetilde{P}}(C)}(u), F_{\underline{\widetilde{P}}(C)}(u)) \mid u \in Y\}$,
where
$T_{\overline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M} \big(T_{\widetilde{P}}(u,k) \wedge T_C(k)\big)$, $I_{\overline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M} \big(I_{\widetilde{P}}(u,k) \vee I_C(k)\big)$, $F_{\overline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M} \big(F_{\widetilde{P}}(u,k) \vee F_C(k)\big)$,
$T_{\underline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M} \big(F_{\widetilde{P}}(u,k) \vee T_C(k)\big)$, $I_{\underline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M} \big((1 - I_{\widetilde{P}}(u,k)) \wedge I_C(k)\big)$, $F_{\underline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M} \big(T_{\widetilde{P}}(u,k) \wedge F_C(k)\big)$.
The pair $(\underline{\widetilde{P}}(C), \overline{\widetilde{P}}(C))$ is called an NSRS of C w.r.t. $(Y, M, \widetilde{P})$, and $\underline{\widetilde{P}}$ and $\overline{\widetilde{P}}$ are referred to as the LNSRA and the UNSRA operators, respectively.
Remark 3.
A neutrosophic soft relation over $Y \times M$ is actually a neutrosophic soft set on Y. The NSRA operators are defined over two distinct universes Y and M. As we know, the universal set Y and the parameter set M are two different universes of discourse that nevertheless have solid relations. These universes cannot be considered identical; therefore, the reflexive, symmetric and transitive properties of neutrosophic soft relations from Y to M are not defined.
Let $\widetilde{P}$ be a neutrosophic soft relation from Y to M. If, for each $u \in Y$, there exists $k \in M$ such that $T_{\widetilde{P}}(u,k) = 1$, $I_{\widetilde{P}}(u,k) = 0$ and $F_{\widetilde{P}}(u,k) = 0$, then $\widetilde{P}$ is referred to as a serial neutrosophic soft relation from Y to the parameter set M.
Example 4.
Suppose that $Y = \{w_1, w_2, w_3, w_4\}$ is the set of careers under consideration, and Mr. X wants to select the most suitable career. $M = \{k_1, k_2, k_3\}$ is a set of decision parameters. Mr. X describes the “most suitable career” by defining a neutrosophic soft set $(\widetilde{P}, M)$ on Y, which is a neutrosophic soft relation from Y to M, as shown in Table 4.
Now, Mr. X gives the most favorable decision object C, which is an NS on M defined as follows: C = { ( k 1 , 0.5 , 0.2 , 0.4 ) , ( k 2 , 0.2 , 0.3 , 0.1 ) , ( k 3 , 0.2 , 0.4 , 0.6 ) } . By Definition 3, we have
$T_{\overline{\widetilde{P}}(C)}(w_1) = \bigvee_{k \in M} \big(T_{\widetilde{P}}(w_1,k) \wedge T_C(k)\big) = \max\{0.3, 0.1, 0.2\} = 0.3$, $I_{\overline{\widetilde{P}}(C)}(w_1) = \bigwedge_{k \in M} \big(I_{\widetilde{P}}(w_1,k) \vee I_C(k)\big) = \min\{0.4, 0.5, 0.4\} = 0.4$, $F_{\overline{\widetilde{P}}(C)}(w_1) = \bigwedge_{k \in M} \big(F_{\widetilde{P}}(w_1,k) \vee F_C(k)\big) = \min\{0.5, 0.4, 0.6\} = 0.4$,
$T_{\overline{\widetilde{P}}(C)}(w_2) = 0.4$, $I_{\overline{\widetilde{P}}(C)}(w_2) = 0.2$, $F_{\overline{\widetilde{P}}(C)}(w_2) = 0.4$,
$T_{\overline{\widetilde{P}}(C)}(w_3) = 0.2$, $I_{\overline{\widetilde{P}}(C)}(w_3) = 0.4$, $F_{\overline{\widetilde{P}}(C)}(w_3) = 0.3$,
$T_{\overline{\widetilde{P}}(C)}(w_4) = 0.2$, $I_{\overline{\widetilde{P}}(C)}(w_4) = 0.3$, $F_{\overline{\widetilde{P}}(C)}(w_4) = 0.4$.
Similarly,
$T_{\underline{\widetilde{P}}(C)}(w_1) = \bigwedge_{k \in M} \big(F_{\widetilde{P}}(w_1,k) \vee T_C(k)\big) = \min\{0.5, 0.4, 0.4\} = 0.4$, $I_{\underline{\widetilde{P}}(C)}(w_1) = \bigvee_{k \in M} \big((1 - I_{\widetilde{P}}(w_1,k)) \wedge I_C(k)\big) = \max\{0.2, 0.3, 0.4\} = 0.4$, $F_{\underline{\widetilde{P}}(C)}(w_1) = \bigvee_{k \in M} \big(T_{\widetilde{P}}(w_1,k) \wedge F_C(k)\big) = \max\{0.3, 0.1, 0.3\} = 0.3$,
$T_{\underline{\widetilde{P}}(C)}(w_2) = 0.5$, $I_{\underline{\widetilde{P}}(C)}(w_2) = 0.4$, $F_{\underline{\widetilde{P}}(C)}(w_2) = 0.4$,
$T_{\underline{\widetilde{P}}(C)}(w_3) = 0.4$, $I_{\underline{\widetilde{P}}(C)}(w_3) = 0.4$, $F_{\underline{\widetilde{P}}(C)}(w_3) = 0.3$,
$T_{\underline{\widetilde{P}}(C)}(w_4) = 0.5$, $I_{\underline{\widetilde{P}}(C)}(w_4) = 0.4$, $F_{\underline{\widetilde{P}}(C)}(w_4) = 0.5$.
Thus, we obtain
$\overline{\widetilde{P}}(C) = \{(w_1, 0.3, 0.4, 0.4), (w_2, 0.4, 0.2, 0.4), (w_3, 0.2, 0.4, 0.3), (w_4, 0.2, 0.3, 0.4)\}$,
$\underline{\widetilde{P}}(C) = \{(w_1, 0.4, 0.4, 0.3), (w_2, 0.5, 0.4, 0.4), (w_3, 0.4, 0.4, 0.3), (w_4, 0.5, 0.4, 0.5)\}$.
Hence, ( P ˜ ̲ ( C ) , P ˜ ¯ ( C ) ) is an NSRS of C.
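The computation of Example 4 can also be scripted directly from Definition 3. In the MATLAB sketch below, Tt, It and Ft hold the truth, indeterminacy and falsity entries of Table 4 with rows indexed by $w_i$ and columns by $k_j$; these names are illustrative, not notation from the paper.

% Sketch: neutrosophic soft rough approximations of Example 4 (Definition 3).
Tt = [0.3 0.1 0.3; 0.4 0.3 0.4; 0.1 0.4 0.3; 0.2 0.5 0.5];   % T of P~(w_i, k_j), Table 4
It = [0.4 0.5 0.4; 0.2 0.4 0.6; 0.5 0.4 0.5; 0.3 0.3 0.4];   % I of P~(w_i, k_j)
Ft = [0.5 0.4 0.4; 0.3 0.6 0.7; 0.4 0.3 0.4; 0.4 0.8 0.6];   % F of P~(w_i, k_j)
C  = [0.5 0.2 0.4; 0.2 0.3 0.1; 0.2 0.4 0.6];                % (T, I, F) of C at k1..k3
upperC = zeros(4,3); lowerC = zeros(4,3);
for u = 1:4
    upperC(u,:) = [max(min(Tt(u,:), C(:,1)')), min(max(It(u,:), C(:,2)')), min(max(Ft(u,:), C(:,3)'))];
    lowerC(u,:) = [min(max(Ft(u,:), C(:,1)')), max(min(1 - It(u,:), C(:,2)')), max(min(Tt(u,:), C(:,3)'))];
end
disp(upperC)   % rows reproduce the upper approximation, e.g. (0.3, 0.4, 0.4) for w1
disp(lowerC)   % rows reproduce the lower approximation, e.g. (0.4, 0.4, 0.3) for w1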
Theorem 2.
Let $(Y, M, \widetilde{P})$ be an NSAS. Then, the UNSRA and the LNSRA operators $\overline{\widetilde{P}}(C)$ and $\underline{\widetilde{P}}(C)$ satisfy the following properties for all $C, D \in N(M)$:
(i) $\underline{\widetilde{P}}(\sim C) = \sim\overline{\widetilde{P}}(C)$,
(ii) $\underline{\widetilde{P}}(C \cap D) = \underline{\widetilde{P}}(C) \cap \underline{\widetilde{P}}(D)$,
(iii) $C \subseteq D \Rightarrow \underline{\widetilde{P}}(C) \subseteq \underline{\widetilde{P}}(D)$,
(iv) $\underline{\widetilde{P}}(C \cup D) \supseteq \underline{\widetilde{P}}(C) \cup \underline{\widetilde{P}}(D)$,
(v) $\overline{\widetilde{P}}(\sim C) = \sim\underline{\widetilde{P}}(C)$,
(vi) $\overline{\widetilde{P}}(C \cup D) = \overline{\widetilde{P}}(C) \cup \overline{\widetilde{P}}(D)$,
(vii) $C \subseteq D \Rightarrow \overline{\widetilde{P}}(C) \subseteq \overline{\widetilde{P}}(D)$,
(viii) $\overline{\widetilde{P}}(C \cap D) \subseteq \overline{\widetilde{P}}(C) \cap \overline{\widetilde{P}}(D)$.
Proof. 
(i)
$\sim C = \{(k, F_C(k), 1 - I_C(k), T_C(k)) \mid k \in M\}$. By definition of NSRS, we have
$\overline{\widetilde{P}}(C) = \{(u, T_{\overline{\widetilde{P}}(C)}(u), I_{\overline{\widetilde{P}}(C)}(u), F_{\overline{\widetilde{P}}(C)}(u)) \mid u \in Y\}$,
$\sim\overline{\widetilde{P}}(C) = \{(u, F_{\overline{\widetilde{P}}(C)}(u), 1 - I_{\overline{\widetilde{P}}(C)}(u), T_{\overline{\widetilde{P}}(C)}(u)) \mid u \in Y\}$.
Now,
$F_{\overline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_C(k)\big) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_{\sim C}(k)\big) = T_{\underline{\widetilde{P}}(\sim C)}(u)$,
$1 - I_{\overline{\widetilde{P}}(C)}(u) = 1 - \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_C(k)\big) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge (1 - I_C(k))\big) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_{\sim C}(k)\big) = I_{\underline{\widetilde{P}}(\sim C)}(u)$,
$T_{\overline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_C(k)\big) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_{\sim C}(k)\big) = F_{\underline{\widetilde{P}}(\sim C)}(u)$.
Thus, $\underline{\widetilde{P}}(\sim C) = \sim\overline{\widetilde{P}}(C)$.
(ii)
$\underline{\widetilde{P}}(C \cap D) = \{(u, T_{\underline{\widetilde{P}}(C \cap D)}(u), I_{\underline{\widetilde{P}}(C \cap D)}(u), F_{\underline{\widetilde{P}}(C \cap D)}(u)) \mid u \in Y\}$,
$\underline{\widetilde{P}}(C) \cap \underline{\widetilde{P}}(D) = \{(u, T_{\underline{\widetilde{P}}(C)}(u) \wedge T_{\underline{\widetilde{P}}(D)}(u), I_{\underline{\widetilde{P}}(C)}(u) \vee I_{\underline{\widetilde{P}}(D)}(u), F_{\underline{\widetilde{P}}(C)}(u) \vee F_{\underline{\widetilde{P}}(D)}(u)) \mid u \in Y\}$.
Now, consider
$T_{\underline{\widetilde{P}}(C \cap D)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_{C \cap D}(k)\big) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee (T_C(k) \wedge T_D(k))\big) = \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_C(k)\big)\Big) \wedge \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_D(k)\big)\Big) = T_{\underline{\widetilde{P}}(C)}(u) \wedge T_{\underline{\widetilde{P}}(D)}(u)$,
$I_{\underline{\widetilde{P}}(C \cap D)}(u) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_{C \cap D}(k)\big) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge (I_C(k) \vee I_D(k))\big) = \Big(\bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_C(k)\big)\Big) \vee \Big(\bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_D(k)\big)\Big) = I_{\underline{\widetilde{P}}(C)}(u) \vee I_{\underline{\widetilde{P}}(D)}(u)$,
$F_{\underline{\widetilde{P}}(C \cap D)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_{C \cap D}(k)\big) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge (F_C(k) \vee F_D(k))\big) = \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_C(k)\big)\Big) \vee \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_D(k)\big)\Big) = F_{\underline{\widetilde{P}}(C)}(u) \vee F_{\underline{\widetilde{P}}(D)}(u)$.
Thus, $\underline{\widetilde{P}}(C \cap D) = \underline{\widetilde{P}}(C) \cap \underline{\widetilde{P}}(D)$.
(iii)
It can be easily proven by Definition 3.
(iv)
$\underline{\widetilde{P}}(C \cup D) = \{(u, T_{\underline{\widetilde{P}}(C \cup D)}(u), I_{\underline{\widetilde{P}}(C \cup D)}(u), F_{\underline{\widetilde{P}}(C \cup D)}(u)) \mid u \in Y\}$, $\underline{\widetilde{P}}(C) \cup \underline{\widetilde{P}}(D) = \{(u, T_{\underline{\widetilde{P}}(C)}(u) \vee T_{\underline{\widetilde{P}}(D)}(u), I_{\underline{\widetilde{P}}(C)}(u) \wedge I_{\underline{\widetilde{P}}(D)}(u), F_{\underline{\widetilde{P}}(C)}(u) \wedge F_{\underline{\widetilde{P}}(D)}(u)) \mid u \in Y\}$.
$T_{\underline{\widetilde{P}}(C \cup D)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_{C \cup D}(k)\big) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_C(k) \vee T_D(k)\big) \geq \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_C(k)\big)\Big) \vee \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_D(k)\big)\Big) = T_{\underline{\widetilde{P}}(C)}(u) \vee T_{\underline{\widetilde{P}}(D)}(u)$,
$I_{\underline{\widetilde{P}}(C \cup D)}(u) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_{C \cup D}(k)\big) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_C(k) \wedge I_D(k)\big) \leq \Big(\bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_C(k)\big)\Big) \wedge \Big(\bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_D(k)\big)\Big) = I_{\underline{\widetilde{P}}(C)}(u) \wedge I_{\underline{\widetilde{P}}(D)}(u)$,
$F_{\underline{\widetilde{P}}(C \cup D)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_{C \cup D}(k)\big) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_C(k) \wedge F_D(k)\big) \leq \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_C(k)\big)\Big) \wedge \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_D(k)\big)\Big) = F_{\underline{\widetilde{P}}(C)}(u) \wedge F_{\underline{\widetilde{P}}(D)}(u)$.
Thus, $\underline{\widetilde{P}}(C \cup D) \supseteq \underline{\widetilde{P}}(C) \cup \underline{\widetilde{P}}(D)$.
(viii)
$\overline{\widetilde{P}}(C \cap D) = \{(u, T_{\overline{\widetilde{P}}(C \cap D)}(u), I_{\overline{\widetilde{P}}(C \cap D)}(u), F_{\overline{\widetilde{P}}(C \cap D)}(u)) \mid u \in Y\}$, $\overline{\widetilde{P}}(C) \cap \overline{\widetilde{P}}(D) = \{(u, T_{\overline{\widetilde{P}}(C)}(u) \wedge T_{\overline{\widetilde{P}}(D)}(u), I_{\overline{\widetilde{P}}(C)}(u) \vee I_{\overline{\widetilde{P}}(D)}(u), F_{\overline{\widetilde{P}}(C)}(u) \vee F_{\overline{\widetilde{P}}(D)}(u)) \mid u \in Y\}$.
$T_{\overline{\widetilde{P}}(C \cap D)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_{C \cap D}(k)\big) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_C(k) \wedge T_D(k)\big) \leq \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_C(k)\big)\Big) \wedge \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_D(k)\big)\Big) = T_{\overline{\widetilde{P}}(C)}(u) \wedge T_{\overline{\widetilde{P}}(D)}(u)$,
$I_{\overline{\widetilde{P}}(C \cap D)}(u) = \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_{C \cap D}(k)\big) = \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_C(k) \vee I_D(k)\big) \geq \Big(\bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_C(k)\big)\Big) \vee \Big(\bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_D(k)\big)\Big) = I_{\overline{\widetilde{P}}(C)}(u) \vee I_{\overline{\widetilde{P}}(D)}(u)$,
$F_{\overline{\widetilde{P}}(C \cap D)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_{C \cap D}(k)\big) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_C(k) \vee F_D(k)\big) \geq \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_C(k)\big)\Big) \vee \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_D(k)\big)\Big) = F_{\overline{\widetilde{P}}(C)}(u) \vee F_{\overline{\widetilde{P}}(D)}(u)$.
Thus, $\overline{\widetilde{P}}(C \cap D) \subseteq \overline{\widetilde{P}}(C) \cap \overline{\widetilde{P}}(D)$.
The properties (v)–(vii) of the UNSRA operator P ˜ ¯ ( C ) can be easily proved similarly. ☐
Theorem 3.
Let $(Y, M, \widetilde{P})$ be an NSAS. The UNSRA and the LNSRA operators $\overline{\widetilde{P}}$ and $\underline{\widetilde{P}}$ satisfy the following properties for all $C, D \in N(M)$:
(i) $\underline{\widetilde{P}}(C - D) \supseteq \underline{\widetilde{P}}(C) - \overline{\widetilde{P}}(D)$,
(ii) $\overline{\widetilde{P}}(C - D) \subseteq \overline{\widetilde{P}}(C) - \underline{\widetilde{P}}(D)$.
Proof. 
(i)
By Definition 3 and the definition of the difference of two NSs, for all $u \in Y$,
$T_{\underline{\widetilde{P}}(C - D)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_{C - D}(k)\big) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee (T_C(k) \wedge F_D(k))\big) = \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_C(k)\big)\Big) \wedge \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_D(k)\big)\Big) = T_{\underline{\widetilde{P}}(C)}(u) \wedge F_{\overline{\widetilde{P}}(D)}(u) = T_{\underline{\widetilde{P}}(C) - \overline{\widetilde{P}}(D)}(u)$,
$I_{\underline{\widetilde{P}}(C - D)}(u) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_{C - D}(k)\big) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge (I_C(k) \vee (1 - I_D(k)))\big) \leq \Big(\bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_C(k)\big)\Big) \vee \Big(1 - \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_D(k)\big)\Big) = I_{\underline{\widetilde{P}}(C)}(u) \vee \big(1 - I_{\overline{\widetilde{P}}(D)}(u)\big) = I_{\underline{\widetilde{P}}(C) - \overline{\widetilde{P}}(D)}(u)$,
$F_{\underline{\widetilde{P}}(C - D)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_{C - D}(k)\big) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge (F_C(k) \vee T_D(k))\big) \leq \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_C(k)\big)\Big) \vee \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_D(k)\big)\Big) = F_{\underline{\widetilde{P}}(C)}(u) \vee T_{\overline{\widetilde{P}}(D)}(u) = F_{\underline{\widetilde{P}}(C) - \overline{\widetilde{P}}(D)}(u)$.
Thus, $\underline{\widetilde{P}}(C - D) \supseteq \underline{\widetilde{P}}(C) - \overline{\widetilde{P}}(D)$.
(ii)
By Definition 3 and the definition of the difference of two NSs, for all $u \in Y$,
$T_{\overline{\widetilde{P}}(C - D)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_{C - D}(k)\big) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_C(k) \wedge F_D(k)\big) \leq \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_C(k)\big)\Big) \wedge \Big(\bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_D(k)\big)\Big) = T_{\overline{\widetilde{P}}(C)}(u) \wedge F_{\underline{\widetilde{P}}(D)}(u) = T_{\overline{\widetilde{P}}(C) - \underline{\widetilde{P}}(D)}(u)$,
$I_{\overline{\widetilde{P}}(C - D)}(u) = \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_{C - D}(k)\big) = \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_C(k) \vee (1 - I_D(k))\big) \geq \Big(\bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_C(k)\big)\Big) \vee \Big(1 - \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_D(k)\big)\Big) = I_{\overline{\widetilde{P}}(C)}(u) \vee \big(1 - I_{\underline{\widetilde{P}}(D)}(u)\big) = I_{\overline{\widetilde{P}}(C) - \underline{\widetilde{P}}(D)}(u)$,
$F_{\overline{\widetilde{P}}(C - D)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_{C - D}(k)\big) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_C(k) \vee T_D(k)\big) \geq \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_C(k)\big)\Big) \vee \Big(\bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_D(k)\big)\Big) = F_{\overline{\widetilde{P}}(C)}(u) \vee T_{\underline{\widetilde{P}}(D)}(u) = F_{\overline{\widetilde{P}}(C) - \underline{\widetilde{P}}(D)}(u)$.
Thus, $\overline{\widetilde{P}}(C - D) \subseteq \overline{\widetilde{P}}(C) - \underline{\widetilde{P}}(D)$.
 ☐
Theorem 4.
Let $(Y, M, \widetilde{P})$ be an NSAS. If $\widetilde{P}$ is serial, then the UNSA and the LNSA operators $\overline{\widetilde{P}}$ and $\underline{\widetilde{P}}$ satisfy the following properties for all $C \in N(M)$:
(i) $\overline{\widetilde{P}}(\emptyset) = \emptyset$, $\underline{\widetilde{P}}(M) = Y$,
(ii) $\underline{\widetilde{P}}(C) \subseteq \overline{\widetilde{P}}(C)$.
Proof. 
(i)
$\overline{\widetilde{P}}(\emptyset) = \{(u, T_{\overline{\widetilde{P}}(\emptyset)}(u), I_{\overline{\widetilde{P}}(\emptyset)}(u), F_{\overline{\widetilde{P}}(\emptyset)}(u)) \mid u \in Y\}$, where $T_{\overline{\widetilde{P}}(\emptyset)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge T_{\emptyset}(k)\big)$, $I_{\overline{\widetilde{P}}(\emptyset)}(u) = \bigwedge_{k \in M}\big(I_{\widetilde{P}}(u,k) \vee I_{\emptyset}(k)\big)$, $F_{\overline{\widetilde{P}}(\emptyset)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee F_{\emptyset}(k)\big)$.
Since $\emptyset$ is the null NS on M, $T_{\emptyset}(k) = 0$, $I_{\emptyset}(k) = 1$, $F_{\emptyset}(k) = 1$ for all $k \in M$, which implies $T_{\overline{\widetilde{P}}(\emptyset)}(u) = 0$, $I_{\overline{\widetilde{P}}(\emptyset)}(u) = 1$, $F_{\overline{\widetilde{P}}(\emptyset)}(u) = 1$. Thus, $\overline{\widetilde{P}}(\emptyset) = \emptyset$.
Now,
$\underline{\widetilde{P}}(M) = \{(u, T_{\underline{\widetilde{P}}(M)}(u), I_{\underline{\widetilde{P}}(M)}(u), F_{\underline{\widetilde{P}}(M)}(u)) \mid u \in Y\}$, where $T_{\underline{\widetilde{P}}(M)}(u) = \bigwedge_{k \in M}\big(F_{\widetilde{P}}(u,k) \vee T_M(k)\big)$, $I_{\underline{\widetilde{P}}(M)}(u) = \bigvee_{k \in M}\big((1 - I_{\widetilde{P}}(u,k)) \wedge I_M(k)\big)$, $F_{\underline{\widetilde{P}}(M)}(u) = \bigvee_{k \in M}\big(T_{\widetilde{P}}(u,k) \wedge F_M(k)\big)$.
Since M is the full NS on M, $T_M(k) = 1$, $I_M(k) = 0$, $F_M(k) = 0$ for all $k \in M$, which implies $T_{\underline{\widetilde{P}}(M)}(u) = 1$, $I_{\underline{\widetilde{P}}(M)}(u) = 0$, $F_{\underline{\widetilde{P}}(M)}(u) = 0$. Thus, $\underline{\widetilde{P}}(M) = Y$.
(ii)
Since $(Y, M, \widetilde{P})$ is an NSAS and $\widetilde{P}$ is a serial neutrosophic soft relation, for each $u \in Y$ there exists $k \in M$ such that $T_{\widetilde{P}}(u,k) = 1$, $I_{\widetilde{P}}(u,k) = 0$ and $F_{\widetilde{P}}(u,k) = 0$. The UNSRA and LNSRA operators $\overline{\widetilde{P}}(C)$ and $\underline{\widetilde{P}}(C)$ of an NS C can then be written as
$T_{\overline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M} T_C(k)$, $I_{\overline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M} I_C(k)$, $F_{\overline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M} F_C(k)$,
$T_{\underline{\widetilde{P}}(C)}(u) = \bigwedge_{k \in M} T_C(k)$, $I_{\underline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M} I_C(k)$, $F_{\underline{\widetilde{P}}(C)}(u) = \bigvee_{k \in M} F_C(k)$.
Clearly, $T_{\underline{\widetilde{P}}(C)}(u) \leq T_{\overline{\widetilde{P}}(C)}(u)$, $I_{\underline{\widetilde{P}}(C)}(u) \geq I_{\overline{\widetilde{P}}(C)}(u)$ and $F_{\underline{\widetilde{P}}(C)}(u) \geq F_{\overline{\widetilde{P}}(C)}(u)$ for all $u \in Y$. Thus, $\underline{\widetilde{P}}(C) \subseteq \overline{\widetilde{P}}(C)$.
 ☐
The conventional NSS is a mapping from a parameter to a neutrosophic subset of the universe; let $(\widetilde{P}, M)$ be an NSS. Now, we present the constructive definition of a neutrosophic soft rough relation by using a neutrosophic soft relation $\widetilde{R}$ from $M \times M = M'$ to $N(Y \times Y = Y')$, where Y is a universal set and M is a set of parameters.
Definition 4.
A neutrosophic soft rough relation $(\underline{\widetilde{R}}(D), \overline{\widetilde{R}}(D))$ on Y is an NSRS, where $\widetilde{R} : M' \to N(Y')$ is a neutrosophic soft relation on Y defined by
$\widetilde{R}(k_i k_j) = \{u_i u_j \mid u_i \in \widetilde{P}(k_i), u_j \in \widetilde{P}(k_j)\}$, $u_i u_j \in Y'$,
such that
$T_{\widetilde{R}}(u_i u_j, k_i k_j) \leq \min\{T_{\widetilde{P}}(u_i, k_i), T_{\widetilde{P}}(u_j, k_j)\}$, $I_{\widetilde{R}}(u_i u_j, k_i k_j) \leq \max\{I_{\widetilde{P}}(u_i, k_i), I_{\widetilde{P}}(u_j, k_j)\}$, $F_{\widetilde{R}}(u_i u_j, k_i k_j) \leq \max\{F_{\widetilde{P}}(u_i, k_i), F_{\widetilde{P}}(u_j, k_j)\}$.
For any $D \in N(M')$, the UNSA and the LNSA operators of D w.r.t. $(Y', M', \widetilde{R})$ are defined as follows:
$\overline{\widetilde{R}}(D) = \{(u_i u_j, T_{\overline{\widetilde{R}}(D)}(u_i u_j), I_{\overline{\widetilde{R}}(D)}(u_i u_j), F_{\overline{\widetilde{R}}(D)}(u_i u_j)) \mid u_i u_j \in Y'\}$,
$\underline{\widetilde{R}}(D) = \{(u_i u_j, T_{\underline{\widetilde{R}}(D)}(u_i u_j), I_{\underline{\widetilde{R}}(D)}(u_i u_j), F_{\underline{\widetilde{R}}(D)}(u_i u_j)) \mid u_i u_j \in Y'\}$,
where
$T_{\overline{\widetilde{R}}(D)}(u_i u_j) = \bigvee_{k_i k_j \in M'}\big(T_{\widetilde{R}}(u_i u_j, k_i k_j) \wedge T_D(k_i k_j)\big)$, $I_{\overline{\widetilde{R}}(D)}(u_i u_j) = \bigwedge_{k_i k_j \in M'}\big(I_{\widetilde{R}}(u_i u_j, k_i k_j) \vee I_D(k_i k_j)\big)$, $F_{\overline{\widetilde{R}}(D)}(u_i u_j) = \bigwedge_{k_i k_j \in M'}\big(F_{\widetilde{R}}(u_i u_j, k_i k_j) \vee F_D(k_i k_j)\big)$,
$T_{\underline{\widetilde{R}}(D)}(u_i u_j) = \bigwedge_{k_i k_j \in M'}\big(F_{\widetilde{R}}(u_i u_j, k_i k_j) \vee T_D(k_i k_j)\big)$, $I_{\underline{\widetilde{R}}(D)}(u_i u_j) = \bigvee_{k_i k_j \in M'}\big((1 - I_{\widetilde{R}}(u_i u_j, k_i k_j)) \wedge I_D(k_i k_j)\big)$, $F_{\underline{\widetilde{R}}(D)}(u_i u_j) = \bigvee_{k_i k_j \in M'}\big(T_{\widetilde{R}}(u_i u_j, k_i k_j) \wedge F_D(k_i k_j)\big)$.
The pair $(\underline{\widetilde{R}}(D), \overline{\widetilde{R}}(D))$ is called an NSRR, and $\underline{\widetilde{R}}, \overline{\widetilde{R}} : N(M') \to N(Y')$ are called the LNSRA and the UNSRA operators, respectively.
Remark 4.
Consider an NS D on $M'$ and an NS C on M with
$T_D(k_i k_j) \leq \min\{T_C(k_i), T_C(k_j)\}$, $I_D(k_i k_j) \leq \max\{I_C(k_i), I_C(k_j)\}$, $F_D(k_i k_j) \leq \max\{F_C(k_i), F_C(k_j)\}$.
According to the definition of NSRR, we get
$T_{\overline{\widetilde{R}}(D)}(u_i u_j) \leq \min\{T_{\overline{\widetilde{R}}(C)}(u_i), T_{\overline{\widetilde{R}}(C)}(u_j)\}$, $I_{\overline{\widetilde{R}}(D)}(u_i u_j) \leq \max\{I_{\overline{\widetilde{R}}(C)}(u_i), I_{\overline{\widetilde{R}}(C)}(u_j)\}$, $F_{\overline{\widetilde{R}}(D)}(u_i u_j) \leq \max\{F_{\overline{\widetilde{R}}(C)}(u_i), F_{\overline{\widetilde{R}}(C)}(u_j)\}$.
Similarly, for the LNSRA operator $\underline{\widetilde{R}}(D)$,
$T_{\underline{\widetilde{R}}(D)}(u_i u_j) \geq \min\{T_{\underline{\widetilde{R}}(C)}(u_i), T_{\underline{\widetilde{R}}(C)}(u_j)\}$, $I_{\underline{\widetilde{R}}(D)}(u_i u_j) \leq \max\{I_{\underline{\widetilde{R}}(C)}(u_i), I_{\underline{\widetilde{R}}(C)}(u_j)\}$, $F_{\underline{\widetilde{R}}(D)}(u_i u_j) \leq \max\{F_{\underline{\widetilde{R}}(C)}(u_i), F_{\underline{\widetilde{R}}(C)}(u_j)\}$.
Example 5.
Let Y = { u 1 , u 2 , u 3 } be a universal set and M = { k 1 , k 2 , k 3 } a set of parameters. A neutrosophic soft set ( P ˜ , M ) on Y can be defined in tabular form (see Table 5) as follows:
Let $E = \{u_1u_2, u_2u_3, u_2u_2, u_3u_2\} \subseteq Y'$ and $L = \{k_1k_3, k_2k_1, k_3k_2\} \subseteq M'$.
Then, a neutrosophic soft relation $\widetilde{R}$ on E (from L to E) can be defined in tabular form (see Table 6) as follows:
Let $C = \{(k_1, 0.2, 0.4, 0.6), (k_2, 0.4, 0.5, 0.2), (k_3, 0.1, 0.2, 0.4)\}$ be an NS on M; then
$\overline{\widetilde{R}}(C) = \{(u_1, 0.4, 0.2, 0.4), (u_2, 0.3, 0.4, 0.3), (u_3, 0.4, 0.2, 0.3)\}$,
$\underline{\widetilde{R}}(C) = \{(u_1, 0.3, 0.5, 0.4), (u_2, 0.2, 0.5, 0.6), (u_3, 0.4, 0.5, 0.6)\}$.
Let $D = \{(k_1k_3, 0.1, 0.3, 0.5), (k_2k_1, 0.2, 0.4, 0.3), (k_3k_2, 0.1, 0.2, 0.3)\}$ be an NS on L; then
$\overline{\widetilde{R}}(D) = \{(u_1u_2, 0.2, 0.3, 0.3), (u_2u_3, 0.2, 0.3, 0.3), (u_2u_2, 0.2, 0.4, 0.3), (u_3u_2, 0.2, 0.4, 0.3)\}$,
$\underline{\widetilde{R}}(D) = \{(u_1u_2, 0.2, 0.4, 0.4), (u_2u_3, 0.2, 0.4, 0.5), (u_2u_2, 0.3, 0.4, 0.5), (u_3u_2, 0.2, 0.4, 0.5)\}$.
Hence, $\widetilde{R}(D) = (\underline{\widetilde{R}}(D), \overline{\widetilde{R}}(D))$ is an NSRR.
Theorem 5.
Let $\widetilde{P}_1$, $\widetilde{P}_2$ be two neutrosophic soft relations from a universe Y to a parameter set M; for all $C \in N(M)$, we have
(i) $\underline{\widetilde{P}_1 \cup \widetilde{P}_2}(C) = \underline{\widetilde{P}_1}(C) \cap \underline{\widetilde{P}_2}(C)$,
(ii) $\overline{\widetilde{P}_1 \cup \widetilde{P}_2}(C) = \overline{\widetilde{P}_1}(C) \cup \overline{\widetilde{P}_2}(C)$.
Theorem 6.
Let $\widetilde{P}_1$, $\widetilde{P}_2$ be two neutrosophic soft relations from a universe Y to a parameter set M; for all $C \in N(M)$, we have
(i) $\underline{\widetilde{P}_1 \cap \widetilde{P}_2}(C) \supseteq \underline{\widetilde{P}_1}(C) \cup \underline{\widetilde{P}_2}(C) \supseteq \underline{\widetilde{P}_1}(C) \cap \underline{\widetilde{P}_2}(C)$,
(ii) $\overline{\widetilde{P}_1 \cap \widetilde{P}_2}(C) \subseteq \overline{\widetilde{P}_1}(C) \cap \overline{\widetilde{P}_2}(C)$.

4. Application

In this section, we apply the concept of NSRSs to a decision-making problem. In recent times, the object recognition problem has gained considerable importance. The object recognition problem can be considered as a decision-making problem in which the final identification of an object is based on a given amount of information. A detailed description of the algorithm for the selection of the most suitable object from an available set of alternatives is given, and the proposed decision-making method calculates the lower and upper approximation operators to address the problem. The presented algorithm can be applied to avoid lengthy calculations when dealing with a large number of objects. This method can be applied in various domains for the multi-criteria selection of objects. A multi-criteria decision-making (MCDM) problem can be modeled with neutrosophic soft rough sets, which are well suited to this kind of problem.
In the pharmaceutical industry, different pharmaceutical companies discover, develop and produce pharmaceutical medicines (drugs) for use as medication. These pharmaceutical companies deal with “brand name medicines” and “generic medicines”. A brand name medicine and its generic version are bioequivalent: they have the same rate and extent of absorption. They have the same active ingredients, but the inactive ingredients may differ, and the product may differ slightly in color, shape, or markings. The most important difference is cost: generic medicine is less expensive than the brand name medicine, since generic drug manufacturers compete to produce products that cost less. We consider a brand name drug “u = Claritin (loratadine)” with an ideal neutrosophic value number $n_u = (1, 0, 0)$, used for seasonal allergy medication. Consider
Y = { u 1 = Nasacort Aq ( Triamcinolone ) , u 2 = Zyrtec D ( Cetirizine / Pseudoephedrine ) , u 3 = Sudafed ( Pseudoephedrine ) , u 4 = Claritin D ( loratadine / pseudoephedrine ) , u 5 = Flonase ( Fluticasone ) }
is a set of generic versions of “Claritin”. We want to select the most suitable generic version of Claritin on the basis of the parameters $e_1$ = highly soluble, $e_2$ = highly permeable and $e_3$ = rapidly dissolving. Let $M = \{e_1, e_2, e_3\}$ be the set of parameters, and let $\widetilde{P}$ be a neutrosophic soft relation from Y to the parameter set M as shown in Table 7.
Suppose C = { ( e 1 , 0.2 , 0.4 , 0.5 ) , ( e 2 , 0.5 , 0.6 , 0.4 ) , ( e 3 , 0.7 , 0.5 , 0.4 ) } is the most favorable object that is an NS on the parameter set M under consideration. Then, ( P ˜ ̲ ( C ) , P ˜ ¯ ( C ) ) is an NSRS in NSAS ( Y , M , P ˜ ) , where
P ˜ ¯ ( C ) = { ( u 1 , 0.6 , 0.5 , 0.4 ) , ( u 2 , 0.7 , 0.4 , 0.4 ) , ( u 3 , 0.7 , 0.4 , 0.4 ) , ( u 4 , 0.7 , 0.6 , 0.5 ) , ( u 5 , 0.7 , 0.5 , 0.5 ) } , P ˜ ̲ ( C ) = { ( u 1 , 0.5 , 0.6 , 0.4 ) , ( u 2 , 0.5 , 0.6 , 0.5 ) , ( u 3 , 0.3 , 0.3 , 0.5 ) , ( u 4 , 0.5 , 0.6 , 0.5 ) , ( u 5 , 0.4 , 0.5 , 0.5 ) } .
In [6], the sum of two neutrosophic numbers is defined. The sum of the UNSRA and the LNSRA operators $\overline{\widetilde{P}}(C)$ and $\underline{\widetilde{P}}(C)$ is an NS $\overline{\widetilde{P}}(C) \oplus \underline{\widetilde{P}}(C)$ defined by
$\overline{\widetilde{P}}(C) \oplus \underline{\widetilde{P}}(C) = \{(u_1, 0.8, 0.3, 0.16), (u_2, 0.85, 0.24, 0.2), (u_3, 0.79, 0.2, 0.2), (u_4, 0.85, 0.36, 0.25), (u_5, 0.82, 0.25, 0.25)\}$.
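Concretely, the sum used here is taken componentwise, consistent with steps 43–45 of Algorithm 1: $(T_1, I_1, F_1) \oplus (T_2, I_2, F_2) = (T_1 + T_2 - T_1 T_2,\; I_1 I_2,\; F_1 F_2)$. For instance, at $u_1$,
$(0.6, 0.5, 0.4) \oplus (0.5, 0.6, 0.4) = (0.6 + 0.5 - 0.6 \times 0.5,\; 0.5 \times 0.6,\; 0.4 \times 0.4) = (0.8, 0.3, 0.16).$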
Let $n_{u_i} = (T_{n_{u_i}}, I_{n_{u_i}}, F_{n_{u_i}})$ be the neutrosophic value number of the generic version $u_i$. We can calculate the cosine similarity measure $S(n_{u_i}, n_u)$ between each neutrosophic value number $n_{u_i}$ of a generic version $u_i$ and the ideal value number $n_u$ of the brand name drug u, and thereby grade all generic versions in Y. The cosine similarity measure is calculated as the inner product of two vectors divided by the product of their lengths; it is the cosine of the angle between the vector representations of two neutrosophic soft rough sets and is a fundamental measure used in information technology. In [3], the cosine similarity measure between neutrosophic numbers is studied and shown to be a special case of the correlation coefficient in SVNSs; there, a decision-making method is proposed by use of the cosine similarity measure of SVNSs, in which the evaluation information for alternatives with respect to criteria is expressed by truth-membership, indeterminacy-membership and falsity-membership degrees under a single-valued neutrosophic environment. It is defined as follows:
$$S(n_u, n_{u_i}) = \frac{T_{n_u} \cdot T_{n_{u_i}} + I_{n_u} \cdot I_{n_{u_i}} + F_{n_u} \cdot F_{n_{u_i}}}{\sqrt{T_{n_u}^2 + I_{n_u}^2 + F_{n_u}^2}\,\sqrt{T_{n_{u_i}}^2 + I_{n_{u_i}}^2 + F_{n_{u_i}}^2}}. \qquad (1)$$
Through the cosine similarity measure between each object and the ideal object, the ranking order of all objects can be determined and the best object can be easily identified. The advantage of the proposed MCDM approach is that it uses simple tools and concepts from the neutrosophic similarity measure compared with existing approaches. An illustrative application shows that the proposed method is simple and effective.
The generic version medicine u i with the larger similarity measure S ( n u i , n u ) is the most suitable version u i because it is close to the brand name drug u. By comparing the cosine similarity measure values, the grading of all generic medicines can be determined, and we can find the most suitable generic medicine after selection of suitable NS of parameters. By Equation (1), we can calculate the cosine similarity measure between neutrosophic value numbers n u of u and n u i of u i as follows:
$S(n_u, n_{u_1}) = 0.9203$, $S(n_u, n_{u_2}) = 0.9386$, $S(n_u, n_{u_3}) = 0.9415$, $S(n_u, n_{u_4}) = 0.8888$, $S(n_u, n_{u_5}) = 0.9183$.
We get S ( n u , n u 3 ) > S ( n u , n u 2 ) > S ( n u , n u 1 ) > S ( n u , n u 5 ) > S ( n u , n u 4 ) . Thus, the optimal decision is u 3 , and the most suitable generic version of Claritin is Sudafed (Pseudoephedrine). We have used software MATLAB (version 7, MathWorks, Natick, MA, USA) for calculations in the application. The flow chart of the algorithm is general for any number of objects with respect to certain parameters. The flow chart of our proposed method is given in Figure 1. The method is presented as an algorithm in Algorithm 1.
Algorithm 1: Algorithm for selection of the most suitable objects
 1. Begin
 2.   Input the number of elements in universal set Y = { u 1 , u 2 , , u n } .
 3.   Input the number of elements in parameter set M = { e 1 , e 2 , , e m } .
 4.   Input a neutrosophic soft relation P ˜ from Y to M.
 5.   Input an NS C on M.
 6.    if size($\widetilde{P}$) ~= [n, 3m]
 7.      fprintf('size of neutrosophic soft relation from universal set to parameter
                        set is not correct, it should be of order %dx%d;', n, 3m)
 8.       error('Dimension of neutrosophic soft relation on vertex set is not correct.')
 9.    end
 10.   if size(C) ~= [m, 3]
 11.     fprintf('size of NS on parameter set is not correct,
                       it should be of order %dx3;', m)
 12.     error('Dimension of NS on parameter set is not correct.')
 13.   end
 14.   T P ˜ ¯ ( C ) = z e r o s ( n , 1 ) ;
 15.   I P ˜ ¯ ( C ) = o n e s ( n , 1 ) ;
 16.   F P ˜ ¯ ( C ) = o n e s ( n , 1 ) ;
 17.   T P ˜ ̲ ( C ) = o n e s ( n , 1 ) ;
 18.   I P ˜ ̲ ( C ) = z e r o s ( n , 1 ) ;
 19.   F P ˜ ̲ ( C ) = z e r o s ( n , 1 ) ;
 20.       if s i z e ( P ˜ ) = = [ n , 3 m ]
 21.           if s i z e ( C ) = = [ m , 3 ]
 22.               if P ˜ > = 0 & & P ˜ < = 1
 23.                       if C > = 0 & & C < = 1
 24.                                for i = 1 : n
 25.                                         for k = 1 : m
 26.                                             j=3*k-2;
 27.                                              T P ˜ ¯ ( C ) ( i , 1 ) = max ( T P ˜ ¯ ( C ) ( i , 1 ) , min ( P ˜ ( i , j ) , C ( k , 1 ) ) ) ;
 28.                                              I P ˜ ¯ ( C ) ( i , 1 ) = min ( I P ˜ ¯ ( C ) ( i , 1 ) , max ( P ˜ ( i , j + 1 ) , C ( k , 2 ) ) ) ;
 29.                                              F P ˜ ¯ ( C ) ( i , 1 ) = min ( F P ˜ ¯ ( C ) ( i , 1 ) , max ( P ˜ ( i , j + 2 ) , C ( k , 3 ) ) ) ;
 30.                                              T P ˜ ̲ ( C ) ( i , 1 ) = min ( T P ˜ ̲ ( C ) ( i , 1 ) , max ( P ˜ ( i , j + 2 ) , C ( k , 1 ) ) ) ;
 31.                                              I P ˜ ̲ ( C ) ( i , 1 ) = max ( I P ˜ ̲ ( C ) ( i , 1 ) , min ( ( 1 P ˜ ( i , j + 1 ) ) , C ( k , 2 ) ) ) ;
 32.                                              F P ˜ ̲ ( C ) ( i , 1 ) = max ( F P ˜ ̲ ( C ) ( i , 1 ) , min ( P ˜ ( i , j ) , C ( k , 3 ) ) ) ;
 33.                                         end
 34.                                  end
 35.                            P ˜ ¯ ( C ) = ( T P ˜ ¯ ( C ) , I P ˜ ¯ ( C ) , F P ˜ ¯ ( C ) )
 36.                              P ˜ ̲ ( C ) = ( T P ˜ ̲ ( C ) , I P ˜ ¯ ( C ) , F P ˜ ̲ ( C ) )
 37.                            if P ˜ ¯ ( C ) = = P ˜ ̲ ( C )
 38.                                  fprintf(‛ it is a neutrosophic set on universal set. )
 39.                            else
 40.                                  fprintf(‛it is an NSRS on universal set. )
 41.                                   P ˜ ¯ ( C ) P ˜ ̲ ( C ) = z e r o s ( n , 3 ) ;
 42.                                          for i=1:n
 43.                          $T_{\overline{\widetilde{P}}(C)}(i) \oplus T_{\underline{\widetilde{P}}(C)}(i) = T_{\overline{\widetilde{P}}(C)}(i) + T_{\underline{\widetilde{P}}(C)}(i) - T_{\overline{\widetilde{P}}(C)}(i) \cdot T_{\underline{\widetilde{P}}(C)}(i)$;
 44.                          $I_{\overline{\widetilde{P}}(C)}(i) \oplus I_{\underline{\widetilde{P}}(C)}(i) = I_{\overline{\widetilde{P}}(C)}(i) \cdot I_{\underline{\widetilde{P}}(C)}(i)$;
 45.                          $F_{\overline{\widetilde{P}}(C)}(i) \oplus F_{\underline{\widetilde{P}}(C)}(i) = F_{\overline{\widetilde{P}}(C)}(i) \cdot F_{\underline{\widetilde{P}}(C)}(i)$;
 46.                                            end
 47.                                   n u = ( 1 , 0 , 0 ) ;
 48.                                   S ( n u , n u i ) = z e r o s ( n , 1 ) ;
 49.                                   for i=1:n
 50.                                     $S(n_u, n_{u_i}) = \dfrac{T_{n_u} \cdot T_{n_{u_i}} + I_{n_u} \cdot I_{n_{u_i}} + F_{n_u} \cdot F_{n_{u_i}}}{\sqrt{T_{n_u}^2 + I_{n_u}^2 + F_{n_u}^2}\,\sqrt{T_{n_{u_i}}^2 + I_{n_{u_i}}^2 + F_{n_{u_i}}^2}}$;
 51.                                   end
 52.                               S ( n u , n u i )
 53.                              D=max(S);
 54.                              l=0;
 55.                              m=zeros(n,1);
 56.                              D2=zeros(n,1);
 57.                               for j=1:n
 58.                                              if S(j,1)==D
 59.                                             l=l+1;
 60.                                           D2(j,1)=S(j,1);
 61.                                                m(j)=j;
 62.                                             end
 63.                               end
 64.                               for j = 1 : n
 65.                                    if m(j) ~= 0
 66.                                   fprintf('you can choose the element u%d', j)
 67.                                    end
 68.                               end
 69.                      end
 70.           end
 71.            end
 72.         end
 73.     end
 74. End
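The final ranking step of Algorithm 1 (the neutrosophic sum of the two approximations followed by the cosine similarity of Equation (1)) can be sketched in MATLAB as follows; the data are the approximations computed above for the Claritin example, and the variable names are illustrative rather than taken from the published code.

% Sketch: neutrosophic sum of the approximations and cosine-similarity ranking (Section 4).
upperC = [0.6 0.5 0.4; 0.7 0.4 0.4; 0.7 0.4 0.4; 0.7 0.6 0.5; 0.7 0.5 0.5];   % UNSRA of C
lowerC = [0.5 0.6 0.4; 0.5 0.6 0.5; 0.3 0.3 0.5; 0.5 0.6 0.5; 0.4 0.5 0.5];   % LNSRA of C
% neutrosophic sum: (T1 + T2 - T1*T2, I1*I2, F1*F2)
nvals = [upperC(:,1) + lowerC(:,1) - upperC(:,1).*lowerC(:,1), upperC(:,2).*lowerC(:,2), upperC(:,3).*lowerC(:,3)];
nu = [1 0 0];                                               % ideal neutrosophic value number of the brand name drug
S = (nvals * nu') ./ (norm(nu) * sqrt(sum(nvals.^2, 2)));   % cosine similarity, Equation (1)
[~, best] = max(S);
fprintf('most suitable generic version: u%d\n', best)       % u3 (Sudafed) is top-ranked, as in Section 4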

5. Conclusions and Future Directions

Rough set theory can be considered as an extension of classical set theory. Rough set theory is a very useful mathematical model to handle vagueness. NS theory, RS theory and SS theory are three useful distinguished approaches to deal with vagueness. NS and RS models are used to handle uncertainty, and combining these two models with another remarkable model of SSs gives more precise results for decision-making problems. In this paper, we have first presented the notion of SRNSs. Furthermore, we have introduced NSRSs and investigated some properties of NSRSs in detail. The notion of NSRS can be utilized as a mathematical tool to deal with imprecise and unspecified information. In addition, a decision-making method based on NSRSs has been proposed. This research work can be extended to (1) rough bipolar neutrosophic soft sets; (2) bipolar neutrosophic soft rough sets; (3) interval-valued bipolar neutrosophic rough sets; and (4) neutrosophic soft rough graphs.

Author Contributions

Muhammad Akram and Sundas Shahzadi conceived and designed the experiments; Florentin Smarandache analyzed the data; Sundas Shahzadi wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; American Research Press: Rehoboth, DE, USA, 1998; 105p.
  2. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single-valued neutrosophic sets. Multispace Multistruct. 2010, 4, 410–413.
  3. Ye, J. Multicriteria decision-making method using the correlation coefficient under single-valued neutrosophic environment. Int. J. Gen. Syst. 2013, 42, 386–394.
  4. Ye, J. Improved correlation coefficients of single valued neutrosophic sets and interval neutrosophic sets for multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 27, 2453–2462.
  5. Ye, J.; Fu, J. Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function. Comput. Methods Prog. Biomed. 2016, 123, 142–149.
  6. Peng, J.J.; Wang, J.Q.; Zhang, H.Y.; Chen, X.H. An outranking approach for multi-criteria decision-making problems with simplified neutrosophic sets. Appl. Soft Comput. 2014, 25, 336–346.
  7. Molodtsov, D.A. Soft set theory—first results. Comput. Math. Appl. 1999, 37, 19–31.
  8. Maji, P.K.; Biswas, R.; Roy, A.R. Fuzzy soft sets. J. Fuzzy Math. 2001, 9, 589–602.
  9. Maji, P.K.; Biswas, R.; Roy, A.R. Intuitionistic fuzzy soft sets. J. Fuzzy Math. 2001, 9, 677–692.
  10. Maji, P.K. Neutrosophic soft set. Ann. Fuzzy Math. Inform. 2013, 5, 157–168.
  11. Babitha, K.V.; Sunil, J.J. Soft set relations and functions. Comput. Math. Appl. 2010, 60, 1840–1849.
  12. Sahin, R.; Kucuk, A. On similarity and entropy of neutrosophic soft sets. J. Intell. Fuzzy Syst. Appl. Eng. Technol. 2014, 27, 2417–2430.
  13. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
  14. Ali, M. A note on soft sets, rough sets and fuzzy soft sets. Appl. Soft Comput. 2011, 11, 3329–3332.
  15. Feng, F.; Liu, X.; Leoreanu-Fotea, B.; Jun, Y.B. Soft sets and soft rough sets. Inf. Sci. 2011, 181, 1125–1137.
  16. Shabir, M.; Ali, M.I.; Shaheen, T. Another approach to soft rough sets. Knowl.-Based Syst. 2013, 40, 72–80.
  17. Feng, F.; Li, C.; Davvaz, B.; Ali, M.I. Soft sets combined with fuzzy sets and rough sets: A tentative approach. Soft Comput. 2010, 14, 899–911.
  18. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209.
  19. Meng, D.; Zhang, X.; Qin, K. Soft rough fuzzy sets and soft fuzzy rough sets. Comput. Math. Appl. 2011, 62, 4635–4645.
  20. Sun, B.Z.; Ma, W.; Liu, Q. An approach to decision making based on intuitionistic fuzzy rough sets over two universes. J. Oper. Res. Soc. 2013, 64, 1079–1089.
  21. Sun, B.Z.; Ma, W. Soft fuzzy rough sets and its application in decision making. Artif. Intell. Rev. 2014, 41, 67–80.
  22. Zhang, X.; Dai, J.; Yu, Y. On the union and intersection operations of rough sets based on various approximation spaces. Inf. Sci. 2015, 292, 214–229.
  23. Zhang, H.; Shu, L. Generalized intuitionistic fuzzy rough set based on intuitionistic fuzzy covering. Inf. Sci. 2012, 198, 186–206.
  24. Zhang, X.; Zhou, B.; Li, P. A general frame for intuitionistic fuzzy rough sets. Inf. Sci. 2012, 216, 34–49.
  25. Zhang, H.; Shu, L.; Liao, S. Intuitionistic fuzzy soft rough set and its application in decision making. Abstr. Appl. Anal. 2014, 2014, 13.
  26. Zhang, H.; Xiong, L.; Ma, W. Generalized intuitionistic fuzzy soft rough set and its application in decision making. J. Comput. Anal. Appl. 2016, 20, 750–766.
  27. Broumi, S.; Smarandache, F. Interval-valued neutrosophic soft rough sets. Int. J. Comput. Math. 2015, 2015, 232919.
  28. Broumi, S.; Smarandache, F.; Dhar, M. Rough Neutrosophic sets. Neutrosophic Sets Syst. 2014, 3, 62–67.
  29. Yang, H.L.; Zhang, C.L.; Guo, Z.L.; Liu, Y.L.; Liao, X. A hybrid model of single valued neutrosophic sets and rough sets: Single valued neutrosophic rough set model. Soft Comput. 2016, 21, 6253–6267.
  30. Faizi, S.; Salabun, W.; Rashid, T.; Watrbski, J.; Zafar, S. Group decision-making for hesitant fuzzy sets based on characteristic objects method. Symmetry 2017, 9, 136.
  31. Faizi, S.; Rashid, T.; Salabun, W.; Zafar, S.; Watrbski, J. Decision making with uncertainty using hesitant fuzzy sets. Int. J. Fuzzy Syst. 2018, 20, 93–103.
  32. Mardani, A.; Nilashi, M.; Antucheviciene, J.; Tavana, M.; Bausys, R.; Ibrahim, O. Recent Fuzzy Generalisations of Rough Sets Theory: A Systematic Review and Methodological Critique of the Literature. Complexity 2017, 2017, 33.
  33. Liang, R.X.; Wang, J.Q.; Zhang, H.Y. A multi-criteria decision-making method based on single-valued trapezoidal neutrosophic preference relations with complete weight information. Neural Comput. Appl. 2017, 1–16.
  34. Liang, R.; Wang, J.; Zhang, H. Evaluation of e-commerce websites: An integrated approach under a single-valued trapezoidal neutrosophic environment. Knowl.-Based Syst. 2017, 135, 44–59.
  35. Peng, H.G.; Zhang, H.Y.; Wang, J.Q. Probability multi-valued neutrosophic sets and its application in multi-criteria group decision-making problems. Neural Comput. Appl. 2016, 1–21.
  36. Wang, L.; Zhang, H.Y.; Wang, J.Q. Frank Choquet Bonferroni mean operators of bipolar neutrosophic sets and their application to multi-criteria decision-making problems. Int. J. Fuzzy Syst. 2018, 20, 13–28.
  37. Zavadskas, E.K.; Bausys, R.; Kaklauskas, A.; Ubarte, I.; Kuzminske, A.; Gudiene, N. Sustainable market valuation of buildings by the single-valued neutrosophic MAMVA method. Appl. Soft Comput. 2017, 57, 74–87.
  38. Li, Y.; Liu, P.; Chen, Y. Some single valued neutrosophic number heronian mean operators and their application in multiple attribute group decision making. Informatica 2016, 27, 85–110.
Figure 1. Flow chart for selection of the most suitable objects.
Table 1. Crisp soft relation P.
P w 1 w 2 w 3 w 4 w 5
k 1 1 1 0 1 0
k 2 0 1 1 0 1
k 3 0 1 0 0 0
k 4 1 1 1 0 1
Table 2. Soft set ( P , M ) .
P u 1 u 2 u 3
k 1 1 1 0
k 2 0 0 1
k 3 1 1 1
Table 3. Soft relation R.
R u 1 u 2 u 2 u 3 u 2 u 2 u 3 u 2
k 1 k 3 1 1 1 0
k 2 k 1 0 0 0 1
k 3 k 2 0 1 0 0
Table 4. Neutrosophic soft relation P ˜ .
P ˜ w 1 w 2 w 3 w 4
k 1 ( 0.3 , 0.4 , 0.5 ) ( 0.4 , 0.2 , 0.3 ) ( 0.1 , 0.5 , 0.4 ) ( 0.2 , 0.3 , 0.4 )
k 2 ( 0.1 , 0.5 , 0.4 ) ( 0.3 , 0.4 , 0.6 ) ( 0.4 , 0.4 , 0.3 ) ( 0.5 , 0.3 , 0.8 )
k 3 ( 0.3 , 0.4 , 0.4 ) ( 0.4 , 0.6 , 0.7 ) ( 0.3 , 0.5 , 0.4 ) ( 0.5 , 0.4 , 0.6 )
Table 5. Neutrosophic soft set ( P ˜ , M ) .
P ˜ u 1 u 2 u 3
k 1 ( 0.4 , 0.5 , 0.6 ) ( 0.7 , 0.3 , 0.2 ) ( 0.6 , 0.3 , 0.4 )
k 2 ( 0.5 , 0.3 , 0.6 ) ( 0.3 , 0.4 , 0.3 ) ( 0.7 , 0.2 , 0.3 )
k 3 ( 0.7 , 0.2 , 0.3 ) ( 0.6 , 0.5 , 0.4 ) ( 0.7 , 0.2 , 0.4 )
Table 6. Neutrosophic soft relation R ˜ .
R ˜ u 1 u 2 u 2 u 3 u 2 u 2 u 3 u 2
k 1 k 3 ( 0.4 , 0.4 , 0.5 ) ( 0.6 , 0.3 , 0.4 ) ( 0.5 , 0.4 , 0.2 ) ( 0.5 , 0.4 , 0.3 )
k 2 k 1 ( 0.3 , 0.3 , 0.4 ) ( 0.3 , 0.2 , 0.3 ) ( 0.2 , 0.3 , 0.3 ) ( 0.7 , 0.2 , 0.2 )
k 3 k 2 ( 0.3 , 0.3 , 0.2 ) ( 0.5 , 0.3 , 0.2 ) ( 0.2 , 0.4 , 0.4 ) ( 0.3 , 0.4 , 0.4 )
Table 7. Neutrosophic soft set ( P ˜ , M ) .
P ˜ e 1 e 2 e 3
u 1 ( 0.4 , 0.5 , 0.6 ) ( 0.7 , 0.3 , 0.2 ) ( 0.6 , 0.3 , 0.4 )
u 2 ( 0.5 , 0.3 , 0.6 ) ( 0.3 , 0.4 , 0.3 ) ( 0.7 , 0.2 , 0.3 )
u 3 ( 0.7 , 0.2 , 0.3 ) ( 0.6 , 0.5 , 0.4 ) ( 0.7 , 0.2 , 0.4 )
u 4 ( 0.5 , 0.7 , 0.5 ) ( 0.8 , 0.4 , 0.6 ) ( 0.8 , 0.7 , 0.6 )
u 5 ( 0.6 , 0.5 , 0.4 ) ( 0.7 , 0.8 , 0.5 ) ( 0.7 , 0.3 , 0.5 )
