**Linear Diophantine Fuzzy Soft Rough Sets for the Selection of Sustainable Material Handling Equipment**

**Muhammad Riaz <sup>1</sup>, Masooma Raza Hashmi <sup>1,2</sup>, Humaira Kalsoom <sup>3</sup>, Dragan Pamucar <sup>4</sup> and Yu-Ming Chu <sup>5,6,∗</sup>**


Received: 25 June 2020; Accepted: 22 July 2020; Published: 24 July 2020

**Abstract:** The concept of linear Diophantine fuzzy sets (LDFSs) is a new approach for modeling uncertainties in decision analysis. Due to the addition of reference or control parameters to the membership and non-membership grades, an LDFS is more flexible and reliable than the existing concepts of intuitionistic fuzzy sets (IFSs), Pythagorean fuzzy sets (PFSs), and q-rung orthopair fuzzy sets (q-ROFSs). In this paper, the notions of linear Diophantine fuzzy soft rough sets (LDFSRSs) and soft rough linear Diophantine fuzzy sets (SRLDFSs) are proposed as new hybrid models of soft sets, rough sets, and LDFSs. The suggested models of LDFSRSs and SRLDFSs offer greater flexibility in describing fuzziness and roughness in terms of upper and lower approximation operators. Certain operations on LDFSRSs and SRLDFSs are established to support robust multi-criteria decision making (MCDM) for the selection of sustainable material handling equipment. For these objectives, algorithms are developed for ranking feasible alternatives and deriving an optimal decision. Meanwhile, the ideas of the upper reduct, lower reduct, and core set are defined as key factors in the proposed MCDM technique. An application of MCDM is illustrated by a numerical example, and the final ranking in the selection of sustainable material handling equipment is computed by the proposed algorithms. Finally, a comparative analysis is given to justify the feasibility, reliability, and superiority of the proposed models.

**Keywords:** linear Diophantine fuzzy set; linear Diophantine fuzzy soft rough set; soft rough linear Diophantine fuzzy set; upper reduct and lower reduct; core set; multi-criteria decision making

### **1. Introduction**

Multi-criteria decision making (MCDM) techniques have been rigorously investigated by researchers around the world. Due to uncertain and vague information, the complexity of human decision making has grown considerably in the present era. This pursuit gave rise to many resourceful techniques for dealing with real-world problems. The methodologies developed for this objective essentially rely on the description of the problem under contemplation. The problem of imperfect, uncertain, and vague information has been the focus of many researchers in the last few decades.

Zadeh (1965) developed the notion of fuzzy sets, fuzzy numbers, and linguistic variables to describe hidden uncertain information in objects by using membership grades. Researchers found that membership/affiliation grades alone are not enough to express some real-life situations, such as benefit and loss claims, positive results and side effects of drugs, inferiority and superiority, perfection and imperfection, and affiliation and non-affiliation. In order to cope with these challenges, Atanassov (1983) proposed the idea of intuitionistic fuzzy sets (IFSs), which include a satisfaction or membership grade (MG) and a dissatisfaction or non-membership grade (NMG). Yager (2014, 2017) extended IFSs to Pythagorean fuzzy sets (PFSs) and q-rung orthopair fuzzy sets (q-ROFSs).

In the case of inadequate information data, the vagueness caused by indiscernibility can be handled by rough set techniques. Rough set theory is an independent generalization of crisp set theory, first originated by Pawlak (1982). It acts as a tool for investigating and implementing solutions to various decision making difficulties found in computer intelligence, image processing, data analysis, medical sciences, and many other fields. It copes with vagueness by approximating a collection through upper and lower approximation operators built from an equivalence relation. The above-listed theories do not deal with the parameterization of the input information set. For this purpose, Molodtsov (1999) proposed soft set theory to deal with uncertainties in a parametric manner.

Sustainability is the ability to exist constantly. The main components of sustainability are society, the economy, and the environment. The equilibrium of local and global efforts for sustainability is necessary to meet elementary human needs without destroying the environment. The ability to finance all capital projects is essential for the sustainability of the economy. The environmental concerns while retaining sustainable growth for the environment are becoming increasingly relevant to decision analysis around the world. Sustainability is regarded as a task, process, activity, and exercise through which humankind avoids the destruction of natural resources. The selection of sustainable material handling equipment is essential for the development of infrastructure.

Multi-criteria decision making (MCDM) is a branch of operations research that explicitly evaluates multiple conflicting criteria in decision making. The purpose of MCDM is to support decision makers (DMs) facing problems in ranking feasible alternatives/objects. There are different types of criteria weights for the alternatives/objects. Subjective criteria weights depend on the DM and can change if another DM computes them. Objective weights, by contrast, are derived from the structure of the decision data itself rather than from the DM's judgment. For determining subjective and objective weights, there are different fuzzy and crisp methods like SWARA (step-wise weight assessment ratio analysis), WASPAS (weighted aggregated sum product assessment), ARAS (additive ratio assessment), AHP (analytic hierarchy process), PIPRECIA (pivot pairwise relative criteria importance assessment), and CRITIC (criteria importance through inter-criteria correlation). Some integrated MCDM methods studied by researchers are TOPSIS (technique for the order preference by similarity to ideal solution), VIKOR (vlse kriterijumska optimizacija kompromisno resenje), PROMETHEE (preference ranking organization method for enrichment evaluations), COPRAS (complex proportional assessment), MOORA (multi-objective optimization by ratio analysis), GRA (grey relational analysis), ANP (analytic network process), BWM (best worst method), and aggregation operators.

### *1.1. Literature Review*

Bellman and Zadeh [1] proposed the first MCDM technique based on fuzzy sets in 1970. Akram et al. [2] introduced m-polar fuzzy soft rough sets and presented their applications in multi-attribute decision making (MADM) problems. Ali et al. [3] established certain properties of rough sets, soft sets, and fuzzy soft sets. Chen and Tan [4] established the concept of the score function, which had been presented earlier by Tversky and Kahneman [5]. Feng et al. [6–9] proposed the idea of soft rough sets. Garg [10] investigated Einstein operators and established Pythagorean operators to solve decision making problems. Hashmi et al. [11] invented the hybrid structure of the m-polar neutrosophic set (MPNS) as a generalization of the bipolar neutrosophic set by combining m-polar fuzzy sets (MPFSs) and neutrosophic sets. They developed innovative algorithms to deal with difficulties in the medical sciences and for the clustering of information data. Hashmi and Riaz [12] introduced Pythagorean m-polar fuzzy Dombi operators and proposed a novel technique for the census process.

Jose and Kuriaskose [13] proposed the MCDM model for intuitionistic fuzzy numbers (IFNs) by using operators. Naeem et al. [14] introduced multi-criteria group decision making (MCGDM) methods based on TOPSIS and VIKOR using Pythagorean fuzzy soft sets. Pawlak and Skowron [15] presented certain extensions of rough sets.

Riaz and Hashmi [16,17] introduced the notions of cubic m-polar fuzzy sets, soft rough Pythagorean *m*-polar fuzzy sets, and Pythagorean *m*-polar fuzzy soft rough sets, with applications to decision making problems. Riaz et al. [18,19] introduced soft rough topology along with its applications to group decision making. Riaz and Tehrim [20–22] introduced the notions of bipolar fuzzy soft topology, cubic bipolar fuzzy sets, and related operators, and solved new and challenging decision making applications by using diverse algorithms. Roy et al. [23] introduced a rough strength relational decision making trial and evaluation laboratory (DEMATEL) model for analyzing the key success factors of hospital service quality. Sharma et al. [24] presented an application of rough set theory in forecasting models.

Wei et al. [25] proposed aggregation operators based on hesitant triangular fuzzy information to solve MADM problems. Zhang et al. [26] established the concept of intuitionistic fuzzy soft rough sets and presented their applications.

Zhao et al. [27] discovered novel algorithms based on generalized intuitionistic fuzzy aggregation operators. Xu and Chen [28] studied distance and similarity measures on IFSs. Kulak et al. [29] and Karande et al. [30] developed techniques for the selection of material handling equipment using the information axiom and weighted utility additive theory. Zubair et al. [31] presented the optimization of a material handling system. Vashist [32] presented an algorithm for finding the reduct and core of an information dataset. Zhang et al. [33–35] developed different covering-based rough sets, fuzzy rough sets, and intuitionistic fuzzy rough sets with their applications to MADM problems. Wang and Triantaphyllou [36] identified ranking irregularities when evaluating alternatives using certain ELECTRE (elimination et choix traduisant la realite) methods. In order to evaluate green suppliers, Büyüközkan and Çifçi [37] presented a novel hybrid MCDM approach based on fuzzy DEMATEL, fuzzy ANP, and fuzzy TOPSIS.

Govindan et al. [38] developed an experience-based DEMATEL approach to establish sustainability strategies and efficiency in a green supply chain. Jeong and González-Gómez [39] built a flipped e-learning system adapting to the pedagogical changes in sustainable mathematics education for pre-service teachers (PSTs), rating the requirements with MCDA/F-DEMATEL. Under a q-rung orthopair fuzzy set, Wang and Li [40] established a novel approach for green supplier selection. Xu et al. [41] presented some q-rung dual hesitant Heronian mean operators with their application to multiple-attribute group decision making. Soft rough fuzzy sets were developed by Sun and Ma [42] with their applications in strategic decision making. Meng et al. [43] introduced the structures of soft rough fuzzy sets and soft fuzzy rough sets, together with various results and illustrations. Hussain et al. [44] invented Pythagorean fuzzy soft rough set models and presented their applications in decision making. Zadeh [45] introduced the concept of a linguistic variable and its application to approximate reasoning.

### *1.2. Motivation and Objectives*

A q-ROFS is a generalization of both IFSs and PFSs. The main feature of a q-ROFS is that the uncertain space for MG and NMG is broader. Each IFS is a PFS, and each PFS is a q-ROFS, but not conversely. A q-ROFS is more powerful in widening the admissible range of MG and NMG. However, there are some situations with uncertain information that these theories are still unable to handle. In order to relax the existing constraints on MG and NMG, Riaz and Hashmi (2019) introduced the innovative idea of linear Diophantine fuzzy sets (LDFSs). The use of reference or control parameters in an LDFS gives DMs freedom in choosing MG and NMG. Moreover, IFSs, PFSs, and q-ROFSs can be considered as specific cases of LDFSs with some limitations (see Figure 1). A semantic comparison of the suggested technique with some existing structures is given in Table 1.

**Figure 1.** Graphical comparison among IFNs, PFNs, q-ROFNs, and LDFNs.

The goal of this paper is to develop strong models for MCDM that have fewer limitations than other models. Table 1 shows the advantages and drawbacks of some set-theoretical models. The notions of linear Diophantine fuzzy soft rough sets (LDFSRSs) and soft rough linear Diophantine fuzzy sets (SRLDFSs) are established as new hybrid models of soft sets, rough sets, and LDFSs. The suggested models of LDFSRSs and SRLDFSs offer greater flexibility in describing fuzziness and roughness in terms of upper and lower approximation operators. Certain operations on LDFSRSs and SRLDFSs are established to support robust multi-criteria decision making (MCDM) for the selection of sustainable material handling equipment. We present four new algorithms based on LDFSs, crisp soft approximation spaces, core sets, and reducts.

The organization of this article is as follows. Section 2 reviews certain fundamental notions of fuzzy sets, IFSs, PFSs, q-ROFSs, and LDFSs. We investigate some operations and score functions of LDFSs. In Section 3, we introduce the notions of LDFSRSs and SRLDFSs by applying the LDFS approximation space and the crisp soft approximation space. We establish multiple results on the proposed structures with the help of illustrations. In Section 4, we present four novel algorithms to solve the material handling equipment selection problem. These algorithms are based on the approximation spaces, score functions, upper and lower reducts, and the core set. We examine and compare our suggested structures and their results with certain existing notions. Section 5 provides the conclusion of this manuscript.

*Symmetry* **2020**, *12*, 1215


**Table 1.** Semantic comparison of the suggested technique with some existing models.

### **2. Some Basic Concepts**

First, we recall some fundamental ideas of LDFSs, rough sets, soft sets, and soft rough sets.

**Definition 1** ([54])**.** *A linear Diophantine fuzzy set* $\mathcal{D}$ *in* $\ddot{\mathcal{Q}}$ *is defined as:*

$$\mathcal{D} = \left\{ \left( \mathcal{G}, \langle \ddot{\mathcal{T}}_{\mathcal{D}}(\mathcal{G}), \ddot{\mathcal{S}}_{\mathcal{D}}(\mathcal{G}) \rangle, \langle \alpha_{\mathcal{D}}(\mathcal{G}), \beta_{\mathcal{D}}(\mathcal{G}) \rangle \right) : \mathcal{G} \in \ddot{\mathcal{Q}} \right\}$$

*where* $\ddot{\mathcal{T}}_{\mathcal{D}}(\mathcal{G}), \ddot{\mathcal{S}}_{\mathcal{D}}(\mathcal{G}), \alpha_{\mathcal{D}}(\mathcal{G}), \beta_{\mathcal{D}}(\mathcal{G}) \in [0, 1]$ *are the satisfaction grade, the dissatisfaction grade, and the corresponding reference parameters, respectively. Moreover, it is required that:*

$$0 \le \alpha_{\mathcal{D}}(\mathcal{G}) + \beta_{\mathcal{D}}(\mathcal{G}) \le 1$$

*and:*

$$0 \le \alpha_{\mathcal{D}}(\mathcal{G})\,\ddot{\mathcal{T}}_{\mathcal{D}}(\mathcal{G}) + \beta_{\mathcal{D}}(\mathcal{G})\,\ddot{\mathcal{S}}_{\mathcal{D}}(\mathcal{G}) \le 1$$

*for all* $\mathcal{G} \in \ddot{\mathcal{Q}}$*. The LDFS:*

$$\mathcal{D}_{\ddot{\mathcal{Q}}} = \{ (\mathcal{G}, \langle 1, 0 \rangle, \langle 1, 0 \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$$

*is called the absolute LDFS in* $\ddot{\mathcal{Q}}$*. The LDFS:*

$$\mathcal{D}_{\Phi} = \{ (\mathcal{G}, \langle 0, 1 \rangle, \langle 0, 1 \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$$

*is called the null LDFS in* $\ddot{\mathcal{Q}}$*.*

The reference parameters are useful for describing objective weights for each pair of MG and NMG. These parameters can be used for multiple objectives to express the physical interpretation of a dynamical system. In addition, $\gamma_{\mathcal{D}}(\mathcal{G})\,\dot{\pi}_{\mathcal{D}}(\mathcal{G}) = 1 - (\alpha_{\mathcal{D}}(\mathcal{G})\,\ddot{\mathcal{T}}_{\mathcal{D}}(\mathcal{G}) + \beta_{\mathcal{D}}(\mathcal{G})\,\ddot{\mathcal{S}}_{\mathcal{D}}(\mathcal{G}))$, where $\dot{\pi}_{\mathcal{D}}(\mathcal{G})$ is called the indeterminacy degree of $\mathcal{G}$ to $\mathcal{D}$ and $\gamma_{\mathcal{D}}(\mathcal{G})$ is the reference parameter related to the indeterminacy part. It can be seen that the tuples $(\langle \ddot{\mathcal{T}}_{\mathcal{D}}(\mathcal{G}), \ddot{\mathcal{S}}_{\mathcal{D}}(\mathcal{G}) \rangle, \langle \alpha_{\mathcal{D}}(\mathcal{G}), \beta_{\mathcal{D}}(\mathcal{G}) \rangle)$ with $\mathcal{G} \in \ddot{\mathcal{Q}}$ are crucial for specifying the LDFS $\mathcal{D}$. Due to this fact, we introduce the new notion of the linear Diophantine fuzzy number (LDFN), denoted as $\ddot{\mathcal{A}}_{\mathcal{D}} = (\langle \dot{t}_{\mathcal{D}}, \dot{f}_{\mathcal{D}} \rangle, \langle \alpha_{\mathcal{D}}, \beta_{\mathcal{D}} \rangle)$ and satisfying all the constraints listed above for LDFSs. The collection of all LDFSs in $\ddot{\mathcal{Q}}$ is denoted as $\mathfrak{D}(\ddot{\mathcal{Q}})$.
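The two LDFS constraints above can be checked mechanically. The following is a minimal sketch; the `LDFN` class and its field names are our own illustration, not notation from the paper:

```python
from dataclasses import dataclass

@dataclass
class LDFN:
    """Linear Diophantine fuzzy number: <t, f> grades with <alpha, beta> reference parameters."""
    t: float      # satisfaction (membership) grade
    f: float      # dissatisfaction (non-membership) grade
    alpha: float  # reference parameter attached to t
    beta: float   # reference parameter attached to f

    def is_valid(self) -> bool:
        # All four values lie in [0, 1], the reference parameters sum to at most 1,
        # and the weighted sum alpha*t + beta*f stays within [0, 1].
        in_unit = all(0.0 <= v <= 1.0 for v in (self.t, self.f, self.alpha, self.beta))
        return in_unit and self.alpha + self.beta <= 1.0 \
            and self.alpha * self.t + self.beta * self.f <= 1.0

    def indeterminacy(self) -> float:
        # gamma * pi = 1 - (alpha*t + beta*f): the combined indeterminacy term.
        return 1.0 - (self.alpha * self.t + self.beta * self.f)

# Unlike IFSs/PFSs/q-ROFSs, full grades t = f = 1 are admissible
# as long as the reference parameters balance them:
print(LDFN(1.0, 1.0, 0.6, 0.4).is_valid())   # True
print(LDFN(0.5, 0.5, 0.7, 0.6).is_valid())   # False: alpha + beta > 1
```

This illustrates the point of Theorem 1 below: the grade pair itself is unconstrained in $[0,1]^2$, with the reference parameters carrying the normalization.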

**Example 1** (Combination of drugs in medicine for better treatment)**.** *Medicines are chemicals or compounds used to cure, halt, or prevent disease, ease symptoms, or help in the diagnosis of illnesses. Advances in medicines have enabled doctors to cure many diseases and save lives. A combination drug or a fixed-dose combination (FDC) is a medicine that includes two or more active ingredients combined in a single dosage form. For example, aspirin/paracetamol/caffeine is a combination drug for the treatment of pain, especially tension headaches and migraines. Let* $\ddot{\mathcal{Q}} = \{\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3, \mathcal{G}_4, \mathcal{G}_5\}$ *be the collection of some life-saving drugs. In order to gain a high impact of the medicine, two or more drugs can be combined in its preparation. If the reference or control parameters are considered as:*

*α* = *excellent impact against infection produced during surgeries*

*β* = *no high impact against infection produced during surgeries*

*then its LDFS is given in Table 2.*


**Table 2.** LDFS for medication.

*According to the quality, variety, and severity of the disease, a physician provides medicine to the subject. The information data can be classified using control parameters. These parameters represent how much that portion is necessary for the treatment, and their grade values describe how much that factor is present in that medicine. If we change the parameter as:*

> *α* = *"Excellent impact against ear infection", β* = *"Not highly effective for ear infection"*, OR *α* = *"Fewer side effects", β* = *"More side effects"*,

*then we can establish various LDFSs that are suitable in other situations. This model helps a pharmacist/doctor/consultant prescribe the most reliable and suitable medicine for the patient's disease. Moreover, the reference or control parameters can be adapted to various alternative objectives in medicine.*

**Theorem 1** ([54])**.** *LDFSs have a larger valuation space than IFSs and PFSs.*

**Definition 2** ([54])**.** *Let* $\ddot{\mathcal{A}}_{\mathcal{D}} = (\langle \dot{t}_{\mathcal{D}}, \dot{f}_{\mathcal{D}} \rangle, \langle \alpha_{\mathcal{D}}, \beta_{\mathcal{D}} \rangle)$ *be an LDFN and* $X > 0$*. Then:*


**Definition 3** ([54])**.** *Let* $\ddot{\mathcal{A}}_{\mathcal{D}_i} = (\langle \dot{t}_{\mathcal{D}_i}, \dot{f}_{\mathcal{D}_i} \rangle, \langle \alpha_{\mathcal{D}_i}, \beta_{\mathcal{D}_i} \rangle)$ *be two LDFNs with* $i = 1, 2$*. Then:*


**Definition 4** ([54])**.** *Let* $\ddot{\mathcal{A}}_{\mathcal{D}_i} = (\langle \dot{t}_{\mathcal{D}_i}, \dot{f}_{\mathcal{D}_i} \rangle, \langle \alpha_{\mathcal{D}_i}, \beta_{\mathcal{D}_i} \rangle)$ *be a collection of LDFNs with* $i \in \Delta$*. Then:*

$$\bullet \quad \bigcup_{i \in \Delta} \ddot{\mathcal{A}}_{\mathcal{D}_i} = \left( \langle \sup_{i \in \Delta} \dot{t}_{\mathcal{D}_i}, \inf_{i \in \Delta} \dot{f}_{\mathcal{D}_i} \rangle, \langle \sup_{i \in \Delta} \alpha_{\mathcal{D}_i}, \inf_{i \in \Delta} \beta_{\mathcal{D}_i} \rangle \right);$$

$$\bullet \quad \bigcap_{i \in \Delta} \ddot{\mathcal{A}}_{\mathcal{D}_i} = \left( \langle \inf_{i \in \Delta} \dot{t}_{\mathcal{D}_i}, \sup_{i \in \Delta} \dot{f}_{\mathcal{D}_i} \rangle, \langle \inf_{i \in \Delta} \alpha_{\mathcal{D}_i}, \sup_{i \in \Delta} \beta_{\mathcal{D}_i} \rangle \right)$$
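The sup/inf operations above can be sketched for finite families of LDFNs represented as plain `(t, f, alpha, beta)` tuples (a hypothetical encoding, not from the paper):

```python
def ldfn_union(numbers):
    """Union of finitely many LDFNs: sup of t and alpha, inf of f and beta."""
    ts, fs, als, bes = zip(*numbers)
    return (max(ts), min(fs), max(als), min(bes))

def ldfn_intersection(numbers):
    """Intersection: inf of t and alpha, sup of f and beta (the dual of union)."""
    ts, fs, als, bes = zip(*numbers)
    return (min(ts), max(fs), min(als), max(bes))

a = (0.8, 0.3, 0.5, 0.4)
b = (0.6, 0.2, 0.7, 0.1)
print(ldfn_union([a, b]))         # (0.8, 0.2, 0.7, 0.1)
print(ldfn_intersection([a, b]))  # (0.6, 0.3, 0.5, 0.4)
```

Note that union keeps each result a valid LDFN: sup of the $\alpha$'s is paired with inf of the $\beta$'s, so $\alpha + \beta \le 1$ is preserved.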

**Definition 5** ([56])**.** *For the non-empty collection of alternatives* $\ddot{\mathcal{Q}}$ *and the collection of attributes* $\dot{\mathcal{G}}$*, a soft set is given by the mapping* $\dot{\Omega} : \dot{\mathcal{G}} \to \mathcal{P}(\ddot{\mathcal{Q}})$*. Alternatively, it can be represented as:*

$$(\dot{\Omega}, \dot{\mathcal{G}}) = \{ (\dot{\wp}, \dot{\Omega}(\dot{\wp})) : \dot{\Omega}(\dot{\wp}) \in \mathcal{P}(\ddot{\mathcal{Q}}), \dot{\wp} \in \dot{\mathcal{G}} \}$$

*where* $\mathcal{P}(\ddot{\mathcal{Q}})$ *denotes the collection of all subsets of* $\ddot{\mathcal{Q}}$*.*

**Definition 6** ([55])**.** *Suppose the indiscernibility relation on* $\ddot{\mathcal{Q}}$ *is denoted as* $\mathcal{R}$*; we assume that* $\mathcal{R}$ *is an equivalence relation. Moreover,* $Neg_{\mathcal{R}}\mathcal{K} = \ddot{\mathcal{Q}} - \mathcal{R}^{\ast}(\mathcal{K})$*,* $Pos_{\mathcal{R}}\mathcal{K} = \mathcal{R}_{\ast}(\mathcal{K})$*, and* $Bnd_{\mathcal{R}}\mathcal{K} = \mathcal{R}^{\ast}(\mathcal{K}) - \mathcal{R}_{\ast}(\mathcal{K})$ *are said to be the negative, positive, and boundary regions of* $\mathcal{K} \subseteq \ddot{\mathcal{Q}}$*. The characteristics of these regions are given as follows:*


*The equivalence class of object* $\mathcal{G}$ *under the relation* $\mathcal{R}$ *is represented as* $[\mathcal{G}]_{\mathcal{R}}$*. The pair* $(\ddot{\mathcal{Q}}, \mathcal{R})$ *is said to be a "Pawlak approximation space", and* $\mathcal{R}$ *generates the partition* $\ddot{\mathcal{Q}}/\mathcal{R} = \{[\mathcal{G}]_{\mathcal{R}} : \mathcal{G} \in \ddot{\mathcal{Q}}\}$*. Then, the pair* $(\mathcal{R}_{\ast}(\mathcal{K}), \mathcal{R}^{\ast}(\mathcal{K}))$ *is called the rough set of the crisp set* $\mathcal{K}$*, where:*

$$\mathcal{R}\_{\ast}(\mathcal{K}) = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : [\mathcal{G}]\_{\mathcal{R}} \subseteq \mathcal{K} \}$$

$$\mathcal{R}^{\ast}(\mathcal{K}) = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : [\mathcal{G}]\_{\mathcal{R}} \cap \mathcal{K} \neq \emptyset \}$$

*are called the "lower and upper approximations" of* $\mathcal{K}$ *with respect to* $(\ddot{\mathcal{Q}}, \mathcal{R})$*. If* $\mathcal{R}_{\ast}(\mathcal{K}) = \mathcal{R}^{\ast}(\mathcal{K})$*, then* $\mathcal{K}$ *is said to be definable; otherwise, it is called a rough set.*
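As a concrete sketch of Pawlak's lower and upper approximations (the universe and partition below are made up for illustration):

```python
def pawlak_approximations(universe, classes, K):
    """Lower/upper approximations of K, given the partition of the universe
    into equivalence classes under the indiscernibility relation R."""
    K = set(K)
    # Lower: objects whose whole equivalence class lies inside K.
    lower = {g for g in universe for c in classes if g in c and c <= K}
    # Upper: objects whose equivalence class meets K.
    upper = {g for g in universe for c in classes if g in c and c & K}
    return lower, upper

Q = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4}, {5, 6}]   # the partition Q/R
K = {1, 2, 3}
lower, upper = pawlak_approximations(Q, classes, K)
print(lower)   # classes fully inside K -> {1, 2}
print(upper)   # classes meeting K      -> {1, 2, 3, 4}
# The boundary region {3, 4} is non-empty, so K is rough (not definable).
```

The negative, positive, and boundary regions of Definition 6 then follow as `Q - upper`, `lower`, and `upper - lower`, respectively.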

**Remark 1.** *The concepts of the core and reduct in rough set theory are significant tools in decision making methods. A reduct can be deduced from the reference set* $\ddot{\mathcal{Q}}$*; it is used to remove unimportant information from the input data. The core is the intersection of all reducts and provides the final optimal decision for the decision making problem (see [3,32]).*

**Definition 7** ([26])**.** *For a non-empty collection of alternatives* $\ddot{\mathcal{Q}}$ *and the collection of attributes* $\dot{\mathcal{G}}$*, a crisp soft relation* $\mathcal{R} \subseteq \ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$ *is written as:*

$$\mathcal{R} = \{ \langle (\mathcal{G}, \dot{\wp}), \psi_{\mathcal{R}}(\mathcal{G}, \dot{\wp}) \rangle : (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} \}$$

*where* $\psi_{\mathcal{R}} : \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} \to \{0, 1\}$ *and:*

$$\psi\_{\mathcal{R}}(\mathcal{G}, \dot{\varphi}) = \begin{cases} 1 & \text{if } (\mathcal{G}, \dot{\varphi}) \in \mathcal{R} \\ 0 & \text{otherwise} \end{cases}$$

**Definition 8** ([26])**.** *For a non-empty collection of alternatives* $\ddot{\mathcal{Q}}$ *and the collection of attributes* $\dot{\mathcal{G}}$*, we have a crisp soft relation* $\tilde{\mathcal{A}} \subseteq \ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$*. A mapping* $\tilde{\mathcal{A}}_{s} : \ddot{\mathcal{Q}} \to \mathcal{P}(\dot{\mathcal{G}})$ *is written as:*

$$\tilde{\mathcal{A}}_{s}(\mathcal{G}) = \{ \dot{\wp} \in \dot{\mathcal{G}} : (\mathcal{G}, \dot{\wp}) \in \tilde{\mathcal{A}} \}, \quad \mathcal{G} \in \ddot{\mathcal{Q}}$$

$\tilde{\mathcal{A}}$ *is called serial if* $\tilde{\mathcal{A}}_{s}(\mathcal{G}) \neq \emptyset$ *for all* $\mathcal{G} \in \ddot{\mathcal{Q}}$*. The triplet* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\mathcal{A}})$ *is called a "crisp soft approximation space". For arbitrary* $\mathcal{H} \subseteq \dot{\mathcal{G}}$*,* $\tilde{\mathcal{A}}_{\ast}(\mathcal{H})$ *and* $\tilde{\mathcal{A}}^{\ast}(\mathcal{H})$ *are called the "lower and upper approximations", respectively, defined as:*

$$\tilde{\mathcal{A}}_{\ast}(\mathcal{H}) = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \tilde{\mathcal{A}}_{s}(\mathcal{G}) \subseteq \mathcal{H} \}$$

$$\tilde{\mathcal{A}}^{\ast}(\mathcal{H}) = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \tilde{\mathcal{A}}_{s}(\mathcal{G}) \cap \mathcal{H} \neq \emptyset \}$$

*The pair* $(\tilde{\mathcal{A}}_{\ast}(\mathcal{H}), \tilde{\mathcal{A}}^{\ast}(\mathcal{H}))$ *is called the crisp soft rough set, and* $\tilde{\mathcal{A}}_{\ast}, \tilde{\mathcal{A}}^{\ast} : \mathcal{P}(\dot{\mathcal{G}}) \to \mathcal{P}(\ddot{\mathcal{Q}})$ *are called the "lower and upper approximation operators".* $\mathcal{P}(\dot{\mathcal{G}})$ *and* $\mathcal{P}(\ddot{\mathcal{Q}})$ *denote the collections of all subsets of* $\dot{\mathcal{G}}$ *and* $\ddot{\mathcal{Q}}$*, respectively. If* $\tilde{\mathcal{A}}_{\ast}(\mathcal{H}) = \tilde{\mathcal{A}}^{\ast}(\mathcal{H})$*, then* $\mathcal{H}$ *is called definable.*
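Definition 8 can be sketched directly: build the neighborhoods $\tilde{\mathcal{A}}_s$ from the crisp soft relation, then collect the alternatives whose neighborhood is contained in (lower) or meets (upper) the target attribute set. The universe, attributes, and relation below are illustrative, not taken from the paper:

```python
def soft_rough(universe, relation, H):
    """Soft rough lower/upper approximations over the crisp soft
    approximation space (Q, G, A), with A given as a set of (g, p) pairs."""
    H = set(H)
    # Successor neighborhoods A_s(g) = { p : (g, p) in A }.
    A_s = {g: {p for (x, p) in relation if x == g} for g in universe}
    lower = {g for g in universe if A_s[g] <= H}   # A_s(g) contained in H
    upper = {g for g in universe if A_s[g] & H}    # A_s(g) meets H
    return lower, upper

Q = {"g1", "g2", "g3"}
A = {("g1", "p1"), ("g2", "p1"), ("g2", "p2"), ("g3", "p2")}
lower, upper = soft_rough(Q, A, {"p1"})
print(lower)   # only g1: its whole neighborhood {p1} lies in H
print(upper)   # g1 and g2: their neighborhoods meet {p1}
```

Since the neighborhoods need not partition the attribute set, the pair generally differs from Pawlak's construction even on the same data.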

### **3. Construction of SRLDFSs and LDFSRSs**

In this part, we organize the innovative hybrid structures of soft rough linear Diophantine fuzzy sets (SRLDFSs) and linear Diophantine fuzzy soft rough sets (LDFSRSs) by merging the fundamental compositions of LDFSs, soft sets, and rough sets. In decision making problems, we deal with ambiguity and vagueness in the initial input information, and such inputs cannot be managed by simplistic models. In fuzzy sets, IFSs, PFSs, and q-ROFSs, the choice of the satisfaction and dissatisfaction degrees is restricted by the constraints $0 \le \ddot{\mathcal{T}} \le 1$, $0 \le \ddot{\mathcal{T}} + \ddot{\mathcal{S}} \le 1$, $0 \le \ddot{\mathcal{T}}^2 + \ddot{\mathcal{S}}^2 \le 1$, and $0 \le \ddot{\mathcal{T}}^q + \ddot{\mathcal{S}}^q \le 1$, respectively. In an LDFS, however, we can freely choose the degrees from $[0, 1]$ thanks to the reference or control parameters; yet an LDFS does not deal with vagueness or roughness. Likewise, we cannot handle uncertainties and parameterizations if we deal only with the roughness of a set, and the soft set only handles parameterizations. Therefore, to eliminate these ambiguities and to fill this research gap, we assemble SRLDFSs and LDFSRSs. These models handle the fuzzy degrees, parameterizations, and roughness of the data in decision making problems. The significance of these generalized notions can be examined throughout the article. Table 3 describes the notations used in the whole manuscript.


**Table 3.** Description of the notations used in the whole manuscript.

### *3.1. Soft Rough Linear Diophantine Fuzzy Sets*

**Definition 9.** *For the reference set* $\ddot{\mathcal{Q}}$ *and the set of decision variables* $\dot{\mathcal{G}}$*, if we define a crisp soft relation* $\tilde{\mathcal{A}}$ *over* $\ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$*, then* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\mathcal{A}})$ *is called a "crisp soft approximation space". If* $\mathcal{Y}_{\mathcal{D}} \in \mathfrak{D}(\dot{\mathcal{G}})$*, then* $\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})$ *and* $\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})$ *are called the "upper and lower approximations" of* $\mathcal{Y}_{\mathcal{D}}$ *about* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\mathcal{A}})$*, respectively, written as:*

$$\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}}) = \{ (\mathcal{G}, \langle \ddot{\mathcal{T}}_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}), \ddot{\mathcal{S}}_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \rangle, \langle \alpha_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}), \beta_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$$

$$\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}}) = \{ (\mathcal{G}, \langle \ddot{\mathcal{T}}_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}), \ddot{\mathcal{S}}_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \rangle, \langle \alpha_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}), \beta_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$$

*where the degrees can be calculated as given in Table 4.*


**Table 4.** Formulation of SRLDFSs.

*The notions given in Table 4 satisfy the following constraints:*

$$0 \le \alpha_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G})\,\ddot{\mathcal{T}}_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) + \beta_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G})\,\ddot{\mathcal{S}}_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \le 1$$

$$0 \le \alpha_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G})\,\ddot{\mathcal{T}}_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) + \beta_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G})\,\ddot{\mathcal{S}}_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \le 1$$

$$0 \le \alpha_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) + \beta_{\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \le 1 \quad \text{and}$$

$$0 \le \alpha_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) + \beta_{\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})}(\mathcal{G}) \le 1$$

$\mathfrak{D}(\dot{\mathcal{G}})$ *is the assembly of LDFSs over* $\dot{\mathcal{G}}$*, while* $\tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})$ *and* $\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}})$ *are LDFSs over* $\ddot{\mathcal{Q}}$*. Thus, the pair* $(\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}}), \tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}}))$ *is called the soft rough linear Diophantine fuzzy set (SRLDFS) about* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\mathcal{A}})$*, and* $\tilde{\mathcal{A}}_{\ast}, \tilde{\mathcal{A}}^{\ast} : \mathfrak{D}(\dot{\mathcal{G}}) \to \mathfrak{D}(\ddot{\mathcal{Q}})$ *are called the upper and lower SRLDF approximation operators. If* $\tilde{\mathcal{A}}_{\ast}(\mathcal{Y}_{\mathcal{D}}) = \tilde{\mathcal{A}}^{\ast}(\mathcal{Y}_{\mathcal{D}})$*, then* $\mathcal{Y}_{\mathcal{D}}$ *is called definable.*

**Example 2.** *We consider the collection of well known cars given as* $\ddot{\mathcal{Q}} = \{\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3, \mathcal{G}_4\}$ *and the assembly of suitable attributes* $\dot{\mathcal{G}} = \{\dot{\wp}_1, \dot{\wp}_2, \dot{\wp}_3, \dot{\wp}_4\}$*. The attributes are "comfortable and reliable", "good safety", "good maintenance", and "affordable". Let* $(\eta, \dot{\mathcal{G}})$ *be the soft set in* $\ddot{\mathcal{Q}}$ *given as:*

$$\begin{aligned} \eta(\dot{\wp}\_1) &= \{\mathcal{G}\_1, \mathcal{G}\_2, \mathcal{G}\_3\}, & \eta(\dot{\wp}\_2) &= \{\mathcal{G}\_2, \mathcal{G}\_4\} \\ \eta(\dot{\wp}\_3) &= \{\mathcal{G}\_1, \mathcal{G}\_2, \mathcal{G}\_3, \mathcal{G}\_4\}, & \eta(\dot{\wp}\_4) &= \{\mathcal{G}\_1, \mathcal{G}\_4\} \end{aligned}$$

*A crisp soft relation over* $\ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$ *is given as:*

$$\tilde{\mathcal{A}} = \{ (\mathcal{G}_1, \dot{\wp}_1), (\mathcal{G}_2, \dot{\wp}_1), (\mathcal{G}_3, \dot{\wp}_1), (\mathcal{G}_2, \dot{\wp}_2), (\mathcal{G}_4, \dot{\wp}_2), (\mathcal{G}_1, \dot{\wp}_3), (\mathcal{G}_2, \dot{\wp}_3), (\mathcal{G}_3, \dot{\wp}_3), (\mathcal{G}_4, \dot{\wp}_3), (\mathcal{G}_1, \dot{\wp}_4), (\mathcal{G}_4, \dot{\wp}_4) \}$$

*By definition, we have:*

$$\begin{aligned} \tilde{\mathcal{A}}_{s}(\mathcal{G}_{1}) &= \{ \dot{\wp}_{1}, \dot{\wp}_{3}, \dot{\wp}_{4} \} \\ \tilde{\mathcal{A}}_{s}(\mathcal{G}_{2}) &= \{ \dot{\wp}_{1}, \dot{\wp}_{2}, \dot{\wp}_{3} \} \\ \tilde{\mathcal{A}}_{s}(\mathcal{G}_{3}) &= \{ \dot{\wp}_{1}, \dot{\wp}_{3} \} \\ \tilde{\mathcal{A}}_{s}(\mathcal{G}_{4}) &= \{ \dot{\wp}_{2}, \dot{\wp}_{3}, \dot{\wp}_{4} \} \end{aligned}$$

*We consider the LDFS* $\mathcal{Y}_{\mathcal{D}} \in \mathfrak{D}(\dot{\mathcal{G}})$ *given as:*

$$\begin{aligned} \mathcal{Y}_{\mathcal{D}} = \{ & (\dot{\wp}_1, \langle 0.786, 0.765 \rangle, \langle 0.234, 0.123 \rangle), (\dot{\wp}_2, \langle 0.987, 0.574 \rangle, \langle 0.232, 0.423 \rangle), \\ & (\dot{\wp}_3, \langle 0.912, 0.536 \rangle, \langle 0.235, 0.635 \rangle), (\dot{\wp}_4, \langle 0.726, 0.825 \rangle, \langle 0.765, 0.122 \rangle) \} \end{aligned}$$

*The "upper and lower approximations" can be computed by using Definition 9. The upper approximations are given as:*

$$\begin{aligned} \ddot{T}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.912, & \ddot{S}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.536, & \alpha_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.765, & \beta_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.122 \\ \ddot{T}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.987, & \ddot{S}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.536, & \alpha_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.235, & \beta_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.123 \\ \ddot{T}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.912, & \ddot{S}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.536, & \alpha_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.235, & \beta_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.123 \\ \ddot{T}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.987, & \ddot{S}_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.536, & \alpha_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.765, & \beta_{\tilde{A}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.122 \end{aligned}$$

*Lower approximations are evaluated as:*

$$\begin{aligned} \ddot{T}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.726, & \ddot{S}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.825, & \alpha_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.234, & \beta_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= 0.635 \\ \ddot{T}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.786, & \ddot{S}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.765, & \alpha_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.232, & \beta_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_2) &= 0.635 \\ \ddot{T}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.786, & \ddot{S}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.765, & \alpha_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.234, & \beta_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_3) &= 0.635 \\ \ddot{T}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.726, & \ddot{S}_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.825, & \alpha_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.232, & \beta_{\tilde{A}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_4) &= 0.635 \end{aligned}$$

*Thus:*

$$\begin{aligned} \tilde{A}^{\sharp}(\mathcal{Y}_D) = \{ &(\mathcal{G}_1, \langle 0.912, 0.536 \rangle, \langle 0.765, 0.122 \rangle), (\mathcal{G}_2, \langle 0.987, 0.536 \rangle, \langle 0.235, 0.123 \rangle), \\ &(\mathcal{G}_3, \langle 0.912, 0.536 \rangle, \langle 0.235, 0.123 \rangle), (\mathcal{G}_4, \langle 0.987, 0.536 \rangle, \langle 0.765, 0.122 \rangle) \} \\ \tilde{A}_{\sharp}(\mathcal{Y}_D) = \{ &(\mathcal{G}_1, \langle 0.726, 0.825 \rangle, \langle 0.234, 0.635 \rangle), (\mathcal{G}_2, \langle 0.786, 0.765 \rangle, \langle 0.232, 0.635 \rangle), \\ &(\mathcal{G}_3, \langle 0.786, 0.765 \rangle, \langle 0.234, 0.635 \rangle), (\mathcal{G}_4, \langle 0.726, 0.825 \rangle, \langle 0.232, 0.635 \rangle) \} \end{aligned}$$

*Therefore,* $(\tilde{A}_{\sharp}(\mathcal{Y}_D), \tilde{A}^{\sharp}(\mathcal{Y}_D))$ *is an SRLDFS.*
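
The neighborhood-wise max/min computations in this example can be reproduced mechanically. The sketch below encodes the soft relation through its successor neighborhoods and applies the aggregation rules of Definition 9 as they appear in the worked values; the identifiers `Y`, `nbhd`, `upper`, and `lower` are ours, not the paper's:

```python
# SRLDF approximations over a crisp soft relation (Example 2 data).
# Each LDF value is a tuple (T, S, alpha, beta).

Y = {
    "p1": (0.786, 0.765, 0.234, 0.123),
    "p2": (0.987, 0.574, 0.232, 0.423),
    "p3": (0.912, 0.536, 0.235, 0.635),
    "p4": (0.726, 0.825, 0.765, 0.122),
}

# Successor neighborhoods induced by the crisp relation A~.
nbhd = {
    "G1": ["p1", "p3", "p4"],
    "G2": ["p1", "p2", "p3"],
    "G3": ["p1", "p3"],
    "G4": ["p2", "p3", "p4"],
}

def upper(g):
    """Upper approximation at g: max of T and alpha, min of S and beta."""
    v = [Y[p] for p in nbhd[g]]
    return (max(x[0] for x in v), min(x[1] for x in v),
            max(x[2] for x in v), min(x[3] for x in v))

def lower(g):
    """Lower approximation at g: min of T and alpha, max of S and beta."""
    v = [Y[p] for p in nbhd[g]]
    return (min(x[0] for x in v), max(x[1] for x in v),
            min(x[2] for x in v), max(x[3] for x in v))
```

For instance, `upper("G1")` returns `(0.912, 0.536, 0.765, 0.122)` and `lower("G1")` returns `(0.726, 0.825, 0.234, 0.635)`, matching the approximations above.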

**Remark 2.** *For the "crisp soft approximation space"* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{A})$*, if we take the upper and lower approximations of the sets listed in Table 5, then we can observe the degeneration of the SRLDF approximation operators into different rough-set-based structures.*

*It is evident from Table 5 that the proposed model is more general and more flexible than the existing structures: each of them is recovered as a special case of SRLDFSs, whereas none of them can be converted into SRLDFSs and their respective approximation operators. In simple terms, SRLDFS is the generalization of "soft rough sets, soft rough fuzzy sets, soft rough intuitionistic fuzzy sets, soft rough Pythagorean fuzzy sets, and soft rough q-rung orthopair fuzzy sets".*

**Theorem 2.** *Let* $\mathcal{Y}_D, \mathcal{B}_D \in D(\dot{\mathcal{G}})$*, and let* $\tilde{A}^{\sharp}(\mathcal{Y}_D)$ *and* $\tilde{A}_{\sharp}(\mathcal{Y}_D)$ *be the "upper and lower approximation operators" over the approximation space* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{A})$*. Then, the following axioms hold:*

*(1)* $\tilde{A}_{\sharp}(\mathcal{Y}_D) = \sim \tilde{A}^{\sharp}(\sim \mathcal{Y}_D)$*,*
*(2)* $\mathcal{Y}_D \subseteq \mathcal{B}_D \Rightarrow \tilde{A}_{\sharp}(\mathcal{Y}_D) \subseteq \tilde{A}_{\sharp}(\mathcal{B}_D)$*,*
*(3)* $\tilde{A}_{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D) = \tilde{A}_{\sharp}(\mathcal{Y}_D) \cap \tilde{A}_{\sharp}(\mathcal{B}_D)$*,*
*(4)* $\tilde{A}_{\sharp}(\mathcal{Y}_D \cup \mathcal{B}_D) \supseteq \tilde{A}_{\sharp}(\mathcal{Y}_D) \cup \tilde{A}_{\sharp}(\mathcal{B}_D)$*,*
*(5)* $\tilde{A}^{\sharp}(\mathcal{Y}_D) = \sim \tilde{A}_{\sharp}(\sim \mathcal{Y}_D)$*,*
*(6)* $\mathcal{Y}_D \subseteq \mathcal{B}_D \Rightarrow \tilde{A}^{\sharp}(\mathcal{Y}_D) \subseteq \tilde{A}^{\sharp}(\mathcal{B}_D)$*,*
*(7)* $\tilde{A}^{\sharp}(\mathcal{Y}_D \cup \mathcal{B}_D) = \tilde{A}^{\sharp}(\mathcal{Y}_D) \cup \tilde{A}^{\sharp}(\mathcal{B}_D)$*,*
*(8)* $\tilde{A}^{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D) \subseteq \tilde{A}^{\sharp}(\mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\mathcal{B}_D)$*.*

*The complement of* $\mathcal{Y}_D$ *is represented by* $\sim \mathcal{Y}_D$*.*

**Proof.** See Appendix A.
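
The duality axioms can also be spot-checked numerically on the Example 2 data, assuming the LDFS complement swaps the membership pair with the non-membership pair and $\alpha$ with $\beta$ (our reading of $\sim \mathcal{Y}_D$; the identifiers below are illustrative):

```python
# Spot-check of the duality axiom of Theorem 2 on the Example 2 data,
# assuming the LDFS complement swaps <T, S> and <alpha, beta> component-wise.

Y = {"p1": (0.786, 0.765, 0.234, 0.123), "p2": (0.987, 0.574, 0.232, 0.423),
     "p3": (0.912, 0.536, 0.235, 0.635), "p4": (0.726, 0.825, 0.765, 0.122)}
nbhd = {"G1": ["p1", "p3", "p4"], "G2": ["p1", "p2", "p3"],
        "G3": ["p1", "p3"], "G4": ["p2", "p3", "p4"]}

def complement(F):
    return {p: (S, T, b, a) for p, (T, S, a, b) in F.items()}

def upper(F, g):
    v = [F[p] for p in nbhd[g]]
    return (max(x[0] for x in v), min(x[1] for x in v),
            max(x[2] for x in v), min(x[3] for x in v))

def lower(F, g):
    v = [F[p] for p in nbhd[g]]
    return (min(x[0] for x in v), max(x[1] for x in v),
            min(x[2] for x in v), max(x[3] for x in v))

for g in nbhd:
    T, S, a, b = upper(complement(Y), g)
    # complementing the upper approximation of ~Y gives the lower one of Y
    assert (S, T, b, a) == lower(Y, g)
```

The equality holds at every alternative because max and min are dual under the component swap.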

Now, we provide a counterexample to show that equality does not hold in Parts (4) and (8) of Theorem 2.



**Table 5.** Degeneration of SRLDF approximation operators into different rough set models.

**Example 3.** *For the reference set* $\ddot{\mathcal{Q}} = \{\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3, \mathcal{G}_4\}$ *and the set of decision variables* $\dot{\mathcal{G}} = \{\dot{\wp}_1, \dot{\wp}_2, \dot{\wp}_3\}$*, we define a soft set* $(\eta, \dot{\mathcal{G}})$ *in* $\ddot{\mathcal{Q}}$ *written as:*

$$\eta(\dot{\wp}_1) = \{\mathcal{G}_1, \mathcal{G}_4\}, \quad \eta(\dot{\wp}_2) = \{\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_4\}, \quad \eta(\dot{\wp}_3) = \{\mathcal{G}_2, \mathcal{G}_3\}$$

*The crisp soft relation* $\tilde{A}$ *in* $\ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$ *is given as:*

$$\tilde{A} = \{(\mathcal{G}_1, \dot{\wp}_1), (\mathcal{G}_4, \dot{\wp}_1), (\mathcal{G}_1, \dot{\wp}_2), (\mathcal{G}_2, \dot{\wp}_2), (\mathcal{G}_4, \dot{\wp}_2), (\mathcal{G}_2, \dot{\wp}_3), (\mathcal{G}_3, \dot{\wp}_3)\}$$

*The corresponding neighborhoods are:*

$$\wp_{\tilde{A}}^{s}(\mathcal{G}_1) = \{\dot{\wp}_1, \dot{\wp}_2\}, \quad \wp_{\tilde{A}}^{s}(\mathcal{G}_2) = \{\dot{\wp}_2, \dot{\wp}_3\}, \quad \wp_{\tilde{A}}^{s}(\mathcal{G}_3) = \{\dot{\wp}_3\}, \quad \wp_{\tilde{A}}^{s}(\mathcal{G}_4) = \{\dot{\wp}_1, \dot{\wp}_2\}$$

*Let* $\mathcal{Y}_D, \mathcal{B}_D \in D(\dot{\mathcal{G}})$ *be given as follows:*

$$\begin{aligned} \mathcal{Y}_D &= \{ (\dot{\wp}_1, \langle 0.573, 0.273 \rangle, \langle 0.271, 0.531 \rangle), & \mathcal{B}_D &= \{ (\dot{\wp}_1, \langle 0.773, 0.273 \rangle, \langle 0.281, 0.523 \rangle), \\ &\qquad (\dot{\wp}_2, \langle 0.378, 0.177 \rangle, \langle 0.291, 0.532 \rangle), & &\qquad (\dot{\wp}_2, \langle 0.778, 0.371 \rangle, \langle 0.283, 0.521 \rangle), \\ &\qquad (\dot{\wp}_3, \langle 0.678, 0.178 \rangle, \langle 0.271, 0.521 \rangle) \} & &\qquad (\dot{\wp}_3, \langle 0.873, 0.371 \rangle, \langle 0.261, 0.532 \rangle) \} \end{aligned}$$

*The "upper approximations" are given as:*

$$\begin{aligned} \tilde{A}^{\sharp}(\mathcal{Y}_D) = \{ &(\mathcal{G}_1, \langle 0.573, 0.177 \rangle, \langle 0.291, 0.531 \rangle), (\mathcal{G}_2, \langle 0.678, 0.177 \rangle, \langle 0.291, 0.521 \rangle), \\ &(\mathcal{G}_3, \langle 0.678, 0.178 \rangle, \langle 0.271, 0.521 \rangle), (\mathcal{G}_4, \langle 0.573, 0.177 \rangle, \langle 0.291, 0.531 \rangle) \} \\ \tilde{A}^{\sharp}(\mathcal{B}_D) = \{ &(\mathcal{G}_1, \langle 0.778, 0.273 \rangle, \langle 0.283, 0.521 \rangle), (\mathcal{G}_2, \langle 0.873, 0.371 \rangle, \langle 0.283, 0.521 \rangle), \\ &(\mathcal{G}_3, \langle 0.873, 0.371 \rangle, \langle 0.261, 0.532 \rangle), (\mathcal{G}_4, \langle 0.778, 0.273 \rangle, \langle 0.283, 0.521 \rangle) \} \\ \tilde{A}^{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D) = \{ &(\mathcal{G}_1, \langle 0.573, 0.273 \rangle, \langle 0.283, 0.531 \rangle), (\mathcal{G}_2, \langle 0.678, 0.371 \rangle, \langle 0.283, 0.532 \rangle), \\ &(\mathcal{G}_3, \langle 0.678, 0.371 \rangle, \langle 0.261, 0.532 \rangle), (\mathcal{G}_4, \langle 0.573, 0.273 \rangle, \langle 0.283, 0.531 \rangle) \} \\ \tilde{A}^{\sharp}(\mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\mathcal{B}_D) = \{ &(\mathcal{G}_1, \langle 0.573, 0.273 \rangle, \langle 0.283, 0.531 \rangle), (\mathcal{G}_2, \langle 0.678, 0.371 \rangle, \langle 0.283, 0.521 \rangle), \\ &(\mathcal{G}_3, \langle 0.678, 0.371 \rangle, \langle 0.261, 0.532 \rangle), (\mathcal{G}_4, \langle 0.573, 0.273 \rangle, \langle 0.283, 0.531 \rangle) \} \end{aligned}$$

*From the above calculations, it is clear that* $\tilde{A}^{\sharp}(\mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\mathcal{B}_D) \neq \tilde{A}^{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D)$*, since for the alternative* $\mathcal{G}_2$ *the degrees of the reference parameter satisfy* $\beta_{\tilde{A}^{\sharp}(\mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\mathcal{B}_D)}(\mathcal{G}_2) \neq \beta_{\tilde{A}^{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D)}(\mathcal{G}_2)$*, i.e.,* $0.521 \neq 0.532$*. Similarly, one can check that* $\tilde{A}_{\sharp}(\mathcal{Y}_D \cup \mathcal{B}_D) \neq \tilde{A}_{\sharp}(\mathcal{Y}_D) \cup \tilde{A}_{\sharp}(\mathcal{B}_D)$*.*
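
This counterexample can be verified mechanically. The sketch below recomputes both sides of Part (8) at $\mathcal{G}_2$, assuming the component-wise min/max intersection of LDFSs; the identifiers are ours:

```python
# Counter-example data of Example 3: the upper approximation of an
# intersection need not equal the intersection of upper approximations.
# LDF values are tuples (T, S, alpha, beta).

Y = {"p1": (0.573, 0.273, 0.271, 0.531),
     "p2": (0.378, 0.177, 0.291, 0.532),
     "p3": (0.678, 0.178, 0.271, 0.521)}
B = {"p1": (0.773, 0.273, 0.281, 0.523),
     "p2": (0.778, 0.371, 0.283, 0.521),
     "p3": (0.873, 0.371, 0.261, 0.532)}
nbhd = {"G1": ["p1", "p2"], "G2": ["p2", "p3"],
        "G3": ["p3"], "G4": ["p1", "p2"]}

def intersect(F, H):
    """Component-wise LDFS intersection: min on T and alpha, max on S and beta."""
    return {k: (min(F[k][0], H[k][0]), max(F[k][1], H[k][1]),
                min(F[k][2], H[k][2]), max(F[k][3], H[k][3])) for k in F}

def upper(F, g):
    v = [F[p] for p in nbhd[g]]
    return (max(x[0] for x in v), min(x[1] for x in v),
            max(x[2] for x in v), min(x[3] for x in v))

lhs = upper(intersect(Y, B), "G2")                # A~#(Y ∩ B) at G2
rhs = intersect({"G2": upper(Y, "G2")},
                {"G2": upper(B, "G2")})["G2"]     # A~#(Y) ∩ A~#(B) at G2
assert lhs != rhs   # the beta components differ: 0.532 vs. 0.521
```

Only the inclusion of Part (8) survives; the $\beta$ component witnesses the strictness.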

**Proposition 1.** *If* $\mathcal{Y}_D, \mathcal{B}_D \in D(\dot{\mathcal{G}})$*, then the "lower and upper approximations"* $\tilde{A}_{\sharp}(\mathcal{Y}_D)$*,* $\tilde{A}_{\sharp}(\mathcal{B}_D)$*,* $\tilde{A}^{\sharp}(\mathcal{Y}_D)$*, and* $\tilde{A}^{\sharp}(\mathcal{B}_D)$ *of LDFSs over the "crisp soft approximation space"* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{A})$ *satisfy the following axioms:*

*(1)* $\sim (\tilde{A}^{\sharp}(\mathcal{Y}_D) \cup \tilde{A}^{\sharp}(\mathcal{B}_D)) = \tilde{A}_{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{A}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(2)* $\sim (\tilde{A}^{\sharp}(\mathcal{Y}_D) \cup \tilde{A}_{\sharp}(\mathcal{B}_D)) = \tilde{A}_{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\sim \mathcal{B}_D)$*,*
*(3)* $\sim (\tilde{A}_{\sharp}(\mathcal{Y}_D) \cup \tilde{A}^{\sharp}(\mathcal{B}_D)) = \tilde{A}^{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{A}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(4)* $\sim (\tilde{A}_{\sharp}(\mathcal{Y}_D) \cup \tilde{A}_{\sharp}(\mathcal{B}_D)) = \tilde{A}^{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\sim \mathcal{B}_D)$*,*
*(5)* $\sim (\tilde{A}^{\sharp}(\mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\mathcal{B}_D)) = \tilde{A}_{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{A}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(6)* $\sim (\tilde{A}^{\sharp}(\mathcal{Y}_D) \cap \tilde{A}_{\sharp}(\mathcal{B}_D)) = \tilde{A}_{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{A}^{\sharp}(\sim \mathcal{B}_D)$*,*
*(7)* $\sim (\tilde{A}_{\sharp}(\mathcal{Y}_D) \cap \tilde{A}^{\sharp}(\mathcal{B}_D)) = \tilde{A}^{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{A}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(8)* $\sim (\tilde{A}_{\sharp}(\mathcal{Y}_D) \cap \tilde{A}_{\sharp}(\mathcal{B}_D)) = \tilde{A}^{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{A}^{\sharp}(\sim \mathcal{B}_D)$*.*

**Proof.** The proof is obvious.

### *3.2. Linear Diophantine Fuzzy Soft Rough Sets*

**Definition 10.** *For the non-empty set of alternatives* $\ddot{\mathcal{Q}}$ *and the collection of attributes* $\dot{\mathcal{G}}$*, we consider a subset* $\dot{\mathcal{O}} \subseteq \dot{\mathcal{G}}$*. Then, a linear Diophantine fuzzy soft set (LDFSS)* $(\ddot{\delta}, \dot{\mathcal{O}})$ *is represented by the mapping:*

$$\ddot{\delta} : \dot{\mathcal{O}} \to D(\ddot{\mathcal{Q}})$$

*where* $D(\ddot{\mathcal{Q}})$ *is the assembly of all LDF-subsets of* $\ddot{\mathcal{Q}}$*. Alternatively, it can be written as:*

$$(\ddot{\delta}, \dot{\mathcal{O}}) = \{ (\dot{\wp}, \ddot{\delta}(\dot{\wp})) : \dot{\wp} \in \dot{\mathcal{O}},\ \ddot{\delta}(\dot{\wp}) \in D(\ddot{\mathcal{Q}}) \}$$

**Definition 11.** *Let* $(\ddot{\delta}, \dot{\mathcal{O}})$ *be an LDFSS in* $\ddot{\mathcal{Q}}$*. Then, an LDF-subset* $\tilde{\eth}$ *of* $\ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$ *is called a linear Diophantine fuzzy soft relation (LDFSR) from* $\ddot{\mathcal{Q}}$ *to* $\dot{\mathcal{G}}$*, written as:*

$$\tilde{\eth} = \{ ((\mathcal{G}, \dot{\wp}), \langle \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}), \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \rangle, \langle \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}), \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \rangle) : (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} \}$$

*where* $\ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}), \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \in [0, 1]$ *are the truth and falsity grades, respectively, with the corresponding reference parameters* $\alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}), \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \in [0, 1]$ *satisfying the constraints:*

$$0 \le \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \, \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) + \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \, \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \le 1$$

$$0 \le \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) + \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \le 1$$
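These two constraints are straightforward to verify programmatically; a minimal sketch (the helper name is ours):

```python
def is_valid_ldf(T, S, a, b):
    """Check the LDF constraints for a grade pair (T, S) with
    reference parameters (a, b), all taken from [0, 1]."""
    in_unit = all(0.0 <= x <= 1.0 for x in (T, S, a, b))
    return in_unit and (a * T + b * S <= 1.0) and (a + b <= 1.0)
```

For example, `is_valid_ldf(0.786, 0.765, 0.234, 0.123)` holds, while `is_valid_ldf(0.9, 0.9, 0.7, 0.6)` fails because $\alpha + \beta > 1$.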

*If* $\ddot{\mathcal{Q}} = \{\mathcal{G}_1, \mathcal{G}_2, \ldots, \mathcal{G}_n\}$ *and* $\dot{\mathcal{G}} = \{\dot{\wp}_1, \dot{\wp}_2, \ldots, \dot{\wp}_m\}$*, then the LDFSR* $\tilde{\eth}$ *on* $\ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$ *can be represented in tabular form as in Table 6.*

**Table 6.** LDFSR.


**Definition 12.** *For the reference set* $\ddot{\mathcal{Q}}$ *and the set of decision variables* $\dot{\mathcal{G}}$*, if we define an LDFSR* $\tilde{\eth}$ *over* $\ddot{\mathcal{Q}} \times \dot{\mathcal{G}}$*, then* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$ *is called an "LDFS approximation space". If* $\mathcal{Y}_D \in D(\dot{\mathcal{G}})$*, then* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$ *and* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$ *are the "upper and lower approximations" of* $\mathcal{Y}_D$ *about* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$*, respectively, written as:*

$$\tilde{\eth}^{\sharp}(\mathcal{Y}_D) = \{ (\mathcal{G}, \langle \ddot{T}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}), \ddot{S}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \rangle, \langle \alpha_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}), \beta_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$$

$$\tilde{\eth}_{\sharp}(\mathcal{Y}_D) = \{ (\mathcal{G}, \langle \ddot{T}_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}), \ddot{S}_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \rangle, \langle \alpha_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}), \beta_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$$

*where the degrees can be calculated as given in Table 7.*

**Table 7.** Formulation of LDFSRSs.


*The pair* $(\tilde{\eth}_{\sharp}(\mathcal{Y}_D), \tilde{\eth}^{\sharp}(\mathcal{Y}_D))$ *is called a linear Diophantine fuzzy soft rough set (LDFSRS) in* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$*. The "lower and upper approximation operators" are represented as* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$ *and* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$*, respectively. If* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D) = \tilde{\eth}^{\sharp}(\mathcal{Y}_D)$*, then* $\mathcal{Y}_D$ *is said to be definable.*

**Example 4.** *Let* $\ddot{\mathcal{Q}} = \{\mathcal{G}_1, \mathcal{G}_2\}$ *be the collection of certain cloth brands and* $\dot{\mathcal{G}} = \{\dot{\wp}_1, \dot{\wp}_2, \dot{\wp}_3\}$ *be the set of attributes, where:*

> $\dot{\wp}_1$ = *Product quality*, $\dot{\wp}_2$ = *Affordable*, $\dot{\wp}_3$ = *Recovery service*.

*We construct the LDFSR* $\tilde{\eth}$ *from* $\ddot{\mathcal{Q}}$ *to* $\dot{\mathcal{G}}$*, represented in Table 8.*


**Table 8.** LDFSR.

*Consider a linear Diophantine fuzzy subset* $\mathcal{Y}_D$ *of* $\dot{\mathcal{G}}$ *given as:*

$$\mathcal{Y}_D = \{ (\dot{\wp}_1, \langle 0.837, 0.535 \rangle, \langle 0.242, 0.242 \rangle), (\dot{\wp}_2, \langle 0.833, 0.635 \rangle, \langle 0.634, 0.142 \rangle), (\dot{\wp}_3, \langle 0.725, 0.526 \rangle, \langle 0.625, 0.211 \rangle) \}$$

*By using Definition 12, we find the "upper and lower approximations" of* $\mathcal{Y}_D$*. For* $\mathcal{G}_1$*:*

$$\begin{aligned} \ddot{T}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= \bigvee_{\dot{\wp}} \{0.684, 0.825, 0.725\} = 0.825, & \ddot{S}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= \bigwedge_{\dot{\wp}} \{0.645, 0.635, 0.735\} = 0.635, \\ \alpha_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= \bigvee_{\dot{\wp}} \{0.221, 0.226, 0.122\} = 0.226, & \beta_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}_1) &= \bigwedge_{\dot{\wp}} \{0.675, 0.877, 0.677\} = 0.675 \end{aligned}$$

*Similarly, we find all the other values for the "upper and lower approximations" of* $\mathcal{Y}_D$*. This implies that:*

$$\begin{aligned} \tilde{\eth}^{\sharp}(\mathcal{Y}_D) &= \{ (\mathcal{G}_1, \langle 0.825, 0.635 \rangle, \langle 0.226, 0.675 \rangle), (\mathcal{G}_2, \langle 0.837, 0.535 \rangle, \langle 0.242, 0.348 \rangle) \} \\ \tilde{\eth}_{\sharp}(\mathcal{Y}_D) &= \{ (\mathcal{G}_1, \langle 0.725, 0.635 \rangle, \langle 0.774, 0.242 \rangle), (\mathcal{G}_2, \langle 0.752, 0.635 \rangle, \langle 0.754, 0.242 \rangle) \} \end{aligned}$$

*Thus,* $(\tilde{\eth}_{\sharp}(\mathcal{Y}_D), \tilde{\eth}^{\sharp}(\mathcal{Y}_D))$ *is an LDFSRS.*
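
The upper composition rule visible in the worked values above (join of pairwise meets for $\langle \ddot{T}, \alpha \rangle$, meet of pairwise joins for $\langle \ddot{S}, \beta \rangle$) can be sketched as follows. Since Table 8 is not reproduced here, the relation entries `R` are purely illustrative placeholders, not the paper's values, and `Y` restricts Example 4's LDFS to two attributes for brevity:

```python
# Upper LDFSR approximation by max-min composition (Definition 12).
# R holds illustrative relation values (T, S, alpha, beta), NOT Table 8's.

R = {"G1": {"p1": (0.70, 0.64, 0.22, 0.67),
            "p2": (0.85, 0.55, 0.20, 0.90)}}
Y = {"p1": (0.837, 0.535, 0.242, 0.242),
     "p2": (0.833, 0.635, 0.634, 0.142)}

def upper(g):
    pairs = [(R[g][p], Y[p]) for p in Y]
    return (max(min(r[0], y[0]) for r, y in pairs),   # T: join of meets
            min(max(r[1], y[1]) for r, y in pairs),   # S: meet of joins
            max(min(r[2], y[2]) for r, y in pairs),   # alpha: join of meets
            min(max(r[3], y[3]) for r, y in pairs))   # beta: meet of joins
```

With these placeholder values, `upper("G1")` evaluates to `(0.833, 0.635, 0.22, 0.67)`.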

**Remark 3.** *For the "linear Diophantine fuzzy soft approximation space (LDFS approximation space)"* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$*, if we take the upper and lower approximations of the sets listed in Table 9, then we can observe the degeneration of the LDFSR approximation operators into different rough-set-based structures.*



**Table 9.** Degeneration of LDFSR approximation operators into different rough set models.

*It is evident from Table 9 that the proposed model is more general and more flexible than the existing structures: each of them is recovered as a special case of LDFSRSs, whereas none of them can be converted into LDFSRSs and their respective approximation operators. The beauty of this structure is that if the LDFSR approximation operators are taken over a "crisp soft approximation space", they degenerate into the proposed SRLDFSs; this establishes a strong relation between the two proposed rough set models. In simple terms, LDFSRS is the generalization of "soft fuzzy rough sets, intuitionistic fuzzy soft rough sets, Pythagorean fuzzy soft rough sets, q-rung orthopair fuzzy soft rough sets, and soft rough linear Diophantine fuzzy sets".*

**Theorem 3.** *For arbitrary* $\mathcal{Y}_D, \mathcal{B}_D \in D(\dot{\mathcal{G}})$*, the "upper and lower approximation operators"* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$*,* $\tilde{\eth}^{\sharp}(\mathcal{B}_D)$*,* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$*, and* $\tilde{\eth}_{\sharp}(\mathcal{B}_D)$ *on the "LDFS approximation space"* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$ *satisfy the following axioms:*

*(1)* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D) = \sim \tilde{\eth}^{\sharp}(\sim \mathcal{Y}_D)$*,*
*(2)* $\mathcal{Y}_D \subseteq \mathcal{B}_D \Rightarrow \tilde{\eth}_{\sharp}(\mathcal{Y}_D) \subseteq \tilde{\eth}_{\sharp}(\mathcal{B}_D)$*,*
*(3)* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D) = \tilde{\eth}_{\sharp}(\mathcal{Y}_D) \cap \tilde{\eth}_{\sharp}(\mathcal{B}_D)$*,*
*(4)* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D \cup \mathcal{B}_D) \supseteq \tilde{\eth}_{\sharp}(\mathcal{Y}_D) \cup \tilde{\eth}_{\sharp}(\mathcal{B}_D)$*,*
*(5)* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D) = \sim \tilde{\eth}_{\sharp}(\sim \mathcal{Y}_D)$*,*
*(6)* $\mathcal{Y}_D \subseteq \mathcal{B}_D \Rightarrow \tilde{\eth}^{\sharp}(\mathcal{Y}_D) \subseteq \tilde{\eth}^{\sharp}(\mathcal{B}_D)$*,*
*(7)* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D \cup \mathcal{B}_D) = \tilde{\eth}^{\sharp}(\mathcal{Y}_D) \cup \tilde{\eth}^{\sharp}(\mathcal{B}_D)$*,*
*(8)* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D \cap \mathcal{B}_D) \subseteq \tilde{\eth}^{\sharp}(\mathcal{Y}_D) \cap \tilde{\eth}^{\sharp}(\mathcal{B}_D)$*.*


*The complement of* $\mathcal{Y}_D$ *is represented by* $\sim \mathcal{Y}_D$*.*

**Proof.** The proof is similar to the proof given in Appendix A.

**Proposition 2.** *For arbitrary* $\mathcal{Y}_D, \mathcal{B}_D \in D(\dot{\mathcal{G}})$*, the "upper and lower approximation operators"* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$*,* $\tilde{\eth}^{\sharp}(\mathcal{B}_D)$*,* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$*, and* $\tilde{\eth}_{\sharp}(\mathcal{B}_D)$ *on the "LDFS approximation space"* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$ *satisfy the following axioms:*

*(1)* $\sim (\tilde{\eth}^{\sharp}(\mathcal{Y}_D) \cup \tilde{\eth}^{\sharp}(\mathcal{B}_D)) = \tilde{\eth}_{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{\eth}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(2)* $\sim (\tilde{\eth}^{\sharp}(\mathcal{Y}_D) \cup \tilde{\eth}_{\sharp}(\mathcal{B}_D)) = \tilde{\eth}_{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{\eth}^{\sharp}(\sim \mathcal{B}_D)$*,*
*(3)* $\sim (\tilde{\eth}_{\sharp}(\mathcal{Y}_D) \cup \tilde{\eth}^{\sharp}(\mathcal{B}_D)) = \tilde{\eth}^{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{\eth}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(4)* $\sim (\tilde{\eth}_{\sharp}(\mathcal{Y}_D) \cup \tilde{\eth}_{\sharp}(\mathcal{B}_D)) = \tilde{\eth}^{\sharp}(\sim \mathcal{Y}_D) \cap \tilde{\eth}^{\sharp}(\sim \mathcal{B}_D)$*,*
*(5)* $\sim (\tilde{\eth}^{\sharp}(\mathcal{Y}_D) \cap \tilde{\eth}^{\sharp}(\mathcal{B}_D)) = \tilde{\eth}_{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{\eth}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(6)* $\sim (\tilde{\eth}^{\sharp}(\mathcal{Y}_D) \cap \tilde{\eth}_{\sharp}(\mathcal{B}_D)) = \tilde{\eth}_{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{\eth}^{\sharp}(\sim \mathcal{B}_D)$*,*
*(7)* $\sim (\tilde{\eth}_{\sharp}(\mathcal{Y}_D) \cap \tilde{\eth}^{\sharp}(\mathcal{B}_D)) = \tilde{\eth}^{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{\eth}_{\sharp}(\sim \mathcal{B}_D)$*,*
*(8)* $\sim (\tilde{\eth}_{\sharp}(\mathcal{Y}_D) \cap \tilde{\eth}_{\sharp}(\mathcal{B}_D)) = \tilde{\eth}^{\sharp}(\sim \mathcal{Y}_D) \cup \tilde{\eth}^{\sharp}(\sim \mathcal{B}_D)$*.*

**Proof.** The proof is obvious.

**Theorem 4.** *For the "LDFS approximation space"* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$*, if* $\tilde{\eth}$ *is serial, then* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$ *and* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$ *satisfy the following:*

*(1)* $\tilde{\eth}_{\sharp}(\emptyset) = \emptyset$*,* $\tilde{\eth}^{\sharp}(\dot{\mathcal{G}}) = \dot{\mathcal{G}}$*; (2)* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D) \subseteq \tilde{\eth}^{\sharp}(\mathcal{Y}_D)$ *for all* $\mathcal{Y}_D \in D(\dot{\mathcal{G}})$*.*

**Proof.** The proof is obvious by following Definition 12.

**Definition 13.** *Let* $\mathcal{Y}_D \in D(\dot{\mathcal{G}})$*, and let* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$*,* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$ *be the lower and upper "LDFSR approximation operators". Then, the ring sum operation of* $\tilde{\eth}_{\sharp}(\mathcal{Y}_D)$ *and* $\tilde{\eth}^{\sharp}(\mathcal{Y}_D)$ *is written as:*

$$\begin{aligned} \tilde{\eth}_{\sharp}(\mathcal{Y}_D) \oplus \tilde{\eth}^{\sharp}(\mathcal{Y}_D) = \{ (\mathcal{G}, &\langle \ddot{T}_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) + \ddot{T}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) - \ddot{T}_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \, \ddot{T}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}),\ \ddot{S}_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \, \ddot{S}_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \rangle, \\ &\langle \alpha_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) + \alpha_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) - \alpha_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \, \alpha_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}),\ \beta_{\tilde{\eth}_{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \, \beta_{\tilde{\eth}^{\sharp}(\mathcal{Y}_D)}(\mathcal{G}) \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \} \end{aligned}$$
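
Component-wise, the ring sum is a probabilistic sum on the membership grade and on $\alpha$, and a product on the non-membership grade and on $\beta$. A small sketch (the helper name is ours):

```python
def ring_sum(x, y):
    """Ring sum of two LDF values (T, S, alpha, beta), per Definition 13."""
    T1, S1, a1, b1 = x
    T2, S2, a2, b2 = y
    return (T1 + T2 - T1 * T2,  # probabilistic sum of memberships
            S1 * S2,            # product of non-memberships
            a1 + a2 - a1 * a2,  # probabilistic sum of alpha parameters
            b1 * b2)            # product of beta parameters
```

Applied grade-wise at each $\mathcal{G}$, this combines the lower and upper approximations into $\tilde{\eth}_{\sharp}(\mathcal{Y}_D) \oplus \tilde{\eth}^{\sharp}(\mathcal{Y}_D)$.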

**Definition 14.** *Let* $\mathcal{D} = \{ (\mathcal{G}, \langle \ddot{T}_{\mathcal{D}}(\mathcal{G}), \ddot{S}_{\mathcal{D}}(\mathcal{G}) \rangle, \langle \alpha_{\mathcal{D}}(\mathcal{G}), \beta_{\mathcal{D}}(\mathcal{G}) \rangle) : \mathcal{G} \in \ddot{\mathcal{Q}} \}$ *be an LDFS, and let the constants* $(\langle \eta, \theta \rangle, \langle \zeta, \psi \rangle)$*, where* $\eta, \theta, \zeta, \psi \in [0, 1]$*, satisfy the constraints* $0 \le \eta\theta + \zeta\psi \le 1$ *and* $0 \le \theta + \psi \le 1$*. Then, the* $(\langle \eta, \theta \rangle, \langle \zeta, \psi \rangle)$*-level cut set of* $\mathcal{D}$ *is written as:*

$$\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{T}_{\mathcal{D}}(\mathcal{G}) \ge \eta,\ \alpha_{\mathcal{D}}(\mathcal{G}) \ge \theta,\ \ddot{S}_{\mathcal{D}}(\mathcal{G}) \le \zeta,\ \beta_{\mathcal{D}}(\mathcal{G}) \le \psi \}$$

*The* $\langle \eta, \theta \rangle$*-level cut of* $\mathcal{D}$ *is written as* $\mathcal{D}_{\langle \eta, \theta \rangle} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{T}_{\mathcal{D}}(\mathcal{G}) \ge \eta,\ \alpha_{\mathcal{D}}(\mathcal{G}) \ge \theta \}$*. The strong* $\langle \eta, \theta \rangle$*-level cut of* $\mathcal{D}$ *is written as* $\mathcal{D}_{\langle \eta, \theta \rangle^{+}} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{T}_{\mathcal{D}}(\mathcal{G}) > \eta,\ \alpha_{\mathcal{D}}(\mathcal{G}) > \theta \}$*. The* $\langle \zeta, \psi \rangle$*-level cut of* $\mathcal{D}$ *is written as* $\mathcal{D}^{\langle \zeta, \psi \rangle} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{S}_{\mathcal{D}}(\mathcal{G}) \le \zeta,\ \beta_{\mathcal{D}}(\mathcal{G}) \le \psi \}$*. The strong* $\langle \zeta, \psi \rangle$*-level cut of* $\mathcal{D}$ *is written as* $\mathcal{D}^{\langle \zeta, \psi \rangle^{+}} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{S}_{\mathcal{D}}(\mathcal{G}) < \zeta,\ \beta_{\mathcal{D}}(\mathcal{G}) < \psi \}$*. The other cut sets of an LDFS are described analogously:*

$$\mathcal{D}_{\langle \eta, \theta \rangle^{+}}^{\langle \zeta, \psi \rangle} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{T}_{\mathcal{D}}(\mathcal{G}) > \eta,\ \alpha_{\mathcal{D}}(\mathcal{G}) > \theta,\ \ddot{S}_{\mathcal{D}}(\mathcal{G}) \le \zeta,\ \beta_{\mathcal{D}}(\mathcal{G}) \le \psi \}$$

$$\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle^{+}} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{T}_{\mathcal{D}}(\mathcal{G}) \ge \eta,\ \alpha_{\mathcal{D}}(\mathcal{G}) \ge \theta,\ \ddot{S}_{\mathcal{D}}(\mathcal{G}) < \zeta,\ \beta_{\mathcal{D}}(\mathcal{G}) < \psi \}$$

$$\mathcal{D}_{\langle \eta, \theta \rangle^{+}}^{\langle \zeta, \psi \rangle^{+}} = \{ \mathcal{G} \in \ddot{\mathcal{Q}} : \ddot{T}_{\mathcal{D}}(\mathcal{G}) > \eta,\ \alpha_{\mathcal{D}}(\mathcal{G}) > \theta,\ \ddot{S}_{\mathcal{D}}(\mathcal{G}) < \zeta,\ \beta_{\mathcal{D}}(\mathcal{G}) < \psi \}$$
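
A small sketch of these cut operators (the helper and its keyword flags, which select the strong variants, are ours):

```python
def level_cut(D, eta, theta, zeta, psi, strong_low=False, strong_up=False):
    """(<eta,theta>,<zeta,psi>)-level cut of an LDFS D mapping each
    alternative to (T, S, alpha, beta); strong_* switches >=/<= to >/<."""
    lo = (lambda u, t: u > t) if strong_low else (lambda u, t: u >= t)
    up = (lambda u, t: u < t) if strong_up else (lambda u, t: u <= t)
    return {g for g, (T, S, a, b) in D.items()
            if lo(T, eta) and lo(a, theta) and up(S, zeta) and up(b, psi)}

# e.g. on a two-alternative LDFS:
sample = {"G1": (0.9, 0.3, 0.6, 0.2), "G2": (0.5, 0.7, 0.4, 0.5)}
```

Here `level_cut(sample, 0.8, 0.5, 0.4, 0.3)` keeps only `G1`; raising $\eta$ to $0.9$ with `strong_low=True` empties the cut, illustrating the strict variant.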

**Theorem 5.** *Let* $\mathcal{D}, {}^{1}\mathcal{D}, {}^{2}\mathcal{D} \in D(\ddot{\mathcal{Q}})$ *and* $\eta, \theta, \zeta, \psi \in [0, 1]$ *satisfy the constraints* $0 \le \eta\theta + \zeta\psi \le 1$ *and* $0 \le \theta + \psi \le 1$*. Then, the cut sets of LDFSs satisfy the following axioms:*

*1.* $\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} = \mathcal{D}_{\langle \eta, \theta \rangle} \cap \mathcal{D}^{\langle \zeta, \psi \rangle}$*;*
*2.* $(\sim \mathcal{D})_{\langle \eta, \theta \rangle} = \sim \mathcal{D}_{\langle \eta, \theta \rangle^{+}}$*,* $(\sim \mathcal{D})^{\langle \zeta, \psi \rangle} = \sim \mathcal{D}^{\langle \zeta, \psi \rangle^{+}}$*;*
*3.* ${}^{1}\mathcal{D} \subseteq {}^{2}\mathcal{D} \Rightarrow {}^{1}\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} \subseteq {}^{2}\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle}$*;*
*4.* $({}^{1}\mathcal{D} \cap {}^{2}\mathcal{D})_{\langle \eta, \theta \rangle} = {}^{1}\mathcal{D}_{\langle \eta, \theta \rangle} \cap {}^{2}\mathcal{D}_{\langle \eta, \theta \rangle}$*,* $({}^{1}\mathcal{D} \cap {}^{2}\mathcal{D})^{\langle \zeta, \psi \rangle} = {}^{1}\mathcal{D}^{\langle \zeta, \psi \rangle} \cap {}^{2}\mathcal{D}^{\langle \zeta, \psi \rangle}$*,* $({}^{1}\mathcal{D} \cap {}^{2}\mathcal{D})_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} = {}^{1}\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} \cap {}^{2}\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle}$*;*
*5.* $({}^{1}\mathcal{D} \cup {}^{2}\mathcal{D})_{\langle \eta, \theta \rangle} = {}^{1}\mathcal{D}_{\langle \eta, \theta \rangle} \cup {}^{2}\mathcal{D}_{\langle \eta, \theta \rangle}$*,* $({}^{1}\mathcal{D} \cup {}^{2}\mathcal{D})^{\langle \zeta, \psi \rangle} = {}^{1}\mathcal{D}^{\langle \zeta, \psi \rangle} \cup {}^{2}\mathcal{D}^{\langle \zeta, \psi \rangle}$*,* $({}^{1}\mathcal{D} \cup {}^{2}\mathcal{D})_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} \supseteq {}^{1}\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle} \cup {}^{2}\mathcal{D}_{\langle \eta, \theta \rangle}^{\langle \zeta, \psi \rangle}$*;*
*6.* *If* $\eta_1 \ge \eta_2$*,* $\theta_1 \ge \theta_2$ *and* $\zeta_1 \le \zeta_2$*,* $\psi_1 \le \psi_2$*, then* $\mathcal{D}_{\langle \eta_1, \theta_1 \rangle} \subseteq \mathcal{D}_{\langle \eta_2, \theta_2 \rangle}$*,* $\mathcal{D}^{\langle \zeta_1, \psi_1 \rangle} \subseteq \mathcal{D}^{\langle \zeta_2, \psi_2 \rangle}$*, and* $\mathcal{D}_{\langle \eta_1, \theta_1 \rangle}^{\langle \zeta_1, \psi_1 \rangle} \subseteq \mathcal{D}_{\langle \eta_2, \theta_2 \rangle}^{\langle \zeta_2, \psi_2 \rangle}$*.*
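
Axiom 1 can be spot-checked on sample data (the values below are chosen arbitrarily for illustration):

```python
# Spot-check of Theorem 5, axiom 1: the combined level cut equals the
# intersection of the <eta,theta>-cut and the <zeta,psi>-cut.

D = {"G1": (0.90, 0.30, 0.60, 0.20),
     "G2": (0.50, 0.70, 0.40, 0.50),
     "G3": (0.85, 0.35, 0.55, 0.25)}
eta, theta, zeta, psi = 0.8, 0.5, 0.4, 0.3

lower_cut = {g for g, (T, S, a, b) in D.items() if T >= eta and a >= theta}
upper_cut = {g for g, (T, S, a, b) in D.items() if S <= zeta and b <= psi}
combined = {g for g, (T, S, a, b) in D.items()
            if T >= eta and a >= theta and S <= zeta and b <= psi}

assert combined == lower_cut & upper_cut
```

The identity holds for any thresholds, since the combined membership test is exactly the conjunction of the two separate tests.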

**Proof.** The proof follows directly from Definition 14.

By using the defined idea of cut sets on LDFSs, we can find the cut sets of the LDFSR:

$$\tilde{\eth} = \{ ((\mathcal{G}, \dot{\wp}), \langle \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}), \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \rangle, \langle \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}), \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \rangle) : (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} \}$$

given as:

$$\begin{aligned} \tilde{\eth}_{\langle \eta, \theta \rangle} &= \{ (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} : \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \ge \eta,\ \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \ge \theta \} \\ \tilde{\eth}_{\langle \eta, \theta \rangle}(\mathcal{G}) &= \{ \dot{\wp} \in \dot{\mathcal{G}} : \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \ge \eta,\ \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \ge \theta \} \text{ for } \eta, \theta \in [0, 1] \end{aligned}$$

$$\begin{aligned} \tilde{\eth}_{\langle \eta, \theta \rangle^{+}} &= \{ (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} : \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) > \eta,\ \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) > \theta \} \\ \tilde{\eth}_{\langle \eta, \theta \rangle^{+}}(\mathcal{G}) &= \{ \dot{\wp} \in \dot{\mathcal{G}} : \ddot{T}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) > \eta,\ \alpha_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) > \theta \} \text{ for } \eta, \theta \in [0, 1) \end{aligned}$$

$$\begin{aligned} \tilde{\eth}^{\langle \eta, \theta \rangle} &= \{ (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} : \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \le \eta,\ \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \le \theta \} \\ \tilde{\eth}^{\langle \eta, \theta \rangle}(\mathcal{G}) &= \{ \dot{\wp} \in \dot{\mathcal{G}} : \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \le \eta,\ \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) \le \theta \} \text{ for } \eta, \theta \in [0, 1] \end{aligned}$$

$$\begin{aligned} \tilde{\eth}^{\langle \eta, \theta \rangle^{+}} &= \{ (\mathcal{G}, \dot{\wp}) \in \ddot{\mathcal{Q}} \times \dot{\mathcal{G}} : \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) < \eta,\ \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) < \theta \} \\ \tilde{\eth}^{\langle \eta, \theta \rangle^{+}}(\mathcal{G}) &= \{ \dot{\wp} \in \dot{\mathcal{G}} : \ddot{S}_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) < \eta,\ \beta_{\tilde{\eth}}(\mathcal{G}, \dot{\wp}) < \theta \} \text{ for } \eta, \theta \in (0, 1] \end{aligned}$$

where all the calculated cuts are crisp soft relations. Now, we present a result to show that LDFSR approximation operators can be written as crisp soft rough approximation operators.
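
The $\langle \eta, \theta \rangle$-cut of an LDFSR and its induced crisp neighborhoods can be sketched as follows (the identifiers and sample values are ours):

```python
# <eta,theta>-cut of an LDFSR: keep the pairs whose membership grade and
# alpha parameter both reach the thresholds; the result is a crisp soft
# relation, from which crisp neighborhoods can be read off.

def relation_cut(R, eta, theta):
    return {(g, p) for (g, p), (T, S, a, b) in R.items()
            if T >= eta and a >= theta}

def cut_neighborhood(R, g, eta, theta):
    return {p for (h, p) in relation_cut(R, eta, theta) if h == g}

# Illustrative relation values (not from the paper):
R = {("G1", "p1"): (0.9, 0.2, 0.5, 0.3),
     ("G1", "p2"): (0.4, 0.5, 0.6, 0.2),
     ("G2", "p1"): (0.8, 0.3, 0.1, 0.6)}
```

With these values, `relation_cut(R, 0.7, 0.3)` keeps only the pair `("G1", "p1")`, so the cut neighborhood of `G2` is empty; these crisp neighborhoods are exactly what Theorems 6 and 7 compose over.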

**Theorem 6.** *For the LDFSR approximation space* $(\ddot{\mathcal{Q}}, \dot{\mathcal{G}}, \tilde{\eth})$ *and* $\mathcal{D} \in D(\ddot{\mathcal{Q}})$*, the upper approximation operators can be represented as:*

*1.*

$$\begin{aligned} \langle \ddot{T}_{\tilde{\eth}^{\sharp}(\mathcal{D})}(\mathcal{G}), \alpha_{\tilde{\eth}^{\sharp}(\mathcal{D})}(\mathcal{G}) \rangle &= \bigvee_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge \tilde{\eth}^{\sharp}_{\langle \eta, \theta \rangle}(\mathcal{D}_{\langle \eta, \theta \rangle})(\mathcal{G})] \\ &= \bigvee_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge \tilde{\eth}^{\sharp}_{\langle \eta, \theta \rangle}(\mathcal{D}_{\langle \eta, \theta \rangle^{+}})(\mathcal{G})] \\ &= \bigvee_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge \tilde{\eth}^{\sharp}_{\langle \eta, \theta \rangle^{+}}(\mathcal{D}_{\langle \eta, \theta \rangle})(\mathcal{G})] \\ &= \bigvee_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge \tilde{\eth}^{\sharp}_{\langle \eta, \theta \rangle^{+}}(\mathcal{D}_{\langle \eta, \theta \rangle^{+}})(\mathcal{G})] \end{aligned}$$

*2.*

$$\begin{aligned} \langle \ddot{S}_{\tilde{\eth}^{\sharp}(\mathcal{D})}(\mathcal{G}), \beta_{\tilde{\eth}^{\sharp}(\mathcal{D})}(\mathcal{G}) \rangle &= \bigwedge_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee (1 - \tilde{\eth}^{\sharp}_{\langle 1-\eta, 1-\theta \rangle}(\mathcal{D}^{\langle \eta, \theta \rangle})(\mathcal{G}))] \\ &= \bigwedge_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee (1 - \tilde{\eth}^{\sharp}_{\langle 1-\eta, 1-\theta \rangle}(\mathcal{D}^{\langle \eta, \theta \rangle^{+}})(\mathcal{G}))] \\ &= \bigwedge_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee (1 - \tilde{\eth}^{\sharp}_{\langle 1-\eta, 1-\theta \rangle^{+}}(\mathcal{D}^{\langle \eta, \theta \rangle})(\mathcal{G}))] \\ &= \bigwedge_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee (1 - \tilde{\eth}^{\sharp}_{\langle 1-\eta, 1-\theta \rangle^{+}}(\mathcal{D}^{\langle \eta, \theta \rangle^{+}})(\mathcal{G}))] \end{aligned}$$

*and for arbitrary* $\langle \eta, \theta \rangle \in [0, 1]$*, we have:*

*3.*

$$[\tilde{\eth}^{\#}(\mathcal{D})]_{\langle\eta,\theta\rangle^{+}} \subseteq \tilde{\eth}^{\#}_{\langle\eta,\theta\rangle^{+}}(\mathcal{D}_{\langle\eta,\theta\rangle^{+}}) \subseteq \tilde{\eth}^{\#}_{\langle\eta,\theta\rangle^{+}}(\mathcal{D}_{\langle\eta,\theta\rangle}) \subseteq \tilde{\eth}^{\#}_{\langle\eta,\theta\rangle}(\mathcal{D}_{\langle\eta,\theta\rangle}) \subseteq [\tilde{\eth}^{\#}(\mathcal{D})]_{\langle\eta,\theta\rangle}.$$

*4.*

$$[\tilde{\eth}^{\star}(\mathcal{D})]^{\langle\eta,\theta\rangle^{+}} \subseteq \tilde{\eth}^{\star}_{\langle 1-\eta,1-\theta\rangle^{+}}(\mathcal{D}^{\langle\eta,\theta\rangle^{+}}) \subseteq \tilde{\eth}^{\star}_{\langle 1-\eta,1-\theta\rangle^{+}}(\mathcal{D}^{\langle\eta,\theta\rangle}) \subseteq \tilde{\eth}^{\star}_{\langle 1-\eta,1-\theta\rangle}(\mathcal{D}^{\langle\eta,\theta\rangle}) \subseteq [\tilde{\eth}^{\star}(\mathcal{D})]^{\langle\eta,\theta\rangle}.$$

**Proof.** One can conclude the proof of this theorem directly by using Definitions 12 and 14.

**Theorem 7.** *Consider that for LDFSR approximation space* (Q¨, <sup>G</sup>˙ , <sup>ð</sup>˜) *and* <sup>D</sup> <sup>∈</sup> <sup>D</sup>(Q¨)*, the upper approximation operators can be represented as:*

*1.*

$$\begin{split} \langle \check{\mathcal{O}}\_{\widehat{\otimes}\_{\pi}(\mathcal{G})}(\mathcal{G}), \mathfrak{a}\_{\widehat{\otimes}\_{\pi}(\mathcal{G})}(\mathcal{G}) \rangle &= \bigwedge\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee \check{\otimes}\_{\{1-\eta, 1-\theta\}\_{\star}} (\mathcal{G}\_{\langle \eta, \theta \rangle^{+}})(\mathcal{G})] \\ &= \bigwedge\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee \mathfrak{d}\_{\{1-\eta, 1-\theta\}\_{\star}^{+}} (\mathcal{G}\_{\langle \eta, \theta \rangle})(\mathcal{G})] \\ &= \bigwedge\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee \mathfrak{d}\_{\{1-\eta, 1-\theta\}\_{\star}^{+}} (\mathcal{G}\_{\langle \eta, \theta \rangle^{+}})(\mathcal{G})] \\ &= \bigwedge\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \vee \mathfrak{d}\_{\{1-\eta, 1-\theta\}\_{\star}} (\mathcal{G}\_{\langle \eta, \theta \rangle})(\mathcal{G})] \end{split}$$

*2.*

$$\begin{split} \langle \mathcal{S}\_{\mathfrak{B}\_{\pi}(\mathcal{G})}(\mathcal{G}), \mathcal{B}\_{\mathfrak{B}\_{\pi}(\mathcal{G})}(\mathcal{G}) \rangle &= \bigvee\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge (1 - \mathfrak{\mathfrak{J}}\_{\langle \eta, \theta \rangle\_{\mathfrak{A}}}(\mathcal{G}^{\langle \eta, \theta \rangle})(\mathcal{G}))] \\ &= \bigvee\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge (1 - \mathfrak{\mathfrak{J}}\_{\langle \eta, \theta \rangle\_{\mathfrak{A}}^{+}}(\mathcal{G}^{\langle \eta, \theta \rangle})(\mathcal{G}))] \\ &= \bigvee\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge (1 - \mathfrak{\mathfrak{J}}\_{\langle \eta, \theta \rangle\_{\mathfrak{A}}^{+}}(\mathcal{G}^{\langle \eta, \theta \rangle^{+}})(\mathcal{G}))] \\ &= \bigvee\_{\eta, \theta \in [0,1]} [\langle \eta, \theta \rangle \wedge (1 - \mathfrak{\mathfrak{J}}\_{\langle \eta, \theta \rangle\_{\mathfrak{A}}}(\mathcal{G}^{\langle \eta, \theta \rangle^{+}})(\mathcal{G}))]. \end{split}$$

*and for arbitrary* $\langle \eta, \theta \rangle \in [0, 1]$*, we have:*

*3.*

$$[\tilde{\eth}_{\star}(\mathcal{D})]_{\langle\eta,\theta\rangle^{+}} \subseteq \tilde{\eth}_{\langle 1-\eta,1-\theta\rangle_{\star}}(\mathcal{D}_{\langle\eta,\theta\rangle^{+}}) \subseteq \tilde{\eth}_{\langle 1-\eta,1-\theta\rangle^{+}_{\star}}(\mathcal{D}_{\langle\eta,\theta\rangle^{+}}) \subseteq \tilde{\eth}_{\langle 1-\eta,1-\theta\rangle_{\star}}(\mathcal{D}_{\langle\eta,\theta\rangle}) \subseteq [\tilde{\eth}_{\star}(\mathcal{D})]_{\langle\eta,\theta\rangle}.$$

*4.*

$$[\tilde{\eth}_{\#}(\mathcal{D})]^{\langle\eta,\theta\rangle^{+}} \subseteq \tilde{\eth}_{\langle 1-\eta,1-\theta\rangle^{+}_{\#}}(\mathcal{D}^{\langle\eta,\theta\rangle^{+}}) \subseteq \tilde{\eth}_{\langle 1-\eta,1-\theta\rangle^{+}_{\#}}(\mathcal{D}^{\langle\eta,\theta\rangle}) \subseteq \tilde{\eth}_{\langle 1-\eta,1-\theta\rangle_{\#}}(\mathcal{D}^{\langle\eta,\theta\rangle}) \subseteq [\tilde{\eth}_{\#}(\mathcal{D})]^{\langle\eta,\theta\rangle}.$$

**Proof.** The proof of this theorem can be obtained directly by using Definitions 12 and 14.

### **4. MCDM for Sustainable Material Handling Equipment**

The selection of material handling equipment is extremely important in the design of an effective industrial system. The efficiency of material flow depends on choosing appropriate material handling equipment, which promotes capacity utilization and increases productivity. Various researchers have developed decision support systems and programs for selecting the best material handling equipment. In this section, we establish novel methodologies for selecting the most appropriate and reliable material handling equipment by using LDFSRSs and SRLDFSs. The intelligent system, which combines both technical and economic criteria in the material handling equipment selection process, is presented in Figure 2.

**Figure 2.** Configuration of modules in the material handling equipment selection process.

### *4.1. Selection of a Sustainable Material Handling Equipment by Using LDFSRSs*

We suppose that a manufacturing company wants to increase its efficiency and needs to handle its materials professionally. The company wants to select the alternative that decreases lead times and increases productivity. After a basic assessment, the board of the company constructs the set of suitable alternatives $\ddot{Q} = \{\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3, \mathcal{G}_4, \mathcal{G}_5, \mathcal{G}_6, \mathcal{G}_7\}$. To identify the most appropriate alternative, several decision makers from the company's technical board are organized. They choose some significant decision variables according to their requirements, given as the set $\dot{G} = \{\dot{\wp}_1, \dot{\wp}_2, \dot{\wp}_3, \dot{\wp}_4\}$, where:


We divide the attributes into sub-criteria under the effect of parameterizations. This categorizes the data and gives us a wide domain for the selection of truth and falsity grades for the alternatives to the corresponding decision variables. The categorization is given as follows:


Table 10 represents the sub-attributes of the listed criteria.



We developed two novel algorithms (Algorithms 1 and 2) for the selection of the best material handling equipment by using LDFSRSs. The flowchart diagram of both algorithms is given in Figure 3.

**Figure 3.** Flowchart diagram of Algorithms 1 and 2.

**Algorithm 1:** Selection of the best material handling equipment by using LDFSRSs.

### **Input:**


### **Construction:**


### **Calculation:**

5. Calculate the "LDFSR approximation operators" ð˜(B<sub>D</sub>) and ð˜(B<sub>D</sub>) as "lower and upper approximations" by using Definition 12.

6. By using Definition 13 of the ring sum operation, find the choice of LDFS <sup>ð</sup>˜>(B<sup>D</sup> ) <sup>⊕</sup> <sup>ð</sup>˜>(B<sup>D</sup> ).

### **Output:**

7. We use the definitions of score, quadratic score, and expectation score functions for LDFNs <sup>A</sup>¨<sup>D</sup> = (h˙*t*<sup>D</sup> , ˙ *f*<sup>D</sup> i,h*α*<sup>D</sup> , *β*<sup>D</sup> i) given in [54] and written respectively as:

$$\mathcal{L}_1(\ddot{\mathcal{A}}_{\mathcal{D}}) = \frac{1}{2}[(\dot{t}_{\mathcal{D}} - \dot{f}_{\mathcal{D}}) + (\alpha_{\mathcal{D}} - \beta_{\mathcal{D}})]$$

$$\mathcal{L}_2(\ddot{\mathcal{A}}_{\mathcal{D}}) = \frac{1}{2}[(\dot{t}^{2}_{\mathcal{D}} - \dot{f}^{2}_{\mathcal{D}}) + (\alpha^{2}_{\mathcal{D}} - \beta^{2}_{\mathcal{D}})]$$

$$\mathcal{L}_3(\ddot{\mathcal{A}}_{\mathcal{D}}) = \frac{1}{2}\left[\frac{\dot{t}_{\mathcal{D}} - \dot{f}_{\mathcal{D}} + 1}{2} + \frac{\alpha_{\mathcal{D}} - \beta_{\mathcal{D}} + 1}{2}\right]$$

of every alternative in ð˜(B<sub>D</sub>) ⊕ ð˜(B<sub>D</sub>).

8. Rank the alternatives by using calculated score values.

### **Final decision:**

9. Choose the alternative having the maximum score value.
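For concreteness, the score computations in steps 7–9 can be sketched in Python (the function and variable names are ours; the formulas are exactly the score functions $\mathcal{L}_1$, $\mathcal{L}_2$, and $\mathcal{L}_3$ from [54]):

```python
def score_l1(t, f, a, b):
    # Score function (SF) L1 of an LDFN ((t, f), (alpha, beta))
    return ((t - f) + (a - b)) / 2

def score_l2(t, f, a, b):
    # Quadratic score function (QSF) L2
    return ((t ** 2 - f ** 2) + (a ** 2 - b ** 2)) / 2

def score_l3(t, f, a, b):
    # Expectation score function (ESF) L3
    return ((t - f + 1) / 2 + (a - b + 1) / 2) / 2

def best_alternative(ldfns, score=score_l1):
    # ldfns: {name: (t, f, alpha, beta)}; step 9 picks the maximum-score alternative
    return max(ldfns, key=lambda g: score(*ldfns[g]))
```

The same three functions are reused unchanged in Algorithm 3, which differs from Algorithm 1 only in the approximation space.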


**Algorithm 2:** Selection of the best material handling equipment by using LDFSRSs.

### 4.1.1. Calculations by Using Algorithm 1

The indiscernibility relation is "the selection of the best material handling equipment". This relation can be represented by the LDFSR ð˜ from Q̈ to Ġ given in Table 11.


**Table 11.** LDFSR.

Thus, ð˜ is an LDFSR on Q̈ × Ġ. This relation gives us numeric values, in the form of LDFNs, for each alternative corresponding to every decision variable. For example, for the alternative G<sub>1</sub>, the decision variable ℘̇<sub>1</sub> ("Technical: convenience, maintainability, safety required") has the numeric value (⟨0.73, 0.41⟩, ⟨0.31, 0.13⟩). This value shows that the alternative G<sub>1</sub> is 73% technical and has a 41% falsity value for technicality. The pair ⟨0.31, 0.13⟩ represents the reference parameters for the truth and falsity grades, from which we can observe that the alternative G<sub>1</sub> is 31% highly technical and 13% of low technicality. These sub-criteria for the alternatives can be observed from Table 10. All the remaining values are constructed according to a similar pattern. We consider that experts give opinions about the attributes and rank them according to their requirements. We convert the verbal description into LDFS numeric values in the form of the LDFS B<sub>D</sub>. The set B<sub>D</sub> is an LDF-subset of Ġ, written as follows:

$$\begin{aligned} \mathcal{B}_{\mathcal{D}} = \{ & (\dot{\wp}_1, \langle 0.63, 0.41\rangle, \langle 0.31, 0.33\rangle), (\dot{\wp}_2, \langle 0.71, 0.51\rangle, \langle 0.41, 0.38\rangle), \\ & (\dot{\wp}_3, \langle 0.75, 0.63\rangle, \langle 0.51, 0.32\rangle), (\dot{\wp}_4, \langle 0.83, 0.51\rangle, \langle 0.41, 0.21\rangle)\}. \end{aligned}$$

We evaluate the "lower and upper approximations" of the LDFS B<sub>D</sub> on the LDFSR ð˜.


$$\begin{aligned} \tilde{\eth}_{\star}(\mathcal{B}_{\mathcal{D}}) \oplus \tilde{\eth}^{\#}(\mathcal{B}_{\mathcal{D}}) = \{ & (\mathcal{G}_1, \langle 0.900, 0.260\rangle, \langle 0.780, 0.140\rangle), (\mathcal{G}_2, \langle 0.890, 0.230\rangle, \langle 0.710, 0.190\rangle), \\ & (\mathcal{G}_3, \langle 0.890, 0.260\rangle, \langle 0.690, 0.220\rangle), (\mathcal{G}_4, \langle 0.900, 0.200\rangle, \langle 0.640, 0.220\rangle), \\ & (\mathcal{G}_5, \langle 0.937, 0.249\rangle, \langle 0.699, 0.185\rangle), (\mathcal{G}_6, \langle 0.892, 0.200\rangle, \langle 0.745, 0.220\rangle), \\ & (\mathcal{G}_7, \langle 0.890, 0.260\rangle, \langle 0.710, 0.190\rangle)\} \end{aligned}$$

Now, we calculate the score values, quadratic score values, and expectation score values of the alternatives in <sup>ð</sup>˜>(B<sup>D</sup> ) <sup>⊕</sup> <sup>ð</sup>˜>(B<sup>D</sup> ). The final ranking is given in Table 12.


**Table 12.** Ranking of alternatives for different score values.

From Table 12, we can observe that the alternative G<sup>1</sup> is most suitable for the final decision. The bar chart of the ranking results for alternatives is given in Figure 4.
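As a quick cross-check of this ranking (our script, not part of the paper), recomputing the score function $\mathcal{L}_1$ over the ring-sum values listed above confirms that G<sub>1</sub> attains the maximum:

```python
# Ring-sum LDFNs (t, f, alpha, beta) for each alternative, copied from the set above
ring_sum_values = {
    "G1": (0.900, 0.260, 0.780, 0.140),
    "G2": (0.890, 0.230, 0.710, 0.190),
    "G3": (0.890, 0.260, 0.690, 0.220),
    "G4": (0.900, 0.200, 0.640, 0.220),
    "G5": (0.937, 0.249, 0.699, 0.185),
    "G6": (0.892, 0.200, 0.745, 0.220),
    "G7": (0.890, 0.260, 0.710, 0.190),
}

def sf_l1(t, f, a, b):
    # score function L1 from [54]
    return ((t - f) + (a - b)) / 2

l1_scores = {g: sf_l1(*v) for g, v in ring_sum_values.items()}
best = max(l1_scores, key=l1_scores.get)  # maximum-score alternative
```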

**Figure 4.** Bar chart of alternatives under LDFSRS for SF L<sup>1</sup> , QSF L2, and ESF L3.

### 4.1.2. Calculations by Using Algorithm 2

In Algorithm 1, we use the input data in the form of linguistic terms as LDFNs. We only deal with the truth and falsity grades with their reference parameters, and we have no idea about the expert's opinion. Due to the lack of information, we have some uncertainty in our decision. This uncertainty can be removed by giving some weight to the expert's opinion. Therefore, we establish upper and lower reducts for all the experts one by one. The initial five steps of Algorithm 2 are the same as Algorithm 1. We will proceed next by constructing the upper and lower reducts from "upper and lower approximations" of LDFS for all the experts. Suppose that we have three experts from the company's technical committee given as:

- Expert X
- Expert Y
- Expert Z

The reducts from the approximations can be constructed as follows. We calculate the average of the expectation scores L<sub>3</sub> of all the alternatives; an alternative whose score L<sub>3</sub> is greater than or equal to the average is marked "YES", while one whose score is less than the average is rejected as "NO". The final decision is based on L<sup>c</sup> and L<sup>∗</sup>, given in Table 13.

**Table 13.** The criteria for the final decision (F.D).
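Our reading of this criterion, with L<sup>c</sup> taken as the expert's binary opinion (an assumption based on the layout of Tables 14–19: F.D is "YES" only when L<sup>c</sup> = 1 and the score clears the average), can be sketched in Python using the expert-Z lower-reduct values from Table 19:

```python
# Expectation scores L3 and the expert's binary opinion L^c (expert-Z lower reduct,
# Table 19); an alternative enters the reduct iff L^c = 1 and L3 >= average
l3 = {"G1": 0.645, "G2": 0.575, "G3": 0.557, "G4": 0.532,
      "G5": 0.577, "G6": 0.587, "G7": 0.532}
lc = {"G1": 1, "G2": 1, "G3": 0, "G4": 0, "G5": 1, "G6": 0, "G7": 1}

avg = sum(l3.values()) / len(l3)  # the 0.572 threshold used in Table 19
reduct = {g for g in l3 if lc[g] == 1 and l3[g] >= avg}
```

Note how G<sub>6</sub> is excluded despite a score above the average (L<sup>c</sup> = 0), and G<sub>7</sub> despite L<sup>c</sup> = 1 (score below the average), reproducing *L<sub>Z</sub>* = {G<sub>1</sub>, G<sub>2</sub>, G<sub>5</sub>}.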


For expert-X, the upper reduct of upper approximation ð˜>(B<sup>D</sup> ) (calculated in Algorithm 1) of LDFS <sup>B</sup><sup>D</sup> is given as Table 14. The average of the score values of all the alternatives for ð˜>(B<sup>D</sup> ) is 0.520.


**Table 14.** Upper reduct for expert-X (*UX*) from ð˜>(B<sup>D</sup> ).

This implies that *<sup>U</sup><sup>X</sup>* <sup>=</sup> {G5, <sup>G</sup>6, <sup>G</sup>7}. For expert-X, the lower reduct of lower approximation <sup>ð</sup>˜>(B<sup>D</sup> ) (calculated in Algorithm 1) of LDFS B<sup>D</sup> is given as Table 15. The average of the score values of all the alternatives for ð˜>(B<sup>D</sup> ) is 0.572.


**Table 15.** Lower reduct for expert-X (*LX*) from ð˜>(B<sup>D</sup> ).

This implies that *<sup>L</sup><sup>X</sup>* = {G<sup>1</sup> , <sup>G</sup>5, <sup>G</sup>6}. For expert-Y, the upper reduct of upper approximation <sup>ð</sup>˜>(B<sup>D</sup> ) (calculated in Algorithm 1) of LDFS B<sup>D</sup> is given as Table 16. The average of the score values of all the alternatives for ð˜>(B<sup>D</sup> ) is 0.520.


**Table 16.** Upper reduct for expert-Y (*UY*) from ð˜>(B<sup>D</sup> ).

This implies that *<sup>U</sup><sup>Y</sup>* = {G<sup>4</sup> , <sup>G</sup>5, <sup>G</sup>6}. For expert-Y, the lower reduct of lower approximation <sup>ð</sup>˜>(B<sup>D</sup> ) (calculated in Algorithm 1) of LDFS B<sup>D</sup> is given as Table 17. The average of the score values of all the alternatives for ð˜>(B<sup>D</sup> ) is 0.572.

**Table 17.** Lower reduct for expert-Y (*LY*) from ð˜>(B<sup>D</sup> ).


This implies that *<sup>L</sup><sup>Y</sup>* <sup>=</sup> {G2, <sup>G</sup>5, <sup>G</sup>6}. For expert-Z, the upper reduct of upper approximation <sup>ð</sup>˜>(B<sup>D</sup> ) (calculated in Algorithm 1) of LDFS B<sup>D</sup> is given as Table 18. The average of the score values of all the alternatives for ð˜>(B<sup>D</sup> ) is 0.520.


**Table 18.** Upper reduct for expert-Z (*UZ*) from ð˜>(B<sup>D</sup> ).

This implies that *U<sub>Z</sub>* = {G<sub>5</sub>, G<sub>7</sub>}. For expert-Z, the lower reduct of lower approximation ð˜(B<sub>D</sub>) (calculated in Algorithm 1) of LDFS B<sub>D</sub> is given as Table 19. The average of the score values of all the alternatives for ð˜(B<sub>D</sub>) is 0.572.

**Table 19.** Lower reduct for expert-Z (*L<sub>Z</sub>*) from ð˜(B<sub>D</sub>).

| (*L<sub>Z</sub>*) | T̈<sub>D</sub> | S̈<sub>D</sub> | α<sub>D</sub> | β<sub>D</sub> | L<sub>3</sub> | L<sup>c</sup> | L<sup>∗</sup> | F.D |
|---|---|---|---|---|---|---|---|---|
| G<sub>1</sub> | 0.63 | 0.51 | 0.69 | 0.23 | 0.645 | 1 | L<sub>3</sub> > 0.572 → YES | YES |
| G<sub>2</sub> | 0.63 | 0.41 | 0.41 | 0.33 | 0.575 | 1 | L<sub>3</sub> > 0.572 → YES | YES |
| G<sub>3</sub> | 0.63 | 0.51 | 0.49 | 0.38 | 0.557 | 0 | L<sub>3</sub> < 0.572 → NO | NO |
| G<sub>4</sub> | 0.63 | 0.51 | 0.39 | 0.38 | 0.532 | 0 | L<sub>3</sub> < 0.572 → NO | NO |
| G<sub>5</sub> | 0.63 | 0.49 | 0.49 | 0.32 | 0.577 | 1 | L<sub>3</sub> > 0.572 → YES | YES |
| G<sub>6</sub> | 0.63 | 0.49 | 0.59 | 0.38 | 0.587 | 0 | L<sub>3</sub> > 0.572 → YES | NO |
| G<sub>7</sub> | 0.63 | 0.63 | 0.51 | 0.38 | 0.532 | 1 | L<sub>3</sub> < 0.572 → NO | NO |

This implies that *<sup>L</sup><sup>Z</sup>* = {G<sup>1</sup> , G2, G5}. Now, we calculate the core set by taking the intersection of all upper and lower reducts for all three experts.

$$\text{core} = U_X \cap L_X \cap U_Y \cap L_Y \cap U_Z \cap L_Z = \{\mathcal{G}_5\}$$

This means that "G5" is the most suitable alternative for the final decision.
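The core computation is a plain intersection of the six reduct sets obtained above; in Python:

```python
# Upper and lower reducts for experts X, Y, Z (from Tables 14-19)
U_X, L_X = {"G5", "G6", "G7"}, {"G1", "G5", "G6"}
U_Y, L_Y = {"G4", "G5", "G6"}, {"G2", "G5", "G6"}
U_Z, L_Z = {"G5", "G7"}, {"G1", "G2", "G5"}

# core set: alternatives accepted by every expert in both reducts
core = U_X & L_X & U_Y & L_Y & U_Z & L_Z
```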

### *4.2. Selection of the Most Appropriate Material Handling Equipment by Using SRLDFSs*

Now, we use our second novel structure of SRLDFS and "crisp soft approximation space" for the selection of the most appropriate material handling equipment. We construct two novel algorithms (Algorithms 3 and 4) for the selection. The flowchart diagram of both algorithms is given in Figure 5.


**Algorithm 3:** Selection of the best material handling equipment by using SRLDFSs.

**Input:**

1. Input the reference set <sup>Q</sup>¨.

2. Input the assembling of attributes <sup>G</sup>˙ .

**Construction:**

3. According to the necessity of the DM, build a crisp soft relation <sup>A</sup>˜ over Q ×¨ <sup>G</sup>˙ .

4. Based on the needs of the decision maker, construct LDF-subset <sup>H</sup> of <sup>G</sup>˙ as an optimal normal decision set.

**Calculation:**

5. Calculate the "SRLDF approximation operators" <sup>A</sup>˜>(H) and <sup>A</sup>˜>(H) as "lower and upper approximations" by using Definition 9.

6. By using Definition 13 of the ring sum operation, find the choice of LDFS <sup>A</sup>˜>(H) <sup>⊕</sup> <sup>A</sup>˜>(H). **Output:**

7. We use the definitions of the score, quadratic score, and expectation score functions for LDFNs <sup>A</sup>¨<sup>D</sup> = (h˙*t*<sup>D</sup> , ˙ *<sup>f</sup>*<sup>D</sup> i,h*α*<sup>D</sup> , *<sup>β</sup>*<sup>D</sup> i) given in [54] and written respectively as:

$$\mathcal{L}_1(\ddot{\mathcal{A}}_{\mathcal{D}}) = \frac{1}{2}[(\dot{t}_{\mathcal{D}} - \dot{f}_{\mathcal{D}}) + (\alpha_{\mathcal{D}} - \beta_{\mathcal{D}})]$$

$$\mathcal{L}_2(\ddot{\mathcal{A}}_{\mathcal{D}}) = \frac{1}{2}[(\dot{t}^{2}_{\mathcal{D}} - \dot{f}^{2}_{\mathcal{D}}) + (\alpha^{2}_{\mathcal{D}} - \beta^{2}_{\mathcal{D}})]$$

$$\mathcal{L}_3(\ddot{\mathcal{A}}_{\mathcal{D}}) = \frac{1}{2}\left[\frac{\dot{t}_{\mathcal{D}} - \dot{f}_{\mathcal{D}} + 1}{2} + \frac{\alpha_{\mathcal{D}} - \beta_{\mathcal{D}} + 1}{2}\right]$$

of every alternative in <sup>A</sup>˜>(H) <sup>⊕</sup> <sup>A</sup>˜>(H).

8. Rank the alternatives by using calculated score values.

**Final decision:**

9. Select the object having the highest score value.

### **Algorithm 4:** Selection of the best material handling equipment by using SRLDFSs.

### **Input:**

1. Input the reference set <sup>Q</sup>¨.

2. Input the assembling of attributes <sup>G</sup>˙ .

### **Construction:**


normal decision set.

### **Calculation:**


### **Output:**

7. From calculated "2<sup>N</sup> " reducts, we get "2<sup>N</sup> " crisp subsets of the reference set <sup>Q</sup>¨. The subsets can be constructed by using the "YES" and "NO" logic. The only alternatives in the reduct having final decision "YES" will become the object of the crisp subset.

8. Calculate the core set by taking the intersection of all crisp subsets obtained from the calculated reducts.

### **Final decision:**

9. The alternatives in the core will be our choice for the final decision.

**Figure 5.** Flowchart diagram of Algorithms 3 and 4.

### 4.2.1. Calculations by Using Algorithm 3

We consider the indiscernibility relation "selection of best material handling equipment". This relation is represented as a crisp soft relation <sup>A</sup>˜ over Q ×¨ <sup>G</sup>˙ given as Table 20.



Thus, <sup>A</sup>˜ over Q ×¨ <sup>G</sup>˙ is a crisp soft relation. Table 20 shows that we have:

$$\begin{aligned} \tilde{A}_{s}(\mathcal{G}_{1}) &= \{\dot{\wp}_{3}, \dot{\wp}_{4}\} \\ \tilde{A}_{s}(\mathcal{G}_{2}) &= \{\dot{\wp}_{2}, \dot{\wp}_{3}\} \\ \tilde{A}_{s}(\mathcal{G}_{3}) &= \{\dot{\wp}_{1}, \dot{\wp}_{2}\} \\ \tilde{A}_{s}(\mathcal{G}_{4}) &= \{\dot{\wp}_{1}, \dot{\wp}_{2}, \dot{\wp}_{3}\} \\ \tilde{A}_{s}(\mathcal{G}_{5}) &= \{\dot{\wp}_{1}, \dot{\wp}_{2}, \dot{\wp}_{4}\} \\ \tilde{A}_{s}(\mathcal{G}_{6}) &= \{\dot{\wp}_{1}, \dot{\wp}_{4}\} \\ \tilde{A}_{s}(\mathcal{G}_{7}) &= \{\dot{\wp}_{2}, \dot{\wp}_{3}\} \end{aligned}$$

We consider that experts give some opinion about the attributes and rank them according to their requirements. We convert the verbal description into the LDFS numeric values in the form of LDFS H. The set <sup>H</sup> is the LDF-subset of <sup>G</sup>˙ and written as follows:

$$\begin{aligned} \mathcal{H} = \{ & (\dot{\wp}_1, \langle 0.63, 0.41\rangle, \langle 0.31, 0.33\rangle), (\dot{\wp}_2, \langle 0.71, 0.51\rangle, \langle 0.41, 0.38\rangle), \\ & (\dot{\wp}_3, \langle 0.75, 0.63\rangle, \langle 0.51, 0.32\rangle), (\dot{\wp}_4, \langle 0.83, 0.51\rangle, \langle 0.41, 0.21\rangle)\}. \end{aligned}$$

Now, we find "upper and lower approximations" of set <sup>H</sup> over the relation <sup>A</sup>˜ by using Definition 9 given as:

$$\begin{aligned} \tilde{A}^{\#}(\mathcal{H}) = \{ & (\mathcal{G}_1, \langle 0.83, 0.51\rangle, \langle 0.51, 0.21\rangle), (\mathcal{G}_2, \langle 0.75, 0.51\rangle, \langle 0.51, 0.32\rangle), (\mathcal{G}_3, \langle 0.71, 0.41\rangle, \langle 0.41, 0.33\rangle), \\ & (\mathcal{G}_4, \langle 0.75, 0.41\rangle, \langle 0.51, 0.32\rangle), (\mathcal{G}_5, \langle 0.83, 0.41\rangle, \langle 0.41, 0.21\rangle), (\mathcal{G}_6, \langle 0.83, 0.41\rangle, \langle 0.41, 0.21\rangle), \\ & (\mathcal{G}_7, \langle 0.75, 0.51\rangle, \langle 0.51, 0.32\rangle)\} \end{aligned}$$

$$\begin{aligned} \tilde{A}_{\star}(\mathcal{H}) = \{ & (\mathcal{G}_1, \langle 0.75, 0.63\rangle, \langle 0.41, 0.32\rangle), (\mathcal{G}_2, \langle 0.71, 0.63\rangle, \langle 0.41, 0.38\rangle), (\mathcal{G}_3, \langle 0.63, 0.51\rangle, \langle 0.31, 0.38\rangle), \\ & (\mathcal{G}_4, \langle 0.63, 0.63\rangle, \langle 0.31, 0.38\rangle), (\mathcal{G}_5, \langle 0.63, 0.51\rangle, \langle 0.31, 0.38\rangle), (\mathcal{G}_6, \langle 0.63, 0.51\rangle, \langle 0.31, 0.33\rangle), \\ & (\mathcal{G}_7, \langle 0.71, 0.63\rangle, \langle 0.41, 0.38\rangle)\} \end{aligned}$$
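These two approximations are simple min/max aggregations over Ã<sub>s</sub>(G). A Python sketch (keys `p1`–`p4` are our shorthand for ℘̇<sub>1</sub>–℘̇<sub>4</sub>; the min/max pattern follows the use of Definition 9 in the Appendix) reproduces the displayed values:

```python
# LDFNs (t, f, alpha, beta) of the optimal decision set H, keyed by attribute
H = {"p1": (0.63, 0.41, 0.31, 0.33), "p2": (0.71, 0.51, 0.41, 0.38),
     "p3": (0.75, 0.63, 0.51, 0.32), "p4": (0.83, 0.51, 0.41, 0.21)}
# A_s(G): attributes related to each alternative under the crisp soft relation
A_s = {"G1": ["p3", "p4"], "G2": ["p2", "p3"], "G3": ["p1", "p2"],
       "G4": ["p1", "p2", "p3"], "G5": ["p1", "p2", "p4"],
       "G6": ["p1", "p4"], "G7": ["p2", "p3"]}

def upper_approx(H, A_s):
    # upper: max over truth and alpha grades, min over falsity and beta grades
    return {g: (max(H[p][0] for p in ps), min(H[p][1] for p in ps),
                max(H[p][2] for p in ps), min(H[p][3] for p in ps))
            for g, ps in A_s.items()}

def lower_approx(H, A_s):
    # lower: min over truth and alpha grades, max over falsity and beta grades
    return {g: (min(H[p][0] for p in ps), max(H[p][1] for p in ps),
                min(H[p][2] for p in ps), max(H[p][3] for p in ps))
            for g, ps in A_s.items()}
```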

$$\begin{aligned} \tilde{A}_{\star}(\mathcal{H}) \oplus \tilde{A}^{\#}(\mathcal{H}) = \{ & (\mathcal{G}_1, \langle 0.957, 0.321\rangle, \langle 0.710, 0.067\rangle), (\mathcal{G}_2, \langle 0.927, 0.321\rangle, \langle 0.710, 0.121\rangle), \\ & (\mathcal{G}_3, \langle 0.892, 0.209\rangle, \langle 0.592, 0.125\rangle), (\mathcal{G}_4, \langle 0.907, 0.258\rangle, \langle 0.661, 0.121\rangle), \\ & (\mathcal{G}_5, \langle 0.937, 0.209\rangle, \langle 0.592, 0.079\rangle), (\mathcal{G}_6, \langle 0.937, 0.209\rangle, \langle 0.592, 0.069\rangle), \\ & (\mathcal{G}_7, \langle 0.927, 0.321\rangle, \langle 0.710, 0.121\rangle)\} \end{aligned}$$
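The displayed ring-sum values agree with the standard LDFN ring sum: probabilistic sum on the truth and α grades and product on the falsity and β grades. Since Definition 13 is not reproduced in this section, the formula below is our assumption, checked against the G<sub>1</sub> entry above:

```python
def ring_sum(x, y):
    # Assumed LDFN ring sum: probabilistic sum on t and alpha, product on f and beta
    t1, f1, a1, b1 = x
    t2, f2, a2, b2 = y
    return (t1 + t2 - t1 * t2, f1 * f2, a1 + a2 - a1 * a2, b1 * b2)

# lower and upper approximations of G1 from the displays above
low_g1 = (0.75, 0.63, 0.41, 0.32)
up_g1 = (0.83, 0.51, 0.51, 0.21)
t, f, a, b = ring_sum(low_g1, up_g1)
```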

Now, we calculate the score values, quadratic score values, and expectation score values of alternatives in <sup>A</sup>˜>(H) <sup>⊕</sup> <sup>A</sup>˜>(H). The calculated data with the final ranking is given in Table 21.



From Table 21, we can observe that the alternative G<sup>1</sup> is most suitable for the final decision. The bar chart of the ranking results for alternatives is given in Figure 6.

**Figure 6.** Bar chart of alternatives under SRLDFS for SF L<sup>1</sup> , QSF L2, and ESF L3.

### 4.2.2. Calculations by Using Algorithm 4


In this part, we establish upper and lower reducts for all the experts one by one. The initial five steps of Algorithm 4 are the same as Algorithm 3. We will proceed next by constructing the upper and lower reducts from the "upper and lower approximations" of LDFS for all the experts under "crisp soft approximation space". Suppose that we have three experts from the company's technical committee given as:

- Expert Ẋ
- Expert Ẏ
- Expert Ż
The characteristics and terms for finding the upper and lower reducts are the same as we used in Algorithm 2. Therefore, we directly calculate the reducts for experts.

For expert-*X*˙ , the upper reduct of upper approximation <sup>A</sup>˜>(H) (calculated in Algorithm 3) of LDFS <sup>H</sup> is given as Table 22. The average of the score values of all the alternatives for <sup>A</sup>˜>(H) is 0.629.


**Table 22.** Upper reduct for expert-*X*˙ , (*UX*˙ ) from <sup>A</sup>˜>(H).

This implies that (*UX*˙ ) = {G<sup>1</sup> , <sup>G</sup>5, <sup>G</sup>6}. For expert-*X*˙ , the lower reduct of lower approximation <sup>A</sup>˜>(H) (calculated in Algorithm 3) of LDFS <sup>H</sup> is given as Table 23. The average of the score values of all the alternatives for <sup>A</sup>˜>(H) is 0.519.


**Table 23.** Lower reduct for expert-*X*˙ (*LX*˙ ) from <sup>A</sup>˜>(H).

This implies that *<sup>L</sup>X*˙ = {G<sup>1</sup> , <sup>G</sup>2, <sup>G</sup>6}. For expert-*Y*˙ , the upper reduct of upper approximation <sup>A</sup>˜>(H) (calculated in Algorithm 3) of LDFS H is given as Table 24. The average of the score values of all the alternatives for <sup>A</sup>˜>(H) is 0.629.

**Table 24.** Upper reduct for expert-*Y*˙ , (*UY*˙) from <sup>A</sup>˜>(H).


This implies that (*UY*˙) = {G<sup>1</sup> , G<sup>4</sup> , <sup>G</sup>5}. For expert-*Y*˙ , the lower reduct of lower approximation <sup>A</sup>˜>(H) (calculated in Algorithm 3) of LDFS H is given as Table 25. The average of the score values of all the alternatives for <sup>A</sup>˜>(H) is 0.519.


**Table 25.** Lower reduct for expert-*Y*˙ (*LY*˙) from <sup>A</sup>˜>(H).

This implies that *<sup>L</sup>Y*˙ = {G<sup>1</sup> , <sup>G</sup>7}. For expert-*Z*˙ , the upper reduct of upper approximation <sup>A</sup>˜>(H) (calculated in Algorithm 3) of LDFS H is given as Table 26. The average of the score values of all the alternatives for <sup>A</sup>˜>(H) is 0.629.

**Table 26.** Upper reduct for expert-*Z*˙ , (*UZ*˙ ) from <sup>A</sup>˜>(H).


This implies that (*UZ*˙ ) = {G<sup>1</sup> , G<sup>4</sup> , <sup>G</sup>6}. For expert-*Z*˙ , the lower reduct of lower approximation <sup>A</sup>˜>(H) (calculated in Algorithm 3) of LDFS <sup>H</sup> is given as Table 27. The average of the score values of all the alternatives for <sup>A</sup>˜>(H) is 0.519.


**Table 27.** Lower reduct for expert-*Z*˙ (*LZ*˙ ) from <sup>A</sup>˜>(H).

This implies that *<sup>L</sup>Z*˙ = {G<sup>1</sup> , G2, G6, G7}.

Now, we calculate the core set by taking the intersection of all upper and lower reducts for all three experts.

$$\text{core} = U_{\dot{X}} \cap L_{\dot{X}} \cap U_{\dot{Y}} \cap L_{\dot{Y}} \cap U_{\dot{Z}} \cap L_{\dot{Z}} = \{\mathcal{G}_1\}$$

This means that "G1" is the most suitable alternative for the final decision.
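As before, the core is the intersection of the six reducts (Tables 22–27); a short check confirms it:

```python
from functools import reduce

# Upper and lower reducts for experts X., Y., Z. (Tables 22-27)
reducts = [{"G1", "G5", "G6"}, {"G1", "G2", "G6"}, {"G1", "G4", "G5"},
           {"G1", "G7"}, {"G1", "G4", "G6"}, {"G1", "G2", "G6", "G7"}]

core = reduce(set.intersection, reducts)  # fold the six sets into their common part
```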

### *4.3. Discussion, Comparison, and Symmetrical Analysis*

In this part, we compare our models with existing approaches and discuss the superiority, authenticity, symmetry, and validity of the proposed structures. The comparison with existing models is shown in Tables 28 and 29, which highlight the characteristics and limitations of several current theories. We observe that the presented models are superior and handle MCDM techniques efficiently.


**Table 28.** Comparison of LDFSRS and SRLDFS with the existing concepts.


**Table 29.** Comparison of LDFSRS and SRLDFS with the existing concepts.

We constructed four algorithms based on LDFSRSs, SRLDFSs, and their corresponding approximation spaces. The final results for the decision making problem of material handling equipment selection obtained from these algorithms are given in Table 30.


**Table 30.** Comparison of the results obtained from the proposed algorithms.

The superiority of the proposed models was discussed earlier by examining their degeneration into some existing rough set models (see Tables 5 and 9). The proposed algorithms are based on SRLDFSs and LDFSRSs and their approximation operators. Algorithms 1 and 3 are based on the structures with LDFN score values and provide information about the best and worst alternatives. Algorithms 2 and 4 focus on the core and reducts of the suggested structures; they also involve expert opinion and produce an outcome only for the essential alternatives, without offering a full comparison of all alternatives. Depending on the situation, each algorithm is essential and useful for real-life problems (see Tables 12 and 21).

By using different score functions and evaluating the reducts and core set, we check the behavior of the "upper and lower approximations". The final results of Algorithms 1, 3, and 4 are exactly the same, while the result of Algorithm 2 differs from the others. This difference is due to the different formulae and ordering strategies used in the proposed algorithms. Since three of the four algorithms produce the same decision, we select the alternative G<sub>1</sub> for the final decision. These structures demonstrate the symmetry of the findings and provide an appropriate, ideal approach to the problem of decision making.

**Validity test:** To demonstrate the validity and symmetry of the results, Wang and Triantaphyllou [36] constructed the following test criteria.

**Test Criterion 1:** "If we replace the rating values of a non-optimal alternative with those of a worse alternative, then the best alternative should not change, provided the relative weighted criteria remain unchanged."

**Test Criterion 2:** "The process should have a transitive nature."

**Test Criterion 3:** "When a given problem is decomposed into smaller ones and the same MCDM method is applied, then the combined ranking of the alternatives should be identical to the ranking of the un-decomposed problem."

When we test our results against these criteria, we find that they are correct and reliable and provide a satisfactory solution to the MCDM problem. Various researchers have used numerous techniques based on rough set theory and its hybrid structures to solve decision making problems (see [2,3,7,8,17–19,23,24,26,33–35]). Comparing with these approaches, we find that our proposed models are reliable, efficient, superior, symmetrical, and valid.

### **5. Conclusions**

There are two viewpoints in rough set theory: the constructive and the axiomatic approaches, and the same holds for LDFSRSs and SRLDFSs. This manuscript reflects both aspects. We combined the fundamental ingredients of rough sets, soft sets, and LDFSs to establish the proposed structures, and we provided findings for these models with accompanying illustrations. Many of the barriers to decision making in input datasets involve unclear, ambiguous, and imprecise details. Our models can handle these ambiguities better than fuzzy sets, IFSs, PFSs, q-ROFSs, and LDFSs due to their mathematical formulation, variations, symmetry, and novelty. We introduced several level cut sets of LDFSs and related the recommended approximation operators to these level cut relations. We established various illustrations and results based on the LDFSRS and SRLDFS approximation operators and the corresponding approximations based on level cut sets. We utilized two different approximation spaces to produce variety in the decision making results. We listed the results of the degeneration of the proposed operators and found that our proposed models generalize various existing rough set models. By using approximation spaces, score functions, upper and lower reducts, and core sets, we introduced four novel algorithms for the selection of sustainable material handling equipment. Depending on the situation, each algorithm is essential and useful for solving real-life problems. We briefly discussed the advantages and limitations of the proposed structures relative to some existing models (see Table 1). In the future, we will extend this research to topological spaces and solve MCDM problems based on the TOPSIS, VIKOR, and AHP families.

**Author Contributions:** M.R., M.R.H., H.K., D.P., and Y.-M.C. conceived of and worked together to achieve this manuscript; M.R., M.R.H., and D.P. constructed the ideas and algorithms for data analysis and designed the model of the manuscript; M.R.H., H.K., and Y.-M.C. processed the data collection and wrote the paper. All authors read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors are highly thankful to the Editor-in-Chief and referees for their valuable comments and suggestions for the improvement of our manuscript.

**Conflicts of Interest:** The authors declare that they have no conflict of interest.

### **Abbreviations**


### **Appendix A**

(1) From Definition 9, we can write that:

$$
\begin{aligned}
\underline{\widetilde{A}}(\sim Y_D) &= \{(\mathcal{G}, \langle \ddot{T}_{\underline{\widetilde{A}}(\sim Y_D)}(\mathcal{G}), \ddot{S}_{\underline{\widetilde{A}}(\sim Y_D)}(\mathcal{G})\rangle, \langle \alpha_{\underline{\widetilde{A}}(\sim Y_D)}(\mathcal{G}), \beta_{\underline{\widetilde{A}}(\sim Y_D)}(\mathcal{G})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{(\sim Y_D)}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{(\sim Y_D)}(\dot{\wp})\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{(\sim Y_D)}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{(\sim Y_D)}(\dot{\wp})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{Y_D}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{Y_D}(\dot{\wp})\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{Y_D}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{Y_D}(\dot{\wp})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \ddot{S}_{\overline{\widetilde{A}}(Y_D)}(\mathcal{G}), \ddot{T}_{\overline{\widetilde{A}}(Y_D)}(\mathcal{G})\rangle, \langle \beta_{\overline{\widetilde{A}}(Y_D)}(\mathcal{G}), \alpha_{\overline{\widetilde{A}}(Y_D)}(\mathcal{G})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \sim \overline{\widetilde{A}}(Y_D).
\end{aligned}
$$
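The duality above can also be checked numerically on a toy instance. The sketch below is illustrative only: it assumes each grade is stored as a plain 4-tuple (membership, non-membership, and the two reference parameters) and models the soft approximation space as a simple parameter-to-neighborhood map; all names (`Y`, `nbhd`, `lower`, `upper`, `complement`) are hypothetical and the sample values are chosen small enough to stay within the LDFS constraints.

```python
# Toy check of the duality  lower(~Y) = ~upper(Y)  for LDFS-style grades.
# Each object carries a 4-tuple (T, S, alpha, beta); values are illustrative.
Y = {
    0: (0.40, 0.30, 0.20, 0.10),
    1: (0.10, 0.50, 0.30, 0.40),
    2: (0.35, 0.20, 0.25, 0.15),
    3: (0.25, 0.45, 0.10, 0.30),
}

# Soft approximation space: each parameter G maps to a neighborhood of objects.
nbhd = {"g1": [0, 1], "g2": [1, 2, 3], "g3": [0, 3]}

def complement(X):
    # LDFS complement swaps T with S and alpha with beta.
    return {k: (S, T, b, a) for k, (T, S, a, b) in X.items()}

def lower(X):
    # Lower approximation: min on T and alpha, max on S and beta per neighborhood.
    return {G: (min(X[p][0] for p in ps), max(X[p][1] for p in ps),
                min(X[p][2] for p in ps), max(X[p][3] for p in ps))
            for G, ps in nbhd.items()}

def upper(X):
    # Upper approximation: max on T and alpha, min on S and beta per neighborhood.
    return {G: (max(X[p][0] for p in ps), min(X[p][1] for p in ps),
                max(X[p][2] for p in ps), min(X[p][3] for p in ps))
            for G, ps in nbhd.items()}

# The duality proved above: the lower approximation of the complement
# coincides with the complement of the upper approximation.
assert lower(complement(Y)) == complement(upper(Y))
```

Because `complement` acts componentwise, the symmetric identity (the upper approximation of the complement equals the complement of the lower approximation) follows by the same swap and can be asserted the same way.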


From Definition 9, we can write that:

$$
\begin{aligned}
\underline{\widetilde{A}}(Y_D \cap B_D) &= \{(\mathcal{G}, \langle \ddot{T}_{\underline{\widetilde{A}}(Y_D \cap B_D)}(\mathcal{G}), \ddot{S}_{\underline{\widetilde{A}}(Y_D \cap B_D)}(\mathcal{G})\rangle, \langle \alpha_{\underline{\widetilde{A}}(Y_D \cap B_D)}(\mathcal{G}), \beta_{\underline{\widetilde{A}}(Y_D \cap B_D)}(\mathcal{G})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{(Y_D \cap B_D)}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{(Y_D \cap B_D)}(\dot{\wp})\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{(Y_D \cap B_D)}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{(Y_D \cap B_D)}(\dot{\wp})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\ddot{T}_{Y_D}(\dot{\wp}) \wedge \ddot{T}_{B_D}(\dot{\wp})), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\ddot{S}_{Y_D}(\dot{\wp}) \vee \ddot{S}_{B_D}(\dot{\wp}))\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\alpha_{Y_D}(\dot{\wp}) \wedge \alpha_{B_D}(\dot{\wp})), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\beta_{Y_D}(\dot{\wp}) \vee \beta_{B_D}(\dot{\wp}))\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{Y_D}(\dot{\wp}) \wedge \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{B_D}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{Y_D}(\dot{\wp}) \vee \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{B_D}(\dot{\wp})\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{Y_D}(\dot{\wp}) \wedge \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{B_D}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{Y_D}(\dot{\wp}) \vee \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{B_D}(\dot{\wp})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \ddot{T}_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \wedge \ddot{T}_{\underline{\widetilde{A}}(B_D)}(\mathcal{G}), \ddot{S}_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \vee \ddot{S}_{\underline{\widetilde{A}}(B_D)}(\mathcal{G})\rangle, \langle \alpha_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \wedge \alpha_{\underline{\widetilde{A}}(B_D)}(\mathcal{G}), \beta_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \vee \beta_{\underline{\widetilde{A}}(B_D)}(\mathcal{G})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \underline{\widetilde{A}}(Y_D) \cap \underline{\widetilde{A}}(B_D).
\end{aligned}
$$

Thus, $\underline{\widetilde{A}}(Y_D \cap B_D) = \underline{\widetilde{A}}(Y_D) \cap \underline{\widetilde{A}}(B_D)$.

(4) From Definition 9, we can write that:

$$
\begin{aligned}
\underline{\widetilde{A}}(Y_D \cup B_D) &= \{(\mathcal{G}, \langle \ddot{T}_{\underline{\widetilde{A}}(Y_D \cup B_D)}(\mathcal{G}), \ddot{S}_{\underline{\widetilde{A}}(Y_D \cup B_D)}(\mathcal{G})\rangle, \langle \alpha_{\underline{\widetilde{A}}(Y_D \cup B_D)}(\mathcal{G}), \beta_{\underline{\widetilde{A}}(Y_D \cup B_D)}(\mathcal{G})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{(Y_D \cup B_D)}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{(Y_D \cup B_D)}(\dot{\wp})\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{(Y_D \cup B_D)}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{(Y_D \cup B_D)}(\dot{\wp})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\ddot{T}_{Y_D}(\dot{\wp}) \vee \ddot{T}_{B_D}(\dot{\wp})), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\ddot{S}_{Y_D}(\dot{\wp}) \wedge \ddot{S}_{B_D}(\dot{\wp}))\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\alpha_{Y_D}(\dot{\wp}) \vee \alpha_{B_D}(\dot{\wp})), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} (\beta_{Y_D}(\dot{\wp}) \wedge \beta_{B_D}(\dot{\wp}))\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&\supseteq \{(\mathcal{G}, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{Y_D}(\dot{\wp}) \vee \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{T}_{B_D}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{Y_D}(\dot{\wp}) \wedge \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \ddot{S}_{B_D}(\dot{\wp})\rangle, \langle \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{Y_D}(\dot{\wp}) \vee \min_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \alpha_{B_D}(\dot{\wp}), \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{Y_D}(\dot{\wp}) \wedge \max_{\dot{\wp} \in \widetilde{A}_s(\mathcal{G})} \beta_{B_D}(\dot{\wp})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \{(\mathcal{G}, \langle \ddot{T}_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \vee \ddot{T}_{\underline{\widetilde{A}}(B_D)}(\mathcal{G}), \ddot{S}_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \wedge \ddot{S}_{\underline{\widetilde{A}}(B_D)}(\mathcal{G})\rangle, \langle \alpha_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \vee \alpha_{\underline{\widetilde{A}}(B_D)}(\mathcal{G}), \beta_{\underline{\widetilde{A}}(Y_D)}(\mathcal{G}) \wedge \beta_{\underline{\widetilde{A}}(B_D)}(\mathcal{G})\rangle) : \mathcal{G} \in \ddot{Q}\} \\
&= \underline{\widetilde{A}}(Y_D) \cup \underline{\widetilde{A}}(B_D).
\end{aligned}
$$

Thus, $\underline{\widetilde{A}}(Y_D \cup B_D) \supseteq \underline{\widetilde{A}}(Y_D) \cup \underline{\widetilde{A}}(B_D)$. The remaining axioms can be proven by similar arguments.
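The intersection equality and the union containment proved above can likewise be verified on a small numerical instance. The sketch below is illustrative only: it assumes grades are plain 4-tuples (membership, non-membership, and the two reference parameters) and a simple parameter-to-neighborhood map for the soft approximation space; all names (`Y`, `B`, `nbhd`, `lower`, `meet`, `join`, `includes`) are hypothetical.

```python
# Toy check of  lower(Y ∩ B) = lower(Y) ∩ lower(B)  and
# lower(Y ∪ B) ⊇ lower(Y) ∪ lower(B)  for LDFS-style grades (T, S, alpha, beta).
Y = {0: (0.40, 0.30, 0.20, 0.10), 1: (0.10, 0.50, 0.30, 0.40),
     2: (0.35, 0.20, 0.25, 0.15), 3: (0.25, 0.45, 0.10, 0.30)}
B = {0: (0.20, 0.40, 0.15, 0.25), 1: (0.45, 0.15, 0.35, 0.10),
     2: (0.05, 0.55, 0.20, 0.35), 3: (0.30, 0.25, 0.30, 0.20)}

# Soft approximation space: each parameter G maps to a neighborhood of objects.
nbhd = {"g1": [0, 1], "g2": [1, 2, 3], "g3": [0, 3]}

def lower(X):
    # Lower approximation: min on T and alpha, max on S and beta per neighborhood.
    return {G: (min(X[p][0] for p in ps), max(X[p][1] for p in ps),
                min(X[p][2] for p in ps), max(X[p][3] for p in ps))
            for G, ps in nbhd.items()}

def meet(X, Z):
    # LDFS intersection: pointwise min on T, alpha and max on S, beta.
    return {k: (min(X[k][0], Z[k][0]), max(X[k][1], Z[k][1]),
                min(X[k][2], Z[k][2]), max(X[k][3], Z[k][3])) for k in X}

def join(X, Z):
    # LDFS union: pointwise max on T, alpha and min on S, beta.
    return {k: (max(X[k][0], Z[k][0]), min(X[k][1], Z[k][1]),
                max(X[k][2], Z[k][2]), min(X[k][3], Z[k][3])) for k in X}

def includes(big, small):
    # big ⊇ small: larger T, alpha and smaller S, beta on every parameter.
    return all(big[k][0] >= small[k][0] and big[k][1] <= small[k][1] and
               big[k][2] >= small[k][2] and big[k][3] <= small[k][3] for k in big)

# Equality for intersections, but only containment for unions, as proved above.
assert lower(meet(Y, B)) == meet(lower(Y), lower(B))
assert includes(lower(join(Y, B)), join(lower(Y), lower(B)))
```

The asymmetry mirrors the proofs: min distributes exactly over the pointwise minimum used in the intersection, whereas for the union only the inequality min(a ∨ b) ≥ (min a) ∨ (min b) is available, which yields containment rather than equality.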

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

