**Proof.**


(1)

$$\begin{split} \widetilde{\mathbb{C}}(A') &= \{ \langle x, \vee_{y \in U} [T_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge T_{A'}(y)], \vee_{y \in U} [I_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge I_{A'}(y)], \wedge_{y \in U} [F_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \vee F_{A'}(y)] \rangle : x \in U \} \\ &= \{ \langle x, \vee_{y \in U} [T_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge F_{A}(y)], \vee_{y \in U} [I_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge (1 - I_{A}(y))], \wedge_{y \in U} [F_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \vee T_{A}(y)] \rangle : x \in U \} \\ &= (\underline{\mathbb{C}}(A))'. \end{split}$$

Replacing $A$ by $A'$ in this argument also proves $\underline{\mathbb{C}}(A') = (\widetilde{\mathbb{C}}(A))'$. (2) Since $A \subseteq B$, we have $T_A(x) \leq T_B(x)$, $I_B(x) \leq I_A(x)$ and $F_B(x) \leq F_A(x)$ for all $x \in U$. Therefore,

$$\begin{split} T_{\underline{\mathbb{C}}(A)}(x) &= \wedge_{y \in U}[F_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \vee T_{A}(y)] \leq \wedge_{y \in U}[F_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \vee T_{B}(y)] = T_{\underline{\mathbb{C}}(B)}(x), \\ I_{\underline{\mathbb{C}}(A)}(x) &= \wedge_{y \in U}[(1 - I_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y)) \vee I_{A}(y)] \geq \wedge_{y \in U}[(1 - I_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y)) \vee I_{B}(y)] = I_{\underline{\mathbb{C}}(B)}(x), \\ F_{\underline{\mathbb{C}}(A)}(x) &= \vee_{y \in U}[T_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge F_{A}(y)] \geq \vee_{y \in U}[T_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge F_{B}(y)] = F_{\underline{\mathbb{C}}(B)}(x). \end{split}$$

Hence, $\underline{\mathbb{C}}(A) \subseteq \underline{\mathbb{C}}(B)$. In the same way, $\widetilde{\mathbb{C}}(A) \subseteq \widetilde{\mathbb{C}}(B)$.

(3) Since $T_{A \cap B}(y) = T_A(y) \wedge T_B(y)$, $I_{A \cap B}(y) = I_A(y) \vee I_B(y)$ and $F_{A \cap B}(y) = F_A(y) \vee F_B(y)$, the distributivity of $\vee$ over $\wedge$ (and of $\wedge$ over $\vee$) gives

$$\begin{split} \underline{\mathbb{C}}(A \cap B) &= \{ \langle x, \wedge_{y \in U}[F_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \vee T_{A \cap B}(y)], \wedge_{y \in U}[(1 - I_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y)) \vee I_{A \cap B}(y)], \vee_{y \in U}[T_{\widetilde{\mathbb{N}}_{x}^{\beta}}(y) \wedge F_{A \cap B}(y)] \rangle : x \in U \} \\ &= \underline{\mathbb{C}}(A) \cap \underline{\mathbb{C}}(B). \end{split}$$

Similarly, we can obtain $\widetilde{\mathbb{C}}(A \cup B) = \widetilde{\mathbb{C}}(A) \cup \widetilde{\mathbb{C}}(B)$. (4) Since $A \subseteq A \cup B$, $B \subseteq A \cup B$, $A \cap B \subseteq A$ and $A \cap B \subseteq B$, it follows from (2) that

$$\underline{\mathbb{C}}(A) \subseteq \underline{\mathbb{C}}(A \cup B), \ \underline{\mathbb{C}}(B) \subseteq \underline{\mathbb{C}}(A \cup B), \ \widetilde{\mathbb{C}}(A \cap B) \subseteq \widetilde{\mathbb{C}}(A) \text{ and } \widetilde{\mathbb{C}}(A \cap B) \subseteq \widetilde{\mathbb{C}}(B).$$

Hence, $\underline{\mathbb{C}}(A \cup B) \supseteq \underline{\mathbb{C}}(A) \cup \underline{\mathbb{C}}(B)$ and $\widetilde{\mathbb{C}}(A \cap B) \subseteq \widetilde{\mathbb{C}}(A) \cap \widetilde{\mathbb{C}}(B)$. $\Box$

We now propose a second SVN covering rough set model, which concerns the crisp lower and upper approximations of crisp sets in the SVN environment.

**Definition 7.** *Let* $(U, \widehat{\mathbf{C}})$ *be a SVN β-covering approximation space. For each crisp subset* $X \in P(U)$ *(*$P(U)$ *is the power set of* $U$*), we define the SVN covering upper approximation* $\overline{\mathbb{C}}(X)$ *and lower approximation* $\underline{\mathbb{C}}(X)$ *of* $X$ *as:*

$$\begin{aligned} \overline{\mathbb{C}}(X) &= \{ x \in U : \overline{\mathbb{N}}_{x}^{\beta} \cap X \neq \emptyset \}, \\ \underline{\mathbb{C}}(X) &= \{ x \in U : \overline{\mathbb{N}}_{x}^{\beta} \subseteq X \}. \end{aligned} \tag{6}$$

If $\overline{\mathbb{C}}(X) \neq \underline{\mathbb{C}}(X)$, then $X$ is called the second type of SVN covering rough set.

**Example 4** (Continued from Example 2)**.** *Let* $\beta = \langle 0.5, 0.3, 0.8 \rangle$*,* $X = \{x_1, x_2\}$*,* $Y = \{x_2, x_4, x_5\}$*. Then,*

$$\begin{aligned} \overline{\mathbb{C}}(X) &= \{ x_1, x_2, x_4 \}, \ \underline{\mathbb{C}}(X) = \{ x_1, x_2 \}, \\ \overline{\mathbb{C}}(Y) &= \{ x_1, x_2, x_3, x_4, x_5 \}, \ \underline{\mathbb{C}}(Y) = \{ x_2, x_4, x_5 \}, \\ \overline{\mathbb{C}}(U) &= U, \ \underline{\mathbb{C}}(U) = U, \ \overline{\mathbb{C}}(\emptyset) = \emptyset, \ \underline{\mathbb{C}}(\emptyset) = \emptyset. \end{aligned}$$
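The results above can be checked with a small sketch. It assumes the crisp β-neighborhoods below (written as sets of indices 1..5) are the ones induced in Example 2; they agree with the 0/1 matrix of Example 8.

```python
# Computational check of Definition 7 on the data of Example 4.
# Assumption: these crisp beta-neighborhoods are the ones of Example 2
# (they match the 0/1 matrix of Example 8).
NEIGH = {1: {1, 2}, 2: {2}, 3: {3, 5}, 4: {2, 4}, 5: {5}}

def upper_approx(X):
    # x lies in the upper approximation iff its neighborhood meets X
    return {x for x, n in NEIGH.items() if n & X}

def lower_approx(X):
    # x lies in the lower approximation iff its neighborhood is contained in X
    return {x for x, n in NEIGH.items() if n <= X}

X = {1, 2}
Y = {2, 4, 5}
```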

**Proposition 5.** *Let* $\widehat{\mathbf{C}}$ *be a SVN β-covering of U. Then, the SVN covering upper and lower approximation operators in Definition 7 satisfy the following properties: for all* $X, Y \in P(U)$*,*

(1) $\overline{\mathbb{C}}(\emptyset) = \emptyset$, $\overline{\mathbb{C}}(U) = U$.
(2) $\underline{\mathbb{C}}(U) = U$, $\underline{\mathbb{C}}(\emptyset) = \emptyset$.
(3) $\overline{\mathbb{C}}(X') = (\underline{\mathbb{C}}(X))'$, $\underline{\mathbb{C}}(X') = (\overline{\mathbb{C}}(X))'$.
(4) *If* $X \subseteq Y$*, then* $\overline{\mathbb{C}}(X) \subseteq \overline{\mathbb{C}}(Y)$ *and* $\underline{\mathbb{C}}(X) \subseteq \underline{\mathbb{C}}(Y)$.
(5) $\underline{\mathbb{C}}(X \cap Y) = \underline{\mathbb{C}}(X) \cap \underline{\mathbb{C}}(Y)$, $\overline{\mathbb{C}}(X \cup Y) = \overline{\mathbb{C}}(X) \cup \overline{\mathbb{C}}(Y)$.
(6) $\underline{\mathbb{C}}(X \cup Y) \supseteq \underline{\mathbb{C}}(X) \cup \underline{\mathbb{C}}(Y)$, $\overline{\mathbb{C}}(X \cap Y) \subseteq \overline{\mathbb{C}}(X) \cap \overline{\mathbb{C}}(Y)$.
(7) $\underline{\mathbb{C}}(\underline{\mathbb{C}}(X)) \subseteq \underline{\mathbb{C}}(X)$, $\overline{\mathbb{C}}(\overline{\mathbb{C}}(X)) \supseteq \overline{\mathbb{C}}(X)$.
(8) $\underline{\mathbb{C}}(X) \subseteq X \subseteq \overline{\mathbb{C}}(X)$.
(9) $X \subseteq Y$ *or* $Y \subseteq X$ $\Leftrightarrow$ $\overline{\mathbb{C}}(X \cap Y) = \overline{\mathbb{C}}(X) \cap \overline{\mathbb{C}}(Y)$, $\underline{\mathbb{C}}(X \cup Y) = \underline{\mathbb{C}}(X) \cup \underline{\mathbb{C}}(Y)$.

**Proof.** This follows directly from Definitions 5 and 7. $\Box$

#### **5. Matrix Representations of These Single Valued Neutrosophic Covering Rough Set Models**

In this section, matrix representations of the proposed SVN covering rough set models are investigated. First, some new matrices and matrix operations are presented. Then, we show the matrix representations of the SVN approximation operators defined in Definitions 6 and 7. Throughout, a fixed order of the elements of *U* is assumed.

**Definition 8.** *Let* $\widehat{\mathbf{C}}$ *be a SVN β-covering of U with* $U = \{x_1, x_2, \cdots, x_n\}$ *and* $\widehat{\mathbf{C}} = \{C_1, C_2, \cdots, C_m\}$*. Then,* $M_{\widehat{\mathbf{C}}} = (C_j(x_i))_{n \times m}$ *is named a matrix representation of* $\widehat{\mathbf{C}}$*, and* $M_{\widehat{\mathbf{C}}}^{\beta} = (s_{ij})_{n \times m}$ *is called a β-matrix representation of* $\widehat{\mathbf{C}}$*, where*

$$s_{ij} = \begin{cases} 1, & C_j(x_i) \geq \beta; \\ 0, & \text{otherwise.} \end{cases}$$

**Example 5** (Continued from Example 1)**.** *Let* $\beta = \langle 0.5, 0.3, 0.8 \rangle$*.*

$$
M_{\widehat{\mathbf{C}}} = \begin{pmatrix} \langle 0.7, 0.2, 0.5 \rangle & \langle 0.6, 0.2, 0.4 \rangle & \langle 0.4, 0.1, 0.5 \rangle & \langle 0.1, 0.5, 0.6 \rangle \\ \langle 0.5, 0.3, 0.2 \rangle & \langle 0.5, 0.2, 0.8 \rangle & \langle 0.4, 0.5, 0.4 \rangle & \langle 0.6, 0.1, 0.7 \rangle \\ \langle 0.4, 0.5, 0.2 \rangle & \langle 0.2, 0.3, 0.6 \rangle & \langle 0.5, 0.2, 0.4 \rangle & \langle 0.6, 0.3, 0.4 \rangle \\ \langle 0.6, 0.1, 0.7 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.6, 0.5 \rangle & \langle 0.5, 0.3, 0.2 \rangle \\ \langle 0.3, 0.2, 0.6 \rangle & \langle 0.7, 0.3, 0.5 \rangle & \langle 0.6, 0.3, 0.5 \rangle & \langle 0.8, 0.1, 0.2 \rangle \\ \end{pmatrix}, \\
M_{\widehat{\mathbf{C}}}^{\beta} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix}.
$$
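The β-matrix of Definition 8 can be derived mechanically from the SVN matrix. A minimal sketch on the Example 5 data follows; it assumes the usual componentwise SVN order, $\langle T_1, I_1, F_1 \rangle \geq \langle T_2, I_2, F_2 \rangle$ iff $T_1 \geq T_2$, $I_1 \leq I_2$ and $F_1 \leq F_2$.

```python
# Sketch of Definition 8: beta-matrix representation from the SVN matrix of
# Example 5. Assumption: the componentwise SVN order described in the text.
M_C = [
    [(0.7, 0.2, 0.5), (0.6, 0.2, 0.4), (0.4, 0.1, 0.5), (0.1, 0.5, 0.6)],
    [(0.5, 0.3, 0.2), (0.5, 0.2, 0.8), (0.4, 0.5, 0.4), (0.6, 0.1, 0.7)],
    [(0.4, 0.5, 0.2), (0.2, 0.3, 0.6), (0.5, 0.2, 0.4), (0.6, 0.3, 0.4)],
    [(0.6, 0.1, 0.7), (0.4, 0.5, 0.7), (0.3, 0.6, 0.5), (0.5, 0.3, 0.2)],
    [(0.3, 0.2, 0.6), (0.7, 0.3, 0.5), (0.6, 0.3, 0.5), (0.8, 0.1, 0.2)],
]
BETA = (0.5, 0.3, 0.8)

def svn_geq(a, b):
    """Componentwise order: larger truth, smaller indeterminacy and falsity."""
    return a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]

M_beta = [[1 if svn_geq(c, BETA) else 0 for c in row] for row in M_C]
```

The result reproduces the 0/1 matrix shown above.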

**Definition 9.** *Let* $A = (a_{ik})_{n \times m}$ *and* $B = (\langle b_{kj}^{+}, b_{kj}, b_{kj}^{-} \rangle)_{m \times l}$ *be two matrices. We define* $D = A * B = (\langle d_{ij}^{+}, d_{ij}, d_{ij}^{-} \rangle)_{n \times l}$*, where*

$$\langle d_{ij}^{+}, d_{ij}, d_{ij}^{-} \rangle = \langle \wedge_{k=1}^{m} [(1 - a_{ik}) \vee b_{kj}^{+}], 1 - \wedge_{k=1}^{m} [(1 - a_{ik}) \vee (1 - b_{kj})], 1 - \wedge_{k=1}^{m} [(1 - a_{ik}) \vee (1 - b_{kj}^{-})] \rangle. \tag{7}$$
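Equation (7) can be sketched directly, reading $\wedge$ as min and $\vee$ as max; $A$ is an $n \times m$ 0/1 matrix and $B$ an $m \times l$ matrix of SVN triples. The test entry below is checked against the $\langle 0.6, 0.2, 0.5 \rangle$ value of Example 6.

```python
# Sketch of the "*" product of Definition 9 (Eq. (7)).
def star(A, B):
    n, m, l = len(A), len(B), len(B[0])
    D = [[None] * l for _ in range(n)]
    for i in range(n):
        for j in range(l):
            d_pos = min(max(1 - A[i][k], B[k][j][0]) for k in range(m))
            d_mid = 1 - min(max(1 - A[i][k], 1 - B[k][j][1]) for k in range(m))
            d_neg = 1 - min(max(1 - A[i][k], 1 - B[k][j][2]) for k in range(m))
            D[i][j] = (d_pos, d_mid, d_neg)
    return D

# First row of the beta-matrix of Example 5 against the first column of the
# transposed SVN matrix, i.e., the values C_k(x_1):
D = star([[1, 1, 0, 0]],
         [[(0.7, 0.2, 0.5)], [(0.6, 0.2, 0.4)],
          [(0.4, 0.1, 0.5)], [(0.1, 0.5, 0.6)]])
```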

Based on Definitions 8 and 9, all $\widetilde{\mathbb{N}}_{x}^{\beta}$ for $x \in U$ can be obtained by matrix operations.

**Proposition 6.** *Let* $\widehat{\mathbf{C}}$ *be a SVN β-covering of U with* $U = \{x_1, x_2, \cdots, x_n\}$ *and* $\widehat{\mathbf{C}} = \{C_1, C_2, \cdots, C_m\}$*. Then,*

$$M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T} = (\widetilde{\mathbb{N}}_{x_i}^{\beta}(x_j))_{1 \leq i \leq n, 1 \leq j \leq n}, \tag{8}$$

*where* $M_{\widehat{\mathbf{C}}}^{T}$ *is the transpose of* $M_{\widehat{\mathbf{C}}}$*.*

**Proof.** Suppose $M_{\widehat{\mathbf{C}}}^{T} = (C_k(x_j))_{m \times n}$, $M_{\widehat{\mathbf{C}}}^{\beta} = (s_{ik})_{n \times m}$ and $M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T} = (\langle d_{ij}^{+}, d_{ij}, d_{ij}^{-} \rangle)_{1 \leq i \leq n, 1 \leq j \leq n}$. Since $\widehat{\mathbf{C}}$ is a SVN β-covering of $U$, for each $i$ ($1 \leq i \leq n$) there exists $k$ ($1 \leq k \leq m$) such that $s_{ik} = 1$. Then,

$$\begin{split} \langle d_{ij}^{+}, d_{ij}, d_{ij}^{-} \rangle &= \langle \wedge_{k=1}^{m}[(1 - s_{ik}) \vee T_{C_k}(x_j)], 1 - \wedge_{k=1}^{m}[(1 - s_{ik}) \vee (1 - I_{C_k}(x_j))], 1 - \wedge_{k=1}^{m}[(1 - s_{ik}) \vee (1 - F_{C_k}(x_j))] \rangle \\ &= \langle \wedge_{s_{ik}=1} T_{C_k}(x_j), 1 - \wedge_{s_{ik}=1}(1 - I_{C_k}(x_j)), 1 - \wedge_{s_{ik}=1}(1 - F_{C_k}(x_j)) \rangle \\ &= \langle \wedge_{C_k(x_i) \geq \beta} T_{C_k}(x_j), \vee_{C_k(x_i) \geq \beta} I_{C_k}(x_j), \vee_{C_k(x_i) \geq \beta} F_{C_k}(x_j) \rangle \\ &= (\cap_{C_k(x_i) \geq \beta} C_k)(x_j) \\ &= \widetilde{\mathbb{N}}_{x_i}^{\beta}(x_j), \quad 1 \leq i, j \leq n. \end{split}$$

Hence, $M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T} = (\widetilde{\mathbb{N}}_{x_i}^{\beta}(x_j))_{1 \leq i \leq n, 1 \leq j \leq n}$. $\Box$

**Example 6** (Continued from Example 1)**.**

$$\begin{split} M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T} &= \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix} * \begin{pmatrix} \langle 0.7, 0.2, 0.5 \rangle & \langle 0.5, 0.3, 0.2 \rangle & \langle 0.4, 0.5, 0.2 \rangle & \langle 0.6, 0.1, 0.7 \rangle & \langle 0.3, 0.2, 0.6 \rangle \\ \langle 0.6, 0.2, 0.4 \rangle & \langle 0.5, 0.2, 0.8 \rangle & \langle 0.2, 0.3, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.7, 0.3, 0.5 \rangle \\ \langle 0.4, 0.1, 0.5 \rangle & \langle 0.4, 0.5, 0.4 \rangle & \langle 0.5, 0.2, 0.4 \rangle & \langle 0.3, 0.6, 0.5 \rangle & \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.6, 0.1, 0.7 \rangle & \langle 0.6, 0.3, 0.4 \rangle & \langle 0.5, 0.3, 0.2 \rangle & \langle 0.8, 0.1, 0.2 \rangle \end{pmatrix} \\ &= \begin{pmatrix} \langle 0.6, 0.2, 0.5 \rangle & \langle 0.5, 0.3, 0.8 \rangle & \langle 0.2, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.3, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.5, 0.3, 0.8 \rangle & \langle 0.2, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.3, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.5, 0.3, 0.4 \rangle & \langle 0.3, 0.6, 0.5 \rangle & \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.4, 0.5, 0.4 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.3, 0.2, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.8 \rangle & \langle 0.2, 0.3, 0.6 \rangle & \langle 0.3, 0.6, 0.7 \rangle & \langle 0.6, 0.3, 0.5 \rangle \end{pmatrix} \\ &= (\widetilde{\mathbb{N}}_{x_i}^{\beta}(x_j))_{1 \leq i \leq 5, 1 \leq j \leq 5}. \end{split}$$

**Definition 10.** *Let* $A = (\langle c_{ij}^{+}, c_{ij}, c_{ij}^{-} \rangle)_{m \times n}$ *and* $B = (\langle d_{j}^{+}, d_{j}, d_{j}^{-} \rangle)_{n \times 1}$ *be two matrices. We define* $C = A \circ B = (\langle e_{i}^{+}, e_{i}, e_{i}^{-} \rangle)_{m \times 1}$ *and* $D = A \diamond B = (\langle f_{i}^{+}, f_{i}, f_{i}^{-} \rangle)_{m \times 1}$*, where*

$$\begin{aligned} \langle e_{i}^{+}, e_{i}, e_{i}^{-} \rangle &= \langle \vee_{j=1}^{n} (c_{ij}^{+} \wedge d_{j}^{+}), \vee_{j=1}^{n} (c_{ij} \wedge d_{j}), \wedge_{j=1}^{n} (c_{ij}^{-} \vee d_{j}^{-}) \rangle, \\ \langle f_{i}^{+}, f_{i}, f_{i}^{-} \rangle &= \langle \wedge_{j=1}^{n} (c_{ij}^{-} \vee d_{j}^{+}), \wedge_{j=1}^{n} [(1 - c_{ij}) \vee d_{j}], \vee_{j=1}^{n} (c_{ij}^{+} \wedge d_{j}^{-}) \rangle. \end{aligned} \tag{9}$$
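Both products of Definition 10 can be sketched with min/max, treating each matrix entry as a triple $\langle T, I, F \rangle$. The check below uses the $x_1$-row of the neighborhood matrix of Example 6 against the SVNS $A$ of Example 7.

```python
# Sketch of the two products of Definition 10 (Eq. (9)).
def circ(A, B):
    """A o B: the row-wise max-min product used for the upper approximation."""
    return [(max(min(c[0], d[0]) for c, d in zip(row, B)),
             max(min(c[1], d[1]) for c, d in zip(row, B)),
             min(max(c[2], d[2]) for c, d in zip(row, B))) for row in A]

def diamond(A, B):
    """A <> B: the row-wise min-max product used for the lower approximation."""
    return [(min(max(c[2], d[0]) for c, d in zip(row, B)),
             min(max(1 - c[1], d[1]) for c, d in zip(row, B)),
             max(min(c[0], d[2]) for c, d in zip(row, B))) for row in A]

N_row = [(0.6, 0.2, 0.5), (0.5, 0.3, 0.8), (0.2, 0.5, 0.6),
         (0.4, 0.5, 0.7), (0.3, 0.3, 0.6)]
A_vec = [(0.6, 0.3, 0.5), (0.4, 0.5, 0.1), (0.3, 0.2, 0.6),
         (0.5, 0.3, 0.4), (0.7, 0.2, 0.3)]
```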

According to Proposition 6 and Definition 10, the set representations of $\widetilde{\mathbb{C}}(A)$ and $\underline{\mathbb{C}}(A)$ (for any $A \in SVN(U)$) can be converted to matrix representations.

**Theorem 3.** *Let* $\widehat{\mathbf{C}}$ *be a SVN β-covering of U with* $U = \{x_1, x_2, \cdots, x_n\}$ *and* $\widehat{\mathbf{C}} = \{C_1, C_2, \cdots, C_m\}$*. Then, for any* $A \in SVN(U)$*,*

$$\begin{aligned} \widetilde{\mathbb{C}}(A) &= (M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \circ A, \\ \underline{\mathbb{C}}(A) &= (M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \diamond A, \end{aligned} \tag{10}$$

*where* $A = (a_i)_{n \times 1}$ *with* $a_i = \langle T_A(x_i), I_A(x_i), F_A(x_i) \rangle$ *is the vector representation of the SVNS* $A$*;* $\widetilde{\mathbb{C}}(A)$ *and* $\underline{\mathbb{C}}(A)$ *are likewise written as vectors.*

**Proof.** According to Proposition 6 and Definitions 6 and 10, for any *xi* (*i* = 1, 2, ··· , *n*),

$$\begin{aligned} ((M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \circ A)(x_i) &= \langle \vee_{j=1}^{n} (T_{\widetilde{\mathbb{N}}_{x_i}^{\beta}}(x_j) \wedge T_{A}(x_j)), \vee_{j=1}^{n} (I_{\widetilde{\mathbb{N}}_{x_i}^{\beta}}(x_j) \wedge I_{A}(x_j)), \wedge_{j=1}^{n} (F_{\widetilde{\mathbb{N}}_{x_i}^{\beta}}(x_j) \vee F_{A}(x_j)) \rangle \\ &= (\widetilde{\mathbb{C}}(A))(x_i), \end{aligned}$$

and


$$\begin{split} ((M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \diamond A)(x_i) &= \langle \wedge_{j=1}^{n} (F_{\widetilde{\mathbb{N}}_{x_i}^{\beta}}(x_j) \vee T_{A}(x_j)), \wedge_{j=1}^{n} [(1 - I_{\widetilde{\mathbb{N}}_{x_i}^{\beta}}(x_j)) \vee I_{A}(x_j)], \vee_{j=1}^{n} (T_{\widetilde{\mathbb{N}}_{x_i}^{\beta}}(x_j) \wedge F_{A}(x_j)) \rangle \\ &= (\underline{\mathbb{C}}(A))(x_i). \end{split}$$

Hence, $\widetilde{\mathbb{C}}(A) = (M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \circ A$ and $\underline{\mathbb{C}}(A) = (M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \diamond A$. $\Box$

**Example 7** (Continued from Example 3)**.** *Let* $\beta = \langle 0.5, 0.3, 0.8 \rangle$ *and* $A = \frac{\langle 0.6, 0.3, 0.5 \rangle}{x_1} + \frac{\langle 0.4, 0.5, 0.1 \rangle}{x_2} + \frac{\langle 0.3, 0.2, 0.6 \rangle}{x_3} + \frac{\langle 0.5, 0.3, 0.4 \rangle}{x_4} + \frac{\langle 0.7, 0.2, 0.3 \rangle}{x_5}$*. Then,*

$$\begin{split} \widetilde{\mathbb{C}}(A) &= (M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \circ A \\ &= \begin{pmatrix} \langle 0.6, 0.2, 0.5 \rangle & \langle 0.5, 0.3, 0.8 \rangle & \langle 0.2, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.3, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.5, 0.3, 0.8 \rangle & \langle 0.2, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.3, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.5, 0.3, 0.4 \rangle & \langle 0.3, 0.6, 0.5 \rangle & \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.4, 0.5, 0.4 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.3, 0.2, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.8 \rangle & \langle 0.2, 0.3, 0.6 \rangle & \langle 0.3, 0.6, 0.7 \rangle & \langle 0.6, 0.3, 0.5 \rangle \end{pmatrix} \circ \begin{pmatrix} \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.4, 0.5, 0.1 \rangle \\ \langle 0.3, 0.2, 0.6 \rangle \\ \langle 0.5, 0.3, 0.4 \rangle \\ \langle 0.7, 0.2, 0.3 \rangle \end{pmatrix} \\ &= \begin{pmatrix} \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.4, 0.3, 0.6 \rangle \\ \langle 0.6, 0.5, 0.5 \rangle \\ \langle 0.5, 0.3, 0.6 \rangle \\ \langle 0.6, 0.5, 0.5 \rangle \end{pmatrix}, \end{split}$$

*and*

$$\begin{split} \underline{\mathbb{C}}(A) &= (M_{\widehat{\mathbf{C}}}^{\beta} * M_{\widehat{\mathbf{C}}}^{T}) \diamond A \\ &= \begin{pmatrix} \langle 0.6, 0.2, 0.5 \rangle & \langle 0.5, 0.3, 0.8 \rangle & \langle 0.2, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.3, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.5, 0.3, 0.8 \rangle & \langle 0.2, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.3, 0.3, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.7 \rangle & \langle 0.5, 0.3, 0.4 \rangle & \langle 0.3, 0.6, 0.5 \rangle & \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.4, 0.5, 0.4 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.3, 0.2, 0.6 \rangle \\ \langle 0.1, 0.5, 0.6 \rangle & \langle 0.4, 0.5, 0.8 \rangle & \langle 0.2, 0.3, 0.6 \rangle & \langle 0.3, 0.6, 0.7 \rangle & \langle 0.6, 0.3, 0.5 \rangle \end{pmatrix} \diamond \begin{pmatrix} \langle 0.6, 0.3, 0.5 \rangle \\ \langle 0.4, 0.5, 0.1 \rangle \\ \langle 0.3, 0.2, 0.6 \rangle \\ \langle 0.5, 0.3, 0.4 \rangle \\ \langle 0.7, 0.2, 0.3 \rangle \end{pmatrix} \\ &= \begin{pmatrix} \langle 0.6, 0.5, 0.5 \rangle \\ \langle 0.6, 0.5, 0.4 \rangle \\ \langle 0.4, 0.4, 0.5 \rangle \\ \langle 0.4, 0.5, 0.4 \rangle \\ \langle 0.6, 0.4, 0.3 \rangle \end{pmatrix}. \end{split}$$

Two operations on matrices were defined in [28]. We can use them to study the matrix representations of $\overline{\mathbb{C}}(X)$ and $\underline{\mathbb{C}}(X)$ for every crisp subset $X \in P(U)$.

**Definition 11** ([28])**.** *Let* $A = (a_{ik})_{n \times m}$ *and* $B = (b_{kj})_{m \times l}$ *be two matrices. We define* $C = A \cdot B = (c_{ij})_{n \times l}$ *and* $D = A \odot B = (d_{ij})_{n \times l}$ *as follows:*

$$\begin{aligned} c\_{ij} &= \vee\_{k=1}^{\mathfrak{m}} (a\_{ik} \wedge b\_{kj}), \\ d\_{ij} &= \wedge\_{k=1}^{\mathfrak{m}} [(1 - a\_{ik}) \vee b\_{kj}], \text{ for any } i = 1, 2, \dots, n, \text{ and } j = 1, 2, \dots, l. \end{aligned} \tag{11}$$
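Equation (11) involves only 0/1 entries in our use of it, so a minimal sketch is short. The test reproduces the matrix of Example 8 and the two products of Example 9 from the β-matrix of Example 5.

```python
# Sketch of the two matrix products of Definition 11 (Eq. (11)).
def dot(A, B):
    """C = A . B with c_ij = max_k min(a_ik, b_kj)."""
    return [[max(min(row[k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for row in A]

def odot(A, B):
    """D = A (.) B with d_ij = min_k max(1 - a_ik, b_kj)."""
    return [[min(max(1 - row[k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for row in A]

M_beta = [[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1], [1, 0, 0, 1], [0, 1, 1, 1]]
M_beta_T = [list(col) for col in zip(*M_beta)]
CHI = odot(M_beta, M_beta_T)  # the 0/1 neighborhood matrix of Example 8
```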

Let $U = \{x_1, \cdots, x_n\}$ and $X \in P(U)$. Then, the characteristic function $\chi_X$ of the crisp subset $X$ is defined by

$$\chi\_X(\mathfrak{x}\_i) = \begin{cases} 1, & \mathfrak{x}\_i \in X; \\ 0, & \text{otherwise}. \end{cases}$$

**Proposition 7.** *Let* $\widehat{\mathbf{C}}$ *be a SVN β-covering of U with* $U = \{x_1, x_2, \cdots, x_n\}$ *and* $\widehat{\mathbf{C}} = \{C_1, C_2, \cdots, C_m\}$*. Then,*

$$M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T = (\chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_j))_{1 \leq i \leq n, 1 \leq j \leq n}. \tag{12}$$

**Proof.** Suppose $M_{\widehat{\mathbf{C}}}^{\beta} = (s_{ik})_{n \times m}$ and $M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T = (t_{ij})_{n \times n}$. Since $\widehat{\mathbf{C}}$ is a SVN β-covering of $U$, for each $i$ ($1 \leq i \leq n$) there exists $k$ ($1 \leq k \leq m$) such that $s_{ik} = 1$.

If $t_{ij} = 1$, then $\wedge_{k=1}^{m}[(1 - s_{ik}) \vee s_{jk}] = 1$, i.e., $s_{jk} = 1$ whenever $s_{ik} = 1$. Hence, $C_k(x_i) \geq \beta$ implies $C_k(x_j) \geq \beta$. Therefore, $x_j \in \overline{\mathbb{N}}_{x_i}^{\beta}$, i.e., $\chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_j) = 1 = t_{ij}$.

If $t_{ij} = 0$, then $\wedge_{k=1}^{m}[(1 - s_{ik}) \vee s_{jk}] = 0$, so there exists $k$ with $s_{ik} = 1$ and $s_{jk} = 0$, i.e., some $C_k$ with $C_k(x_i) \geq \beta$ and $C_k(x_j) < \beta$. Thus, $x_j \notin \overline{\mathbb{N}}_{x_i}^{\beta}$, i.e., $\chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_j) = 0 = t_{ij}$. $\Box$

**Example 8** (Continued from Example 2)**.** *According to* $M_{\widehat{\mathbf{C}}}^{\beta}$ *in Example 5, we have the following result.*

$$M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^{T} = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} = (\chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_j))_{1 \leq i \leq 5, 1 \leq j \leq 5}.$$

For any *X* ∈ *<sup>P</sup>*(*U*), we also denote *χX* = (*ai*)*n*×<sup>1</sup> with *ai* = 1 iff *xi* ∈ *X*; otherwise, *ai* = 0.

Then, the set representations of C(*X*) and C(*X*) (for any *X* ∈ *P*(*U*)) can be converted to matrix representations.

**Theorem 4.** *Let* $\widehat{\mathbf{C}}$ *be a SVN β-covering of U with* $U = \{x_1, x_2, \cdots, x_n\}$ *and* $\widehat{\mathbf{C}} = \{C_1, C_2, \cdots, C_m\}$*. Then, for any* $X \in P(U)$*,*

$$\begin{split} \chi_{\overline{\mathbb{C}}(X)} &= (M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T) \cdot \chi_{X}, \\ \chi_{\underline{\mathbb{C}}(X)} &= (M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T) \odot \chi_{X}. \end{split} \tag{13}$$

**Proof.** Suppose $(M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T) \cdot \chi_X = (a_i)_{n \times 1}$ and $(M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T) \odot \chi_X = (b_i)_{n \times 1}$. For any $x_i \in U$ ($i = 1, 2, \cdots, n$),

$$\begin{split} x_i \in \overline{\mathbb{C}}(X) &\Leftrightarrow \chi_{\overline{\mathbb{C}}(X)}(x_i) = 1 \\ &\Leftrightarrow a_i = 1 \\ &\Leftrightarrow \vee_{k=1}^{n} [\chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_k) \wedge \chi_{X}(x_k)] = 1 \\ &\Leftrightarrow \exists k \in \{1, 2, \cdots, n\}, \text{ s.t. } \chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_k) = \chi_{X}(x_k) = 1 \\ &\Leftrightarrow \exists k \in \{1, 2, \cdots, n\}, \text{ s.t. } x_k \in \overline{\mathbb{N}}_{x_i}^{\beta} \cap X \\ &\Leftrightarrow \overline{\mathbb{N}}_{x_i}^{\beta} \cap X \neq \emptyset, \end{split}$$

and

$$\begin{split} x_i \in \underline{\mathbb{C}}(X) &\Leftrightarrow \chi_{\underline{\mathbb{C}}(X)}(x_i) = 1 \\ &\Leftrightarrow b_i = 1 \\ &\Leftrightarrow \wedge_{k=1}^{n} [(1 - \chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_k)) \vee \chi_{X}(x_k)] = 1 \\ &\Leftrightarrow \chi_{\overline{\mathbb{N}}_{x_i}^{\beta}}(x_k) = 1 \to \chi_{X}(x_k) = 1, \; k = 1, 2, \cdots, n \\ &\Leftrightarrow x_k \in \overline{\mathbb{N}}_{x_i}^{\beta} \to x_k \in X, \; k = 1, 2, \cdots, n \\ &\Leftrightarrow \overline{\mathbb{N}}_{x_i}^{\beta} \subseteq X. \; \Box \end{split}$$

**Example 9** (Continued from Example 4)**.** *Let* $X = \{x_1, x_2\}$*. By* $M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^T$ *in Example 8, we have*

$$\begin{split} (M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^{T}) \cdot \chi_{X} &= \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \chi_{\overline{\mathbb{C}}(X)}, \\ (M_{\widehat{\mathbf{C}}}^{\beta} \odot (M_{\widehat{\mathbf{C}}}^{\beta})^{T}) \odot \chi_{X} &= \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \odot \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \chi_{\underline{\mathbb{C}}(X)}. \end{split}$$

#### **6. An Application to Decision Making Problems**

In this section, we present a novel approach to DM problems based on the SVN covering rough set model. Then, a comparative study with other methods is shown.

#### *6.1. The Problem of Decision Making*

Let $U = \{x_k : k = 1, 2, \cdots, l\}$ be a set of patients and $V = \{y_i : i = 1, 2, \cdots, m\}$ the set of $m$ main symptoms (for example, cough, fever, and so on) of a Disease $B$. Assume that Doctor $R$ evaluates every Patient $x_k$ ($k = 1, 2, \cdots, l$).

Assume that Doctor $R$ believes each Patient $x_k \in U$ ($k = 1, 2, \cdots, l$) has a symptom value $C_i$ ($i = 1, 2, \cdots, m$), denoted by $C_i(x_k) = \langle T_{C_i}(x_k), I_{C_i}(x_k), F_{C_i}(x_k) \rangle$, where $T_{C_i}(x_k) \in [0, 1]$ is the degree to which Doctor $R$ confirms that Patient $x_k$ has symptom $y_i$, $I_{C_i}(x_k) \in [0, 1]$ is the degree to which Doctor $R$ is not sure whether Patient $x_k$ has symptom $y_i$, $F_{C_i}(x_k) \in [0, 1]$ is the degree to which Doctor $R$ confirms that Patient $x_k$ does not have symptom $y_i$, and $T_{C_i}(x_k) + I_{C_i}(x_k) + F_{C_i}(x_k) \leq 3$.

Let $\beta = \langle a, b, c \rangle$ be the critical value. If for every Patient $x_k \in U$ there is at least one symptom $y_i \in V$ such that the symptom value $C_i(x_k)$ is not less than $\beta$, then $\widehat{\mathbf{C}} = \{C_1, C_2, \cdots, C_m\}$ is a SVN β-covering of $U$ for the SVN number $\beta$.

Let $d$ be the possible degree, $e$ the indeterminacy degree and $f$ the impossible degree of Disease $B$ for each Patient $x_k \in U$ as diagnosed by Doctor $R$, denoted by $A(x_k) = \langle d, e, f \rangle$. The decision maker (Doctor $R$) then needs a way to evaluate which Patients $x_k \in U$ have Disease $B$.

#### *6.2. The Decision Making Algorithm*

In this subsection, we give an approach for the problem of DM with the above characterizations by means of the first type of SVN covering rough set model. According to the characterizations of the DM problem in Section 6.1, we construct the SVN decision information system and present the Algorithm 1 of DM under the framework of the first type of SVN covering rough set model.

**Algorithm 1** The decision making algorithm based on the SVN covering rough set model.

**Input**: SVN decision information system $(U, \widehat{\mathbf{C}}, \beta, A)$.
**Output**: The score ordering for all alternatives.

1: Compute the SVN β-neighborhood $\widetilde{\mathbb{N}}_{x}^{\beta}$ of $x$ induced by $\widehat{\mathbf{C}}$, for all $x \in U$, according to Definition 4;

2: Compute the SVN covering approximations $\widetilde{\mathbb{C}}(A)$ and $\underline{\mathbb{C}}(A)$ according to Definition 6;

3: Compute $\widetilde{R}_A = \widetilde{\mathbb{C}}(A) \oplus \underline{\mathbb{C}}(A)$;

4: Compute

$$s(x) = \frac{T_{\widetilde{R}_A}(x)}{\sqrt{(T_{\widetilde{R}_A}(x))^2 + (I_{\widetilde{R}_A}(x))^2 + (F_{\widetilde{R}_A}(x))^2}};$$

5: Rank all the alternatives by the numerical order of $s(x)$ and select the most possible patient.

According to the above process, the decision can be made from the ranking. In Step 4, $s(x)$ is the cosine similarity measure between $\widetilde{R}_A(x)$ and the ideal solution $\langle 1, 0, 0 \rangle$, which was proposed by Ye [44].
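Steps 3-5 can be sketched numerically, starting from the approximations computed in Example 10 below. One assumption is labeled explicitly: "⊕" is taken to be the standard SVN sum $\langle T_1 + T_2 - T_1 T_2, I_1 I_2, F_1 F_2 \rangle$, which reproduces the $\widetilde{R}_A$ values of Example 10.

```python
# Sketch of Steps 3-5 of Algorithm 1 on the Example 10 approximations.
# Assumption: "⊕" is the standard SVN sum <T1+T2-T1*T2, I1*I2, F1*F2>.
upper = [(0.6, 0.3, 0.5), (0.4, 0.3, 0.6), (0.6, 0.5, 0.5),
         (0.5, 0.3, 0.6), (0.6, 0.5, 0.5)]
lower = [(0.6, 0.5, 0.5), (0.6, 0.5, 0.4), (0.4, 0.4, 0.5),
         (0.4, 0.5, 0.4), (0.6, 0.4, 0.3)]

def svn_sum(a, b):
    return (a[0] + b[0] - a[0] * b[0], a[1] * b[1], a[2] * b[2])

def score(n):
    """Cosine similarity between n and the ideal solution <1, 0, 0>."""
    t, i, f = n
    return t / (t * t + i * i + f * f) ** 0.5

R_A = [svn_sum(u, l) for u, l in zip(upper, lower)]
scores = [score(n) for n in R_A]
best = max(range(5), key=lambda k: scores[k])  # 0-based index of the winner
```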

#### *6.3. An Applied Example*

**Example 10.** *Assume that* $U = \{x_1, x_2, x_3, x_4, x_5\}$ *is a set of patients. According to the patients' symptoms, we take* $V = \{y_1, y_2, y_3, y_4\}$ *as the four main symptoms (cough, fever, sore and headache) of Disease B. Assume that Doctor R evaluates every Patient* $x_k$ *(*$k = 1, 2, \cdots, 5$*) as shown in Table 1.*

*Let* $\beta = \langle 0.5, 0.3, 0.8 \rangle$ *be the critical value. Then,* $\widehat{\mathbf{C}} = \{C_1, C_2, C_3, C_4\}$ *is a SVN β-covering of U. The neighborhoods* $\widetilde{\mathbb{N}}_{x_k}^{\beta}$ *(*$k = 1, 2, 3, 4, 5$*) are shown in Table 2.*

*Assume that Doctor R diagnoses the value* $A = \frac{\langle 0.6, 0.3, 0.5 \rangle}{x_1} + \frac{\langle 0.4, 0.5, 0.1 \rangle}{x_2} + \frac{\langle 0.3, 0.2, 0.6 \rangle}{x_3} + \frac{\langle 0.5, 0.3, 0.4 \rangle}{x_4} + \frac{\langle 0.7, 0.2, 0.3 \rangle}{x_5}$ *of Disease B for every patient. Then,*

$$
\widetilde{\mathbb{C}}(A) = \{ \langle \mathbf{x}\_1, 0.6, 0.3, 0.5 \rangle, \langle \mathbf{x}\_2, 0.4, 0.3, 0.6 \rangle, \langle \mathbf{x}\_3, 0.6, 0.5, 0.5 \rangle, \langle \mathbf{x}\_4, 0.5, 0.3, 0.6 \rangle, \langle \mathbf{x}\_5, 0.6, 0.5, 0.5 \rangle \},
$$

$$
\underline{\mathbb{C}}(A) = \{ \langle \mathbf{x}\_1, 0.6, 0.5, 0.5 \rangle, \langle \mathbf{x}\_2, 0.6, 0.5, 0.4 \rangle, \langle \mathbf{x}\_3, 0.4, 0.4, 0.5 \rangle, \langle \mathbf{x}\_4, 0.4, 0.5, 0.4 \rangle, \langle \mathbf{x}\_5, 0.6, 0.4, 0.3 \rangle \}.
$$

*Then,*

$$\begin{split} \widetilde{R}_A &= \widetilde{\mathbb{C}}(A) \oplus \underline{\mathbb{C}}(A) \\ &= \{ \langle x_1, 0.84, 0.15, 0.25 \rangle, \langle x_2, 0.76, 0.15, 0.24 \rangle, \langle x_3, 0.76, 0.2, 0.25 \rangle, \langle x_4, 0.7, 0.15, 0.24 \rangle, \langle x_5, 0.84, 0.2, 0.15 \rangle \}. \end{split}$$

*Hence, we can obtain s*(*xk*) *(k* = 1, 2, ··· , 5*) in Table 3.*

**Table 3.** *s*(*xk*) (*k* = 1, 2, ··· , 5).


*According to the principle of numerical size, we have:*

$$s(x_4) < s(x_3) < s(x_2) < s(x_1) < s(x_5).$$

*Therefore, Doctor R diagnoses Patient* $x_5$ *as the most likely to be sick with Disease B.*

#### *6.4. A Comparison Analysis*

To validate the feasibility of the proposed decision making method, a comparative study was conducted with other methods. These methods, introduced by Liu [43], Yang et al. [32] and Ye [44], are compared with the proposed approach on the same SVN information system.

6.4.1. The Results of Liu's Method

Liu's method is shown in Algorithm 2.

**Algorithm 2** The decision making algorithm of Liu's method [43].

**Input**: A SVN decision matrix *D* and a weight vector **w**. **Output**: The score ordering for all alternatives.
1: Compute

$$\begin{split} n_k &= \langle T_{n_k}, I_{n_k}, F_{n_k} \rangle = HSVNNWA(n_{k1}, n_{k2}, \cdots, n_{km}) \\ &= \left\langle \frac{\prod_{i=1}^{m} (1 + (\gamma - 1)T_{ki})^{w_i} - \prod_{i=1}^{m} (1 - T_{ki})^{w_i}}{\prod_{i=1}^{m} (1 + (\gamma - 1)T_{ki})^{w_i} + (\gamma - 1)\prod_{i=1}^{m} (1 - T_{ki})^{w_i}}, \frac{\gamma \prod_{i=1}^{m} I_{ki}^{w_i}}{\prod_{i=1}^{m} (1 + (\gamma - 1)(1 - I_{ki}))^{w_i} + (\gamma - 1)\prod_{i=1}^{m} I_{ki}^{w_i}}, \frac{\gamma \prod_{i=1}^{m} F_{ki}^{w_i}}{\prod_{i=1}^{m} (1 + (\gamma - 1)(1 - F_{ki}))^{w_i} + (\gamma - 1)\prod_{i=1}^{m} F_{ki}^{w_i}} \right\rangle; \end{split}$$

2: Calculate $s(n_k) = \frac{T_{n_k}}{\sqrt{T_{n_k}^2 + I_{n_k}^2 + F_{n_k}^2}}$;

3: Obtain the ranking for all *s*(*nk*) by using the principle of numerical size and select the most possible patient.

Then, Algorithm 2 can be used for Example 10. Let $n_{ki} = \langle T_{ki}, I_{ki}, F_{ki} \rangle$ be the evaluation information of $x_k$ on $C_i$ in Table 1. That is to say, Table 1 is the SVN decision matrix $D$. We suppose the weight vector of the criteria is **w** = (0.35, 0.25, 0.3, 0.1) and $\gamma = 1$.

*Step 1*: Based on the HSVNNWA operator, we get

$$\begin{aligned} n\_1 &= \langle 0.557, 0.178, 0.482 \rangle, \ n\_2 = \langle 0.484, 0.283, 0.395 \rangle, \\ n\_3 &= \langle 0.414, 0.318, 0.347 \rangle, \ n\_4 = \langle 0.465, 0.286, 0.558 \rangle, \\ n\_5 &= \langle 0.578, 0.233, 0.486 \rangle. \end{aligned}$$
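The Step 1 aggregation can be sketched for patient $x_1$. Two assumptions are made explicit: with $\gamma = 1$ the Hamacher family reduces to the algebraic SVN weighted average ($T = 1 - \prod (1 - T_i)^{w_i}$, $I = \prod I_i^{w_i}$, $F = \prod F_i^{w_i}$), and the Table 1 row for $x_1$ is assumed to coincide with the Example 5 values.

```python
# Sketch of HSVNNWA with gamma = 1 (algebraic SVN weighted average).
# Assumption: the Table 1 row for x1 equals the Example 5 values.
W = (0.35, 0.25, 0.3, 0.1)
ROW_X1 = [(0.7, 0.2, 0.5), (0.6, 0.2, 0.4), (0.4, 0.1, 0.5), (0.1, 0.5, 0.6)]

def hsvnnwa_gamma1(triples, w):
    t = i = f = 1.0
    for (T, I, F), wi in zip(triples, w):
        t *= (1 - T) ** wi   # product for the truth component
        i *= I ** wi         # weighted geometric mean of indeterminacies
        f *= F ** wi         # weighted geometric mean of falsities
    return (1 - t, i, f)

n1 = hsvnnwa_gamma1(ROW_X1, W)
```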

*Step 2*: We get

$$s(n\_1) = 0.735, s(n\_2) = 0.706, s(n\_3) = 0.660, s(n\_4) = 0.596, s(n\_5) = 0.734.$$

*Step 3*: According to the cosine similarity degrees $s(n_k)$ ($k = 1, 2, \cdots, 5$), we obtain the ranking $x_4 < x_3 < x_2 < x_5 < x_1$.

Therefore, Patient *x*1 is more likely to be sick with Disease *B*.

#### 6.4.2. The Results of Yang's Method

Yang's method is shown in Algorithm 3.

**Algorithm 3** The decision making algorithm [32].

**Input**: A generalized SVN approximation space $(U, V, \widetilde{R})$, $B \in SVN(V)$.
**Output**: The score ordering for all alternatives.

1: Compute the SVN rough approximations $\widetilde{R}(B)$ and $\underline{R}(B)$;

2: Compute $n_{x_k} = (\widetilde{R}(B) \oplus \underline{R}(B))(x_k)$ ($k = 1, 2, \cdots, l$);

3: Compute

$$s(n_{x_k}, n^*) = \frac{T_{n_{x_k}} \cdot T_{n^*} + I_{n_{x_k}} \cdot I_{n^*} + F_{n_{x_k}} \cdot F_{n^*}}{\sqrt{T_{n_{x_k}}^2 + I_{n_{x_k}}^2 + F_{n_{x_k}}^2} \cdot \sqrt{(T_{n^*})^2 + (I_{n^*})^2 + (F_{n^*})^2}} \ (k = 1, 2, \cdots, l),$$

where $n^* = \langle T_{n^*}, I_{n^*}, F_{n^*} \rangle = \langle 1, 0, 0 \rangle$;

4: Obtain the ranking for all *s*(*nxk* , *n*<sup>∗</sup>) by using the principle of numerical size and select the most possible patient.

For Example 10, we suppose Disease $B \in SVN(V)$ and $B = \frac{\langle 0.3, 0.6, 0.5 \rangle}{y_1} + \frac{\langle 0.7, 0.2, 0.1 \rangle}{y_2} + \frac{\langle 0.6, 0.4, 0.3 \rangle}{y_3} + \frac{\langle 0.8, 0.4, 0.5 \rangle}{y_4}$. According to Table 1, the generalized SVN approximation space $(U, V, \widetilde{R})$ can be obtained in Table 4, where $U = \{x_1, x_2, x_3, x_4, x_5\}$ and $V = \{y_1, y_2, y_3, y_4\}$.




*Step 1*: We ge<sup>t</sup>

$$\widetilde{R}(B) = \{ \langle x_1, 0.6, 0.2, 0.4 \rangle, \langle x_2, 0.6, 0.2, 0.4 \rangle, \langle x_3, 0.6, 0.3, 0.4 \rangle, \langle x_4, 0.5, 0.4, 0.5 \rangle, \langle x_5, 0.8, 0.3, 0.5 \rangle \},$$

$$\underline{R}(B) = \{ \langle x_1, 0.5, 0.6, 0.5 \rangle, \langle x_2, 0.3, 0.6, 0.5 \rangle, \langle x_3, 0.3, 0.5, 0.5 \rangle, \langle x_4, 0.6, 0.6, 0.5 \rangle, \langle x_5, 0.6, 0.6, 0.5 \rangle \}.$$

*Step 2*:

$$\widetilde{R}(B) \oplus \underline{R}(B) = \{ \langle x_1, 0.80, 0.12, 0.20 \rangle, \langle x_2, 0.72, 0.12, 0.20 \rangle, \langle x_3, 0.72, 0.15, 0.20 \rangle, \langle x_4, 0.80, 0.24, 0.25 \rangle, \langle x_5, 0.92, 0.18, 0.25 \rangle \}.$$

*Step 3*: Let $n^* = \langle 1, 0, 0 \rangle$. Then,

$$s(n_{x_1}, n^*) = 0.960, \ s(n_{x_2}, n^*) = 0.951, \ s(n_{x_3}, n^*) = 0.945, \ s(n_{x_4}, n^*) = 0.918, \ s(n_{x_5}, n^*) = 0.948.$$
 
*Step 4*:

$$s(n_{x_4}, n^*) < s(n_{x_3}, n^*) < s(n_{x_5}, n^*) < s(n_{x_2}, n^*) < s(n_{x_1}, n^*).$$

Therefore, Patient *x*1 is more likely to be sick with Disease *B*.
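The Step 3 similarity values can be reproduced from the Step 2 results with a short sketch of the cosine measure against the ideal $n^* = \langle 1, 0, 0 \rangle$.

```python
# Check of Step 3: cosine similarity of each aggregated value with <1, 0, 0>,
# applied to the Step 2 results above.
N = {1: (0.80, 0.12, 0.20), 2: (0.72, 0.12, 0.20), 3: (0.72, 0.15, 0.20),
     4: (0.80, 0.24, 0.25), 5: (0.92, 0.18, 0.25)}

def cos_sim(n, ideal=(1.0, 0.0, 0.0)):
    num = sum(a * b for a, b in zip(n, ideal))
    den = (sum(a * a for a in n) ** 0.5) * (sum(b * b for b in ideal) ** 0.5)
    return num / den

s = {k: cos_sim(v) for k, v in N.items()}
```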

#### 6.4.3. The Results of Ye's Methods

Ye presented two methods [44]. Thus, Algorithms 4 and 5 are presented for Example 10.

**Algorithm 4** The decision making algorithm [44].

**Input**: A SVN decision matrix *D* and a weight vector **w**. **Output**: The score ordering for all alternatives.

1: Compute

$$W_{k}(x_{k}, A^{*}) = \frac{\sum_{i=1}^{m} w_{i}[a_{ki} \cdot a_{i}^{*} + b_{ki} \cdot b_{i}^{*} + c_{ki} \cdot c_{i}^{*}]}{\sqrt{\sum_{i=1}^{m} w_{i}[a_{ki}^{2} + b_{ki}^{2} + c_{ki}^{2}]} \cdot \sqrt{\sum_{i=1}^{m} w_{i}[(a_{i}^{*})^{2} + (b_{i}^{*})^{2} + (c_{i}^{*})^{2}]}} \ (k = 1, 2, \cdots, l),$$

where $\alpha_i^* = \langle a_i^*, b_i^*, c_i^* \rangle = \langle 1, 0, 0 \rangle$ ($i = 1, 2, \cdots, m$);

2: Obtain the ranking for all *Wk*(*xk*, *A*∗) by using the principle of numerical size and select the most possible patient.

For Example 10, Table 1 is the SVN decision matrix $D$. We suppose the weight vector of the criteria is **w** = (0.35, 0.25, 0.3, 0.1).

*Step 1*:

$$\mathbb{W}\_1(\mathbf{x}\_1, A^\*) = 0.677,\\ \mathbb{W}\_2(\mathbf{x}\_2, A^\*) = 0.608,\\ \mathbb{W}\_3(\mathbf{x}\_3, A^\*) = 0.580,\\ \mathbb{W}\_4(\mathbf{x}\_4, A^\*) = 0.511,\\ \mathbb{W}\_5(\mathbf{x}\_5, A^\*) = 0.666.$$

*Step 2*: The ranking order of {*<sup>x</sup>*1, *x*2, ··· , *<sup>x</sup>*5} is *x*4 < *x*3 < *x*2 < *x*5 < *x*1. Therefore, Patient *x*1 is more likely to be sick with Disease *B*.

**Algorithm 5** The other decision making algorithm [44].

**Input**: A SVN decision matrix *D* and a weight vector **w**. **Output**: The score ordering for all alternatives.

1: Compute

$$M_{k}(x_{k}, A^{*}) = \sum_{i=1}^{m} w_{i} \frac{a_{ki} \cdot a_{i}^{*} + b_{ki} \cdot b_{i}^{*} + c_{ki} \cdot c_{i}^{*}}{\sqrt{a_{ki}^{2} + b_{ki}^{2} + c_{ki}^{2}} \cdot \sqrt{(a_{i}^{*})^{2} + (b_{i}^{*})^{2} + (c_{i}^{*})^{2}}} \ (k = 1, 2, \cdots, l),$$

where $\alpha_i^* = \langle a_i^*, b_i^*, c_i^* \rangle = \langle 1, 0, 0 \rangle$ ($i = 1, 2, \cdots, m$);

2: Obtain the ranking for all *Mk*(*xk*, *A*∗) by using the principle of numerical size and select the most possible patient.

By Algorithm 5, we have: *Step 1*:

$$\begin{aligned} M\_1(\mathbf{x}\_1, A^\*) &= 0.676, M\_2(\mathbf{x}\_2, A^\*) = 0.637, M\_3(\mathbf{x}\_3, A^\*) = 0.581, \\ M\_4(\mathbf{x}\_4, A^\*) &= 0.521, M\_5(\mathbf{x}\_5, A^\*) = 0.654. \end{aligned}$$

*Step 2*: The ranking order of {*<sup>x</sup>*1, *x*2, ··· , *<sup>x</sup>*5} is *x*4 < *x*3 < *x*2 < *x*5 < *x*1. Therefore, Patient *x*1 is more likely to be sick with Disease *B*.
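Ye's two measures (Algorithms 4 and 5) can be sketched together, with the ideal alternative $\alpha^* = \langle 1, 0, 0 \rangle$; the test evaluates the $x_1$ row of Table 1, which is assumed to coincide with the Example 5 values.

```python
# Sketch of Ye's weighted cosine measures (Algorithms 4 and 5).
# Assumption: the Table 1 row for x1 equals the Example 5 values.
W = (0.35, 0.25, 0.3, 0.1)
ROW_X1 = [(0.7, 0.2, 0.5), (0.6, 0.2, 0.4), (0.4, 0.1, 0.5), (0.1, 0.5, 0.6)]

def W_measure(row, w):
    # Algorithm 4: one globally weighted cosine ratio; a* = 1, b* = c* = 0
    num = sum(wi * a for wi, (a, b, c) in zip(w, row))
    den = (sum(wi * (a * a + b * b + c * c) for wi, (a, b, c) in zip(w, row)) ** 0.5
           * sum(w) ** 0.5)
    return num / den

def M_measure(row, w):
    # Algorithm 5: a weighted sum of per-criterion cosines
    return sum(wi * a / (a * a + b * b + c * c) ** 0.5
               for wi, (a, b, c) in zip(w, row))

W1 = W_measure(ROW_X1, W)
M1 = M_measure(ROW_X1, W)
```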

All results are shown in Table 5, Figures 1 and 2.

**Table 5.** The results utilizing the different methods of Example 10.


**Figure 1.** The first chart of the different values for each patient utilizing different methods in Example 10.

**Figure 2.** The second chart of the different values for each patient utilizing different methods in Example 10.

Liu [43] and Ye [44] presented methods based on SVN theory; in their methods, the ranking order can change with different **w** and *γ*. Both Yang et al. [32] and we make the decision with rough set models: Yang et al. presented a SVN rough set model based on SVN relations, while we present a new SVN rough set model based on coverings. The results of Yang's method and ours differ, although both methods build on an operator presented by Ye [44].

In any of these methods, if there is more than one most possible patient, then each such patient is an optimal decision, and other methods are needed to make a further decision. The results obtained by different methods may differ; to achieve the most accurate result, further diagnosis combining other hybrid methods is necessary.
