**Measures of Probabilistic Neutrosophic Hesitant Fuzzy Sets and the Application in Reducing Unnecessary Evaluation Processes**

#### **Songtao Shao 1,2,†,‡ and Xiaohong Zhang 1,3,\*,‡**


Received: 20 June 2019; Accepted: 16 July 2019; Published: 19 July 2019

**Abstract:** Distance and similarity measures have been applied to various multi-criteria decision-making (MCDM) environments, such as talent selection and fault diagnosis, and several improved measures have been proposed. However, hesitancy pervades all aspects of life, so hesitant information must be considered in such measures; doing so effectively avoids the loss of fuzzy information. Fuzzy information alone, however, reflects only the subjective factor, a shortcoming that can lead to inaccurate decision conclusions. Thus, based on the definition of the probabilistic neutrosophic hesitant fuzzy set (PNHFS), an extended theory of fuzzy sets, the basic definitions of distance, similarity and entropy measures of PNHFSs are established, and the interconnections among these measures are studied. Simultaneously, a novel measure model based on PNHFSs is established and compared with existing measures. Finally, we demonstrate its applicability to investment problems, where it can be utilized to avoid redundant evaluation processes.

**Keywords:** probabilistic neutrosophic hesitant fuzzy set; distance measure; similarity measure; entropy measure; multi-criteria decision-making (MCDM)

#### **1. Introduction**

The neutrosophic set (NS) [1,2], a more general form of the fuzzy set (FS) [3], provides a simple method for describing uncertain information in MCDM environments: an NS carries three independent membership functions, the truth-membership function *T*(*x*), the indeterminacy-membership function *I*(*x*) and the falsity-membership function *F*(*x*). Afterwards, in order to better connect with practical problems, Wang et al. proposed the single-valued neutrosophic set (SVNS) [4–6] and the interval neutrosophic set (INS) [7–9] by restricting the ranges of the membership functions, which encouraged the application of FS theory. Subsequently, according to the complexity of the information in MCDM problems, SVNSs and INSs have been applied to different types of problems [10–16]. When making a decision, some decision makers (DMs) may hesitate among truth membership, indeterminacy membership and falsity membership. Thus, different forms of NS have been proposed, such as the single-valued neutrosophic hesitant FS (SVNHFS) [17–19], the multi-valued NS (MVNS) [20–23], several types of linguistic NS [24–26], and other variants [27–32]. Some experts applied these sets to algebraic systems [33–40], which clarified that the extended NSs are effective tools for describing uncertain and imprecise information, including imperfect, fuzzy and indeterminate data. Then, based on the different requirements of practical applications, the axioms of NS were investigated. The most important issue is how to minimize the loss of information when resolving uncertain problems.

Using truth-membership, indeterminacy-membership and falsity-membership degrees to depict fuzziness expresses only subjective uncertainty. Statistical data, however, can describe the occurrence frequency of a membership degree from an objective viewpoint. The elements that determine an accurate MCDM evaluation conclusion include both fuzzy and statistical information. DMs can express the subjective information by utilizing NSs, SVNSs, SVNHFSs and so on; as the amount of information increases, the impact of statistical information on decision outcomes grows.

Xu et al. proposed the hesitant probabilistic fuzzy set [41] and studied its basic operations. Next, Hao et al. [42] constructed the probabilistic dual hesitant fuzzy set and applied it to risk evaluation. Zhai et al. [43] introduced the probabilistic interval-valued intuitionistic hesitant fuzzy set and investigated its distance, similarity and entropy measures. These theories have since been widely studied and applied to MCDM problems [44–47]. However, when solving some decision problems, decision makers also give indeterminacy-membership hesitant degrees with corresponding probability information. To handle this situation, Shao et al. [48] and Peng et al. [49] established the probabilistic single-valued neutrosophic hesitant fuzzy set (PSVNHFS, or PNHFS for short) and the probabilistic multi-valued neutrosophic set (PMVNS), respectively. Shao et al. investigated the basic operational laws of PNHFSs and their characteristics, and established probabilistic neutrosophic hesitant fuzzy weighted averaging (geometric) operators to fuse uncertain information. Peng et al. presented a new QUALIFLEX method to fuse and analyze uncertain information. This new form of expression is conducive to reducing the loss of uncertain information and improving applications in MCDM environments.

Distance, similarity and entropy measures are three effective tools for solving MCDM problems. As the key step in bringing fuzzy-information interpretation into MCDM, different types of distance and similarity measures for NSs [50,51], SVNSs [52,53] and SVNHFSs [54,55] have been investigated. On the other hand, ranking methods and MCDM approaches based on the measures of linguistic NSs have been established and utilized in various practical problems [56,57]. A similarity measure expresses the degree of resemblance between items, whereas a distance measure focuses on their divergence; the two are complementary tools for expressing the relationship between items.

Present notions of measures include the three independent membership degrees (truth, indeterminacy and falsity) of fuzzy information, which effectively reduces the loss of information. Researchers have studied such measures to improve the exactness and effectiveness of MCDM solutions. Building on the inner construction of existing measure formulae, we establish a novel distance measure and a novel similarity measure. Sahin [58] proposed the following Hamming distance measure for SVNHFSs:

$$D_{SVNHFS} = \frac{1}{3} \sum_{x \in X} \left( \frac{1}{l} \sum_{i=1}^{l} |\alpha_{N_1}^{\mu(i)}(x) - \alpha_{N_2}^{\mu(i)}(x)| + \frac{1}{p} \sum_{i=1}^{p} |\beta_{N_1}^{\mu(i)}(x) - \beta_{N_2}^{\mu(i)}(x)| + \frac{1}{q} \sum_{i=1}^{q} |\gamma_{N_1}^{\mu(i)}(x) - \gamma_{N_2}^{\mu(i)}(x)| \right),$$

in which *α*, *β* and *γ* are the truth-membership, indeterminacy-membership and falsity-membership degrees of *x* ∈ *X* to a situation, and *N*<sub>1</sub> and *N*<sub>2</sub> are SVNHFSs. However, this measure has drawbacks worth noting. The truth-membership and falsity-membership degrees describe the DMs' determination about *x* with respect to the situation *A*: according to the DMs, there is some associated information about *x*, and *α* and *γ* are given simultaneously when judgements are made. In contrast, *β* expresses the vagueness of what the DMs do not know about *x*, which is distinct from *α* and *γ*. It is therefore not logical, in any MCDM problem, to characterize all three by the same formula with equal weight in a measure function.
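To make the formula concrete, the following Python sketch evaluates $D_{SVNHFS}$ (the function name `hamming_svnhfs` is ours; the membership lists are assumed sorted and of equal length per component, since in general the shorter hesitant set must first be extended):

```python
# A sketch of Sahin's Hamming distance for SVNHFSs. Each element of the
# reference set X is given as a triple (T, I, F) of sorted lists of
# membership degrees; equal lengths per component are assumed here.

def hamming_svnhfs(n1, n2):
    """n1, n2: lists of (T, I, F) triples, one triple per x in X."""
    total = 0.0
    for (t1, i1, f1), (t2, i2, f2) in zip(n1, n2):
        for h1, h2 in ((t1, t2), (i1, i2), (f1, f2)):
            # average absolute deviation within one hesitant component
            total += sum(abs(a - b) for a, b in zip(h1, h2)) / len(h1)
    return total / 3

# Two single-element SVNHFSs
A = [([0.3, 0.5], [0.2], [0.4, 0.6])]
B = [([0.4, 0.6], [0.1], [0.3, 0.5])]
print(hamming_svnhfs(A, B))  # each component contributes 0.1, so D ≈ 0.1
```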

Due to the complexity of uncertain information, the evaluation information given by the decision makers is often fused. For example, in a voting model, *TA*, *IA* and *FA* describe the proportions of pros, cons and abstentions, respectively. Owing to subjective factors, a decision maker may not be fully in favor or fully opposed, so some abstentions lean towards approval, expressed by *TI*. Similarly, *IF* describes the fusion of abstention and opposition, and *TF* the fusion of approval and opposition. *T*<sub>1</sub>, *F*<sub>1</sub> and *I*<sub>1</sub> represent the fractions that are fully in favor, totally opposed and completely abstained, respectively. This type of information can then be handled with neutrosophic hesitant fuzzy theory: *TA* = *T*<sub>1</sub> + *TF* + *TI*, *FA* = *F*<sub>1</sub> + *TF* + *IF* and *IA* = *I*<sub>1</sub> + *TI* + *IF*.
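A hypothetical numerical split (the specific fractions below are illustrative, not from the paper) shows how the fused degrees combine:

```python
# Hypothetical voting split: fractions purely in favor (T1), purely
# opposed (F1), purely abstaining (I1), and the fused fractions TF
# (favor/oppose), TI (favor/abstain) and IF (abstain/oppose).
T1, F1, I1 = 0.30, 0.20, 0.10
TF, TI, IF = 0.05, 0.10, 0.05

# The decomposition above
T_A = T1 + TF + TI   # proportion of pros
F_A = F1 + TF + IF   # proportion of cons
I_A = I1 + TI + IF   # proportion of abstentions
print(T_A, I_A, F_A)  # ≈ 0.45 0.25 0.30
```

Since each fused fraction is counted in two aggregates, $T_A + I_A + F_A$ may exceed 1, which is consistent with the neutrosophic bound $T + I + F \le 3$.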

The whole uncertainty domain is separated into vagueness, non-vagueness and hesitancy. The non-vagueness sub-domain comprises the truth-membership and falsity-membership regions, whereas the vagueness sub-domain is formed by the indeterminacy-membership region. The uncertainty in the non-vagueness sub-domain can be expressed as an undetermined attribute. The indeterminacy indicates that there are various thoughts about *x* belonging to the situation *A*, none of which can be made certain. The hesitancy sub-domain describes the hesitancy degrees of the DMs. Thus, it is appropriate to explore and solve uncertain information in terms of the vagueness, non-vagueness and hesitant degrees; this is what distinguishes the novel measures from previous ones.

In light of the above, our main aim is to complete the fuzzy description system based on the PNHFS: by holding more uncertainty parameters, the uncertain information is expressed and, at the same time, divided more clearly. The detailed introduction is given in Section 2. The second aim is to propose novel distance, similarity and entropy measures; this work is carried out in Section 3. We expect this new approach to improve the accuracy of practical MCDM results. In Section 4, the details are described and an application case about reducing excess re-evaluation is presented. Finally, the discussion and future research directions are presented, followed by the Conclusions section.

#### **2. Preliminaries**

Firstly, the basic theoretical knowledge used in this paper is reviewed. For convenience, SVNHFS is simply called the neutrosophic hesitant fuzzy set (NHFS) in this work.

#### *2.1. Several Types of NS*

**Definition 1.** *Suppose X is a non-empty reference set. An SVNS is described by the following mathematical formula [4]:*

$$N = \{ \langle x, \tilde{t}(x), \tilde{i}(x), \tilde{f}(x) \rangle \mid x \in X \},$$

*where* $\tilde{t}(x)$, $\tilde{i}(x)$ *and* $\tilde{f}(x) \in [0,1]$. *The functions* $\tilde{t}$, $\tilde{i}$ *and* $\tilde{f}$ *denote three different types of degrees:* $\tilde{t}: X \to [0,1]$ *describes the truth-membership degree,* $\tilde{i}: X \to [0,1]$ *the indeterminacy-membership degree, and* $\tilde{f}: X \to [0,1]$ *the falsity-membership degree. They satisfy the condition* $0 \le \tilde{t}(x) + \tilde{i}(x) + \tilde{f}(x) \le 3$.

**Definition 2.** *Suppose that X is a non-empty reference set; an NHFS on X is based on three functions that map X to subsets of* [0, 1]*. Ye proposed the NHFS with the following mathematical form [18]:*

$$N = \{ \langle \mathbf{x}, T(\mathbf{x}), I(\mathbf{x}), F(\mathbf{x}) \rangle | \mathbf{x} \in X \},$$

*where T*(*x*)*, I*(*x*) *and F*(*x*) *are three subsets of* [0, 1]*, respectively. Moreover, the definition of single-valued neutrosophic hesitant fuzzy element (SVNHFE) is proposed. If T*(*x*)*, I*(*x*) *and F*(*x*) *are three finite subsets, then the SVNHFE can be expressed by*

$$\begin{aligned} & \langle (\alpha_1(x), \alpha_2(x), \dots, \alpha_{L(T)}(x)), (\beta_1(x), \beta_2(x), \dots, \beta_{L(I)}(x)), (\gamma_1(x), \gamma_2(x), \dots, \gamma_{L(F)}(x)) \rangle \\ &= \langle T(x), I(x), F(x) \rangle, \end{aligned}$$

*in which* $L(T)$, $L(I)$ *and* $L(F)$ *are three positive integers describing the number of values in* $T(x)$, $I(x)$ *and* $F(x)$*, respectively. Simultaneously,* $\alpha_a$ ($a \in \{1, 2, \cdots, L(T)\}$) *describes the a-th possible truth-membership degree,* $\beta_b$ ($b \in \{1, 2, \cdots, L(I)\}$) *the b-th possible indeterminacy-membership degree, and* $\gamma_c$ ($c \in \{1, 2, \cdots, L(F)\}$) *the c-th possible falsity-membership degree of* $x \in X$ *to a situation. The restrictions of the SVNHFS are listed below:*

$$0 \le \alpha_a, \beta_b, \gamma_c \le 1 \ \text{and} \ 0 \le \alpha^+ + \beta^+ + \gamma^+ \le 3, \quad \alpha^+ = \max\{\alpha_a\}, \ \beta^+ = \max\{\beta_b\}, \ \gamma^+ = \max\{\gamma_c\} \ \text{for } x \in X.$$

*After that, single-valued neutrosophic hesitant fuzzy measures, correlation coefficients and aggregation operators on SVNHFSs were investigated to solve MCDM problems, medical diagnoses and so on.*

#### *2.2. The Distance and Similarity Measures for SVNHFSs*

**Definition 3.** *A mapping D* : *NHFS*(*X*) × *NHFS*(*X*) → [0, 1]*, where "*×*" is the Cartesian product, is called a distance measure of NHFSs if it satisfies the following four conditions [58]: for all A*, *B*, *C* ∈ *NHFS*(*X*)*,*

*(1)* $0 \le D(A, B) \le 1$*;*
*(2)* $D(A, B) = 0$ *if and only if* $A = B$*;*
*(3)* $D(A, B) = D(B, A)$*;*
*(4) if* $A \subseteq B \subseteq C$*, then* $D(A, C) \ge D(A, B)$ *and* $D(A, C) \ge D(B, C)$*.*

**Definition 4.** *A mapping S* : *NHFS*(*X*) × *NHFS*(*X*) → [0, 1]*, where "*×*" is the Cartesian product, is called a similarity measure if S has the following four axioms [58]: for all A*, *B*, *C* ∈ *NHFS*(*X*)*,*

*(1)* $0 \le S(A, B) \le 1$*;*
*(2)* $S(A, B) = 1$ *if and only if* $A = B$*;*
*(3)* $S(A, B) = S(B, A)$*;*
*(4) if* $A \subseteq B \subseteq C$*, then* $S(A, C) \le S(A, B)$ *and* $S(A, C) \le S(B, C)$*.*

**Definition 5.** *A mapping E* : *NS*(*X*) → [0, 1] *is called an entropy on NS*(*X*) *if E holds the following properties [51]: for A*, *B* ∈ *NS*(*X*)*,*


#### **3. The Distance and Similarity Measures of PSVNHFS**

For this part, recall that, as an extension of FS theory, Shao et al. [48] first proposed the probabilistic single-valued neutrosophic hesitant fuzzy set (PSVNHFS), which better describes uncertainty by involving both objective and subjective uncertain information. The vote set, in turn, was first introduced by Zhai et al. [43]. Thus, according to the division into certain opinion, indeterminate opinion and contradictory (vague) opinion, the inference set, a new kind of vote set, is constructed and applied to the NHFS. Finally, the distance and similarity measures are introduced and investigated.

**Definition 6.** *Suppose that X is a finite reference set. A PNHFS on X is denoted by the following mathematical symbol [48]:*

*Mathematics* **2019**, *7*, 649

$$N = \{ \langle \mathbf{x}, T(\mathbf{x}) | P^T(\mathbf{x}), I(\mathbf{x}) | P^I(\mathbf{x}), F(\mathbf{x}) | P^F(\mathbf{x}) \rangle | \mathbf{x} \in X \}. \tag{1}$$

$T(x)|P^T(x)$, $I(x)|P^I(x)$ *and* $F(x)|P^F(x)$ *are the three components of N, where* $T(x)$, $I(x)$ *and* $F(x)$ *are the possible truth-membership, indeterminacy-membership and falsity-membership hesitant functions of x, respectively, and* $P^T(x)$, $P^I(x)$ *and* $P^F(x)$ *carry the probabilistic information of the elements in* $T(x)$, $I(x)$ *and* $F(x)$*, respectively. This subjective and objective information satisfies the following requirements:*

$$\alpha_a, \beta_b, \gamma_c \in [0,1], \ 0 \le \alpha^+ + \beta^+ + \gamma^+ \le 3; \quad P_a^T, P_b^I, P_c^F \in [0,1]; \quad \sum_{a=1}^{L(T)} P_a^T \le 1, \ \sum_{b=1}^{L(I)} P_b^I \le 1, \ \sum_{c=1}^{L(F)} P_c^F \le 1,$$

*where* $\alpha_a \in T(x)$, $\beta_b \in I(x)$, $\gamma_c \in F(x)$*;* $\alpha^+ = \max\{\alpha_a\}$, $\beta^+ = \max\{\beta_b\}$, $\gamma^+ = \max\{\gamma_c\}$*; and* $P_a^T \in P^T$, $P_b^I \in P^I$, $P_c^F \in P^F$*. The symbols* $L(T)$, $L(I)$ *and* $L(F)$ *are the cardinal numbers of elements in the components* $T(x)|P^T(x)$, $I(x)|P^I(x)$ *and* $F(x)|P^F(x)$*, respectively.*

*Generally, a probabilistic neutrosophic hesitant fuzzy number (PNHFN) of x is expressed by the mathematical symbol:*

$$\begin{split} N &= \langle (\alpha_1|P_1^T, \alpha_2|P_2^T, \dots, \alpha_{L(T)}|P_{L(T)}^T), (\beta_1|P_1^I, \beta_2|P_2^I, \dots, \beta_{L(I)}|P_{L(I)}^I), (\gamma_1|P_1^F, \gamma_2|P_2^F, \dots, \gamma_{L(F)}|P_{L(F)}^F) \rangle \\ &= \{T|P^T, I|P^I, F|P^F\}. \end{split}$$

**Definition 7.** *If X is a finite reference set and N is a PNHFN, then the normalized PNHFN* $\tilde{N}$ *is given by [49]:*

$$\tilde{N} = \{T(x)|\tilde{P}^T(x), I(x)|\tilde{P}^I(x), F(x)|\tilde{P}^F(x)\},\tag{2}$$

*where* $\tilde{P}_a^T = \frac{P_a^T}{\sum_{a=1}^{L(T)} P_a^T}$, $\tilde{P}_b^I = \frac{P_b^I}{\sum_{b=1}^{L(I)} P_b^I}$, $\tilde{P}_c^F = \frac{P_c^F}{\sum_{c=1}^{L(F)} P_c^F}$.

**Example 1.** *If X* = {*x*} *is a reference set, a PNHFS can be denoted by*

$$N = \{ \langle x, \{ 0.5|0.3, 0.6|0.5 \}, \{ 0.4|0.4, 0.6|0.6 \}, \{ 0.3|0.6 \} \rangle \}.$$

*For every membership function, the PNHFN* $N = \langle \{0.5|0.3, 0.6|0.5\}, \{0.4|0.4, 0.6|0.6\}, \{0.3|0.6\} \rangle$ *independently denotes the whole uncertain area with three probabilistic membership functions, where* $\sum_{a=1}^{L(T)} P_a^T = 0.3 + 0.5 = 0.8$, $\sum_{b=1}^{L(I)} P_b^I = 0.4 + 0.6 = 1$ *and* $\sum_{c=1}^{L(F)} P_c^F = 0.6$.
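Definition 7 can be checked against Example 1 with a short sketch (the helper name `normalize` is ours; each component's probabilities are divided by their sum):

```python
# Normalization of the PNHFN from Example 1 (Definition 7): every
# probability is divided by its component's probability sum.

def normalize(pnhfn):
    """pnhfn: triple of dicts {membership degree: probability}."""
    out = []
    for comp in pnhfn:
        s = sum(comp.values())
        out.append({v: p / s for v, p in comp.items()})
    return tuple(out)

N = ({0.5: 0.3, 0.6: 0.5},   # T(x)|P^T, probabilities sum to 0.8
     {0.4: 0.4, 0.6: 0.6},   # I(x)|P^I, sum to 1.0
     {0.3: 0.6})             # F(x)|P^F, sum to 0.6
T, I, F = normalize(N)
print(T)  # ≈ {0.5: 0.375, 0.6: 0.625}
print(F)  # ≈ {0.3: 1.0}
```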

The PNHFS is considered a generalization of the aforementioned forms of FS, including the FS, IFS, HFS, etc. Next, some special cases of the normalized PNHFS are introduced.

(1) If the probability values are equal for the same type of hesitant membership function, i.e.,

$$P_1^T = P_2^T = \dots = P_{L(T)}^T, \quad P_1^I = P_2^I = \dots = P_{L(I)}^I, \quad P_1^F = P_2^F = \dots = P_{L(F)}^F,$$

then the normalized PNHFS reduces to an SVNHFS.


**Definition 8.** *Suppose that X* = {*x*1, *x*2, ··· , *xn*} *is a finite reference set and N is a PNHFN, then the hesitant degree of xi is defined by the following mathematical symbol:*

$$\chi(\mathbf{x}\_i) = 1 - \frac{1}{3} (\frac{1}{L(T)} + \frac{1}{L(I)} + \frac{1}{L(F)});\tag{3}$$

$$\chi(N) = \frac{1}{n} \sum\_{i=1}^{n} \chi(\mathbf{x}\_i),\tag{4}$$

*where* $L(T)$, $L(I)$ *and* $L(F)$ *represent the total numbers of factors in the components* $T(x)|\tilde{P}^T(x)$, $I(x)|\tilde{P}^I(x)$ *and* $F(x)|\tilde{P}^F(x)$*.*

The hesitant degree of *xi* reflects the decision maker's hesitation: the bigger *χ*(*N*), the greater the hesitation of the decision maker in making decisions. If *χ*(*N*) = 0, then the decision information is completely unhesitating.
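Equations (3) and (4) can be sketched as follows (the function names are ours):

```python
# Hesitant degree of Definition 8:
# chi(x) = 1 - (1/L(T) + 1/L(I) + 1/L(F)) / 3, and chi(N) is the average
# of chi(x_i) over the reference set X.

def hesitant_degree(L_T, L_I, L_F):
    return 1 - (1 / L_T + 1 / L_I + 1 / L_F) / 3

def hesitant_degree_set(triples):
    """triples: list of (L(T), L(I), L(F)) tuples, one per x_i in X."""
    return sum(hesitant_degree(*t) for t in triples) / len(triples)

# For the PNHFN of Example 1: L(T) = 2, L(I) = 2, L(F) = 1
print(hesitant_degree(2, 2, 1))          # 1 - (1/2 + 1/2 + 1)/3 = 1/3
print(hesitant_degree_set([(1, 1, 1)]))  # single values only: chi = 0
```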

By the definition of PNHFS, the information $\{\alpha_1|P_1^T, \alpha_2|P_2^T, \cdots, \alpha_{L(T)}|P_{L(T)}^T\}$ denotes the positive attitude towards *x* belonging to a situation *A*; these data express a certain, non-vague component, although no single value can be extracted as the specific truth-membership degree. Similarly, the information $\{\gamma_1|P_1^F, \gamma_2|P_2^F, \cdots, \gamma_{L(F)}|P_{L(F)}^F\}$ parallels the truth-membership hesitant degrees with probability: it denotes a determinate attitude with unsettled data. In contrast, the information $\{\beta_1|P_1^I, \beta_2|P_2^I, \cdots, \beta_{L(I)}|P_{L(I)}^I\}$ expresses an uncertain attitude and an inconclusive membership degree with probability. Thus, the truth-membership and falsity-membership hesitant degrees are considered components of the non-vagueness subspace, whereas the indeterminacy-membership degrees express the uncertain attitude, denoting the imprecision of people's knowledge about *x*. The rest of the region denotes a contradictory (vague) attitude about whether *x* belongs to an event; it represents the unexplored domain of people's knowledge about *x*. As people acquire more and more knowledge, the fuzzy information represented by the contradictory (vague) subspace is converted into the uncertain knowledge represented by $T(x)|P^T(x)$, $I(x)|P^I(x)$ and $F(x)|P^F(x)$.

Thus, we propose a method to gather all uncertain parameters and accurately describe the certain-attitude subspace, the indeterminate-attitude subspace and the contradictory (vague) subspace. In the certain subspace, the standpoint conveyed by the truth-membership and falsity-membership hesitant degrees is definite: the truth-membership hesitant degrees are assigned positive values in [0, 1] and the falsity-membership hesitant degrees negative values in [−1, 0], so the value of the certain attitude belongs to [−1, 1]. By Definition 6, the value of the indeterminate attitude belongs to [0, 1]. Through the above analysis, the PNHFS is a convenient way to express fuzzy information; decision makers, however, prefer to reach the optimal result more conveniently, and the hesitant degree can describe the hesitation of uncertain information. Thus, we fuse the truth-membership hesitant degrees, falsity-membership hesitant degrees and hesitant degree into a single attitude representation. The uncertain neutrosophic space is then expressed, at a relatively macroscopic level, by a certain attitude, an indeterminate attitude and a hesitation; the calculation process is simplified and becomes more feasible for solving problems. Based on the above analysis, the definition of the inference set (IS) is established as follows:

**Definition 9.** *Suppose that X is a finite reference set; then, a inference set (IS) is expressed by the following mathematical symbol:*

$$IS = \{ \langle \mathbf{x}, d(\mathbf{x}), e(\mathbf{x}), \mathbf{g}(\mathbf{x}) \rangle | \mathbf{x} \in X \}, \tag{5}$$

*where IE* = ⟨*x*, *d*(*x*), *e*(*x*), *g*(*x*)⟩ *is defined as an inference element (IE), and* (*d*(*x*), *e*(*x*), *g*(*x*)) *is called an inference number (IN). The function d* : *X* → [−1, 1] *describes the attitude of x belonging to the situation A; it is a composite of the truth-membership and falsity-membership hesitant degrees. The mapping e* : *X* → [0, 1] *expresses the non-vagueness opinion of x belonging to the situation A. In addition, the mapping g* : *X* → [0, 1] *measures the contradictory (vague) degree of people's attitudes about x belonging to the situation A. Note that, when* 0 < *d*(*x*) ≤ 1*, the decision makers are optimistic about x belonging to the situation A; when* −1 ≤ *d*(*x*) < 0*, they are pessimistic; and if d*(*x*) = 0*, their attitude is neutral.*

**Example 2.** *The mathematical symbol* ⟨*x*, 0.4, 0.7, 0.2⟩ *is an IE. It describes a decision maker having a* 40% *degree of agreement that x belongs to the situation A, a* 70% *degree of determination about the information on x with respect to A, and a* 20% *degree of non-hesitation about x belonging to A.*

#### *3.1. The Method of Comparing PNHFSs*

In this subsection, a way to convert a PNHFE to an IE is established, so that PNHFSs can be compared by utilizing IEs. In the entire space, the certain-attitude subspace, the indeterminate-attitude subspace, the contradictory (vague) attitude subspace and the corresponding probabilistic values express different meanings. The certain-attitude subspace represents the degrees of agreement or disagreement about *x* belonging to the situation *A*; the indeterminate-attitude subspace describes the lack of decision makers' information, whereas the contradictory (vague) subspace represents the contradiction in decision makers' knowledge. Additionally, probability expresses an uncertainty that is shared by all three subspaces; therefore, the probability values are integrated to reduce the number of uncertain variables. Next, in order to establish the distance and similarity measures, a function from a PNHFS to an IS is given.

**Definition 10.** *Suppose that X is a finite reference set, N is a finite PNHFE, and a mapping H is defined as follows:*

$$H(N) = \{ \sum_{a=1}^{L(T)} t_a P_a^T - \sum_{c=1}^{L(F)} f_c P_c^F, \ \sum_{b=1}^{L(I)} (1 - i_b) P_b^I, \ 1 - \chi(x_i) \}. \tag{6}$$

For instance, when $P_1^T = P_2^T = \cdots = P_{L(T)}^T$, $P_1^I = P_2^I = \cdots = P_{L(I)}^I$ and $P_1^F = P_2^F = \cdots = P_{L(F)}^F$, the PNHFS is reduced to an NHFS. Thus, the function *H*(*N*) can be transformed to an IS as

$$H(N) = \{ \frac{\sum_{a=1}^{L(T)} t_a}{L(T)} - \frac{\sum_{c=1}^{L(F)} f_c}{L(F)}, \ \frac{\sum_{b=1}^{L(I)} (1 - i_b)}{L(I)}, \ 1 - \chi(x_i) \}.$$

According to Equation (6), the IS includes both the probabilistic and the fuzzy information, as can be seen by examining Definition 10. The formula $\sum_{a=1}^{L(T)} t_a P_a^T - \sum_{c=1}^{L(F)} f_c P_c^F$ gives the average value of the certain attitude obtained from the truth-membership and falsity-membership subspaces. The expression $\sum_{b=1}^{L(I)} (1 - i_b) P_b^I$ gives the average degree of non-hesitant opinion derived from the indeterminacy-membership subspace. The formula $1 - \chi(x_i)$ gives the average value of the careful attitude towards the known information about *x* related to the situation *A*. By Definition 6, all objective and subjective uncertain elements are considered and the different types of fuzzy spaces are distinguished. However, if the PNHFE is infinite, Equation (6) becomes

$$H(N) = \{ \int t \, \mathrm{d}P^T - \int f \, \mathrm{d}P^F, \ \int (1 - i) \, \mathrm{d}P^I, \ 0 \}. \tag{7}$$
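For a finite PNHFE, Equation (6) is directly computable; the sketch below (the helper name `to_inference_element` is ours) applies it to the PNHFE of Example 1:

```python
# Conversion H from a finite PNHFE to an inference element (Definition 10):
# d = sum(t * P^T) - sum(f * P^F),  e = sum((1 - i) * P^I),  g = 1 - chi(x).

def to_inference_element(T, I, F):
    """T, I, F: dicts {membership degree: probability} of one PNHFE."""
    d = sum(t * p for t, p in T.items()) - sum(f * p for f, p in F.items())
    e = sum((1 - i) * p for i, p in I.items())
    chi = 1 - (1 / len(T) + 1 / len(I) + 1 / len(F)) / 3
    return d, e, 1 - chi

# PNHFE of Example 1
T = {0.5: 0.3, 0.6: 0.5}
I = {0.4: 0.4, 0.6: 0.6}
F = {0.3: 0.6}
d, e, g = to_inference_element(T, I, F)
print(round(d, 3), round(e, 3), round(g, 3))  # d ≈ 0.27, e ≈ 0.48, g ≈ 0.667
```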

Based on the importance of objective and subjective information, the method of comparison for IEs is defined as follows:

**Definition 11.** *Let X be a finite reference set, and let IE*<sub>1</sub> = ⟨*d*<sub>1</sub>(*x*), *e*<sub>1</sub>(*x*), *g*<sub>1</sub>(*x*)⟩ *and IE*<sub>2</sub> = ⟨*d*<sub>2</sub>(*x*), *e*<sub>2</sub>(*x*), *g*<sub>2</sub>(*x*)⟩ *be two IEs; then*


The entire uncertain field is divided so as to describe the certain, indeterminate and hesitant attitudes. By Definition 9, from both the internal and the external perspective, the IE expresses the certain subdomain without probabilistic information. Thus, according to the degree of information obtained and the importance of experience in decision-making activities, the comparison of IEs follows the rule "degree of non-hesitation first, then determinacy, and lastly opinion".

Supposing that *A* and *B* are two PNHFEs on the finite reference set *X*, the corresponding IEs can be expressed as *IEA* = ⟨*dA*(*x*), *eA*(*x*), *gA*(*x*)⟩ and *IEB* = ⟨*dB*(*x*), *eB*(*x*), *gB*(*x*)⟩, respectively. Thus, the notion of a binary relation for PNHFEs can be described as follows:

**Definition 12.** *Suppose that A and B are two PNHFEs on the finite reference set X. Then, the binary relations for PNHFEs are given as follows:*


#### *3.2. Distance and Similarity Measures of PNHFSs*

According to the work mentioned above, the distance, similarity and entropy measures of PNHFEs are established in this subsection. The inclusion between *ISA* and *ISB* is given; similarly, the inclusion between the PNHFSs *A* and *B* is proposed.

Suppose that *X* is a finite reference set, *A* and *B* are PNHFSs on *X*, and *ISA* and *ISB* are the corresponding ISs of *A* and *B*, respectively. Then,

$$A \subseteq B \text{ iff } \forall x \in X, \ \overline{T_A|P^{T_A}} \le \overline{T_B|P^{T_B}}, \ \overline{I_A|P^{I_A}} \ge \overline{I_B|P^{I_B}}, \ \overline{F_A|P^{F_A}} \ge \overline{F_B|P^{F_B}} \text{ and } \chi(A) \ge \chi(B),$$

where $\overline{T_A|P^{T_A}}$ and $\overline{T_B|P^{T_B}}$ describe the average truth-membership hesitant degrees of *A* and *B*, respectively; $\overline{I_A|P^{I_A}}$ and $\overline{I_B|P^{I_B}}$ express the average indeterminacy-membership hesitant degrees; and, similarly, $\overline{F_A|P^{F_A}}$ and $\overline{F_B|P^{F_B}}$ represent the corresponding average falsity-membership hesitant degrees.

Additionally, if *ISA* ⊆ *ISB*, the following conditions need to hold:

$$d_A \le d_B, \ e_A \le e_B, \ g_A \le g_B.$$

**Definition 13.** *Suppose that X is a finite reference set, and ISA, ISB and ISC are three ISs in X. Let DIS* : *IS*(*X*) × *IS*(*X*) → [0, 1] *be a function, where "*×*" means the Cartesian product. Then, DIS is called a distance measure if DIS satisfies the following three requirements:*

$$(1)\quad D_{IS}(IS_A, IS_B) = 0 \ \text{if and only if} \ IS_A = IS_B;$$

$$(2)\quad D_{IS}(IS_A, IS_B) = D_{IS}(IS_B, IS_A);$$

$$(3)\quad D_{IS}(IS_A, IS_C) \ge D_{IS}(IS_A, IS_B) \ \text{and} \ D_{IS}(IS_A, IS_C) \ge D_{IS}(IS_B, IS_C) \ \text{when } IS_A \subseteq IS_B \subseteq IS_C.$$

**Theorem 1.** *Suppose that ISA* = {⟨*dA*(*x*), *eA*(*x*), *gA*(*x*)⟩ | *x* ∈ *X*} *and ISB* = {⟨*dB*(*x*), *eB*(*x*), *gB*(*x*)⟩ | *x* ∈ *X*} *are two ISs in X; then, the function*

$$D_{IS} = AIO(MIT(MIU_1(|d_A(x) - d_B(x)|), MIU_2(|e_A(x) - e_B(x)|), MIU_3(|g_A(x) - g_B(x)|))) \tag{8}$$

*is a distance measure for ISs, where the mappings MIU*<sub>1</sub>, *MIU*<sub>2</sub>, *MIU*<sub>3</sub> : [0, 1] → [0, 1] *are three monotonically increasing unary functions with MIU*<sub>1</sub>(0) = 0*, MIU*<sub>2</sub>(0) = 0 *and MIU*<sub>3</sub>(0) = 0*; the three functions may coincide but need not. The mapping MIT* : [0, 1]<sup>3</sup> → [0, 1] *is a monotonically increasing ternary function satisfying MIT*(0, 0, 0) = 0 *and MIT*′<sub>1</sub> ≥ 0*, MIT*′<sub>2</sub> ≥ 0*, MIT*′<sub>3</sub> ≥ 0*, where MIT*′<sub>1</sub>*, MIT*′<sub>2</sub> *and MIT*′<sub>3</sub> *are the partial derivatives of MIT with respect to its first, second and third arguments, respectively. Additionally, AIO* : [0, 1]<sup>*n*</sup> → [0, 1] *is an aggregation operator whose partial derivatives satisfy AIO*′<sub>*i*</sub> ≥ 0 *(i* ∈ {1, 2, ··· , *n*}*), where n represents the total number of factors in X.*
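As a sanity check on Theorem 1, one admissible instance takes $MIU_1(x) = x/2$ (to absorb the $[-1,1]$ range of $d$), $MIU_2$ and $MIU_3$ as identities, $MIT$ as the arithmetic mean of its three arguments, and $AIO$ as the mean over $X$; all satisfy the stated monotonicity conditions. The helper name `d_is` is ours:

```python
# One concrete instance of the distance of Equation (8): MIU_1(x) = x/2,
# MIU_2 and MIU_3 the identity, MIT the arithmetic mean, AIO the mean
# over the reference set X. All satisfy the stated conditions.

def d_is(IS_A, IS_B):
    """IS_A, IS_B: lists of inference numbers (d, e, g), one per x in X.
    Since d lies in [-1, 1], |d_A - d_B| is halved to stay in [0, 1]."""
    per_x = []
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        per_x.append((abs(dA - dB) / 2 + abs(eA - eB) + abs(gA - gB)) / 3)
    return sum(per_x) / len(per_x)

IS_A = [(0.4, 0.7, 0.2)]
IS_B = [(-0.2, 0.5, 0.2)]
print(d_is(IS_A, IS_B))  # (0.3 + 0.2 + 0.0)/3 ≈ 0.1667
```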

**Proof.** According to the conditions on *MIU*1, *MIU*2, *MIU*3, *MIT* and *AIO*, Definition 13 (1) and (2) obviously hold. Thus, only the proof of condition (3) is listed here. Since *ISA* ⊆ *ISB* ⊆ *ISC* holds, the following inequalities are obtained:

$$\begin{split} |d\_{A}(x) - d\_{C}(x)| &\geq |d\_{A}(x) - d\_{B}(x)|, |e\_{A}(x) - e\_{C}(x)| \geq |e\_{A}(x) - e\_{B}(x)|, |g\_{A}(x) - g\_{C}(x)| \geq |g\_{A}(x) - g\_{B}(x)|; \\ |d\_{A}(x) - d\_{C}(x)| &\geq |d\_{B}(x) - d\_{C}(x)|, |e\_{A}(x) - e\_{C}(x)| \geq |e\_{B}(x) - e\_{C}(x)|, |g\_{A}(x) - g\_{C}(x)| \geq |g\_{B}(x) - g\_{C}(x)|. \end{split}$$

Because *MIU*1, *MIU*<sup>2</sup> and *MIU*<sup>3</sup> are monotonically increasing functions, we obtain, ∀*x* ∈ *X*,

$$\begin{split} MIU\_{1}(|d\_{A}(x) - d\_{C}(x)|) &\geq MIU\_{1}(|d\_{A}(x) - d\_{B}(x)|), MIU\_{2}(|e\_{A}(x) - e\_{C}(x)|) \geq MIU\_{2}(|e\_{A}(x) - e\_{B}(x)|), \\ MIU\_{3}(|g\_{A}(x) - g\_{C}(x)|) &\geq MIU\_{3}(|g\_{A}(x) - g\_{B}(x)|); MIU\_{1}(|d\_{A}(x) - d\_{C}(x)|) \geq MIU\_{1}(|d\_{B}(x) - d\_{C}(x)|), \\ MIU\_{2}(|e\_{A}(x) - e\_{C}(x)|) &\geq MIU\_{2}(|e\_{B}(x) - e\_{C}(x)|), MIU\_{3}(|g\_{A}(x) - g\_{C}(x)|) \geq MIU\_{3}(|g\_{B}(x) - g\_{C}(x)|). \end{split}$$

Moreover, the partial derivatives *MIT* <sup>1</sup> ≥ 0, *MIT* <sup>2</sup> ≥ 0 and *MIT* <sup>3</sup> ≥ 0, thus

$$\begin{split} MIT(MIU\_{1}(|d\_{A}(x)-d\_{C}(x)|), MIU\_{2}(|e\_{A}(x)-e\_{C}(x)|), MIU\_{3}(|g\_{A}(x)-g\_{C}(x)|)) \\ \geq MIT(MIU\_{1}(|d\_{A}(x)-d\_{B}(x)|), MIU\_{2}(|e\_{A}(x)-e\_{B}(x)|), MIU\_{3}(|g\_{A}(x)-g\_{B}(x)|)); \\ MIT(MIU\_{1}(|d\_{A}(x)-d\_{C}(x)|), MIU\_{2}(|e\_{A}(x)-e\_{C}(x)|), MIU\_{3}(|g\_{A}(x)-g\_{C}(x)|)) \\ \geq MIT(MIU\_{1}(|d\_{B}(x)-d\_{C}(x)|), MIU\_{2}(|e\_{B}(x)-e\_{C}(x)|), MIU\_{3}(|g\_{B}(x)-g\_{C}(x)|)). \end{split}$$

According to the characteristic of function *AIO*, the following results are shown:

$$\begin{split} AIO(MIT(MIU\_{1}(|d\_{A}(x)-d\_{C}(x)|), MIU\_{2}(|e\_{A}(x)-e\_{C}(x)|), MIU\_{3}(|g\_{A}(x)-g\_{C}(x)|))) \\ \geq AIO(MIT(MIU\_{1}(|d\_{A}(x)-d\_{B}(x)|), MIU\_{2}(|e\_{A}(x)-e\_{B}(x)|), MIU\_{3}(|g\_{A}(x)-g\_{B}(x)|))); \\ AIO(MIT(MIU\_{1}(|d\_{A}(x)-d\_{C}(x)|), MIU\_{2}(|e\_{A}(x)-e\_{C}(x)|), MIU\_{3}(|g\_{A}(x)-g\_{C}(x)|))) \\ \geq AIO(MIT(MIU\_{1}(|d\_{B}(x)-d\_{C}(x)|), MIU\_{2}(|e\_{B}(x)-e\_{C}(x)|), MIU\_{3}(|g\_{B}(x)-g\_{C}(x)|))). \end{split}$$

Namely, *DIS*(*ISA*, *ISC*) ≥ *DIS*(*ISA*, *ISB*) and *DIS*(*ISA*, *ISC*) ≥ *DIS*(*ISB*, *ISC*). This completes the proof.

**Theorem 2.** *Suppose that ISA* = {*dA*(*x*),*eA*(*x*), *gA*(*x*)|*x* ∈ *X*} *and ISB* = {*dB*(*x*),*eB*(*x*), *gB*(*x*)|*x* ∈ *X*} *are two ISs in X; then, the function*

$$D\_{IS} = AIO(MDT(MDU\_1(|d\_A(x) - d\_B(x)|), MDU\_2(|e\_A(x) - e\_B(x)|), MDU\_3(|g\_A(x) - g\_B(x)|))) \tag{9}$$

*is a distance measure on IS, where the mappings MDU*1, *MDU*2, *MDU*<sup>3</sup> : [0, 1] → [0, 1] *satisfy the following conditions: MDU*1*, MDU*<sup>2</sup> *and MDU*<sup>3</sup> *are three monotonically decreasing unary functions with MDU*1(1) = *MDU*2(1) = *MDU*3(1) = 0*. These functions may be the same; no further restrictions are imposed here. The mapping MDT* : [0, 1] <sup>3</sup> <sup>→</sup> [0, 1] *is a monotonically decreasing ternary function that satisfies MDT*(1, 1, 1) = 0 *and MDT* <sup>1</sup> ≤ 0*, MDT* <sup>2</sup> ≤ 0*, MDT* <sup>3</sup> ≤ 0*, where MDT* <sup>1</sup>*, MDT* <sup>2</sup> *and MDT* <sup>3</sup> *are the partial derivatives of MDT with respect to MDU*1*, MDU*<sup>2</sup> *and MDU*3*, respectively. AIO* : [0, 1] *<sup>n</sup>* <sup>→</sup> [0, 1] *is an aggregation operator whose partial derivatives satisfy AIO <sup>i</sup>* ≥ 0 *(i* ∈ {1, 2, ··· , *n*}*); n represents the total number of factors in X.*

**Proof.** Since the proof is similar to that of Theorem 1, it is omitted; the function in Theorem 2 satisfies all the conditions of Definition 13.

**Definition 14.** *Suppose that X is a finite reference set; A, B and C are three PNHFSs on X. A mapping DPNHFS* : *PNHFS*(*X*) × *PNHFS*(*X*) → [0, 1] *is called a distance measure on PNHFS*(*X*) *if it holds the following three requirements, where "*×*" is the Cartesian product:*

*(1) DPNHFS*(*A*, *B*) = 0 *iff A* = *B;*

$$(2) \quad D\_{PNHFS}(A,B) = D\_{PNHFS}(B,A);$$

*(3) If A* ⊆ *B* ⊆ *C, then DPNHFS*(*A*, *B*) ≤ *DPNHFS*(*A*, *C*) *and DPNHFS*(*B*, *C*) ≤ *DPNHFS*(*A*, *C*)*.*

**Theorem 3.** *Suppose that X is a finite reference set, A, B and C are three PNHFSs in X, ISA, ISB and ISC are corresponding ISs of A, B and C, respectively. Then, a real-valued mapping:*

$$D\_{PNHFS}(A,B) = MIU(D\_{IS}(IS\_A, IS\_B))$$

*is a distance measure on PNHFS(X), where MIU* : [0, 1] → [0, 1] *is a monotonically increasing unary mapping with MIU*(0) = 0*.*

**Proof.** According to the conditions of Theorem 3, the mapping *DPNHFS* holds requirements (1) and (2) of Definition 14. Thus, only requirement (3) needs to be proved.

Since *A* ⊆ *B* ⊆ *C* for *A*, *B*, *C* ∈ *PNHFS*(*X*), by Definition 10, the corresponding ISs of *A*, *B*, *C* satisfy the following inclusion relation:

$$IS\_A \subseteq IS\_B \subseteq IS\_C.$$

Obviously, the following inequalities are obtained:

$$D\_{IS}(IS\_A, IS\_C) \ge D\_{IS}(IS\_A, IS\_B),$$

$$D\_{IS}(IS\_A, IS\_C) \ge D\_{IS}(IS\_B, IS\_C).$$

Since the function *MIU* is a monotonically increasing unary mapping, the following inequalities hold:

$$MIU(D\_{IS}(IS\_A, IS\_B)) \le MIU(D\_{IS}(IS\_A, IS\_C)),\\ MIU(D\_{IS}(IS\_B, IS\_C)) \le MIU(D\_{IS}(IS\_A, IS\_C)).$$

This completes the proof.

*Mathematics* **2019**, *7*, 649

**Example 3.** *Suppose that X is a finite reference set, A, B are PNHFSs on X, ISA* = {*dA*(*x*),*eA*(*x*), *gA*(*x*)|*x* ∈ *X*} *and ISB* = {*dB*(*x*),*eB*(*x*), *gB*(*x*)|*x* ∈ *X*} *are the corresponding ISs of those two PNHFSs. Based on Theorems 1 and 3, let MIU*<sup>1</sup> = *y<sup>φ</sup>, MIU*<sup>2</sup> = *y<sup>μ</sup>, MIU*<sup>3</sup> = *y<sup>ν</sup>, y* ∈ [0, 1], 0 ≤ *φ*, *μ*, *ν* ≤ 1*; MIT* = *log*4(1 + *y*<sup>1</sup> + *y*<sup>2</sup> + *y*3)*, y*1, *y*2, *y*<sup>3</sup> ∈ [0, 1]*. Additionally, suppose MIU* = *y<sup>λ</sup>, where y* ∈ [0, 1], *λ* ≥ 0*. Then, we have*

$$D\_1(A, B) = \frac{1}{2n} \sum\_{x \in X} \left(\log\_4\left(1 + \left(\frac{|d\_A(x) - d\_B(x)|}{2}\right)^{\phi} + (|e\_A(x) - e\_B(x)|)^{\mu} + \left(\frac{|g\_A(x) - g\_B(x)|}{2}\right)^{\nu}\right)\right)^{\lambda}. \tag{10}$$

*If φ* = *μ* = *ν* = *λ* = 1*, we have*

$$D\_1^{\phi=\mu=\nu=\lambda=1}(A,B) = \frac{1}{2n} \sum\_{x \in X} \log\_4\left(1 + \frac{|d\_A(x) - d\_B(x)|}{2} + |e\_A(x) - e\_B(x)| + \frac{|g\_A(x) - g\_B(x)|}{2}\right).\tag{11}$$

*If φ* = *μ* = *ν* = 2*, λ* = 1/2*, then*

$$D\_1^{\phi=\mu=\nu=2,\lambda=\frac{1}{2}}(A,B) = \frac{1}{2n}\sum\_{x\in X}\left(\log\_4\left(1+\left(\frac{|d\_A(x)-d\_B(x)|}{2}\right)^2+(|e\_A(x)-e\_B(x)|)^2+\left(\frac{|g\_A(x)-g\_B(x)|}{2}\right)^2\right)\right)^{\frac{1}{2}}.\tag{12}$$

From the formulas of *D*1(*A*, *B*) and Equations (11) and (12), we know that the parameters *φ*, *μ*, *ν* control the contributions of |*dA*(*x*) − *dB*(*x*)|, |*eA*(*x*) − *eB*(*x*)| and |*gA*(*x*) − *gB*(*x*)| within the internal framework of *D*1(*A*, *B*), while the parameter *λ* regulates the interaction among |*dA*(*x*) − *dB*(*x*)|, |*eA*(*x*) − *eB*(*x*)| and |*gA*(*x*) − *gB*(*x*)| over the aggregated region. The parameters *φ*, *μ*, *ν* are chosen according to the application environment. Thus, for an MCDM problem, the measure is a tool for capturing the distinctions in the decision makers' knowledge backgrounds, and it is rational to choose the parameters managing the internal framework of the measure according to the respective importance degrees. By assigning different functions to |*dA*(*x*) − *dB*(*x*)|, |*eA*(*x*) − *eB*(*x*)| and |*gA*(*x*) − *gB*(*x*)|, their relative influence can also be adjusted.
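To make the construction concrete, the distance *D*1 of Equation (10) can be sketched in Python. This is a minimal illustration only: the representation of an IS as a list of (*d*, *e*, *g*) triples, one per element of *X*, and the function name are our own assumptions, not part of the paper.

```python
import math

def d1(IS_A, IS_B, phi=1.0, mu=1.0, nu=1.0, lam=1.0):
    """Distance D1 of Eq. (10) between two ISs given as lists of
    (d, e, g) triples, one triple per element of the reference set X."""
    n = len(IS_A)
    total = 0.0
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        inner = (1
                 + (abs(dA - dB) / 2) ** phi
                 + abs(eA - eB) ** mu
                 + (abs(gA - gB) / 2) ** nu)
        total += math.log(inner, 4) ** lam
    return total / (2 * n)

A = [(0.2, 0.5, 0.1), (0.4, 0.3, 0.2)]
B = [(0.6, 0.4, 0.3), (0.1, 0.5, 0.0)]
print(d1(A, A))                              # identical ISs give distance 0
print(d1(A, B, phi=2, mu=2, nu=2, lam=0.5))  # the phi=mu=nu=2, lambda=1/2 case
```

Different parameter choices (e.g. `phi=mu=nu=lam=1`) reproduce the special cases discussed above.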

**Example 4.** *Suppose that X, A, B, ISA and ISB are as mentioned in Example 3, MIU*<sup>1</sup> = (*ln*(1 + *y*))*<sup>φ</sup>,* (*y* ∈ [0, 1], *φ* ≥ 0)*; MIU*<sup>2</sup> = *y<sup>μ</sup>,* (*y* ∈ [0, 1], *μ* ≥ 0)*; MIU*<sup>3</sup> = *y<sup>ν</sup>,* (*y* ∈ [0, 1], *ν* ≥ 0)*; MIT* = (*y*<sup>1</sup> · *y*<sup>2</sup> · *y*3)*<sup>λ</sup>,* (*y*1, *y*2, *y*<sup>3</sup> ∈ [0, 1], *λ* ≥ 0)*. Additionally, MIU* = *t* · *y,* (*y* ∈ [0, 1], *t* ≥ 0)*. Then,*

$$D\_2(A,B) = t\sum\_{x \in X} \left(\left(\ln\left(1 + \frac{|d\_A(x) - d\_B(x)|}{2}\right)\right)^\phi |e\_A(x) - e\_B(x)|^\mu \left|\frac{g\_A(x) - g\_B(x)}{2}\right|^\nu\right)^\lambda.$$

*In addition, if φ* = *μ* = *ν* = *λ* = *t* = 1*, then*

$$D\_2^{\phi=\mu=\nu=\lambda=t=1}(A,B) = \sum\_{x\in X} \ln\left(1+\frac{|d\_A(x)-d\_B(x)|}{2}\right) |e\_A(x)-e\_B(x)| \left|\frac{g\_A(x)-g\_B(x)}{2}\right|.$$
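A sketch of *D*2 from Example 4 follows, under the same per-element (*d*, *e*, *g*) representation assumed earlier and our reading that *MIU* acts as multiplication by *t*; this is an illustration of the formula, not a verbatim implementation from the paper.

```python
import math

def d2(IS_A, IS_B, phi=1.0, mu=1.0, nu=1.0, lam=1.0, t=1.0):
    """Distance D2 of Example 4 on ISs given as lists of (d, e, g) triples."""
    total = 0.0
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        term = (math.log(1 + abs(dA - dB) / 2) ** phi
                * abs(eA - eB) ** mu
                * (abs(gA - gB) / 2) ** nu)
        total += term ** lam
    return t * total

A = [(0.2, 0.5, 0.1)]
B = [(0.6, 0.4, 0.3)]
print(d2(A, A))  # identical ISs give distance 0
print(d2(A, B))  # default parameters give the reduced formula above
```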

**Definition 15.** *Suppose that X is a finite reference set, ISA, ISB and ISC are three ISs on X, SIS* : *IS*(*X*) × *IS*(*X*) → [0, 1] *is a real-valued function, where "*×*" is the Cartesian product. Then, SIS is called a similarity measure on IS*(*X*)*, if it holds the following three axiomatic conditions:*

*(1) SIS*(*ISA*, *ISB*) = 1 *iff ISA* = *ISB;*

*(2) SIS*(*ISA*, *ISB*) = *SIS*(*ISB*, *ISA*)*;*

*(3) If ISA* ⊆ *ISB* ⊆ *ISC, then SIS*(*ISA*, *ISB*) ≥ *SIS*(*ISA*, *ISC*)*, SIS*(*ISB*, *ISC*) ≥ *SIS*(*ISA*, *ISC*)*.*

**Theorem 4.** *Suppose that X is a finite reference set, ISA* = {*dA*(*x*),*eA*(*x*), *gA*(*x*)|*x* ∈ *X*} *and ISB* = {*dB*(*x*),*eB*(*x*), *gB*(*x*)|*x* ∈ *X*} *are two ISs; then, the following function SIS*(*ISA*, *ISB*) *is a similarity measure:*

$$S\_{IS}(IS\_A, IS\_B) = AIO(MDT(MIU\_1(\frac{|d\_A(x) - d\_B(x)|}{2}), MIU\_2(|e\_A(x) - e\_B(x)|), MIU\_3(|g\_A(x) - g\_B(x)|))), \tag{13}$$

*where MIU*1, *MIU*2, *MIU*<sup>3</sup> : [0, 1] → [0, 1] *hold the following conditions: MIU*1, *MIU*<sup>2</sup> *and MIU*<sup>3</sup> *are three monotonically increasing unary mappings with MIU*1(0) = *MIU*2(0) = *MIU*3(0) = 0*; they may be the same function, and no further restrictions are imposed here. MDT* : [0, 1] <sup>3</sup> <sup>→</sup> [0, 1] *is a monotonically decreasing ternary mapping; MDT* <sup>1</sup>, *MDT* <sup>2</sup>, *MDT* <sup>3</sup> *are the corresponding partial derivatives of MDT with respect to MIU*1, *MIU*2, *MIU*3*, respectively, and satisfy MDT* <sup>1</sup> ≤ 0, *MDT* <sup>2</sup> ≤ 0, *MDT* <sup>3</sup> ≤ 0*, with MDT*(0, 0, 0) = 1*. The mapping AIO* : [0, 1] *<sup>n</sup>* <sup>→</sup> [0, 1] *is an aggregation operator whose partial derivatives satisfy AIO <sup>i</sup>* ≥ 0 (*i* ∈ {1, 2, ··· , *n*})*; n denotes the total number of factors in X.*

**Proof.** The proof is similar to that of Theorem 1, thus it is omitted here.

**Theorem 5.** *Suppose that X is a finite reference set, ISA* = {*dA*(*x*),*eA*(*x*), *gA*(*x*)|*x* ∈ *X*} *and ISB* = {*dB*(*x*),*eB*(*x*), *gB*(*x*)|*x* ∈ *X*} *are two ISs; then, the following function SIS*(*ISA*, *ISB*) *is a similarity measure:*

$$S\_{IS}(IS\_A, IS\_B) = AIO(MIT(MDU\_1(\frac{|d\_A(x) - d\_B(x)|}{2}), MDU\_2(|e\_A(x) - e\_B(x)|), MDU\_3(|g\_A(x) - g\_B(x)|))), \tag{14}$$

*where MDU*1, *MDU*2, *MDU*<sup>3</sup> : [0, 1] → [0, 1] *satisfy the following requirements: MDU*1, *MDU*<sup>2</sup> *and MDU*<sup>3</sup> *are three monotonically decreasing unary mappings with MDU*1(1) = *MDU*2(1) = *MDU*3(1) = 0*; they may be the same function, and no further restrictions are imposed here. MIT* : [0, 1] <sup>3</sup> <sup>→</sup> [0, 1] *is a monotonically increasing ternary mapping; MIT* <sup>1</sup>, *MIT* <sup>2</sup>, *MIT* <sup>3</sup> *are the corresponding partial derivatives of MIT with respect to MDU*1, *MDU*2, *MDU*3*, respectively, and satisfy MIT* <sup>1</sup> ≥ 0*, MIT* <sup>2</sup> ≥ 0*, MIT* <sup>3</sup> ≥ 0*, with MIT*(0, 0, 0) = 0*. The mapping AIO* : [0, 1] *<sup>n</sup>* <sup>→</sup> [0, 1] *is an aggregation operator whose partial derivatives satisfy AIO <sup>i</sup>* ≥ 0 (*i* ∈ {1, 2, ··· , *n*})*; n denotes the total number of factors in X.*

**Proof.** The proof process is omitted.

**Definition 16.** *Suppose that X is a finite reference set; for any three PNHFSs A, B and C on X, a function SPNHFS* : *PNHFS*(*X*) × *PNHFS*(*X*) → [0, 1] *is called a similarity measure if it holds the following three axiomatic conditions, where "*×*" is the Cartesian product:*

*(1) SPNHFS*(*A*, *B*) = 1 *iff A* = *B;*

*(2) SPNHFS*(*A*, *B*) = *SPNHFS*(*B*, *A*)*;*

*(3) If A* ⊆ *B* ⊆ *C, then SPNHFS*(*A*, *B*) ≥ *SPNHFS*(*A*, *C*) *and SPNHFS*(*B*, *C*) ≥ *SPNHFS*(*A*, *C*)*.*
**Theorem 6.** *Let X be a finite reference set, and A and B be two PNHFSs on X. ISA and ISB are the corresponding ISs of A and B, respectively. Then, the following mapping SPNHFS is a similarity measure on PNHFS*(*X*)*:*

$$S\_{PNHFS}(A,B) = MIU(S\_{IS}(IS\_A, IS\_B)),\tag{15}$$

*where MIU* : [0, 1] → [0, 1] *is an increasing function and MIU*(0) = 0*.*

**Proof.** According to Theorem 4, the proof is obvious; thus, it is omitted.

**Example 5.** *Suppose that X, A, B, ISA, ISB are as mentioned above, MDU*<sup>1</sup> = *MDU*<sup>2</sup> = *MDU*<sup>3</sup> = *t<sup>y</sup>* − *t,* (0 ≤ *t*, *y* ≤ 1)*; MIT* = (*y*<sup>1</sup> + *y*<sup>2</sup> + *y*3)*<sup>φ</sup>,* 0 ≤ *φ*, *y*1, *y*2, *y*<sup>3</sup> ≤ 1*. Additionally, suppose MIU* = *y<sup>λ</sup>,* 0 ≤ *y* ≤ 1, *λ* ≥ 0*. The similarity measure is described as follows:*

$$S\_1(A,B) = \sum\_{x \in X} \left( t^{\frac{|d\_A(x) - d\_B(x)|}{2}} + t^{|e\_A(x) - e\_B(x)|} + t^{|g\_A(x) - g\_B(x)|} - 3t \right)^\lambda.$$

*In addition, suppose t* = 1/3*, φ* = *λ* = 1*; thus,*

$$S\_{1,t=\frac{1}{3}}^{\phi=\lambda=1}(A,B) = \sum\_{x\in X} \left(\frac{1}{3}\right)^{\frac{|d\_A(x) - d\_B(x)|}{2}} + \left(\frac{1}{3}\right)^{|e\_A(x) - e\_B(x)|} + \left(\frac{1}{3}\right)^{|g\_A(x) - g\_B(x)|} - 1.$$

Example 5 shows that the chosen parameters and mappings determine how |*dA*(*x*) − *dB*(*x*)|, |*eA*(*x*) − *eB*(*x*)| and |*gA*(*x*) − *gB*(*x*)| contribute to the internal framework of the similarity measure. The methods for selecting these parameters and mappings are similar to those of Example 3.
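The similarity *S*1 of Example 5 can be sketched as below, again assuming the per-element (*d*, *e*, *g*) representation of an IS introduced for the distance examples (our own assumption):

```python
def s1(IS_A, IS_B, t=1/3, lam=1.0):
    """Similarity S1 of Example 5 on ISs given as lists of (d, e, g) triples."""
    total = 0.0
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        total += (t ** (abs(dA - dB) / 2)
                  + t ** abs(eA - eB)
                  + t ** abs(gA - gB)
                  - 3 * t) ** lam
    return total

A = [(0.2, 0.5, 0.1)]
B = [(0.6, 0.4, 0.3)]
print(s1(A, A) > s1(A, B))  # closer ISs score higher
```

With the defaults `t = 1/3, lam = 1`, each summand follows the reduced formula displayed above.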

#### *3.3. The Interrelations among Distance, Similarity and Entropy Measures*

According to the concept of "duality", the distance and similarity measures of SVNSs and INSs have been investigated. However, different knowledge backgrounds of decision makers lead to different results. Based on the interrelation between distance and similarity measures, Wang [23] first proposed the definitions of entropy and cross entropy of MVNSs and applied them to solving MCDM problems.

In this section, the interrelations among the distance, similarity and entropy measures of PNHFSs are investigated. According to Subsection 3.2, the distance measure describes the difference between factors, while the similarity measure reflects the closeness of factors. Because the distance measure and the similarity measure describe two opposite aspects, the relationship between these two measures is investigated in the following theorem:

**Theorem 7.** *Suppose that A and B are two PNHFSs on X and the distance measure DPNHFS*(*A*, *B*) *holds the conditions in Definition 14; then, SPNHFS*(*A*, *B*) = *FN*(*DPNHFS*(*A*, *B*)) *is a similarity measure, which holds the axiomatic conditions in Definition 16, in which FN* : [0, 1] → [0, 1] *is a fuzzy negation.*

**Proof.** By Definitions 14 and 16, the proof is obvious, so it is omitted.
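Theorem 7 can be illustrated with the standard fuzzy negation *FN*(*x*) = 1 − *x*: composing any distance measure with a fuzzy negation yields a similarity measure. The toy normalized distance below is our own illustration, not one of the paper's measures.

```python
def standard_negation(x):
    # the standard fuzzy negation FN(x) = 1 - x
    return 1.0 - x

def toy_distance(IS_A, IS_B):
    # toy normalized distance on (d, e, g) triples (illustration only)
    return sum(abs(a - b) for p, q in zip(IS_A, IS_B)
               for a, b in zip(p, q)) / (3 * len(IS_A))

def similarity_from_distance(dist):
    # Theorem 7: S(A, B) = FN(D(A, B))
    return lambda A, B: standard_negation(dist(A, B))

sim = similarity_from_distance(toy_distance)
A = [(0.2, 0.5, 0.1)]
B = [(0.6, 0.4, 0.3)]
print(sim(A, A))  # identical ISs have similarity 1.0
```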

According to the interpretation of the divisions of the neutrosophic space, to better describe the stability of a PNHFS, the entropy measure of a PNHFS is designed as follows:

**Definition 17.** *Suppose that <sup>X</sup> is a reference set, <sup>A</sup>* <sup>=</sup> {*x*, {*T*|*PT*}, {*I*|*P<sup>I</sup>* }, {*F*|*PF*}|*<sup>x</sup>* <sup>∈</sup> *<sup>X</sup>*} *is a PNHFS in X. Then, the complement of A is expressed by the following mathematical symbol:*

$$A^{c} = \{ \langle x, \{ F | P^F \}, \{ I | P^I \}, \{ T | P^T \} \rangle \mid x \in X \}.$$

*Obviously, A<sup>c</sup> is also a PNHFS.*

**Definition 18.** *Suppose that X is a finite reference set, A and B are two PNHFSs in X, and ISA, ISB are the corresponding ISs of A and B, respectively. Then, a function E* : *PNHFS*(*X*) → [0, 1] *is called an entropy measure when it holds the following four requirements:*


Since we are only concerned with the influence of *a*(*x*), *b*(*x*) and *c*(*x*) on the stability of an IS, the following theorems are introduced:

**Theorem 8.** *Suppose that X is a finite reference set, A is a PNHFS in X, and the corresponding IS of A is described by ISA. Then, the following formula:*

$$E(A) = MDT(MIU\_1(|d\_A(x)|), MIU\_2(|2e\_A(x) - 1|), MIU\_3(|g\_A(x)|)) \tag{16}$$

*is an entropy measure, in which MIU*1, *MIU*2, *MIU*<sup>3</sup> : [0, 1] → [0, 1] *are three monotonically increasing unary mappings with MIU* <sup>1</sup> ≥ 0*, MIU* <sup>2</sup> ≥ 0*, MIU* <sup>3</sup> ≥ 0*, MIU*1(0) = *MIU*2(0) = *MIU*3(0) = 0 *and MIU*1(1) = *MIU*2(1) = *MIU*3(1) = 1*. The function MDT* : [0, 1] <sup>3</sup> <sup>→</sup> [0, 1] *is a monotonically decreasing ternary mapping whose partial derivatives are less than or equal to zero, with MDT*(0, 0, 0) = 1 *and MDT*(1, 1, 1) = 0*.*

**Proof.** We show that *E*(*A*) holds all the conditions of Definition 18.

(1) Let *A* = {*x*, {1|1}, {0|1}, {0|1}|*x* ∈ *X*}, *A* = {*x*, {0|1}, {0|1}, {0|1}|*x* ∈ *X*} or *A* = {*x*, {0|1}, {0|1}, {1|1}|*x* ∈ *X*}, thus the corresponding ISs of *A* are shown:

$$IS\_A = (1, 1, 1) \text{ or } IS\_A = (-1, 1, 1).$$

Next, the entropy measure of *A* is calculated as follows:

$$E(A) = MDT(MIU\_1(1), MIU\_2(1), MIU\_3(1)) = MDT(1, 1, 1) = 0.$$

(2)

$$\begin{aligned} E(A) &= 1 \\ \Leftrightarrow\ & MDT(MIU\_1(|d\_A(x)|), MIU\_2(|2e\_A(x) - 1|), MIU\_3(|g\_A(x)|)) = 1 \\ \Leftrightarrow\ & MIU\_1(|d\_A(x)|) = 0, MIU\_2(|2e\_A(x) - 1|) = 0, MIU\_3(|g\_A(x)|) = 0 \\ \Leftrightarrow\ & |d\_A(x)| = 0, |2e\_A(x) - 1| = 0, |g\_A(x)| = 0 \\ \Leftrightarrow\ & t\_a = f\_c = 0.5, i\_b = 0.5, \text{ for all } a, b, c. \end{aligned}$$


$$\begin{aligned} S\_{PNHFS}(A,B) &= MIU(AIO(MIT(MDU\_1(\frac{|d\_B(x)|}{2}), MDU\_2(|e\_B(x)|), MDU\_3(|g\_B(x)|)))); \\ S\_{PNHFS}(A,C) &= MIU(AIO(MIT(MDU\_1(\frac{|d\_C(x)|}{2}), MDU\_2(|e\_C(x)|), MDU\_3(|g\_C(x)|)))). \end{aligned}$$

Since *SPNHFS*(*A*, *B*) ≤ *SPNHFS*(*A*, *C*) and every function involved is monotonic, we have |*dB*(*x*)|≥|*dC*(*x*)|, |*eB*(*x*)|≥|*eC*(*x*)| and |*gB*(*x*)|≥|*gC*(*x*)|. Finally, based on the requirements of Theorem 8, *E*(*B*) ≤ *E*(*C*).

Additionally, *DPNHFS*(*A*, *B*) = 1 − *SPNHFS*(*A*, *B*), *DPNHFS*(*A*, *C*) = 1 − *SPNHFS*(*A*, *C*). Thus, the proof based on the distance measure is omitted.

**Theorem 9.** *Suppose that X is a finite reference set, A is a PNHFS on X, and ISA is the corresponding IS about A. Then, Equation (17) is an entropy measure:*

$$E(A) = MIT(MDU\_1(|d\_A(x)|), MDU\_2(|2e\_A(x) - 1|), MDU\_3(|g\_A(x)|)).\tag{17}$$

*E*(*A*) *satisfies the following limits: MDU*1, *MDU*2, *MDU*<sup>3</sup> : [0, 1] → [0, 1] *are three monotonically decreasing unary mappings, and MDU*1(0) = *MDU*2(0) = *MDU*3(0) = 1*, MDU*1(1) = *MDU*2(1) = *MDU*3(1) = 0*. The mapping MIT* : [0, 1] <sup>3</sup> <sup>→</sup> [0, 1] *is a monotonically increasing ternary function whose partial derivatives are greater than or equal to* 0*, with MIT*(0, 0, 0) = 0 *and MIT*(1, 1, 1) = 1*.*

Based on Equations (16) and (17), different entropy measures can be established.
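One admissible instantiation of Equation (16) takes *MIU<sub>i</sub>*(*y*) = *y* and *MDT*(*y*1, *y*2, *y*3) = 1 − (*y*1 + *y*2 + *y*3)/3, which satisfies *MDT*(0, 0, 0) = 1 and *MDT*(1, 1, 1) = 0. The concrete choices below, including averaging over the reference set, are our own for illustration.

```python
def entropy16(IS_A):
    """Entropy of Eq. (16) with MIU_i(y) = y and
    MDT(y1, y2, y3) = 1 - (y1 + y2 + y3) / 3, averaged over X."""
    vals = []
    for d, e, g in IS_A:
        y1, y2, y3 = abs(d), abs(2 * e - 1), abs(g)
        vals.append(1 - (y1 + y2 + y3) / 3)
    return sum(vals) / len(vals)

print(entropy16([(0.0, 0.5, 0.0)]))  # maximally unstable element -> 1.0
print(entropy16([(1.0, 1.0, 1.0)]))  # crisp-like element -> 0.0
```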

Through the above analysis, we know that the entropy measure depicts the instability of a PNHFS, and that the distance and similarity measures play a vital role in its construction. Conversely, the entropy measure can help us to better comprehend the distance and similarity measures. Next, entropy measures are established from the distance measure and the similarity measure, respectively.

**Theorem 10.** *Suppose D is a distance measure obtained according to Definition 14 and B* = {*x*, {0.5|*P<sup>T</sup><sub>a</sub>*}, {0.5|*P<sup>I</sup><sub>b</sub>*}, {0.5|*P<sup>F</sup><sub>c</sub>*}|*x* ∈ *X*}*; then, E*(*A*) = *MDU*(*DPNHFS*(*A*, *B*)) *is an entropy measure. The mapping MDU* : [0, 1] → [0, 1] *is a decreasing unary function whose partial derivatives are less than or equal to* 0*, and MDU*(0) = 1*, MDU*(1) = 0*.*

**Theorem 11.** *Suppose S is a similarity measure obtained according to Definition 16 and B* = {*x*, {0.5|*P<sup>T</sup><sub>a</sub>*}, {0.5|*P<sup>I</sup><sub>b</sub>*}, {0.5|*P<sup>F</sup><sub>c</sub>*}|*x* ∈ *X*}*; then, E*(*A*) = *MIU*(*SPNHFS*(*A*, *B*)) *is an entropy measure. The mapping MIU* : [0, 1] → [0, 1] *is an increasing unary function whose partial derivatives are greater than or equal to* 0*, and MIU*(0) = 0*, MIU*(1) = 1*.*

The proofs of Theorem 10 and Theorem 11 are not unfolded here. Similarly, we can also obtain the following theorems, whose proofs are straightforward.

**Theorem 12.** *Supposing that DPNHFS is the distance measure of PNHFS A, SPNHFS is the similarity measure of PNHFS A, and B* = {*x*, {0.5|*P<sup>T</sup><sub>a</sub>*}, {0.5|*P<sup>I</sup><sub>b</sub>*}, {0.5|*P<sup>F</sup><sub>c</sub>*}|*x* ∈ *X*}*, then E*(*A*) = *MIB*(*MDU*(*DPNHFS*(*A*, *B*)), *MIU*(*SPNHFS*(*A*, *B*))) *is an entropy measure. MIB* : [0, 1]<sup>2</sup> → [0, 1] *is an increasing binary function whose partial derivatives are greater than* 0*, with MIB*(0, 0) = 0*, MIB*(1, 1) = 1*. The mappings MDU* : [0, 1] → [0, 1] *and MIU* : [0, 1] → [0, 1] *are a decreasing unary function and an increasing unary function, respectively. In addition, MDU*(0) = 1*, MDU*(1) = 0*, MIU*(0) = 0*, MIU*(1) = 1*.*

**Theorem 13.** *Supposing that DPNHFS is the distance measure of PNHFS A, SPNHFS is the similarity measure of PNHFS A, and B* = {*x*, {0.5|*P<sup>T</sup><sub>a</sub>*}, {0.5|*P<sup>I</sup><sub>b</sub>*}, {0.5|*P<sup>F</sup><sub>c</sub>*}|*x* ∈ *X*}*, then E*(*A*) = *MDB*(*MIU*(*DPNHFS*(*A*, *B*)), *MDU*(*SPNHFS*(*A*, *B*))) *is an entropy measure. MDB* : [0, 1]<sup>2</sup> → [0, 1] *is a decreasing binary function whose partial derivatives are less than* 0*, with MDB*(1, 1) = 0*, MDB*(0, 0) = 1*. The mappings MIU* : [0, 1] → [0, 1] *and MDU* : [0, 1] → [0, 1] *are an increasing unary function and a decreasing unary function, respectively. In addition, MIU*(0) = 0*, MIU*(1) = 1*, MDU*(0) = 1*, MDU*(1) = 0*.*
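Theorem 12 can be sketched with the simple choices *MDU*(*x*) = 1 − *x*, *MIU*(*x*) = *x* and *MIB*(*u*, *v*) = (*u* + *v*)/2, together with a toy distance/similarity pair; all of these concrete choices are ours, for illustration only. Here *B* is represented by the maximally unstable triple (0.5, 0.5, 0.5).

```python
def toy_D(IS_A, IS_B):
    # toy normalized distance on (d, e, g) triples (illustration only)
    return sum(abs(a - b) for p, q in zip(IS_A, IS_B)
               for a, b in zip(p, q)) / (3 * len(IS_A))

def toy_S(IS_A, IS_B):
    # dual toy similarity via Theorem 7 with FN(x) = 1 - x
    return 1.0 - toy_D(IS_A, IS_B)

def entropy_thm12(IS_A, IS_half):
    # Theorem 12: E(A) = MIB(MDU(D(A, B)), MIU(S(A, B)))
    u = 1.0 - toy_D(IS_A, IS_half)   # MDU(x) = 1 - x
    v = toy_S(IS_A, IS_half)         # MIU(x) = x
    return (u + v) / 2.0             # MIB(u, v) = (u + v) / 2

B_half = [(0.5, 0.5, 0.5)]
print(entropy_thm12(B_half, B_half))             # -> 1.0
print(entropy_thm12([(1.0, 0.0, 0.0)], B_half))  # -> 0.5
```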

#### **4. Method Analysis Based on Illustrations and Applications**

#### *4.1. Comparative Evaluations*

In real life, the investment problem is a common MCDM problem, and many researchers have proposed different types of distance and similarity measures of SVNHFSs to settle it. In this part, a well-known investment selection situation is introduced. The specific evaluations and precise data of the alternatives for the investment company's money-investment problem are listed in Table 1. Table 1 displays the decision matrix of four alternatives *A*1, *A*2, *A*3, *A*<sup>4</sup> and three evaluation criteria *C*1, *C*2, *C*3. The four alternatives are Real Estate, Oil Exploitation, Bank Financial and Western Restaurant, respectively. The three criteria are Market Prospect, Risk Assessment and Earning Cycle, respectively. The ideal element is *A*<sup>∗</sup> = {{1|1}, {0|0}, {0|0}}.

**Table 1.** Probabilistic neutrosophic hesitant fuzzy decision matrix of the investment problem.


**Note 1.** *The data in this investment selection problem in Table 1 are in the form of PNHFNs. The PNHFS is a generalization of the NHFS, as described in item (1) after Definition 6. Thus, the definition of a PNHFS can also be applied to an NHFS. For instance,* {0.5, 0.6}, {0.1}, {0.3} *is an NHFE. We can describe it as* {0.5|0.5, 0.6|0.5}, {0.1|1}, {0.3|1}*, which is a PNHFE.*
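The conversion described in Note 1, attaching equal probabilities to the hesitant values of an NHFE, can be sketched as follows; the tuple representation of value-probability pairs is our own.

```python
def nhfe_to_pnhfe(T, I, F):
    """Convert an NHFE to a PNHFE by giving each hesitant value
    the uniform probability 1/len(values), as in Note 1."""
    with_probs = lambda vals: [(v, 1.0 / len(vals)) for v in vals]
    return with_probs(T), with_probs(I), with_probs(F)

print(nhfe_to_pnhfe([0.5, 0.6], [0.1], [0.3]))
# -> ([(0.5, 0.5), (0.6, 0.5)], [(0.1, 1.0)], [(0.3, 1.0)])
```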

**Note 2.** *The results are listed in Table 2, and the optimal result is according to the minimum value among distance measures.*



The optimal selections are shown in Table 3. By comparing the conclusions given by the existing distance measures (Xu and Xia's method, Singh's method and Sahin's method), we found that the selections calculated with *D*<sup>φ=μ=ν=λ=1</sup>, *D*<sup>φ=μ=ν=2,λ=1/2</sup> and *D*<sup>φ=μ=1,ν=2,λ=1</sup> are the same as those of our method. However, the conclusions calculated by *D*<sup>φ=ν=1,μ=2,λ=1</sup> and *D*<sup>φ=2,ν=μ=1,λ=1</sup> are different from those of the existing methods.


**Table 3.** Relationships between existing methods and our method.

Thus, we deduce that the conclusions may change if we change the inner frames of the distance measure formula. Since the components |*dA*(*x*) − *dB*(*x*)|, |*eA*(*x*) − *eB*(*x*)| and |*gA*(*x*) − *gB*(*x*)| describe the decision makers' certain attitudes, knowledge backgrounds and hesitancy degrees, respectively, we believe that the new type of distance measures is effective and significant. If the difference between the decision makers' hesitancy degrees and background knowledge is relatively large, whether they reach the same conclusions has little reference value. However, when this difference is not too large, analyzing the reasons for the differences in their opinions is significant. Thus, it is important for making rational decisions.

#### *4.2. Streamlining the Talent Selection Process*

In many areas of life, the existing evaluation systems are incomplete, resulting in redundant evaluation processes and a waste of resources. This situation lowers the efficiency of evaluation for the entire decision-making department. Through the evaluation and analysis of the existing decision documents, it is found that unnecessary waste of human resources is widespread. For example, under the context of rapid growth in information and the trend of economic globalization, many companies with well-established evaluation systems are concentrated in large cities or large countries. In addition, the untimely exchange of information is an important cause of the waste of decision resources. In the process of multi-criteria decision-making, the final results become inaccurate when decision information is lost. Thus, in this situation, we explain the application by taking an investment company's choice of the best investment project as an example.

ABC Investment Co., Ltd. is a large investment consulting company whose decision-making level is in the leading position. Thus, policymakers prefer to choose ABC Investment Co., Ltd. instead of other relatively backward companies. As a result, large investment companies are overburdened while small investment departments are underused, which creates a waste of corporate resources. Ultimately, helping companies share information in decision-making systems to improve decision-making processes is critical for guiding enterprises to choose decision-making companies more rationally. Thus, when enterprises face risky decision-making problems, they should choose large decision-making departments to deal with them effectively, rather than blindly referring every decision-making problem to a large investment department.

With regard to those decision-making issues that need to be transferred to an upper-level department for processing, the judgment given by the decision maker is a critical step. Therefore, accurate judgments and the consensus between the decision makers at the corresponding level and the decision-making departments at higher levels provide a reference for the development of the enterprise. This can synthesize knowledge at different levels to improve decision-making efficiency.

Combining the above considerations, companies establish decision-making systems to improve decision-making efficiency, and it is necessary for them to maintain a database of their decision information. Some enterprises have established computer-network-based storage and retrieval systems for enterprise-centric collection and investigation of decision data. Effectively sharing decision data among decision-making departments benefits the development of companies. Therefore, to reduce excessive and unnecessary evaluations, PNHFNs are used to express the conclusions of decision makers for the MCDM problems faced by companies.

For instance, the formula {*T*|*P<sup>T</sup>*, *I*|*P<sup>I</sup>*, *F*|*P<sup>F</sup>*} is a decision maker's judgment on an MCDM problem, where *T* describes the decision maker's support degrees that the problem can be solved, *I* indicates the decision maker's indeterminacy degrees that the problem can be solved, and *F* expresses the decision maker's dissent degrees that the problem can be solved. The probabilities *P<sup>T</sup>*, *P<sup>I</sup>* and *P<sup>F</sup>* are the corresponding statistical values of *T*, *I* and *F*, respectively.

Next, we introduce an illustration that utilizes the new distance and similarity measures to refine the evaluation and reduce excessive re-evaluations. The illustration of an investment selection problem is introduced as follows:

*C* : {*A*1, *A*2, *A*3, *A*4} is a set of four investors.

*E* : {*E*1, *E*2} is a set of two stock consultants, from the higher- and lower-level companies, respectively.

*A* : {RE Network Technology Company (RE); DR Biotechnology Company (DR); EV Chemical Company (EV); and FL Technology Company (FL)} is the set of stocks that the investors need to consider.

Then, regarding the investment questions, the evaluation information of the two experts is described and listed in Tables 4 and 5.


**Table 4.** Probabilistic neutrosophic hesitant fuzzy decision matrix of *E*1.

**Table 5.** Probabilistic neutrosophic hesitant fuzzy decision matrix of *E*2.


First, the evaluation information is normalized; since space is limited, the results are omitted. According to the above-mentioned explanations, the distance and similarity measures between the two experts' evaluations are calculated by utilizing the following functions:

$$D(E_1, E_2) = \begin{cases} \log_3 \left( 1 + \frac{|d_A(x) - d_B(x)|^2}{4} + |e_A(x) - e_B(x)| + \frac{|g_A(x) - g_B(x)|}{2} \right), & \text{when } |e_A(x) - e_B(x)| \ge 0.15; \\ \log_3 \left( 1 + \frac{|d_A(x) - d_B(x)|}{2} + |e_A(x) - e_B(x)|^2 + \frac{|g_A(x) - g_B(x)|}{2} \right), & \text{when } |e_A(x) - e_B(x)| < 0.15. \end{cases} \tag{18}$$

$$S(E_1, E_2) = \begin{cases} \frac{1}{2} \left( \left( 1 - \frac{2 - |d_A(x) - d_B(x)|}{2} \right)^3 + \left( \frac{1}{2} \right)^{|e_A(x) - e_B(x)|} + \frac{|g_A(x) - g_B(x)|}{2} - 0.5 \right), & \text{when } |e_A(x) - e_B(x)| \ge 0.15; \\ \frac{1}{2} \left( \left( \frac{1}{2} \right)^{3|e_A(x) - e_B(x)|} - \frac{|d_A(x) - d_B(x)|}{2} + \frac{|g_A(x) - g_B(x)|}{2} + 0.5 \right), & \text{when } |e_A(x) - e_B(x)| < 0.15. \end{cases} \tag{19}$$

According to the investors' knowledge backgrounds, the threshold value is set to 0.15. If the difference between the stock consultants' evaluations is lower than 0.15, their evaluations are worth discussing and studying in depth, as the difference may be a key factor in the investment choice. Otherwise, the impact of the difference in conclusions is less important.
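The piecewise measures above can be sketched in Python as a minimal illustration, assuming the case-split forms of Equations (18) and (19); the arguments stand for the aggregate deviations |*d<sub>A</sub>*(*x*) − *d<sub>B</sub>*(*x*)|, |*e<sub>A</sub>*(*x*) − *e<sub>B</sub>*(*x*)| and |*g<sub>A</sub>*(*x*) − *g<sub>B</sub>*(*x*)|, which are assumed to be precomputed:

```python
import math

THRESHOLD = 0.15  # knowledge-background threshold from the text

def distance(dd, de, dg):
    """Piecewise distance of Eq. (18), as reconstructed here.
    dd, de, dg are |d_A - d_B|, |e_A - e_B|, |g_A - g_B| (precomputed)."""
    if de >= THRESHOLD:
        return math.log(1 + dd ** 2 / 4 + de + dg / 2, 3)
    return math.log(1 + dd / 2 + de ** 2 + dg / 2, 3)

def similarity(dd, de, dg):
    """Piecewise similarity of Eq. (19) under the same threshold split."""
    if de >= THRESHOLD:
        return 0.5 * ((1 - (2 - dd) / 2) ** 3 + 0.5 ** de + dg / 2 - 0.5)
    return 0.5 * (0.5 ** (3 * de) - dd / 2 + dg / 2 + 0.5)
```

Identical evaluations give a distance of 0, and the branch switch at 0.15 mirrors the case split of the two equations.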

Next, the results of the distance and similarity measures of every criterion for each investment problem are described by the corresponding matrices *D*(*E*1, *E*2) and *S*(*E*1, *E*2):

$$D(E_1, E_2) = \begin{pmatrix} 0.2442 & 0.0396 & 0.7380 & 0.0715\\ 0.7777 & 0.4676 & 0.5701 & 0.1101\\ 0.2693 & 0.3948 & 0.7351 & 0.7932\\ 0.2208 & 0.3892 & 0.5937 & 0.2866 \end{pmatrix}^T$$

$$S(E_1, E_2) = \begin{pmatrix} 0.6575 & 0.7257 & 0.6833 & 0.7455\\ 0.5023 & 0.6088 & 0.5272 & 0.6933\\ 0.6367 & 0.5848 & 0.6522 & 0.6589\\ 0.6806 & 0.6400 & 0.5485 & 0.6628 \end{pmatrix}.$$

Based on the above conclusions, in order to confirm which criterion needs further examination, the stock consultants should discuss the threshold value of the distance measure with the investors. The similarity results serve as a reference for the investor and stock consultant when considering further examinations. In this problem setting, 0.15 is the threshold value of the distance measure for every investor (the threshold is determined by a third-party data source and is not discussed here). On the basis of the distance threshold, the threshold value of the similarity measure is determined accordingly. Next, the matrices *D*(*E*1, *E*2) and *S*(*E*1, *E*2) are interpreted.

Observing the matrix *D*(*E*1, *E*2), investor *A*<sup>1</sup> needs to focus on *EV*; investor *A*<sup>2</sup> needs to focus on *RE*; investor *A*<sup>3</sup> needs to focus on *EV* and *FL*; and investor *A*<sup>4</sup> does not need to focus on *RE*, *DR* or *FL*.

Likewise, from the matrix *S*(*E*1, *E*2), for the investors *A*<sup>2</sup> and *A*<sup>4</sup> we obtain the same conclusions as those drawn from *D*(*E*1, *E*2). However, the similarity measure of *A*<sup>1</sup> is not the smallest, and neither are the similarity measures of *A*<sup>3</sup> for *EV* and *FL*: both *A*<sup>1</sup> and *A*<sup>3</sup> show larger distance and similarity measures at the same time. The reason is that the contexts differ: the distance and similarity measures of *A*<sup>1</sup> are computed by the first formulas in (18) and (19), while the conclusions for *A*<sup>3</sup> are computed by the second formulas in (18) and (19). Clearly, the different knowledge backgrounds of the stock consultants caused the results of *A*1, which are relatively less strict for criterion *EV*. Furthermore, the stock consultants need more in-depth communication to make judgments and suggestions about criteria *EV* and *FL* for *A*3.

However, in order to reach a decision faster for *A*3, the entropy measure can be utilized. For *A*3, the stock consultants provide the following normal probabilistic neutrosophic hesitant fuzzy information with respect to *EV*:

$$E\_1 = \langle \{ 0.6 | 1 \}, \{ 0.4 | 0.625, 0.5 | 0.375 \}, \{ 0.4 | 0.56, 0.6 | 0.44 \} \rangle,$$

$$E\_2 = \langle \{ 0.5 | 0.44, 0.6 | 0.56 \}, \{ 0.5 | 0.44, 0.7 | 0.56 \}, \{ 0.5 | 1 \} \rangle.$$
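The normality of these two PNHFNs (the probabilities attached to each membership component summing to one) can be checked with a small sketch; the (value, probability)-pair encoding used here is our own convenience:

```python
def is_normal_pnhfn(pnhfn, tol=1e-9):
    """Check that, within each membership component (truth, indeterminacy,
    falsity), the attached probabilities sum to 1, i.e. a *normal* PNHFN."""
    return all(abs(sum(p for _, p in comp) - 1.0) <= tol for comp in pnhfn)

# E_1 and E_2 for criterion EV, components listed as (value, probability) pairs
E1 = ([(0.6, 1.0)],
      [(0.4, 0.625), (0.5, 0.375)],
      [(0.4, 0.56), (0.6, 0.44)])
E2 = ([(0.5, 0.44), (0.6, 0.56)],
      [(0.5, 0.44), (0.7, 0.56)],
      [(0.5, 1.0)])
```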

By utilizing Equations (18) and (19) and the following entropy measure

$$E(A) = \frac{1 - D(A, B) + S(A, B)}{2} \tag{20}$$

to obtain the stock consultants' entropies for criterion *EV*, in which *B* = {⟨*x*, {0.5|*P*1}, {0.5|*P*2}, {0.5|*P*3}⟩ | *x* ∈ *X*}, we can get

$$E_1 = 0.5393; \qquad E_2 = 0.5977.$$

The bigger the entropy value, the easier it is for the stock consultant to change his/her mind. The investor should therefore contact stock consultant *E*<sup>2</sup> first and then *E*1. Suppose that stock consultant *E*<sup>2</sup> changes his mind first and his opinion moves closer to *E*1's; then it is unnecessary for investor *A*<sup>3</sup> to make an appointment with *E*1. Obviously, this method is more convenient, flexible and efficient, and it helps reduce unnecessary selective re-examinations. In addition, applying the entropy measure in MCDM situations is conducive to improving resource utilization.
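The consult-first rule above can be sketched as follows, assuming Equation (20) reads *E*(*A*) = (1 − *D*(*A*, *B*) + *S*(*A*, *B*))/2; the function name and the use of the reported entropy values are illustrative only:

```python
def entropy(d, s):
    """E(A) = (1 - D(A,B) + S(A,B)) / 2, with B the half-point
    reference set {0.5|P1, 0.5|P2, 0.5|P3} (Eq. (20))."""
    return (1 - d + s) / 2

# The paper reports E_1 = 0.5393 and E_2 = 0.5977 for criterion EV;
# the consultant with the larger entropy is approached first.
consultants = {"E1": 0.5393, "E2": 0.5977}
order = sorted(consultants, key=consultants.get, reverse=True)
```

Sorting by entropy in descending order puts `E2` before `E1`, matching the consult-first conclusion in the text.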

It is worth noting that the evaluation information is described by PNHFSs, which include both objective information and subjective degrees. Decision makers can select the optimal form of expression of PNHFS to handle practical situations.

#### **5. Conclusions and Future Research**

Based on the concept of PNHFS, the theory of NSs is enriched and its range of application is extended. Next, the different types of fuzziness related to the uncertain neutrosophic space are investigated. Through analysis and comparison, we see that the neutrosophic space is composed of an indeterminate subspace and a relatively certain subspace, and these two types of subspace should be distinguished; the connections among these subspaces are investigated as well. Addressing the drawbacks of existing distance and similarity measures, a new method is established to describe the measures of PNHFSs, and the basic axioms of these measures are satisfied. Next, the connections among the novel distance, similarity and entropy measures are studied and compared with previously proposed methods, which shows that our methods are more effective. Finally, against the background of investment selection, the novel distance, similarity and entropy measures are shown to reduce invalid evaluation processes. This is important for improving the evaluation efficiency of the entire selection system. The results show that the proposed methods are meaningful and, if applied, can solve more complicated problems, such as talent selection.

Furthermore, in Examples 3 and 5, the parameters *φ*, *μ*, *ν* and *λ* can depict the experts' individual preferences and knowledge backgrounds, and the more information that is expressed, the more accurate the parameters will be. Thus, how to determine the parameters in the measures is a significant problem. The practicality of the new measures has been explained by applying the distance, similarity and entropy measures to investment selection. The new distance (similarity) and entropy measures will be studied by integrating them with related backgrounds to promote other practical applications. Considering the privacy of information, the related applications of the new measures will help guide decision makers in evaluation. In the future, the novel measures will be investigated together with related methods in order to expand their scope of application, and new measures will be established based on the correlation and complexity of investors' information. Finally, the properties of entropy measures have not been studied in full, so the axioms of the entropy measure will receive more attention in the future. The basic operation laws of PNHFSs and IS have been omitted here, so research on this topic will be carried out further.

**Author Contributions:** All authors have contributed equally to this paper. S.S. and X.Z. initiated the investigation and organized the draft. S.S. put forward this idea and completed the preparation of the paper. S.S. collected existing research results on PNHFS. X.Z. revised and submitted the document.

**Funding:** This work was supported by the National Natural Science Foundation of China (Grant No. 61573240).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Neutrosophic Quadruple Vector Spaces and Their Properties**

#### **Vasantha Kandasamy W.B. 1, Ilanthenral Kandasamy 1,***∗***and Florentin Smarandache <sup>2</sup>**


Received: 03 July 2019; Accepted: 16 August 2019; Published: 19 August 2019

**Abstract:** In this paper, the authors introduce for the first time the concept of Neutrosophic Quadruple (NQ) vector spaces and Neutrosophic Quadruple linear algebras and study their properties. Most of the properties of vector spaces remain true in the case of Neutrosophic Quadruple vector spaces. Two vital observations are, first, that all NQ vector spaces are of dimension four, whether defined over the field of reals *R*, the field of complex numbers *C*, or the finite field *Zp* of characteristic *p*, *p* a prime; and, second, that all of them are distinct and none of them satisfies the classical property of finite-dimensional vector spaces. This problem is therefore proposed as a conjecture in the final section.

**Keywords:** Neutrosophic Quadruple (NQ); Neutrosophic Quadruple set; NQ vector spaces; NQ linear algebras; NQ basis; orthogonal or dual NQ vector subspaces

#### **1. Introduction**

In this section we give a brief literature survey of the new field of Neutrosophic Quadruples [1]. Neutrosophic triplet groups and modal logic Hedge algebras were introduced in [2,3]. Duplet semigroups, the neutrosophic homomorphism theorem, triplet loops and strong AG(1, 1) loops are defined and described in [4–6]. Neutrosophic triplet neutrosophic rings with application to mathematical modelling, the classical group of neutrosophic triplets on {*Z*2*p*, ×} and neutrosophic duplets in neutrosophic rings are developed and analyzed in [7–11]. Studies of algebraic structures of neutrosophic triplets and duplets, quasi neutrosophic triplet loops, extended triplet groups, AG-groupoids and NT-subgroups are carried out in [6,12–17]. Refined neutrosophic sets were developed in [18–21]. Neutrosophic algebraic structures in general were studied in [22–25]. The new notion of Neutrosophic Quadruples, which assigns a known part, is very interesting and innovative, and was introduced by Smarandache [1,26] in 2015. Several research papers on the algebraic structure of Neutrosophic Quadruples, such as groups, monoids, ideals, BCI-algebras, BCI-positive implicative ideals, hyperstructures, and BCK/BCI algebras [27–32], have recently been studied and analyzed. In this paper, the authors define the new notion of Neutrosophic Quadruple vector spaces (NQ vector spaces) and Neutrosophic Quadruple linear algebras (NQ linear algebras) and study a few related properties. This work can later be used to propose neutrosophic-based dynamical systems, in particular in the area of hyperchaos from cellular neural networks [33].

This paper is organized into five sections. The basic concepts needed to make this paper self-contained are given in Section 2. NQ vector spaces are introduced in Section 3, where NQ subspaces are also introduced and the notions of direct sum and NQ bases are analysed. It is shown that all NQ vector spaces are of dimension four, whether defined over *R* or *C* or *Zp*, *p* a prime. Section 4 defines and develops the properties of NQ linear algebras. The final section proposes a conjecture related to finite-dimensional vector spaces, which are always isomorphic to a finite direct product of the field over which the vector space is defined. Finally, we give future directions of research on this topic.

#### **2. Basic Concepts**

In this section basic concepts on vector spaces and a few of its properties and some NQ algebraic structures and their properties needed for this paper are given.

Throughout this paper, *R* denotes the field of reals, *C* denotes the field of complex numbers and *Zp* denotes the finite field of characteristic *p*, *p* a prime. *NQ* = {(*a*, *bT*, *cI*, *dF*)} denotes the Neutrosophic Quadruple, with *a*, *b*, *c*, *d* in *R* or *C* or *Zp*, where *T*, *I* and *F* have the usual neutrosophic-logic meaning of Truth, Indeterminate and False, respectively, and *a* denotes the known part [26].

For basic properties of vector spaces and linear algebras please refer [22].

**Definition 1** ([22])**.** *A vector space or a linear space V consists of the following;*

	- *(a) x* + *y* = *y* + *x (addition is commutative).*
	- *(b) x* + (*y* + *z*)=(*x* + *y*) + *z (addition is associative).*
	- *(c) There is a unique vector* 0 *in V such that x* + 0 = *x for all x* ∈ *V.*
	- *(d) For each vector x* ∈ *V there is a unique vector* −*x* ∈ *V such that x* + (−*x*) = 0.
	- *(e) A rule or operation called scalar multiplication that associates with each scalar c* ∈ *R or C or Zp and each vector x* ∈ *V a product, denoted by '.', of c and x, such that c*.*x* ∈ *V and:*
		- *i. c*.*x* = *x*.*c for every x* ∈ *V.*
		- *ii.* (*c* + *d*).*x* = *c*.*x* + *d*.*x*
		- *iii. c*.(*x* + *y*) = *c*.*x* + *c*.*y*
		- *iv. c*.(*d*.*x*)=(*c*.*d*)*x*;

*for all x*, *y* ∈ *V and c*, *d in R or C or Zp.*

*We can just say* (*V*, +) *is a vector space over a field R or C or Zp if* (*V*, +) *is an additive abelian group and V is compatible with the product by the scalars. If on V is defined a product such that* (*V*, ×) *is a monoid and c*(*x* × *y*)=(*cx*) × *y then V is a linear algebra over R or C or Zp [22].*

**Definition 2** ([22])**.** *Let V be a vector space over R (or C or Zp). A subspace of V is a subset W of V which is itself a vector space over R (or C or Zp) with the operations of addition and scalar multiplication as in V.*

**Definition 3.** *Let V be a vector space over R (or C or Zp). A subset B of V is said to be linearly dependent or simply dependent if there exist distinct vectors, x*1, *x*2, *x*3, ... , *xt* ∈ *B and scalars a*1, *a*2, *a*3, ... , *at* ∈ *R or C or Zp not all of which are zero such that a*1*x*<sup>1</sup> + *a*2*x*<sup>2</sup> + *a*3*x*<sup>3</sup> + ... + *atxt* = 0*. A set which is not linearly dependent is called independent or linearly independent. If B contains only finitely many vectors x*1, *x*2, *x*3,..., *xk we sometimes say x*1, *x*2, *x*3,..., *xk are dependent instead of saying B is dependent.*

The following facts are true [22].


For a vector space *V* over a field *R* or *C* or *Zp*, the basis for *V* is a linearly independent set of vectors in *V* which spans the space *V*. We say the vector space *V* over *R* or *C* or *Zp* is a direct sum of subspaces *W*1, *W*2, ... , *Wt* if and only if *V* = *W*<sup>1</sup> + *W*<sup>2</sup> + ... + *Wt* and *Wi* ∩ *Wj* is the zero vector for *i* ≠ *j* and 1 ≤ *i*, *j* ≤ *t*.

The other properties of vector spaces are given in book [22].

Now we proceed on to recall some essential definitions and properties of Neutrosophic Quadruples [26].

**Definition 4** ([26])**.** *The quadruple* (*a*, *bT*, *cI*, *dF*)*, where a*, *b*, *c*, *d* ∈ *R or C or Zp, with T*, *I*, *F as in classical Neutrosophic logic, with a the known part and* (*bT*, *cI*, *dF*) *the unknown part, denoted by NQ* = {(*a*, *bT*, *cI*, *dF*)|*a*, *b*, *c*, *d* ∈ *R or C or Zp*}*, is called the Neutrosophic set of quadruple numbers.*

The following operations are defined on NQ, for more refer [26]. For *x* = (*a*, *bT*, *cI*, *dF*) and *y* = (*e*, *f T*, *gI*, *hF*) in *NQ* [26] have defined

$$x + y = (a, bT, cI, dF) + (e, fT, gI, hF) = (a + e, (b + f)T, (c + g)I, (d + h)F)$$

and *x* − *y* = (*a* − *e*,(*b* − *f*)*T*,(*c* − *g*)*I*,(*d* − *h*)*F*)

are in NQ. For *x* = (*a*, *bT*, *cI*, *dF*) in NQ and a scalar *s* in *R* or *C* or *Zp*, we have *s*.*x* = *s*.(*a*, *bT*, *cI*, *dF*) = (*sa*, *sbT*, *scI*, *sdF*) ∈ *NQ*.

If *x* = 0 = (0, 0, 0, 0) in *V* usually termed as zero Neutrosophic Quadruple vector and for any scalar *s* in *R* or *C* or *Zp* we have *s*.0 = 0.

Further, (*s* + *t*)*x* = *sx* + *tx*, *s*(*tx*) = (*st*)*x* and *s*(*x* + *y*) = *sx* + *sy* for all *s*, *t* ∈ *R* or *C* or *Zp* and *x*, *y* ∈ *NQ*. Also, −*x* = (−*a*, −*bT*, −*cI*, −*dF*) is in NQ.
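The quadruple operations recalled above can be sketched as follows; the class name `NQ` and the coefficient-field encoding are our own convenience, a minimal illustration rather than part of the cited definitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NQ:
    """Neutrosophic Quadruple (a, bT, cI, dF), stored by its coefficients."""
    a: float
    b: float
    c: float
    d: float

    def __add__(self, other):
        # componentwise addition, as in x + y above
        return NQ(self.a + other.a, self.b + other.b,
                  self.c + other.c, self.d + other.d)

    def __neg__(self):
        # -x = (-a, -bT, -cI, -dF)
        return NQ(-self.a, -self.b, -self.c, -self.d)

    def scale(self, s):
        # scalar multiplication s.(a, bT, cI, dF) = (sa, sbT, scI, sdF)
        return NQ(s * self.a, s * self.b, s * self.c, s * self.d)

ZERO = NQ(0, 0, 0, 0)  # the zero Neutrosophic Quadruple vector
```

On samples, one can check the abelian-group behaviour of (*NQ*, +) asserted in Theorem 1.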

The main results proved in [26] which are used in this paper are mentioned below:

**Theorem 1** ([26])**.** (*NQ*, +) *is an abelian group.*

**Theorem 2** ([26])**.** (*NQ*, .) *is a monoid which is commutative.*

We mainly use only these two results in this paper, for more literature about Neutrosophic Quadruples refer [26].

#### **3. Neutrosophic Quadruple Vector Spaces and Their Properties**

In this section we proceed to define for the first time the new notion of Neutrosophic Quadruple vector spaces (NQ vector spaces), their NQ vector subspaces, NQ bases and direct sums of NQ vector subspaces. All these NQ vector spaces are defined over *R*, the field of reals, *C*, the field of complex numbers, or the finite field *Zp* of characteristic *p*, *p* a prime. These three NQ vector spaces are different in their properties, and we prove that all three NQ vector spaces defined over *R* or *C* or *Zp* are of dimension 4.

We mostly use the notations from [26]. They have proved (*NQ*, +) = {(*a*, *bT*, *cI*, *dF*)|*a*, *b*, *c*, *d* ∈ *R* or *C* or *Zp*, *p* a prime; +} is an infinite abelian group under addition.

We prove the following theorem.

**Theorem 3.** (*NQ*, +) = {(*a*, *bT*, *cI*, *dF*)|*a*, *b*, *c*, *d* ∈ *R or C or Zp; p a prime,* +} *be the Neutrosophic quadruple group. Then V* = (*NQ*, +, ◦) *is a Neutrosophic Quadruple vector space (NQ-vector space) over R or C or Zp, where '*◦*' is the special type of operation between V and R (or C or Zp) defined as scalar multiplication.*

**Proof.** To prove *V* is a Neutrosophic Quadruple vector space over *R* (or *C* or *Zp*, *p* a prime), we have to show that all the conditions given in Definition 1 of Section 2 are satisfied. In the first place, *R* or *C* or *Zp* is the field of scalars, and the elements of *V* are called vectors. It has been proved in [26] that *V* = (*NQ*, +) is an additive abelian group, which is the basic property required for *V* to be a vector space. Further, the quadruple is defined using *R* or *C* or *Zp*, *p* a prime (used in the mutually exclusive sense). Now, if *x* = (*a*, *bT*, *cI*, *dF*) is in *V* and *n* ∈ *R* (or *C* or *Zp*), then the scalar multiplication '◦' associates with the scalar *n* and the NQ vector *x* ∈ *V*,

*n* ◦ *x* = *n* ◦ (*a*, *bT*, *cI*, *dF*)=(*n* ◦ *a*, *n* ◦ *bT*, *n* ◦ *cI*, *n* ◦ *dF*) which is in *V*, called the product of *n* with *x* in such a way that


for all *m*, *n* ∈ *R* or *C* or *Zp* and *v*, *w* ∈ *V*.

0 = (0, 0, 0, 0) is the zero vector of *V* and for 0 in *R* or *C* or *Zp*; we have 0 ◦ *x* = 0 ◦ (*a*, *bT*, *cI*, *dF*) = (0, 0, 0, 0); ∀*x* ∈ *V*.

Clearly *V* = (*NQ*, +, ◦) is a vector space known as the NQ vector space over *R* or *C* or *Zp*.

Thus, as in the case of vector spaces, we can say that (*NQ*, +) is a NQ vector space with the special scalar multiplication ◦.

We now proceed on to define the concept of linear dependence, linear independence and basis of NQ vector spaces.

**Definition 5.** *Let V* = (*NQ*, +) *be a NQ vector space over R (or C or Zp). A subset L of V is said to be NQ linearly dependent or simply dependent if there exist distinct vectors a*1, *a*2, ... , *ak* ∈ *L and scalars d*1, *d*2, ... , *dk* ∈ *R (or C or Zp) not all zero such that d*<sup>1</sup> ◦ *a*<sup>1</sup> + *d*<sup>2</sup> ◦ *a*<sup>2</sup> + ... + *dk* ◦ *ak* = 0*. We say the set of vectors a*1, *a*2,..., *ak is NQ linearly independent if it is not NQ linearly dependent.*

We provide an example of this situation.

**Example 4.** *Let V* = (*NQ*, +) *be a NQ vector space over R. Let x* = (3, −4*T*, 5*I*, 2*F*), *y* = (−2, 3*T*, −2*I*, −2*F*) *and z* = (−1, *T*, −3*I*, 0) *be in V. We see* 1 ◦ *x* + 1 ◦ *y* + 1 ◦ *z* = (0, 0, 0, 0), *so x*, *y and z are NQ linearly dependent. Let x* = (5, 0, 0, 2*F*) *and y* = (0, 5*T*, −3*I*, 0) *be in V. We cannot find nonzero a*, *b* ∈ *R such that a* ◦ *x* + *b* ◦ *y* = (0, 0, 0, 0)*. Indeed, if a* ◦ *x* + *b* ◦ *y* = (0, 0, 0, 0)*, then* 5*a* = 0 *forces a* = 0*,* 5*b* = 0 *forces b* = 0*,* −3*b* = 0 *forces b* = 0 *and* 2*a* = 0 *forces a* = 0*. Thus the equations are consistent only when a* = *b* = 0*, so x and y are NQ linearly independent over R.*

The following properties are true in case of all vector spaces hence true in case of NQ vector spaces also.


We now proceed on to define Neutrosophic Quadruple basis (NQ basis) for *V* = (*NQ*, +), Neutrosophic Quadruple vector space over *R* or *C* or *Zp* (or used in the mutually exclusive sense).

**Definition 6.** *Let V* = (*NQ*, +) *vector space over R (or C or Zp). We say a subset L of V spans V if and only if every vector in V can be got as a linear combination of elements from L and scalars from R (or C or Zp). That is if a*1, *a*2, ... , *an are n elements in L; then v* = *d*<sup>1</sup> ◦ *a*<sup>1</sup> + *d*<sup>2</sup> ◦ *a*<sup>2</sup> + ... + *dn* ◦ *an, is the NQ linear combination of vectors of L; where d*1, *d*2,..., *dn are in R or C or Zp and not all these scalars are zero.*

*A Neutrosophic Quadruple basis for V* = (*NQ*, +) *is a linearly independent set of vectors in V which spans V. That is, a set of vectors B in V is a basis of V if B is a linearly independent set and spans V over R or C or Zp.*

We say *V* is finite dimensional if the basis of *V* is a finite set; otherwise, *V* is infinite dimensional.

**Theorem 5.** *Let V* = (*NQ*, +) *be the Neutrosophic Quadruple vector space over R (or C or Zp). V is a finite dimensional NQ vector space over R (or C or Zp) and dimension of these NQ vector spaces over R(or C or Zp) are always four.*

**Proof.** Let *V* = (*NQ*, +) = {(*a*, *bT*, *cI*, *dF*)|*a*, *b*, *c*, *d* ∈ *R* (or *C* or *Zp*), +} be the collection of all neutrosophic quadruples of the Neutrosophic Quadruple vector space over *R* (or *C* or *Zp*). To prove that the dimension of *V* over *R* is four, it is sufficient to prove that *V* has four linearly independent vectors which span *V*. Take the set *B* = {(1, 0, 0, 0),(0, *T*, 0, 0),(0, 0, *I*, 0),(0, 0, 0, *F*)} contained in *V*; to show *B* is independent and spans *V*, it is enough to prove that any *v* = (*a*, *bT*, *cI*, *dF*) ∈ *V* can be represented uniquely as a linear combination of elements from *B* with scalars from *R* (or *C* or *Zp*). Now *v* = (*a*, *bT*, *cI*, *dF*) = *a* ◦ (1, 0, 0, 0) + *b* ◦ (0, *T*, 0, 0) + *c* ◦ (0, 0, *I*, 0) + *d* ◦ (0, 0, 0, *F*) for the scalars *a*, *b*, *c*, *d* ∈ *R* (or *C* or *Zp*). Hence the elements of *V* are uniquely represented as linear combinations of the vectors of *B*; further, *B* is a set of linearly independent elements, so *B* is a basis of *V*, and since *B* is finite, *V* is finite dimensional over *R* (or *C* or *Zp*). As the order of *B* is four, the dimension of every NQ vector space *V* over *R* (or *C* or *Zp*) is four. Hence the theorem.
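The unique decomposition in the special standard basis can be illustrated numerically; the coefficient-tuple encoding of quadruples (and of the basis elements) is our own convenience:

```python
# Special standard NQ basis, written as coefficient tuples:
# (1,0,0,0), (0,T,0,0), (0,0,I,0), (0,0,0,F)
BASIS = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

def decompose(v):
    """Coordinates of v = (a, bT, cI, dF) in the special standard basis
    are exactly its coefficients (a, b, c, d)."""
    return tuple(v)

def reconstruct(coords):
    """Rebuild v as the linear combination sum_k coords[k] * BASIS[k]."""
    return tuple(sum(c * e[i] for c, e in zip(coords, BASIS))
                 for i in range(4))
```

The round trip `reconstruct(decompose(v)) == v` mirrors the unique-representation argument in the proof above.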

We call the NQ basis *B* as the special standard NQ basis of *V*.

**Definition 7.** *Let V* = (*NQ*, +) *be a NQ vector space over R (or C or Zp). A subset W of V is said to be Neutrosophic Quadruple vector subspace of V if W itself is a Neutrosophic Quadruple vector space over R (or C or Zp).*

We will illustrate this situation by examples.

**Example 6.** *Let V* = {*NQ*, +} *be a NQ vector space over R. W* = {(*a*, *bT*, 0, 0)|*a*, *b* ∈ *R*} *is a subset of V which is a NQ vector subspace of V over R. U* = {(0, 0, *cI*, *dF*)|*c*, *d* ∈ *R*} *is again a vector subspace of V and is different from W.*

*We observe that the only common element between W and U is the zero quadruple vector* (0, 0, 0, 0)*.*

*Further, a dot product or inner product can be defined on the elements of V. For x* = (*a*, *bT*, *cI*, *dF*) *and y* = (*e*, *f T*, *gI*, *hF*) ∈ *V, the product, denoted x* • *y, is x* • *y* = (*a* • *e*, *bT* • *f T*, *cI* • *gI*, *dF* • *hF*)*, and x* • *y is in V. If x* • *y* = (0, 0, 0, 0) *for some x*, *y* ∈ *V, then we say x is orthogonal (or dual) to y and vice versa. In fact x* • *y* = *y* • *x*; ∀*x*, *y* ∈ *V. We say two NQ vector subspaces W and U are orthogonal (or dual subspaces) if for every x* ∈ *W and every y* ∈ *U, x* • *y* = (0, 0, 0, 0)*; that is, two NQ vector subspaces are orthogonal if and only if the dot product of every vector in W with every vector in U is the zero vector.*

{(0, 0, 0, 0)} *is the zero vector subspace of V. Every NQ vector subspace of V trivial or nontrivial is orthogonal with the zero vector subspace* {(0, 0, 0, 0)} *of V. V the NQ vector space is orthogonal with only the zero vector subspace of V, and with no other vector subspace of V. W orthogonal U = W* • *U =* {*w* • *u*|*w* ∈ *W and u* ∈ *U*} *=* {(0, 0, 0, 0)}*; we call the pair of NQ subspaces as orthogonal or dual NQ subspaces of V.*
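The orthogonality of *W* and *U* from Example 6 can be sketched with the componentwise dot product described above; the coefficient-tuple encoding (*a*, *b*, *c*, *d*) for (*a*, *bT*, *cI*, *dF*) is our own convenience:

```python
def dot(x, y):
    """Componentwise NQ dot product on coefficient tuples:
    (a*e, (b*f)T, (c*g)I, (d*h)F)."""
    return tuple(xi * yi for xi, yi in zip(x, y))

def in_W(v):
    # W = {(a, bT, 0, 0)}: the last two coefficients vanish
    return v[2] == 0 and v[3] == 0

def in_U(v):
    # U = {(0, 0, cI, dF)}: the first two coefficients vanish
    return v[0] == 0 and v[1] == 0
```

For any `w` in *W* and `u` in *U*, `dot(w, u)` is the zero quadruple, so the pair is orthogonal (dual).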

**Definition 8.** *Let V* = (*NQ*, +) *be a Neutrosophic Quadruple vector space over R (or C or Zp); W*1, *W*2, ... , *Wn be n distinct NQ vector subspaces of V. We say V* = *W*<sup>1</sup> ⊕ *W*<sup>2</sup> ⊕ ... ⊕ *Wn is a direct sum of NQ vector subspaces if and only if the following conditions are true;*


First, we record that in the case of all NQ vector spaces over *R* (or *C* or *Zp*), the value of *n* given in the definition can be at most four, since the dimension of every NQ vector space is four. Secondly, the minimum of *n* is two, as is true for all vector spaces of any finite dimension. Finally, we wish to prove that not all NQ vector subspaces are orthogonal and that there are only finitely many nontrivial NQ vector subspaces for any NQ vector space over *R* (or *C* or *Zp*).

We prove as theorem a few of the properties.

**Theorem 7.** *Let V* = (*NQ*, +) *be a NQ vector space over R (or C or Zp). V has only finite number of NQ vector subspaces.*

**Proof.** We see that in the case of NQ vector spaces over *R* (or *C* or *Zp*) the dimension is four, and the special standard NQ basis for *V* is *B* = {(1, 0, 0, 0),(0, *T*, 0, 0),(0, 0, *I*, 0),(0, 0, 0, *F*)}. So any nontrivial subspace of *V* is of dimension less than four, that is, 1 or 2 or 3. Clearly there are vector subspaces of dimension one given by *W*<sup>1</sup> = (1, 0, 0, 0), *W*<sup>2</sup> = (0, *T*, 0, 0), *W*<sup>3</sup> = (0, 0, *I*, 0), *W*<sup>4</sup> = (0, 0, 0, *F*), *W*<sup>5</sup> = (1, *T*, 0, 0), *W*<sup>6</sup> = (1, 0, *I*, 0), *W*<sup>7</sup> = (1, 0, 0, *F*), *W*<sup>8</sup> = (0, *T*, *I*, 0), *W*<sup>9</sup> = (0, *T*, 0, *F*), *W*<sup>10</sup> = (0, 0, *I*, *F*), *W*<sup>11</sup> = (1, *T*, *I*, 0), *W*<sup>12</sup> = (1, *T*, 0, *F*), *W*<sup>13</sup> = (1, 0, *I*, *F*), *W*<sup>14</sup> = (0, *T*, *I*, *F*) and *W*<sup>15</sup> = (1, *T*, *I*, *F*). Some of the two-dimensional vector subspaces are *U*<sup>1</sup> = (1, 0, 0, 0),(0, *T*, 0, 0), *U*<sup>2</sup> = (1, 0, 0, 0),(0, 0, *I*, 0),..., *U*<sup>105</sup> = (0, *T*, *I*, *F*),(1, *T*, *I*, *F*);

in fact, there are 105 NQ vector subspaces of dimension two. Further, there are 1365 NQ vector subspaces of dimension three. Thus there are 1485 nontrivial NQ vector subspaces in any NQ vector space *V* = (*NQ*, +) over *R* (or *C* or *Zp*). We have shown that there are four NQ vector subspaces of dimension three, all of which are hyper subspaces of *V*; of course, we are not counting as hyper subspaces the other dimension-three subspaces generated by vectors of the form *M*<sup>1</sup> = {(1, *T*, 0, 0),(0, 0, *I*, 0),(0, 0, 0, *F*)} or *M*<sup>2</sup> = {(1, 0, 0, *F*),(0, 0, *I*, 0),(0, *T*, 0, 0)}.

The three-dimensional NQ vector subspace generated only by {(0, *T*, 0, 0),(0, 0, *I*, 0),(0, 0, 0, *F*)} is defined as the special pseudo Single Valued Neutrosophic hyper NQ vector subspace of *V* [22,24].

#### **4. Neutrosophic Quadruple Linear Algebras over** *R* **or** *C* **or** *Zp*

In this section we take the basic concepts defined in [26]: (*NQ*, +) is the Neutrosophic Quadruple additive abelian group and (*NQ*, .) the commutative monoid with (1, 0, 0, 0) as the identity with respect to '.', and for *x* = (*a*, *bT*, *cI*, *dF*) and *y* = (*e*, *f T*, *gI*, *hF*) in NQ, [26] defines *x*.*y* = (*ae*,(*a f* + *be* + *b f*)*T*,(*ag* + *bg* + *ce* + *c f* + *cg*)*I*,(*ah* + *bh* + *ch* + *de* + *d f* + *dg* + *dh*)*F*).
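The NQ product just recalled can be sketched directly from the formula; the coefficient-tuple encoding is our own, and on samples it lets one check the commutativity and identity properties of Theorem 2:

```python
ONE = (1, 0, 0, 0)  # identity of the monoid (NQ, .)

def nq_mul(x, y):
    """NQ product from [26], written on coefficient tuples (a, b, c, d)
    standing for (a, bT, cI, dF)."""
    a, b, c, d = x
    e, f, g, h = y
    return (a * e,
            a * f + b * e + b * f,
            a * g + b * g + c * e + c * f + c * g,
            a * h + b * h + c * h + d * e + d * f + d * g + d * h)
```

The T, I and F coefficients are symmetric in the two factors, which is why the product is commutative.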

**Theorem 8.** *V* = (*NQ*, +, .) *is a Neutrosophic Quadruple linear algebra (NQ linear algebra) over R (or C or Zp).*

**Proof.** To prove that *V* is a NQ linear algebra we must show the following: (*NQ*, +) is an abelian group under the addition given in [26], and it is proved in Theorem 3 that (*NQ*, +) is a vector space. It therefore suffices to show that (*NQ*, .) is a monoid under the product '.', which is proved in [26], and that *d* ◦ (*x*.*y*) = (*d* ◦ *x*).*y* for *d* ∈ *R* (or *C* or *Zp*) and *x*, *y* ∈ *V*, which holds since *x*.*y* is in *V*. Thus (*V*, +, .) is a NQ linear algebra over *R* (or *C* or *Zp*).

**Definition 9.** *Let V* = (*NQ*, +, .) *be a NQ linear algebra over R (or C or Zp). Let W be a nonempty proper subset of V, we say W is a NQ sublinear algebra of V over R (or C or Zp), if W itself is a linear algebra over R (or C or Zp).*

We provide some examples of them.

**Example 9.** *Let V* = (*NQ*, +, .) *be a linear algebra over the field Z*7*. W* = {(1, 0, 0, 0)}*, generated under* +*,* . *and scalar multiplication '*◦*' by elements of Z*<sup>7</sup>*, is a sublinear algebra of order 7, and the dimension of W over Z*<sup>7</sup> *is one. Similarly, U, generated by the two vectors* (1, *T*, 0, 0) *and* (0, 0, *I*, 0)*, is a sublinear algebra of dimension two. We show how the product of x* = (3, 4*T*, *I*, 5*F*) *and y* = (2, 3*T*, 4*I*, *F*) *in V is carried out: x*.*y* = (6, *T*, 2*I*, 2*F*)*, which is in V.*

As in the case of NQ vector spaces, we can derive all properties of NQ linear algebras; further, as with NQ vector spaces, the dimension of all these NQ linear algebras is four.

In the following section we propose some open conjectures and the future work to be carried out in this direction.

#### **5. Conclusions and Open Conjectures**

In this paper, for the first time, we define the notion of NQ vector spaces and NQ linear algebras. All three NQ vector spaces are of dimension four. The NQ vector space *V* over *R* is different from the NQ vector space *W* over *C*; both have an infinite number of vectors but are of dimension four, while *U*, the NQ vector space over *Zp*, has only *p*<sup>4</sup> elements and is also of dimension four.

We know the classical result on vector spaces, which states that "a vector space *V* of dimension *n* (*n* a finite integer) defined over the field *F* is isomorphic to *F* × *F* × ... × *F* (*n* times)"; in view of this we propose the following conjectures:


Finally, we will develop the new notion of NQ algebraic codes and analyse them in future research. In our opinion, a new type of NQ algebraic code can certainly be defined with appropriate modifications. We will also develop the notion of neutrosophic quadruples in which the unknown part is a neutrosophic triplet, or a modified form of neutrosophic duplets, which will be taken up for further study.

**Author Contributions:** The contributions of authors are roughly equal.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:

NQ Neutrosophic Quadruple

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Classification of the State of Manufacturing Process under Indeterminacy**

#### **Muhammad Aslam \* and Osama Hasan Arif**

Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia; oarif@kau.edu.sa

**\*** Correspondence: magmuhammad@kau.edu.sa or aslam\_ravian@hotmail.com; Tel.: +966-593-329-841

Received: 27 August 2019; Accepted: 15 September 2019; Published: 19 September 2019

**Abstract:** In this paper, the diagnosis of the manufacturing process under the indeterminate environment is presented. The similarity measure index was used to find the probabilities that the process is in control or out of control. The average run length (ARL) was also computed for various values of the specified parameters. An example from a juice company is considered under the indeterminate environment. From this study, it is concluded that the proposed diagnosis scheme under neutrosophic statistics is quite simple and effective for assessing the current state of the manufacturing process under uncertainty. The use of the proposed method under the uncertainty environment in the juice company may eliminate non-conforming items and thereby increase the profit of the company.

**Keywords:** similarity index; diagnosis; process; indeterminacy; neutrosophic statistics

#### **1. Introduction**

Controlling non-conforming products in industry is an important task for industrial engineers. Their mission is to minimize non-conforming products, which can be achieved only if problems in the manufacturing process are tackled immediately. Control charts are essential tools in industry for monitoring the manufacturing process. These tools are used to indicate the state of the process, and a timely indication about the state of the process leads to high product quality. Epprecht et al. [1] and Chiu and Kuo [2] proposed charts for monitoring one, and more than one, non-conforming product, respectively. Hsu [3] designed a variable chart using improved sampling schemes. Ho and Quinino [4] proposed an attribute chart to control the variation in the process. Aslam et al. [5] and Aslam et al. [6] worked on time-truncated charts for the Birnbaum-Saunders distribution and the Weibull distribution, respectively. Jeyadurga et al. [7] worked on an attribute chart under truncated life tests.

To analyze the vague and fuzzy data, the fuzzy logic is applied. The fuzzy logic is applied to analyze the data when the experimenters are unsure about the exact values of the parameters. Therefore, the monitoring of the process having fuzzy data is done using the fuzzy-based control charts. Afshari and Gildeh [8] and Ercan Teksen and Anagun [9] worked on fuzzy attribute and variable charts, respectively. Fadaei and Pooya [10] worked on a fuzzy operating characteristic curve. For more details, the reader may refer to Jamkhaneh et al. [11] who discussed the rectifying fuzzy single sampling plan. Senturk and Erginel [12] studied variable control charts using fuzzy approach. Ercan Teksen and Anagun [9] worked on the fuzzy X-bar and R-charts. More details on fuzzy logic can be seen in Lee and Kim [13] and Grzegorzewski [14].

Fuzzy and imprecise data usually have indeterminate values. Fuzzy and vague data consider only the truth and falsity memberships, whereas neutrosophic logic deals with the truth, falsity and indeterminacy memberships. Therefore, neutrosophic logic is useful for analyzing data having indeterminacy. Smarandache [15] introduced neutrosophic statistics, which analyzes data when indeterminacy is present. Aslam [16] and Aslam and Arif [17] introduced neutrosophic statistics in the area of quality control. More details about neutrosophic logic can be seen in references [18–23].

The similarity measure index (SMI) has been widely used in a variety of fields for classification purposes. In the medical sciences, this index is used to classify whether or not patients have a particular disease under indeterminacy; see De and Mishra [24]. To the best of the authors' knowledge, there is no work in the literature on process monitoring using SMI. In this paper, a method to classify the state of the process using SMI is introduced, and its operational process is given. The proposed classification method is simpler in application than the existing methods under classical statistics. It is expected that the proposed diagnosis method for the manufacturing process under the indeterminate environment will be effective, adequate and easy compared to the existing control charts under classical statistics. In Section 2, the SMI index is introduced into process control. A comparative study and an application are given in Sections 3 and 4, respectively. Some concluding remarks are given in the last section.

#### **2. The Proposed Chart Based on SMI**

Suppose that *ZN* = *sN* + *uNI*; *ZN* ∈ [*ZL*,*ZU*] is a neutrosophic number having a determined part *sN* and an indeterminate part *uNI*, where *I* ∈ [*IL*, *IU*] denotes the indeterminacy. Note here that *ZN* ∈ [*ZL*,*ZU*] reduces to the determined number *ZN* = *sN* when no indeterminacy is found. In the presence of indeterminacy, practitioners cannot record observations of the variable of interest in a precise, determined form. Monitoring data having neutrosophic numbers using classical statistics, as discussed in reference [25], may mislead decision-makers regarding the state of the process. For example, the practitioners may decide that the process is in the in-control state using classical statistics when, in fact, some observations lie in the indeterminacy interval. More details on this issue can be seen in reference [26]. Suppose that *tU*, *fU* and *IU* present the probabilities of a non-defective, a defective and an indeterminate item, respectively. For the classification of the state of the process, let *t* = 1 and *f* = 0 show that the process is in control. Therefore, a value of SMI close to 1 indicates that the process is in control, and values away from 1 show that the process is out of control. The SMI from De and Mishra [24] is given by:

$$\text{SMI} = \sqrt{\left(1 - \frac{|(t\_L - t\_U) - (I\_L - I\_U) - (f\_L - f\_U)|}{3}\right)} \left(1 - |(t\_L - t\_U) + (I\_L - I\_U) + (f\_L - f\_U)|\right) \tag{1}$$

Note here 0 ≤ *tL*,*IL*, *fL* ≤ 1, 0 ≤ *tU*,*IU*, *fU* ≤ 1, 0 ≤ *tL* + *fL* ≤ 1, 0 ≤ *tU* + *fU* ≤ 1, *tL* + *IL* + *fL* ≤ 2, *tU* + *IU* + *fU* ≤ 2.

Based on SMI, the following classification procedure is proposed to diagnose the state of the manufacturing process.


The operational process of the proposed method is also given with the help of Figure 1.

**Figure 1.** The operational process of the proposed method.

Note here that, unlike the traditional control charts under classical statistics, the proposed chart using SMI is independent of the control limits and the control limit coefficients. The proposed chart reduces to the traditional control charts under classical statistics if no indeterminacy is found. Suppose that the probability of the process being in control is determined from SMI. Let *SMI* = *Pin*; then *Pin* for the process is given by

$$P\_{in} = \sqrt{\left(1 - \frac{|(t\_L - t\_U) - (I\_L - I\_U) - (f\_L - f\_U)|}{3}\right)} \left(1 - |(t\_L - t\_U) + (I\_L - I\_U) + (f\_L - f\_U)|\right) \tag{2}$$

The average run length (ARL) is used to see when, on average, the process is expected to be out-of-control. The ARL under indeterminacy is given by:

$$ARL = \frac{1}{1 - \left[\sqrt{\left(1 - \frac{|(t\_L - t\_U) - (I\_L - I\_U) - (f\_L - f\_U)|}{3}\right)} \left(1 - |(t\_L - t\_U) + (I\_L - I\_U) + (f\_L - f\_U)|\right)\right]}\tag{3}$$

The values of *tU*, *fU* and *IU* for various values of *n* are given in Tables 1–3. Tables 1 and 2 are given when *n* = 25 and *n* = 50, respectively. Table 3 is presented for a variable sample size. In Table 4, the values of *Pin* and ARL are given for the parameters given in Tables 1–3. The classification of the state of the process based on SMI is also presented in Table 4. The process is said to be in the in-control (IN) state if SMI ≥ 0.95 and in the out-of-control (OOC) state if SMI < 0.95. No specific trend is noted in the ARL values. The following algorithm is used to classify the state of the process using the proposed method.
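The classification above can be sketched numerically. In this sketch the scoping of the square root follows Equation (1) as printed, ARL = 1/(1 − *Pin*) is an assumption consistent with the magnitudes of the reported ARL values, and the function names are illustrative:

```python
import math

def smi(tL, tU, IL, IU, fL, fU):
    # Similarity measure index, Eq. (1)
    d_minus = abs((tL - tU) - (IL - IU) - (fL - fU))
    d_plus = abs((tL - tU) + (IL - IU) + (fL - fU))
    return math.sqrt(1 - d_minus / 3) * (1 - d_plus)

def classify(smi_value, threshold=0.95):
    # IN if SMI >= 0.95, OOC otherwise
    return "IN" if smi_value >= threshold else "OOC"

def arl(p_in):
    # assumed form: ARL = 1 / (1 - P_in)
    return 1.0 / (1.0 - p_in)
```

When the lower and upper bounds coincide (no indeterminacy), both absolute differences vanish and the SMI equals 1, so the process is classified as IN.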



**Table 1.** Neutrosophic data when *n* = 25.

**Table 2.** Neutrosophic data when *n* = 50.



**Table 3.** Neutrosophic data with variable sample size.

**Table 4.** Classification of the process.


Note: IN = in-control and OOC = out-of-control.

#### **3. Comparative Study**

In this section, a comparison of the effectiveness of the proposed method over the control charts under classical statistics reported in reference [25] is given. According to Aslam et al. [26], a method which deals with indeterminacy is said to be more effective than a method which provides only determined values. The proposed method reduces to the traditional method under classical statistics if no indeterminacy is recorded. From reference [25], it is noted that the control chart under classical statistics does not consider the measure of indeterminacy, which limits its use in an uncertainty environment. The performance of the existing control chart depends on the control limit coefficient, which is determined through a complicated simulation process. On the other hand, the proposed method considers the measure of indeterminacy when evaluating the performance of the control chart, is independent of the control limit coefficient, and can be applied easily to classify the state of the process. The values of ARL from the proposed method and from the method under classical statistics discussed by Montgomery [25] are shown in Table 5 when *n* = 25 and *D* = 2. It is well known that smaller values of ARL mean a more efficient control chart [25]. From Table 5, it can be seen that the proposed method provides smaller values of ARL than the existing method, which means the proposed chart can detect a shift in the process earlier than the method under classical statistics. For example, when *n* = 25 and *D* = 2, the value of ARL of the existing method from Table 5 is 37, whereas the proposed method provides the smaller ARL values 24, 14, 18 and 12. From this comparison, the industrial engineers can expect the process to be out-of-control at the 37th sample by using the existing method, but at the 12th sample (sample number 22) using the proposed method. Therefore, the authors conclude that the proposed method is more effective than the existing charts, as it considers the measure of indeterminacy and indicates earlier when the process is OOC.


**Table 5.** The comparison of the proposed method with existing method when *n* = 25 and *D* = 2.

#### **4. Application**

In this section, the application of the proposed method in an orange juice company is discussed. According to Montgomery [25], "Frozen orange juice concentrate is packed in 6-oz cardboard cans. These cans are formed on a machine by spinning them from cardboard stock and attaching a metal bottom panel". A sample of 50 juice cans was inspected. Some cans were found to be leaking and some were labeled as good, while for some cans the industrial engineer is indeterminate about whether the juice product should be labeled as conforming or non-conforming. Therefore, classical statistics cannot be applied to monitor the process in the presence of indeterminacy. The data for *n* = 50 are shown in Table 2. The classification of the state of the process for the juice cans is shown in Table 4. From Table 4, it is noted that the first subgroups show that the process is in the IN state, whereas the 5th subgroup shows that the process is OOC and the industrial engineer should take action to bring the process back to the IN state. Overall, eight samples are in the OOC state. From this study, it is concluded that the use of the proposed method to classify the state of the process is quite easy, effective and adequate for application under an uncertainty environment.

#### **5. Conclusions and Remarks**

In this paper, the diagnosis of the manufacturing process under the indeterminate environment was presented. The similarity measure index was used to find the probabilities that the process is in control or out of control. The average run length (ARL) was also computed for various values of the specified parameters. An industrial example under the indeterminate environment was given to explain the state of the process. From this study, it is concluded that the proposed diagnosis scheme under neutrosophic statistics is quite simple and effective for assessing the current state of the manufacturing process under uncertainty. Practitioners can apply the proposed method to save time and effort in industry. The proposed method using non-normal measures can be considered for future research.

**Author Contributions:** Conceived and designed the experiments, M.A.; Performed the experiments, M.A. Analyzed the data, M.A. and O.H.A.; Contributed reagents/materials/analysis tools, M.A.; Wrote the paper, M.A. and O.H.A.

**Funding:** This article was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah. The authors, therefore, acknowledge with thanks DSR technical and financial support.

**Acknowledgments:** The authors are deeply thankful to editor and reviewers for their valuable suggestions to improve the quality of this manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Time-Truncated Group Plan under a Weibull Distribution based on Neutrosophic Statistics**

#### **Muhammad Aslam 1,\*, P. Jeyadurga 2, Saminathan Balamurali <sup>2</sup> and Ali Hussein AL-Marshadi <sup>1</sup>**


Received: 26 July 2019; Accepted: 20 September 2019; Published: 27 September 2019

**Abstract:** The aim of reducing the inspection cost and time using acceptance sampling can be achieved by utilizing the features of allocating more than one sample item to a single tester. Therefore, group acceptance sampling plans are occupying an important place in the literature because they have the above-mentioned facility. In this paper, the designing of a group acceptance sampling plan is considered to provide assurance on the product's mean life. We design the proposed plan based on neutrosophic statistics under the assumption that the product's lifetime follows a Weibull distribution. We determine the optimal parameters using two specified points on the operating characteristic curve. The discussion on how to implement the proposed plan is provided by an illustrative example.

**Keywords:** time-truncated test; Weibull distribution; risk; uncertainty; neutrosophic

#### **1. Introduction**

The ambition of each producer is to globalize their business by marketing their products. However, few producers reach this goal, since only those who make sincere efforts to improve and control product quality accomplish this target. A producer who enhances product quality need not be concerned about globalization, because continuous improvement in quality helps to increase the positive opinion of the products and to fulfill consumers' expectations. Hence, the great efforts of the producers support attaining the desired result and achieving this ambition. For quality improvement and maintenance purposes, the producer uses certain statistical techniques, namely control charts and acceptance sampling (see Montgomery [1] and Schilling and Neubauer [2]). Despite the application of control charts in quality maintenance via monitoring of the manufacturing process, they are not suitable for assuring the quality of finished products. But there is a necessity to provide quality assurance for the products before they are received by the consumer. In this situation, manufacturers may prefer complete inspection. However, complete inspection is not appropriate for all situations because it is costly, requires quality inspectors, and is time consuming. Therefore, in most cases, manufacturers adopt sampling inspection to provide quality assurance. In sampling inspection, a sample of items is selected randomly from the entire lot for inspection.

Acceptance sampling is also a form of sampling inspection, in which the decision to accept or reject a lot is made based on the results of sample items taken from the concerned lot. Obviously, acceptance sampling overcomes the drawbacks of complete inspection, such as inspection cost and time consumption, since it inspects only a part of the items of the lot for making decisions. Acceptance sampling plans yield the sample size and the acceptance criteria associated with the sampling rules to be implemented. For further details on acceptance sampling, one may refer to Dodge [3] and Schilling and Neubauer [2]. In the literature, several sampling plans are available for lot sentencing with different sampling procedures; however, a single-sampling plan (SSP) is the most basic, as well as the easiest, sampling plan in terms of the implementation process. In SSP, a single sample of size *n* is taken for lot sentencing, and the acceptance/rejection decision is made immediately by comparing the sample results with acceptance numbers determined from attribute inspections or with acceptance criteria from variables inspections. Many authors have investigated SSPs under various situations (see, for example, Loganathan et al. [4], Liu and Cui [5], Govindaraju [6], and Hu and Gui [7]).

In SSP implementation, a sample of *n* items is distributed to *n* testers, and the decision is made after consolidating the information obtained from all the testers. Obviously, it requires much time to make a decision, and the inspection cost is also high. One can overcome these drawbacks by implementing a group acceptance sampling plan (GASP) instead of using SSP. In GASP, a certain number of sample items are allocated to a single tester, and the test is conducted simultaneously on the sample items. Therefore, the testing time and inspection cost are reduced automatically under GASP when compared to SSP. It is to be mentioned that the number of testers involved in the inspection is frequently referred to as the number of groups, and the number of sample items allocated to each group is defined as the group size. For the purposes of making a decision on the lot by utilizing minimum cost and time, GASP has been used for the inspection of different quality characteristics by several authors (see, for example, Aslam and Jun [8]).

When industrial practitioners are uncertain about the parameters, the inspection cannot be done using traditional sampling plans. In this case, the use of fuzzy-based sampling plans is the best alternative to traditional sampling plans. Fuzzy-based sampling plans have been widely used for lot sentencing. Kanagawa and Ohta [9] proposed a single-attribute plan using fuzzy logic. More details on fuzzy sampling plans can be seen in Chakraborty [10], Jamkhaneh and Gildeh [11], Turanoğlu et al. [12], Jamkhaneh and Gildeh [13], Tong and Wang [14], Uma and Ramya [15], Afshari and Gildeh [16], and Khan et al. [17].

The fuzzy approach has been used to compute the degree of truth. Fuzzy logic is a special case of neutrosophic logic; the latter approach computes measures of indeterminacy in addition to the former (see Smarandache [18]). Abdel-Basset et al. [19] discussed the application of neutrosophic logic in decision making. Abdel-Basset et al. [20] worked on linear programming using the idea of neutrosophic logic. Broumi et al. [21] provided the minimum spanning tree using neutrosophic logic. More details can be seen in [22,23]. Neutrosophic statistics is treated as an extension of classical statistics, in which set values are considered rather than crisp values. Sometimes the data may be imprecise, incomplete, and unknown, and exact computation is not possible; under these situations, the neutrosophic statistics concept is used (see Smarandache [24]). Broumi and Smarandache [25] discussed the correlations of sets using neutrosophic logic. More details about the use of neutrosophic logic in sets can be seen in [26–28]. One can use a set of values (that respectively approximate crisp numbers) for a single variable using neutrosophic statistics. Chen et al. [29,30] introduced neutrosophic numbers to solve rock engineering problems. Patro and Smarandache [31] and Alhabib et al. [32] discussed some basics of probability distributions under neutrosophic numbers. Nowadays, the neutrosophic statistics concept is used for quality control purposes. When designing control charts and sampling plans under classical statistics, it is assumed that the value which represents the quality of the product is known; in neutrosophic statistics, such a value is indeterminate or lies in an interval. Some researchers have designed control charts and acceptance sampling plans under these statistics (see, for example, Aslam et al. [33]). Aslam [34] introduced neutrosophic statistics in the area of acceptance sampling plans.
Aslam and Arif [35] proposed a sudden death testing plan under uncertainty.

As mentioned earlier, Aslam and Jun [8] designed GASP to ensure the Weibull-distributed mean life of the products under classical statistics. They determined the optimal parameters for some calculated values of failure probability; however, they did not consider the case where the failure probability is uncertain. Therefore, in this paper, we attempted to design GASP for providing Weibull-distributed mean life assurance where the values of shape parameters and failure probabilities are uncertain. That is, we considered the design of GASP under neutrosophic statistics, which is the main difference between the proposed work and the work done by Aslam and Jun [8]. We will compare the proposed plan with the existing sampling plan under classical statistics in terms of the sample size required for inspection. We expect that the proposed plan will be quite effective, adequate, and efficient compared to the existing plan in an uncertainty environment.

#### **2. Design of the Proposed Plan using Neutrosophic Statistics**

The method to design the proposed GASP for providing quality assurance of the product in terms of mean life is discussed in this section. The ratio between the true mean life and the specified mean life of the product is considered as the quality of the product. A Weibull distribution is considered as an appropriate model to express the lifetime of the product because of its flexible nature. So, we assume that the lifetime of the product *tN* ∈ {*tL*, *tU*} under study follows a neutrosophic Weibull distribution, which has the shape parameter δ*<sup>N</sup>* ∈ {δ*L*, δ*U*} and scale parameter λ*<sup>N</sup>* ∈ {λ*L*, λ*U*}. Then, the cumulative distribution function (cdf) of the Weibull distribution is obtained as follows.

$$F(t\_N; \lambda\_N, \delta\_N) = 1 - \exp\left(-\left(\frac{t\_N}{\lambda\_N}\right)^{\delta\_N}\right), t\_N \ge 0, \lambda\_N > 0, \delta\_N > 0. \tag{1}$$

In this study, it is assumed that the scale parameter λ*<sup>N</sup>* is unknown and the shape parameter δ*<sup>N</sup>* is known. It can be seen that the cdf depends only on *tN*/λ*<sup>N</sup>* since the shape parameter is known. One can estimate the shape parameter from the available history of the production process when it is unknown. The true mean life of the product under the neutrosophic Weibull distribution is calculated by the following equation

$$
\mu\_N = \left(\frac{\lambda\_N}{\delta\_N}\right) \Gamma\left(\frac{1}{\delta\_N}\right) \tag{2}
$$

where Γ(.) represents the complete gamma function. Then, the probability that the product will fail before it reaches the experiment time *tN*<sup>0</sup> is denoted by *pN* and is given as follows

$$p\_N = 1 - \exp\left(-\left(\frac{t\_{N0}}{\lambda\_N}\right)^{\delta\_N}\right). \tag{3}$$

As pointed out by Aslam and Jun [8], we can write *tN*<sup>0</sup> as a constant multiple of the specified mean life μ*N*0, i.e., *tN*<sup>0</sup> = *a*μ*N*0, where '*a*' is called the experiment termination ratio. Also, we can express the unknown scale parameter in terms of the true mean life and the known shape parameter. After substituting *tN*<sup>0</sup> and λ*<sup>N</sup>* and simplifying, one can obtain the probability that the product will fail before attaining the experiment time *tN*<sup>0</sup> using the following equation.

$$p\_N = 1 - \exp\left(-a^{\delta\_N} \left(\frac{\mu\_{N0}}{\mu\_N}\right)^{\delta\_N} \left(\frac{\Gamma\left(\frac{1}{\delta\_N}\right)}{\delta\_N}\right)^{\delta\_N}\right) \tag{4}$$
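For a crisp (non-interval) value of the shape parameter, Equations (2) and (4) can be evaluated directly; a minimal sketch, with illustrative function names:

```python
from math import exp, gamma

def weibull_mean(lam, delta):
    # Eq. (2): true mean life (lam/delta) * Gamma(1/delta)
    return (lam / delta) * gamma(1.0 / delta)

def failure_prob(a, ratio, delta):
    """Eq. (4): probability of failure before t0 = a * mu0,
    where ratio = mu / mu0 is the mean ratio and delta is the
    Weibull shape parameter."""
    g = gamma(1.0 / delta) / delta
    return 1.0 - exp(-((a / ratio) * g) ** delta)
```

For delta = 1 the Weibull reduces to the exponential distribution, so `weibull_mean(lam, 1)` equals `lam` and `failure_prob(a, 1, 1)` equals 1 − e^(−a); the failure probability also decreases as the mean ratio grows, which is why ratios above one serve as AQL below.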

With respect to the ratio between the true mean life and the specified mean life, μ*N*/μ*N*0, the acceptable quality level (AQL, i.e., *pN*1) and the limiting quality level (LQL, i.e., *pN*2) are defined. That is, the failure probabilities obtained when the mean ratio is greater than one are taken as AQL, and those obtained when the mean ratio equals one are considered as LQL. The operating procedure of the proposed GASP for a time-truncated life test is described as follows:


**Step 3.** If at most *cN* sample items are found to have failed in each of the *gN* groups, where *cN* ∈ {*cL*, *cU*}, then accept the lot. Otherwise, reject the lot.

The two parameters used to characterize the proposed plan are the number of groups *gN* and the acceptance number *cN*. It is to be noted that *rN* ∈ {*rL*, *rU*} denotes the number of items in each group and is called the group size. The operating procedure of the proposed GASP is represented by the flow chart shown in Figure 1.

**Figure 1.** Operating procedure of the proposed group acceptance sampling plan (GASP) under a truncated life test.

In general, an operating characteristic (OC) function helps to investigate the performance of the sampling plan. The OC function of the proposed GASP under a Weibull model based on time-truncated test is given by

$$P\_{aN}(p\_N) = \left[\sum\_{d\_N=0}^{c\_N} \binom{r\_N}{d\_N} p\_N^{d\_N} (1-p\_N)^{r\_N - d\_N} \right]^{g\_N}.\tag{5}$$
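The OC function in Equation (5) is a binomial tail probability (at most *cN* failures among *rN* items) raised to the power of the number of groups; a crisp-value sketch with illustrative names:

```python
from math import comb

def oc_probability(p, r, c, g):
    """Eq. (5): lot-acceptance probability -- the chance of at most c
    failures among r items in one group, required in all g groups."""
    per_group = sum(comb(r, d) * p**d * (1 - p)**(r - d)
                    for d in range(c + 1))
    return per_group ** g
```

A perfect lot (p = 0) is always accepted, a fully defective lot (p = 1) is always rejected when c < r, and increasing the number of groups g tightens the plan.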

Generally, each producer wishes the sampling plan to provide a chance greater than (1 − α) of accepting the product when the product quality is at AQL, where α is the producer's risk, whereas the consumer wants the chance of accepting the lot to be less than β when the quality of the product is at LQL, where β is the consumer's risk. Obviously, the sampling plan that involves minimum risks to both producer and consumer will be favorable. The design of a sampling plan by considering AQL and LQL, along with the producer's and consumer's risks, is known as the two points on the OC curve approach, and this approach is considered the most important among others. Similarly, a sampling plan that makes its decision on the submitted lot using a minimum sample size or average sample number (ASN) will be attractive. Therefore, in this study, we design GASP with the intention of assuring the Weibull-distributed mean life of the products with minimum sample size and minimum cost using the two points on the OC curve approach. It should be mentioned that the ASN of the proposed plan is the product of the number of groups and the group size (i.e., *nN* = *gNrN*). For determining the optimal parameters, we use the following optimization problem.

$$\begin{aligned} \text{Minimize } & g\_N \\ \text{Subject to } & P\_{aN}(p\_{N1}) \ge 1 - \alpha, \\ & P\_{aN}(p\_{N2}) \le \beta, \\ & g\_N \ge 1,\ r\_N > 1,\ c\_N \ge 0, \end{aligned} \tag{6}$$

where *pN1* and *pN2* are the failure probabilities at AQL and LQL, respectively, obtained from the following equations:

$$p_{N1} = 1 - \exp\left(-a^{\delta_{N1}} \left(\frac{\mu_{N0}}{\mu_N}\right)^{\delta_{N1}} \left(\frac{\Gamma\left(\frac{1}{\delta_{N1}}\right)}{\delta_{N1}}\right)^{\delta_{N1}}\right),\quad \delta_{N1} \in \{\delta_{L1}, \delta_{U1}\},\tag{7}$$

$$p_{N2} = 1 - \exp\left(-a^{\delta_{N2}} \left(\frac{\mu_{N0}}{\mu_N}\right)^{\delta_{N2}} \left(\frac{\Gamma\left(\frac{1}{\delta_{N2}}\right)}{\delta_{N2}}\right)^{\delta_{N2}}\right),\quad \delta_{N2} \in \{\delta_{L2}, \delta_{U2}\},\tag{8}$$

$$P_{aN}(p_{N1}) = \left[ \sum_{d_N=0}^{c_N} \binom{r_N}{d_N} p_{N1}^{d_N} (1 - p_{N1})^{r_N - d_N} \right]^{g_N},\tag{9}$$

$$P_{aN}(p_{N2}) = \left[\sum_{d_N=0}^{c_N} \binom{r_N}{d_N} p_{N2}^{d_N} (1 - p_{N2})^{r_N - d_N} \right]^{g_N}.\tag{10}$$

In this design, we define AQL as the failure probability corresponding to the mean ratios μ*N*/μ*N0* = 2, 4, 6, 8, 10. Similarly, LQL is defined as the failure probability corresponding to the mean ratio μ*N*/μ*N0* = 1. The optimal parameters of the proposed GASP are determined for various combinations of group size, shape parameter, and producer's risk. We used the grid search method under neutrosophic statistics to find the optimal values of the parameters [*gL*, *gU*] and [*cL*, *cU*], selecting, from the several combinations of parameters satisfying the given conditions, those for which the range between *gL* and *gU* is minimum. For this determination, we considered two sets of group sizes, *rN* = [10, 12] and *rN* = [4, 6], and two sets of shape parameters, δ*N* = [0.9, 1.1] and δ*N* = [1.9, 2.1]. Similarly, the producer's risks are assumed to be α = 0.1 and α = 0.05, and four values of the consumer's risk, namely β = 0.25, 0.10, 0.05, 0.01, are used. The experiment termination ratios involved in this determination are *a* = 0.5 and *a* = 1. The optimal parameters are reported in Tables 1–4, from which the following trends can be observed.
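The grid search described above can be sketched in Python for the classical (crisp) case; under neutrosophic statistics the same search is simply repeated at the lower and upper bounds of *rN* and δ*N*. The function names and the search bound `g_max` are illustrative assumptions, not taken from the paper.

```python
import math

def weibull_failure_prob(a, mean_ratio, delta):
    """Failure probability before t0 = a * mu0 for a Weibull lifetime with
    shape delta and mean mu = mean_ratio * mu0 (Equations (7)-(8))."""
    return 1.0 - math.exp(-(a ** delta) * (1.0 / mean_ratio) ** delta
                          * (math.gamma(1.0 / delta) / delta) ** delta)

def lot_acceptance_prob(p, g, r, c):
    """OC function (5): every one of the g groups must show at most c failures."""
    per_group = sum(math.comb(r, d) * p**d * (1 - p)**(r - d) for d in range(c + 1))
    return per_group ** g

def optimal_plan(a, r, delta, mean_ratio, alpha=0.10, beta=0.25, g_max=200):
    """Grid search for the smallest number of groups g (with some c)
    satisfying the two constraints of problem (6)."""
    p1 = weibull_failure_prob(a, mean_ratio, delta)  # failure probability at AQL
    p2 = weibull_failure_prob(a, 1.0, delta)         # failure probability at LQL
    for g in range(1, g_max + 1):
        for c in range(r):
            if (lot_acceptance_prob(p1, g, r, c) >= 1 - alpha and
                    lot_acceptance_prob(p2, g, r, c) <= beta):
                return g, c
    return None
```

Repeating `optimal_plan` at the two bounds of the neutrosophic intervals yields the interval-valued parameters [*gL*, *gU*] and [*cL*, *cU*] reported in the tables.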



**Table 1.** Optimal parameters of the proposed GASP under neutrosophic statistics when *rN* = [10, 12] and δ*<sup>N</sup>* = [0.9, 1.1].

\*: Plan does not exist.

**Table 2.** Optimal parameters of the proposed GASP under neutrosophic statistics when *rN* = [10, 12] and δ*<sup>N</sup>* = [1.9, 2.1].



**Table 3.** Optimal parameters of the proposed GASP under neutrosophic statistics when *rN* = [4, 6] and δ*<sup>N</sup>* = [0.9, 1.1].

\*: Plan does not exist.

**Table 4.** Optimal parameters of the proposed GASP under neutrosophic statistics when *rN* = [4, 6] and δ*<sup>N</sup>* = [1.9, 2.1].


\*: Plan does not exist.

#### **3. Illustrative Example**

Suppose that a manufacturer wants to provide a mean life assurance for his product, and he claims that the true mean life of the product is μ*<sup>N</sup>* = 500 h. The quality inspector decides to check whether the manufacturer's claim on the lifetime of the product is true or not and, therefore, specifies the experiment time as *tN*<sup>0</sup> = 500 h. Hence, the experiment termination ratio is calculated as *a* = 1.0. The failure probability corresponding to the mean ratio μ*N*/μ*N*<sup>0</sup> = 4 is considered as AQL, and the same at mean ratio 1 is taken as LQL. The consumer's risk is assumed to be β = 0.25. The shape parameter of the Weibull distribution is specified as δ*<sup>N</sup>* = 0.9. Suppose the quality inspector wants to implement the proposed GASP under neutrosophic statistics and decides to allocate *rN* = 6 items to each tester. Therefore, in order to execute the proposed plan for the above-specified conditions, we obtain the optimal parameters *gN* = [8, 10] and *cN* = [3, 4] from Table 3. The input values (or specified values) and the optimal values determined for those input values are reported in Table 5 for easy identification.


This shows that the number of groups for the inspection lies between 8 and 10. Suppose the quality inspector chooses eight groups. Then, the implementation procedure of the proposed plan is as follows.

A random sample of 48 items is chosen from the submitted lot, and 6 sample items are allocated to each of the 8 groups. The sample items are put on the life test, which is conducted up to the specified time of 500 h. The submitted lot is accepted if at most 3 sample items fail before 500 h in each of the 8 groups; otherwise, the lot is rejected.
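The acceptance rule of this example can be expressed as a one-line check (a sketch; the function name is ours):

```python
def lot_decision(failures_per_group, c=3):
    """Accept the lot only if every group shows at most c failures before
    the truncation time (here c = 3 for 8 groups of 6 items each)."""
    return "accept" if max(failures_per_group) <= c else "reject"
```

For instance, failure counts of `[0, 1, 3, 2, 0, 1, 2, 3]` over the 8 groups lead to acceptance, while a single group with 4 failures leads to rejection.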

#### **4. Comparison**

To show the efficiency of the proposed plan in terms of the number of groups (sample size) over the existing SSP, we tabulated the optimal parameters determined for some specified values of *a*, *rN*, and δ*N*. The minimum number of groups required for inspecting the lot under neutrosophic statistics and classical statistics is shown in Table 6. We note from Table 6 that the proposed sampling plan under neutrosophic statistics requires a smaller number of groups than the time-truncated plan under classical statistics. For example, when *a* = 0.5 and μ*N*/μ*N*<sup>0</sup> = 2, the number of groups in the indeterminacy interval under classical statistics is larger than for the proposed sampling plan under neutrosophic statistics. The same efficiency of the proposed plan can be observed for all other specified parameters. By comparing both sampling plans, it can be noted that the time-truncated group sampling plan under neutrosophic statistics is better than the plan using classical statistics. Hence, the proposed plan is more economical than the existing plan, saving cost, time, and effort in uncertain environments.

**Table 6.** Values of the proposed GASP and single-sampling plan (SSP) under neutrosophic statistics when *rN* = [10, 12] and δ*<sup>N</sup>* = [0.9, 1.1].



**Table 6.** *Cont.*

\*: Plan does not exist.

#### **5. Conclusions**

In this paper, we have designed a group acceptance sampling plan for cases where the quality of the product is indeterminate and vague. Therefore, neutrosophic statistics has been used in this design instead of classical statistics. The optimal parameters determined for different combinations of group sizes and shape parameters have been tabulated. It is concluded from this study that one can use the proposed plan when there is uncertainty in the product's quality. The proposed sampling plan using some other neutrosophic distributions or sampling schemes can be considered in future research. The extension of the proposed plan to big data is also a fruitful area for future research.

**Author Contributions:** Conceived and designed the experiments, M.A., P.J. and S.B., and A.H.A.-M. Performed the experiments, M.A. and A.H.A.-M. Analyzed the data, M.A. and A.H.A.-M. Contributed reagents/materials/analysis tools, M.A. Wrote the paper, M.A.

**Funding:** This article was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah. The authors, therefore, acknowledge and thank DSR technical and financial support.

**Acknowledgments:** The authors are deeply thankful to the editor and reviewers for their valuable suggestions to improve the quality of this manuscript. The author (P. Jeyadurga) would like to thank the Kalasalingam Academy of Research and Education for providing financial support through postdoctoral fellowship.

**Conflicts of Interest:** The authors declare no conflicts of interest regarding this paper.

#### **Glossary**



#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A New X-Bar Control Chart for Using Neutrosophic Exponentially Weighted Moving Average**

#### **Muhammad Aslam 1,\*, Ali Hussein AL-Marshadi <sup>1</sup> and Nasrullah Khan <sup>2</sup>**


Received: 22 August 2019; Accepted: 8 October 2019; Published: 12 October 2019

**Abstract:** The existing Shewhart X-bar control charts using the exponentially weighted moving average statistic are designed under the assumption that all observations are precise, determined, and known. In practice, it may be possible that the sample or the population observations are imprecise or fuzzy. In this paper, we present the design of the X-bar control chart under the symmetry property of the normal distribution using the neutrosophic exponentially weighted moving average statistic. We first introduce the neutrosophic exponentially weighted moving average statistic, and then use it to design the X-bar control chart for monitoring data under an uncertainty environment. We determine the neutrosophic average run length using neutrosophic Monte Carlo simulation. The efficiency of the proposed chart is compared with existing control charts.

**Keywords:** neutrosophic logic; fuzzy logic; control chart; neutrosophic numbers; monitoring

#### **1. Introduction**

The production process may shift from the target due to a number of reasons. Therefore, to produce the product according to the given specifications, the process is monitored to indicate any shift. Control charts are popularly used in industry to watch the production process, and usually the Shewhart control charts are used for this monitoring. Although these control charts have a simple operational procedure, they are unable to detect a small shift in the process; this failure to detect very small shifts causes a high rate of non-conforming product. Applications of such charts can be seen in [1–6].

Control charts using the exponentially weighted moving average (EWMA) use both the current and previous subgroup information and are therefore more efficient in detecting a very small shift in the process than the traditional Shewhart control charts. Roberts [7] was the first to design a control chart using this statistic. Haq [8] and Haq et al. [9,10] used the EWMA statistic to propose a variety of control charts. Abbasi et al. [11] and Abbasi [12] introduced its setting in normal and non-normal situations and for measurement errors, respectively. Sanusi et al. [13] presented an alternative to the EWMA-based chart when additional information about the main variable is available. References [14–17] presented such control charts. More basic information about control charts can be found in [18,19].

The traditional Shewhart control charts cannot be applied when uncertainty or randomness is expected in the data. Fuzzy-based control charts are the best alternative for monitoring the process when the observations or the parameters under study are fuzzy. As mentioned by Khademi and Amirzadeh [20], "Fuzzy data exist ubiquitously in the modern manufacturing process"; therefore, several authors have paid attention to such control charts; see, for example, [21–26].

Traditional fuzzy logic is a special case of neutrosophic logic. The latter has the ability to deal with the measure of indeterminacy; see Smarandache [27]. The classical statistics (CS) method is applied under the assumption that all observations in the data are determined, precise, and certain. However, in the modern manufacturing process, it may not be possible to record fully determined observations. In this situation, neutrosophic statistics (NS) can be applied for the analysis of the data. NS was introduced by Smarandache [28] using neutrosophic logic and is the generalization of CS. NS is more effective than CS for the analysis of imprecise data. Chen et al. [29,30] proved the effectiveness of NS-based analysis. Aslam [31] introduced a new area of neutrosophic quality control (NQC). Aslam et al. [32,33] introduced NS-based attribute and variable charts. Aslam and Khan [34] proposed the X-bar chart under NS. Aslam et al. [35] designed a chart to monitor reliability under uncertainty. Aslam [36,37] proposed attribute and variable charts using resampling under NS.

Şentürk et al. [38] proposed the EWMA control chart using the fuzzy approach, which is a special case of the control chart using neutrosophic logic, as mentioned by Smarandache [27]. By looking into the literature on control charts under the uncertainty environment, we did not find any work on the X-bar control chart based on the neutrosophic exponentially weighted moving average (NEWMA). In this paper, we first introduce the NEWMA statistic. We then introduce a new Monte Carlo simulation under the neutrosophic statistical interval method (NSIM). We determine the neutrosophic average run length (NARL) of the proposed chart to compare its performance. We expect the proposed chart to be more sensitive in detecting a small shift in the process than the traditional Shewhart X-bar chart, the EWMA X-bar chart under CS [19], and the X-bar chart under NS [34].

#### **2. The Proposed NEWMA Statistics**

In this section, we introduce the NEWMA statistic. Let $\overline{X}_N = \left\{\sum_{i=1}^{n_L} \frac{X_i}{n_L}, \sum_{i=1}^{n_U} \frac{X_i}{n_U}\right\}$; $\overline{X}_N \in \left\{\overline{X}_L, \overline{X}_U\right\}$ be the neutrosophic sample average of a neutrosophic random variable (nrv) $X_{iN} \in \{X_L, X_U\}$, $i = 1, 2, 3, \dots, n_N$, where $n_N$ is the neutrosophic sample size. Suppose that $S_N^2 = \sum_{i=1}^{n_N}\left(X_N - \overline{X}_N\right)^2/(n_N - 1)$; $S_N^2 \in \left\{S_L^2, S_U^2\right\}$ represents the neutrosophic sample variance. By following Smarandache [28] and Aslam [31], the neutrosophic sample average follows the neutrosophic normal distribution (NND) with neutrosophic population mean $\mu_N = \sum_{i=1}^{N_N} X_N/N_N$; $\mu_N \in \{\mu_L, \mu_U\}$ and neutrosophic population variance $\sigma_N^2 = \left(\sum_{i=1}^{N_N}(X_N - \mu_N)^2/(N_N - 1)\right)/n_N$; $\sigma_N^2 \in \left\{\sigma_L^2/n_N, \sigma_U^2/n_N\right\}$. Based on the given information, we define the NEWMA statistic as follows:

$$EWMA_{N,i} = \lambda_N \overline{X}_N + (1 - \lambda_N) EWMA_{N,i-1}; \ EWMA_{N,i} \in \left\{ EWMA_{L,i}, EWMA_{U,i} \right\} \tag{1}$$

where λ*<sup>N</sup>* {λ*L*, λ*U*}; [0, 0] ≤ λ*<sup>N</sup>* ≤ [1, 1] denotes the neutrosophic smoothing constant. Note here that *XN XL*, *XU* are assumed to be independent random variables with neutrosophic variance σ<sup>2</sup> *<sup>N</sup>*/*nN* (σ<sup>2</sup> *N* σ2 *<sup>L</sup>*/*nN*, <sup>σ</sup><sup>2</sup> *<sup>L</sup>*/*nN* and known neutrosophic population variance, as shown in [38]. The setting of λ*<sup>N</sup>* {λ*L*, λ*U*} is matter of personal experience. Montgomery [14] recommended that it should be selected from 0.05 to 0.25. The *EWMAN*,*<sup>i</sup>* follows the NND with neutrosophic mean μ*<sup>N</sup>* μ*L*, μ*<sup>U</sup>* and neutrosophic standard deviation √ σ*<sup>N</sup> nN* - <sup>λ</sup>*<sup>N</sup>* <sup>2</sup>−λ*<sup>N</sup>* .

#### **3. The Proposed NEWMA X-Bar Control Chart**

The proposed X-bar control chart using the NS is described as follows:

1. Choose a random sample of size *nN* ∈ {*nL*, *nU*} and compute the *EWMAN,i* statistic.

$$EWMA_{N,i} = \lambda_N \overline{X}_N + (1 - \lambda_N) EWMA_{N,i-1}; \; EWMA_{N,i} \in \left\{ EWMA_{L,i}, \; EWMA_{U,i} \right\}$$

2. Declare the process to be in an in-control state if *LCLN* < *EWMAN,i* < *UCLN*; otherwise, it is in an out-of-control state. Note here that *LCLN* ∈ [*LCLL*, *LCLU*] and *UCLN* ∈ [*UCLL*, *UCLU*] are the neutrosophic lower and upper control limits.

The proposed chart reduces to the chart based on NS proposed by Aslam and Khan [34] when λ*N* ∈ {1, 1}. When all the observations are precise, the proposed chart becomes the traditional Shewhart chart under CS. The neutrosophic control limits are given by:

$$LCL_{N} = \mu_{N} - k_{N} \frac{\sigma_{N}}{\sqrt{n_{N}}} \sqrt{\frac{\lambda_{N}}{2 - \lambda_{N}}}; \ LCL_{N} \in \left[ LCL_{L}, LCL_{U} \right], \ k_{N} \in \left\{ k_{L}, k_{U} \right\}, \ \mu_{N} \in \left\{ \mu_{L}, \mu_{U} \right\} \tag{2}$$

$$UCL_{N} = \mu_{N} + k_{N} \frac{\sigma_{N}}{\sqrt{n_{N}}} \sqrt{\frac{\lambda_{N}}{2 - \lambda_{N}}}; \ UCL_{N} \in \left[ UCL_{L}, UCL_{U} \right], \ k_{N} \in \left\{ k_{L}, k_{U} \right\}, \ \mu_{N} \in \left\{ \mu_{L}, \mu_{U} \right\} \tag{3}$$

where *kN* ∈ {*kL*, *kU*} is the neutrosophic control limits coefficient, which will be determined later. Let μ0*N* ∈ {μ0*L*, μ0*U*} be the target value for the process. According to the operational process of the proposed control chart, the probability that the process under the NS is in an in-control state is given by:

$$P_{inN}^{0} = P\{LCL_N \le \overline{X}_N \le UCL_N \mid \mu_{0N}\}; \ \mu_{0N} \in \{\mu_{0L}, \mu_{0U}\} \tag{4}$$

The neutrosophic average run length (NARL) of the proposed chart is given by:

$$ARL_{0N} = \frac{1}{1 - P_{inN}^0}; \ ARL_{0N} \in \{ARL_{0L}, ARL_{0U}\} \tag{5}$$

Suppose now that the process has shifted to a new target at μ1*N* = μ0*N* + *d*σ*N*; μ1*N* ∈ {μ1*L*, μ1*U*}, where *d* is the shift constant. The neutrosophic probability of an in-control state at μ1*N* ∈ {μ1*L*, μ1*U*} is given by:

$$P_{in}^1 = P\{LCL_N \le \overline{X}_N \le UCL_N \mid \mu_{1N} = \mu_{0N} + d\sigma_N\}; \;\mu_{1N} \in \{\mu_{1L}, \mu_{1U}\}.$$

The NARL at μ1*N* ∈ {μ1*L*, μ1*U*} is defined by:

$$ARL_{1N} = \frac{1}{1 - P_{in}^1}; \; ARL_{1N} \in \{ARL_{1L}, ARL_{1U}\} \tag{6}$$
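Equations (2), (3), (5) and (6) can be sketched as follows, evaluating each bound of the neutrosophic parameters separately (a simplification we assume for illustration; all names are ours):

```python
import math

def newma_limits(mu, sigma, n, lam, k):
    """Control limits (2)-(3), computed at the lower and upper bounds of
    the neutrosophic parameters, each given as a (lower, upper) pair."""
    limits = []
    for mu_i, s_i, n_i, l_i, k_i in zip(mu, sigma, n, lam, k):
        w = k_i * (s_i / math.sqrt(n_i)) * math.sqrt(l_i / (2 - l_i))
        limits.append((mu_i - w, mu_i + w))
    return limits  # [(LCL_L, UCL_L), (LCL_U, UCL_U)]

def arl_from_p(p_in):
    """ARL = 1 / (1 - P_in), as in Equations (5)-(6)."""
    return 1.0 / (1.0 - p_in)
```

For example, with λ*N* = 1 the width reduces to the Shewhart limit $k\sigma/\sqrt{n}$, consistent with the remark that the chart reduces to the X-bar chart of [34] in that case.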

#### **4. The Proposed Neutrosophic Monte Carlo Simulation (NMCS)**

As mentioned earlier, the neutrosophic control limits coefficient *kN* ∈ {*kL*, *kU*} will be determined through the neutrosophic Monte Carlo simulation (NMCS) under the given constraints. The proposed NMCS is stated as follows.

#### *4.1. For In-Control State*

**Step 1**: A random sample interval of size *nN* ∈ {*nL*, *nU*} is generated from the standard normal distribution, and the neutrosophic sample mean *XN* ∈ {*XL*, *XU*} is computed. The plotting statistic *EWMAN,i* is computed as:

$$EWMA_{N,i} = \lambda_N \overline{X}_N + (1 - \lambda_N) EWMA_{N,i-1}$$

**Step 2**: The statistic *EWMAN,i* is plotted against *LCLN* ∈ [*LCLL*, *LCLU*] and *UCLN* ∈ [*UCLL*, *UCLU*] for a suitable value of *kN* ∈ {*kL*, *kU*}, and *ARL0N* ∈ {*ARL0L*, *ARL0U*} is computed.

**Step 3**: The *ARL0N* ∈ {*ARL0L*, *ARL0U*} and the neutrosophic standard deviation (NSD) are computed by iterating the process 10,000 times; only those values of *kN* ∈ {*kL*, *kU*}, along with their respective parameters, are selected for which *ARL0N* = *r0N*, where *r0N* is the specified value of *ARL0N* ∈ {*ARL0L*, *ARL0U*}.

#### *4.2. For Shifted Process*

**Step 1:** For the selected values of *kN* ∈ {*kL*, *kU*} and their corresponding parameters, *LCLN* ∈ [*LCLL*, *LCLU*] and *UCLN* ∈ [*UCLL*, *UCLU*] are constructed.

**Step 2:** As explained for the in-control process in Step 1, data are now generated at μ1*N* ∈ {μ1*L*, μ1*U*} and plotted against *LCLN* ∈ [*LCLL*, *LCLU*] and *UCLN* ∈ [*UCLL*, *UCLU*], and *ARL1N* ∈ {*ARL1L*, *ARL1U*} is computed.

**Step 3:** The *ARL1N* ∈ {*ARL1L*, *ARL1U*} is computed for a specified shift level by 10,000 iterations of the process.

**Step 4:** Steps 2 and 3 are repeated for various shift levels, and the values of *ARL1N* ∈ {*ARL1L*, *ARL1U*} and NSD are computed at the various values of *d*.

Note here that the proposed NMCS is the generalization of the Monte Carlo simulation under CS. The values of *ARL1N* ∈ {*ARL1L*, *ARL1U*} and NSD are determined for various values of *d*, *nN* ∈ {*nL*, *nU*}, λ*N* ∈ {λ*L*, λ*U*} and *ARL0N* ∈ {*ARL0L*, *ARL0U*}, and are shown in Tables 1–4 for *ARL0N* ∈ {300, 300} rather than *ARL0N* ∈ {370, 370}. The values of NARL when *nN* ∈ [3, 5] and λ*N* ∈ [0.08, 0.12] are shown in Table 1; when *nN* ∈ [3, 5] and λ*N* ∈ [0.18, 0.22], in Table 2; when *nN* ∈ [3, 5] and λ*N* ∈ [0.28, 0.32], in Table 3; and when *nN* ∈ [5, 10] and *nN* ∈ [5, 8] with λ*N* ∈ [0.08, 0.12], in Table 4. From Tables 1–4, it is worth noting that, when all other parameters are constant, the values of NSD are smaller for *ARL0N* ∈ {300, 300} than for *ARL0N* ∈ {370, 370}. With an increase in λ*N* ∈ {λ*L*, λ*U*}, we note a decreasing trend in *ARL1N* ∈ {*ARL1L*, *ARL1U*} and an increasing trend in NSD. From Table 4, we observe that the indeterminacy interval in *ARL1N* ∈ {*ARL1L*, *ARL1U*} increases as *nN* ∈ {*nL*, *nU*} increases from *nN* ∈ [5, 8] to *nN* ∈ [5, 10]. On the other hand, the indeterminacy interval in NSD decreases as *nN* ∈ {*nL*, *nU*} increases.
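The simulation of Steps 1–3 can be sketched for one bound of the indeterminacy interval (the NMCS repeats it at both bounds); the 10,000 iterations are reduced here for speed, and all function names are illustrative assumptions:

```python
import math
import random
import statistics

def simulate_run_length(mu0, sigma, n, lam, k, d=0.0, max_len=100000, rng=None):
    """One run length of the EWMA chart at a single bound: samples of size n
    are drawn from N(mu0 + d*sigma, sigma) until the EWMA statistic leaves
    the control limits of Equations (2)-(3)."""
    rng = rng or random.Random()
    limit = k * sigma / math.sqrt(n) * math.sqrt(lam / (2 - lam))
    ewma = mu0
    for t in range(1, max_len + 1):
        xbar = statistics.fmean(rng.gauss(mu0 + d * sigma, sigma) for _ in range(n))
        ewma = lam * xbar + (1 - lam) * ewma
        if abs(ewma - mu0) > limit:
            return t
    return max_len

def arl(reps=2000, **kw):
    """Average run length over repeated simulated runs (Step 3)."""
    return statistics.fmean(simulate_run_length(**kw) for _ in range(reps))
```

In the NMCS, *kN* is tuned at each bound until the simulated in-control ARL matches the specified *r0N*, and the same limits are then re-used under the shifted mean to obtain *ARL1N*.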


**Table 1.** The values of the neutrosophic average run length (NARL) and neutrosophic standard deviation (NSD) when *nN* ∈ [3, 5] and λ*N* ∈ [0.08, 0.12].


**Table 2.** The values of NARL and NSD when *nN* ∈ [3, 5] and λ*N* ∈ [0.18, 0.22].

**Table 3.** The values of NARL and NSD when *nN* ∈ [3, 5] and λ*N* ∈ [0.28, 0.32].



**Table 4.** The values of NARL and NSD when *nN* ∈ [5, 10], *nN* ∈ [5, 8], and λ*N* ∈ [0.08, 0.12].

#### **5. Comparative Studies**

In traditional control charting under CS, it is known that a control chart having smaller values of the average run length (ARL) and the standard deviation of the run length (SDRL) is said to be efficient in detecting a shift in the process. In the neutrosophic theory, according to [29,30], a method is said to be efficient if it provides the parameter in an indeterminacy interval rather than as a determined value under uncertainty. As mentioned by [32], a chart under the NS is said to be more efficient if it has smaller values of NARL than the competitors' charts. We will compare the efficiency of the proposed chart in terms of NARL with the traditional Shewhart X-bar chart, the EWMA X-bar chart proposed by [19], and the chart proposed by [34] under NS, with all charts evaluated at the same specified neutrosophic parameters. Table 5 shows the NARL values of the control charts when *nN* ∈ [3, 5], *ARL0N* ∈ {370, 370}, and λ*N* ∈ [0.08, 0.12]. We note that the proposed chart under the NS has smaller values of NARL than the traditional Shewhart X-bar chart, the EWMA X-bar chart [19], and the chart proposed by [34]. For example, when *d* = 0.05, the NARL and NSD from the present chart are *ARL1N* ∈ {270.32, 248.92} and *NSD* ∈ [257.15, 238.27]; from [34], the NARL is *ARL1N* ∈ [356.86, 348.52]; and from [19], they are 278 and 261, respectively. From this comparison, it is clear that the proposed chart has smaller values of NARL and NSD, and thus the ability to detect a small shift in the process. The theoretical comparisons in NARL of the three charts show the superiority of the proposed control chart.


**Table 5.** The comparison between three charts.

For the simulated data, we suppose that *nN* ∈ [3, 5], *ARL0N* ∈ {370, 370}, and λ*N* ∈ [0.08, 0.12]. Forty observations are generated from the NND, the first 20 assuming that the process is in an in-control state and the next 20 assuming that the process has shifted with *d* = 0.25. The simulated data, along with *XN* ∈ {*XL*, *XU*} and *EWMAN,i*, are shown in Table 6. From Table 1, the tabulated NARL is *ARL1N* ∈ {24.42, 17.5}, so it is expected that the shift should be detected between the 17th and the 24th sample. We constructed Figure 1 for the proposed control chart, Figure 2 for the chart proposed by [34], and Figure 3 for the traditional Shewhart X-bar chart. From Figures 1–3, it is worth noting that the proposed control chart detects the shift in the process between the 17th and the 24th sample. Figure 2 shows that, although the process is in an in-control state, some points are in an indeterminacy interval. Figure 3 shows that the process is in an in-control state and all the parameters are determined. By comparing Figures 1–3, it is concluded that the proposed chart under NS is quite effective, flexible, and efficient in detecting the shift in the process as compared to the existing control charts.



**Figure 1.** The proposed chart for the simulated data.

**Figure 2.** The Aslam and Khan (2019) chart for the simulated data.

**Figure 3.** The X-bar chart under classic statistics (CS) for the simulated data.

#### **6. Application**

A famous automobile industry situated in Saudi Arabia is interested in applying the proposed control chart under the NS for monitoring the production of engine piston rings (EPR). The EPR is an important part of the engine, which improves its efficiency by minimizing gas or oil leakage and transferring heat to the cylinder wall. The EPR measurement is a continuous variable and may take imprecise, fuzzy, and indeterminate values. In such a case, the use of the proposed control chart under the NS to monitor the production process of EPR will be more effective and informative than the use of the existing control chart. The proposed control chart will enhance the power of monitoring the process by using both the current and previous sample information. Furthermore, the simulation study showed the efficiency of the proposed chart over the existing chart proposed by Aslam and Khan [34]. Therefore, the use of the proposed control chart for monitoring EPR production in the industry will help in minimizing the non-conforming EPR product. Suppose that the automobile industry is interested in seeing the efficiency of the proposed chart when *nN* ∈ [3, 5], *ARL0N* ∈ {370, 370}, and λ*N* ∈ [0.12, 0.12]. The neutrosophic control limit coefficient is *kN* ∈ {3.001, 3.002}. The neutrosophic data of EPR are taken from Aslam and Khan [34] and shown in Table 7 for easy reference. The neutrosophic control limits for monitoring the EPR data in Table 7 are *LCLN* ∈ {73.9964, 73.9969} and *UCLN* ∈ {74.0051, 74.0055}, with σ*N* ∈ {0.008896, 0.009399}. We constructed Figure 4 for the proposed control chart, Figure 5 for the chart proposed by Aslam and Khan [34], and Figure 6 for the traditional Shewhart X-bar chart. From Figures 4–6, it is noted that the proposed control chart shows that the process is near the neutrosophic target lines. On the other hand, the existing control chart by Aslam and Khan [34] shows much variation in the process. The traditional Shewhart chart has determined values of the parameters and is not suitable under uncertainty. By comparing the three charts, it is concluded that the proposed chart has the ability to centralize the EPR production process.



**Table 7.** The neutrosophic data of engine piston rings (EPR) taken from Aslam and Khan [34]: 25 samples (Sr# 1–25) of interval-valued observations with their $\overline{X}_N$ and $EWMA_{N,i}$ values.

**Figure 4.** The proposed chart for the real data.

**Figure 5.** The Aslam and Khan (2019) chart for the real data.

**Figure 6.** The X-bar chart under CS for the real data.

#### **7. Concluding Remarks**

We presented the design of the X-bar control chart using the neutrosophic EWMA (NEWMA) statistic. The NEWMA statistic and the NMCS are introduced in this paper. Some tables for various neutrosophic parameters are presented for practical use in the industry. The theoretical comparisons in NARL and the simulation study showed that the proposed chart performs better than the competitors' charts. The real example of EPR data from the automobile industry also showed the efficiency of the proposed chart. We recommend using the proposed control chart for monitoring processes in the automobile, aerospace, mobile, drinking water, and medical instrument industries. The proposed chart can only be applied when the variable of interest follows the neutrosophic normal distribution. The proposed chart using some non-normal distributions, as well as some advanced sampling schemes, can be considered in future research.

**Author Contributions:** M.A.; A.H.A.-M. and N.K. conceived and designed the experiments; M.A. and N.K. performed the experiments; M.A. and N.K. analyzed the data; M.A. contributed reagents/materials/analysis tools; M.A. wrote the paper.

**Funding:** This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The author, therefore, gratefully acknowledges the DSR technical and financial support.

**Acknowledgments:** The authors are deeply thankful to the editor and reviewers for their valuable suggestions to improve the quality of this manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest regarding this paper.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Neutrosophic Portfolios of Financial Assets. Minimizing the Risk of Neutrosophic Portfolios**

#### **Marcel-Ioan Bolos, 1, Ioana-Alexandra Bradea <sup>2</sup> and Camelia Delcea 2,\***


Received: 16 September 2019; Accepted: 24 October 2019; Published: 3 November 2019

**Abstract:** This paper studies the problem of neutrosophic portfolios of financial assets as part of the modern portfolio theory. Neutrosophic portfolios comprise those categories of portfolios made up of financial assets for which the neutrosophic return, risk and covariance can be determined and which provide concomitant information regarding the probability of achieving the neutrosophic return, both at each financial asset and portfolio level and also information on the probability of manifestation of the neutrosophic risk. Neutrosophic portfolios are characterized by two fundamental performance indicators, namely: the neutrosophic portfolio return and the neutrosophic portfolio risk. Neutrosophic portfolio return is dependent on the weight of the financial assets in the total value of the portfolio but also on the specific neutrosophic return of each financial asset category that enters into the portfolio structure. The neutrosophic portfolio risk is dependent on the weight of the financial assets that enter the portfolio structure but also on the individual risk of each financial asset. Within this scientific paper was studied the minimum neutrosophic risk at the portfolio level, respectively, to establish what should be the weight that the financial assets must hold in the total value of the portfolio so that the risk is minimum. These financial assets weights, after calculations, were found to be dependent on the individual risk of each financial asset but also on the covariance between two financial assets that enter into the portfolio structure. The problem of the minimum risk that characterizes the neutrosophic portfolios is of interest for the financial market investors. Thus, the neutrosophic portfolios provide complete information about the probabilities of achieving the neutrosophic portfolio return but also of risk manifestation probability. 
In this context, the innovative character of the paper is determined by the use of the neutrosophic triangular fuzzy numbers and by the specific concepts of financial assets, in order to substantiate the decisions on the financial markets.

**Keywords:** financial assets; neutrosophic portfolio; neutrosophic portfolio return; neutrosophic portfolio risk; neutrosophic covariance

#### **1. Introduction**

The portfolios of financial assets have been the subject of numerous studies in the specialized literature, the main concern of the specialists being to identify a solution for portfolio risk management, since it is well known that the capital market can generate huge losses if no protection is found against the losses generated by the manifestation of financial risk. In a first stage, to solve this sensitive problem, solutions were identified at the level of each financial asset, by determining a set of three performance indicators that characterize financial assets, namely: the financial return, the financial risk and the covariance.

In order to quantify a financial asset's return, the profit realized by investors is taken into consideration, both from the price fluctuations of the financial assets and from the dividends obtained. The methods of calculating the financial return differ, ranging from returns computed on time series recorded in previous time periods to returns estimated for future time periods. Subsequently, the foundations of another performance indicator of financial assets, known as the financial risk, were laid; it is determined with the help of statistical indicators, namely the variance (the squared deviation from the mean) and the standard deviation, which measure the deviation of the return of a financial asset from its average value. The greater this deviation, the greater the risk associated with the financial asset.

Covariance, the third indicator used for evaluating the performance of financial assets, measures the intensity of the link between the returns of two financial assets; together with it, the correlation coefficient was introduced, which, depending on the recorded value, provides information on the evolution of the returns of the financial assets. A positive correlation coefficient indicates that the returns of the two financial assets increase or decrease together. A negative value of the correlation coefficient indicates that the returns of the two financial assets evolve in opposite directions; respectively, while the return of one financial asset increases, the return of the other financial asset may decrease and vice versa.

Financial performance indicators have been a step forward in evaluating the performance of financial assets, but not enough. To these was added the modern portfolio theory, which lays the foundation of the correlation between profitability and risk at the level of the financial asset portfolio. The mathematical model correlating return and risk is known as the Markowitz efficient frontier, which essentially shows that the risk of a portfolio of financial assets increases in proportion to the value of the portfolio's return or, on the contrary, decreases in proportion with it, the two variables being in a direct proportionality relation. Moreover, Markowitz's frontier theory demonstrates that the risk management of a financial asset portfolio is much more efficient if capital market investments are made in a diversified portfolio of financial assets [1].

Despite all the progress made by introducing the relationship between return and risk and by diversifying risk through investments in diversified portfolios, not enough information is provided to investors in the capital market regarding the probability of achieving the return on financial assets or of the financial risk materializing. This category of information is necessary to properly substantiate the investment decision on the capital market.

To solve this problem, caused by the lack of information regarding the probabilities of achieving the financial performance indicators, fuzzy neutrosophic numbers were introduced to model the performance indicators of the financial assets. The use of neutrosophic fuzzy numbers in modelling the performance indicators of financial assets brings several advantages over the existing theory.


In order to complete the modelling of financial performance indicators with the help of fuzzy neutrosophic numbers, the present paper establishes two fundamental elements in the portfolio theory literature: it introduces a new category of portfolios, namely the neutrosophic portfolios of financial assets, and it grounds the algorithm for minimizing the risk of the neutrosophic portfolio of financial assets. Regarding the neutrosophic portfolios of financial assets, they will provide information on the risk and return of the portfolio, together with the probabilities of their achievement, with the mention that the probability of achievement is influenced by the risk and return of each financial asset that enters into the portfolio structure.

The risk minimization algorithm of the neutrosophic portfolio of financial assets provides solutions to the investor seeking to minimize the risk; namely, it sets the value of the investments that the investor should make in each of the financial assets that enter the portfolio structure, so that the risk is minimal.

The paper is organized as follows: Section 2 deals with the state of the art in the area of neutrosophic theory, while stating the main characteristics and assumptions related to the structure of the financial assets. Section 3 presents the neutrosophic portfolios concept, by highlighting some of the specific notions, structure and formation related to the neutrosophic portfolios theory. Two numerical examples are provided within Section 3 in order to better explain the introduced concepts. Section 4 deals with the neutrosophic portfolio equations; both the analytical and the matrix forms are discussed within this section. Sections 5 and 6 deal with minimizing the risk of a neutrosophic portfolio consisting of two or more financial assets, while Section 7 presents the limitations of the study and draws the main concluding remarks.

#### **2. State of the Art**

#### *2.1. The Classical Theory of Financial Asset Portfolios. A New Approach*

The structure of a portfolio is based on one or more financial assets $(A_1, A_2, A_3, \ldots, A_n)$ or $A_i$, $i = \overline{1,n}$. Each of the financial assets that enter into the portfolio structure is characterized by an average financial return $\overline{R_{A_i}}$, by a financial risk of the form $\sigma_{A_i}$ and by the covariance $cov(A_i, A_j)$ between the asset $A_i$, $i = \overline{1,n}$ and the asset $A_j$, $j = \overline{1,m}$. The covariance measures the intensity of the link between the returns of the two assets. Thus, for the modern portfolio theory, the financial asset $A_i$ will have the characteristic performance indicators $\big\{\overline{R_{A_i}}, \sigma_{A_i}, x_{A_i}\big\}$, respectively the average financial return, the financial risk and the weight $x_{A_i}$ of the financial asset value $V_{A_i}$ in the total value of the portfolio $\sum_{i=1}^{n} V_{A_i}$:

$$x_{A_i} = \frac{V_{A_i}}{\sum_{i=1}^{n} V_{A_i}} \times 100\,[\%]$$

The calculations regarding the performance indicators of financial assets are already known in the literature. The average return of a financial asset is determined either in the form of historical returns, using the arithmetic mean $\overline{R_{A_i}} = \frac{1}{N}\sum_{i=1}^{N} R_{A_i}$ or the geometric mean $\overline{R_{A_i}} = \left(\prod_{i=1}^{n}\left(1 + R_{A_i}\right)\right)^{\frac{1}{n}} - 1$, or in the form of expected returns, using the probabilities $p_i$ assigned by investors to each evolution scenario $S$ of the financial asset expected return: $\overline{R_{A_i}} = \sum_{i=1}^{S} p_i \times R_{A_i}$. If the variable represented by the return of the financial asset is continuous, a normal distribution can be used, with $\overline{R_{A_i}} = 0$, $\sigma_{A_i} = 1$ and the distribution density function $f\big(R_{A_i}, \overline{R_{A_i}}, \sigma_{A_i}\big) = \frac{1}{\sqrt{2\pi}\,\sigma_{A_i}}\, e^{-\frac{1}{2}\left(\frac{R_{A_i} - \overline{R_{A_i}}}{\sigma_{A_i}}\right)^2}$, for which the probability of occurrence of the expected return of the financial asset $A_i$ will be $P\big({-1.96} \le R_{A_i} \le {+1.96}\big) = 95\%$ or $P\big({-1.645} \le R_{A_i} \le {+1.645}\big) = 90\%$. If the variable $R_{A_i}$ has $\overline{R_{A_i}} \neq 0$ and $\sigma_{A_i} \neq 1$, then it can be transformed into a standardized variable of the form $R_{A_i}(z) = \frac{R_{A_i} - \overline{R_{A_i}}}{\sigma_{A_i}}$, for which $P\big(\overline{R_{A_i}} - 1.96\sigma_{A_i} \le R_{A_i} \le \overline{R_{A_i}} + 1.96\sigma_{A_i}\big) = 95\%$ and $P\big(\overline{R_{A_i}} - 1.645\sigma_{A_i} \le R_{A_i} \le \overline{R_{A_i}} + 1.645\sigma_{A_i}\big) = 90\%$.
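The three return formulas above can be sketched numerically; the return series, scenario probabilities and scenario returns below are illustrative values, not data from the paper:

```python
import numpy as np

# Illustrative historical returns of a hypothetical asset A_i.
R = np.array([0.04, -0.01, 0.03, 0.02, 0.05])

# Arithmetic mean of the historical returns: (1/N) * sum(R_i).
r_arith = R.mean()

# Geometric mean: (prod(1 + R_i))^(1/n) - 1.
r_geom = np.prod(1.0 + R) ** (1.0 / len(R)) - 1.0

# Expected return over S scenarios with investor-assigned probabilities p_i.
p = np.array([0.25, 0.50, 0.25])                 # probabilities sum to 1
scenario_returns = np.array([-0.02, 0.03, 0.08])
r_exp = p @ scenario_returns
```

The geometric mean is slightly below the arithmetic mean whenever the returns vary, which is the usual relationship between the two.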

The financial risk, as a financial asset performance indicator studied in the specialized literature, is determined with the help of the squared deviations from the mean, using the calculation formula $\sigma_{A_i}^2 = \frac{1}{N-1}\sum_{i=1}^{N}\big(R_{A_i} - \overline{R_{A_i}}\big)^2$, as well as with the statistical indicator known as the standard deviation, using the calculation formula $\sigma_{A_i} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\big(R_{A_i} - \overline{R_{A_i}}\big)^2}$. Regardless of how the financial risk is calculated, it measures the deviation of the financial asset return $R_{A_i}$ from its average value $\overline{R_{A_i}}$. The greater the deviation, the greater the financial asset risk; conversely, the smaller the deviation, the smaller the financial risk, the magnitude of the deviation and the size of the financial risk being in a directly proportional relationship.
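A minimal sketch of the two risk formulas, using an illustrative return series (the $1/(N-1)$ sample correction matches the formulas above):

```python
import numpy as np

R = np.array([0.04, -0.01, 0.03, 0.02, 0.05])   # illustrative return series

r_bar = R.mean()
# Sample variance: squared deviations from the mean with the 1/(N-1) correction.
var = np.sum((R - r_bar) ** 2) / (len(R) - 1)
# Standard deviation: the square root of the variance.
sigma = np.sqrt(var)
```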

The third statistical indicator used in the portfolio theory is the covariance between two assets $A_i$ and $A_j$, established using the calculation formula $cov(A_i, A_j) = \frac{1}{N-1}\sum_{i,j=1}^{n}\big(R_{A_i} - \overline{R_{A_i}}\big)\big(R_{A_j} - \overline{R_{A_j}}\big)$, which, as mentioned above, measures the intensity of the connection, respectively the dependency or how the returns of the two assets influence each other. The correlation coefficient was introduced in the portfolio literature as $\rho_{i,j} = \frac{\sigma_{A_iA_j}}{\sigma_{A_i}\sigma_{A_j}}$, with values $\rho_{i,j} \in [-1, +1]$. If $\rho_{i,j} = -1$, the returns of the two financial assets evolve in opposite directions, respectively when one increases the other decreases and vice versa. If $\rho_{i,j} = 0$, the returns of the two financial assets do not influence each other. If $\rho_{i,j} = +1$, the returns of the two financial assets increase or, as the case may be, decrease simultaneously.
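The covariance and correlation coefficient can be computed as follows; the two return series are invented for illustration:

```python
import numpy as np

R1 = np.array([0.04, -0.01, 0.03, 0.02, 0.05])  # returns of asset A_i
R2 = np.array([0.03,  0.00, 0.02, 0.01, 0.04])  # returns of asset A_j

# Sample covariance with the 1/(N-1) correction.
cov_ij = np.sum((R1 - R1.mean()) * (R2 - R2.mean())) / (len(R1) - 1)

# Correlation coefficient rho_ij = sigma_ij / (sigma_i * sigma_j), in [-1, +1].
rho = cov_ij / (R1.std(ddof=1) * R2.std(ddof=1))
```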

As mentioned previously, the performance indicators presented above, respectively the average return of the financial asset $\overline{R_{A_i}}$, the financial risk $\sigma_{A_i}$ and the covariance between two financial assets $cov(A_i, A_j)$, are specific to the financial assets which are part of a portfolio structure.

The modern theory of the financial asset portfolio has devoted notions specific to the portfolio, such as the portfolio return $R_p$ and the portfolio risk $\sigma_P^2$, in order to mathematically quantify the relationship between return and risk. The portfolio return $R_p$ of a portfolio containing $N$ financial assets is quantified as the sum of the products between the weight $x_{A_i}$ of each asset $A_i$ in the total value of the portfolio and the average return specific to each asset $\overline{R_{A_i}}$, of the form:

$$R\_{\mathcal{P}} = \mathbf{x}\_{A\_1}\mathbf{R}\_{A\_1} + \mathbf{x}\_{A\_2}\mathbf{R}\_{A\_2} + \dots + \mathbf{x}\_{A\_n}\mathbf{R}\_{A\_n} = \sum\_{i=1}^{N} \mathbf{x}\_{A\_i}\mathbf{R}\_{A\_i} \tag{1}$$

The above expression can be written in matrix form as follows:

$$R\_p = \begin{pmatrix} \mathbf{x}\_{A\_1}\mathbf{x}\_{A\_2}\dots\mathbf{x}\_{A\_n} \end{pmatrix} \begin{pmatrix} R\_{A\_1} \\ R\_{A\_2} \\ \dots \\ R\_{A\_n} \end{pmatrix} = \mathbf{x}\_A^T R\_A \tag{2}$$
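Equations (1) and (2) can be checked against each other numerically; the weights and mean returns below are illustrative:

```python
import numpy as np

x = np.array([0.5, 0.3, 0.2])        # weights x_Ai, summing to 1
r = np.array([0.04, 0.06, 0.02])     # average returns R_Ai

# Equation (1): sum over assets of weight * mean return.
R_p = np.sum(x * r)

# Equation (2): the same quantity as the matrix product x^T R_A.
R_p_matrix = x @ r
```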

The portfolio risk $\sigma_P^2$ of a portfolio made up of $N$ financial assets is determined by the squared deviations from the mean and is influenced by the weight $x_{A_i}$ held by each financial asset in the total portfolio, by the individual risk $\sigma_{A_i}^2$ of each asset entering the portfolio structure, and by the covariance $cov(A_i, A_j)$ between each two assets, according to an expression of the form:

$$\begin{aligned} \sigma_P^2 &= x_{A_1}^2\sigma_{A_1}^2 + x_{A_2}^2\sigma_{A_2}^2 + \dots + x_{A_n}^2\sigma_{A_n}^2 + 2x_{A_1}x_{A_2}\sigma_{A_1A_2} + 2x_{A_1}x_{A_3}\sigma_{A_1A_3} + \dots \\ &+ 2x_{A_1}x_{A_n}\sigma_{A_1A_n} + 2x_{A_2}x_{A_1}\sigma_{A_2A_1} + 2x_{A_2}x_{A_3}\sigma_{A_2A_3} + \dots + 2x_{A_2}x_{A_n}\sigma_{A_2A_n} \\ &+ \dots + 2x_{A_n}x_{A_1}\sigma_{A_nA_1} + 2x_{A_n}x_{A_2}\sigma_{A_nA_2} + \dots + 2x_{A_n}x_{A_{n-1}}\sigma_{A_nA_{n-1}} \end{aligned} \tag{3}$$

$$
\sigma\_P^2 = \sum\_{i=1}^n \mathbf{x}\_{A\_i}^2 \sigma\_{A\_i}^2 + 2 \sum\_{i=1}^n \sum\_{j=1}^n \mathbf{x}\_{A\_i} \mathbf{x}\_{A\_j} \sigma\_{A\_i A\_j} \tag{4}
$$

The portfolio risk in matrix form can be written as:

$$\sigma_P^2 = \begin{pmatrix} x_{A_1} & x_{A_2} & \dots & x_{A_n} \end{pmatrix} \begin{pmatrix} \sigma_{A_1A_1} & \sigma_{A_1A_2} & \dots & \sigma_{A_1A_n} \\ \sigma_{A_2A_1} & \sigma_{A_2A_2} & \dots & \sigma_{A_2A_n} \\ \dots & \dots & \dots & \dots \\ \sigma_{A_nA_1} & \sigma_{A_nA_2} & \dots & \sigma_{A_nA_n} \end{pmatrix} \begin{pmatrix} x_{A_1} \\ x_{A_2} \\ \dots \\ x_{A_n} \end{pmatrix} = x_A^T\,\Omega\,x_A \tag{5}$$
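The quadratic form of Equation (5) and the expanded sums of Equations (3) and (4) can be verified against each other; the covariance matrix below is an illustrative, symmetric example:

```python
import numpy as np

x = np.array([0.5, 0.3, 0.2])                 # portfolio weights
Omega = np.array([[0.0004, 0.0001, 0.0000],   # covariance matrix (sigma_AiAj)
                  [0.0001, 0.0009, 0.0002],
                  [0.0000, 0.0002, 0.0016]])

# Equation (5): sigma_P^2 = x^T * Omega * x.
sigma_p2 = x @ Omega @ x

# Expanding reproduces Equation (3): the weighted variances plus twice the
# weighted covariance of each distinct pair of assets.
expanded = np.sum(x**2 * np.diag(Omega))
for i in range(3):
    for j in range(i + 1, 3):
        expanded += 2 * x[i] * x[j] * Omega[i, j]
```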

In the specialized literature, starting with the modern portfolio theory, the relationship between the portfolio return and the portfolio risk was established and the concept of an optimal portfolio was stipulated. According to this theory, a portfolio of financial assets is considered optimal if the portfolio return $R_p = \rho$, in which $\rho$ has a fixed level, while the portfolio risk $\sigma_P^2 \to \min$. The equations of an optimal portfolio are of the form:

$$\begin{cases} R_p = \sum\limits_{i=1}^{N} x_{A_i}\overline{R_{A_i}} = \rho \\ \sigma_P^2 = \sum\limits_{i=1}^{n} x_{A_i}^2\sigma_{A_i}^2 + 2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} x_{A_i}x_{A_j}\sigma_{A_iA_j} \to \min \\ \sum\limits_{i=1}^{n} x_{A_i} = 1 \end{cases} \tag{6}$$
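A sketch of the minimization in Equation (6), restricted to the budget constraint only (the fixed-return constraint $R_p = \rho$ is dropped, so this finds the global minimum-variance portfolio); the closed-form Lagrange solution $x^* = \Omega^{-1}e / (e^T\Omega^{-1}e)$ used here is a standard textbook result, not a formula stated in this section:

```python
import numpy as np

# Illustrative covariance matrix and the vector of ones.
Omega = np.array([[0.0004, 0.0001, 0.0000],
                  [0.0001, 0.0009, 0.0002],
                  [0.0000, 0.0002, 0.0016]])
e = np.ones(3)

# Lagrange solution of min x^T Omega x subject to sum(x) = 1:
# x* = Omega^{-1} e / (e^T Omega^{-1} e).
inv_e = np.linalg.solve(Omega, e)
x_star = inv_e / (e @ inv_e)
```

By construction `x_star` sums to one and carries a portfolio variance no larger than any other fully invested weighting, e.g. equal weights.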

The mathematical model quantifying the relationship between the portfolio return $R_p$ and its risk $\sigma_P^2$ is known as Markowitz's frontier and has the following form:

$$
\sigma_P^2 = \frac{1}{D}\left(A R_P^2 - 2 B R_P + C\right) \tag{7}
$$

where the coefficients $A$, $B$, $C$, $D$ have the following calculation formulas (results from the literature): $A = e^T\Omega^{-1}e$; $B = e^T\Omega^{-1}R_A = R_A^T\Omega^{-1}e$; $C = R_A^T\Omega^{-1}R_A$; and $D = AC - B^2$. From the Markowitz frontier relation it emerges that there is a direct proportionality between risk and return: the higher the portfolio's return, the higher its risk. All the investment portfolios located on the Markowitz frontier (the upper branch of the hyperbola) are considered efficient portfolios. Any portfolio located, for example, below the Markowitz frontier has an equivalent portfolio located on the frontier with the same risk and a higher return.
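The frontier coefficients and Equation (7) can be sketched as follows; the inputs are illustrative and `frontier_risk` is a helper name introduced here. A known property of the frontier, used as a sanity check, is that its vertex at $R_p = B/A$ gives the minimum variance $1/A$:

```python
import numpy as np

Omega = np.array([[0.0004, 0.0001, 0.0000],   # illustrative covariance matrix
                  [0.0001, 0.0009, 0.0002],
                  [0.0000, 0.0002, 0.0016]])
R_A = np.array([0.04, 0.06, 0.02])            # illustrative mean returns
e = np.ones(3)

Omega_inv = np.linalg.inv(Omega)
A = e @ Omega_inv @ e
B = e @ Omega_inv @ R_A
C = R_A @ Omega_inv @ R_A
D = A * C - B ** 2

def frontier_risk(R_p: float) -> float:
    """Equation (7): sigma_P^2 = (A*R_p^2 - 2*B*R_p + C) / D."""
    return (A * R_p ** 2 - 2 * B * R_p + C) / D
```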

Regardless of the popularity of the portfolio theory and of how advanced the research in the field of capital markets is, any portfolio, regardless of the number of financial assets, has a certain degree of certainty/uncertainty $Gr(R_P)$ of realizing the portfolio return and of producing the risk. This degree attached to the portfolio return and risk is divided into three categories:

The first category: a certain degree for the portfolio return and risk, $\mu(Gr(\sigma_P, R_P))$, corresponding to the situation where the portfolio return and risk have an achievement degree, estimated using professional judgment, of around 50%. Each portfolio has its own specific degree of achievement for its return and risk.

The second category: a very poor or almost null degree for the portfolio return and risk, $\vartheta(Gr(\sigma_P, R_P))$, corresponding to the situation where the return and risk of a portfolio have an estimated achievement degree of 10–20%. The causes that can lead to such situations are numerous: the assumption of a certain level of return and risk by investors, the poor payment capacity associated with the financial assets or the negative influence of national macroeconomic factors.

The third category: an uncertain degree for the portfolio return and risk, noted $\lambda(Gr(\sigma_P, R_P))$, representing the situation where the degree of return and risk is quite uncertain, estimated based on professional reasoning at 20–30%.

The introduction of these measuring degrees for the financial asset portfolios return and risk allows the creation of neutrosophic portfolios, modelled using triangular fuzzy numbers. These portfolios meet the real needs of investors on the financial market. Thus, if a portfolio will have a high degree of return and risk, the investors will have a degree of certainty that they will obtain the expected returns from the financial market. It is worth mentioning that each financial asset that constitutes the portfolio has in turn a certain return and a specific risk which will determine a certain influence on the portfolio return and risk.

The introduction of these ways of measuring the degree of portfolio return and the degree of producing the portfolio risk creates the basis for the formation of the neutrosophic portfolios of financial assets, as mentioned, modelled using the neutrosophic triangular fuzzy numbers. Neutrosophic portfolios have as performance indicators the neutrosophic portfolio return but also the neutrosophic portfolio risk.

#### *2.2. Literature Review*

Regarding the studies in the area of neutrosophic theory, it can be underlined that the neutrosophic theory and its derivatives have been extensively applied over the last two decades in various economic and social fields such as decision making [2–13], supply chain management [14], best product selection [15], management [16], forecasting [17], sentiment analysis [18,19] and so forth.

As for the portfolio theory, there are only a few studies that have tried to use the advantages of the neutrosophic theory. Islam and Ray [20] propose a multi-objective portfolio selection solved through a neutrosophic optimization technique. The authors introduce a new entropy-based objective function and generalize the portfolio selection problem with diversification (GPSPD), stating that, as the proposed method is general, it can easily be applied to other areas in the engineering sciences or operations research. Pamucar et al. [21] propose a multicriteria decision-making model in which the authors consider linguistic neutrosophic numbers for the purpose of eliminating the subjectivity that derives from the qualitative assessments and assumptions made by decision-makers in complex situations.

The problem of project selection has been addressed through the use of the neutrosophic set theory by Abdel-Basset et al. [22]. The authors state the importance of a proper identification of the criteria based on which the project selection is to be done and propose a model based on TOPSIS and DEMATEL for the selection of the best project alternative. Villegas Alava et al. [23] used single-valued neutrosophic numbers for project selection. In the paper, the authors present a case study on information technology project selection to prove the applicability of the proposed approach.

In the area of project management, Saleh Al-Subhi et al. [24] use neutrosophic sets and propose a new decision-making model based on neutrosophic cognitive maps, comparing the proposed approach with a traditional model in order to prove its efficiency and efficacy. Perez Pupo et al. [25] use the neutrosophic theory for project management decisions, while Su et al. [26] develop a project procurement method selection model under an interval neutrosophic environment. The results gathered in these papers are compared with existing methods and are encouraging.

The project risk assessment in the area of construction engineering is addressed in Reference [27] through the use of 2-tuple linguistic neutrosophic Hamy mean operators. The authors provide both the theoretical background and a worked example to better explain the proposed approach.

Regarding the problem identified within this paper, the modern portfolio theory currently quantifies, with the help of the Markowitz model, the relationship between return and risk. The main disadvantage of this financial asset portfolio theory is that it does not provide sufficient information to investors regarding the probability of realizing the return and of producing the portfolio risk. In addition, the risk and return of the portfolio are influenced by the risk and return of each financial asset that makes up the portfolio. Under these conditions, the substantiation of the investment decision on the financial market is not based on complete information that would also include the probability of achieving the portfolio return and risk, information which could lead to a decrease in the risk assumed by investors.

The solution proposed in the present research paper is to use the neutrosophic triangular fuzzy numbers, which incorporate the aforementioned categories of information regarding the degree of achieving the return and of producing the portfolio risk. At the same time, the neutrosophic triangular fuzzy numbers allow the stratification of the values recorded by each financial asset for its specific return and risk. The information resulting from neutrosophic fuzzy modelling has a much more detailed character and allows the financial market investors to base their financial decisions more rigorously. In addition, as a way of solving the problem, the concepts of neutrosophic return, neutrosophic risk and neutrosophic covariance specific to financial assets are proposed.

The innovative character of the paper is determined by the use of the neutrosophic triangular fuzzy numbers but also by the specific concepts of financial assets, namely: neutrosophic return, neutrosophic risk and/or neutrosophic covariance. The information for substantiating the decisions on the financial market is based on neutrosophic fuzzy modelling as a way to improve the decision-making process on the market.

#### **3. The Neutrosophic Portfolios Concept. Specific Notions, Structure and Formation**

The theory of neutrosophic fuzzy numbers of the form $\widetilde{A} = \{\langle a, \mu_a, \vartheta_a, \lambda_a \rangle / a \in A\}$ has the characteristic that, besides the specific membership functions related to the fuzzy numbers, $\mu_a: \widetilde{A} \to [0,1]$, $\vartheta_a: \widetilde{A} \to [0,1]$ and $\lambda_a: \widetilde{A} \to [0,1]$, it also contains the achievement degrees of the fuzzy numbers, of the form $\big(w_{\widetilde{A}}, u_{\widetilde{A}}, y_{\widetilde{A}}\big)$, with the following meanings: $w_{\widetilde{A}}$, the certainty degree for the achievement of the fuzzy number; $u_{\widetilde{A}}$, the indeterminacy degree for the achievement of the fuzzy number; and $y_{\widetilde{A}}$, the falsity degree for the achievement of the fuzzy number. The membership functions for the neutrosophic fuzzy numbers of the form $\widetilde{A} = \{\langle a, \mu_a, \vartheta_a, \lambda_a \rangle / a \in A\}$ are determined according to their achievement degrees [28]:

The membership function for the neutrosophic numbers with truth value, the truth membership $\mu_{\widetilde{A}}(x)$, is of the form [28]:

$$\mu_{\widetilde{A}}(x) = \begin{cases} \dfrac{w_{\widetilde{A}}\big(\widetilde{A}_x - \widetilde{A}_{a1}\big)}{\widetilde{A}_{b1} - \widetilde{A}_{a1}} & \text{for } \widetilde{A}_{a1} \le \widetilde{A}_x \le \widetilde{A}_{b1} \\ w_{\widetilde{A}} & \text{for } \widetilde{A}_x = \widetilde{A}_{b1} \\ \dfrac{w_{\widetilde{A}}\big(\widetilde{A}_{c1} - \widetilde{A}_x\big)}{\widetilde{A}_{c1} - \widetilde{A}_{b1}} & \text{for } \widetilde{A}_{b1} \le \widetilde{A}_x \le \widetilde{A}_{c1} \\ 0 & \text{for any other value outside } \big[\widetilde{A}_{a1}; \widetilde{A}_{c1}\big] \end{cases} \tag{8}$$

The membership function for the neutrosophic numbers with an uncertain achievement degree, the indeterminacy membership $\vartheta_{\widetilde{A}}(x)$, is of the form [28]:

$$\vartheta_{\widetilde{A}}(x) = \begin{cases} \dfrac{u_{\widetilde{A}}\big(\widetilde{A}_x - \widetilde{A}_{a1}\big) + \widetilde{A}_{b1} - \widetilde{A}_x}{\widetilde{A}_{b1} - \widetilde{A}_{a1}} & \text{for } \widetilde{A}_{a1} \le \widetilde{A}_x \le \widetilde{A}_{b1} \\ u_{\widetilde{A}} & \text{for } \widetilde{A}_x = \widetilde{A}_{b1} \\ \dfrac{u_{\widetilde{A}}\big(\widetilde{A}_{c1} - \widetilde{A}_x\big) + \widetilde{A}_x - \widetilde{A}_{b1}}{\widetilde{A}_{c1} - \widetilde{A}_{b1}} & \text{for } \widetilde{A}_{b1} \le \widetilde{A}_x \le \widetilde{A}_{c1} \\ 0 & \text{for any other value outside } \big[\widetilde{A}_{a1}; \widetilde{A}_{c1}\big] \end{cases} \tag{9}$$

The membership function for the neutrosophic numbers with a false achievement degree, the falsity membership $\lambda_{\widetilde{A}}(x)$, is of the form [28]:

$$\lambda_{\widetilde{A}}(x) = \begin{cases} \dfrac{y_{\widetilde{A}}\big(\widetilde{A}_x - \widetilde{A}_{a1}\big) + \widetilde{A}_{b1} - \widetilde{A}_x}{\widetilde{A}_{b1} - \widetilde{A}_{a1}} & \text{for } \widetilde{A}_{a1} \le \widetilde{A}_x \le \widetilde{A}_{b1} \\ y_{\widetilde{A}} & \text{for } \widetilde{A}_x = \widetilde{A}_{b1} \\ \dfrac{y_{\widetilde{A}}\big(\widetilde{A}_{c1} - \widetilde{A}_x\big) + \widetilde{A}_x - \widetilde{A}_{b1}}{\widetilde{A}_{c1} - \widetilde{A}_{b1}} & \text{for } \widetilde{A}_{b1} \le \widetilde{A}_x \le \widetilde{A}_{c1} \\ 0 & \text{for any other value outside } \big[\widetilde{A}_{a1}; \widetilde{A}_{c1}\big] \end{cases} \tag{10}$$
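The piecewise definitions of the truth and indeterminacy memberships, Equations (8) and (9), translate directly into code; the function names are ours and the triangular parameters used in the comments are illustrative:

```python
def truth_membership(x, a1, b1, c1, w):
    """Equation (8): rises linearly from 0 at a1 to the peak w at b1,
    then falls back to 0 at c1; zero outside [a1, c1]."""
    if a1 <= x < b1:
        return w * (x - a1) / (b1 - a1)
    if x == b1:
        return w
    if b1 < x <= c1:
        return w * (c1 - x) / (c1 - b1)
    return 0.0


def indeterminacy_membership(x, a1, b1, c1, u):
    """Equation (9): equals 1 at the endpoints a1 and c1, dips to u at the
    peak b1; zero outside [a1, c1]."""
    if a1 <= x < b1:
        return (u * (x - a1) + (b1 - x)) / (b1 - a1)
    if x == b1:
        return u
    if b1 < x <= c1:
        return (u * (c1 - x) + (x - b1)) / (c1 - b1)
    return 0.0
```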

The neutrosophic fuzzy number theory helps to obtain complete information about fuzzy numbers by taking into account the achievement degrees, namely the truth degree, the uncertainty (indeterminacy) degree and the falsity degree, which are extremely useful in substantiating decisions on the capital market.

Neutrosophic portfolios can consist of two or more financial assets. Let $N$ be the number of financial assets, denoted by $(A_1, A_2, A_3, \ldots, A_n)$ or $A_i$, $i = \overline{1,n}$. Their characteristic is that the performance indicators of each financial asset $A_i$ that enters into a neutrosophic portfolio structure are:

The neutrosophic return $\widetilde{R}_{A_i} = \big\langle (\widetilde{R}_{Aai}, \widetilde{R}_{Abi}, \widetilde{R}_{Aci}); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \big\rangle$; the neutrosophic risk $\widetilde{\sigma}_{A_i} = \big\langle (\widetilde{\sigma}_{Aai}, \widetilde{\sigma}_{Abi}, \widetilde{\sigma}_{Aci}); w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A} \big\rangle$; and the neutrosophic covariance $cov\big(\widetilde{R}_{A1}, \widetilde{R}_{A2}\big)$. The neutrosophic triangular fuzzy numbers that underlie the financial asset performance indicators, of the form $\widetilde{A} = \{\langle a, \mu_a, \vartheta_a, \lambda_a \rangle / a \in A\}$, were defined in Boloș et al. [28] and are characterized by the membership functions $\mu_a: A \to [0,1]$, $\vartheta_a: A \to [0,1]$ and $\lambda_a: A \to [0,1]$ and by the achievement degrees of the performance indicators, of the form $\big(w_{\widetilde{A}}, u_{\widetilde{A}}, y_{\widetilde{A}}\big)$, with the following meanings: $w_{\widetilde{A}}$, the certain achievement degree for the performance indicators; $u_{\widetilde{A}}$, the indeterminate achievement degree; and $y_{\widetilde{A}}$, the falsity achievement degree.

**Definition 1.** *The neutrosophic average return* $\big\langle E_f(\widetilde{R}_{A_i}); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \big\rangle$ *of the neutrosophic triangular fuzzy number* $\widetilde{R}_{A_i} = \big\langle (\widetilde{R}_{Aai}, \widetilde{R}_{Abi}, \widetilde{R}_{Aci}); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \big\rangle$, *specific to the financial asset* $A_i$ *and component part of the neutrosophic portfolio* $\big\langle \widetilde{P}; w_{\widetilde{P}}, u_{\widetilde{P}}, y_{\widetilde{P}} \big\rangle$, *is any value of the financial asset return appreciated according to the achievement degree, using the coefficients* $w_{\widetilde{R}_A} \in [0,1]$ *for the certain achievement degree,* $u_{\widetilde{R}_A} \in [0,1]$ *for the indeterminate achievement degree and* $y_{\widetilde{R}_A} \in [0,1]$ *for the falsity achievement degree; it is determined by the calculation formula:*

$$\Big\langle E_f\big(\widetilde{R}_A\big); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \Big\rangle = \Big\langle \frac{1}{6}\big(\widetilde{R}_{Aa1} + \widetilde{R}_{Ac1}\big) + \frac{2}{3}\widetilde{R}_{Ab1}; w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \Big\rangle \tag{11}$$

*Note 1: The formula for the neutrosophic average return was demonstrated in Boloș et al. [28].*
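A minimal sketch of Equation (11); the function name and the triangle $(0.01, 0.03, 0.05)$ used below are illustrative, and the achievement degrees are simply carried through unchanged, as in the formula:

```python
def neutrosophic_average_return(Ra, Rb, Rc, w, u, y):
    """Equation (11): E_f = (Ra + Rc) / 6 + 2 * Rb / 3, with the achievement
    degrees (w, u, y) attached unchanged to the result."""
    ef = (Ra + Rc) / 6.0 + 2.0 * Rb / 3.0
    return ef, (w, u, y)
```

For a symmetric triangle such as $(0.01, 0.03, 0.05)$ the formula returns the central value, here $0.03$.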

**Definition 2.** *The neutrosophic risk* $\big\langle \sigma_{A_i}^2; w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A} \big\rangle$ *of the neutrosophic triangular fuzzy number* $\widetilde{\sigma}_{A_i} = \big\langle (\widetilde{\sigma}_{Aai}, \widetilde{\sigma}_{Abi}, \widetilde{\sigma}_{Aci}); w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A} \big\rangle$, *determined for the financial asset* $A_i$ *and component part of the neutrosophic portfolio* $\big\langle \widetilde{P}; w_{\widetilde{P}}, u_{\widetilde{P}}, y_{\widetilde{P}} \big\rangle$, *is any value of the financial asset risk appreciated according to the achievement degree, using the coefficients* $w_{\widetilde{\sigma}_A} \in [0,1]$ *for the certain achievement degree,* $u_{\widetilde{\sigma}_A} \in [0,1]$ *for the indeterminate achievement degree and* $y_{\widetilde{\sigma}_A} \in [0,1]$ *for the falsity achievement degree; it is determined by the calculation formula:*

$$\begin{aligned} \big\langle \sigma_{A_i}^2; w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A} \big\rangle &= \Big\langle \frac{1}{4}\Big[\big(\widetilde{R}_{Ab1} - \widetilde{R}_{Aa1}\big)^2 + \big(\widetilde{R}_{Ac1} - \widetilde{R}_{Ab1}\big)^2\Big]; w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \Big\rangle \\ &+ \Big\langle \frac{1}{3}\Big[\widetilde{R}_{Aa1}\big(\widetilde{R}_{Ab1} - \widetilde{R}_{Aa1}\big) - \widetilde{R}_{Ac1}\big(\widetilde{R}_{Ac1} - \widetilde{R}_{Ab1}\big)\Big]; w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \Big\rangle \\ &+ \Big\langle \frac{1}{2}\big(\widetilde{R}_{Aa1}^2 + \widetilde{R}_{Ac1}^2\big); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \Big\rangle - \Big\langle \frac{1}{2}E_f^2\big(\widetilde{R}_A\big); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A} \Big\rangle \end{aligned} \tag{12}$$

*Note 2: The formula for the neutrosophic risk was demonstrated in Boloș et al. [28].*

**Definition 3.** *The neutrosophic covariance* $\big\langle cov\big(\widetilde{R}_{A1}, \widetilde{R}_{A2}\big); w_{\widetilde{R}_{A1}}, u_{\widetilde{R}_{A1}}, y_{\widetilde{R}_{A1}}; w_{\widetilde{R}_{A2}}, u_{\widetilde{R}_{A2}}, y_{\widetilde{R}_{A2}} \big\rangle$ *of two neutrosophic triangular fuzzy numbers* $\widetilde{R}_{A1} = \big\langle (\widetilde{R}_{Aa1}, \widetilde{R}_{Ab1}, \widetilde{R}_{Ac1}); w_{\widetilde{R}_{A1}}, u_{\widetilde{R}_{A1}}, y_{\widetilde{R}_{A1}} \big\rangle$ *and, respectively,* $\widetilde{R}_{A2} = \big\langle (\widetilde{R}_{Aa2}, \widetilde{R}_{Ab2}, \widetilde{R}_{Ac2}); w_{\widetilde{R}_{A2}}, u_{\widetilde{R}_{A2}}, y_{\widetilde{R}_{A2}} \big\rangle$, *characterizing two financial assets* $(A_1, A_2)$ *that are component parts of the neutrosophic portfolio* $\big\langle \widetilde{P}; w_{\widetilde{P}}, u_{\widetilde{P}}, y_{\widetilde{P}} \big\rangle$, *is any value of the covariance of the financial assets appreciated according to the achievement degree, using the coefficients* $w_{\widetilde{R}_{A1}}, w_{\widetilde{R}_{A2}} \in [0,1]$ *for the certain achievement degree,* $u_{\widetilde{R}_{A1}}, u_{\widetilde{R}_{A2}} \in [0,1]$ *for the indeterminate achievement degree and* $y_{\widetilde{R}_{A1}}, y_{\widetilde{R}_{A2}} \in [0,1]$ *for the falsity achievement degree; it is determined by the calculation formula:*

$$\begin{split}
\langle cov(\widetilde{R}_{A1}, \widetilde{R}_{A2}); w_{\widetilde{R}_{A1}}, u_{\widetilde{R}_{A1}}, y_{\widetilde{R}_{A1}}; w_{\widetilde{R}_{A2}}, u_{\widetilde{R}_{A2}}, y_{\widetilde{R}_{A2}}\rangle
&= \Big\langle \tfrac{1}{4}\big[(\widetilde{R}_{Ab1} - \widetilde{R}_{Aa1})(\widetilde{R}_{Ab2} - \widetilde{R}_{Aa2}) + (\widetilde{R}_{Ac1} - \widetilde{R}_{Ab1})(\widetilde{R}_{Ac2} - \widetilde{R}_{Ab2})\big] \\
&\quad + \tfrac{1}{3}\big[\widetilde{R}_{Aa2}(\widetilde{R}_{Ab1} - \widetilde{R}_{Aa1}) + \widetilde{R}_{Aa1}(\widetilde{R}_{Ab2} - \widetilde{R}_{Aa2})\big] \\
&\quad - \big[\widetilde{R}_{Ac1}(\widetilde{R}_{Ac2} - \widetilde{R}_{Ab2}) + \widetilde{R}_{Ac2}(\widetilde{R}_{Ac1} - \widetilde{R}_{Ab1})\big] \\
&\quad + \tfrac{1}{2}\big(\widetilde{R}_{Aa1}\widetilde{R}_{Aa2} + \widetilde{R}_{Ac1}\widetilde{R}_{Ac2}\big)
+ \tfrac{1}{2}E_f(\widetilde{R}_{A1})E_f(\widetilde{R}_{A2}); \\
&\qquad w_{\widetilde{R}_{A1}} \wedge w_{\widetilde{R}_{A2}}, u_{\widetilde{R}_{A1}} \vee u_{\widetilde{R}_{A2}}, y_{\widetilde{R}_{A1}} \vee y_{\widetilde{R}_{A2}}\Big\rangle
\end{split} \tag{13}$$

*Note 3: The formula for the neutrosophic covariance was proved in Boloș et al. [28]; we will not return to these proofs.*

**Definition 4.** *Any portfolio P is called a neutrosophic portfolio of financial assets, denoted $\langle \widetilde{P}; w_{\widetilde{P}}, u_{\widetilde{P}}, y_{\widetilde{P}}\rangle$, if it cumulatively satisfies two conditions:*


**Proposition 1.** *The neutrosophic portfolio return $\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle$, modeled using neutrosophic triangular fuzzy numbers of the form $\widetilde{R}_{Ai} = \langle(\widetilde{R}_{Aai}, \widetilde{R}_{Abi}, \widetilde{R}_{Aci}); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A}\rangle$, is a fundamental variable that characterizes the neutrosophic portfolio and is determined by the formula:*

$$
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \sum_{i=1}^{n} \Big\langle x_{A_i}\Big(\tfrac{1}{6}\big(\widetilde{R}_{Aai} + \widetilde{R}_{Aci}\big) + \tfrac{2}{3}\widetilde{R}_{Abi}\Big); w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\Big\rangle \tag{14}
$$

Demonstration: From the calculation relation of the return of a neutrosophic portfolio made up of $n$ financial assets, we know that:

$$
\begin{split}
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle
&= \langle x_{A_1}\widetilde{R}_{A_1}; w_{\widetilde{R}_{A_1}}, u_{\widetilde{R}_{A_1}}, y_{\widetilde{R}_{A_1}}\rangle
+ \langle x_{A_2}\widetilde{R}_{A_2}; w_{\widetilde{R}_{A_2}}, u_{\widetilde{R}_{A_2}}, y_{\widetilde{R}_{A_2}}\rangle + \cdots \\
&\quad + \langle x_{A_n}\widetilde{R}_{A_n}; w_{\widetilde{R}_{A_n}}, u_{\widetilde{R}_{A_n}}, y_{\widetilde{R}_{A_n}}\rangle
\end{split}
\tag{15}
$$

The above relationship can be written as follows:

$$
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \sum_{i=1}^{n} \langle x_{A_i}\widetilde{R}_{A_i}; w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\rangle \tag{16}
$$

From Definition 1 we know that the average neutrosophic return of a financial asset has the form:

$$\langle E_f(\widetilde{R}_{A_i}); w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\rangle = \Big\langle \tfrac{1}{6}\big(\widetilde{R}_{Aai} + \widetilde{R}_{Aci}\big) + \tfrac{2}{3}\widetilde{R}_{Abi}; w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\Big\rangle \tag{17}$$

Substituting the expression of the average neutrosophic return of a financial asset into the calculation formula of the neutrosophic portfolio return, we obtain:

$$
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \sum_{i=1}^{n} \Big\langle x_{A_i}\Big(\tfrac{1}{6}\big(\widetilde{R}_{Aai} + \widetilde{R}_{Aci}\big) + \tfrac{2}{3}\widetilde{R}_{Abi}\Big); w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\Big\rangle \tag{18}
$$

where $\widetilde{R}_{Aai}$, $\widetilde{R}_{Abi}$, $\widetilde{R}_{Aci}$ represent the financial asset return values, components of the neutrosophic triangular fuzzy number, determined according to the calculation relationships known in the specialized literature.

**Example 1.** *Consider three financial assets $(A_1, A_2, A_3)$ whose returns are specified by three triangular neutrosophic numbers of the form:*

$$\widetilde{R}_{A1} = \langle(0.2,\, 0.3,\, 0.5); 0.5, 0.2, 0.3\rangle \text{ for } \widetilde{R}_A \in [0.2,\, 0.5]$$

$$\widetilde{R}_{A2} = \langle(0.1,\, 0.2,\, 0.3); 0.6, 0.3, 0.2\rangle \text{ for } \widetilde{R}_A \in [0.1,\, 0.3] \tag{19}$$

$$\widetilde{R}_{A3} = \langle(0.3,\, 0.4,\, 0.6); 0.4, 0.3, 0.3\rangle \text{ for } \widetilde{R}_A \in [0.3,\, 0.6]$$

The weights held by the three financial assets in the total portfolio are determined according to the value of each financial asset and the total value of the portfolio, and have the values $x_{A_1} = 0.4$, $x_{A_2} = 0.3$ and $x_{A_3} = 0.3$. To establish the neutrosophic portfolio return, from Proposition 1 it is known that:

$$
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \sum_{i=1}^{n} \Big\langle x_{A_i}\Big(\tfrac{1}{6}\big(\widetilde{R}_{Aai} + \widetilde{R}_{Aci}\big) + \tfrac{2}{3}\widetilde{R}_{Abi}\Big); w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\Big\rangle \tag{20}
$$

Replacing in the above expression, we obtain:

$$\begin{split}
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle
&= \big\langle 0.4\big[\tfrac{1}{6}(0.2 + 0.5) + \tfrac{2}{3} \times 0.3\big]; 0.5, 0.2, 0.3\big\rangle \\
&\quad + \big\langle 0.3\big[\tfrac{1}{6}(0.1 + 0.3) + \tfrac{2}{3} \times 0.2\big]; 0.6, 0.3, 0.2\big\rangle \\
&\quad + \big\langle 0.3\big[\tfrac{1}{6}(0.3 + 0.6) + \tfrac{2}{3} \times 0.4\big]; 0.4, 0.3, 0.3\big\rangle
\end{split} \tag{21}$$

$$\begin{split}
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle
&= \big\langle 0.4\big[\tfrac{1}{6} \times 0.7 + \tfrac{2}{3} \times 0.3\big]; 0.5, 0.2, 0.3\big\rangle \\
&\quad + \big\langle 0.3\big[\tfrac{1}{6} \times 0.4 + \tfrac{2}{3} \times 0.2\big]; 0.6, 0.3, 0.2\big\rangle \\
&\quad + \big\langle 0.3\big[\tfrac{1}{6} \times 0.9 + \tfrac{2}{3} \times 0.4\big]; 0.4, 0.3, 0.3\big\rangle
\end{split} \tag{22}$$

$$\begin{split}
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle
&= \langle 0.4 \times 0.316; 0.5, 0.2, 0.3\rangle + \langle 0.3 \times 0.199; 0.5, 0.2, 0.3\rangle \\
&\quad + \langle 0.3 \times 0.416; 0.4, 0.3, 0.3\rangle
\end{split} \tag{23}$$

$$\begin{split}
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle
&= \langle 0.1264; 0.5, 0.2, 0.3\rangle + \langle 0.0597; 0.5, 0.2, 0.3\rangle + \langle 0.1248; 0.4, 0.3, 0.3\rangle \\
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle &= \langle 0.3109; 0.5, 0.2, 0.3\rangle
\end{split} \tag{24}$$

Result interpretation: The average neutrosophic portfolio return has a value of 31.09%, with a degree of certainty of 50%, a degree of indeterminacy of 20% and a degree of falsity of 30%. To obtain the neutrosophic portfolio return, the addition rule for two triangular neutrosophic numbers was applied, according to which:

$$
\widetilde{R}_{A1} + \widetilde{R}_{A2} = \big\langle \big(\widetilde{R}_{Aa1} + \widetilde{R}_{Aa2},\, \widetilde{R}_{Ab1} + \widetilde{R}_{Ab2},\, \widetilde{R}_{Ac1} + \widetilde{R}_{Ac2}\big); w_{\widetilde{R}_{A1}} \wedge w_{\widetilde{R}_{A2}}, u_{\widetilde{R}_{A1}} \vee u_{\widetilde{R}_{A2}}, y_{\widetilde{R}_{A1}} \vee y_{\widetilde{R}_{A2}}\big\rangle \tag{25}
$$
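As a cross-check of Example 1, the computation in Equations (20)–(24) can be sketched in Python. This is an illustrative sketch, not code from the paper; `mean_return` and the data layout are assumptions. Note that applying the min/max rule of Equation (25) strictly across all three assets yields the degrees (0.4, 0.3, 0.3), whereas the worked example carries (0.5, 0.2, 0.3) forward, and the numeric return agrees with the paper's 0.3109 up to the rounding of the per-asset means.

```python
# Illustrative sketch (not the paper's code): Example 1, neutrosophic
# portfolio return. Each asset is ((a, b, c), w, u, y).

def mean_return(tri):
    """E_f of a triangular neutrosophic number, Eq. (17): (a + c)/6 + 2b/3."""
    a, b, c = tri
    return (a + c) / 6 + 2 * b / 3

assets = [
    ((0.2, 0.3, 0.5), 0.5, 0.2, 0.3),  # R_A1
    ((0.1, 0.2, 0.3), 0.6, 0.3, 0.2),  # R_A2
    ((0.3, 0.4, 0.6), 0.4, 0.3, 0.3),  # R_A3
]
weights = [0.4, 0.3, 0.3]

# Eq. (14): weighted sum of the per-asset mean returns.
r_p = sum(x * mean_return(tri) for x, (tri, _, _, _) in zip(weights, assets))

# Degree combination per the addition rule, Eq. (25): min over w, max over u, y.
w_p = min(w for _, w, _, _ in assets)
u_p = max(u for _, _, u, _ in assets)
y_p = max(y for _, _, _, y in assets)

print(round(r_p, 4), (w_p, u_p, y_p))  # 0.3117 (0.4, 0.3, 0.3)
```

The small difference from the paper's 0.3109 comes from Equation (21) rounding the per-asset means to three decimals before weighting.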

**Proposition 2.** *The neutrosophic portfolio risk, denoted $\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle$ and modeled using neutrosophic fuzzy numbers of the form $\widetilde{\sigma}_{Ai} = \langle(\widetilde{\sigma}_{Aai}, \widetilde{\sigma}_{Abi}, \widetilde{\sigma}_{Aci}); w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A}\rangle$, is also a fundamental variable of the neutrosophic portfolio; it is determined by the calculation formula:*

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \sum_{i=1}^{n} x^2_{A_i}\Big\{\big\langle \tfrac{1}{4}\big[(\widetilde{R}_{Abi} - \widetilde{R}_{Aai})^2 + (\widetilde{R}_{Aci} - \widetilde{R}_{Abi})^2\big]; w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle \\
&\quad + \big\langle \tfrac{2}{3}\big[\widetilde{R}_{Aai}(\widetilde{R}_{Abi} - \widetilde{R}_{Aai}) - \widetilde{R}_{Aci}(\widetilde{R}_{Aci} - \widetilde{R}_{Abi})\big]; w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle \\
&\quad + \big\langle \tfrac{1}{2}\big(\widetilde{R}^2_{Aai} + \widetilde{R}^2_{Aci}\big); w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle
- \big\langle \tfrac{1}{2}E^2_f(\widetilde{R}_{Ai}); w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle\Big\} \\
&\quad + 2\sum_{i=1}^{n}\sum_{j=1}^{n} x_{A_i}x_{A_j}\Big\langle \tfrac{1}{4}\big[(\widetilde{R}_{Abi} - \widetilde{R}_{Aai})(\widetilde{R}_{Abj} - \widetilde{R}_{Aaj}) + (\widetilde{R}_{Aci} - \widetilde{R}_{Abi})(\widetilde{R}_{Acj} - \widetilde{R}_{Abj})\big] \\
&\qquad + \tfrac{1}{3}\big[\widetilde{R}_{Aaj}(\widetilde{R}_{Abi} - \widetilde{R}_{Aai}) + \widetilde{R}_{Aai}(\widetilde{R}_{Abj} - \widetilde{R}_{Aaj})\big]
- \big[\widetilde{R}_{Aci}(\widetilde{R}_{Acj} - \widetilde{R}_{Abj}) + \widetilde{R}_{Acj}(\widetilde{R}_{Aci} - \widetilde{R}_{Abi})\big] \\
&\qquad + \tfrac{1}{2}\big(\widetilde{R}_{Aai}\widetilde{R}_{Aaj} + \widetilde{R}_{Aci}\widetilde{R}_{Acj}\big)
+ \tfrac{1}{2}E_f(\widetilde{R}_{Ai})E_f(\widetilde{R}_{Aj}); \;
w_{\widetilde{R}_{Ai}} \wedge w_{\widetilde{R}_{Aj}}, u_{\widetilde{R}_{Ai}} \vee u_{\widetilde{R}_{Aj}}, y_{\widetilde{R}_{Ai}} \vee y_{\widetilde{R}_{Aj}}\Big\rangle
\end{split} \tag{26}$$

Demonstration: It is known that the risk of a neutrosophic portfolio made up of $n$ financial assets has the form:

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \langle x^2_{A_1}\sigma^2_{A_1}; w_{\widetilde{\sigma}_{A_1}}, u_{\widetilde{\sigma}_{A_1}}, y_{\widetilde{\sigma}_{A_1}}\rangle
+ \langle x^2_{A_2}\sigma^2_{A_2}; w_{\widetilde{\sigma}_{A_2}}, u_{\widetilde{\sigma}_{A_2}}, y_{\widetilde{\sigma}_{A_2}}\rangle
+ \cdots
+ \langle x^2_{A_n}\sigma^2_{A_n}; w_{\widetilde{\sigma}_{A_n}}, u_{\widetilde{\sigma}_{A_n}}, y_{\widetilde{\sigma}_{A_n}}\rangle \\
&\quad + 2x_{A_1}x_{A_2}\langle \sigma_{A_1A_2}; w_{\widetilde{\sigma}_{A_1}} \wedge w_{\widetilde{\sigma}_{A_2}}, u_{\widetilde{\sigma}_{A_1}} \vee u_{\widetilde{\sigma}_{A_2}}, y_{\widetilde{\sigma}_{A_1}} \vee y_{\widetilde{\sigma}_{A_2}}\rangle \\
&\quad + 2x_{A_1}x_{A_3}\langle \sigma_{A_1A_3}; w_{\widetilde{\sigma}_{A_1}} \wedge w_{\widetilde{\sigma}_{A_3}}, u_{\widetilde{\sigma}_{A_1}} \vee u_{\widetilde{\sigma}_{A_3}}, y_{\widetilde{\sigma}_{A_1}} \vee y_{\widetilde{\sigma}_{A_3}}\rangle
+ \cdots
+ 2x_{A_1}x_{A_n}\langle \sigma_{A_1A_n}; w_{\widetilde{\sigma}_{A_1}} \wedge w_{\widetilde{\sigma}_{A_n}}, u_{\widetilde{\sigma}_{A_1}} \vee u_{\widetilde{\sigma}_{A_n}}, y_{\widetilde{\sigma}_{A_1}} \vee y_{\widetilde{\sigma}_{A_n}}\rangle \\
&\quad + 2x_{A_2}x_{A_1}\langle \sigma_{A_2A_1}; w_{\widetilde{\sigma}_{A_2}} \wedge w_{\widetilde{\sigma}_{A_1}}, u_{\widetilde{\sigma}_{A_2}} \vee u_{\widetilde{\sigma}_{A_1}}, y_{\widetilde{\sigma}_{A_2}} \vee y_{\widetilde{\sigma}_{A_1}}\rangle
+ 2x_{A_2}x_{A_3}\langle \sigma_{A_2A_3}; w_{\widetilde{\sigma}_{A_2}} \wedge w_{\widetilde{\sigma}_{A_3}}, u_{\widetilde{\sigma}_{A_2}} \vee u_{\widetilde{\sigma}_{A_3}}, y_{\widetilde{\sigma}_{A_2}} \vee y_{\widetilde{\sigma}_{A_3}}\rangle \\
&\quad + \cdots
+ 2x_{A_2}x_{A_n}\langle \sigma_{A_2A_n}; w_{\widetilde{\sigma}_{A_2}} \wedge w_{\widetilde{\sigma}_{A_n}}, u_{\widetilde{\sigma}_{A_2}} \vee u_{\widetilde{\sigma}_{A_n}}, y_{\widetilde{\sigma}_{A_2}} \vee y_{\widetilde{\sigma}_{A_n}}\rangle \\
&\quad + 2x_{A_n}x_{A_1}\langle \sigma_{A_nA_1}; w_{\widetilde{\sigma}_{A_n}} \wedge w_{\widetilde{\sigma}_{A_1}}, u_{\widetilde{\sigma}_{A_n}} \vee u_{\widetilde{\sigma}_{A_1}}, y_{\widetilde{\sigma}_{A_n}} \vee y_{\widetilde{\sigma}_{A_1}}\rangle
+ 2x_{A_n}x_{A_2}\langle \sigma_{A_nA_2}; w_{\widetilde{\sigma}_{A_n}} \wedge w_{\widetilde{\sigma}_{A_2}}, u_{\widetilde{\sigma}_{A_n}} \vee u_{\widetilde{\sigma}_{A_2}}, y_{\widetilde{\sigma}_{A_n}} \vee y_{\widetilde{\sigma}_{A_2}}\rangle
+ \cdots
\end{split} \tag{27}$$

The analytical relation above can be written as follows:

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \sum_{i=1}^{n} \langle x^2_{A_i}\sigma^2_{A_i}; w_{\widetilde{\sigma}_{A_i}}, u_{\widetilde{\sigma}_{A_i}}, y_{\widetilde{\sigma}_{A_i}}\rangle \\
&\quad + 2\sum_{i=1}^{n}\sum_{j=1}^{n} \langle x_{A_i}x_{A_j}\sigma_{A_iA_j}; w_{\widetilde{\sigma}_{A_i}} \wedge w_{\widetilde{\sigma}_{A_j}}, u_{\widetilde{\sigma}_{A_i}} \vee u_{\widetilde{\sigma}_{A_j}}, y_{\widetilde{\sigma}_{A_i}} \vee y_{\widetilde{\sigma}_{A_j}}\rangle
\end{split} \tag{28}$$

In the neutrosophic portfolio risk relation, we substitute the expression of the mean square deviation from Definition 2 and the expression of the covariance from Definition 3, both established for a financial asset. We thus obtain the calculation relation that determines the size of the portfolio risk as a function of the weight $x_{A_i}$ of each financial asset in the total value of the portfolio, the individual financial asset risk $\langle \sigma^2_{A_i}; w_{\widetilde{\sigma}_{A_i}}, u_{\widetilde{\sigma}_{A_i}}, y_{\widetilde{\sigma}_{A_i}}\rangle$ and the covariance between two financial assets $\langle \sigma_{A_iA_j}; w_{\widetilde{\sigma}_{A_i}} \wedge w_{\widetilde{\sigma}_{A_j}}, u_{\widetilde{\sigma}_{A_i}} \vee u_{\widetilde{\sigma}_{A_j}}, y_{\widetilde{\sigma}_{A_i}} \vee y_{\widetilde{\sigma}_{A_j}}\rangle$:

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \sum_{i=1}^{n} x^2_{A_i}\Big\{\big\langle \tfrac{1}{4}\big[(\widetilde{R}_{Abi} - \widetilde{R}_{Aai})^2 + (\widetilde{R}_{Aci} - \widetilde{R}_{Abi})^2\big]; w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle \\
&\quad + \big\langle \tfrac{2}{3}\big[\widetilde{R}_{Aai}(\widetilde{R}_{Abi} - \widetilde{R}_{Aai}) - \widetilde{R}_{Aci}(\widetilde{R}_{Aci} - \widetilde{R}_{Abi})\big]; w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle \\
&\quad + \big\langle \tfrac{1}{2}\big(\widetilde{R}^2_{Aai} + \widetilde{R}^2_{Aci}\big); w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle
- \big\langle \tfrac{1}{2}E^2_f(\widetilde{R}_{Ai}); w_{\widetilde{R}_{Ai}}, u_{\widetilde{R}_{Ai}}, y_{\widetilde{R}_{Ai}}\big\rangle\Big\} \\
&\quad + 2\sum_{i=1}^{n}\sum_{j=1}^{n} x_{A_i}x_{A_j}\Big\langle \tfrac{1}{4}\big[(\widetilde{R}_{Abi} - \widetilde{R}_{Aai})(\widetilde{R}_{Abj} - \widetilde{R}_{Aaj}) + (\widetilde{R}_{Aci} - \widetilde{R}_{Abi})(\widetilde{R}_{Acj} - \widetilde{R}_{Abj})\big] \\
&\qquad + \tfrac{1}{3}\big[\widetilde{R}_{Aaj}(\widetilde{R}_{Abi} - \widetilde{R}_{Aai}) + \widetilde{R}_{Aai}(\widetilde{R}_{Abj} - \widetilde{R}_{Aaj})\big]
- \big[\widetilde{R}_{Aci}(\widetilde{R}_{Acj} - \widetilde{R}_{Abj}) + \widetilde{R}_{Acj}(\widetilde{R}_{Aci} - \widetilde{R}_{Abi})\big] \\
&\qquad + \tfrac{1}{2}\big(\widetilde{R}_{Aai}\widetilde{R}_{Aaj} + \widetilde{R}_{Aci}\widetilde{R}_{Acj}\big)
+ \tfrac{1}{2}E_f(\widetilde{R}_{Ai})E_f(\widetilde{R}_{Aj}); \;
w_{\widetilde{R}_{Ai}} \wedge w_{\widetilde{R}_{Aj}}, u_{\widetilde{R}_{Ai}} \vee u_{\widetilde{R}_{Aj}}, y_{\widetilde{R}_{Ai}} \vee y_{\widetilde{R}_{Aj}}\Big\rangle
\end{split} \tag{29}$$

**Example 2.** *Consider three financial assets $(A_1, A_2, A_3)$ whose returns are specified by three triangular neutrosophic numbers of the form:*

$$\widetilde{R}_{A1} = \langle(0.2,\, 0.3,\, 0.5); 0.5, 0.2, 0.3\rangle \text{ for values of } \widetilde{R}_A \in [0.2,\, 0.5]$$

$$\widetilde{R}_{A2} = \langle(0.1,\, 0.2,\, 0.3); 0.6, 0.3, 0.2\rangle \text{ for values of } \widetilde{R}_A \in [0.1,\, 0.3] \tag{30}$$

$$\widetilde{R}_{A3} = \langle(0.3,\, 0.4,\, 0.6); 0.4, 0.3, 0.3\rangle \text{ for values of } \widetilde{R}_A \in [0.3,\, 0.6]$$

The weights held by the three financial assets in the total portfolio are determined according to the value of each financial asset and the total value of the portfolio, and have the values $x_{A_1} = 0.4$, $x_{A_2} = 0.3$ and $x_{A_3} = 0.3$. To establish the neutrosophic portfolio risk, from Proposition 2 it is known that:

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \sum_{i=1}^{n} \langle x^2_{A_i}\sigma^2_{A_i}; w_{\widetilde{\sigma}_{A_i}}, u_{\widetilde{\sigma}_{A_i}}, y_{\widetilde{\sigma}_{A_i}}\rangle \\
&\quad + 2\sum_{i=1}^{n}\sum_{j=1}^{n} \langle x_{A_i}x_{A_j}\sigma_{A_iA_j}; w_{\widetilde{\sigma}_{A_i}} \wedge w_{\widetilde{\sigma}_{A_j}}, u_{\widetilde{\sigma}_{A_i}} \vee u_{\widetilde{\sigma}_{A_j}}, y_{\widetilde{\sigma}_{A_i}} \vee y_{\widetilde{\sigma}_{A_j}}\rangle
\end{split} \tag{31}$$

The values of the neutrosophic risk for a financial asset are determined:

$$\begin{split}
\sigma^2_{A_1} &= \big\langle \tfrac{1}{4}\big[(0.3 - 0.2)^2 + (0.5 - 0.3)^2\big]; 0.5, 0.2, 0.3\big\rangle \\
&\quad + \big\langle \tfrac{2}{3}\big[0.2(0.3 - 0.2) - 0.5(0.5 - 0.3)\big]; 0.5, 0.2, 0.3\big\rangle \\
&\quad + \big\langle \tfrac{1}{2}\big(0.2^2 + 0.5^2\big); 0.5, 0.2, 0.3\big\rangle - \big\langle \tfrac{1}{2}(0.316)^2; 0.5, 0.2, 0.3\big\rangle
\end{split} \tag{32}$$

$$\sigma^2_{A_1} = \langle 0.0225; 0.5, 0.2, 0.3\rangle \tag{33}$$

Proceeding in the same manner, we obtain the following results for $\sigma^2_{A_2}$ and $\sigma^2_{A_3}$:

$$\sigma^2_{A_2} = \langle 0.0180; 0.6, 0.3, 0.2\rangle \tag{34}$$

$$\sigma^2_{A_3} = \langle 0.0925; 0.4, 0.3, 0.3\rangle \tag{35}$$

We establish the covariance between financial assets according to the Definition 3 as follows:

$$\begin{split}
\sigma_{A_1A_2} &= \big\langle \tfrac{1}{4}\big[(0.3 - 0.2)(0.2 - 0.1) + (0.5 - 0.3)(0.3 - 0.2)\big] \\
&\quad + \tfrac{1}{3}\big[0.1(0.3 - 0.2) + 0.2(0.2 - 0.1)\big] \\
&\quad - \big[0.5(0.3 - 0.2) + 0.3(0.5 - 0.3)\big] \\
&\quad + \tfrac{1}{2}\big(0.2 \times 0.1 + 0.5 \times 0.3\big)
+ \tfrac{1}{2} \times 0.316 \times 0.199; \; 0.5 \wedge 0.6, 0.2 \vee 0.3, 0.3 \vee 0.2\big\rangle
\end{split}$$

$$
\sigma_{A_1A_2} = \langle 0.0705; 0.6, 0.2, 0.2\rangle \tag{36}
$$

In the same way, we get:

$$
\sigma_{A_1A_3} = \langle 0.1914; 0.5, 0.2, 0.3\rangle \tag{37}
$$

$$
\sigma_{A_2A_3} = \langle 0.0805; 0.6, 0.3, 0.2\rangle \tag{38}
$$

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \langle x^2_{A_1}\sigma^2_{A_1}; w_{\widetilde{\sigma}_{A_1}}, u_{\widetilde{\sigma}_{A_1}}, y_{\widetilde{\sigma}_{A_1}}\rangle
+ \langle x^2_{A_2}\sigma^2_{A_2}; w_{\widetilde{\sigma}_{A_2}}, u_{\widetilde{\sigma}_{A_2}}, y_{\widetilde{\sigma}_{A_2}}\rangle
+ \langle x^2_{A_3}\sigma^2_{A_3}; w_{\widetilde{\sigma}_{A_3}}, u_{\widetilde{\sigma}_{A_3}}, y_{\widetilde{\sigma}_{A_3}}\rangle \\
&\quad + 2x_{A_1}x_{A_2}\langle \sigma_{A_1A_2}; w_{\widetilde{\sigma}_{A_1}} \wedge w_{\widetilde{\sigma}_{A_2}}, u_{\widetilde{\sigma}_{A_1}} \vee u_{\widetilde{\sigma}_{A_2}}, y_{\widetilde{\sigma}_{A_1}} \vee y_{\widetilde{\sigma}_{A_2}}\rangle
+ 2x_{A_1}x_{A_3}\langle \sigma_{A_1A_3}; w_{\widetilde{\sigma}_{A_1}} \wedge w_{\widetilde{\sigma}_{A_3}}, u_{\widetilde{\sigma}_{A_1}} \vee u_{\widetilde{\sigma}_{A_3}}, y_{\widetilde{\sigma}_{A_1}} \vee y_{\widetilde{\sigma}_{A_3}}\rangle \\
&\quad + 2x_{A_2}x_{A_1}\langle \sigma_{A_2A_1}; w_{\widetilde{\sigma}_{A_2}} \wedge w_{\widetilde{\sigma}_{A_1}}, u_{\widetilde{\sigma}_{A_2}} \vee u_{\widetilde{\sigma}_{A_1}}, y_{\widetilde{\sigma}_{A_2}} \vee y_{\widetilde{\sigma}_{A_1}}\rangle
+ 2x_{A_2}x_{A_3}\langle \sigma_{A_2A_3}; w_{\widetilde{\sigma}_{A_2}} \wedge w_{\widetilde{\sigma}_{A_3}}, u_{\widetilde{\sigma}_{A_2}} \vee u_{\widetilde{\sigma}_{A_3}}, y_{\widetilde{\sigma}_{A_2}} \vee y_{\widetilde{\sigma}_{A_3}}\rangle \\
&\quad + 2x_{A_3}x_{A_1}\langle \sigma_{A_3A_1}; w_{\widetilde{\sigma}_{A_3}} \wedge w_{\widetilde{\sigma}_{A_1}}, u_{\widetilde{\sigma}_{A_3}} \vee u_{\widetilde{\sigma}_{A_1}}, y_{\widetilde{\sigma}_{A_3}} \vee y_{\widetilde{\sigma}_{A_1}}\rangle
+ 2x_{A_3}x_{A_2}\langle \sigma_{A_3A_2}; w_{\widetilde{\sigma}_{A_3}} \wedge w_{\widetilde{\sigma}_{A_2}}, u_{\widetilde{\sigma}_{A_3}} \vee u_{\widetilde{\sigma}_{A_2}}, y_{\widetilde{\sigma}_{A_3}} \vee y_{\widetilde{\sigma}_{A_2}}\rangle
\end{split} \tag{39}$$

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \langle 0.16 \times 0.0225; 0.5, 0.2, 0.3\rangle + \langle 0.09 \times 0.0180; 0.6, 0.3, 0.2\rangle + \langle 0.09 \times 0.0925; 0.4, 0.3, 0.3\rangle \\
&\quad + \langle 2 \times 0.12 \times 0.0705; 0.6, 0.2, 0.2\rangle + \langle 2 \times 0.12 \times 0.1914; 0.5, 0.2, 0.3\rangle + \langle 2 \times 0.09 \times 0.0805; 0.6, 0.3, 0.2\rangle \\
&\quad + \langle 2 \times 0.12 \times 0.0705; 0.6, 0.2, 0.2\rangle + \langle 2 \times 0.12 \times 0.1914; 0.5, 0.2, 0.3\rangle + \langle 2 \times 0.09 \times 0.0805; 0.6, 0.3, 0.2\rangle
\end{split} \tag{40}$$

$$
\langle \sigma_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle = \sqrt{\langle 0.168237; 0.6, 0.2, 0.2\rangle} = \langle 0.41016; 0.6, 0.2, 0.2\rangle \tag{41}
$$

Interpretation: The neutrosophic portfolio return was previously determined as $\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \langle 0.3109; 0.5, 0.2, 0.3\rangle$. To this neutrosophic portfolio return corresponds a high risk $\langle \sigma_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle = \langle 0.41016; 0.6, 0.2, 0.2\rangle$, which confirms the directly proportional relationship between return and risk. The probability that the risk manifests with certainty is about 60%, the probability that it is indeterminate is 20%, and the probability that the risk does not occur is quite small, with a value of 20%.
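The aggregation in Equations (39)–(41) can be checked numerically from the component risks (33)–(35) and covariances (36)–(38). This is a plain arithmetic sketch with illustrative variable names:

```python
import math

# Component values taken from Example 2 (numeric parts only).
weights = [0.4, 0.3, 0.3]
variances = [0.0225, 0.0180, 0.0925]                            # Eqs. (33)-(35)
covariances = {(0, 1): 0.0705, (0, 2): 0.1914, (1, 2): 0.0805}  # Eqs. (36)-(38)

# Eq. (40) lists each unordered pair twice (as A_iA_j and as A_jA_i),
# each time with the factor 2, so every covariance contributes 4 x_i x_j c.
total = sum(x * x * v for x, v in zip(weights, variances))
for (i, j), c in covariances.items():
    total += 4 * weights[i] * weights[j] * c

print(round(total, 6), round(math.sqrt(total), 5))  # 0.168237 0.41017
```

The variance reproduces the $\langle 0.168237; \ldots\rangle$ of Equation (41); its square root, 0.41017, agrees with the paper's 0.41016 up to rounding.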

#### **4. Neutrosophic Portfolio Equations. The Analytical and Matrix Form**

Neutrosophic portfolios of the form $\langle \widetilde{P}; w_{\widetilde{P}}, u_{\widetilde{P}}, y_{\widetilde{P}}\rangle$ have the characteristic that each financial asset they contain can be modeled using neutrosophic performance indicators such as the neutrosophic return $\langle E_f(\widetilde{R}_A); w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A}\rangle$, the neutrosophic risk $\langle \sigma^2_{A_i}; w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A}\rangle$ and the neutrosophic covariance $\langle cov(\widetilde{R}_{A1}, \widetilde{R}_{A2}); w_{\widetilde{R}_{A1}}, u_{\widetilde{R}_{A1}}, y_{\widetilde{R}_{A1}}; w_{\widetilde{R}_{A2}}, u_{\widetilde{R}_{A2}}, y_{\widetilde{R}_{A2}}\rangle$.

With these neutrosophic performance indicators, specific to each financial asset, the two fundamental variables of the neutrosophic portfolio are determined, namely the neutrosophic portfolio return $\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle$ and the neutrosophic portfolio risk $\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle$. According to Proposition 1, the neutrosophic portfolio return can be written in analytical form as follows:

$$
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \sum_{i=1}^{n} \langle x_{A_i}\widetilde{R}_{A_i}; w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\rangle \tag{42}
$$

The neutrosophic portfolio risk can be written in analytical form as follows:

$$\begin{split}
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
&= \sum_{i=1}^{n} \langle x^2_{A_i}\sigma^2_{A_i}; w_{\widetilde{\sigma}_{A_i}}, u_{\widetilde{\sigma}_{A_i}}, y_{\widetilde{\sigma}_{A_i}}\rangle \\
&\quad + 2\sum_{i=1}^{n}\sum_{j=1}^{n} \langle x_{A_i}x_{A_j}\sigma_{A_iA_j}; w_{\widetilde{\sigma}_{A_i}} \wedge w_{\widetilde{\sigma}_{A_j}}, u_{\widetilde{\sigma}_{A_i}} \vee u_{\widetilde{\sigma}_{A_j}}, y_{\widetilde{\sigma}_{A_i}} \vee y_{\widetilde{\sigma}_{A_j}}\rangle
\end{split} \tag{43}$$

In order to form the system of equations that characterizes the neutrosophic portfolios of financial assets, it should be mentioned that these portfolios are made up of financial assets whose weights in the total value of the portfolio sum to 100%, which can be quantified mathematically by the formula:

$$\sum\_{i=1}^{n} \mathbf{x}\_{A\_i} = 100\% \tag{44}$$

Under these conditions, the system of equations of the neutrosophic portfolio of financial assets in analytical form will be written as follows:

$$\begin{cases}
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \sum\limits_{i=1}^{n} \langle x_{A_i}\widetilde{R}_{A_i}; w_{\widetilde{R}_{A_i}}, u_{\widetilde{R}_{A_i}}, y_{\widetilde{R}_{A_i}}\rangle \\[2ex]
\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle = \sum\limits_{i=1}^{n} \langle x^2_{A_i}\sigma^2_{A_i}; w_{\widetilde{\sigma}_{A_i}}, u_{\widetilde{\sigma}_{A_i}}, y_{\widetilde{\sigma}_{A_i}}\rangle \\[1ex]
\qquad + 2\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} \langle x_{A_i}x_{A_j}\sigma_{A_iA_j}; w_{\widetilde{\sigma}_{A_i}} \wedge w_{\widetilde{\sigma}_{A_j}}, u_{\widetilde{\sigma}_{A_i}} \vee u_{\widetilde{\sigma}_{A_j}}, y_{\widetilde{\sigma}_{A_i}} \vee y_{\widetilde{\sigma}_{A_j}}\rangle
\end{cases} \tag{45}$$

In matrix form, the equations of the neutrosophic portfolio made up of $n$ financial assets will be written as follows:

$$
\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = \begin{pmatrix} x_{A_1} & x_{A_2} & \ldots & x_{A_n} \end{pmatrix}
\begin{pmatrix}
\langle \widetilde{R}_{A_1}; w_{\widetilde{R}_{A_1}}, u_{\widetilde{R}_{A_1}}, y_{\widetilde{R}_{A_1}}\rangle \\
\langle \widetilde{R}_{A_2}; w_{\widetilde{R}_{A_2}}, u_{\widetilde{R}_{A_2}}, y_{\widetilde{R}_{A_2}}\rangle \\
\vdots \\
\langle \widetilde{R}_{A_n}; w_{\widetilde{R}_{A_n}}, u_{\widetilde{R}_{A_n}}, y_{\widetilde{R}_{A_n}}\rangle
\end{pmatrix} \tag{46}
$$

We denote $X_A^T = \begin{pmatrix} x_{A_1} & x_{A_2} & \ldots & x_{A_n} \end{pmatrix}$ and

$$
\langle \widetilde{R}_A; w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A}\rangle = \begin{pmatrix}
\langle \widetilde{R}_{A_1}; w_{\widetilde{R}_{A_1}}, u_{\widetilde{R}_{A_1}}, y_{\widetilde{R}_{A_1}}\rangle \\
\langle \widetilde{R}_{A_2}; w_{\widetilde{R}_{A_2}}, u_{\widetilde{R}_{A_2}}, y_{\widetilde{R}_{A_2}}\rangle \\
\vdots \\
\langle \widetilde{R}_{A_n}; w_{\widetilde{R}_{A_n}}, u_{\widetilde{R}_{A_n}}, y_{\widetilde{R}_{A_n}}\rangle
\end{pmatrix} \tag{47}
$$

Under these conditions, the equation of the neutrosophic portfolio return will be written in matrix form as follows:

$$\langle \widetilde{R}_P; w_{\widetilde{R}_P}, u_{\widetilde{R}_P}, y_{\widetilde{R}_P}\rangle = X_A^T \langle \widetilde{R}_A; w_{\widetilde{R}_A}, u_{\widetilde{R}_A}, y_{\widetilde{R}_A}\rangle \tag{48}$$

The portfolio risk equation above can be written in matrix form as follows:

$$\langle \sigma^2_P; w_{\widetilde{\sigma}_P}, u_{\widetilde{\sigma}_P}, y_{\widetilde{\sigma}_P}\rangle
= \begin{pmatrix} x_{A_1} & x_{A_2} & \ldots & x_{A_n} \end{pmatrix}
\begin{pmatrix}
\langle \widetilde{\sigma}_{A_{11}}; w_{\widetilde{\sigma}_{A_{11}}}, u_{\widetilde{\sigma}_{A_{11}}}, y_{\widetilde{\sigma}_{A_{11}}}\rangle & \ldots & \langle \widetilde{\sigma}_{A_{1n}}; w_{\widetilde{\sigma}_{A_{1n}}}, u_{\widetilde{\sigma}_{A_{1n}}}, y_{\widetilde{\sigma}_{A_{1n}}}\rangle \\
\langle \widetilde{\sigma}_{A_{21}}; w_{\widetilde{\sigma}_{A_{21}}}, u_{\widetilde{\sigma}_{A_{21}}}, y_{\widetilde{\sigma}_{A_{21}}}\rangle & \ldots & \langle \widetilde{\sigma}_{A_{2n}}; w_{\widetilde{\sigma}_{A_{2n}}}, u_{\widetilde{\sigma}_{A_{2n}}}, y_{\widetilde{\sigma}_{A_{2n}}}\rangle \\
\vdots & \ddots & \vdots \\
\langle \widetilde{\sigma}_{A_{n1}}; w_{\widetilde{\sigma}_{A_{n1}}}, u_{\widetilde{\sigma}_{A_{n1}}}, y_{\widetilde{\sigma}_{A_{n1}}}\rangle & \ldots & \langle \widetilde{\sigma}_{A_{nn}}; w_{\widetilde{\sigma}_{A_{nn}}}, u_{\widetilde{\sigma}_{A_{nn}}}, y_{\widetilde{\sigma}_{A_{nn}}}\rangle
\end{pmatrix}
\begin{pmatrix} x_{A_1} \\ x_{A_2} \\ \vdots \\ x_{A_n} \end{pmatrix} \tag{49}$$

*Mathematics* **2019**, *7*, 1046

In the matrix equation of the neutrosophic portfolio risk above we denote:

$$X_A^T = \begin{pmatrix} x_{A_1} & x_{A_2} & \ldots & x_{A_n} \end{pmatrix}$$

and

$$\langle \widetilde{\Omega}; w_{\widetilde{\sigma}_A}, u_{\widetilde{\sigma}_A}, y_{\widetilde{\sigma}_A}\rangle = \begin{pmatrix}
\langle \widetilde{\sigma}_{A_{11}}; w_{\widetilde{\sigma}_{A_{11}}}, u_{\widetilde{\sigma}_{A_{11}}}, y_{\widetilde{\sigma}_{A_{11}}}\rangle & \ldots & \langle \widetilde{\sigma}_{A_{1n}}; w_{\widetilde{\sigma}_{A_{1n}}}, u_{\widetilde{\sigma}_{A_{1n}}}, y_{\widetilde{\sigma}_{A_{1n}}}\rangle \\
\langle \widetilde{\sigma}_{A_{21}}; w_{\widetilde{\sigma}_{A_{21}}}, u_{\widetilde{\sigma}_{A_{21}}}, y_{\widetilde{\sigma}_{A_{21}}}\rangle & \ldots & \langle \widetilde{\sigma}_{A_{2n}}; w_{\widetilde{\sigma}_{A_{2n}}}, u_{\widetilde{\sigma}_{A_{2n}}}, y_{\widetilde{\sigma}_{A_{2n}}}\rangle \\
\vdots & \ddots & \vdots \\
\langle \widetilde{\sigma}_{A_{n1}}; w_{\widetilde{\sigma}_{A_{n1}}}, u_{\widetilde{\sigma}_{A_{n1}}}, y_{\widetilde{\sigma}_{A_{n1}}}\rangle & \ldots & \langle \widetilde{\sigma}_{A_{nn}}; w_{\widetilde{\sigma}_{A_{nn}}}, u_{\widetilde{\sigma}_{A_{nn}}}, y_{\widetilde{\sigma}_{A_{nn}}}\rangle
\end{pmatrix} \tag{50}$$

Under these conditions the matrix equation of the neutrosophic portfolio risk becomes:

$$
\langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle = \mathbf{X}\_A^T \langle \widetilde{\Omega}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \mathbf{X}\_A \tag{51}
$$

The equation of the financial assets weight in the total value of the neutrosophic portfolio can be written as:

$$\left( x\_{A\_1}\; x\_{A\_2} \dots x\_{A\_n} \right) \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} = 1 \tag{52}$$

In matrix form, the equation of the weights is written as $\mathbf{X}\_A^T e = 1$. The system of equations of the neutrosophic portfolio in matrix form is written as follows:

$$\begin{cases} \langle \widetilde{R}\_P; w\widetilde{R}\_P, u\widetilde{R}\_P, y\widetilde{R}\_P \rangle = \mathbf{X}\_A^T \langle \widetilde{R}\_A; w\widetilde{R}\_A, u\widetilde{R}\_A, y\widetilde{R}\_A \rangle \\ \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle = \mathbf{X}\_A^T \langle \widetilde{\Omega}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \mathbf{X}\_A \\ \mathbf{X}\_A^T e = 1 \end{cases} \tag{53}$$

The neutrosophic portfolio equations, in analytical or matrix form, will be used for risk-minimization calculations or for determining the optimal portfolio structure, depending on the needs.
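As an illustrative sketch (ours, not part of the paper itself), the matrix system above can be mirrored in code by carrying a crisp value together with its truth, indeterminacy and falsity degrees. The type and function names (`NeutroNum`, `portfolio_return`, `portfolio_risk`) are assumptions, and the degree-combination rules (minimum for $w$, maximum for $u$ and $y$) follow the $\wedge$/$\vee$ conventions used in Equation (69):

```python
from dataclasses import dataclass

@dataclass
class NeutroNum:
    """A crisp magnitude paired with its neutrosophic degrees."""
    value: float  # crisp magnitude (e.g., a return or a covariance)
    w: float      # truth degree
    u: float      # indeterminacy degree
    y: float      # falsity degree

def portfolio_return(x, returns):
    """<R_P> = X_A^T <R_A>: weighted sum of the asset returns."""
    value = sum(xi * r.value for xi, r in zip(x, returns))
    w = min(r.w for r in returns)  # truth degrees combine by minimum
    u = max(r.u for r in returns)  # indeterminacy by maximum
    y = max(r.y for r in returns)  # falsity by maximum
    return NeutroNum(value, w, u, y)

def portfolio_risk(x, cov):
    """<sigma_P^2> = X_A^T <Omega> X_A: quadratic form over the covariance matrix."""
    n = len(x)
    value = sum(x[i] * x[j] * cov[i][j].value for i in range(n) for j in range(n))
    w = min(c.w for row in cov for c in row)
    u = max(c.u for row in cov for c in row)
    y = max(c.y for row in cov for c in row)
    return NeutroNum(value, w, u, y)
```

Under this convention the weight vector must still satisfy `sum(x) == 1`, mirroring the constraint $\mathbf{X}\_A^T e = 1$.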

#### **5. Minimizing the Risk of the Neutrosophic Portfolio**

#### *5.1. Minimizing the Risk of the Neutrosophic Portfolio Consisting of Two Financial Assets*

The purpose of this section is to determine the structure of the neutrosophic portfolio $\left(x\_{A\_1}, x\_{A\_2}, \dots, x\_{A\_n}\right)$ for which the risk is minimal. The financial assets that enter the structure of the neutrosophic portfolio allow the determination of the performance indicators using neutrosophic triangular fuzzy numbers, according to Theorem 1. For this, we write the equations of the neutrosophic portfolio consisting of two financial assets $A\_1$, $A\_2$ as follows:

$$\begin{cases} \langle \widetilde{R}\_P; w\widetilde{R}\_P, u\widetilde{R}\_P, y\widetilde{R}\_P \rangle = \langle x\_{A\_1}\widetilde{R}\_{A\_1}; w\widetilde{R}\_{A\_1}, u\widetilde{R}\_{A\_1}, y\widetilde{R}\_{A\_1} \rangle + \langle x\_{A\_2}\widetilde{R}\_{A\_2}; w\widetilde{R}\_{A\_2}, u\widetilde{R}\_{A\_2}, y\widetilde{R}\_{A\_2} \rangle \\ \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle = \langle x\_{A\_1}^2\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + \langle x\_{A\_2}^2\widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle + \langle 2x\_{A\_1}x\_{A\_2}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \\ x\_{A\_1} + x\_{A\_2} = 1 \end{cases} \tag{54}$$

In order to establish the structure of the neutrosophic portfolio for which the risk is minimal, the minimum conditions for the portfolio risk are imposed by setting the first-order derivatives of the neutrosophic portfolio risk to zero:

$$\begin{cases} \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_1}} = 0 \\ \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_2}} = 0 \end{cases} \tag{55}$$


**Theorem 1.** *Let $A\_1$, $A\_2$ be two financial assets for which the neutrosophic return can be determined: $\langle \widetilde{R}\_{A\_1}; w\widetilde{R}\_{A\_1}, u\widetilde{R}\_{A\_1}, y\widetilde{R}\_{A\_1} \rangle$ and $\langle \widetilde{R}\_{A\_2}; w\widetilde{R}\_{A\_2}, u\widetilde{R}\_{A\_2}, y\widetilde{R}\_{A\_2} \rangle$. Also, the specific neutrosophic risk can be determined for each financial asset: $\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle$ and $\langle \widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle$. These two financial assets form a neutrosophic portfolio of the form $\langle \widetilde{P}; w\widetilde{P}, u\widetilde{P}, y\widetilde{P} \rangle$. The risk of the neutrosophic portfolio attains its minimum value for $x\_{A\_1}$ and, respectively, $x\_{A\_2}$ of the form:*

$$x\_{A\_1} = \frac{\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle}{\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle - \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle} \tag{56}$$

$$x\_{A\_2} = \frac{\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle}{\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle - \langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle} \tag{57}$$

*with the condition that $\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \neq \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle$, so that the denominators above are nonzero.*

Demonstration: We know that the equations of the neutrosophic portfolio in analytical form can be written according to the above equations:

$$\begin{cases} \langle \widetilde{R}\_P; w\widetilde{R}\_P, u\widetilde{R}\_P, y\widetilde{R}\_P \rangle = \langle x\_{A\_1}\widetilde{R}\_{A\_1}; w\widetilde{R}\_{A\_1}, u\widetilde{R}\_{A\_1}, y\widetilde{R}\_{A\_1} \rangle + \langle x\_{A\_2}\widetilde{R}\_{A\_2}; w\widetilde{R}\_{A\_2}, u\widetilde{R}\_{A\_2}, y\widetilde{R}\_{A\_2} \rangle \\ \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle = \langle x\_{A\_1}^2\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + \langle x\_{A\_2}^2\widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle + \langle 2x\_{A\_1}x\_{A\_2}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \\ x\_{A\_1} + x\_{A\_2} = 1 \end{cases} \tag{58}$$

We set the minimum conditions for the neutrosophic portfolio risk and obtain:

$$\begin{cases} \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_1}} = 0 \\ \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_2}} = 0 \end{cases} \tag{59}$$

Based on these conditions, we obtain:

$$\begin{cases} \langle 2x\_{A\_1}\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + \langle 2x\_{A\_2}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle = 0 \\ \langle 2x\_{A\_2}\widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle + \langle 2x\_{A\_1}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle = 0 \\ x\_{A\_1} + x\_{A\_2} = 1 \end{cases} \tag{60}$$

From the last equation of the above system it results that $x\_{A\_2} = 1 - x\_{A\_1}$; replacing this in the first equation, we have:

$$\langle 2x\_{A\_1}\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + \langle 2(1 - x\_{A\_1})\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle = 0 \tag{61}$$

$$\langle 2x\_{A\_1}\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + \langle 2\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle - \langle 2x\_{A\_1}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle = 0 \tag{62}$$

$$\langle x\_{A\_1}\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle - \langle x\_{A\_1}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle = -\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \tag{63}$$

$$x\_{A\_1}\left( \langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle - \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle \right) = \langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \tag{64}$$

$$x\_{A\_1} = \frac{\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle}{\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle - \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle} \tag{65}$$

With the condition that $\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \neq \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle$. By substituting the formula for $x\_{A\_1}$ in the expression $x\_{A\_2} = 1 - x\_{A\_1}$, it is obtained:

$$x\_{A\_2} = 1 - \frac{\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle}{\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle - \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle} \tag{66}$$

$$x\_{A\_2} = \frac{\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle}{\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle - \langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle} \tag{67}$$

With the same condition: $\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_A, u\widetilde{\sigma}\_A, y\widetilde{\sigma}\_A \rangle \neq \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle$.
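A minimal numeric sketch of the weights in Equations (56) and (57) (ours, with crisp values standing in for the triangular fuzzy components; the function name `min_risk_weights` is an assumption):

```python
def min_risk_weights(var_a1, cov_a1a2):
    """Two-asset minimum-risk weights per Equations (56)-(57):
    x_A1 = cov / (cov - var_A1),  x_A2 = var_A1 / (var_A1 - cov).
    Defined only when cov != var_A1 (nonzero denominator)."""
    if cov_a1a2 == var_a1:
        raise ValueError("covariance must differ from the variance of A1")
    x_a1 = cov_a1a2 / (cov_a1a2 - var_a1)
    x_a2 = var_a1 / (var_a1 - cov_a1a2)
    return x_a1, x_a2
```

By construction `x_a1 + x_a2 == 1`, which mirrors the budget constraint $x\_{A\_1} + x\_{A\_2} = 1$ used in the derivation.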

#### *5.2. Minimizing the Risk of the Neutrosophic Portfolio Consisting of N Financial Assets*

The neutrosophic portfolio is composed of N financial assets, and we assume that the portfolio contains financial assets that allow the determination of performance indicators using neutrosophic triangular fuzzy numbers. As a result, each financial asset in the portfolio allows the determination of the neutrosophic return $\langle \widetilde{R}\_{A\_i}; w\widetilde{R}\_{A\_i}, u\widetilde{R}\_{A\_i}, y\widetilde{R}\_{A\_i} \rangle$ and of the neutrosophic risk $\langle \widetilde{\sigma}\_{A\_i}^2; w\widetilde{\sigma}\_{A\_i}, u\widetilde{\sigma}\_{A\_i}, y\widetilde{\sigma}\_{A\_i} \rangle$. Each financial asset $A\_i$ holds a weight, noted $x\_{A\_i}$, in the total value of the neutrosophic portfolio. According to the above, for each financial asset $A\_i$ we will have:

$$\begin{array}{c} A\_1: x\_{A\_1};\; \langle \widetilde{R}\_{A\_1}; w\widetilde{R}\_{A\_1}, u\widetilde{R}\_{A\_1}, y\widetilde{R}\_{A\_1} \rangle;\; \langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle \\ A\_2: x\_{A\_2};\; \langle \widetilde{R}\_{A\_2}; w\widetilde{R}\_{A\_2}, u\widetilde{R}\_{A\_2}, y\widetilde{R}\_{A\_2} \rangle;\; \langle \widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle \\ \vdots \\ A\_n: x\_{A\_n};\; \langle \widetilde{R}\_{A\_n}; w\widetilde{R}\_{A\_n}, u\widetilde{R}\_{A\_n}, y\widetilde{R}\_{A\_n} \rangle;\; \langle \widetilde{\sigma}\_{A\_n}^2; w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_n} \rangle \end{array} \tag{68}$$

The equations that describe the portfolio refer to the neutrosophic portfolio return and risk and are of the form:

$$\begin{cases} \langle \widetilde{R}\_P; w\widetilde{R}\_P, u\widetilde{R}\_P, y\widetilde{R}\_P \rangle = \sum\_{i=1}^{n} \langle x\_{A\_i}\widetilde{R}\_{A\_i}; w\widetilde{R}\_{A\_i}, u\widetilde{R}\_{A\_i}, y\widetilde{R}\_{A\_i} \rangle \\ \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle = \sum\_{i=1}^{n} \langle x\_{A\_i}^2\widetilde{\sigma}\_{A\_i}^2; w\widetilde{\sigma}\_{A\_i}, u\widetilde{\sigma}\_{A\_i}, y\widetilde{\sigma}\_{A\_i} \rangle + 2\sum\_{i=1}^{n}\sum\_{j=1,\, j\neq i}^{n} \langle x\_{A\_i}x\_{A\_j}\widetilde{\sigma}\_{A\_iA\_j}; w\widetilde{\sigma}\_{A\_i} \wedge w\widetilde{\sigma}\_{A\_j}, u\widetilde{\sigma}\_{A\_i} \vee u\widetilde{\sigma}\_{A\_j}, y\widetilde{\sigma}\_{A\_i} \vee y\widetilde{\sigma}\_{A\_j} \rangle \\ \sum\_{i=1}^{n} x\_{A\_i} = 1 \end{cases} \tag{69}$$
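The degree rules attached to the cross terms in the risk equation above can be isolated in a small helper (a sketch; the name `combine_degrees` is ours): truth degrees combine by minimum ($\wedge$), while indeterminacy and falsity degrees combine by maximum ($\vee$).

```python
def combine_degrees(d1, d2):
    """Degrees of a cross term x_i x_j sigma_ij:
    (w_i ∧ w_j, u_i ∨ u_j, y_i ∨ y_j)."""
    (w1, u1, y1), (w2, u2, y2) = d1, d2
    return (min(w1, w2), max(u1, u2), max(y1, y2))
```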

The analytical equations for the neutrosophic portfolio return and risk are written as follows:

$$\begin{split} \langle \widetilde{R}\_P; w\widetilde{R}\_P, u\widetilde{R}\_P, y\widetilde{R}\_P \rangle &= \langle x\_{A\_1}\widetilde{R}\_{A\_1}; w\widetilde{R}\_{A\_1}, u\widetilde{R}\_{A\_1}, y\widetilde{R}\_{A\_1} \rangle + \langle x\_{A\_2}\widetilde{R}\_{A\_2}; w\widetilde{R}\_{A\_2}, u\widetilde{R}\_{A\_2}, y\widetilde{R}\_{A\_2} \rangle + \cdots \\ &\quad + \langle x\_{A\_n}\widetilde{R}\_{A\_n}; w\widetilde{R}\_{A\_n}, u\widetilde{R}\_{A\_n}, y\widetilde{R}\_{A\_n} \rangle \end{split} \tag{70}$$

And respectively:

$$\begin{split} \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle &= \langle x\_{A\_1}^2\widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + \langle x\_{A\_2}^2\widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle + \cdots + \langle x\_{A\_n}^2\widetilde{\sigma}\_{A\_n}^2; w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_n} \rangle \\ &\quad + \langle 2x\_{A\_1}x\_{A\_2}\widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_{A\_1} \wedge w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_1} \vee u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_1} \vee y\widetilde{\sigma}\_{A\_2} \rangle + \langle 2x\_{A\_1}x\_{A\_3}\widetilde{\sigma}\_{A\_1A\_3}; w\widetilde{\sigma}\_{A\_1} \wedge w\widetilde{\sigma}\_{A\_3}, u\widetilde{\sigma}\_{A\_1} \vee u\widetilde{\sigma}\_{A\_3}, y\widetilde{\sigma}\_{A\_1} \vee y\widetilde{\sigma}\_{A\_3} \rangle + \cdots + \langle 2x\_{A\_1}x\_{A\_n}\widetilde{\sigma}\_{A\_1A\_n}; w\widetilde{\sigma}\_{A\_1} \wedge w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_1} \vee u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_1} \vee y\widetilde{\sigma}\_{A\_n} \rangle \\ &\quad + \langle 2x\_{A\_2}x\_{A\_1}\widetilde{\sigma}\_{A\_2A\_1}; w\widetilde{\sigma}\_{A\_2} \wedge w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_2} \vee u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_2} \vee y\widetilde{\sigma}\_{A\_1} \rangle + \langle 2x\_{A\_2}x\_{A\_3}\widetilde{\sigma}\_{A\_2A\_3}; w\widetilde{\sigma}\_{A\_2} \wedge w\widetilde{\sigma}\_{A\_3}, u\widetilde{\sigma}\_{A\_2} \vee u\widetilde{\sigma}\_{A\_3}, y\widetilde{\sigma}\_{A\_2} \vee y\widetilde{\sigma}\_{A\_3} \rangle + \cdots + \langle 2x\_{A\_2}x\_{A\_n}\widetilde{\sigma}\_{A\_2A\_n}; w\widetilde{\sigma}\_{A\_2} \wedge w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_2} \vee u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_2} \vee y\widetilde{\sigma}\_{A\_n} \rangle \\ &\quad + \cdots \\ &\quad + \langle 2x\_{A\_n}x\_{A\_1}\widetilde{\sigma}\_{A\_nA\_1}; w\widetilde{\sigma}\_{A\_n} \wedge w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_n} \vee u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_n} \vee y\widetilde{\sigma}\_{A\_1} \rangle + \langle 2x\_{A\_n}x\_{A\_2}\widetilde{\sigma}\_{A\_nA\_2}; w\widetilde{\sigma}\_{A\_n} \wedge w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_n} \vee u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_n} \vee y\widetilde{\sigma}\_{A\_2} \rangle + \cdots + \langle 2x\_{A\_n}x\_{A\_{n-1}}\widetilde{\sigma}\_{A\_nA\_{n-1}}; w\widetilde{\sigma}\_{A\_n} \wedge w\widetilde{\sigma}\_{A\_{n-1}}, u\widetilde{\sigma}\_{A\_n} \vee u\widetilde{\sigma}\_{A\_{n-1}}, y\widetilde{\sigma}\_{A\_n} \vee y\widetilde{\sigma}\_{A\_{n-1}} \rangle \end{split} \tag{71}$$

**Theorem 2.** *There are considered N financial assets $A\_1, A\_2, \dots, A\_n$ for which the neutrosophic return can be determined: $\langle \widetilde{R}\_{A\_1}; w\widetilde{R}\_{A\_1}, u\widetilde{R}\_{A\_1}, y\widetilde{R}\_{A\_1} \rangle$, $\langle \widetilde{R}\_{A\_2}; w\widetilde{R}\_{A\_2}, u\widetilde{R}\_{A\_2}, y\widetilde{R}\_{A\_2} \rangle$, ..., $\langle \widetilde{R}\_{A\_n}; w\widetilde{R}\_{A\_n}, u\widetilde{R}\_{A\_n}, y\widetilde{R}\_{A\_n} \rangle$. It is also possible to determine the neutrosophic risk specific to each financial asset that is part of the portfolio: $\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle$, $\langle \widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle$, ..., $\langle \widetilde{\sigma}\_{A\_n}^2; w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_n} \rangle$. These N financial assets form a neutrosophic portfolio of the form $\langle \widetilde{P}; w\widetilde{P}, u\widetilde{P}, y\widetilde{P} \rangle$. The risk of the neutrosophic portfolio attains its minimum value for $x\_{A\_1}, x\_{A\_2}, \dots, x\_{A\_n}$, generalized using the weight $x\_{A\_k}$ of the form:*

$$x\_{A\_k} = \frac{\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{k2}}; w\widetilde{\sigma}\_{A\_{k2}}, u\widetilde{\sigma}\_{A\_{k2}}, y\widetilde{\sigma}\_{A\_{k2}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{k3}}; w\widetilde{\sigma}\_{A\_{k3}}, u\widetilde{\sigma}\_{A\_{k3}}, y\widetilde{\sigma}\_{A\_{k3}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{kn}}; w\widetilde{\sigma}\_{A\_{kn}}, u\widetilde{\sigma}\_{A\_{kn}}, y\widetilde{\sigma}\_{A\_{kn}} \rangle \end{pmatrix}}{\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{k2}}; w\widetilde{\sigma}\_{A\_{k2}}, u\widetilde{\sigma}\_{A\_{k2}}, y\widetilde{\sigma}\_{A\_{k2}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{k3}}; w\widetilde{\sigma}\_{A\_{k3}}, u\widetilde{\sigma}\_{A\_{k3}}, y\widetilde{\sigma}\_{A\_{k3}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{kn}}; w\widetilde{\sigma}\_{A\_{kn}}, u\widetilde{\sigma}\_{A\_{kn}}, y\widetilde{\sigma}\_{A\_{kn}} \rangle \end{pmatrix} - \begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{kk}}; w\widetilde{\sigma}\_{A\_{kk}}, u\widetilde{\sigma}\_{A\_{kk}}, y\widetilde{\sigma}\_{A\_{kk}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{kk}}; w\widetilde{\sigma}\_{A\_{kk}}, u\widetilde{\sigma}\_{A\_{kk}}, y\widetilde{\sigma}\_{A\_{kk}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{kk}}; w\widetilde{\sigma}\_{A\_{kk}}, u\widetilde{\sigma}\_{A\_{kk}}, y\widetilde{\sigma}\_{A\_{kk}} \rangle \end{pmatrix}} \tag{72}$$

*with the condition that the denominator above is nonzero, i.e., $\langle \widetilde{\sigma}\_{A\_{kj}}; w\widetilde{\sigma}\_{A\_{kj}}, u\widetilde{\sigma}\_{A\_{kj}}, y\widetilde{\sigma}\_{A\_{kj}} \rangle \neq \langle \widetilde{\sigma}\_{A\_{kk}}; w\widetilde{\sigma}\_{A\_{kk}}, u\widetilde{\sigma}\_{A\_{kk}}, y\widetilde{\sigma}\_{A\_{kk}} \rangle$ for $j \neq k$.*

Demonstration: The equations of the neutrosophic portfolio in analytical form have been written previously and refer to portfolios containing N financial assets. The conditions for minimizing the risk of the neutrosophic portfolio are obtained at the points where the first-order derivatives of the neutrosophic portfolio risk vanish, namely from the equations:

$$\begin{cases} \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_1}} = 0 \\ \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_2}} = 0 \\ \vdots \\ \frac{\partial \langle \widetilde{\sigma}\_P^2; w\widetilde{\sigma}\_P, u\widetilde{\sigma}\_P, y\widetilde{\sigma}\_P \rangle}{\partial x\_{A\_n}} = 0 \end{cases} \tag{73}$$

Based on the conditions for minimizing the neutrosophic portfolio risk, the following system is obtained:

$$\begin{cases} 2x\_{A\_1}\langle \widetilde{\sigma}\_{A\_1}^2; w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_1} \rangle + 2x\_{A\_2}\langle \widetilde{\sigma}\_{A\_1A\_2}; w\widetilde{\sigma}\_{A\_1} \wedge w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_1} \vee u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_1} \vee y\widetilde{\sigma}\_{A\_2} \rangle + \cdots + 2x\_{A\_n}\langle \widetilde{\sigma}\_{A\_1A\_n}; w\widetilde{\sigma}\_{A\_1} \wedge w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_1} \vee u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_1} \vee y\widetilde{\sigma}\_{A\_n} \rangle = 0 \\ 2x\_{A\_2}\langle \widetilde{\sigma}\_{A\_2}^2; w\widetilde{\sigma}\_{A\_2}, u\widetilde{\sigma}\_{A\_2}, y\widetilde{\sigma}\_{A\_2} \rangle + 2x\_{A\_1}\langle \widetilde{\sigma}\_{A\_2A\_1}; w\widetilde{\sigma}\_{A\_2} \wedge w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_2} \vee u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_2} \vee y\widetilde{\sigma}\_{A\_1} \rangle + \cdots + 2x\_{A\_n}\langle \widetilde{\sigma}\_{A\_2A\_n}; w\widetilde{\sigma}\_{A\_2} \wedge w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_2} \vee u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_2} \vee y\widetilde{\sigma}\_{A\_n} \rangle = 0 \\ \vdots \\ 2x\_{A\_n}\langle \widetilde{\sigma}\_{A\_n}^2; w\widetilde{\sigma}\_{A\_n}, u\widetilde{\sigma}\_{A\_n}, y\widetilde{\sigma}\_{A\_n} \rangle + 2x\_{A\_1}\langle \widetilde{\sigma}\_{A\_nA\_1}; w\widetilde{\sigma}\_{A\_n} \wedge w\widetilde{\sigma}\_{A\_1}, u\widetilde{\sigma}\_{A\_n} \vee u\widetilde{\sigma}\_{A\_1}, y\widetilde{\sigma}\_{A\_n} \vee y\widetilde{\sigma}\_{A\_1} \rangle + \cdots + 2x\_{A\_{n-1}}\langle \widetilde{\sigma}\_{A\_nA\_{n-1}}; w\widetilde{\sigma}\_{A\_n} \wedge w\widetilde{\sigma}\_{A\_{n-1}}, u\widetilde{\sigma}\_{A\_n} \vee u\widetilde{\sigma}\_{A\_{n-1}}, y\widetilde{\sigma}\_{A\_n} \vee y\widetilde{\sigma}\_{A\_{n-1}} \rangle = 0 \\ x\_{A\_1} + x\_{A\_2} + \cdots + x\_{A\_n} = 1 \end{cases} \tag{74}$$

In matrix form, the equations above, ordered according to the neutrosophic variance, are written as follows:

$$2\langle x\_{A\_1}\widetilde{\sigma}\_{A\_{11}}; w\widetilde{\sigma}\_{A\_{11}}, u\widetilde{\sigma}\_{A\_{11}}, y\widetilde{\sigma}\_{A\_{11}} \rangle + 2\left( x\_{A\_2} \dots x\_{A\_n} \right)\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{12}}; w\widetilde{\sigma}\_{A\_{12}}, u\widetilde{\sigma}\_{A\_{12}}, y\widetilde{\sigma}\_{A\_{12}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{13}}; w\widetilde{\sigma}\_{A\_{13}}, u\widetilde{\sigma}\_{A\_{13}}, y\widetilde{\sigma}\_{A\_{13}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{1n}}; w\widetilde{\sigma}\_{A\_{1n}}, u\widetilde{\sigma}\_{A\_{1n}}, y\widetilde{\sigma}\_{A\_{1n}} \rangle \end{pmatrix} = 0 \tag{75}$$

$$2\langle x\_{A\_2}\widetilde{\sigma}\_{A\_{22}}; w\widetilde{\sigma}\_{A\_{22}}, u\widetilde{\sigma}\_{A\_{22}}, y\widetilde{\sigma}\_{A\_{22}} \rangle + 2\left( x\_{A\_1}\; x\_{A\_3} \dots x\_{A\_n} \right)\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{21}}; w\widetilde{\sigma}\_{A\_{21}}, u\widetilde{\sigma}\_{A\_{21}}, y\widetilde{\sigma}\_{A\_{21}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{23}}; w\widetilde{\sigma}\_{A\_{23}}, u\widetilde{\sigma}\_{A\_{23}}, y\widetilde{\sigma}\_{A\_{23}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{2n}}; w\widetilde{\sigma}\_{A\_{2n}}, u\widetilde{\sigma}\_{A\_{2n}}, y\widetilde{\sigma}\_{A\_{2n}} \rangle \end{pmatrix} = 0 \tag{76}$$

$$2\langle x\_{A\_n}\widetilde{\sigma}\_{A\_{nn}}; w\widetilde{\sigma}\_{A\_{nn}}, u\widetilde{\sigma}\_{A\_{nn}}, y\widetilde{\sigma}\_{A\_{nn}} \rangle + 2\left( x\_{A\_1} \dots x\_{A\_{n-1}} \right)\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{n1}}; w\widetilde{\sigma}\_{A\_{n1}}, u\widetilde{\sigma}\_{A\_{n1}}, y\widetilde{\sigma}\_{A\_{n1}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{n2}}; w\widetilde{\sigma}\_{A\_{n2}}, u\widetilde{\sigma}\_{A\_{n2}}, y\widetilde{\sigma}\_{A\_{n2}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{n,n-1}}; w\widetilde{\sigma}\_{A\_{n,n-1}}, u\widetilde{\sigma}\_{A\_{n,n-1}}, y\widetilde{\sigma}\_{A\_{n,n-1}} \rangle \end{pmatrix} = 0 \tag{77}$$

The last neutrosophic portfolio equation, $x\_{A\_1} + x\_{A\_2} + \cdots + x\_{A\_n} = 1$, written in matrix form, becomes:

$$x\_{A\_1} + \left( x\_{A\_2} \dots x\_{A\_n} \right) \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} = 1 \tag{78}$$

It results that:

$$\left( x\_{A\_2} \dots x\_{A\_n} \right) = \left(1 - x\_{A\_1}\right)e^{-1} \tag{79}$$

By replacing the obtained expression for $\left( x\_{A\_2} \dots x\_{A\_n} \right)$ in the first relationship, it is obtained that:

$$2\langle x\_{A\_1}\widetilde{\sigma}\_{A\_{11}}; w\widetilde{\sigma}\_{A\_{11}}, u\widetilde{\sigma}\_{A\_{11}}, y\widetilde{\sigma}\_{A\_{11}} \rangle + 2\left(1 - x\_{A\_1}\right)e^{-1}\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{12}}; w\widetilde{\sigma}\_{A\_{12}}, u\widetilde{\sigma}\_{A\_{12}}, y\widetilde{\sigma}\_{A\_{12}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{13}}; w\widetilde{\sigma}\_{A\_{13}}, u\widetilde{\sigma}\_{A\_{13}}, y\widetilde{\sigma}\_{A\_{13}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{1n}}; w\widetilde{\sigma}\_{A\_{1n}}, u\widetilde{\sigma}\_{A\_{1n}}, y\widetilde{\sigma}\_{A\_{1n}} \rangle \end{pmatrix} = 0 \tag{80}$$

$$x\_{A\_1}\left[\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{12}}; w\widetilde{\sigma}\_{A\_{12}}, u\widetilde{\sigma}\_{A\_{12}}, y\widetilde{\sigma}\_{A\_{12}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{13}}; w\widetilde{\sigma}\_{A\_{13}}, u\widetilde{\sigma}\_{A\_{13}}, y\widetilde{\sigma}\_{A\_{13}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{1n}}; w\widetilde{\sigma}\_{A\_{1n}}, u\widetilde{\sigma}\_{A\_{1n}}, y\widetilde{\sigma}\_{A\_{1n}} \rangle \end{pmatrix} - \begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{11}}; w\widetilde{\sigma}\_{A\_{11}}, u\widetilde{\sigma}\_{A\_{11}}, y\widetilde{\sigma}\_{A\_{11}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{11}}; w\widetilde{\sigma}\_{A\_{11}}, u\widetilde{\sigma}\_{A\_{11}}, y\widetilde{\sigma}\_{A\_{11}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{11}}; w\widetilde{\sigma}\_{A\_{11}}, u\widetilde{\sigma}\_{A\_{11}}, y\widetilde{\sigma}\_{A\_{11}} \rangle \end{pmatrix}\right] = \begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{12}}; w\widetilde{\sigma}\_{A\_{12}}, u\widetilde{\sigma}\_{A\_{12}}, y\widetilde{\sigma}\_{A\_{12}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{13}}; w\widetilde{\sigma}\_{A\_{13}}, u\widetilde{\sigma}\_{A\_{13}}, y\widetilde{\sigma}\_{A\_{13}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{1n}}; w\widetilde{\sigma}\_{A\_{1n}}, u\widetilde{\sigma}\_{A\_{1n}}, y\widetilde{\sigma}\_{A\_{1n}} \rangle \end{pmatrix} \tag{81}$$

$$\mathbf{x}\_{A\_{1}} = \frac{\begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{12}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{12}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{12}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{12}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{13}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{13}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{13}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{13}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{1n}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{1n}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{1n}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{1n}} \rangle \end{pmatrix} - \begin{pmatrix} \langle \widetilde{\sigma}\_{A\_{11}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{12}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{13}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{13}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{11}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{1n}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{1n}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{11}} \rangle \\ \vdots \\ \langle \widetilde{\sigma}\_{A\_{12}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{12}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{12}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{13}} \rangle \\ \langle \widetilde{\sigma}\_{A\_{13}}; \widetilde{\boldsymbol{w}\sigma}\_{41\_{13}}, \widetilde{\boldsymbol{w}\sigma}\_{41\_{13}}, \widetilde{\boldsymbol{y}\sigma}\_{41\_{13}} \rangle \end{pmatrix} \tag{82}$$

For the weight *xAk* of an arbitrary asset *Ak*, one analogously obtains:

$$
x\_{A\_k} = \frac{\begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{k2}}; w\_{\sigma\_{A\_{k2}}}, u\_{\sigma\_{A\_{k2}}}, y\_{\sigma\_{A\_{k2}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{k3}}; w\_{\sigma\_{A\_{k3}}}, u\_{\sigma\_{A\_{k3}}}, y\_{\sigma\_{A\_{k3}}} \rangle \\
\vdots \\
\langle \widetilde{\sigma}\_{A\_{kn}}; w\_{\sigma\_{A\_{kn}}}, u\_{\sigma\_{A\_{kn}}}, y\_{\sigma\_{A\_{kn}}} \rangle
\end{pmatrix}}{\begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{k2}}; w\_{\sigma\_{A\_{k2}}}, u\_{\sigma\_{A\_{k2}}}, y\_{\sigma\_{A\_{k2}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{k3}}; w\_{\sigma\_{A\_{k3}}}, u\_{\sigma\_{A\_{k3}}}, y\_{\sigma\_{A\_{k3}}} \rangle \\
\vdots \\
\langle \widetilde{\sigma}\_{A\_{kn}}; w\_{\sigma\_{A\_{kn}}}, u\_{\sigma\_{A\_{kn}}}, y\_{\sigma\_{A\_{kn}}} \rangle
\end{pmatrix} - \begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{kk}}; w\_{\sigma\_{A\_{kk}}}, u\_{\sigma\_{A\_{kk}}}, y\_{\sigma\_{A\_{kk}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{kk}}; w\_{\sigma\_{A\_{kk}}}, u\_{\sigma\_{A\_{kk}}}, y\_{\sigma\_{A\_{kk}}} \rangle \\
\vdots \\
\langle \widetilde{\sigma}\_{A\_{kk}}; w\_{\sigma\_{A\_{kk}}}, u\_{\sigma\_{A\_{kk}}}, y\_{\sigma\_{A\_{kk}}} \rangle
\end{pmatrix}} \tag{83}
$$

Subject to the condition that the denominator is nonzero:

$$
\begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{k2}}; w\_{\sigma\_{A\_{k2}}}, u\_{\sigma\_{A\_{k2}}}, y\_{\sigma\_{A\_{k2}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{k3}}; w\_{\sigma\_{A\_{k3}}}, u\_{\sigma\_{A\_{k3}}}, y\_{\sigma\_{A\_{k3}}} \rangle \\
\vdots \\
\langle \widetilde{\sigma}\_{A\_{kn}}; w\_{\sigma\_{A\_{kn}}}, u\_{\sigma\_{A\_{kn}}}, y\_{\sigma\_{A\_{kn}}} \rangle
\end{pmatrix} - \begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{kk}}; w\_{\sigma\_{A\_{kk}}}, u\_{\sigma\_{A\_{kk}}}, y\_{\sigma\_{A\_{kk}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{kk}}; w\_{\sigma\_{A\_{kk}}}, u\_{\sigma\_{A\_{kk}}}, y\_{\sigma\_{A\_{kk}}} \rangle \\
\vdots \\
\langle \widetilde{\sigma}\_{A\_{kk}}; w\_{\sigma\_{A\_{kk}}}, u\_{\sigma\_{A\_{kk}}}, y\_{\sigma\_{A\_{kk}}} \rangle
\end{pmatrix} \neq 0 \tag{84}
$$
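For two assets, Equations (82)–(84) reduce to x₁ = σ₁₂/(σ₁₂ − σ₁₁) and x₂ = σ₁₁/(σ₁₁ − σ₁₂), the specialization used in the numerical application below. A minimal Python sketch (the function name is illustrative and not from the paper; the degrees w, u, y pass through unchanged and are therefore omitted):

```python
def min_risk_weights_two_assets(var1, cov12):
    """Two-asset specialization of the minimal-risk weight formula:
    x1 = cov12 / (cov12 - var1),  x2 = var1 / (var1 - cov12)."""
    x1 = cov12 / (cov12 - var1)
    x2 = var1 / (var1 - cov12)
    return x1, x2

# Crisp parts from Section 6.1: var(A1) = 0.0225, cov(A1, A2) = 0.1914.
x1, x2 = min_risk_weights_two_assets(0.0225, 0.1914)
print(x1, x2)  # x1 ≈ 1.133, x2 ≈ -0.133 (compare the rounded percentages reported in Section 6.1)
```

The two fractions share the same denominator up to sign, so the weights always sum to one; a weight above 1 together with a negative weight corresponds to a short position.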

#### **6. Numerical Applications**

#### *6.1. Numerical Application for the Case of the Neutrosophic Portfolio Consisting of Two Financial Assets*

Two financial assets (*A*1, *A*2) are considered, whose financial asset returns are specified by two triangular neutrosophic fuzzy numbers of the form:

$$
\widetilde{R\_{A\_1}} = \langle (0.2, \, 0.3, \, 0.5); 0.5, \, 0.2, \, 0.3 \rangle \text{ for } \widetilde{R\_{A\_1}} \in [0.2; 0.5]
$$

$$
\widetilde{R\_{A\_2}} = \langle (0.1, \, 0.2, \, 0.3); 0.6, \, 0.3, \, 0.2 \rangle \text{ for } \widetilde{R\_{A\_2}} \in [0.1; 0.3]
$$

We aim to determine the weights *xA*1 and *xA*2 that minimize the neutrosophic portfolio risk.

Starting from Example 2, the following elements are known:


$$
\widetilde{\sigma}\_{f A\_1}^2 = \langle 0.0225; 0.5, 0.2, 0.3 \rangle
$$

$$
\widetilde{\sigma}\_{f A\_2}^2 = \langle 0.0180; 0.6, 0.3, 0.2 \rangle
$$

Also, from Example 2 we know that the covariance between asset *A*<sup>1</sup> and *A*<sup>2</sup> has the value:

$$\sigma\_{A\_1 A\_2} = \langle 0.1914; 0.5, 0.2, 0.3 \rangle$$

(a) In this context, the variance-covariance matrix has the following form:

$$
\Omega = \begin{pmatrix}
\widetilde{\sigma}\_{f A\_1 A\_1} & \widetilde{\sigma}\_{f A\_1 A\_2} \\
\widetilde{\sigma}\_{f A\_2 A\_1} & \widetilde{\sigma}\_{f A\_2 A\_2}
\end{pmatrix} = \begin{pmatrix}
\langle 0.0225; 0.5, 0.2, 0.3 \rangle & \langle 0.0705; 0.6, 0.2, 0.2 \rangle \\
\langle 0.0705; 0.6, 0.2, 0.2 \rangle & \langle 0.0180; 0.6, 0.3, 0.2 \rangle
\end{pmatrix}
$$

(b) It is known that the return of the neutrosophic portfolio *P* has the following form:

$$\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle = \langle x\_{A\_1}\overline{R}\_{A\_1}; w\_{R\_{A\_1}}, u\_{R\_{A\_1}}, y\_{R\_{A\_1}} \rangle + \langle x\_{A\_2}\overline{R}\_{A\_2}; w\_{R\_{A\_2}}, u\_{R\_{A\_2}}, y\_{R\_{A\_2}} \rangle$$

Also, it is given that *xA*2 = 1 − *xA*1 and that the portfolio has the form *P* = (*xA*1, *xA*2).

By substituting these expressions into the formula for the portfolio return, we get:

$$
\begin{aligned}
\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle &= \langle x\_{A\_1}\overline{R}\_{A\_1}; w\_{R\_{A\_1}}, u\_{R\_{A\_1}}, y\_{R\_{A\_1}} \rangle + \langle x\_{A\_2}\overline{R}\_{A\_2}; w\_{R\_{A\_2}}, u\_{R\_{A\_2}}, y\_{R\_{A\_2}} \rangle \\
&= \langle x\_{A\_1}(0.2 \, 0.3 \, 0.5); 0.5, 0.2, 0.3 \rangle + \langle \left(1 - x\_{A\_1}\right)(0.1 \, 0.2 \, 0.3); 0.6, 0.3, 0.2 \rangle
\end{aligned}
$$

$$\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle = \langle x\_{A\_1}(0.1 \, 0.1 \, 0.2); 0.6, \, 0.2, \, 0.2 \rangle + \langle (0.1 \, 0.2 \, 0.3); 0.6, \, 0.3, \, 0.2 \rangle$$
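The dependence of the triangular part of the portfolio return on *xA*1 can be checked with elementary componentwise arithmetic; a short Python sketch (the helper names are illustrative, not from the paper):

```python
def tri_scale(k, t):
    """Scalar multiple of a triangular number t = (a, b, c)."""
    return tuple(k * v for v in t)

def tri_add(s, t):
    """Componentwise sum of two triangular numbers."""
    return tuple(p + q for p, q in zip(s, t))

def portfolio_return(x):
    # x * (0.2, 0.3, 0.5) + (1 - x) * (0.1, 0.2, 0.3)
    return tri_add(tri_scale(x, (0.2, 0.3, 0.5)), tri_scale(1 - x, (0.1, 0.2, 0.3)))

def portfolio_return_collected(x):
    # Collecting terms in x gives x * (0.1, 0.1, 0.2) + (0.1, 0.2, 0.3).
    return tri_add(tri_scale(x, (0.1, 0.1, 0.2)), (0.1, 0.2, 0.3))
```

Both forms agree for every weight, and the boundary cases recover the individual asset returns: `portfolio_return(1.0)` gives `(0.2, 0.3, 0.5)` and `portfolio_return(0.0)` gives `(0.1, 0.2, 0.3)`.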

As for the portfolio risk, it will be of the following form:

$$
\langle \widetilde{\sigma}^2\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = \langle x^2\_{A\_1}\widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle + \langle x^2\_{A\_2}\widetilde{\sigma}^2\_{A\_2}; w\_{\sigma\_{A\_2}}, u\_{\sigma\_{A\_2}}, y\_{\sigma\_{A\_2}} \rangle + \langle 2x\_{A\_1}x\_{A\_2}\sigma\_{A\_1 A\_2}; w\_{\sigma\_A}, u\_{\sigma\_A}, y\_{\sigma\_A} \rangle
$$

By substituting *xA*2 = 1 − *xA*1 into this expression, we get:

$$
\begin{split}
\langle \widetilde{\sigma}^2\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle
&= \langle x^2\_{A\_1}\widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle + \langle (1 - x\_{A\_1})^2\widetilde{\sigma}^2\_{A\_2}; w\_{\sigma\_{A\_2}}, u\_{\sigma\_{A\_2}}, y\_{\sigma\_{A\_2}} \rangle \\
&\quad + \langle 2x\_{A\_1}(1 - x\_{A\_1})\sigma\_{A\_1 A\_2}; w\_{\sigma\_A}, u\_{\sigma\_A}, y\_{\sigma\_A} \rangle
\end{split}
$$

$$
\begin{array}{l}
\langle \widetilde{\sigma}^2\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = \langle x^2\_{A\_1} \, 0.0225; 0.5, 0.2, 0.3 \rangle + \langle (1 - 2x\_{A\_1} + x^2\_{A\_1}) \, 0.0180; 0.6, 0.3, 0.2 \rangle \\
\quad + \langle (2x\_{A\_1} - 2x^2\_{A\_1}) \, 0.0705; 0.6, 0.2, 0.2 \rangle
\end{array}
$$

After collecting terms:

$$
\langle \widetilde{\sigma}^2\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = \langle -0.1005 \, x^2\_{A\_1} + 0.105 \, x\_{A\_1} + 0.0180; 0.6, \, 0.2, \, 0.2 \rangle
$$
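As a cross-check, the coefficients of this quadratic in *xA*1 can be recomputed directly from the raw variances and covariance (a sketch; values are recomputed here, so they may differ in the last digit from the rounded figures printed above):

```python
# Crisp inputs of the two-asset example.
var1, var2, cov12 = 0.0225, 0.0180, 0.0705

# sigma_P^2(x) = var1*x^2 + var2*(1-x)^2 + 2*cov12*x*(1-x);
# collecting powers of x:
a = var1 + var2 - 2 * cov12   # coefficient of x^2
b = 2 * cov12 - 2 * var2      # coefficient of x
c = var2                      # constant term
print(a, b, c)  # ≈ -0.1005, 0.105, 0.018
```

The expanded polynomial agrees with the direct evaluation of the risk expression at every weight, which validates the collected coefficients.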

(c) Further on, we know from Example 1 that the covariance between *A*<sup>1</sup> and *A*<sup>2</sup> has the following form:

$$
\overline{\sigma}\_{fA\_1A\_2} = \langle 0.0705; 0.6, 0.2, 0.2 \rangle
$$

By replacing the values obtained in the expression of *xA*<sup>1</sup> and *xA*<sup>2</sup> it is obtained that:


$$
x\_{A\_1} = \frac{\langle \sigma\_{A\_1 A\_2}; w\_{\sigma\_A}, u\_{\sigma\_A}, y\_{\sigma\_A} \rangle}{\langle \sigma\_{A\_1 A\_2}; w\_{\sigma\_A}, u\_{\sigma\_A}, y\_{\sigma\_A} \rangle - \langle \widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle}
$$

$$
x\_{A\_1} = \frac{\langle 0.1914; 0.5, 0.2, 0.3 \rangle}{\langle 0.1914; 0.5, 0.2, 0.3 \rangle - \langle 0.0225; 0.5, 0.2, 0.3 \rangle} \times 100 = \frac{\langle 0.1914; 0.5, 0.2, 0.3 \rangle}{\langle 0.1689; 0.5, 0.2, 0.3 \rangle} \times 100 = 112.72\%
$$

$$
x\_{A\_2} = \frac{\langle \widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle}{\langle \widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle - \langle \sigma\_{A\_1 A\_2}; w\_{\sigma\_A}, u\_{\sigma\_A}, y\_{\sigma\_A} \rangle}
$$

$$
x\_{A\_2} = \frac{\langle 0.0225; 0.5, 0.2, 0.3 \rangle}{\langle 0.0225; 0.5, 0.2, 0.3 \rangle - \langle 0.1914; 0.5, 0.2, 0.3 \rangle} \times 100 = -\frac{\langle 0.0225; 0.5, 0.2, 0.3 \rangle}{\langle 0.1689; 0.5, 0.2, 0.3 \rangle} \times 100 = -13.250\%
$$

The portfolio risk will be given by the relationship:

$$
\begin{array}{l}
\langle \widetilde{\sigma}^2\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = \langle x^2\_{A\_1}\widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle + \langle x^2\_{A\_2}\widetilde{\sigma}^2\_{A\_2}; w\_{\sigma\_{A\_2}}, u\_{\sigma\_{A\_2}}, y\_{\sigma\_{A\_2}} \rangle \\
\quad + 2x\_{A\_1}x\_{A\_2}\langle \sigma\_{A\_1 A\_2}; w\_{\sigma\_A}, u\_{\sigma\_A}, y\_{\sigma\_A} \rangle \\
= (1.1272)^2\langle 0.0225; 0.5, 0.2, 0.3 \rangle + (-0.13250)^2\langle 0.0180; 0.6, 0.3, 0.2 \rangle \\
\quad + 2(1.1272)(-0.13250)\langle 0.1914; 0.5, 0.2, 0.3 \rangle \\
= \langle 0.0285; 0.5, 0.2, 0.3 \rangle + \langle 0.0033; 0.5, 0.2, 0.3 \rangle + \langle 0.05717; 0.5, 0.2, 0.3 \rangle \\
= \langle 0.08897; 0.5, 0.2, 0.3 \rangle
\end{array}
$$

$$
\langle \widetilde{\sigma}\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = \sqrt{\langle 0.08897; 0.5, 0.2, 0.3 \rangle} = 0.2982
$$

Conclusion: the neutrosophic portfolio risk is minimal, registering the value of 29.82%.

#### *6.2. Numerical Application for the Case of the Neutrosophic Portfolio Consisting of N Financial Assets*

For three financial assets held by three listed companies (*A*1, *A*2, *A*3), the financial returns were determined according to the information provided by the stock exchange website. The financial returns were fuzzified with the help of three triangular neutrosophic numbers and the following values were obtained:

$$
\widetilde{R\_{A\_1}} = \langle (0.3, \, 0.4, \, 0.6); 0.5, \, 0.2, \, 0.3 \rangle, \, \text{for } \widetilde{R\_{A\_1}} \in [0.3; 0.6]
$$

$$
\widetilde{R\_{A\_2}} = \langle (0.15, \, 0.25, \, 0.35); 0.6, \, 0.3, \, 0.2 \rangle, \, \text{for } \widetilde{R\_{A\_2}} \in [0.15; 0.35]
$$

$$
\widetilde{R\_{A\_3}} = \langle (0.25, \, 0.45, \, 0.65); 0.4, \, 0.3, \, 0.3 \rangle, \, \text{for } \widetilde{R\_{A\_3}} \in [0.25; 0.65]
$$

In order to establish the structure of the neutrosophic portfolio that minimizes the portfolio risk, the following steps are undertaken:

For computing the variance-covariance matrix, the financial returns of the three assets are established in a first stage according to the calculation formula:

$$\langle \widetilde{R}\_{A\_i}; w\_{R\_{A\_i}}, u\_{R\_{A\_i}}, y\_{R\_{A\_i}} \rangle = \langle \left( \frac{1}{6} \big(\widetilde{R}a\_{ai} + \widetilde{R}a\_{ci}\big) + \frac{2}{3} \widetilde{R}a\_{bi} \right); w\_{R\_{A\_i}}, u\_{R\_{A\_i}}, y\_{R\_{A\_i}} \rangle$$

Thus:

$$
\begin{aligned}
\langle \widetilde{R}\_{A\_1}; w\_{R\_{A\_1}}, u\_{R\_{A\_1}}, y\_{R\_{A\_1}} \rangle &= \langle \left( \tfrac{1}{6}(0.3 + 0.6) + \tfrac{2}{3} \times 0.4 \right); 0.5, 0.2, 0.3 \rangle \\
&= \langle 0.166 \times 0.9 + 0.666 \times 0.4; 0.5, 0.2, 0.3 \rangle \\
&= \langle 0.149 + 0.266; 0.5, 0.2, 0.3 \rangle = \langle 0.415; 0.5, 0.2, 0.3 \rangle
\end{aligned}
$$

Proceeding in the same way, we get the following results:

$$
\langle \overline{R}\_{A\_2}; w\_{R\_{A\_2}}, u\_{R\_{A\_2}}, y\_{R\_{A\_2}} \rangle = \langle 0.249; 0.6, 0.3, 0.2 \rangle
$$

$$
\langle \overline{R}\_{A\_3}; w\_{R\_{A\_3}}, u\_{R\_{A\_3}}, y\_{R\_{A\_3}} \rangle = \langle 0.448; 0.4, 0.3, 0.3 \rangle
$$
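The crisp part of each return above is the expected value E_f = (a + c)/6 + 2b/3 of a triangular number (a, b, c); a small Python sketch (the slight differences from the text's 0.415, 0.249 and 0.448 come from the truncated decimals 0.166 and 0.666 used there):

```python
def expected_return(a, b, c):
    """E_f of a triangular neutrosophic number <(a, b, c); w, u, y>."""
    return (a + c) / 6 + 2 * b / 3

# The three fuzzified returns of Section 6.2.
for tri in [(0.3, 0.4, 0.6), (0.15, 0.25, 0.35), (0.25, 0.45, 0.65)]:
    print(expected_return(*tri))  # ≈ 0.4167, 0.25, 0.45
```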

The following calculation formula is used to determine the variance of the three financial assets:

$$
\begin{array}{c}
\widetilde{\sigma}^2\_{f A\_i} = \langle \frac{1}{4} \Big[ \big( \widetilde{R}a\_{bi} - \widetilde{R}a\_{ai} \big)^2 + \big( \widetilde{R}a\_{ci} - \widetilde{R}a\_{bi} \big)^2 \Big]; w\_{\widetilde{R}a}, u\_{\widetilde{R}a}, y\_{\widetilde{R}a} \rangle \\
+ \langle \frac{2}{3} \Big[ \widetilde{R}a\_{ai} \big( \widetilde{R}a\_{bi} - \widetilde{R}a\_{ai} \big) - \widetilde{R}a\_{ci} \big( \widetilde{R}a\_{ci} - \widetilde{R}a\_{bi} \big) \Big]; w\_{\widetilde{R}a}, u\_{\widetilde{R}a}, y\_{\widetilde{R}a} \rangle \\
+ \langle \frac{1}{2} \big( \widetilde{R}a\_{ai}^2 + \widetilde{R}a\_{ci}^2 \big); w\_{\widetilde{R}a}, u\_{\widetilde{R}a}, y\_{\widetilde{R}a} \rangle - \langle \frac{1}{2} E\_f^2 \big( \widetilde{R}a\_i \big); w\_{\widetilde{R}a}, u\_{\widetilde{R}a}, y\_{\widetilde{R}a} \rangle
\end{array}
$$

By replacing the data in the above expression, we will have:

$$\widetilde{\sigma}^2\_{f A\_1} = \langle \tfrac{1}{4} [(0.4 - 0.3)^2 + (0.6 - 0.4)^2]; 0.5, 0.2, 0.3 \rangle + \langle \tfrac{2}{3} (0.3(0.4 - 0.3) - 0.6(0.6 - 0.4)); 0.5, 0.2, 0.3 \rangle + \langle \tfrac{1}{2} (0.3^2 + 0.6^2); 0.5, 0.2, 0.3 \rangle - \langle \tfrac{1}{2} (0.415)^2; 0.5, 0.2, 0.3 \rangle$$

For the remaining values we obtain:

$$
\widetilde{\sigma}^2\_{f A\_2} = \langle 0.033; 0.6, 0.3, 0.2 \rangle
$$

$$
\widetilde{\sigma}^2\_{f A\_3} = \langle 0.0910; 0.4, 0.3, 0.3 \rangle
$$
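The variance formula can likewise be scripted; a hedged sketch (with the expectation term subtracted, the recomputed value for *A*2 matches the text, while *A*1 and *A*3 recompute slightly differently, presumably because of the rounding used in the paper's intermediate steps):

```python
def expected_return(a, b, c):
    return (a + c) / 6 + 2 * b / 3

def variance(a, b, c):
    """Variance of a triangular neutrosophic number <(a, b, c); w, u, y>."""
    e = expected_return(a, b, c)
    return (0.25 * ((b - a) ** 2 + (c - b) ** 2)
            + (2 / 3) * (a * (b - a) - c * (c - b))
            + 0.5 * (a ** 2 + c ** 2)
            - 0.5 * e ** 2)

print(variance(0.15, 0.25, 0.35))  # ≈ 0.0329, vs. <0.033; 0.6, 0.3, 0.2> in the text
```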

In order to establish the covariance between these three assets, that is, to measure the intensity of the connection between the financial assets, the following calculation formula is applied:

$$
\begin{array}{l}
cov\big(\widetilde{R}a\_i, \widetilde{R}a\_j\big) = \langle \frac{1}{4} \big[ \big(\widetilde{R}a\_{bi} - \widetilde{R}a\_{ai}\big)\big(\widetilde{R}a\_{bj} - \widetilde{R}a\_{aj}\big) + \big(\widetilde{R}a\_{ci} - \widetilde{R}a\_{bi}\big)\big(\widetilde{R}a\_{cj} - \widetilde{R}a\_{bj}\big) \big] \\
\quad + \frac{1}{3} \big[ \widetilde{R}a\_{aj}\big(\widetilde{R}a\_{bi} - \widetilde{R}a\_{ai}\big) + \widetilde{R}a\_{ai}\big(\widetilde{R}a\_{bj} - \widetilde{R}a\_{aj}\big) - \widetilde{R}a\_{ci}\big(\widetilde{R}a\_{cj} - \widetilde{R}a\_{bj}\big) - \widetilde{R}a\_{cj}\big(\widetilde{R}a\_{ci} - \widetilde{R}a\_{bi}\big) \big] \\
\quad + \frac{1}{2} \big( \widetilde{R}a\_{ai}\widetilde{R}a\_{aj} + \widetilde{R}a\_{ci}\widetilde{R}a\_{cj} \big) + \frac{1}{2} E\_f\big(\widetilde{R}a\_i\big) E\_f\big(\widetilde{R}a\_j\big); \, w\_{\widetilde{R}a\_i} \wedge w\_{\widetilde{R}a\_j}, u\_{\widetilde{R}a\_i} \vee u\_{\widetilde{R}a\_j}, y\_{\widetilde{R}a\_i} \vee y\_{\widetilde{R}a\_j} \rangle
\end{array}
$$

$$
\sigma\_{A\_1 A\_2} = \langle 0.160; 0.6, 0.2, 0.2 \rangle
$$

Proceeding in the same manner, the following results are obtained:

$$\sigma\_{A\_1 A\_3} = \langle 0.284; 0.6, 0.2, 0.2 \rangle$$

$$\sigma\_{A\_2A\_3} = \langle 0.171; 0.6, 0.2, 0.2 \rangle$$
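A scripted form of the covariance reproduces the three values above to within rounding; a sketch (note that, matching the numbers reported in the text, the cross-expectation term enters with a plus sign):

```python
def expected_return(a, b, c):
    return (a + c) / 6 + 2 * b / 3

def covariance(p, q):
    """Covariance of two triangular neutrosophic returns p = (a1,b1,c1), q = (a2,b2,c2)."""
    (a1, b1, c1), (a2, b2, c2) = p, q
    e1, e2 = expected_return(*p), expected_return(*q)
    return (0.25 * ((b1 - a1) * (b2 - a2) + (c1 - b1) * (c2 - b2))
            + (1 / 3) * (a2 * (b1 - a1) + a1 * (b2 - a2)
                         - c1 * (c2 - b2) - c2 * (c1 - b1))
            + 0.5 * (a1 * a2 + c1 * c2)
            + 0.5 * e1 * e2)

A1, A2, A3 = (0.3, 0.4, 0.6), (0.15, 0.25, 0.35), (0.25, 0.45, 0.65)
print(covariance(A1, A2), covariance(A1, A3), covariance(A2, A3))
# ≈ 0.159, 0.286, 0.172 — within rounding of <0.160>, <0.284> and <0.171> above
```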

The variance-covariance matrix will be of the form:

$$
\Omega = \begin{pmatrix}
\widetilde{\sigma}\_{f A\_{11}} & \widetilde{\sigma}\_{f A\_{12}} & \widetilde{\sigma}\_{f A\_{13}} \\
\widetilde{\sigma}\_{f A\_{21}} & \widetilde{\sigma}\_{f A\_{22}} & \widetilde{\sigma}\_{f A\_{23}} \\
\widetilde{\sigma}\_{f A\_{31}} & \widetilde{\sigma}\_{f A\_{32}} & \widetilde{\sigma}\_{f A\_{33}}
\end{pmatrix}
$$

By replacing the above values, we will have:

$$
\Omega = \begin{pmatrix}
\langle 0.0925; 0.5, 0.2, 0.3 \rangle & \langle 0.160; 0.6, 0.2, 0.2 \rangle & \langle 0.284; 0.6, 0.2, 0.2 \rangle \\
\langle 0.160; 0.6, 0.2, 0.2 \rangle & \langle 0.033; 0.6, 0.3, 0.2 \rangle & \langle 0.171; 0.6, 0.2, 0.2 \rangle \\
\langle 0.284; 0.6, 0.2, 0.2 \rangle & \langle 0.171; 0.6, 0.2, 0.2 \rangle & \langle 0.0910; 0.4, 0.3, 0.3 \rangle
\end{pmatrix}
$$
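Treating the crisp parts of Ω as an ordinary symmetric matrix, the crisp portfolio variance is the quadratic form xᵀΩx; a sketch in plain Python (the neutrosophic degrees would be aggregated separately with ∧/∨, so this recovers only the crisp component and can differ from the neutrosophic figures reported later in this section):

```python
# Crisp parts of the variance-covariance matrix above.
omega = [[0.0925, 0.160, 0.284],
         [0.160, 0.033, 0.171],
         [0.284, 0.171, 0.0910]]
x = [0.2376, 0.6808, 0.0816]   # the portfolio weights obtained below

# Quadratic form x^T * omega * x.
risk = sum(x[i] * omega[i][j] * x[j] for i in range(3) for j in range(3))
symmetric = all(omega[i][j] == omega[j][i] for i in range(3) for j in range(3))
print(symmetric, risk)
```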

According to Theorem 2 the weight of the financial asset *xA*<sup>1</sup> in the total value of the portfolio will be given by the relation:

$$
x\_{A\_1} = \frac{\begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{12}}; w\_{\sigma\_{A\_{12}}}, u\_{\sigma\_{A\_{12}}}, y\_{\sigma\_{A\_{12}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{13}}; w\_{\sigma\_{A\_{13}}}, u\_{\sigma\_{A\_{13}}}, y\_{\sigma\_{A\_{13}}} \rangle
\end{pmatrix}}{\begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{12}}; w\_{\sigma\_{A\_{12}}}, u\_{\sigma\_{A\_{12}}}, y\_{\sigma\_{A\_{12}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{13}}; w\_{\sigma\_{A\_{13}}}, u\_{\sigma\_{A\_{13}}}, y\_{\sigma\_{A\_{13}}} \rangle
\end{pmatrix} - \begin{pmatrix}
\langle \widetilde{\sigma}\_{A\_{11}}; w\_{\sigma\_{A\_{11}}}, u\_{\sigma\_{A\_{11}}}, y\_{\sigma\_{A\_{11}}} \rangle \\
\langle \widetilde{\sigma}\_{A\_{11}}; w\_{\sigma\_{A\_{11}}}, u\_{\sigma\_{A\_{11}}}, y\_{\sigma\_{A\_{11}}} \rangle
\end{pmatrix}}
$$

Through the calculations the following value is obtained:

$$\mathbf{x}\_{A\_1} = 0.2376$$

For the weights *xA*2 and *xA*3 of the financial assets in the total value of the neutrosophic portfolio, the same calculation formula is used and the results are the following:

$$\begin{aligned} \mathbf{x}\_{A\_2} &= 0.6808 \\\\ \mathbf{x}\_{A\_3} &= 0.0816 \end{aligned}$$

Conclusion: In order to mitigate the neutrosophic portfolio risk, it is necessary to invest: in the first financial asset (*A*1) a weight of *xA*<sup>1</sup> = 0.2376, in the second asset (*A*2) a weight of *xA*<sup>2</sup> = 0.6808 and respectively in the third financial asset (*A*3) a weight *xA*<sup>3</sup> = 0.0816.

The neutrosophic portfolio return will be:

$$
\begin{split}
\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle
&= \langle x\_{A\_1}\overline{R}\_{A\_1}; w\_{R\_{A\_1}}, u\_{R\_{A\_1}}, y\_{R\_{A\_1}} \rangle + \langle x\_{A\_2}\overline{R}\_{A\_2}; w\_{R\_{A\_2}}, u\_{R\_{A\_2}}, y\_{R\_{A\_2}} \rangle \\
&\quad + \langle x\_{A\_3}\overline{R}\_{A\_3}; w\_{R\_{A\_3}}, u\_{R\_{A\_3}}, y\_{R\_{A\_3}} \rangle
\end{split}
$$

By replacing in the formula, we obtain:

$$
\begin{aligned}
\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle &= \langle 0.2376 \times 0.415; 0.5, 0.2, 0.3 \rangle + \langle 0.6808 \times 0.249; 0.6, 0.3, 0.2 \rangle \\
&\quad + \langle 0.0816 \times 0.448; 0.4, 0.3, 0.3 \rangle
\end{aligned}
$$

$$
\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle = \langle 0.0986; 0.5, \, 0.2, \, 0.3 \rangle + \langle 0.1849; 0.6, \, 0.3, \, 0.2 \rangle + \langle 0.0365; 0.4, \, 0.3, \, 0.3 \rangle
$$


After performing the neutrosophic calculations we obtain:

$$\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle = \langle 0.3200; 0.6, 0.2, 0.2 \rangle$$

The neutrosophic portfolio risk will be as follows:

$$
\begin{array}{l}
\langle \widetilde{\sigma}^2\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = \langle x^2\_{A\_1}\widetilde{\sigma}^2\_{A\_1}; w\_{\sigma\_{A\_1}}, u\_{\sigma\_{A\_1}}, y\_{\sigma\_{A\_1}} \rangle + \langle x^2\_{A\_2}\widetilde{\sigma}^2\_{A\_2}; w\_{\sigma\_{A\_2}}, u\_{\sigma\_{A\_2}}, y\_{\sigma\_{A\_2}} \rangle \\
\quad + \langle x^2\_{A\_3}\widetilde{\sigma}^2\_{A\_3}; w\_{\sigma\_{A\_3}}, u\_{\sigma\_{A\_3}}, y\_{\sigma\_{A\_3}} \rangle \\
\quad + 2x\_{A\_1}x\_{A\_2}\langle \sigma\_{A\_1 A\_2}; w\_{\sigma\_{A\_1}} \wedge w\_{\sigma\_{A\_2}}, u\_{\sigma\_{A\_1}} \vee u\_{\sigma\_{A\_2}}, y\_{\sigma\_{A\_1}} \vee y\_{\sigma\_{A\_2}} \rangle \\
\quad + 2x\_{A\_1}x\_{A\_3}\langle \sigma\_{A\_1 A\_3}; w\_{\sigma\_{A\_1}} \wedge w\_{\sigma\_{A\_3}}, u\_{\sigma\_{A\_1}} \vee u\_{\sigma\_{A\_3}}, y\_{\sigma\_{A\_1}} \vee y\_{\sigma\_{A\_3}} \rangle \\
\quad + 2x\_{A\_2}x\_{A\_3}\langle \sigma\_{A\_2 A\_3}; w\_{\sigma\_{A\_2}} \wedge w\_{\sigma\_{A\_3}}, u\_{\sigma\_{A\_2}} \vee u\_{\sigma\_{A\_3}}, y\_{\sigma\_{A\_2}} \vee y\_{\sigma\_{A\_3}} \rangle
\end{array}
$$

$$
\langle \widetilde{\sigma}\_P; w\_{\sigma\_P}, u\_{\sigma\_P}, y\_{\sigma\_P} \rangle = 49.42\%
$$
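In each cross term, the truth degree is aggregated with ∧ (minimum) and the indeterminacy and falsity degrees with ∨ (maximum); a minimal Python sketch of this aggregation rule (the function name is illustrative):

```python
def combine_degrees(d1, d2):
    """Aggregate two (w, u, y) degree triplets: w by min (∧), u and y by max (∨)."""
    (w1, u1, y1), (w2, u2, y2) = d1, d2
    return (min(w1, w2), max(u1, u2), max(y1, y2))

# Degrees of sigma_{A1} and sigma_{A2} from Section 6.2:
print(combine_degrees((0.5, 0.2, 0.3), (0.6, 0.3, 0.2)))  # (0.5, 0.3, 0.3)
```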

Conclusion: In order to minimize the neutrosophic portfolio risk, the investments in financial assets must have the following weights: *xA*1 = 23.76%, *xA*2 = 68.08% and *xA*3 = 8.16%, respectively, in the first (*A*1), second (*A*2) and third (*A*3) financial asset. For these weights of the investments in financial assets *A*1, *A*2, *A*3, the return of the neutrosophic portfolio will be of the form:

$$
\langle \overline{R}\_P; w\_{R\_P}, u\_{R\_P}, y\_{R\_P} \rangle = \langle 0.3200; 0.6, 0.2, 0.2 \rangle
$$

The neutrosophic portfolio risk will be minimal, with the value ⟨σ̃_P; w_{σP}, u_{σP}, y_{σP}⟩ = 49.42%. The risk was determined based on the high values of the return. The obtained results validate the risk minimization model for the neutrosophic portfolio, in the sense that even for a financial return of 32%, the portfolio risk remains high, reaching the value of 49.42%.

#### **7. General Conclusions and Limitation**

The neutrosophic portfolios are made up of financial assets for which the financial performance indicators can be determined, namely: the neutrosophic return ⟨E_f(R̃_{A_i}); w_{R̃_{A_i}}, u_{R̃_{A_i}}, y_{R̃_{A_i}}⟩ specific to the financial asset A_i; the neutrosophic risk ⟨σ̃²_{fA_i}; w_{σ_A}, u_{σ_A}, y_{σ_A}⟩ specific to the same financial asset; and the neutrosophic covariance ⟨cov(R̃_{A_1}, R̃_{A_2}); w_{R̃_{A_1}} ∧ w_{R̃_{A_2}}, u_{R̃_{A_1}} ∨ u_{R̃_{A_2}}, y_{R̃_{A_1}} ∨ y_{R̃_{A_2}}⟩ between two financial assets A_1 and A_2, which measures the intensity of the links between the neutrosophic returns of the two financial assets. Such a portfolio, made up of financial assets for which these financial performance indicators can be determined, is called a neutrosophic portfolio.

For the neutrosophic portfolio ⟨P; w_P, u_P, y_P⟩, the following can be determined: the neutrosophic return of the portfolio ⟨R̄_P; w_{R_P}, u_{R_P}, y_{R_P}⟩ and the neutrosophic portfolio risk ⟨σ̃²_P; w_{σ_P}, u_{σ_P}, y_{σ_P}⟩. These two are the fundamental performance indicators that characterize neutrosophic portfolios.

Thus, the neutrosophic portfolio return depends on the weight x_{A_i} held by each financial asset in the total value of the neutrosophic portfolio, as well as on the neutrosophic return of each financial asset that makes up the portfolio, ⟨R̄_{A_i}; w_{R_{A_i}}, u_{R_{A_i}}, y_{R_{A_i}}⟩. At the same time, the neutrosophic portfolio risk depends on the weights x_{A_i}, on the neutrosophic risk of each financial asset, ⟨σ̃²_{fA_i}; w_{σ_A}, u_{σ_A}, y_{σ_A}⟩, and on the covariance between pairs of financial assets, ⟨σ_{A_iA_j}; w_{σ_{A_i}} ∧ w_{σ_{A_j}}, u_{σ_{A_i}} ∨ u_{σ_{A_j}}, y_{σ_{A_i}} ∨ y_{σ_{A_j}}⟩.

Also, the neutrosophic portfolio risk for N financial assets admits a minimum value at the point where the first-order derivative of the neutrosophic portfolio risk is zero, ∂⟨σ̃²_P; w_{σ_P}, u_{σ_P}, y_{σ_P}⟩/∂x_{A_i} = 0. Thus, it can be determined what the weight x_{A_i} of every financial asset should be in the total value of the neutrosophic portfolio so that the portfolio risk is minimal, ⟨σ̃²_P; w_{σ_P}, u_{σ_P}, y_{σ_P}⟩ → min. From the calculations, but also from studying this risk category, it was found that the weight of each financial asset in the total value of the portfolio depends on the individual neutrosophic risk of that asset, but also on the covariance between pairs of financial assets.

The neutrosophic portfolios enable access to complete information for financial market investors, in order to substantiate investment decisions. This information refers to the probability of realizing the neutrosophic portfolio return, which in turn is influenced by the individual probabilities of achieving the desired return for each financial asset that enters the portfolio structure. The neutrosophic portfolios also provide information regarding the probability of producing the neutrosophic portfolio risk, which depends on the probability of producing the neutrosophic risk of each financial asset that enters the portfolio. These categories of information are stratified by means of linguistic variables, so that we distinguish: the probability that the return is obtained, or the risk produced, almost certainly; the probability that the return is not realized, or the risk not produced; and the probability that the return or the risk is uncertain.

Obtaining simultaneous information on risk and return at the level of the neutrosophic portfolio, as well as on the probability of producing the risk and realizing the return, both for the portfolio and for its component financial assets, confers a strongly innovative character on this research paper. Neutrosophic portfolios also have certain limits, which mainly concern determining the probability of producing the risk and/or of realizing the return, both at the level of each financial asset and at the level of the neutrosophic portfolio as a whole.

**Author Contributions:** Conceptualization, M.-I.B., I.-A.B.; Data curation, C.D.; Formal analysis, M.-I.B. and I.-A.B.; Investigation, M.-I.B., I.-A.B. and C.D.; Methodology, M.-I.B. and I.-A.B.; Supervision, M.-I.B.; Validation, I.-A.B.; Visualization, C.D.; Writing—original draft, M.-I.B. and I.-A.B.; Writing—review & editing, C.D.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Generalized Abel-Grassmann's Neutrosophic Extended Triplet Loop**

**Xiaogang An 1,\*, Xiaohong Zhang <sup>1</sup> and Yingcang Ma <sup>2</sup>**


Received: 5 November 2019; Accepted: 4 December 2019; Published: 9 December 2019

**Abstract:** A group is an algebraic system that characterizes symmetry. As a generalization of the concept of a group, semigroups and various non-associative groupoids can be considered as algebraic abstractions of generalized symmetry. In this paper, the notion of generalized Abel-Grassmann's neutrosophic extended triplet loop (GAG-NET-Loop) is proposed and some properties are discussed. In particular, the following conclusions are strictly proved: (1) an algebraic system is an AG-NET-Loop if and only if it is a strong inverse AG-groupoid; (2) an algebraic system is a GAG-NET-Loop if and only if it is a quasi strong inverse AG-groupoid; (3) an algebraic system is a weak commutative GAG-NET-Loop if and only if it is a quasi Clifford AG-groupoid; and (4) a finite interlaced AG-(l,l)-Loop is a strong AG-(l,l)-Loop.

**Keywords:** Abel-Grassmann's neutrosophic extended triplet loop; generalized Abel-Grassmann's neutrosophic extended triplet loop; strong inverse AG-groupoid; quasi strong inverse AG-groupoid; quasi Clifford AG-groupoid

#### **1. Introduction**

The concept of an Abel-Grassmann's groupoid (AG-groupoid) was first given by Kazim and Naseeruddin [1] in 1972 and they have called it a left almost semigroup (LA-semigroup). In [2], the same structure is called a left invertive groupoid. In [3–9], some properties and different classes of an AG-groupoid are investigated.

Smarandache proposed the new concept of a neutrosophic set, which is an extension of the fuzzy set and the intuitionistic fuzzy set [10]. Until now, neutrosophic sets have been applied to many fields, such as decision making [11–13], forecasting [14], best product selection [15], the shortest path problem [16], minimum spanning trees [17], neutrosophic portfolios of financial assets [18], etc. Some new theoretical studies have also been developed [19–24]. In [25], Xiaohong Zhang introduced the concept of the Abel-Grassmann's neutrosophic extended triplet loop (AG-NET-loop) and discussed some of its properties and structure. Recently, a new algebraic system, the generalized neutrosophic extended triplet set, was proposed in [26].

In this paper, we combine the notions of generalized neutrosophic extended triplet set and AG-groupoid and introduce the new concept of the generalized Abel-Grassmann's neutrosophic extended triplet loop (GAG-NET-loop); that is, a GAG-NET-loop is both an AG-groupoid and a generalized neutrosophic extended triplet set. We analyze in depth the internal connections between GAG-NET-loops and other classes of AG-groupoids and obtain some important results.

The GAG-NET-loop is an extension of the AG-NET-loop: compared with the AG-NET-loop, it relaxes the restriction on the elements of the AG-groupoid. According to our research, corresponding to the decomposition theorem of the AG-NET-loop, some GAG-NET-loops can also be decomposed into smaller ones. This also mirrors, in the non-associative setting, how the study of regular semigroups extends to quasi-regular semigroups.

The paper is organized as follows. Section 2 gives the basic definitions. Some properties of finite interlaced AG-(l,l)-Loops and some structures of strong inverse AG-groupoids are discussed in Section 3. We propose the GAG-NET-Loop and discuss its properties and structure in Section 4. Finally, the summary and future work are presented in Section 5.

#### **2. Basic Definitions**

In this section, the related research and results of the AG-NET-loop are presented. Some related notions are introduced first.

Let *S* be a non-empty set and ∗ a binary operation on *S*. If *a* ∗ *b* ∈ *S* for all *a*, *b* ∈ *S*, then (*S*, ∗) is called a groupoid. A groupoid (*S*, ∗) is called an Abel-Grassmann's groupoid (AG-groupoid) [27,28] if it satisfies the left invertive law, that is, for all *a*, *b*, *c* ∈ *S*, (*a* ∗ *b*) ∗ *c* = (*c* ∗ *b*) ∗ *a*. In an AG-groupoid the medial law holds: for all *a*, *b*, *c*, *d* ∈ *S*, (*a* ∗ *b*) ∗ (*c* ∗ *d*) = (*a* ∗ *c*) ∗ (*b* ∗ *d*). In an AG-groupoid (*S*, ∗), for all *a* ∈ *S* and *n* ∈ *Z*<sup>+</sup>, the power *a<sup>n</sup>* is defined recursively: *a*<sup>1</sup> = *a*, *a*<sup>2</sup> = *a* ∗ *a*, *a*<sup>3</sup> = *a*<sup>2</sup> ∗ *a* = (*a* ∗ *a*) ∗ *a*, *a*<sup>4</sup> = *a*<sup>3</sup> ∗ *a*, ..., *a<sup>n</sup>* = *a*<sup>*n*−1</sup> ∗ *a*.
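
The laws above can be verified by brute force on a small instance. The following sketch is our own illustration (not from the paper): it uses the well-known AG-groupoid on *Z*<sub>3</sub> given by *a* ∗ *b* = (*b* − *a*) mod 3, together with the recursive power *a<sup>n</sup>* = *a*<sup>*n*−1</sup> ∗ *a*.

```python
# Our own illustration: subtraction mod 3 is a classic AG-groupoid.
S = range(3)

def op(a, b):
    return (b - a) % 3

# Left invertive law: (a*b)*c = (c*b)*a for all a, b, c.
left_invertive = all(op(op(a, b), c) == op(op(c, b), a)
                     for a in S for b in S for c in S)

# Medial law: (a*b)*(c*d) = (a*c)*(b*d); it follows from the left invertive law.
medial = all(op(op(a, b), op(c, d)) == op(op(a, c), op(b, d))
             for a in S for b in S for c in S for d in S)

# The operation is not associative, so (S, *) is a proper AG-groupoid.
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in S for b in S for c in S)

def power(a, n):
    """a^1 = a, a^n = a^(n-1) * a (the recursive definition above)."""
    return a if n == 1 else op(power(a, n - 1), a)
```

Running the checks confirms that both laws hold while associativity fails, e.g. (1 ∗ 0) ∗ 0 ≠ 1 ∗ (0 ∗ 0).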

**Definition 1** ([29])**.** *Let N be a non-empty set together with a binary operation* ∗*. Then, N is called a neutrosophic extended triplet set if for any a* ∈ *N, there exists a neutral of "a" (denoted by neut*(*a*)*) and an opposite of "a" (denoted by anti*(*a*)*), such that neut*(*a*) ∈ *N, anti*(*a*) ∈ *N and:*

$$a \ast neut(a) = neut(a) \ast a = a,$$

$$a \ast anti(a) = anti(a) \ast a = neut(a).$$

*The triplet* (*a*, *neut*(*a*), *anti*(*a*)) *is called a neutrosophic extended triplet.*

Note that, for a neutrosophic triplet set (*N*, ∗), *a* ∈ *N*, *neut*(*a*) and *anti*(*a*) may not be unique. In order not to cause ambiguity, we use the following notations to distinguish: *neut*(*a*) denotes any certain one of neutral of *a*, {*neut*(*a*)} denotes the set of all neutral of *a*, *anti*(*a*) denotes any certain one of opposite of *a*, and {*anti*(*a*)} denotes the set of all opposite of *a*.
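
The sets {*neut*(*a*)} and {*anti*(*a*)} of a finite groupoid can be enumerated directly from its Cayley table. The helpers below are our own sketch; the table used (the cyclic group *Z*<sub>3</sub> under addition) is our own example, not one of the paper's tables.

```python
# Enumerate the neutral and opposite sets of Definition 1 from a Cayley table
# (table[i][j] encodes i * j; elements are 0..n-1).
def neutrals(table, a):
    n = len(table)
    return {e for e in range(n) if table[a][e] == table[e][a] == a}

def opposites(table, a):
    n = len(table)
    return {x for x in range(n)
            for e in neutrals(table, a)
            if table[a][x] == table[x][a] == e}

# Our own example: Z_3 under addition, where neut(a) = 0 and anti(a) = -a.
Z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
```

For instance, `neutrals(Z3, 1)` returns `{0}` and `opposites(Z3, 1)` returns `{2}`, the familiar group identity and inverse.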

**Definition 2** ([25])**.** *Let* (*N*, ∗) *be a neutrosophic extended triplet set. Then, N is called a neutrosophic extended triplet loop (NET-Loop), if* (*N*, ∗) *is well-defined, i.e., for any a*, *b* ∈ *N, one has a* ∗ *b* ∈ *N.*

**Definition 3** ([25])**.** *Let* (*N*, ∗) *be a neutrosophic extended triplet loop (NET-Loop). Then, N is called an AG-NET-Loop, if* (*N*, ∗) *is an AG-groupoid.*

*An AG-NET-Loop N is called a commutative AG-NET-Loop if for all a*, *b* ∈ *N*, *a* ∗ *b* = *b* ∗ *a.*

**Theorem 1** ([25])**.** *Let* (*N*, ∗) *be an AG-NET-loop. Then, for any x*, *y* ∈ {*anti*(*a*)}*,*


**Definition 4** ([5])**.** *An element a of an AG-groupoid* (*S*, ∗) *is called regular if there exists x* ∈ *S such that a* = (*a* ∗ *x*) ∗ *a, and S is called regular if all elements of S are regular.*

*An AG-groupoid* (*S*, ∗) *is called quasi regular if, for any a* ∈ *S, there exists a positive integer n such that a<sup>n</sup> is regular.*

**Definition 5** ([6])**.** *An element a of an AG-groupoid* (*S*, ∗) *is called a fully regular element of S if there exist some p*, *q*,*r*,*s*, *t*, *u*, *v*, *w*, *x*, *y*, *z* ∈ *S (p*, *q*, ..., *z may be repeated) such that*

$$\begin{aligned} a &= (p \ast a^2) \ast q = (r \ast a) \ast (a \ast s) = (a \ast t) \ast (a \ast u) \\ &= (a \ast a) \ast v = w \ast (a \ast a) = (x \ast a) \ast (y \ast a) \\ &= (a^2 \ast z) \ast a^2. \end{aligned}$$

*An AG-groupoid* (*S*, ∗) *is called fully regular if all elements of S are fully regular.*

*An AG-groupoid* (*S*, ∗) *is called quasi fully regular if for any a* ∈ *S, there exists a positive integer n such that a<sup>n</sup> is fully regular.*

#### **3. Strong Inverse AG-Groupoid and Finite Interlaced AG-Groupoid**

**Definition 6** ([30])**.** *An AG-groupoid* (*S*, ∗) *is called an inverse AG-groupoid if for each element a* ∈ *S, there exists an element x in S such that a* = (*a* ∗ *x*) ∗ *a and x* = (*x* ∗ *a*) ∗ *x.*

**Definition 7.** *An AG-groupoid* (*S*, ∗) *is called a strong inverse AG-groupoid if for any a* ∈ *S, there exists a unary operation a* <sup>→</sup> *<sup>a</sup>*−<sup>1</sup> *on S such that*

$$(a^{-1})^{-1} = a, \ (a \ast a^{-1}) \ast a = a \ast (a \ast a^{-1}) = a, \ a \ast a^{-1} = a^{-1} \ast a.$$

The following example shows that an inverse AG-groupoid may not be a strong inverse AG-groupoid.

**Example 1.** *Let S* = {1, 2, 3, 4} *and let the operation* ∗ *on S be defined as in Table 1. Since* 1 = (1 ∗ 3) ∗ 1, 3 = (3 ∗ 1) ∗ 3, 2 = (2 ∗ 4) ∗ 2, 4 = (4 ∗ 2) ∗ 4*, by Definition 6, S is an inverse AG-groupoid. Since* (1 ∗ 1) ∗ 1 = 3 ≠ 1, (1 ∗ 2) ∗ 1 = 4 ≠ 1, (1 ∗ 3) ∗ 1 = 1 ≠ 3 = 1 ∗ (1 ∗ 3), (1 ∗ 4) ∗ 1 = 2 ≠ 1*, by Definition 7, S is not a strong inverse AG-groupoid.*



**Proposition 1.** *Let* (*N*, ∗) *be an AG-NET-loop. Then, for any a* ∈ *N, x* ∈ {*anti*(*a*)}*,*

$$neut(neut(a) \ast x) \ast anti(neut(a) \ast x) = a.$$

**Proof.** For any *x* ∈ {*anti*(*a*)}, we have

$$\begin{aligned} (neut(a) \ast x) \ast neut(a) &= (neut(a) \ast x) \ast (a \ast x) \\ &= (neut(a) \ast a) \ast (x \ast x) \quad (\text{applying the medial law}) \\ &= (a \ast neut(a)) \ast (x \ast x) \\ &= (a \ast x) \ast (neut(a) \ast x) \quad (\text{applying the medial law}) \\ &= neut(a) \ast (neut(a) \ast x), \end{aligned}$$

$$\begin{aligned} neut(a) \ast (neut(a) \ast x) &= (x \ast a) \ast (neut(a) \ast x) \\ &= (x \ast neut(a)) \ast (a \ast x) \quad (\text{applying the medial law}) \\ &= (x \ast neut(a)) \ast neut(a) \\ &= (neut(a) \ast neut(a)) \ast x \\ &= neut(a) \ast x, \quad (\text{by Proposition 1 (4)}) \end{aligned}$$

we have (*neut*(*a*) ∗ *x*) ∗ *neut*(*a*) = *neut*(*a*) ∗ (*neut*(*a*) ∗ *x*) = *neut*(*a*) ∗ *x*. From Theorem 1 (2) and (3), we have

$$neut(neut(a) \ast x) = neut(a), \quad a \in \{anti(neut(a) \ast x)\}.$$

From Theorem 1 (1), *neut*(*a*) ∗ *x* is unique, so we have

*neut*(*neut*(*a*) ∗ *x*) ∗ *anti*(*neut*(*a*) ∗ *x*) = *neut*(*a*) ∗ *a* = *a*.

**Example 2.** *Let N* = {*a*, *b*, *c*} *and let the operation* ∗ *on N be defined as in Table 2. Since neut*(*a*) = *a*, *anti*(*a*) = *a*, *neut*(*b*) = *a*, *anti*(*b*) = *c*, *neut*(*c*) = *a and anti*(*c*) = *b*, (*N*, ∗) *is an AG-NET-Loop. Since*

*neut*(*neut*(*a*) ∗ *a*) ∗ *anti*(*neut*(*a*) ∗ *a*) = *a* ∗ *a* = *a*,

$$\begin{aligned} neut(neut(b) \ast c) \ast anti(neut(b) \ast c) &= neut(c) \ast anti(c) = b, \\ neut(neut(c) \ast b) \ast anti(neut(c) \ast b) &= neut(b) \ast anti(b) = c, \end{aligned}$$

*that is, for any a* ∈ *N, x* ∈ {*anti*(*a*)}*, neut*(*neut*(*a*) ∗ *x*) ∗ *anti*(*neut*(*a*) ∗ *x*) = *a*.

**Table 2.** An AG-NET-Loop of Example 2.


**Theorem 2.** *Let* (*N*, ∗) *be a groupoid. Then, N is an AG-NET-Loop if and only if it is a strong inverse AG-groupoid.*

**Proof.** Necessity: Suppose *N* is an AG-NET-Loop. From Definition 3, each *a* ∈ *N* has a neutral element and an opposite element, denoted by *neut*(*a*) and *anti*(*a*), respectively. Set

$$a^{-1} = neut(a) \ast anti(a),$$

by Theorem <sup>1</sup> (1), *neut*(*a*) <sup>∗</sup> *anti*(*a*) is unique, so *<sup>a</sup>*−<sup>1</sup> is unique. By Proposition 1, we have

$$(a^{-1})^{-1} = neut(neut(a) \ast anti(a)) \ast anti(neut(a) \ast anti(a)) = a.$$

Since

$$a^{-1} \ast a = (neut(a) \ast anti(a)) \ast a = (a \ast anti(a)) \ast neut(a) = neut(a) \ast neut(a) = neut(a),$$

$$\begin{aligned} a \ast a^{-1} &= a \ast (neut(a) \ast anti(a)) \\ &= (neut(a) \ast a) \ast (neut(a) \ast anti(a)) \\ &= (neut(a) \ast neut(a)) \ast (a \ast anti(a)) \\ &= (neut(a) \ast neut(a)) \ast neut(a) \\ &= neut(a), \\\\ (a \ast a^{-1}) \ast a &= neut(a) \ast a = a, \\ a \ast (a \ast a^{-1}) &= a \ast neut(a) = a, \end{aligned}$$

we have

$$a^{-1} \* a = a \* a^{-1},$$

$$(a \* a^{-1}) \* a = a \* (a \* a^{-1}) = a.$$

From Definition 7, *N* is a strong inverse AG-groupoid.

Sufficiency: Suppose *N* is a strong inverse AG-groupoid. Then, for each *a* ∈ *N*, there exists *a*−<sup>1</sup> ∈ *N* such that *a* ∗ *a*−<sup>1</sup> = *a*−<sup>1</sup> ∗ *a* and (*a* ∗ *a*−1) ∗ *a* = *a* ∗ (*a* ∗ *a*−1) = *a*. Set

$$neut(a) = a \ast a^{-1},$$

then *neut*(*a*) <sup>∗</sup> *<sup>a</sup>* = (*<sup>a</sup>* <sup>∗</sup> *<sup>a</sup>*−1) <sup>∗</sup> *<sup>a</sup>* <sup>=</sup> *<sup>a</sup>* <sup>∗</sup> (*<sup>a</sup>* <sup>∗</sup> *<sup>a</sup>*−1) = *<sup>a</sup>* <sup>∗</sup> *neut*(*a*) = *<sup>a</sup>*, *<sup>a</sup>* <sup>∗</sup> (*a*)−<sup>1</sup> = (*a*)−<sup>1</sup> <sup>∗</sup> *<sup>a</sup>* <sup>=</sup> *neut*(*a*). From Definition 3, we have that *<sup>N</sup>* is an AG-NET-Loop and *<sup>a</sup>*−<sup>1</sup> ∈ {*anti*(*a*)}.
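
The construction *a*−<sup>1</sup> = *neut*(*a*) ∗ *anti*(*a*) in the proof above can be checked mechanically on a concrete AG-NET-Loop. The sketch below is our own illustration on (*Z*<sub>5</sub>, +): a commutative group is in particular an AG-NET-Loop with *neut*(*a*) = 0 and *anti*(*a*) = −*a*, so *a*−<sup>1</sup> = 0 + (−*a*) = −*a*.

```python
# Our own example: (Z_5, +) viewed as an AG-NET-Loop; check Definition 7.
N = 5
op = lambda a, b: (a + b) % N
inv = lambda a: (-a) % N          # a^{-1} = neut(a) * anti(a) = 0 + (-a)

# Verify the three strong-inverse conditions for every element.
strong_inverse = all(
    inv(inv(a)) == a
    and op(op(a, inv(a)), a) == op(a, op(a, inv(a))) == a
    and op(a, inv(a)) == op(inv(a), a)
    for a in range(N)
)
```

All three conditions of Definition 7 hold, in agreement with Theorem 2.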

**Example 3.** *Consider* (*S*, ∗) *in Example 2: we know that it is an AG-NET-Loop. We show in the following that it is also a strong inverse AG-groupoid.*

*For b, there exists an inverse element <sup>b</sup>*−<sup>1</sup> <sup>=</sup> *c, such that* (*b*−1)−<sup>1</sup> <sup>=</sup> *<sup>b</sup>*, (*<sup>b</sup>* <sup>∗</sup> *<sup>b</sup>*−1) <sup>∗</sup> *<sup>b</sup>* <sup>=</sup> *<sup>b</sup>* <sup>∗</sup> (*<sup>b</sup>* <sup>∗</sup> *<sup>b</sup>*−1) = *<sup>b</sup>*, *<sup>b</sup>* <sup>∗</sup> *<sup>b</sup>*−<sup>1</sup> <sup>=</sup> *<sup>b</sup>*−<sup>1</sup> <sup>∗</sup> *<sup>b</sup>* <sup>=</sup> *a, so <sup>b</sup> has a strong inverse. <sup>a</sup> and <sup>c</sup> have strong inverses for the same reason, so* (*S*, <sup>∗</sup>) *is a strong inverse AG-groupoid by Definition 7.*

An AG-groupoid (*S*, ∗) is called interlaced if it satisfies (*a* ∗ *a*) ∗ *b* = *a* ∗ (*a* ∗ *b*) and *a* ∗ (*b* ∗ *b*) = (*a* ∗ *b*) ∗ *b* for all *a*, *b* in *S*. An AG-groupoid (*S*, ∗) is called locally associative if it satisfies (*a* ∗ *a*) ∗ *a* = *a* ∗ (*a* ∗ *a*) for all *a* in *S*.

**Theorem 3.** *Let* (*D*, ∗) *be a locally associative AG-groupoid. If D is finite, there is an idempotent element in D; that is,* ∃*a* ∈ *D*, *a* ∗ *a* = *a.*

**Proof.** Assume that *D* is a finite locally associative AG-groupoid with respect to ∗. Then, for any *a* ∈ *D*, *a*, *a* ∗ *a* = *a*<sup>2</sup>, *a* ∗ *a* ∗ *a* = *a*<sup>3</sup>, ..., *a<sup>n</sup>*, ... ∈ *D*. Since *D* is finite, there exist natural numbers *m*, *k* such that *a<sup>m</sup>* = *a*<sup>*m*+*k*</sup>.

Case 1: If *k* = *m*, then *a<sup>m</sup>* = *a*<sup>2*m*</sup>, that is, *a<sup>m</sup>* = *a<sup>m</sup>* ∗ *a<sup>m</sup>*, so *a<sup>m</sup>* is an idempotent element in *D*.

Case 2: If *k* > *m*, then from *a<sup>m</sup>* = *a*<sup>*m*+*k*</sup> we can get

$$a^k = a^m \ast a^{k-m} = a^{m+k} \ast a^{k-m} = a^{2k} = a^k \ast a^k.$$

This means that *a<sup>k</sup>* is an idempotent element in *D*.

Case 3: If *k* < *m*, then from *a<sup>m</sup>* = *a*<sup>*m*+*k*</sup> we can get

$$a^m = a^{m+k} = a^m \ast a^k = a^{m+k} \ast a^k = a^{m+2k};$$

$$a^m = a^{m+2k} = a^m \ast a^{2k} = a^{m+k} \ast a^{2k} = a^{m+3k};$$

$$\dots \dots$$

$$a^m = a^{m+mk}.$$

Since *m* and *k* are natural numbers (*k* ≥ 1), we have *mk* ≥ *m*. Therefore, from *a<sup>m</sup>* = *a*<sup>*m*+*mk*</sup>, applying Case 1 or Case 2, we know that there exists an idempotent element in *D*.
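
The repetition argument above can be animated on a small instance. The sketch below is our own choice of example, not the paper's: (*Z*<sub>12</sub>, ×) is commutative and associative, hence a locally associative AG-groupoid, and we walk the power sequence of an element until an idempotent appears.

```python
# Our own example: multiplication mod 12, a finite locally associative AG-groupoid.
M = 12
mul = lambda a, b: (a * b) % M

def idempotent_power(a):
    """Walk a, a^2, a^3, ... until the sequence repeats, then return an
    idempotent among the collected powers (Theorem 3 guarantees one exists)."""
    seen, p = set(), a
    while p not in seen:
        seen.add(p)
        p = mul(p, a)
    for q in seen:
        if mul(q, q) == q:
            return q
    return None  # unreachable by Theorem 3 for this structure
```

For *a* = 2 the powers cycle through 2, 4, 8, 4, ... and the idempotent found is 4 (since 4 × 4 = 16 ≡ 4 mod 12).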

**Definition 8** ([31])**.** *Let* (*N*, ∗) *be an AG-groupoid. Then, N is called an AG-(l, l)-Loop if for any a* ∈ *N, there exist two elements b and c in N that satisfy the conditions b* ∗ *a* = *a and c* ∗ *a* = *b. In an AG-(l,l)-Loop, a neutral of "a" is denoted by neut*(*l*,*l*)(*a*)*.*

**Definition 9** ([31])**.** *Let* (*N*, ∗) *be an AG-(l, l)-Loop. Then, N is a strong AG-(l, l)-Loop if neut*(*l*,*l*)(*a*) ∗ *neut*(*l*,*l*)(*a*) = *neut*(*l*,*l*)(*a*), ∀*a* ∈ *N*.

**Definition 10.** *Let* (*D*, ∗) *be an AG-(l,l)-Loop. Then, D is called an interlaced AG-(l,l)-Loop, if it satisfies*(*a* ∗ *a*) ∗ *b* = *a* ∗ (*a* ∗ *b*)*, a* ∗ (*b* ∗ *b*)=(*a* ∗ *b*) ∗ *b, for all a*, *b in D.*

**Theorem 4.** *Let*(*D*, ∗) *be an interlaced AG-(l, l)-Loop with respect to \*. If D is finite, there is an idempotent left neutral element in D. That is,* ∀*a* ∈ *D*, ∃*s*, *p* ∈ *D*,*s* ∗ *a* = *a*, *p* ∗ *a* = *s*,*s* ∗ *s* = *s.*

**Proof.** Assume that *D* is a finite interlaced AG-(l,l)-Loop with respect to \*. Then, for any *a* ∈ *D*, ∃*s*, *p* ∈ *D*,*s* ∗ *a* = *a*, *p* ∗ *a* = *s*, we have *s* ∗ *a* = (*p* ∗ *a*) ∗ *a* = (*a* ∗ *a*) ∗ *p* = *a* ∗ (*a* ∗ *p*) = *a*,

$$\begin{aligned} a \ast s &= (a \ast (a \ast p)) \ast s \\ &= (s \ast (a \ast p)) \ast a \quad (\text{by the left invertive law}) \\ &= ((p \ast a) \ast (a \ast p)) \ast a \\ &= (((a \ast p) \ast a) \ast p) \ast a \quad (\text{by the left invertive law}) \\ &= (a \ast p) \ast ((a \ast p) \ast a) \quad (\text{by the left invertive law}) \\ &= ((a \ast p) \ast (a \ast p)) \ast a \quad (\text{by the interlaced law}) \\ &= (a \ast (a \ast p)) \ast (a \ast p) \quad (\text{by the left invertive law}) \\ &= a \ast (a \ast p) = a, \end{aligned}$$

$$s^2 \ast a = (s \ast s) \ast a = (a \ast s) \ast s = a,$$

$$s^3 \ast a = (s^2 \ast s) \ast a = (a \ast s) \ast s^2 = a \ast s^2 = a \ast (s \ast s) = (a \ast s) \ast s = a \ast s = a.$$

When *m* > 3, *m* ≡ 0(mod 2), we have

$$\begin{aligned} \mathbf{s}^m \star a &= (\mathbf{s}^{m-2} \ast \mathbf{s}^2) \star a \\ &= (a \ast \mathbf{s}^2) \ast \mathbf{s}^{m-2} \\ &= a \ast \mathbf{s}^{m-2} \\ &= a \ast (\mathbf{s}^{(m-2)/2} \ast \mathbf{s}^{(m-2)/2}) \\ &= (a \ast \mathbf{s}^{(m-2)/2}) \ast \mathbf{s}^{(m-2)/2} \quad (\text{by the interlaced law}) \\ &= (\mathbf{s}^{(m-2)/2} \ast \mathbf{s}^{(m-2)/2}) \ast a \quad (\text{by the left-invertive law}) \\ &= \mathbf{s}^{m-2} \ast a \\ &= \dots \\ &= \mathbf{s}^2 \ast a = a. \end{aligned}$$

When *m* > 3, *m* ≡ 1(mod 2), we have

$$\begin{aligned} \mathbf{s}^m \star a &= (\mathbf{s}^{m-1} \star \mathbf{s}) \star a \\ &= (a \star \mathbf{s}) \star \mathbf{s}^{m-1} \\ &= a \star \mathbf{s}^{m-1} \\ &= a \star (\mathbf{s}^{(m-1)/2} \star \mathbf{s}^{(m-1)/2}) \\ &= (a \star \mathbf{s}^{(m-1)/2}) \star \mathbf{s}^{(m-1)/2} \quad \text{(by the interlaced law)} \\ &= (\mathbf{s}^{(m-1)/2} \star \mathbf{s}^{(m-1)/2}) \star a \\ &= \mathbf{s}^{m-1} \star a \\ &= \dots \\ &= \mathbf{s}^2 \star a = a. \end{aligned}$$

Thus, *s*, *s*<sup>2</sup>, *s*<sup>3</sup>, ..., *s<sup>m</sup>*, ... are all left neutral elements. Applying Theorem 3, we know that there exists an idempotent left neutral element in *D*.

**Theorem 5.** *Assume that* (*N*, ∗) *is a finite interlaced AG-(l,l)-Loop. Then, for all a in N, if neut*(*l*,*l*)(*a*) *is an idempotent element, then it is unique.*

**Proof.** Assume that *N* is a finite interlaced AG-(l,l)-Loop with respect to \*. Suppose that there exist *x*, *y* ∈ {*neut*(*l*,*l*)(*a*)}, *a* ∈ *N*. By Definition 8, *x* ∗ *a* = *a*, *y* ∗ *a* = *a*, and there exist *p*, *q* ∈ *N* which satisfy *p* ∗ *a* = *x*, *q* ∗ *a* = *y*. If *x* ∗ *x* = *x*, *y* ∗ *y* = *y*, we have

$$\mathbf{x} = \mathbf{x} \* \mathbf{x} = (p \* a) \* \mathbf{x} = (\mathbf{x} \* a) \* p = a \* p,$$

$$y = y \* y = (q \* a) \* y = (y \* a) \* q = a \* q,$$

$$\mathbf{x} \* y = (p \* a) \* y = (y \* a) \* p = a \* p = \mathbf{x},$$

$$y \* \mathbf{x} = (q \* a) \* \mathbf{x} = (\mathbf{x} \* a) \* q = a \* q = y,$$

$$\mathbf{x} = \mathbf{x} \* y = (\mathbf{x} \* \mathbf{x}) \* y = (y \* \mathbf{x}) \* \mathbf{x} = y \* \mathbf{x} = y.$$

Thus *x* = *y*, that is, *neut*(*l*,*l*)(*a*) is unique.

**Theorem 6.** *Let* (*N*, ∗) *be a finite interlaced AG-(l,l)-Loop. Then, N is a strong AG-(l,l)-Loop.*

**Proof.** For any *a* in *N*, applying Theorem 4, we have ∃*s*, *p* ∈ *N*,*s* ∗ *a* = *a*, *p* ∗ *a* = *s*,*s* ∗ *s* = *s*. From this and Definition 9, we know that *N* is a strong AG-(l,l)-Loop.

**Example 4.** *Let S* = {1, 2, 3} *and let the operation* ∗ *on S be defined as in Table 3. Since* (1 ∗ 1) ∗ 2 = 1 ∗ (1 ∗ 2) = 2, 1 ∗ (2 ∗ 2) = (1 ∗ 2) ∗ 2 = 3, (1 ∗ 1) ∗ 3 = 1 ∗ (1 ∗ 3) = 3, 1 ∗ (3 ∗ 3) = (1 ∗ 3) ∗ 3 = 2, (2 ∗ 2) ∗ 3 = 2 ∗ (2 ∗ 3) = 2, 2 ∗ (3 ∗ 3) = (2 ∗ 3) ∗ 3 = 3*, and* 1 ∗ 1 = 1, 1 ∗ 2 = 2, 3 ∗ 2 = 1, 1 ∗ 3 = 3, 2 ∗ 3 = 1*, S is a finite interlaced AG-(l,l)-Loop by Definition 10. Since neut*(*l*,*l*)(1) = *neut*(*l*,*l*)(2) = *neut*(*l*,*l*)(3) = 1 *and* 1 ∗ 1 = 1*, S is a strong AG-(l,l)-Loop by Definition 9.*

**Table 3.** A finite interlaced AG-(l,l)-Loop of Example 4.


The following example shows that a strong AG-(l,l)-Loop may not be an interlaced AG-(l,l)-Loop.

**Example 5.** *Let S* = {1, 2, 3} *and let the operation* ∗ *on S be defined as in Table 4. Since* 1 ∗ 1 = 1, 1 ∗ 2 = 2, 2 ∗ 2 = 1, 1 ∗ 3 = 3, 3 ∗ 3 = 1*, S is a strong AG-(l,l)-Loop by Definition 9. However, it is not an interlaced AG-(l,l)-Loop because* 2 ∗ (3 ∗ 3) = 3 ≠ 2 = (2 ∗ 3) ∗ 3*.*

**Table 4.** A strong AG-(l,l)-Loop of Example 5.


#### **4. GAG-NET-Loop**

**Definition 11** ([26])**.** *Let N be a non-empty set together with a binary operation* ∗*. Then, N is called a generalized neutrosophic extended triplet set if for any a* ∈ *N, there exists at least one positive integer n such that a<sup>n</sup> has a neutral element (denoted by neut*(*a<sup>n</sup>*)*) and an opposite element (denoted by anti*(*a<sup>n</sup>*)*), such that neut*(*a<sup>n</sup>*) ∈ *N*, *anti*(*a<sup>n</sup>*) ∈ *N and*

$$a^n \ast neut(a^n) = neut(a^n) \ast a^n = a^n,$$

$$a^n \ast anti(a^n) = anti(a^n) \ast a^n = neut(a^n).$$

*The triplet* (*a*, *neut*(*a<sup>n</sup>*), *anti*(*a<sup>n</sup>*)) *is called a generalized neutrosophic extended triplet with degree n.*

**Definition 12.** *Let* (*N*, ∗) *be a generalized neutrosophic extended triplet set. Then, N is called a generalized Abel-Grassmann's neutrosophic extended triplet loop (GAG-NET-Loop) if the following condition is satisfied: for all a*, *b*, *c* ∈ *N,* (*a* ∗ *b*) ∗ *c* = (*c* ∗ *b*) ∗ *a.*

*A GAG-NET-Loop N is called a commutative GAG-NET-Loop if for all a*, *b* ∈ *N*, *a* ∗ *b* = *b* ∗ *a.*

**Example 6.** *Let S* = {*a*, *b*, *c*} *and let the operation* ∗ *on S be defined as in Table 5. We can see that* (*a*, *a*, *a*), (*a*, *a*, *b*)*, and* (*a*, *a*, *c*) *are neutrosophic extended triplets, but b and c do not have a neutral element and an opposite element. Thus, S is not an AG-NET-Loop. Moreover, b*<sup>2</sup> = *c*<sup>2</sup> = *a has a neutral element and an opposite element, thus* (*S*, ∗) *is a GAG-NET-Loop.* (*b*, *a*, *a*) *and* (*c*, *a*, *a*) *are generalized neutrosophic extended triplets with degree* 2*. We can infer that* (*S*, ∗) *is a GAG-NET-Loop but not an AG-NET-Loop. Moreover, it is not a commutative GAG-NET-Loop since b* ∗ *c* ≠ *c* ∗ *b.*

**Table 5.** A GAG-NET-Loop of Example 6.


Consider the algebraic system (*Zn*, ⊗), where ⊗ is the classical multiplication modulo *n*, *Zn* = {[0], [1], ··· , [*n* − 1]} and *n* ∈ *Z*<sup>+</sup>, *n* ≥ 2.

**Example 7.** *Consider* (*Z*4, ⊗)*, an operation* ⊗ *on Z*<sup>4</sup> *is defined as in Table 6. We have:*


**Table 6.** The operation table of *Z*4.


Thus, *Z*<sup>4</sup> is a generalized neutrosophic extended triplet set, but it is not a neutrosophic extended triplet set. Moreover, (*Z*4, ⊗) is a commutative GAG-NET-Loop.
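
Example 7's claim can be confirmed by brute force under the stated operation [*a*] ⊗ [*b*] = [*ab* mod 4]. The helper names below are our own: [2] has no neutral/opposite pair, but [2]<sup>2</sup> = [0] does.

```python
# Brute-force check of Example 7 on (Z_4, ⊗), ⊗ being multiplication mod 4.
N = 4
mul = lambda a, b: (a * b) % N

def power(a, n):
    p = a
    for _ in range(n - 1):
        p = mul(p, a)
    return p

def has_triplet(a):
    """Does a have a neutral e and an opposite x in the sense of Definition 1?"""
    return any(mul(a, e) == mul(e, a) == a and mul(a, x) == mul(x, a) == e
               for e in range(N) for x in range(N))

net_set = all(has_triplet(a) for a in range(N))          # fails at a = 2
generalized = all(any(has_triplet(power(a, n)) for n in range(1, N + 1))
                  for a in range(N))                      # Definition 11 holds
```

The check reports that (*Z*<sub>4</sub>, ⊗) is a generalized neutrosophic extended triplet set but not a neutrosophic extended triplet set, as stated.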

**Proposition 2.** *Let* (*N*, ∗) *be a GAG-NET-Loop, a* ∈ *N and* (*a*, *neut*(*a<sup>n</sup>*), *anti*(*a<sup>n</sup>*)) *a generalized neutrosophic extended triplet with degree n. We have:*

*(1) neut*(*a<sup>n</sup>*) *is unique;*

*(2) neut*(*a<sup>n</sup>*) ∗ *neut*(*a<sup>n</sup>*) = *neut*(*a<sup>n</sup>*)*.*
**Proof.** Assume *c*, *d* ∈ {*neut*(*a<sup>n</sup>*)}, so *a<sup>n</sup>* ∗ *c* = *c* ∗ *a<sup>n</sup>* = *a<sup>n</sup>*, *a<sup>n</sup>* ∗ *d* = *d* ∗ *a<sup>n</sup>* = *a<sup>n</sup>*, and there exist *x*, *y* ∈ *N* such that

$$a^n \ast x = x \ast a^n = c, \quad a^n \ast y = y \ast a^n = d.$$

We can obtain

$$c \ast d = (x \ast a^n) \ast d = (d \ast a^n) \ast x = a^n \ast x = c,$$

$$\begin{aligned} c \ast d &= (a^n \ast x) \ast (y \ast a^n) \\ &= (a^n \ast y) \ast (x \ast a^n) \\ &= (a^n \ast y) \ast c \\ &= (y \ast a^n) \ast c \\ &= (c \ast a^n) \ast y \\ &= a^n \ast y = d. \end{aligned}$$

We have *c* = *d* = *c* ∗ *d*. Thus, *neut*(*a<sup>n</sup>*) is unique and *neut*(*a<sup>n</sup>*) ∗ *neut*(*a<sup>n</sup>*) = *neut*(*a<sup>n</sup>*).

**Proposition 3.** *Let* (*N*, ∗) *be a GAG-NET-Loop, a* ∈ *N and* (*a*, *neut*(*a<sup>n</sup>*), *anti*(*a<sup>n</sup>*)) *a generalized neutrosophic extended triplet with degree n. Then,*

$$(1)\quad (a^n \ast a^n) \ast a^n = a^n \ast (a^n \ast a^n).$$

$$(2)\quad neut(a^n) \ast x = neut(a^n) \ast y, \ \text{for any } x, y \in \{anti(a^n)\}.$$

*(3) neut*(*neut*(*a<sup>n</sup>*)) = *neut*(*a<sup>n</sup>*)*.*

$$(4)\quad a^n \ast (x \ast neut(a^n)) = (x \ast neut(a^n)) \ast a^n = neut(a^n), \ \text{for any } x \in \{anti(a^n)\}.$$

$$(5)\quad a^n \ast (neut(a^n) \ast x) = (neut(a^n) \ast x) \ast a^n = neut(a^n), \ \text{for any } x \in \{anti(a^n)\}.$$

$$(6)\quad (neut(a^n) \ast x) \ast neut(a^n) = neut(a^n) \ast (neut(a^n) \ast x) = neut(a^n) \ast x, \ \text{for any } x \in \{anti(a^n)\}.$$

$$(7)\quad neut(neut(a^n) \ast x) \ast anti(neut(a^n) \ast x) = a^n, \ \text{for any } x \in \{anti(a^n)\}.$$


#### **Proof.**

(1) For *a* ∈ *N*, *neut*(*a<sup>n</sup>*) ∗ *a<sup>n</sup>* = *a<sup>n</sup>* ∗ *neut*(*a<sup>n</sup>*) = *a<sup>n</sup>*, we have

$$(a^n \ast a^n) \ast a^n = (a^n \ast a^n) \ast (neut(a^n) \ast a^n) = (a^n \ast neut(a^n)) \ast (a^n \ast a^n) = a^n \ast (a^n \ast a^n).$$


*Mathematics* **2019**, *7*, 1206

$$(y \ast x) \ast a^n = (a^n \ast x) \ast y = neut(a^n) \ast y = neut(neut(a^n)).$$

Moreover,

$$\begin{aligned} ((y \ast x) \ast a^n) \ast neut(a^n) &= (neut(a^n) \ast y) \ast neut(a^n) \\ &= (y \ast neut(a^n)) \ast neut(a^n) \\ &= (neut(a^n) \ast neut(a^n)) \ast y \\ &= neut(a^n) \ast y \\ &= neut(neut(a^n)). \end{aligned}$$

Thus, *neut*(*a<sup>n</sup>*) = *neut*(*neut*(*a<sup>n</sup>*)) ∗ *neut*(*a<sup>n</sup>*) = ((*y* ∗ *x*) ∗ *a<sup>n</sup>*) ∗ *neut*(*a<sup>n</sup>*) = *neut*(*neut*(*a<sup>n</sup>*)).

(4) For any *x* ∈ {*anti*(*a<sup>n</sup>*)}, from Definition 11 and Proposition 2, we have

$$\begin{aligned} a^n \ast (x \ast neut(a^n)) &= (a^n \ast neut(a^n)) \ast (x \ast neut(a^n)) \\ &= (a^n \ast x) \ast (neut(a^n) \ast neut(a^n)) \\ &= neut(a^n) \ast neut(a^n) \\ &= neut(a^n), \end{aligned}$$

$$(x \ast neut(a^n)) \ast a^n = (a^n \ast neut(a^n)) \ast x = a^n \ast x = neut(a^n).$$

Thus, *a<sup>n</sup>* ∗ (*x* ∗ *neut*(*a<sup>n</sup>*)) = (*x* ∗ *neut*(*a<sup>n</sup>*)) ∗ *a<sup>n</sup>* = *neut*(*a<sup>n</sup>*), for any *x* ∈ {*anti*(*a<sup>n</sup>*)}.

(5) For any *x* ∈ {*anti*(*a<sup>n</sup>*)}, we have

$$\begin{aligned} (neut(a^n) \ast x) \ast a^n &= (neut(a^n) \ast x) \ast (neut(a^n) \ast a^n) \\ &= (neut(a^n) \ast neut(a^n)) \ast (x \ast a^n) \\ &= neut(a^n) \ast neut(a^n) \\ &= neut(a^n), \end{aligned}$$

$$\begin{aligned} a^n \ast (neut(a^n) \ast x) &= (neut(a^n) \ast a^n) \ast (neut(a^n) \ast x) \\ &= (neut(a^n) \ast neut(a^n)) \ast (a^n \ast x) \\ &= neut(a^n) \ast neut(a^n) \\ &= neut(a^n). \end{aligned}$$

Thus, *a<sup>n</sup>* ∗ (*neut*(*a<sup>n</sup>*) ∗ *x*) = (*neut*(*a<sup>n</sup>*) ∗ *x*) ∗ *a<sup>n</sup>* = *neut*(*a<sup>n</sup>*).

(6) For any *x* ∈ {*anti*(*a<sup>n</sup>*)}, we have

$$\begin{aligned} (neut(a^n) \ast x) \ast neut(a^n) &= (neut(a^n) \ast x) \ast (a^n \ast x) \\ &= (neut(a^n) \ast a^n) \ast (x \ast x) \\ &= (a^n \ast neut(a^n)) \ast (x \ast x) \\ &= (a^n \ast x) \ast (neut(a^n) \ast x) \\ &= neut(a^n) \ast (neut(a^n) \ast x), \\ neut(a^n) \ast (neut(a^n) \ast x) &= (x \ast a^n) \ast (neut(a^n) \ast x) \\ &= (x \ast neut(a^n)) \ast (a^n \ast x) \\ &= (x \ast neut(a^n)) \ast neut(a^n) \\ &= (neut(a^n) \ast neut(a^n)) \ast x \\ &= neut(a^n) \ast x. \end{aligned}$$

Thus, (*neut*(*a<sup>n</sup>*) ∗ *x*) ∗ *neut*(*a<sup>n</sup>*) = *neut*(*a<sup>n</sup>*) ∗ (*neut*(*a<sup>n</sup>*) ∗ *x*) = *neut*(*a<sup>n</sup>*) ∗ *x*.

(7) From (5) and (6), we have *neut*(*neut*(*a<sup>n</sup>*) ∗ *x*) = *neut*(*a<sup>n</sup>*) and *a<sup>n</sup>* ∈ {*anti*(*neut*(*a<sup>n</sup>*) ∗ *x*)}. From (2), *neut*(*neut*(*a<sup>n</sup>*) ∗ *x*) ∗ *anti*(*neut*(*a<sup>n</sup>*) ∗ *x*) is unique, so we have

$$neut(neut(a^n) \ast x) \ast anti(neut(a^n) \ast x) = neut(neut(a^n) \ast x) \ast a^n = neut(a^n) \ast a^n = a^n.$$


**Example 8.** *Let S* = {*a*, *b*, *c*, *d*} *and let the operation* ∗ *on S be defined as in Table 7. Since neut*(*a*) = *a*, {*anti*(*a*)} = {*a*, *b*, *c*}, *neut*(*d*) = *a*, *anti*(*d*) = *d and b*<sup>2</sup> = *a*, *c*<sup>2</sup> = *a*, (*S*, ∗) *is a GAG-NET-Loop. We can get (corresponding to the results of Proposition 3):*

**Table 7.** A GAG-NET-Loop of Example 8.



**Proposition 4.** *Let* (*N*, ∗) *be a GAG-NET-Loop. Then,* ∀*a*, *b* ∈ *N, there are two positive integers n and m such that the following hold:*

*(1) neut*(*a<sup>n</sup>*) ∗ *neut*(*b<sup>m</sup>*) = *neut*(*a<sup>n</sup>* ∗ *b<sup>m</sup>*)*;*

*(2) anti*(*a<sup>n</sup>*) ∗ *anti*(*b<sup>m</sup>*) ∈ {*anti*(*a<sup>n</sup>* ∗ *b<sup>m</sup>*)}*.*
**Proof.** Since (*N*, ∗) is a GAG-NET-Loop, for *a* ∈ *N* there is a positive integer *n* such that *a<sup>n</sup>* has a neutral element and an opposite element, denoted by *neut*(*a<sup>n</sup>*) and *anti*(*a<sup>n</sup>*), respectively. For *b* ∈ *N*, there is a positive integer *m* such that *b<sup>m</sup>* has a neutral element and an opposite element, denoted by *neut*(*b<sup>m</sup>*) and *anti*(*b<sup>m</sup>*), respectively. Thus,

$$\begin{aligned} (neut(a^n) \ast neut(b^m)) \ast (a^n \ast b^m) &= (neut(a^n) \ast a^n) \ast (neut(b^m) \ast b^m) \\ &= a^n \ast b^m. \end{aligned}$$

In the same way, we have (*a<sup>n</sup>* ∗ *b<sup>m</sup>*) ∗ (*neut*(*a<sup>n</sup>*) ∗ *neut*(*b<sup>m</sup>*)) = *a<sup>n</sup>* ∗ *b<sup>m</sup>*. That is,

$$(a^n \ast b^m) \ast (neut(a^n) \ast neut(b^m)) = (neut(a^n) \ast neut(b^m)) \ast (a^n \ast b^m) = a^n \ast b^m.$$

Moreover, for any *anti*(*a<sup>n</sup>*) ∈ {*anti*(*a<sup>n</sup>*)} and *anti*(*b<sup>m</sup>*) ∈ {*anti*(*b<sup>m</sup>*)}, we can get

$$\begin{aligned} (anti(a^n) \ast anti(b^m)) \ast (a^n \ast b^m) &= (anti(a^n) \ast a^n) \ast (anti(b^m) \ast b^m) \\ &= neut(a^n) \ast neut(b^m). \end{aligned}$$

Similarly, we have (*a<sup>n</sup>* ∗ *b<sup>m</sup>*) ∗ (*anti*(*a<sup>n</sup>*) ∗ *anti*(*b<sup>m</sup>*)) = *neut*(*a<sup>n</sup>*) ∗ *neut*(*b<sup>m</sup>*). That is:

$$(a^n \ast b^m) \ast (anti(a^n) \ast anti(b^m)) = (anti(a^n) \ast anti(b^m)) \ast (a^n \ast b^m) = neut(a^n) \ast neut(b^m).$$

Thus, we have

$$neut(a^n) \ast neut(b^m) \in \{neut(a^n \ast b^m)\}.$$

From this, by Proposition 2, we get *neut*(*a<sup>n</sup>*) ∗ *neut*(*b<sup>m</sup>*) = *neut*(*a<sup>n</sup>* ∗ *b<sup>m</sup>*). Therefore, we get *anti*(*a<sup>n</sup>*) ∗ *anti*(*b<sup>m</sup>*) ∈ {*anti*(*a<sup>n</sup>* ∗ *b<sup>m</sup>*)}.

**Example 9.** *Consider* (*S*, ∗) *in Example 8. Since neut*(*a*) = *a*, {*anti*(*a*)} = {*a*, *b*, *c*}, *neut*(*d*) = *a*, *anti*(*d*) = *d and b*<sup>2</sup> = *a*, *c*<sup>2</sup> = *a*, (*S*, ∗) *is a GAG-NET-Loop, and we can get:*


**Theorem 7.** *Let* (*N*, ∗) *be a GAG-NET-Loop. Then, N is a quasi regular AG-groupoid.*

**Proof.** For any *a* in *N*, by Definition 11 we have (*a<sup>n</sup>* ∗ *anti*(*a<sup>n</sup>*)) ∗ *a<sup>n</sup>* = *neut*(*a<sup>n</sup>*) ∗ *a<sup>n</sup>* = *a<sup>n</sup>*. From this and Definition 4, we know that *N* is a quasi regular AG-groupoid.
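
Quasi regularity (Definition 4) is easy to test exhaustively on a finite structure. The sketch below (helper names are our own) re-uses the commutative GAG-NET-Loop (*Z*<sub>4</sub>, ⊗) of Example 7: the element [2] is not regular, but [2]<sup>2</sup> = [0] is.

```python
# Brute-force quasi-regularity check on (Z_4, ⊗), ⊗ being multiplication mod 4.
N = 4
mul = lambda a, b: (a * b) % N

def power(a, n):
    p = a
    for _ in range(n - 1):
        p = mul(p, a)
    return p

def regular(a):
    """Definition 4: a is regular if a = (a * x) * a for some x."""
    return any(a == mul(mul(a, x), a) for x in range(N))

# Quasi regular: for every a, some power a^n (n up to |S| suffices here) is regular.
quasi_regular = all(any(regular(power(a, n)) for n in range(1, N + 1))
                    for a in range(N))
```

The check succeeds, in line with Theorem 7 applied to this GAG-NET-Loop.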

The following example shows that a quasi regular AG-groupoid may not be a GAG-NET-loop.

**Example 10.** *Apply the* (*S*, ∗) *in Example 1. Since* 1 = (1 ∗ 3) ∗ 1, 2 = (2 ∗ 4) ∗ 2, 3 = (3 ∗ 1) ∗ 3 *and* 4 = (4 ∗ 2) ∗ 4*, from Definition 4, S is a quasi regular AG-groupoid. However, it is not a GAG-NET-Loop.*
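Quasi regularity is a finitely checkable condition, so it can be verified exhaustively on a small operation table. The sketch below is our own illustration (not from the paper): a hypothetical helper `is_quasi_regular` that searches, for every element *a*, for a positive integer *n* and some *x* with (*a<sup>n</sup>* ∗ *x*) ∗ *a<sup>n</sup>* = *a<sup>n</sup>*. We use (Z<sub>4</sub>, + mod 4) as the test table; powers are unambiguous there because the operation is associative.

```python
# Hypothetical brute-force check of quasi regularity: for every a there exist
# a positive integer n and some x with (a^n * x) * a^n = a^n.
def is_quasi_regular(elems, op, max_power=3):
    def power(a, n):
        # a^n built by repeated application of op (well defined here because
        # the illustrating operation is associative)
        p = a
        for _ in range(n - 1):
            p = op(p, a)
        return p

    def ok(a):
        return any(op(op(power(a, n), x), power(a, n)) == power(a, n)
                   for n in range(1, max_power + 1) for x in elems)

    return all(ok(a) for a in elems)

# (Z_4, + mod 4): with n = 1 and x = -a, (a + x) + a = a for every a.
print(is_quasi_regular(range(4), lambda x, y: (x + y) % 4))  # True
```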

**Theorem 8.** *Let* (*N*, ∗) *be a GAG-NET-Loop. Then, N is a quasi fully regular AG-groupoid.*

**Proof.** Suppose *a* ∈ *N* and (*a*, *neut*(*a<sup>n</sup>*), *anti*(*a<sup>n</sup>*)) is a generalized neutrosophic extended triplet with degree *n*; then there exists *m* ∈ {*anti*(*a<sup>n</sup>*)} with *a<sup>n</sup>* ∗ *m* = *m* ∗ *a<sup>n</sup>* = *neut*(*a<sup>n</sup>*). Denote *p* = *m* ∗ *neut*(*a<sup>n</sup>*), *q* = *neut*(*a<sup>n</sup>*); *r* = *m*, *s* = *neut*(*a<sup>n</sup>*); *t* = *m*, *u* = *neut*(*a<sup>n</sup>*); *v* = *m*; *w* = *m* ∗ *neut*(*a<sup>n</sup>*); *x* = *m*, *y* = *neut*(*a<sup>n</sup>*); then

$$\begin{aligned} (p \ast (a^n)^2) \ast q &= ((m \ast neut(a^n)) \ast (a^n)^2) \ast neut(a^n) \\ &= (((a^n)^2 \ast neut(a^n)) \ast m) \ast neut(a^n) \quad (\textit{by the left invertive law}) \\ &= (((a^n \ast a^n) \ast neut(a^n)) \ast m) \ast neut(a^n) \\ &= (((neut(a^n) \ast a^n) \ast a^n) \ast m) \ast neut(a^n) \quad (\textit{by the left invertive law}) \\ &= ((a^n \ast a^n) \ast m) \ast neut(a^n) \\ &= ((m \ast a^n) \ast a^n) \ast neut(a^n) \quad (\textit{by the left invertive law}) \\ &= (neut(a^n) \ast a^n) \ast neut(a^n) = a^n \ast neut(a^n) = a^n, \end{aligned}$$

$$\begin{aligned} (r \ast a^n) \ast (a^n \ast s) &= (m \ast a^n) \ast (a^n \ast neut(a^n)) = neut(a^n) \ast a^n = a^n, \\ (a^n \ast t) \ast (a^n \ast u) &= (a^n \ast m) \ast (a^n \ast neut(a^n)) = neut(a^n) \ast a^n = a^n, \\ (a^n \ast a^n) \ast v &= (a^n \ast a^n) \ast m = (m \ast a^n) \ast a^n = neut(a^n) \ast a^n = a^n, \\ w \ast (a^n \ast a^n) &= (m \ast neut(a^n)) \ast (a^n \ast a^n) \\ &= (m \ast a^n) \ast (neut(a^n) \ast a^n) \quad (\textit{by the medial law}) \\ &= (m \ast a^n) \ast a^n = neut(a^n) \ast a^n = a^n, \end{aligned}$$

$$(x \ast a^n) \ast (y \ast a^n) = (m \ast a^n) \ast (neut(a^n) \ast a^n) = neut(a^n) \ast a^n = a^n.$$

Moreover, from Proposition 4, we get:

$$neut(a^n) \ast neut(b^m) = neut(a^n \ast b^m), \quad anti(a^n) \ast anti(b^m) \in \{anti(a^n \ast b^m)\}.$$

If *b<sup>m</sup>* = *a<sup>n</sup>*, we have *neut*(*a<sup>n</sup>*) ∗ *neut*(*a<sup>n</sup>*) = *neut*(*a<sup>n</sup>* ∗ *a<sup>n</sup>*) and *anti*(*a<sup>n</sup>*) ∗ *anti*(*a<sup>n</sup>*) ∈ {*anti*(*a<sup>n</sup>* ∗ *a<sup>n</sup>*)}, so there exists *k* ∈ {*anti*(*a<sup>n</sup>* ∗ *a<sup>n</sup>*)}. Denote *z* = *k* ∗ *m*; then

$$\begin{aligned} ((a^n)^2 \ast z) \ast (a^n)^2 &= ((a^n \ast a^n) \ast z) \ast (a^n)^2 \\ &= ((z \ast a^n) \ast a^n) \ast (a^n)^2 \quad (\textit{by the left invertive law}) \\ &= ((a^n)^2 \ast a^n) \ast (z \ast a^n) \quad (\textit{by the left invertive law}) \\ &= ((a^n)^2 \ast a^n) \ast ((k \ast m) \ast a^n) \\ &= ((a^n)^2 \ast a^n) \ast ((a^n \ast m) \ast k) \quad (\textit{by the left invertive law}) \\ &= ((a^n)^2 \ast a^n) \ast (neut(a^n) \ast k) \quad (\textit{by } m \in \{anti(a^n)\}) \\ &= ((a^n \ast a^n) \ast (neut(a^n) \ast a^n)) \ast (neut(a^n) \ast k) \\ &= ((a^n \ast neut(a^n)) \ast (a^n \ast a^n)) \ast (neut(a^n) \ast k) \quad (\textit{by the medial law}) \\ &= (a^n \ast (a^n)^2) \ast (neut(a^n) \ast k) \\ &= (a^n \ast neut(a^n)) \ast ((a^n)^2 \ast k) \quad (\textit{by the medial law}) \\ &= a^n \ast neut(a^n \ast a^n) \quad (\textit{by } k \in \{anti(a^n \ast a^n)\}) \\ &= a^n \ast (neut(a^n) \ast neut(a^n)) = a^n \ast neut(a^n) = a^n \quad (\textit{by Proposition 2(2)}). \end{aligned}$$

Therefore, combining the above results, by Definition 5, we know that *N* is a quasi fully regular AG-groupoid.

*Mathematics* **2019**, *7*, 1206

The following example shows that a quasi fully regular AG-groupoid may not be a GAG-NET-loop.

**Example 11.** *Applying the* (*S*, ∗) *in Example 1, when a* = 1, *p* = 1, *q* = 3, *r* = 4, *s* = 3, *t* = 2, *u* = 3, *v* = 2, *w* = 2, *x* = 4, *y* = 2, *z* = 3*, we have a*<sup>2</sup> = 2*, and*

$$\begin{aligned} 1 &= (1 \ast 2) \ast 3 = (4 \ast 1) \ast (1 \ast 3) = (1 \ast 2) \ast (1 \ast 3) \\ &= (1 \ast 1) \ast 2 = 2 \ast (1 \ast 1) = (4 \ast 1) \ast (2 \ast 1) \\ &= (2 \ast 3) \ast 2. \end{aligned}$$

*When a* = 4, *p* = 1, *q* = 3, *r* = 4, *s* = 4, *t* = 3, *u* = 2, *v* = 3, *w* = 3, *x* = 4, *y* = 4, *z* = 2*, we have a*<sup>2</sup> = 3*, and*

$$\begin{aligned} 4 &= (1 \ast 3) \ast 3 = (4 \ast 4) \ast (4 \ast 4) = (4 \ast 3) \ast (4 \ast 2) \\ &= (4 \ast 4) \ast 3 = 3 \ast (4 \ast 4) = (4 \ast 4) \ast (4 \ast 4) \\ &= (3 \ast 2) \ast 3. \end{aligned}$$

*Since* 2<sup>2</sup> = 1, 3<sup>3</sup> = 1*, from Definition 5, S is a quasi fully regular AG-groupoid. However, it is not a GAG-NET-Loop.*

**Definition 13.** *An AG-groupoid* (*S*, ∗) *is called a quasi strong inverse AG-groupoid if the following conditions are satisfied: for any a* ∈ *S, there exists a positive integer n, a<sup>n</sup>* ∈ *S, and a unary operation a<sup>n</sup>* → (*a<sup>n</sup>*)<sup>−1</sup> *on S such that*

$$((a^n)^{-1})^{-1} = a^n, \ (a^n \* (a^n)^{-1}) \* a^n = a^n \* (a^n \* (a^n)^{-1}) = a^n, \ a^n \* (a^n)^{-1} = (a^n)^{-1} \* a^n.$$
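The three conditions of Definition 13 are mechanically checkable on a finite Cayley table. The following sketch is our own illustration (not from the paper): a hypothetical checker applied with *n* = 1 to the commutative group (Z<sub>4</sub>, + mod 4), where the assumed inverse operation is (*a*)<sup>−1</sup> = (−*a*) mod 4; every commutative group satisfies the conditions trivially.

```python
# Hypothetical checker for the quasi strong inverse conditions (n = 1):
# ((a)^-1)^-1 = a,  (a * a^-1) * a = a * (a * a^-1) = a,  a * a^-1 = a^-1 * a.
def is_quasi_strong_inverse(elems, op, inv):
    for a in elems:
        ai = inv(a)
        if inv(ai) != a:                      # ((a)^-1)^-1 = a
            return False
        if op(op(a, ai), a) != a:             # (a * a^-1) * a = a
            return False
        if op(a, op(a, ai)) != a:             # a * (a * a^-1) = a
            return False
        if op(a, ai) != op(ai, a):            # a * a^-1 = a^-1 * a
            return False
    return True

Z4 = range(4)
add4 = lambda x, y: (x + y) % 4
neg4 = lambda x: (-x) % 4
print(is_quasi_strong_inverse(Z4, add4, neg4))  # True
```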

**Theorem 9.** *Let* (*N*, ∗) *be a groupoid. Then, N is a GAG-NET-Loop if and only if it is a quasi strong inverse AG-groupoid.*

**Proof.** Necessity: Suppose *N* is a GAG-NET-Loop, from Definition 12, for each *a* ∈ *N*, there exists a generalized neutrosophic extended triplet with degree n denoted by (*a*, *neut*(*an*), *anti*(*an*)). Set

$$(a^n)^{-1} = neut(a^n) \ast anti(a^n),$$

by Proposition 3(2), *neut*(*a<sup>n</sup>*) ∗ *anti*(*a<sup>n</sup>*) is unique, so (*a<sup>n</sup>*)<sup>−1</sup> is unique. By Proposition 3(7), we have

$$((a^n)^{-1})^{-1} = neut(neut(a^n) \ast anti(a^n)) \ast anti(neut(a^n) \ast anti(a^n)) = a^n.$$

Being

$$(a^n)^{-1} \ast a^n = (neut(a^n) \ast anti(a^n)) \ast a^n = (a^n \ast anti(a^n)) \ast neut(a^n) = neut(a^n) \ast neut(a^n) = neut(a^n).$$

$$\begin{aligned} a^n \ast (a^n)^{-1} &= a^n \ast (neut(a^n) \ast anti(a^n)) \\ &= (neut(a^n) \ast a^n) \ast (neut(a^n) \ast anti(a^n)) \\ &= (neut(a^n) \ast neut(a^n)) \ast (a^n \ast anti(a^n)) \\ &= neut(a^n), \end{aligned}$$

we have

$$(a^n)^{-1} \* a^n = a^n \* (a^n)^{-1},$$

$$(a^n \ast (a^n)^{-1}) \ast a^n = neut(a^n) \ast a^n = a^n,$$

$$a^n \ast (a^n \ast (a^n)^{-1}) = a^n \ast neut(a^n) = a^n,$$

$$(a^n \* (a^n)^{-1}) \* a^n = a^n \* (a^n \* (a^n)^{-1}) = a^n.$$

From Definition 13, *N* is a quasi strong inverse AG-groupoid.

Sufficiency: If *N* is a quasi strong inverse AG-groupoid, then for any *a<sup>n</sup>* there exists (*a<sup>n</sup>*)<sup>−1</sup> ∈ *N* such that *a<sup>n</sup>* ∗ (*a<sup>n</sup>*)<sup>−1</sup> = (*a<sup>n</sup>*)<sup>−1</sup> ∗ *a<sup>n</sup>* and (*a<sup>n</sup>* ∗ (*a<sup>n</sup>*)<sup>−1</sup>) ∗ *a<sup>n</sup>* = *a<sup>n</sup>* ∗ (*a<sup>n</sup>* ∗ (*a<sup>n</sup>*)<sup>−1</sup>) = *a<sup>n</sup>*. Set

$$\operatorname{neut}(a^n) = a^n \* (a^n)^{-1},$$

then *neut*(*a<sup>n</sup>*) ∗ *a<sup>n</sup>* = (*a<sup>n</sup>* ∗ (*a<sup>n</sup>*)<sup>−1</sup>) ∗ *a<sup>n</sup>* = *a<sup>n</sup>* ∗ (*a<sup>n</sup>* ∗ (*a<sup>n</sup>*)<sup>−1</sup>) = *a<sup>n</sup>* ∗ *neut*(*a<sup>n</sup>*) = *a<sup>n</sup>*,

$$a^n \ast (a^n)^{-1} = (a^n)^{-1} \ast a^n = neut(a^n).$$

From Definition 12, we have that *N* is a GAG-NET-Loop and (*a<sup>n</sup>*)<sup>−1</sup> ∈ {*anti*(*a<sup>n</sup>*)}.

**Example 12.** *Applying* (*S*, ∗) *in Example 8, we know that it is a GAG-NET-Loop. We will show that it is a quasi strong inverse AG-groupoid in the following.*

*For d, there exists an inverse element d*<sup>−1</sup> = *d such that* (*d*<sup>−1</sup>)<sup>−1</sup> = *d,* (*d* ∗ *d*<sup>−1</sup>) ∗ *d* = *d* ∗ (*d* ∗ *d*<sup>−1</sup>) = *d and d* ∗ *d*<sup>−1</sup> = *d*<sup>−1</sup> ∗ *d* = *a, so d is quasi strong inverse. a is quasi strong inverse for the same reason. Moreover, being b*<sup>2</sup> = *a and c*<sup>2</sup> = *a, b and c are quasi strong inverse; thus* (*S*, ∗) *is a quasi strong inverse AG-groupoid by Definition 13.*

**Definition 14.** *Let* (*N*, ∗) *be a GAG-NET-Loop. N is called a weak commutative GAG-NET-Loop if* ∀*a*, *b* ∈ *N, there exist a generalized neutrosophic extended triplet with degree n (denoted by* (*a*, *neut*(*a<sup>n</sup>*), *anti*(*a<sup>n</sup>*))*) and a generalized neutrosophic extended triplet with degree m (denoted by* (*b*, *neut*(*b<sup>m</sup>*), *anti*(*b<sup>m</sup>*))*), n*, *m* ∈ *Z*<sup>+</sup>*, such that a<sup>n</sup>* ∗ *neut*(*b<sup>m</sup>*) = *neut*(*b<sup>m</sup>*) ∗ *a<sup>n</sup>.*

**Example 13.** *Let S* = {1, 2, 3, 4, 5, 6, 7}*, an operation* ∗ *on S is defined as in Table 8. Since* (1, 1, 1), (2, 2, 2) *and* (6, 6, 6) *are neutrosophic extended triplets, but* 3, 4, 5, 7 *do not have the neutral element and opposite element, S is not an AG-NET-Loop. Moreover,* 3<sup>2</sup> = 1, 4<sup>2</sup> = 1, 5<sup>2</sup> = 2, 7<sup>2</sup> = 6 *have the neutral element and opposite element, so* (*S*, ∗) *is a GAG-NET-Loop. It is not a commutative GAG-NET-Loop since* 3 ∗ 1 ≠ 1 ∗ 3*. We can show that it is a weak commutative GAG-NET-Loop.*

*For* 1, 2, 3, 4, 5, 6 *and* 7*, there exist positive integers* 1, 1, 2, 2, 2, 1 *and* 2*, respectively; thus S*′ = {1<sup>1</sup>, 2<sup>1</sup>, 3<sup>2</sup>, 4<sup>2</sup>, 5<sup>2</sup>, 6<sup>1</sup>, 7<sup>2</sup>} = {1, 2, 6} *being* 3<sup>2</sup> = 1, 4<sup>2</sup> = 1, 5<sup>2</sup> = 2, 7<sup>2</sup> = 6*. We know that neut*(1) = 1, *neut*(2) = 2, *neut*(6) = 6*; thus* {*neut*(1), *neut*(2), *neut*(6)} ⊆ *S*′*. In Table 8, we can get the sub algebra system* (*S*′, ∗) *of* (*S*, ∗) *as in Table 9, and* (*S*′, ∗) *is commutative. Thus,* (*S*, ∗) *is a weak commutative GAG-NET-Loop.*


**Table 8.** The operation table of Example 13.

**Table 9.** The sub algebra system *S*′ of *S* in Example 13.


**Example 14.** *Let S* = {1, 2, 3, 4}*, an operation* ∗ *on S is defined as in Table 10. Being neut*(1) ∗ 2 = 4 ≠ 3 = 2 ∗ *neut*(1)*, S is not a weak commutative GAG-NET-Loop. Moreover, it is not a commutative AG-NET-Loop.*

**Table 10.** The operation table of Example 14.


**Proposition 5.** *Let* (*N*, ∗) *be a GAG-NET-Loop. Then,* (*N*, ∗) *is a weak commutative GAG-NET-Loop if and only if N satisfies the following condition:* ∀*a*, *b* ∈ *N, there exist a generalized neutrosophic extended triplet with degree n (denoted by* (*a*, *neut*(*a<sup>n</sup>*), *anti*(*a<sup>n</sup>*))*) and a generalized neutrosophic extended triplet with degree m (denoted by* (*b*, *neut*(*b<sup>m</sup>*), *anti*(*b<sup>m</sup>*))*), n*, *m* ∈ *Z*<sup>+</sup>*, such that a<sup>n</sup>* ∗ *b<sup>m</sup>* = *b<sup>m</sup>* ∗ *a<sup>n</sup>*.

**Proof.** Necessity: If (*N*, ∗) is a weak commutative GAG-NET-Loop, then there are two positive integers *n*, *m*, such that *a<sup>n</sup>* and *b<sup>m</sup>* have the neutral element and opposite element. Thus, from Definition 14, ∀*a*, *b* ∈ *N*, we have

$$\begin{aligned} a^{\mathfrak{n}} \ast b^{\mathfrak{m}} &= (\mathit{neut}(a^{\mathfrak{n}}) \ast a^{\mathfrak{n}}) \ast (b^{\mathfrak{m}} \ast \mathit{neut}(b^{\mathfrak{m}})) \\ &= (\mathit{neut}(a^{\mathfrak{n}}) \ast b^{\mathfrak{m}}) \ast (a^{\mathfrak{n}} \ast \mathit{neut}(b^{\mathfrak{m}})) \\ &= (b^{\mathfrak{m}} \ast \mathit{neut}(a^{\mathfrak{n}})) \ast (\mathit{neut}(b^{\mathfrak{m}}) \ast a^{\mathfrak{n}}) \\ &= (b^{\mathfrak{m}} \ast \mathit{neut}(b^{\mathfrak{m}})) \ast (\mathit{neut}(a^{\mathfrak{n}}) \ast a^{\mathfrak{n}}) \\ &= b^{\mathfrak{m}} \ast a^{\mathfrak{n}}. \end{aligned}$$

Sufficiency: If (*N*, ∗) is a GAG-NET-Loop, then for *a* ∈ *N*, there is a positive integer *n*, such that *a<sup>n</sup>* has the neutral element and opposite element, denoted by *neut*(*a<sup>n</sup>*) and *anti*(*a<sup>n</sup>*), respectively. For *b* ∈ *N*, there is a positive integer *m*, such that *b<sup>m</sup>* has the neutral element and opposite element, denoted by *neut*(*b<sup>m</sup>*) and *anti*(*b<sup>m</sup>*), respectively. Suppose that (*N*, ∗) satisfies the condition *a<sup>n</sup>* ∗ *b<sup>m</sup>* = *b<sup>m</sup>* ∗ *a<sup>n</sup>*. From Proposition 2, *neut*(*b<sup>m</sup>*) itself has the neutral element and opposite element, so applying the condition to *a* and *neut*(*b<sup>m</sup>*) we get *a<sup>n</sup>* ∗ *neut*(*b<sup>m</sup>*) = *neut*(*b<sup>m</sup>*) ∗ *a<sup>n</sup>*. From Definition 14, we know that (*N*, ∗) is a weak commutative GAG-NET-Loop.

**Definition 15.** *A GAG-NET-Loop* (*S*, ∗) *is called a quasi Clifford AG-groupoid, if it is a quasi strong inverse AG-groupoid and for any a*, *b* ∈ *S, there are two positive integers n*, *m such that*

$$a^{\mathfrak{n}} \ast (b^{\mathfrak{m}} \ast (b^{\mathfrak{m}})^{-1}) = (b^{\mathfrak{m}} \ast (b^{\mathfrak{m}})^{-1}) \ast a^{\mathfrak{n}}.$$

**Theorem 10.** *Let* (*N*, ∗) *be a groupoid. Then, N is a weak commutative GAG-NET-Loop if and only if it is a quasi Clifford AG-groupoid.*

**Proof.** Necessity: Suppose that *N* is a weak commutative GAG-NET-Loop. By Theorem 9, we know that *N* is a quasi strong inverse AG-groupoid, then ∀*a*, *b* ∈ *N* there are two positive integers *n*, *m*, such that *a<sup>n</sup>* and *b<sup>m</sup>* have the neutral element and opposite element. Set

$$(a^n)^{-1} = neut(a^n) \ast anti(a^n).$$

For any *a*, *b* ∈ *N*, we have

$$a^n \ast (b^m \ast (b^m)^{-1}) = a^n \ast neut(b^m) = neut(b^m) \ast a^n = (b^m \ast (b^m)^{-1}) \ast a^n.$$

From Definition 15, we know that *N* is a quasi Clifford AG-groupoid.

Sufficiency: Assume that *N* is a quasi Clifford AG-groupoid; from Definition 15, it is a quasi strong inverse AG-groupoid. By Theorem 9, we know that *N* is a GAG-NET-Loop. Then, ∀*a*, *b* ∈ *N*, there are two positive integers *n*, *m*, such that *a<sup>n</sup>* and *b<sup>m</sup>* have the neutral element and opposite element, (*a<sup>n</sup>*)<sup>−1</sup> ∈ *N*, (*b<sup>m</sup>*)<sup>−1</sup> ∈ *N*. Set

$$neut(a^n) = a^n \ast (a^n)^{-1}, \quad neut(b^m) = b^m \ast (b^m)^{-1}.$$

From Definition 15, being *a<sup>n</sup>* ∗ (*b<sup>m</sup>* ∗ (*b<sup>m</sup>*)<sup>−1</sup>) = (*b<sup>m</sup>* ∗ (*b<sup>m</sup>*)<sup>−1</sup>) ∗ *a<sup>n</sup>*, we have *a<sup>n</sup>* ∗ *neut*(*b<sup>m</sup>*) = *neut*(*b<sup>m</sup>*) ∗ *a<sup>n</sup>*. We can get that *N* is a weak commutative GAG-NET-Loop by Definition 14.

**Example 15.** *Let S* = {1, 2, 3, 4, 5, 6, 7, 8}*, an operation* ∗ *on S is defined as in Table 11. It is a weak commutative GAG-NET-Loop. We show that it is a quasi Clifford AG-groupoid. From Theorem 9, we can see that* (*S*, ∗) *is a quasi strong inverse AG-groupoid. We just show that for any x*, *y* ∈ *S, there are two positive integers n and m such that x<sup>n</sup>* ∗ (*y<sup>m</sup>* ∗ (*y<sup>m</sup>*)<sup>−1</sup>) = (*y<sup>m</sup>* ∗ (*y<sup>m</sup>*)<sup>−1</sup>) ∗ *x<sup>n</sup>.*

*In Example 15, for* 1, 2, 3, 4, 5, 6, 7 *and* 8*, there exist positive integers* 1, 1, 2, 2, 2, 1, 2 *and* 2*, respectively, and set* 1<sup>−1</sup> = 1, 2<sup>−1</sup> = 2, (3<sup>2</sup>)<sup>−1</sup> = 1, (4<sup>2</sup>)<sup>−1</sup> = 1, (5<sup>2</sup>)<sup>−1</sup> = 2, 6<sup>−1</sup> = 6, (7<sup>2</sup>)<sup>−1</sup> = 6, (8<sup>2</sup>)<sup>−1</sup> = 6*. For any x*, *y* ∈ {1<sup>1</sup>, 2<sup>1</sup>, 3<sup>2</sup>, 4<sup>2</sup>, 5<sup>2</sup>, 6<sup>1</sup>, 7<sup>2</sup>, 8<sup>2</sup>}*, without losing generality, let x* = 1, *y* = 2*; we can get* 1<sup>1</sup> ∗ (2<sup>1</sup> ∗ (2<sup>1</sup>)<sup>−1</sup>) = (2<sup>1</sup> ∗ (2<sup>1</sup>)<sup>−1</sup>) ∗ 1<sup>1</sup> = 2*. We can verify the other cases; thus* (*S*, ∗) *is a quasi Clifford AG-groupoid.*


**Table 11.** The operation table of Example 15.

**Example 16.** *Let S* = {1, 2, 3, 4, 5}*, an operation* ∗ *on S is defined as in Table 12. It is not a weak commutative GAG-NET-Loop. We show that there exist x*, *y* ∈ *S such that, for any two positive integers n and m, x<sup>n</sup>* ∗ (*y<sup>m</sup>* ∗ (*y<sup>m</sup>*)<sup>−1</sup>) ≠ (*y<sup>m</sup>* ∗ (*y<sup>m</sup>*)<sup>−1</sup>) ∗ *x<sup>n</sup>.*

*In Example 16, for any n*, *m* ∈ *Z*<sup>+</sup>*,* 1<sup>n</sup> = 1, 2<sup>m</sup> = 2 *and* (1<sup>n</sup>)<sup>−1</sup> = 1, (2<sup>m</sup>)<sup>−1</sup> = 2*, but* 1<sup>n</sup> ∗ (2<sup>m</sup> ∗ (2<sup>m</sup>)<sup>−1</sup>) = 4 ≠ 3 = (2<sup>m</sup> ∗ (2<sup>m</sup>)<sup>−1</sup>) ∗ 1<sup>n</sup>*. That is, for* 1, 2 ∈ *S, there are not two positive integers n*, *m such that* 1<sup>n</sup> ∗ (2<sup>m</sup> ∗ (2<sup>m</sup>)<sup>−1</sup>) = (2<sup>m</sup> ∗ (2<sup>m</sup>)<sup>−1</sup>) ∗ 1<sup>n</sup>*. Thus,* (*S*, ∗) *is not a quasi Clifford AG-groupoid.*


**Table 12.** The operation table of Example 16.

#### **5. Conclusions**

We have thoroughly studied the GAG-NET-Loop from the perspective of AG-groupoid theory and obtained some important results. Figures 1 and 2 give the relations of the GAG-NET-Loop and other algebraic structures.

**Figure 1.** The relations of GAG-NET-Loop and other algebraic structures.

**Figure 2.** The relations of GAG-NET-Loop and other AG-groupoids.

As can be seen in Figure 1, we prove that the AG-NET-Loop is equal to the strong inverse AG-groupoid, the GAG-NET-Loop is equal to the quasi strong inverse AG-groupoid, and the weak commutative GAG-NET-Loop is equal to the quasi Clifford AG-groupoid.

As can be seen in Figure 2, we prove that a GAG-NET-loop is a quasi regular AG-groupoid, but a quasi regular AG-groupoid may not be a GAG-NET-loop; a GAG-NET-loop is a quasi fully regular AG-groupoid, but a quasi fully regular AG-groupoid may not be a GAG-NET-loop.

Figure 3 can be used to further express the relationships among the GAG-NET-Loop and some algebraic systems. Here, as shown in Example 2, A represents a commutative AG-NET-Loop; as shown in Example 15, B represents a weak commutative GAG-NET-Loop, but it is not an AG-NET-Loop; as is shown in Example 14, C represents a non-commutative AG-NET-Loop; D represents a GAG-NET-Loop, but it is neither an AG-NET-Loop nor a weak commutative GAG-NET-Loop; as shown in Example 10, E represents a quasi regular AG-groupoid, but it is not a GAG-NET-Loop; and as shown in Example 11, F represents a quasi fully regular AG-groupoid, but it is not a GAG-NET-Loop. A+B represents a weak commutative GAG-NET-Loop, A+C represents an AG-NET-Loop, A+B+C+D represents a

GAG-NET-Loop, A+B+C+D+E represents a quasi regular AG-groupoid, and A+B+C+D+F represents a quasi fully regular AG-groupoid.

**Figure 3.** The relationships among some algebraic systems and GAG-NET-Loop.

All these results are interesting for the exploration of the structure characterization of GAG-NET-Loop. As the next research topics, we want to find some special GAG-NET-Loops which can be decomposed into some smaller GAG-NET-Loops, and explore the relationship between these special GAG-NET-Loops and the related AG-groupoids.

**Author Contributions:** Conceptualization, Funding acquisition, Writing—review and editing, X.Z.; Data curation, Software, Writing—original draft, Writing–review and editing, X.A.; Resources, Writing–review and editing, Y.M.

**Funding:** This research was funded by National Natural Science Foundation of China (Grant No. 61976130).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Regular CA-Groupoids and Cyclic Associative Neutrosophic Extended Triplet Groupoids (CA-NET-Groupoids) with Green Relations**

#### **Wangtao Yuan and Xiaohong Zhang \***

Department of Mathematics, Shaanxi University of Science & Technology, Xi'an 710021, China; 1809007@sust.edu.cn

**\*** Correspondence: zhangxiaohong@sust.edu.cn

Received: 18 December 2019; Accepted: 3 February 2020; Published: 6 February 2020

**Abstract:** Based on the theories of AG-groupoid, neutrosophic extended triplet (NET) and semigroup, the characteristics of regular cyclic associative groupoids (CA-groupoids) and cyclic associative neutrosophic extended triplet groupoids (CA-NET-groupoids) are further studied, and some important results are obtained. In particular, the following conclusions are strictly proved: (1) an algebraic system is a regular CA-groupoid if and only if it is a CA-NET-groupoid; (2) if (*S*, \*) is a regular CA-groupoid, then every element of *S* lies in a subgroup of *S*, and every H-class in *S* is a group; and (3) an algebraic system is an inverse CA-groupoid if and only if it is a regular CA-groupoid and its idempotent elements are commutative. Moreover, the Green relations of CA-groupoids are investigated, and some examples are presented for studying the structure of regular CA-groupoids.

**Keywords:** semigroup; CA-groupoid; regular CA-groupoid; neutrosophic extended triplet (NET); Green relation

#### **1. Introduction**

The theory of groups is an essential branch of algebra, and the study of groups has become an important trend in semigroup theory. Various algebraic structures are related to groups, such as regular semigroups, generalized groups, and neutrosophic extended triplet groups (see [1–6]). With the development of semigroup theory, the study of generalized regular semigroups has become an important topic. This paper focuses on the regularity of non-associative algebraic structures satisfying the cyclic associative law: *x*(*yz*) = *z*(*xy*).

As early as 1954, Sholander [7] used the term cyclic associative law to express the following operation law: (*ab*)*c* = (*bc*)*a*. Obviously, its dual form is as follows: *a*(*bc*) = *c*(*ab*). At the same time, in 1954, Hosszu also used the term cyclic associative law in the study of functional equations (see the introduction and explanation by Maksa [8]). In 1995, Kleinfeld [9] studied the rings with cyclic associative law *x*(*yz*) = *y*(*zx*). Moreover, Zhan and Tan [10] introduced the notion of left weakly Novikov algebra. In many fields, such as non-associative rings and non-associative algebras [11–14], image processing [15], and networks [16], non-associativity has essential research significance. Since the cyclic associative law is widely used in algebraic systems, we have been focusing on the basic algebraic structure of cyclic associative groupoids (CA-groupoids) and other relevant algebraic structures (see [17,18]).

Smarandache first proposed the new concept of neutrosophic set in [19]. The theory of neutrosophic sets has been applied in many fields, such as applying neutrosophic soft sets in decision making, and proposing a new model of similarity in medical diagnosis and verifying its validity through a numerical example with a practical background [20]. Later, Smarandache and colleagues extended the neutrosophic logic to the neutrosophic extended triplet group (NETG) [6]. In this paper, we analyze the structure of cyclic associative neutrosophic extended triplet groupoids (CA-NET-Groupoids).

Green's relations, first studied by Green [21] in 1951, have played a fundamental role in the development of regular semigroup theory. This has in turn illustrated the effectiveness of Green's method in studying semigroups, especially regular semigroups. Research on the Green relations of regular semigroups is at the core of semigroup algebra theory and involves almost all of its aspects. In 2011, Mary [22] studied the generalized inverse of semigroups by means of Green's relations. In 2017, Fleischer and Kufleitner [23] considered the complexity of Green's relations when the semigroup is given by transformations on a finite set. This paper focuses on the Green's relations of CA-groupoids, in particular regular CA-groupoids. Recently, we analyzed these relations from the perspective of CA-groupoid theory and obtained some unexpected results: if *S* is a regular CA-groupoid, then every element of *S* lies in a subgroup of *S*, and every H-class in *S* is a group.

The rest of this paper is organized as follows. In Section 2, we give the related concepts and results of the CA-groupoid. In Section 3, we give some basic concepts and examples of regular elements, strongly regular elements, inverse elements, and local associative and quasi-regular elements. In Section 4, we prove the equivalence of regular CA-groupoids and CA-NET-groupoids, and give corresponding examples. In Section 5, we discuss the Green's relations of CA-groupoids and the Green's relations of regular CA-groupoids. In Section 6, we propose a new concept of inverse CA-groupoids and prove that regular CA-groupoids, strongly regular CA-groupoids, CA-NET-groupoids, inverse CA-groupoids and commutative regular semigroups are equivalent. Finally, the summary and plans for future work are presented in Section 7.

#### **2. Preliminaries**

In this section, we give the related research and results of the CA-groupoid. Some related notions are introduced.

A groupoid is a pair (*S*, ×) where *S* is a non-empty set together with a binary operation ×. Traditionally, the × operator is omitted when no confusion arises.

**Definition 1.** *([4,5]) A groupoid (S,* ×*) is called a neutrosophic extended triplet groupoid (NET-groupoid) if, for any a* ∈ *S, there exist a neutral of "a" (denoted by neut(a)), and an opposite of "a" (denoted by anti(a)), such that neut(a)*∈*S, anti(a)*∈*S, and:*

*a* × *neut*(*a*) = *neut*(*a*) × *a* = *a; a* × *anti*(*a*) = *anti*(*a*) × *a* = *neut*(*a*)

*The triplet (a, neut(a), anti(a)) is called a neutrosophic extended triplet.*
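Definition 1 can be explored concretely on a small table. The sketch below is our own illustration (not from the paper): it enumerates, for a chosen element *a* of the commutative groupoid (Z<sub>6</sub>, × mod 6), all pairs (*neut*(*a*), *anti*(*a*)) satisfying the two defining equalities. For instance, (2, 4, 2) is a neutrosophic extended triplet there, since 2 × 4 ≡ 2 and 2 × 2 ≡ 4 (mod 6).

```python
# Enumerate neutrosophic extended triplets (a, neut(a), anti(a)) in the
# commutative groupoid (Z_6, x mod 6), following Definition 1.
S = range(6)
op = lambda x, y: (x * y) % 6

def net_triplets(a):
    triplets = []
    for e in S:                       # candidate neut(a)
        if op(a, e) == a and op(e, a) == a:
            for x in S:               # candidate anti(a)
                if op(a, x) == e and op(x, a) == e:
                    triplets.append((a, e, x))
    return triplets

print(net_triplets(2))  # contains (2, 4, 2): 2*4 = 2 and 2*2 = 4 (mod 6)
```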

Let (*S*, ×) be a groupoid. Some concepts are defined as follows:


Here, recall some basic concepts in the semigroup theory. A non-empty subset *A* of a semigroup (*S*, ×) is called a left ideal if *SA* ⊆ *A*, a right ideal if *AS* ⊆ *A*, and an ideal if it is both a left and a right ideal. If *a* is an element of a semigroup (*S*, ×), the smallest left ideal containing *a* is *Sa* ∪ {*a*}, which we may conveniently write as *S*<sup>1</sup>*a*.

An element a of a semigroup *S* is called regular if there exists *x* in *S* such that *a* × *x* × *a* = *a*. The semigroup *S* is called regular if all its elements are regular.
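The regularity condition *a* × *x* × *a* = *a* can be tested exhaustively on a finite table. As our own illustration (not the paper's), every element of (Z<sub>6</sub>, × mod 6) turns out to be regular; for example, 2 × 2 × 2 ≡ 2 (mod 6).

```python
# An element a of a semigroup is regular if a * x * a = a for some x.
# Sanity check on the commutative semigroup (Z_6, x mod 6).
S = range(6)
op = lambda x, y: (x * y) % 6

def is_regular(a):
    return any(op(op(a, x), a) == a for x in S)

print(all(is_regular(a) for a in S))  # True: (Z_6, x mod 6) is regular
```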

Among idempotents in an arbitrary semigroup, there is a natural (partial) order relation defined by the rule that *e* ≤ *f* if and only if *e* × *f* = *f* × *e* = *e*. It is easy to verify that the given relation has the properties (reflexive), (antisymmetric) that define an order relation. Certainly, it is clear that *e* ≤ *e*, and that *e* ≤ *f* and *f* ≤ *e* together implies that *e* = *f*. To show transitivity, notice that, if *e* ≤ *f* and *f* ≤ *g*, so that *e* × *f* = *f* × *e* = *e* and *f* × *g* = *g* × *f* = *f*, then *e* × *g* = *e* × *f* × *g* = *e* × *f* = *e* and *g* × *e* = *g* × *f* × *e* = *f* × *e* = *e*, and thus *e* ≤ *g*.
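The natural order on idempotents can be computed directly from a table. As an illustration of our own (not from the paper), the idempotents of (Z<sub>6</sub>, × mod 6) are {0, 1, 3, 4}, and, e.g., 3 ≤ 1 holds (3 × 1 = 3) while 3 ≤ 4 does not (3 × 4 ≡ 0).

```python
# Idempotents of (Z_6, x mod 6) and the natural partial order on them:
# e <= f  iff  e * f = f * e = e.
S = range(6)
op = lambda x, y: (x * y) % 6

idempotents = [e for e in S if op(e, e) == e]
leq = lambda e, f: op(e, f) == e and op(f, e) == e

print(idempotents)           # [0, 1, 3, 4]
print(leq(3, 1), leq(3, 4))  # True False
```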

Let *S* be a regular semigroup and let *E(S)* denote the set of idempotents of *S*. For each *e* ∈ *E(S)*, let *Ge* be a subgroup of *S* with identity *e*. If *T(S)* = ∪ *(Ge: e* ∈ *E(S))* is a subsemigroup and *e*, *f*, *g* ∈ *E(S)*, *e* ≥ *f*, and *e* ≥ *g* imply *f* × *g* = *g* × *f*, we term *S* a strongly regular semigroup [24].

An equivalence relation L on *S* is defined by the rule that *a*L*b* if and only if *S*<sup>1</sup>*a* = *S*<sup>1</sup>*b*; an equivalence relation R on *S* is defined by the rule that *a*R*b* if and only if *aS*<sup>1</sup> = *bS*<sup>1</sup>; denote H = L∩R, D = L∪R, that is, *a*H*b* if and only if *S*<sup>1</sup>*a* = *S*<sup>1</sup>*b* and *aS*<sup>1</sup> = *bS*<sup>1</sup>; *a*D*b* if and only if *S*<sup>1</sup>*a* = *S*<sup>1</sup>*b* or *aS*<sup>1</sup> = *bS*<sup>1</sup>. An equivalence relation J on *S* is defined by the rule that *a*J*b* if and only if *S*<sup>1</sup>*aS*<sup>1</sup> = *S*<sup>1</sup>*bS*<sup>1</sup>, where:

$$S^1 a S^1 = SaS \cup aS \cup Sa \cup \{a\}.$$

That is, *a*J*b* if and only if there exist *x*, *y*, *u*, *v* ∈ *S*<sup>1</sup> for which *x* × *a* × *y* = *b* and *u* × *b* × *v* = *a*. The L-class (R-class, H-class, D-class, J-class) containing the element *a* is written L*a* (R*a*, H*a*, D*a*, J*a*).

**Definition 2.** *([7–10,25]) Let (S,* ×*) be a groupoid. If, for all a, b, c* ∈ *S*,

$$a \times (b \times c) = c \times (a \times b),$$

*then (S,* ×*) is called a cyclic associative groupoid (shortly, CA-groupoid).*
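Definition 2 can be verified exhaustively on a finite groupoid given by its Cayley table. A minimal sketch follows; both example tables are hypothetical (not the paper's tables): min(*i*, *j*) is commutative and associative, hence cyclic associative, while the left projection *a* × *b* = *a* is not.

```python
def is_cyclic_associative(table):
    """Check a * (b * c) == c * (a * b) for all triples (Definition 2)."""
    n = len(table)
    return all(table[a][table[b][c]] == table[c][table[a][b]]
               for a in range(n) for b in range(n) for c in range(n))

# Hypothetical examples
CA = [[min(i, j) for j in range(3)] for i in range(3)]   # a * b = min(a, b)
PROJ = [[0, 0], [1, 1]]                                  # a * b = a
```

For any commutative semigroup, *a* × (*b* × *c*) = (*a* × *b*) × *c* = *c* × (*a* × *b*), which is why the min table passes the check.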

**Proposition 1.** *([25]) Let (S,* ×*) be a CA-groupoid. Then, for any a, b, c, d, x, y* ∈ *S,*

*(1) (a* × *b)* × *(c* × *d)* = *(d* × *a)* × *(c* × *b); and*

*(2) (a* × *b)* × *((c* × *d)* × *(x* × *y))* = *(d* × *a)* × *((c* × *b)* × *(x* × *y)).*

**Definition 3.** *([25]) A NET-groupoid (S,* ×*) is called cyclic associative (shortly, CA-NET-groupoid) if it is cyclic associative as a groupoid. S is called a commutative CA-NET-groupoid if, for all a, b* ∈ *S, a* × *b* = *b* × *a.*

**Theorem 1.** *([25]) Let (S,* ×*) be a CA-NET-groupoid. Then, for any a, p, q* ∈ *N and anti(a)* ∈ *{anti(a)}*,


**Remark 1.** *Since there may be more than one anti-element of an element a, the symbol {anti(a)} is used to represent the set of all anti elements of a. Therefore, the meaning of q* ∈ *{anti(a)} is that q is an anti-element of a.*

**Theorem 2.** *([25]) Let (S,* ×*) be a CA-NET-groupoid. Denote the set of all different neutral elements in S by E(S). For any e* ∈ *E(S), denote S(e)* = *{a* ∈ *S* | *neut(a)* = *e}. Then, for any e* ∈ *E(S), S(e) is a subgroup of S.*

#### **3. Regular and Inverse Elements in Cyclic Associative Groupoids (CA-Groupoids)**

**Definition 4.** *An element a of a CA-groupoid (S,* ×*) is called regular if there exists x* ∈ *S such that*

$$a = a \times (x \times a).$$

*(S,* ×*) is called a regular CA-groupoid if all its elements are regular.*

**Definition 5.** *An element a of a CA-groupoid (S,* ×*) is called strongly regular if there exists x* ∈ *S such that*

$$a = a \times (x \times a) \text{ and } a = (a \times x) \times a.$$

*(S,* ×*) is called strongly regular CA-groupoid if all its elements are strongly regular.*
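Definitions 4 and 5 translate directly into an exhaustive search over witnesses *x* in a finite Cayley table. A minimal sketch; the table (min on {0, 1, 2}) is a hypothetical example, not Table 1 of the paper.

```python
def is_regular(table, a):
    """Definition 4: a = a * (x * a) for some x."""
    return any(table[a][table[x][a]] == a for x in range(len(table)))

def is_strongly_regular(table, a):
    """Definition 5: a = a * (x * a) and a = (a * x) * a for the same x."""
    return any(table[a][table[x][a]] == a and table[table[a][x]][a] == a
               for x in range(len(table)))

# Hypothetical example: a * b = min(a, b) on {0, 1, 2}
T = [[min(i, j) for j in range(3)] for i in range(3)]
```

Here every element *a* is its own witness, since min(*a*, min(*a*, *a*)) = *a*.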

**Example 1.** *Denote S* = *{a, b, c} and define the operation* × *on S as shown in Table 1. We can verify that a is strongly regular, since a* = *a* × *(a* × *a)* = *(a* × *a*) × *a; b is regular, since b* = *b* × *(b* × *b). However, b is not strongly regular, since b* ≠ *(b* × *b)* × *b, and there does not exist x* ∈ *S such that b* = *b* × *(x* × *b)* = *(b* × *x)* × *b.*


**Table 1.** The operation × on *S*.

**Example 2.** *Let S* = *{1, 2, 3, 4}. The operation* × *on S is defined as Table 2. We can verify that (S,* ×*) is a commutative semigroup, then for any a, b, c* ∈ *S, we have a* × *(b* × *c)* = *(a* × *b)* × *c* = *c* × *(a* × *b). Thus, (S,* ×*) is a commutative CA-groupoid. Moreover, (S,* ×*) is an AG-groupoid because (S,* ×*) is a commutative CA-groupoid. In addition, (S,* ×*) is a regular semigroup, because 1* = *1* × *1* × *1, 2* = *2* × *2* × *2, 3* = *3* × *1* × *3, 4* = *4* × *2* × *4. (S,* ×*) is also a regular CA-groupoid, since 1* = *1* × *(1* × *1), 2* = *2* × *(2* × *2), 3* = *3* × *(1* × *3), 4* = *4* × *(2* × *4). (S,* ×*) is also a regular AG-groupoid, since 1* = *(1* × *1)* × *1, 2* = *(2* × *2)* × *2, 3* = *(3* × *1)* × *3, 4* = *(4* × *4)* × *4.*

**Table 2.** The operation × on *S*.


**Example 3.** *Let S* = *{1, 2, 3, 4, 5}. The operation* × *on S is defined as Table 3. We can verify that (S,* ×*) is a strongly regular semigroup. However, (S,* ×*) is not a CA-groupoid because 3* × *(4* × *5)* ≠ *5* × *(3* × *4).*

**Table 3.** The operation × on *S*.


**Example 4.** *Let S* = *{1, 2, 3, 4}. The operation* × *on S is defined as Table 4. We can verify that (S,* ×*) is a strongly regular CA-groupoid, since 1* = *1* × *(1* × *1)* = *(1* × *1)* × *1, 2* = *2* × *(4* × *2)* = *(2* × *4)* × *2, 3* = *3* × *(3* × *3)* = *(3* × *3)* × *3, 4* = *4* × *(2* × *4)* = *(4* × *2)* × *4. (S,* ×*) is also a strongly regular semigroup*.

**Table 4.** The operation × on *S*.


An idea of great importance in CA-groupoid theory is that of the inverse of an element.

**Definition 6.** *For any element a in a CA-groupoid S, we say that a*<sup>−1</sup> *is an inverse of a if*

$$a = a \times (a^{-1} \times a), \quad a^{-1} \times (a \times a^{-1}) = a^{-1}. \tag{1}$$

*Notice that an element with an inverse is necessarily regular. Less obviously, each regular element has an inverse; for if a* × *(x* × *a)* = *a, we need only define a*<sup>−1</sup> = *x* × *(a* × *x) and verify that Equation (1) is satisfied.*

**Theorem 3.** *Let (S,* ×*) be a regular CA-groupoid; then, each of its elements has an inverse and the inverse is unique.*

**Proof.** Let *x*1, *x*<sup>2</sup> be inverses of *a* in *S*. Then, we have *a* = *a* × (*x*<sup>1</sup> × *a*), *x*<sup>1</sup> = *x*<sup>1</sup> × (*a* × *x*1) and *a* = *a* × (*x*<sup>2</sup> × *a*), *x*<sup>2</sup> = *x*<sup>2</sup> × (*a* × *x*2),

*x*<sup>1</sup> = *x*<sup>1</sup> × (*a* × *x*1) = *x*<sup>1</sup> × (*x*<sup>1</sup> × *a*) = *x*<sup>1</sup> × (*x*<sup>1</sup> × (*a* × (*x*<sup>2</sup> × *a*))) = *x*<sup>1</sup> × (*x*<sup>1</sup> × (*a* × (*a* × *x*2))) = *x*<sup>1</sup> × ((*a* × *x*2) × (*x*<sup>1</sup> × *a*))

= *x*<sup>1</sup> × ((*a* × *a*) × (*x*<sup>1</sup> × *x*2)) (Applying Proposition 1)

$$= (x_1 \times x_2) \times (x_1 \times (a \times a)) = (x_1 \times x_2) \times (a \times (x_1 \times a)) = (x_1 \times x_2) \times a.$$

Similarly, we can get that *x*<sup>2</sup> = (*x*<sup>2</sup> × *x*1) × *a*. Then, we have

$$\begin{aligned} (\mathbf{x}\_1 \times \mathbf{a}) \times \mathbf{x}\_2 &= (\mathbf{x}\_1 \times \mathbf{a}) \times ((\mathbf{x}\_2 \times \mathbf{x}\_1) \times \mathbf{a}) = a \times ((\mathbf{x}\_1 \times \mathbf{a}) \times (\mathbf{x}\_2 \times \mathbf{x}\_1)) = (\mathbf{x}\_2 \times \mathbf{x}\_1) \times (a \times (\mathbf{x}\_1 \times \mathbf{a})) \\ &= (\mathbf{x}\_2 \times \mathbf{x}\_1) \times a = \mathbf{x}\_2, \end{aligned}$$

$$x_1 \times x_2 = x_1 \times ((x_2 \times x_1) \times a) = a \times (x_1 \times (x_2 \times x_1)) = (x_2 \times x_1) \times (a \times x_1)$$

= (*x*<sup>1</sup> × *x*2) × (*a* × *x*1) (Applying Proposition 1)

$$= x_1 \times ((x_1 \times x_2) \times a) = x_1 \times x_1.$$

Similarly, we can get that *x*<sup>2</sup> × *x*<sup>1</sup> = *x*<sup>2</sup> × *x*2. Further, we have,

$$\begin{aligned} \mathbf{x}\_1 \times \mathbf{x}\_2 = \mathbf{x}\_1 \times ((\mathbf{x}\_1 \times a) \times \mathbf{x}\_2) &= \mathbf{x}\_2 \times (\mathbf{x}\_1 \times (\mathbf{x}\_1 \times a)) = \mathbf{x}\_2 \times (a \times (\mathbf{x}\_1 \times \mathbf{x}\_1)) = (\mathbf{x}\_1 \times \mathbf{x}\_1) \times (\mathbf{x}\_2 \times a) \\ &= (\mathbf{x}\_1 \times \mathbf{x}\_2) \times (\mathbf{x}\_2 \times a) \end{aligned}$$

= (*a* × *x*1) × (*x*<sup>2</sup> × *x*2) (Applying Proposition 1)

$$=(a \times x\_1) \times (x\_2 \times x\_1)$$

= (*x*<sup>1</sup> <sup>×</sup> *a*) × (*x*<sup>2</sup> × *x*2) (Applying Proposition 1 and *x*<sup>2</sup> × *x*<sup>1</sup> = *x*<sup>2</sup> × *x*2)

= *x*<sup>2</sup> × ((*x*<sup>1</sup> × *a*) × *x*2) = *x*<sup>2</sup> × *x*2.

Thus, *x*<sup>1</sup> × *x*<sup>2</sup> = *x*<sup>2</sup> × *x*1, *x*<sup>1</sup> = (*x*<sup>1</sup> × *x*2) × *a* = (*x*<sup>2</sup> × *x*1) × *a* = *x*2.


Therefore, in a regular CA-groupoid, each of its elements has an inverse and the inverse is unique.
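Theorem 3 (existence and uniqueness of inverses) can be observed mechanically by enumerating, for each element, all candidates satisfying Equation (1). A minimal sketch; the table (min on {0, 1, 2}) is a hypothetical regular CA-groupoid, not Table 5 or Table 6 of the paper.

```python
def inverses(table, a):
    """All x with a = a * (x * a) and x = x * (a * x), per Equation (1)."""
    n = len(table)
    return [x for x in range(n)
            if table[a][table[x][a]] == a and table[x][table[a][x]] == x]

# Hypothetical example: a * b = min(a, b) on {0, 1, 2}
T = [[min(i, j) for j in range(3)] for i in range(3)]
```

In this table every element is its own unique inverse, consistent with Theorem 3.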

**Example 5.** *Let S* = *{1, 2, 3, 4, 5, 6}. The operation* × *on S is defined as Table 5. We can verify that (S,* ×*) is a CA-groupoid; element 3 is an inverse of 3 because 3* = *3* × *(3* × *3), 3* = *3* × *(3* × *3), so element 3 is regular; and element 5 is an inverse of 5 since 5* = *5* × *(5* × *5), 5* = *5* × *(5* × *5), so element 5 is regular. However, elements 1, 2, 4, and 6 have no inverses because there exists no x, y, p, q* ∈ *S such that 1* = *1* × *(x* × *1), x* = *x* × *(1* × *x); 2* = *2* × *(y* × *2), y* = *y* × *(2* × *y); 4* = *4* × *(p* × *4), p* = *p* × *(4* × *p); and 6* = *6* × *(q* × *6), q* = *q* × *(6* × *q). Obviously, for any a* ∈ *S, if a* ∉ *a* × *S, then a has no inverse*.

**Table 5.** The operation × on *S*.


**Example 6.** *Let S* = *{1, 2, 3, 4, 5, 6}. The operation* × *on S is defined as Table 6. We can verify that (S,* ×*) is a regular CA-groupoid, since 1* = *1* × *(1* × *1), 2* = *2* × *(2* × *2), 3* = *3* × *(3* × *3), 4* = *4* × *(4* × *4), 5* = *5* × *(5* × *5), 6* = *6* × *(6* × *6), and the inverse is unique.*

**Table 6.** The operation × on *S*.


**Definition 7.** *An element a of a CA-groupoid (S,* ×*) is called locally associative if*

$$a \times (a \times a) = (a \times a) \times a.$$

*(S,* ×*) is called locally associative CA-groupoid if all its elements are locally associative*.

**Example 7.** *Let S* = *{1, 2, 3, 4, 5}. The operation* × *on S is defined as Table 7. We can verify that (S,* ×*) is a locally associative CA-groupoid, since 1* × *(1* × *1)* = *(1* × *1)* × *1, 2* × *(2* × *2)* = *(2* × *2)* × *2, 3* × *(3* × *3)* = *(3* × *3)* × *3, 4* × *(4* × *4)* = *(4* × *4)* × *4, and 5* × *(5* × *5)* = *(5* × *5)* × *5. However, (S,* ×*) is not a semigroup because (3* × *4)* × *3* ≠ *3* × *(4* × *3).*


**Table 7.** The operation × on *S*.

**Definition 8.** *An element a of a CA-groupoid (S,* ×*) is called quasi-regular if there exists x* ∈ *S, m* ∈ *N such that*

$$a^m \times (x \times a^m) = a^m, \text{ where } a^m \text{ is defined by } a \times a^{m-1}.$$

*(S,* ×*) is called quasi-regular CA-groupoid if all its elements are quasi-regular*.
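Definition 8 suggests a bounded search over powers *a*<sup>*m*</sup> and witnesses *x*. A minimal sketch; the table (min on {0, 1, 2}) is a hypothetical example, not Table 8 of the paper, and the power bound `max_m` is an arbitrary cutoff for illustration.

```python
def power(table, a, m):
    """a^m, defined recursively by a^1 = a and a^m = a * a^(m-1)."""
    p = a
    for _ in range(m - 1):
        p = table[a][p]
    return p

def is_quasi_regular(table, a, max_m=5):
    """Search for m and x with a^m * (x * a^m) = a^m (Definition 8)."""
    n = len(table)
    return any(table[am][table[x][am]] == am
               for m in range(1, max_m + 1)
               for am in [power(table, a, m)]
               for x in range(n))

# Hypothetical example: a * b = min(a, b) on {0, 1, 2}
T = [[min(i, j) for j in range(3)] for i in range(3)]
```

Since every regular element is quasi-regular with *m* = 1, the min table passes trivially.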

**Example 8.** *Let S* = *{1, 2, 3, 4}. The operation* × *on S is defined as Table 8. We can verify that (S,* ×*) is a quasi-regular CA-groupoid, since 1<sup>2</sup>* = *1<sup>2</sup>* × *(3* × *1<sup>2</sup>), 2* = *2* × *(2* × *2), 3* = *3* × *(3* × *3), 4<sup>2</sup>* = *4<sup>2</sup>* × *(2* × *4<sup>2</sup>). However, (S,* ×*) is not a regular CA-groupoid because there exists no x, y* ∈ *S such that 1* = *1* × *(x* × *1), 4* = *4* × *(y* × *4). Moreover, (S,* ×*) is not a semigroup because (4* × *1)* × *1* ≠ *4* × *(1* × *1).*

**Table 8.** The operation × on *S*.


**Definition 9.** *Let (S,* ×*) be a groupoid. If for all a, b, c* ∈ *S,*

$$a \times (b \times c) = (a \times b) \times c, \quad a \times (b \times c) = c \times (a \times b),$$

*then (S,* ×*) is called cyclic associative semigroup (shortly, CA-semigroup).*

**Example 9.** *Suppose S* = *{1, 2, 3, 4} and define a binary operation* × *on S as shown in Table 9. We can verify that (S,* ×*) is a CA-groupoid, but (S,* ×*) is not a CA-semigroup because (3* × *4)* × *3* ≠ *3* × *(4* × *3).*


**Table 9.** The operation × on *S.*

Obviously, in a CA-groupoid *S*, we have: strongly regular element ⇒ regular element ⇒ element with an inverse ⇒ quasi-regular element.

According to Examples 1, 2, and 5–9, we can get the relationship between CA-groupoids and related algebraic systems, which can be expressed as Figure 1.

**Figure 1.** The relationships among some algebraic systems.

**Remark 2.** *In Figure 1, each letter only indicates the smallest area in which it is located. Here, A represents the set of all strongly regular CA-groupoids, and*

*A*∪*B represents the set of all regular CA-groupoids; A*∪*B*∪*C represents the set of all CA-semigroups; A*∪*B*∪*C*∪*D represents the set of all quasi-regular CA-groupoids; A*∪*B*∪*C*∪*D*∪*E represents the set of all locally associative CA-groupoids; A*∪*B*∪*C*∪*D*∪*E*∪*F represents the set of all CA-groupoids; and A*∪*B*∪*C*∪*G represents the set of all semigroups.*

#### **4. Regular Cyclic Associative Groupoids (CA-Groupoids) and Cyclic Associative Neutrosophic Extended Triplet Groupoids (CA-NET-Groupoids)**

**Theorem 4.** *Let (S,* ×*) be a CA-NET-groupoid. Then, its idempotents are commutative.*

**Proof.** Let *a*, *b* be idempotents in *S*; then, we have

(*a* × *b*) × (*a* × *b*) = (*b* × *a*) × (*a* × *b*) (Applying Proposition 1)

= (*b* × *b*) × (*a* × *a*) (Applying Proposition 1) = *b* × *a.*

Moreover

(*a* × *b*) × (*a* × *b*) = (*b* × (*neut*(*b*) × *a*)) × (*a* × *b*)

$$= (b \times b) \times (a \times (neut(b) \times a)) \text{ (Applying Proposition 1)}$$

= (*b* × *b*) × (*a* × (*a* × *neut*(*b*))) = (*b* × *b*) × (*neut*(*b*) × (*a* × *a*))

$$= b \times (neut(b) \times a) = a \times (b \times neut(b)) = a \times b.$$

Therefore, *a* × *b* = *b* × *a*; that is, in a CA-NET-groupoid, the idempotents are commutative. □

**Corollary 1.** *Every CA-NET-groupoid is commutative.*

**Proof.** Let (*S*, ×) be a CA-NET-groupoid. By Theorem 4, for any *x* ∈ *S*, *neut*(*x*) is idempotent. Then, for any *a, b* ∈ *S*, we have

*neut*(*a*) × *neut*(*b*) = *neut*(*b*) × *neut*(*a*),

Furthermore,

$$neut(a) \times b = neut(a) \times (neut(b) \times b) = b \times (neut(a) \times neut(b))$$

= *neut*(*b*) × (*b* × *neut*(*a*)) = (*neut*(*b*) × *neut*(*b*)) × (*b* × *neut*(*a*))

= (*neut*(*a*) × *neut*(*b*)) × (*b* × *neut*(*b*)) (Applying Proposition 1)

$$= (neut(a) \times neut(b)) \times (neut(b) \times b)$$

= (*b* × *neut*(*a*)) × (*neut*(*b*) × *neut*(*b*)) (Applying Proposition 1)

= (*b* × *neut*(*a*)) × *neut*(*b*). Thus, *neut*(*a*) × *b* = (*b* × *neut*(*a*)) × *neut*(*b*).

Further, for any *a, b* ∈ *S*, we have

$$a \times b = (neut(a) \times a) \times (neut(b) \times b)$$

= (*b* × *neut*(*a*)) × (*neut*(*b*) × *a*) (Applying Proposition 1)

= *a* × ((*b* × *neut*(*a*)) × *neut*(*b*))

= *a* × (*neut*(*a*) × *b*) (by *neut*(*a*) × *b* = (*b* × *neut*(*a*)) × *neut*(*b*))

= *b* × (*a* × *neut*(*a*))

= *b* × *a*

Therefore, every CA-NET-groupoid is commutative. □

**Example 10.** *Let S* = *{1, 2, 3, 4, 5}. The operation* × *on S is defined as Table 10. We can verify that (S,* ×*) is a CA-NET-groupoid, and*

$$\text{neut}(1) = 1,\text{anti}(1) = \{1,5\};\text{neut}(2) = 2,\text{anti}(2) = \{1,2,3,4,5\};$$

*neut(3)* = *3, anti(3)* = *{3, 5}; neut(4)* = *4, anti(4)* = *{1, 3, 4, 5}; neut(5)* = *5, anti(5)* = *{5}.*

*Obviously, (S,* ×*) is commutative.*


**Table 10.** The operation × on *S*.

**Theorem 5.** *Let (S,* ×*) be a groupoid. Then, S is a CA-NET-groupoid if and only if it is a regular CA-groupoid.*


**Proof.** Assume that *S* is a CA-NET-groupoid. For any *a* in *S*, by Definitions 1 and 3, we have

$$a \times (anti(a) \times a) = a \times neut(a) = a.$$

From this and Definition 4, we know that element *a* is a regular element and *S* is a regular CA-groupoid.

Therefore, we prove that S is a regular CA-groupoid.

Now, we assume that S is a regular CA-groupoid. For any *a* in a regular CA-groupoid S, we have

$$a \times (x \times a) = a.$$

Furthermore,

$$(x \times a) \times a = (x \times a) \times (a \times (x \times a)) = (x \times a) \times ((x \times a) \times a) = a \times ((x \times a) \times (x \times a))$$

$$= a \times (a \times ((x \times a) \times x))$$

$$= a \times (x \times (a \times (x \times a)))$$

$$= a \times (x \times a) = a.$$

Therefore, there exists (*x* × *a*) ∈ *S*, such that (*x* × *a*) × *a* = *a* × (*x* × *a*) = *a*. Moreover, we have

$$(x \times a) = x \times (a \times (x \times a)) = (x \times a) \times (x \times a) = a \times ((x \times a) \times x).$$

Furthermore,

$$((x \times a) \times x) \times a = ((x \times a) \times x) \times (a \times (x \times a))$$

$$= (x \times a) \times (((x \times a) \times x) \times a) = a \times ((x \times a) \times ((x \times a) \times x))$$

$$= a \times (x \times ((x \times a) \times (x \times a)))$$

$$= a \times (x \times (x \times a)) \text{ (by } (x \times a) \times (x \times a) = (x \times a)\text{)}$$

$$= (x \times a) \times (a \times x)$$

$$= x \times ((x \times a) \times a) \text{ (by } (x \times a) \times a = a\text{)}$$

$$= x \times a.$$

Therefore, there exists ((*x* × *a*) × *x*) ∈ *S*, such that *a* × ((*x* × *a*) × *x*) = ((*x* × *a*) × *x*) × *a* = *x* × *a*. Then, *S* is a CA-NET-groupoid. □

**Example 11.** *Let S* = *{1, 2, 3, 4}. The operation* × *on S is defined as Table 11. We can verify that (S,* ×*) is a CA-NET- groupoid, and neut(1)* = *1, anti(1)* = *{1, 2, 3, 4}; neut(2)* = *3, anti(2)* = *4; neut(3)* = *3, anti(3)* = *3; neut(4)* = *3, anti(4)* = *2.*


**Table 11.** The operation × on *S.*

*Moreover, (S,* ×*) is a regular CA-groupoid, since 1* = *1* × *(1* × *1), 2* = *2* × *(4* × *2), 3* = *3* × *(3* × *3), 4* = *4* × *(2* × *4).*

**Definition 10.** *Let (S,* ×*) be a groupoid. Then: (1) S is called a CA-(r, l)-NET-groupoid if it is cyclic associative and, for any a* ∈ *S, there exist neut(a), anti(a)* ∈ *S such that a* × *neut(a)* = *a and anti(a)* × *a* = *neut(a); (2) S is called a CA-(r, r)-NET-groupoid if it is cyclic associative and, for any a* ∈ *S, there exist neut(a), anti(a)* ∈ *S such that a* × *neut(a)* = *a and a* × *anti(a)* = *neut(a); (3) S is called a CA-(l, r)-NET-groupoid if it is cyclic associative and, for any a* ∈ *S, there exist neut(a), anti(a)* ∈ *S such that neut(a)* × *a* = *a and a* × *anti(a)* = *neut(a); (4) S is called a CA-(l, l)-NET-groupoid if it is cyclic associative and, for any a* ∈ *S, there exist neut(a), anti(a)* ∈ *S such that neut(a)* × *a* = *a and anti(a)* × *a* = *neut(a).*


**Theorem 6.** *Let (S,* ×*) be a groupoid. Then, S is a CA-(r, l)-NET-groupoid if and only if it is a regular CA-groupoid.*

**Proof.** Assume that *S* is a *CA-(r, l)-NET*-groupoid. For any *a* in *S*, by Definitions 1 and 10(1), we have

$$a \times neut(a) = a, \; anti(a) \times a = neut(a),$$

$$a \times (anti(a) \times a) = a \times neut(a) = a.$$

From this and Definition 4, we know that element *a* is a regular element and *S* is a regular CA-groupoid. Therefore, we prove that *S* is a regular CA-groupoid.

Now, we assume that *S* is a regular CA-groupoid. For any *a* in a regular CA-groupoid *S*, we have

$$a \times (x \times a) = a.$$

Thus, there exists (*x* × *a*) ∈ *S* such that *a* × (*x* × *a*) = *a*; that is, (*x* × *a*) is a right neutral element *neut*(*a*) of *a*. Moreover, taking *x* as an anti-element of *a*, we have

*x* × *a* = (*x* × *a*) = *neut*(*a*).

Therefore, there exists *x* ∈ *S* such that *x* × *a* = *neut*(*a*). Then, *S* is a CA-(r, l)-NET-groupoid. □

**Theorem 7.** *Let (S,* ×*) be a groupoid. Then, S is a CA-(r, r)-NET-groupoid if and only if it is a regular CA-groupoid.*

**Proof.** Assume that *S* is a CA-(r, r)-NET-groupoid. For any *a* in *S*, by Definitions 1 and 10(2), we have

$$a \times neut(a) = a, \; a \times anti(a) = neut(a),$$

$$a \times (anti(a) \times a) = a \times (a \times anti(a)) = a \times neut(a) = a.$$

From this and Definition 4, we know that element *a* is a regular element and *S* is a regular CA-groupoid. Therefore, we prove that *S* is a regular CA-groupoid.

Now, we assume that *S* is a regular CA-groupoid, for any *a* in a regular CA-groupoid *S*, we have

$$a \times (x \times a) = a \times (a \times x) = a.$$

Thus, there exists (*a* × *x*) ∈ *S* such that *a* × (*a* × *x*) = *a*; that is, (*a* × *x*) is a right neutral element *neut*(*a*) of *a*. Moreover, taking *x* as an anti-element of *a*, we have

*a* × *x* = (*a* × *x*) = *neut*(*a*).

Therefore, there exists *x* ∈ *S* such that *a* × *x* = *neut*(*a*). Then, *S* is a CA-(r, r)-NET-groupoid. □

**Theorem 8.** *Let (S,* ×*) be a groupoid. Then, S is a CA-(l, r)-NET-groupoid if and only if it is a regular CA-groupoid.*

**Proof.** Assume that *S* is a CA-(l, r)-NET-groupoid. For any *a* in *S*, by Definitions 1 and 10(3), we have

$$neut(a) \times a = a, \; a \times anti(a) = neut(a),$$

*neut*(*a*) × *a* = (*a* × *anti*(*a*)) × *a* = (*a* × *anti*(*a*)) × (*neut*(*a*) × *a*)

= (*a* × *a*) × (*neut*(*a*) × *anti*(*a*)) = (*anti*(*a*) × *a*) × (*neut*(*a*) × *a*) (Applying Proposition 1)

= (*anti*(*a*) × *a*) × *a.*

Thus, *a* × *anti*(*a*) = *anti*(*a*) × *a* = *neut*(*a*). Moreover, we have

*a* × *neut*(*a*) = (*neut*(*a*) × *a*) × (*anti*(*a*) × *a*)

= (*a* × *neut*(*a*)) × (*anti*(*a*) × *a*) (Applying Proposition 1)

= (*a* × *neut*(*a*)) × *neut*(*a*).

Thus, a × *neut*(*a*) = *neut*(*a*) × *a* = *a*. Then,

$$(anti(a) \times a) \times a = neut(a) \times a = a \times neut(a) = a \times (anti(a) \times a) = a.$$

From this and Definition 4, we know that element *a* is a regular element and *S* is a regular CA-groupoid. Therefore, we prove that *S* is a regular CA-groupoid.

Now, we assume that *S* is a regular CA-groupoid. For any *a* in a regular CA-groupoid *S*, let *a* = (*a* × *x*) × *a.* We have

$$x \times a = x \times ((a \times x) \times a) = a \times (x \times (a \times x)) = (a \times x) \times (a \times x),$$

(*a* × *x*) × *a* = (*a* × *x*) × ((*a* × *x*) × *a*) = *a* × ((*a* × *x*) × (*a* × *x*)) = *a* × (*x* × *a*) = *a*, *a* × *x* = (*a* × *x*)

Therefore, *S* is a CA-(l, r)-NET-groupoid. □

**Theorem 9.** *Let (S,* ×*) be a groupoid. Then, S is a CA-(l, l)-NET-groupoid if and only if it is a regular CA-groupoid.*

**Proof.** Assume that *S* is a CA-(l, l)-NET-groupoid. For any *a* in *S*, by Definitions 1 and 10(4), we have

*neut*(*a*) × *a* = *a*, *anti*(*a*) × *a* = *neut*(*a*),

*a* × *neut*(*a*) = (*neut*(*a*) × *a*) × (*anti*(*a*) × *a*)

= (*a* × *neut*(*a*)) × (*anti*(*a*) × *a*) (Applying Proposition 1)

= (*a* × *neut*(*a*)) × *neut*(*a*)

Thus, *a* × *neut*(*a*) = *neut*(*a*) × *a* = *a*. Then,

(*anti*(*a*) × *a*) × *a* = *neut*(*a*) × *a* = *a* × *neut*(*a*) = *a* × (*anti*(*a*) × *a*) = *a.*

From this and Definition 4, we know that element *a* is a regular element and *S* is a regular CA-groupoid. Therefore, we prove that *S* is a regular CA-groupoid.

Now, we assume that *S* is a regular CA-groupoid. For any *a* in *S*, take *x* with *a* = *a* × (*x* × *a*); by the proof of Theorem 5, *a* = (*x* × *a*) × *a*. We have

$$x \times a = x \times ((a \times x) \times a) = a \times (x \times (a \times x)) = (a \times x) \times (a \times x),$$

$$(x \times a) \times a = (x \times a) \times ((x \times a) \times a) = a \times ((x \times a) \times (x \times a)) = (x \times a) \times (a \times (x \times a))$$

$$= (\mathbf{x} \times a) \times (a \times (a \times \mathbf{x})) = (a \times \mathbf{x}) \times ((\mathbf{x} \times a) \times a) = (a \times \mathbf{x}) \times a$$

$$= (a \times \mathbf{x}) \times ((a \times \mathbf{x}) \times a) \text{ (by } (a \times \mathbf{x}) \times a = a)$$

$$= a \times ((a \times \mathbf{x}) \times (a \times \mathbf{x})) $$

$$= a \times (\mathbf{x} \times a) = a.$$

Moreover, we have *x* × *a* = (*x* × *a*). Therefore, *S* is a CA-(l, l)-NET-groupoid. □

**Example 12.** *Denote S* = *{1, 2, 3, 4} and define the operation* × *on S as shown in Table 12. We can verify that (S,* ×*) is a CA-(r, l)-NET-groupoid, and,*

*neut(r, l)(1)* = *1, anti(r, l)(1)* = *{1, 2, 3, 4}; neut(r, l)(2)* = *4, anti(r, l)(2)* = *2;*

*neut(r, l)(3)* = *3, anti(r, l)(3)* = *3; neut(r, l)(4)* = *4, anti(r, l)(4)* = *4*

**Table 12.** The operation × on *S.*


*It is easy to verify that S is also a CA-(r, r)-NET-groupoid, CA-(l, r)-NET-groupoid, CA-(l, l)-NET-groupoid. Moreover, (S,* ×*) is a regular CA-groupoid, since 1* = *1* × *(2* × *1), 2* = *2* × *(2* × *2), 3* = *3* × *(3* × *3), and 4* = *4* × *(4* × *4).*

#### **5. Green Relations in Cyclic Associative Groupoids (CA-Groupoids)**

If *a* is an element of a CA-groupoid *S*, the smallest left ideal of *S* containing *a* is *Sa*∪*{a}*.

**Definition 11.** *Let (S,* ×*) be a CA-groupoid, for any a, b* ∈ *S, define the following binary relationships:*

$$\begin{aligned} a\mathcal{L}b \Leftrightarrow \mathcal{S}a \cup \{a\} &= \mathcal{S}b \cup \{b\};\\ a\mathcal{R}b \Leftrightarrow a\mathcal{S} \cup \{a\} &= b\mathcal{S} \cup \{b\};\\ a\mathcal{J}b \Leftrightarrow (\mathcal{S}a \cup \{a\})\mathcal{S} \cup (\mathcal{S}a \cup \{a\}) &= (\mathcal{S}b \cup \{b\})\mathcal{S} \cup (\mathcal{S}b \cup \{b\});\\ \mathcal{H} &= \mathcal{L} \cap \mathcal{R}.\end{aligned}$$

*We call* L,R,J, *and* H the Green's relations on the CA-groupoid.
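Definition 11 can be computed directly from a Cayley table by building the principal sets *Sa* ∪ {*a*} and *aS* ∪ {*a*}. A minimal sketch; the table (min on {0, 1, 2}) is a hypothetical example, not Table 13 of the paper.

```python
def left_set(table, a):
    """Sa ∪ {a}."""
    return {table[x][a] for x in range(len(table))} | {a}

def right_set(table, a):
    """aS ∪ {a}."""
    return {table[a][x] for x in range(len(table))} | {a}

def L(table, a, b):
    """aLb iff Sa ∪ {a} = Sb ∪ {b}."""
    return left_set(table, a) == left_set(table, b)

def R(table, a, b):
    """aRb iff aS ∪ {a} = bS ∪ {b}."""
    return right_set(table, a) == right_set(table, b)

# Hypothetical example: a * b = min(a, b) on {0, 1, 2}
T = [[min(i, j) for j in range(3)] for i in range(3)]
```

Under min, *Sa* ∪ {*a*} = {0, …, *a*}, so distinct elements are never L-related.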

**Definition 12.** *Let (S,* ×*) be a CA-groupoid. A relation R on the set S is called left compatible (with the operation on S) if*

(∀*a*, *s*, *t* ∈ *S*) (*s*, *t*) ∈ *R* ⇒ (*a* × *s* , *a* × *t*) ∈ *R*,

*and right compatible if*

$$(\forall a, s, t \in S) \ (s, t) \in R \implies (s \times a, t \times a) \in R.$$

*It is called compatible if*

(∀*s*, *t*, *s*′, *t*′ ∈ *S*) [(*s*, *t*) ∈ *R and* (*s*′, *t*′) ∈ *R*] ⇒ (*s* × *s*′, *t* × *t*′) ∈ *R*.

*A left [right] compatible equivalence is called a left [right] congruence. A compatible equivalence relation is called a congruence.*

**Proposition 2.** *Let a, b be elements of a CA-groupoid S. If a* = *b, then a*L*a, a*R*a. If a* ≠ *b, then a*L*b if and only if there exist x, y in S such that x* × *a* = *b, y* × *b* = *a. In addition, a*R*b if and only if there exist u, v in S such that a* × *u* = *b, b* × *v* = *a.*

Another immediate property of this is as follows:

**Proposition 3.** L *is a left congruence and* R *is a right congruence.*

**Corollary 2.** *In a CA-groupoid S,* L *and* R *are not commutative in general. That is, as binary relationships,* L◦R ≠ R◦L.

**Example 13.** *Let S* = *{1, 2, 3, 4, 5, 6}. The operation* × *on S is defined as Table 13. Then, (S,* ×*) is a CA-groupoid.*


**Table 13.** The operation × on *S*.

L = *{*<*3, 5*>*,* <*5, 3*>*},* R = *{*<*3, 4*>*,* <*4, 3*>*}.* L◦R = *{*<*5, 4*>*}* ≠ R◦L = *{*<*4, 5*>*}. Then,* L *and* R *are not commutative.*
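The non-commutativity in Example 13 can be checked mechanically by composing the relations as sets of ordered pairs; the pairs below are taken from Example 13 itself, and the convention assumed is (*a*, *b*) ∈ R1∘R2 iff *a*R1*c* and *c*R2*b* for some *c*.

```python
def compose(R1, R2):
    """R1 ∘ R2: pairs (a, b) with (a, c) in R1 and (c, b) in R2."""
    return {(a, b) for (a, c1) in R1 for (c2, b) in R2 if c1 == c2}

# The L and R relations of Example 13
Lrel = {(3, 5), (5, 3)}
Rrel = {(3, 4), (4, 3)}
```

Composing in both orders reproduces the example: L∘R = {(5, 4)} while R∘L = {(4, 5)}.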

In a regular CA-groupoid *S* we have a particularly useful way of looking at the equivalences L and R. First, notice that if *S* is regular then *a* = *a* × *(x* × *a*) ∈*aS,* and similarly *a* ∈ *Sa*, *a*∈*SaS*. Hence, in describing the Green equivalences for a regular CA-groupoid we can drop all reference to *Sa*∪*{a}*, and assert simply that

$$\begin{aligned} a\mathcal{L}b &\Leftrightarrow Sa = Sb; \\ a\mathcal{R}b &\Leftrightarrow aS = bS; \\ a\mathcal{J}b &\Leftrightarrow SaS = SbS; \\ \mathcal{H} &= \mathcal{L}\cap\mathcal{R}. \end{aligned}$$

**Definition 13.** *Let (S,* ×*) be a regular CA-groupoid, define the following binary relationship:*

D = L∪R

*We call* D *the Green's relations on the regular CA-groupoid.*

**Theorem 10.** *In a regular CA-groupoid* S*, the relations*L *and* R *are commutative. That is, as a binary relationship,* L◦R = R◦L.

**Proof.** Let (*S,* ×) be a regular CA-groupoid, let *a, b* ∈ *S*, and suppose that (*a*, *b*) ∈ L◦R. Then, there exists *c* in *S* such that *a*L*c* and *c*R*b*. That is, there exist *x, y, u, v* in *S* such that

$$\begin{array}{c} \text{x} \times a = c, \,\text{c} \times u = b, \\\\ y \times c = a, \, b \times v = c. \end{array}$$

If we now write *d* for the element (*y* × *c*) × *u* of *S*, then, applying Theorem 5, *S* is a CA-NET-groupoid, and we have

$$a \times u = (y \times c) \times u = d,$$

$$a = y \times c = y \times (b \times v) = v \times (y \times b) = b \times (v \times y) = (c \times u) \times (v \times y).$$

$$= (y \times c) \times (v \times u) \text{ (Applying Proposition 1)}$$

$$= a \times (v \times u) = u \times (a \times v) = v \times (u \times a)$$

$$= v \times (u \times (neut(a) \times a)) = v \times (a \times (u \times neut(a)))$$

$$= v \times (neut(a) \times (a \times u)) = v \times (neut(a) \times d) = d \times (v \times neut(a)),$$

hence *a*R*d.* In addition*,*

$$b = c \times u = (x \times a) \times u = (x \times a) \times (u \times neut(u)) = neut(u) \times ((x \times a) \times u) = neut(u) \times b,$$

$$d = (y \times c) \times u = (y \times c) \times (neut(u) \times u) = u \times ((y \times c) \times neut(u)) = neut(u) \times (u \times (y \times c))$$

$$= neut(u) \times (c \times (u \times y)) = neut(u) \times (y \times (c \times u)) = neut(u) \times (y \times b) = neut(u) \times (y \times (neut(u) \times b))$$

$$= neut(u) \times (b \times (y \times neut(u))) = (y \times neut(u)) \times (neut(u) \times b) = (y \times neut(u)) \times b,$$

$$d = a \times u = (y \times c) \times u = (y \times c) \times (u \times neut(u)) = neut(u) \times ((y \times c) \times u) = neut(u) \times d,$$

$$b = c \times u = (x \times a) \times u = (x \times a) \times (neut(u) \times u) = u \times ((x \times a) \times neut(u)) = neut(u) \times (u \times (x \times a))$$

$$= neut(u) \times (u \times c) = neut(u) \times (u \times (x \times a)) = neut(u) \times (u \times (x \times (y \times c)))$$

$$= neut(u) \times ((y \times c) \times (u \times x)) = neut(u) \times (a \times (u \times x)) = neut(u) \times (x \times (a \times u)) = neut(u) \times (x \times d)$$

$$= neut(u) \times (x \times (neut(u) \times d)) = neut(u) \times (d \times (x \times neut(u))) = (x \times neut(u)) \times (neut(u) \times d)$$

$$= (x \times neut(u)) \times d,$$

*thus d*L*b.* We deduce that *(a, b)* ∈ R◦L. We have shown that L◦R ⊆ R◦L; the reverse inclusion follows in a similar way. □

**Theorem 11.** *In a regular CA-groupoid S,*L *is equivalent to* R. *That is, as a binary relationship,* L=R.

**Proof.** By Theorem 10, we have *d*L *b.* Then,

$$b = c \times u = (x \times a) \times u = (x \times a) \times (neut(u) \times u) = u \times ((x \times a) \times neut(u))$$

$$= neut(u) \times (u \times (x \times a)) = neut(u) \times (u \times c) = neut(u) \times (u \times (x \times a)) = neut(u) \times (u \times (x \times (y \times c)))$$

$$= neut(u) \times ((y \times c) \times (u \times x)) = neut(u) \times (a \times (u \times x)) = neut(u) \times (x \times (a \times u))$$

$$= neut(u) \times (x \times d) = d \times (neut(u) \times x).$$

$$d = (y \times c) \times u = (y \times c) \times (neut(u) \times u) = u \times ((y \times c) \times neut(u)) = neut(u) \times (u \times (y \times c))$$

$$= neut(u) \times (c \times (u \times y)) = neut(u) \times (y \times (c \times u)) = neut(u) \times (y \times b)$$

$$= b \times (neut(u) \times y).$$

Thus, *d*R*b*.

Therefore, in a regular CA-groupoid *S*, L *is equivalent to* R. □

**Example 14.** *Let S* = *{1, 2, 3, 4, 5, 6, 7, 8}. The operation* × *on S is defined as Table 14. Then, (S,* ×*) is a regular CA-groupoid*. L = *{*<*1, 2*>*,* <*2, 1*>*,* <*3, 4*>*,* <*4, 3*>*,* <*5, 6*>*,* <*6, 5*>*,* <*7, 8*>*,* <*8, 7*>*},* R = *{*<*1, 2*>*,* <*2, 1*>*,* <*3, 4*>*,* <*4, 3*>*,* <*5, 6*>*,* <*6, 5*>*,* <*7, 8*>*,* <*8, 7*>*}*, L◦R = *{*<*1, 1*>*,* <*2, 2*>*,* <*3, 3*>*,* <*4, 4*>*,* <*5, 5*>*,* <*6, 6*>*,* <*7, 7*>*,* <*8, 8*>} = R◦L = *{*<*1, 1*>*,* <*2, 2*>*,* <*3, 3*>*,* <*4, 4*>*,* <*5, 5*>*,* <*6, 6*>*,* <*7, 7*>*,* <*8, 8*>*}. Thus,* L *and* R *are commutative, and* L = R.


**Table 14.** The operation × on *S*.

Obviously, in the regular CA-groupoid *S*, we have

H = L = R = D = J.
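Theorem 11 can be checked mechanically on any finite Cayley table. Since Table 14 is not reproduced in this text, the sketch below uses a hypothetical stand-in, the addition table of Z₃ (a commutative group, hence a regular CA-groupoid), and the standard semigroup-style definitions of L and R via principal ideals:

```python
from itertools import product

# Hypothetical stand-in table: (Z_3, +) is a commutative group,
# hence a regular CA-groupoid. (Table 14 is not reproduced here.)
n = 3
t = [[(i + j) % n for j in range(n)] for i in range(n)]
S = range(n)

def is_ca(t):
    # cyclic associativity: x * (y * z) = z * (x * y) for all x, y, z
    return all(t[x][t[y][z]] == t[z][t[x][y]]
               for x, y, z in product(range(len(t)), repeat=3))

def is_regular(t):
    # for every a there is some x with a = a * (x * a)
    return all(any(t[a][t[x][a]] == a for x in range(len(t)))
               for a in range(len(t)))

def l_ideal(t, a):  # principal left ideal S^1 * a
    return frozenset([a] + [t[s][a] for s in range(len(t))])

def r_ideal(t, a):  # principal right ideal a * S^1
    return frozenset([a] + [t[a][s] for s in range(len(t))])

L = {(a, b) for a in S for b in S if l_ideal(t, a) == l_ideal(t, b)}
R = {(a, b) for a in S for b in S if r_ideal(t, a) == r_ideal(t, b)}

assert is_ca(t) and is_regular(t)
assert L == R  # Theorem 11: L coincides with R on a regular CA-groupoid
```

Any other regular CA-groupoid table can be substituted for `t`; the assertions encode exactly the hypotheses and conclusion of Theorem 11.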

**Lemma 1.** *In a regular CA-groupoid S*, *each* L*-class contains at least one idempotent.*

**Proof.** For any *a* ∈ *S*, there exists *x* ∈ *S* such that *a* = *a* × (*x* × *a*); then,

(*x* × *a*) × (*x* × *a*) = *a* × ((*x* × *a*) × *x*) = *x* × (*a* × (*x* × *a*)) = *x* × *a*.

Therefore, (*x* × *a*) is idempotent and *a*L(*x* × *a*). □
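The construction in Lemma 1 — take a regularity witness *x* for *a* and confirm that *x* × *a* is idempotent — is easy to verify on a finite table. A minimal sketch, again on the hypothetical stand-in (Z₃, +) rather than any table from the paper:

```python
# Hypothetical stand-in: (Z_3, +), a commutative group and hence
# a regular CA-groupoid.
n = 3
t = [[(i + j) % n for j in range(n)] for i in range(n)]

def lemma1_holds(t):
    S = range(len(t))
    for a in S:
        # pick a regularity witness x with a = a * (x * a)
        x = next(x for x in S if t[a][t[x][a]] == a)
        xa = t[x][a]
        if t[xa][xa] != xa:  # x * a must be idempotent (Lemma 1)
            return False
    return True

assert lemma1_holds(t)
```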

**Lemma 2.** *Every idempotent e in a regular CA-groupoid S is a left identity for* L*e.*

**Proof.** If *a* ∈ L*e*, then *a* = *x* × *e* for some *x* in *S*, and

$$e \times a = e \times (x \times e) = e \times (e \times x) = x \times e^2 = x \times e = a.$$


**Proposition 4.** *Let a be an element of a regular* L*-class L in a regular CA-groupoid S. If* L*a contains an idempotent e, then* L*e contains an inverse a*<sup>−1</sup> *of a such that a* × *a*<sup>−1</sup> = *a*<sup>−1</sup> × *a* = *e.*

**Proof.** Since *a*L*e*, it follows by Lemma 2 that *e* × *a* = *a*. Moreover, *a*R*e* by Theorem 11, so there exists *x* in *S* such that *a* × *x* = *e*. If we denote *x* × *e* by *a*<sup>−1</sup>, we easily see that

$$a \times (a^{-1} \times a) = a \times ((x \times e) \times a) = a \times (a \times (x \times e)) = a \times (e \times (a \times x)) = a \times (e \times e)$$

$$= e \times (a \times e) = e \times (e \times a) = e \times a = a,$$

$$a^{-1} \times (a \times a^{-1}) = (x \times e) \times (a \times (x \times e)) = (x \times e) \times (e \times (a \times x)) = (x \times e) \times (e \times e)$$

$$= e \times ((x \times e) \times e) = e \times (e \times (x \times e)) = e \times (e \times (e \times x)) = e \times (x \times e) = e \times (e \times x) = x \times (e \times e) = x \times e = a^{-1}.$$

Thus, *a*−<sup>1</sup> is an inverse of *a*. Moreover,

$$a \times a^{-1} = a \times (x \times e) = e \times (a \times x) = e \times e = e.$$

Further,

$$a^{-1} \times a = (x \times e) \times a = (x \times e) \times (e \times a) = a \times ((x \times e) \times e)$$

$$= e \times (a \times (x \times e)) = e \times (e \times (a \times x)) = e \times (e \times e) = e.$$

It now follows easily that

$$a \times a^{-1} = a^{-1} \times a = e. \;\square$$


**Theorem 12.** *Let (S,* ×*) be a CA-groupoid. Then, the following statements are equivalent:*

(1) *S is regular;*

(2) *Every element of S lies in a subgroup of S; and*

(3) *Every* H*-class in S is a group.*

**Proof.** (1)⇒ (2). Assume that *S* is a regular CA-groupoid. By Theorem 5, we know that *S* is a CA-NET-groupoid. By Theorem 2, we know that, in a CA-NET-groupoid *S*, every element of *S* lies in a subgroup of *S.* Thus, if *S* is a regular CA-groupoid, then every element of *S* lies in a subgroup of *S*.

(2)⇒(3). Assume that every element of *S* lies in a subgroup of *S*. Let *a* ∈ *S*; then, *a* ∈ *G* for some subgroup *G* of *S*. Denote the identity element of *G* by *e* and the inverse of *a* within *G* by *a*<sup>−1</sup>. Then, from

$$a \times e = e \times a = a \text{ and } a \times a^{-1} = a^{-1} \times a = e,$$

it follows that *a*H*e*, and hence *Ha* = *He*; therefore, every H-class in *S* is a group.

(3)⇒(1). Assume that every H-class in *S* is a group. For each *a* in *S*, *a* ∈ *Ha*. Because *Ha* is a group, the element *a* has a unique inverse *a*<sup>−1</sup> within the group *Ha*. Let *x* = *a*<sup>−1</sup>; then, it is clear that

$$a \times (\mathfrak{x} \times a) = a.$$

Therefore, *S* is a regular CA-groupoid. □

**Example 15.** *Let S* = *{a, b, c, d, e}. Define operation* × *on S as Table 15. Then, (S,* ×*) is a CA-groupoid.*


**Table 15.** The operation × on *S*.

*(S, ×) is a regular CA-groupoid, since a = a × (a × a), b = b × (b × b), c = c × (c × c), d = d × (d × d), and e = e × (e × e). Every element of the CA-groupoid S lies in a subgroup of S, because {a, b}, {c, d} and {e} are subgroups of S, and a, b ∈ {a, b}, c, d ∈ {c, d}, and e ∈ {e}. Every* H*-class in S is a group: the* H*-classes in S are H1 = {a, b}, H2 = {c, d} and H3 = {e}; since a × b = b, b × b = a, c × d = d, d × d = c and e × e = e, each of H1, H2, H3 is a group.*
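Condition (3) of Theorem 12 can also be tested by machine: partition a finite table into its H-classes and check that each class is a group. Since Table 15 is not reproduced in this text, the sketch below uses a hypothetical stand-in, the commutative monoid (Z₆, ·), which is a regular CA-groupoid with four H-classes:

```python
# Hypothetical stand-in: the commutative monoid (Z_6, ·) is a regular
# CA-groupoid. We partition it into H-classes and verify that each class
# is a group, illustrating Theorem 12 (3). Associativity inside each
# class is inherited from (Z_6, ·), so only closure, identity and
# inverses need checking.
n = 6
t = [[(i * j) % n for j in range(n)] for i in range(n)]
S = range(n)

def l_ideal(a):  # principal left ideal S^1 * a
    return frozenset([a] + [t[s][a] for s in S])

def r_ideal(a):  # principal right ideal a * S^1
    return frozenset([a] + [t[a][s] for s in S])

def h_class(a):  # H = L intersect R
    return frozenset(b for b in S
                     if l_ideal(a) == l_ideal(b) and r_ideal(a) == r_ideal(b))

def is_group(H):
    H = sorted(H)
    if any(t[x][y] not in H for x in H for y in H):
        return False  # not closed under the operation
    ids = [e for e in H if all(t[e][x] == x == t[x][e] for x in H)]
    if not ids:
        return False  # no identity element in the class
    e = ids[0]
    # every element must have an inverse within the class
    return all(any(t[x][y] == e == t[y][x] for y in H) for x in H)

classes = {h_class(a) for a in S}   # {0}, {3}, {2, 4}, {1, 5}
assert all(is_group(H) for H in classes)
```

Here {2, 4} is a two-element group with identity 4, and {1, 5} is a two-element group with identity 1, matching the pattern of Example 15.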

#### **6. Relationships between Some Cyclic Associative Groupoids (CA-Groupoids)**

**Definition 14.** *A CA-groupoid (S,* ×*) is called an inverse CA-groupoid if there exists a unary operation a* ↦ *a*<sup>−1</sup> *on S with the properties*

$$(a^{-1})^{-1} = a, \quad a \times (a^{-1} \times a) = a,$$

*and for any x, y* ∈ *S,*

$$(x \times x^{-1}) \times (y \times y^{-1}) = (y \times y^{-1}) \times (x \times x^{-1}).$$

**Theorem 13.** *Let (S,* ×*) be a CA-groupoid. Then, S is an inverse CA-groupoid if and only if it is a regular CA-groupoid and its idempotents are commutative.*

**Proof.** Let *S* be an inverse CA-groupoid. By Definition 14, *S* is regular, so it remains to show that its idempotents are commutative; this follows if we show that every idempotent in *S* can be expressed in the form *x* × *x*<sup>−1</sup>. Let *e* be an idempotent in *S*. Then, the inverse CA-groupoid property ensures that there is an element *e*<sup>−1</sup> in *S* such that *e* × (*e*<sup>−1</sup> × *e*) = *e* and (*e*<sup>−1</sup>)<sup>−1</sup> = *e*. Hence,

$$e^{-1} = e^{-1} \times ((e^{-1})^{-1} \times e^{-1}) = e^{-1} \times (e \times e^{-1}) = e^{-1} \times ((e \times e) \times e^{-1}) = e^{-1} \times (e^{-1} \times (e \times e))$$

$$= e^{-1} \times (e \times (e^{-1} \times e)) = e^{-1} \times e = e^{-1} \times (e \times e) = e \times (e^{-1} \times e) = e,$$

and thus *e* = *e*<sup>2</sup> = *e* × *e* = *e* × *e*<sup>−1</sup>.

According to the definition of an inverse CA-groupoid, idempotents commute: if *x*, *y* are idempotent, then *x* × *y* = (*x* × *x*<sup>−1</sup>) × (*y* × *y*<sup>−1</sup>) = (*y* × *y*<sup>−1</sup>) × (*x* × *x*<sup>−1</sup>) = *y* × *x*.

Therefore, *S* is a regular CA-groupoid and its idempotents are commutative.

Now, we assume that *S* is a regular CA-groupoid and its idempotents are commutative. Then, by regularity, for any *x* ∈ *S*, there exist *neut*(*x*) ∈ *S* and *anti*(*x*) ∈ *S*. By Theorem 1, letting *anti*(*x*) × *neut*(*x*) = *x*<sup>−1</sup>, we have

$$(x^{-1})^{-1} = x, \quad x \times (x^{-1} \times x) = x,$$

$$(x \times x^{-1}) \times (y \times y^{-1}) = \operatorname{neut}(x) \times \operatorname{neut}(y) = \operatorname{neut}(y) \times \operatorname{neut}(x) = (y \times y^{-1}) \times (x \times x^{-1}).$$

Therefore, *S* is an inverse CA-groupoid. □
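Both sides of Theorem 13 can be checked on a finite table. A small sketch on the hypothetical stand-in (Z₆, ·) (not a table from the paper): there, the identity map already serves as the unary operation of Definition 14, because *a*·(*a*·*a*) = *a*³ = *a* holds for every *a* modulo 6, and commutativity supplies the remaining conditions.

```python
from itertools import product

# Hypothetical stand-in: (Z_6, ·). The identity map a -> a serves as the
# unary operation a -> a^(-1) of Definition 14, since a^3 = a (mod 6).
n = 6
t = [[(i * j) % n for j in range(n)] for i in range(n)]
inv = list(range(n))  # candidate unary operation

def is_inverse_ca(t, inv):
    S = range(len(t))
    if not all(t[x][t[y][z]] == t[z][t[x][y]]
               for x, y, z in product(S, repeat=3)):
        return False  # not cyclic associative
    if not all(inv[inv[a]] == a and t[a][t[inv[a]][a]] == a for a in S):
        return False  # (a^(-1))^(-1) = a or a(a^(-1)a) = a fails
    # (x x^(-1))(y y^(-1)) = (y y^(-1))(x x^(-1))
    return all(t[t[x][inv[x]]][t[y][inv[y]]] == t[t[y][inv[y]]][t[x][inv[x]]]
               for x in S for y in S)

idempotents = [e for e in range(n) if t[e][e] == e]

assert is_inverse_ca(t, inv)
# Theorem 13: equivalently, S is regular and its idempotents commute.
assert all(t[e][f] == t[f][e] for e in idempotents for f in idempotents)
```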

**Corollary 3.** *Let (S,* ×*) be a regular CA-groupoid. Then, S is a commutative CA-groupoid.*

**Proof.** Let *(S,* ×*)* be a regular CA-groupoid. By Theorem 5, *S* is a CA-NET-groupoid. By Corollary 1, *S* is a commutative CA-groupoid. □

**Theorem 14.** *Let (S,* ×*) be a CA-groupoid. Then, the following statements are equivalent:*

(1) *S is a regular CA-groupoid;*

(2) *S is a strongly regular CA-groupoid;*

(3) *S is a CA-NET-groupoid;*

(4) *S is an inverse CA-groupoid; and*

(5) *S is a commutative regular semigroup.*

**Proof.** (1)⇒(2). Assume that *S* is a regular CA-groupoid. By Corollary 3, we know that *S* is a commutative CA-groupoid. Then, for any *a* ∈ *S*, there exists *x* ∈ *S,* such that *a* = *a* × (*x* × *a*) and *a* = (*a* × *x*) × *a.* According to the definition of strongly regular CA-groupoid (Definition 5), *S* is a strongly regular CA-groupoid.

(2)⇒(3). Assume that *S* is a strongly regular CA-groupoid. By Definitions 4 and 5, *S* is a regular CA-groupoid. By Theorem 5, *S* is a CA-NET-groupoid.

(3)⇒(4). Let (*S*, ×) be a CA-NET-groupoid. According to Theorem 4, the idempotent of *S* is commutative. By Theorem 5, *S* is a regular CA-groupoid. By Theorem 13, *S* is an inverse CA-groupoid.

(4)⇒(5). Let (*S*, ×) be an inverse CA-groupoid. By Theorem 13, *S* is a regular CA-groupoid and its idempotents are commutative. Then, we only need to prove that a regular CA-groupoid is a commutative regular semigroup. By Corollary 3, *S* is a commutative CA-groupoid. For any *a, b, c* ∈ *S*, we have

$$a \times (b \times c) = c \times (a \times b) = (a \times b) \times c$$

and there exists *x* ∈ *S*, such that *a* = *a* × (*x* × *a*) = *a* × (*a* × *x*) = (*a* × *x*) × *a* = *a* × *x* × *a*.

Therefore, *S* is a commutative regular semigroup.

(5)⇒(1). Assume that (*S*, ×) is a commutative regular semigroup. For any *a, b, c* ∈ *S*, we have

$$a \times (b \times c) = (a \times b) \times c = c \times (a \times b)$$

and there exists *x* ∈ *S*, such that *a* = *a* × *x* × *a* = *a* × (*x* × *a*).

Therefore, *S* is a regular CA-groupoid. □
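Theorem 14 predicts that the five properties hold (or fail) together on any CA-groupoid table. A brute-force sketch, again on the hypothetical stand-in (Z₆, ·) rather than Table 16, which is not reproduced in this text:

```python
from itertools import product

# Hypothetical stand-in table: (Z_6, ·). Per Theorem 14, the properties
# below should hold simultaneously; here they all hold.
n = 6
t = [[(i * j) % n for j in range(n)] for i in range(n)]
S = range(n)

is_ca = all(t[x][t[y][z]] == t[z][t[x][y]]
            for x, y, z in product(S, repeat=3))
is_assoc = all(t[t[x][y]][z] == t[x][t[y][z]]
               for x, y, z in product(S, repeat=3))
is_comm = all(t[x][y] == t[y][x] for x, y in product(S, repeat=2))
is_regular = all(any(t[a][t[x][a]] == a for x in S) for a in S)
# strongly regular: a single witness x with a = a(xa) and a = (ax)a
is_strongly_regular = all(
    any(t[a][t[x][a]] == a and t[t[a][x]][a] == a for x in S) for a in S)

assert is_ca and is_regular                 # (1) regular CA-groupoid
assert is_strongly_regular                  # (2) strongly regular
assert is_assoc and is_comm and is_regular  # (5) commutative regular semigroup
```

Conditions (3) and (4) would additionally require computing *neut*/*anti* pairs and an inverse operation, as sketched for Theorems 12 and 13 above; the three checks shown are the ones expressible directly from the Cayley table.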

**Example 16.** *Let S* = *{1, 2, 3, 4}. The operation* × *on S is defined as Table 16. Then, (S,* ×*) is a regular CA-groupoid, since 1* = *1* × *(1* × *1), 2* = *2* × *(4* × *2), 3* = *3* × *(3* × *3), and 4* = *4* × *(4* × *4). (S,* ×*) is also a strongly regular CA-groupoid because 1* = *1* × *(1* × *1), 1* = *(1* × *1)* × *1; 2* = *2* × *(4* × *2), 2* = *(2* × *4)* × *2; 3* = *3* × *(3* × *3), 3* = *(3* × *3)* × *3; 4* = *4* × *(4* × *4), and 4* = *(4* × *4)* × *4. We can verify that (S,* ×*) is a CA-NET-groupoid, and neut(1)* = *1, anti(1)* = *1; neut(2)* = *2, anti(2)* = *{1, 2, 3, 4}; neut(3)* = *3, anti(3)* = *3; and neut(4)* = *4, anti(4)* = *{1, 3, 4}. (S,* ×*) is an inverse CA-groupoid, since 1* × *2* = *2* × *1, 1* × *3* = *3* × *1, 1* × *4* = *4* × *1, 2* × *3* = *3* × *2, 2* × *4* = *4* × *2, and 3* × *4* = *4* × *3. (S,* ×*) is also a commutative regular semigroup because 1* = *1* × *1* × *1, 2* = *2* × *2* × *2, 3* = *3* × *3* × *3, and 4* = *4* × *4* × *4.*

**Table 16.** The operation × on *S.*


**Corollary 4.** *Let (S,* ×*) be a strongly regular CA-groupoid. Then, S is a strongly regular semigroup*.

**Proof.** Let *(S,* ×*)* be a strongly regular CA-groupoid. By the equivalence of (2) and (5) in Theorem 14, *S* is a strongly regular semigroup. □

#### **7. Conclusions**

Starting from various backgrounds (for example, non-associative rings satisfying *x*(*yz*) = *y*(*zx*), cyclic associative Abel-Grassmann groupoids, regular semigroups, and regular AG-groupoids), this paper introduces the concept of a regular cyclic associative groupoid (CA-groupoid) for the first time. Furthermore, we study the relationships between regular CA-groupoids and other relevant algebraic structures. The research shows that regular CA-groupoids, as a kind of non-associative algebraic structure, are representative and rich in content, and are closely related to many kinds of algebraic structures. This paper establishes several important results, which are listed as follows:


These results are important for exploring the structure characterizations of regular CA-groupoids and CA-NET-groupoids.

For future research, we will discuss the integration of the related topics, such as the ideals in CA-groupoids and the relationships among some algebraic structures (see [26–28]).

**Author Contributions:** W.Y. and X.Z. initiated the research and wrote the paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Natural Science Foundation of China (Grant No. 61976130).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

