*Article* **On the Generalized Cross-Law of Importation in Fuzzy Logic**

### **Yifan Zhao and Kai Li \***

School of Cyber Security and Computer, Hebei University, Baoding 071002, China; wmk6905@sina.com **\*** Correspondence: likai@hbu.cn

Received: 28 August 2020; Accepted: 27 September 2020; Published: 1 October 2020

**Abstract:** Recently, Baczyński et al. introduced two Pexider-type generalizations of the law of importation in fuzzy logic, namely *I*(*C*(*x*, α), *y*) = *I*(*x*, *J*(α, *y*)) (GLI) and *I*(*C*(*x*, α), *y*) = *J*(*x*, *I*(α, *y*)) (CLI), where *C* is a fuzzy conjunction and *I*, *J* are fuzzy implications. However, (CLI) has not been adequately investigated so far. In this paper, we first show that (CLI) can be derived from the α-migrativity of an *R*-implication obtained from an α-migrative t-norm. Secondly, the relationships between the satisfaction of the law of importation (LI) by the pairs (*C*, *I*) or (*C*, *J*) and the satisfaction of (CLI) by the triple (*C*, *I*, *J*) are studied. Moreover, some necessary conditions for (CLI) are given. Finally, we study (CLI) from three different perspectives.

**Keywords:** fuzzy logic connectives; fuzzy implication; law of importation; α-migrativity; t-norm

#### **1. Introduction**

### *1.1. On the Laws of Importation in Fuzzy Logic*

Fuzzy logic connectives attract a good deal of research attention because of their interesting properties and wide range of applications, not only in approximate reasoning and fuzzy control, but also in many other research areas where they have proved valuable, such as composition of fuzzy relations [1,2], fuzzy relational equations [3,4], fuzzy mathematical morphology [5], fuzzy neural networks [6], fuzzy rough sets [7–9] and data mining [10]. This fact has led more and more researchers to study fuzzy logic connectives systematically, to analyze additional properties of fuzzy implications, and to solve functional equations involving this kind of operator (see the recent surveys [11–13]).

In the framework of fuzzy logic, the law of importation for fuzzy implications plays an important role in fuzzy relational inference mechanisms (FRIM), since one can generate an equivalent multi-layered scheme that markedly improves the computational efficiency of the whole system (see [14]). Furthermore, some applications of the law of importation dealing with Zadeh's compositional rule of inference (CRI) have been studied in [15], and most of these works argue that it is necessary to study this law theoretically before applying it.

In classical two-valued logic, one of the most important tautologies is the following law of importation:

$$(p \land q) \to r \equiv (p \to (q \to r)).\tag{1}$$

The general form of the above equivalence is the well-known law of importation (LI, for short):

$$I(T(x, \alpha), y) = I(x, I(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{2}$$

where *T* is a t-norm and *I* is a fuzzy implication. Moreover, Mas et al. [16] extended the above equation to the following form:

$$I(U_c(x, \alpha), y) = I(x, I(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{3}$$

where *U*c is a conjunctive uninorm and *I* is a fuzzy implication derived from uninorms. Some results about this property are already known. Specifically, for the *A*-implications defined by Türkşen et al. [17], Equation (2) with *T* as the product t-norm *T*P(*x*, *y*) = *xy* was taken as one of the axioms. Later, Mas et al. [18] studied the law of importation for (*S*, *N*)-, *R*-, *QL*- and *D*-implications derived from smooth discrete t-norms and t-conorms. In [19], Massanet and Torrens introduced a weaker version of Equation (2), called the weak law of importation (WLI, for short):

$$I(F(x, \alpha), y) = I(x, I(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{4}$$

where *F* is a commutative, conjunctive, and non-decreasing function. Moreover, they have made new characterizations of (*S*, *N*)-implications, *R*-implications, and their counterparts for uninorms based on Equation (4). Therefore, it seems interesting and important to study various laws of importation in fuzzy logic.
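As a quick sanity check on (2) (an illustration added here, not taken from the paper), the law of importation can be verified numerically for the product t-norm *T*P and the Goguen implication, a standard pair known to satisfy (LI) since the Goguen implication is the residual of *T*P; the grid resolution is an illustrative choice.

```python
# Grid check of the law of importation (2) for T_P(x, y) = x*y and the
# Goguen implication I_GG(x, y) = 1 if x <= y, else y/x.

def t_p(x, y):
    return x * y

def i_gg(x, y):
    return 1.0 if x <= y else y / x

grid = [i / 10 for i in range(11)]
for a in grid:
    for x in grid:
        for y in grid:
            lhs = i_gg(t_p(x, a), y)   # I(T(x, alpha), y)
            rhs = i_gg(x, i_gg(a, y))  # I(x, I(alpha, y))
            assert abs(lhs - rhs) < 1e-9, (a, x, y)
```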

#### *1.2. Motivation of This Work*

Recently, Baczyński et al. [20] have generalized Equation (2), via the α-migrativity of fuzzy implications, to the following functional equations, called generalized laws of importation:

$$I(C(x, \alpha), y) = I(x, J(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{5}$$

$$I(C(x, \alpha), y) = J(x, I(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{6}$$

where *I* and *J* are fuzzy implications and *C* is a fuzzy conjunction. Note that when *C* = *T* and *J* = *I*, then both Equations (5) and (6) reduce to Equation (2). In other words, Equation (2) is a special case of Equations (5) and (6).

However, the generalized cross-law of importation (6) has not been investigated yet. A meaningful way of establishing the connection between (6) and the law of α-migrativity is desired. Moreover, there are two questions that we want to study:

Are there different implication functions *I* and *J* such that (6) holds for some fuzzy conjunction *C*? If the answer is yes, then the next question arises: Are there triples (*C*, *I*, *J*) that satisfy both (5) and (6)? Both of those questions are answered in this paper.

In this work, we study the recently proposed generalized cross-law of importation in fuzzy logic. Furthermore, we investigate its relationship with α-migrativity and some conditions under which the studied functional equations are satisfied. On the one hand, we hope that our work offers a better understanding of the connection between the law of importation and the law of α-migrativity. On the other hand, we believe that this connection will be useful for results and applications wherein (LI) plays a key role (see [15,21,22] for details).

#### *1.3. Novelties of This Work*

The novelties of this work are threefold:


The structure of the paper is organized as follows: In Section 2, we recall some basic definitions and provide several examples that are useful in further considerations. In Section 3, we give some necessary conditions for (6) and study the generalized cross-law of importation from three different perspectives. In Section 4, we discuss the different results given in the previous section and present some issues worth further investigation. Finally, Section 5 covers some conclusions.

#### **2. Preliminaries**

In order to help the reader get familiar with the theory, we recall here some basic definitions and facts which are necessary for the development of this article. More details about t-norms, fuzzy negations, fuzzy implications, and fuzzy conjunctions can be found in [23–25].

**Definition 1** ([23])**.** *A binary function T* : [0, 1]<sup>2</sup> → [0, 1] *is called a t-norm if it satisfies, for all x*, *y*, *z* ∈ [0, 1], *the following conditions:*

$$T(\mathbf{x}, \mathbf{y}) = T(\mathbf{y}, \mathbf{x}),\tag{7}$$

$$T(T(\mathbf{x}, y), z) = T(\mathbf{x}, T(y, z)),\tag{8}$$

$$T(\mathbf{x}, \ y) \le T(\mathbf{x}, z) \text{ for } y \le z,\tag{9}$$

$$T(\mathbf{x}, 1) = \mathbf{x}.\tag{10}$$

A t-norm *T* is called α-migrative (see [24]) if it satisfies the condition (11) for a fixed α ∈ (0, 1) and for all *x*, *y* ∈ [0, 1]

$$T(\alpha \mathbf{x}, y) = T(\mathbf{x}, \alpha y). \tag{11}$$

Note that if *T* is α-migrative for all α ∈ (0, 1), then *T* is said to be migrative. However, not all t-norms are migrative, for instance, the Gödel t-norm *T*G(*x*, *y*) = min(*x*, *y*).
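The migrativity condition (11) can be checked numerically. The following sketch (an illustration, not part of the paper) confirms on a grid that the product t-norm is α-migrative for several values of α, while the Gödel t-norm is not.

```python
# Grid check of alpha-migrativity (11): T(alpha*x, y) = T(x, alpha*y).
# The product t-norm passes for every alpha; min(x, y) fails.

def is_alpha_migrative(t, alpha, grid):
    return all(abs(t(alpha * x, y) - t(x, alpha * y)) < 1e-12
               for x in grid for y in grid)

grid = [i / 10 for i in range(11)]
t_p = lambda x, y: x * y       # product t-norm
t_g = lambda x, y: min(x, y)   # Gödel (minimum) t-norm

assert all(is_alpha_migrative(t_p, a, grid) for a in (0.25, 0.5, 0.75))
assert not is_alpha_migrative(t_g, 0.5, grid)
```

For the Gödel t-norm, α = 0.5, *x* = 1, *y* = 0.3 already gives min(0.5, 0.3) = 0.3 on the left but min(1, 0.15) = 0.15 on the right.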

**Definition 2** ([25])**.** *A decreasing function N* : [0, 1] → [0, 1] *is called a fuzzy negation, if N*(0) = 1 *and N*(1) = 0. *Furthermore, a fuzzy negation N is called*


**Definition 3** ([25])**.** *A binary function I* : [0, 1]<sup>2</sup> → [0, 1] *is called a fuzzy implication if it satisfies the following conditions:*

$$I(\mathbf{x}, z) \ge I(y, z) \text{ when } \mathbf{x} \le y \text{, for all } z \in [0, 1], \tag{12}$$

$$I(x, y) \le I(x, z) \text{ when } y \le z \text{, for all } x \in [0, 1], \tag{13}$$

$$I(0,0) = I(1,1) = 1 \text{ and } I(1,0) = 0. \tag{14}$$

**Remark 1.** *Note that from Definition 3, we can deduce that I*(*x*, 1) = 1 *and I*(0, *y*) = 1 *for all x*, *y* ∈ [0, 1], *whereas the symmetric values I*(1, *x*) *and I*(*y*, 0) *are not derived from the above definition. Moreover, the family of all fuzzy implications will be denoted by* F I.

A fuzzy implication *I* is called α-migrative (see [20]), if it satisfies the condition (15) for a fixed α ∈ (0, 1) and for all *x*, *y* ∈ [0, 1]

$$I(\alpha x, y) = I(x, 1 - \alpha + \alpha y). \tag{15}$$

Note that if *I* is α-migrative for all α ∈ (0, 1), then *I* is said to be migrative. Every fuzzy implication is α-migrative when α = 0 or α = 1.
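Condition (15) can also be tested numerically. As an illustrative example not taken from this excerpt, the Reichenbach implication *I*RC(*x*, *y*) = 1 − *x* + *xy* satisfies (15) for every α, since both sides expand to 1 − α*x*(1 − *y*); a minimal sketch:

```python
# Grid check that the Reichenbach implication I_RC(x, y) = 1 - x + x*y
# satisfies the migrativity condition (15): I(a*x, y) = I(x, 1 - a + a*y).

def i_rc(x, y):
    return 1 - x + x * y

grid = [i / 20 for i in range(21)]
for a in grid:
    for x in grid:
        for y in grid:
            assert abs(i_rc(a * x, y) - i_rc(x, 1 - a + a * y)) < 1e-9
```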

There are many other properties usually required for fuzzy implications (see [25]). We present here several properties that are used in this paper.

**Definition 4** ([25])**.** *A fuzzy implication I is said to satisfy*

(i) The boundary property if:

$$I(\mathbf{x},0) = N(\mathbf{x}), \quad \mathbf{x} \in [0,1], \tag{16}$$

where *N* is a (continuous, strict, strong) fuzzy negation.

(ii) The exchange principle if

$$I(x, I(y, z)) = I(y, I(x, z)), \quad x, y, z \in [0, 1]. \tag{17}$$

**Example 1** ([25])**.** *Examples of fuzzy implications that are used in this paper are:*

1. The least *I*Lt and the greatest *I*Gt fuzzy implications:

$$I_{\mathrm{Lt}}(x, y) = \begin{cases} 1, & \text{if } x = 0 \text{ or } y = 1, \\ 0, & \text{if } x > 0 \text{ and } y < 1, \end{cases} \quad I_{\mathrm{Gt}}(x, y) = \begin{cases} 1, & \text{if } x < 1 \text{ or } y > 0, \\ 0, & \text{if } x = 1 \text{ and } y = 0. \end{cases}$$

2. *R*-implications derived from a left-continuous t-norm *T*:

$$I_T(x, y) = \sup\{z \in [0, 1] \mid T(x, z) \le y\}, \quad x, y \in [0, 1].$$

3. (*S*, *N*)-implications derived from a t-conorm *S* and a fuzzy negation *N*:

$$I_{S,N}(x, y) = S(N(x), y), \quad x, y \in [0, 1].$$
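The *R*-implication construction in Example 1-2 can be approximated by brute force, discretizing the supremum. The sketch below (an illustration, not part of the paper) uses the product t-norm, whose residual is the Goguen implication *I*GG(*x*, *y*) = 1 if *x* ≤ *y* and *y*/*x* otherwise; the grid size is an arbitrary choice.

```python
# Brute-force approximation of I_T(x, y) = sup{ z : T(x, z) <= y } for the
# product t-norm, compared against its known closed form (Goguen).

def r_implication(t, x, y, steps=2000):
    # discretized supremum over z in [0, 1]
    return max(z / steps for z in range(steps + 1)
               if t(x, z / steps) <= y)

t_p = lambda x, y: x * y
i_gg = lambda x, y: 1.0 if x <= y else y / x

for x in (0.2, 0.5, 0.9):
    for y in (0.1, 0.4, 0.8):
        assert abs(r_implication(t_p, x, y) - i_gg(x, y)) < 1e-3
```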

**Theorem 1** ([19])**.** *Let I* ∈FI. *If I satisfies (16) with a continuous fuzzy negation N, then*

*I* satisfies (2) ⇔ *I* satisfies (4).

**Theorem 2** ([26])**.** *For a function I* : [0, 1] <sup>2</sup> <sup>→</sup> [0, 1] *the following statements are equivalent.*


Moreover, the representation *I*(*x*, *y*) = *S*(*N*(*x*), *y*) is unique in this case.

**Definition 5** ([25])**.** *Let I* ∈FI. *The function NI defined by NI*(*x*) = *I*(*x*, 0) *for all x* ∈ [0, 1], *is called the natural negation of I.*

**Remark 2.** *According to the above definition, from any fuzzy implication one can define a fuzzy negation; however, the requirement of Equation (16) that this natural negation be a (continuous, strict, strong) fuzzy negation only holds for some fuzzy implications.*

**Definition 6** ([20])**.** *A binary function C* : [0, 1]<sup>2</sup> → [0, 1] *is called a fuzzy conjunction if it satisfies the following conditions:*

$$C \text{ is increasing in each variable,} \tag{18}$$

$$\mathcal{C}(\mathbf{x},0) = \mathcal{C}(0,\mathbf{x}) = 0 \text{ for all } \mathbf{x} \in [0,1], \tag{19}$$

$$C(1,1) = 1.\tag{20}$$

**Remark 3.** *A fuzzy conjunction C which satisfies (7), (8) and (10) is a t-norm. Every t-norm is a fuzzy conjunction, but the converse is not true. The left- and right-neutral elements of C will be denoted by el and er*, *respectively. The family of all fuzzy conjunctions will be denoted by* C*.*

**Example 2** ([20])**.** *Here are some fuzzy conjunctions which will be used in this paper:*

$$C_x^0(x, y) = \begin{cases} 0, & \text{if } y = 0, \\ x, & \text{otherwise}, \end{cases} \quad C_y^0(x, y) = \begin{cases} 0, & \text{if } x = 0, \\ y, & \text{otherwise}, \end{cases}$$

$$C_m^y(x, y) = \begin{cases} y, & \text{if } \min\{x, y\} > 0.5, \\ \min\{x, y\}, & \text{otherwise}, \end{cases} \quad C_P(x, y) = \begin{cases} 1, & \text{if } x = 1 \text{ and } y = 1, \\ \frac{xy}{2}, & \text{otherwise}. \end{cases}$$

#### **3. The Main Results**

*3.1. Solutions of Equation (6)—Some Necessary Conditions*

**Remark 4.** *It can be shown that the generalized cross-law of importation (6) can be derived from the* α*-migrativity of an R-implication obtained from an* α*-migrative t-norm. We consider the following cases:*

*(1) If* α ∈ (0, 1], *then by the* α*-migrativity of T we have*

$$I_T(\alpha x, y) = \sup\{z \in [0, 1] \mid T(\alpha x, z) \le y\} = \sup\{z \in [0, 1] \mid T(x, \alpha z) \le y\}, \quad x, y \in [0, 1]. \tag{21}$$

*Note that as z varies over* [0, 1], α*z varies over* [0, α], *and substituting* β = α*z into (21), we obtain*

$$\begin{aligned} I_T(\alpha x, y) &= \sup\{\tfrac{\beta}{\alpha} \in [0, 1] \mid T(x, \beta) \le y\} \\ &= \tfrac{1}{\alpha} \sup\{\beta \in [0, \alpha] \mid T(x, \beta) \le y\} \\ &= \min\{\tfrac{1}{\alpha} \sup\{z \in [0, 1] \mid T(x, z) \le y\}, 1\} \\ &= \min\{\tfrac{I_T(x, y)}{\alpha}, 1\}. \end{aligned} \tag{22}$$

*Note that (22) can be expressed as IT*(α*x*, *y*) = *I*GG(α, *IT*(*x*, *y*)), *where I*GG *is the Goguen implication.*

(2) *If* α = 0, *then we have* LHS of (22) = *I*T(0, *y*) = 1 = *I*GG(0, *I*T(*x*, *y*)) = RHS of (22).

Finally, substituting α = *x* (and *x* = α) into (22), we obtain *I*T(*x*α, *y*) = *I*GG(*x*, *I*T(α, *y*)), which is a special case of (6) with the triple (*C* = *T*P, *I* = *I*T, *J* = *I*GG).
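Identity (22) can be verified numerically for the product t-norm (a migrative t-norm whose residual *I*T is the Goguen implication). The sketch below (an illustration, not part of the paper) computes *I*T by brute-force residuation; grid size and sample points are arbitrary choices, and the tolerance accounts for discretization error.

```python
# Check of identity (22): I_T(alpha*x, y) = min{ I_T(x, y)/alpha, 1 }
# for the product t-norm, with I_T approximated by a discretized sup.

def t_p(x, y):
    return x * y

def i_t(x, y, steps=4000):
    # discretized residual: sup{ z in [0, 1] : T_P(x, z) <= y }
    return max(z / steps for z in range(steps + 1) if t_p(x, z / steps) <= y)

for a in (0.2, 0.5, 0.9):
    for x in (0.3, 0.7):
        for y in (0.1, 0.6):
            lhs = i_t(a * x, y)
            rhs = min(i_t(x, y) / a, 1.0)
            assert abs(lhs - rhs) < 2e-3, (a, x, y)
```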

**Remark 5.** *Let C* ∈ C *and I*, *J* ∈ FI. *Of course, there are many other triples* (*C*, *I*, *J*) *that satisfy (6). For example, consider the least and the greatest fuzzy implications (see Example 1-1).*

(i) If *C* satisfies *C*(*x*, *y*) = 0 ⇔ *x* = 0 or *y* = 0, then the triple (*C*, *I*Lt, *J* = *I*Lt) satisfies (6). To see this, note that we have the following equivalences:

$$\begin{cases} \text{LHS of (6)} = 0 \Leftrightarrow I_{\mathrm{Lt}}(C(x, \alpha), y) = 0 \Leftrightarrow C(x, \alpha) > 0 \text{ and } y < 1 \Leftrightarrow \alpha, x > 0 \text{ and } y < 1, \\ \text{RHS of (6)} = 0 \Leftrightarrow I_{\mathrm{Lt}}(x, I_{\mathrm{Lt}}(\alpha, y)) = 0 \Leftrightarrow x > 0 \text{ and } I_{\mathrm{Lt}}(\alpha, y) < 1 \Leftrightarrow \alpha, x > 0 \text{ and } y < 1. \end{cases}$$

$$\begin{cases} \text{LHS of (6)} = 1 \Leftrightarrow I_{\mathrm{Lt}}(C(x, \alpha), y) = 1 \Leftrightarrow C(x, \alpha) = 0 \text{ or } y = 1 \Leftrightarrow \alpha = 0 \text{ or } x = 0 \text{ or } y = 1, \\ \text{RHS of (6)} = 1 \Leftrightarrow I_{\mathrm{Lt}}(x, I_{\mathrm{Lt}}(\alpha, y)) = 1 \Leftrightarrow x = 0 \text{ or } I_{\mathrm{Lt}}(\alpha, y) = 1 \Leftrightarrow \alpha = 0 \text{ or } x = 0 \text{ or } y = 1. \end{cases}$$

(ii) Similarly, if *C* satisfies *C*(*x*, *y*) = 1 ⇔ *x* = *y* = 1, then the triple (*C*, *I*Gt, *J* = *I*Gt) satisfies (6).

$$\begin{cases} \text{LHS of (6)} = 0 \Leftrightarrow I_{\mathrm{Gt}}(C(x, \alpha), y) = 0 \Leftrightarrow C(x, \alpha) = 1 \text{ and } y = 0 \Leftrightarrow \alpha = x = 1 \text{ and } y = 0, \\ \text{RHS of (6)} = 0 \Leftrightarrow I_{\mathrm{Gt}}(x, I_{\mathrm{Gt}}(\alpha, y)) = 0 \Leftrightarrow x = 1 \text{ and } I_{\mathrm{Gt}}(\alpha, y) = 0 \Leftrightarrow \alpha = x = 1 \text{ and } y = 0. \end{cases}$$

$$\begin{cases} \text{LHS of (6)} = 1 \Leftrightarrow I_{\mathrm{Gt}}(C(x, \alpha), y) = 1 \Leftrightarrow C(x, \alpha) < 1 \text{ or } y > 0 \Leftrightarrow \alpha < 1 \text{ or } x < 1 \text{ or } y > 0, \\ \text{RHS of (6)} = 1 \Leftrightarrow I_{\mathrm{Gt}}(x, I_{\mathrm{Gt}}(\alpha, y)) = 1 \Leftrightarrow x < 1 \text{ or } I_{\mathrm{Gt}}(\alpha, y) > 0 \Leftrightarrow \alpha < 1 \text{ or } x < 1 \text{ or } y > 0. \end{cases}$$

**Example 3.** *Consider the fuzzy conjunction C*<sup>0</sup><sub>*y*</sub> *and the fuzzy implication J*1 *given by*

$$C_y^0(x, y) = \begin{cases} 0, & \text{if } x = 0, \\ y, & \text{otherwise}, \end{cases} \quad J_1(x, y) = \begin{cases} 1, & \text{if } x = 0, \\ y, & \text{otherwise}. \end{cases}$$

*It is easy to verify that the triple* (*C*<sup>0</sup><sub>*y*</sub>, *I*Lt, *J*1) *satisfies (6). On the other hand, the pair* (*C*<sup>0</sup><sub>*y*</sub>, *I*Lt) *satisfies (2), as can be seen below:*

$$\text{LHS of (2)} = I_{\mathrm{Lt}}(C_y^0(x, \alpha), y) = I_{\mathrm{Lt}}\left(\begin{cases} 0, & x = 0, \\ \alpha, & x \neq 0, \end{cases}\; y\right) = \begin{cases} 0, & \alpha, x > 0 \text{ and } y < 1, \\ 1, & \text{otherwise}, \end{cases}$$

$$\text{RHS of (2)} = I_{\mathrm{Lt}}(x, I_{\mathrm{Lt}}(\alpha, y)) = I_{\mathrm{Lt}}\left(x, \begin{cases} 0, & \alpha > 0 \text{ and } y < 1, \\ 1, & \text{otherwise} \end{cases}\right) = \begin{cases} 0, & \alpha, x > 0 \text{ and } y < 1, \\ 1, & \text{otherwise}, \end{cases} = \text{LHS of (2)}.$$

Interestingly, the pair (*C*<sup>0</sup><sub>*y*</sub>, *J*1) also satisfies Equation (2), as shown below:

$$\text{LHS of (2)} = J_1(C_y^0(x, \alpha), y) = J_1\left(\begin{cases} 0, & x = 0, \\ \alpha, & x > 0, \end{cases}\; y\right) = \begin{cases} 1, & \alpha = 0 \text{ or } x = 0, \\ y, & \alpha, x \in (0, 1], \end{cases}$$

$$\text{RHS of (2)} = J_1(x, J_1(\alpha, y)) = J_1\left(x, \begin{cases} 1, & \alpha = 0, \\ y, & \alpha > 0 \end{cases}\right) = \begin{cases} 1, & \alpha = 0 \text{ or } x = 0, \\ y, & \alpha, x \in (0, 1], \end{cases} = \text{LHS of (2)}.$$

If we consider the fuzzy conjunction *C*<sup>1</sup><sub>*y*</sub> and its residual *J*2 given by

$$C_y^1(x, y) = \begin{cases} y, & \text{if } x = 1, \\ 0, & \text{otherwise}, \end{cases} \quad J_2(x, y) = \begin{cases} y, & \text{if } x = 1, \\ 1, & \text{otherwise}, \end{cases}$$

then the triple (*C*<sup>1</sup><sub>*y*</sub>, *I*Gt, *J*2) satisfies (6), as shown below:

$$\text{LHS of (6)} = I_{\mathrm{Gt}}(C_y^1(x, \alpha), y) = I_{\mathrm{Gt}}\left(\begin{cases} \alpha, & x = 1, \\ 0, & x \neq 1, \end{cases}\; y\right) = \begin{cases} 0, & \alpha = x = 1 \text{ and } y = 0, \\ 1, & \text{otherwise}, \end{cases}$$

$$\text{RHS of (6)} = J_2(x, I_{\mathrm{Gt}}(\alpha, y)) = J_2\left(x, \begin{cases} 0, & \alpha = 1 \text{ and } y = 0, \\ 1, & \text{otherwise} \end{cases}\right) = \begin{cases} 0, & \alpha = x = 1 \text{ and } y = 0, \\ 1, & \text{otherwise}, \end{cases} = \text{LHS of (6)}.$$

On the other hand, the pairs (*C*<sup>1</sup><sub>*y*</sub>, *I*Gt) and (*C*<sup>1</sup><sub>*y*</sub>, *J*2) also satisfy (2). Thus, a natural question arises: if the triple (*C*, *I*, *J*) satisfies (6), do the corresponding pairs (*C*, *I*) and (*C*, *J*) necessarily satisfy (2)? Unfortunately, the answer is negative. However, there are some sufficient conditions under which the pair (*C*, *I*) satisfies (2). For more details we refer the reader to [19].
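Both triples from Example 3 can be checked mechanically on a grid. The following sketch (an illustration, not part of the paper) verifies the cross-law (6), *I*(*C*(*x*, α), *y*) = *J*(*x*, *I*(α, *y*)), for each triple:

```python
# Grid verification of (6) for the two triples of Example 3:
# (C0y, I_Lt, J1) and (C1y, I_Gt, J2).

def c0y(x, y): return 0.0 if x == 0 else y
def j1(x, y):  return 1.0 if x == 0 else y
def i_lt(x, y): return 1.0 if x == 0 or y == 1 else 0.0

def c1y(x, y): return y if x == 1 else 0.0
def j2(x, y):  return y if x == 1 else 1.0
def i_gt(x, y): return 1.0 if x < 1 or y > 0 else 0.0

grid = [i / 10 for i in range(11)]
for a in grid:
    for x in grid:
        for y in grid:
            assert i_lt(c0y(x, a), y) == j1(x, i_lt(a, y))
            assert i_gt(c1y(x, a), y) == j2(x, i_gt(a, y))
```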

**Remark 6.** *Let C* ∈ C *and I*, *J* ∈ FI. *It can be shown that the satisfaction of (2) by either or both of the pairs* (*C*, *I*) *and* (*C*, *J*) *is neither sufficient nor necessary for the triple* (*C*, *I*, *J*) *to fulfill (6). Consider the following fuzzy implications; the results are presented in Table 1.*

$$I_1(x, y) = \begin{cases} 1, & \text{if } y = 1, \\ 1 - x, & \text{otherwise}, \end{cases} \quad I_2(x, y) = \begin{cases} 1, & \text{if } y = 1, \\ 1 - x^2, & \text{otherwise}, \end{cases}$$

$$I_3(x, y) = \begin{cases} 0, & \text{if } x > 0.5 \text{ and } y < 1, \\ 1, & \text{otherwise}, \end{cases} \quad I_4(x, y) = \begin{cases} 1, & \text{if } x \le 0.5, \\ y, & \text{otherwise}, \end{cases}$$

$$I_5(x, y) = \begin{cases} y, & \text{if } x = 1, \\ 0, & \text{if } x > 0 \text{ and } y = 0, \\ 1, & \text{otherwise}, \end{cases} \quad I_D(x, y) = \begin{cases} 1, & \text{if } x = 0 \text{ or } y = 1, \\ y, & \text{otherwise}. \end{cases}$$


**Table 1.** The relationships between (2) and (6).

Finally, we close this subsection with the following result, which gives some necessary conditions on the triple (*C*, *I*, *J*) to satisfy (6).

**Proposition 1.** *Let the triple* (*C*, *I*, *J*) *satisfy (6), let el*, *er* ∈ (0, 1] *be the left- and right-neutral elements of C, and let fl* ∈ (0, 1] *be the left-neutral element of I. Denote by* Ran(*I*) *the range of I.*

(i) *J*(*el*, *r*) = *r* for all *r* ∈ Ran(*I*). Further, if Ran(*I*) = [0, 1], then *J* has left-neutral element *el*.

(ii) If *fl* is the right-neutral element of *C*, i.e., *fl* = *er*, then *I* = *J*.

(iii) If *I*(*er*, 0) = 0, then *NI* = *NJ* = *N* and *NI*(*x*) = 0 whenever *x* ∈ [*er*, 1].

**Proof.** First, to prove item (i), substituting *x* = *el* into (6), we obtain

$$I(C(e_l, \alpha), y) = J(e_l, I(\alpha, y)), \quad \alpha, y \in [0, 1]. \tag{23}$$

Since *C*(*el*, α) = α for all α ∈ [0, 1], we have *I*(α, *y*) = *J*(*el*, *I*(α, *y*)). Now, letting *I*(α, *y*) = *r* ∈ Ran(*I*), it follows that *J*(*el*, *r*) = *r* for all *r* ∈ Ran(*I*). Furthermore, if Ran(*I*) = [0, 1], then *J*(*el*, *r*) = *r* for all *r* ∈ [0, 1], which implies that *el* is also the left-neutral element of *J*.

For item (ii), substituting α = *er* we obtain from (6),

$$I(x, y) = I(C(x, e_r), y) = J(x, I(e_r, y)), \quad x, y \in [0, 1]. \tag{24}$$

Obviously, if *fl* = *er*, then *J*(*x*, *I*(*er*, *y*)) = *J*(*x*, *I*(*fl*, *y*)) = *J*(*x*, *y*), and thus (24) gives *I*(*x*, *y*) = *J*(*x*, *y*), i.e., *I* = *J*. To prove item (iii), substituting α = *er* and *y* = 0 into (6), after a simple rearrangement we obtain

$$I(x, 0) = J(x, I(e_r, 0)), \quad x \in [0, 1]. \tag{25}$$

If *I*(*er*, 0) = 0, then *I*(*x*, 0) = *J*(*x*, 0) for all *x* ∈ [0, 1], and by Definition 5 we obtain *NI* = *NJ* = *N*. Finally, if *x* ∈ [*er*, 1], then by (12) we deduce that *NI*(*x*) = *I*(*x*, 0) ≤ *I*(*er*, 0) = 0, i.e., *NI*(*x*) = 0. □
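Proposition 1(i) can be illustrated on the triple (*C*<sup>0</sup><sub>*y*</sub>, *I*Lt, *J*1) from Example 3: every *e* ∈ (0, 1] is a left-neutral element of *C*<sup>0</sup><sub>*y*</sub>, and Ran(*I*Lt) = {0, 1}, so *J*1(*e*, *r*) = *r* must hold for *r* ∈ {0, 1}. A minimal sketch (an illustration, not part of the paper):

```python
# Check of Proposition 1(i) on (C0y, I_Lt, J1): J1(e_l, r) = r for every
# r in Ran(I_Lt) = {0, 1} and every left-neutral element e_l of C0y.

def c0y(x, y): return 0.0 if x == 0 else y
def j1(x, y):  return 1.0 if x == 0 else y

grid = [i / 10 for i in range(11)]
for e in (0.3, 0.7, 1.0):
    assert all(c0y(e, y) == y for y in grid)       # e is left-neutral for C0y
    assert all(j1(e, r) == r for r in (0.0, 1.0))  # Proposition 1(i)
```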

#### *3.2. Perspective One: The Pair (C, I) Satisfies (2)*

Let us start with the first perspective when the pair (*C*, *I*) satisfies (2). Specifically, we have the following result.

**Theorem 3.** *Let C* ∈ C *and I*, *J* ∈FI *and consider the following items:*


Then one has the following items:


#### **Proof.**


**Remark 7.** *Note that in Theorem 3.(1), even if I does not have a left-neutral element, the implication* (i) *and* (ii) ⇒ (iii) *can still hold. To see this, consider the pair* (*C*<sup>0</sup><sub>*x*</sub>, *I*6) *(see Example 2), where*

$$I_6(x, y) = \begin{cases} 1, & \text{if } y = 1, \\ 1 - x^6, & \text{otherwise}. \end{cases} \tag{26}$$

It is clear that *I*6 does not have any left-neutral element. On the other hand, the pair (*C*<sup>0</sup><sub>*x*</sub>, *I*6) satisfies Equation (2), as can be seen below:

$$\text{LHS of (2)} = I_6(C_x^0(x, \alpha), y) = I_6\left(\begin{cases} 0, & \alpha = 0, \\ x, & \alpha \neq 0, \end{cases}\; y\right) = \begin{cases} 1, & \alpha = 0 \text{ or } y = 1, \\ 1 - x^6, & \alpha > 0 \text{ and } y < 1, \end{cases}$$

$$\text{RHS of (2)} = I_6(x, I_6(\alpha, y)) = I_6\left(x, \begin{cases} 1, & y = 1, \\ 1 - \alpha^6, & y \neq 1 \end{cases}\right) = \begin{cases} 1, & \alpha = 0 \text{ or } y = 1, \\ 1 - x^6, & \alpha > 0 \text{ and } y < 1. \end{cases}$$

Now, assume that the triple (*C*<sup>0</sup><sub>*x*</sub>, *I*6, *J*) satisfies (6); then *J* = *I*6, as shown below. Let *x*, *y* ∈ [0, 1] and *J*(*x*, *y*) = μ. We divide our argument into two cases:

Case 1. If *y* = 1, then *J*(*x*, 1) = μ = 1.

Case 2. If *y* ∈ [0, 1) and α = 0, then LHS of (6) = *I*6(*C*<sup>0</sup><sub>*x*</sub>(*x*, 0), *y*) = 1 = *J*(*x*, 1) = *J*(*x*, *I*6(0, *y*)) = RHS of (6). If *y* ∈ [0, 1) and α ∈ (0, 1], then

$$\text{LHS of (6)} = I_6(C_x^0(x, \alpha), y) = I_6(x, y) = 1 - x^6,$$

$$\text{RHS of (6)} = J(x, I_6(\alpha, y)) = J(x, 1 - \alpha^6).$$

Combining the above two equations, we have *J*(*x*, 1 − α<sup>6</sup>) = 1 − *x*<sup>6</sup>. As α varies over (0, 1], 1 − α<sup>6</sup> varies over [0, 1). Thus, substituting 1 − α<sup>6</sup> by *y*, we obtain *J*(*x*, *y*) = μ = 1 − *x*<sup>6</sup> when *y* < 1. Therefore, we conclude that *J* = *I*6.
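The satisfaction of (2) by the pair (*C*<sup>0</sup><sub>*x*</sub>, *I*6) can also be confirmed numerically; a small grid sketch (an illustration, not part of the paper):

```python
# Grid check that the pair (C0x, I6) from Remark 7 satisfies the law of
# importation (2), where I6(x, y) = 1 if y = 1 and 1 - x^6 otherwise.

def c0x(x, y): return 0.0 if y == 0 else x
def i6(x, y):  return 1.0 if y == 1 else 1.0 - x ** 6

grid = [i / 10 for i in range(11)]
for a in grid:
    for x in grid:
        for y in grid:
            assert i6(c0x(x, a), y) == i6(x, i6(a, y))
```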

*3.3. Perspective Two: The Pair (C, J) Satisfies (2)*

In this subsection, we focus on the second perspective when the pair (*C*, *J*) satisfies (2).

**Theorem 4.** *Let C* ∈ C *and I*, *J* ∈FI *and consider the following items:*


Moreover, consider the following two properties with respect to *C* and *J*.


Then the following items hold:


#### **Proof.**


Sufficiency. Assume that *J* satisfies (17). Since *J* is a fuzzy implication, it satisfies (12), and then Theorem 2 implies that *J* is an (*S*, *N*)-implication derived from a t-conorm *S* and a continuous fuzzy negation *N*. Moreover, *J*S,*N* satisfies (4) with the function *F*(*x*, *y*) = *N*1(*S*(*N*(*x*), *N*(*y*))), where *N*1 is such that *N* ◦ *N*1 = id[0,1]. Finally, Theorem 1 ensures that the pair (*C*, *J*) satisfies (2).

(3) The implications (i) and (iv) ⇒ (ii), and (ii) and (iv) ⇒ (i), are trivially true. □

**Remark 8.** *Now, let us consider the necessity of the distinct conditions used in Theorem 4.*

(i) Note that in Theorem 4 we have considered two properties of *C* and *J*. In fact, assumption (a) is not necessary. To see this, consider the pair (*C*<sup>*y*</sup><sub>*m*</sub>, *I*4) (see Remark 6); it is easy to see that *C*<sup>*y*</sup><sub>*m*</sub> does not have any right-neutral element, but *I*4 has left-neutral element *fl* for every *fl* ∈ (0.5, 1], and thus (a) is not valid. Now, assume that the triple (*C*<sup>*y*</sup><sub>*m*</sub>, *I*4, *J*) satisfies (6); then *J* = *I*4, as shown below:

Let *x*, *y* ∈ [0, 1] and *J*(*x*, *y*) = ν. If *x* ∈ [0, 0.5], then

$$\text{LHS of (6)} = I_4(C_m^y(x, \alpha), y) = I_4(\min\{x, \alpha\}, y) = 1,$$

$$\text{RHS of (6)} = J(x, I_4(\alpha, y)) = J\left(x, \begin{cases} 1, & \alpha \le 0.5, \\ y, & \alpha > 0.5 \end{cases}\right),$$

and thus *J*(*x*, *y*) = ν = 1, when *x* ≤ 0.5. If *x* ∈ (0.5, 1], then

$$\text{LHS of (6)} = I_4(C_m^y(x, \alpha), y) = I_4(\alpha, y) = \begin{cases} 1, & \alpha \le 0.5, \\ y, & \alpha > 0.5, \end{cases}$$

$$\text{RHS of (6)} = J(x, I_4(\alpha, y)) = J\left(x, \begin{cases} 1, & \alpha \le 0.5, \\ y, & \alpha > 0.5 \end{cases}\right) = \begin{cases} 1, & \alpha \le 0.5, \\ J(x, y), & \alpha > 0.5, \end{cases}$$

which implies that *J*(*x*, *y*) = ν = *y* if *x* > 0.5 and thus *J* = *I*4.

(ii) Similarly, one can show that assumption (b) is not necessary; viz., even if *NJ* is not continuous, there exists a fuzzy implication *J* satisfying (17) such that the pair (*C*, *J*) satisfies (2). To see this, consider the pair (*C*λ, *J* = *I*WB), where

$$C_\lambda(x, y) = \begin{cases} 1, & \text{if } x = 1 \text{ and } y = 1, \\ (xy)^\lambda, & \text{otherwise}, \end{cases} \; \lambda \ge 1, \quad I_{\mathrm{WB}}(x, y) = \begin{cases} 1, & \text{if } x < 1, \\ y, & \text{otherwise}. \end{cases}$$

It is well known that *I*WB satisfies the exchange principle (17), but the natural negation of *I*WB is not continuous. However, the pair (*C*λ, *I*WB) satisfies (2), as can be seen below:

$$\text{LHS of (2)} = I_{\mathrm{WB}}(C_\lambda(x, \alpha), y) = I_{\mathrm{WB}}\left(\begin{cases} 1, & \alpha = x = 1, \\ (\alpha x)^\lambda, & \alpha < 1 \text{ or } x < 1, \end{cases}\; y\right) = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1, \\ y, & \alpha = x = 1, \end{cases}$$

$$\text{RHS of (2)} = I_{\mathrm{WB}}(x, I_{\mathrm{WB}}(\alpha, y)) = I_{\mathrm{WB}}\left(x, \begin{cases} 1, & \alpha < 1, \\ y, & \alpha = 1 \end{cases}\right) = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1, \\ y, & \alpha = x = 1, \end{cases} = \text{LHS of (2)}.$$

(iii) Finally, note that assumption (iv) is not necessary, i.e., even if *J* ≠ *I*, there can exist a triple (*C*, *I*, *J*) satisfying (6) with the corresponding pair (*C*, *J*) satisfying (2). See, for instance, Example 3.
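The case analysis for the pair (*C*λ, *I*WB) can be double-checked on a grid; a minimal sketch with the illustrative choice λ = 2 (not part of the paper):

```python
# Grid check that the pair (C_lambda, I_WB) from Remark 8(ii) satisfies (2),
# with lambda = 2 as an illustrative choice.

def c_lam(x, y, lam=2):
    return 1.0 if x == 1 and y == 1 else (x * y) ** lam

def i_wb(x, y):
    return 1.0 if x < 1 else y

grid = [i / 10 for i in range(11)]
for a in grid:
    for x in grid:
        for y in grid:
            assert i_wb(c_lam(x, a), y) == i_wb(x, i_wb(a, y))
```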

### *3.4. Perspective Three: The Triple (C, I, J) Satisfies (5) and (6)*

In this subsection, we discuss the second question mentioned in the introduction, i.e., do there exist triples (*C*, *I*, *J*) with *J* ≠ *I* that satisfy both (5) and (6)?

#### **Remark 9.**

*(i) We already know that the triples* (*C*<sup>0</sup><sub>*y*</sub>, *I*Lt, *J*1) *and* (*C*<sup>1</sup><sub>*y*</sub>, *I*Gt, *J*2) *satisfy (6) (see Example 3). However, these triples also satisfy (5), as can be seen below:*

$$\text{LHS of (5)} = I_{\mathrm{Lt}}(C_y^0(x, \alpha), y) = I_{\mathrm{Lt}}\left(\begin{cases} 0, & x = 0, \\ \alpha, & x \neq 0, \end{cases}\; y\right) = \begin{cases} 0, & \alpha, x > 0 \text{ and } y < 1, \\ 1, & \text{otherwise}, \end{cases}$$

$$\text{RHS of (5)} = I_{\mathrm{Lt}}(x, J_1(\alpha, y)) = I_{\mathrm{Lt}}\left(x, \begin{cases} 1, & \alpha = 0, \\ y, & \alpha > 0 \end{cases}\right) = \begin{cases} 0, & \alpha, x > 0 \text{ and } y < 1, \\ 1, & \text{otherwise}, \end{cases} = \text{LHS of (5)}.$$

$$\text{LHS of (5)} = l\_{\text{Gt}}(\mathbb{C}\_y^1(\mathbf{x}, a), y) = l\_{\text{Gt}}(\begin{pmatrix} a, & \mathbf{x} = 1, \\ 0, & \mathbf{x} \neq 1 \end{pmatrix} \mathbf{y}) = \begin{cases} 0, & a = \mathbf{x} = 1 and \, y = 0, \\ 1, & otherwise, \end{cases}$$

$$\text{RHS of (5)} = I_{\text{Gt}}(x, J_2(\alpha, y)) = I_{\text{Gt}}\left(x, \begin{cases} y, & \alpha = 1, \\ 1, & \alpha < 1 \end{cases}\right) = \begin{cases} 0, & \alpha = x = 1 \text{ and } y = 0, \\ 1, & \text{otherwise} \end{cases} = \text{LHS of (5)}.$$
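These case analyses lend themselves to a mechanical check. The following Python sketch (the encodings and function names are ours, not from the paper) verifies both (5) and (6) for the two triples over a finite grid in [0, 1]:

```python
# Grid-based sanity check of (5): I(C(x, a), y) = I(x, J(a, y))
# and (6): I(C(x, a), y) = J(x, I(a, y)) for the two triples above.

def I_Lt(x, y):          # least fuzzy implication: 1 iff x = 0 or y = 1
    return 1.0 if x == 0.0 or y == 1.0 else 0.0

def I_Gt(x, y):          # greatest fuzzy implication: 0 iff x = 1 and y = 0
    return 0.0 if x == 1.0 and y == 0.0 else 1.0

def C0y(x, a):           # C^0_y: 0 if x = 0, a otherwise
    return 0.0 if x == 0.0 else a

def C1y(x, a):           # C^1_y: a if x = 1, 0 otherwise
    return a if x == 1.0 else 0.0

def J1(a, y):            # J_1: 1 if a = 0, y otherwise
    return 1.0 if a == 0.0 else y

def J2(a, y):            # J_2: y if a = 1, 1 otherwise
    return y if a == 1.0 else 1.0

def holds_5(C, I, J, grid):
    return all(I(C(x, a), y) == I(x, J(a, y))
               for a in grid for x in grid for y in grid)

def holds_6(C, I, J, grid):
    return all(I(C(x, a), y) == J(x, I(a, y))
               for a in grid for x in grid for y in grid)

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
assert holds_5(C0y, I_Lt, J1, grid) and holds_6(C0y, I_Lt, J1, grid)
assert holds_5(C1y, I_Gt, J2, grid) and holds_6(C1y, I_Gt, J2, grid)
```

A grid check is of course not a proof, but it guards against transcription errors in the case analyses above.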


*(ii) Observe that if a triple* (*C*, *I*, *J*) *satisfies both (5) and (6), then the pair* (*I*, *J*) *necessarily satisfies*

$$I(x, J(\alpha, y)) = J(x, I(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{27}$$

but the converse is not true. For instance, consider the triple (*C*Π, *I*5, *J*<sup>1</sup><sub>0</sub>) (see Remark 6), where

$$C_{\Pi}(x, y) = \begin{cases} 0, & \text{if } x = 0 \text{ or } y = 0, \\ xy, & \text{otherwise}, \end{cases} \qquad J_0^1(x, y) = \begin{cases} 1, & \text{if } x = 0 \text{ or } y > 0, \\ 0, & \text{otherwise}. \end{cases}$$


**Table 2.** The relationships between (5) and (6).

As shown below, the corresponding pair (*I*5, *J*<sup>1</sup><sub>0</sub>) satisfies (27).

$$\text{LHS of (27)} = I_5(x, J_0^1(\alpha, y)) = I_5\left(x, \begin{cases} 1, & \alpha = 0 \text{ or } y > 0, \\ 0, & \text{otherwise} \end{cases}\right) = \begin{cases} 1, & \alpha = 0 \text{ or } x = 0 \text{ or } y > 0, \\ 0, & \text{otherwise}, \end{cases}$$

$$\text{RHS of (27)} = J_0^1(x, I_5(\alpha, y)) = J_0^1\left(x, \begin{cases} y, & \alpha = 1, \\ 0, & \alpha > 0 \text{ and } y = 0, \\ 1, & \text{otherwise} \end{cases}\right) = \begin{cases} J_0^1(x, y), & \alpha = 1, \\ J_0^1(x, 0), & \alpha > 0 \text{ and } y = 0, \\ 1, & \text{otherwise} \end{cases} = \begin{cases} 1, & \alpha = 0 \text{ or } x = 0 \text{ or } y > 0, \\ 0, & \text{otherwise} \end{cases} = \text{LHS of (27)}.$$

On the other hand, it is easy to verify that the triple (*C*Π, *I*5, *J*<sup>1</sup><sub>0</sub>) satisfies neither (5) nor (6), since

$$I_5(C_{\Pi}(1, 1), 0.5) = 0.5 \neq 1 = I_5(1, J_0^1(1, 0.5)) = J_0^1(1, I_5(1, 0.5)).$$
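These computations can again be checked mechanically. The short Python sketch below (our own encoding of the operators, not code from the paper) confirms that the pair satisfies (27) on a grid, while (5) and (6) both fail at the point α = *x* = 1, *y* = 0.5:

```python
# Check that (I_5, J_0^1) satisfies (27) on a grid, while the triple
# (C_Pi, I_5, J_0^1) fails (5) and (6) at a = x = 1, y = 0.5.

def C_Pi(x, y):          # C_Pi(x, y) = xy (0 whenever x = 0 or y = 0)
    return x * y

def I5(x, y):            # I_5 from Remark 6
    if x == 1.0:
        return y
    return 0.0 if x > 0.0 and y == 0.0 else 1.0

def J10(x, y):           # J_0^1: 1 if x = 0 or y > 0, 0 otherwise
    return 1.0 if x == 0.0 or y > 0.0 else 0.0

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
# (27): I(x, J(a, y)) == J(x, I(a, y)) for all sampled a, x, y
assert all(I5(x, J10(a, y)) == J10(x, I5(a, y))
           for a in grid for x in grid for y in grid)
# (5) and (6) both fail at a = x = 1, y = 0.5:
assert I5(C_Pi(1.0, 1.0), 0.5) == 0.5      # common LHS of (5) and (6)
assert I5(1.0, J10(1.0, 0.5)) == 1.0       # RHS of (5) differs
assert J10(1.0, I5(1.0, 0.5)) == 1.0       # RHS of (6) differs
```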

(iv) Finally, observe that Equations (5) and (6) can be extended to a more general version depending on four functions:

$$I(C(x, \alpha), y) = J(x, K(\alpha, y)), \quad \alpha, x, y \in [0, 1], \tag{28}$$

where *I*, *J*, *K* ∈ FI and *C* ∈ C. Of course, there exist quadruples (*C*, *I*, *J*, *K*) for which the above equation holds. Let us give an example.

**Example 4.** *Consider the quadruple of functions* (*C*<sup>1</sup><sub>0</sub>, *I*Lt, *J* = *I*Gt, *K*<sup>1</sup><sub>*n*</sub>)*, where*

$$C_0^1(x, y) = \begin{cases} 1, & \text{if } (x, y) = (1, 1), \\ 0, & \text{otherwise}, \end{cases} \qquad K_n^1(x, y) = \begin{cases} 1, & \text{if } y = 1, \\ 1 - x^n, & \text{otherwise}, \end{cases} \quad n \geq 1.$$

Then the quadruple (*C*<sup>1</sup><sub>0</sub>, *I*Lt, *J* = *I*Gt, *K*<sup>1</sup><sub>*n*</sub>) satisfies (28), as shown below:

$$\text{LHS of (28)} = I_{\text{Lt}}(C_0^1(x, \alpha), y) = I_{\text{Lt}}\left(\begin{cases} 1, & \alpha = x = 1, \\ 0, & \alpha < 1 \text{ or } x < 1, \end{cases} \; y\right) = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1 \text{ or } y = 1, \\ 0, & \alpha = x = 1 \text{ and } y < 1, \end{cases}$$

$$\text{RHS of (28)} = I_{\text{Gt}}(x, K_n^1(\alpha, y)) = I_{\text{Gt}}\left(x, \begin{cases} 1, & y = 1, \\ 1 - \alpha^n, & \text{otherwise} \end{cases}\right) = \begin{cases} 1, & y = 1, \\ I_{\text{Gt}}(x, 1 - \alpha^n), & \text{otherwise} \end{cases} = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1 \text{ or } y = 1, \\ 0, & \alpha = x = 1 \text{ and } y < 1 \end{cases} = \text{LHS of (28)}.$$
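Example 4 can likewise be verified numerically. The sketch below (our encoding; the closure over the exponent *n* is an implementation choice, not notation from the paper) checks (28) for several exponents *n* ≥ 1 over a grid:

```python
# Grid check of (28): I_Lt(C_0^1(x, a), y) == I_Gt(x, K_n^1(a, y)).

def I_Lt(x, y):          # least fuzzy implication
    return 1.0 if x == 0.0 or y == 1.0 else 0.0

def I_Gt(x, y):          # greatest fuzzy implication
    return 0.0 if x == 1.0 and y == 0.0 else 1.0

def C01(x, y):           # C_0^1: 1 at (1, 1), 0 elsewhere
    return 1.0 if (x, y) == (1.0, 1.0) else 0.0

def Kn1(n):              # K_n^1(x, y): 1 if y = 1, 1 - x^n otherwise
    return lambda x, y: 1.0 if y == 1.0 else 1.0 - x ** n

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for n in (1, 2, 3):
    K = Kn1(n)
    assert all(I_Lt(C01(x, a), y) == I_Gt(x, K(a, y))
               for a in grid for x in grid for y in grid)
```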

**Theorem 5.** *Let C* ∈ C *and I*, *J*, *K* ∈ FI *and consider the following items:*

*(i) the quadruple* (*C*, *I*, *J*, *K*) *satisfies (28);*
*(ii) the triple* (*C*, *I*, *K*) *satisfies (5);*
*(iii) the triple* (*C*, *I*, *J*) *satisfies (6);*
*(iv) I* = *J;*
*(v) I* = *K.*

Then the following items hold:

*(a) If (i) and (iv) hold, then (ii) holds.*
*(b) If (i) and (v) hold, then (iii) holds.*
**Proof.** Assume that the quadruple (*C*, *I*, *J*, *K*) satisfies (28). The first item is clear: if *I* = *J*, then *I*(*C*(*x*, α), *y*) = *I*(*x*, *K*(α, *y*)), which means that the triple (*C*, *I*, *K*) satisfies (5). Similarly, if *I* = *K*, then we obtain *I*(*C*(*x*, α), *y*) = *J*(*x*, *I*(α, *y*)), which means that the triple (*C*, *I*, *J*) satisfies (6). This completes the proof. □

**Remark 10.** *Next, let us discuss the necessity of the distinct conditions used in Theorem 5.*

(i) Note that the assumption (iv) is not necessary, i.e., even if *I* ≠ *J*, there can exist a quadruple (*C*, *I*, *J*, *K*) satisfying (28) whose corresponding triple (*C*, *I*, *K*) satisfies (5). To see this, we need to search among triples (*I*, *J*, *K*) such that *I*(*x*, *K*(α, *y*)) = *J*(*x*, *K*(α, *y*)) for all α, *x*, *y* ∈ [0, 1]; however, this equation holds rather rarely when *I* ≠ *J*. For example, consider the quadruple (*C*<sup>1</sup><sub>0</sub>, *I*Gt, *J* = *I*WB, *K* = *I*Gt). Clearly, it satisfies (28), as can be seen below:

$$\text{LHS of (28)} = I_{\text{Gt}}(C_0^1(x, \alpha), y) = I_{\text{Gt}}\left(\begin{cases} 1, & \alpha = x = 1, \\ 0, & \alpha < 1 \text{ or } x < 1, \end{cases} \; y\right) = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1 \text{ or } y > 0, \\ 0, & \alpha = x = 1 \text{ and } y = 0, \end{cases}$$

$$\text{RHS of (28)} = I_{\text{WB}}(x, I_{\text{Gt}}(\alpha, y)) = I_{\text{WB}}\left(x, \begin{cases} 1, & \alpha < 1 \text{ or } y > 0, \\ 0, & \text{otherwise} \end{cases}\right) = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1 \text{ or } y > 0, \\ 0, & \alpha = x = 1 \text{ and } y = 0 \end{cases} = \text{LHS of (28)}.$$

On the other hand, the corresponding triple (*C*<sup>1</sup><sub>0</sub>, *I*Gt, *K* = *I*Gt) satisfies (5), as shown below:

$$\text{RHS of (5)} = I_{\text{Gt}}(x, I_{\text{Gt}}(\alpha, y)) = \begin{cases} I_{\text{Gt}}(x, 1), & \alpha < 1 \text{ or } y > 0, \\ I_{\text{Gt}}(x, 0), & \alpha = 1 \text{ and } y = 0 \end{cases} = \begin{cases} 1, & \alpha < 1 \text{ or } x < 1 \text{ or } y > 0, \\ 0, & \alpha = x = 1 \text{ and } y = 0 \end{cases} = \text{LHS of (28)} = \text{LHS of (5)}.$$
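A grid check with our own encodings of the operators confirms both claims at once: (28) holds for this quadruple although *I* ≠ *J*, and the triple (*C*<sup>1</sup><sub>0</sub>, *I*Gt, *K* = *I*Gt) satisfies (5):

```python
# Verify (28) for (C_0^1, I_Gt, J = I_WB, K = I_Gt) with I != J,
# and (5) for the triple (C_0^1, I_Gt, K = I_Gt), on a grid.

def I_Gt(x, y):          # greatest fuzzy implication
    return 0.0 if x == 1.0 and y == 0.0 else 1.0

def I_WB(x, y):          # Weber implication: 1 if x < 1, y if x = 1
    return 1.0 if x < 1.0 else y

def C01(x, y):           # C_0^1: 1 at (1, 1), 0 elsewhere
    return 1.0 if (x, y) == (1.0, 1.0) else 0.0

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for a in grid:
    for x in grid:
        for y in grid:
            lhs = I_Gt(C01(x, a), y)
            assert lhs == I_WB(x, I_Gt(a, y))   # (28) with J = I_WB
            assert lhs == I_Gt(x, I_Gt(a, y))   # (5) with K = I_Gt
```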

(ii) Similarly, the assumption (v) is not necessary, i.e., even if *I* ≠ *K*, the quadruple (*C*, *I*, *J*, *K*) can satisfy (28) with the corresponding triple (*C*, *I*, *J*) satisfying (6). To see this, let us consider the quadruple (*C*<sup>0</sup><sub>*x*²</sub>, *I*1, *J* = *I*2, *K* = *I*2) (see Remark 6), where

$$C_{x^2}^0(x, y) = \begin{cases} 0, & \text{if } y = 0, \\ x^2, & \text{otherwise}. \end{cases} \tag{29}$$

Observe that the quadruple (*C*<sup>0</sup><sub>*x*²</sub>, *I*1, *J* = *I*2, *K* = *I*2) satisfies (28), as shown below:

$$\text{LHS of (28)} = I_1(C_{x^2}^0(x, \alpha), y) = I_1\left(\begin{cases} 0, & \alpha = 0, \\ x^2, & \alpha > 0, \end{cases} \; y\right) = \begin{cases} 1, & \alpha = 0 \text{ or } y = 1, \\ 1 - x^2, & \text{otherwise}, \end{cases}$$

$$\text{RHS of (28)} = I_2(x, I_2(\alpha, y)) = I_2\left(x, \begin{cases} 1, & y = 1, \\ 1 - \alpha^2, & y < 1 \end{cases}\right) = \begin{cases} 1, & \alpha = 0 \text{ or } y = 1, \\ 1 - x^2, & \text{otherwise} \end{cases} = \text{LHS of (28)}.$$

Moreover, the corresponding triple (*C*<sup>0</sup><sub>*x*²</sub>, *I*1, *J* = *I*2) satisfies (6), as follows:

$$\text{RHS of (6)} = I_2(x, I_1(\alpha, y)) = I_2\left(x, \begin{cases} 1, & y = 1, \\ 1 - \alpha, & y < 1 \end{cases}\right) = \text{LHS of (28)} = \text{LHS of (6)}.$$
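As before, the claims of this item can be checked numerically. The sketch below (our encoding of *I*1, *I*2 and *C*<sup>0</sup><sub>*x*²</sub>) verifies that the quadruple satisfies (28) while *I* ≠ *K*, and that the triple with *J* = *I*2 satisfies (6):

```python
# Verify (28) for (C^0_{x^2}, I_1, J = I_2, K = I_2) with I != K,
# and (6) for the triple (C^0_{x^2}, I_1, J = I_2), on a grid.

def I1(x, y):            # I_1(x, y): 1 if y = 1, 1 - x otherwise
    return 1.0 if y == 1.0 else 1.0 - x

def I2(x, y):            # I_2(x, y): 1 if y = 1, 1 - x^2 otherwise
    return 1.0 if y == 1.0 else 1.0 - x ** 2

def Cx2(x, a):           # C^0_{x^2}(x, a): 0 if a = 0, x^2 otherwise
    return 0.0 if a == 0.0 else x ** 2

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for a in grid:
    for x in grid:
        for y in grid:
            lhs = I1(Cx2(x, a), y)
            assert lhs == I2(x, I2(a, y))   # (28) with J = K = I_2
            assert lhs == I2(x, I1(a, y))   # (6) with J = I_2
```

The grid values are exact binary fractions, so the equalities hold without floating-point tolerance.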

#### **4. Discussion**

In the previous section, we have studied the relationships between (2), (5) and (6) under three different perspectives. It was shown that the satisfaction of (2) by either or both of the pairs (*C*, *I*) and (*C*, *J*) is neither sufficient nor necessary for the triple (*C*, *I*, *J*) to fulfill (6). Similarly, the satisfaction of (5) by the triple (*C*, *I*, *J*) is neither sufficient nor necessary for the satisfaction of (6). Thus, a natural question arises: if the triple (*C*, *I*, *J*) satisfies both (5) and (6), do the corresponding pairs (*C*, *I*) and (*C*, *J*) necessarily satisfy (2)? Moreover, we have established a connection between the cross-law of importation (6) and the law of α-migrativity and discussed some conditions under which the studied functional equations are satisfied. However, there are still some issues worth further investigation, such as the following: can we characterize the triples (*I*, *J*, *K*) satisfying the functional equation


$$I(x\alpha, y) = J(x, K(\alpha, y)), \quad \alpha, x, y \in [0, 1],$$

that comes from (28) with *C* = *C*Π?

We intend to study the above issues in a future work.

#### **5. Conclusions**

The generalized cross-law of importation (6) had not been adequately investigated before this work. We have shown that Equation (6) can be derived from the α-migrativity of an *R*-implication with respect to an α-migrative t-norm (Remark 4). Another important fact is that the satisfaction of (2) by either or both of the pairs (*C*, *I*) and (*C*, *J*) is neither sufficient nor necessary for the triple (*C*, *I*, *J*) to satisfy (6) (Remark 6). In addition, some necessary conditions for solutions of Equation (6) were given (Proposition 1). Following this, we discussed the relationship between Equations (2), (5) and (6) under three different perspectives. In particular, note that both Equations (5) and (6) can be further generalized, as mentioned in Remark 9 (iv).

We believe that our work provides an opportunity for better understanding of a connection between the laws of importation and the laws of α-migrativity.

**Author Contributions:** Supervision, K.L.; writing—original draft, Y.Z.; writing—review and editing, K.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partially supported by the Natural Science Foundation of Hebei Province (Grant No. F2018201060).

**Acknowledgments:** The authors are extremely grateful to the Editor and anonymous reviewers for their very valuable comments and suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
