*Article* **Solvability of a Bounded Parametric System in Max-Łukasiewicz Algebra**

### **Martin Gavalec\* and Zuzana Němcová**

Faculty of Informatics and Management, University of Hradec Králové, 50003 Hradec Králové, Czech Republic; zuzana.nemcova@uhk.cz

**\*** Correspondence: martin.gavalec@uhk.cz; Tel.: +420-493-332-248

Received: 26 April 2020; Accepted: 19 June 2020; Published: 23 June 2020

**Abstract:** The max-Łukasiewicz algebra describes fuzzy systems working in discrete time which are based on two binary operations: the maximum and the Łukasiewicz triangular norm. The behavior of such a system in time depends on the solvability of the corresponding bounded parametric max-linear system. The aim of this study is to describe an algorithm recognizing for which values of the parameter the given bounded parametric max-linear system has a solution—represented by an appropriate state of the fuzzy system in consideration. Necessary and sufficient conditions of the solvability have been found and a polynomial recognition algorithm has been described. The correctness of the algorithm has been verified. The presented polynomial algorithm consists of three parts depending on the entries of the transition matrix and the required state vector. The results are illustrated by numerical examples. The presented results can also be applied in the study of max-Łukasiewicz systems with interval coefficients. Furthermore, the Łukasiewicz arithmetical conjunction can be used in various types of models, for example, in a cash-flow system.

**Keywords:** max-min algebra; fuzzy max-T algebra; Łukasiewicz triangular norm; max-Łukasiewicz algebra; parametric solvability

**MSC:** 90C15

### **1. Introduction**

The max-Łukasiewicz algebra (max-Łuk algebra, for short), is one of the so-called max-T fuzzy algebras, which are defined for various triangular norms *T*.

A max-T fuzzy algebra works with variables in the unit interval I = [0, 1] and uses the binary operations of maximum and a *t*-norm, *T*, instead of the conventional operations of addition and multiplication. Formally, a max-T fuzzy algebra is a triplet (I, ⊕, ⊗*T*), where I = [0, 1] and ⊕ = max, ⊗*T* = *T* are binary operations on I. By I(*m*, *n*) and I(*n*) we denote the sets of all matrices and vectors of the given dimensions over I. The operations ⊕, ⊗*T* are extended to matrices and vectors in the standard manner. Similarly, partial orderings on I(*m*, *n*) and I(*n*) are induced by the linear ordering on I.

The triangular norms (*t*-norms, for short) were introduced in [1], in connection with probabilistic metric spaces. The *t*-norms interpretations are mainly the conjunction in fuzzy logics and intersection of fuzzy sets. Therefore, they find applications in many domains, for example in decision making processes, game theory and statistics, information and data processing or risk management. The *t*-norms and *t*-conorms belong to basic notions in the theory of fuzzy sets. The following four main *t*-norms: Łukasiewicz, Gödel, product and drastic (and many others) can be found in [2].

The Łukasiewicz norm is often characterized as a logic of absolute or metric comparison.

The Łukasiewicz conjunction is defined by the formula

$$x \otimes\_L y = \max\{x + y - 1, 0\}. \tag{1}$$

The Gödel norm is defined as the minimum of the entries (the truth degrees of the constituents). It is the simplest *t*-norm and is often characterized as a logic of relative comparison

$$
x \otimes\_G y = \min(x, y). \tag{2}
$$

The product norm is defined by the formula

$$
x \otimes\_P y = x \cdot y.\tag{3}
$$

The drastic triangular norm (the "weakest norm") is a basic example of a non-divisible *t*-norm on any partially ordered set. This *t*-norm is defined by the formula

$$x \otimes\_D y = \begin{cases} \min(x, y) & \text{if } \max(x, y) = 1, \\ 0 & \text{if } \max(x, y) < 1. \end{cases} \tag{4}$$
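
For concreteness, the four *t*-norms above can be written in a few lines of code. The following is a minimal Python sketch; the function names are ours, chosen for illustration only:

```python
def t_lukasiewicz(x, y):
    """Łukasiewicz conjunction (1): max{x + y - 1, 0}."""
    return max(x + y - 1.0, 0.0)

def t_godel(x, y):
    """Gödel (minimum) t-norm (2)."""
    return min(x, y)

def t_product(x, y):
    """Product t-norm (3)."""
    return x * y

def t_drastic(x, y):
    """Drastic t-norm (4): min(x, y) if max(x, y) = 1, otherwise 0."""
    return min(x, y) if max(x, y) == 1.0 else 0.0
```

For example, `t_lukasiewicz(0.7, 0.6)` gives 0.3 (the part of the sum exceeding 1), while `t_lukasiewicz(0.3, 0.4)` gives 0.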

The max-T algebras with the above-mentioned *t*-norms have various applications, and their steady states and optimization methods have been intensively studied; see, for example, [3–7]. The algebras with interval entries have been studied in [8–10].

In the particular case when *T* is the Gödel *t*-norm, we get an important max-min algebra which is useful in solving various problems in fuzzy scheduling and optimization. Max-min algebra belongs to the so-called tropical mathematics, which has many applications and brings a great number of contributions to mathematical theory. Interesting monographs [11–14] and collections of papers [15–19] come from tropical mathematics and its applications.

Tropical algebras are often used for describing and studying systems working in discrete time stages. The state of the system in stage *k* is described by the state vector, *x*(*k*). Then the transition matrix, *A*, determines the transition of the system to the next stage. In more detail, the next state of the system, *x*(*k* + 1), is obtained by the multiplication *A* ⊗ *x*(*k*) = *x*(*k* + 1). During the work of the system, it can happen that, after some time, the system reaches a steady state. In algebraic notation, the state vectors of steady states are eigenvectors of the transition matrix with some eigenvalue *λ* ∈ I: *A* ⊗ *x* = *λ* ⊗ *x*.

The eigenproblem in max-min algebra has been frequently investigated, and many interesting results have been found. The structure of the eigenspace has been described and algorithms for computing the largest eigenvector have been suggested, see for example [20,21]. The eigenvectors in a max-*T* algebra, for various triangular norms *T*, have applications in fuzzy set theory. Such eigenvectors have been studied in [5,7,22]. The eigenvalues and eigenvectors are important characteristics of the system described by the fuzzy algebra. For the case of the drastic and product *t*-norms, the structure of the eigenspace has been studied in [5,7]. Finally, [22] describes the case of a Łukasiewicz fuzzy algebra.

Łukasiewicz arithmetical conjunction has applications in many model situations. The operation subtracts 1 from the sum of its arguments and takes the maximum with zero. This leads to the idea that the result of the operation is the remainder exceeding the unit. Thus, the Łukasiewicz conjunction can be used, for example, in describing the backup of data on a computer, the maximal capacity of an oil tank, or a lump payment in finances.

Such applications often lead to systems of max-Łuk linear equations. There is no inverse operation to ⊕ in max-Łuk algebra; therefore, variables cannot be transferred from one side of an equation to the other. As a consequence, solving one-sided linear systems (with variables, say, on the left-hand side of the equations) requires an approach different from solving two-sided systems (with variables on both sides).

The aim of this paper is to present an algorithm for recognizing the solvability of a given one-sided max-Łuk linear system with bounded variables, depending on a linear parameter on the right-hand side; see (9) and (10) for an exact formulation.

This problem has not yet been studied in the parametrized version. The main contribution of this paper is the description of the recognition algorithm, which has a crucial role in the investigation of interval eigenvectors. The algorithm for recognizing the solvability of a given one-sided max-Łuk linear system can be briefly summarized in the following steps:

1. permute the equations in the system so that the right-hand side becomes decreasing, that is,

$$0 \le 1 - b\_1 \le 1 - b\_2 \le \cdots \le 1 - b\_m \le 1,\tag{5}$$
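
This permutation step can be sketched as follows; a minimal Python illustration, assuming the system is given as a list of matrix rows `C` and a right-hand side list `b` (both names are ours, for illustration):

```python
def sort_system(C, b):
    """Permute the equations of the system so that the right-hand side b
    is decreasing, i.e. 0 <= 1 - b_1 <= 1 - b_2 <= ... <= 1 - b_m as in (5).
    The same permutation is applied to the rows of C, so the system stays
    equivalent to the original one."""
    order = sorted(range(len(b)), key=lambda i: -b[i])  # indices by decreasing b_i
    return [C[i] for i in order], [b[i] for i in order]
```

For example, `sort_system([[0.1], [0.2], [0.3]], [0.2, 0.9, 0.5])` returns the rows reordered as `[[0.2], [0.3], [0.1]]` together with the sorted right-hand side `[0.9, 0.5, 0.2]`.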


The structure of this paper is the following. Section 2 contains a case study based on an interactive cash-flow system, which shows motivation for solving linear systems in max-Łuk algebra. The problem is formulated in Section 3, where the preparatory results are also presented. The main results are described in Section 4. Illustrative numerical examples related to the case study from Section 2 are shown with details in Section 5. Discussion, comparison of the results with other papers, as well as future developments, are given in Conclusions.

#### **2. Case Study: Interactive Cash-Flow System**

Consider an interactive cash-flow system created by a network of *n* cooperating banks, *B*1, *B*2, ... , *Bn*. Assume that the cooperation is performed in stages. During the run of the system, variable interest rates of the banks mutually influence each other. In each stage, every bank *Bi* chooses a cash-flow cooperation with some other bank *Bj* (choice *i* = *j* is also possible) in order to achieve the optimal profit, expressed by the value of the interest rate achieved for the next stage.

The system can be modeled as a discrete events system (DES). For any bank *Bi*, variable *xi*(*k*) shows the interest rate value in stages *k* = 1, 2, ...; the vector *x*(*k*) = (*x*1(*k*), *x*2(*k*), ... , *xn*(*k*))*<sup>T</sup>* is called the state vector of the DES in stage *k*. The change of the next state-vector values during the transition of the DES to the state vector *x*(*k* + 1) depends on the entries *aij* of the so-called transition matrix *A*.

The possible increase of the profit coming from the cooperation of *Bi* with bank *Bj* is equal to *aij* (including the lump payment). Thus, the efficient part of the sum of *aij* and *xj*(*k*) is only the part exceeding 1 (that is, exceeding 100%), in the case when *Bi* chooses *Bj* for cooperation in stage *k*.

Optimization of the variable interest rate in stage *k* + 1 leads every *Bi* to such a choice of *Bj* where the efficient increase of the profit is maximal. That is, *xi*(*k* + 1) = max*j*∈*N* max{*aij* + *xj*(*k*) − 1, 0}. In max-Łuk notation the optimal choice can be written as

$$x\_{i}(k+1) = \bigoplus\_{j \in N} a\_{ij} \otimes\_{L} x\_{j}(k), \text{ or } \tag{6}$$

$$x(k+1) = A \otimes\_L x(k). \tag{7}$$

For simplicity we assume that the system is homogeneous (that is, *A* does not change from stage to stage).
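
The transition (6) and (7) can be computed directly. The following is a minimal Python sketch (function names are ours), with `A` a list of matrix rows and `x` the current state vector:

```python
def luk(x, y):
    """Łukasiewicz conjunction (1): max{x + y - 1, 0}."""
    return max(x + y - 1.0, 0.0)

def next_state(A, x):
    """One DES transition (7): x(k+1) = A (Łuk-)times x(k), where, as in (6),
    x_i(k+1) is the maximum over j of luk(a_ij, x_j(k))."""
    return [max(luk(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]
```

For example, with *A* = [[0.9, 0.6], [0.7, 0.8]] and *x* = [0.5, 0.4], the next state is approximately [0.4, 0.2]: only the parts of the sums exceeding 1 survive.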

In real life, the matrix and vector entries are not always exact values. For example, if (7) is applied for prediction, then the transition matrix is not exactly known, it is only an estimation, belonging to some interval *A* ∈ **A** = [*A*, *A*]. Analogously, the state vector belongs to some interval *x* ∈ **X** = [*x*, *x*]. We say that the DES is considered with interval coefficients.

For formulas with interval coefficients, it must be decided which values from the corresponding interval will be taken. One possibility is to take all values (using the universal quantifier). The other possibility is to use the existential quantifier and only require that there is some value from the interval, such that the formula is satisfied.

If there are more interval variables in the formula in consideration, then the quantifiers can be combined. For example, various types of quantified notions in max-min algebra are described in [23,24].

By recurrent application of (7), the sequence of state vectors (also called the orbit of the DES) *x*, *A* ⊗ *x*, ... , *A<sup>k</sup>* ⊗ *x*, ... , where *A<sup>k</sup>* = *A* ⊗ · · · ⊗ *A* (*k* times), can be created. The orbit represents a predicted evolution of interest rates. Two natural questions arise:

Q1. Can the orbit reach a fixed given state vector value?

Q2. Can the orbit reach a steady state (such a state which does not change from stage to stage)?

Q1 requires recognizing whether, in some stage *k*, there is a value *y* = *x*(*k*) such that *b* = *x*(*k* + 1) for a given vector *b* ∈ I(*n*). If we consider the problem in interval arithmetic, then the state vector becomes an interval variable *y* ∈ [*y*, *y*]. Moreover, we can generalize the problem by adding a parameter *λ* ∈ I to the given value *b*. Then the original question is solved as the special subcase with *λ* = 1.

Therefore, question Q1 can be solved as the one-sided bounded parametric problem studied in Sections 3 and 4. The main result is Theorem 6, which describes a necessary and sufficient condition for the solvability of the system (9) and (10).

The computations answering Q1 are illustrated by Example 1 (positive answer) and Example 2 (negative answer) in Section 5, with a detailed interpretation.

Q2 is connected with the eigenproblem of the transition matrix. A steady state is characterized by the equation *x*(*k* + 1) = *x*(*k*) or, equivalently, by *A* ⊗*L x* = *x*. That is, steady states are equivalent to max-Łuk eigenvectors of the transition matrix *A*. Usually the eigenvectors are considered in a more general form, with an added parameter, the so-called eigenvalue *λ* ∈ I. That is, *x* ∈ I(*n*) is an eigenvector of matrix *A* ∈ I(*n*, *n*) with eigenvalue *λ* ∈ I if *A* ⊗*L x* = *λ* ⊗*L x*. The eigenvectors in max-Łuk algebra have been studied in [3,6], and recently, in a more general context, in [25].
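
The eigenvector condition *A* ⊗*L x* = *λ* ⊗*L x* is easy to test numerically. A minimal Python sketch (names are ours), comparing both sides entrywise up to a tolerance:

```python
def luk(x, y):
    """Łukasiewicz conjunction: max{x + y - 1, 0}."""
    return max(x + y - 1.0, 0.0)

def is_eigenvector(A, x, lam, eps=1e-9):
    """Check whether A (Łuk-)times x equals lam (Łuk-)times x, i.e. whether
    x is a max-Łuk eigenvector of A with eigenvalue lam."""
    lhs = [max(luk(a, xj) for a, xj in zip(row, x)) for row in A]
    rhs = [luk(lam, xj) for xj in x]
    return all(abs(l - r) <= eps for l, r in zip(lhs, rhs))
```

For instance, with *λ* = 1 the right-hand side reduces to *x* itself, so the check becomes the steady-state condition *A* ⊗*L x* = *x*.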

If we wish to answer Q2 in interval arithmetic, then we have to consider *A* ∈ **A** = [*A*, *A*] and *x* ∈ **X** = [*x*, *x*]. According to the choice of the universal/existential quantifier for *A* ∈ **A** and for *x* ∈ **X**, various types of interval eigenproblem have been studied by various authors over max-plus and max-min algebra.

For example, **X** is called a strongly tolerable eigenvector of **A** if

$$(\exists \lambda)(\exists A \in \mathbf{A})(\forall \mathbf{x} \in \mathbf{X}) \left[A \otimes \mathbf{x} = \lambda \otimes \mathbf{x}\right]. \tag{8}$$

In words, we ask for the existence of *λ* and *A* ∈ **A** such that every *x* ∈ **X** is an eigenvector of *A* with eigenvalue *λ* (we shortly say that every *x* ∈ **X** is tolerated by *A*).

An analogous problem has been solved in max-min algebra in [23], where it has been shown that the problem can be reduced to the solvability of the system *C̃* ⊗ *y* = *λ* ⊗ *b̃* using generators of the interval matrix **A**. The main idea of the algorithm is to find a certificate matrix of dimension *n* × *n* for the given instance, as a max-min linear combination of generators. The necessary coefficients of this linear combination can be computed by solving an auxiliary one-sided max-min linear system of dimension *n*<sup>2</sup> × *n*<sup>2</sup>.

By analogy, this approach can easily be transferred from max-min to max-Łuk algebra, with a single exception: recognizing the solvability of the auxiliary one-sided linear system of dimension *n*<sup>2</sup> × *n*<sup>2</sup>. Namely, recognizing the parametric solvability of a one-sided linear system is a substantially more complicated problem in max-Łuk algebra than it is in max-min algebra. In fact, it is in this manuscript that an efficient algorithm for the necessary solvability problem is formulated.

Till now, the specific methods of max-Łuk algebra have only been presented at the EURO 2019 conference in Dublin. An extended version of that presentation is in preparation and will be submitted soon. The recognition method described in this manuscript plays an important role in the proofs of the following two theorems.

**Theorem 1** ([23])**.** *Let an interval matrix* **A** = [*A*, *A*] *and an interval vector* **X** = [*x*, *x*] *be given. Then,* **X** *is a strongly tolerable eigenvector of* **A** *if and only if C̃* ⊗*L y* = *λ* ⊗*L b̃ is solvable for some λ* ∈ I*.*

**Theorem 2** ([23])**.** *The recognition problem of whether a given interval vector* **X** *is a strong tolerance eigenvector of a given interval matrix* **A** *in max-min algebra, is solvable in O*(*n*<sup>5</sup>) *time.*

#### **3. Bounded Parametric Systems of Max-Łuk Linear Equations**

In view of the motivation inspired by the case study in Section 2, the solvability problem for a bounded parametric linear system in max-Łuk algebra is studied in this paper.

We consider the system

$$C \otimes\_L y = \lambda \otimes\_L b,\tag{9}$$

$$
\underline{y} \le y \le \overline{y}.\tag{10}
$$

with a fixed matrix *C* ∈ I(*m*, *n*) and right-hand side vector *b* ∈ I(*m*). The basic question is whether the system is solvable for some value 0 < *λ* ∈ I of the parameter. In other words, we are looking for a necessary and sufficient condition allowing the recognition of whether there is a *λ* ∈ I \ {0} such that (9) and (10) is solvable (the case *λ* = 0 is trivial).

In the sequel, we use the notation *M* = {1, 2, ... , *m*} and *N* = {1, 2, ... , *n*}. The set of all solutions to (9) without any constraint is denoted by *S*(*C*, *λ* ⊗*L b*); the solution set with the upper bound is *S*(*C*, *λ* ⊗*L b*, *y*), and the solution set with both upper and lower bounds is denoted by *S*(*C*, *λ* ⊗*L b*, *y*, *y*). That is, we have to recognize whether *S*(*C*, *λ* ⊗*L b*, *y*, *y*) ≠ ∅ for some *λ* ∈ I.

Without any loss of generality, we assume till the end of the paper that the right-hand side vector *b* ∈ I(*m*) satisfies the monotonicity condition

$$1 \ge b\_1 \ge b\_2 \ge \cdots \ge b\_m \ge 0. \tag{11}$$

Then

$$0 \le 1 - b\_1 \le 1 - b\_2 \le \cdots \le 1 - b\_m \le 1. \tag{12}$$

System (9) is equivalent to

$$(\forall i \in M) \left[\bigoplus\_{j \in N} \left(c\_{ij} \otimes\_L y\_j\right) = \lambda \otimes\_L b\_i\right],\tag{13}$$

which is further equivalent to

$$(\forall i \in M) \left(\forall j \in N\right) \left[c\_{ij} \otimes\_L y\_j \le \lambda \otimes\_L b\_i\right],\tag{14}$$

$$(\forall k \in M) \left(\exists j \in N\right) \left[c\_{kj} \otimes\_L y\_j = \lambda \otimes\_L b\_k\right].\tag{15}$$

In view of the definition of ⊗*L*, the inequality in (14) takes one of the following forms

$$0 < c\_{ij} + y\_j - 1 \le \lambda + b\_i - 1,\tag{16}$$

$$0 \ge c\_{ij} + y\_j - 1, \qquad 0 < \lambda + b\_i - 1,\tag{17}$$

$$0 \ge c\_{ij} + y\_j - 1, \qquad 0 \ge \lambda + b\_i - 1. \tag{18}$$

We shall use the notation *H*(*λ*) = {*i* ∈ *M*; 0 < *λ* + *bi* − 1} (for short: *H* if *λ* is clear from the context). For *i* ∈ *M* \ *H* we have 0 ≥ *λ* + *bi* − 1. Therefore,

$$
\lambda \otimes\_L b\_i = \lambda + b\_i - 1 \qquad \text{for } i \in H,\tag{19}
$$

$$
\lambda \otimes\_L b\_i = 0 \qquad\qquad\quad\; \text{for } i \in M \setminus H.\tag{20}
$$

For brevity, we write *dij* = *bi* − *cij* for every *i* ∈ *M*, *j* ∈ *N*.
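
The set *H*(*λ*) and the values *dij* can be computed directly from (19) and (20). A minimal Python sketch (names are ours; rows are 0-indexed here, unlike the 1-indexed notation of the paper):

```python
def active_rows(b, lam):
    """H(lam) = {i : lam + b_i - 1 > 0}. For i in H we have
    lam (Łuk-)times b_i = lam + b_i - 1, as in (19); for the remaining
    rows the product is 0, as in (20)."""
    return [i for i, bi in enumerate(b) if lam + bi - 1.0 > 0.0]

def d_matrix(C, b):
    """The values d_ij = b_i - c_ij for every row i and column j."""
    return [[bi - cij for cij in row] for row, bi in zip(C, b)]
```

For example, with *b* = [0.9, 0.6, 0.3] and *λ* = 0.5, only the first two rows are in *H*(*λ*), since 0.5 + 0.3 − 1 ≤ 0 for the last one.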

**Lemma 1.** *If y* ∈ *S*(*C*, *λ* ⊗*<sup>L</sup> b*, *y*, *y*)*, then*

$$\text{(i)} \quad \left(\forall j \in N\right) \left[y\_j \le \left(\lambda + \bigwedge\_{i \in H} d\_{ij}\right)\right],$$

$$\text{(ii)} \quad (\forall j \in N) \left[ y\_j \le \bigwedge\_{i \in M \backslash H} (1 - c\_{ij}) \right],$$

$$\text{(iii)} \quad (\forall j \in N) \left[ \underline{y}\_j \le y\_j \le \overline{y}\_j \right].$$

**Proof.** Let *j* ∈ *N* be fixed.

(i) For every *i* ∈ *H* we have 0 < *λ* + *bi* − 1, which implies *cij* + *yj* − 1 ≤ *λ* + *bi* − 1, in view of (16) and (17). That is, *yj* ≤ *λ* + *bi* − *cij* = *λ* + *dij*. As a consequence, *yj* ≤ ⋀*i*∈*H* (*λ* + *dij*) = *λ* + ⋀*i*∈*H* *dij*.

(ii) For *i* ∈ *M* \ *H* we have 0 ≥ *λ* + *bi* − 1, which implies 0 ≥ *cij* + *yj* − 1, by (18). Then *yj* ≤ 1 − *cij*, that is, *yj* ≤ ⋀*i*∈*M*\*H* (1 − *cij*).

(iii) The assertion follows directly from the definition.

If the equality *ckj* ⊗*<sup>L</sup> yj* = *λ* ⊗*<sup>L</sup> bk* in (15) holds, then we say that *yj* is active in row *k*. If so, we write *k* ∈ *Aj*(*λ*) and *Aj* = *Aj*(*λ*) is then called the activity set of the variable *yj*.

There are two possible activity subcases:

$$y\_j = \lambda + d\_{kj} \qquad \text{for } k \in H,\tag{21}$$

$$0 \le y\_j \le 1 - c\_{kj} \qquad \text{for } k \in M \backslash H. \tag{22}$$

Namely, if *k* ∈ *H*, then 0 < *λ* + *bk* − 1 = *λ* ⊗*L bk*. Then also *ckj* ⊗*L yj* > 0, which gives *ckj* + *yj* − 1 = *λ* + *bk* − 1. As a consequence, *yj* = *λ* + *bk* − *ckj* = *λ* + *dkj*. On the other hand, if *k* ∈ *M* \ *H*, then 0 ≥ *λ* + *bk* − 1. That is, *λ* ⊗*L bk* = 0. Then also *ckj* ⊗*L yj* = 0, which implies *ckj* + *yj* − 1 ≤ 0, and *yj* ≤ 1 − *ckj*.

In subcase (21) with *k* ∈ *H*, we have *yj* = *λ* + *dkj* ≤ *λ* + ⋀*i*∈*H* *dij* by Lemma 1(i), that is, *dkj* ≤ ⋀*i*∈*H* *dij*. Since *k* ∈ *H*, the converse inequality holds trivially. As a consequence,

$$d\_{kj} = \bigwedge\_{i \in H} d\_{ij}.\tag{23}$$

In subcase (22) with *k* ∈ *M* \ *H*, we get, using Lemma 1(ii),

$$0 \le y\_j \le \bigwedge\_{i \in \mathcal{M} \backslash H} (1 - c\_{ij}) \le 1 - c\_{kj}. \tag{24}$$
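
The bounds of Lemma 1, capped by the upper bound from (10), yield the largest candidate value for each *yj*. The following Python sketch (a hypothetical helper of ours, not the paper's full algorithm) combines the bounds from Lemma 1(i) and (ii) with the given upper bound vector:

```python
def upper_bound(C, b, lam, y_up):
    """Largest candidate for y_j allowed by Lemma 1:
    y_j <= min( lam + min over i in H of d_ij,
                min over i in M\\H of (1 - c_ij),
                the given upper bound y_up[j] ).
    Rows and columns are 0-indexed."""
    m, n = len(C), len(C[0])
    H = [i for i in range(m) if lam + b[i] - 1.0 > 0.0]
    bounds = []
    for j in range(n):
        u = y_up[j]
        if H:  # bound from Lemma 1(i), using d_ij = b_i - c_ij
            u = min(u, lam + min(b[i] - C[i][j] for i in H))
        rest = [i for i in range(m) if i not in H]
        if rest:  # bound from Lemma 1(ii)
            u = min(u, min(1.0 - C[i][j] for i in rest))
        bounds.append(u)
    return bounds
```

For example, with *C* = [[0.8, 0.5], [0.4, 0.9]], *b* = [0.9, 0.2], *λ* = 0.6 and the trivial upper bound (1, 1), only the first row is in *H*, and the resulting bounds are approximately (0.6, 0.1).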

**Lemma 2.** *Assume C* ∈ I(*m*, *n*)*, b* ∈ I(*m*) *with the monotonicity condition* (11)*, y*, *y*, *y* ∈ I(*n*)*, λ* ∈ I *and h* ∈ *M with* 1 − *bh* < *λ* ≤ 1 − *b*<sub>*h*+1</sub>*. Then H* = {1, 2, . . . , *h*} *and the following statements are equivalent:*


*where the submatrix CH (subvector bH) consists of the rows Ci (entries bi) with i* ∈ *H. Analogously, the vector y<sup>h</sup>* ∈ I(*n*) *with y<sup>h</sup><sub>j</sub>* = ⋀*i*∈*M*\*H* (1 − *cij*) *for every j* ∈ *N is constructed from the rows Ci of C, i* ∈ *M* \ *H.*
