10th Anniversary of *Axioms*

Logic

Edited by Oscar Castillo

www.mdpi.com/journal/axioms

## **10th Anniversary of** *Axioms***: Logic**


Editor **Oscar Castillo**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Oscar Castillo Division of Graduate Studies and Research Tijuana Institute of Technology TecNM Tijuana Mexico

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Axioms* (ISSN 2075-1680) (available at: www.mdpi.com/journal/axioms/special_issues/10th_Anniversary_of_axioms_logic).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-7839-2 (Hbk) ISBN 978-3-0365-7838-5 (PDF)**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


## **About the Editor**

## **Oscar Castillo**

Oscar Castillo obtained his Doctor of Science degree (Doctor Habilitatus) in Computer Science from the Polish Academy of Sciences (with a dissertation titled "Soft Computing and Fractal Theory for Intelligent Manufacturing"). He is a Professor of Computer Science in the Graduate Division, Tijuana Institute of Technology, Tijuana, Mexico. In addition, he serves as Research Director of Computer Science and is head of the research group focusing on hybrid fuzzy intelligent systems. Currently, he is the president of HAFSA (Hispanic American Fuzzy Systems Association) and a former president of IFSA. Prof. Castillo is also the chair of the Mexican Chapter of the IEEE Computational Intelligence Society. He belongs to the Mexican Research System (SNI Level 3). His research interests are in type-2 fuzzy logic, fuzzy control, and neuro-fuzzy and genetic-fuzzy hybrid approaches. He has published over 300 journal papers, 10 authored books, 60 edited books, 300 papers in conference proceedings, and more than 300 chapters in edited books, totaling more than 1080 publications (according to Scopus), with an h-index of 89 and more than 26,000 citations according to Google Scholar. He has been the Guest Editor of several successful Special Issues in the following journals: *Applied Soft Computing*, *Intelligent Systems*, *Information Sciences*, *Soft Computing*, *Non-Linear Studies*, *Fuzzy Sets and Systems*, *JAMRIS*, and *Engineering Letters*. He is currently an Associate Editor of *Information Sciences*, *Engineering Applications of Artificial Intelligence*, *International Journal of Fuzzy Systems*, *Complex & Intelligent Systems*, *Granular Computing*, and *Intelligent Systems* (Wiley). He was elected an IFSA Fellow in 2015 and an MICAI Fellow in 2016. Finally, he received recognition as a Highly Cited Researcher in 2017 and 2018 from Clarivate Analytics and Web of Science.

## **Preface to "10th Anniversary of** *Axioms***: Logic"**

Published for the first time in 2012, *Axioms* is celebrating its 10th anniversary. To mark this significant milestone and celebrate the achievements made throughout the years, we have organized a Special Issue entitled "10th Anniversary of *Axioms*—Logic" in the section "Logic" of *Axioms*.

Mathematical logic is a field of mathematics with a wide range of applications. This reprint includes high-quality papers focusing on original research in logic, mathematical logic, and their applications, with the main (but not exclusive) focus on algebraic logic, fuzzy logic, descriptive set theory, decision making, computability and recursion theory, and algorithmic and combinatorial optimization.

We developed this reprint in order to honor the collective efforts of all those who have contributed so far to the success of the journal and the gathering of scientific research in mathematical logic related to current challenges and future innovations.

> **Oscar Castillo** *Editor*

## *Article* **Granular Computing Approach to Evaluate Spatio-Temporal Events in Intuitionistic Fuzzy Sets Data through Formal Concept Analysis**

**Imran Ali 1,2 , Yongming Li 1,\* and Witold Pedrycz 3,4**

	- <sup>4</sup> Systems Research Institute, Polish Academy of Sciences, 01-224 Warsaw, Poland
	- **\*** Correspondence: liyongm@snnu.edu.cn

**Abstract:** Knowledge discovery through spatial and temporal aspects of data related to occurrences of events has many applications in digital forensics. Specifically, in electronic surveillance, it is helpful to construct a timeline to analyze information. The existing techniques only analyze the occurrence and co-occurrence of events; however, in general, there are three aspects of events: occurrences (and co-occurrences), nonoccurrences, and the uncertainty of occurrences/nonoccurrences with respect to the spatial and temporal aspects of data. These three aspects of events have to be considered to better analyze periodicity and predict future events. This study focuses on the spatial and temporal aspects given in intuitionistic fuzzy (IF) datasets using the granular computing (GrC) paradigm; formal concept analysis (FCA) was used to understand the granularity of data. The originality of the proposed approach is to discover the periodicity of event data given in IF sets through FCA and the GrC paradigm, which helps to predict future events. An experimental evaluation was also performed to understand the applicability of the proposed methodology.

**Keywords:** granular computing; formal concept analysis; intuitionistic fuzzy sets; periodicity; spatial and temporal aspects; knowledge discovery

**MSC:** 74E20; 94D05; 03B52; 03G10; 06D72

## **1. Introduction**

An event is the occurrence of something at some place and time which involves some actors as objects and spatio-temporal features as attributes. In the literature, the ideas of spatial, temporal, and spatio-temporal co-occurrences can be found. In general, spatial co-occurrence is defined as when two or more events occur at the same place, temporal co-occurrence as when a number of events occur at the same time or in the same time interval, and spatio-temporal co-occurrence as when events occur at the same place and time. Periodical events are those that occur at the same time intervals, for example, an event that occurs every day, weekend, month, or year. In the application domain, it is important to analyze these aspects of events. In the context of smart video surveillance, it is possible to discover the periodical and same-place movements of pedestrians to predict a crime before it happens. Moreover, in the context of intuitionistic fuzzy (IF) sets, there are membership and nonmembership values that can be indicated for events occurring at some place and time. Existing approaches only work on occurrences and co-occurrences of events; however, in real life, there can be three aspects: occurrences (and co-occurrences), nonoccurrences, and the uncertainty of occurrences/nonoccurrences. The limitation of focusing

**Citation:** Ali, I.; Li, Y.; Pedrycz, W. Granular Computing Approach to Evaluate Spatio-Temporal Events in Intuitionistic Fuzzy Sets Data through Formal Concept Analysis. *Axioms* **2023**, *12*, 407. https:// doi.org/10.3390/axioms12050407

Academic Editor: Hsien-Chung Wu

Received: 9 February 2023 Revised: 29 March 2023 Accepted: 20 April 2023 Published: 22 April 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

only on the occurrences and co-occurrences of events is that data related only to event occurrences may miss the elicitation of complete and important knowledge related to event nonoccurrences, as well as the uncertainty of occurrences/nonoccurrences. Motivated by these limitations, this research provides a novel approach based on granular computing (GrC) to discover these three aspects of events at the same places and in periodical form in IF sets, where the *µ* (membership), *γ* (nonmembership), and *π* (uncertainty) values indicate the occurrences (and co-occurrences), nonoccurrences, and the uncertainty of occurrences/nonoccurrences of events, respectively. GrC was used to discover the periodicity in data at various abstraction levels. Moreover, formal concept analysis (FCA) was used to discover the granulation levels and process the granulation measures to understand the IF concepts, where events are indicated as the objects and spatio-temporal occurrences as the attributes of the lattices formed by the formal concepts. The originality of the proposed approach is to discover the periodicity of the spatial–temporal occurrence data of events given in IF sets through GrC and FCA. Moreover, this approach helps predict the occurrence (co-occurrence), nonoccurrence, and uncertainty of occurrence/nonoccurrence of events for the spatial and temporal aspects of data through IF sets. The motivation for the use of IF sets instead of fuzzy sets in this proposed approach is the three-tuple nature of IF sets, which contain the *µ* (membership), *γ* (nonmembership), and *π* (IF set index or indeterminacy, which expresses the degree of uncertainty) values of the elements. Here, *π* is used in the computation of the GrC measures, i.e., IG and COV, that help in the process of decision making.
This paper is organized as follows: Section 2 discusses the related works; Section 3 provides the definitions of IF sets and FCA specifically used in the context of the IF sets data; Section 4 explains the GrC; Section 5 explains the proposed methodology; Section 6 demonstrates the experimental evaluation; Section 7 gives the results and discussion; Section 8 explains the comparison of the proposed approach with existing SOTA (state of the art) approaches; and Section 9 contains the conclusion and future work, followed by the references.

## **2. Related Works**

In the literature, research related to spatio-temporal and periodical occurrences and co-occurrences can be found. The most important task regarding periodical occurrences is to determine the data blocks in the whole dataset from which suitable views can be analyzed. For example, in a dataset of a hundred events, discovering seventy events that always occur on a Sunday may be more interesting than ninety events occurring on weekends. For this type of task, views are determined by selecting the temporal attributes and adjusting the temporal units in a way that creates a temporal zoom operation on the data and discovers the more interesting data blocks in the form of periodical occurrences. Depending on the data and the objective, some data analysis techniques are required to evaluate the data blocks, aiming to discover the periodical co-occurrences of events. Based on the GrC paradigm and FCA, different computational approaches have been proposed to discover spatio-temporal co-occurrences for different purposes. In [1], FCA, as the central tool of the proposed method, is used to combine time-based granulation and three-way decisions to understand the learned granular structures conceptualizing spatio-temporal events. Moreover, GrC has been integrated with FCA as concept learning via GrC [2], granular rule acquisition in decision formal contexts [3], a GrC approach based on FCA in fuzzy datasets [4], and granular transformations and irreducible element judgement [5]. There are two types of granules in FCA: one is the granule formed by the set of objects in a formal concept, and the other is formed by individual objects. Some research studies show that the granules formed by individual objects play a vital role, with a strong correlation with object granules, object concepts [5], and granular concepts [6].
Additionally, there exist many other types of granules in FCA; however, the classification and the criteria for the classification of information granules in FCA are still an open research direction.

Yang et al. [7] explained the sequential approach of three-way GrC through a framework of spatio-temporal multilevel granular structures, described with the temporality of data and the spatiality of parameters. Moreover, in the context of three-way decision approaches, in [8], an IF three-way decision model based on IF sets is proposed to improve the ability to process complex fuzzy incomplete information systems. Zhao et al. [9] proposed a novel spatial–temporal fuzzy information granule (STFIG) model to achieve the multistep forecasting of time series. In [10], optimal route planning of multisource traffic routes based on GrC is put forward. GrC is used with set theory, shadowed sets, rough sets, fuzzy sets, etc.; in each of these set environments, the granules and the granulation processing are defined in different ways, and a tentative approach to find similarities and bridge the gap between these settings is described in [11]. Additionally, IF sets using an FCA algorithm have already been discussed in the literature [12]; for example, in [13], the structure of formal-concept-forming operators is given in the form of the fuzzy dilation and fuzzy erosion operators of bipolar fuzzy mathematical morphology, and in [14], attribute reduction in IF concept lattices is discussed.

This methodology uses the GrC paradigm and FCA with IF datasets as spatio-temporal attributes to realize the granulation or abstraction of data related to the periodical timeslots in temporal attributes of formal contexts, which were formed from the IF datasets; the granules involving spatio-temporal attributes were used to determine the co-occurrences of events with respect to space and time. In addition, the granulation measures of lattices made from the formal contexts of IF sets were discussed, such as information granulation (IG), coverage (COV), specificity (SP), and unique index (Q) value, to evaluate the granule according to its information related to spatio-temporal and periodical co-occurrences.

## **3. Preliminaries**

*3.1. Intuitionistic Fuzzy (IF) Sets*

In [15], the notion of fuzzy sets is given as

$$C' = \{ \langle x, \mu_{C'}(x) \rangle \mid x \in X \},$$

where *µC*′(*x*) ∈ [0, 1] is the membership function of the fuzzy set *C*′. The notion of an IF set [16–18] is given as

$$C = \{ \langle x, \mu_C(x), \gamma_C(x) \rangle \mid x \in X \},$$

where *µC* : *X* → [0, 1] and *γC* : *X* → [0, 1], such that

$$0 \le \mu_C(x) + \gamma_C(x) \le 1.$$

Here, *µC*(*x*), *γC*(*x*) ∈ [0, 1] indicate the degree of membership and the degree of nonmembership of *x* ∈ *C*, respectively. Each fuzzy set in terms of IF sets can be represented as

$$C = \{ \langle x, \mu_{C'}(x), 1 - \mu_{C'}(x) \rangle \mid x \in X \}.$$

In addition to this, an important index associated with each IF set *C* in *X* is given as

$$\pi_C(x) = 1 - \mu_C(x) - \gamma_C(x).$$

Here, *πC*(*x*) is called the "hesitation degree" of *x* ∈ *C*, which indicates the uncertainty or the lack of knowledge of whether *x* ∈ *C* or *x* ∉ *C*. Moreover, it is clear that 0 ≤ *πC*(*x*) ≤ 1, ∀*x* ∈ *X*. This hesitation degree plays an important role in distance [19,20], similarity [20], and entropy [21,22] measures, which are used especially in information processing tasks. Additionally, the hesitation degree also plays a significant role in image processing [23], multicriteria group decision making [24], IF decision trees [25], genetic algorithms [26], and many other settings. In addition, let *C*1, *C*2 ∈ *IF*(*U*); then *C*1 ⊆ *C*2 ⇔ *µC*1(*x*) ≤ *µC*2(*x*) and *γC*1(*x*) ≥ *γC*2(*x*), ∀*x* ∈ *U*. If both *C*1 ⊆ *C*2 and *C*2 ⊆ *C*1, then *C*1 = *C*2. The universe set *U* and the null set ∅ are special IF sets, where *U* = {⟨*x*, 1, 0⟩ | *x* ∈ *U*} and ∅ = {⟨*x*, 0, 1⟩ | *x* ∈ *U*}.
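To make these definitions concrete, the following is a minimal Python sketch (the class and function names are our own, not from the paper) of a single IF evaluation with its hesitation degree, together with the containment test *C*1 ⊆ *C*2 defined above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFValue:
    """One evaluation <mu, gamma> of an element of an intuitionistic fuzzy set."""
    mu: float     # membership degree
    gamma: float  # nonmembership degree

    def __post_init__(self):
        # The defining constraint of an IF value: 0 <= mu + gamma <= 1.
        assert 0.0 <= self.mu <= 1.0 and 0.0 <= self.gamma <= 1.0
        assert self.mu + self.gamma <= 1.0

    @property
    def pi(self) -> float:
        """Hesitation degree: pi = 1 - mu - gamma."""
        return 1.0 - self.mu - self.gamma

def if_subset(c1: dict, c2: dict) -> bool:
    """C1 is a subset of C2 iff mu_C1(x) <= mu_C2(x) and gamma_C1(x) >= gamma_C2(x) for all x."""
    return all(c1[x].mu <= c2[x].mu and c1[x].gamma >= c2[x].gamma for x in c1)

# A crisp "universe" element <x, 1, 0> has no hesitation; <x, 0.3, 0.6> has pi = 0.1.
u = IFValue(1.0, 0.0)
v = IFValue(0.3, 0.6)
```

Applied to two IF sets over the same universe, `if_subset` returns `True` exactly when every element satisfies both inequalities of the subset definition.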

## *3.2. Formal Concept Analysis (FCA)*

The FCA method was proposed in the early 1980s by Rudolf Wille for settings in which a set of objects shares a set of attributes. The foundation of FCA is built on the notions of lattice theory and set theory. The method outputs two sets of data: the first provides the hierarchical relationship of the constructed concepts in the form of a diagram called a "concept lattice"; the second provides the list of the interdependencies among all the attributes in a formal context.

**Definition 1.** *In FCA, the relation K* = (*G*, *M*, *I*) *is called a formal context, where G and M denote the set of objects and set of attributes, respectively. In addition to this, I* ⊆ *G* × *M shows the relationship between G objects (extents) and M attributes (intents). Moreover, the relation* (*g*, *m*) ∈ *I shows that the object g has attribute m, which can also be written as gIm.*

**Definition 2.** *For a subset A* ⊆ *G of objects, the subset of the attributes common to all the objects in A is given as*

$$A \uparrow = \{ m \in M \mid \forall g \in A,\ gIm \}.$$

*Likewise, given a subset B* ⊆ *M of attributes, the subset of objects having all the attributes in set B is given as*

$$B \downarrow = \{ g \in G \mid \forall m \in B,\ gIm \}.$$

**Definition 3** ([26])**.** *A formal concept of a formal context K* = (*G*, *M*, *I*) *is defined as a pair* (*A*, *B*)*, where A* ⊆ *G, B* ⊆ *M, A* ↑ = *B, and B* ↓ = *A; here, A denotes the objects (extent) and B the attributes (intent) of the pair* (*A*, *B*)*. Let* (*A*1, *B*1) *and* (*A*2, *B*2) *be two formal concepts of a formal context K* = (*G*, *M*, *I*)*;* (*A*1, *B*1) *is called a subconcept of* (*A*2, *B*2)*, and* (*A*2, *B*2) *is called a superconcept of* (*A*1, *B*1)*, if the following equivalent conditions are satisfied:*

$$(A_1, B_1) \le (A_2, B_2) \Leftrightarrow A_1 \subseteq A_2 \Leftrightarrow B_2 \subseteq B_1.$$

*The set of all superconcept–subconcept interrelations forms a structure known as a lattice. A lattice is an abstract structure with join (denoted by "*∨*") and meet (denoted by "*∧*") operations. The above ordering, expressed in terms of join and meet, gives*

$$(A_1, B_1) \vee (A_2, B_2) = \big( (A_1 \cup A_2) \uparrow\downarrow,\ B_1 \cap B_2 \big),$$

$$(A_1, B_1) \wedge (A_2, B_2) = \big( A_1 \cap A_2,\ (B_1 \cup B_2) \downarrow\uparrow \big),$$

*where "*∨*" and "*∧*" indicate the supremum and infimum operations, respectively.*

For any *g* ∈ *G*, the pair (*g* ↑↓, *g* ↑) is called the object concept, and for any *m* ∈ *M*, the pair (*m* ↓, *m* ↓↑) is called the attribute concept. In a lattice diagram, the common node where two branches come together corresponds to the join operation "∨" or the meet operation "∧", which interprets the relationship among the concepts, objects, and attributes. The nodes in this diagram express the concepts; the diagram itself is a type of directed acyclic graph. In IF sets, FCA is used for decision making, data analysis, knowledge discovery, and especially for forecasting purposes.
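As a small illustration of the two derivation operators and the concept condition *A* ↑ = *B*, *B* ↓ = *A*, here is a hedged Python sketch over a toy crisp context (the incidence data and function names are invented for illustration):

```python
def up(A, incidence, M):
    """A-up: the attributes shared by every object in A (all of M when A is empty)."""
    attrs = set(M)
    for g in A:
        attrs &= incidence[g]
    return attrs

def down(B, incidence):
    """B-down: the objects that possess every attribute in B."""
    return {g for g, attrs in incidence.items() if set(B) <= attrs}

def is_concept(A, B, incidence, M):
    """(A, B) is a formal concept iff A-up = B and B-down = A."""
    return up(A, incidence, M) == set(B) and down(B, incidence) == set(A)

# Toy context: I maps each object to its set of attributes.
I = {"g1": {"m1", "m2"}, "g2": {"m2", "m3"}, "g3": {"m2"}}
M = {"m1", "m2", "m3"}
```

In this toy context, ({g1, g2, g3}, {m2}) and ({g1}, {m1, m2}) are both formal concepts, and the first is a superconcept of the second.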

**Definition 4.** *Let C*1, *C*<sup>2</sup> ∈ *IF*(*U*) *be the two IF sets, given as*

$$C_1 = \{ \langle x, \mu_{C_1}(x), \gamma_{C_1}(x) \rangle \mid x \in U \}, \qquad C_2 = \{ \langle x, \mu_{C_2}(x), \gamma_{C_2}(x) \rangle \mid x \in U \},$$

*where µC*1(*x*), *γC*1(*x*) : *U* → [0, 1] *and µC*2(*x*), *γC*2(*x*) : *U* → [0, 1]*, such that*

$$0 \le \mu_{C_1}(x) + \gamma_{C_1}(x) \le 1,$$

$$0 \le \mu_{C_2}(x) + \gamma_{C_2}(x) \le 1.$$

*Here, µC*1(*x*), *γC*1(*x*) ∈ [0, 1] *indicate the degree of membership and nonmembership of x* ∈ *C*1*, and µC*2(*x*), *γC*2(*x*) ∈ [0, 1] *indicate the degree of membership and nonmembership of x* ∈ *C*2*, such that* ∀*x* ∈ *U.*

**Definition 5.** *Let C*1, *C*2 ∈ *IF*(*U*) *be the two IF sets given in Definition 4; then, through the FCA algorithm, these two sets are evaluated as*

$$C_{1,2} = \{ \langle x, \min(\mu_{C_1}(x), \mu_{C_2}(x)), \max(\gamma_{C_1}(x), \gamma_{C_2}(x)) \rangle \mid x \in U \}.$$

These are the basic mathematical definitions of FCA and its operations with respect to IF sets; later sections explain them in more detail by means of the GrC approach.
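The pointwise (min *µ*, max *γ*) combination of Definition 5 can be sketched as follows (a minimal illustration; the dictionary layout and function name are assumptions of ours):

```python
def if_meet(c1, c2):
    """Combine two IF sets over the same universe pointwise:
    take the min of the memberships and the max of the nonmemberships."""
    return {x: (min(c1[x][0], c2[x][0]), max(c1[x][1], c2[x][1])) for x in c1}

# Two small IF sets, each mapping an element to its (mu, gamma) pair.
c1 = {"x": (0.7, 0.2), "y": (0.4, 0.5)}
c2 = {"x": (0.5, 0.3), "y": (0.6, 0.1)}
c12 = if_meet(c1, c2)  # {'x': (0.5, 0.3), 'y': (0.4, 0.5)}
```

Note that the result is again a valid IF set: taking the minimum membership and the maximum nonmembership can only decrease *µ* + *γ*, so the constraint *µ* + *γ* ≤ 1 is preserved.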

## **4. Granular Computing (GrC)**

GrC is an emerging field for information processing [27,28] through the basic building blocks of information, named granules. In the data science literature, the granule is defined as the cluster or set of objects extracted or grouped together by similarity, uniformity, proximity, predictability, resemblance, physical adjacency, or functionality. These granules can be represented in interval values, rough sets, neutrosophic sets [29], fuzzy sets [30], IF sets, etc. Moreover, these granules can be partitioned into finer or smaller granules called subgranules. In order to compose and decompose the granules, specific measures called granulation measures are employed.

In this study, the GrC approach is used with FCA by considering IF datasets containing various events as objects with spatio-temporal attributes. Moreover, different GrC measures are used, including IG, COV, SP, and the Q value for the IF datasets. For the first decomposition, the IF dataset is decomposed into different granules, where each granule consists of a set of events as objects with spatio-temporal attributes. In this first decomposition, the IG of each granule is determined, and the granule with the highest IG is selected for the further granulation measures, i.e., COV and SP. For the second decomposition, the granule selected in the first decomposition is further decomposed into subgranules, the IG of each subgranule is computed, the subgranule with the highest IG is selected for the further granulation measures, and so on. This process is repeated until granules/subgranules with interesting granulation measures are obtained: higher COV, lower SP, and a higher Q value.
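The decompose–score–select loop described above can be sketched generically. Everything below is a toy stand-in: the splitting rule and the IG score are placeholders (the `toy_ig` function scores a granule by 1 minus its mean hesitation, which is NOT the paper's |1 − IE| measure):

```python
def drill_down(granule, split, ig, depth):
    """Repeatedly decompose a granule and keep the subgranule with the highest IG."""
    for _ in range(depth):
        subs = split(granule)
        if not subs:
            break
        granule = max(subs, key=ig)  # select the most informative subgranule
    return granule

# Toy stand-ins: a granule is a list of (mu, gamma) pairs.
toy_ig = lambda g: 1.0 - sum(1.0 - mu - ga for mu, ga in g) / len(g)
halves = lambda g: [g[: len(g) // 2], g[len(g) // 2:]] if len(g) > 1 else []

data = [(0.9, 0.05), (0.8, 0.1), (0.3, 0.3), (0.2, 0.4)]
best = drill_down(data, halves, toy_ig, depth=2)  # -> [(0.9, 0.05)]
```

In the paper's methodology, `split` would correspond to the temporal/spatial decomposition of an IF formal context and `ig` to the IG measure of Section 5.3, with COV, SP, and Q evaluated on the selected granule at each step.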

## **5. Proposed Methodology**

*5.1. Periodic Occurrences (Co-Occurrences), Nonoccurrences, and Uncertainty of Occurrences/Nonoccurrences of Events in the Form of IF Datasets*

In real life, an event can be represented by spatio-temporal occurrences and co-occurrences. Based on a specific time unit, different timelines can be assumed for the temporal information related to the occurrences and co-occurrences of events [31]. For example, if the time unit is a day or a month, the timeline is considered based on the day or the month, respectively. A timeslot is a sequence of time units (days or months); if the timeline is based on days, then each day corresponds to a timeslot. Hence, different timelines can provide different temporal granularities.

In the literature, spatial and temporal events data are evaluated through FCA and the GrC paradigm using classical single-attribute value in FCA data [31]. This proposed methodology uses the IF datasets, in which events occur at a certain place (spatial aspect) and time (temporal aspect) with certain membership and nonmembership values.

**Definition 6.** *Let G<sub>i</sub> be the set of objects having the set of attributes M<sub>j</sub>, where i* = 1, 2, 3, · · · *and j* = 1, 2, 3, · · · *denote the number of objects and attributes, respectively, such that each attribute M<sub>j</sub> has IF set values µ<sub>i,j</sub> and γ<sub>i,j</sub> as the membership and nonmembership of the object G<sub>i</sub> in the attribute M<sub>j</sub>, respectively.*

$$M_j = \{ (x, \mu_{i,j}(x), \gamma_{i,j}(x)) \mid \forall x \in M_j \}.$$

**Definition 7.** *Formally, consider an IF formal context K<sub>i,j</sub>* = (*G<sub>i</sub>*, *M<sub>j</sub>*, *I*) *such that G<sub>i</sub>, M<sub>j</sub>, and I indicate the objects, attributes (given in Definition 6), and the relation between the objects and the attributes, respectively, as shown in Table 1, where*

$$G_i = \{ G_1, G_2, G_3, \cdots \}, \qquad M_j = \{ M_1, M_2, M_3, \cdots \}.$$

**Definition 8.** *Given a subset G<sub>i</sub>* ⊆ *G of objects, the subset of the attributes common to all the objects in G<sub>i</sub> is given as*

$$G_i \uparrow = \{ m \in M \mid \forall g \in G_i,\ gIm \}.$$

*Likewise, given a subset M<sup>j</sup>* ⊆ *M of attributes, the subset of objects having all the attributes in set M<sup>j</sup> is given as*

$$M_j \downarrow = \{ g \in G \mid \forall m \in M_j,\ gIm \}.$$

**Definition 9.** *According to FCA, an IF formal concept of a formal context K<sub>i,j</sub>* = (*G<sub>i</sub>*, *M<sub>j</sub>*, *I*) *is a pair* (*G<sub>i</sub>*, *M<sub>j</sub>*)*, where G<sub>i</sub>* ⊆ *G, M<sub>j</sub>* ⊆ *M, G<sub>i</sub>* ↑ = *M<sub>j</sub>, and M<sub>j</sub>* ↓ = *G<sub>i</sub>; here, G<sub>i</sub> denotes the objects (extent) and M<sub>j</sub> the attributes (intent) of the pair* (*G<sub>i</sub>*, *M<sub>j</sub>*)*.*

**Definition 10.** *Let the IF concept lattice L<sub>i,j</sub>* = (*G<sub>i</sub>*, *M<sub>j</sub>*, *I*) *be constructed from all the IF formal concepts of K<sub>i,j</sub>* = (*G<sub>i</sub>*, *M<sub>j</sub>*, *I*)*, and let* (G1, M1) *and* (G2, M2) *be two IF formal concepts of the IF formal context K<sub>i,j</sub>* = (*G<sub>i</sub>*, *M<sub>j</sub>*, *I*)*;* (G1, M1) *is called a subconcept of* (G2, M2)*, and* (G2, M2) *is called a superconcept of* (G1, M1)*, if the following equivalent conditions are satisfied:*

$$(G_1, M_1) \le (G_2, M_2) \Leftrightarrow G_1 \subseteq G_2 \Leftrightarrow M_2 \subseteq M_1.$$

**Definition 11.** *The set of all the IF superconcept and the subconcept interrelations construct a lattice. The lattice is an abstract structure with join (denoted by "*∨*") and meet (denoted by "*∧*") operations. Hence, the above expression of the IF superconcept and subconcept in this definition, in the form of join and meet, is*

$$(G_1, M_1) \vee (G_2, M_2) = \big( (G_1 \cup G_2) \uparrow\downarrow,\ M_1 \cap M_2 \big),$$

$$(G_1, M_1) \wedge (G_2, M_2) = \big( G_1 \cap G_2,\ (M_1 \cup M_2) \downarrow\uparrow \big).$$

*In this mathematical form, "*∨*" and "*∧*" indicate the supremum and infimum operations of IF formal concepts, respectively.*

**Definition 12.** *The IF formal concept of a given set of objects G<sub>i</sub> with attributes M<sub>j</sub> having the IF values µ<sub>i,j</sub>, γ<sub>i,j</sub>* ∈ [0, 1] *in the formal context K<sub>i,j</sub>* = (*G<sub>i</sub>*, *M<sub>j</sub>*, *I*) *is evaluated as*

$$\left( \min(\mu_{i,j}),\ \max(\gamma_{i,j}) \right),$$

*where i* ∈ *G*, *j* ∈ *M.*

**Example 1.** *Let the IF formal concept for* G<sup>1</sup> *and* G<sup>2</sup> *objects having M<sup>j</sup>* (*j* = 1, 2, 3, · · ·) *attributes (given in Table 1) be computed as*

$$G_{12} = \left[ \left( \min(\mu_{1,1}, \mu_{2,1}), \max(\gamma_{1,1}, \gamma_{2,1}) \right), \left( \min(\mu_{1,2}, \mu_{2,2}), \max(\gamma_{1,2}, \gamma_{2,2}) \right), \left( \min(\mu_{1,3}, \mu_{2,3}), \max(\gamma_{1,3}, \gamma_{2,3}) \right), \cdots, \left( \min(\mu_{1,j}, \mu_{2,j}), \max(\gamma_{1,j}, \gamma_{2,j}) \right) \right].$$
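The computation in Example 1 — take, per attribute, the minimum of the memberships and the maximum of the nonmemberships over the selected objects — can be sketched as follows (the table layout and all names are invented for illustration, in the style of Table 1):

```python
def if_concept_intent(table, objects):
    """Attribute-wise (min of mu, max of gamma) over a set of objects.
    table: {object: {attribute: (mu, gamma)}}."""
    attrs = table[objects[0]].keys()
    return {
        a: (min(table[g][a][0] for g in objects),
            max(table[g][a][1] for g in objects))
        for a in attrs
    }

# Hypothetical slice of a Table-1-style IF formal context.
table = {
    "G1": {"M1": (0.7, 0.2), "M2": (0.5, 0.4)},
    "G2": {"M1": (0.6, 0.3), "M2": (0.8, 0.1)},
}
g12 = if_concept_intent(table, ["G1", "G2"])  # {'M1': (0.6, 0.3), 'M2': (0.5, 0.4)}
```

The result is the IF intent shared by G1 and G2, exactly the pairwise pattern written out symbolically in Example 1.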

In this proposed methodology, the objects *G<sub>i</sub>* indicate the events, and the attributes *M<sub>j</sub>* indicate the occurrences of those events at a certain place and time, with certain membership *µ* (occurrence/co-occurrence), nonmembership *γ* (nonoccurrence), and uncertainty *π* (uncertainty of occurrence/nonoccurrence) values provided in the IF datasets. For example, in Table 1, let G1 be one of the events, M1 and M2 be two places, and M3, · · · , *M<sub>j</sub>* be the times at which an event has occurred, with some membership *µ* of occurrence and nonmembership *γ* of nonoccurrence; then, it can be said that the event G1 has occurred at the places M1 and M2 at the M3, · · · , *M<sub>j</sub>* different times, with *µ* happening and *γ* not happening values of the events.


**Table 1.** Objects having attributes in the form of IF sets.

**Example 2.** *Let* M3 *be the one-year temporal attribute showing the events occurring in the* M3 *year. For the temporal granulation of the* M<sup>3</sup> *year attribute, let Q*1, *Q*2, *Q*3, *and Q*<sup>4</sup> *be the four quarters, indicating data for January, February, and March; April, May, and June; July, August, and September; and October, November, and December, given that each month's data are a basic granule. Hence, for the first decomposition, there will be four granules containing data for events occurring in the four quarters of the year. For example, E*<sup>1</sup> *event's data in the Q*<sup>1</sup> *quarter of the* M3 *year in the form of IF sets is given as* (0.3, 0.6)*, where µ* = 0.3*, (membership) indicates the E*<sup>1</sup> *event's occurrence (co-occurrence) and γ* = 0.6 *(nonmembership) indicates the E*<sup>1</sup> *event's nonoccurrence. Moreover, π* = 0.1 *(IF set index or indeterminacy) indicates the E*<sup>1</sup> *event's uncertainty of occurrence/nonoccurrence, which is used to compute the IG later in this section.*
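A minimal sketch of the quarterly granulation in Example 2 follows (the month labels and data layout are assumptions of ours; each granule collects the three monthly basic granules of one quarter):

```python
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def quarterly_granules(monthly):
    """monthly: {month: (mu, gamma)} -> the four granules Q1..Q4,
    each a {month: (mu, gamma)} dict covering three consecutive months."""
    return [{m: monthly[m] for m in MONTHS[q * 3:(q + 1) * 3]} for q in range(4)]

# E1's monthly IF data over one year (invented values; the Q1 months carry
# the (0.3, 0.6) pair from Example 2, so pi = 0.1 there).
e1 = {m: (0.3, 0.6) if m in MONTHS[:3] else (0.5, 0.4) for m in MONTHS}
q1, q2, q3, q4 = quarterly_granules(e1)
```

A finer timeline (e.g., weeks) would simply replace `MONTHS` and the slice width, giving a different level of temporal granularity for the same events.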

Existing approaches only work on the periodical occurrences and co-occurrences of events using the GrC paradigm and FCA by considering single-value attributes for formal concepts. However, in the proposed approach, three aspects of the phenomenon of events are considered: event occurrence (co-occurrence), nonoccurrence, and the uncertainty of occurrence/nonoccurrence using GrC and the FCA algorithm by considering the IF datasets. Furthermore, the events data are represented in the form of three-tuple IF datasets as *µ* (membership), *γ* (nonmembership), and *π* (IF set index or indeterminacy), indicating the event occurrence (co-occurrence), nonoccurrence, and the uncertainty of occurrence/nonoccurrence, respectively. This timed granulation of occurring event data is further explained and analyzed for the knowledge discovery in Section 6.

Here, the IF datasets (containing the objects and attributes relationship) are divided into multiple parts, and each part is considered as the IF granule. Moreover, the lattice of each IF granule is designed for the data analysis using FCA and IF granulation measures.

## *5.2. Computation of an IF Granule*

In [32], fuzzy information granules and the hierarchical structures of IF rough sets from the viewpoint of GrC are presented. In addition to this, FCA is also widely used in IF sets, such as the research study in [33], which mainly focuses on the FCA in an IF formal context. Moreover, in [33], the primitive notions in concept lattice theory are also extended to the IF environment. In this research, the idea of IF granule evaluation is performed by calculating *IG*, *COV*, *SP*, and the *Q* value of the IF concept lattice, where each concept lattice is treated as an individual granule.

## *5.3. Information Granulation (IG)*

IG (*IG*) : *IG* = |1 − *IE*| provides the information on the granule within the lattice by taking into account the extensional parts (objects) included in the granule [31]. Information entropy (IE) is an important measure for evaluating the uncertainty in data [34,35], which is why the term |1 − *IE*| gives the total IG obtained from the data granule or the concept lattice. According to Shannon's theory, IE is the key information measure in data analysis. Based on the IF sets, different types of IE measures may be needed, depending upon the evaluation. In [36], the authors introduce IE into the field of FCA to quantify the weight of the concepts' intent. A type of nonprobabilistic entropy measure for IF sets is proposed in [37]. There, the entropy measure results from the geometric interpretation of IF sets and uses the ratio of distances between them, defined in terms of the ratio of the cardinalities of the IF sets *F* ∩ *F <sup>c</sup>* and *F* ∪ *F <sup>c</sup>*, where *F <sup>c</sup>* is the complement of the IF set *F*. Two methods to determine the attribute weights are proposed in [38]: the first applies when the information regarding the attributes is completely unknown, and the second when partial information about the attribute weights is known. Moreover, in [38], the identification of attribute weights based on IF entropy is offered in the context of IF sets. In the literature, every type of uncertainty measure, such as Shannon information entropy, information granularity, rough entropy, and IE, is called by a common name: information granularity. The distance-based information granularity for IF and multigranulation IF granular spaces is presented in [39]; moreover, the authors use this distance-based information granularity to construct a novel hierarchical structure on such spaces.
In [40], the authors compute the information granularity by taking into account the number of objects (extensional parts) included in the granule; hence, in this study, IG and IE provide the framework to evaluate the set of granulation. Let *K* = (*G*, *M*, *I*) be the IF formal context of IF granule and *L* = (*G*, *M*, *I*) be its corresponding lattice. The first granulation measure for the designed lattice of IF formal context is given as

$$IG(L) = \frac{1}{G} \sum \left[ \frac{1}{n} \sum\_{j=1}^{n} \left(1 - \left( \gamma\_j + \frac{\pi\_j}{2} \right)\right) \right],\tag{1}$$

where "*G*" is the number of objects involved in the IF granule, "*n*" is the number of attributes of each object, *j* = 1, 2, 3, · · · indexes the attributes, and "*γj*" and "*πj*" are the nonmembership and hesitancy degrees of the "*j*th" attribute. For the different IF formal contexts from the IF datasets, *K<sup>x</sup>* = (*Gx*, *M*, *Ix*) and *L<sup>x</sup>* = (*Gx*, *M*, *Ix*), where *K<sup>x</sup>* indicates the formal contexts, *L<sup>x</sup>* indicates their corresponding lattices, and *x* = 1, 2, 3, · · · indexes the formal contexts and their lattices. If the *IG* of lattice *L*<sup>1</sup> = (*G*1, *M*, *I*1) is greater than that of *L*<sup>2</sup> = (*G*2, *M*, *I*2), then the *K*<sup>1</sup> formal context contains more IG and is more interesting with respect to providing spatio-temporal information from the IF GrC perspective.

Let *E*1, *E*2, *E*3, and *E*<sup>4</sup> be the four events as objects; *Place*1, *Place*2, *Place*3, and *Place*<sup>4</sup> be the four spatial attributes; and *Q*<sup>1</sup> and *Q*<sup>2</sup> be the two parts of one-year data, such that *Q*<sup>1</sup> consists of Jan, Feb, Mar, Apr, May, and June and *Q*<sup>2</sup> consists of July, Aug, Sep, Oct, Nov, and Dec temporal attribute data in the form of IF sets, as given in Table 2. Furthermore, let the events *E*1, *E*2, *E*3, and *E*<sup>4</sup> occur at the given spatiality, with *Q*<sup>1</sup> temporality in the *K*<sup>1</sup> formal context and with *Q*<sup>2</sup> temporality in the *K*<sup>2</sup> formal context.

**Table 2.** Four Events as Objects with Four Spatial and Two Temporal Attributes Data.


Hence, the *IG* of *K*<sup>1</sup> and *K*<sup>2</sup> formal contexts is given as

$$\begin{aligned}
IG(K\_1) = \frac{1}{4}\bigg[&\frac{1}{5}\Big(\big(1-\big(0.1+\tfrac{0}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.7+\tfrac{0}{2}\big)\big)+\big(1-\big(0.1+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.6+\tfrac{0.1}{2}\big)\big)\Big) \\
{}+{}&\frac{1}{5}\Big(\big(1-\big(0.5+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.5+\tfrac{0}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0}{2}\big)\big)+\big(1-\big(0.5+\tfrac{0.3}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0.1}{2}\big)\big)\Big) \\
{}+{}&\frac{1}{5}\Big(\big(1-\big(0.2+\tfrac{0}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.1+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.7+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.6+\tfrac{0}{2}\big)\big)\Big) \\
{}+{}&\frac{1}{5}\Big(\big(1-\big(0.6+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.6+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.3+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.6+\tfrac{0.3}{2}\big)\big)+\big(1-\big(0.8+\tfrac{0}{2}\big)\big)\Big)\bigg],
\end{aligned}$$

$$IG(K\_1) = 0.53,$$

$$\begin{aligned}
IG(K\_2) = \frac{1}{4}\bigg[&\frac{1}{5}\Big(\big(1-\big(0.1+\tfrac{0}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.7+\tfrac{0}{2}\big)\big)+\big(1-\big(0.1+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0+\tfrac{0.1}{2}\big)\big)\Big) \\
{}+{}&\frac{1}{5}\Big(\big(1-\big(0.5+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.5+\tfrac{0}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0}{2}\big)\big)+\big(1-\big(0.5+\tfrac{0.3}{2}\big)\big)+\big(1-\big(0.1+\tfrac{0.1}{2}\big)\big)\Big) \\
{}+{}&\frac{1}{5}\Big(\big(1-\big(0.2+\tfrac{0}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.1+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.7+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.8+\tfrac{0.1}{2}\big)\big)\Big) \\
{}+{}&\frac{1}{5}\Big(\big(1-\big(0.6+\tfrac{0.2}{2}\big)\big)+\big(1-\big(0.6+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.3+\tfrac{0.1}{2}\big)\big)+\big(1-\big(0.6+\tfrac{0.3}{2}\big)\big)+\big(1-\big(0.2+\tfrac{0.1}{2}\big)\big)\Big)\bigg],
\end{aligned}$$

$$IG(K\_2) = 0.58.$$

Hence, the *IG* of the *K*<sup>2</sup> IF formal context is greater than the *IG* of the *K*<sup>1</sup> IF formal context, implying that the events with the given spatial and *Q*<sup>2</sup> temporal attributes are more interesting with respect to providing more spatio-temporal information in the periodical IF GrC perspective. Moreover, for the further process, the *K*<sup>2</sup> IF formal context is selected for the computation of granulation measures, which is discussed in Section 6.
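The worked calculation above can be reproduced mechanically. The sketch below implements Equation (1) over per-object lists of (*γ*, *π*) pairs transcribed from the calculation; the function and variable names are illustrative, not from the paper:

```python
def ig(context):
    """Equation (1): IG of a context/lattice, given per-object lists of
    (gamma, pi) = (nonmembership, hesitancy) attribute pairs."""
    per_object = [
        sum(1 - (g + p / 2) for g, p in row) / len(row) for row in context
    ]
    return sum(per_object) / len(per_object)

# (gamma, pi) pairs for the four events: the first four attributes are the
# spatial places, the last one is the Q1 (resp. Q2) temporal attribute.
K1 = [
    [(0.1, 0.0), (0.2, 0.2), (0.7, 0.0), (0.1, 0.1), (0.6, 0.1)],
    [(0.5, 0.2), (0.5, 0.0), (0.2, 0.0), (0.5, 0.3), (0.2, 0.1)],
    [(0.2, 0.0), (0.2, 0.2), (0.1, 0.2), (0.7, 0.1), (0.6, 0.0)],
    [(0.6, 0.2), (0.6, 0.1), (0.3, 0.1), (0.6, 0.3), (0.8, 0.0)],
]
# K2 shares the spatial pairs and swaps in the Q2 temporal pairs.
K2 = [row[:4] + [t] for row, t in zip(
    K1, [(0.0, 0.1), (0.1, 0.1), (0.8, 0.1), (0.2, 0.1)])]

print(round(ig(K1), 2), round(ig(K2), 2))  # -> 0.53 0.58
```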

## *5.4. Granular Computing Measures for the Interestingness Level of IF Lattice*

In the literature, there are various proposed granular measures based on FCA which identify the interestingness level of the granule. The GrC and FCA measures defined in [41] and [42], respectively, include COV, SP, stability, robustness, probability, separation, etc. The most important granular measures are COV and SP, which are used in the GrC approach based on FCA. In this study, COV, SP, and Q value are used to analyze the interestingness level of the IF lattice.

## *5.5. Coverage (COV)*

COV is the most important granulation measure to evaluate the granule within the spatial, temporal, or spatio-temporal granulation perspective [31]. COV indicates how well a data granule represents, or covers, the given data. The main objective of calculating the COV in this study is to find the COV of the IF lattice granule's data objects, which contains the interesting information. Generally, the more data objects a granule covers, the higher the COV of the interesting information granule. In [43], the concept of COV with invariability and its interconnections are analyzed from the viewpoint of the algebraic properties of a fuzzy system, including membership function, inclusion, union and intersection, and support and fuzzy relation. Depending on the nature of the granule, the definition of COV can be properly expressed, as in [44], where the concept of COV is defined from the fuzzy perspective of GrC. Here, the COV for the IF concept lattice objects using membership values in the perspective of the GrC approach is computed as

$$\text{COV}(\text{C}) = \left[ \left( \frac{D}{G} \times \frac{1}{N} \sum\_{i=1}^{N} \text{C} \left( \mathbf{x}\_{\mu\_{j}} \right) \right) + \frac{\pi\_{j}}{2} \right],\tag{2}$$

where "*N*" is the number of elements in the IF concept lattice granule *C*; *µ<sup>j</sup>*, *j* = 1, 2, 3, · · ·, are the membership values; and *π<sup>j</sup>* is the hesitation degree of each attribute involved in the granule. Here, *D* is the number of involved objects, and *G* indicates the total number of objects in the granule. In Equation (2), the term (*π<sup>j</sup>*/2) is used because the uncertainty can count toward either the membership or the nonmembership of the IF set value.

The motivation behind the use of Equation (2) is the computation of COV for the formal concept granule containing the event's spatio-temporal information in the form of IF sets. The COV for the IF formal concept granule *C* combines the fraction of involved objects (events) *D*/*G* [31] with the average of the membership grades (1/*N*) ∑<sup>*N*</sup><sub>*i*=1</sub> *C*(*x<sub>µj</sub>*) in the IF formal concept [44]. Additionally, the term *π<sup>j</sup>*/2 assigns the membership share of the *π* degree of indeterminacy. The illustration of computing the COV is given in Example 3.

**Example 3.** *Let X* = {*x*1, *x*2, *x*3, *x*4, *x*5}*, and let C̃*(*x*) *be the IF formal concept of the IF formal context consisting of the E*<sup>1</sup> *event as the involved object with Q*<sup>1</sup> *temporal data given in Table 2, such that*

$$\bar{\mathcal{C}}(\mathbf{x}) = \{ (0.9, 0.1, \mathbf{x}\_1), (0.6, 0.2, \mathbf{x}\_2), (0.3, 0.7, \mathbf{x}\_3), (0.8, 0.1, \mathbf{x}\_4), (0.3, 0.6, \mathbf{x}\_5) \}.$$

*Let D* = 1 *and G* = 4*, because one object, i.e., E*1*, is involved in the C̃ IF formal concept, and the total number of objects is four, i.e., E*1, *E*2, *E*3, *and E*4*, respectively. Moreover, N* = 5 *is the number of IF attributes in the data, and π<sup>j</sup> is the total degree of indeterminacy over all the attributes of the C̃ IF formal concept, i.e., π<sup>j</sup>* = 0.4*.*

$$COV(\mathbb{C}) = \left[ \left( \frac{1}{4} \times \frac{1}{5} (0.9 + 0.6 + 0.3 + 0.8 + 0.3) \right) + \frac{0.4}{2} \right] = 0.345.$$
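Example 3 can be checked with a short sketch of Equation (2); the function name `cov` and the summing of the total indeterminacy follow our reading of the example, not a published implementation:

```python
def cov(mus, pis, D, G):
    """Equation (2): coverage of an IF concept granule.
    mus: membership values of the concept's attributes,
    pis: their indeterminacy degrees, D/G: involved vs. total objects."""
    return (D / G) * (sum(mus) / len(mus)) + sum(pis) / 2

# E1's row with Q1 temporal data (Example 3):
mus = [0.9, 0.6, 0.3, 0.8, 0.3]
gammas = [0.1, 0.2, 0.7, 0.1, 0.6]
pis = [1 - m - g for m, g in zip(mus, gammas)]   # total pi_j = 0.4
print(round(cov(mus, pis, D=1, G=4), 3))          # -> 0.345
```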

## *5.6. Specificity (SP)*

The SP measure is the fundamental granulation measure used to find the abstraction level, i.e., how precise or specific a granule is, in GrC. SP's role in IF sets is similar to the role of entropy in probability theory, as entropy estimates the probability of the specific event under consideration, which encapsulates the information about the underlying probability distribution. The author of [45] states that in expert- and knowledge-based systems, SP plays a fundamental role in determining the usefulness of the information provided by these systems: the more specific the information provided, the more useful it is. For example, consider a system that predicts tornado occurrences in different states at different times and that, in most cases, correctly predicts the tornado's occurrence in both the spatial and temporal perspectives. This system will not be of much use if it does not determine which precautionary measures should be taken in particular states at a particular time. This scenario points to a very important uncertainty principle of information theory, called the specificity–correctness trade-off.

An important idea to note is that the higher the SP, the lower the granule level of abstraction. In this study, the concept of SP is used for the spatio-temporality (two perspectives) of the IF concept lattice granule measure by using the *len*(*d*) and *range* concepts. As explained in [31], *len*(*d*) and *range* indicate the length of the involved temporal slot and the sum of the lengths of all temporal slots, respectively. According to refs. [31,45], SP is measured as follows:

$$SP(\mathbb{C}) = \left[1 - \frac{len(d)}{range}\right] \times \left[\alpha - \frac{1}{n-1} \sum\_{x \in X,\, x \neq x\_m} G(x)\right].\tag{3}$$

Here, the IF set's concept lattice is considered. Let *X* = {*x*1, *x*2, *x*3, · · · , *xn*} be the set of attributes, and let *C* be the IF set with the (*C* <sup>+</sup>(*x*), *C* <sup>−</sup>(*x*)) membership and nonmembership IF ordered pair. In Equation (3), *α* = *Maxx*[*C* <sup>+</sup>(*x*)]; assuming that this maximum occurs at *x<sup>m</sup>*, *G*(*x*) = *α* ∧ (1 − *C* <sup>−</sup>(*x*)) is calculated for all *x* ≠ *x<sup>m</sup>* to compute the SP of the IF set *C* [45]. The illustration of calculating SP is given in Example 4.

**Example 4.** *Let X* = {*x*1, *x*2, *x*3, *x*4, *x*5}*, and let C̃*(*x*) *be the IF formal concept, such that*

$$\bar{\mathcal{C}}(\mathbf{x}) = \{ (0.9, 0.1, \mathbf{x}\_1), (0.6, 0.2, \mathbf{x}\_2), (0.3, 0.7, \mathbf{x}\_3), (0.8, 0.1, \mathbf{x}\_4), (0.3, 0.6, \mathbf{x}\_5) \}.$$

*Here, the value α* = 0.9 *occurs at x*1*; then, G*(*x*) = *α* ∧ (1 − *C* <sup>−</sup>(*x*)) *is computed for x* ≠ *x*<sup>1</sup> *as*

$$\begin{aligned}
G(x\_2) &= 0.9 \wedge (1 - 0.2) = 0.8, & G(x\_3) &= 0.9 \wedge (1 - 0.7) = 0.3, \\
G(x\_4) &= 0.9 \wedge (1 - 0.1) = 0.9, & G(x\_5) &= 0.9 \wedge (1 - 0.6) = 0.4.
\end{aligned}$$

For example, C̃(*x*) is one of the IF formal concepts of the IF formal context with *Q*<sup>1</sup> temporal data given in Table 2; then, *len*(*d*) = 6 and *range* = 12.

$$SP(\widetilde{C}(\mathbf{x})) = \left[1 - \frac{6}{12}\right] \times \left[0.9 - \frac{1}{5 - 1}(0.8 + 0.3 + 0.9 + 0.4)\right] = 0.15.$$

The SP of individual IF concept lattice granules is calculated in Section 6.
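Example 4 can likewise be reproduced; the sketch below implements Equation (3) under the stated reading (the attribute attaining the maximum *α* is excluded from the averaged *G*(*x*) terms), with illustrative names:

```python
def sp(if_pairs, len_d, rng):
    """Equation (3): specificity of an IF concept granule.
    if_pairs: (mu, gamma) pairs; len_d / rng: temporal-slot lengths."""
    mus = [m for m, _ in if_pairs]
    alpha = max(mus)
    m_idx = mus.index(alpha)                   # position x_m of the maximum
    others = [min(alpha, 1 - g)                # G(x) = alpha AND (1 - C^-(x))
              for i, (_, g) in enumerate(if_pairs) if i != m_idx]
    return (1 - len_d / rng) * (alpha - sum(others) / (len(if_pairs) - 1))

# The concept of Example 4, with len(d) = 6 and range = 12:
c = [(0.9, 0.1), (0.6, 0.2), (0.3, 0.7), (0.8, 0.1), (0.3, 0.6)]
print(round(sp(c, len_d=6, rng=12), 2))        # -> 0.15
```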

## *5.7. Unique Index (Q) Value*

In ref. [31], the authors define the aggregation of COV and SP as the *Q* value. In the *Q* value, COV(C) measures how well the objects represent (cover) the IF concept lattice granule; on the other hand, SP(C) indicates the specificity of the IF concept lattice granule in the perspectives of the spatial and temporal attributes using the GrC approach. The mathematical measure to compute the Q value is given as

$$Q(\mathbb{C}) = COV(\mathbb{C}) \times (SP(\mathbb{C}))^{\zeta} \tag{4}$$

Here, the exponent on SP, "*ζ*", weights the SP aspect. It reflects the change in the partition level of the data; the higher the value of "*ζ*", the more important the SP aspect. The role of "*ζ*" becomes clearer in the experimental analysis. In ref. [31], the authors also propose the average Q value of data granules; here, the average Q value of the IF concept lattice granules can be computed as follows:

$$\overline{Q}(L) = \sum\_{(A,B)\in L} \frac{Q(\mathbb{C})}{n} \tag{5}$$

In this expression, the IF concept lattice granule *C* corresponds to a formal concept (*A*, *B*), where *A* is the object or set of objects and *B* contains the attributes in the form of membership and nonmembership values of the IF set, and *n* is the number of formal concepts in the lattice *L*.
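A minimal sketch of Equations (4) and (5), fed with the COV and SP values obtained in Examples 3 and 4 (the function names are illustrative, not from the paper):

```python
def q_value(cov_c, sp_c, zeta=1.0):
    """Equation (4): Q(C) = COV(C) * SP(C)**zeta."""
    return cov_c * sp_c ** zeta

def mean_q(q_values):
    """Equation (5): average Q over the n concepts of a lattice."""
    return sum(q_values) / len(q_values)

# With COV = 0.345 (Example 3), SP = 0.15 (Example 4), and zeta = 1:
q = q_value(0.345, 0.15)   # ~ 0.05175
```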

To assess different hierarchical levels of data, granulation measures can be compared by checking which granulation level provides more interesting results. To assess the hierarchical levels, a particular attribute is decomposed to check whether the data granulation provides improved results over the previous ones. Here, the focus is on spatial and temporal attributes. Suppose that the temporal attribute *T* is decomposed into *n* attributes {*T*1, *T*2, *T*3, · · · , *Tn*}; it can then be determined through the granulation measures which temporal decomposition provides more interesting results. The formal context related to the *T* temporal attribute is denoted by *K* = (*G*, *M*, *I*), while that related to the decomposed temporal attributes *T*1, *T*2, *T*3, · · · , *Tn* is given by *K*′ = (*G*′, *M*′, *I*′). Moreover, the granulation measures are expressed for the different hierarchical levels accordingly. With this, the COV for different granularity levels can be shown as

$$COV(\mathbb{C}) \ge COV(\mathbb{C}') \tag{6}$$

In addition to this, the SP for the different granular levels can also hold the following statement:

$$SP(\mathbb{C}) \le SP(\mathbb{C}^\prime) \tag{7}$$

To check the interestingness of the granularity level for a particular timeslot, the average Q value [31] can be computed as

$$\overline{Q}(T) = \sum \frac{Q(\mathbb{C})}{n\_T} \tag{8}$$

where *n<sup>T</sup>* is the cardinality of the set of IF formal concepts having the *T* temporal attribute. The granulation obtained through the decomposition of the temporal attribute may lead to better results, such that

$$Q(\mathbb{C}) \ge Q(\mathbb{C}') \tag{9}$$

In this way, the level of interestingness is assessed across different hierarchical granule levels, the granule with the greater *Q*(*C*) value being the more suitable one in terms of interestingness.
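The comparison rule can be sketched as follows; the Q values here are made-up numbers, purely to illustrate comparing two granularity levels by their average Q value (Equation (8)):

```python
# Hypothetical illustration: compare a coarse level T against its
# decomposition T1 by average Q value. The numbers are invented.
def mean_q(qs):
    return sum(qs) / len(qs)

q_T  = [0.05, 0.08, 0.04]        # Q(C) of concepts at the coarse level T
q_T1 = [0.09, 0.11, 0.06, 0.07]  # Q(C) after decomposing T into T1

finer_is_better = mean_q(q_T1) > mean_q(q_T)   # True for these numbers
```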

## **6. Experimental Evaluation**

In this section, the experimental analysis of the proposed IF concept lattice granule through the GrC methodology is discussed. The objective of this study is to analyze the spatio-temporal perspectives of the IF granule. The results may be used to predict the spatiality and periodicity of the information granule, particularly when the data are provided as IF sets. The datasets used in this experiment consist of four activity records providing information related to the spatiality and temporality of the activities executed by a specific actor or user. Here, the activities are indicated as four events, *E*1, *E*2, *E*3, and *E*4; four places, *Place*1, *Place*2, *Place*3, and *Place*4, denoting spatiality; and four quarters, *Q*1, *Q*2, *Q*3, and *Q*4, of the year, denoting temporality, where the events indicate objects, and the places and quarters of the year indicate the attributes. The main focus of this experiment is the periodical granulation of the IF concept lattice granules. There may be hundreds of events indicating object occurrences at different spatio-temporal attributes, but here, four events as objects and four spatial and four temporal attributes are considered for the experimental analysis, as presented in Table 3. In the temporal perspective of the attributes, the annual periodicity of time granulation is decomposed into the four quarters *Q*1, *Q*2, *Q*3, and *Q*4, where these timed granulation quarters consist of Jan, Feb, and Mar; Apr, May, and June; July, Aug, and Sep; and Oct, Nov, and Dec, respectively. Additionally, the GrC approach is performed by considering the periodicity of the temporal attribute, in which the first decomposition of periodicity is set to months.


**Table 3.** Four Events as Objects with Four Spatial and Four Temporal Attributes Data.

The IF concepts of the given four objects with spatial attributes in the *Q*<sup>1</sup> quarter of the time granule are (1, *C̃*<sup>1</sup><sub>1</sub>), (2, *C̃*<sup>1</sup><sub>2</sub>), (3, *C̃*<sup>1</sup><sub>3</sub>), (12, *C̃*<sup>1</sup><sub>4</sub>), (13, *C̃*<sup>1</sup><sub>5</sub>), (23, *C̃*<sup>1</sup><sub>6</sub>), (24, *C̃*<sup>1</sup><sub>7</sub>), (123, *C̃*<sup>1</sup><sub>8</sub>), (124, *C̃*<sup>1</sup><sub>9</sub>), (234, *C̃*<sup>1</sup><sub>10</sub>), (*U*, *C̃*<sup>1</sup><sub>11</sub>), and (∅, *C̃*<sup>1</sup><sub>∅</sub>), where:

$$\begin{aligned}
\widetilde{C}^{1}\_{1} &= \{(0.9, 0.1), (0.6, 0.2), (0.3, 0.7), (0.8, 0.1), (0.3, 0.6)\}, \\
\widetilde{C}^{1}\_{2} &= \{(0.3, 0.5), (0.5, 0.5), (0.8, 0.2), (0.2, 0.5), (0.7, 0.2)\}, \\
\widetilde{C}^{1}\_{3} &= \{(0.8, 0.2), (0.6, 0.2), (0.7, 0.1), (0.2, 0.7), (0.4, 0.6)\}, \\
\widetilde{C}^{1}\_{4} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.5), (0.3, 0.6)\}, \\
\widetilde{C}^{1}\_{5} &= \{(0.8, 0.2), (0.6, 0.2), (0.3, 0.7), (0.2, 0.7), (0.3, 0.6)\}, \\
\widetilde{C}^{1}\_{6} &= \{(0.3, 0.5), (0.5, 0.5), (0.7, 0.2), (0.2, 0.7), (0.4, 0.6)\}, \\
\widetilde{C}^{1}\_{7} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.6), (0.2, 0.8)\}, \\
\widetilde{C}^{1}\_{8} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.7), (0.3, 0.6)\}, \\
\widetilde{C}^{1}\_{9} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.6), (0.2, 0.8)\}, \\
\widetilde{C}^{1}\_{10} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.7), (0.2, 0.8)\}, \\
\widetilde{C}^{1}\_{11} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.7), (0.2, 0.8)\}, \\
\widetilde{C}^{1}\_{\emptyset} &= \{(1, 0), (1, 0), (1, 0), (1, 0), (1, 0)\}
\end{aligned}$$

The IF values of the IF formal concepts are evaluated according to the expression (*min*(*µ*<sub>i,j</sub>), *max*(*γ*<sub>i,j</sub>)) given in Definition 12. The IF concept's lattice design of the given four objects with spatio-temporal attributes in the *Q*<sup>1</sup> quarter of the time granule is given in Figure 1.
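The min/max rule of Definition 12 can be illustrated directly: the intent of a multi-object concept is the elementwise (*min µ*, *max γ*) of its objects' attribute rows. The sketch below reproduces *C̃*<sup>1</sup><sub>5</sub>, the concept of objects {1, 3}, from the *C̃*<sup>1</sup><sub>1</sub> and *C̃*<sup>1</sup><sub>3</sub> rows above (the function name is illustrative):

```python
def if_intent(rows):
    """Elementwise (min mu, max gamma) over the objects' IF attribute rows."""
    return [(min(m for m, _ in col), max(g for _, g in col))
            for col in zip(*rows)]

E1 = [(0.9, 0.1), (0.6, 0.2), (0.3, 0.7), (0.8, 0.1), (0.3, 0.6)]  # concept of {1}
E3 = [(0.8, 0.2), (0.6, 0.2), (0.7, 0.1), (0.2, 0.7), (0.4, 0.6)]  # concept of {3}
print(if_intent([E1, E3]))
# -> [(0.8, 0.2), (0.6, 0.2), (0.3, 0.7), (0.2, 0.7), (0.3, 0.6)]
```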

**Figure 1.** IF concept's lattice diagram of four objects with spatial and *Q*<sup>1</sup> quarter of time granule attributes.

Similarly, the IF concepts of the given four objects with spatial attributes in the *Q*<sup>2</sup> quarter of the time granule are (1, *C̃*<sup>2</sup><sub>1</sub>), (2, *C̃*<sup>2</sup><sub>2</sub>), (3, *C̃*<sup>2</sup><sub>3</sub>), (12, *C̃*<sup>2</sup><sub>4</sub>), (13, *C̃*<sup>2</sup><sub>5</sub>), (23, *C̃*<sup>2</sup><sub>6</sub>), (24, *C̃*<sup>2</sup><sub>7</sub>), (123, *C̃*<sup>2</sup><sub>8</sub>), (124, *C̃*<sup>2</sup><sub>9</sub>), (234, *C̃*<sup>2</sup><sub>10</sub>), (1234, *C̃*<sup>2</sup><sub>11</sub>), and (∅, *C̃*<sup>2</sup><sub>∅</sub>), where:

$$\begin{aligned}
\widetilde{C}^{2}\_{1} &= \{(0.9, 0.1), (0.6, 0.2), (0.3, 0.7), (0.8, 0.1), (0.9, 0.0)\}, \\
\widetilde{C}^{2}\_{2} &= \{(0.3, 0.5), (0.5, 0.5), (0.8, 0.2), (0.2, 0.5), (0.8, 0.1)\}, \\
\widetilde{C}^{2}\_{3} &= \{(0.8, 0.2), (0.6, 0.2), (0.7, 0.1), (0.2, 0.7), (0.1, 0.8)\}, \\
\widetilde{C}^{2}\_{4} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.6), (0.7, 0.2)\}, \\
\widetilde{C}^{2}\_{5} &= \{(0.8, 0.2), (0.6, 0.2), (0.3, 0.7), (0.2, 0.7), (0.1, 0.8)\}, \\
\widetilde{C}^{2}\_{6} &= \{(0.3, 0.5), (0.5, 0.5), (0.7, 0.2), (0.2, 0.7), (0.1, 0.8)\}, \\
\widetilde{C}^{2}\_{7} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.6), (0.7, 0.2)\}, \\
\widetilde{C}^{2}\_{8} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.7), (0.1, 0.8)\}, \\
\widetilde{C}^{2}\_{9} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.6), (0.7, 0.2)\}, \\
\widetilde{C}^{2}\_{10} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.7), (0.1, 0.8)\}, \\
\widetilde{C}^{2}\_{11} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.7), (0.1, 0.8)\}, \\
\widetilde{C}^{2}\_{\emptyset} &= \{(1, 0), (1, 0), (1, 0), (1, 0), (1, 0)\}
\end{aligned}$$

The IF concept's lattice design of the given four objects with spatio-temporal attributes with the *Q*<sup>2</sup> quarter of time granule is given in Figure 2.

Moreover, the IF concepts of the given four objects with spatial attributes in the *Q*<sup>3</sup> quarter of the time granule are (1, *C̃*<sup>3</sup><sub>1</sub>), (2, *C̃*<sup>3</sup><sub>2</sub>), (3, *C̃*<sup>3</sup><sub>3</sub>), (4, *C̃*<sup>3</sup><sub>4</sub>), (12, *C̃*<sup>3</sup><sub>5</sub>), (13, *C̃*<sup>3</sup><sub>6</sub>), (23, *C̃*<sup>3</sup><sub>7</sub>), (24, *C̃*<sup>3</sup><sub>8</sub>), (123, *C̃*<sup>3</sup><sub>9</sub>), (124, *C̃*<sup>3</sup><sub>10</sub>), (234, *C̃*<sup>3</sup><sub>11</sub>), (1234, *C̃*<sup>3</sup><sub>12</sub>), and (∅, *C̃*<sup>3</sup><sub>∅</sub>), where:

$$\begin{aligned}
\widetilde{C}^{3}\_{1} &= \{(0.9, 0.1), (0.6, 0.2), (0.3, 0.7), (0.8, 0.1), (0.7, 0.2)\}, \\
\widetilde{C}^{3}\_{2} &= \{(0.3, 0.5), (0.5, 0.5), (0.8, 0.2), (0.2, 0.5), (0.8, 0.2)\}, \\
\widetilde{C}^{3}\_{3} &= \{(0.8, 0.2), (0.6, 0.2), (0.7, 0.1), (0.2, 0.7), (0.7, 0.3)\}, \\
\widetilde{C}^{3}\_{4} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.6), (0.8, 0.1)\}, \\
\widetilde{C}^{3}\_{5} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.5), (0.7, 0.2)\}, \\
\widetilde{C}^{3}\_{6} &= \{(0.8, 0.2), (0.6, 0.2), (0.3, 0.7), (0.2, 0.7), (0.7, 0.3)\}, \\
\widetilde{C}^{3}\_{7} &= \{(0.3, 0.5), (0.5, 0.5), (0.7, 0.2), (0.2, 0.7), (0.7, 0.3)\}, \\
\widetilde{C}^{3}\_{8} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.6), (0.8, 0.2)\}, \\
\widetilde{C}^{3}\_{9} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.7), (0.7, 0.3)\}, \\
\widetilde{C}^{3}\_{10} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.6), (0.7, 0.2)\}, \\
\widetilde{C}^{3}\_{11} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.7), (0.7, 0.3)\}, \\
\widetilde{C}^{3}\_{12} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.7), (0.7, 0.3)\}, \\
\widetilde{C}^{3}\_{\emptyset} &= \{(1, 0), (1, 0), (1, 0), (1, 0), (1, 0)\}
\end{aligned}$$

**Figure 2.** IF concept's lattice diagram of four objects with spatial and *Q*<sup>2</sup> quarter of time granule attributes.

The IF concept's lattice design of the given four objects with spatio-temporal attributes with the *Q*<sup>3</sup> quarter of time granule is given in Figure 3.

**Figure 3.** IF concept's lattice diagram of four objects with spatial and *Q*<sup>3</sup> quarter of time granule attributes.

Similarly, the IF concepts of the given four objects with spatial attributes in the *Q*<sup>4</sup> quarter of the time granule are (1, *C̃*<sup>4</sup><sub>1</sub>), (2, *C̃*<sup>4</sup><sub>2</sub>), (3, *C̃*<sup>4</sup><sub>3</sub>), (12, *C̃*<sup>4</sup><sub>4</sub>), (13, *C̃*<sup>4</sup><sub>5</sub>), (23, *C̃*<sup>4</sup><sub>6</sub>), (24, *C̃*<sup>4</sup><sub>7</sub>), (123, *C̃*<sup>4</sup><sub>8</sub>), (124, *C̃*<sup>4</sup><sub>9</sub>), (234, *C̃*<sup>4</sup><sub>10</sub>), (1234, *C̃*<sup>4</sup><sub>11</sub>), and (∅, *C̃*<sup>4</sup><sub>∅</sub>), where:

$$\begin{aligned}
\widetilde{C}^{4}\_{1} &= \{(0.9, 0.1), (0.6, 0.2), (0.3, 0.7), (0.8, 0.1), (0.4, 0.3)\}, \\
\widetilde{C}^{4}\_{2} &= \{(0.3, 0.5), (0.5, 0.5), (0.8, 0.2), (0.2, 0.5), (0.5, 0.4)\}, \\
\widetilde{C}^{4}\_{3} &= \{(0.8, 0.2), (0.6, 0.2), (0.7, 0.1), (0.2, 0.7), (0.2, 0.7)\}, \\
\widetilde{C}^{4}\_{4} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.5), (0.4, 0.4)\}, \\
\widetilde{C}^{4}\_{5} &= \{(0.8, 0.2), (0.6, 0.2), (0.3, 0.7), (0.2, 0.7), (0.2, 0.7)\}, \\
\widetilde{C}^{4}\_{6} &= \{(0.3, 0.5), (0.5, 0.5), (0.7, 0.2), (0.2, 0.7), (0.2, 0.7)\}, \\
\widetilde{C}^{4}\_{7} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.6), (0.1, 0.6)\}, \\
\widetilde{C}^{4}\_{8} &= \{(0.3, 0.5), (0.5, 0.5), (0.3, 0.7), (0.2, 0.7), (0.2, 0.7)\}, \\
\widetilde{C}^{4}\_{9} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.6), (0.1, 0.6)\}, \\
\widetilde{C}^{4}\_{10} &= \{(0.2, 0.6), (0.3, 0.6), (0.6, 0.3), (0.1, 0.7), (0.1, 0.7)\}, \\
\widetilde{C}^{4}\_{11} &= \{(0.2, 0.6), (0.3, 0.6), (0.3, 0.7), (0.1, 0.7), (0.1, 0.7)\}, \\
\widetilde{C}^{4}\_{\emptyset} &= \{(1, 0), (1, 0), (1, 0), (1, 0), (1, 0)\}
\end{aligned}$$

Here, the IF concept's lattice design of the given four objects with spatio-temporal attributes with the *Q*<sup>4</sup> quarter of time granule is given in Figure 4.

**Figure 4.** IF concept's lattice diagram of four objects with spatial and *Q*<sup>4</sup> quarter of time granule attributes.

Now, according to Equation (1), the IG of each lattice designed with the four events showing the objects along with each quarter of the time granulation is given below:

*Lattice*1(designed with *Q*<sup>1</sup> quarter of time granulation) *IG* : 0.53

*Lattice*2(designed with *Q*<sup>2</sup> quarter of time granulation) *IG* : 0.58

*Lattice*3(designed with *Q*<sup>3</sup> quarter of time granulation) *IG* : 0.60

*Lattice*4(designed with *Q*<sup>4</sup> quarter of time granulation) *IG* : 0.52

Here, *Lattice*3, designed with the *Q*<sup>3</sup> quarter of time granulation, gives more IG than the other three lattices designed with the other three quarters of time granulation, respectively. A higher IG leads to more interesting results with a less focused view of the data. Now, the granulation measures COV and SP of *Lattice*<sup>3</sup> IF concepts can be measured through Equation (2) and Equation (3), respectively. According to Equations (2)–(4), the COV, SP, and Q value of each IF concept of *Lattice*<sup>3</sup> is given in Table 4.


**Table 4.** Granulation measures of each IF concept of *Lattice*3.

Now, if the *Q*<sup>3</sup> quarter of time granulation is decomposed into more parts, then this decomposition of the *Q*<sup>3</sup> quarter may provide more interesting results. For this purpose, let *Q*3,1, *Q*3,2, and *Q*3,3 contain the July, August, and September IF data, respectively. This is the second decomposition of the IF concept lattice designed through the *Q*<sup>3</sup> quarter of time granulation. Thus, the four events' IF data with spatio-temporal attributes of the *Q*<sup>3</sup> quarter's second decomposition are given in Table 5.

**Table 5.** Events with Four Spatial and Three (decomposed) *Q*3,1, *Q*3,2, and *Q*3,3 Temporal Attributes Data.


The IG of each lattice, *Lattice*3,1, *Lattice*3,2, and *Lattice*3,3, with second decomposition of *Q*3,1, *Q*3,2, and *Q*3,3 quarters of time granulation, respectively, is given below:

*Lattice*3,1 (designed with *Q*3,1 quarter of time granulation) *IG* : 0.63

*Lattice*3,2 (designed with *Q*3,2 quarter of time granulation) *IG* : 0.46

*Lattice*3,3 (designed with *Q*3,3 quarter of time granulation) *IG* : 0.44

It shows that *Lattice*3,1, made with the *Q*3,1 quarter of time granulation, gives more IG than the other lattices of the timed granulations. Moreover, the granulation measures of each concept of this lattice (computed as for *Lattice*3), i.e., the lattice made with the *Q*3,1 quarter of time granulation, are given in Table 6.

Likewise, it can be observed that the granule *Lattice*2, designed with the *Q*<sup>2</sup> quarter of time granulation, has the second-highest IG, at 0.58. So, the granulation measures COV, SP, and the Q value of this lattice, i.e., the one made with the *Q*<sup>2</sup> quarter of time granulation, are given in Table 7.

Note that the value "*ζ* = 1" is used because of the primary decomposition of the granules. Here, primary decomposition means partitioning the data into months, because the first chosen decomposition is set to one month. Partitioning one month into two timeslots would be the secondary decomposition; in that case, the value of "*ζ*" is 0.5. The applicability of the proposed approach is the knowledge discovery of periodical events' occurrences (co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences in spatial and temporal aspects through IF datasets by applying FCA and GrC.


**Table 6.** Granulation measures of each IF concept of *Lattice*3,1.

**Table 7.** Granulation measures of each IF concept of *Lattice*2.


## **7. Results and Discussion**

The experiments were performed on a 64-bit system (Intel Core i3-4010U, 1.70 GHz, 4 GB RAM). Python (version 3.7) was used to construct the IF concepts' lattice structures in the experimental evaluation section. In the experimental evaluation of this research article, IF data are taken to process the proposed methodology. These IF data contain four events happening at four places in a year. For the first decomposition, the one-year timeslot data are partitioned into four quarters of time granulation of the events happening at the given four places, where the events are the objects, and the places together with the time granulation data are the attributes. The purpose of this methodology is to analyze the spatio-temporal perspectives of the IF granule. More specifically, the idea is to find out whether the granulation of IF data provides more interesting results. In the experimental evaluation, the IG of the four lattices designed with the four events (objects), occurring at four places in four different quarters of the year (the spatio-temporal attributes), is analyzed first, given as

*Lattice*1(designed with *Q*<sup>1</sup> quarter of time granulation) *IG* : 0.53

*Lattice*2(designed with *Q*<sup>2</sup> quarter of time granulation) *IG* : 0.58

*Lattice*3(designed with *Q*<sup>3</sup> quarter of time granulation) *IG* : 0.60

*Lattice*4(designed with *Q*<sup>4</sup> quarter of time granulation) *IG* : 0.52

Hence, the IG of *Lattice*<sup>3</sup>, built with the *Q*<sup>3</sup> quarter of time granulation, is higher than the IGs of the other three lattices, so *Lattice*<sup>3</sup> is chosen for further granulation measures. The COV, SP, and Q values of each of *Lattice*<sup>3</sup>'s IF concepts are calculated and given in Table 4. For the second decomposition, the *Q*<sup>3</sup> quarter is partitioned into three further timeslots, *Q*3,1, *Q*3,2, and *Q*3,3, and the IGs of the lattices built with these second-level timeslots are given as

- *Lattice*3,1 (designed with the *Q*3,1 timeslot of time granulation): *IG* = 0.63
- *Lattice*3,2 (designed with the *Q*3,2 timeslot of time granulation): *IG* = 0.46
- *Lattice*3,3 (designed with the *Q*3,3 timeslot of time granulation): *IG* = 0.44

It can be observed that the IG of the lattice built with the *Q*3,1 timeslot of time granulation is greater than the IGs of the other second-decomposition lattices. Checking the granulation measures of *Lattice*3,1 in Table 6 shows that most of the IF concepts of this second decomposed lattice have higher COV and Q values than those of *Lattice*<sup>3</sup> built with the *Q*<sup>3</sup> quarter of time granulation. Hence, granulating the data in IF sets gives more interesting results.
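The decomposition procedure described in this section (compute the IG of each candidate granule and refine the granule with the highest IG) can be sketched as follows. The IG values are those reported above; the function name is illustrative, and the IG computation itself is not reproduced here.

```python
# Greedy granule selection by information granularity (IG), a sketch of the
# procedure described above. How IG is computed from an IF lattice is not
# reproduced; the values are those reported in the experiment.

def best_granule(ig_by_granule):
    """Return the label of the granule with the highest IG."""
    return max(ig_by_granule, key=ig_by_granule.get)

# First decomposition: one year partitioned into four quarters.
first = {"Q1": 0.53, "Q2": 0.58, "Q3": 0.60, "Q4": 0.52}
chosen = best_granule(first)      # the Q3 granule is refined further

# Second decomposition: the chosen quarter partitioned into three timeslots.
second = {"Q3,1": 0.63, "Q3,2": 0.46, "Q3,3": 0.44}
chosen2 = best_granule(second)    # the Q3,1 granule wins again
```

The same two-level selection generalizes to deeper decompositions by repeating the partition-and-compare step on the winning granule.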

## **8. Comparison with Previous SOTA (State of the Art) Approaches**

## *8.1. Comparison with Previous Spatial and Temporal Approaches Using FCA and GrC*

This approach and its results are compared with other SOTA methodologies on the basis of the research methodology, the GrC perspective of the spatial and temporal aspects, and the data viewpoint with the FCA algorithm. In [1], the authors present and evaluate a method which uses an existing approach for discovering periodic events in data, combining time-based granulation and three-way decisions to support decision makers in understanding and reasoning about the learned granular structures conceptualizing spatial–temporal events. In [7], the methodology interprets, represents, and implements sequential three-way GrC within a framework of temporal–spatial multigranularity learning, which is described by the temporality of the data and the spatiality of the parameters. The method in [31], based on the GrC and FCA techniques, proposes an approach which focuses on the temporal aspect of data to extract knowledge concerning the periodic occurrences of events. In the context of three-way GrC, the authors in [46] introduce three extensional ideas, namely temporal, spatial, and spatial–temporal trisecting–acting–outcome (TAO) frameworks, for the construction of a multilevel composite granular structure.

In the literature, knowledge discovery through the spatial and temporal aspects of data uses the classical FCA algorithm (with single-valued attributes) and the GrC paradigm for the occurrences and co-occurrences of events. However, there can be three aspects of events: occurrences (and co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences with respect to the spatial and temporal aspects of data. In the proposed approach, IF datasets were used for the events, such that event occurrences (and co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences in the spatial and temporal views can be indicated through the *µ*, *γ*, and *π* values, respectively. GrC was used to discover the periodicity in the data at various abstraction levels, while FCA was used to discover the granulation levels and process the granulation measures to understand the IF concepts. References [1,31] use FCA with single-valued attributes for the single aspect of event occurrences (and co-occurrences) with respect to the spatial and temporal aspects, while [7,46] use granular structures for the spatial and temporal aspects of data. The main advantages of the proposed approach over the existing ones are discovering the periodicity of spatial–temporal event data given in IF sets through GrC and the FCA algorithm, and predicting event occurrences (and co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences in the spatial and temporal views of the data through IF sets. The comparison of the proposed approach with other SOTA approaches is presented in Table 8.

## *8.2. Comparison with Finding IE/IG*

IG is computed through the IE (uncertainty) in the data. In GrC, the approaches in [31,35] calculate IE and IG using single-valued attributes for FCA while considering only the one aspect of event occurrences (and co-occurrences). However, the proposed approach, based on the GrC paradigm, uses IF datasets for the attributes of FCA, which improves the IG results. Additionally, unlike the existing approaches, the proposed approach covers the three aspects of event occurrences (and co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences in the spatial and temporal views of data. The comparison of the (improved) results computed through the proposed approach with those of other SOTA approaches [31,35] is presented in Table 9.

Here, the IG results obtained with the approaches of [31,35] remain unchanged across the lattices because, unlike the proposed approach, they do not exploit the different IF *µ* and *γ* values of the attributes shared by the objects in each lattice.


**Table 8.** Comparison with other SOTA approaches.

**Table 9.** Comparison with other research methodologies to find IG. Higher values are bolded.


## *8.3. Comparison with Finding COV, SP, and Q Value*

COV, SP, and Q value are important granulation measures for analyzing a granule. In [31], granules are represented in the form of formal concepts within GrC and evaluated through these granulation measures; moreover, in [44,45], these granulation measures are proposed for granules represented in fuzzy and IF sets. In the existing approaches, the granulation measures are used only from the perspective of GrC with the FCA algorithm [31] or on fuzzy and IF sets [44,45]. However, the granulation measures in the proposed approach are used from the combined perspective of GrC, FCA, and IF sets. In the proposed approach, IF concepts are built and represented as granules, and the granulation measures are used to evaluate those granules. The comparison given in Tables 10 and 11 shows that the granulation measures used in the proposed approach give improved results.


**Table 10.** Comparison with other research methodologies to find the granulation measures (COV, SP, and Q value) of *Lattice*3. Higher values are bolded.

**Table 11.** Comparison with other research methodologies to find the granulation measures (COV, SP, and Q value) of *Lattice*3,1. Higher values are bolded.


The proposed approach is compared with other SOTA approaches by applying the granulation measures to the IF datasets given in Section 6 (experimental evaluation). These IF datasets contain events as objects and spatial and temporal attributes, in which the temporal attribute is decomposed into four quarters, *Q*1, *Q*2, *Q*3, and *Q*<sup>4</sup>, of the annual periodicity of time granulation, and four granules are created in the first decomposition. Afterwards, the IG of each granule is computed to determine the granule with the highest IG. FCA is then used to construct lattices from each granule, and the granulation measures are applied to the selected granule (with the highest IG). As shown in Table 9, the IG obtained with the proposed approach is greater than that obtained with the other approaches [31,35]. In Table 9, the IG obtained with the other approaches is the same for all the lattices, because these approaches do not distinguish the IF values of the objects. Furthermore, in Tables 10 and 11, most of the granulation measures (COV, SP, and Q value) of *Lattice*<sup>3</sup> and *Lattice*3,1 obtained with the proposed approach are greater than those obtained with the existing approaches [44,45]. Hence, it can be observed that the proposed approach provides improved results for the IG, COV, and Q values obtained from the IF datasets and processed through GrC and the FCA algorithm.

## **9. Conclusions and Future Work**

This research suggests a novel approach to determine the occurrences (and co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences of events based on GrC and IF datasets with spatio-temporal attributes. The FCA algorithm was used to analyze the granulation levels and granulation measures. Furthermore, different measures were proposed to analyze the granulation levels formed with the IF datasets. The originality of the proposed methodology lies in discovering the periodical occurrences (and co-occurrences), nonoccurrences, and uncertainty of occurrences/nonoccurrences in IF datasets with spatio-temporal attributes using FCA and granulation measures. Here, limited IF datasets indicating the spatial and temporal aspects of data were considered for the experimentation with the proposed methodology; it can be implemented on a large number of IF datasets in the context of big data to test its scalability. In the real world, this approach can be used to discover significant periodicities in data related to storm occurrences, digital forensics, and electronic and smart video surveillance by constructing a timeline to analyze and predict information. However, the proposed approach does not yet provide an automatic or semiautomatic process to predict an event's occurrence in granular structures. The authors aim to address these additions in future works.

**Author Contributions:** I.A.: conceptualization, methodology, software, visualization, formal analysis, investigation, writing—original draft, and writing—review and editing. Y.L.: conceptualization, methodology, formal analysis, writing—review and editing, supervision, investigation, and funding acquisition. W.P.: writing—review and editing, suggestions, guidance, formal analysis, investigation, supervision, and validation. All authors have read and agreed to the published version of the manuscript.

**Funding:** Y. Li is supported by the National Science Foundation of China (Grants No. 12071271 and No. 11671244).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data are contained within the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Application of the Methodology of Multi-Valued Logic Trees with Weighting Factors in the Optimization of a Proportional Valve**

**Adam Deptuła <sup>1</sup>, Michał Stosiak <sup>2</sup>, Rafał Cieślicki <sup>2,</sup>\*, Mykola Karpenko <sup>3</sup>, Kamil Urbanowicz <sup>4</sup>, Paulius Skačkauskas <sup>3</sup> and Anna Małgorzata Deptuła <sup>1</sup>**


**Abstract:** Hydraulic valves are used to set the values of hydraulic quantities (flow rate, pressure, or pressure difference) in a hydraulic system or a part of it. This is achieved through the appropriate throttling of the stream flowing through the valve, set automatically or by the operator (e.g., by opening the throttle valve). The procedures for determining static and dynamic properties are described using the example of modeling a two-stage proportional relief valve. Subsequently, the importance of the design and operational parameters is determined using multi-valued logic trees. Modeling began with the determination of the equations describing the flow and the movement of the moving parts of the valve. Based on these equations, a numerical model was then created, e.g., in the Matlab/Simulink environment (R2020b). The static characteristics were obtained as the result of a model analysis of slow changes in the flow rate through the valve. The various coefficients of logical products have not previously been taken into account in the separable and joint minimization of systems of multi-valued logic equations in the available literature. The results of the model tests can be used to optimize several types of hydraulic valve constructions.

**Keywords:** multi-valued logic trees; hydraulic proportional valve; weighting factors; optimization

**MSC:** 03B50; 03B70; 03B80; 05C05

## **1. Introduction**

In recent years, intensive development in the field of hydraulic valves has been observed. This development is mainly related to the integration of electronics designed to control the valves. Modern hydraulic valves—especially those controlled via the proportional technique—are often equipped with various types of sensors, e.g., an inductive spool-position sensor inside the body of a proportional valve. The integration of classic hydraulics with electronics and sensors creates new, previously unattainable possibilities for using hydraulic proportional valves [1]. The course of the control signal of proportional valves is shown in the form of a block diagram in Figure 1.

An analog electrical signal with a voltage value typically not greater than 10 V is fed to an electronic amplifier. From the electronic amplifier, the electric control signal is fed through wires with a current that usually does not exceed 1.5 A per coil of a proportional electromagnet. Depending on the type of proportional electromagnet, a force or displacement of the proportional electromagnet armature is generated. If the valve uses a proportional solenoid with an adjustable stroke, a displacement of the electromagnet armature is generated proportionally to the value of the control current. This affects the proportional valve's control element (e.g., a spool–bushing pair or seat plug), causing its displacement (*x*). If the valve uses a proportional solenoid with a regulated force, the force *F* is generated proportionally to the value of the control current on the armature of the proportional solenoid. This force is transmitted to the valve control element, which is usually a poppet in a proportional pressure valve. With a change in the displacement of the valve actuator or the force acting on it, the flow rate *Q* or the pressure *p* varies, depending on whether the proportional valve controls the flow rate or the pressure. These parameters control the operation of the hydraulic receiver, where the flow rate determines the speed (*n* or *v*) of the hydraulic receiver and the pressure determines the external load (*M* or *F*).

**Citation:** Deptuła, A.; Stosiak, M.; Cieślicki, R.; Karpenko, M.; Urbanowicz, K.; Skačkauskas, P.; Deptuła, A.M. Application of the Methodology of Multi-Valued Logic Trees with Weighting Factors in the Optimization of a Proportional Valve. *Axioms* **2023**, *12*, 8. https://doi.org/10.3390/axioms12010008

Academic Editor: Oscar Castillo

Received: 1 November 2022; Revised: 9 December 2022; Accepted: 13 December 2022; Published: 22 December 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Figure 1.** Schematic diagram of the control signal in the proportional control technique.

There are no significant differences in terms of the mechanical design between conventionally controlled valves and proportionally controlled valves. The main difference is that, in proportional valves, instead of a spring, hand wheel, lever, or conventional solenoid, there are one or more proportional solenoids. There are two types of proportional electromagnets: force-adjustable and stroke-adjustable. Their use depends on the type and function of the proportional valve and is determined by its characteristics.

The benefits of using proportional valves include combining several functions in one valve, the smooth control of flow and pressure parameters, the ability to program the valve and the receiver controlled by it, and the reduction of dynamic surpluses [2]. Proportional valves also have some disadvantages in comparison to conventional ones, including a higher price, stricter requirements regarding the purity of the working liquid, and sensitivity to operating conditions (moisture, salinity of the environment, and external mechanical vibrations). Technically speaking, proportional valves were initially a bridge between conventionally (mechanically or electrically) controlled valves and servo valves. Presently, the latest proportional control valves have dynamic parameters equal to those of servo valves and sometimes even surpass them [3]. For example, the limit frequency of a two-stage servo valve is 240–270 Hz depending on the manufacturer, while the limit frequency of the latest-generation proportional valve with VCD (Voice Coil Drive) technology is 350 Hz. Several years ago, this frequency was 6–10 Hz for a single-stage proportional valve [4].

In many industries, particularly mechanical engineering, proportional relief valves with one or, more often, two stages are widespread. Examples of their use include hydraulic systems for lifting and lowering loads with significant masses [5], agricultural machinery [6], CNC machine tools, hydraulic presses, wheel loaders, and ships.

## *Related Work*

The research to determine the importance of hydraulic valves' design and/or operational parameters is still ongoing. For example, study [7] optimized a relief valve by minimizing partial multi-valued logic functions; multi-valued logical equations that constituted design guidelines for an entire type series of such valves were used. The analysis of the stability of hydraulic elements based on systems of multi-valued logical equations and the method of multi-valued logic trees, taking weighting factors into account, allows the conditions of global stability to be considered. The most favorable result is the specification of the relationship that binds the limitations of the design and operational parameters. In addition, the conditions that limit the parameters of the valve and the system are reduced to a simple analytical and graphical relationship. Overall, this analysis is limited to a directly operated relief valve system, general stability conditions, and a computerized time-course solution with different variable coefficients.

The modeling of hydraulic systems usually uses ordinary differential equations. From these equations, a system of equations is created, initial conditions are assumed, and, after parameterization, the equations are solved to obtain the time courses of the relevant parameters of the hydraulic system, e.g., the pressure or the velocity of the receiver as functions of time. These are lumped-parameter models. A hydraulic system sometimes exhibits wave phenomena that lead to hydraulic resonance; this is referred to as a hydraulic long line. In such cases, partial differential equations are used for modeling, and the method of characteristics (MOC) is used to solve them. This paper considers a system in which no wave phenomena occur and uses ordinary differential equations to describe the valve state.

Decision-support systems are also applied to hydraulic and pneumatic systems [8–16]. The paper [8] mainly presents related methods, from classical clustering and classification topics to database methods (e.g., association rules, data cubes) and on to newer and more advanced topics (e.g., SVD/PCA, wavelets, support vector machines). The works [9–12] focused on concepts for integrating decision-support systems for poorly structured data with a data warehouse based on relational or multidimensional structures. In [13], a framework was developed to evaluate different rainwater-discharge options for urban areas in arid regions. The modeling of rainfall runoff was carried out using the Hydrologic Engineering Center's Hydrologic Modeling System (HEC-HMS), and hydraulic modeling was carried out using SewerGEMS to evaluate the effectiveness of the various alternatives for a given design flood [14,15]. The authors of [14,15] presented further applications of multi-criteria decision-support methods.
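The lumped-parameter workflow described above (write the ordinary differential equations, set initial conditions, parameterize, and integrate to obtain time courses) can be sketched with a toy second-order element. The parameter values and function name below are illustrative, not the paper's valve model.

```python
# Sketch of lumped-parameter modeling: a second-order ODE
#   m*x'' + c*x' + k*x = F
# for a spring-loaded valve element, integrated with a simple Euler scheme
# to obtain the time course of the displacement x(t).
# All numerical values are illustrative, not the valve parameters of the paper.

def simulate(m=0.7, c=50.0, k=2.0e4, F=100.0, dt=1e-5, t_end=0.5):
    """Return the displacement x at t_end, starting from rest."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (F - c * v - k * x) / m   # Newton's second law for the element
        v += a * dt                    # update velocity, then position
        x += v * dt
    return x

x_final = simulate()   # settles near the static deflection F/k = 0.005 m
```

With damping present, the transient decays and the computed displacement approaches the static value *F*/*k*, which is a convenient sanity check on any such integration.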
In particular, the work in [15] applied a multi-criteria decision-making (MCDM) approach within a Geospatial Information System (GIS). The Decision-Making Trial and Evaluation Laboratory (DEMATEL) approach was used to create a network of relationships between the criteria. The author of [17] described a model-driven decision-support system (software tool) implementing a model-based online leak-detection and localization methodology that is useful for a large class of water distribution networks.

The present work presents the use of multi-valued logic trees with multivalent weighting factors in the analysis of a two-stage proportional relief valve with a nozzle–aperture preliminary stage [18–20]. A significant amount of literature exists on the applications of decision trees in decision-making systems; however, there are only a small number of publications on their application in design methodology. Cognitive decision theories seek sufficient and effective solutions for so-called real-world problems and well-defined problems. A number of decision-support methods are familiar to the authors and, in particular, have already been used by them to solve problems in decision-support areas, e.g., special types of parametric dependency graphs [21,22], inductive decision trees [23,24], and, in particular, multi-valued logic trees [25]. Recent work has shown how methods based on multi-valued logic trees can be very beneficial when other methods are ineffective. Multi-valued logic tree methods have many advantages in design methodology and are still being developed. The advantage of the method of multi-valued logic trees is that the measurement data can be recorded by means of appropriate formal notations, and it is even possible to combine complex quantitative and qualitative features with different degrees of detail according to the rules of the multi-valued morphological array. The canonical alternative normal form (KAPN) of a two-valued or multi-valued logical function describes all variants, i.e., the true (realizable) solutions of a given problem obtained according to the rules of the morphological table, just as the full array of combinations of the values of the logical variables describes all theoretical variants. As a result of minimization (after applying the Quine–McCluskey algorithm), the true sub-solutions are obtained from the realizable solutions as a shortened alternative normal form (SAPN) of the logical function. In this way, the real sub-solutions of the problem are appropriately grouped, and the computational time required to obtain the most important real sub-solutions is therefore reduced.
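The minimization step mentioned above rests on the Quine–McCluskey algorithm. The following minimal sketch illustrates its prime-implicant generation for the two-valued case (the multi-valued trees of this paper generalize the same idea); the function names are ours, for illustration only.

```python
# Prime-implicant step of the Quine-McCluskey algorithm: repeatedly merge
# implicants that differ in exactly one bit position (replacing that bit
# with '-') until no further merging is possible. Unmerged terms are primes.

def merge(a, b):
    """Merge two implicant strings differing in exactly one bit, else None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Return the set of prime implicants of the function given by minterms."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for a in terms:
            for b in terms:
                m = merge(a, b)
                if m is not None:
                    used.update({a, b})
                    nxt.add(m)
        primes |= terms - used   # terms that merged into nothing are primes
        terms = nxt
    return primes
```

For example, the two-variable function with minterms {0, 1, 2} yields the primes `0-` and `-0` (i.e., ¬a ∨ ¬b); a full minimizer would then select a cover of these primes.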


## **2. The Tested Object**

The tested object is a two-stage proportional relief valve with a preliminary nozzle–aperture stage (Figure 2) [23]. The main stage is pressure-controlled, while the pilot stage is controlled by a proportional solenoid. Changing the pressure in the chamber above the main-stage spool is possible by throttling the fluid flowing out of the pilot stage; this throttling is altered by changing the position of the diaphragm driven by the proportional solenoid.

**Figure 2.** The tested two-stage proportional relief valve.

Figure 3 shows the drive system with a proportional valve and a receiver.

**Figure 3.** Diagram of the drive system.

The receiver in the analyzed system is a throttle valve whose performance characteristics are described as follows:

$$Q\_{odb} = \begin{cases} 1.2446666 \cdot 10^{-10}\, p, & p \le 1\ \mathrm{MPa} \\ 0.3533333 \cdot 10^{-10}\, p + 0.8913333 \cdot 10^{-4}, & 1\ \mathrm{MPa} < p \le 6\ \mathrm{MPa} \\ 0.2425893 \cdot 10^{-10}\, p + 14.55 \cdot 10^{-5}, & p > 6\ \mathrm{MPa} \end{cases} \tag{1}$$

where *Qodb* is the hydraulic actuator flow rate.
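Equation (1) can be evaluated directly as a piecewise function. The sketch below (pressure in Pa, flow rate in m³/s, with the third branch taken for *p* > 6 MPa) also makes it easy to check that the first two branches coincide at the 1 MPa boundary; the function name is illustrative.

```python
# Throttle-valve characteristic, Equation (1); p in Pa, Q_odb in m^3/s.
# The branch conditions follow the corrected form (third branch for p > 6 MPa).

def q_odb(p):
    if p <= 1e6:
        return 1.2446666e-10 * p
    if p <= 6e6:
        return 0.3533333e-10 * p + 0.8913333e-4
    return 0.2425893e-10 * p + 14.55e-5

# At the 1 MPa boundary the first two branches agree to the printed precision:
#   1.2446666e-10 * 1e6 == 0.3533333e-10 * 1e6 + 0.8913333e-4
```

Such continuity checks at branch boundaries are a quick way to catch transcription errors in piecewise characteristics.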

In order to describe the flow through a proportional valve it is necessary to consider the value of the loss factor as a function of the displacement of the moving element. The actual course is similar to the solution of a second-order differential equation with a variable throttling factor. This relationship is described in the following form [26]:

$$k\_{vx} = 0.82 \left[ 1 - \exp\left(-b \cdot 10^3 \cdot \frac{x}{2}\right) \cos\left(10^3 x \sqrt{-\Delta}\right) \right], \quad \Delta < 0, \tag{2}$$

where:

$$b = 5 + \frac{5 \cdot 10^{7}}{p}, \qquad \Delta = b^2 - 100\pi^2. \tag{3}$$

For ∆ > 0, the aperiodic course is used instead:

$$k\_{vx} = 0.82\left[1 - \exp\left(\left(-b - \sqrt{\Delta}\right) \cdot 10^3 \cdot \frac{x}{2}\right)\right]. \tag{4}$$

The following course was used in the control stage (∆*y* < 0):

$$k\_{vy} = 0.75 \left[ 1 - \exp\left(-b\_y \cdot 10^3 \cdot \frac{y}{2}\right) \cos\left(10^3 y \sqrt{-\Delta y}\right) \right] \tag{5}$$

where:

$$\begin{array}{l}b\_{y} = 40 + \frac{1.5 \cdot 10^{8}}{p\_{y} + 10^{5}}\\\Delta y = b\_{y}^{2} - 100\pi^{2}.\end{array} \tag{6}$$

The force generated by the electromagnetic transducer used in the valve is described as follows:

$$F\_m = 73.19631\,(i - 0.045), \qquad di = \frac{1}{T\_m}\left(\frac{U}{18} - i\right)dt, \tag{7}$$

where *T<sub>m</sub>* = 15 ms when *i* increases (*U*/18 − *i* > 0) and *T<sub>m</sub>* = 7.5 ms when *i* decreases (*U*/18 − *i* < 0).
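The first-order current dynamics of Equation (7), with the asymmetric time constant just described, can be integrated numerically. The sketch below uses a simple Euler step with an illustrative input voltage; the function name is ours, not from the paper's Matlab/Simulink model.

```python
# Transducer dynamics, Equation (7): first-order current build-up
#   di = (1/T_m)(U/18 - i) dt
# with T_m = 15 ms while the current rises and 7.5 ms while it falls,
# followed by the static force law F_m = 73.19631 * (i - 0.045).

def solenoid_current(U, i0=0.0, dt=1e-5, t_end=0.2):
    """Euler integration of the coil current over t_end seconds."""
    i = i0
    for _ in range(int(t_end / dt)):
        T_m = 15e-3 if (U / 18 - i) > 0 else 7.5e-3   # asymmetric time constant
        i += (U / 18 - i) / T_m * dt
    return i

i_ss = solenoid_current(U=18.0)    # approaches the steady-state value U/18 = 1 A
F_m = 73.19631 * (i_ss - 0.045)    # resulting electromagnetic force in N
```

After roughly five time constants the current is effectively at its steady-state value *U*/18, so the force saturates at 73.19631(*U*/18 − 0.045).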

## *Mathematical Model of the Tested Valve*

The mathematical model of the valve under consideration was built on the basis of ordinary differential equations of the second order. The first equation of the system of equations is the flow rate balance equation, which takes into account the compressibility of the working fluid (its capacitance).

The flow balance of the drive system can be written as [23]:

$$Q\_p = Q\_{zQ} + Q\_{1x} + Q\_{odb}.\tag{8}$$

The flow balance through the main valve stage is described as:

$$Q\_{\mathbf{z}Q} = Q\_{\mathbf{z}Q\mathbf{x}} + Q\_{\mathbf{D}1} + Q\_{\mathbf{tx}}.\tag{9}$$

The flow through the nozzle is described as:

$$Q\_{D1} = Q\_{D2} = Q\_{D3},\tag{10}$$

$$Q\_{DG} = Q\_{zQY} + Q\_{tY}.\tag{11}$$

The flow balance through the control stage is described as:

$$Q\_{D3} = Q\_{1y} + Q\_{zQy} + Q\_{ty}. \tag{12}$$

In addition, the flow rate is distinguished in the main stage as:

$$Q\_{\rm lx} = \frac{V\_{\chi}}{B} \cdot \frac{dp}{dt} = \frac{4.33735 \cdot 10^{-3}}{1.4 \cdot 10^9} \frac{dp}{dt} = 3.098107 \cdot 10^{-12} \frac{dp}{dt} \tag{13}$$

and at the control stage as:

$$Q\_{1y} = \frac{V\_y}{B} \cdot \frac{dp\_y}{dt} = \frac{1.2 \cdot 10^{-6}}{1.4 \cdot 10^9} \frac{dp\_y}{dt} = 0.857 \cdot 10^{-15} \frac{dp\_y}{dt}. \tag{14}$$
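The capacitance coefficients in (13) and (14) follow directly from the ratio *V*/*B*; a quick numeric check (symbol names ours, with *B* = 1.4 · 10⁹ Pa, the bulk modulus appearing in both denominators):

```python
B = 1.4e9                      # bulk modulus of the working fluid [Pa]
V_x, V_y = 4.33735e-3, 1.2e-6  # main- and control-stage volumes [m^3]

c_x = V_x / B  # capacitance coefficient of Eq. (13), ~3.098e-12
c_y = V_y / B  # capacitance coefficient of Eq. (14), ~0.857e-15
```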

The flow rate through the valve is represented as:

$$Q\_z = \sqrt{\frac{2}{\rho}} \cdot k(k\_{vx}x)\sqrt{p - p\_0}, \quad p\_0 \ll p, \tag{15}$$

• through the main stage:

$$Q\_{\mathbf{z}Q\mathbf{x}} = \sqrt{\frac{2}{892}} \cdot \pi \cdot 22 \cdot 10^{-3} \cdot \sin 30^{\circ} (k\_{\text{vx}} \cdot \mathbf{x}) \sqrt{p} \,\tag{16}$$

$$Q\_{zQx} = 1.6355097 \cdot 10^{-3} (k\_{vx} \cdot x) \sqrt{p} \,\tag{17}$$

• through the control stage:

$$Q\_{zQy} = \sqrt{\frac{2}{892}} \cdot \pi \cdot 1.8 \cdot 10^{-3} (k\_{vy} \cdot y) \sqrt{p},\tag{18}$$

$$Q\_{zQy} = 0.2676292 \cdot 10^{-3} (k\_{vy} \cdot y) \sqrt{p}. \tag{19}$$
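The flow coefficients in (16)–(19) can be reproduced from the stated geometry and density; a short check (variable names ours):

```python
import math

rho = 892.0  # fluid density [kg/m^3]

# Main stage, Eq. (16): wetted perimeter pi * 22e-3 m times sin 30 deg.
c_main = math.sqrt(2 / rho) * math.pi * 22e-3 * math.sin(math.radians(30))

# Control stage, Eq. (18): wetted perimeter pi * 1.8e-3 m.
c_ctrl = math.sqrt(2 / rho) * math.pi * 1.8e-3

# Both agree with the tabulated 1.6355e-3 and 0.26763e-3 to about 0.05%.
```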

Ultimately, the flow rates are represented as:

• through the nozzle *D*1:

$$Q\_{D1} = a\_1(p - p\_1) = 0.2370513 \cdot 10^{-10} (p - p\_1). \tag{20}$$

• through the nozzle *D*3:

$$Q\_{D3} = a\_3(p\_2 - p\_y) = 0.2370486 \cdot 10^{-10} (p\_2 - p\_y). \tag{21}$$

An additional equation described is the equilibrium equation of the forces acting on the valve control element (according to d'Alembert's principle) on the main stage and the secondary stage. This equation takes into account the forces of inertia, spring stiffness, frictional force, and the hydrodynamic reaction force associated with the change in momentum of the fluid stream.

Forces in the valve:

Dynamic loads:

$$F\_d = m \frac{d^2 \mathbf{x}}{dt^2}.\tag{22}$$

In the main stage:

$$F\_{d\chi} = \left[0.675 + \frac{1}{3}(0.008 + 0.00439)\right] \frac{d^2 \chi}{dt^2} = 0.70631 \frac{d^2 \chi}{dt^2} \tag{23}$$

where the first term is the mass of the main-stage moving element and the bracketed one-third term adds the reduced mass of the springs, giving the total reduced mass in kg.

In the control stage:
$$F\_{dy} = 0.03 \frac{d^2 y}{dt^2}.\tag{24}$$

Sticky friction:

$$F\_{t1} = \frac{A\_{st} \cdot \mu}{L\_0} \cdot \frac{d\mathbf{x}}{dt} \,\tag{25}$$

• forces in the main stage:

$$F\_{l1x} = \frac{\pi \cdot 22 \cdot 10^{-3} \cdot 10.5 \cdot 10^{-3} \cdot 0.06265}{5 \cdot 10^{-6}} = 9.0885102 \frac{d\mathbf{x}}{dt} \,\tag{26}$$

• forces in the control stage:

$$F\_{t1y} = \frac{32 \cdot 10^{-6} \cdot 0.06265}{12 \cdot 10^{-6}} \frac{dy}{dt} = 0.1670666 \frac{dy}{dt}.\tag{27}$$

Forces of the hydrodynamic reaction are described as follows:

• of the main stage:

$$F\_{\rm rx} = 2k\_{x}\cos\theta(k\_{vx}x)p = 2 \cdot \pi \cdot 22 \cdot 10^{-6} \cdot \sin 30^{\circ} \cdot 1 \cdot \cos 35^{\circ}(k\_{vx}x)p, \tag{28}$$

$$F\_{\rm rx} = 56.59033 \cdot 10^{-6} \cdot (k\_{vx}x)p, \tag{29}$$

• of the nozzle-aperture pair:

$$F\_{ry} = \frac{16A\_y(k\_{vy}y)^2}{d\_{DG}^2}P\_p, \tag{30}$$

$$F\_{ry} = \frac{16 \cdot \pi/4 \cdot \left(1.65 \cdot 10^{-3}\right)^2}{\left(1.5 \cdot 10^{-3}\right)^2}\left(k\_{vy}y\right)^2 P\_p = 15.1976\left(k\_{vy}y\right)^2 P\_p. \tag{31}$$

The dynamic equations of the proportional valve forces are described at any point in the transient state after the introduction of the step function:

of the main stage:

$$F\_{d\mathbf{x}} = -F\_{t1\mathbf{x}} - F\_{r\mathbf{x}} - F\_{\mathbf{s}\mathbf{z}\mathbf{x}} - F\_{\mathbf{G}\mathbf{x}} + F\_{\mathbf{s}1\mathbf{x}} - F\_{\mathbf{s}2\mathbf{x}}.\tag{32}$$

of the control stage:

$$F\_{dy} = -F\_{I1y} + F\_{ry} + F\_{sy} - F\_{tsy} - F\_{opy} - F\_m. \tag{33}$$

The feedback loop equation is written as follows: when *U<sup>z</sup>* − *U<sup>p</sup>* − *e*<sup>0</sup> > 0,

$$\frac{du}{dt} = K\_M \left[ K\_{p1} (\mathcal{U}\_z - \mathcal{U}\_p) + K\_{p2} \left( \mathcal{U}\_z - \mathcal{U}\_p - e\_0 \right) \right],\tag{34}$$

when *U<sup>z</sup>* − *U<sup>p</sup>* + *e*<sup>0</sup> < 0,

$$\frac{du}{dt} = K\_M \left[ K\_{p1} (\mathcal{U}\_z - \mathcal{U}\_p) + K\_{p2} \left( \mathcal{U}\_z - \mathcal{U}\_p + e\_0 \right) \right],\tag{35}$$

if none of these conditions are met:

$$-e\_0 < U\_z - U\_p < e\_0, \qquad \frac{du}{dt} = K\_M K\_{p1}\left(U\_z - U\_p\right). \tag{36}$$

The output equations for the computer simulation of the operation of the hydraulic part are shown in Appendix A.
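The three feedback cases (34)–(36) can be sketched as a controller with a dead zone of half-width *e*0, reading case (34) as applying when the error exceeds *e*0; the gain values in the usage line are illustrative only:

```python
def du_dt(Uz, Up, e0, KM, Kp1, Kp2):
    """Right-hand side of Eqs. (34)-(36) for the control voltage u."""
    err = Uz - Up
    if err - e0 > 0:               # error above the dead zone, Eq. (34)
        return KM * (Kp1 * err + Kp2 * (err - e0))
    if err + e0 < 0:               # error below the dead zone, Eq. (35)
        return KM * (Kp1 * err + Kp2 * (err + e0))
    return KM * Kp1 * err          # -e0 < err < e0, Eq. (36)
```

For example, `du_dt(2.0, 1.0, 0.5, 1.0, 1.0, 1.0)` evaluates case (34) with an error of 1 and returns 1.5.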

## **3. Methodology of Multi-Valued Logic Trees with Weight Coefficients as Discrete Optimization**

The methodology presented is based on the two algorithms described in the following subsections.


## *3.1. Quine–McCluskey Algorithm for the Minimization of Partial Multi-Valued Logical Functions*

In the case of logic trees, the logical values of the variables are encoded on the branches of the tree. There can only be one Boolean variable per level of the tree, with the number of floors being equal to the number of independent variables of a given Boolean function. Representing a given Boolean function written in canonical alternative normal form (KAPN) on a logic tree involves encoding the individual canonical products on a tree path from the root to the end vertex. An individual path on the tree (from root to vertex) is a constituent of unity of the logical function, describing the realization of one possible solution. In contrast, the set of paths is the set of all possible solutions. Figure 4 shows a logic tree in which a fixed Boolean function of three variables is encoded.

**Figure 4.** Boolean function of three variables encoded on a logic tree.

In the Quine–McCluskey algorithm, a truncated alternative normal form (SAPN) and eventually a minimum alternative normal form (MAPN) are obtained by simplifying the Boolean functions encoded in KAPN (Figure 5).

**Figure 5.** Logic tree and simplified logic tree.

A minimized form of the output function (with a minimum number of literals) is subsequently obtained. However, given that so-called isolated branches exist, this is not the minimum decision form, meaning that there is no continuity between the root and the vertices. In the case of multi-valued logical functions—as in Boolean functions—the notions of incomplete gluing and elementary absorption, which are applied to the APN of a given logical function, play a fundamental role in the search for prime implicants.

A **gluing** operation is called a transformation:

$$Aj\_0(x\_r) + \dots + Aj\_{m\_r-1}(x\_r) = A, \tag{37}$$

where *r* = 1, . . . , *n* and *A* denotes an elementary partial product of which the variables of the individual literals belong to the set {*x*1, . . . , *xr*−1, *xr*+1, . . . , *xn*}.

An **incomplete gluing** operation is called a transformation:

$$Aj\_0(x\_r) + \dots + Aj\_{m\_r-1}(x\_r) = A + Aj\_0(x\_r) + \dots + Aj\_{m\_r-1}(x\_r), \tag{38}$$

where *r* = 1, . . . , *n* and *A* denotes a partial product of which the variables of the individual literals belong to the set {*x*1, . . . , *xr*−1, *xr*+1, . . . , *xn*}.

An **elementary absorption** operation is called a transformation:

$$Aj\_u(x\_r) + A = A, \tag{39}$$

where 0 ≤ *u* ≤ *m<sup>r</sup>* − 1, 1 ≤ *r* ≤ *n*, and *A* denotes a partial product of which the variables of the individual literals belong to the set {*x*1, . . . , *xr*−1, *xr*+1, . . . , *xn*}. If the above equation holds, then *A* absorbs *Aju*(*xr*). Signs (v) denote that a given elementary partial product, written using the digits of the (*m*1, . . . , *mn*)-positional system, takes part in gluing with those products that have a sign (v) in the same column. The notation marks of the gluing operation are entered separately in the columns and not in a single column, as was the case in previous literature studies of bivalent cases. In the case of equal multivalued variables *x*1, . . . , *x<sup>n</sup>* of a given logical function, the set of prime implicants is obtained as a special case from different multi-valued variables.

**Example 1.** *Using the relationship:*

$$Aj\_0(x\_r) + \dots + Aj\_{m-1}(x\_r) = A, \quad Aj\_u(x\_r) + A = A, \tag{40}$$

*where* A = A(*x*1, . . . , *xr*−1, *xr*+1, . . . , *xn*)*,*

$$j\_{u}(x\_{r}) = \begin{cases} m-1, & u = x\_{r} \\ 0, & u \neq x\_{r} \end{cases} \qquad 0 \le u \le m-1. \tag{41}$$
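The gluing (37) and elementary-absorption (39) operations can be illustrated on products encoded as digit tuples, with `None` standing for the dash of an absent literal (the encoding and function names are ours):

```python
M = (3, 3, 3)  # value counts m_1, m_2, m_3 of the variables

def glue(products, r):
    """Glue families identical except at position r covering 0..m_r-1."""
    out = set(products)
    for p in products:
        family = {p[:r] + (v,) + p[r + 1:] for v in range(M[r])}
        if family <= set(products):      # full family present: Eq. (37)
            out -= family
            out.add(p[:r] + (None,) + p[r + 1:])
    return out

def absorbs(a, b):
    """True if product a absorbs b, Eq. (39)."""
    return all(x is None or x == y for x, y in zip(a, b))
```

For instance, `glue({(0, 2, 0), (0, 2, 1), (0, 2, 2)}, 2)` returns `{(0, 2, None)}`, the product *j*0(*x*1)*j*2(*x*2) with the *x*3 literal glued away, which then absorbs each of the original three terms.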

The successive steps of minimizing a multi-valued logical function can be represented as follows:


Finally, two MAPNs of a given logic function are obtained, written using m-positional system numbers: {(02-), (20-), (1-1), (21-), (-21)} and {(02-), (20-), (1-1), (21-), (2-1)}.

The rank of importance of successive decision variables is determined using complex alternative normal forms through the swapping of floors in logical decision trees. The swapping of logical tree floors in complex, multi-valued logical functions establishes the rank of importance of logical variables from the most important (at the root) to the least important (at the top). The bivalent quality indicator generalizes to a multi-valued one, (*C<sup>k</sup>* − *kim<sup>i</sup>*) + (*k<sup>i</sup>* + *K<sup>i</sup>*), where *C<sup>k</sup>* represents the number of branches of the *k*-th floor, *k<sup>i</sup>* is the simplification factor on the *k*-th floor of the *m<sup>i</sup>*-value variable, and *K<sup>i</sup>* represents the number of branches of the (*k* − 1)-th floor from which the non-simplifying branches of the *k*-th floor are formed. In this way, it is possible to obtain the minimum complexity alternative normal form (MZAPN) of a given logical function without isolated branches on the decision tree and with a concomitant minimum number of real (realizable) branches, which in particular can be considered to be elementary design guidelines. All transformations refer to the Quine–McCluskey algorithm for minimising individual partial multi-valued logical functions.

**Example 2.** *A multi-valued logical function f(x*1*, x*2*, x*3*), where x*1*, x*<sup>2</sup> *and x*<sup>3</sup> *each take the values 0, 1 and 2; with a numerically recorded KAPN: 100, 010, 002, 020, 101, 110, 021, 102, 210, 111, 201, 120, 022, 112, 211, 121, 212, 221 and 122; and with one MZAPN which, after applying the Quine–McCluskey algorithm for minimising individual partial multi-valued logical functions, has 13 literals:*

$$\begin{array}{c} f(x\_{1}, x\_{2}, x\_{3}) = j\_{0}(x\_{1})(j\_{0}(x\_{2})j\_{2}(x\_{3}) + j\_{1}(x\_{2})j\_{0}(x\_{3}) + j\_{2}(x\_{2})) \\ + j\_{1}(x\_{1}) + j\_{2}(x\_{1})(j\_{0}(x\_{2})j\_{1}(x\_{3}) + j\_{1}(x\_{2}) + j\_{2}(x\_{2})j\_{1}(x\_{3})). \end{array} \tag{42}$$
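As a sanity check of Example 2, the MZAPN (42) can be evaluated over all 27 points; it takes the value *m* − 1 exactly on the 19 canonical products of the KAPN. The max/min encoding of sum/product and the string representation are ours:

```python
m = 3

def j(u, x):
    """Multi-valued literal: m-1 if x == u, else 0."""
    return m - 1 if x == u else 0

def f(x1, x2, x3):
    """MZAPN of Eq. (42), with + as max and juxtaposition as min."""
    return max(
        min(j(0, x1), max(min(j(0, x2), j(2, x3)),
                          min(j(1, x2), j(0, x3)), j(2, x2))),
        j(1, x1),
        min(j(2, x1), max(min(j(0, x2), j(1, x3)),
                          j(1, x2), min(j(2, x2), j(1, x3)))),
    )

KAPN = {"100", "010", "002", "020", "101", "110", "021", "102", "210",
        "111", "201", "120", "022", "112", "211", "121", "212", "221",
        "122"}
on_set = {f"{a}{b}{c}" for a in range(m) for b in range(m)
          for c in range(m) if f(a, b, c) == m - 1}
assert on_set == KAPN
```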

Figure 6 shows all possible ZKAPNs of a given multi-valued logical function.

**Figure 6.** ZKAPN and MZAPN of the given logical function from Example 2.


1. The first stage of minimization due to *x*3:

2. The second stage of minimization due to *x*1:


Further minimisation steps for other variables:


## *3.2. Generalization of the Quine–McCluskey Algorithm for the Minimization of Partial Multi-Valued Logical Functions with Multi-Valued Weighting Factors*

In multi-valued logical functions with weighted products it is possible to apply the Quine–McCluskey algorithm for the minimization of multi-valued functions. As with the minimization of multi-valued logical functions without weighting coefficients, in the algorithm the elementary products are written as numbers in the corresponding positional systems. Additional elements and operations are introduced to account for the weighting coefficients.

In partial data of multi-valued logical functions *fi*(*x*1, . . . , *xn*) *n* of variables (*m*1, . . . , *mn*), value-added gluing and pseudo-gluing operations should include weighting factors (*wn*, *wn*−1, *wn*−2, . . . , *w*1) assigned to the corresponding multi-valued logical products.

The Quine–McCluskey algorithm for minimizing multi-valued logical functions is built from *n* columns with (*w*1, . . . , *wn*) weighting factors.

Symbols indicating pseudo-gluing (V) and gluing (v) sequentially relative to groups of indices differing by one are placed in the columns corresponding to the values of the weighting factors for the corresponding logical products.

Given multi-valued weighting coefficients, individual (parallel) pseudo-gluing operations, performed sequentially against groups of indices differing by at least one and containing at most (*m<sup>i</sup>* − 1) elements, can proceed in canonical products with different weighting coefficients.

The characters appear in different columns. In addition, they may be in columns with a corresponding coefficient (*wn*, *wn*−1, *wn*−2, . . . , *w*1). Therefore, the columns with (*w*1, . . . , *wn*) weighting coefficients introduce position numbers *p<sup>i</sup>* , with *i* = 1,...,*n*, which is useful for calculating the quality of the minimization in further stages.

Definitions of 'pure' and 'impure' gluing are introduced for gluing operations of individual partial multi-valued logical functions with weighted coefficients.

**Definition 1.** *The pure gluing operation is the gluing of multi-valued canonical elementary products according to the Quine–McCluskey algorithm with the same weighting factor w<sup>i</sup>* .

A pure gluing operation is a transformation of:

$$w\_i A j\_0(\mathbf{x}\_r) + \dots + w\_i A j\_{m\_r - 1}(\mathbf{x}\_r) = w\_i A \,\,\, \,\tag{43}$$

where *r* = 1, . . . , *n* and *A* represents a partial product of which the variables of the individual literals belong to the set {*x*1, . . . , *xr*−1, *xr*+1, . . . , *xn*}. In *n* *m*-value variables, the weighting factor before the partial canonical product takes values in the interval *w*1, . . . , *wn*, with *w<sup>j</sup>* = *wj*−<sup>1</sup> + *wj*−<sup>2</sup> + . . . + *w*<sup>1</sup> and *j* = 2, . . . , *n*.

**Definition 2.** *The gluing operation according to the Quine–McCluskey algorithm of multi-valued canonical elementary products with different values of weight coefficients* (*w*1, . . . , *wn*) *is impure gluing.*

The impure gluing operation for multi-valued canonical elementary products is performed with respect to the weighting factor with the smallest value, i.e., *min*{*w*1, . . . , *wn*}. An impure gluing operation is a transformation:

$$\begin{array}{l} w\_{0}Aj\_{0}(\mathbf{x}\_{r}) + \ \ldots + w\_{m\_{r}-1}Aj\_{m\_{r}-1}(\mathbf{x}\_{r})\\ = (\min\{w\_{0}, \ldots, w\_{m\_{r}-1}\}) \cdot A + \sum\_{s=i\_{0}, \ldots, i\_{m\_{r}-2}} w\_{s} \cdot A \cdot j\_{s}(\mathbf{x}\_{r}) \end{array} \tag{44}$$

where *r* = 1, . . . , *n*, *w<sup>s</sup>* > *min*{*w*0, . . . , *wmr*−1}, and *A* denotes a partial product of which the variables of the individual literals belong to the set {*x*1, . . . , *xr*−1, *xr*+1, . . . , *xn*}. In *n* (*m*1, . . . , *mn*)-value variables, the weighting factor *w<sup>i</sup>* before the partial canonical product takes values in the interval *w*1, . . . , *wn*, with *w<sup>j</sup>* = *wj*−<sup>1</sup> + *wj*−<sup>2</sup> + . . . + *w*<sup>1</sup>, where *j* = 2, . . . , *n*.
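A toy sketch of the impure gluing (44): the glued product takes the minimum weight, and, following Eq. (44), each canonical product whose weight exceeds that minimum also remains in the result with its weight *w<sup>s</sup>* (the dictionary encoding and names are ours):

```python
def impure_glue(weighted, r, m_r):
    """Impure gluing (Eq. 44) of a full family of products on position r.

    weighted: dict mapping digit tuples to their weighting factors w_s.
    """
    assert {p[r] for p in weighted} == set(range(m_r))  # full family
    w_min = min(weighted.values())
    base = next(iter(weighted))
    result = {base[:r] + (None,) + base[r + 1:]: w_min}  # min-weight glue
    for prod, w in weighted.items():
        if w > w_min:            # surviving weighted canonical product
            result[prod] = w
    return result
```

For example, `impure_glue({(1, 0): 2, (1, 1): 1, (1, 2): 2}, 1, 3)` yields `{(1, None): 1, (1, 0): 2, (1, 2): 2}`.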

**Definition 3.** *An incomplete gluing operation is a transformation that retains the original records to be glued after the algorithm has been executed in the result.*

Given that there is an isomorphic interpretation of logical transformations, the Quine– McCluskey algorithm for minimizing individual partial multi-valued logical functions can be considered with the weighting factors mentioned, which is important for describing the rank validity of design guidelines.

**Example 3 with weighting factors.** *In a partial logical function f*(*x*1, *x*2, *x*3)*, written numerically in KAPN: 010, 100, 002, 011, 110, 012 and 112, the Quine–McCluskey algorithm for minimizing logical functions with multi-valued weight coefficients yields one MZAPN which has 11 literals of f*(*x*1, *x*2, *x*3)*, i.e.,*

$$\begin{array}{l} f(x\_{1}, x\_{3}, x\_{2}) = j\_{0}(x\_{1})(1j\_{0}(x\_{3})j\_{1}(x\_{2}) + 2j\_{1}(x\_{3})j\_{1}(x\_{2}) + 2j\_{2}(x\_{3})) \\ + j\_{1}(x\_{1})(1j\_{2}(x\_{3})j\_{1}(x\_{2}) + 2j\_{0}(x\_{3})j\_{1}(x\_{2})) \end{array} \tag{45}$$

*while the other ZAPNs f*(*x*1, *x*2, *x*3)*, f*(*x*2, *x*1, *x*3)*, f*(*x*2, *x*3, *x*1) *and f*(*x*3, *x*1, *x*2) *of the given logical function have 12 literals each, and f*(*x*3, *x*2, *x*1) *has 13 literals:*

$$\begin{array}{l} f(\mathbf{x}\_{2}, \mathbf{x}\_{3}, \mathbf{x}\_{1}) = j\_{0}(\mathbf{x}\_{2})(1j\_{0}(\mathbf{x}\_{3})j\_{1}(\mathbf{x}\_{1}) + 2j\_{2}(\mathbf{x}\_{3})j\_{0}(\mathbf{x}\_{1})) \\ + j\_{1}(\mathbf{x}\_{2})(2j\_{0}(\mathbf{x}\_{3})j\_{1}(\mathbf{x}\_{1}) + 2j\_{1}(\mathbf{x}\_{3})j\_{0}(\mathbf{x}\_{1}) + 2j\_{1}(\mathbf{x}\_{3})j\_{0}(\mathbf{x}\_{1})) \end{array} \tag{46}$$

$$\begin{aligned} f(\mathbf{x\_2}, \mathbf{x\_1}, \mathbf{x\_3}) &= j\_0(\mathbf{x\_2}) (2j\_0(\mathbf{x\_1})j\_2(\mathbf{x\_3}) + 1j\_1(\mathbf{x\_1})j\_0(\mathbf{x\_3})) \\ &+ j\_1(\mathbf{x\_2}) (2j\_0(\mathbf{x\_1})(j\_1(\mathbf{x\_3}) + j\_2(\mathbf{x\_3})) + j\_1(\mathbf{x\_1})(2j\_0(\mathbf{x\_3}) + 1j\_2(\mathbf{x\_3}))) \end{aligned} \tag{47}$$

$$\begin{array}{l} f(\mathbf{x}\_{1}, \mathbf{x}\_{2}, \mathbf{x}\_{3}) = j\_{0}(\mathbf{x}\_{1})(2j\_{0}(\mathbf{x}\_{2})j\_{2}(\mathbf{x}\_{3}) + 2j\_{1}(\mathbf{x}\_{2})(j\_{1}(\mathbf{x}\_{3}) + j\_{2}(\mathbf{x}\_{3}))) \\ + j\_{1}(\mathbf{x}\_{1})(1j\_{0}(\mathbf{x}\_{2})j\_{0}(\mathbf{x}\_{3}) + j\_{1}(\mathbf{x}\_{2})(2j\_{0}(\mathbf{x}\_{3}) + 1j\_{2}(\mathbf{x}\_{3}))) \end{array} \tag{48}$$

$$\begin{array}{l} f(\mathbf{x}\_{3},\mathbf{x}\_{1},\mathbf{x}\_{2}) = j\_{0}(\mathbf{x}\_{3})(1j\_{0}(\mathbf{x}\_{1})j\_{1}(\mathbf{x}\_{2}) + 2j\_{1}(\mathbf{x}\_{1})j\_{1}(\mathbf{x}\_{2})) \\ + 2j\_{1}(\mathbf{x}\_{3})j\_{0}(\mathbf{x}\_{1})j\_{1}(\mathbf{x}\_{2}) + j\_{2}(\mathbf{x}\_{3})(2j\_{0}(\mathbf{x}\_{1}) + 1j\_{1}(\mathbf{x}\_{1})j\_{1}(\mathbf{x}\_{2})) \end{array} \tag{49}$$

$$\begin{array}{l} f(\mathbf{x}\_{3}, \mathbf{x}\_{2}, \mathbf{x}\_{1}) = j\_{0}(\mathbf{x}\_{3})(1j\_{0}(\mathbf{x}\_{2})j\_{1}(\mathbf{x}\_{1}) + 2j\_{1}(\mathbf{x}\_{2})j\_{1}(\mathbf{x}\_{1})) \\ + 2j\_{1}(\mathbf{x}\_{3})j\_{1}(\mathbf{x}\_{2})j\_{0}(\mathbf{x}\_{1}) + j\_{2}(\mathbf{x}\_{3})(2j\_{0}(\mathbf{x}\_{2})j\_{0}(\mathbf{x}\_{1}) + 2j\_{1}(\mathbf{x}\_{2})j\_{0}(\mathbf{x}\_{1})). \end{array} \tag{50}$$

The following are the successive steps in the minimisation of logical functions due to given decision variables:


## **Tree interpretation.**

Figure 7 shows the MZAPN of the multi-valued logical function from Example 3.

[Minimization table with weighting factors *w*<sup>i</sup> = 2 and *w*<sup>i</sup> = 1 and position numbers *p<sup>i</sup>* = 1, . . . , 6 not reproduced here; gluing marks (v) and pseudo-gluing marks (V) are entered column by column.]

**Figure 7.** MZAPN logic tree of the multivalued logic function *f*(*x*<sup>1</sup>, *x*3, *x*2) from Example 3.

The proposed methodology can be described by the flow chart shown in Figure 8.

**Figure 8.** Flow chart of the proposed method (with example runs for the weighting factor *w*<sup>i</sup> = 2).

The structuring of the described problem takes into account the methodology of multi-valued logical trees, allowing for the introduction of appropriate formal notations and even making it possible to combine complex quantitative and qualitative features with different degrees of detail according to the principles of a multidimensional morphological array. Therefore, there is no need to extend the generation process to sub-arrays when using a multidimensional morphological array, as all information about the varieties of the main and detailed features and their numerous modifications can be immediately stored in this array and marked on the variant tree.

In addition, the morphological and decision tables can be encoded analytically and numerically according to the definitions and theorems of the logic of multi-valued decision processes. This enables a variant way of identifying and classifying information in computer science terms when seeking and modifying solutions in the design process.

In such a situation it is possible to introduce CAD, e.g., for the generation of all theoretical variants of the designed system, selection for realizability, the search for realizable solutions and—most importantly—realizable sub-solutions, etc. In order to ensure the stable operation of the actual system, model tests are carried out, on the basis of which the relevant parameters are selected. The phenomena occurring during the flow of a medium are quite often not precisely defined, so it is necessary to identify an analytical model when carrying out such studies.

## **4. Application of the Methodology of Multi-Valued Logic Trees with Weighting Factors in the Optimization of a Proportional Valve**

Tests have already been carried out for valves of the direct-acting UPZ type [4], which are designed to regulate the upstream pressure of steam and non-flammable, chemically inert gases and liquids regardless of the pressure at their outlet. Multivalent weighting factors were not considered in the tested valve class. For this reason, it was decided that an improved Quine–McCluskey algorithm with weighting factors for the hydraulic proportional valve would be used. Therefore, three 'novelties' are presented in this paper.

Multi-valued logic algorithms are among the optimization methods presented. For example, in [24] the authors presented a description of the dynamics of molecular states caused by a sequence of laser pulses using multi-valued logic. In turn, the authors of [25] used multi-valued logical schemes to calculate significance measures based on incompletely defined data. This method is based on the definition of a mathematical model of the analyzed system in the form of a structure function that determines the correlation between the system reliability and the states of its components.

In [27], the authors described the historical and technical background of MVL, as well as the areas of present and future applications of quadrivalent logic; the paper was also intended to serve as a guide for non-specialists. The wide application of multi-valued logic, in particular in microelectronic circuits, is presented in [28]. Additionally, there are many original works describing the practical application of multi-valued logic trees.

In addition, there are other works in which multi-valued decision trees and logic algorithms have been applied. For example, the authors of [29] presented the applications of machine learning and classification and regression trees (CART) in medicine. Specifically, they presented the concept of a gradient-boosting algorithm. The authors of [30] presented the application of a rotation forest with decision trees as a base classifier and a new ensemble model in the spatial modeling of groundwater potential. The use of fault-tree analysis to calculate system-failure probability bounds from qualitative data in an intuitive, fuzzy environment is presented in paper [31]. Meanwhile, in paper [32] the authors adopted component fault trees (CFTs) to support fault tree analysis and failure mode and effects analysis as extensions of SysML models. Boolean decision support methods were presented in paper [33]. A very modern optimization method was proposed by the authors of [34]: the use of root trees. The root-tree algorithm was used for high-order sliding mode control using a super-twisting algorithm based on the DTC scheme for DFIG.

The initial conditions of the differential equations can be determined by setting *dx<sup>i</sup>*/*dt* = 0. The simulations were performed using the Matlab/Simulink package:

$$\begin{cases} -801.2102 \cdot 10^{-3} (k\_{\rm tr} \mathbf{x}\_1) \mathbf{x}\_3 - 147224.3 \mathbf{x}\_1 - 1925.155 + 5.3792244 \cdot 10^{-3} \left[ (1 - 10^3 \mathbf{x}\_1) \mathbf{x}\_3 - \mathbf{x}\_6 \right] = 0, \\ 0.2851216 \cdot 10^9 (1 - 1.32 \cdot 10^{-9} \mathbf{x}\_3) - 0.5279061 \cdot 10^9 (k\_{\rm tr} \mathbf{x}\_1) \sqrt{\mathbf{x}\_3} - 7.65 (\mathbf{x}\_3 - \mathbf{x}\_6) - 0.3227777 \cdot 10^{12} \mathbf{Q}\_{\rm afb} = 0, \\ 0.7123874 \cdot 10^{-4} \mathbf{x}\_7 + 418.8773 (k\_{\rm r} \mathbf{x}\_4)^2 \mathbf{x}\_7 - 33.33333 \mathbf{F}\_{\rm a} = 0, \\ 0.276556 \cdot 10^5 (\mathbf{x}\_3 - \mathbf{x}\_6) - 0.312234 \cdot 10^{12} (k\_{\rm r} \mathbf{x}\_4) \sqrt{\mathbf{x}\_6} = 0, \\ \mathbf{x}\_7 = \mathbf{x}\_6 - 0.2025169 \cdot 10^6 (k\_{\rm r} \mathbf{x}\_4) \sqrt{\mathbf{x}\_6}. \end{cases} (51)$$

Assuming that:

$$\mathcal{U}\_z = \mathcal{U}\_p = 1\,\text{V}.\tag{52}$$

Hence, we obtain:

$$\begin{aligned} \mathcal{U}\_z &= \mathcal{U}\_p = 1 \; \text{V}, \\ \mathcal{Q}\_{odb} &= 12/6 \cdot 10^{-4} \left[ \frac{m^3}{s} \right]. \end{aligned} \tag{53}$$
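The steady-state computation above (setting the derivatives to zero and solving the resulting algebraic system, as in Eq. (51)) can be sketched numerically. The toy two-state system below is an assumption chosen only to illustrate the procedure; it is not the valve model, whose coefficients and dimensions differ.

```python
# Illustrative sketch (NOT the paper's valve model): find a steady state
# by setting dx_i/dt = 0 and solving the algebraic system with Newton
# iteration. The hypothetical 2-state system keeps the example small.

def f(x):
    x1, x2 = x
    # dx1/dt and dx2/dt of a hypothetical 2-state system
    return [x2 - x1 ** 2, 1.0 - x1 - 0.5 * x2]

def jacobian(x):
    x1, _ = x
    return [[-2.0 * x1, 1.0], [-1.0, -0.5]]

def solve2(a, b):
    # Solve the 2x2 linear system a * d = b by Cramer's rule.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def steady_state(x0, tol=1e-12, max_iter=50):
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        d = solve2(jacobian(x), [-fx[0], -fx[1]])
        x = [x[0] + d[0], x[1] + d[1]]
    return x

x_ss = steady_state([1.0, 1.0])  # converges to x1 = sqrt(3)-1, x2 = x1^2
```

For the actual system (51), the same Newton scheme would be applied to the five residual equations with the valve's coefficients, exactly as a Matlab/Simulink trim computation does.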

*The Importance of the Design and/or Operational Parameters of a Hydraulic Proportional Valve*

In the optimization process, the parameters of the proportional valve that were varied while observing the *Q* flow rate and *p* pressure were the *Kp*1·*Kp*<sup>2</sup> regulator gain (treated as a complex variable), the *Qodb* receiver flow rate (depending on the impulse input of the *U<sup>z</sup>* control voltage), and the *F<sup>m</sup>* magnetic force.

The numerical values of the tested parameters were selected for the analysis. They were coded by the authors of this work with multi-valued logical decision variables:

$$\begin{aligned} (K\_{p1} \cdot K\_{p2}) &= 30 \sim 0; \\ (K\_{p1} \cdot K\_{p2}) &= 40 \sim 1; \\ (K\_{p1} \cdot K\_{p2}) &= 50 \sim 2; \\ (K\_{p1} \cdot K\_{p2}) &= 60 \sim 3; \\ F\_m &= 1.96 \text{[N]} \sim 0; \\ F\_m &= 2.96 \text{[N]} \sim 1; \\ F\_m &= 3.96 \text{[N]} \sim 2; \\ F\_m &= 4.96 \text{[N]} \sim 3; \\ Q\_{rz} &= 36 \to 24 \left[ \text{dm}^3/\text{min} \right] \sim 0; \\ Q\_{rz} &= 24 \to 12 \left[ \text{dm}^3/\text{min} \right] \sim 1; \\ Q\_{rz} &= 36 \to 12 \left[ \text{dm}^3/\text{min} \right] \sim 2. \end{aligned} \tag{54}$$
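The coding in Eq. (54) can be expressed as a simple lookup: each tested value of the *Kp*1·*Kp*2 gain, the *F<sup>m</sup>* magnetic force, and the *Qrz* flow-rate change is mapped to a multi-valued logic level. The dictionary keys below mirror the assignments listed above; the tuple encoding of the "36 → 24" style ranges is an assumption made here for readability.

```python
# Minimal sketch of the multi-valued coding in Eq. (54).
KP_CODE = {30: 0, 40: 1, 50: 2, 60: 3}              # Kp1*Kp2 gain
FM_CODE = {1.96: 0, 2.96: 1, 3.96: 2, 4.96: 3}      # Fm [N]
QRZ_CODE = {(36, 24): 0, (24, 12): 1, (36, 12): 2}  # Qrz [dm^3/min] ranges

def encode(kp_product, fm, qrz_range):
    """Return the multi-valued code word (Kp, Fm, Qrz) for one variant."""
    return (KP_CODE[kp_product], FM_CODE[fm], QRZ_CODE[qrz_range])

word = encode(50, 1.96, (24, 12))  # -> (2, 0, 1)
```

Each code word then labels one branch of the multi-valued logical tree analyzed later in the paper.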

In the operation of the relief valve, the authors introduced restrictions on the *Q* and *p* design parameters in terms of the stabilization time *t<sup>w</sup>*: *t<sup>w</sup>* < 0.48 *t*0. Subsequently, dynamic calculations of the valve were carried out under this limitation, and 23 charts were selected. The code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* design parameters are presented in Table 1.

Furthermore, in the code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* design parameters, multi-valued *w<sup>i</sup>* weighting factors are introduced, similar to the relief valve. The greater the weighting number, the faster the *Q* and *p* functions reach a stable state (*t<sup>t</sup> > t<sup>j</sup>*).

The following weighting factors were adopted in the *t<sup>w</sup>* < 0.48 *t*<sup>0</sup> limitation:


Table 1 presents the code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* design parameters, taking into account the multi-valued weighting factors and the *t<sup>w</sup> <* 0.48 *t*<sup>0</sup> limitation.

Notably, the value of the weighting factor for the code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* design parameters in Table 1 is the minimum of the coefficients defined separately for the *Q* and *p* functions. If one of the functions stabilizes faster than the other, then the canonical product for the same code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters should be assigned the smaller weighting factor.
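The weighting rule just described can be sketched in a few lines: a canonical product receives the minimum of the factors determined separately from the *Q* and *p* runs, so the slower-stabilizing function dominates. The sample weight pairs below are hypothetical values for illustration, not Table 1 data.

```python
# Sketch of the rule above: the weight of one code change of the
# Kp1*Kp2, Qrz, Fm parameters is the minimum of the weights found
# separately for the Q and p functions. Sample values are assumptions.

def product_weight(w_q, w_p):
    """Weight assigned to one canonical product (code change)."""
    return min(w_q, w_p)

# hypothetical (w_Q, w_p) pairs for a few code changes (Kp, Fm, Qrz)
runs = {(2, 1, 2): (3, 2), (3, 1, 0): (3, 3), (1, 0, 2): (1, 2)}
weights = {code: product_weight(*w) for code, w in runs.items()}
```

With these assumed runs, the code change (2, 1, 2) receives weight 2, because its *p* function stabilizes more slowly than its *Q* function.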


**Table 1.** KAPN for the *Kp*<sup>1</sup> , *Kp*<sup>2</sup> , *Qrz* and *F<sup>m</sup>* parameter code data, taking into account the *w<sup>i</sup>* weighting factors.

In the system of multi-valued logic functions with weighting factors, weighting factors are assigned separately for each of the functions.

Figures 9–13 show the time periods of the *Q* and *p* functions with the weighting factor intervals marked *w<sup>i</sup>*: *p* (red color) and *Q* (blue color).

**Figure 9.** The *Q* and *p* time periods for code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters where *Qrz*: (**a**) 2(212), (**b**) 2(211) and (**c**) 2(210). Runs for a weighting factor value of *w*<sup>i</sup> = 2.

*Axioms* **2022**, *11*, x FOR PEER REVIEW 22 of 35

**Figure 10.** The *Q* and *p* time periods for code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters where *Qrz*: (**a**) 3(310), (**b**) 3(110) and (**c**) 2(010). Runs for a weighting factor value of *w*<sup>i</sup> = 3 for (**a**,**b**) and *w*<sup>i</sup> = 2 for (**c**).

**Figure 11.** The *Q* and *p* time periods for code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters where *Qrz*: (**a**) 2(122), (**b**) 2(322) and (**c**) 2(222). Runs for a weighting factor value of *w*<sup>i</sup> = 2.


**Figure 12.** The *Q* and *p* time periods for code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters where *Qrz*: (**a**) 3(023), (**b**) 1(021) and (**c**) 1(220). Runs for a weighting factor value of *w*<sup>i</sup> = 3 for (**a**) and *w*<sup>i</sup> = 1 for (**b**,**c**).

**Figure 13.** The *Q* and *p* time periods for code changes of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters where *Qrz*: (**a**) 2(320), (**b**) 1(120) and (**c**) 1(020). Runs for a weighting factor value of *w*<sup>i</sup> = 2 for (**a**) and *w*<sup>i</sup> = 1 for (**b**,**c**).

The multi-valued logical trees with the weighting factors from Table 1 are shown in Figure 14.

**Figure 14.** Multi-valued logical tree of the *Kp*1·*Kp*2, *Qrz* and *F<sup>m</sup>* parameters with (**a**) 24 branches, (**b**) 25 branches, (**c**) 30 branches, and (**d**) 31 branches. 1, 2, 3—the values of the weighting factors *w*<sup>i</sup>.

For the *t<sup>w</sup>* < 0.48 *t*<sup>0</sup> criterion of limitation, one optimal multi-valued logical tree is presented in Figure 14. For a hydraulic proportional valve, the most crucial parameter is the *Qodb* flow rate of the receiver (depending on the *U<sup>z</sup>* step function of the control voltage).

One of the issues presented in this paper is the application of Boolean equations in the optimization of machine systems. This paper generalizes the Quine–McCluskey algorithm for minimizing multi-valued logical functions with multi-valued weight coefficients. In addition, a procedure for the combinatorial solution of weighted multi-valued systems of logical equations describing design guidelines in terms of morphological analysis with the Rosser–Turquette axioms is discussed.
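The merging step on which the Quine–McCluskey algorithm rests can be sketched briefly; the snippet below shows the binary special case only (the paper's contribution generalizes it to multi-valued functions with weight coefficients). Two implicants merge when they differ in exactly one defined position; `-` marks the eliminated literal.

```python
# Minimal sketch of the Quine-McCluskey merging step (binary case only;
# the paper generalizes this to multi-valued logic with weights).

def merge(t1, t2):
    """Merge two implicants (strings over '0','1','-') differing in one place."""
    diff = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diff) != 1:
        return None                      # must differ in exactly one position
    i = diff[0]
    if '-' in (t1[i], t2[i]):
        return None                      # a don't-care cannot be merged away
    return t1[:i] + '-' + t1[i + 1:]     # eliminate the differing literal

merged = merge('101', '100')             # differ only in the last bit
```

Repeating this merge over all minterm pairs, then pairs of merged terms, yields the prime implicants; the weighted multi-valued version additionally propagates the minimum weight of the merged terms.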

The application of the methodology of multi-valued logic trees with weighting coefficients for relief valves allows for the determination of alternative sets of design guidelines to find the most crucial design guidelines in any fixed design and/or operational parameters while ensuring that the constraints and extremes of the criterion are met. In particular, the novelty presented in this paper is:


## **5. Conclusions**

This paper presents the use of multi-valued logical trees with weighting factors to determine the importance of the constructional and operational parameters of a two-stage proportional relief valve. As has been demonstrated by this research, relief valves do not keep up with a pressure increase in the system, react with a certain delay, and can vibrate under fixed operating conditions.

The above incorrect response of the valves usually occurs during the transition period. Hence, it is necessary to carry out model tests of valves in the transition state and to determine the importance of the operational parameters directly affecting their dynamics. Model tests aim to select essential parameters to ensure the stability of the real system. It is crucial to determine the importance of design and/or operational parameters during the model verification and subsequently select the appropriate optimization procedure.

This work discusses the procedure of a combinatorial solution for weighted multi-valued systems of logic equations describing the design guidelines in terms of morphological analysis with the preservation of the Rosser–Turquette axioms. It has been shown that, in general, the minimization of logic functions with weight coefficients may be the same as without weight coefficients. However, a better reflection of the physical models of hydraulic relief systems was obtained through mathematical models. The literature shows that various coefficients of logical products have not been taken into account in the separable and common minimization of systems of multi-valued logic equations.

The following three '**novelties**' are presented in this paper:


simulation solutions should be, at least to some extent, verified by reliable experimental studies.

## *Limits of the Methodology Used*

Each of the KAPN products should be assigned corresponding discrete changes in parameter values. Therefore, it is not possible to fully apply the developed methods in continuous linear and non-linear optimisation. However, for construction/engineering purposes, the use of discrete analysis is preferable in the opinion of the authors (who have computational experience). If one were to change the numerical values of the input variables in a mathematical model, one would obtain changes in the values of the output variables. In order to obtain a different planned behaviour of a system (component), one can often make many changes to the numerical values of the input variables. Concerns related to change include: which values might be changed, how the change might be made (by increasing values, keeping them unchanged, or decreasing them), or in what order the variables might be changed, etc. Such conjecturing is akin to subjectively (according to a given designer) changing the numerical values in a mathematical model. This means that another designer, according to their own experience, may subjectively redesign the layout (element) quite differently for new work conditions that are identical to those of the previous designer.

The multi-valued weighting system of logical equations describing the design guidelines can be minimized separately or together with logical equivalence. Still, even in the bivalent (Boolean) case, the common minimization is not inferior to the separate minimization in terms of literal multiplicity. Increasing, reducing, or keeping the numerical values unchanged in the process of redesigning a system for other operating conditions can be coded using multi-valued logic while sets of design guidelines can be presented as sums of multi-valued logical products.

Model tests are particularly important in the design of new valves. The design parameters that significantly affect the dynamics of valves cannot be selected randomly (depending on the assumptions and experience of the designer). Their values should be closely related to the permissible peak overload of the controlled signal, the operation speed, the time constant, and vibration elimination. Model tests will be more useful if the described valve is outlined more accurately in the transition state. Thus, building a correct analytical equation that captures the dynamic behaviour of a given valve determines the value of any theoretical considerations.

In further research, it will also be necessary to take into account the modified methodology of multi-valued logic trees as parametrically-playing out graphs, i.e., a heuristic simulation method for solving linear–dynamic decision models for relief valves. In the instance analyzed, a number of simplifications were additionally taken into account; for example, the impact of the closing element hitting the valve seat was not considered and the effect of the valve-wall compressibility was not taken into account. These factors will be considered in further papers. Additionally, a new control system using optimised proportional–directional control valves, throttling valves, and flow controllers will be proposed in further studies.

**Author Contributions:** Conceptualization, A.D., M.K. and K.U.; methodology, A.D. and K.U.; software, A.D. and P.S.; validation, A.D., M.K. and K.U.; formal analysis, A.D., R.C., M.S., M.K., K.U. and P.S.; investigation, A.D., M.S., M.K., K.U. and A.M.D.; resources, M.K. and A.M.D.; data curation, A.D., R.C., M.S., M.K., K.U. and P.S.; writing—original draft preparation, A.D., R.C., M.S., M.K., K.U. and P.S.; writing—review and editing, A.D., R.C., M.S., M.K., K.U. and A.M.D.; visualization, A.D., R.C., M.S., M.K. and K.U.; supervision, A.D. and M.K.; project administration, P.S.; funding acquisition, A.D., M.S., M.K. and K.U. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **Nomenclature**


## **Appendix A The Output Equations for the Computer Simulation of the Operation of the Hydraulic Part and the Model**

The output equations to simulate the operation of a hydraulic part are presented in the following form:

$$\begin{cases} 1:\; \dfrac{dx_1}{dt} = x_2, \\ 2:\; \dfrac{dx_2}{dt} = -14846.301 x_2 - 801.2102 \cdot 10^{-3} (k_{vx} x_1) x_3 - 147224.3 x_1 - 1925.135 + 5.3792244 \cdot 10^{-3} \left[ (1 - 10^3 x_1) x_3 - x_6 \right], \\ 3:\; \dfrac{dx_3}{dt} = 0.2851216 \cdot 10^9 (1 - 1.32 \cdot 10^{-9} x_3) - 0.5279061 \cdot 10^9 (k_{vx} x_1) \sqrt{x_3} - 0.1226361 \cdot 10^9 x_2 - 7.65 (x_3 - x_6) - 0.3227777 \cdot 10^{12} Q_{odb}, \\ 4:\; \dfrac{dx_4}{dt} = x_5, \\ 5:\; \dfrac{dx_5}{dt} = -5.5688865 \cdot 10^3 x_5 - 0.840264 \cdot 10^6 x_5^2 \,\mathrm{sign}\, x_5 + 0.7123874 \cdot 10^{-4} x_7 + 418.87733 (k_{vy} x_4)^2 x_7 - 2.616 \,\mathrm{sign}\, x_5 - 33.33333 F_m, \\ 6:\; \dfrac{dx_6}{dt} = 0.276556 \cdot 10^5 (x_3 - x_6) - 0.312234 \cdot 10^{12} (k_{vy} x_4) \sqrt{x_6} + 0.4432633 \cdot 10^{12} x_3 - 2.060625 \cdot 10^9 x_5, \\ 7:\; x_7 = x_6 - 0.2025169 \cdot 10^6 (k_{vy} x_4) \sqrt{x_6} - 1328.096 x_5. \end{cases} \tag{A1}$$

**Figure A1.** Model in Matlab. 1–6 describe inputs and outputs.

## **References**

1. Bury, P.; Stosiak, M.; Urbanowicz, K.; Kodura, A.; Kubrak, M.; Malesińska, A. A Case Study of Open- and Closed-Loop Control of Hydrostatic Transmission with Proportional Valve Start-Up Process. *Energies* **2022**, *15*, 1860. https://doi.org/10.3390/en15051860.
2. […] Proportional Relief Valve. *Proc. Inst. Mech. Eng. Part I J. Syst. Control. Eng.* **2017**, *231*, 189–198. https://doi.org/10.1177/0959651817692472.
3. Owczarek, P.; Rybarczyk, D.; Kubacki, A. Dynamic Model and Simulation of Electro-Hydraulic Proportional Valve. In *Automation 2017*; Szewczyk, R., Zieliński, C., Kaliczyńska, M., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2017; Volume 550, pp. 99–107, ISBN 978-3-319-54041-2.
4. Han, M.; Liu, Y.; Liao, Y.; Wang, S. Investigation on the Modeling and Dynamic Characteristics of a Novel Hydraulic Proportional Valve Driven by a Voice Coil Motor. *SV-JME* **2021**, *67*, 223–234. https://doi.org/10.5545/sv-jme.2021.7089.
5. Xie, H.; Liu, J.; Hu, L.; Yang, H.; Fu, X. Design of Pilot-Assisted Load Control Valve for Proportional Flow Control and Fast Opening Performance Based on Dynamics Modeling. *Sens. Actuators A Phys.* **2015**, *235*, 95–104. https://doi.org/10.1016/j.sna.2015.09.042.
6. Kumar, S.; Tewari, V.K.; Bharti, C.K.; Ranjan, A. Modeling, Simulation and Experimental Validation of Flow Rate of Electro-Hydraulic Hitch Control Valve of Agricultural Tractor. *Flow Meas. Instrum.* **2021**, *82*, 102070. https://doi.org/10.1016/j.flowmeasinst.2021.102070.


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Linear Diophantine Fuzzy Rough Sets on Paired Universes with Multi Stage Decision Analysis**

**Saba Ayub <sup>1</sup> , Muhammad Shabir <sup>1</sup> , Muhammad Riaz <sup>2</sup> , Faruk Karaaslan <sup>3</sup> , Dragan Marinkovic 4,\* and Djordje Vranjes <sup>5</sup>**


**Abstract:** Rough set (RS) and fuzzy set (FS) theories were developed to account for ambiguity in data processing. The most persuasive and modern abstraction of an FS is the linear Diophantine FS (LD-FS). This paper introduces a resilient hybrid linear Diophantine fuzzy RS model (LDF-RS) on paired universes based on a linear Diophantine fuzzy relation (LDF-R). This is a typical method of fuzzy RS (F-RS) and bipolar F-RS (BF-RS) on two universes that is more appropriate and customizable. By using an LDF-level cut relation, the notions of lower approximation (L-A) and upper approximation (U-A) are defined. Meanwhile, certain fundamental structural aspects of the LDF-approximations are thoroughly investigated, with some instances to back them up. This cutting-edge LDF-RS technique is crucial from both a theoretical and practical perspective in the field of medical assessment.

**Keywords:** fuzzy set; linear Diophantine fuzzy sets; linear Diophantine fuzzy relations; level cut relations; rough approximations on two universes; decision analysis

## **1. Introduction**

As one of the most effective methods for developing a set's embryonic concept, Zadeh [1] first proposed the idea of an FS in 1965. According to the attributes, FS permits grading a set's features in the range of [0, 1]. Since the conception of the theory, FS has been developed in a variety of ways, including intuitionistic fuzzy set (IF-S) [2,3], bipolar FS (B-FS) [4], Pythagorean FS (P-FS) [5,6], q-rung orthopair FS (q-ROF-S) [7], and LD-FS [8].

In 2019, Riaz and Hashmi [8] unveiled LD-FS, one of the most elegant and significant generalizations of FS. Using the control parameters, LD-FS eliminates the restrictions connected to the membership degree (MD) and non-membership degree (NMD) in the prevalent abstractions of IF-Ss, B-FSs, and q-ROF-Ss. LD-FS is the most practical mathematical model for decision making (DM), multi-attribute decision making (MADM), engineering, artificial intelligence (AI), and medicine, allowing the decision maker to freely choose the grades [8]. Today, LD-FS is the subject of a substantial body of research (see [9–11]). Ayub et al. [12] advanced an impressive method of an LDF-R to broaden the concept of IF-R, providing an in-depth analysis of its essential characteristics, algebraic structures, and application in decision analysis.

Binary relations play a significant role in several domains for relating distinct objects. In 1971, Zadeh [13] proposed the fuzzification of binary relations and presented the idea of an F-R. Numerous significant applications of FSs and F-Rs may be found in MCDM, neural networks, databases, pattern recognition, AI, clustering, F-control, and uncertainty reasoning. A thorough analysis of FSs and F-Rs is offered in [14].

**Citation:** Ayub, S.; Shabir, M.; Riaz, M.; Karaaslan, F.; Marinkovic, D.; Vranjes, D. Linear Diophantine Fuzzy Rough Sets on Paired Universes with Multi Stage Decision Analysis. *Axioms* **2022**, *11*, 686. https:// doi.org/10.3390/axioms11120686

Academic Editor: Oscar Castillo

Received: 25 October 2022 Accepted: 26 November 2022 Published: 30 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The necessity to expand F-R was similar to that of FS. In 1984, Atanassov [15] proposed the concept of IF-R. An IF-R, per Atanassov's definition [15], is a pair of F-Rs where the total of the coalition and alienation grades is less than or equal to 1. A soft set [16], being a parameterized collection of the universe objects, has robust applications in decision making. *m*-Polar neutrosophic topology provides a generalized topological structure for data analysis [17].

Pawlak [18,19] suggested an approach of RS to deal with uncertainty in intelligent systems as another abstraction of classical set theory. The L-A and U-A, which are used to define the membership of objects in RS theory, are two sharp approximation (A) sets. The fundamental ideas of the RS theory, which reveals the hidden knowledge in information systems, are these approximations. AI, machine learning, conflict analysis, and data analysis are just a few fields where RS theory has been successfully applied.

Due to the equivalence relation (E-R) that underlies the RS theory, its application in practical situations is constrained. Numerous abstractions have been constructed to overcome the constraint of an E-R. For instance, RS based on a binary relation [20,21], a set-valued map [22], a tolerance relation [23], a similarity relation [24], a reflexive relation (R-R) and transitive relation (T-R) [25], a soft binary relation [26,27], a soft E-R [28], two E-Rs [29], a normal soft group [30], two soft binary relations, and two normal soft groups demonstrates how an E-R may be adjusted with different granule interpretations. Zhan and Alcantud [31] proposed a new kind of soft rough covering by means of soft neighborhoods. Motivation of the proposed work is based on some existing methodologies such as attribute analysis [32], picture fuzzy aggregation [33], interval-valued picture fuzzy Maclaurin symmetric mean operators [34], complex interval-valued Pythagorean fuzzy aggregation [35], risk priority evaluation [36], roughness in soft-intersection groups [37], and roughness in modules of fractions [38]. Karamaşa et al. [39] proposed an extended SVN-AHP and MULTIMOORA method for flight training organizations. Osintsev [40] suggested the DEMATEL-ANP method for an evaluation of logistic flows in green supply chains.

## *1.1. Research Gap and Motivation*

From all of the above-mentioned, the sequel summarizes the driving forces behind our research and the gaps that lie underneath it:


## *1.2. Major Contributions*

This study uses level-cut relations from an LDF-R of dual universes to examine the roughness of an LD-FS. The fore set and after set of the level cut relations are used to design the underlying operations of RSs, the L- and U-As. With the use of useful examples, certain fundamental conclusions about As are demonstrated. We also defined the terms

"accuracy measure" (A-M) and "roughness measure" (R-M) for LDF-RS. Finally, an LDF-RSs application to medical diagnosis is made to demonstrate its viability in real life.

## *1.3. Organization of the Paper*

The remainder of this article is organized as follows to facilitate the study: In Section 2, some hypothetical early conceptions of RS, LD-FS, and LDF-R are provided. Using an LDF-R and a thorough examination of the essential characteristics of approximations with examples, the concept of LDF-RS on two distinct universes is introduced in the third segment. Section 4 includes the A-M and R-M cues for the LDF-RS. The application of LDF-RSs is demonstrated with the help of an example in Section 5. Section 6 concludes the paper by summarizing the final remarks.

## **2. Preliminaries**

This subsection consists of some essential knowledge of LD-FS, LDF-R and RS. Throughout this research, U˘, U˘ <sup>1</sup> and U˘ <sup>2</sup> will denote the initial universes, unless otherwise specified.

**Definition 1** ([19])**.** *Let <sup>ρ</sup> be an <sup>E</sup>* <sup>−</sup> *<sup>R</sup> on* <sup>U</sup>˘*. Then, the pair* (U˘, *<sup>ρ</sup>*) *is known as an R approximation space (R-AS). For any subset* <sup>O</sup> *of* <sup>U</sup>˘*, the L-A* <sup>O</sup>*<sup>ρ</sup> and the U-A* O *ρ are defined as follows:*

$$\underline{\mathcal{O}}\_{\rho} = \{ v \in \breve{\mathcal{U}} : [v]\_{\rho} \subseteq \mathcal{O} \} \text{ and } \overline{\mathcal{O}}^{\rho} = \{ v \in \breve{\mathcal{U}} : [v]\_{\rho} \cap \mathcal{O} \neq \emptyset \}.$$

*where* [*v*]*<sup>ρ</sup> signifies an E-class of <sup>v</sup>* <sup>∈</sup> <sup>U</sup>˘ *deduced by <sup>ρ</sup>. The boundary zone is indicated and described as follows:*

$$BR(\mathcal{O}) = \overline{\mathcal{O}}^{\rho} - \underline{\mathcal{O}}\_{\rho}$$

*If BR*(O) ≠ ∅*, then* O *is known as an RS; otherwise, it is a crisp set or a definable set. Based on these As, Pawlak characterized a crisp set* O ⊆ <sup>U</sup>˘ *in the sequel:*


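Definition 1 can be sketched computationally: given an equivalence relation (here represented by a labeling function whose fibers are the E-classes), the lower approximation collects the objects whose whole class lies inside O, and the upper approximation the objects whose class meets O. The universe and partition below are illustrative assumptions.

```python
# Small sketch of Pawlak's lower/upper approximations (Definition 1).
# The universe and equivalence classes are hypothetical examples.

U = {1, 2, 3, 4, 5, 6}
label = {1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c', 6: 'c'}  # classes {1,2},{3,4},{5,6}

def eq_class(v):
    """E-class [v] induced by the labeling."""
    return {u for u in U if label[u] == label[v]}

def lower(O):
    """L-A: objects whose whole class is contained in O."""
    return {v for v in U if eq_class(v) <= O}

def upper(O):
    """U-A: objects whose class intersects O."""
    return {v for v in U if eq_class(v) & O}

O = {1, 2, 3}
L, Up = lower(O), upper(O)
boundary = Up - L   # BR(O); non-empty, so O is rough here
```

Since BR(O) = {3, 4} is non-empty, this O is a rough set rather than a definable one, matching the dichotomy in Definition 1.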
Recently, Riaz and Hashmi [8] introduced an efficient approach to handling uncertainties that eradicate all the limitations related to affiliation and disassociation grades of the existing models (FS,B-FS,IF-S and P-FS).

**Definition 2** ([8])**.** *An LD-FS on* U˘ *is an object defined as follows:*

$$\mathcal{E}\_{\mathcal{D}} = \{ (v, \langle \Theta^M(v), \Theta^N(v) \rangle, \langle v^M(v), v^N(v) \rangle) : v \in \breve{\mathcal{U}} \}$$

*where*

$$
\Theta^M, \Theta^N: \mathcal{U} \to [0, 1].
$$

*are M and NM functions and <sup>v</sup>M*(*v*), *<sup>v</sup>N*(*v*) <sup>∈</sup> [0, 1] *are the reference parameters of* <sup>Θ</sup>*M*(*v*), <sup>Θ</sup>*N*(*v*) *respectively, such that* <sup>0</sup> <sup>≤</sup> *<sup>v</sup>M*(*v*)Θ*M*(*v*) + *<sup>v</sup>N*(*v*)Θ*N*(*v*) <sup>≤</sup> <sup>1</sup> *satisfying* <sup>0</sup> <sup>≤</sup> *<sup>v</sup>M*(*v*) + *<sup>v</sup>N*(*v*) <sup>≤</sup> <sup>1</sup> *for all <sup>u</sup>* <sup>∈</sup> <sup>U</sup>˘*. The hesitation part is defined as* <sup>Λ</sup>(*v*)Π(*v*) = <sup>1</sup> <sup>−</sup> (*vM*(*v*)Θ*M*(*v*) + *vN*(*v*)Θ*N*(*v*))*, where* Π(*v*) *expresses the degree of indeterminacy, and* Λ(*v*) *refers to the relevant reference parameter. We use the notion LD* <sup>−</sup> *FS*(U˘) *to represent the collection of all LD-FSs on* U˘*.*
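The constraints in Definition 2 are easy to check mechanically: the reference-parameter-weighted sum of the membership and non-membership grades must lie in [0, 1], as must the sum of the reference parameters themselves. The sketch below, with illustrative sample grades, shows why LD-FS admits grade pairs (e.g., 0.9 and 0.8) that IF-S would reject.

```python
# Sketch of the LD-FS validity conditions of Definition 2.
# Sample grades are illustrative assumptions.

def is_valid_ldfs_entry(theta_m, theta_n, alpha_m, alpha_n):
    """Check one (Theta^M, Theta^N, v^M, v^N) tuple against Definition 2."""
    grades_ok = all(0.0 <= g <= 1.0 for g in (theta_m, theta_n, alpha_m, alpha_n))
    params_ok = 0.0 <= alpha_m + alpha_n <= 1.0
    combo_ok = 0.0 <= alpha_m * theta_m + alpha_n * theta_n <= 1.0
    return grades_ok and params_ok and combo_ok

def hesitation(theta_m, theta_n, alpha_m, alpha_n):
    """The product Lambda(v)*Pi(v) of Definition 2."""
    return 1.0 - (alpha_m * theta_m + alpha_n * theta_n)

ok = is_valid_ldfs_entry(0.9, 0.8, 0.5, 0.4)   # valid even though 0.9 + 0.8 > 1
h = hesitation(0.9, 0.8, 0.5, 0.4)             # 1 - (0.45 + 0.32) = 0.23
```

This is exactly the flexibility the control parameters provide: the grades themselves may each approach 1, as long as their weighted combination stays within [0, 1].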

By using control parameters that correspond to the association and disassociation grades in Riaz and Hashmi's [8] motivation, Ayub et al. [12] have expanded the idea of IF-R [15] to LDF-R.

**Definition 3** ([12])**.** *An expression of the following form is an LDF-R ρ*¨ *from* U˘ <sup>1</sup> *to* U˘ 2*:*

$$\ddot{\rho} = \left\{ \left( (v\_1, v\_2), \langle \Theta^M(v\_1, v\_2), \Theta^N(v\_1, v\_2) \rangle, \langle v^M(v\_1, v\_2), v^N(v\_1, v\_2) \rangle \right) : v\_1 \in \breve{\mathcal{U}}\_1, v\_2 \in \breve{\mathcal{U}}\_2 \right\}$$

*where the mappings*

$$
\Theta^M, \Theta^N : \breve{\mathcal{U}}\_1 \times \breve{\mathcal{U}}\_2 \to [0, 1].
$$

*indicate the M and NM F-Rs from* U˘ <sup>1</sup> *to* U˘ <sup>2</sup>*, respectively, and <sup>v</sup>M*(*v*1, *<sup>v</sup>*2), *<sup>v</sup>N*(*v*1, *<sup>v</sup>*2) <sup>∈</sup> [0, 1] *are the relevant reference parameters to* Θ*M*(*v*1, *v*2) *and* Θ*N*(*v*1, *v*2)*, respectively, fulfilling the requirement* <sup>0</sup> <sup>≤</sup> *<sup>v</sup>M*(*v*1, *<sup>v</sup>*2)Θ*M*(*v*1, *<sup>v</sup>*2) + *<sup>v</sup>N*(*v*1, *<sup>v</sup>*2)Θ*N*(*v*1, *<sup>v</sup>*2) <sup>≤</sup> <sup>1</sup>*, for all* (*v*1, *<sup>v</sup>*2) <sup>∈</sup> <sup>U</sup>˘ 1 × U˘ <sup>2</sup> *with* <sup>0</sup> <sup>≤</sup> *<sup>v</sup>M*(*v*1, *<sup>v</sup>*2) + *<sup>v</sup>N*(*v*1, *<sup>v</sup>*2) <sup>≤</sup> <sup>1</sup>*. The hesitation part is defined as follows:*

$$\dot{\gamma}(v\_1, v\_2)\ddot{\pi}(v\_1, v\_2) = 1 - (\mathcal{a}^M(v\_1, v\_2)\Theta^M(v\_1, v\_2) + \mathcal{a}^N(v\_1, v\_2)\Theta^N(v\_1, v\_2))$$

*where π*¨(*v*1, *v*2) *is the hesitation index, and γ*¨(*v*1, *v*2) *is the relevant reference parameter. For the sake of simplicity, we will use ρ*¨ = (< Θ*M*(*v*1, *v*2), Θ*N*(*v*1, *v*2) >, < *vM*(*v*1, *v*2), *vN*(*v*1, *v*2) >) *for an LDF-R from* U˘ <sup>1</sup> *to* U˘ <sup>2</sup>*. The collection of all LDF-Rs from* U˘ <sup>1</sup> *to* U˘ <sup>2</sup> *will be designated by LDF* <sup>−</sup> *<sup>R</sup>*(U˘ <sup>1</sup> <sup>×</sup> <sup>U</sup>˘ 2)*.*

With respect to finite universes U˘ <sup>1</sup> and U˘ <sup>2</sup>, the matrix notation of an LDF-R is given in the sequel.

**Definition 4** ([12])**.** *Let $\ddot{\rho} = (\langle \Theta^M(u_i, v_j), \Theta^N(u_i, v_j) \rangle, \langle \alpha^M(u_i, v_j), \alpha^N(u_i, v_j) \rangle)$ be an LDF-R from $\breve{\mathcal{U}}_1$ to $\breve{\mathcal{U}}_2$, where $\breve{\mathcal{U}}_1 = \{u_1, u_2, \ldots, u_m\}$ and $\breve{\mathcal{U}}_2 = \{v_1, v_2, \ldots, v_n\}$. Consider $\Theta^M(u_i, v_j) = (\Theta^M_{ij})_{m \times n}$, $\Theta^N(u_i, v_j) = (\Theta^N_{ij})_{m \times n}$ and $\alpha^M(u_i, v_j) = (\alpha^M_{ij})_{m \times n}$, $\alpha^N(u_i, v_j) = (\alpha^N_{ij})_{m \times n}$, with $0 \le \alpha^M_{ij} + \alpha^N_{ij} \le 1$ fulfilling $0 \le \alpha^M_{ij}\Theta^M_{ij} + \alpha^N_{ij}\Theta^N_{ij} \le 1$ for all $i, j$, where $1 \le i \le m$ and $1 \le j \le n$. Then, the following four matrices can be used to represent $\ddot{\rho}$:*

$$\Theta^M = (\Theta^M_{ij})_{m \times n} = \begin{pmatrix} \Theta^M_{11} & \Theta^M_{12} & \ldots & \Theta^M_{1n} \\ \Theta^M_{21} & \Theta^M_{22} & \ldots & \Theta^M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \Theta^M_{m1} & \Theta^M_{m2} & \ldots & \Theta^M_{mn} \end{pmatrix}, \quad \Theta^N = (\Theta^N_{ij})_{m \times n} = \begin{pmatrix} \Theta^N_{11} & \Theta^N_{12} & \ldots & \Theta^N_{1n} \\ \Theta^N_{21} & \Theta^N_{22} & \ldots & \Theta^N_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \Theta^N_{m1} & \Theta^N_{m2} & \ldots & \Theta^N_{mn} \end{pmatrix}$$

$$\alpha^M = (\alpha^M_{ij})_{m \times n} = \begin{pmatrix} \alpha^M_{11} & \alpha^M_{12} & \ldots & \alpha^M_{1n} \\ \alpha^M_{21} & \alpha^M_{22} & \ldots & \alpha^M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha^M_{m1} & \alpha^M_{m2} & \ldots & \alpha^M_{mn} \end{pmatrix}, \quad \alpha^N = (\alpha^N_{ij})_{m \times n} = \begin{pmatrix} \alpha^N_{11} & \alpha^N_{12} & \ldots & \alpha^N_{1n} \\ \alpha^N_{21} & \alpha^N_{22} & \ldots & \alpha^N_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha^N_{m1} & \alpha^N_{m2} & \ldots & \alpha^N_{mn} \end{pmatrix}$$
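For finite universes, Definition 4 invites a direct computational reading: an LDF-R is just four grade matrices subject to the entrywise constraints of Definition 3. The following is a minimal Python sketch under that reading; the function names (`is_valid_ldfr`, `hesitation`) are our own illustrative choices, not notation from [12].

```python
# Minimal sketch: an LDF-R between finite universes stored as four m x n
# row-lists: membership theta_m, non-membership theta_n, and the two
# reference-parameter matrices alpha_m, alpha_n. All names are illustrative.

def is_valid_ldfr(theta_m, theta_n, alpha_m, alpha_n):
    """Check Definition 3 entrywise: 0 <= alpha_M + alpha_N <= 1 and
    0 <= alpha_M*Theta_M + alpha_N*Theta_N <= 1."""
    m, n = len(theta_m), len(theta_m[0])
    for i in range(m):
        for j in range(n):
            a, b = alpha_m[i][j], alpha_n[i][j]
            if not 0.0 <= a + b <= 1.0:
                return False
            if not 0.0 <= a * theta_m[i][j] + b * theta_n[i][j] <= 1.0:
                return False
    return True

def hesitation(theta_m, theta_n, alpha_m, alpha_n, i, j):
    """The hesitation part gamma*pi at entry (i, j):
    1 - (alpha_M*Theta_M + alpha_N*Theta_N)."""
    return 1.0 - (alpha_m[i][j] * theta_m[i][j]
                  + alpha_n[i][j] * theta_n[i][j])
```

For instance, a relation whose reference parameters sum above 1 at some entry fails the check, while any matrices respecting both inequalities pass it.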

The following definitions describe some basic operations on LDF-Rs.

**Definition 5** ([12])**.** *Let $\ddot{\rho}_1 = (\langle \Theta^M_1(v_1, v_2), \Theta^N_1(v_1, v_2) \rangle, \langle \alpha^M_1(v_1, v_2), \alpha^N_1(v_1, v_2) \rangle)$ and $\ddot{\rho}_2 = (\langle \Theta^M_2(v_1, v_2), \Theta^N_2(v_1, v_2) \rangle, \langle \alpha^M_2(v_1, v_2), \alpha^N_2(v_1, v_2) \rangle)$ be two LDF-Rs from $\breve{\mathcal{U}}_1$ to $\breve{\mathcal{U}}_2$. Then,*

*(1) $\ddot{\rho}_1 \subseteq \ddot{\rho}_2$ if and only if*

$$\Theta_1^M(v_1, v_2) \le \Theta_2^M(v_1, v_2) \text{ and } \Theta_1^N(v_1, v_2) \ge \Theta_2^N(v_1, v_2),$$

$$\alpha_1^M(v_1, v_2) \le \alpha_2^M(v_1, v_2) \text{ and } \alpha_1^N(v_1, v_2) \ge \alpha_2^N(v_1, v_2)$$

*(2) $\ddot{\rho}_1 \cup \ddot{\rho}_2 = (\langle (\Theta_1^M \cup \Theta_2^M)(v_1, v_2), (\Theta_1^N \cap \Theta_2^N)(v_1, v_2) \rangle, \langle \alpha_1^M(v_1, v_2) \vee \alpha_2^M(v_1, v_2), \alpha_1^N(v_1, v_2) \wedge \alpha_2^N(v_1, v_2) \rangle)$, where*

$$(\Theta_1^M \cup \Theta_2^M)(v_1, v_2) = \Theta_1^M(v_1, v_2) \vee \Theta_2^M(v_1, v_2) \text{ and}$$

$$(\Theta_1^N \cap \Theta_2^N)(v_1, v_2) = \Theta_1^N(v_1, v_2) \wedge \Theta_2^N(v_1, v_2)$$

*(3) $\ddot{\rho}_1 \cap \ddot{\rho}_2 = (\langle (\Theta_1^M \cap \Theta_2^M)(v_1, v_2), (\Theta_1^N \cup \Theta_2^N)(v_1, v_2) \rangle, \langle \alpha_1^M(v_1, v_2) \wedge \alpha_2^M(v_1, v_2), \alpha_1^N(v_1, v_2) \vee \alpha_2^N(v_1, v_2) \rangle)$, where*

$$(\Theta_1^M \cap \Theta_2^M)(v_1, v_2) = \Theta_1^M(v_1, v_2) \wedge \Theta_2^M(v_1, v_2) \text{ and}$$

$$(\Theta_1^N \cup \Theta_2^N)(v_1, v_2) = \Theta_1^N(v_1, v_2) \vee \Theta_2^N(v_1, v_2)$$

*(4) $\ddot{\rho}_1^c = (\langle \Theta_1^N(v_1, v_2), \Theta_1^M(v_1, v_2) \rangle, \langle \alpha_1^N(v_1, v_2), \alpha_1^M(v_1, v_2) \rangle)$,*

*for all $(v_1, v_2) \in \breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2$.*

**Definition 6** ([12])**.** *Let $\ddot{\rho}_1 = (\langle \Theta^M_1(v_1, v_2), \Theta^N_1(v_1, v_2) \rangle, \langle \alpha^M_1(v_1, v_2), \alpha^N_1(v_1, v_2) \rangle)$ be an LDF-R over $\breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2$ and $\ddot{\rho}_2 = (\langle \Theta^M_2(v_2, v_3), \Theta^N_2(v_2, v_3) \rangle, \langle \alpha^M_2(v_2, v_3), \alpha^N_2(v_2, v_3) \rangle)$ be an LDF-R over $\breve{\mathcal{U}}_2 \times \breve{\mathcal{U}}_3$. Then, their composition is denoted by $\hat{\circ}$ and is determined as follows:*

$$
\ddot{\rho}_1 \,\hat{\circ}\, \ddot{\rho}_2 = \left( \langle (\Theta_1^M \,\hat{\circ}\, \Theta_2^M)(v_1, v_3), (\Theta_1^N \,\hat{\circ}\, \Theta_2^N)(v_1, v_3) \rangle, \langle (\alpha_1^M \,\hat{\circ}\, \alpha_2^M)(v_1, v_3), (\alpha_1^N \,\hat{\circ}\, \alpha_2^N)(v_1, v_3) \rangle \right)
$$

*where*

$$(\Theta_1^M \,\hat{\circ}\, \Theta_2^M)(v_1, v_3) = \vee_{v_2 \in \breve{\mathcal{U}}_2} \left(\Theta_1^M(v_1, v_2) \wedge \Theta_2^M(v_2, v_3)\right)$$

$$(\Theta_1^N \,\hat{\circ}\, \Theta_2^N)(v_1, v_3) = \wedge_{v_2 \in \breve{\mathcal{U}}_2} \left(\Theta_1^N(v_1, v_2) \vee \Theta_2^N(v_2, v_3)\right)$$

*and*

$$(\alpha_1^M \,\hat{\circ}\, \alpha_2^M)(v_1, v_3) = \vee_{v_2 \in \breve{\mathcal{U}}_2} \left(\alpha_1^M(v_1, v_2) \wedge \alpha_2^M(v_2, v_3)\right)$$

$$(\alpha_1^N \,\hat{\circ}\, \alpha_2^N)(v_1, v_3) = \wedge_{v_2 \in \breve{\mathcal{U}}_2} \left(\alpha_1^N(v_1, v_2) \vee \alpha_2^N(v_2, v_3)\right)$$

*for all $(v_1, v_3) \in \breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_3$.*
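Read over matrices, Definition 6 says that the M-components (and the $\alpha^M$-components) compose by the familiar max–min rule, while the N-components (and the $\alpha^N$-components) compose by min–max. A small Python sketch of this, assuming the four-matrix representation of Definition 4 (helper names are our own):

```python
# Sketch of the composition in Definition 6: M-parts compose by max-min,
# N-parts by min-max, the join running over the middle universe U_2.

def compose_max_min(r1, r2):
    """(R1 o R2)(i, k) = max_j min(R1(i, j), R2(j, k))."""
    m, p, n = len(r1), len(r2), len(r2[0])
    return [[max(min(r1[i][j], r2[j][k]) for j in range(p))
             for k in range(n)] for i in range(m)]

def compose_min_max(r1, r2):
    """(R1 o R2)(i, k) = min_j max(R1(i, j), R2(j, k))."""
    m, p, n = len(r1), len(r2), len(r2[0])
    return [[min(max(r1[i][j], r2[j][k]) for j in range(p))
             for k in range(n)] for i in range(m)]

def compose_ldfr(ldfr1, ldfr2):
    """Compose two LDF-Rs given as tuples (Theta_M, Theta_N, alpha_M, alpha_N)."""
    tm1, tn1, am1, an1 = ldfr1
    tm2, tn2, am2, an2 = ldfr2
    return (compose_max_min(tm1, tm2), compose_min_max(tn1, tn2),
            compose_max_min(am1, am2), compose_min_max(an1, an2))
```

Transitivity in Definition 7 (3) is then the statement that composing a relation's matrices with themselves never increases the M-parts and never decreases the N-parts.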

**Definition 7** ([12])**.** *Let $\ddot{\rho}$ be an LDF-R on $\breve{\mathcal{U}}$. Then, $\ddot{\rho}$ is classified as: (1) a reflexive LDF-R (R-LDF-R), if*

$$\Theta^M(v, v) = 1, \ \Theta^N(v, v) = 0 \text{ and } \alpha^M(v, v) = 1, \ \alpha^N(v, v) = 0$$

*for all $v \in \breve{\mathcal{U}}$.*

*(2) a symmetric LDF-R (S-LDF-R), if*

$$\Theta^M(v_1, v_2) = \Theta^M(v_2, v_1), \ \Theta^N(v_1, v_2) = \Theta^N(v_2, v_1) \text{ and } \alpha^M(v_1, v_2) = \alpha^M(v_2, v_1), \ \alpha^N(v_1, v_2) = \alpha^N(v_2, v_1)$$

*(3) a transitive LDF-R (T-LDF-R), if*

$$
\Theta^M \,\hat{\circ}\, \Theta^M \subseteq \Theta^M, \ \Theta^N \,\hat{\circ}\, \Theta^N \supseteq \Theta^N \text{ and } \alpha^M \,\hat{\circ}\, \alpha^M \subseteq \alpha^M, \ \alpha^N \,\hat{\circ}\, \alpha^N \supseteq \alpha^N.
$$

*(4) an equivalence LDF-R (E-LDF-R), if $\ddot{\rho}$ is an R-, S-, and T-LDF-R over $\breve{\mathcal{U}}$.*

If $|\breve{\mathcal{U}}| = n$, where $|\cdot|$ indicates the number of elements of $\breve{\mathcal{U}}$, then $\ddot{\rho} = (\langle (\Theta^M_{ij})_{n \times n}, (\Theta^N_{ij})_{n \times n} \rangle, \langle (\alpha^M_{ij})_{n \times n}, (\alpha^N_{ij})_{n \times n} \rangle)$, and $\ddot{\rho}$ can be represented by the four square matrices $\Theta^M = (\Theta^M_{ij})_{n \times n}$, $\Theta^N = (\Theta^N_{ij})_{n \times n}$, $\alpha^M = (\alpha^M_{ij})_{n \times n}$ and $\alpha^N = (\alpha^N_{ij})_{n \times n}$.


## **3. Some Properties of Linear Diophantine Fuzzy Relations**

Ayub et al. [12] proposed the idea of an LDF-R from $\breve{\mathcal{U}}_1$ to $\breve{\mathcal{U}}_2$. The purpose of this section is to introduce the idea of a level cut relation of an LDF-R. Additionally, we investigate a few of its crucial characteristics, including the R-, S-, and T-LDF-Rs, in terms of its level cut relations.

**Definition 8.** *Let $\ddot{\rho} = (\langle \Theta^M(v_1, v_2), \Theta^N(v_1, v_2) \rangle, \langle \alpha^M(v_1, v_2), \alpha^N(v_1, v_2) \rangle)$ be an LDF-R from $\breve{\mathcal{U}}_1$ to $\breve{\mathcal{U}}_2$. Let $\ddot{s}, \ddot{t}, \ddot{u}, \ddot{v} \in [0, 1]$ be such that $0 \le \ddot{s}\ddot{u} + \ddot{t}\ddot{v} \le 1$ with $0 \le \ddot{u} + \ddot{v} \le 1$, and define the $(\langle \ddot{s}, \ddot{u} \rangle, \langle \ddot{t}, \ddot{v} \rangle)$-level cut relation of $\ddot{\rho}$ as follows:*

$$(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>} = \left\{ (v_1, v_2) \in \breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2 : \Theta^M(v_1, v_2) \ge \ddot{s},\ \alpha^M(v_1, v_2) \ge \ddot{u} \text{ and } \Theta^N(v_1, v_2) \le \ddot{t},\ \alpha^N(v_1, v_2) \le \ddot{v} \right\}$$

*where*

$$(\ddot{\rho})_{<\ddot{s}, \ddot{u}>} = \left\{ (v_1, v_2) \in \breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2 : \Theta^M(v_1, v_2) \ge \ddot{s},\ \alpha^M(v_1, v_2) \ge \ddot{u} \right\}$$

*is said to be the $\langle \ddot{s}, \ddot{u} \rangle$-level cut relation of $\ddot{\rho}$, and*

$$(\ddot{\rho})^{<\ddot{t}, \ddot{v}>} = \left\{ (v_1, v_2) \in \breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2 : \Theta^N(v_1, v_2) \le \ddot{t},\ \alpha^N(v_1, v_2) \le \ddot{v} \right\}$$

*is called the $\langle \ddot{t}, \ddot{v} \rangle$-level cut relation of $\ddot{\rho}$.*
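Definition 8 is easy to mechanize: a level cut scans the four matrices and keeps the index pairs that clear the two lower thresholds and stay under the two upper ones. A sketch in Python (illustrative names; row/column indices stand in for the elements of $\breve{\mathcal{U}}_1$ and $\breve{\mathcal{U}}_2$):

```python
# Sketch of the (<s,u>, <t,v>)-level cut of Definition 8: keep the pairs
# whose M-grades clear the lower thresholds s, u and whose N-grades stay
# under the upper thresholds t, v.

def level_cut(theta_m, theta_n, alpha_m, alpha_n, s, u, t, v):
    """Return the crisp cut relation as a set of index pairs (i, j)."""
    m, n = len(theta_m), len(theta_m[0])
    return {(i, j) for i in range(m) for j in range(n)
            if theta_m[i][j] >= s and alpha_m[i][j] >= u
            and theta_n[i][j] <= t and alpha_n[i][j] <= v}
```

The two one-sided cuts $(\ddot{\rho})_{<\ddot{s}, \ddot{u}>}$ and $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}$ are obtained by dropping the N-side or M-side conditions, respectively.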

**Theorem 1.** *$\ddot{\rho}$ is an R-LDF-R if and only if $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is a reflexive relation (R-R) on $\breve{\mathcal{U}}$, for all $\ddot{s}, \ddot{u}, \ddot{t}, \ddot{v} \in [0, 1]$.*

**Proof.** Suppose that $\ddot{\rho}$ is an R-LDF-R. By Definition 7 (1), $\Theta^M(v, v) = 1 \ge \ddot{s}$, $\Theta^N(v, v) = 0 \le \ddot{t}$ and $\alpha^M(v, v) = 1 \ge \ddot{u}$, $\alpha^N(v, v) = 0 \le \ddot{v}$, for all $\ddot{s}, \ddot{t}, \ddot{u}, \ddot{v} \in [0, 1]$ such that $0 \le \ddot{s}\ddot{u} + \ddot{t}\ddot{v} \le 1$ with $0 \le \ddot{u} + \ddot{v} \le 1$. Hence, $(v, v) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ for all $v \in \breve{\mathcal{U}}$.

Conversely, assume that $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is an R-R for all $\ddot{s}, \ddot{t}, \ddot{u}, \ddot{v} \in [0, 1]$. If $\ddot{\rho}$ is not an R-LDF-R, then for some $v \in \breve{\mathcal{U}}$ either $\Theta^M(v, v) \ne 1$, $\Theta^N(v, v) \ne 0$, $\alpha^M(v, v) \ne 1$ or $\alpha^N(v, v) \ne 0$. If $\Theta^M(v, v) \ne 1$, then taking $\ddot{s} = 1$ we obtain $(v, v) \notin (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$, which contradicts the reflexivity of the level cut relation. The other three cases are similar. Hence, $\ddot{\rho}$ is an R-LDF-R.

**Theorem 2.** *$\ddot{\rho}$ is an S-LDF-R if and only if $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is a symmetric relation (S-R) on $\breve{\mathcal{U}}$, for all $\ddot{s}, \ddot{u}, \ddot{t}, \ddot{v} \in [0, 1]$.*

**Proof.** Suppose that $\ddot{\rho}$ is an S-LDF-R. Let $(v_1, v_2) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$. By Definition 8, $\Theta^M(v_1, v_2) \ge \ddot{s}$, $\alpha^M(v_1, v_2) \ge \ddot{u}$ and $\Theta^N(v_1, v_2) \le \ddot{t}$, $\alpha^N(v_1, v_2) \le \ddot{v}$. Since $\ddot{\rho}$ is symmetric, we have $\Theta^M(v_2, v_1) \ge \ddot{s}$, $\alpha^M(v_2, v_1) \ge \ddot{u}$ and $\Theta^N(v_2, v_1) \le \ddot{t}$, $\alpha^N(v_2, v_1) \le \ddot{v}$ (see Definition 7 (2)). Thus, $(v_2, v_1) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$.

Conversely, assume that $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is an S-R on $\breve{\mathcal{U}}$. Let $\Theta^M(v_1, v_2) = \ddot{s}$, $\alpha^M(v_1, v_2) = \ddot{u}$ and $\Theta^N(v_1, v_2) = \ddot{t}$, $\alpha^N(v_1, v_2) = \ddot{v}$, for some $\ddot{s}, \ddot{t}, \ddot{u}, \ddot{v} \in [0, 1]$ such that $0 \le \ddot{s}\ddot{u} + \ddot{t}\ddot{v} \le 1$ with $0 \le \ddot{u} + \ddot{v} \le 1$. It follows that $(v_1, v_2) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$. By the assumption on $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$, we have $(v_2, v_1) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$. Thus, $\Theta^M(v_2, v_1) \ge \ddot{s} = \Theta^M(v_1, v_2)$, $\alpha^M(v_2, v_1) \ge \ddot{u} = \alpha^M(v_1, v_2)$ and $\Theta^N(v_2, v_1) \le \ddot{t} = \Theta^N(v_1, v_2)$, $\alpha^N(v_2, v_1) \le \ddot{v} = \alpha^N(v_1, v_2)$. By similar arguments, the reverse inequalities can be shown, so equality holds. Thus, $\ddot{\rho}$ is an S-LDF-R on $\breve{\mathcal{U}}$. This completes the proof.

**Proposition 1.** *$\ddot{\rho}$ is a T-LDF-R if and only if*

$$\Theta^M(v_1, v_2) \wedge \Theta^M(v_2, v_3) \le \Theta^M(v_1, v_3), \quad \Theta^N(v_1, v_2) \vee \Theta^N(v_2, v_3) \ge \Theta^N(v_1, v_3)$$

*and*

$$\alpha^M(v_1, v_2) \wedge \alpha^M(v_2, v_3) \le \alpha^M(v_1, v_3), \quad \alpha^N(v_1, v_2) \vee \alpha^N(v_2, v_3) \ge \alpha^N(v_1, v_3)$$

*for all $v_1, v_2, v_3 \in \breve{\mathcal{U}}$.*

**Proof.** Suppose that $\ddot{\rho}$ is a T-LDF-R on $\breve{\mathcal{U}}$. By Definition 7 (3), $(\Theta^M \,\hat{\circ}\, \Theta^M)(v_1, v_3) \le \Theta^M(v_1, v_3)$, $(\Theta^N \,\hat{\circ}\, \Theta^N)(v_1, v_3) \ge \Theta^N(v_1, v_3)$ and $(\alpha^M \,\hat{\circ}\, \alpha^M)(v_1, v_3) \le \alpha^M(v_1, v_3)$, $(\alpha^N \,\hat{\circ}\, \alpha^N)(v_1, v_3) \ge \alpha^N(v_1, v_3)$, for all $v_1, v_3 \in \breve{\mathcal{U}}$. By Definition 6, the supremum (resp. infimum) over $v_2$ satisfies these bounds exactly when every term does; thus $\Theta^M(v_1, v_2) \wedge \Theta^M(v_2, v_3) \le \Theta^M(v_1, v_3)$, $\Theta^N(v_1, v_2) \vee \Theta^N(v_2, v_3) \ge \Theta^N(v_1, v_3)$ and $\alpha^M(v_1, v_2) \wedge \alpha^M(v_2, v_3) \le \alpha^M(v_1, v_3)$, $\alpha^N(v_1, v_2) \vee \alpha^N(v_2, v_3) \ge \alpha^N(v_1, v_3)$, for all $v_1, v_2, v_3 \in \breve{\mathcal{U}}$. The converse can be proven similarly.

**Theorem 3.** *$\ddot{\rho}$ is a T-LDF-R if and only if $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is a transitive relation (T-R) on $\breve{\mathcal{U}}$, for all $\ddot{s}, \ddot{u}, \ddot{t}, \ddot{v} \in [0, 1]$.*

**Proof.** Suppose that $\ddot{\rho}$ is a T-LDF-R. Let $(v_1, v_2), (v_2, v_3) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$. Then, $\Theta^M(v_1, v_2) \wedge \Theta^M(v_2, v_3) \ge \ddot{s}$, $\alpha^M(v_1, v_2) \wedge \alpha^M(v_2, v_3) \ge \ddot{u}$ and $\Theta^N(v_1, v_2) \vee \Theta^N(v_2, v_3) \le \ddot{t}$, $\alpha^N(v_1, v_2) \vee \alpha^N(v_2, v_3) \le \ddot{v}$ (see Definition 8). Using Proposition 1, we obtain $\Theta^M(v_1, v_3) \ge \ddot{s}$, $\alpha^M(v_1, v_3) \ge \ddot{u}$, $\Theta^N(v_1, v_3) \le \ddot{t}$, $\alpha^N(v_1, v_3) \le \ddot{v}$. Thus, $(v_1, v_3) \in (\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$.

**Theorem 4.** *$\ddot{\rho}$ is an E-LDF-R if and only if $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is an equivalence relation (E-R) on $\breve{\mathcal{U}}$, for all $\ddot{s}, \ddot{t}, \ddot{u}, \ddot{v} \in [0, 1]$.*

**Proof.** The proof follows directly from Theorems 1–3.

Now, to measure the 'resemblance', 'comparability' or 'closeness' of the objects in $\breve{\mathcal{U}}$, we define the following concept.

**Definition 9.** *$\ddot{\rho}$ is said to be a tolerance LDF-R (or compatible LDF-R) if it is both an R-LDF-R and an S-LDF-R.*

To illustrate the above notions, we provide Example 1 below.

**Example 1.** *Let $\breve{\mathcal{U}} = \{v_1, v_2, v_3, v_4\}$. Construct an LDF-R $\ddot{\rho}$ on $\breve{\mathcal{U}}$ in matrix notation as follows:*

$$
\begin{split}
\Theta^{M} &= \begin{pmatrix} 1 & 0.725 & 0.862 & 0.921 \\ 0.725 & 1 & 0.815 & 0.132 \\ 0.862 & 0.815 & 1 & 0.325 \\ 0.921 & 0.132 & 0.325 & 1 \end{pmatrix}, \Theta^{N} = \begin{pmatrix} 0 & 0.218 & 0.125 & 0.215 \\ 0.218 & 0 & 0.651 & 0.334 \\ 0.125 & 0.651 & 0 & 0.728 \\ 0.215 & 0.334 & 0.728 & 0 \end{pmatrix}, \\
\alpha^{M} &= \begin{pmatrix} 1 & 0.71 & 0.81 & 0.89 \\ 0.71 & 1 & 0.75 & 0.11 \\ 0.81 & 0.75 & 1 & 0.21 \\ 0.89 & 0.11 & 0.21 & 1 \end{pmatrix}, \alpha^{N} = \begin{pmatrix} 0 & 0.16 & 0.10 & 0.11 \\ 0.16 & 0 & 0.25 & 0.34 \\ 0.10 & 0.25 & 0 & 0.64 \\ 0.11 & 0.34 & 0.64 & 0 \end{pmatrix}.
\end{split}
$$

*Using Definition 8 of the $(\langle \ddot{s}, \ddot{u} \rangle, \langle \ddot{t}, \ddot{v} \rangle)$-level cut relation, we obtain the following. For $\ddot{s} = \ddot{u} = 1$, $\ddot{t} = \ddot{v} = 0$,*

$$(\ddot{\rho})^{<0,0>}_{<1,1>} = \{(v_1, v_1), (v_2, v_2), (v_3, v_3), (v_4, v_4)\}$$

*For $\ddot{s} = 0.725$, $\ddot{u} = 0.71$ and $\ddot{t} = 0.218$, $\ddot{v} = 0.16$,*

$$(\ddot{\rho})^{<0.218,0.16>}_{<0.725,0.71>} = \{(v_1, v_1), (v_1, v_2), (v_1, v_3), (v_1, v_4), (v_2, v_1), (v_2, v_2), (v_3, v_1), (v_3, v_3), (v_4, v_1), (v_4, v_4)\}$$

*For $\ddot{s} = 0.862$, $\ddot{u} = 0.81$ and $\ddot{t} = 0.125$, $\ddot{v} = 0.10$,*

$$(\ddot{\rho})^{<0.125,0.10>}_{<0.862,0.81>} = \{(v_1, v_1), (v_1, v_3), (v_2, v_2), (v_3, v_1), (v_3, v_3), (v_4, v_4)\}$$

*For $\ddot{s} = 0.921$, $\ddot{u} = 0.89$ and $\ddot{t} = 0.215$, $\ddot{v} = 0.11$,*

$$(\ddot{\rho})^{<0.215,0.11>}_{<0.921,0.89>} = \{(v_1, v_1), (v_1, v_4), (v_2, v_2), (v_3, v_3), (v_4, v_1), (v_4, v_4)\}$$

*For $\ddot{s} = 0.815$, $\ddot{u} = 0.75$ and $\ddot{t} = 0.651$, $\ddot{v} = 0.25$,*

$$(\ddot{\rho})^{<0.651,0.25>}_{<0.815,0.75>} = \{(v_1, v_1), (v_1, v_3), (v_1, v_4), (v_2, v_2), (v_2, v_3), (v_3, v_1), (v_3, v_2), (v_3, v_3), (v_4, v_1), (v_4, v_4)\}$$

*For $\ddot{s} = 0.132$, $\ddot{u} = 0.11$ and $\ddot{t} = 0.334$, $\ddot{v} = 0.34$,*

$$(\ddot{\rho})^{<0.334,0.34>}_{<0.132,0.11>} = \{(v_1, v_1), (v_1, v_2), (v_1, v_3), (v_1, v_4), (v_2, v_1), (v_2, v_2), (v_2, v_4), (v_3, v_1), (v_3, v_3), (v_4, v_1), (v_4, v_2), (v_4, v_4)\}$$

*For $\ddot{s} = 0.325$, $\ddot{u} = 0.21$ and $\ddot{t} = 0.728$, $\ddot{v} = 0.64$,*

$$(\ddot{\rho})^{<0.728,0.64>}_{<0.325,0.21>} = (\breve{\mathcal{U}} \times \breve{\mathcal{U}}) \setminus \{(v_2, v_4), (v_4, v_2)\}.$$

*It is simple to observe that $(\ddot{\rho})^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}$ is an E-R on $\breve{\mathcal{U}}$ for each of these $\ddot{s}, \ddot{u}, \ddot{t}, \ddot{v}$. Hence, by Theorem 4, $\ddot{\rho}$ is an E-LDF-R on $\breve{\mathcal{U}}$.*
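Theorem 4 turns the question of whether $\ddot{\rho}$ is an E-LDF-R into finitely many crisp checks: each level cut must be an ordinary equivalence relation. A small checker for crisp relations on $\{0, \ldots, n-1\}$, with hypothetical names, makes verifications like the one in Example 1 mechanical:

```python
# Sketch: test whether a crisp relation on {0, ..., n-1}, given as a set
# of index pairs, is reflexive, symmetric and transitive (i.e., an
# equivalence relation), as used in Theorems 1-4.

def is_equivalence(rel, n):
    reflexive = all((x, x) in rel for x in range(n))
    symmetric = all((y, x) in rel for (x, y) in rel)
    transitive = all((x, z) in rel
                     for (x, y) in rel for (w, z) in rel if y == w)
    return reflexive and symmetric and transitive
```

Feeding each level cut produced by Definition 8 through such a checker is exactly the finite verification that Theorem 4 licenses.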

## **4. Linear Diophantine Fuzzy Rough Sets on Two Universes**

In the literature, rough approximations (R-As) on two different universes using an F-R were initiated by Sun and Ma [48]. Since the NM part is not discussed in an F-R, Yang et al. [51] extended the concept of [48] to fuzzy bipolar relations (FB-Rs). In this section, we generalize this concept to LDF-Rs and introduce a new concept of roughness, called the LDF-RS on two universes, based on the after sets and fore sets of the level cut relation of an LDF-R (a crisp relation).

If $\ddot{\rho} \in LDF\text{-}R(\breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2)$, then the triplet $\ddot{\mathbb{P}} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ is called an LDF rough approximation space (LDF-RAS).

**Definition 10.** *Let $\ddot{\mathbb{P}} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ be an LDF-RAS and $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$. Define the lower approximation (L-A) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y})$ of $\mathcal{Y}$ and the upper approximation (U-A) $\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y})$ of $\mathcal{Y}$ as follows:*

$$\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}) = \{v_1 \in \breve{\mathcal{U}}_1 : \emptyset \ne v_1\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>} \subseteq \mathcal{Y}\};$$

$$\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}) = \{v_1 \in \breve{\mathcal{U}}_1 : \emptyset \ne v_1\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>},\ v_1\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>} \cap \mathcal{Y} \ne \emptyset\}.$$

*Similarly, we can define the L-A $\underline{(\mathcal{X})appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}$ and the U-A $\overline{(\mathcal{X})appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}$ for any subset $\mathcal{X} \subseteq \breve{\mathcal{U}}_1$ as follows:*

$$\underline{(\mathcal{X})appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}} = \{v_2 \in \breve{\mathcal{U}}_2 : \emptyset \ne \ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}v_2 \subseteq \mathcal{X}\}$$

$$\overline{(\mathcal{X})appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}} = \{v_2 \in \breve{\mathcal{U}}_2 : \emptyset \ne \ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}v_2,\ \ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}v_2 \cap \mathcal{X} \ne \emptyset\}$$

*where $v_1\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>} = \{v_2 \in \breve{\mathcal{U}}_2 : (v_1, v_2) \in \ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}\}$ is the after set of $v_1$ and $\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}v_2 = \{v_1 \in \breve{\mathcal{U}}_1 : (v_1, v_2) \in \ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}\}$ is the fore set of $v_2$.*
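Definition 10 can likewise be sketched over a crisp level cut. The version below follows the displayed formulas literally, including the non-emptiness requirement on the after set; the helper names are ours, and the cut relation is given as a set of index pairs $(i, j)$ with $i$ ranging over $\breve{\mathcal{U}}_1$ and $j$ over $\breve{\mathcal{U}}_2$:

```python
# Sketch of the lower/upper approximations of Definition 10 over a crisp
# level-cut relation rel, given as a set of pairs (i, j). The non-emptiness
# condition on the after set is kept exactly as stated in the definition.

def after_set(rel, i):
    """The after set i rho = {j : (i, j) in rel}."""
    return {j for (x, j) in rel if x == i}

def lower_appr(rel, m, target):
    """Lower approximation: nonempty after set contained in target."""
    return {i for i in range(m)
            if after_set(rel, i) and after_set(rel, i) <= set(target)}

def upper_appr(rel, m, target):
    """Upper approximation: after set meets the target."""
    return {i for i in range(m) if after_set(rel, i) & set(target)}
```

The fore-set approximations of a subset $\mathcal{X} \subseteq \breve{\mathcal{U}}_1$ are obtained symmetrically by swapping the roles of the two coordinates.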

## **Remark 1.**


**Definition 11.** *Let $\ddot{\mathbb{P}} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ be an LDF-RAS and $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$. Then, the following sets:*

*(1) $LDF\text{-}POS_{\ddot{\mathbb{P}}}(\mathcal{Y}) = \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y})$;*

*(2) $LDF\text{-}BND_{\ddot{\mathbb{P}}}(\mathcal{Y}) = \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}) - \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y})$;*

*(3) $LDF\text{-}NEG_{\ddot{\mathbb{P}}}(\mathcal{Y}) = \breve{\mathcal{U}}_1 - \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}) = \left(\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y})\right)^c$*

*are called the PR, BR and NR of $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$, respectively.*

In the sequel of this manuscript, $\ddot{\mathbb{P}} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ denotes an LDF-RAS and $\ddot{s}, \ddot{u} \in (0, 1]$, $\ddot{t}, \ddot{v} \in [0, 1)$.

**Proposition 2.** *Let $\mathcal{Y}_1, \mathcal{Y}_2 \subseteq \breve{\mathcal{U}}_2$. Then,*

*(1) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \subseteq \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1)$;*

*(2) $\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\emptyset) = \emptyset = \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\emptyset)$;*

*(3) If $\mathcal{Y}_1 \subseteq \mathcal{Y}_2$, then $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \subseteq \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_2)$;*

*(4) If $\mathcal{Y}_1 \subseteq \mathcal{Y}_2$, then $\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \subseteq \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_2)$;*

*(5) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1 \cap \mathcal{Y}_2) = \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \cap \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_2)$;*

*(6) $\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1 \cap \mathcal{Y}_2) \subseteq \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \cap \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_2)$;*

*(7) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1 \cup \mathcal{Y}_2) \supseteq \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \cup \underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_2)$;*

*(8) $\overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1 \cup \mathcal{Y}_2) = \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_1) \cup \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}_2)$.*

**Proof.** All the assertions can be easily proved by using Definition 10.

Note that if $v_1\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>} = \emptyset$ for some $v_1 \in \breve{\mathcal{U}}_1$, then assertions (1) and (2) may not hold (see Example 2).

**Example 2.** *Let $\breve{\mathcal{U}}_1 = \{u_1, u_2, u_3\}$ and $\breve{\mathcal{U}}_2 = \{v_1, v_2, v_3\}$ be the universal sets. Then, we define an LDF-R $\ddot{\rho}$ from $\breve{\mathcal{U}}_1$ to $\breve{\mathcal{U}}_2$ in matrix notation as follows:*

$$
\begin{split}
\Theta^M &= \begin{pmatrix} 0.77 & 0.57 & 0.67 \\ 0.55 & 0.48 & 0.50 \\ 0.68 & 0.45 & 0.43 \end{pmatrix}, \ \Theta^N = \begin{pmatrix} 0.71 & 0.41 & 0.56 \\ 0.80 & 0.72 & 0.46 \\ 0.54 & 0.40 & 0.22 \end{pmatrix}, \\
\alpha^M &= \begin{pmatrix} \cdot & \cdot & \cdot \\ 0.46 & 0.40 & 0.37 \\ 0.54 & 0.39 & 0.35 \end{pmatrix}, \ \alpha^N = \begin{pmatrix} 0.49 & 0.46 & 0.38 \\ 0.52 & 0.58 & 0.58 \\ 0.45 & 0.56 & 0.61 \end{pmatrix}.
\end{split}
$$

*Using Definition 8 of the $(\langle \ddot{s}, \ddot{u} \rangle, \langle \ddot{t}, \ddot{v} \rangle)$-level cut relation, for $\ddot{s} = 0.77$, $\ddot{u} = 0.51$, $\ddot{t} = 0.71$, $\ddot{v} = 0.49$, we can obtain:*

$$u_1\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} = \{v_1\}, \quad u_2\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} = u_3\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} = \emptyset$$

*Suppose $\mathcal{Y} = \{v_1, v_2\}$. Then, by Definition 10,*

$$\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y}) = \breve{\mathcal{U}}_1, \quad \overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y}) = \{u_1\}$$

$$\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\emptyset) = \{u_2, u_3\}, \quad \overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\emptyset) = \emptyset$$

$$\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\breve{\mathcal{U}}_2) = \breve{\mathcal{U}}_1, \quad \overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\breve{\mathcal{U}}_2) = \{u_1\}$$

*Thus, we obtain that $\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\emptyset) \ne \emptyset$ and $\overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\breve{\mathcal{U}}_2) \ne \breve{\mathcal{U}}_1$. However, if $u\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} \ne \emptyset$ for all $u \in \breve{\mathcal{U}}_1$, then:*

$$\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\breve{\mathcal{U}}_2) = \overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\breve{\mathcal{U}}_2) = \{u_1\} \ne \breve{\mathcal{U}}_1$$

*(see Proposition 3).*

**Proposition 3.** *Let $\ddot{\rho}$ be an R-LDF-R on $\breve{\mathcal{U}}_1$ and $\ddot{s}, \ddot{u} \in (0, 1]$, $\ddot{t}, \ddot{v} \in [0, 1)$. For any subset $\mathcal{Y} \subseteq \breve{\mathcal{U}}_1$, the following properties hold:*

*(1) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y}) \subseteq \mathcal{Y} \subseteq \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\mathcal{Y})$;*

*(2) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\breve{\mathcal{U}}_1) = \breve{\mathcal{U}}_1 = \overline{appr}_{\ddot{\rho}^{<\ddot{t}, \ddot{v}>}_{<\ddot{s}, \ddot{u}>}}(\breve{\mathcal{U}}_1)$.*

**Proof.** The proof is straightforward.

**Lemma 1.** *Suppose that $\ddot{s}_1, \ddot{s}_2, \ddot{u}_1, \ddot{u}_2 \in (0, 1]$ and $\ddot{t}_1, \ddot{t}_2, \ddot{v}_1, \ddot{v}_2 \in [0, 1)$ are such that $\ddot{s}_1 \le \ddot{s}_2$, $\ddot{u}_1 \le \ddot{u}_2$ and $\ddot{t}_2 \le \ddot{t}_1$, $\ddot{v}_2 \le \ddot{v}_1$. Then,*

$$
\ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>} \subseteq \ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>}.
$$

**Proof.** Let $(v_1, v_2) \in \ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>}$. Using Definition 8, $\Theta^M(v_1, v_2) \ge \ddot{s}_2$, $\alpha^M(v_1, v_2) \ge \ddot{u}_2$ and $\Theta^N(v_1, v_2) \le \ddot{t}_2$, $\alpha^N(v_1, v_2) \le \ddot{v}_2$. Since $\ddot{s}_1 \le \ddot{s}_2$, $\ddot{u}_1 \le \ddot{u}_2$ and $\ddot{t}_2 \le \ddot{t}_1$, $\ddot{v}_2 \le \ddot{v}_1$, we have

$$\Theta^M(v_1, v_2) \ge \ddot{s}_2 \ge \ddot{s}_1, \ \alpha^M(v_1, v_2) \ge \ddot{u}_2 \ge \ddot{u}_1 \text{ and } \Theta^N(v_1, v_2) \le \ddot{t}_2 \le \ddot{t}_1, \ \alpha^N(v_1, v_2) \le \ddot{v}_2 \le \ddot{v}_1.$$

Hence, $\Theta^M(v_1, v_2) \ge \ddot{s}_1$, $\alpha^M(v_1, v_2) \ge \ddot{u}_1$ and $\Theta^N(v_1, v_2) \le \ddot{t}_1$, $\alpha^N(v_1, v_2) \le \ddot{v}_1$. Thus, $(v_1, v_2) \in \ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>}$.
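Lemma 1 has a one-line computational reading: tightening the thresholds (raising $\ddot{s}, \ddot{u}$, lowering $\ddot{t}, \ddot{v}$) can only shrink the level cut. The sketch below demonstrates this on made-up matrices; the `level_cut` helper mirrors Definition 8, and none of the numbers are data from the paper.

```python
# Demonstration of Lemma 1: a stricter level cut is contained in a looser one.

def level_cut(theta_m, theta_n, alpha_m, alpha_n, s, u, t, v):
    m, n = len(theta_m), len(theta_m[0])
    return {(i, j) for i in range(m) for j in range(n)
            if theta_m[i][j] >= s and alpha_m[i][j] >= u
            and theta_n[i][j] <= t and alpha_n[i][j] <= v}

# Made-up 2 x 2 grade matrices.
tm = [[0.9, 0.6], [0.7, 0.4]]
tn = [[0.1, 0.3], [0.2, 0.5]]
am = [[0.5, 0.4], [0.45, 0.3]]
an = [[0.2, 0.3], [0.25, 0.4]]

loose = level_cut(tm, tn, am, an, 0.5, 0.3, 0.5, 0.4)
tight = level_cut(tm, tn, am, an, 0.7, 0.45, 0.2, 0.25)
assert tight <= loose  # the stricter cut is contained in the looser one
```

This containment is exactly what Proposition 4 then lifts to the lower and upper approximations.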

**Proposition 4.** *With the same assumptions as in Lemma 1, suppose that $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$. Then, the following assertions are true:*

*(1) $\overline{appr}_{\ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>}}(\mathcal{Y}) \subseteq \overline{appr}_{\ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>}}(\mathcal{Y})$;*

*(2) $\underline{appr}_{\ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>}}(\mathcal{Y}) \subseteq \underline{appr}_{\ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>}}(\mathcal{Y})$.*

**Proof.** (1) Let $v_1 \in \overline{appr}_{\ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>}}(\mathcal{Y})$. From Definition 10, $v_2 \in v_1\ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>} \cap \mathcal{Y}$ for some $v_2 \in \breve{\mathcal{U}}_2$. Since $v_1\ddot{\rho}^{<\ddot{t}_2, \ddot{v}_2>}_{<\ddot{s}_2, \ddot{u}_2>} \subseteq v_1\ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>}$ (by Lemma 1), we have $v_2 \in v_1\ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>} \cap \mathcal{Y}$. Hence, $v_1 \in \overline{appr}_{\ddot{\rho}^{<\ddot{t}_1, \ddot{v}_1>}_{<\ddot{s}_1, \ddot{u}_1>}}(\mathcal{Y})$.

(2) Let *v*<sup>1</sup> ∈ *appr ρ*¨ <sup>&</sup>lt;¨*t*<sup>1</sup> ,*v*¨1> <*s*¨1 ,*u*¨1> (Y). By Definition 10, *v*1*ρ*¨ <¨*t*<sup>1</sup> ,*v*¨1> <*s*¨<sup>1</sup> ,*u*¨1<sup>&</sup>gt; ⊆ Y. From Lemma 1, *v*1*ρ*¨ <¨*t*2,*v*¨2> <sup>&</sup>lt;*s*¨2,*u*¨2<sup>&</sup>gt; ⊆ Y. This proves that *v*<sup>1</sup> ∈ *appr ρ*¨ <sup>&</sup>lt;¨*t*<sup>2</sup> ,*v*¨2> <*s*¨2 ,*u*¨2> (Y).

The inclusions in Proposition 4 may fail to hold, as the following example demonstrates.

**Example 3.** *Let us revisit Example 2 and assume* $\ddot{s}_1 = 0.55$, $\ddot{u}_1 = 0.46$, $\ddot{t}_1 = 0.80$, $\ddot{v}_1 = 0.52$ *and* $\ddot{s}_2 = 0.77$, $\ddot{u}_2 = 0.51$, $\ddot{t}_2 = 0.71$, $\ddot{v}_2 = 0.49$*. Then, by Definition 8,*

$$u_1\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>} = \breve{\mathcal{U}}_2, \quad u_2\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>} = u_3\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>} = \{v_1\}$$

$$u_1\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} = \{v_1\}, \quad u_2\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} = u_3\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>} = \emptyset$$

*Take* $\mathcal{Y} = \{v_1\}$*; then, by Definition 10, we have:*

$$\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y}) = \overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y}) = \{u_1\}$$

$$\underline{appr}_{\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>}}(\mathcal{Y}) = \{u_2, u_3\}, \quad \overline{appr}_{\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>}}(\mathcal{Y}) = \breve{\mathcal{U}}_1$$

*Here* $\ddot{s}_1 < \ddot{s}_2$*,* $\ddot{u}_1 < \ddot{u}_2$ *and* $\ddot{t}_1 > \ddot{t}_2$*,* $\ddot{v}_1 > \ddot{v}_2$*, yet* $\underline{appr}_{\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>}}(\mathcal{Y}) \not\subseteq \underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y})$*, so the corresponding inclusion of Proposition 4 fails.*

**Lemma 2.** *Let* $\ddot{\rho}_1, \ddot{\rho}_2 \in LDF\text{-}R(\breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2)$ *be such that* $\ddot{\rho}_1 \subseteq \ddot{\rho}_2$*. Then,*

$$\ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>} \subseteq \ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}.$$

**Proof.** Let $(v_1, v_2) \in \ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}$. By Definition 8, $\Theta^M_1(v_1, v_2) \geq \ddot{s}$, $v^M_1(v_1, v_2) \geq \ddot{u}$ and $\Theta^N_1(v_1, v_2) \leq \ddot{t}$, $v^N_1(v_1, v_2) \leq \ddot{v}$. Since $\ddot{\rho}_1 \subseteq \ddot{\rho}_2$, therefore $\ddot{s} \leq \Theta^M_1(v_1, v_2) \leq \Theta^M_2(v_1, v_2)$, $\ddot{u} \leq v^M_1(v_1, v_2) \leq v^M_2(v_1, v_2)$ and $\ddot{t} \geq \Theta^N_1(v_1, v_2) \geq \Theta^N_2(v_1, v_2)$, $\ddot{v} \geq v^N_1(v_1, v_2) \geq v^N_2(v_1, v_2)$. Hence, $\Theta^M_2(v_1, v_2) \geq \ddot{s}$, $v^M_2(v_1, v_2) \geq \ddot{u}$ and $\Theta^N_2(v_1, v_2) \leq \ddot{t}$, $v^N_2(v_1, v_2) \leq \ddot{v}$. Thus, $(v_1, v_2) \in \ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}$.

**Proposition 5.** *With the same notations as in Lemma 2, assume that* $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$*. Then,*

*(1)* $\underline{appr}_{\ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y}) \subseteq \underline{appr}_{\ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y})$*;* *(2)* $\overline{appr}_{\ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y}) \subseteq \overline{appr}_{\ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y})$*.*

**Proof.** (1) Let $v \in \underline{appr}_{\ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y})$. Then, $v\ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>} \subseteq \mathcal{Y}$. By Lemma 2, $v\ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>} \subseteq v\ddot{\rho}_2{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>} \subseteq \mathcal{Y}$. Hence, $v\ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>} \subseteq \mathcal{Y}$, which proves that $v \in \underline{appr}_{\ddot{\rho}_1{}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y})$. The proof of (2) is similar to that of (1).
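The level-cut relations and approximation operators manipulated in Lemmas 1 and 2 and Propositions 4 and 5 are easy to experiment with numerically. The following is a minimal sketch, not code from the paper; the data structure (a dict mapping pairs to the four grades $\Theta^M, v^M, \Theta^N, v^N$) and all names are our own assumptions:

```python
# Sketch of the level cut (Definition 8) and rough approximations
# (Definition 10); an LDF-relation is modelled as a dict:
# (x, y) -> (theta_M, v_M, theta_N, v_N).

def after_set(rel, x, s, u, t, v):
    """All y related to x at thresholds (<s,u>, <t,v>): membership grades
    at least s, u and non-membership grades at most t, v."""
    return {y for (a, y), (tm, vm, tn, vn) in rel.items()
            if a == x and tm >= s and vm >= u and tn <= t and vn <= v}

def approximations(rel, X1, Y, s, u, t, v):
    """Lower and upper approximations of Y over the first universe X1;
    empty after-sets are excluded from the lower approximation, as in
    the worked examples of the paper."""
    cuts = {x: after_set(rel, x, s, u, t, v) for x in X1}
    lower = {x for x, A in cuts.items() if A and A <= Y}
    upper = {x for x, A in cuts.items() if A & Y}
    return lower, upper

rel = {("u1", "v1"): (0.9, 0.8, 0.2, 0.1), ("u1", "v2"): (0.4, 0.3, 0.6, 0.5),
       ("u2", "v1"): (0.7, 0.6, 0.3, 0.2), ("u2", "v2"): (0.8, 0.7, 0.1, 0.1)}

# Loosening s from 0.75 to 0.6 enlarges every after-set (Lemma 1); here the
# upper approximation grows from {u1} to {u1, u2} while the lower stays {u1}.
tight = approximations(rel, {"u1", "u2"}, {"v1"}, 0.75, 0.5, 0.4, 0.3)
loose = approximations(rel, {"u1", "u2"}, {"v1"}, 0.6, 0.5, 0.4, 0.3)
```

The toy relation and thresholds are invented purely for illustration; the cut rule itself follows the inequality pattern of Definition 8 as used throughout the proofs above.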

## **5. Accuracy Measure and Roughness Measure for LDF-RSs on Two Universes**

The concepts of A-M and R-M were first introduced by Pawlak in 1982 in order to quantify the imprecision of R-As. Our perception of the accuracy of the data relating an E-R to a given classification is based on these numerical measures. In [51], Yang et al. gave the idea of A-M and R-M for BF-RSs on dual universes. In this section, we extend this concept to LDF-RSs on two universes.

Consider a Pawlak A-S $P = (\breve{\mathcal{U}}, \rho)$, where $\rho$ is an E-R on $\breve{\mathcal{U}}$. The A-M and R-M of a subset $\mathcal{O}$ of $\breve{\mathcal{U}}$ are defined, respectively, as follows:

$$AM(\mathcal{O}) = \frac{|\underline{\rho}(\mathcal{O})|}{|\overline{\rho}(\mathcal{O})|} \quad \text{and} \quad RM(\mathcal{O}) = 1 - AM(\mathcal{O}).$$

We define the subsequent ideas by using the same pattern.

**Definition 12.** *Let* $\ddot{P} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ *be an LDF-RAS and* $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$*; define the AM of* $\mathcal{Y}$ *with respect to* $\ddot{\rho}$ *as follows:*

$$AM(\mathcal{Y}) = \frac{\big|\underline{appr}_{\ddot{\rho}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y})\big|}{\big|\overline{appr}_{\ddot{\rho}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}(\mathcal{Y})\big|}$$

*where* $|\cdot|$ *indicates the number of elements of a set. The RM of* $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$ *with respect to* $\ddot{\rho}$ *is then defined as follows:*

$$RM(\mathcal{Y}) = 1 - AM(\mathcal{Y})$$
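In code, Definition 12 reduces to a cardinality ratio. A small sketch under our own naming, assuming the two approximations are already computed as finite Python sets:

```python
def AM(lower, upper):
    # Accuracy measure: |lower approximation| / |upper approximation|.
    # Definition 12 implicitly presupposes a non-empty upper approximation.
    return len(lower) / len(upper)

def RM(lower, upper):
    # Roughness measure: RM(Y) = 1 - AM(Y).
    return 1 - AM(lower, upper)

# With lower = {u2, u3} and upper = {u1, u2, u3}, the accuracy is 2/3.
am = AM({"u2", "u3"}, {"u1", "u2", "u3"})
```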

**Remark 2.** *The following points can be deduced from Definition 12 given above:*


In the following, we construct an example to clarify Definition 12.

**Example 4.** *In Example 3, for* $\ddot{s}_2 = 0.77$, $\ddot{u}_2 = 0.51$, $\ddot{t}_2 = 0.71$, $\ddot{v}_2 = 0.49$ *and* $\mathcal{Y} = \{v_1\}$*, we have:*

$$\underline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y}) = \overline{appr}_{\ddot{\rho}^{<0.71,0.49>}_{<0.77,0.51>}}(\mathcal{Y}) = \{u_1\}$$

*Thus, by Definition 12,* $AM(\mathcal{Y}) = 1$ *and* $RM(\mathcal{Y}) = 0$*. Hence, our information related to* $\ddot{\rho}$ *is accurate up to grade 1, which means that* $\ddot{\rho}$ *describes the objects of* $\mathcal{Y}$ *absolutely accurately. On the other hand, for* $\ddot{s}_1 = 0.55$, $\ddot{u}_1 = 0.46$, $\ddot{t}_1 = 0.80$, $\ddot{v}_1 = 0.52$ *and* $\mathcal{Y} = \{v_1\}$*, we have:*

$$\underline{appr}_{\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>}}(\mathcal{Y}) = \{u_2, u_3\}, \quad \overline{appr}_{\ddot{\rho}^{<0.80,0.52>}_{<0.55,0.46>}}(\mathcal{Y}) = \breve{\mathcal{U}}_1$$

*Then,* $AM(\mathcal{Y}) = \frac{2}{3}$ *and* $RM(\mathcal{Y}) = \frac{1}{3}$*. Hence, our information related to* $\ddot{\rho}$ *is accurate up to grade 0.6667, which means that* $\ddot{\rho}$ *describes the items of* $\breve{\mathcal{U}}_2$ *accurately up to grade 0.6667.*

In the following result, we describe a connection between the A-M $AM(\mathcal{Y})$ and the R-M $RM(\mathcal{Y})$ for the union and intersection of $\mathcal{Y}_1$ and $\mathcal{Y}_2$ on the universe $\breve{\mathcal{U}}_2$.

**Theorem 5.** *Let* $\ddot{P} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ *be an LDF-RAS and let* $\mathcal{Y}_1, \mathcal{Y}_2$ *be any non-empty subsets of* $\breve{\mathcal{U}}_2$*. Then, the A-M and R-M of* $\mathcal{Y}_1$*,* $\mathcal{Y}_2$*,* $\mathcal{Y}_1 \cup \mathcal{Y}_2$ *and* $\mathcal{Y}_1 \cap \mathcal{Y}_2$ *satisfy the following relations, where* $\overline{appr}$ *abbreviates* $\overline{appr}_{\ddot{\rho}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}}$*:*

*(1)* $RM(\mathcal{Y}_1 \cup \mathcal{Y}_2)\,\big|\overline{appr}(\mathcal{Y}_1) \cup \overline{appr}(\mathcal{Y}_2)\big| \leq RM(\mathcal{Y}_1)\,\big|\overline{appr}(\mathcal{Y}_1)\big| + RM(\mathcal{Y}_2)\,\big|\overline{appr}(\mathcal{Y}_2)\big| - RM(\mathcal{Y}_1 \cap \mathcal{Y}_2)\,\big|\overline{appr}(\mathcal{Y}_1) \cap \overline{appr}(\mathcal{Y}_2)\big|$*;*

*(2)* $AM(\mathcal{Y}_1 \cup \mathcal{Y}_2)\,\big|\overline{appr}(\mathcal{Y}_1) \cup \overline{appr}(\mathcal{Y}_2)\big| \geq AM(\mathcal{Y}_1)\,\big|\overline{appr}(\mathcal{Y}_1)\big| + AM(\mathcal{Y}_2)\,\big|\overline{appr}(\mathcal{Y}_2)\big| - AM(\mathcal{Y}_1 \cap \mathcal{Y}_2)\,\big|\overline{appr}(\mathcal{Y}_1) \cap \overline{appr}(\mathcal{Y}_2)\big|$*.*

**Proof.** The proof resembles that of Theorem 3.3 in [51].

## **6. An Application of LDF-RSs on Two Different Universes**

In the literature, a number of scientists have developed various techniques for medical diagnosis. Sun and Ma [48] presented an application of the F-RS model on two distinct universes in clinical diagnosis systems. Since the information is insufficient in the case of F-RS, Yang et al. [51] expanded the idea of Sun and Ma [48] to the BF-RS model on two distinct universes. LD-FSs are more efficient in decision analysis than the prevailing concepts of FS, IF-S, B-FS and q-ROF-S. Therefore, we extend the existing technique of BF-RS to a more general and robust model, namely LDF-RS on two different universes, and utilize this notion in clinical diagnosis.

Suppose that $\breve{\mathcal{U}}_1$ refers to the collection of afflicted people and $\breve{\mathcal{U}}_2$ indicates the set of symptoms. Let $\ddot{P} = (\breve{\mathcal{U}}_1, \breve{\mathcal{U}}_2, \ddot{\rho})$ be an LDF-RAS. If $(v_1, v_2) \in \ddot{\rho}^{<\ddot{t},\ddot{v}>}_{<\ddot{s},\ddot{u}>}$ for $v_1 \in \breve{\mathcal{U}}_1$ and $v_2 \in \breve{\mathcal{U}}_2$, then we say that the sufferer $v_1$ has the symptom $v_2$: the degree to which the patient exhibits symptom $v_2$ is at least $\ddot{s}$ and the degree of its corresponding parameter is not less than $\ddot{u}$, while the degree of non-existence of symptom $v_2$ is not greater than $\ddot{t}$ and the degree of its corresponding parameter is not greater than $\ddot{v}$.

We are aware that a certain illness has a number of common symptoms. We denote a certain disease by $\mathcal{Y} = \{y_i \in \breve{\mathcal{U}}_2 : i \in I\}$ for some $\mathcal{Y} \subseteq \breve{\mathcal{U}}_2$ and make the following inferences using the PR, NR and BR described in Definition 11:

Let $v \in \breve{\mathcal{U}}_1$ be a given sufferer. Then,


Let us use a specific case to demonstrate this.

**Example 5.** *Let* $\breve{\mathcal{U}}_1 = \{p_1, p_2, p_3, p_4\}$ *be the group of certain victims and* $\breve{\mathcal{U}}_2 = \{q_1, q_2, q_3\}$ *be the set of some symptoms. Consider an LDF-R* $\ddot{\rho}$ *from* $\breve{\mathcal{U}}_1$ *to* $\breve{\mathcal{U}}_2$*. It describes the M and NM grades, together with the grades of their parameters, for each patient* $p_i$ *in relation to the symptom* $q_j$*, in the following matrices:*

$$\Theta^{M} = \begin{pmatrix} 0.80 & 0.54 & 0.68 \\ 0.71 & 0.45 & 0.40 \\ 0.57 & 0.36 & 0.75 \\ 0.85 & 0.81 & 0.62 \end{pmatrix}, \quad \Theta^{N} = \begin{pmatrix} 0.35 & 0.46 & 0.38 \\ 0.36 & 0.72 & 0.43 \\ 0.46 & 0.56 & 0.47 \\ 0.21 & 0.32 & 0.25 \end{pmatrix},$$

$$v^{M} = \begin{pmatrix} 0.71 & 0.50 & 0.62 \\ 0.62 & 0.38 & 0.30 \\ 0.46 & 0.26 & 0.60 \\ 0.80 & 0.78 & 0.59 \end{pmatrix}, \quad v^{N} = \begin{pmatrix} 0.24 & 0.48 & 0.38 \\ 0.38 & 0.52 & 0.70 \\ 0.54 & 0.66 & 0.40 \\ 0.20 & 0.18 & 0.28 \end{pmatrix}.$$

*Let* $\mathcal{Y} = \{q_1, q_2\}$ *symbolize a specific sickness whose two symptoms are observed in the clinic.*

*Case-1: For s*¨ = 0.45*, u*¨ = 0.38 *and* ¨*t* = 0.72*, v*¨ = 0.52*, we have:*

$$p_1\ddot{\rho}^{<0.72,0.52>}_{<0.45,0.38>} = p_4\ddot{\rho}^{<0.72,0.52>}_{<0.45,0.38>} = \breve{\mathcal{U}}_2, \quad p_2\ddot{\rho}^{<0.72,0.52>}_{<0.45,0.38>} = \{q_1, q_2\}, \quad p_3\ddot{\rho}^{<0.72,0.52>}_{<0.45,0.38>} = \{q_3\}$$

*(see Definition 8). By simple computations, the L-A and U-A of* Y *are given below:*

$$\underline{appr}_{\ddot{\rho}^{<0.72,0.52>}_{<0.45,0.38>}}(\mathcal{Y}) = \{p_2\}, \quad \overline{appr}_{\ddot{\rho}^{<0.72,0.52>}_{<0.45,0.38>}}(\mathcal{Y}) = \{p_1, p_2, p_4\}$$

*Using Definition 10, LDF* − *POS*P¨(Y) = {*p*2}*, LDF* − *BND*P¨(Y) = {*p*1, *p*4} *and LDF* − *NEG*P¨(Y) = {*p*3}*. Furthermore, by Definition 12, the A-M and R-M are calculated as:*

$$AM(\mathcal{Y}) = \frac{1}{3}, \quad RM(\mathcal{Y}) = \frac{2}{3}$$

*Thus, we interpret the subsequent results:*


*Case-2: For* $\ddot{s} = 0.57$*,* $\ddot{u} = 0.46$ *and* $\ddot{t} = 0.46$*,* $\ddot{v} = 0.54$*, we have:*

$$p_1\ddot{\rho}^{<0.46,0.54>}_{<0.57,0.46>} = \{q_1, q_3\}, \quad p_2\ddot{\rho}^{<0.46,0.54>}_{<0.57,0.46>} = \{q_1\} = p_3\ddot{\rho}^{<0.46,0.54>}_{<0.57,0.46>}, \quad p_4\ddot{\rho}^{<0.46,0.54>}_{<0.57,0.46>} = \breve{\mathcal{U}}_2$$

*(using Definition 8). By simple calculations, the L- and U-As of* Y *are as follows:*

$$\underline{appr}_{\ddot{\rho}^{<0.46,0.54>}_{<0.57,0.46>}}(\mathcal{Y}) = \{p_2, p_3\}, \quad \overline{appr}_{\ddot{\rho}^{<0.46,0.54>}_{<0.57,0.46>}}(\mathcal{Y}) = \breve{\mathcal{U}}_1$$

*Using Definition 10,* $LDF-POS_{\ddot{P}}(\mathcal{Y}) = \{p_2, p_3\}$*,* $LDF-BND_{\ddot{P}}(\mathcal{Y}) = \{p_1, p_4\}$ *and* $LDF-NEG_{\ddot{P}}(\mathcal{Y}) = \emptyset$*. Further, using Definition 12, the A-M and R-M are computed as follows:*

$$AM(\mathcal{Y}) = \frac{1}{2}, \quad RM(\mathcal{Y}) = \frac{1}{2}$$

*Thus, we conclude that:*


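Both cases of Example 5 can be re-checked mechanically. The sketch below is our own re-implementation of the level cut of Definition 8 on the matrices of Example 5, indexing patients $p_1$–$p_4$ as rows 0–3 and symptoms $q_1$–$q_3$ as columns 0–2; all function names are ours:

```python
# Grade matrices of Example 5 (rows = patients p1..p4, columns = q1..q3).
theta_M = [[0.80, 0.54, 0.68], [0.71, 0.45, 0.40],
           [0.57, 0.36, 0.75], [0.85, 0.81, 0.62]]
theta_N = [[0.35, 0.46, 0.38], [0.36, 0.72, 0.43],
           [0.46, 0.56, 0.47], [0.21, 0.32, 0.25]]
v_M = [[0.71, 0.50, 0.62], [0.62, 0.38, 0.30],
       [0.46, 0.26, 0.60], [0.80, 0.78, 0.59]]
v_N = [[0.24, 0.48, 0.38], [0.38, 0.52, 0.70],
       [0.54, 0.66, 0.40], [0.20, 0.18, 0.28]]
Y = {0, 1}  # the disease Y = {q1, q2}

def diagnose(s, u, t, v):
    """Level cut of Definition 8, then lower/upper approximations and AM."""
    cuts = {i: {j for j in range(3)
                if theta_M[i][j] >= s and v_M[i][j] >= u
                and theta_N[i][j] <= t and v_N[i][j] <= v}
            for i in range(4)}
    lower = {i for i, A in cuts.items() if A and A <= Y}
    upper = {i for i, A in cuts.items() if A & Y}
    return lower, upper, len(lower) / len(upper)

case1 = diagnose(0.45, 0.38, 0.72, 0.52)  # lower {p2}, upper {p1,p2,p4}, AM 1/3
case2 = diagnose(0.57, 0.46, 0.46, 0.54)  # lower {p2,p3}, upper all, AM 1/2
```

Running this reproduces the positive, boundary and negative regions stated for both cases above.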
## **Remark 3.**


## *Comparative Analysis*

In this section, we contrast our findings with some of the previously used methods of Yang et al. [51], Sun and Ma [48] and Ayub et al. [52].

**Example 6.** *For [48], consider our previous Example 5, where* $\breve{\mathcal{U}}_1 = \{p_1, p_2, p_3, p_4\}$ *and* $\breve{\mathcal{U}}_2 = \{q_1, q_2, q_3\}$*. The following F-R* $\Theta^M$ *on* $\breve{\mathcal{U}}_1 \times \breve{\mathcal{U}}_2$ *describes the M grades for each patient* $p_i$ *in connection with the symptom* $q_j$*:*

$$\Theta^M = \begin{pmatrix} 0.80 & 0.54 & 0.68 \\ 0.71 & 0.45 & 0.40 \\ 0.57 & 0.36 & 0.75 \\ 0.85 & 0.81 & 0.62 \end{pmatrix}$$

*Using Definition 3.3 of [48] for level cuts, we obtain the following for s*¨ = 0.45*:*

$$p_1\Theta^M_{0.45} = p_4\Theta^M_{0.45} = \breve{\mathcal{U}}_2, \quad p_2\Theta^M_{0.45} = \{q_1, q_2\}, \quad p_3\Theta^M_{0.45} = \{q_1, q_3\}$$

*For* Y = {*q*1, *q*2}*, the L- and U-As are obtained by using Definition 3.3 of [48] below:*

$$\underline{appr}_{\Theta^M_{0.45}}(\mathcal{Y}) = \{p_2\}, \quad \overline{appr}_{\Theta^M_{0.45}}(\mathcal{Y}) = \breve{\mathcal{U}}_1$$

*Therefore, P* − *R*(Y) = {*p*2}*, B* − *R*(Y) = {*p*1, *p*3, *p*4} *and N* − *R*(Y) = ∅*. As a result, the following conclusions may be made from this information:*


*For s*¨ = 0.57*, we have:*

$$p_1\Theta^M_{0.57} = p_3\Theta^M_{0.57} = \{q_1, q_3\}, \quad p_2\Theta^M_{0.57} = \{q_1\}, \quad p_4\Theta^M_{0.57} = \breve{\mathcal{U}}_2$$

*The L- and U-As for* Y *are found by applying Definition 3.3 of [48] below:*

$$\underline{appr}_{\Theta^M_{0.57}}(\mathcal{Y}) = \{p_2\}, \quad \overline{appr}_{\Theta^M_{0.57}}(\mathcal{Y}) = \breve{\mathcal{U}}_1$$

*Therefore, P* − *R*(Y) = {*p*2}*, B* − *R*(Y) = {*p*1, *p*3, *p*4} *and N* − *R*(Y) = ∅*. Thus, it follows that:*


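The Sun and Ma comparison in Example 6 uses only the membership matrix $\Theta^M$ and a single threshold. A sketch of the $\ddot{s}$-level cut and the resulting approximations, in our own naming, with the matrix data taken from Example 6:

```python
# Membership matrix of Example 6 (rows = patients p1..p4, columns = q1..q3).
theta_M = [[0.80, 0.54, 0.68], [0.71, 0.45, 0.40],
           [0.57, 0.36, 0.75], [0.85, 0.81, 0.62]]
Y = {0, 1}  # Y = {q1, q2}

def fuzzy_approximations(s):
    """s-level cut of the fuzzy relation, then lower/upper approximations."""
    cuts = {i: {j for j in range(3) if theta_M[i][j] >= s} for i in range(4)}
    lower = {i for i, A in cuts.items() if A and A <= Y}
    upper = {i for i, A in cuts.items() if A & Y}
    return lower, upper

# At both thresholds only p2 (row 1) lands in the positive region and the
# upper approximation is the whole patient set, so raising s from 0.45 to
# 0.57 does not sharpen the diagnosis here.
at_045 = fuzzy_approximations(0.45)
at_057 = fuzzy_approximations(0.57)
```

This makes concrete why the single-threshold F-RS model is coarser than the four-threshold LDF-RS cut used in Example 5.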
**Example 7.** *We use the same Example 5 with BF-R which is expressed in the Table 1 for [51]:*

**Table 1.** *ρB*.


*Using Definition 3.1 of [51] for the* $<\ddot{s}, \ddot{t}>$*-level cuts with* $\ddot{s} = 0.45$ *and* $\ddot{t} = 0.54$*, we have the following:*

$$p_1(\rho_B)^{<0.45,0.54>} = p_4(\rho_B)^{<0.45,0.54>} = \breve{\mathcal{U}}_2, \quad p_2(\rho_B)^{<0.45,0.54>} = \{q_1, q_2\}, \quad p_3(\rho_B)^{<0.45,0.54>} = \{q_3\}$$

*From Definition 3.2 of [51], the L-A and U-A of* $\mathcal{Y}$ *are given below:*

$$\underline{appr}_{(\rho_B)^{<0.45,0.54>}}(\mathcal{Y}) = \{p_2\}, \quad \overline{appr}_{(\rho_B)^{<0.45,0.54>}}(\mathcal{Y}) = \{p_1, p_2, p_4\}$$

*Therefore, P* − *R*(Y) = {*p*2}*, B* − *R*(Y) = {*p*1, *p*4} *and N* − *R*(Y) = {*p*3}*. Thus, based on these findings, the following inferences can be made:*


*Now, for s*¨ = 0.57 *and* ¨*t* = 0.40*, using Definition 3.1 of [51] for* < *s*¨, ¨*<sup>t</sup>* <sup>&</sup>gt; <sup>−</sup>*level cuts, we obtain the following:*

$$p_1(\rho_B)^{<0.57,0.40>} = p_3(\rho_B)^{<0.57,0.40>} = \{q_1, q_3\}, \quad p_2(\rho_B)^{<0.57,0.40>} = \{q_1\}, \quad p_4(\rho_B)^{<0.57,0.40>} = \breve{\mathcal{U}}_2$$

*By using Definition 3.2 of [51] and simple calculations, we obtain the L-A and U-A of* Y *in the sequel:*

$$\underline{appr}_{(\rho_B)^{<0.57,0.40>}}(\mathcal{Y}) = \{p_2\}, \quad \overline{appr}_{(\rho_B)^{<0.57,0.40>}}(\mathcal{Y}) = \breve{\mathcal{U}}_1$$

*Therefore, P* − *R*(Y) = {*p*2}*, B* − *R*(Y) = {*p*1, *p*3, *p*4} *and N* − *R*(Y) = ∅*. Based on these results, we conclude that:*


**Example 8.** *For [52], consider the same LDF-R as in Example 5. By using Definition 9 of [52], we obtain the L-A and U-A for* $\mathcal{Y} = \{q_1, q_2\}$ *and* $\ddot{s} = 0.45$*,* $\ddot{u} = 0.38$ *as follows:*

$$\underline{\ddot{\rho}}(\mathcal{Y})_{<0.45,0.38>} = \{p_2\}, \quad \overline{\ddot{\rho}}(\mathcal{Y})^{<0.45,0.38>} = \breve{\mathcal{U}}_1$$

*For* ¨*t* = 0.72 *and v*¨ = 0.52*, the L-A and U-A are as follows:*

$$\underline{\ddot{\rho}}(\mathcal{Y})_{<0.72,0.52>} = \{p_1, p_2, p_4\}, \quad \overline{\ddot{\rho}}(\mathcal{Y})^{<0.72,0.52>} = \emptyset$$

*Thus, P* − *R*(Y) = ({*p*2}, ∅)*, B* − *R*(Y) = ({*p*1, *p*3, *p*4}, {*p*1, *p*2, *p*4}) *and N* − *R*(Y) = (∅, {*p*3})*. These findings allow for the following inferences:*


*(3) Nobody who is in pain has a good diagnosis.*

*Now, for* $\ddot{s} = 0.57$*,* $\ddot{u} = 0.46$*, the L-A and U-A are as follows:*

$$\underline{\ddot{\rho}}(\mathcal{Y})_{<0.57,0.46>} = \{p_2\}, \quad \overline{\ddot{\rho}}(\mathcal{Y})^{<0.57,0.46>} = \breve{\mathcal{U}}_1$$

*For* ¨*<sup>t</sup>* <sup>=</sup> 0.46 *and <sup>v</sup>*¨ <sup>=</sup> 0.54*, the L-A and U-As of* <sup>Y</sup> *are as follows:*

$$\underline{\ddot{\rho}}(\mathcal{Y})_{<0.46,0.54>} = \breve{\mathcal{U}}_1, \quad \overline{\ddot{\rho}}(\mathcal{Y})^{<0.46,0.54>} = \emptyset$$

*Thus, P* − *R*(Y) = ({*p*2}, ∅)*, B* − *R*(Y) = ({*p*1, *p*3, *p*4}, U1) *and N* − *R*(Y) = (∅, ∅)*. These lead us to conclude that:*


## **7. Conclusions**

The concept of LD-FS is a very powerful and convenient tool to describe the uncertainties arising in many practical decision-making problems. Decision makers can freely choose the degrees of truthness and falsity by making use of reference or control parameters. Thus, LD-FS enlarges the space of admissible truthness and falsity degrees and removes the limitations on these degrees present in the existing concepts of FS, IF-S, B-FS, P-FS and q-ROF-S. In this paper, the existing notions of the F-RS and BF-RS models on two universes have been generalized into the LDF-RS model on two universes as a more convenient and robust model. The basic notions of lower and upper LDF-RAs have been defined by employing the after sets and fore sets of the $(<\ddot{s}, \ddot{u}>, <\ddot{t}, \ddot{v}>)$-level cut relation of an LDF-R. Some important results related to the L-A and U-A have been proved with illustrative examples. Furthermore, to illustrate the application of LDF-RSs, an example in clinical diagnosis has been employed. Further research applying the ideas proposed in this paper to other practical problems may lead to many fruitful outcomes.

**Author Contributions:** S.A., M.S. and M.R. contributed to the investigation, methodology, and writing—original draft preparation. F.K., D.M. and D.V. contributed to the supervision, investigation, and funding acquisition. S.A. contributed to the application, formal analysis, and data analysis. All authors have read and agreed to the published version of the manuscript.

**Funding:** Support has been received from the German Research Foundation and the TU Berlin.

**Data Availability Statement:** This manuscript contains hypothetical data and can be used by anyone by citing this article.

**Acknowledgments:** The authors wish to acknowledge the support received from the German Research Foundation and the TU Berlin.

**Conflicts of Interest:** The authors declare that they have no conflicts of interest regarding the publication of this research article.

## **References**


## *Article* **Proving, Refuting, Improving—Looking for a Theorem**

**Branislav Boričić**

Faculty of Economics, University of Belgrade, Belgrade 11000, Serbia; boricic@ekof.bg.ac.rs

**Abstract:** Exploring the proofs and refutations of an abstract statement, a conjecture, with the aim to give a formal syntactic treatment of its proving–refuting process, we introduce the notion of extrapolation of a possibly unprovable statement having the form *if A, then B*, and propose a procedure that should result in the new statement *if A′, then B′*, which is similar to the starting one, but provable. We think that this procedure, based on the extrapolation method, can be considered a basic methodological tool applicable to prove–refute–improve any conjecture. This new notion, extrapolation, presents a dual counterpart of the well-known interpolation introduced in traditional logic sixty-five years ago.

**Keywords:** extrapolation; interpolation; proving; refuting; improving

**MSC:** 03B05; 03B80; 03A10

## **1. Introduction**

Lakatos' monumental play 'Proofs and Refutations' (see [1]) can be considered a demonstration of applying the proof–refutation (or conjecture–refutation) method as a practical realization of the falsificationism concept advocated and supported at that time, among other authors, by [2]. At the same time, the concept of proving–refuting–improving, demonstrated in the same play, can be used as an effective interactive class model.

Refutation, as an isolated process, plays an extremely important role in the development of a pupil's critical thinking and has a crucial place in each study program syllabus. We deem that examples of finding and treating incorrectness in some reasoning and argumentation are at least of equal didactic importance as those with correct derivations and proofs. Such examples present and help incite critical thinking.

First, let us explain in brief what we mean under the term 'extrapolation'. As we know, interpolation deals with finding statements *C* and *D*, which are in between *A* and *B*, when *A* ⊢ *B*, i.e., '*A* implies *B*', is provable, meaning that all three sequents *A* ⊢ *C*, *C* ⊢ *D* and *D* ⊢ *B* are provable. In this case, the sequent *C* ⊢ *D* presents an interpolant for *A* ⊢ *B*. On the other side, if *A* ⊢ *B* is refutable, i.e., *A* ⊬ *B*, then we are looking for two statements *C* and *D*, such that *C* ⊢ *A*, *B* ⊢ *D* and *C* ⊢ *D* are all provable; in this case, the sequent *C* ⊢ *D* will be an extrapolant for *A* ⊬ *B*.

In this paper, we extend the proving–refuting method by its immediate result, improving, and place it in a wider logical context, relating it with the well-known concept of interpolation and with a new concept, extrapolation, as its dual. Both these notions, extrapolation and interpolation, are closely connected with many aspects of abductive reasoning [3]. The improving process, based deeply on the extrapolation method, is presented through several examples. Let me repeat here that once, a long time ago, my teacher Aleksandar Kron told me: 'Oh, how many times I fell asleep with a proof, and woke up with a counterexample'. This was the essence of the proving–refuting–improving process, during the daily journey of any scientist from a conjecture to the truth (see [4]). This process, consisting of proving and refuting attempts that produce an improvement of the starting conjecture, is presented formally as a methodological procedure for discovering better statements. In fact, this can be considered a kind of Hegelian–Marxist dialectic scheme: thesis–antithesis–synthesis. However, the crucial cognition is that the essential step of this

**Citation:** Boričić, B. Proving, Refuting, Improving—Looking for a Theorem. *Axioms* **2022**, *11*, 559. https://doi.org/10.3390/axioms11100559

Academic Editors: Oscar Castillo and Radko Mesiar

Received: 1 August 2022 Accepted: 11 October 2022 Published: 15 October 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

procedure is based on extrapolation, which is dual to the well-known logical feature of reasoning: the interpolation property. We introduce the notion of extrapolation as a counterpart of interpolation. We do this in general form, independently of the basic logic. Namely, our definition depends neither on language (we do not use connectives) nor on logic (we suppose that our deduction relation is not necessarily linked to classical logic). Pure propositional logics open the problem of the existence of a minimal extrapolant, which seems particularly interesting in the case of infinitely-valued systems.

## **2. Interpolation and Extrapolation—A General Idea**

A typical form of a scientific statement is that '*B* follows from *A*', denoted by *A* ` *B*, expressing a causal relationship between *A* and *B*. Refutation of such a statement consists of argumentation presenting at least one example (interpretation) where *A* is satisfied, but *B* is not.

The turnstile symbol will be used in an informal way, not connected to any particular logical system, but assuming its rudimentary structural properties, such as identity (*A* ⊢ *A*), weakening (*A* ⊢ *B* implies *A*, *C* ⊢ *B*), permutation (*A*, *B* ⊢ *C* implies *B*, *A* ⊢ *C*), contraction (*A*, *A* ⊢ *B* implies *A* ⊢ *B*) and transitivity (*A* ⊢ *B* and *B* ⊢ *C* imply *A* ⊢ *C*).

Establishing a statement *A* ⊢ *B* as a conjecture means that we believe that *A* ⊢ *B* holds, but also that this is partly under question: does *A* ⊢ *B* hold? In order to obtain a final conclusion regarding the truthfulness of our conjecture, we try to prove and to refute it. This process implies finding examples supporting *A* ⊢ *B* and counterexamples refuting *A* ⊢ *B*, as well as looking for similar statements *A*′ ⊢ *B*′ that are, by their nature, weaker than *A* ⊢ *B* in cases when *A* ⊢ *B* is refutable, and stronger than *A* ⊢ *B* in cases when *A* ⊢ *B* is provable.

Let us consider the two apparently simplest cases of causal connection: (i) *A* ⊢ *B* is not proven and (ii) *A* ⊢ *B* is proven, where *A* and *B* are two arbitrary sentences. In the second case (ii), we can assert that there are two propositions *C* and *D* such that the following statements are provable: *A* ⊢ *C*, *C* ⊢ *D* and *D* ⊢ *B*. If *C* and *D* are logically equivalent, then we recognize here a form of the well-known Craig interpolation theorem (see [5]), pointing out that the form presented here can be considered as its slight generalization. In a similar way, we will deal with the first case (i) and suppose that there are two propositions *C* and *D* such that the following statements are provable: *C* ⊢ *A*, *B* ⊢ *D* and *C* ⊢ *D*, obtaining a form that is somehow dual to interpolation (ii) and which could be treated as a kind of extrapolation.

We point out that the term 'duality' is used here in a quite different meaning than in classical two-valued logic. For each statement of the form *A* ⊢ *B*, provable or unprovable, we consider a provable statement *C* ⊢ *D*. If *A* ⊢ *C* and *D* ⊢ *B* are provable, then *C* ⊢ *D* is called an interpolant, while when *C* ⊢ *A* and *B* ⊢ *D* are provable, then *C* ⊢ *D* is called an extrapolant. Consequently, *C* and *D*, as parts of an interpolant, are in the consequent of *A* and the antecedent of *B*, respectively, but as parts of an extrapolant, they have 'dually' just the opposite roles: *C* is in the antecedent of *A* and *D* is in the consequent of *B*.

More accurately, if we suppose that *A* ⊢ *B* is any statement, provable or not, then (i) *C* ⊢ *D* is its *extrapolant* if all statements *C* ⊢ *A*, *B* ⊢ *D* and *C* ⊢ *D* are provable; (ii) *C* ⊢ *D* is its *interpolant* if all statements *A* ⊢ *C*, *D* ⊢ *B* and *C* ⊢ *D* are provable. We omit here more formal details such as variable sharing and the context of a particular logical system for the deduction relation.

Note that in the case that an interpolant exists, the original statement *A* ⊢ *B* is provable. However, in the case that an extrapolant exists, we can conclude nothing regarding the provability of *A* ⊢ *B*. The most interesting cases in the sequel of this paper will be exactly those (i) when *A* ⊬ *B*, i.e., *A* ⊢ *B* is unprovable, where the challenge before us is how to find some 'good' extrapolants for *A* ⊬ *B*, and (ii) when *A* ⊢ *B* is provable, where the challenge is how to find its 'good' interpolants. This is because in both these cases, the statement *C* ⊢ *D* should present an improvement of *A* ⊢ *B*, which will be explained below.

The term 'interpolation' is justified by the simple fact that we insert a new statement *C* ⊢ *D* in between *A* and *B*, with an obvious possibility to infer *A* ⊢ *B* from *A* ⊢ *C*, *C* ⊢ *D* and *D* ⊢ *B*. Similarly, the extrapolation process involves looking for a statement *C* 'before' *A*, because *C* ⊢ *A*, and a statement *D* 'after' *B*, because *B* ⊢ *D*. Both requirements, interpolation and extrapolation, have some trivial solutions. If *A* ⊢ *B* is proven, then both forms *A* ⊢ *A* and *B* ⊢ *B* present possible interpolants. Furthermore, for any *A* ⊢ *B*, all statements ⊥ ⊢ ⊤, ⊥ ⊢ *A* and *B* ⊢ ⊤ present its extrapolants, where we use the symbols ⊤ and ⊥, respectively, to denote the truth and absurdity constants. Later, after sharpening both notions, extrapolation and interpolation, following the spirit of Craig's interpolation theorem and practical applications of extrapolation, we will see that trivial solutions have no importance (as usual).

**Example 1** (Lakatos' Proofs and Refutations)**.** *In his famous work, by giving a picturesque presentation of the proving–refuting process, Lakatos (see [1]) begins with an incorrect and refutable formulation of Euler's Polyhedral Theorem. The dialog between a teacher and his pupils starts with the teacher's provocation: "In our last lesson we arrived at a conjecture concerning polyhedra, namely, that for all polyhedra V* − *E* + *F* = 2*, where V is the number of vertices, E the number of edges and F the number of faces. We tested it by various methods. But we have not yet proven it. Has anybody found a proof?" After that, through a few iterations, the teacher, together with his pupils, by using a proving–refuting–improving method, obtains and proves the correct form of Euler's Polyhedral Theorem: for all convex polyhedra, V* − *E* + *F* = 2*.*

**Example 2** (Elementary Geometry)**.** *Let* **RTr**(**a**, **b**, **c**) *denote any right triangle with sides a*, *b*, *c, where c is its hypotenuse, and let* **Tr**(**a**, **b**, **c**) *denote any triangle with sides a*, *b*, *c. Some of the known elementary geometric facts can be formulated by means of a deduction relation as follows:*

*Triangle Inequality:* **Tr**(**a**, **b**, **c**) ⊢ **a** + **b** > **c***.*

*Pythagorean Theorem:* **RTr**(**a**, **b**, **c**) ⊢ **a**² + **b**² = **c**² *and a*² + *b*² = *c*² ⊢ **RTr**(**a**, **b**, **c**)*.*

*We also have two obvious facts:* **RTr**(**a**, **b**, **c**) ⊢ **Tr**(**a**, **b**, **c**)*; i.e., each right triangle is a triangle and, in elementary algebra, a*² + *b*² = *c*² ⊢ *a* + *b* > *c for any positive reals a*, *b*, *c (see [6]). In order to illustrate the extrapolation phenomenon in this context, we consider the following negative statement:*

$$\text{Tr}(\mathbf{a}, \mathbf{b}, \mathbf{c}) \nvdash \mathbf{a}^2 + \mathbf{b}^2 = \mathbf{c}^2$$

*By the extrapolation approach, bearing in mind that* **RTr**(**a**, **b**, **c**) ⊢ **Tr**(**a**, **b**, **c**) *and a*² + *b*² = *c*² ⊢ *a* + *b* > *c, we can infer the following statements:* **RTr**(**a**, **b**, **c**) ⊢ **a**² + **b**² = **c**²*,* **Tr**(**a**, **b**, **c**) ⊢ **a** + **b** > **c** *and* **RTr**(**a**, **b**, **c**) ⊢ **a** + **b** > **c***, as possible extrapolants. Deeming the proving–refuting–improving process one of the most important methods of knowledge growth, the author of this text, with a group of his brilliant students (Aleksandra Djoković, Bojana Tujković, Ivana Čekrdžić, Aleksandar Elezović, Doroteja Djordjević and Milan Perić), during the spring semester of 2014, set up a musical performance under the title 'Proofs and refutations: devoted to the glorious triangle', at the Faculty of Economics, University of Belgrade. That performance was deeply inspired by [1] but, for the sake of better understanding the basic message, instead of Euler's Polyhedral Theorem, considered in Lakatos' original play, we dealt with proofs and refutations of the Pythagorean Theorem.*

**Example 3** (Propositional Calculus)**.** *Here, we present some more subtle examples of interpolants and extrapolants. Let* ∧ *and* ∨ *denote the conjunction and disjunction connectives, respectively. (i) The form p* ∨ *q* ⊢ *p* ∧ *q is unprovable, i.e., p* ∨ *q* ⊬ *p* ∧ *q, and it can be improved by the following forms: p* ⊢ *p, q* ⊢ *q, p* ⊢ *p* ∨ *q, q* ⊢ *p* ∨ *q and p* ∧ *q* ⊢ *p* ∨ *q; this is not a complete list of its extrapolants. (ii) The form p* ∧ *q* ⊢ *p* ∨ *q is provable and it can be improved by the following interpolants: p* ∧ *q* ⊢ *p, p* ⊢ *p* ∨ *q, p* ⊢ *p and q* ⊢ *q; this is not a complete list of its interpolants. Let us note that the examples of extrapolants and interpolants considered here are compatible not only with classical, but also with many non-classical propositional logics.*
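Classical provability between propositional formulas is decidable by truth tables, so the claims of Example 3 can be checked mechanically. The following Python sketch is our own illustration (the helper name `entails` and the lambda encodings are not part of the paper):

```python
from itertools import product

def entails(f, g, n=2):
    """Classical f |- g over n atoms: every valuation satisfying f
    also satisfies g."""
    return all(g(*v) for v in product([False, True], repeat=n) if f(*v))

# (i) p v q does not entail p & q, yet the listed extrapolants hold:
assert not entails(lambda p, q: p or q, lambda p, q: p and q)
assert entails(lambda p, q: p and q, lambda p, q: p or q)  # p & q |- p v q
assert entails(lambda p, q: p, lambda p, q: p or q)        # p |- p v q

# (ii) p & q |- p v q is provable, with p as a one-formula interpolant:
assert entails(lambda p, q: p and q, lambda p, q: p)
assert entails(lambda p, q: p, lambda p, q: p or q)
```

The same checker confirms compatibility with classical semantics only; intuitionistic or relevance provability would need a different decision procedure.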

**Example 4** (Set-Theoretic Interpretation)**.** *Due to the immediate link between the set-theoretic inclusion relation and the classical implication connective, interpolation and extrapolation have a rough, illustrative and quite simple set-theoretic interpretation. Namely, if for two sets A and B we have A* ⊆ *B, then the sets C and D, such that A* ⊆ *C* ⊆ *D* ⊆ *B, can be considered as the basic constituents of an interpolant C* ⊆ *D for A* ⊆ *B. On the other hand, if A* ⊈ *B, then the sets C and D, such that C* ⊆ *A, B* ⊆ *D and C* ⊆ *D, define an extrapolant C* ⊆ *D for A* ⊈ *B.*
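The set-theoretic reading can be checked directly with Python's built-in sets; the concrete sets below are hypothetical, chosen only for illustration:

```python
# Interpolation case A ⊆ B: any chain A ⊆ C ⊆ D ⊆ B yields an interpolant C ⊆ D.
A, B = {1, 2}, {1, 2, 3, 4}
C, D = {1, 2, 3}, {1, 2, 3, 4}
assert A <= C <= D <= B

# Extrapolation case A ⊈ B: shrink the antecedent and enlarge the consequent.
A, B = {1, 2, 5}, {1, 2, 3}
assert not A <= B
C, D = A & B, A | B                  # C ⊆ A, B ⊆ D and C ⊆ D
assert C <= A and B <= D and C <= D  # C ⊆ D extrapolates A ⊈ B
```

The choice C = A ∩ B and D = A ∪ B anticipates the minimal extrapolants discussed later in Example 8.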

**Example 5** (Impossibility Tradition)**.** *In spite of their great methodological and logical importance (see [7,8]), impossibility theorems raise a natural question: how can they be transformed into corresponding relevant possibility results? Each such transformation is based on some proving–refuting–improving process that starts with the improving, i.e., with an extrapolation step. Let us discuss two simple cases of impossibility theorems.*

*Incommensurability of the diagonal and the side of a square: if a is the side of a square and d is its diagonal, then a* = 1 ⊢ *d* ∉ **N***, i.e., a* = 1 ⊬ *d* ∈ **N***, where* **N** *is the set of natural numbers. By replacing (weakening) d* ∈ **N** *with d* ∈ **Q***, bearing in mind that d* ∈ **N** ⊢ *d* ∈ **Q***, where* **Q** *is the set of rational numbers, we again obtain by extrapolation an invalid statement a* = 1 ⊢ *d* ∈ **Q***. The next iteration is finding an appropriate extrapolant for the statement a* = 1 ⊬ *d* ∈ **Q***. Obviously, by replacing d* ∈ **Q** *with d* ∈ **R***, bearing in mind that d* ∈ **Q** ⊢ *d* ∈ **R***, where* **R** *is the set of reals, we obtain a valid positive statement a* = 1 ⊢ *d* ∈ **R***, i.e., a possibility result.*

*Unsolvability of the equation x*² + *a* = 0*, a* ∈ **R***, in the field of reals: a* ∈ **R** ∧ *x*² + *a* = 0 ⊬ *x* ∈ **R** *leads to two simple positive possibilities. By antecedent weakening, from a* ≤ 0 ⊢ *a* ∈ **R***, we obtain a* ≤ 0 ∧ *x*² + *a* = 0 ⊢ *x* ∈ **R***; or, by consequent weakening, from x* ∈ **R** ⊢ *x* ∈ **C***, where* **C** *is the set of complex numbers, we have a* ∈ **R** ∧ *x*² + *a* = 0 ⊢ *x* ∈ **C***, i.e., solvability of that equation in the field of complex numbers.*

*In a similar way, but with more complex argumentation and context, Arrow's Impossibility Theorem (see [7–9]), the most popular and important result in Social Choice Theory during the last century, has generated a number of possibility results. Variations of the corresponding possibility theorems (see [10]), obtained by weakening the antecedent or consequent of Arrow's original theorem, can be considered as effective examples of applying the proving–refuting–improving process as well.*

Here, we will explain why we believe that both interpolants and extrapolants present improvements of our initial statement *A* ⊢ *B*. (i) If *A* ⊢ *B* is not provable, then its extrapolant *C* ⊢ *D*, which is provable, presents its improvement: from the initial statement *A* ⊢ *B* of low quality (unprovable), we obtain its extrapolant *C* ⊢ *D*, a statement of higher quality (provable). (ii) If *A* ⊢ *B* is provable, then its interpolant *C* ⊢ *D*, which is provable together with *A* ⊢ *C* and *D* ⊢ *B*, can be used as a sufficient condition to infer the initial statement *A* ⊢ *B*, and from this point of view it can be considered as its essence, its improvement, enabling us to prove and better understand the meaning of the initial statement *A* ⊢ *B*.

## **3. Extrapolation—More Formally**

Let us discuss a more subtle aspect of extrapolation, including some views of relevance logic. A deduction of *B* from hypothesis *A* is acceptable in relevance logic if this deduction employs every element of *A*. Another syntactic relevance principle, known as the *variable sharing condition*, is that if *A* entails *B*, then At*A* ∩ At*B* ≠ ∅, where At*A* denotes the set of all atomic formulae, i.e., propositional letters, occurring in *A* (see [11]). Variable sharing is not a sufficient, but a necessary, condition for relevance.

Now, we can formulate more ambitious expectations, including some kind of variable sharing principle.

**Interpolation property:** If At*A* ∩ At*B* ≠ ∅ and *A* ⊢ *B*, then there exist *C* and *D* such that At*C* ∪ At*D* ⊆ At*A* ∩ At*B*, *A* ⊢ *C*, *D* ⊢ *B* and *C* ⊢ *D*.

**Extrapolation property:** If At*A* ∩ At*B* ≠ ∅ and *A* ⊬ *B*, then there exist *C* and *D* such that At*C* ∪ At*D* ⊆ At*A* ∩ At*B*, *C* ⊢ *A*, *B* ⊢ *D* and *C* ⊢ *D*.

The interpolation property is defined in accordance with Craig's well-known approach (see [5]). The extrapolation property aims to find relevant, non-trivial and, in some sense, minimal statements *C* and *D* establishing an extrapolant.

Let us note here that Craig's original definition deals with only one formula *C*, such that *A* ⊢ *C* and *C* ⊢ *B*, as an interpolant for *A* ⊢ *B*. In this spirit, it would be possible to redefine our notion of extrapolant *C* for *A* ⊬ *B* so that *C* ⊢ *A* and *B* ⊢ *C*. It is not difficult to see that this approach, with one formula playing the role of interpolant (or extrapolant), is logically equivalent to our definition employing two formulae in both cases.

The logical, methodological, philosophical and even algebraic aspects of interpolation have been analyzed, discussed and explained in detail as a necessary part of most textbooks in logic (see [12,13]). Here, we will attempt to elucidate the logical sense of extrapolation. Bearing in mind the following derivation:

$$\frac{\dfrac{A \vdash B}{C, A \vdash B, D}\ \text{weakening} \times 2 \qquad C \vdash A \qquad B \vdash D}{C \vdash D}\ \text{cut} \times 2$$

the extrapolation can be considered to be a weakening of the antecedent and the consequent of *A* ⊢ *B*, respectively, by special statements *C* and *D*, such that *C* ⊢ *A* and *B* ⊢ *D* (instead of {*A*, *B*} ⊢ {*C*, *D*}, we will write simply *A*, *B* ⊢ *C*, *D*, which, according to the traditional classical proof-theoretic interpretation, can be understood as *A* ∧ *B* ⊢ *C* ∨ *D*). The procedure is successful when, from an unprovable statement, we obtain a provable one, i.e., when, in fact, from *A* ⊬ *B*, we obtain *C* ⊢ *D*, where *C* and *D* are in the corresponding causal connections with *A* and *B*, respectively. In practice, when we search for an adequate statement, instead of reasoning starting with the explicit application of weakening rules, as above, the pure derivation with the cut rules

$$\frac{C \vdash A \qquad A \vdash B \qquad B \vdash D}{C \vdash D}\ \text{cut} \times 2$$

hides the presence of weakening. On the other hand, we have to emphasize that it would be wrong to understand extrapolation as just a simple weakening, because it is a very restricted and specific weakening, aimed at finding a relevant extrapolant.

Extrapolation is, in the context of classical logic, formally equivalent to the left- and right-side weakening rules, bearing in mind the following derivations:

$$\frac{C, A \vdash A \qquad A \vdash B}{C, A \vdash B} \qquad \text{and} \qquad \frac{A \vdash B \qquad B \vdash B, D}{A \vdash B, D}$$

Nevertheless, extrapolation, as defined, seems more restrictive and suggests some kind of 'relevant' weakening. Namely, the above two derivations are classically, and even intuitionistically, admissible, but not from the point of view of relevance logic. This is the reason why extrapolation can essentially be considered a process partly supported by relevance logic principles, bearing in mind that the variable sharing conditions for *C* with *A* and for *B* with *D* are satisfied, but not necessarily for *C* with *D*.

In the case of an unprovable statement *A* ⊢ *B*, when we look for some of its improvements *C* ⊢ *D*, in order to avoid trivial solutions and to find the best one, if possible, we define the notion of minimal sentences:

**Minimal extrapolants:** Suppose *A* ⊢ *B* is not proven and *C* ⊢ *D* is its extrapolant. *C* will be called *a minimal sentence* for *A*, *B* and *D*, in this order, if for each *C*′, such that *C* ⊢ *C*′ is provable and *C*′ ⊢ *C* is unprovable, one of the statements *C*′ ⊢ *A* or *C*′ ⊢ *D* is unprovable. In a dual way, *D* will be called *a minimal sentence* for *A*, *B* and *C*, in this order, if for each *D*′, such that *D* ⊢ *D*′ is unprovable and *D*′ ⊢ *D* is provable, one of the statements *B* ⊢ *D*′ or *C* ⊢ *D*′ is unprovable. In cases when both hold, i.e., *C* is a minimal sentence for *A*, *B* and *D*, and *D* is a minimal sentence for *A*, *B* and *C*, the statement *C* ⊢ *D* is called *a minimal extrapolant* for *A* ⊢ *B*.

The central question now is the following one: does a minimal extrapolant exist (and when)? It depends on the logical context, clearly. For instance, in *m*-valued propositional logics, due to the existence of finitely many nonequivalent formulae over any finite set of atomic formulae (propositional letters), we always have the possibility to choose the minimal sentences. The next question is: does a minimal nontrivial extrapolant exist (and when)? Moreover, how could a nontrivial statement be characterized?

**Example 6.** *Obviously, for any two sentences A and B, such that A* ⊬ *B and p* ∈ At*A* ∩ At*B, the statement p* ∧ ¬*p* ⊢ *p* → *p presents an extrapolant. This is a trivial example.*

**Example 7.** *Let us consider again some extrapolants p* ∧ *q* ⊢ *p* ∨ *q, p* ⊢ *p* ∨ *q, p* ∧ *q* ⊢ *p and p* ⊢ *p of the statement p* ∨ *q* ⊢ *p* ∧ *q. In the case of the extrapolant p* ∧ *q* ⊢ *p* ∨ *q, the statement p* ∧ *q is not minimal for p* ∨ *q, p* ∧ *q and p* ∨ *q because there is a statement p such that p* ∧ *q* ⊢ *p is provable, p* ⊬ *p* ∧ *q, and p* ⊢ *p* ∨ *q is provable. On the other hand, the statement p is a minimal one for p* ∨ *q, p* ∧ *q and p* ∨ *q, and this is a way to find a new and 'better' extrapolant p* ⊢ *p* ∨ *q. In the case of this extrapolant p* ⊢ *p* ∨ *q, although p is minimal for p* ∨ *q, p* ∧ *q and p* ∨ *q, the proposition p* ∨ *q is not minimal for p* ∨ *q, p* ∧ *q and p because, for the statement p, we have that p* ⊢ *p* ∨ *q is provable, p* ∨ *q* ⊬ *p, and both p* ∧ *q* ⊢ *p and p* ⊢ *p are provable, while p is a minimal statement for p* ∨ *q, p* ∧ *q and p. Let us note also that the examples considered here have a general character and are compatible with both classical and intuitionistic propositional logics.*

**Example 8.** *In the set-theoretic interpretation, when A* ⊈ *B, the parts of minimal extrapolants will be the sets in between C* = *A* ∩ *B and D* = *A* ∪ *B with respect to the inclusion relation. In general, C* = *A* ∩ *B* ⊆ *B* = *D will be a minimal extrapolant for C* = *A* ∩ *B* (⊆ *A*)*, A* (⊈ *B*) *and B* ⊆ *D* = *B, and C* = *A* (⊆ *D* = *A* ∪ *B*) *will be a minimal extrapolant for A* = *C* (⊆ *A*)*, A* (⊈ *B*) *and B* (⊆ *D* = *A* ∪ *B*)*.*

## **4. More Examples**

The importance of propositional language is founded, inter alia, on its simplicity. The propositional context is usually suitable for explaining and understanding the differences between various philosophical concepts concerning the foundations of mathematics. For instance, the spirit of the essential variations between Platonism, intuitionism and relevance is already visible on the level of classical, intuitionistic and relevant propositional logics. On the other hand, the founding of any serious mathematical theory needs much more than a propositional language. Here, we will try to present the idea of extrapolation in the context of the first-order predicate language.

The general symbolic form of an Impossibility Theorem stating that 'there does not exist an object *x* such that *A* implies *B*', is

$$\neg \exists x (A \to B)$$

The first-order sentence ¬∃*x*(*A*(*x*) → *B*(*x*)) can be presented in a classically equivalent way as ¬(∀*xA*(*x*) → ∃*xB*(*x*)), or a bit more informally as "∀*xA*(*x*) does not imply ∃*xB*(*x*)", i.e., ∀*xA*(*x*) ⊬ ∃*xB*(*x*). Here, we want to describe an application of the extrapolation method to

∀*xA*(*x*) ⊬ ∃*xB*(*x*)

Namely, we are looking for sentences *C* and *D* such that *C* ⊢ ∀*xA*(*x*), ∃*xB*(*x*) ⊢ *D* and *C* ⊢ *D*, where the last statement presents an extrapolant and, simultaneously, a transformation of an 'impossibility' result into a 'possibility' one.

On the level of general first-order languages, we analyze an 'impossibility case'.

**Example 9.** *Let us consider the following statement:* ∀*x*(*A* ∨ *B*) ⊬ ∃*x*(*A* ∧ *B*)*, having exactly the form of an impossibility theorem. If we try to weaken the antecedent* ∀*x*(*A* ∨ *B*) *by (1)* ∀*xA* ∨ ∀*xB or by (2)* ∀*xA, and the consequent* ∃*x*(*A* ∧ *B*) *by (3)* ∃*xA* ∧ ∃*xB or by (4)* ∃*xA, we do not obtain extrapolants by combining (1) with (3), (1) with (4) or (2) with (3); only the combination of (2) with (4) gives an extrapolant, because* ∀*xA* ⊢ ∀*x*(*A* ∨ *B*)*,* ∃*x*(*A* ∧ *B*) ⊢ ∃*xA, and* ∀*xA* ⊢ ∃*xA.*
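The quantifier claims of Example 9 can be examined by finite-model reasoning. The sketch below is our own illustration (the helper names `holds_all` and `holds_some` are hypothetical); it exhibits a two-element countermodel for ∀x(A ∨ B) ⊢ ∃x(A ∧ B) and confirms the extrapolant ∀xA ⊢ ∃xA on every interpretation over a small nonempty domain:

```python
from itertools import product

def holds_all(pred, dom):   # semantics of "for all x, pred(x)"
    return all(pred(x) for x in dom)

def holds_some(pred, dom):  # semantics of "there exists x, pred(x)"
    return any(pred(x) for x in dom)

# Countermodel: A and B partition {0, 1}, so the antecedent holds
# while the consequent fails.
dom = [0, 1]
A = lambda x: x == 0
B = lambda x: x == 1
assert holds_all(lambda x: A(x) or B(x), dom)
assert not holds_some(lambda x: A(x) and B(x), dom)

# The extrapolant's core, "forall x A(x) entails exists x A(x)",
# holds for every unary predicate on a nonempty domain:
for bits in product([False, True], repeat=3):
    P = lambda x, b=bits: b[x]
    if holds_all(P, [0, 1, 2]):
        assert holds_some(P, [0, 1, 2])
```

A finite check of course only refutes or supports the statements on these domains; the positive entailments above are in fact first-order valid on all nonempty domains.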

We also consider some relationships between properties of binary relations.

**Example 10.** *The logic of preferences is usually based on axioms concerning some properties of a binary relation P, called a preference relation. For instance, the list of axioms contains irreflexivity (Ir),* ∀*x*¬*P*(*x*, *x*)*, asymmetry (As),* ∀*x*∀*y*(*P*(*x*, *y*) → ¬*P*(*y*, *x*))*, transitivity (Tr),* ∀*x*∀*y*∀*z*(*P*(*x*, *y*) ∧ *P*(*y*, *z*) → *P*(*x*, *z*)) *and connectivity (Cn),* ∀*x*∀*y*∀*z*(*P*(*x*, *y*) → *P*(*x*, *z*) ∨ *P*(*z*, *y*))*. It is an easy exercise to show that* Cn ⊬ Tr*, but, bearing in mind that* As ∧ Cn ⊢ Cn*,* As ∧ Cn ⊢ Tr *and* Tr ⊢ Tr*, we conclude that* As ∧ Cn ⊢ Tr *presents an extrapolant and an improvement of the initial statement. In a similar way, we can find that the same statement* As ∧ Cn ⊢ Tr *is an extrapolant for both* Ir ∧ Cn ⊬ Tr *and* As ∧ Tr ⊬ Cn*.*
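For a finite domain, these relational claims can be verified by brute force. The sketch below is illustrative code, not from the paper: it finds a connective but non-transitive relation on a three-element domain, witnessing Cn ⊬ Tr, and checks that every asymmetric and connective relation on that domain is transitive:

```python
from itertools import product

def relations(dom):
    """All binary relations on dom, each given as a set of ordered pairs."""
    pairs = [(x, y) for x in dom for y in dom]
    for bits in product([False, True], repeat=len(pairs)):
        yield {p for p, b in zip(pairs, bits) if b}

def asym(R, dom):
    return all(not ((x, y) in R and (y, x) in R) for x in dom for y in dom)

def trans(R, dom):
    return all((x, z) in R for x in dom for y in dom for z in dom
               if (x, y) in R and (y, z) in R)

def conn(R, dom):
    return all((x, z) in R or (z, y) in R
               for x in dom for y in dom for z in dom if (x, y) in R)

dom = range(3)
# Cn does not entail Tr: some connective relation fails transitivity ...
assert any(conn(R, dom) and not trans(R, dom) for R in relations(dom))
# ... while every asymmetric and connective relation here is transitive.
assert all(trans(R, dom) for R in relations(dom) if asym(R, dom) and conn(R, dom))
```

The finite search only supports As ∧ Cn ⊢ Tr for this domain; the entailment itself is first-order valid, since P(a, b) and P(b, c) with Cn give P(a, c) or P(c, b), and As rules out P(c, b).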

## **5. A Proving–Refuting–Improving Procedure**

Each theorem, or, more generally, each scientific statement, can be expressed in the following form: *if* Γ*, then* ∆. Γ presents a set of hypotheses (a given context or a theory) and ∆ is a consequence (conclusion). This is the reason why the basic form we use in this part of the paper is Γ ⊢ ∆, an informal deduction relation (entailment) ⊢ between two finite sets of sentences Γ (antecedent) and ∆ (consequent), with the intended meaning that it is possible to infer the conclusion ∆, interpreted as a disjunction of all elements of ∆, from the hypothesis set Γ, interpreted as a conjunction of all elements of Γ. The Greek capitals Γ, ∆, . . . , with or without subscripts or superscripts, will be used as metavariables over finite sets of sentences denoted by Latin capitals *A*, *B*, *C*, *D*, . . . We also use Γ |= ∆ with the usual model-theoretic meaning: if all elements of Γ are true, then at least one element of ∆ is true. This will be the context enabling us to express that Γ ⊢ ∆, or *A* ⊢ *B*, is provable or unprovable, and that Γ |= ∆, or *A* |= *B*, is refutable or irrefutable.

The idea of a proving–refuting–improving procedure has been hinted at by [4]. Here, we will develop it further. In both cases, when Γ ⊢ ∆ is provable or unprovable, i.e., when Γ |= ∆ is valid or refutable, we define the following four sets, the Γ-antecedent, Γ-consequent, ∆-antecedent and ∆-consequent, respectively, as Γ_ant = {A_1^a, . . . , A_m^a}, Γ_con = {A_1^c, . . . , A_m^c}, ∆_ant = {B_1^a, . . . , B_n^a} and ∆_con = {B_1^c, . . . , B_n^c}, corresponding to the sets Γ = {A_1, . . . , A_m} and ∆ = {B_1, . . . , B_n}, such that, for each i (1 ≤ i ≤ m), A_i^a ⊢ A_i and A_i ⊢ A_i^c are provable, and, for each j (1 ≤ j ≤ n), B_j^a ⊢ B_j and B_j ⊢ B_j^c are provable.

The main problem here is to define the concrete content of the sets Γ_ant, Γ_con, ∆_ant and ∆_con in this general case, because the condition that A_i^a ⊢ A_i is provable has infinitely many solutions for A_i^a. On the other hand, each particular problem in some specific part of mathematics gives the researcher the freedom to use his intuition during the process of 'looking for a better theorem'.

The two elementary steps in our proving–refuting–improving procedure are as follows:

Step (i): if Γ ⊢ ∆ is not proven or Γ |= ∆ is refuted, we look for some A_i^a ∈ Γ_ant or some B_j^c ∈ ∆_con for which the provability of Γ′ ⊢ ∆′ can be reconsidered, where Γ′ ∪ ∆′ is obtained from Γ ∪ ∆ by substituting at least one occurrence of A_i by A_i^a in Γ or at least one occurrence of B_j by B_j^c in ∆;

Step (ii): if Γ ⊢ ∆ is proven, or Γ |= ∆ is not refuted, we look for some A_i^c ∈ Γ_con or B_j^a ∈ ∆_ant for which the provability of Γ′ ⊢ ∆′ can be reconsidered, where Γ′ ∪ ∆′ is obtained from Γ ∪ ∆ by substituting at least one occurrence of A_i by A_i^c in Γ or at least one occurrence of B_j by B_j^a in ∆.

In both cases, (i) and (ii), the result will be a statement Γ′ ⊢ ∆′. If Γ′ ⊢ ∆′ is provable, then the procedure can be stopped, and Γ′ ⊢ ∆′ will present a generalized extrapolant or interpolant for Γ ⊢ ∆ in cases (i) and (ii), respectively. Otherwise, if we cannot decide whether Γ′ ⊢ ∆′ is provable, or if Γ′ ⊢ ∆′ is refutable, then we proceed with step (i) on Γ′ ⊢ ∆′.
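In the propositional case, where provability is decidable by truth tables, one pass of step (i) can be sketched in code; the sets and the candidate substitution below are our own toy illustration, not taken from the paper:

```python
from itertools import product

def proves(gamma, delta, n=2):
    """Gamma |- Delta over n atoms: the conjunction of Gamma classically
    entails the disjunction of Delta."""
    return all(any(d(*v) for d in delta)
               for v in product([False, True], repeat=n)
               if all(g(*v) for g in gamma))

p_or_q  = lambda p, q: p or q
p_and_q = lambda p, q: p and q

# Unprovable initial statement: {p v q} |- {p & q}.
assert not proves([p_or_q], [p_and_q])

# Step (i): replace the antecedent p v q by the stronger p & q
# (legitimate, since p & q |- p v q puts it in Gamma_ant), yielding
# a provable generalized extrapolant {p & q} |- {p & q}.
assert proves([p_and_q], [p_or_q])
assert proves([p_and_q], [p_and_q])
```

For richer logics the provability test `proves` would have to be replaced by a prover for the logic at hand, which is exactly where the decidability caveat discussed later arises.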

Finally, in the sequel, we apply the same procedure to Γ′ ⊢ ∆′; i.e., firstly, we try to prove Γ′ ⊢ ∆′ or to falsify Γ′ |= ∆′. If Γ′ ⊢ ∆′ is not proven or Γ′ |= ∆′ is falsifiable, then we apply procedure (i) to Γ′ ⊢ ∆′ in order to obtain a new statement Γ′′ ⊢ ∆′′. If Γ′ ⊢ ∆′ is provable or Γ′ |= ∆′ is not refuted, then we apply procedure (ii) to Γ′ ⊢ ∆′ in order to obtain a new statement Γ′′ ⊢ ∆′′. This process is called the *proving–refuting–improving procedure.*

Let us point out that a similar form of a generalized interpolant appears in S. Maehara's approach to interpolation in the context of sequent calculi (see [13]).

Note that the sentence 'Γ ⊢ ∆ is not proven' does not exclude the case that Γ ⊢ ∆ is provable, and the sentence 'Γ |= ∆ is not refuted' does not exclude the case that Γ |= ∆ is refutable. Namely, a fact that is not proven may be proven in the future, and a fact that has not been refuted up to now may be refuted later.

The above procedure, part (i), proving–refuting–improving, is based on the methodological ideas promoted by Popper's and Lakatos' proof–refutation (also known as conjecture–refutation) falsificationism (see [1,2]). Furthermore, the transformation of Γ ⊢ ∆ into Γ′ ⊢ ∆′ can, generally, be considered a kind of Hegelian–Marxist dialectic scheme, thesis–antithesis–synthesis, which is obviously parallel with our scheme consisting of (i) and (ii), defining the process of proving–refuting–improving.

The statement Γ′ ⊢ ∆′ presents an improvement of Γ ⊢ ∆ in case (i), in the sense that from an unprovable statement Γ ⊢ ∆, we obtain a statement Γ′ ⊢ ∆′, which may be provable; but if Γ ⊢ ∆ is provable, then Γ′ ⊢ ∆′ is provable as well. On the other hand, the statement Γ′ ⊢ ∆′ presents an improvement of Γ ⊢ ∆ in case (ii), in the sense that from a provable statement Γ ⊢ ∆, we obtain a provable statement Γ′ ⊢ ∆′ from which Γ ⊢ ∆ can be derived; i.e., Γ′ ⊢ ∆′ is more general than Γ ⊢ ∆. These are the reasons to treat Γ′ ⊢ ∆′ as an improvement of Γ ⊢ ∆ in both cases. This also means that any possible application of our procedure to a provable statement cannot produce an unprovable statement.

If reconsideration of Γ ⊢ ∆ provides a statement Γ′ ⊢ ∆′, consisting of some new elements of Γ_ant ∪ Γ_con ∪ ∆_ant ∪ ∆_con, then, obviously, Γ′ ⊢ ∆′ presents an improvement of Γ ⊢ ∆. More accurately, we can justify our procedure by some kind of soundness statement:

## **Theorem 1.**

*(i) If Γ ⊢ ∆ is provable, then the statement Γ′ ⊢ ∆′ obtained by step (i) is provable. (ii) If the statement Γ′ ⊢ ∆′ obtained by step (ii) is provable, then Γ ⊢ ∆ is provable.*
**Proof.** By induction on *n* + *m*, the number of statements belonging to Γ ∪ ∆: in case (i), from both Γ ⊢ ∆ and A_i^a ⊢ A_i, and from Γ ⊢ ∆ and B_j ⊢ B_j^c, by the hypothetical syllogism rule, we can infer Γ′ ⊢ ∆′. In case (ii), from both pairs, Γ′ ⊢ ∆′ and A_i ⊢ A_i^c, and from Γ′ ⊢ ∆′ and B_j^a ⊢ B_j, by the hypothetical syllogism rule, we can infer Γ ⊢ ∆.

Let us note that, in the particular case when A_i^a is true or when B_j^c is a false statement, applying step (i) of our procedure produces the effects of enthymematic reasoning (see [14]).

A rare and unexpected case, which is not covered by (i) and (ii), is when the statement Γ ⊢ ∆ is undecidable, i.e., the case when it is possible to show that Γ ⊢ ∆ is neither provable nor refutable. Such examples are connected with highly formalized concepts and will not be our focus.

This procedure can be considered a sequence of consecutive attempts to falsify a statement and then to save it as a supplementary conjecture or to give it a new semantic interpretation. In this way, a progressive improvement of the initial claim is enabled.

In order to visualize the transformation process of Γ ⊢ ∆ into Γ′ ⊢ ∆′ with the help of Γ_ant, Γ_con, ∆_ant and ∆_con, we give a 2*D*-presentation of the relationships between the elements of Γ and ∆, with or without subscripts or superscripts:


where, for instance, the first column

$$\begin{array}{c} A\_1^a \\ \top \\ A\_1 \\ \top \\ A\_1^c \end{array}$$

of this 2*D*-presentation means that both A_1^a ⊢ A_1 and A_1 ⊢ A_1^c are provable. Consequently, by some replacements of A_i with A_i^a or with A_i^c (1 ≤ i ≤ m), and some replacements of B_j with B_j^a or with B_j^c (1 ≤ j ≤ n), we obtain the new form Γ′ ⊢ ∆′. The symbol '?⊢', appearing above, stands for '⊬' or '⊢'.

## **6. Concluding Remarks**

An unproven statement of hypothetical character, a conjecture, is usually treated in one of the following two ways: we try to prove it, or we try to refute it. Then, for a proven statement, we try to find its interpolants, in order to simplify its proof and to better understand the nature of its proof, but for a refuted, i.e., unprovable, statement, we look for its extrapolants, trying to find a similar and relevant but provable statement.

Briefly, if we start with a statement of the form *A* ⊢ *B*, then we have, syntactically, two possibilities to obtain from *A* ⊢ *B* a better statement: if *A* ⊢ *B* is unproven, we will look for its extrapolant presenting a provable statement relevant for *A* ⊢ *B*, but if *A* ⊢ *B* is proven, then we will find its interpolant relevant for *A* ⊢ *B*, better explaining the nature of *A* ⊢ *B*. Namely, the basic principle respected in the process of transforming *A* ⊢ *B* into a 'better statement' *A*′ ⊢ *B*′ is that all side statements occurring in derivations, such as *C* ⊢ *A* and *B* ⊢ *D*, are provable, except the principal statement *A* ⊢ *B*, which can be, but does not have to be, provable, and that each step in the considered derivation is made strictly in accordance with sound logical inference rules.

In working versions of this paper, we used the term 'algorithm' for the proving–refuting–improving process, but later we accepted the term 'procedure' as the appropriate one. Namely, it is not clear whether the step transforming Γ ⊬ ∆ into Γ′ ⊢ ∆′ is well defined, in the sense that we do not know whether the problem of provability of both Γ ⊢ ∆ and Γ′ ⊢ ∆′ is decidable.

Finally, let us note that, while the phenomenon of interpolation is usually treated as a property of an axiomatic theory or a logical system, because even some natural propositional logics do not possess it (see [15]), extrapolation, although observed as a dual to interpolation, presents essentially a method of transforming an unprovable statement *A* ⊢ *B* into a 'similar', but provable, one: *A*′ ⊢ *B*′.

We also point out that if there is a grain of suspicion that a counterexample to our conjecture exists, it will be of great didactic importance in developing and spurring the critical reasoning of students and researchers. This has to find a central place in all study programs as a basic goal of education, together with stimulating creative thinking.

**Funding:** This research received no external funding.

**Acknowledgments:** The author thanks the anonymous reviewers for making valuable suggestions and helpful comments.

**Conflicts of Interest:** The author declares no conflict of interest.

## **References**


## *Article* **Spatial Fuzzy C-Means Clustering Analysis of U.S. Presidential Election and COVID-19 Related Factors in the Rustbelt States in 2020**

**Shianghau Wu**

Department of International Business, Chung Yuan Christian University, Taoyuan City 320314, Taiwan; antonwoo888@hotmail.com

**Abstract:** The rustbelt states play a key role in determining the vote turnout in the U.S. elections. The current study utilizes the spatial fuzzy C-means method to analyze the U.S. presidential election in the rustbelt states in 2020. We explore the factors related to the U.S. presidential election, including COVID-19-related factors such as the mask-wearing percentage and the COVID-19 death toll in each county of the rustbelt states. In contrast to the related literature, the study uses education level, number of housing units, unemployment rate, household income, COVID-19-related factors and the Republican vote share in the presidential election. The results indicate that spatial generalized fuzzy C-means analysis yields better clustering results than the C-means clustering method. Moreover, the COVID-19 death toll in each county did not affect the Republican vote share in the rustbelt states, while mask-wearing behavior in some regions had a negative impact on the Republican vote share.

**Keywords:** spatial fuzzy C-means; COVID-19; rustbelt states

**MSC:** 03B52; 03C45

**Citation:** Wu, S. Spatial Fuzzy C-Means Clustering Analysis of U.S. Presidential Election and COVID-19 Related Factors in the Rustbelt States in 2020. *Axioms* **2022**, *11*, 401. https://doi.org/10.3390/axioms11080401

Academic Editor: Oscar Castillo

Received: 29 June 2022 Accepted: 10 August 2022 Published: 15 August 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

## **1. Introduction**

The U.S. presidential election in 2020 was influenced by the COVID-19 pandemic, including increasing infections, death tolls, and lockdowns. The previous literature indicated that political polarization was aggravated due to intense fear during disasters [1,2]. People tended to seek reassurance by insisting on their conservative political viewpoints and supporting the ruling party, while other scholars believed that some voters would punish the political elite for poor management during a natural or man-made disaster. Since COVID-19-related policies were created in a very short period of time, without full deliberation, it was possible for them to arouse public discontent [3]. People were more supportive of their governments during the early stage of the COVID-19 pandemic [4]. However, evaluations of the policies concerning the pandemic were influenced by two polarized mindsets. Some voters chose to punish the politicians for the conditions caused by the pandemic, which were out of their control, while some voters were attentive to the political elites' reactions and determined their feelings accordingly [5].

The previous literature about the 2020 U.S. presidential election focused on the effects of COVID-19 on the election results. Hart (2021) stated that the COVID-19 pandemic seemed to have decreased the support for Trump among Democrats, while it increased support among independent voters [6]. Baccini et al. [7] pointed out that COVID-19-related factors negatively affected Donald Trump's re-election, and the effect was stronger in urban areas. They also observed that COVID-19 had a positive effect on voter mobilization for Joe Biden. The rustbelt states are traditionally "swing states" in U.S. presidential elections, including Illinois, Wisconsin, Indiana, Michigan, Ohio, West Virginia, Pennsylvania, and New York. Geographical and racial divergences increased in the counties of the rustbelt states over the past five years [8]. Geographical factors make these divergences more visible, and people tend to live in more politically polarized conditions [9]. The voting results of the rustbelt states have a pivotal influence on the whole country. However, there are few analyses of the voting results of the rustbelt states in the literature. Gimpel [10] pointed out that some counties in rustbelt states changed their support to the Democrats in the presidential election in 2020. The influencing factors of the voting results need to be examined. In order to analyze the topic more thoroughly, we analyze the effects of the COVID-19 pandemic along with the influence of regional factors, the related economic variables, and the Republican support rate in the 2020 U.S. presidential election.

The structure of this research is as follows: the Research Method Section presents our research design and related descriptive statistics of the variables. The Discussion Section presents the results of the research model. The research findings are listed in the Conclusions Section.

## **2. Methodology**

## *2.1. Research Method*

The current study used the spatial fuzzy C-means clustering method to analyze the influencing factors of COVID-19 on the U.S. presidential election. In order to explore the impacts of COVID-19 and other factors, such as social and geographical factors, as mentioned in the Introduction, the study also used educational level, number of housing units, unemployment rate, and household income variables to create the clustering. The previous literature utilized daily experience sampling (ESM) to analyze the impact of COVID-19 on employee uncertainty [11]. Di Nardo et al. (2019) utilized the literature review method to provide useful information about COVID-19 infection in neonates and children [12]. Regarding the fuzzy clustering approach, Indelicato et al. (2022) used the method with the fuzzy TOPSIS model to analyze the determinants of immigration in Cuenca, Ecuador [13]. Compared to the COVID-19-related research about its effects on U.S. elections, the present study considered spatial factors and attempted to describe the regional differences under the influence of these variables.

## *2.2. Data Description*

The study explored the influencing factors of the pandemic on the 2020 U.S. presidential election. The study used the Republican vote share (X1) in the 2020 U.S. presidential election as one of the variables related to the U.S. presidential election. The data were obtained from the web repository (https://github.com/tonmcg/US\_County\_Level\_Election\_Results\_08-20 (accessed on 6 August 2022)); it collected the 2020 election results at the county level, which were scraped from the results published by Fox News, Politico, and the New York Times.

In order to measure mask-wearing behavior in the rustbelt states (X2), the study used the dataset collected by the survey firm Dynata. Dynata surveyed 250 thousand respondents in the U.S. between 2 and 14 July 2020. The survey asked the respondents whether or not they often wore face masks in public. The responses included "always", "frequently", "sometimes", "rarely", and "never", in descending order of frequency.

The variables (X3, X4, X5, X6) were obtained from the dataset of the U.S. Census Bureau. These variables were released on a flow basis throughout each year.

The study also used the death toll (X7) before the U.S. presidential election as a COVID-19-related variable. Other variables included education level and household economic condition. The descriptive statistics of all the variables are listed in Tables 1 and 2:


**Table 1.** All variables used for clustering.

**Table 2.** Descriptive statistics of all variables.


## *2.3. C-Means Clustering*

Initially, the study used the classical C-means method to create the fuzzy unsupervised classification. The fuzziness degree (m) was set at 1.5 in order to obtain satisfactory results. The classical C-means method includes the following two equations. The first equation updates the membership value u<sub>ik</sub> in each iteration [14]:

$$u\_{ik} = \frac{\left(||\mathbf{x}\_k - \mathbf{v}\_i||^2\right)^{\frac{-1}{m-1}}}{\sum\_{j=1}^c \left(||\mathbf{x}\_k - \mathbf{v}\_j||^2\right)^{\frac{-1}{m-1}}}\tag{1}$$

The center of the cluster is as follows:

$$v\_{i} = \frac{\sum\_{k=1}^{N} u\_{ik}^{m}\mathbf{x}\_{k}}{\sum\_{k=1}^{N} u\_{ik}^{m}} \tag{2}$$

In Equations (1) and (2), $x_k$ represents the kth observation, $v_i$ is the center of cluster i, c is the number of clusters, and m is the index of fuzziness.

## *2.4. Fuzzy C-Means Clustering*

Fuzzy C-means clustering is an algorithm that permits a data point to pertain to two or more clusters. Let X = {$x_1$, $x_2$, . . . , $x_n$} represent an image with n pixels, where $x_i$ is the gray value of the ith pixel. The objective function of the standard FCM algorithm is as follows:

$$J = \sum\_{k=1}^{K} \sum\_{i=1}^{n} u\_{ki}^{m} ||\mathbf{x}\_{i} - \mathbf{v}\_{k}||^{2} \tag{3}$$

In Equation (3), the center of the kth cluster is $v_k$ (1 ≤ k ≤ K), and $u_{ki}$ (1 ≤ k ≤ K, 1 ≤ i ≤ n) is the membership degree of the ith pixel in the kth cluster. $u_{ki}$ must also satisfy the following constraints:

$$\sum\_{k=1}^{K} u\_{ki} = 1, \quad u\_{ki} \in [0, 1], \quad 0 \le \sum\_{i=1}^{n} u\_{ki} \le n \tag{4}$$

In Equation (3), the distance between $x_i$ and $v_k$ is the Euclidean distance, and the parameter m (m > 1) is a weighting exponent that controls the level of fuzziness of the resulting partition. Minimizing the objective function in Equation (3) yields the updated equations for the membership degree $u_{ki}$ and the cluster center $v_k$ as follows:

$$u\_{ki} = \frac{1}{\sum\_{l=1}^{K} \left(\frac{||\mathbf{x}\_i - \mathbf{v}\_k||^2}{||\mathbf{x}\_i - \mathbf{v}\_l||^2}\right)^{\frac{1}{m-1}}} \tag{5}$$

$$v\_k = \frac{\sum\_{i=1}^{n} u\_{ki}^m \mathbf{x}\_i}{\sum\_{i=1}^{n} u\_{ki}^m} \tag{6}$$

The goal of these functions is to obtain suitable clusters for the data points.
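As a concrete illustration of Equations (3)–(6), the following is a minimal sketch of the FCM iteration in Python with NumPy (the toy data, cluster count, and fuzziness value are arbitrary choices for this example, not those of the study):

```python
import numpy as np

def fcm(X, K, m=1.5, iters=100, seed=0):
    """Minimal fuzzy C-means: alternate the membership update (Eq. 5)
    and the center update (Eq. 6) until the loop ends."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((K, n))
    U /= U.sum(axis=0)                      # each column sums to 1 (Eq. 4)
    for _ in range(iters):
        V = (U**m @ X) / (U**m).sum(axis=1, keepdims=True)     # centers, Eq. 6
        d2 = ((X[None, :, :] - V[:, None, :])**2).sum(axis=2)  # ||x_i - v_k||^2
        d2 = np.maximum(d2, 1e-12)          # guard against division by zero
        U = d2**(-1.0 / (m - 1))
        U /= U.sum(axis=0)                  # memberships, Eq. 5
    J = (U**m * d2).sum()                   # objective, Eq. 3
    return U, V, J
```

On two well-separated groups of points, the memberships converge to nearly crisp values, which is the behavior the fuzziness exponent m = 1.5 encourages.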

## *2.5. Spatial Fuzzy C-Means Clustering*

Fuzzy C-means clustering (FCM) has shortcomings due to its sensitivity to noise. Some algorithms were developed to overcome this shortcoming by utilizing the spatial information obtained from the neighborhood window around each pixel. Mean spatial information and median spatial information are two prevalent types of local information. The mean spatial information of the ith pixel is denoted as follows [15]:

$$\delta\_{i} = \frac{1}{|S\_{i}|} \sum\_{p \in S\_{i}} \mathbf{x}\_p \tag{7}$$

In Equation (7), $S_i$ is the set of neighboring pixels in a window centered at the ith pixel, and $|S_i|$ represents its cardinality. The median spatial information can be represented as:

$$\varepsilon\_{i} = \operatorname{median}\left\{\mathbf{x}\_p\right\}, \; p \in S\_{i} \tag{8}$$
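A short NumPy sketch of Equations (7) and (8) on a toy grayscale image (the 3×3 window and the convention of excluding the center pixel from $S_i$ are assumptions of this example):

```python
import numpy as np

def local_info(img, i, j, r=1):
    """Mean (Eq. 7) and median (Eq. 8) spatial information of the
    window S_i centered at pixel (i, j); the center pixel itself is
    excluded from S_i."""
    h, w = img.shape
    window = img[max(i - r, 0):min(i + r + 1, h),
                 max(j - r, 0):min(j + r + 1, w)]
    window = window.ravel().astype(float)
    # drop one copy of the center value
    window = np.delete(window, np.flatnonzero(window == img[i, j])[0])
    return window.mean(), np.median(window)
```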

Most FCM algorithms utilize the above-mentioned local spatial information in the objective function; however, FCM algorithms with local spatial information only obtain a good image segmentation performance at low noise levels. The local spatial information obtained from the pixels near a given pixel is not efficient because those pixels may themselves be contaminated. In fact, an image contains many pixels with a similar neighborhood configuration. It is therefore more beneficial to obtain the spatial information from pixels whose neighborhood configuration is similar to that of the given pixel than from its neighboring pixels alone. This type of spatial information is called non-local spatial information. The non-local spatial information for the ith pixel *x<sup>i</sup>* is calculated by the following equation [16]:

$$\overline{\mathbf{x}}\_{i} = \sum\_{j \in \omega\_{i}^{r}} w\_{ij} \mathbf{x}\_{j} \tag{9}$$

In Equation (9), $\omega_i^r$ represents the r × r search window centered at the ith pixel. The non-local spatial information of the ith pixel is computed using the pixels in this window. The weight between the ith and jth pixels is denoted $w_{ij}$, with $j \in \omega_i^r$, $0 \le w_{ij} \le 1$, and $\sum_{j \in \omega_i^r} w_{ij} = 1$. The weight $w_{ij}$ is defined as follows:

$$w\_{ij} = \frac{1}{Z\_i} \exp\left(-||\mathbf{x}(N\_{i}) - \mathbf{x}(N\_{j})||\_{2,\sigma}^2 / h^2\right) \tag{10}$$

In Equation (10), h is the filtering degree parameter, which controls the decay of the weight function $w_{ij}$, and $Z_i = \sum_{j \in \omega_i^r} \exp\left(-||\mathbf{x}(N_i) - \mathbf{x}(N_j)||_{2,\sigma}^2 / h^2\right)$ is the normalizing constant. The weight $w_{ij}$ depends on the similarity between the ith and jth pixels, computed as the Gaussian-weighted Euclidean distance $||\mathbf{x}(N_i) - \mathbf{x}(N_j)||_{2,\sigma}^2$, where σ is the standard deviation of the Gaussian kernel. $\mathbf{x}(N_i)$ is the gray-level vector of the s × s square neighborhood $N_i$ centered at the ith pixel.
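A sketch of Equations (9) and (10) for a single pixel; note that a plain (unweighted) Euclidean patch distance is used here in place of the Gaussian-weighted distance, and the search-window radius r, patch radius s, and filtering parameter h are hypothetical values:

```python
import numpy as np

def nonlocal_value(img, i, j, r=2, s=1, h=10.0):
    """Non-local estimate of pixel (i, j): every pixel in the search
    window is weighted by the similarity of its s-neighborhood patch
    to the patch around (i, j); the weights sum to 1 after division
    by the normalizing constant Z_i."""
    def patch(a, b):
        return img[a - s:a + s + 1, b - s:b + s + 1].astype(float)
    ref = patch(i, j)
    num, Z = 0.0, 0.0
    for a in range(i - r, i + r + 1):
        for b in range(j - r, j + r + 1):
            w = np.exp(-((ref - patch(a, b))**2).sum() / h**2)
            num += w * img[a, b]
            Z += w
    return num / Z
```

On a constant image all weights are equal, so the estimate reproduces the constant gray value; on a noisy image the estimate smooths toward pixels with similar neighborhoods.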

Fuzzy clustering algorithms with spatial information use the spatial information of individual pixels to determine a spatial constraint term, which is then added to the objective function of FCM.

## **3. Results**

## *3.1. Fuzzy C-Means and Generalized Fuzzy C-Means Clustering*

The study used the classical K-means to determine the number of clusters. According to Figure 1, the four clusters can explain almost 40% of the original data variance.

**Figure 1.** Impact of the number of groups on the explained variance.
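The cluster-number selection via explained variance can be sketched as follows (a deterministic farthest-point initialization and synthetic data are used for illustration; the study ran the classical K-means in R):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Lloyd's K-means with a deterministic farthest-point initialization."""
    C = [X[0]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(C)[None, :, :])**2).sum(2).min(1)
        C.append(X[d2.argmax()])            # next center: farthest point
    C = np.array(C, dtype=float)
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :])**2).sum(2).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return lab, C

def explained_variance(X, lab, C):
    """Share of the total sum of squares explained by the clustering."""
    tss = ((X - X.mean(0))**2).sum()
    wss = sum(((X[lab == j] - C[j])**2).sum() for j in np.unique(lab))
    return 1.0 - wss / tss
```

Plotting `explained_variance` against k for increasing cluster counts reproduces the kind of elbow curve shown in Figure 1.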

Then, the study used the "fclust" package of the R language to analyze the quality of the classification [17]. The study also utilized the "geocmeans" package of the R language to compute the generalized version of the C-means algorithm [18]. The algorithm can accelerate convergence and obtain less fuzzy results by adjusting the membership matrix at each iteration. It needs an extra beta parameter controlling the strength of the modification. The modification only influences the formula updating the membership matrix.

$$u\_{ik} = \frac{\left(||\mathbf{x}\_k - \mathbf{v}\_i||^2 - \beta\_k\right)^{\frac{-1}{m-1}}}{\sum\_{j=1}^c \left(||\mathbf{x}\_k - \mathbf{v}\_j||^2 - \beta\_k\right)^{\frac{-1}{m-1}}}\tag{11}$$

In Equation (11), $\beta_k = \min(||x_k - v||^2)$ and 0 ≤ β ≤ 1. In order to choose an adequate value for this parameter, the study sought all the possible values between 0 and 1 with a step of 0.05. The results of the related indices were obtained according to the ascending *β* values in Table 3.

**Table 3.** Some indices with ascending *β* values.




According to Table 3, the study chose beta = 0.8, which maintained a satisfactory silhouette index and increased the Xie and Beni index and the explained inertia. The results of the GFCM (generalized version of fuzzy C-means clustering) and FCM are listed in Table 4.
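The membership modification of Equation (11) and the β grid search can be sketched as follows (the distance matrix below is illustrative; the study computed the quality indices with the fclust and geocmeans packages in R):

```python
import numpy as np

def gfcm_membership(d2, beta, m=1.5):
    """Eq. (11): each point's squared distances to the centers are
    shifted by beta times the distance to its closest center, before
    the usual FCM membership normalization. d2 has shape (K, n)."""
    shifted = np.maximum(d2 - beta * d2.min(axis=0, keepdims=True), 1e-12)
    U = shifted**(-1.0 / (m - 1))
    return U / U.sum(axis=0)

# grid search over beta with a step of 0.05, as in the paper
betas = np.arange(0.0, 1.0001, 0.05)
# for each beta: fit GFCM, then record silhouette, Xie and Beni index, etc.
```

Because the shift removes part of each point's distance to its nearest center, larger β values push memberships toward that center, which is why GFCM yields a less fuzzy solution.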




The results indicate that the GFCM provides a less fuzzy solution (with higher explained inertia and lower partition entropy), but keeps a good silhouette index and a lower Xie and Beni index. The study created two membership matrix maps showing the most likely group for each observation, using the function map clusters from geocmeans in the R language. We set a threshold of 0.45: if an observation only obtained values below this probability in a membership matrix, it was marked as "undecided" (represented by transparency on the map).

In Figure 2, the left-hand-side graph is the fuzzy C-means clustering result, and the right-hand-side graph is the generalized fuzzy C-means clustering result. We can observe that the right-hand-side graph has fewer undecided parts.

**Figure 2.** FCM and GFCM clusters.

## *3.2. Spatial C-Means and Generalized C-Means*


The study used the SFCM function of the R language to execute spatial C-means clustering. The first step was to determine a spatial weight matrix indicating which observations are neighbors and the strength of their relationship. The study used a basic queen neighbor matrix (built with the spdep package of the R language). The matrix should be row-standardized to ensure that the interpretation of all the parameters remains clear. The two following equations indicate how the functions updating the membership matrix and the centers of the clusters are modified.

$$u\_{ik} = \frac{\left(||\mathbf{x}\_k - \mathbf{v}\_i||^2 + \alpha||\overline{\mathbf{x}\_k} - \mathbf{v}\_i||^2\right)^{\frac{-1}{m-1}}}{\sum\_{j=1}^{c} \left(||\mathbf{x}\_k - \mathbf{v}\_j||^2 + \alpha||\overline{\mathbf{x}\_k} - \mathbf{v}\_j||^2\right)^{\frac{-1}{m-1}}} \tag{12}$$

$$v\_i = \frac{\sum\_{k=1}^{N} u\_{ik}^m (\mathbf{x}\_k + \alpha\overline{\mathbf{x}\_k})}{(1+\alpha)\sum\_{k=1}^{N} u\_{ik}^m} \tag{13}$$

In Equations (12) and (13), $\overline{x}$ is the spatially lagged version of x, and α ≥ 0.
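Equations (12) and (13) can be sketched in Python; here W stands for a row-standardized spatial weight matrix (in the paper, a queen contiguity matrix built with spdep), so the lagged data are simply W @ X:

```python
import numpy as np

def sfcm_step(X, V, W, alpha, m=1.5):
    """One SFCM update: memberships from Eq. (12), then centers from
    Eq. (13). X is (n, p), V is (K, p), W is (n, n) row-standardized."""
    Xlag = W @ X                                      # spatially lagged data
    d2 = ((X[None, :, :] - V[:, None, :])**2).sum(2)      # ||x_k - v_i||^2
    dl2 = ((Xlag[None, :, :] - V[:, None, :])**2).sum(2)  # ||x̄_k - v_i||^2
    U = np.maximum(d2 + alpha * dl2, 1e-12)**(-1.0 / (m - 1))
    U /= U.sum(axis=0)                                # Eq. (12)
    V = (U**m @ (X + alpha * Xlag)) / (
        (1 + alpha) * (U**m).sum(axis=1, keepdims=True))  # Eq. (13)
    return U, V
```

With W equal to the identity matrix the lagged data coincide with the data themselves and the α terms cancel, so the step reduces to the classical FCM update, which is a useful sanity check.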

The SFCM (spatial fuzzy C-means) can be taken as a spatially smoothed version of the classical C-means, and alpha controls the degree of spatial smoothness. This smoothing can be taken as an attempt to reduce the spatial overfitting of the classical C-means.

The study chose the best alpha value in order to reduce spatial inconsistency as much as possible and to maintain a good classification quality. The relationship between the spatial inconsistency and alpha value is shown in Figure 3.

**Figure 3.** Link between alpha value and spatial inconsistency.

In Figure 3, the increasing alpha value results in a decrease in spatial inconsistency. In Figure 4, the explained inertia decreased when the alpha value increased and again followed an inverse function. The classification searched for a compromise between the original and lagged values. However, the loss was only 3% between alpha = 0 and alpha = 2.

**Figure 4.** The relationship between the alpha value and explained inertia.

According to Figures 5 and 6, as a larger silhouette index means a better classification, and a smaller Xie and Beni index represents a better classification, the study intended to retain the alpha = 0.25 value to provide a good balance between spatial consistency and classification quality.

**Figure 5.** Link between alpha and Xie and Beni index.

**Figure 6.** Link between alpha value and silhouette index.

## *3.3. Spatial Generalized Fuzzy C-Means (SGFCM)*

In order to facilitate the clustering process of the SGFCM method, we needed to determine the alpha and beta values of the following equation, which updates the membership matrix:

$$u\_{ik} = \frac{\left(||\mathbf{x}\_k - \mathbf{v}\_i||^2 - \beta\_k + \alpha||\overline{\mathbf{x}\_k} - \mathbf{v}\_i||^2\right)^{\frac{-1}{m-1}}}{\sum\_{j=1}^c \left(||\mathbf{x}\_k - \mathbf{v}\_j||^2 - \beta\_k + \alpha||\overline{\mathbf{x}\_k} - \mathbf{v}\_j||^2\right)^{\frac{-1}{m-1}}}\tag{14}$$
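Equation (14) combines both modifications; a minimal sketch of this membership update, using the same illustrative notation as the snippets above:

```python
import numpy as np

def sgfcm_membership(d2, dl2, alpha, beta, m=1.5):
    """Eq. (14): GFCM's beta shift plus SFCM's alpha-weighted lagged
    distance, followed by the usual normalization over clusters.
    d2 and dl2 are (K, n) squared-distance matrices to the centers,
    for the original and the spatially lagged data respectively."""
    q = d2 - beta * d2.min(axis=0, keepdims=True) + alpha * dl2
    U = np.maximum(q, 1e-12)**(-1.0 / (m - 1))
    return U / U.sum(axis=0)
```

Setting alpha = beta = 0 recovers the plain FCM membership, so the two parameters can be read as independent dials for spatial smoothing and defuzzification.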

The study attempted to use the multiprocessing approach to select suitable alpha and beta values. The impact of the alpha and beta values on the various indices is shown as follows. Figures 7 and 8 indicate that some specific combinations of alpha and beta values generate good results in the range of 0.3 < alpha < 0.7 and 0.4 < beta < 0.6. Figure 9 shows that the selection of beta has no impact on spatial consistency.

**Figure 7.** Influence of beta and alpha values on silhouette index.

**Figure 8.** Influence of beta and alpha values on Xie and Beni index.

**Figure 9.** Influence of beta and alpha values on spatial inconsistency.

Regarding Figures 7–9, the study selected beta = 0.5 and alpha = 0.25, which obtained better results for all the indices considered. Based on these alpha and beta values, the study acquired the SFCM and SGFCM results (see Table 5).


**Table 5.** Comparison of the indices between SFCM and SGFCM.


The results of the SGFCM are better concerning the semantic and spatial aspects due to the lower partition entropy, Xie and Beni index, and Fukuyama and Sugeno index, and the higher values of the other indices. The SFCM and SGFCM clustering maps are listed as follows.

According to Figure 10, the right-hand-side graph is the SGFCM clustering map, and the left-hand-side graph is the SFCM clustering map. We can observe that there are fewer undecided units on the SGFCM clustering map.

**Figure 10.** Most likely cluster and undecided units of SFCM and SGFCM.

## *3.4. Comparison of the Four Algorithms*

The study attempted to perform a thorough spatial analysis and compare the spatial consistency of the four classifications (FCM, GFCM, SFCM, SGFCM) (see Table 6).

**Table 6.** Moran I index for the columns of the membership matrices among the four algorithms.


The Moran I values computed from the membership matrices were higher for SFCM and SGFCM, representing stronger spatial structures in the classifications.

The study also checked that the values of spatial inconsistency for SGFCM were significantly lower than those of SFCM. Using the previously mentioned 250 values obtained by permutations, we could calculate a pseudo *p*-value = 0.032 > 1/250 = 0.004. This means that the SGFCM algorithm did not have a predominant advantage over the SFCM algorithm. However, the SGFCM clustering map indicated that there were fewer undecided points than for the SFCM.
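The permutation-based pseudo p-value can be sketched as follows (one common convention with a +1 correction is shown; the observed statistic and permutation draws below are illustrative, not the study's values):

```python
import numpy as np

def pseudo_p(observed, perm_values):
    """Share of permutation draws at least as extreme (here: as small,
    since lower spatial inconsistency is better) as the observed
    statistic, with a +1 correction so the smallest attainable value
    is 1/(n_perm + 1)."""
    perm_values = np.asarray(perm_values)
    return (np.sum(perm_values <= observed) + 1) / (len(perm_values) + 1)
```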

Comparing Figures 2 and 10, we can observe that the undecided parts became fewer.

## **4. Discussion**

The study utilized the spatial fuzzy C-means clustering method to analyze the relationship between COVID-19-related factors and the vote share of the Republicans in the U.S. presidential election in the rustbelt states in 2020. The study found that spatial generalized fuzzy C-means clustering (SGFCM) produced better results compared to the other three algorithms according to Table 3. The study also found that the SGFCM clustering graph in Figure 10 presented better results because the uncertain parts (areas that did not belong to any cluster) were fewer compared to the other clustering results shown in Figure 2.

The descriptive statistics of the four clusters (Tables A1–A4) are listed in the Appendix A. According to the four tables, we can conclude the four clusters are as follows:


The results seem to contrast slightly with the previous literature. Warshaw et al. (2020) found that COVID-19 fatalities decreased the support for Donald Trump in the 2020 presidential election [19]. However, our results show that the third and fourth clusters in the rustbelt states have higher COVID-19 death tolls with higher Republican vote shares and residents less inclined to wear face masks. Meanwhile, the second cluster had higher Republican vote shares and residents who often wore face masks, while the COVID-19 death toll seemed unimportant. We can conclude that the COVID-19 death toll in each county did not affect the Republican vote shares in the rustbelt states, while the mask-wearing behavior in some regions had a negative impact on the Republican vote shares.

According to Figure 11, we can observe that cluster 2 accounts for the largest area in the rustbelt states, while cluster 1 accounts for the smallest area. The clustering results indicate that the U.S. presidential election-related factors and COVID-19-related factors are closely related to the clustering. This enables researchers in the related field to conduct further studies.

**Figure 11.** Final cluster of SGFCM.

## **5. Conclusions**

The present study intended to use spatial fuzzy C-means clustering to analyze the related factors of COVID-19 and the U.S. presidential election in the rustbelt states in 2020. The study found that the spatial generalized fuzzy C-means (SGFCM) method produced better clustering results. The SGFCM method divided the rustbelt states into four areas. The results imply that the COVID-19 death toll in each county did not affect the Republican vote shares in the rustbelt states, while the mask-wearing behavior in some regions had a negative impact on the Republican vote shares. It is worth conducting further research.

**Funding:** This research was supported by the Preliminary Research Resource Funding from Chung Yuan Christian University.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The COVID-19-related data for the U.S. can be downloaded from https://github.com/nytimes/COVID-19-data (accessed on 6 August 2022). The U.S. presidential election results in each county can be downloaded from https://github.com/tonmcg/US\_County\_Level\_Election\_Results\_08-20/blob/f9b5f335ad1c66a7eba681539db49eec0c22787b/2020\_US\_County\_Level\_Presidential\_Results.csv (accessed on 6 August 2022). The education level and household economic condition data can be downloaded from https://www.census.gov/ (accessed on 6 August 2022).

**Acknowledgments:** The author would like to express their gratitude for the preliminary research funding received from Chung Yuan Christian University.

**Conflicts of Interest:** The author declares no conflict of interest.

## **Appendix A**

**Table A1.** Descriptive statistics for cluster 1.


| | **X<sup>1</sup>** | **X<sup>2</sup>** | **X<sup>3</sup>** | **X<sup>4</sup>** | **X<sup>5</sup>** | **X<sup>6</sup>** | **X<sup>7</sup>** |
|---|---|---|---|---|---|---|---|
| Q5 | 0.417 | 0.449 | 49.4 | 4579.6 | 3.1 | 43,118 | 9 |
| Q10 | 0.463 | 0.487 | 89 | 5836.6 | 3.3 | 46,262 | 15 |
| Q25 | 0.539 | 0.54 | 198 | 11,116 | 3.8 | 49,767 | 37 |
| Q50 | 0.605 | 0.612 | 368 | 20,204 | 4.4 | 53,901 | 77 |
| Q75 | 0.674 | 0.723 | 510 | 41,229 | 4.9 | 60,121 | 115 |
| Q90 | 0.729 | 0.79 | 608 | 68,550 | 5.5 | 66,521 | 155 |
| Q95 | 0.762 | 0.827 | 633.6 | 109,462 | 5.7 | 73,006.8 | 174.2 |
| Mean | 0.599 | 0.627 | 356.193 | 35,019.47 | 4.415 | 55,596.35 | 79.845 |
| Std | 0.105 | 0.119 | 186.098 | 62,713.13 | 0.899 | 9592.194 | 52.128 |

**Table A2.** Descriptive statistics for cluster 2.

**Table A3.** Descriptive statistics for cluster 3.


**Table A4.** Descriptive statistics for cluster 4.


## **References**


## *Article* **Does Set Theory Really Ground Arithmetic Truth?**

**Alfredo Roque Freire**

Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; alfredo.roque.freire@ua.pt

**Abstract:** We consider the foundational relation between arithmetic and set theory. Our goal is to criticize the construction of standard arithmetic models as providing grounds for arithmetic truth. Our method is to emphasize the incomplete picture of both theories and to treat models as their syntactical counterparts. Insisting on the incomplete picture will allow us to argue in favor of the revisability of the standard-model interpretation. We start by briefly characterizing the expansion of arithmetic 'truth' provided by the interpretation in a set theory. Interpreted versions of an arithmetic theory in set theories generally have more theorems than the original. This theorem expansion is not complete, however. Using this, the set-theoretic multiversalist concludes that there are multiple legitimate standard models of arithmetic. We suggest a different multiversalist conclusion: while there is a single arithmetic structure, its interpretation in each universe may vary or even not be possible. We continue by defining the coordination problem. We consider two independent communities of mathematicians responsible for deciding on new axioms for ZF and PA. How likely are they to be coordinated regarding PA's interpretation in ZF? We prove that it is possible to have extensions of PA that are not interpretable in a given set theory ST. We further show that the number of extensions of arithmetic is uncountable, while the interpretable extensions in ST are countable. We finally argue that this fact suggests that coordination can only work if it is assumed from the start.

**Keywords:** foundations of mathematics; arithmetic; set theory

**1. Overview**

In this article, we study the idea of reducing arithmetic to set theory as a strategy for grounding arithmetic truth. The method of reduction we have in mind is interpretation. We say that a theory *T*<sup>1</sup> is interpreted in a theory *T*<sup>2</sup> when there is a uniform mapping of theorems of *T*<sup>1</sup> to theorems of *T*<sup>2</sup>. This mapping should preserve the boolean structure and bind the quantifiers of *T*<sup>1</sup> to a definable class of *T*<sup>2</sup>. We will next indicate how model constructions can be understood as the establishment of interpretations between theories.

In what follows, we assume that mathematical structures exist independently of our ability to completely describe them. It is common practice, however, to refer to models as fully formed entities for which one can assert whether any formula is valid. This is generally done with the Gödel–Tarski method within a set-theoretic metatheory. The fact that one can decide whether any formula *ϕ* is satisfied by a model *M* is simply given by the law of excluded middle in the metatheory. Although this strategy may help us to understand model-theoretic properties, it will not necessarily help us to concretely determine which are the valid formulas. For example, considering the standard model *N* of arithmetic built in a ZF metatheory, we indeed know that *ψ* = "twin prime conjecture" is either satisfied or not satisfied by the model. But "N satisfies *ψ*" can still be unprovable from the point of view of ZF.

**Citation:** Freire, A.R. Does Set Theory Really Ground Arithmetic Truth? *Axioms* **2022**, *11*, 351. https://doi.org/10.3390/axioms11070351

Academic Editors: Oscar Castillo and Radko Mesiar

Received: 19 May 2022; Accepted: 13 July 2022; Published: 21 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This is the reason why we will consider models via their syntactical representation through interpretations. Understanding models in this way will allow us to distinguish more precisely the undecidable instances of the form "N satisfies *ψ*" in the chosen metatheory. Structures themselves, nevertheless, should not be treated as syntactical constructions. One may refer to a set-theoretic structure *V* as a platonic collection of objects; and, due to our limited knowledge, the notion of satisfaction in *V* is vaguely defined. We can, however, define a precise notion of knowledge about satisfaction by fixing a set-theoretic theory *ST*:

$$\text{We know that } V \models \varphi \text{ if, and only if, } ST \vdash \varphi \tag{1}$$

Now, each model definable in a given base model *V* ⊨ *ST* can be said to be the result of bounding the elements of *V* to a given interpretation *I* (this will be defined precisely in Section 2 with respect to arithmetic). By doing so, we can keep in mind our limited knowledge of models. Indeed, if M is definable in *V* (i.e., M = *I*<sup>*V*</sup>) and we do not know any other information about *V* other than that it satisfies *ST*, then

$$\text{We know } \mathcal{M} \models \varphi \text{ if, and only if, } ST \vdash \varphi^I \tag{2}$$

Furthermore, we investigate the grounding relation represented by interpreting PA in ZF. Notably, if one considers the standard interpretation of PA in ZF to be correct, then it expands what one knows to be arithmetically true; that is, many formulas independent of PA become theorems as we see them in ZF through the interpretation. But even though we expect interpretations of PA in ZF to expand knowledge of arithmetic truth, ZF does not completely decide arithmetical formulas. Indeed, for every interpretation I of arithmetic in a recursive extension S of ZF, there is an arithmetical formula that S does not decide under this interpretation. At any stage in the development of ZF (a recursive extension), the concept of arithmetical truth will still be open. Some arithmetic formulas will be undecidable under the interpretation in any recursively extended set theory. Hence, it is possible to build two structures satisfying the set theory that disagree about the truth value of an arithmetic formula.

Taking a multiversalist view of set theory, Hamkins and others (see [1–3]) use a similar basis to advance a pluralist view of arithmetic. In [1], for example, Hamkins and Yang show that there are models of ZF that agree about what the standard model of arithmetic is and yet disagree about what is valid in the standard model. This (and other results) suggests that there are alternative models of arithmetic. In this article we use a different approach. Assuming we have good reasons to say that there is a unique arithmetic intended structure while maintaining a multiversalist view of set theory (this view is suggested by Koellner in [4]), we argue that the standard interpretation should be taken as revisable. Furthermore, it may happen that the structure of arithmetic is not definable in some set-theoretic universes.

It is due to this phenomenon that we consider what we call the coordination problem: consider that there are two groups of mathematicians responsible for deciding on new axioms. The first will decide on axioms for arithmetic and the second on axioms for a set theory. How should we consider the relation between the two groups? Note that if we consider that the arithmetic group should conform to any development provided by the set theory group, it becomes hard to see in what sense the interpretation of arithmetic into set theory has any foundational role. This framework is indistinguishable from simply taking arithmetic to live in set theory.

If, however, the interpretation of arithmetic in set theory has a meaningful foundational role, it is important to consider the possibility that the coordination between the two theories breaks down. Is it possible for an extension of arithmetic not to be interpretable in any extension of a set theory? We show in Theorem 2 that for any extension A of PA and any extension S of ZF, there is an extension A<sup>+</sup> of A that is not interpretable in S. But how likely is this to be the case? We will further show in Theorem 3 that there are uncountably many consistent extensions of a recursive A, while there are only countably many interpretations of arithmetic in any set theory. For this reason, the addition of axioms to set theory and arithmetic by the two groups would preserve the interpretability relation only if coordination is assumed. We further conclude that this perfect coordination would empty the reductivist foundational role of set theory for arithmetic. Finally, we briefly explore an alternative foundational role that would avoid this problem.
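The counting behind Theorems 2 and 3 can be sketched as follows; this is the standard cardinality argument, stated under the assumption that the arithmetic theory *A* is consistent and recursive, and it may differ in detail from the proofs given later:

$$|\{\,I \mid I \text{ an interpretation of } L(PA) \text{ in } S\,\}| \;\le\; |L(ZF)^{<\omega}| \;=\; \aleph_0$$

since an interpretation is a finite tuple of formulas of *L*(*ZF*), and a countable language has only countably many finite tuples of formulas. On the other hand, iterating Gödel's first incompleteness theorem produces an infinite family *ψ*<sub>0</sub>, *ψ*<sub>1</sub>, . . . of sentences independent over *A*, and each choice function *ε* : ℕ → {0, 1} gives a distinct consistent extension

$$A_{\varepsilon} \;=\; A \cup \{\psi_n \mid \varepsilon(n) = 1\} \cup \{\neg\psi_n \mid \varepsilon(n) = 0\}$$

so there are 2<sup>ℵ0</sup> consistent extensions of *A*, of which only countably many can be captured by interpretations in *S*.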

## **2. The Standard Model of Arithmetic**

The strategy of offering set-theoretical models to describe the objects of a theory comes from the work of Tarski, Mostowski, and Robinson in the 1940s [5]. Ever since, mathematicians and philosophers have often resorted to this strategy. It is generally accepted that once we start talking about models, we put aside the formal aspects of the mathematical subject and start talking about its objects and truths. Nevertheless, because of Gödel's incompleteness theorem and the Löwenheim–Skolem theorem, there is no formal way to fix the model of any recursive extension of Peano arithmetic. It is impossible to say that the only model that satisfies our descriptions of arithmetic is the intended model, no matter how extensively we describe it. Still, using a set-theoretical apparatus, we can describe the intended model as *N* = ⟨*ω*, +, ·, 0, *s*⟩ (called the standard model). We can then show that a set theory like ZF is expressive enough to define a truth predicate for this interpretation.

The literature on this subject generally presents two approaches for fixing the standard model: (i) one should offer extra-logical (or second-order) reasons for choosing *N* from the myriad possible models for arithmetic; (ii) one should abandon the model-theoretical construction and find other ways to ground arithmetic truth. A renewed version of (ii) can be seen in Gabbay's defense of a new kind of formalism [6]. Moreover, others may abandon a privileged emphasis on *N*, because we must focus on mathematical practice (Ferreirós [7]) or because we must commit ourselves to a realistic multiverse (Hamkins [8]). Still, differences of opinion are more common as to how and why we should follow project (i). Those like Williamson [9] argue for metaphysical reasons for fixing *N*; others like Maddy [10], Quine [11] or Putnam [12] advocate ways to naturalize the reasons for *N*. Finally, a recent approach by Rodrigo Freire grounds *N* in mathematical practice, using a normative basis in place of the Platonist commitment to *N* [13].

The question of the adequacy of *N* is often overlooked. Though one may find a vast literature on non-standard models of arithmetic, these are generally regarded as 'deviant' or not intended. They are indeed existing structures that satisfy an arithmetic theory, but they are not the one true model of arithmetic. The assumption behind this is that if something is a model of arithmetic, then it is *N*. We may not know why this is the intended model or even deny that such a model exists, but the conformity to *N* is hardly questioned. However, presenting *N* as an object without further consideration is a category mistake. Notably, a similar category mistake would be to say that 'there have been two sun revolutions since so and so'. The phrase 'two sun revolutions' is used as a quantity of time, even though it describes a movement in reference to the Sun. Hence, the statement would be a category mistake unless, for instance, an implicit reference to Earth and not Mars is assumed. Precisely stated, *N* is an interpretation of PA in the language of membership. It therefore represents a construction of objects for arithmetic in terms of objects of a given set theory. Hence, it is only when we fix the objects of a set theory that the objects expressed in the construction *N* gain life.

For any given model of set theory *V* ⊨ *ZF*, an arithmetic interpretation *I* can be understood as a procedure for obtaining a model N for PA. The model N = ⟨*Obj*, +<sup>N</sup>, ·<sup>N</sup>, 0<sup>N</sup>, *s*<sup>N</sup>⟩ is a set in the vaguely defined *V* with the appropriate meaning for the arithmetic symbols + (sum), · (multiplication), 0 (constant zero) and *s* (successor function). The model N is built from the interpretation *I* = ⟨*U*, *f*<sub>+</sub>, *f*<sub>·</sub>, *f*<sub>*s*</sub>, *zero*⟩. The elements of *I* are formulas in the language of ZF: *U* is a formula with one free variable, *f*<sub>+</sub> and *f*<sub>·</sub> are formulas with three free variables, *f*<sub>*s*</sub> is a formula with two free variables and *zero* is a formula with one free variable. It is then necessary to prove in *V* that the formulas *f*<sub>+</sub>(*x*, *y*, *z*), *f*<sub>·</sub>(*x*, *y*, *z*), *f*<sub>*s*</sub>(*x*, *z*) indeed represent functions with respect to the variable *z* and that *zero*(*x*) is satisfied by a unique element in *V*. With these ingredients, we explicitly build in *V* the model N:


1. *Obj* = {*x* | *V* ⊨ *U*(*x*)};
2. +<sup>N</sup> = {⟨⟨*x*, *y*⟩, *z*⟩ | *x*, *y*, *z* ∈ *Obj* and *V* ⊨ *f*<sub>+</sub>(*x*, *y*, *z*)};
3. ·<sup>N</sup> = {⟨⟨*x*, *y*⟩, *z*⟩ | *x*, *y*, *z* ∈ *Obj* and *V* ⊨ *f*<sub>·</sub>(*x*, *y*, *z*)};
4. 0<sup>N</sup> = the unique *x* ∈ *Obj* such that *V* ⊨ *zero*(*x*);
5. *s*<sup>N</sup> = {⟨*x*, *y*⟩ | *x*, *y* ∈ *Obj* and *V* ⊨ *f*<sub>*s*</sub>(*x*, *y*)}.

We may refer to the model obtained from *V* using *I* as *I*<sup>*V*</sup>. In this context, the standard interpretation *N* = ⟨*U*, *f*<sub>+</sub>, *f*<sub>·</sub>, *f*<sub>*s*</sub>, *Zero*⟩ is the case where *U*(*x*) expresses in set theory '*x* is a finite ordinal', *f*<sub>+</sub>(*x*, *y*, *z*) expresses '*z* is the ordinal sum of *x* and *y*', *f*<sub>·</sub>(*x*, *y*, *z*) expresses '*z* is the ordinal product of *x* and *y*', *f*<sub>*s*</sub>(*x*, *z*) expresses '*z* is the ordinal successor of *x*' and *Zero*(*x*) expresses '*x* is the empty set'. We can then obtain that, independently of the choice of the base model *V* ⊨ *ZF*, the model *N*<sup>*V*</sup> ⊨ *PA*.

Syntactically, we may use *I* to produce a uniform strategy for mapping formulas in the language of arithmetic *L*(*PA*) to formulas in the language of set theory *L*(*ZF*). As we assumed that *f*<sub>+</sub>(*x*, *y*, *z*) is a function in *V*, we may use, for simplicity, a function-like language, defining *F*<sub>+</sub>(*x*, *y*) in *ZF* as "the *z* such that *f*<sub>+</sub>(*x*, *y*, *z*)". Similarly, we define *F*<sub>·</sub>, *F*<sub>*s*</sub> and *Zero*. For every arithmetic formula *ϕ*, we define the partially interpreted formula *ϕ*<sup>*I*∗</sup> by:
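The defining clauses of the translation *ϕ* ↦ *ϕ*<sup>*I*∗</sup> can be spelled out as the usual relativization. The following is a sketch under the notation above, not necessarily the paper's exact formulation:

$$\begin{aligned}
(x = y)^{I*} &\equiv x = y, \qquad (x = 0)^{I*} \equiv Zero(x), \qquad (s(x) = y)^{I*} \equiv f_{s}(x, y),\\
(x + y = z)^{I*} &\equiv f_{+}(x, y, z), \qquad (x \cdot y = z)^{I*} \equiv f_{\cdot}(x, y, z),\\
(\neg \varphi)^{I*} &\equiv \neg\, \varphi^{I*}, \qquad (\varphi \wedge \psi)^{I*} \equiv \varphi^{I*} \wedge \psi^{I*},\\
(\forall x\, \varphi)^{I*} &\equiv \forall x\, (U(x) \to \varphi^{I*}), \qquad (\exists x\, \varphi)^{I*} \equiv \exists x\, (U(x) \wedge \varphi^{I*}).
\end{aligned}$$

Complex arithmetic terms are first unnested into these atomic cases using the function-like symbols *F*<sub>+</sub>, *F*<sub>·</sub>, *F*<sub>*s*</sub>.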


If *ϕ* has free variables *x*<sub>1</sub>, *x*<sub>2</sub>, . . . , *x*<sub>*n*</sub>, the interpreted formula *ϕ*<sup>*I*</sup> is defined as (*U*(*x*<sub>1</sub>) ∧ *U*(*x*<sub>2</sub>) ∧ . . . ∧ *U*(*x*<sub>*n*</sub>)) → *ϕ*<sup>*I*∗</sup>. With this, we can now say that ZF interprets PA with the standard interpretation *N*, since every *ϕ* ∈ *PA* is such that *ZF* ⊢ *ϕ*<sup>*N*</sup>.

Our idea is to insist on the incomplete picture of the set-theoretical representation of arithmetic. All we know about the vaguely defined *V* is that it is based on an incomplete theory, ZF. Therefore, the picture of arithmetic obtained from reducing PA to *V* by *N* is also incomplete. In this context, it is worth paying attention to precisely what is decidedly valid in the standard construction with the syntactic notion *ZF* ⊢ *ϕ*<sup>*N*</sup>. If one only commits to the validity of the axioms of a set theory ST, the undecidable formulas in ST of the form *ϕ*<sup>*I*</sup> are precisely the arithmetic formulas that one does not know to be valid or not.

So to what are we committing when we say that *N* is the standard model of arithmetic? As we will discuss in the next section, it depends on the chosen model *V*. It is in fact by showing that the standard model has many representations (even isomorphic ones, though with different truth predicates) that Hamkins and Yang in [1] propose a pluralist view of arithmetic. Notably, however, they still fix the standard interpretation, evaluating this interpretation in different structures of set theory. It seems like the single construction for the intended model of arithmetic is based on the idea condensed in the sentence: 'no matter which model of set theory one is assuming, the model of arithmetic would be given by *N*'. Indeed, the picture provided by the literature is that of *revisable truth for set theory and arithmetic*, but **unrevisable** reduction of arithmetic to set theory. In the next sections, we argue that to take the standard model to have a foundational role, one should assume the interpretation to be revisable. For now, we consider the characterization of arithmetic in set theory in more detail.

## *Foundational Characterization of PA in ZF*

Let *I* be an interpretation of arithmetic in a set theory S. We call the set *A*<sup>*S*</sup><sub>*I*</sub> = {*ϕ* ∈ *L*(*PA*) | *S* ⊢ *ϕ*<sup>*I*</sup>} the expansion of arithmetic truth under the interpretation. Indeed, some undecidable formulas *ϕ* of PA are 'true' in the standard model (*ZF* ⊢ *ϕ*<sup>*N*</sup>). This is the case for the Gödel formula, Goodstein's theorem and many other arithmetical results. We will thus consider more broadly the question of the expansion of arithmetic truth from interpretations in set theories.

Given that *I* is an interpretation of an arithmetic theory A in a set theory S and *Th*(*A*) = {*ϕ* | *A* ⊢ *ϕ*}, we expect to have *Th*(*A*) ⊊ *A*<sup>*S*</sup><sub>*I*</sub> ⊊ *Arithmetic truth*, as we see in Figure 1:

**Figure 1.** Expansion of validity under interpretation.

The reason for the expansion *Th*(*A*) ⊊ *A*<sup>*S*</sup><sub>*I*</sub> is that, in the usual case, one expects to build a set-size model of arithmetic. Consequently, a consistency predicate for *A* can be expressed and proved in *S*. Consider the base case of PA and ZF with the standard interpretation *N*. Assuming a model *V* for *ZF*, we can build a model *N*<sup>*V*</sup> satisfying PA. We then know that there are many valid formulas in *N*<sup>*V*</sup> that are not provable in PA. The most immediate example is the consistency predicate *Con*(*PA*); in fact, we know that the predicate is valid in *N*<sup>*V*</sup> or, in other words, that *ZF* ⊢ (*Con*(*PA*))<sup>*N*</sup>.
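The strictness of this inclusion can be made explicit with Gödel's second incompleteness theorem. The following is a one-step sketch, assuming PA is consistent:

$$ZF \vdash \exists M\,(M \models PA) \;\Longrightarrow\; ZF \vdash (Con(PA))^{N}, \qquad \text{while} \qquad PA \nvdash Con(PA).$$

Hence *Con*(*PA*) ∈ *A*<sup>*ZF*</sup><sub>*N*</sub> \ *Th*(*PA*), witnessing *Th*(*PA*) ⊊ *A*<sup>*ZF*</sup><sub>*N*</sub>.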

Of course, from a given recursive extension *S* of ZF, one may simply choose the recursive arithmetic theory corresponding to the theorems in *S* about the standard interpretation (i.e., *A*<sup>*S*</sup><sub>*N*</sub>). But this is to *put the cart before the horse*: being open to the evaluation of extra valid formulas with respect to the current axiomatization of arithmetic (e.g., *ϕ* ∈ *A*<sup>*S*</sup><sub>*N*</sub> such that *ϕ* is not proved in the current axiomatization of arithmetic) is a fundamental aspect of this study. In addition, there are important recent results that show fundamental mismatches between arithmetic and set theory. In fact, no subtheory of any extension of ZF is bi-interpretable with any extension of PA. This is a simple consequence of a theorem by Enayat, independently discovered by Hamkins and me: two different extensions of ZF can never be bi-interpretable [14–16] (the direct proof is given in the dissertation [17], pp. 150–152). Together with the bi-interpretation of finite set theory and Peano arithmetic, the result follows. Hence, in order to obtain a set theory equivalent to PA, we must add an axiom that contradicts ZF. Similarly, no collection of set-theoretic concepts compatible with ZF can perfectly mirror an axiomatization of arithmetic that extends PA.

We also note that the characterization of the foundation relation by theorem expansion relates to mathematical practice. With the discovery of Gödel's incompleteness theorem in [18], some resistance to the result was voiced on the grounds that the obtained undecidable statement had little mathematical meaning. Later on, Goodstein [19] proved that there are fast-growing functions (associated with so-called Goodstein sequences) that cannot be proved to be total in PA. The existence of these sequences is directly connected to the traditional Hydra problem, and thus it bears a clear mathematical meaning (see Caicedo's "Goodstein's function" [20]). Thus the question of foundation arises as to whether the interpretation of PA in set theory answers a significant arithmetical problem that could not be addressed by the axiomatization. And this is indeed the case, as we see with Goodstein sequences.

Notably, important results in number theory have recently become so loaded with complicated techniques that mathematicians have begun to question whether the proofs extrapolate Peano's axioms. This is the case for Fermat's last theorem and the weak Goldbach conjecture, proved respectively by Andrew Wiles [21,22] and by Harald Helfgott [23]. This type of question is akin to the program of reverse mathematics and has drawn the attention of mathematicians like Harvey Friedman. However, the validity of those theorems, whether or not they depend on more axioms than PA, is hardly questioned. The common choice is not to add axioms to PA, but to investigate arithmetic truths in a theory that expands the extension of theorems. One is not, however, simply doing 'finite ordinal set theory' when dealing more loosely with arithmetic's axiomatization, as these 'stronger than PA' assumptions should correspond to number theorists' intuitions about natural numbers.

We have discussed that interpretations of arithmetic in set theories generally expand what may be taken to be arithmetical truth (*Th*(*A*) ⊊ *A*<sup>*S*</sup><sub>*I*</sub>). Yet this expansion is not necessarily complete (i.e., it need not be the case that *A*<sup>*S*</sup><sub>*I*</sub> = arithmetic truth). A confusion in this regard is due to the idea that model constructions in set theories offer venues for defining truth for interpreted theories. Each interpretation *I* represents the appropriate model construction such that the grounding set-theoretic model *V* can provide the notion of satisfaction *I*<sup>*V*</sup> ⊨ *ϕ* for any formula. Eventually, we would have that, for any formula *γ*, either *I*<sup>*V*</sup> ⊨ *γ* or *I*<sup>*V*</sup> ⊨ ¬*γ*. However, as we have already discussed, a more syntactical approach makes it clear that this is simply the expression of the *excluded middle*. Indeed, "either *I*<sup>*V*</sup> ⊨ *γ* or *I*<sup>*V*</sup> ⊨ ¬*γ*" should be syntactically represented by

$$ZF \vdash \gamma^I \lor \neg \gamma^I \tag{3}$$

Instead, what is really wanted is a notion like

$$ZF \vdash \gamma^I \text{ or } ZF \vdash \neg \gamma^I \tag{4}$$

As we suppose a base model *V* for ZF, we either have an interpretation of ZF itself or a loosely defined model. In this case, the notion of truth in a model is represented by "either *I*<sup>*V*</sup> ⊨ *γ* or *I*<sup>*V*</sup> ⊨ ¬*γ*". However, if our supposition of a model *V* is not informed by any specific information other than *V* ⊨ *ZF*, the interpretation works simply as the identity. Therefore, we return to the problem of establishing a notion as in (4).

However, Equation (4) is not achievable for any recursive extension of ZF. For a given interpretation *I* of arithmetic in a recursive extension *S* of ZF, there will be formulas of *L*(*PA*) that are undecidable about arithmetic in *S*, that is, formulas *ϕ* in *L*(*PA*) such that *S* ⊬ *ϕ*<sup>*I*</sup> and *S* ⊬ (¬*ϕ*)<sup>*I*</sup>. One may think that this is a direct consequence of Gödel's incompleteness for PA, as S could be seen as a recursive extension of PA. But this is false. As mentioned before, no subtheory of an extension of ZF is bi-interpretable with any extension of PA. Indeed, PA is bi-interpretable with the theory *ZF*<sub>*fin*</sub> composed of ZF without the axiom of infinity and with the addition of the negation of infinity and transitive closure (see [24]). However, no extension of *ZF*<sub>*fin*</sub> can be *S*, since *S* asserts the existence of infinite sets. In view of this, we prove the very simple theorem:

**Theorem 1.** *For a given interpretation I of PA in a recursive extension S of ZF, there will be formulas ϕ of L*(*PA*) *such that S* ⊬ *ϕ*<sup>*I*</sup> *and S* ⊬ (¬*ϕ*)<sup>*I*</sup>*.*

**Proof.** To prove this, we should reinternalize the provability predicate under the interpretation. Let us consider *A* = {*ϕ* | *S* ⊢ *ϕ*<sup>*I*</sup>}. Notably, *PA* ⊆ *A* and thus *A* can produce arithmetizations of arithmetic formulas and of set-theoretic formulas. Let ⌜*ϕ*⌝ be the Gödel number of any formula *ϕ* in *A* or in *S*, and ⌜⟨*ϕ*<sub>1</sub>, *ϕ*<sub>2</sub>, . . . , *ϕ*<sub>*n*</sub>⟩⌝ the Gödel number of any sequence of formulas ⟨*ϕ*<sub>1</sub>, *ϕ*<sub>2</sub>, . . . , *ϕ*<sub>*n*</sub>⟩ in *A* or in *S* (as done in [25], pp. 122–126).

Since S is recursive, "⌜⟨*ϕ*<sub>1</sub>, *ϕ*<sub>2</sub>, . . . , *ϕ*<sub>*n*</sub>⟩⌝ is a proof in S" is recursive. From the representation theorem (see [25], pp. 126–128), there is a predicate *Pr*<sub>*S*</sub>(*x*, *y*) such that

$$A \vdash Pr_S(\ulcorner \langle \varphi_1, \varphi_2, \dots, \varphi_n \rangle \urcorner, \ulcorner \psi \urcorner) \iff \langle \varphi_1, \varphi_2, \dots, \varphi_n \rangle \text{ is a proof in } S \text{ and } \psi \text{ is } \varphi_n \tag{5}$$

Moreover, the statement "*ψ* is the *ϕ*<sup>*I*</sup> of some *ϕ*" is recursive. Then, from the representation theorem, there is a predicate *Fml*<sub>*I*</sub>(*x*) such that

$$A \vdash Fml_I(\ulcorner \psi \urcorner) \iff \psi \text{ is the } \varphi^I \text{ of some } \varphi \tag{6}$$

Defining *Th<sup>A</sup> S* (*y*) as ∃*x*(*PrS*(*x*, *y*) ∧ *FmlI*(*y*)), we can then use the diagonal lemma for the formula <sup>¬</sup>*Th<sup>A</sup> S* (*y*), obtaining a formula *G* such that

$$A \vdash G \leftrightarrow \neg Th_S^A(\ulcorner G \urcorner) \tag{7}$$

If *S* ⊢ *G*<sup>*I*</sup>, then *A* ⊢ *Fml*<sub>*I*</sub>(⌜*G*⌝) and *A* ⊢ *Th*<sup>*A*</sup><sub>*S*</sub>(⌜*G*⌝) from (5) and (6). From (7), we have *A* ⊢ ¬*Th*<sup>*A*</sup><sub>*S*</sub>(⌜*G*⌝), a contradiction. To obtain a contradiction from *S* ⊢ ¬*G*<sup>*I*</sup>, we should reformulate the proof using the Rosser trick, although it works the same way as in [25], pp. 131–132. Then the formula *G* obtained in the diagonalization for the equivalent Rosser–Gödel predicate is the undecidable arithmetic formula in S.
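The Rosser trick mentioned in the proof can be sketched as follows. This is the standard construction; the function symbol neg (the arithmetized negation map) is an auxiliary assumption of this sketch, not the paper's notation:

$$Pr_S^R(y) \;\equiv\; \exists x \,\big( Pr_S(x, y) \wedge \forall x' \le x\; \neg Pr_S(x', \mathrm{neg}(y)) \big)$$

Diagonalizing on ¬(*Pr*<sup>*R*</sup><sub>*S*</sub>(*y*) ∧ *Fml*<sub>*I*</sub>(*y*)) in place of ¬*Th*<sup>*A*</sup><sub>*S*</sub>(*y*) yields a formula for which both provability and refutability in *S* lead to contradiction assuming only the consistency of *S*, with no appeal to ω-consistency.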

This theorem can be understood as a **very small** expansion of Gödel's incompleteness theorem, as we consider decidability under relations between theories. Moreover, it relates to results available in *Satisfaction is not absolute* [1]. In this article, Hamkins and Yang considered the idea that there may be arithmetical formulas *ρ* on which two models of ZF disagree, even as these same models agree on what the standard model of arithmetic is. Though very important in the context of this paper, the result lacks a construction of the formula *ρ*. This formula is obtained as an existential statement about a number representing a formula. In fact, exhibiting *ρ* is not possible, since it would imply the inconsistency of ZF.

Put another way, we have shown a similar phenomenon in which the disagreement can be exhibited. To make this possible, we considered a foundational view that accommodates our incomplete understanding of set theory and arithmetic. Thus, agreement on arithmetic is to be understood as having similar sets of known arithmetical truths {*ϕ* | *S* ⊢ *ϕ*<sup>*N*</sup>}, S being some stage (or alternative stage) in the development of ZF. In this sense, there is a formula *ρ* that would be true in some possible development of S and false in some other possible development of S. As a reviewer pointed out, Ali Enayat [26] has recently studied this phenomenon in a similar light. He points out that *N*<sub>*ZF*</sub> ⊊ *N*<sub>*ZFI*</sub>, where *I* indicates the existence of an inaccessible cardinal. Interestingly, he also gives a natural way of describing S's expansion of arithmetic. If *θ*<sub>0</sub>, *θ*<sub>1</sub>, . . . , *θ*<sub>*i*</sub>, . . . is an enumeration of the formulas of *S* and *S*<sub>*n*</sub> = {*θ*<sub>*i*</sub> | *i* < *n*}, the resulting arithmetic obtained from *S* is PA together with the statements *ϕ* → *Con*(*S*<sub>*n*</sub> ∪ {*ϕ*<sup>*N*</sup>}). Enayat later shows a series of results on how and to what extent set theory models can disagree over the standard model of arithmetic. The limit of his method for the purposes of the present article is that his main concern is a model-theoretical characterization of 'nonstandard' models (with respect to some background V) that are obtained in some S using the standard interpretation.

There are indeed various important open statements of finite set theory. The recent book "Extremal Problems for Finite Sets" ([26] pp. 211–215) deals with some of those systematically: the Erdős matching conjecture, the Chvátal conjecture, Frankl's union-closed conjecture and so on. If some of these turn out to be undecidable in ZF (or ZFC), they will correspond to undecidable statements of arithmetic under the standard interpretation. The question we would like to propose is this: assuming that the standard interpretation of PA in ZF produces true arithmetic statements, if some set theorists decide to include some of those conjectures as axioms, should number theorists accept the corresponding statements as arithmetic truths?

In particular, there has been an important debate regarding the multiversalist picture of set theory. Many set theorists today consider that there are indeed equally legitimate non-isomorphic set-theoretic models. The motivations for this are various (see [8]). But do those motivations apply to arithmetic? With set theory, there is a fundamental limitation generally accepted even by many conservative set theorists: whenever we deal with a model of set theory, we should always set a limit at an ordinal level in the cumulative hierarchy. Therefore, there is at least a multiverse of set-theoretic models with respect to ordinal levels. Nothing similar to this is found in arithmetic intuitions. Natural numbers are precisely those one can effectively count, and there is little to no reason to take a pluralist view with respect to arithmetic. Notice, however, that by accepting the multiversalist view of set theory together with the view that the one true reduction of arithmetic to set theory is the standard interpretation, we are consequently subscribing to a pluralist view of arithmetic. And this is precisely the conclusion drawn by Hamkins. Now, if there is only one model of arithmetic and many legitimate set-theoretic models, it becomes fundamentally important to consider that the interpretation of arithmetic in set theory is revisable and that the model of arithmetic may not even be characterizable in some set-theoretic models. It is in view of this consideration that we should now investigate what we call *the coordination problem*.

## **3. The Coordination Problem**

Let us consider the following fictional scenario for the development of set theory and arithmetic. There are two groups of mathematicians who decide about new axioms for set theory and arithmetic. The first, *G*<sub>*s*</sub>, is responsible for one (among possibly many) set-theoretic universe, and the second, *G*<sub>*a*</sub>, for the arithmetic structure. Let us further assume that *G*<sub>*a*</sub> agrees with the standard expansion of arithmetic in ZF (*A*<sup>*ZF*</sup><sub>*N*</sub> is considered valid by *G*<sub>*a*</sub>). How should we frame the relation between the two groups?

Suppose that *G<sub>s</sub>* has decided in favor of a new axiom *α* for the set theory ZF. In particular, this would expand the set of arithmetic truths in *A<sub>N</sub><sup>ZF+α</sup>*. Should *G<sub>a</sub>* consider this new set to be true? If this is the general attitude towards arithmetic, then the standard reduction determines new truths for arithmetic. In what sense does the standard interpretation provide a foundation for new arithmetical truths? If we think that the standard interpretation does this, it seems that we have simply assumed that arithmetic lives in set theory, without any further considerations. After all, this framework binds the expansion of arithmetic truth to the expansion of set-theoretic truth. Therefore, *G<sub>a</sub>* would not have any authority over new arithmetic axioms after all.

In order to make room for this setting, one would have to consider that we have a better understanding of how arithmetic is reduced to set theory than we have of each of the theories themselves. And, for this to work in general, we should consider the reduction of arithmetic to set theory **unrevisable**.

Very often we consider ourselves to have a good understanding of relations between things of which we may not have a good understanding at all. This is the case when translating a sentence like "Napoleon was an emperor". We may have many doubts about the ontological status of the words used in this sentence and still be confident about how to translate it into Chinese.

Indeed, we may be more confident about the way we reduce arithmetic to set theory than about the truth in these theories. Yet this is not sufficient to assume the unrevisability of the reduction relation. After some investigation of the concept of emperor, one may realize that the standard translation of 'emperor' into Chinese does not really represent what English speakers refer to with 'emperor'. For instance, 'emperor' is usually translated as 'Huangdi' in Chinese, even though this word associates the monarch with divinity. In English, although often associated with divinity, the word 'emperor' can be used without divine association. So a more intricate description, such as 'Napoleon was the non-divine man who ruled over the French empire', would be better (even if it is not practical).

If there are grounds for taking *N* to be a privileged interpretation, those would be based on partial representations of arithmetic and set theory. Therefore, the idea that *N* correctly works as a connection between the theories may persist simply because we have not advanced the theories enough. This would be similar to the case of a Chinese translator working on a Western modern history book who has been translating 'Emperor' as 'Huangdi'. It would seem perfectly fine for him to believe this to be a general translation, given that the only context in which he had applied it was the 'Emperor of the Holy Roman Empire'. But as he starts translating the Napoleonic period, the broader picture would force him to reconsider the generality of the translation.

A different picture would be the case where the Chinese translator invented a language in which *w* means 'blue chair'. Finding someone else using *w* to refer to a red chair, he could correctly accuse that person of using the word incorrectly. This would be similar to the case where we consider arithmetic to be a definition inside set theory. But if that were the case, there would be no foundational gain in studying the relation between the theories.

Given that set theory has a foundational role for arithmetic, we may now consider that the standard interpretation is a good yet revisable set-theoretic inspection of arithmetic. It is precisely because we assume the interpretation to be revisable that a foundational relation can be argued for. As truth expands in both theories, we evaluate conflicts and revise, if necessary, the interpretation to accommodate changes. The steps in the coordination of *G<sub>a</sub>* and *G<sub>s</sub>* can be summarized as follows:


As we see in Step 2, the two communities should sit together and reevaluate the state of the reduction if necessary. Hopefully, such conferences would rarely occur. But we should allow some independence to each group. Otherwise, the development of one theory, especially arithmetic, would turn out to be assumed by definition in the development of the other.

We have added some life to the grounding relation by allowing it to fail. However, there is still a deeper problem. The following scenario is still possible:


Allowing both of these possibilities weakens the edifice of the grounding relation. Each moment in the development of the theories is an incomplete stage at which we cannot anticipate the impossibility of reductions occurring further along in the development of the theories. From (i), any addition to the theories allows one to find (or keep) an interpretation of arithmetic. However, from (ii), finding those interpretations does not support the idea that arithmetic is indeed reducible to a given set theory. This scenario is possible, as the next theorem shows.

**Theorem 2.** *Let S be a consistent extension of ZF and A a consistent recursive extension of PA. Then there is a consistent extension A<sup>∗</sup> of A that is not interpretable in S.*

**Proof.** We extend the theory *A* by generating a sequence of theories, each of which fails to be interpretable in *S* by a particular interpretation *I*. Since these theories are compatible with each other, their union will not be interpretable in *S*.

Let *A*<sub>0</sub> = *A* and {*I*<sub>1</sub>, *I*<sub>2</sub>, . . .} be an enumeration of all interpretations from the language of PA to the language of ZF. We generate a sequence of theories *A*<sub>0</sub> ⊆ *A*<sub>1</sub> ⊆ . . . ⊆ *A<sub>n</sub>* ⊆ . . . by adding **one** formula at each step. It should be noticed that the proof here is not constructive, meaning that we are not using a recursive method to determine the new formula added to *A<sub>i</sub>* to obtain *A*<sub>*i*+1</sub>. Nonetheless, since every theory *A<sub>i</sub>* is the addition of *i* formulas to the recursively axiomatized *A*<sub>0</sub>, each *A<sub>i</sub>* is also recursively axiomatized. In this case, for every *i*, there is a formula *G<sub>i</sub>* obtained by the Rosser–Gödel diagonalization argument. With this in mind, we define the *A<sub>i</sub>*'s as follows (abbreviation: *T* ≤<sub>*J*</sub> *T*′ represents "*T* is interpreted in *T*′ by *J*"):

Let *ϕ*<sub>0</sub>, *ϕ*<sub>1</sub>, . . . , *ϕ<sub>k</sub>*, . . . be an enumeration of arithmetic formulas.

1. If *A<sub>i</sub>* ≤<sub>*I<sub>i</sub>*</sub> *S* and there is a least *k* such that *A<sub>i</sub>* ⊬ *ϕ<sub>k</sub>* and *S* ⊢ *ϕ<sub>k</sub>*<sup>*I<sub>i</sub>*</sup>, then

$$A_{i+1} = A_i \cup \{\neg \varphi_k\}.$$

2. Otherwise,

$$A_{i+1} = A_i \cup \{G_i\}.$$

Let *A*<sup>∗</sup> = ⋃<sub>*i*∈*ω*</sub> *A<sub>i</sub>*. We note that *A*<sup>∗</sup> is a consistent extension of *A*: at each step we add either the negation of an unprovable formula or an undecidable Rosser sentence, so every *A<sub>i</sub>* is consistent, and hence so is the union of the chain.

Suppose *A*<sup>∗</sup> is interpretable by *I* in *S*; then *I* = *I<sub>k</sub>* for some natural number *k*. Notably, if a theory *T* is interpreted in a theory *T*′, then any subtheory of *T* is interpreted in *T*′ by the same interpretation. Thus the entire sequence of theories {*A*<sub>1</sub>, *A*<sub>2</sub>, . . .} is interpreted in *S* by *I<sub>k</sub>*. In particular, we have *A<sub>k</sub>* ≤<sub>*I<sub>k</sub>*</sub> *S* and, as in the definition, *A*<sub>*k*+1</sub> = *A<sub>k</sub>* + ¬*ϕ<sub>q</sub>* or *A*<sub>*k*+1</sub> = *A<sub>k</sub>* + *G<sub>k</sub>*. If *A*<sub>*k*+1</sub> = *A<sub>k</sub>* + ¬*ϕ<sub>q</sub>*, then option 1 in the definition was used and we have *S* ⊢ *ϕ<sub>q</sub>*<sup>*I<sub>k</sub>*</sup>. However, since *S* also interprets *A*<sub>*k*+1</sub> with *I<sub>k</sub>*, we have the contradiction *S* ⊢ ¬*ϕ<sub>q</sub>*<sup>*I<sub>k</sub>*</sup>. If *A*<sub>*k*+1</sub> = *A<sub>k</sub>* + *G<sub>k</sub>*, then option 1 was not applied and we have either (i) *A<sub>k</sub>* ≰<sub>*I<sub>k</sub>*</sub> *S* or (ii) that, for all *n*, *A<sub>k</sub>* ⊢ *ϕ<sub>n</sub>* if, and only if, *S* ⊢ *ϕ<sub>n</sub>*<sup>*I<sub>k</sub>*</sup>. Note that (i) contradicts *A<sub>k</sub>* ≤<sub>*I<sub>k</sub>*</sub> *S*. Moreover, since *A<sub>k</sub>* ⊬ *G<sub>k</sub>*, it follows from (ii) that *S* ⊬ *G<sub>k</sub>*<sup>*I<sub>k</sub>*</sup>, which, in turn, implies the contradiction *A*<sub>*k*+1</sub> ≰<sub>*I<sub>k</sub>*</sub> *S*. Therefore, *A*<sup>∗</sup> is not interpretable in *S*.
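The diagonal construction at the heart of this proof can be pictured with a toy computational sketch (the sketch, its function names, and its miniature "interpretations" are entirely ours, not from the paper): against any countable enumeration of candidate interpretations, one builds a theory that disagrees with the *i*-th interpretation at stage *i*, so no interpretation in the list can match it.

```python
# Toy model of the diagonalization in Theorem 2 (ours, not the paper's):
# an "interpretation" here is just a yes/no verdict on formula names.

def diagonal_theory(interpretations, n):
    """Stage i accepts formula phi_i exactly when interpretation i rejects it."""
    return [not interpretations[i]("phi_%d" % i) for i in range(n)]

# Three toy interpretations: accept everything, reject everything,
# accept formulas with an even index.
interps = [
    lambda f: True,
    lambda f: False,
    lambda f: int(f.split("_")[1]) % 2 == 0,
]

theory = diagonal_theory(interps, 3)
print(theory)  # → [False, True, False]; disagrees with interpretation i on phi_i
```

Of course, the real proof works with provability and the Rosser–Gödel sentences rather than truth-value verdicts; the sketch only illustrates why a countable list of interpretations can always be evaded.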

Let *A* = *A<sub>N</sub><sup>ZF</sup>*, let *Ak* be the Ackermann interpretation of membership in the arithmetic language, and consider a formula *ϕ* that is equivalent to (*con*(*ZF*))<sup>*Ak*</sup> in *A*. Suppose also that the group *G<sub>a</sub>* considers *ϕ* to be valid. Notably, this formula would represent a relation between natural numbers such that the standard interpretation stops being a correct interpretation of arithmetic. Similar constructions can be used to generate a myriad of examples. However, each of these examples can be subject to a 'contrary to intuition' kind of criticism. In the case presented, one may suggest that (*con*(*ZF*))<sup>*Ak*</sup> means that we are adding an axiom representing the consistency of ZF to the arithmetic without doing the same in the set theory. Simply adding the axiom *con*(*ZF*) to our set theory would make the standard interpretation work nicely again. Nevertheless, we note that the phenomenon presented in the theorem is not exactly the addition of isolated axioms, but the addition of an enumeration of axioms to the arithmetic. Our suggestion is therefore that a bundled addition of axioms may force the theories to lose coordination. We also note that we do not require the set theory *S* to be recursive. For this reason, one may simply consider that *S* is a complete extension of ZF. In this case, no addition to the set theory would possibly allow the theories to recover the interpretability relation.

We have argued that it is possible for ZF and PA to part ways along the path of development. Although disturbing, this may simply account for the meaningfulness of the question about the reduction between the two theories. We have considered that we should conceive of the reduction as able to fail (even fatally, as in this case) in order not to take for granted that it works. Note further that this pays tribute to the idea that by interpreting arithmetic in set theory we should convey something that was not simply given, i.e., that arithmetic lives in the realm of set theory. Nonetheless, we should now show the simple (and not novel) result that the number of consistent extensions of PA is uncountable, while the number of interpretations is trivially countable. This means that we are in a situation similar to choosing a random number on the real line and expecting to find a natural number. Our claim is that, for this reason, the coordination between the systems can work only if it is assumed from the beginning, as a principle.

**Theorem 3.** *Let A be a consistent recursive extension of PA. Then there is an uncountable number of consistent extensions of A.*

**Proof.** By the incompleteness theorem, there is a formula *G* that is undecidable in PA. Thus, both *PA* + *G* and *PA* + ¬*G* are consistent. Notably, this remains true after the addition of any finite number of new axioms *α*<sub>1</sub>, *α*<sub>2</sub>, . . . , *α<sub>n</sub>*: there is a formula *G*<sub>⟨*i*⟩</sub> that is undecidable in *A*<sub>⟨*i*⟩</sub> = *PA* + {*α*<sub>1</sub>, *α*<sub>2</sub>, . . . , *α<sub>n</sub>*}, since *A*<sub>⟨*i*⟩</sub> is a recursive extension. Let us then index PA extensions with binary codes (i.e., sequences of 0's and 1's) in the following way:


Note that each member of Σ is an extension of PA by the addition of a finite number of formulas. Now we build infinite extensions of PA from Σ. Let *M* : *FinBin* → Σ be the map between binary codes and extensions in Σ. We say that *C* : *ω* → *FinBin* is a chain in *FinBin* when ∀*x*, *y* ∈ *ω*(*x* ≤ *y* → *C*(*y*) extends the code *C*(*x*)). Also, if *x* ∈ *FinBin*, we write

$$x(n) = \begin{cases} \text{the } n\text{th digit of } x, & \text{if } x \text{ has an } n\text{th digit} \\ 0, & \text{otherwise} \end{cases}$$

If *C* is a chain in *FinBin*, then *dig<sub>C</sub>* = ⟨(*C*(0))(0), (*C*(1))(1), . . . , (*C*(*n*))(*n*), . . .⟩ is an infinite binary code associated with the extension *Ex<sub>C</sub>* obtained by

$$\bigcup \{ C(i) \mid i \in \omega \}$$

We define Π as the set

{⟨*dig<sub>C</sub>*, *Ex<sub>C</sub>*⟩ | *C* is a chain in *FinBin*}

Note that Π is a function from the set of all infinite binary codes to extensions of PA. Since the infinite binary codes are uncountable, we need only show that Π is injective and that the image of Π is composed of consistent extensions of PA.

Suppose that some *Ex<sub>C</sub>* is not consistent; then there is a finite proof of the inconsistency of *Ex<sub>C</sub>*. Hence, there is *n* ∈ *ω* such that *Ex<sub>C</sub><sup>n</sup>* = ⋃{*C*(*i*) | *i* ∈ *n*} = *C*(*n*) is inconsistent. But this is false, since each *C*(*i* + 1) is obtained by adding an unprovable formula to *C*(*i*), and *C*(0) = *PA* is assumed consistent.

Suppose that Π(*dig*<sub>*C*<sub>1</sub></sub>) = Π(*dig*<sub>*C*<sub>2</sub></sub>) and that *dig*<sub>*C*<sub>1</sub></sub> ≠ *dig*<sub>*C*<sub>2</sub></sub>. Then there is a least *i* such that *dig*<sub>*C*<sub>1</sub></sub>(*i* + 1) ≠ *dig*<sub>*C*<sub>2</sub></sub>(*i* + 1). This means, without loss of generality, that *C*<sub>1</sub>(*i* + 1) = *C*<sub>1</sub>(*i*) + *G*<sub>*C*<sub>1</sub>(*i*)</sub>, *C*<sub>2</sub>(*i* + 1) = *C*<sub>2</sub>(*i*) + ¬*G*<sub>*C*<sub>2</sub>(*i*)</sub>, and *C*<sub>1</sub>(*i*) = *C*<sub>2</sub>(*i*). Therefore, Π(*dig*<sub>*C*<sub>1</sub></sub>) contains both formulas *G*<sub>*C*<sub>1</sub>(*i*)</sub> and ¬*G*<sub>*C*<sub>1</sub>(*i*)</sub>. This is absurd, as we just showed that the image of Π is composed of consistent extensions of PA.
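The combinatorial core of this proof, in which each infinite binary code chooses at stage *i* either a Gödel sentence or its negation, can likewise be pictured with a small sketch (again entirely ours, with formulas represented as opaque strings):

```python
# Toy model of the chains in Theorem 3 (ours): bit 1 at stage i adds "G_i",
# bit 0 adds its negation "~G_i".

def extension(code_prefix):
    """Set of formulas added along a finite prefix of a binary code."""
    return {("G_%d" % i) if bit == 1 else ("~G_%d" % i)
            for i, bit in enumerate(code_prefix)}

e1 = extension([1, 0, 1])
e2 = extension([1, 1, 1])
print(e1 != e2)  # → True; codes differing at stage 1 give distinct extensions
```

Since distinct codes already differ on some finite prefix, the map from infinite codes to extensions is injective, which is exactly why there are as many extensions as binary codes.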

We note that the same result can be obtained even if the starting point includes all theorems of the set theory *S* under the interpretation. Indeed, we can include the theorems under a given interpretation at any point without interfering with the result.

Although extensions like *A*<sup>∗</sup> are in general not interpretable in *S*, the process of generating these theories is internalizable in *S*. Therefore, we may say that *S* proves the consistency statement for all these extensions. This is not enough to claim a proper foundational relation. The model construction emerging from this type of consistency proof is simply given by the existence of a model, as in the Henkin canonical construction. Thus, the model one can generate in this way provides little more information than saying that the theory is consistent (see [27]). Therefore, we should not consider those cases a path around the problem discussed in this section.

As developed in this section, we should not consider the addition of new axioms to the systems to be coordinated in principle. Instead, the reducibility of arithmetical truth should be a result of the expressiveness of set theory. However, assuming that the choices of the two groups *G<sub>a</sub>* and *G<sub>s</sub>* would result in an interpretable arithmetic is similar to expecting a random choice of a real number to yield a natural number (which has probability zero). It follows that coordination between the groups of mathematicians can occur only if it is assumed as a principle. Hence, the reduction of arithmetic truth to set theory is not attainable unless assumed, and the foundational relation should be based on other grounds.

To further elaborate on this conclusion, let us consider a metaphor. Picture the unstable equilibrium of a sphere on a hill with a very small slope. The appearance of equilibrium represents our intuition that the reduction between the theories is correct. Indeed, we have put the sphere in a position that appears to be an equilibrium, and since the slope of the hill is very small, our perception of equilibrium works rather well. However, even if it takes a long time, it will become evident that the interpretation of PA in ZF is not in equilibrium. We are, nonetheless, in a better position if we accept the multiversalist view of set theory. Under this assumption, we should say that there are indeed some universes perfectly coordinated with arithmetic under the standard interpretation, and some universes perfectly coordinated with arithmetic under other interpretations. However, these universes are only a small portion of a much larger multitude of possible universes of set theory.

The ideas developed in the present article, especially in Theorem 3, draw attention to the fact that we are talking about an unstable hill. No matter how much the sphere appears to be at rest, we know that eventually it will start moving and fall. The project of using *N* for grounding arithmetic truth is equivalent to finding the equilibrium peak of the hill. It seems a good project as long as we focus on the movement of the sphere, but an analysis of the geography of the hill is already sufficient to conclude that this hill is unstable. We should not base our foundational investigations on the guarantee that we have the correct interpretation in a fixed set theory. Instead, we should use interpretations insofar as they inform us about arithmetic concepts and allow us to consider bundles of arithmetic formulas in the very expressive environment of set theory.

Our position is not that the standard interpretation *N* cannot play a foundational role. Rather, the very possibility of investigating expansions of arithmetic propositions provided by analyzing *N* (or other interpretations) is all the ground we need. In place of using foundational relations to establish 'arithmetic truth', we propose using the interpretation *N* to understand how bundles of arithmetical propositions relate to each other. In this case, we use the technical apparatus and the expressiveness of theories like ZF to analyze arithmetical concepts rather than to fix their truth.

## **4. Final Remarks**

Rather than manipulating models of PA, we have considered interpretations of PA in ZF. Our goal was to accommodate the incomplete picture of the set-theoretic metatheory into our analysis of the foundations of arithmetic. The standard interpretation expands what we may consider true in arithmetic: many formulas that are undecidable in PA become theorems when examined under the interpretation in ZF. This is a general phenomenon: for every well-founded interpretation of recursive extensions of PA in extensions of ZF, the interpreted version of arithmetic has more theorems than the original. This shows that studying arithmetic inside set theory can be significant. As one considers these interpretations, one explores the **expansion of arithmetic truth** and how the addition of bundles of axioms plays out.

We continued by introducing the coordination problem. We considered two independent communities of mathematicians responsible for deciding on new axioms of ZF and PA. Using this setting, we studied the possibility of coordinating PA with PA's interpretation in ZF. We showed that it is possible to have extensions of PA that are not interpretable in a given set theory S. Moreover, considering a recursive extension A of PA and an extension S of ZF, we proved that there are uncountably many extensions of A but only countably many interpretations of arithmetic in S. This last result implies that the two communities of mathematicians would have to be coordinated from the start. However, we argued that this would empty the foundational role of set theory over arithmetic.

We have, therefore, set up a framework to criticize the notion of grounding truth between theories such as arithmetic and set theory, especially with respect to the idea of fixing an interpretation between the systems. Indeed, the multiversalist propagates their pluralism from set theory to arithmetic by relying on the standard interpretation. We reject this conclusion, arguing that it is the interpretation that should be revised. By allowing the interpretation of arithmetic into set theory to change, we make set-theoretic pluralism compatible with the view that there is a single arithmetic structure.

However, this is not to be understood as a general criticism of the idea of using set theory to investigate foundational matters regarding arithmetic. Instead, we have only shown that it may be flawed to assume that a single set theory can really provide grounds for arithmetic truth or a definitive description of the universe of numbers. Our suggestion is therefore to consider a foundational relation that aims primarily at clarifying the concepts involved in the studied theory. An expressively rich environment such as set theory is armed with tools to study arithmetical relations in wider settings than would be possible within arithmetic's own deductive apparatus.

**Funding:** This research was supported by FCT through CIDMA and projects UIDB/04106/2020 and UIDP/04106/2020.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


## *Article* **A Story of Computational Science: Colonel Titus' Problem from the 17th Century**

**Trond Steihaug**

Department of Informatics, University of Bergen, N-5020 Bergen, Norway; trond.steihaug@ii.uib.no

**Abstract:** Experimentation and the evaluation of algorithms have a long history in algebra. In this paper we follow a single test example over more than 250 years. In 1685, John Wallis published *A treatise of algebra, both historical and practical*, containing a solution of Colonel Titus' problem, which had been proposed to him around 1650. Colonel Titus' problem consists of three algebraic quadratic equations in three unknowns, which Wallis transformed into the problem of finding the roots of a fourth-order (quartic) polynomial. When Joseph Raphson published his method in 1690, he demonstrated it on 32 algebraic equations, one of which was this quartic equation. Edmund Halley later used the same polynomial as an example for his new methods in 1694. Although Wallis used the method of Viète, a digit-by-digit method, the more efficient methods of Halley and Raphson are clearly demonstrated in their works. For more than 250 years the quartic equation has been used as an example in a wide range of solution methods for nonlinear equations. This paper provides an overview of Colonel Titus' problem and the equation first derived by Wallis. The quartic equation has four positive roots and has been found to be very useful for analyzing the number of roots and finding intervals for the individual roots, in the Cardan–Ferrari direct approach for solving quartic equations, and in Sturm's method of determining the number of real roots of an algebraic equation. The quartic equation, together with two other algebraic equations, was likely among the first set of test examples used to compare different iteration methods for solving algebraic equations.

**Keywords:** Viète's method; Newton–Raphson method; regula falsi method; testing of algorithms

**MSC:** 65-03; 68-03; 01A50; 01A55; 01A60

## **1. Introduction**

A problem brought to John Pell (1611–1685) in 1649, and discussed at the time with Silius Titus (1623–1704), was the following: to find numbers *a*, *b*, and *c* satisfying the equations

$$a^2 + bc = 16, \quad b^2 + ac = 17, \text{ and } c^2 + ab = 22. \tag{1}$$

A solution with positive integers is easily seen to be *a* = 2, *b* = 3, and *c* = 4, but Pell decided to challenge himself by changing the final equation:

$$a^2 + bc = 16, \quad b^2 + ac = 17, \text{ and } c^2 + ab = 18. \tag{2}$$

In 1662, Pell left notes on their progress for Titus and by the following year he and John Wallis had successfully solved it, calculating values of *a*, *b*, and *c* to 15 decimal places each [1]. The solution was printed in 1685 [2], derived from the general problem

$$a^2 + bc = l, \quad b^2 + ac = m, \text{ and } c^2 + ab = n.$$

Colonel Titus' problem is likely the earliest instance of a problem involving three simultaneous quadratic equations ([3], p. 34) and is one of the first algebraic problems leading to a quartic equation, an equation that is not derived from a problem in geometry or trigonometry.

**Citation:** Steihaug, T. A Story of Computational Science: Colonel Titus' Problem from the 17th Century. *Axioms* **2022**, *11*, 287. https://doi.org/10.3390/axioms11060287

Academic Editor: Oscar Castillo

Received: 21 April 2022; Accepted: 22 May 2022; Published: 14 June 2022

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

A variant of the Colonel Titus problem is Question 113 in the *Ladies' Diary* from 1725, shown in Figure 1:

$$a^2 + bc = 920, \quad b^2 + ac = 980, \text{ and } c^2 + ab = 1000,\tag{3}$$

and was solved by John Turner in 1726. Turner only specifies the quartic equation to be solved and a solution *a*, *b*, *c* of (3). Question 113 is also found in algebra textbooks from 1820 ([4], p. 405) and 1840 ([5], p. 563).

The publications of collected questions in *Ladies' Diary* in 1774, 1775 [6,7], and 1817 [8] sparked new interest in Colonel Titus' problem.

A fourth variant of the problem was published as Question 209 in *The Scientific Receptacle* in 1796 and is shown in Figure 2:

$$a^2 + bc = 1\ 806\ 520, \quad b^2 + ac = 2\ 225\ 275, \text{ and } c^2 + ab = 5\ 567\ 720. \tag{4}$$

John Ryley solved the problem and introduced a new way to solve it by expressing *a* and *b* as a fraction of *c* [9].


**Figure 1.** Question 113, proposed by Thomas Grant in the *Ladies' Diary* from 1725, taken from Charles Hutton 1775 ([7], p. 266). In the collection by Leybourn from 1817, the question is slightly rephrased ([8], p. 145).

Wallis ([2], pp. 225–256) eliminates the variables *b* and *c* in (2) and reduces the three equations to the fourth-order algebraic equation

$$x^4 - 80x^3 + 1998x^2 - 14937x + 5000 = 0 \tag{5}$$

where *x* = 2*a*<sup>2</sup>. In the following we will use the term "Pell–Wallis equation" to refer to (5). To determine a root *x*<sup>∗</sup>, Wallis uses Viète's method, and *a* is found through *a* = √(*x*<sup>∗</sup>/2). To compute *b*, Wallis derives a cubic equation, which follows from multiplying the first quadratic equation by *a* and the second by *b* and eliminating *abc*:

$$17b - b^3 = 16a - a^3, \text{ where } a = \sqrt{\tfrac{1}{2}x^*}.$$

Having found *a* and *b*, the unknown *c* is found from the first quadratic, *a*<sup>2</sup> + *bc* = 16.
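As a modern check (a sketch of ours, not Wallis's digit-by-digit procedure), the recipe above can be carried out with Newton's method; the starting guesses 13 and 3 below are our choices, obtained by bracketing the relevant roots.

```python
# Solve the Pell-Wallis quartic and recover a, b, c for (l, m, n) = (16, 17, 18).

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Plain Newton iteration; assumes x0 is close enough to the target root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

p  = lambda x: x**4 - 80*x**3 + 1998*x**2 - 14937*x + 5000   # equation (5)
dp = lambda x: 4*x**3 - 240*x**2 + 3996*x - 14937

x_star = newton(p, dp, 13.0)       # the root near 13 gives the Titus solution
a = (x_star / 2) ** 0.5            # from x = 2a^2

# b solves the cubic 17b - b^3 = 16a - a^3; take the root near 3.
g  = lambda b: 17*b - b**3 - (16*a - a**3)
dg = lambda b: 17 - 3*b**2
b = newton(g, dg, 3.0)

c = (16 - a**2) / b                # from a^2 + bc = 16
print(round(a, 4), round(b, 4), round(c, 4))
```

With this choice of root and branch, all three quadratic equations of (2) are satisfied to machine precision; a different positive root of the quartic leads to a different branch of the solution.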

One of the most classical problems in mathematics is the solution of systems of polynomial equations in several unknowns [10]. They arise in robotics, coding theory, optimization, mathematical biology, computer vision, game theory, statistics, machine learning, control theory, and numerous other areas [10]. Systems of quadratic polynomial equations appear in nearly every crypto-system [11] and in robotics [12].

For more than 250 years, the equation *x*<sup>4</sup> − 80*x*<sup>3</sup> + 1998*x*<sup>2</sup> − 14937*x* + 5000 = 0 has played an important role in the development of new methods, analyses of algebraic equations, and comparisons of methods for solving nonlinear equations.

In Section 2 we discuss four different approaches to solving Colonel Titus' problem that have appeared in the literature, and in Section 3 we discuss different techniques and methods using the Pell–Wallis Equation (5). For a modern treatment of numerical methods for roots of polynomials, see [13,14] and references therein. For solving systems of polynomial equations, see [10,11] and references therein.

## **2. Colonel Titus' Problem**

Using the notation in Wallis's algebra book from 1685 ([2], Ch. LX–LXI), the general Colonel Titus problem is as follows. For given positive real numbers *l*, *m*, and *n*, find *a*, *b*, and *c* such that

$$a^2 + bc \quad = \quad l,\tag{6}$$

$$b^2 + ac \quad = \quad m \text{ } \text{ and} \tag{7}$$

$$c^2 + ab \quad = \quad n.\tag{8}$$

We review several solution techniques, mainly using what could be described as high-school algebra [15]. An elegant solution is given in *Solutions of the principal questions of Dr. Hutton's course of mathematics* by Thomas Stephens Davies, and we follow his solution technique.

From (6) and (7) we have

$$c = \frac{l - a^2}{b} \text{ and } c = \frac{m - b^2}{a}.$$

Equating the two expressions for *c*, we have a cubic equation in *b*:

$$b^3 - mb + a(l - a^2) = 0.$$

From (8) and the two expressions for *c* above, we have

$$n - ab = c^2 = \frac{l - a^2}{b}\,\frac{m - b^2}{a}$$

which is a quadratic equation in *b*

$$(l - 2a^2)b^2 + nab + (a^2 - l)m = 0.$$

Multiply the quadratic equation by *b* and the cubic equation by *l* − 2*a*<sup>2</sup>, and subtract the two expressions to eliminate the cubic term. We now have two quadratic equations in *b*:

$$(l - 2a^2)b^2 + nab + (a^2 - l)m = 0 \text{ and } nb^2 - mab + (a^2 - l)(l - 2a^2) = 0.$$

To eliminate *b*<sup>2</sup>, multiply the first quadratic equation by *n* and the second by *l* − 2*a*<sup>2</sup>, and subtract the two resulting quadratic equations. The result is a linear equation in *b*. Solve for *b*:

$$b = \frac{(l - a^2)(mn - (l - 2a^2)^2)}{a(n^2 + m(l - 2a^2))}.$$

Dividing the second quadratic equation by *b* gives *nb* − *ma* = (*l* − *a*<sup>2</sup>)(*l* − 2*a*<sup>2</sup>)/*b*. Substitute the value for *b* into the left-hand side,

$$nb - ma = \frac{n(l - a^2)(mn - (l - 2a^2)^2) - ma^2(n^2 + m(l - 2a^2))}{a(n^2 + m(l - 2a^2))},$$

and into the right-hand side,

$$\frac{(l - a^2)(l - 2a^2)}{b} = \frac{a(l - 2a^2)(n^2 + m(l - 2a^2))}{mn - (l - 2a^2)^2}.$$

Equate the two expressions and simplify:

$$8a^8 - 20la^6 + (18l^2 - 2mn)a^4 + (5lmn - 7l^3 - m^3 - n^3)a^2 + (l^2 - mn)^2 = 0.$$

Multiply the equation by 2 and let *x* = 2*a*<sup>2</sup>; we obtain the equation

$$x^4 - 5lx^3 + (9l^2 - mn)x^2 + (5lmn - 7l^3 - m^3 - n^3)x + 2(l^2 - mn)^2 = 0.\tag{9}$$


For each real root *x*<sup>∗</sup> of (9), the values *a*, *b*, and *c* can be computed in the following way: *a* from 2*a*<sup>2</sup> = *x*<sup>∗</sup>, *b* from *nb*<sup>2</sup> − *mab* + (*a*<sup>2</sup> − *l*)(*l* − 2*a*<sup>2</sup>) = 0, and *c* from *a*<sup>2</sup> + *bc* = *l*. For *l* = 16, *m* = 17, and *n* = 18 we have the equation

$$x^4 - 80x^3 + 1998x^2 - 14937x + 5000 = 0$$

which is (5).
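The coefficients of (9) are easy to check mechanically; the following sketch (the helper name `titus_quartic` is ours) builds them from (*l*, *m*, *n*) and confirms that (16, 17, 18) reproduces the Pell–Wallis Equation (5).

```python
# Coefficients of the general quartic (9), highest degree first.

def titus_quartic(l, m, n):
    return [1,
            -5 * l,
            9 * l**2 - m * n,
            5 * l * m * n - 7 * l**3 - m**3 - n**3,
            2 * (l**2 - m * n)**2]

print(titus_quartic(16, 17, 18))  # → [1, -80, 1998, -14937, 5000]
```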

Different techniques for solving Colonel Titus' problem have been suggested in the literature by philomaths and mathematicians, schoolteachers of mathematics, and professors of mathematics. The solution techniques can mainly be divided into two groups: the first is based on direct elimination, and the second on first reformulating the problem and then performing an elimination.

The first solution to Colonel Titus' problem was published by J. Wallis [2], an elimination of the unknowns that results in the quartic Equation (5). To find the four positive roots of (5), Wallis used a digit-by-digit computation method. The solution of Colonel Titus' problem by Wallis was republished by Francis Maseres (1731–1824) in 1800, including numerous details ([16], pp. 187–239). However, Maseres did not use a digit-by-digit method to find the roots, but rather the Newton–Raphson method. Similar solutions using explicit elimination are found in [5,17–19], all leading to the same quartic Equation (5). J. Kirkby [20] in 1735 and A. Cayley [21] in 1860 used general elimination theory, leading to the same quartic equation.

The method of introducing two new variables, expressing two of the unknowns as fractions of the third, was studied by J. Ryley [9] in 1796 and made popular by William Frend [22] in 1800. Variations of this technique are found in [23–25]. Ivory expressed two of the unknowns as differences of the third [26,27]. All these reformulations lead to quartic equations that are different from the quartic Equation (5). These quartic equations never reached the same popularity as (5).

The use of iterative methods to solve the three equations simultaneously was suggested in the *Diarian Repository* [6] in 1774 and by Whitley [28] in 1824.

## *2.1. Ladies' Diary 1725 Question 113*

We find a variation of Colonel Titus' problem in the journal *Ladies' Diary* from 1725 in Question 113, shown in Figure 1, where *l* = 920, *m* = 980, and *n* = 1000.

In *Ladies' Diary* in 1726 John Turner (active in *Ladies' Diary* from 1726 to 1750 ([17], p. 423)) gives a solution of the problem and states the equation (using the notation in Wallis)

$$8a^8 - 20la^6 + (18l^2 - 2mn)a^4 + (5lmn - 7l^3 - m^3 - n^3)a^2 + l^4 - 2l^2mn + m^2n^2 = 0.$$

Setting *x* = 2*a*<sup>2</sup> and multiplying the equation by two gives (9). Turner gives the solution of Question 113 in *Ladies' Diary* to be 19.5991, 22.7788, and 23.5276. There are three minor typographical errors in the equation as printed by Turner ([29], p. 7):

$$8a^8 - 18la^6 + (18l^2 + 2mn)a^4 + (5lmn - 7l^3 - m^3 - n^3 - mn)a^2 + l^4 - 2l^2mn + m^2n^2 = 0.$$

These three typographical errors are repeated in *Diarian Miscellany* [7] and *Diarian Repository* [6] and one error is pointed out in the Errata of [8].

For *l* = 920, *m* = 980, and *n* = 1000, the Equation (9) has four positive roots approximately equal to 1937.6, 1881.6, 768.0, and 12.7, and the only root that gives reasonable ages is 768.0, and the ages are approximately *a* = 19.5965, *b* = 22.7799, and *c* = 23.5286.

## *2.2. A Renewed Interest in Colonel Titus' Problem*

In *Diarian Repository* by S. Clark [6], pp. 190–191 (Archibald [30] states that Samuel Clark was the editor of this repository) from 1774; in *Diarian Miscellany* by C. Hutton from 1775 [7], pp. 266 and 271; and later in Leybourn's four-volume collection of questions in *Ladies' Diary* from 1817 [8], pp. 145–146, we find Question 113 and the three Equations (6)–(8). The three repositories [6–8] all reproduce John Turner's equation and solution (ages) but also give additional information or alternative solution techniques.

Leybourn also presents an additional solution of Colonel Titus' problem provided by Mark Noble (a mathematician at the Royal Military College, Sandhurst) in the appendix of the fourth volume [17], pp. 255–259. The contribution is signed N, and in the preface of Leybourn's first volume ([8], Preface, p. X) it is stated that "this is Mark Noble". This is an elimination technique and it leads to the same quartic equation as in Wallis. Noble derives one cubic and one quadratic equation similar to the equations derived by Kirkby. Although Kirkby invokes a general elimination result from Newton ([31], p. 74), Noble carries out the elimination explicitly and obtains Equation (9). Noble gives the roots of the polynomials and the different values of *a*, *b*, and *c*.

## *2.3. The Scientific Receptacle 1796 Question 209*

*The Scientific Receptacle* published in 1796 the question shown in Figure 2 ([9], p. 77). The problem is to find positive numbers *a*, *b*, and *c* (using the notation in Wallis) so that

> *a* + *bc* = 1,806,520, *b* + *ac* = 2,225,275, and *c* + *ab* = 5,567,720

with a solution published in a later issue in the same volume ([9], p. 95).


**Figure 2.** Question 209 in *The Scientific Receptacle* from 1796 proposed by James Gale.

John Ryley (1747–1815) published the solution of the problem in 1796 [9]. Ryley considered the three Equations (6)–(8) and introduced two new variables *x* and *y* so that

$$b = x\,c \quad \text{and} \quad a = y\,c, \tag{10}$$

and derived the equation

$$(n^2 - lm)x^4 + (m^2 + ln)x^3 - 4mnx^2 + (n^2 + lm)x + m^2 - ln = 0.\tag{11}$$

From a root of (11), all other quantities can be determined. However, Ryley does not compute any root of (11) or values of *a*, *b*, and *c* for the given *l*, *m*, and *n*.

## *2.4. First Reformulation and then Elimination*

J. Ryley was the first to express two of the unknowns as a fraction of the third. W. Frend (1757–1841) ([22], pp. 240–246) in 1800 provided a different derivation and introduced *x* and *y* so that

$$b = x\,a \quad \text{and} \quad c = y\,a,\tag{12}$$

and derived the equation

$$(mn - l^2)x^4 - (ln + m^2)x^3 + 4lmx^2 - (l^2 + mn)x - m^2 + ln = 0.\tag{13}$$

A minor improvement of Frend's solution, avoiding a square root, was given by John Hellins (c. 1749–1827) in the introduction of the same volume in which Frend's solution was found [16], pp. lxxi–lxxii. By interchanging the variables, Equation (13) can be derived from (11).

For *l* = 16, *m* = 17, and *n* = 18 we obtain the equation

$$50x^4 - 577x^3 + 1088x^2 - 562x - 1 = 0.$$

The equation has four real solutions (or roots), of which one is negative. Maseres [16] (pp. 246–275) finds the three positive roots to be approximately 1.027179787, 1.17565, and 9.3388519 using Newton–Raphson iteration. Maseres regards the root 1.027179787 as "impossible" since *y* is negative. For a given root, *y* and the unknowns *a*, *b*, and *c* are easily found.
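As a quick numerical check (a sketch using NumPy, not part of the historical material), the coefficients of (13) for *l* = 16, *m* = 17, and *n* = 18 can be formed directly and the quartic solved with a standard root finder:

```python
import numpy as np

l, m, n = 16, 17, 18
# Coefficients of (13): (mn - l^2)x^4 - (ln + m^2)x^3 + 4lm x^2 - (l^2 + mn)x - m^2 + ln
coeffs = [m*n - l**2, -(l*n + m**2), 4*l*m, -(l**2 + m*n), -m**2 + l*n]
real_roots = np.sort(np.roots(coeffs).real)  # all four roots are real here
print(real_roots)  # one negative root and the three positive roots found by Maseres
```

The three largest values agree with Maseres' roots 1.027179787, 1.17565, and 9.3388519.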

Maseres ends the tract with a comment that Mr. Frend's solution has the advantage that it saves the trouble of those very tedious and perplexing algebraic multiplications and divisions necessary in Dr. Wallis's solution [16], p. 275. A similar solution to Frend's was given by Tebay [25] in 1845. A third variation is to express *a* and *b* in terms of *c* [23].

James Ivory (1765–1842) [26] wrote that the solution provided by Wallis to the problem (2) was *remarkably operose and inelegant* and that the solution of the same problem by Frend [22] is preferable to Wallis's solution. Ivory expressed two of the unknowns as differences of the third, and the analysis was printed in 1804 in [26], but with no numerical solution. Ivory restricts his analysis to the specific choice *l* = 16, *m* = 17, and *n* = 18. Ivory's analysis was mailed to Baron Maseres ([27], p. 360) and Maseres added many details and a numerical solution based on the Newton–Raphson method [32].

Whitley [24], p. 121 wrote in 1824 that Mr. Ivory's solution was an elegant specimen of analysis and Davis [18], p. 274 in 1840 called it an exceedingly elegant investigation. Cockle [3] speculated that the analysis can be carried over to the case where *m* = (*n* + *l*)/2. It can be shown that the derivation by Ivory can be extended to the general case of *l*, *m*, and *n*. Maseres [32], pp. 371–395 computed the two positive roots of the quartic equation derived by Ivory and these correspond to the positive *a*, *b*, and *c* values provided by Wallis.

## *2.5. Simultaneous Solution of the Three Unknowns*

In the *Repository Solution* section of the *Diarian Repository* [6], pp. 190–191, an iterative approach was suggested. First, find an approximate solution (in this case 23, 22.5, and 21.1); then, find a correction (*x*, *y*, and *z*) that solves the (linear) equations in which the second-order terms are eliminated. *"... then via the solution of the resulting equations, x, y, and z will be determined to a sufficient degree of exactness; if not, the operation must be again repeated with the last found values ..."* [6], pp. 190–191. This is Newton's method, but no actual computations of *a*, *b*, and *c* are shown, except for finding the starting point for the iteration.

J. H. Swales, the editor of the *Liverpool Apollonius* asked its readers in 1823 to find a simpler solution than those given by Ivory [26], p. 156 and Frend [22], p. 240. Three traditional solutions were submitted by J. Whitley, Settle, and S. Ryley and a completely

new approach using a fixed-point iteration method by Whitley was published in the next volume. In 1853, T. T. Wilkinson, in his series of articles on the *History of Mathematical Periodicals* wrote in relation to the *Liverpool Apollonius* (In *Mechanics Magazine*, Volume 58, 1853 p. 307) that the iterative method used by Whitley was one of the neatest and most effective methods of solving Colonel Titus' problem. The same appraisal was provided in 1865 in the *Educational Times* (*Educational Times p. 270, 1865 on Question 113 from the* Ladies' Diary*)*.

The method proposed by Whitley [28], pp. 127–128 in 1824 is the fixed-point iteration

$$
\begin{pmatrix} a_{k+1} \\ b_{k+1} \\ c_{k+1} \end{pmatrix} = \begin{pmatrix} \sqrt{l - b_k c_k} \\ \sqrt{m - a_k c_k} \\ \sqrt{n - a_k b_k} \end{pmatrix}, \quad k \ge 0,
$$

with the starting point given by *a*<sub>0</sub> = *b*<sub>0</sub> = *c*<sub>0</sub> = 3, where *l* = 16, *m* = 17, and *n* = 18. Table 1 compares the fixed-point iteration to Newton's method for *F*(*a*, *b*, *c*) ≡ (*a*<sup>2</sup> + *bc* − 16, *b*<sup>2</sup> + *ac* − 17, *c*<sup>2</sup> + *ab* − 18) = (0, 0, 0) with the starting point (*a*<sub>0</sub>, *b*<sub>0</sub>, *c*<sub>0</sub>) = (3, 3, 3).
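The comparison can be reproduced in outline with a short script (a sketch; the starting point follows the text, while the iteration counts are illustrative choices, and the behavior of the fixed-point scheme is best judged from the printed iterates):

```python
import numpy as np

l, m, n = 16.0, 17.0, 18.0

def F(p):
    a, b, c = p
    return np.array([a*a + b*c - l, b*b + a*c - m, c*c + a*b - n])

def J(p):  # Jacobian of F
    a, b, c = p
    return np.array([[2*a, c, b], [c, 2*b, a], [b, a, 2*c]])

# Whitley's fixed-point iteration, printing the first few iterates
p = np.array([3.0, 3.0, 3.0])
for k in range(6):
    a, b, c = p
    p = np.sqrt([l - b*c, m - a*c, n - a*b])
    print(k + 1, p)

# Newton's method from the same starting point
q = np.array([3.0, 3.0, 3.0])
for _ in range(20):
    q = q - np.linalg.solve(J(q), F(q))
print("Newton:", q)
```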


**Table 1.** Fixed-point iterations of Whitley and Newton's method.

Arthur Cayley (1821–1895) considered Colonel Titus' problem and suggested that if *a* = *x*/*z* and *b* = *y*/*z*, the equations become

$$\begin{aligned} x^2 + cyz - lz^2 &= 0\\ y^2 + czx - mz^2 &= 0\\ (c^2 - n)z^2 + xy &= 0 \end{aligned}$$

which are three homogeneous equations of second order in three unknowns [21]. However, Cayley did not solve the homogeneous equations. Schumacher solved this problem [33] in 1911.

## *2.6. Erroneous Solution*

The achievements of Adrien Quentin Buée (1748–1825), also called Abbé Buée, are important in relation to the conceptual development of the negative numbers and for the graphical representation of the complex numbers. In [34], he considers Colonel Titus' problem and makes an attempt to solve it using geometry and complex numbers. He claims that the solution must be *a* = 3.25*x*, *b* = 4.25*x*, and *c* = 5.25*x*, where *x* is the area of a circle in the geometric construction. However, he does not find any correct solution to the problem.

## **3. The Pell–Wallis Equation**

In the late 17th and early 18th centuries, there were numerous collections of algebraic equations [35]. Most practical algebraic equations were derived from geometric or trigonometric problems. An algebra book by John Ward from 1695 contains ten geometric problems with corresponding algebraic equations [36] and *The Young Mathematician's guide* from 1707 contains more than 20 practical problems from geometry and trigonometry, leading to algebraic equations [37]. However, the Pell–Wallis equation is derived from a different type of problem. The equation has been in use for 270 years, from the first time it appeared in print in 1685 to the most recent reference to the equation in a paper from 1955.

## *3.1. Digit-by-Digit Methods*

The root-finding method used by Wallis in 1685 was a digit-by-digit computation method [2]. The method was based on Viète's method but deviated from it in the divisor used to compute the next digit [38]. In this method, the roots are computed with a very high degree of accuracy. With Horner's technique for computing shifted polynomials, the digit-by-digit approach became more efficient using Holdred's and Horner's divisor [38]. The Pell–Wallis equation is used as an example in Holdred [39], pp. 55–56 and Nicholson [40], pp. 74–76, 80–82 in 1820 and [41], p. 19; de Morgan [42], pp. 50–51 in 1839; Perkins [43], pp. 356–358 and Young [44], pp. 213–221 in 1842; Lobatto [45], pp. 114–166 in 1845; Schnuse [46], pp. 212–216 in 1850; and Onley [47], pp. 240–245 in 1878—all using digit-by-digit computation.

## *3.2. Bracketing Methods*

In Viète's method, the first digit of a root must be specified. This normally leads to determining the intervals containing the roots. Intervals of the real roots may also provide a starting point for linear interpolation. Cardano's golden rule and regula falsi are methods in which a root is bracketed. Application of the Newton–Raphson method and Halley's method, which are iterative methods, requires a starting point sufficiently close to a root, and this point is often determined to lie in an interval including the root.

The Pell–Wallis equation has been used as an example in [48], p. 335; Kirkby [20], Part IV, pp. 32–34 in 1735; and Frend [49], pp. 109–111 and [50], pp. 298–299 in 1799 and 1800. A more systematic approach was employed with the application of Sturm's theorem in [51] from 1839 and Young [52], pp. 159–161 in 1841. This method was also used by Siebel in 1880 and 1887 [53], pp. 406–407, [54], pp. 337–338 in an ad hoc way.

## *3.3. Linear Interpolation*

The first use of the Pell–Wallis equation with interpolation occurred in 1732. De Graaf [55], pp. 33–35 considered (5), scaled the variable *x* ← *x*/10 in the interval from 0 to 3.6, and plotted the graph (*x*, *f*(*x*)), where *f*(*x*) is the left-hand side of (5). Based on the graph, an interval containing a solution was identified, and linear interpolation was then applied. This is a variation of the regula falsi [56] and Cardano's regula aurea [57], Chapter 30, methods, since both end points of the interval are changed in de Graaf's approach.

The method of John Davidson, a teacher in mathematics in Burntisland, involves a bracketing approach and linear interpolation [58], p. 114, [59], p. 38, as shown in his textbooks from 1814 and 1852. This is Cardano's golden rule [57], Chapter 30.
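The bracketing-and-interpolation idea can be sketched as a regula falsi iteration on the Pell–Wallis Equation (5); the bracket [0, 1] for the smallest root is an assumption taken from the sign change of the polynomial on that interval:

```python
def f(x):
    # Left-hand side of the Pell–Wallis equation (5)
    return x**4 - 80*x**3 + 1998*x**2 - 14937*x + 5000

# Regula falsi: maintain a bracket [lo, hi] with f(lo) and f(hi) of opposite signs
lo, hi = 0.0, 1.0
for _ in range(60):
    x = lo - f(lo) * (hi - lo) / (f(hi) - f(lo))  # secant point
    if f(lo) * f(x) < 0:
        hi = x
    else:
        lo = x

print(x)  # the smallest root, close to the approximation 0.3507 quoted below
```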

## *3.4. The Newton–Raphson Method*

Wallis published his algebra book in 1685 [2] and it contained the first printed version of Newton's method. When Raphson presented his method in 1690, it was regarded as a different method. It was not until the mid-18th century that it became clear that the two methods generate the same sequence of iterates [35]. From a computational point of view, the methods are very different. Raphson demonstrated his method on 32 examples and the Pell–Wallis equation was given as example 21 [60], Problem XXI. Kirkby [20], Part IV, pp. 35, 44–45 in 1735 used the Newton–Raphson method to find one of the roots of the Pell–Wallis equation.

In Volume III of *Scriptores logarithmici* from 1796, Francis Maseres used the Newton–Raphson method: first, an approximation 0.3507 to the smallest root is found using a series expansion, and then two iterations are performed [61], pp. 718–725. Maseres writes *". . . and this I take to be the very best method that can be employed to find the value of x to this degree of exactness".*

Lockhart [62] in 1839 argues that the numbers of digits required to compute an approximate solution using the Newton–Raphson method is not worse than Horner's digit-by-digit method, as presented by De Morgan [42] in 1839.
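Maseres' computation can be retraced in a few lines (a sketch; the starting value 0.3507 is the series-expansion approximation mentioned above):

```python
def f(x):
    return x**4 - 80*x**3 + 1998*x**2 - 14937*x + 5000

def fprime(x):
    return 4*x**3 - 240*x**2 + 3996*x - 14937

x = 0.3507          # series-expansion approximation to the smallest root
for _ in range(2):  # two Newton-Raphson iterations, as performed by Maseres
    x = x - f(x) / fprime(x)

print(x)
```

Two iterations already give the smallest root to essentially full working precision, which supports Maseres' enthusiasm for the method.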

## *3.5. Halley's Method*

Edmund Halley (1656–1742), in a paper from 1694, derived two methods, the rational and the irrational method [63]. Halley pointed out that the Pell–Wallis Equation (5) was solved by Wallis using the method of Viète and by Raphson using the Newton–Raphson method. Halley applied both methods to the Pell–Wallis equation. For the irrational method, two possible corrections can be used before the new iteration.

In 1710, Christian Wolff (1679–1754) provided a different derivation of Halley's irrational method and redid Halley's computation using the irrational method and the correction to find the largest root of (5) [64], pp. 192–194.

Philip Ronayne (1683–1755) applied Halley's rational and irrational methods. With the irrational method, he used the two corrections used by Halley and gave a derivation of the corrections, whereas Halley had just stated them [65], pp. 242–244.

One of the earliest professors of mathematics at an American college was Isaac Greenwood (1702–1745), and two notebooks from his students—Samuel Langdon (1723–1797), who graduated from Harvard in 1740, and James Diman (1707–1788), who graduated in 1730—have been preserved [66], ([67], pp. 3–17). A topic in the Diman notebook from 1730 is "Dr. Halley's theorems for solving equations of all sorts" and here we find (reproduced in [66], p. 64) three iterations with Halley's rational method on (5).
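Halley's rational method uses the update x ← x − 2ff′/(2f′² − ff″). A sketch applied to (5), with the starting value 35 chosen for illustration near the largest root:

```python
def f(x):  return x**4 - 80*x**3 + 1998*x**2 - 14937*x + 5000
def f1(x): return 4*x**3 - 240*x**2 + 3996*x - 14937   # f'
def f2(x): return 12*x**2 - 480*x + 3996               # f''

x = 35.0  # illustrative starting value near the largest root
for _ in range(10):
    x = x - 2*f(x)*f1(x) / (2*f1(x)**2 - f(x)*f2(x))

print(x)  # the largest root of (5), between 34 and 35
```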

## *3.6. Ferrari–Cardano Approach*

The linear shift *x* → *x* + 20 in the Pell–Wallis equation makes the *x*<sup>3</sup> term vanish and the depressed quartic equation is

$$x^4 - 402x^2 + 983x + 25460 = 0.\tag{14}$$

Taking two slightly different approaches, Francis Maseres first finds the depressed quartic (14) and then, with reference to Ferrari, finds the resolvent cubic

$$v^3 - 201v^2 - 25460v - \frac{967897}{8} = 0,\tag{15}$$

and, with reference to Descartes [68], p. 142, the resolvent cubic (in *e*<sup>2</sup>) is

$$e^6 - 804e^4 + 59764e^2 - 966289 = 0.\tag{16}$$

The four roots of the Pell–Wallis equation can then be found [68], pp. 134–182. Maseres points out that the use of linear interpolation and one iteration with Newton–Raphson requires fewer arithmetic operations than the Ferrari–Cardano approach ([68], p. 178).
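The shift and the Descartes resolvent can be checked numerically (a sketch; these are consistency checks, not the historical computations):

```python
import numpy as np

pell = [1, -80, 1998, -14937, 5000]   # the Pell-Wallis quartic (5)
depressed = [1, 0, -402, 983, 25460]  # Equation (14), obtained by the shift x -> x + 20

r_pell = np.sort(np.roots(pell).real)
r_dep = np.sort(np.roots(depressed).real)
print(r_pell - 20)  # matches the roots of the depressed quartic
print(r_dep)

# Descartes' resolvent cubic (16) in t = e^2: its three roots multiply to q^2 = 983^2
t = np.roots([1, -804, 59764, -966289])
print(np.prod(t).real)  # approximately 966289
```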

William Rutherford (1798–1871) found the depressed quartic (14) and then derived the resolvent cubic equation (in *u*<sup>2</sup>)

$$u^6 - \frac{402}{2}u^4 + \frac{59764}{16}u^2 - \frac{966289}{64} = 0.\tag{17}$$

From the resolvent cubic (17), using Horner's method, Rutherford found one root and then the four roots of the Pell–Wallis equation [69], pp. 17–18.

Orson Pratt (1811–1881) [70], pp. 130–131 used the depressed quartic (14) and derived the resolvent depressed cubic

$$y^3 - 1401372y - 633074427 = 0.$$

A root of the depressed cubic is found to eleven decimal places using a digit-by-digit approach with a modified divisor in Viète's method, and then the roots of the Pell–Wallis equation are given.

Christian Heinrich Schnuse (1800–1878) considered the Pell–Wallis equation and derived the depressed quartic (14) and the resolvent cubic (17). Using a digit-by-digit approach, he found the same root of (17) as Rutherford in 1849 [46], pp. 358–359.

## *3.7. Gräffe's Method*

D. Miguel Merino (1831–1905) translated and revised a work by Johann Franz Encke (1791–1865) [71] on the numerical solution of equations. Using Gräffe's method and one final Newton–Raphson iteration, the four roots are found [71], pp. 42–44. Gräffe's method generates a sequence of polynomials in a "root-squaring" process, and approximations to the roots can be computed from the coefficients of the generated polynomials. The method works well for the Pell–Wallis equation since the roots are real, positive, and separated. The method is suitable for computation by hand, whereas computer implementations usually exhibit overflow after only a few steps. After a few steps, the estimates of the roots are good and suitable for correction by means of Newton–Raphson iterations. The two smallest roots are correct to four decimal digits after four steps of Gräffe's method. Given that the two smallest roots have been accurately computed, the remaining two roots can be computed [72], pp. 74–75. Encke in 1839, Merino in 1879, and Rey Pastor [72] (1888–1962) in 1924 found it more convenient to work with the logarithms of the coefficients of the polynomials.
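A root-squaring step maps a polynomial to one whose roots are the squares of the original roots; exact integer arithmetic sidesteps the overflow mentioned above (a sketch; the step count of six is an illustrative choice):

```python
import numpy as np

def graeffe_step(c):
    # One root-squaring step: for p(x) = sum c[k] x^(n-k), return the coefficients
    # of (-1)^n p(sqrt(y)) p(-sqrt(y)), whose roots are the squares of p's roots
    n = len(c) - 1
    out = []
    for k in range(n + 1):
        s = c[k] * c[k]
        for j in range(1, min(k, n - k) + 1):
            s += 2 * (-1)**j * c[k - j] * c[k + j]
        out.append((-1)**k * s)
    return out

c = [1, -80, 1998, -14937, 5000]  # the Pell-Wallis equation (5)
m = 6                             # number of squaring steps (illustrative)
for _ in range(m):
    c = graeffe_step(c)

# Root magnitudes from ratios of successive coefficients
est = sorted(abs(c[k + 1] / c[k]) ** (1.0 / 2**m) for k in range(4))
ref = sorted(np.roots([1, -80, 1998, -14937, 5000]).real)
print(est)
print(ref)
```

Because the roots are real, positive, and separated, the estimates agree closely with a direct root computation after only a handful of steps.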

## *3.8. Miscellaneous Methods and Comments*


$$
x^4 - x^2 + \frac{983}{\sqrt{402^3}}x + \frac{25460}{402^2} = 0.\tag{18}
$$

By graphical inspection, the roots of (18) are located in intervals of length 0.01. For a point in the interval, a first correction is a Newton–Raphson iteration, followed by a correction based on the next term in the Taylor expansion. The four roots are computed using two or three corrections.

• Silvestre François Lacroix (1765–1843) [78], p. 261 discussed the Pell–Wallis equation as a problem of scaling the coefficients and found that two of the roots lie between 0 and 10 and between 10 and 20.


## *3.9. An Early Comparison of Four Algorithms on Three Examples*

One of the first comparisons of the use of several algorithms on different problems is found in [16]. The methods used were Newton–Raphson, Halley's two methods, and regula falsi or linear interpolation. The latter method is called *the differential method* in [16] or *the method of double position*. Maseres ([16], p. 109) provides a reference to *A Course of Mathematics in Two Volumes, Composed for the Use of the Royal Military Academy* by Charles Hutton for the equivalence between the differential method and the method of double position.

The three equations tested were *x*<sup>3</sup> − 17*x*<sup>2</sup> + 54*x* − 350 = 0, *x*<sup>4</sup> − 3*x*<sup>2</sup> + 75*x* − 10,000 = 0, and the Pell–Wallis equation −*x*<sup>4</sup> + 80*x*<sup>3</sup> − 1998*x*<sup>2</sup> + 14,937*x* − 5000 = 0. These three examples are from Halley [63].

## **4. Concluding Remarks**

We have shown that the three quadratic equations in three unknowns forming Colonel Titus' problem can be reduced to a single quartic equation using standard high school algebra. The different derivations of a quartic equation have been suggested by philomaths and mathematicians, school teachers of mathematics, and professors of mathematics over a period ranging from the mid-17th to the early 20th century. Systems of quadratic equations appear today in crypto-systems and robotics, and solutions can easily be obtained through the use of computer algebra systems such as Maple, Mathematica, or Wolfram. The modern theory related to solution methods, such as the use of a Gröbner basis, has not yet been explored in relation to Colonel Titus' problem.
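As an illustration of what such modern tools do (a sketch with SymPy, not a result from the literature surveyed here), a lexicographic Gröbner basis performs exactly this kind of elimination on the Titus system, producing a univariate polynomial in the last variable:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
system = [a**2 + b*c - 16, b**2 + a*c - 17, c**2 + a*b - 18]

# A lex Groebner basis triangularizes the system; its last element is univariate in c
G = sp.groebner(system, a, b, c, order='lex')
univariate = G.exprs[-1]
print(sp.degree(univariate, c))
```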

We have seen that the quartic equation, the Pell–Wallis Equation (5), derived from Colonel Titus' problem, has been used for more than 250 years as a test example to develop methods to solve algebraic equations, techniques to determine the number of roots, or intervals of the roots, as well as in numerous textbooks. As a well-known equation, it has been included in the early numerical comparisons of root finding methods.

The references in this paper do not form a complete list of the use of this equation and Colonel Titus' problem.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

## **References**


## *Article* **Interval Type-3 Fuzzy Control for Automated Tuning of Image Quality in Televisions**

**Oscar Castillo 1,\* , Juan R. Castro <sup>2</sup> and Patricia Melin <sup>1</sup>**


**\*** Correspondence: ocastillo@tectijuana.mx

**Abstract:** In this article, an intelligent system utilizing type-3 fuzzy logic for automated image quality tuning in televisions is presented. The tuning problem can be formulated as controlling the television imaging system to achieve the requirements of production quality. Previously, the tuning process has been carried out by experts, by manually adjusting the television imaging system on production lines to meet the quality control standards. In this approach, interval type-3 fuzzy logic is utilized with the goal of automating the tuning of televisions manufactured on production lines. An interval type-3 fuzzy approach for image tuning is proposed, so that the best image quality is obtained and, in this way, quality requirements are met. A system based on type-3 fuzzy control is implemented with good simulation results. The validation of the type-3 fuzzy approach is made by comparing the results with human experts on the process of electrical tuning of televisions. The key contribution is the utilization of type-3 fuzzy logic in the image-tuning application, which has not been reported previously in the literature.

**Keywords:** interval type-3 fuzzy theory; fuzzy control; manufacturing

**MSC:** 03B52; 03E72; 62P30

**Citation:** Castillo, O.; Castro, J.R.; Melin, P. Interval Type-3 Fuzzy Control for Automated Tuning of Image Quality in Televisions. *Axioms* **2022**, *11*, 276. https://doi.org/ 10.3390/axioms11060276

Academic Editor: Alexander Šostak

Received: 21 April 2022 Accepted: 6 June 2022 Published: 9 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

## **1. Introduction**

In this article, the application of interval type-3 fuzzy theory [1–6] for automating the tuning process is presented. The tuning process involves dynamically adjusting the image quality to achieve the best possible image in the end. A set of fuzzy rules that encapsulates the knowledge of experts in performing the tuning process has been designed. Based on these fuzzy rules, we propose the automation of television tuning. Interval type-3 fuzzy logic enables the handling of the decision-making uncertainty for this problem in a better way than other available alternatives described in the literature, such as type-1 [7–9], interval type-2, and general type-2 fuzzy logic [10–18]. Of course, there are successful applications of type-1 fuzzy control in the recent literature, such as the excellent works presented in [19–21], but the main goal of this article was exploring the utilization of type-3 in this particular application and its comparison with type-2 and type-1.

The key issue that we are dealing with in this work is achieving a way to reproduce images in the best fashion in televisions. In the production of televisions, we usually find a section on the manufacturing line with the responsibility of adjusting the imaging system. Traditionally, an expert adjusts the imaging system using a remote controller, based on voltage and current values. Here, we are dealing with a system based on type-3 fuzzy logic for controlling the image tuning of televisions. The interval type-3 system has a rule base formed from expert knowledge about the tuning of televisions. The main reason behind the utilization of interval type-3 fuzzy logic is to better model the uncertainty in the decision-making process [6,22]. We need to consider the voltage, current intensity, time, and quality as fuzzy variables in the fuzzy rules [23] and define the membership functions (MFs) for these variables that reflect real data and the knowledge of experts.

Several important related works have demonstrated the efficiency of interval type-3 fuzzy logic systems (IT3FLSs) when compared to type-1 fuzzy logic systems (T1FLSs), interval type-2 fuzzy logic systems (IT2FLSs), and generalized type-2 fuzzy logic systems (GT2FLSs); some of these works are highlighted in the following: in [24] a subsethood for type-n fuzzy sets is presented by Rickard et al.; in [25] a related approach to the design of interval type-3 TS systems is presented by Singh et al.; in [3] Qasem et al. present a type-3 fuzzy logic system that was optimized using a Kalman filter with adaptive fuzzy kernel size; in [26] Wang et al. present a non-singleton type-3 fuzzy approach for flowmeter fault detection for the gas industry; in [27] Alattas et al. present a new data-driven control system for gyroscopes using type-3 fuzzy systems; in [1] Cao et al. analyze a deep-learned recurrent type-3 fuzzy system with application for renewable energy prediction; in [28] Tian et al. design a deep-learned type-3 fuzzy system and describe its application in modeling problems; in [5] Mohammadzadeh et al. describe an interval type-3 fuzzy system and a new online fractional-order learning algorithm; and in [29] Ma et al. use an optimal type-3 fuzzy system for solving singular equations.

The field of control is an area in which there is a wide and deep number of problems where IT3FLSs may prove to have good performance. The following works focus their studies on showing that an IT3FLS is an excellent tool in control, based on their results. A stabilization of deep type-3 fuzzy control is presented by Gheisarnejad et al. in [30]; an interval type-3 control for solar systems is developed by Liu et al. in [6]; a type-3 controller for gyroscopes is studied by Vafaie et al. in [31]; interval type-3 control for navigation of autonomous vehicles is presented by Tian et al. in [32]; a fractional-order type-3 fuzzy control is implemented by Mohammadzadeh et al. in [33]; an interesting model-predictive type-3 controller for power converters is presented by Gheisarnejad et al. in [34]; a predictive type-3 control for multi-agents is presented by Taghieh et al. in [35]; an event-triggered type-3 controller for multi-agent systems is presented by Yan et al. in [36]; and a type-3 fuzzy voltage management is applied in battery systems by Nabipour et al. in [37]. As can be noted from the discussion of previous works on type-3 fuzzy logic, the particular application that is being considered in this article has not been tackled before with interval type-3 fuzzy models, and this was part of the motivation for carrying out this work. In addition, from a practical point of view, we are presenting a working prototype to the industrial workers in a manufacturing plant in Tijuana, Mexico (as they provided the experts to give us their empirical knowledge on tuning the imaging system). Additionally, on the theoretical side, we were able to extend concepts from type-1 and type-2 to the level of type-3 [38], which could be useful for other problems.
In summary, the objective of this research work was to extend the theory and methodology for designing type-2 fuzzy to interval type-3 fuzzy, and also to test this theory and methodology with a challenging application that allows us to make a comparative study of type-3 versus type-2 and type-1 in tuning the imaging systems of televisions.

The key contribution is the utilization of interval type-3 fuzzy theory for achieving an efficient tuning during the production of televisions. This has not been previously reported in the literature, which is evidence of the innovative nature of this research work. In addition, we show that interval type-3 outperforms type-2 and type-1 fuzzy logic in handling the uncertainty in the decision-making process involved in the evaluation of image quality. There is also innovation on the application side of this work. It is worth noting that the utilization of type-3 fuzzy logic in the image-tuning application has not been described previously in the literature, which indicates the novelty of the study. There are only applications of type-2 and type-1 to manufacturing problems reported at this time [23]. In this sense, the approach presented here could be generalized to other problems related to the manufacturing of similar products, such as sound speakers, sound systems, and others. These problems also involve the tuning of images or sound in a similar way, and the approach proposed here could be adapted to solve them.

The rest of the paper is organized as follows: Section 2 highlights the concepts of interval type-3 fuzzy theory. Section 3 outlines the basic terminology involved in Mamdani type-3 fuzzy systems. Then, Section 4 describes how to use type-3 fuzzy techniques for automating the tuning of televisions and illustrates their validity with simulations. At the end, Section 5 outlines the conclusions.

## **2. Interval Type-3 Fuzzy Theory**

We begin by formulating type-3 terminology.

**Definition 1.** *A type-3 fuzzy set (T3 FS) [2,4,5,38], denoted by A*(3)*, is characterized by a membership function (MF) mapping the Cartesian product X* × [0, 1] × [0, 1] *into* [0, 1]*, where X is the universe of the primary variable, x, of A*(3)*.*

In this case, $u$ is the membership of $x$, and $v$ is the membership of $u$. The MF of $A^{(3)}$ is formulated by $\mu\_{A^{(3)}}(\mathbf{x}, u, v)$ (or $\mu\_{A^{(3)}}$ for short), and it is labeled a type-3 MF (T3 MF). In other words,

$$
\mu\_{A^{(3)}}: X \times [0,1] \times [0,1] \to [0,1]
$$

$$A^{(3)} = \left\{ (\mathbf{x}, u(\mathbf{x}), v(\mathbf{x}, u), \mu\_{A^{(3)}}(\mathbf{x}, u, v)) \mid \mathbf{x} \in X, \, u \in U \subseteq [0, 1], \, v \in V \subseteq [0, 1] \right\} \tag{1}$$

where $U$ is the universe for the secondary variable $u$, and $V$ is the universe for the tertiary variable $v$. A T3 FS, $A^{(3)}$, can be formulated as:

$$A^{(3)} = \int\_{\mathbf{x} \in X} \int\_{u \in [0,1]} \int\_{v \in [0,1]} \mu\_{A^{(3)}}(\mathbf{x}, u, v) / (\mathbf{x}, u, v) \tag{2}$$

$$A^{(3)} = \int\_{\mathbf{x} \in X} \left[ \int\_{\boldsymbol{\mu} \in [0,1]} \left[ \int\_{\boldsymbol{\nu} \in [0,1]} \mu\_{A^{(3)}}(\mathbf{x}, \boldsymbol{\mu}, \boldsymbol{\nu}) / \boldsymbol{\nu} \right] / \boldsymbol{\mu} \right] / \mathbf{x} \tag{3}$$

where $\int$ denotes the union over all admissible values of $x$, $u$, and $v$.

Equation (3) is formulated as a T3 FS MF mapping with the expressions:

$$A^{(3)} = \int\_{\mathbf{x} \in X} \mu\_{A\_{\mathbf{x}}^{(3)}}(u, v) / \mathbf{x} \tag{4}$$

$$\mu\_{A\_{\mathbf{x}}^{(3)}}(\boldsymbol{u}, \boldsymbol{v}) = \int\_{\boldsymbol{u} \in [0, 1]} \mu\_{A\_{(\mathbf{x}, \boldsymbol{u})}^{(3)}}(\boldsymbol{v}) / \boldsymbol{u} \tag{5}$$

$$
\mu\_{A^{(3)}\_{(\mathbf{x},u)}}(v) = \int\_{v \in [0,1]} \mu\_{A^{(3)}}(\mathbf{x}, u, v) / v \tag{6}
$$

where $\mu\_{A^{(3)}}(\mathbf{x}, u, v)$ is the primary MF, $\mu\_{A\_{\mathbf{x}}^{(3)}}(u, v)$ is the secondary MF, and $\mu\_{A^{(3)}\_{(\mathbf{x},u)}}(v)$ is the tertiary MF of the T3 FS.

If $\mu\_{A^{(3)}}(\mathbf{x}, u, v) = 1$ for all $\mathbf{x} \in X$, $u \in U$, $v \in V$, the T3 FS $A^{(3)}$ is simplified to an interval type-3 fuzzy set (IT3 FS), with the notation $\mathbb{A}$, postulated by expression (7):

$$\mathbb{A} = \int\_{\mathbf{x} \in X} \left[ \int\_{u \in [0, 1]} \left[ \int\_{v \in [\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u), \, \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)]} 1/v \right] / u \right] / \mathbf{x} \tag{7}$$

where

$$\mu\_{\mathbb{A}(\mathbf{x},u)}(v) = \int\_{v \in [\underline{\mu}\_{\mathbb{A}}(\mathbf{x},u), \, \overline{\mu}\_{\mathbb{A}}(\mathbf{x},u)]} 1/v \tag{8}$$

$$\mu\_{\mathbb{A}(\mathbf{x})}(u, v) = \int\_{u \in [0, 1]} \left[ \int\_{v \in [\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u), \, \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)]} 1/v \right] / u \tag{9}$$

$$\mathbb{A} = \int\_{\mathbf{x} \in X} \mu\_{\mathbb{A}\_{(\mathbf{x})}}(\mu, \mathbf{v}) / \mathbf{x} \tag{10}$$

Assuming that $v \in [\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u), \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)]$ and that the lower and upper MFs $\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$, $\overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$ are general type-2 MFs (T2 MFs) on the plane $(\mathbf{x}, u)$, Equation (4) can be simplified to an interval type-3 MF (IT3 MF), $\tilde{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \in [\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u), \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)]$, defined by Equation (11):

*Axioms* **2022**, *11*, x FOR PEER REVIEW 4 of 19

$$\mathbb{A} = \int\_{\mathbf{x} \in X} \int\_{\mathbf{u} \in [0,1]} \tilde{\mu}\_{\mathbf{A}}(\mathbf{x}, \mathbf{u}) / (\mathbf{x}, \mathbf{u}) \tag{11}$$

where the lower T2 MF $\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$ is contained in the upper T2 MF $\overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$, that is, $\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \subseteq \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$, so that $\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \le \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$. As a consequence, an IT3 FS is expressed by two T2 FSs: one inferior, $\underline{A}$, with T2 MF $\underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$, and another superior, $\overline{A}$, with T2 MF $\overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$, expressed by Equations (12) and (13) (see Figure 1):

$$\underline{\mathbf{A}} = \int\_{\mathbf{x} \in X} \int\_{u \in [0,1]} \underline{\mu}\_{\mathbf{A}}(\mathbf{x}, u) / (\mathbf{x}, u) = \int\_{\mathbf{x} \in X} \left[ \int\_{u \in [0,1]} \underline{f}\_{\mathbf{x}}(u) / u \right] / \mathbf{x} \tag{12}$$

$$\overline{\mathbf{A}} = \int\_{\mathbf{x} \in X} \int\_{\mathbf{u} \in [0,1]} \overline{\mu}\_{\mathbf{A}}(\mathbf{x}, \mathbf{u}) / (\mathbf{x}, \mathbf{u}) = \int\_{\mathbf{x} \in X} \left[ \int\_{\mathbf{u} \in [0,1]} \overline{f}\_{\mathbf{x}}(\mathbf{u}) / \mathbf{u} \right] / \mathbf{x} \tag{13}$$

where the secondary MFs of $\underline{\mathbf{A}}$ and $\overline{\mathbf{A}}$ are T1 MFs of T1 FSs, expressed by Equations (14) and (15):

$$
\mu\_{\mathbf{A}(x)}(u) = \int\_{u \in I\_x} \underline{f}\_{\mathbf{x}}(u) / u \tag{14}
$$

$$\mu\_{\overline{\mathbf{A}}(\mathbf{x})}(u) = \int\_{u \in I\_{\mathbf{x}}} \overline{f}\_{\mathbf{x}}(u) / u \tag{15}$$

**Figure 1.** IT3 FS with IT3 MF $\tilde{\mu}(\mathbf{x}, u)$, where $\underline{\mu}(\mathbf{x}, u)$ is the LMF and $\overline{\mu}(\mathbf{x}, u)$ is the UMF.

In this case, we utilize interval type-3 MFs that are scaled Gaussians in the primary and secondary variables, respectively. This function can be represented as $\tilde{\mu}\_{\mathbb{A}}(\mathbf{x}, u)$, with a Gaussian footprint of uncertainty $FOU(\mathbb{A})$, characterized by the parameters $[\sigma, m]$ (UpperParameters) for the upper membership function (UMF) and, for the lower membership function (LMF), the parameters $\lambda$ (LowerScale) and $\ell$ (LowerLag), to form the $DOU = [\underline{u}(\mathbf{x}), \overline{u}(\mathbf{x})]$. The vertical cuts $\mathbb{A}(\mathbf{x})(u)$ characterize the $FOU(\mathbb{A})$ and are IT2 FSs with Gaussian IT2 MFs, $\mu\_{\mathbb{A}(\mathbf{x})}(u)$, with parameters $[\sigma\_u, m(\mathbf{x})]$ for the UMF and, for the LMF, $\lambda$ (LowerScale) and $\ell$ (LowerLag). The IT3 MF, $\tilde{\mu}\_{\mathbb{A}}(\mathbf{x}, u) = \text{ScaleGaussScaleGaussIT3MF}(\mathbf{x}, [\sigma, m], \lambda, \ell)$, is described by the following equations:

$$\overline{u}(\mathbf{x}) = \exp\left[ -\frac{1}{2} \left( \frac{\mathbf{x} - m}{\sigma} \right)^2 \right] \tag{16}$$

$$\underline{u}(\mathbf{x}) = \lambda \cdot \exp\left[ -\frac{1}{2} \left( \frac{\mathbf{x} - m}{\sigma^\*} \right)^2 \right] \tag{17}$$

where $\sigma^{*} = \sigma \sqrt{\ln(\ell)/\ln(\varepsilon)}$ and $\varepsilon$ is the machine epsilon. If $\ell = 0$, then $\sigma^{*} = \sigma$. In this case, $\overline{u}(\mathbf{x})$ and $\underline{u}(\mathbf{x})$ are the upper and lower limits of the domain of uncertainty (DOU). The range, $\delta(\mathbf{x})$, and radius, $\sigma\_{\mathbf{x}}$, of the FOU are:

$$\delta(\mathbf{x}) = \overline{u}(\mathbf{x}) - \underline{u}(\mathbf{x}) \tag{18}$$

$$
\sigma\_{\mathbf{x}} = \frac{\delta(\mathbf{x})}{2\sqrt{3}} + \varepsilon \tag{19}
$$

The apex or core, $m(\mathbf{x})$, of the IT3 MF $\tilde{\mu}(\mathbf{x}, u)$ is defined by the expression:

$$m(\mathbf{x}) = \exp\left[ -\frac{1}{2} \left( \frac{\mathbf{x} - m}{\rho} \right)^2 \right] \tag{20}$$

where $\rho = (\sigma + \sigma^{*})/2$. Then, the vertical cuts with IT2 MF, $\mu\_{\mathbb{A}(\mathbf{x})}(u) = [\underline{\mu}\_{\mathbb{A}(\mathbf{x})}(u), \overline{\mu}\_{\mathbb{A}(\mathbf{x})}(u)]$, are described by the equations:

$$\overline{\mu}\_{\mathbb{A}(\mathbf{x})}(u) = \exp\left[ -\frac{1}{2} \left( \frac{u - u(\mathbf{x})}{\sigma\_{\mathbf{x}}} \right)^2 \right] \tag{21}$$

$$\underline{\mu}\_{\mathbb{A}(\mathbf{x})}(u) = \lambda \cdot \exp\left[ -\frac{1}{2} \left( \frac{u - u(\mathbf{x})}{\sigma\_{\mathbf{x}}^{\*}} \right)^{2} \right] \tag{22}$$

where $\sigma\_{\mathbf{x}}^{*} = \sigma\_{\mathbf{x}} \sqrt{\ln(\ell)/\ln(\varepsilon)}$. If $\ell = 0$, then $\sigma\_{\mathbf{x}}^{*} = \sigma\_{\mathbf{x}}$. Then, $\overline{\mu}\_{\mathbb{A}(\mathbf{x})}(u)$ and $\underline{\mu}\_{\mathbb{A}(\mathbf{x})}(u)$ are the UMF and LMF of the IT2 FSs of the vertical cuts of the secondary IT2 MF of the IT3 FS.
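Equations (16)–(22) can be sketched numerically. The following is a minimal NumPy sketch (the function name follows the ScaleGaussScaleGaussIT3MF construction in the text; treating $\lambda$ and $\ell$ as scalars with $0 < \ell < 1$, and handling $\ell = 0$ as $\sigma^{*} = \sigma$, are assumptions about the intended parameter ranges):

```python
import numpy as np

EPS = np.finfo(float).eps  # machine epsilon, the epsilon of Eqs. (17) and (19)

def scale_gauss_it3mf(x, u, sigma, m, lam, lag):
    """Evaluate the scaled-Gaussian IT3 MF of Eqs. (16)-(22) at the point (x, u).

    Returns the (lower, upper) bounds of the vertical-cut IT2 MF; lam and lag
    are the LowerScale and LowerLag parameters, assumed to satisfy 0 < lag < 1
    (lag = 0 is handled as sigma* = sigma, per the text).
    """
    def lag_sigma(s):
        # sigma* = sigma * sqrt(ln(lag) / ln(eps)); identity when lag == 0
        return s * np.sqrt(np.log(lag) / np.log(EPS)) if lag > 0 else s

    # Eqs. (16)-(17): upper and lower DOU limits in the primary variable
    u_upper = np.exp(-0.5 * ((x - m) / sigma) ** 2)
    u_lower = lam * np.exp(-0.5 * ((x - m) / lag_sigma(sigma)) ** 2)
    # Eqs. (18)-(19): range and radius of the FOU
    delta = u_upper - u_lower
    sigma_x = delta / (2.0 * np.sqrt(3.0)) + EPS
    # Eqs. (21)-(22): Gaussian vertical-cut IT2 MF in the secondary variable
    mu_upper = np.exp(-0.5 * ((u - u_upper) / sigma_x) ** 2)
    mu_lower = lam * np.exp(-0.5 * ((u - u_lower) / lag_sigma(sigma_x)) ** 2)
    return mu_lower, mu_upper
```

At $x = m$, Equation (16) gives $\overline{u}(m) = 1$, so the UMF of the vertical cut peaks at $u = 1$.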

## **3. Mamdani Type-3 Fuzzy Models**

The IT3 FLS structure contains the same main components (fuzzifier, rule base, inference machine and, in the final stage, an output processing unit) as its analogous T2 FLS. While in the case of T2 FLSs the final stage consists of a process of type reduction to a T1 FS plus defuzzification, in the case of an IT3 FLS the output process consists of type reduction to an IT2 FS plus defuzzification. The fuzzy operators of the inference machine of an IT3 FLS and the type-reduction methods are equivalent to those of a T2 FLS, except that the inputs and outputs of an IT3 FLS are IT3 FSs. The interval type-3 fuzzy operators of union ($\cup$) and intersection ($\cap$) are related to the *join* ($\sqcup$) and *meet* ($\sqcap$) operators, respectively. The Cartesian product ($\times$) and the implication ($\to$) are intersection operations. We first define the type-3 fuzzy operators as follows: consider two IT3 FSs, $\mathbb{A}$ and $\mathbb{B}$, that are expressed utilizing the representation of horizontal cuts, as in [38]:

$$\mathbb{A} = \int\_{\mathbf{x} \in X} \mu\_{\mathbb{A}(\mathbf{x})}(u) / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \mathbb{A}\_{\alpha}(\mathbf{x}) \right] / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \left[ \underline{\mathbf{A}}\_{\alpha}(\mathbf{x}), \overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \right] \right] / \mathbf{x} \tag{23}$$

where

$$\underline{\mathbf{A}}\_{\alpha}(\mathbf{x}) = [\underline{a}\_{\alpha}(\mathbf{x}), \underline{b}\_{\alpha}(\mathbf{x})]$$

$$\underline{a}\_{\alpha}(\mathbf{x}) = \inf \left\{ u \mid u \in [0, 1], \, \underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\underline{b}\_{\alpha}(\mathbf{x}) = \sup \left\{ u \mid u \in [0, 1], \, \underline{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) = \left[ \overline{a}\_{\alpha}(\mathbf{x}), \overline{b}\_{\alpha}(\mathbf{x}) \right]$$

$$\overline{a}\_{\alpha}(\mathbf{x}) = \inf \left\{ u \mid u \in [0, 1], \, \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\overline{b}\_{\alpha}(\mathbf{x}) = \sup \left\{ u \mid u \in [0, 1], \, \overline{\mu}\_{\mathbb{A}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\mathbb{B} = \int\_{\mathbf{x} \in X} \mu\_{\mathbb{B}(\mathbf{x})}(u) / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \mathbb{B}\_{\alpha}(\mathbf{x}) \right] / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \left[ \underline{\mathbf{B}}\_{\alpha}(\mathbf{x}), \overline{\mathbf{B}}\_{\alpha}(\mathbf{x}) \right] \right] / \mathbf{x} \tag{24}$$

where

$$\underline{\mathbf{B}}\_{\alpha}(\mathbf{x}) = [\underline{c}\_{\alpha}(\mathbf{x}), \underline{d}\_{\alpha}(\mathbf{x})]$$

$$\underline{c}\_{\alpha}(\mathbf{x}) = \inf \left\{ u \mid u \in [0, 1], \, \underline{\mu}\_{\mathbb{B}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\underline{d}\_{\alpha}(\mathbf{x}) = \sup \left\{ u \mid u \in [0, 1], \, \underline{\mu}\_{\mathbb{B}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\overline{\mathbf{B}}\_{\alpha}(\mathbf{x}) = \left[ \overline{c}\_{\alpha}(\mathbf{x}), \overline{d}\_{\alpha}(\mathbf{x}) \right]$$

$$\overline{c}\_{\alpha}(\mathbf{x}) = \inf \left\{ u \mid u \in [0, 1], \, \overline{\mu}\_{\mathbb{B}}(\mathbf{x}, u) \ge \alpha \right\}$$

$$\overline{d}\_{\alpha}(\mathbf{x}) = \sup \left\{ u \mid u \in [0, 1], \, \overline{\mu}\_{\mathbb{B}}(\mathbf{x}, u) \ge \alpha \right\}$$
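The inf/sup endpoints above can be approximated numerically from a sampled MF. The following is a small grid-based sketch (the helper name is illustrative, not from the paper's toolbox; it assumes the MF is unimodal on the grid so that the cut is an interval):

```python
import numpy as np

def alpha_cut(u_grid, mu, alpha):
    """Numeric [inf, sup] endpoints of {u : mu(u) >= alpha}, i.e., the
    interval endpoints appearing in the horizontal cuts of Eqs. (23)-(24).

    u_grid is a sampled universe and mu the MF evaluated on it; assumes mu
    is unimodal so the cut is an interval. Returns None for an empty cut.
    """
    idx = np.where(mu >= alpha)[0]
    if idx.size == 0:
        return None  # no point reaches the alpha level
    return u_grid[idx[0]], u_grid[idx[-1]]
```

For a Gaussian MF centered at 0.5 with spread 0.1, the 0.5-cut is the symmetric interval around the center where the MF stays above 0.5.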

## **Union of IT3 FSs**

The union of two IT3 FSs, $\mathbb{A} \cup \mathbb{B}$, is calculated using horizontal cuts as:

$$\mathbb{A} \cup \mathbb{B} = \int\_{\mathbf{x} \in X} \mu\_{(\mathbb{A} \cup \mathbb{B})\_{\mathbf{x}}}(u) / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / (\mathbb{A}\_{\alpha} \cup \mathbb{B}\_{\alpha}) \right] / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \left[ \underline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cup \underline{\mathbf{B}}\_{\alpha}(\mathbf{x}), \overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cup \overline{\mathbf{B}}\_{\alpha}(\mathbf{x}) \right] \right] / \mathbf{x} \tag{25}$$

where

$$\underline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cup \underline{\mathbf{B}}\_{\alpha}(\mathbf{x}) = [\underline{a}\_{\alpha}(\mathbf{x}) \vee \underline{c}\_{\alpha}(\mathbf{x}), \, \underline{b}\_{\alpha}(\mathbf{x}) \vee \underline{d}\_{\alpha}(\mathbf{x})]$$

and

$$\overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cup \overline{\mathbf{B}}\_{\alpha}(\mathbf{x}) = \left[ \overline{a}\_{\alpha}(\mathbf{x}) \vee \overline{c}\_{\alpha}(\mathbf{x}), \, \overline{b}\_{\alpha}(\mathbf{x}) \vee \overline{d}\_{\alpha}(\mathbf{x}) \right]$$

## **Intersection of IT3 FSs**

The intersection of two IT3 FSs, $\mathbb{A} \cap \mathbb{B}$, is calculated using horizontal cuts as:

$$\mathbb{A} \cap \mathbb{B} = \int\_{\mathbf{x} \in X} \mu\_{(\mathbb{A} \cap \mathbb{B})\_{\mathbf{x}}}(u) / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / (\mathbb{A}\_{\alpha} \cap \mathbb{B}\_{\alpha}) \right] / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \left[ \underline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cap \underline{\mathbf{B}}\_{\alpha}(\mathbf{x}), \overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cap \overline{\mathbf{B}}\_{\alpha}(\mathbf{x}) \right] \right] / \mathbf{x} \tag{26}$$

where

$$\underline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cap \underline{\mathbf{B}}\_{\alpha}(\mathbf{x}) = [\underline{a}\_{\alpha}(\mathbf{x}) \wedge \underline{c}\_{\alpha}(\mathbf{x}), \, \underline{b}\_{\alpha}(\mathbf{x}) \wedge \underline{d}\_{\alpha}(\mathbf{x})]$$

$$\overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \cap \overline{\mathbf{B}}\_{\alpha}(\mathbf{x}) = \left[ \overline{a}\_{\alpha}(\mathbf{x}) \wedge \overline{c}\_{\alpha}(\mathbf{x}), \, \overline{b}\_{\alpha}(\mathbf{x}) \wedge \overline{d}\_{\alpha}(\mathbf{x}) \right]$$

## **Complement of IT3 FSs**

The complement of an IT3 FS, $\mathbb{A}$, is calculated using horizontal cuts as:

$$\overline{\mathbb{A}} = \int\_{\mathbf{x} \in X} \mu\_{(\overline{\mathbb{A}})\_{\mathbf{x}}}(u) / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \neg \mathbb{A}\_{\alpha}(\mathbf{x}) \right] / \mathbf{x} = \int\_{\mathbf{x} \in X} \left[ \sup\_{\alpha \in [0,1]} \alpha / \left[ \neg \underline{\mathbf{A}}\_{\alpha}(\mathbf{x}), \neg \overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) \right] \right] / \mathbf{x} \tag{27}$$

where

$$\neg \underline{\mathbf{A}}\_{\alpha}(\mathbf{x}) = [1 - \underline{b}\_{\alpha}(\mathbf{x}), \, 1 - \underline{a}\_{\alpha}(\mathbf{x})]$$

$$\neg \overline{\mathbf{A}}\_{\alpha}(\mathbf{x}) = \left[ 1 - \overline{b}\_{\alpha}(\mathbf{x}), \, 1 - \overline{a}\_{\alpha}(\mathbf{x}) \right]$$
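At a fixed $\mathbf{x}$ and $\alpha$-level, the three horizontal-cut operations reduce to elementwise maximum ($\vee$), minimum ($\wedge$), and reflection of the four interval endpoints. A minimal sketch, representing the $\alpha$-cut as a 4-tuple of endpoints (an illustrative encoding, not the paper's toolbox):

```python
# An alpha-cut of an IT3 FS at a fixed x, per Eqs. (23)-(24), is a pair of
# intervals; here encoded as the 4-tuple (a, b, a_bar, b_bar) of endpoints.

def it3_union(A, B):
    """Union, Eq. (25): join = elementwise maximum of the endpoints."""
    a, b, ab, bb = A
    c, d, cb, db = B
    return (max(a, c), max(b, d), max(ab, cb), max(bb, db))

def it3_intersection(A, B):
    """Intersection, Eq. (26): meet = elementwise minimum of the endpoints."""
    a, b, ab, bb = A
    c, d, cb, db = B
    return (min(a, c), min(b, d), min(ab, cb), min(bb, db))

def it3_complement(A):
    """Complement, Eq. (27): reflect and swap the endpoints of each interval."""
    a, b, ab, bb = A
    return (1 - b, 1 - a, 1 - bb, 1 - ab)
```

For example, with cuts $(0.2, 0.5, 0.1, 0.7)$ and $(0.3, 0.4, 0.2, 0.6)$, the union takes the larger endpoint in each position.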

**Definition 2.** *The structure of the Mamdani if–then rule is:*

$$R^{k}: \text{IF } x\_1 \text{ is } \mathbb{F}\_1^k \text{ and } \dots \text{ and } x\_i \text{ is } \mathbb{F}\_i^k \text{ and } \dots \text{ and } x\_n \text{ is } \mathbb{F}\_n^k \text{ THEN } y\_1 \text{ is } \mathbb{G}\_1^k, \dots, y\_j \text{ is } \mathbb{G}\_j^k, \dots, y\_m \text{ is } \mathbb{G}\_m^k$$

*where $i = 1, \dots, n$ (number of inputs), $j = 1, \dots, m$ (number of outputs), and $k = 1, \dots, r$ (number of rules).*

To initiate the explanation, we represent the antecedents of the rules with a fuzzy relation $\mathbb{A}^k = \mathbb{F}\_1^k \times \dots \times \mathbb{F}\_n^k$, utilizing the Cartesian product, $\times$, with interval type-3 fuzzy sets (IT3 FSs), $\mathbb{F}\_i^k$; the consequent of the $j$-th output is also an IT3 FS, $\mathbb{G}\_j^k$. Then, the fuzzy relation of the rule $\mathbb{R}\_j^k$ can be formulated as:

$$\mathbb{R}\_j^k = \mathbb{A}^k \to \mathbb{G}\_j^k \tag{28}$$

The n-dimensional input is given by a type-2 fuzzy relation, $\mathbb{A}\_{\mathbb{X}'}$, with T2 MF as:

$$\mathbb{A}\_{\mathbb{X}'} = \mathbb{X}\_1 \times \dots \times \mathbb{X}\_n \tag{29}$$

Each relation of the rule $\mathbb{R}\_j^k$ establishes a fuzzy set of the consequent of the rule, $\mathbb{B}\_j^k = \mathbb{A}\_{\mathbb{X}'} \circ \mathbb{R}\_j^k$, in $Y$ such that:

$$\mathbb{B}\_{j}^{k} = \left[ \mathbb{X}\_{1} \circ \left( \mathbb{F}\_{1}^{k} \times \mathbb{G}\_{j}^{k} \right) \right] \times \dots \times \left[ \mathbb{X}\_{n} \circ \left( \mathbb{F}\_{n}^{k} \times \mathbb{G}\_{j}^{k} \right) \right] = \times\_{i=1}^{n} \left[ \mathbb{X}\_{i} \circ \left( \mathbb{F}\_{i}^{k} \times \mathbb{G}\_{j}^{k} \right) \right] \tag{30}$$

where the *level of activation of the rule* is an IT3 FS, $\mathbb{B}\_j^k$. By aggregating all the sets $\mathbb{B}\_j^k$ that represent the levels of activation of the rules, we obtain the aggregated set $\mathbb{B}\_j$ for the outputs $j = 1, \dots, m$:

$$\mathbb{B}\_{j} = \mathbb{B}\_{j}^{1} \cup \dots \cup \mathbb{B}\_{j}^{k} \cup \dots \cup \mathbb{B}\_{j}^{r} = \cup\_{k=1}^{r} \mathbb{B}\_{j}^{k} \tag{31}$$

The abstract model of $\hat{y}\_j = f(\mathbf{x})$ is an IT3 fuzzy model ($y\_j$ *is* $\mathbb{B}\_j$), where the sets $\mathbb{B}\_j^k$ are submodels of $\mathbb{B}\_j$.

Equation (32) is obtained from the MF of the IT3 fuzzy relation, $\mu\_{\mathbb{B}\_j^k}(y\_j|\mathbf{x}')$, and is:

$$\mu\_{\mathbb{B}\_{j}^{k}}(y\_{j}|\mathbf{x}') = \mu\_{\mathbb{A}\_{\mathbb{X}'} \circ \mathbb{R}\_{j}^{k}}(y\_{j}|\mathbf{x}') = \sup\_{\mathbf{x} \in \mathbf{X}} \left[ \mu\_{\mathbb{A}\_{\mathbb{X}'}}(\mathbf{x}) \sqcap \mu\_{\mathbb{A}^{k} \to \mathbb{G}\_{j}^{k}}(\mathbf{x}, y\_{j}) \right], \quad y\_{j} \in Y \tag{32}$$

where $\mu\_{\mathbb{B}\_{j}^{k}}(y\_{j}|\mathbf{x}')$ is the input–output relation between the fuzzy set that fires the inference of a rule (reasoning) and the output fuzzy set. The composition ($\circ$) is a nonlinear mapping from the input $\mathbf{x}'$ to an IT3 FS with MF $\mu\_{\mathbb{B}\_{j}^{k}}(y\_{j}|\mathbf{x}')$, $y\_{j} \in Y$, of the output $y\_{j}$. The reasoning is a mechanism that transforms fuzzy sets into fuzzy sets by the composition operator (basically a max–min operator). Simplifying Equation (32), we obtain:

$$
\mu\_{\mathbb{B}\_{j}^{k}}(y\_{j}|\mathbf{x}') = \widetilde{\Phi}^{k}(\mathbf{x}') \sqcap \mu\_{\mathbb{G}\_{j}^{k}}(y\_{j}) \tag{33}
$$

where

$$\widetilde{\Phi}^{k}(\mathbf{x}') = \sqcap\_{i=1}^{n} \left[ \sup\_{\mathbf{x}\_{i} \in X\_{i}} \mu\_{\mathbb{Q}\_{i}^{k}}(\mathbf{x}\_{i}|\mathbf{x}\_{i}') \right] \tag{34}$$

$$
\mu\_{\mathbb{Q}\_i^k}(\mathbf{x}\_i|\mathbf{x}\_i') = \mu\_{\mathbb{X}\_i}(\mathbf{x}\_i|\mathbf{x}\_i') \sqcap \mu\_{\mathbb{F}\_i^k}(\mathbf{x}\_i) \tag{35}
$$

Maximizing the function $\mu\_{\mathbb{Q}\_{i}^{k}}(\mathbf{x}\_{i}|\mathbf{x}\_{i}')$, we obtain the supremum value at $\mathbf{x}\_{i} = \mathbf{x}\_{k,i}^{\max}$:

$$\mathbf{x}\_{k,i}^{\max} \equiv \underset{\mathbf{x}\_{i}}{\operatorname{argmax}} \left\{ \sup\_{\mathbf{x}\_{i} \in X\_{i}} \mu\_{\mathbb{Q}\_{i}^{k}}(\mathbf{x}\_{i}|\mathbf{x}\_{i}') \right\} \tag{36}$$

The *firing strength*, $\widetilde{\Phi}^{k}(\mathbf{x}')$, is the membership given by the t-norm operation, $\sqcap$, of all the supremum membership values $\mu\_{\mathbb{Q}\_{i}^{k}}(\mathbf{x}\_{k,i}^{\max}|\mathbf{x}\_{i}')$ of the intersection of each input $\mu\_{\mathbb{X}\_{i}}(\mathbf{x}\_{i}|\mathbf{x}\_{i}')$ with its antecedent $\mu\_{\mathbb{F}\_{i}^{k}}(\mathbf{x}\_{i})$ that contributes to the rule level of activation, i.e.:

$$\widetilde{\Phi}^{k}(\mathbf{x'}) = \sqcap\_{i=1}^{n} \mu\_{\mathbb{Q}\_{i}^{k}} \left( \mathfrak{x}\_{k,i}^{\max} \middle| \mathfrak{x}\_{i}' \right) \tag{37}$$

The *level of activation of the rule* is the membership $\mu\_{\mathbb{B}\_{j}^{k}}(y\_{j}|\mathbf{x}')$ resulting from the operation $\sqcap$ of the firing strength $\widetilde{\Phi}^{k}(\mathbf{x}')$ and the membership of the consequent of the rule, $\mu\_{\mathbb{G}\_{j}^{k}}(y\_{j})$; that is, the composition operation ($\circ$) of the facts and the knowledge-base rules that describe the relational function, $\mathbb{B}\_{j}^{k} = \mathbb{A}\_{\mathbb{X}'} \circ \mathbb{R}\_{j}^{k}$.

The MF of the IT3 fuzzy relation, $\mu\_{\mathbb{B}\_{j}}(y\_{j}|\mathbf{x}')$, is the aggregation of all the rules for each output $j = 1, \dots, m$, using the *join* ($\sqcup$) operator (fuzzy union). The combination of the rules using the join operator for calculating the aggregation of the values of $\mu\_{\mathbb{B}\_{j}^{k}}(y\_{j}|\mathbf{x}')$ is described by the equation:

$$\mu\_{\mathbb{B}\_{\boldsymbol{j}}}(y\_{\boldsymbol{j}}|\mathbf{x}') = \mu\_{\mathbb{B}\_{\boldsymbol{j}}^{1}}(y\_{\boldsymbol{j}}|\mathbf{x}') \sqcup \dots \sqcup \mu\_{\mathbb{B}\_{\boldsymbol{j}}^{k}}(y\_{\boldsymbol{j}}|\mathbf{x}') \sqcup \dots \sqcup \mu\_{\mathbb{B}\_{\boldsymbol{j}}^{r}}(y\_{\boldsymbol{j}}|\mathbf{x}') = \sqcup\_{k=1}^{r} \mu\_{\mathbb{B}\_{\boldsymbol{j}}^{k}}(y\_{\boldsymbol{j}}|\mathbf{x}') \tag{38}$$

or

$$\mu\_{\mathbb{B}\_{j}}(y\_{j}|\mathbf{x}') = \sqcup\_{k=1}^{r} \left[ \widetilde{\Phi}^{k}(\mathbf{x}') \sqcap \mu\_{\mathbb{G}\_{j}^{k}}(y\_{j}) \right] \tag{39}$$

For applications that require a numeric output, $\mu\_{\mathbb{B}\_{j}}(y\_{j}|\mathbf{x}')$ is reduced to an IT2 FS or interval, and this is then reduced to a numeric value $\hat{y}\_{j}$. The type reduction methods are the same as the ones used in T2 FS theory:

$$\hat{y}\_{j} = \text{typeReduction}\left(y\_{j}, \mu\_{\mathbb{B}\_{j}}(y\_{j}|\mathbf{x}')\right) \tag{40}$$
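The pipeline of Equations (33) and (37)–(40) can be sketched for a single-output system under simplifying assumptions: singleton inputs, min as the meet ($\sqcap$), max as the join ($\sqcup$), and the type reduction of Equation (40) collapsed to the average of the centroids of the lower and upper aggregated MFs. All names are illustrative, not the toolbox's API:

```python
import numpy as np

def mamdani_it3_infer(rules, x, y_grid):
    """Schematic single-output Mamdani inference, Eqs. (33) and (37)-(40).

    rules: list of (antecedent_mfs, consequent_mf), where each antecedent MF
    maps a crisp x_i to a (lower, upper) membership pair and the consequent
    maps the y grid to (lower, upper) arrays. Singleton fuzzification is
    assumed, so Eq. (37) reduces to a min over the antecedent memberships.
    """
    agg_lo = np.zeros_like(y_grid)
    agg_up = np.zeros_like(y_grid)
    for antecedents, consequent in rules:
        # Eq. (37): firing strength = meet (min) over the inputs
        lows, ups = zip(*[mf(xi) for mf, xi in zip(antecedents, x)])
        phi_lo, phi_up = min(lows), min(ups)
        # Eq. (33): rule activation = meet of firing strength and consequent
        c_lo, c_up = consequent(y_grid)
        # Eq. (39): join (max) aggregates the activated rules
        agg_lo = np.maximum(agg_lo, np.minimum(phi_lo, c_lo))
        agg_up = np.maximum(agg_up, np.minimum(phi_up, c_up))
    # Eq. (40), collapsed: centroids of the reduced interval, then their mean
    y_lo = np.sum(y_grid * agg_lo) / max(np.sum(agg_lo), 1e-12)
    y_up = np.sum(y_grid * agg_up) / max(np.sum(agg_up), 1e-12)
    return 0.5 * (y_lo + y_up)
```

With one rule whose consequent is a symmetric MF centered at 5 on a symmetric grid, the defuzzified output is 5, as expected from the centroid.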

In Figure 2, we illustrate the inference in a type-3 system for a particular value of x = 4, and in Figure 3 the type reduction process.

**Figure 2.** Illustration of the inference process for a value of x = 4.


**Figure 3.** Illustration of the type reduction process for a value of x = 4.

In Figure 4, we illustrate the inference in a type-3 fuzzy system for another value of x = 6, and in Figure 5, the corresponding type reduction process.

**Figure 4.** Illustration of the inference process for a value of x = 6.


**Figure 5.** Illustration of the type reduction process for a value of x = 6.

For more details on the type reduction process for type-3 fuzzy sets, the reader can check a more detailed (step by step) explanation in a recent reference work on type-3 fuzzy systems [38]. Of course, this type reduction is similar to the process that is performed for type-2 fuzzy sets [10,11].

The structure of an interval type-3 system is almost the same as for type-2 and type-1, and it is composed of a fuzzifier, rules, inference, type reduction and defuzzifier [38]. In Figure 6 we show the structure of an interval IT3 system [4,6].


**Figure 6.** Structure of an interval type-3 system.

In the next section, we explain the usefulness of interval type-3 fuzzy as we illustrate the design method and also show the improvements in results compared to type-1 and type-2 systems.


## **4. Simulation Results**


We developed our own toolbox for type-3 fuzzy systems to implement the fuzzy rules for automated tuning of televisions. A fuzzy system with three inputs and one output was utilized. The inputs are the voltage, current intensity, and time, and the output is the image quality. We use Mamdani inference and Gaussian MFs. The fuzzy system was designed by reducing the knowledge of human experts to a system of 14 rules. The block diagram of the system is illustrated in Figure 7.

We show in Table 1 the rule base for automated tuning of televisions that encapsulates the knowledge of the experts in image tuning. In Table 2, the parameters of the MFs utilized in the inputs and output are presented. The parameters shown in Table 2 were determined by empirical knowledge of experts combined with a trial-and-error approach, but in the future, these could be optimized to improve results even more. At this stage of the research, we considered three membership functions for several reasons: (1) according to the experts, this was reasonable, and the resulting fuzzy model was also understandable to them; (2) previous implementations of fuzzy tuning (type-2 and type-1) of imaging systems were carried out with three membership functions [23], so for comparative purposes we also needed three membership functions; and (3) in future work, we plan to consider changing and optimizing the number of membership functions, so that we can investigate this issue more precisely.

**Figure 7.** Structure of interval type-3 fuzzy for TV image quality tuning.




**Table 2.** Parameter values for the Gaussian MFs used in the linguistic values.

A sample of the simulation results for 11 cases is shown in Table 3, where we can see that the results from the fuzzy system are close to the real values provided by the experts (in this case two experts were consulted because of the availability of experts at the real plant). In Figures 8–10, the MFs of the inputs (voltage, current intensity and time) are presented. We depict in Figure 11 the MFs for the output of the system. The design of the membership functions was based on the original definitions that were utilized for type-1 and type-2 in [23]. Finally, we illustrate, in Figures 12 and 13, two views of the surface of the fuzzy model.

**Table 3.** Simulation results for a sample of cases.

| Voltage | Current | Time | Image Quality with T1 Fuzzy (%) | Image Quality with IT2 Fuzzy (%) | Image Quality with GT2 Fuzzy (%) | Image Quality with IT3 Fuzzy (%) | Expert Evaluation (%) |
|------|------|------|---------|---------|---------|---------|-------|
| 9.03 | 7.47 | 2.53 | 37.2215 | 38.5492 | 39.1266 | 39.6137 | 40.50 |
| 5.01 | 5.02 | 3.10 | 84.3312 | 85.4573 | 86.8751 | 87.9177 | 88.25 |
| 4.91 | 5.10 | 5.10 | 50.7735 | 51.4486 | 51.9168 | 52.2079 | 53.00 |
| 8.75 | 4.95 | 5.03 | 55.4532 | 56.6396 | 57.2788 | 58.5392 | 57.75 |
| 5.20 | 4.85 | 8.70 | 48.1782 | 48.3319 | 48.8429 | 49.1119 | 50.75 |
| 2.25 | 6.33 | 7.20 | 24.9891 | 24.5638 | 24.0734 | 21.9642 | 23.25 |
| 5.10 | 4.99 | 5.20 | 48.7865 | 49.5543 | 50.7942 | 51.8969 | 51.50 |
| 6.20 | 3.17 | 5.15 | 48.6734 | 49.5112 | 50.7333 | 51.8500 | 52.25 |
| 5.31 | 5.21 | 4.80 | 53.1853 | 53.7693 | 54.2964 | 55.8204 | 56.50 |
| 3.99 | 6.25 | 5.10 | 49.0231 | 49.9732 | 51.2754 | 52.1439 | 53.25 |

**Figure 8.** Input MFs of the electric voltage variable.

**Figure 9.** Input MFs of the electric current intensity variable.

**Figure 10.** Input MFs of the time linguistic variable.


**Figure 11.** Output MFs of the image quality variable.

**Figure 12.** 3-D view of the model surface with respect to current intensity and voltage.

**Figure 13.** 3-D view of the model surface with respect to current intensity and time.

From Table 3, we can notice that the image quality achieved with type-3 fuzzy is closer to the expert evaluation when compared to general type-2 (GT2), interval type-2 (IT2), and type-1 (T1).

Figures 12 and 13 provide a graphical representation of the fuzzy model. Two figures are needed because we have, in total, four variables, so we need to show two different views of the model. In Figure 12, we can appreciate the influence of current intensity and voltage on the image quality, and this can be viewed as showing all possible image quality outputs for different combinations of the input values. In Figure 13, we show, in a similar way, the influence of time and current intensity on the image quality.
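The closeness of each method to the expert evaluation in Table 3 can be quantified with the mean absolute error over the ten recovered rows. A short script (assuming the quality columns of Table 3 are ordered T1, IT2, GT2, IT3, followed by the expert column, an assumption about the table layout):

```python
# Rows of Table 3: (voltage, current, time, T1, IT2, GT2, IT3, expert); the
# column order T1 -> IT2 -> GT2 -> IT3 is an assumption about the layout.
ROWS = [
    (9.03, 7.47, 2.53, 37.2215, 38.5492, 39.1266, 39.6137, 40.50),
    (5.01, 5.02, 3.10, 84.3312, 85.4573, 86.8751, 87.9177, 88.25),
    (4.91, 5.10, 5.10, 50.7735, 51.4486, 51.9168, 52.2079, 53.00),
    (8.75, 4.95, 5.03, 55.4532, 56.6396, 57.2788, 58.5392, 57.75),
    (5.20, 4.85, 8.70, 48.1782, 48.3319, 48.8429, 49.1119, 50.75),
    (2.25, 6.33, 7.20, 24.9891, 24.5638, 24.0734, 21.9642, 23.25),
    (5.10, 4.99, 5.20, 48.7865, 49.5543, 50.7942, 51.8969, 51.50),
    (6.20, 3.17, 5.15, 48.6734, 49.5112, 50.7333, 51.8500, 52.25),
    (5.31, 5.21, 4.80, 53.1853, 53.7693, 54.2964, 55.8204, 56.50),
    (3.99, 6.25, 5.10, 49.0231, 49.9732, 51.2754, 52.1439, 53.25),
]

def mae(col):
    """Mean absolute error of one method's column against the expert column."""
    return sum(abs(r[col] - r[7]) for r in ROWS) / len(ROWS)

# Columns 3..6 hold the T1, IT2, GT2, and IT3 results, respectively
MAES = {name: mae(i) for i, name in enumerate(["T1", "IT2", "GT2", "IT3"], start=3)}
```

Under this column-order assumption, the error decreases monotonically from T1 to IT3, matching the observation that the type-3 results are closest to the expert evaluation.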


## **5. Conclusions**

We have described in this article an intelligent system utilizing type-3 fuzzy logic for the tuning of the imaging system of televisions. We have shown in Section 2 the concepts of interval type-3. Then, in Section 3 the basic terminology involved in Mamdani type-3 fuzzy systems was presented. Section 4 then described how to use type-3 fuzzy techniques for automating the tuning of televisions and illustrated its validity with simulations. The tuning problem can be defined as controlling the imaging system of the television to meet quality standards. Previously, this process has been carried out by experts, by manually adjusting the imaging system of televisions on production lines. In the approach proposed here, we utilize an interval type-3 fuzzy system to automate this tuning process. The fuzzy system was designed to control the tuning, so that the imaging system meets quality requirements. An intelligent system was implemented based on type-3 fuzzy control and produced good simulation results. The validation of the type-3 fuzzy approach was made by comparing its results with those of human experts in the process of electrical tuning of televisions. In most of the tests, the interval type-3 fuzzy system provided results closer to the human experts, when compared to type-2 and type-1 fuzzy approaches. We believe that these results are due to the fact that type-3 is able to handle in a better way the uncertainty involved in the tuning process of the imaging system. The main contribution of the article has been the application of the new concepts of interval type-3 to an interesting problem with relevance to the television manufacturing process. The main advantage of the proposal is the relative simplicity of building the fuzzy model based on expert knowledge, though this could be a disadvantage if there were a lack of experts concerning other problems or case studies. 
In future works, we plan to optimize the MFs of the system by using metaheuristic optimization techniques, in this way improving the results even more. In addition, the proposed type-3 decision-making approach could be applied and tested in similar quality control problems [23] and classification systems [39,40].

**Author Contributions:** Conceptualization, creation of main idea, writing—review and editing, O.C.; formal analysis, J.R.C.; methodology and validation, P.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We would like to thank TecNM and Conacyt for their support during the realization of this research.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**


## *Article* **Analysis of Interval-Valued Intuitionistic Fuzzy Aczel–Alsina Geometric Aggregation Operators and Their Application to Multiple Attribute Decision-Making**

**Tapan Senapati <sup>1,2</sup>, Radko Mesiar <sup>3,4</sup>, Vladimir Simic <sup>5</sup>, Aiyared Iampan <sup>6,\*</sup>, Ronnason Chinram <sup>7</sup> and Rifaqat Ali <sup>8</sup>**


**Abstract:** When dealing with the haziness that is intrinsic in decision analysis-driven decision-making procedures, interval-valued intuitionistic fuzzy sets (IVIFSs) can be quite effective. Our approach to solving multiple attribute decision-making (MADM) problems, where all of the evidence provided by the decision-makers is presented as interval-valued intuitionistic fuzzy (IVIF) decision matrices, in which every component is characterized by an IVIF number (IVIFN), is based on Aczel–Alsina operational processes. We begin by introducing novel IVIFN operations, including the Aczel–Alsina sum, product, scalar multiplication, and exponential. We may then create IVIF aggregation operators, such as the IVIF Aczel–Alsina weighted geometric operator, the IVIF Aczel–Alsina ordered weighted geometric operator, and the IVIF Aczel–Alsina hybrid geometric operator, among others. We present a MADM approach that relies on the IVIF aggregation operators that have been developed. A case study is used to demonstrate the practical applicability of the strategies proposed in this paper. By contrasting the newly developed technique with existing techniques, we demonstrate the advantages of the proposed approach. A key result of this work is the discovery that some of the current IVIF aggregation operators are special cases of the operators reported in this article.

**Keywords:** MADM; Aczel–Alsina operations; IVIFNs; IVIF Aczel–Alsina geometric aggregation operators

**MSC:** 90B50; 47S40

## **1. Introduction**

The intuitionistic fuzzy set [1] was extended by Atanassov and Gargov to the IVIFS [2], which is represented by membership and non-membership functions whose values are intervals rather than real numbers. Due to the advantages of IVIFS, several researchers have attempted to incorporate IVIF information generated by different kinds of operators to generate judgments [3,4]. For instance, Xu [5] constructed several aggregation operators

**Citation:** Senapati, T.; Mesiar, R.; Simic, V.; Iampan, A.; Chinram, R.; Ali, R. Analysis of Interval-Valued Intuitionistic Fuzzy Aczel–Alsina Geometric Aggregation Operators and Their Application to Multiple Attribute Decision-Making. *Axioms* **2022**, *11*, 258. https://doi.org/ 10.3390/axioms11060258

Academic Editor: Oscar Castillo

Received: 30 January 2022 Accepted: 24 May 2022 Published: 29 May 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

for IVIFNs, including the IVIF weighted averaging (IVIFWA) operator and the IVIF hybrid averaging (IVIFHA) operator. Liu [6] presented two IVIF operators based on the power average and Heronian mean operators and then integrated IVIF information using them. Zhao and Xu [7] provided several novel IVIF aggregation operations. Yu et al. [8] created the IVIF prioritized weighted averaging/geometric operator. Chen and Han [9,10] provided a MADM approach that was built on the multiplication of IVIF values, in addition to the LP and NLP methodologies. The induced IVIF weighted and ordered weighted geometric operators were invented by Wei et al. [11]. Li [12] suggested a MADM technique using IVIFSs based on TOPSIS-based nonlinear programming (NLP). Xu and Gou [13] discussed the IVIF aggregation operator in detail. Chen et al. [14] developed a variety of MADM techniques based on IVIFSs. The induced IVIF hybrid Choquet integral operators developed by Meng et al. [15] were used in decision-making issues. Wang and Liu [16,17] recommended the IVIF Einstein weighted averaging and geometric operators. IVIF MADM has already been widely applied in a variety of fields, including hotel location selection [18], air quality evaluation [19], solid waste management [20], sustainable supplier selection [21], potential partner selection [22], and weapon-group target analysis [23].

Schweizer and Sklar pioneered the idea of triangular norms in their theory of probabilistic metric spaces [24]. As it turns out, *t*-norms and their associated *t*-conorms are vital operations in fuzzy set theory and other areas of soft computing, for instance, the Lukasiewicz *t*-norm and *t*-conorm [25], the Hamacher *t*-norm and *t*-conorm [26], the Einstein *t*-norm and *t*-conorm [17], the general continuous Archimedean *t*-norm and *t*-conorm [27], etc. In recent years, Klement et al. [28] have conducted a thorough examination of the characteristics and conceptual implications of triangular norms.

## *1.1. Motivation of the Study*

Generalizing the ideas of Menger [29] from 1942, Schweizer and Sklar [24] proposed in 1960 the concept of triangular norms, or *t*-norms. While their methodology was developed within the context of probabilistic metric spaces for the purpose of generalizing the triangular inequality of metrics, within a few years *t*-norms were adopted in several other branches, most notably fuzzy set theory (there, *t*-norms generate the fuzzy conjunctions, generalizing the original proposal of Zadeh [30], who considered the min operation when introducing the intersection of fuzzy sets). Already in the framework of probabilistic metric spaces, but later also to cover the fuzzy disjunctions, the operations dual to *t*-norms, namely *t*-conorms, were considered [31]. Later, *t*-norms and *t*-conorms were considered in several generalizations of fuzzy set theory, including intuitionistic fuzzy set theory [1], interval-valued fuzzy set theory and type-2 fuzzy theory [32], IVIFS theory [2], etc. For more details concerning *t*-norms and *t*-conorms we highly recommend the monograph [28] by Klement et al.

Let *F* : [0, 1]<sup>2</sup> → [0, 1] be a commutative, associative and monotone function. If *e* = 1 is its neutral element, i.e., *F*(*x*, 1) = *F*(1, *x*) = *x* for all *x* ∈ [0, 1], then *F* is called a triangular norm (*t*-norm, in short). Similarly, if *e* = 0 is its neutral element, i.e., *F*(*x*, 0) = *F*(0, *x*) = *x* for all *x* ∈ [0, 1], then *F* is called a triangular conorm (*t*-conorm, in short).

To have a clear distinction between *t*-norms and *t*-conorms in notation, we will use the traditional notation *T* for *t*-norms and *S* for *t*-conorms. Note that these two classes are dual, i.e., for any *t*-norm *T*, the function *S* : [0, 1]<sup>2</sup> → [0, 1] given by *S*(*x*, *y*) = 1 − *T*(1 − *x*, 1 − *y*) is a *t*-conorm (also called the *t*-conorm dual to *T*), and for any *t*-conorm *S*, the function *T* : [0, 1]<sup>2</sup> → [0, 1] determined by *T*(*x*, *y*) = 1 − *S*(1 − *x*, 1 − *y*) is a *t*-norm (the *t*-norm dual to *S*).
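As a quick numerical illustration of this duality (our own sketch in Python; the function names are ours, not from the paper), taking *T* to be the product *t*-norm recovers the probabilistic sum as its dual *t*-conorm:

```python
# Duality between t-norms and t-conorms: S(x, y) = 1 - T(1 - x, 1 - y).

def t_product(x: float, y: float) -> float:
    """Product t-norm T_P: the standard product of reals on [0, 1]."""
    return x * y

def dual_conorm(t_norm):
    """Return the t-conorm dual to a given t-norm."""
    return lambda x, y: 1.0 - t_norm(1.0 - x, 1.0 - y)

s_prob = dual_conorm(t_product)

# The dual of the product t-norm is the probabilistic sum S_P(x, y) = x + y - xy.
x, y = 0.3, 0.8
assert abs(s_prob(x, y) - (x + y - x * y)) < 1e-12

# Neutral elements: T(x, 1) = x for a t-norm, S(x, 0) = x for a t-conorm.
assert t_product(0.7, 1.0) == 0.7 and s_prob(0.7, 0.0) == 0.7
```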

It is not difficult to see that the strongest (greatest) *t*-norm is *T*<sub>*M*</sub>(*x*, *y*) = min(*x*, *y*), following the notation from [28], while the smallest *t*-norm is the drastic product *T*<sub>*D*</sub>, which vanishes on [0, 1[<sup>2</sup> (clearly, if max(*x*, *y*) = 1, then for any *t*-norm we have *T*(*x*, *y*) = min(*x*, *y*)). Two prototypical *t*-norms playing an important role both in theory and applications are the product *t*-norm *T*<sub>*P*</sub> (the standard product of reals) and the Lukasiewicz *t*-norm *T*<sub>*L*</sub> given by *T*<sub>*L*</sub>(*x*, *y*) = max(0, *x* + *y* − 1). One of the most distinguished subclasses of the class of all *t*-norms is formed by the continuous Archimedean *t*-norms, i.e., *t*-norms generated by a continuous additive generator. Their importance is clearly visible when *n*-ary extensions of *t*-norms are considered; for deeper results and more details, see [28]. In our paper, we will deal with some specially generated *t*-norms, namely with strict *t*-norms, which are isomorphic to the product *t*-norm and are generated by decreasing bijective additive generators *t* : [0, 1] → [0, ∞]. In such a case, *T*(*x*, *y*) = *t*<sup>−1</sup>(*t*(*x*) + *t*(*y*)), and, considering the *n*-ary extension (which is unique due to the associativity of *t*-norms), *T*(*x*<sub>1</sub>, . . . , *x*<sub>*n*</sub>) = *t*<sup>−1</sup>(∑<sub>*i*=1</sub><sup>*n*</sup> *t*(*x*<sub>*i*</sub>)). Recall that both extremal *t*-norms *T*<sub>*M*</sub> and *T*<sub>*D*</sub>, as well as the product *t*-norm *T*<sub>*P*</sub>, commute with the power functions, i.e., for any *λ* > 0, they satisfy the equality *T*(*x*<sup>*λ*</sup>, *y*<sup>*λ*</sup>) = *T*(*x*, *y*)<sup>*λ*</sup>. In the early 1980s, Aczel and Alsina [33] characterized all other *t*-norm solutions of the above functional equation, showing that these are just the strict *t*-norms generated by the additive generators *t*<sub>ð</sub>, ð ∈ ]0, ∞[, given by *t*<sub>ð</sub>(*x*) = (−log *x*)<sup>ð</sup>. The related *t*-norms are denoted by *T*<sub>*A*</sub><sup>ð</sup>, called (strict) Aczel–Alsina *t*-norms, and given by

$$T_A^{\eth}(x, y) = \begin{cases} T_D(x, y), & \text{if } \eth = 0,\\ \min(x, y), & \text{if } \eth = \infty,\\ e^{-((-\log x)^{\eth} + (-\log y)^{\eth})^{1/\eth}}, & \text{otherwise.} \end{cases}$$

Observe that, including the extremal *t*-norms, we obtain the Aczel–Alsina family (*T*<sub>*A*</sub><sup>ð</sup>)<sub>ð ∈ [0, ∞]</sub> of *t*-norms, which is strictly increasing and continuous in the parameter ð.
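The generator construction above translates directly into code. The sketch below (our own illustration, not code from the paper) builds the strict Aczel–Alsina *t*-norm from its additive generator and checks that ð = 1 recovers the product *t*-norm and that the family increases in ð:

```python
import math

def t_aczel_alsina(x: float, y: float, d: float) -> float:
    """Strict Aczel-Alsina t-norm built from its additive generator
    t_d(x) = (-log x)**d, via T(x, y) = t^{-1}(t(x) + t(y)); 0 < d < inf,
    with x, y in the open interval (0, 1)."""
    g = (-math.log(x)) ** d + (-math.log(y)) ** d
    return math.exp(-g ** (1.0 / d))

# For d = 1 the generator is -log x, which generates the product t-norm.
assert abs(t_aczel_alsina(0.4, 0.7, 1.0) - 0.4 * 0.7) < 1e-12

# The family is increasing in the parameter d (pointwise, for fixed arguments).
vals = [t_aczel_alsina(0.4, 0.7, d) for d in (0.5, 1.0, 2.0, 8.0)]
assert vals == sorted(vals)

# As d grows, T_A^d approaches the strongest t-norm min(x, y).
assert abs(t_aczel_alsina(0.4, 0.7, 50.0) - 0.4) < 1e-3
```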

Due to the duality, similar notes and examples can be introduced for *t*-conorms. There, the smallest *t*-conorm is *S*<sub>*M*</sub> = max (dual to *T*<sub>*M*</sub>), and the greatest *t*-conorm is the drastic sum *S*<sub>*D*</sub> (dual to *T*<sub>*D*</sub>), which is constant 1 on ]0, 1]<sup>2</sup>. For any *t*-conorm *S*, if min(*x*, *y*) = 0, then *S*(*x*, *y*) = max(*x*, *y*). The *t*-conorm *S*<sub>*L*</sub> dual to *T*<sub>*L*</sub> (the Lukasiewicz *t*-conorm, also called the truncated sum) is given by *S*<sub>*L*</sub>(*x*, *y*) = min(1, *x* + *y*), and the *t*-conorm *S*<sub>*P*</sub> dual to the product *T*<sub>*P*</sub> (called the probabilistic sum) is given by *S*<sub>*P*</sub>(*x*, *y*) = *x* + *y* − *xy*. Continuous Archimedean *t*-conorms are also generated by additive generators (which are increasing), and if *S* is dual to a continuous Archimedean *t*-norm *T* generated by an additive generator *t*, then *S* is generated by the additive generator *s* given by *s*(*x*) = *t*(1 − *x*). In particular, the *t*-conorms *S*<sub>*A*</sub><sup>ð</sup> dual to the strict Aczel–Alsina *t*-norms *T*<sub>*A*</sub><sup>ð</sup> are generated by the additive generators *s*<sub>ð</sub>(*x*) = (−log(1 − *x*))<sup>ð</sup>, and they are given by

$$S_A^{\eth}(x, y) = \begin{cases} S_D(x, y), & \text{if } \eth = 0,\\ \max(x, y), & \text{if } \eth = \infty,\\ 1 - e^{-((-\log(1-x))^{\eth} + (-\log(1-y))^{\eth})^{1/\eth}}, & \text{otherwise.} \end{cases}$$

Observe that, including the extremal *t*-conorms, we obtain the Aczel–Alsina family (*S*<sub>*A*</sub><sup>ð</sup>)<sub>ð ∈ [0, ∞]</sub> of *t*-conorms, which is strictly decreasing and continuous in the parameter ð.
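The dual pair can also be checked numerically. The following sketch (again our own illustration, with names of our choosing) verifies that *S*<sub>*A*</sub><sup>ð</sup> is exactly the dual of *T*<sub>*A*</sub><sup>ð</sup> and that it tends to max(*x*, *y*) as ð grows:

```python
import math

def t_aa(x: float, y: float, d: float) -> float:
    """Strict Aczel-Alsina t-norm T_A^d for 0 < d < infinity."""
    g = (-math.log(x)) ** d + (-math.log(y)) ** d
    return math.exp(-g ** (1.0 / d))

def s_aa(x: float, y: float, d: float) -> float:
    """Aczel-Alsina t-conorm S_A^d, generated by s_d(x) = (-log(1 - x))**d."""
    g = (-math.log(1.0 - x)) ** d + (-math.log(1.0 - y)) ** d
    return 1.0 - math.exp(-g ** (1.0 / d))

# S_A^d is the dual of T_A^d: S(x, y) = 1 - T(1 - x, 1 - y).
for d in (0.5, 1.0, 3.0):
    assert abs(s_aa(0.3, 0.8, d) - (1.0 - t_aa(0.7, 0.2, d))) < 1e-12

# As d grows, S_A^d approaches the smallest t-conorm max(x, y).
assert abs(s_aa(0.3, 0.8, 50.0) - 0.8) < 1e-3
```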

Aczel and Alsina [33] came up with two new operations, called the Aczel–Alsina *t*-norm and the Aczel–Alsina *t*-conorm, which respond well to changes in their parameter. Wang et al. [34] used the Aczel–Alsina triangular norm (AA *t*-norm) to build a score-level convolutional neural network that increases the distance between impostors and legitimate users at the same time. Senapati et al. [35–38] developed Aczel–Alsina operations for intuitionistic fuzzy, IVIF, hesitant fuzzy, and picture fuzzy aggregation operators, and they used them to solve MADM problems. The primary objective of this article is to introduce several geometric aggregation operators based on IVIF data, referred to as IVIF Aczel–Alsina geometric aggregation operators, for successfully guiding decisions made using decision-making techniques. Building on the unique approaches previously developed in this domain, we have examined every possibility in presenting our proposed approach, so that it improves on past attempts to address the system assessment problem.

## *1.2. Structure of This Study*

The framework of the study is presented in Figure 1. The following details are presented: The next section discusses several basic concepts relating to IVIFSs. Section 3 discusses the Aczel–Alsina operational laws governing the IVIFNs. Section 4 discusses the IVIF Aczel–Alsina weighted geometric (IVIFAAWG) operator, the IVIF Aczel–Alsina ordered weighted geometric (IVIFAAOWG) operator, and the IVIF Aczel–Alsina hybrid geometric (IVIFAAHG) operator, as well as a few particular instances. In Section 5, we demonstrate how to use the IVIFAAWG operator to construct particular approaches for resolving multiple attribute decision-making challenges in which support and understanding are represented as IVIF values. Section 6 shows the overall methodology with a genuine scenario. Section 7 investigates the effect of a parameter on the outcome of decision-making. Section 8 provides a comparative investigation of alternative important strategies to substantiate the suggested technique's sufficiency. Section 9 concludes this analysis and identifies potential future concerns.

**Figure 1.** The framework of the study.

## **2. Preliminaries**

This section will summarize some major themes that will be discussed throughout the remainder of this work.

**Definition 1** ([2])**.** *Assuming F is a recognized universe of discourse, an IVIFS E*˜ *in F is an expression given by*

$$\tilde{E} = \{\langle f, \tilde{\beta}_{E}(f), \tilde{\delta}_{E}(f)\rangle : f \in F\}\tag{1}$$

*where β*˜<sub>*E*</sub>(*f*) : *F* → *D*[0, 1]*,* ˜*δ*<sub>*E*</sub>(*f*) : *F* → *D*[0, 1]*, and D*[0, 1] *is the set of all subintervals of* [0, 1]*. The intervals β*˜<sub>*E*</sub>(*f*) *and* ˜*δ*<sub>*E*</sub>(*f*) *denote the intervals of the degree of membership and the degree of non-membership of the element f in the set E*˜*, where β*˜<sub>*E*</sub>(*f*) = [*β*<sub>*E*</sub><sup>*L*</sup>(*f*), *β*<sub>*E*</sub><sup>*U*</sup>(*f*)] *and* ˜*δ*<sub>*E*</sub>(*f*) = [*δ*<sub>*E*</sub><sup>*L*</sup>(*f*), *δ*<sub>*E*</sub><sup>*U*</sup>(*f*)]*, for all f* ∈ *F, subject to the condition* 0 ≤ *β*<sub>*E*</sub><sup>*U*</sup>(*f*) + *δ*<sub>*E*</sub><sup>*U*</sup>(*f*) ≤ 1*. π*<sub>*E*</sub>(*f*) = [*π*<sub>*E*</sub><sup>*L*</sup>(*f*), *π*<sub>*E*</sub><sup>*U*</sup>(*f*)] *denotes the indeterminacy degree of the element f in E*˜*, where π*<sub>*E*</sub><sup>*L*</sup>(*f*) = 1 − *β*<sub>*E*</sub><sup>*U*</sup>(*f*) − *δ*<sub>*E*</sub><sup>*U*</sup>(*f*) *and π*<sub>*E*</sub><sup>*U*</sup>(*f*) = 1 − *β*<sub>*E*</sub><sup>*L*</sup>(*f*) − *δ*<sub>*E*</sub><sup>*L*</sup>(*f*)*.*
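For readers who prefer code, an IVIFN in the sense of Definition 1 can be represented as a pair of subintervals subject to the constraint *β*<sup>*U*</sup> + *δ*<sup>*U*</sup> ≤ 1 (a minimal sketch of ours; the class and field names are not from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IVIFN:
    """An interval-valued intuitionistic fuzzy number
    ([beta_l, beta_u], [delta_l, delta_u]) with beta_u + delta_u <= 1."""
    beta_l: float
    beta_u: float
    delta_l: float
    delta_u: float

    def __post_init__(self):
        assert 0.0 <= self.beta_l <= self.beta_u <= 1.0
        assert 0.0 <= self.delta_l <= self.delta_u <= 1.0
        assert self.beta_u + self.delta_u <= 1.0

    def indeterminacy(self):
        """The indeterminacy interval [pi_L, pi_U] from Definition 1."""
        return (1.0 - self.beta_u - self.delta_u,
                1.0 - self.beta_l - self.delta_l)

a = IVIFN(0.55, 0.60, 0.35, 0.40)
assert a.indeterminacy() == (1.0 - 0.60 - 0.40, 1.0 - 0.55 - 0.35)
```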

Assume that *E*˜ = {⟨*f*, *β*˜<sub>*E*</sub>(*f*), ˜*δ*<sub>*E*</sub>(*f*)⟩ : *f* ∈ *F*} and *W*˜ = {⟨*f*, *β*˜<sub>*W*</sub>(*f*), ˜*δ*<sub>*W*</sub>(*f*)⟩ : *f* ∈ *F*} are two IVIFSs over the universe *F*. The following relations and operations concerning two IVIFSs were described in [2,25]:


where any pair (*T*, *S*) can be utilized, *T* indicates a *t*-norm and *S* a so-called *t*-conorm dual to the *t*-norm *T*, characterized by *S*(*x*, *y*) = 1 − *T*(1 − *x*, 1 − *y*).

For convenience, Xu [5] called ˜*∂* = ([*β*<sub>*∂*</sub><sup>*L*</sup>, *β*<sub>*∂*</sub><sup>*U*</sup>], [*δ*<sub>*∂*</sub><sup>*L*</sup>, *δ*<sub>*∂*</sub><sup>*U*</sup>]) an IVIFN, where [*β*<sub>*∂*</sub><sup>*L*</sup>, *β*<sub>*∂*</sub><sup>*U*</sup>] ∈ *D*[0, 1], [*δ*<sub>*∂*</sub><sup>*L*</sup>, *δ*<sub>*∂*</sub><sup>*U*</sup>] ∈ *D*[0, 1] and *β*<sub>*∂*</sub><sup>*U*</sup> + *δ*<sub>*∂*</sub><sup>*U*</sup> ≤ 1.

For any three IVIFNs ˜*∂* = ([*β*<sub>*∂*</sub><sup>*L*</sup>, *β*<sub>*∂*</sub><sup>*U*</sup>], [*δ*<sub>*∂*</sub><sup>*L*</sup>, *δ*<sub>*∂*</sub><sup>*U*</sup>]), ˜*∂*<sub>1</sub> = ([*β*<sub>*∂*1</sub><sup>*L*</sup>, *β*<sub>*∂*1</sub><sup>*U*</sup>], [*δ*<sub>*∂*1</sub><sup>*L*</sup>, *δ*<sub>*∂*1</sub><sup>*U*</sup>]) and ˜*∂*<sub>2</sub> = ([*β*<sub>*∂*2</sub><sup>*L*</sup>, *β*<sub>*∂*2</sub><sup>*U*</sup>], [*δ*<sub>*∂*2</sub><sup>*L*</sup>, *δ*<sub>*∂*2</sub><sup>*U*</sup>]), Xu [5] and Xu and Chen [39] stated a few operations as follows:


Several indices [5,40] were used to characterize IVIFN.

**Definition 2** ([40])**.** *For any IVIFN* ˜*∂* = ([*β*<sub>*∂*</sub><sup>*L*</sup>, *β*<sub>*∂*</sub><sup>*U*</sup>], [*δ*<sub>*∂*</sub><sup>*L*</sup>, *δ*<sub>*∂*</sub><sup>*U*</sup>])*, the score function Sco*( ˜*∂*)*, accuracy function Acc*( ˜*∂*)*, membership uncertainty index Mui*( ˜*∂*) *and hesitation uncertainty index Hui*( ˜*∂*) *of* ˜*∂* *are defined as follows:*

$$Sco(\tilde{\partial}) = \frac{1}{2}(\beta_{\partial}^{L} + \beta_{\partial}^{U} - \delta_{\partial}^{L} - \delta_{\partial}^{U}), \tag{2}$$

$$Acc(\tilde{\partial}) = \frac{1}{2}(\beta_{\partial}^{L} + \beta_{\partial}^{U} + \delta_{\partial}^{L} + \delta_{\partial}^{U}), \tag{3}$$

$$Mui(\tilde{\partial}) = \beta_{\partial}^{U} + \delta_{\partial}^{L} - \beta_{\partial}^{L} - \delta_{\partial}^{U}, \tag{4}$$

$$Hui(\tilde{\partial}) = \beta_{\partial}^{U} + \delta_{\partial}^{U} - \beta_{\partial}^{L} - \delta_{\partial}^{L}. \tag{5}$$
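These four indices translate directly into code. In the sketch below (our own naming; an IVIFN is written as a 4-tuple (*β*<sup>*L*</sup>, *β*<sup>*U*</sup>, *δ*<sup>*L*</sup>, *δ*<sup>*U*</sup>)), the IVIFN ([0.55, 0.60], [0.35, 0.40]) gives Sco = 0.20, Acc = 0.95, Mui = 0.00 and Hui = 0.10:

```python
def sco(bl, bu, dl, du):
    """Score function (2)."""
    return 0.5 * (bl + bu - dl - du)

def acc(bl, bu, dl, du):
    """Accuracy function (3)."""
    return 0.5 * (bl + bu + dl + du)

def mui(bl, bu, dl, du):
    """Membership uncertainty index (4)."""
    return bu + dl - bl - du

def hui(bl, bu, dl, du):
    """Hesitation uncertainty index (5)."""
    return bu + du - bl - dl

p = (0.55, 0.60, 0.35, 0.40)
assert abs(sco(*p) - 0.20) < 1e-12
assert abs(acc(*p) - 0.95) < 1e-12
assert abs(mui(*p) - 0.00) < 1e-12
assert abs(hui(*p) - 0.10) < 1e-12
```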

Based on these indices of IVIFNs, the total ordering [40] on IVIFNs was defined as follows.

**Definition 3.** *Let* ˜*∂*<sub>1</sub> = ([*β*<sub>*∂*1</sub><sup>*L*</sup>, *β*<sub>*∂*1</sub><sup>*U*</sup>], [*δ*<sub>*∂*1</sub><sup>*L*</sup>, *δ*<sub>*∂*1</sub><sup>*U*</sup>]) *and* ˜*∂*<sub>2</sub> = ([*β*<sub>*∂*2</sub><sup>*L*</sup>, *β*<sub>*∂*2</sub><sup>*U*</sup>], [*δ*<sub>*∂*2</sub><sup>*L*</sup>, *δ*<sub>*∂*2</sub><sup>*U*</sup>]) *be two IVIFNs, then*

- (a) *if Acc*( ˜*∂*<sub>1</sub>) < *Acc*( ˜*∂*<sub>2</sub>)*, then* ˜*∂*<sub>1</sub> < ˜*∂*<sub>2</sub>*;*
- (b) *if Acc*( ˜*∂*<sub>1</sub>) = *Acc*( ˜*∂*<sub>2</sub>)*, then*
	- (i) *if Hui*( ˜*∂*<sub>1</sub>) < *Hui*( ˜*∂*<sub>2</sub>)*, then* ˜*∂*<sub>1</sub> < ˜*∂*<sub>2</sub>*;*
	- (ii) *if Hui*( ˜*∂*<sub>1</sub>) = *Hui*( ˜*∂*<sub>2</sub>)*, then* ˜*∂*<sub>1</sub> *and* ˜*∂*<sub>2</sub> *are the same, i.e., β*<sub>*∂*1</sub><sup>*L*</sup> = *β*<sub>*∂*2</sub><sup>*L*</sup>*, β*<sub>*∂*1</sub><sup>*U*</sup> = *β*<sub>*∂*2</sub><sup>*U*</sup>*, δ*<sub>*∂*1</sub><sup>*L*</sup> = *δ*<sub>*∂*2</sub><sup>*L*</sup> *and δ*<sub>*∂*1</sub><sup>*U*</sup> = *δ*<sub>*∂*2</sub><sup>*U*</sup>*, denoted by* ˜*∂*<sub>1</sub> = ˜*∂*<sub>2</sub>*.*

Definition 3 defines a way to compare two IVIFNs by prioritizing the functions of score, accuracy, membership uncertainty index, and hesitation uncertainty index. When two IVIFNs are compared, the ordering is examined in the following sequence: general belonging degree, accuracy or hesitation level, membership uncertainty index, and hesitation uncertainty index. This comparison proceeds until one of the four functions in Definition 3 distinguishes the two IVIFNs. Once the two IVIFNs are distinguished at a particular level, the computation is complete and the functions at the lower levels are not computed.
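The lexicographic procedure just described can be sketched as follows (our own illustration under the four-level sequence stated in the paragraph above, with the tuple layout (*β*<sup>*L*</sup>, *β*<sup>*U*</sup>, *δ*<sup>*L*</sup>, *δ*<sup>*U*</sup>) and an early exit at the first index that separates the operands):

```python
def compare_ivifn(p, q, eps=1e-12):
    """Lexicographic comparison of two IVIFNs given as (bL, bU, dL, dU)
    tuples: returns -1, 0 or 1. Checks score, then accuracy, then the
    membership and hesitation uncertainty indices, stopping early."""
    def sco(bl, bu, dl, du): return 0.5 * (bl + bu - dl - du)
    def acc(bl, bu, dl, du): return 0.5 * (bl + bu + dl + du)
    def mui(bl, bu, dl, du): return bu + dl - bl - du
    def hui(bl, bu, dl, du): return bu + du - bl - dl

    for index in (sco, acc, mui, hui):
        a, b = index(*p), index(*q)
        if abs(a - b) > eps:            # first index that distinguishes them
            return -1 if a < b else 1
    return 0                            # identical IVIFNs

a = (0.75, 0.80, 0.15, 0.20)
b = (0.35, 0.45, 0.45, 0.50)
assert compare_ivifn(a, b) == 1         # a has the larger score
assert compare_ivifn(a, a) == 0
```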

Deschrijver et al. [41] introduced the notion of non-empty intervals. They denoted by *L* the lattice of non-empty intervals *L* = {[*m*, *n*] | (*m*, *n*) ∈ [0, 1]<sup>2</sup>, *m* ≤ *n*} with the partial order ≤<sub>*L*</sub> determined as [*m*, *n*] ≤<sub>*L*</sub> [*p*, *q*] ⇔ *m* ≤ *p* and *n* ≤ *q*. The inferior and superior elements are denoted by 0<sub>*L*</sub> = [0, 0] and 1<sub>*L*</sub> = [1, 1], respectively.

In this specific situation, Wang and Liu [16,17] denoted by *L*<sup>⋆</sup> the lattice of non-empty IVIFNs *L*<sup>⋆</sup> = {⟨[*m*, *n*], [*p*, *q*]⟩ | [*m*, *n*], [*p*, *q*] ∈ *D*[0, 1], *n* + *q* ≤ 1} with the partial order ≤<sub>*L*<sup>⋆</sup></sub> characterized as ⟨[*m*<sub>1</sub>, *n*<sub>1</sub>], [*p*<sub>1</sub>, *q*<sub>1</sub>]⟩ ≤<sub>*L*<sup>⋆</sup></sub> ⟨[*m*<sub>2</sub>, *n*<sub>2</sub>], [*p*<sub>2</sub>, *q*<sub>2</sub>]⟩ ⇔ [*m*<sub>1</sub>, *n*<sub>1</sub>] ≤<sub>*L*</sub> [*m*<sub>2</sub>, *n*<sub>2</sub>] and [*p*<sub>2</sub>, *q*<sub>2</sub>] ≤<sub>*L*</sub> [*p*<sub>1</sub>, *q*<sub>1</sub>] ⇔ *m*<sub>1</sub> ≤ *m*<sub>2</sub>, *n*<sub>1</sub> ≤ *n*<sub>2</sub>, *p*<sub>1</sub> ≥ *p*<sub>2</sub> and *q*<sub>1</sub> ≥ *q*<sub>2</sub>, where the inferior and superior elements are 0<sub>*L*<sup>⋆</sup></sub> = ⟨0<sub>*L*</sub>, 1<sub>*L*</sub>⟩ = ⟨[0, 0], [1, 1]⟩ and 1<sub>*L*<sup>⋆</sup></sub> = ⟨1<sub>*L*</sub>, 0<sub>*L*</sub>⟩ = ⟨[1, 1], [0, 0]⟩, respectively.

**Remark 1.** *If α* ≤<sub>*L*<sup>⋆</sup></sub> *ν, then α* ≤ *ν, i.e., the total order contains the standard partial order on L*<sup>⋆</sup>*.*

**Definition 4.** *g*<sub>*L*<sup>⋆</sup></sub> : (*L*<sup>⋆</sup>)<sup>ℏ</sup> → *L*<sup>⋆</sup> *is an aggregation function if it is monotone with respect to* ≤<sub>*L*<sup>⋆</sup></sub> *and satisfies g*<sub>*L*<sup>⋆</sup></sub>(0<sub>*L*<sup>⋆</sup></sub>, . . . , 0<sub>*L*<sup>⋆</sup></sub>) = 0<sub>*L*<sup>⋆</sup></sub> *and g*<sub>*L*<sup>⋆</sup></sub>(1<sub>*L*<sup>⋆</sup></sub>, . . . , 1<sub>*L*<sup>⋆</sup></sub>) = 1<sub>*L*<sup>⋆</sup></sub>*.*

Currently, a wide number of operators have been developed for accumulating IVIF data in *L*<sup>⋆</sup> [42,43]. The IVIF weighted geometric (IVIFWG) operator and the IVIF ordered weighted geometric (IVIFOWG) operator are probably the most widely acknowledged operators for accumulating inputs, and they are discussed in detail in the following.

**Definition 5.** *Let* ˜*∂*<sub>*ζ*</sub> = ([*β*<sub>*∂ζ*</sub><sup>*L*</sup>, *β*<sub>*∂ζ*</sub><sup>*U*</sup>], [*δ*<sub>*∂ζ*</sub><sup>*L*</sup>, *δ*<sub>*∂ζ*</sub><sup>*U*</sup>]) (*ζ* = 1, 2, . . . , ℏ) *be a collection of IVIFNs and ξ* = (*ξ*<sub>1</sub>, *ξ*<sub>2</sub>, . . . , *ξ*<sub>ℏ</sub>)<sup>*T*</sup> *the weight vector of* ˜*∂*<sub>*ζ*</sub> (*ζ* = 1, 2, . . . , ℏ)*, so that ξ*<sub>*ζ*</sub> ∈ [0, 1]*, ζ* = 1, 2, . . . , ℏ*, and* ∑<sub>*ζ*=1</sub><sup>ℏ</sup> *ξ*<sub>*ζ*</sub> = 1*. Then, the IVIF weighted geometric (IVIFWG) operator of dimension* ℏ *is a function IVIFWG* : (*L*<sup>⋆</sup>)<sup>ℏ</sup> → *L*<sup>⋆</sup> *with*

$$IVIFWG(\tilde{\partial}_1, \tilde{\partial}_2, \ldots, \tilde{\partial}_{\hbar}) = \bigotimes_{\zeta=1}^{\hbar} (\tilde{\partial}_{\zeta})^{\xi_{\zeta}} = \left(\left[\prod_{\zeta=1}^{\hbar}(\beta_{\partial_{\zeta}}^{L})^{\xi_{\zeta}}, \prod_{\zeta=1}^{\hbar}(\beta_{\partial_{\zeta}}^{U})^{\xi_{\zeta}}\right], \left[1 - \prod_{\zeta=1}^{\hbar}(1 - \delta_{\partial_{\zeta}}^{L})^{\xi_{\zeta}}, 1 - \prod_{\zeta=1}^{\hbar}(1 - \delta_{\partial_{\zeta}}^{U})^{\xi_{\zeta}}\right]\right).$$
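A direct implementation of the IVIFWG operator is straightforward (a sketch of ours; the tuple layout (*β*<sup>*L*</sup>, *β*<sup>*U*</sup>, *δ*<sup>*L*</sup>, *δ*<sup>*U*</sup>) and names are not from the paper). Note the idempotency check at the end: aggregating identical IVIFNs returns the same IVIFN, since the weights sum to 1.

```python
import math

def ivifwg(ivifns, weights):
    """IVIF weighted geometric operator: the membership bounds are weighted
    geometric means, and the non-membership bounds use the dual aggregation
    1 - prod((1 - d)**w)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    bl = math.prod(p[0] ** w for p, w in zip(ivifns, weights))
    bu = math.prod(p[1] ** w for p, w in zip(ivifns, weights))
    dl = 1.0 - math.prod((1.0 - p[2]) ** w for p, w in zip(ivifns, weights))
    du = 1.0 - math.prod((1.0 - p[3]) ** w for p, w in zip(ivifns, weights))
    return (bl, bu, dl, du)

# Idempotency: aggregating identical IVIFNs returns the same IVIFN.
p = (0.5, 0.6, 0.2, 0.3)
agg = ivifwg([p, p, p], [0.2, 0.5, 0.3])
assert all(abs(x - y) < 1e-9 for x, y in zip(agg, p))
```

The IVIFOWG operator of Definition 6 differs only in that the arguments are first reordered by the permutation *ϱ* before the same weighted product is applied.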

**Definition 6.** *Let* ˜*∂*<sub>*ζ*</sub> = ([*β*<sub>*∂ζ*</sub><sup>*L*</sup>, *β*<sub>*∂ζ*</sub><sup>*U*</sup>], [*δ*<sub>*∂ζ*</sub><sup>*L*</sup>, *δ*<sub>*∂ζ*</sub><sup>*U*</sup>]) (*ζ* = 1, 2, . . . , ℏ) *be a collection of IVIFNs and ξ* = (*ξ*<sub>1</sub>, *ξ*<sub>2</sub>, . . . , *ξ*<sub>ℏ</sub>)<sup>*T*</sup> *the weight vector of* ˜*∂*<sub>*ζ*</sub> (*ζ* = 1, 2, . . . , ℏ)*, so that ξ*<sub>*ζ*</sub> ∈ [0, 1]*, ζ* = 1, 2, . . . , ℏ*, and* ∑<sub>*ζ*=1</sub><sup>ℏ</sup> *ξ*<sub>*ζ*</sub> = 1*. Then, the IVIF ordered weighted geometric (IVIFOWG) operator of dimension* ℏ *is a function IVIFOWG* : (*L*<sup>⋆</sup>)<sup>ℏ</sup> → *L*<sup>⋆</sup> *with*

$$IVIFOWG(\tilde{\partial}_1, \tilde{\partial}_2, \ldots, \tilde{\partial}_{\hbar}) = \bigotimes_{\zeta=1}^{\hbar} (\tilde{\partial}_{\varrho(\zeta)})^{\xi_{\zeta}} = \left(\left[\prod_{\zeta=1}^{\hbar}(\beta_{\partial_{\varrho(\zeta)}}^{L})^{\xi_{\zeta}}, \prod_{\zeta=1}^{\hbar}(\beta_{\partial_{\varrho(\zeta)}}^{U})^{\xi_{\zeta}}\right], \left[1 - \prod_{\zeta=1}^{\hbar}(1 - \delta_{\partial_{\varrho(\zeta)}}^{L})^{\xi_{\zeta}}, 1 - \prod_{\zeta=1}^{\hbar}(1 - \delta_{\partial_{\varrho(\zeta)}}^{U})^{\xi_{\zeta}}\right]\right),$$

*where ϱ is a permutation of* (1, 2, . . . , ℏ) *ordering the arguments.*

## **3. Aczel–Alsina Operations of IVIFNs**

This section will introduce the Aczel–Alsina operations on IVIFNs and discuss some of their fundamental properties.

If we let the *t*-norm *T* be the Aczel–Alsina product *T*<sub>*A*</sub> and the *t*-conorm *S* be the Aczel–Alsina sum *S*<sub>*A*</sub>, then the generalized intersection and union over two IVIFNs *E* and *W* become the Aczel–Alsina product (*E* ⊗ *W*) and the Aczel–Alsina sum (*E* ⊕ *W*), respectively, as follows:

$$\begin{array}{rcl} E \otimes W &=& \left\langle \left[T_A\{\beta_{E}^{L}, \beta_{W}^{L}\},\ T_A\{\beta_{E}^{U}, \beta_{W}^{U}\}\right], \left[S_A\{\delta_{E}^{L}, \delta_{W}^{L}\},\ S_A\{\delta_{E}^{U}, \delta_{W}^{U}\}\right]\right\rangle \\ E \oplus W &=& \left\langle \left[S_A\{\beta_{E}^{L}, \beta_{W}^{L}\},\ S_A\{\beta_{E}^{U}, \beta_{W}^{U}\}\right], \left[T_A\{\delta_{E}^{L}, \delta_{W}^{L}\},\ T_A\{\delta_{E}^{U}, \delta_{W}^{U}\}\right]\right\rangle. \end{array}$$

**Proposition 1.** *Let* ˜*∂*<sub>1</sub> = ([*β*<sub>*∂*1</sub><sup>*L*</sup>, *β*<sub>*∂*1</sub><sup>*U*</sup>], [*δ*<sub>*∂*1</sub><sup>*L*</sup>, *δ*<sub>*∂*1</sub><sup>*U*</sup>]) *and* ˜*∂*<sub>2</sub> = ([*β*<sub>*∂*2</sub><sup>*L*</sup>, *β*<sub>*∂*2</sub><sup>*U*</sup>], [*δ*<sub>*∂*2</sub><sup>*L*</sup>, *δ*<sub>*∂*2</sub><sup>*U*</sup>]) *be two IVIFNs and* ð ∈ [0, ∞]*. Then, the Aczel–Alsina t-norm and t-conorm operations of IVIFNs are given as:*

(i)
$$\tilde{\partial}_1 \oplus \tilde{\partial}_2 = \Big\langle \Big[1 - e^{-((-\log(1-\beta_{\partial_1}^{L}))^{\eth} + (-\log(1-\beta_{\partial_2}^{L}))^{\eth})^{1/\eth}},\ 1 - e^{-((-\log(1-\beta_{\partial_1}^{U}))^{\eth} + (-\log(1-\beta_{\partial_2}^{U}))^{\eth})^{1/\eth}}\Big],\ \Big[e^{-((-\log \delta_{\partial_1}^{L})^{\eth} + (-\log \delta_{\partial_2}^{L})^{\eth})^{1/\eth}},\ e^{-((-\log \delta_{\partial_1}^{U})^{\eth} + (-\log \delta_{\partial_2}^{U})^{\eth})^{1/\eth}}\Big]\Big\rangle;$$

(ii)
$$\tilde{\partial}_1 \otimes \tilde{\partial}_2 = \Big\langle \Big[e^{-((-\log \beta_{\partial_1}^{L})^{\eth} + (-\log \beta_{\partial_2}^{L})^{\eth})^{1/\eth}},\ e^{-((-\log \beta_{\partial_1}^{U})^{\eth} + (-\log \beta_{\partial_2}^{U})^{\eth})^{1/\eth}}\Big],\ \Big[1 - e^{-((-\log(1-\delta_{\partial_1}^{L}))^{\eth} + (-\log(1-\delta_{\partial_2}^{L}))^{\eth})^{1/\eth}},\ 1 - e^{-((-\log(1-\delta_{\partial_1}^{U}))^{\eth} + (-\log(1-\delta_{\partial_2}^{U}))^{\eth})^{1/\eth}}\Big]\Big\rangle.$$

**Definition 7.** *Let* ˜*∂* = ([*β*<sub>*∂*</sub><sup>*L*</sup>, *β*<sub>*∂*</sub><sup>*U*</sup>], [*δ*<sub>*∂*</sub><sup>*L*</sup>, *δ*<sub>*∂*</sub><sup>*U*</sup>]) *be an IVIFN,* ð ∈ [0, ∞] *and ϕ* > 0*. Then, the following two operations on IVIFNs are defined as:*

(i)
$$\phi\tilde{\partial} = \Big\langle \Big[1 - e^{-(\phi(-\log(1-\beta_{\partial}^{L}))^{\eth})^{1/\eth}},\ 1 - e^{-(\phi(-\log(1-\beta_{\partial}^{U}))^{\eth})^{1/\eth}}\Big],\ \Big[e^{-(\phi(-\log \delta_{\partial}^{L})^{\eth})^{1/\eth}},\ e^{-(\phi(-\log \delta_{\partial}^{U})^{\eth})^{1/\eth}}\Big]\Big\rangle;$$

(ii)
$$\tilde{\partial}^{\phi} = \Big\langle \Big[e^{-(\phi(-\log \beta_{\partial}^{L})^{\eth})^{1/\eth}},\ e^{-(\phi(-\log \beta_{\partial}^{U})^{\eth})^{1/\eth}}\Big],\ \Big[1 - e^{-(\phi(-\log(1-\delta_{\partial}^{L}))^{\eth})^{1/\eth}},\ 1 - e^{-(\phi(-\log(1-\delta_{\partial}^{U}))^{\eth})^{1/\eth}}\Big]\Big\rangle.$$

**Example 1.** *Let* $\tilde{\partial}=([0.55,0.60],[0.35,0.40])$*,* $\tilde{\partial}_{1}=([0.75,0.80],[0.15,0.20])$ *and* $\tilde{\partial}_{2}=([0.35,0.45],[0.45,0.50])$ *be three IVIFNs; then, applying the Aczel–Alsina operations on IVIFNs specified in Proposition 1 and Definition 7 with* ð = 3 *and* ϕ = 2*, we get*


(i) $\tilde{\partial}_{1}\oplus\tilde{\partial}_{2}=\big\langle\big[1-e^{-((-\log(1-0.75))^{3}+(-\log(1-0.35))^{3})^{1/3}},\ 1-e^{-((-\log(1-0.80))^{3}+(-\log(1-0.45))^{3})^{1/3}}\big],$ $\big[e^{-((-\log 0.15)^{3}+(-\log 0.45)^{3})^{1/3}},\ e^{-((-\log 0.20)^{3}+(-\log 0.50)^{3})^{1/3}}\big]\big\rangle=([0.75341, 0.80534],[0.14325, 0.19182])$.

(ii) $\tilde{\partial}_{1}\otimes\tilde{\partial}_{2}=\big\langle\big[e^{-((-\log 0.75)^{3}+(-\log 0.35)^{3})^{1/3}},\ e^{-((-\log 0.80)^{3}+(-\log 0.45)^{3})^{1/3}}\big],$ $\big[1-e^{-((-\log(1-0.15))^{3}+(-\log(1-0.45))^{3})^{1/3}},\ 1-e^{-((-\log(1-0.20))^{3}+(-\log(1-0.50))^{3})^{1/3}}\big]\big\rangle=([0.34751, 0.44741],[0.45218, 0.50380])$.

(iii) $2\tilde{\partial}=\big\langle\big[1-e^{-(2(-\log(1-0.55))^{3})^{1/3}},\ 1-e^{-(2(-\log(1-0.60))^{3})^{1/3}}\big],\ \big[e^{-(2(-\log 0.35)^{3})^{1/3}},\ e^{-(2(-\log 0.40)^{3})^{1/3}}\big]\big\rangle=([0.63434, 0.68477],[0.26642, 0.31523])$.

(iv) $\tilde{\partial}^{2}=\big\langle\big[e^{-(2(-\log 0.55)^{3})^{1/3}},\ e^{-(2(-\log 0.60)^{3})^{1/3}}\big],\ \big[1-e^{-(2(-\log(1-0.35))^{3})^{1/3}},\ 1-e^{-(2(-\log(1-0.40))^{3})^{1/3}}\big]\big\rangle=([0.47084, 0.52540],[0.41885, 0.47460])$.
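The four Aczel–Alsina operations above are easy to check numerically. Below is a minimal Python sketch (the function names `aa_sum`, `aa_prod`, `aa_scale`, `aa_power` are our own; the paper defines only the symbolic operations) that reproduces the figures of Example 1, with `math.log` taken as the natural logarithm:

```python
import math

# An IVIFN is represented as ([bL, bU], [dL, dU]); p plays the role of the parameter ð.
def aa_sum(x, y, p):
    """Aczel-Alsina sum of two IVIFNs (Proposition 1 (i))."""
    return ([1 - math.exp(-((-math.log(1 - a))**p + (-math.log(1 - b))**p)**(1 / p))
             for a, b in zip(x[0], y[0])],
            [math.exp(-((-math.log(a))**p + (-math.log(b))**p)**(1 / p))
             for a, b in zip(x[1], y[1])])

def aa_prod(x, y, p):
    """Aczel-Alsina product of two IVIFNs (Proposition 1 (ii))."""
    return ([math.exp(-((-math.log(a))**p + (-math.log(b))**p)**(1 / p))
             for a, b in zip(x[0], y[0])],
            [1 - math.exp(-((-math.log(1 - a))**p + (-math.log(1 - b))**p)**(1 / p))
             for a, b in zip(x[1], y[1])])

def aa_scale(phi, x, p):
    """Scalar multiple phi*d of an IVIFN (Definition 7 (i))."""
    return ([1 - math.exp(-(phi * (-math.log(1 - a))**p)**(1 / p)) for a in x[0]],
            [math.exp(-(phi * (-math.log(a))**p)**(1 / p)) for a in x[1]])

def aa_power(x, phi, p):
    """Power d**phi of an IVIFN (Definition 7 (ii))."""
    return ([math.exp(-(phi * (-math.log(a))**p)**(1 / p)) for a in x[0]],
            [1 - math.exp(-(phi * (-math.log(1 - a))**p)**(1 / p)) for a in x[1]])

# The three IVIFNs of Example 1:
d  = ([0.55, 0.60], [0.35, 0.40])
d1 = ([0.75, 0.80], [0.15, 0.20])
d2 = ([0.35, 0.45], [0.45, 0.50])
```

With ð = 3 and ϕ = 2, `aa_sum(d1, d2, 3)` gives approximately ([0.75341, 0.80534], [0.14325, 0.19182]), matching item (i) above, and the other three calls reproduce items (ii)–(iv).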

**Theorem 1.** *Let* $\tilde{\partial}=([\beta^{L}_{\partial},\beta^{U}_{\partial}],[\delta^{L}_{\partial},\delta^{U}_{\partial}])$*,* $\tilde{\partial}_{1}=([\beta^{L}_{\partial_{1}},\beta^{U}_{\partial_{1}}],[\delta^{L}_{\partial_{1}},\delta^{U}_{\partial_{1}}])$*, and* $\tilde{\partial}_{2}=([\beta^{L}_{\partial_{2}},\beta^{U}_{\partial_{2}}],[\delta^{L}_{\partial_{2}},\delta^{U}_{\partial_{2}}])$ *be three IVIFNs; then, for* ð ∈ [0, ∞] *and* ϕ, ϕ₁, ϕ₂ > 0*, we have*

(i) $\tilde{\partial}_{1}\oplus\tilde{\partial}_{2}=\tilde{\partial}_{2}\oplus\tilde{\partial}_{1}$;
(ii) $\tilde{\partial}_{1}\otimes\tilde{\partial}_{2}=\tilde{\partial}_{2}\otimes\tilde{\partial}_{1}$;
(iii) $\phi(\tilde{\partial}_{1}\oplus\tilde{\partial}_{2})=\phi\tilde{\partial}_{1}\oplus\phi\tilde{\partial}_{2}$;
(iv) $\phi_{1}\tilde{\partial}\oplus\phi_{2}\tilde{\partial}=(\phi_{1}+\phi_{2})\tilde{\partial}$;
(v) $(\tilde{\partial}_{1}\otimes\tilde{\partial}_{2})^{\phi}=\tilde{\partial}_{1}^{\phi}\otimes\tilde{\partial}_{2}^{\phi}$;
(vi) $\tilde{\partial}^{\phi_{1}}\otimes\tilde{\partial}^{\phi_{2}}=\tilde{\partial}^{(\phi_{1}+\phi_{2})}$.
**Proof.** For the three IVIFNs $\tilde{\partial}$, $\tilde{\partial}_{1}$ and $\tilde{\partial}_{2}$, ð ∈ [0, ∞], and ϕ, ϕ₁, ϕ₂ > 0, as stated in Proposition 1 and Definition 7, we obtain the following.

(iii) By Proposition 1,

$$\tilde{\partial}_{1}\oplus\tilde{\partial}_{2}=\Big\langle\Big[1-e^{-((-\log(1-\beta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-((-\log(1-\beta^{U}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big],\ \Big[e^{-((-\log \delta^{L}_{\partial_{1}})^{ð}+(-\log \delta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-((-\log \delta^{U}_{\partial_{1}})^{ð}+(-\log \delta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big]\Big\rangle.$$

Let $t=1-e^{-((-\log(1-\beta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{L}_{\partial_{2}}))^{ð})^{1/ð}}$; then $\log(1-t)=-((-\log(1-\beta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{L}_{\partial_{2}}))^{ð})^{1/ð}$, and likewise for the other three components. Using this, we get

$$\begin{aligned}
\phi(\tilde{\partial}_{1}\oplus\tilde{\partial}_{2}) &= \phi\Big\langle\Big[1-e^{-((-\log(1-\beta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-((-\log(1-\beta^{U}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[e^{-((-\log \delta^{L}_{\partial_{1}})^{ð}+(-\log \delta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-((-\log \delta^{U}_{\partial_{1}})^{ð}+(-\log \delta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[1-e^{-(\phi((-\log(1-\beta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{L}_{\partial_{2}}))^{ð}))^{1/ð}},\ 1-e^{-(\phi((-\log(1-\beta^{U}_{\partial_{1}}))^{ð}+(-\log(1-\beta^{U}_{\partial_{2}}))^{ð}))^{1/ð}}\Big],\\
&\qquad\ \Big[e^{-(\phi((-\log \delta^{L}_{\partial_{1}})^{ð}+(-\log \delta^{L}_{\partial_{2}})^{ð}))^{1/ð}},\ e^{-(\phi((-\log \delta^{U}_{\partial_{1}})^{ð}+(-\log \delta^{U}_{\partial_{2}})^{ð}))^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[1-e^{-(\phi(-\log(1-\beta^{L}_{\partial_{1}}))^{ð})^{1/ð}},\ 1-e^{-(\phi(-\log(1-\beta^{U}_{\partial_{1}}))^{ð})^{1/ð}}\Big],\ \Big[e^{-(\phi(-\log \delta^{L}_{\partial_{1}})^{ð})^{1/ð}},\ e^{-(\phi(-\log \delta^{U}_{\partial_{1}})^{ð})^{1/ð}}\Big]\Big\rangle\\
&\quad\oplus \Big\langle\Big[1-e^{-(\phi(-\log(1-\beta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-(\phi(-\log(1-\beta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big],\ \Big[e^{-(\phi(-\log \delta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-(\phi(-\log \delta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \phi\tilde{\partial}_{1}\oplus\phi\tilde{\partial}_{2}.
\end{aligned}$$

(v)

$$\begin{aligned}
(\tilde{\partial}_{1}\otimes\tilde{\partial}_{2})^{\phi} &= \Big\langle\Big[e^{-((-\log \beta^{L}_{\partial_{1}})^{ð}+(-\log \beta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-((-\log \beta^{U}_{\partial_{1}})^{ð}+(-\log \beta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-((-\log(1-\delta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\delta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-((-\log(1-\delta^{U}_{\partial_{1}}))^{ð}+(-\log(1-\delta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big]\Big\rangle^{\phi}\\
&= \Big\langle\Big[e^{-(\phi((-\log \beta^{L}_{\partial_{1}})^{ð}+(-\log \beta^{L}_{\partial_{2}})^{ð}))^{1/ð}},\ e^{-(\phi((-\log \beta^{U}_{\partial_{1}})^{ð}+(-\log \beta^{U}_{\partial_{2}})^{ð}))^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\phi((-\log(1-\delta^{L}_{\partial_{1}}))^{ð}+(-\log(1-\delta^{L}_{\partial_{2}}))^{ð}))^{1/ð}},\ 1-e^{-(\phi((-\log(1-\delta^{U}_{\partial_{1}}))^{ð}+(-\log(1-\delta^{U}_{\partial_{2}}))^{ð}))^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{-(\phi(-\log \beta^{L}_{\partial_{1}})^{ð})^{1/ð}},\ e^{-(\phi(-\log \beta^{U}_{\partial_{1}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\phi(-\log(1-\delta^{L}_{\partial_{1}}))^{ð})^{1/ð}},\ 1-e^{-(\phi(-\log(1-\delta^{U}_{\partial_{1}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&\quad\otimes \Big\langle\Big[e^{-(\phi(-\log \beta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-(\phi(-\log \beta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\phi(-\log(1-\delta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-(\phi(-\log(1-\delta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \tilde{\partial}_{1}^{\phi}\otimes\tilde{\partial}_{2}^{\phi}.
\end{aligned}$$

(vi)

$$\begin{aligned}
\tilde{\partial}^{\phi_{1}}\otimes\tilde{\partial}^{\phi_{2}} &= \Big\langle\Big[e^{-(\phi_{1}(-\log \beta^{L}_{\partial})^{ð})^{1/ð}},\ e^{-(\phi_{1}(-\log \beta^{U}_{\partial})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\phi_{1}(-\log(1-\delta^{L}_{\partial}))^{ð})^{1/ð}},\ 1-e^{-(\phi_{1}(-\log(1-\delta^{U}_{\partial}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&\quad\otimes \Big\langle\Big[e^{-(\phi_{2}(-\log \beta^{L}_{\partial})^{ð})^{1/ð}},\ e^{-(\phi_{2}(-\log \beta^{U}_{\partial})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\phi_{2}(-\log(1-\delta^{L}_{\partial}))^{ð})^{1/ð}},\ 1-e^{-(\phi_{2}(-\log(1-\delta^{U}_{\partial}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{-((\phi_{1}+\phi_{2})(-\log \beta^{L}_{\partial})^{ð})^{1/ð}},\ e^{-((\phi_{1}+\phi_{2})(-\log \beta^{U}_{\partial})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-((\phi_{1}+\phi_{2})(-\log(1-\delta^{L}_{\partial}))^{ð})^{1/ð}},\ 1-e^{-((\phi_{1}+\phi_{2})(-\log(1-\delta^{U}_{\partial}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \tilde{\partial}^{(\phi_{1}+\phi_{2})}.
\end{aligned}$$
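The identities of Theorem 1 can also be sanity-checked numerically. The sketch below (Python; the helper names are ours, not the paper's) verifies (iii) and (vi) on the IVIFNs of Example 1 up to floating-point error:

```python
import math

# An IVIFN is ([bL, bU], [dL, dU]); p is the Aczel-Alsina parameter ð.
def aa_sum(x, y, p):
    return ([1 - math.exp(-((-math.log(1 - a))**p + (-math.log(1 - b))**p)**(1 / p))
             for a, b in zip(x[0], y[0])],
            [math.exp(-((-math.log(a))**p + (-math.log(b))**p)**(1 / p))
             for a, b in zip(x[1], y[1])])

def aa_prod(x, y, p):
    return ([math.exp(-((-math.log(a))**p + (-math.log(b))**p)**(1 / p))
             for a, b in zip(x[0], y[0])],
            [1 - math.exp(-((-math.log(1 - a))**p + (-math.log(1 - b))**p)**(1 / p))
             for a, b in zip(x[1], y[1])])

def aa_scale(phi, x, p):
    return ([1 - math.exp(-(phi * (-math.log(1 - a))**p)**(1 / p)) for a in x[0]],
            [math.exp(-(phi * (-math.log(a))**p)**(1 / p)) for a in x[1]])

def aa_power(x, phi, p):
    return ([math.exp(-(phi * (-math.log(a))**p)**(1 / p)) for a in x[0]],
            [1 - math.exp(-(phi * (-math.log(1 - a))**p)**(1 / p)) for a in x[1]])

def close(x, y, tol=1e-9):
    # Component-wise comparison of two IVIFNs
    return all(abs(a - b) < tol for a, b in zip(x[0] + x[1], y[0] + y[1]))

d1 = ([0.75, 0.80], [0.15, 0.20])
d2 = ([0.35, 0.45], [0.45, 0.50])
phi, p = 2.0, 3.0

# (iii): phi*(d1 + d2) = phi*d1 + phi*d2
lhs3 = aa_scale(phi, aa_sum(d1, d2, p), p)
rhs3 = aa_sum(aa_scale(phi, d1, p), aa_scale(phi, d2, p), p)

# (vi): d^1.5 * d^0.5 = d^2
lhs6 = aa_prod(aa_power(d1, 1.5, p), aa_power(d1, 0.5, p), p)
rhs6 = aa_power(d1, 2.0, p)
```

Both pairs agree to within roundoff, as the symbolic derivation predicts.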

## **4. IVIF Aczel–Alsina Geometric Aggregation Operators**

In this section, we develop several IVIF geometric aggregation operators based on the Aczel–Alsina operations.

**Definition 8.** *Let* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs and* $\xi=(\xi_{1},\xi_{2},\ldots,\xi_{\hbar})^{T}$ *be the weight vector associated with* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$)*, with* $\xi_{\zeta}\in[0,1]$ *and* $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$*. In that case, an IVIF Aczel–Alsina weighted geometric (IVIFAAWG) operator can be described as a function* $IVIFAAWG:(L^{\star})^{\hbar}\to L^{\star}$*, in which*

$$IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})=\bigotimes_{\zeta=1}^{\hbar}(\tilde{\partial}_{\zeta})^{\xi_{\zeta}}=(\tilde{\partial}_{1})^{\xi_{1}}\otimes(\tilde{\partial}_{2})^{\xi_{2}}\otimes\cdots\otimes(\tilde{\partial}_{\hbar})^{\xi_{\hbar}}.$$

Following that, we prove the associated theorem for the Aczel–Alsina operations on IVIFNs.

**Theorem 2.** *Let* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs and* ð ∈ [0, ∞]*; then their aggregated value obtained by the IVIFAAWG operator is also an IVIFN, and*

$$\begin{aligned}
IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= \bigotimes_{\zeta=1}^{\hbar}(\tilde{\partial}_{\zeta})^{\xi_{\zeta}}\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{L}_{\partial_{\zeta}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{U}_{\partial_{\zeta}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial_{\zeta}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial_{\zeta}}))^{ð})^{1/ð}}\Big]\Big\rangle
\end{aligned} \tag{6}$$

*where* $\xi=(\xi_{1},\xi_{2},\ldots,\xi_{\hbar})^{T}$ *is the weight vector associated with* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$)*, such that* $\xi_{\zeta}\in[0,1]$ *and* $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$*.*

**Proof.** We prove Theorem 2 by mathematical induction on ℏ. (i) When ℏ = 2, relying on the Aczel–Alsina operations of IVIFNs, we obtain

$$\begin{aligned}
(\tilde{\partial}_{1})^{\xi_{1}} &= \Big\langle\Big[e^{-(\xi_{1}(-\log \beta^{L}_{\partial_{1}})^{ð})^{1/ð}},\ e^{-(\xi_{1}(-\log \beta^{U}_{\partial_{1}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\xi_{1}(-\log(1-\delta^{L}_{\partial_{1}}))^{ð})^{1/ð}},\ 1-e^{-(\xi_{1}(-\log(1-\delta^{U}_{\partial_{1}}))^{ð})^{1/ð}}\Big]\Big\rangle,\\
(\tilde{\partial}_{2})^{\xi_{2}} &= \Big\langle\Big[e^{-(\xi_{2}(-\log \beta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-(\xi_{2}(-\log \beta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\xi_{2}(-\log(1-\delta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-(\xi_{2}(-\log(1-\delta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big]\Big\rangle.
\end{aligned}$$

Depending on Definition 7 and Proposition 1, we get

$$\begin{aligned}
IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2}) &= (\tilde{\partial}_{1})^{\xi_{1}}\otimes(\tilde{\partial}_{2})^{\xi_{2}}\\
&= \Big\langle\Big[e^{-(\xi_{1}(-\log \beta^{L}_{\partial_{1}})^{ð})^{1/ð}},\ e^{-(\xi_{1}(-\log \beta^{U}_{\partial_{1}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\xi_{1}(-\log(1-\delta^{L}_{\partial_{1}}))^{ð})^{1/ð}},\ 1-e^{-(\xi_{1}(-\log(1-\delta^{U}_{\partial_{1}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&\quad\otimes \Big\langle\Big[e^{-(\xi_{2}(-\log \beta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-(\xi_{2}(-\log \beta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\xi_{2}(-\log(1-\delta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-(\xi_{2}(-\log(1-\delta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{-(\xi_{1}(-\log \beta^{L}_{\partial_{1}})^{ð}+\xi_{2}(-\log \beta^{L}_{\partial_{2}})^{ð})^{1/ð}},\ e^{-(\xi_{1}(-\log \beta^{U}_{\partial_{1}})^{ð}+\xi_{2}(-\log \beta^{U}_{\partial_{2}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\xi_{1}(-\log(1-\delta^{L}_{\partial_{1}}))^{ð}+\xi_{2}(-\log(1-\delta^{L}_{\partial_{2}}))^{ð})^{1/ð}},\ 1-e^{-(\xi_{1}(-\log(1-\delta^{U}_{\partial_{1}}))^{ð}+\xi_{2}(-\log(1-\delta^{U}_{\partial_{2}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{2}\xi_{\zeta}(-\log \beta^{L}_{\partial_{\zeta}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{2}\xi_{\zeta}(-\log \beta^{U}_{\partial_{\zeta}})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\sum_{\zeta=1}^{2}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial_{\zeta}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{2}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial_{\zeta}}))^{ð})^{1/ð}}\Big]\Big\rangle.
\end{aligned}$$

Hence, (6) is true for ℏ = 2.

(ii) Assume that (6) is true for ℏ = *k*; then we have

$$\begin{aligned}
IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{k}) &= \bigotimes_{\zeta=1}^{k}(\tilde{\partial}_{\zeta})^{\xi_{\zeta}}\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log \beta^{L}_{\partial_{\zeta}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log \beta^{U}_{\partial_{\zeta}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial_{\zeta}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial_{\zeta}}))^{ð})^{1/ð}}\Big]\Big\rangle.
\end{aligned}$$

Now, for ℏ = *k* + 1,

$$\begin{aligned}
IVIFAAWG_{\xi}(\tilde{\partial}_{1},\ldots,\tilde{\partial}_{k},\tilde{\partial}_{k+1}) &= \bigotimes_{\zeta=1}^{k}(\tilde{\partial}_{\zeta})^{\xi_{\zeta}}\otimes(\tilde{\partial}_{k+1})^{\xi_{k+1}}\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log \beta^{L}_{\partial_{\zeta}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log \beta^{U}_{\partial_{\zeta}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial_{\zeta}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{k}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial_{\zeta}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&\quad\otimes \Big\langle\Big[e^{-(\xi_{k+1}(-\log \beta^{L}_{\partial_{k+1}})^{ð})^{1/ð}},\ e^{-(\xi_{k+1}(-\log \beta^{U}_{\partial_{k+1}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\xi_{k+1}(-\log(1-\delta^{L}_{\partial_{k+1}}))^{ð})^{1/ð}},\ 1-e^{-(\xi_{k+1}(-\log(1-\delta^{U}_{\partial_{k+1}}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{k+1}\xi_{\zeta}(-\log \beta^{L}_{\partial_{\zeta}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{k+1}\xi_{\zeta}(-\log \beta^{U}_{\partial_{\zeta}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\sum_{\zeta=1}^{k+1}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial_{\zeta}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{k+1}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial_{\zeta}}))^{ð})^{1/ð}}\Big]\Big\rangle.
\end{aligned}$$

Thus, (6) is true for ℏ = *k* + 1.

Therefore, from (i) and (ii), we may conclude that (6) holds for every ℏ.
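The closed form (6) can be checked against the unrolled product of powers $\bigotimes_{\zeta}(\tilde{\partial}_{\zeta})^{\xi_{\zeta}}$ that defines the operator. A minimal Python sketch (function names and the sample data are ours), with an IVIFN stored as a pair of lists ([βL, βU], [δL, δU]):

```python
import math
from functools import reduce

def aa_prod(x, y, p):
    # Aczel-Alsina product of two IVIFNs; p is the parameter ð
    return ([math.exp(-((-math.log(a))**p + (-math.log(b))**p)**(1 / p))
             for a, b in zip(x[0], y[0])],
            [1 - math.exp(-((-math.log(1 - a))**p + (-math.log(1 - b))**p)**(1 / p))
             for a, b in zip(x[1], y[1])])

def aa_power(x, phi, p):
    # Aczel-Alsina power of an IVIFN
    return ([math.exp(-(phi * (-math.log(a))**p)**(1 / p)) for a in x[0]],
            [1 - math.exp(-(phi * (-math.log(1 - a))**p)**(1 / p)) for a in x[1]])

def ivifaawg(ds, w, p):
    """Closed-form IVIFAAWG operator of Equation (6)."""
    beta = [math.exp(-sum(wz * (-math.log(d[0][k]))**p for wz, d in zip(w, ds))**(1 / p))
            for k in range(2)]
    delta = [1 - math.exp(-sum(wz * (-math.log(1 - d[1][k]))**p for wz, d in zip(w, ds))**(1 / p))
             for k in range(2)]
    return (beta, delta)

# Illustrative data (not from the paper): three IVIFNs, weights summing to 1, ð = 3
ds = [([0.75, 0.80], [0.15, 0.20]),
      ([0.35, 0.45], [0.45, 0.50]),
      ([0.55, 0.60], [0.35, 0.40])]
w, p = [0.3, 0.2, 0.5], 3.0

closed = ivifaawg(ds, w, p)
unrolled = reduce(lambda a, b: aa_prod(a, b, p),
                  [aa_power(d, wz, p) for d, wz in zip(ds, w)])
```

The two routes agree to within roundoff, which is exactly what the induction in the proof establishes.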

**Theorem 3.** *(Idempotency) If all* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *are equal, i.e.,* $\tilde{\partial}_{\zeta}=\tilde{\partial}$ *for all* ζ*, then* $IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})=\tilde{\partial}$*.*

**Proof.** Since $\tilde{\partial}_{\zeta}=\tilde{\partial}=([\beta^{L}_{\partial},\beta^{U}_{\partial}],[\delta^{L}_{\partial},\delta^{U}_{\partial}])$ for all ζ and $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$, we have by Equation (6),

$$\begin{aligned}
IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= \bigotimes_{\zeta=1}^{\hbar}(\tilde{\partial}_{\zeta})^{\xi_{\zeta}}\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{L}_{\partial})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{U}_{\partial})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{-((-\log \beta^{L}_{\partial})^{ð})^{1/ð}},\ e^{-((-\log \beta^{U}_{\partial})^{ð})^{1/ð}}\Big],\ \Big[1-e^{-((-\log(1-\delta^{L}_{\partial}))^{ð})^{1/ð}},\ 1-e^{-((-\log(1-\delta^{U}_{\partial}))^{ð})^{1/ð}}\Big]\Big\rangle\\
&= \Big\langle\Big[e^{\log \beta^{L}_{\partial}},\ e^{\log \beta^{U}_{\partial}}\Big],\ \Big[1-e^{\log(1-\delta^{L}_{\partial})},\ 1-e^{\log(1-\delta^{U}_{\partial})}\Big]\Big\rangle
= ([\beta^{L}_{\partial},\beta^{U}_{\partial}],[\delta^{L}_{\partial},\delta^{U}_{\partial}]) = \tilde{\partial}.
\end{aligned}$$

Thus, $IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})=\tilde{\partial}$ holds.

**Theorem 4.** *(Boundedness) Let* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs. Let* $\tilde{\partial}^{-}=\min(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})$ *and* $\tilde{\partial}^{+}=\max(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})$*. Then,* $\tilde{\partial}^{-}\le IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})\le\tilde{\partial}^{+}$*.*

**Proof.** Let $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) be a collection of IVIFNs, and let $\tilde{\partial}^{-}=\min(\tilde{\partial}_{1},\ldots,\tilde{\partial}_{\hbar})=([\beta^{L-}_{\partial},\beta^{U-}_{\partial}],[\delta^{L-}_{\partial},\delta^{U-}_{\partial}])$ and $\tilde{\partial}^{+}=\max(\tilde{\partial}_{1},\ldots,\tilde{\partial}_{\hbar})=([\beta^{L+}_{\partial},\beta^{U+}_{\partial}],[\delta^{L+}_{\partial},\delta^{U+}_{\partial}])$. We have $\beta^{L-}_{\partial}=\min_{\zeta}\{\beta^{L}_{\partial_{\zeta}}\}$, $\beta^{U-}_{\partial}=\min_{\zeta}\{\beta^{U}_{\partial_{\zeta}}\}$, $\delta^{L-}_{\partial}=\max_{\zeta}\{\delta^{L}_{\partial_{\zeta}}\}$, $\delta^{U-}_{\partial}=\max_{\zeta}\{\delta^{U}_{\partial_{\zeta}}\}$, $\beta^{L+}_{\partial}=\max_{\zeta}\{\beta^{L}_{\partial_{\zeta}}\}$, $\beta^{U+}_{\partial}=\max_{\zeta}\{\beta^{U}_{\partial_{\zeta}}\}$, $\delta^{L+}_{\partial}=\min_{\zeta}\{\delta^{L}_{\partial_{\zeta}}\}$, and $\delta^{U+}_{\partial}=\min_{\zeta}\{\delta^{U}_{\partial_{\zeta}}\}$. Hence, we have the following inequalities:

$$e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{L-}_{\partial})^{ð})^{1/ð}} \le e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{L}_{\partial_{\zeta}})^{ð})^{1/ð}} \le e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{L+}_{\partial})^{ð})^{1/ð}},$$

$$e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{U-}_{\partial})^{ð})^{1/ð}} \le e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{U}_{\partial_{\zeta}})^{ð})^{1/ð}} \le e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log \beta^{U+}_{\partial})^{ð})^{1/ð}},$$

$$1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{L+}_{\partial}))^{ð})^{1/ð}} \le 1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{L}_{\partial_{\zeta}}))^{ð})^{1/ð}} \le 1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{L-}_{\partial}))^{ð})^{1/ð}},$$

$$1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{U+}_{\partial}))^{ð})^{1/ð}} \le 1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{U}_{\partial_{\zeta}}))^{ð})^{1/ð}} \le 1-e^{-(\sum_{\zeta=1}^{\hbar}\xi_{\zeta}(-\log(1-\delta^{U-}_{\partial}))^{ð})^{1/ð}}.$$

Therefore, $\tilde{\partial}^{-}\le IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})\le\tilde{\partial}^{+}$.

**Theorem 5.** *(Monotonicity) Let* $\tilde{\partial}_{\zeta}$ *and* $\tilde{\partial}'_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *be two sets of IVIFNs. If* $\tilde{\partial}_{\zeta}\le\tilde{\partial}'_{\zeta}$ *for all* ζ*, then* $IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})\le IVIFAAWG_{\xi}(\tilde{\partial}'_{1},\tilde{\partial}'_{2},\ldots,\tilde{\partial}'_{\hbar})$*.*

**Proof.** The proof is straightforward.

Now, we present the IVIF Aczel–Alsina ordered weighted geometric (IVIFAAOWG) operator.

**Definition 9.** *Let* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs. An IVIF Aczel–Alsina ordered weighted geometric (IVIFAAOWG) operator of dimension* ℏ *is a mapping* $IVIFAAOWG:(L^{\star})^{\hbar}\to L^{\star}$ *with the corresponding vector* $\xi=(\xi_{1},\xi_{2},\ldots,\xi_{\hbar})^{T}$ *such that* $\xi_{\zeta}\in[0,1]$ *and* $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$*, as*

$$\begin{aligned}
IVIFAAOWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= \bigotimes_{\zeta=1}^{\hbar}(\tilde{\partial}_{\varrho(\zeta)})^{\xi_{\zeta}}\\
&= (\tilde{\partial}_{\varrho(1)})^{\xi_{1}}\otimes(\tilde{\partial}_{\varrho(2)})^{\xi_{2}}\otimes\cdots\otimes(\tilde{\partial}_{\varrho(\hbar)})^{\xi_{\hbar}},
\end{aligned}$$

*where* $(\varrho(1),\varrho(2),\ldots,\varrho(\hbar))$ *is a permutation of* $(1,2,\ldots,\hbar)$ *for which* $\tilde{\partial}_{\varrho(\zeta-1)}\ge\tilde{\partial}_{\varrho(\zeta)}$ *for all* $\zeta=2,3,\ldots,\hbar$*.*

We establish the following theorem on IVIFNs based on the Aczel–Alsina product.

**Theorem 6.** *Let* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs. An IVIF Aczel–Alsina ordered weighted geometric (IVIFAAOWG) operator of dimension* ℏ *is a mapping* $IVIFAAOWG:(L^{\star})^{\hbar}\to L^{\star}$ *with the associated vector* $\vartheta=(\vartheta_{1},\vartheta_{2},\ldots,\vartheta_{\hbar})^{T}$ *such that* $\vartheta_{\zeta}\in[0,1]$ *and* $\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}=1$*. Then,*

$$\begin{aligned}
IVIFAAOWG_{\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= \bigotimes_{\zeta=1}^{\hbar}(\tilde{\partial}_{\varrho(\zeta)})^{\vartheta_{\zeta}}\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log \beta^{L}_{\partial_{\varrho(\zeta)}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log \beta^{U}_{\partial_{\varrho(\zeta)}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log(1-\delta^{L}_{\partial_{\varrho(\zeta)}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log(1-\delta^{U}_{\partial_{\varrho(\zeta)}}))^{ð})^{1/ð}}\Big]\Big\rangle,
\end{aligned}$$

*where* $(\varrho(1),\varrho(2),\ldots,\varrho(\hbar))$ *is a permutation of* $(1,2,\ldots,\hbar)$ *for which* $\tilde{\partial}_{\varrho(\zeta-1)}\ge\tilde{\partial}_{\varrho(\zeta)}$ *for all* $\zeta=2,3,\ldots,\hbar$*.*

**Proof.** Theorem 6 is obtained in the same manner as Theorem 2.
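In code, the ordering in Definition 9 requires a ranking of IVIFNs. Assuming the standard IVIF score function $s(\tilde{\partial})=(\beta^{L}+\beta^{U}-\delta^{L}-\delta^{U})/2$ (an assumption on our part; the paper's ranking rule is defined in its preliminaries), the IVIFAAOWG operator is simply the closed form of Equation (6) applied to the inputs sorted in decreasing order. A minimal Python sketch with names of our choosing:

```python
import math

def ivifaawg(ds, w, p):
    # Closed-form IVIFAAWG (Equation (6)); p is the parameter ð
    return ([math.exp(-sum(wz * (-math.log(d[0][k]))**p for wz, d in zip(w, ds))**(1 / p))
             for k in range(2)],
            [1 - math.exp(-sum(wz * (-math.log(1 - d[1][k]))**p for wz, d in zip(w, ds))**(1 / p))
             for k in range(2)])

def score(d):
    # Standard IVIF score function (our assumption for the ranking step)
    return (d[0][0] + d[0][1] - d[1][0] - d[1][1]) / 2

def ivifaaowg(ds, w, p):
    """IVIFAAOWG: the position weights w apply to the inputs sorted decreasingly."""
    return ivifaawg(sorted(ds, key=score, reverse=True), w, p)

# Illustrative data (not from the paper)
ds = [([0.35, 0.45], [0.45, 0.50]),
      ([0.75, 0.80], [0.15, 0.20]),
      ([0.55, 0.60], [0.35, 0.40])]
w, p = [0.2, 0.5, 0.3], 3.0
```

Because the inputs are sorted before the weights are applied, any permutation of `ds` yields the same aggregate, which is the commutativity stated below as Property 4.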

The following properties of the IVIFAAOWG operator can be readily proven.

**Property 1.** *(Idempotency) If* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *are identical, i.e.,* $\tilde{\partial}_{\zeta}=\tilde{\partial}$ *for every* ζ*, then* $IVIFAAOWG_{\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})=\tilde{\partial}$*.*

**Property 2.** *(Boundedness) Let* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs. Let* $\tilde{\partial}^{-}=\min_{\zeta}\tilde{\partial}_{\zeta}$ *and* $\tilde{\partial}^{+}=\max_{\zeta}\tilde{\partial}_{\zeta}$*. Then,* $\tilde{\partial}^{-}\le IVIFAAOWG_{\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})\le\tilde{\partial}^{+}$*.*

**Property 3.** *(Monotonicity) Suppose that* $\tilde{\partial}_{\zeta}$ *and* $\tilde{\partial}'_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *are two sets of IVIFNs and* $\tilde{\partial}_{\zeta}\le\tilde{\partial}'_{\zeta}$ *for every* ζ*; then* $IVIFAAOWG_{\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})\le IVIFAAOWG_{\vartheta}(\tilde{\partial}'_{1},\tilde{\partial}'_{2},\ldots,\tilde{\partial}'_{\hbar})$*.*

**Property 4.** *(Commutativity) Let* $\tilde{\partial}_{\zeta}$ *and* $\tilde{\partial}'_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *be two sets of IVIFNs; then* $IVIFAAOWG_{\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar})=IVIFAAOWG_{\vartheta}(\tilde{\partial}'_{1},\tilde{\partial}'_{2},\ldots,\tilde{\partial}'_{\hbar})$*, where* $\tilde{\partial}'_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *is any permutation of* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$)*.*

As defined in Definition 8, the IVIFAAWG operator weights only the IVIFNs themselves, while the IVIFAAOWG operator of Definition 9 weights only the IVIFNs' ordered positions. The weights thus capture different aspects in the two operators, yet each operator accounts for only one of them. To overcome this disadvantage, we next present the IVIF Aczel–Alsina hybrid geometric (IVIFAAHG) operator, which weights both the given IVIFN and its ordered position.

**Definition 10.** *Let* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs. An IVIFAAHG operator of dimension* ℏ *is a function* $IVIFAAHG:(L^{\star})^{\hbar}\to L^{\star}$*, such that*

$$\begin{aligned}
IVIFAAHG_{\xi,\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= \bigotimes_{\zeta=1}^{\hbar}(\dot{\tilde{\partial}}_{\varrho(\zeta)})^{\vartheta_{\zeta}}\\
&= (\dot{\tilde{\partial}}_{\varrho(1)})^{\vartheta_{1}}\otimes(\dot{\tilde{\partial}}_{\varrho(2)})^{\vartheta_{2}}\otimes\cdots\otimes(\dot{\tilde{\partial}}_{\varrho(\hbar)})^{\vartheta_{\hbar}},
\end{aligned}$$

*where* $\vartheta=(\vartheta_{1},\vartheta_{2},\ldots,\vartheta_{\hbar})^{T}$ *is the weighting vector associated with the IVIFAAHG operator, with* $\vartheta_{\zeta}\in[0,1]$ ($\zeta=1,2,\ldots,\hbar$) *and* $\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}=1$*;* $\dot{\tilde{\partial}}_{\zeta}=\tilde{\partial}^{\hbar\xi_{\zeta}}_{\zeta}$*,* $\zeta=1,2,\ldots,\hbar$*;* $(\dot{\tilde{\partial}}_{\varrho(1)},\dot{\tilde{\partial}}_{\varrho(2)},\ldots,\dot{\tilde{\partial}}_{\varrho(\hbar)})$ *is the permutation of the weighted IVIFNs* $(\dot{\tilde{\partial}}_{1},\dot{\tilde{\partial}}_{2},\ldots,\dot{\tilde{\partial}}_{\hbar})$ *such that* $\dot{\tilde{\partial}}_{\varrho(\zeta-1)}\ge\dot{\tilde{\partial}}_{\varrho(\zeta)}$ ($\zeta=2,3,\ldots,\hbar$)*;* $\xi=(\xi_{1},\xi_{2},\ldots,\xi_{\hbar})^{T}$ *is the weight vector of* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$)*, with* $\xi_{\zeta}\in[0,1]$ *and* $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$*; and* ℏ *is the balancing coefficient.*

The following theorem can be deduced using the Aczel–Alsina operations on IVIF information.

**Theorem 7.** *Let* $\tilde{\partial}_{\zeta}=([\beta^{L}_{\partial_{\zeta}},\beta^{U}_{\partial_{\zeta}}],[\delta^{L}_{\partial_{\zeta}},\delta^{U}_{\partial_{\zeta}}])$ ($\zeta=1,2,\ldots,\hbar$) *be a collection of IVIFNs and* ð ∈ [0, ∞]*. Their aggregated value by the IVIFAAHG operator is still an IVIFN, and*

$$\begin{aligned}
IVIFAAHG_{\xi,\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= \bigotimes_{\zeta=1}^{\hbar}(\dot{\tilde{\partial}}_{\varrho(\zeta)})^{\vartheta_{\zeta}}\\
&= \Big\langle\Big[e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log \beta^{L}_{\dot{\partial}_{\varrho(\zeta)}})^{ð})^{1/ð}},\ e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log \beta^{U}_{\dot{\partial}_{\varrho(\zeta)}})^{ð})^{1/ð}}\Big],\\
&\qquad\ \Big[1-e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log(1-\delta^{L}_{\dot{\partial}_{\varrho(\zeta)}}))^{ð})^{1/ð}},\ 1-e^{-(\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}(-\log(1-\delta^{U}_{\dot{\partial}_{\varrho(\zeta)}}))^{ð})^{1/ð}}\Big]\Big\rangle,
\end{aligned}$$

*where* $\vartheta=(\vartheta_{1},\vartheta_{2},\ldots,\vartheta_{\hbar})^{T}$ *is the weighting vector associated with the IVIFAAHG operator, with* $\vartheta_{\zeta}\in[0,1]$ ($\zeta=1,2,\ldots,\hbar$) *and* $\sum_{\zeta=1}^{\hbar}\vartheta_{\zeta}=1$*;* $\dot{\tilde{\partial}}_{\zeta}=\tilde{\partial}^{\hbar\xi_{\zeta}}_{\zeta}$*,* $\zeta=1,2,\ldots,\hbar$*;* $(\dot{\tilde{\partial}}_{\varrho(1)},\dot{\tilde{\partial}}_{\varrho(2)},\ldots,\dot{\tilde{\partial}}_{\varrho(\hbar)})$ *is the permutation of the weighted IVIFNs* $(\dot{\tilde{\partial}}_{1},\dot{\tilde{\partial}}_{2},\ldots,\dot{\tilde{\partial}}_{\hbar})$ *such that* $\dot{\tilde{\partial}}_{\varrho(\zeta-1)}\ge\dot{\tilde{\partial}}_{\varrho(\zeta)}$ ($\zeta=2,3,\ldots,\hbar$)*;* $\xi=(\xi_{1},\xi_{2},\ldots,\xi_{\hbar})^{T}$ *is the weight vector of* $\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$)*, with* $\xi_{\zeta}\in[0,1]$ *and* $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$*; and* ℏ *is the balancing coefficient.*

**Proof.** Theorem 7 is obtained in the same manner as Theorem 2.

**Theorem 8.** *The IVIFAAWG and IVIFAAOWG operators are both special cases of the IVIFAAHG operator.*

**Proof.** (1) Assume $\vartheta=(1/\hbar,1/\hbar,\ldots,1/\hbar)^{T}$. Then,

$$\begin{aligned}
IVIFAAHG_{\xi,\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= (\dot{\tilde{\partial}}_{\varrho(1)})^{\vartheta_{1}}\otimes(\dot{\tilde{\partial}}_{\varrho(2)})^{\vartheta_{2}}\otimes\cdots\otimes(\dot{\tilde{\partial}}_{\varrho(\hbar)})^{\vartheta_{\hbar}}\\
&= (\dot{\tilde{\partial}}_{\varrho(1)})^{1/\hbar}\otimes(\dot{\tilde{\partial}}_{\varrho(2)})^{1/\hbar}\otimes\cdots\otimes(\dot{\tilde{\partial}}_{\varrho(\hbar)})^{1/\hbar}\\
&= (\tilde{\partial}_{1})^{\xi_{1}}\otimes(\tilde{\partial}_{2})^{\xi_{2}}\otimes\cdots\otimes(\tilde{\partial}_{\hbar})^{\xi_{\hbar}}\\
&= IVIFAAWG_{\xi}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}).
\end{aligned}$$

(2) Let $\xi=(1/\hbar,1/\hbar,\ldots,1/\hbar)^{T}$. Then $\dot{\tilde{\partial}}_{\zeta}=\tilde{\partial}_{\zeta}$ ($\zeta=1,2,\ldots,\hbar$) and

$$\begin{aligned}
IVIFAAHG_{\xi,\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}) &= (\dot{\tilde{\partial}}_{\varrho(1)})^{\vartheta_{1}}\otimes(\dot{\tilde{\partial}}_{\varrho(2)})^{\vartheta_{2}}\otimes\cdots\otimes(\dot{\tilde{\partial}}_{\varrho(\hbar)})^{\vartheta_{\hbar}}\\
&= (\tilde{\partial}_{\varrho(1)})^{\vartheta_{1}}\otimes(\tilde{\partial}_{\varrho(2)})^{\vartheta_{2}}\otimes\cdots\otimes(\tilde{\partial}_{\varrho(\hbar)})^{\vartheta_{\hbar}}\\
&= IVIFAAOWG_{\vartheta}(\tilde{\partial}_{1},\tilde{\partial}_{2},\ldots,\tilde{\partial}_{\hbar}),
\end{aligned}$$

which completes the proof.
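The first reduction of Theorem 8 can be confirmed numerically: with $\vartheta=(1/\hbar,\ldots,1/\hbar)$, uniform position weights make the ordering irrelevant and $(\tilde{\partial}^{\hbar\xi_{\zeta}})^{1/\hbar}=\tilde{\partial}^{\xi_{\zeta}}$, so the hybrid operator collapses to IVIFAAWG. A Python sketch (helper names are ours; ranking by the standard score function is our assumption):

```python
import math

def ivifaawg(ds, w, p):
    # Closed-form IVIFAAWG (Equation (6)); p is the parameter ð
    return ([math.exp(-sum(wz * (-math.log(d[0][k]))**p for wz, d in zip(w, ds))**(1 / p))
             for k in range(2)],
            [1 - math.exp(-sum(wz * (-math.log(1 - d[1][k]))**p for wz, d in zip(w, ds))**(1 / p))
             for k in range(2)])

def aa_power(x, phi, p):
    # Aczel-Alsina power of an IVIFN
    return ([math.exp(-(phi * (-math.log(a))**p)**(1 / p)) for a in x[0]],
            [1 - math.exp(-(phi * (-math.log(1 - a))**p)**(1 / p)) for a in x[1]])

def score(d):
    # Standard IVIF score function (our assumption for the ranking step)
    return (d[0][0] + d[0][1] - d[1][0] - d[1][1]) / 2

def ivifaahg(ds, xi, theta, p):
    """IVIFAAHG: reweight each input to its balanced power, sort, then apply theta."""
    h = len(ds)
    dotted = sorted((aa_power(d, h * wz, p) for d, wz in zip(ds, xi)),
                    key=score, reverse=True)
    return ivifaawg(dotted, theta, p)

# Illustrative data (not from the paper)
ds = [([0.75, 0.80], [0.15, 0.20]),
      ([0.35, 0.45], [0.45, 0.50]),
      ([0.55, 0.60], [0.35, 0.40])]
xi, p = [0.3, 0.2, 0.5], 3.0
theta = [1 / 3, 1 / 3, 1 / 3]

hybrid = ivifaahg(ds, xi, theta, p)   # uniform theta ...
plain = ivifaawg(ds, xi, p)           # ... should reproduce IVIFAAWG
```

The two results coincide to within roundoff, mirroring part (1) of the proof.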

## **5. MADM Methods Influenced by IVIFAAWG Operator**

In this section, we apply the IVIFAAWG operator to develop a method for addressing MADM problems with IVIF information.

For a MADM problem, let $\Phi=\{\Phi_{1},\Phi_{2},\ldots,\Phi_{\psi}\}$ be the set of alternatives and $J=\{J_{1},J_{2},\ldots,J_{\hbar}\}$ be the set of attributes, with attribute weight vector $\xi=(\xi_{1},\xi_{2},\ldots,\xi_{\hbar})^{T}$ fulfilling $\xi_{\zeta}\in[0,1]$ and $\sum_{\zeta=1}^{\hbar}\xi_{\zeta}=1$. We express the assessment of the alternative $\Phi_{\wp}$ concerning the criterion $J_{\zeta}$ by $\widetilde{\Upsilon}_{\wp\zeta}=([\beta^{L}_{\partial_{\wp\zeta}},\beta^{U}_{\partial_{\wp\zeta}}],[\delta^{L}_{\partial_{\wp\zeta}},\delta^{U}_{\partial_{\wp\zeta}}])$, so that $\Gamma=(\widetilde{\Upsilon}_{\wp\zeta})_{\psi\times\hbar}$ is an IVIF decision matrix. Hence, the MADM problem with IVIFNs may be written in the matrix form given by Equation (7).

$$\Gamma = \left(\widetilde{\Upsilon}_{\wp\zeta}\right)_{\psi\times\hbar} =
\begin{pmatrix}
([\beta^{L}_{\partial_{11}}, \beta^{U}_{\partial_{11}}], [\delta^{L}_{\partial_{11}}, \delta^{U}_{\partial_{11}}]) & ([\beta^{L}_{\partial_{12}}, \beta^{U}_{\partial_{12}}], [\delta^{L}_{\partial_{12}}, \delta^{U}_{\partial_{12}}]) & \cdots & ([\beta^{L}_{\partial_{1\hbar}}, \beta^{U}_{\partial_{1\hbar}}], [\delta^{L}_{\partial_{1\hbar}}, \delta^{U}_{\partial_{1\hbar}}]) \\
([\beta^{L}_{\partial_{21}}, \beta^{U}_{\partial_{21}}], [\delta^{L}_{\partial_{21}}, \delta^{U}_{\partial_{21}}]) & ([\beta^{L}_{\partial_{22}}, \beta^{U}_{\partial_{22}}], [\delta^{L}_{\partial_{22}}, \delta^{U}_{\partial_{22}}]) & \cdots & ([\beta^{L}_{\partial_{2\hbar}}, \beta^{U}_{\partial_{2\hbar}}], [\delta^{L}_{\partial_{2\hbar}}, \delta^{U}_{\partial_{2\hbar}}]) \\
\vdots & \vdots & \ddots & \vdots \\
([\beta^{L}_{\partial_{\psi 1}}, \beta^{U}_{\partial_{\psi 1}}], [\delta^{L}_{\partial_{\psi 1}}, \delta^{U}_{\partial_{\psi 1}}]) & ([\beta^{L}_{\partial_{\psi 2}}, \beta^{U}_{\partial_{\psi 2}}], [\delta^{L}_{\partial_{\psi 2}}, \delta^{U}_{\partial_{\psi 2}}]) & \cdots & ([\beta^{L}_{\partial_{\psi\hbar}}, \beta^{U}_{\partial_{\psi\hbar}}], [\delta^{L}_{\partial_{\psi\hbar}}, \delta^{U}_{\partial_{\psi\hbar}}])
\end{pmatrix} \tag{7}$$

where the rows correspond to the alternatives $\Phi_1, \dots, \Phi_\psi$, the columns to the attributes $J_1, \dots, J_\hbar$, and every component $\widetilde{\Upsilon}_{\wp\zeta} = ([\beta^L_{\partial_{\wp\zeta}}, \beta^U_{\partial_{\wp\zeta}}], [\delta^L_{\partial_{\wp\zeta}}, \delta^U_{\partial_{\wp\zeta}}])$ is an IVIFN. Here, $[\beta^L_{\partial_{\wp\zeta}}, \beta^U_{\partial_{\wp\zeta}}]$ is the membership degree to which the alternative $\Phi_\wp$ fulfills the attribute $J_\zeta$, as assessed by the decision-makers, and $[\delta^L_{\partial_{\wp\zeta}}, \delta^U_{\partial_{\wp\zeta}}]$ is the degree to which the alternative $\Phi_\wp$ does not fulfill the attribute $J_\zeta$, where $[\beta^L_{\partial_{\wp\zeta}}, \beta^U_{\partial_{\wp\zeta}}] \subset D[0, 1]$, $[\delta^L_{\partial_{\wp\zeta}}, \delta^U_{\partial_{\wp\zeta}}] \subset D[0, 1]$, and $0 \le \beta^U_{\partial_{\wp\zeta}} + \delta^U_{\partial_{\wp\zeta}} \le 1$ $(\wp = 1, 2, \dots, \psi;\ \zeta = 1, 2, \dots, \hbar)$.

The method based on the IVIFAAWG operator for solving MADM problems with IVIF data consists of the following steps:

**Step 1.** Transform the decision matrix $\Gamma = (\widetilde{\Upsilon}_{\wp\zeta})_{\psi\times\hbar}$ into the normalized matrix $\overline{\Gamma} = (\overline{\widetilde{\Upsilon}}_{\wp\zeta})_{\psi\times\hbar}$:

$$\overline{\widetilde{\Upsilon}}_{\wp\zeta} = \begin{cases} \widetilde{\Upsilon}_{\wp\zeta} & \text{for a benefit attribute } J_\zeta, \\ (\widetilde{\Upsilon}_{\wp\zeta})^{c} & \text{for a cost attribute } J_\zeta, \end{cases} \tag{8}$$

where $(\widetilde{\Upsilon}_{\wp\zeta})^{c}$ is the complement of $\widetilde{\Upsilon}_{\wp\zeta}$, i.e., $(\widetilde{\Upsilon}_{\wp\zeta})^{c} = ([\delta^L_{\partial_{\wp\zeta}}, \delta^U_{\partial_{\wp\zeta}}], [\beta^L_{\partial_{\wp\zeta}}, \beta^U_{\partial_{\wp\zeta}}])$.

In fact, if all the attributes $J_\zeta$ $(\zeta = 1, 2, \dots, \hbar)$ are of the same type, there is no need to normalize them; if both types occur, we convert the cost attributes into benefit attributes, and $\Gamma = (\widetilde{\Upsilon}_{\wp\zeta})_{\psi\times\hbar}$ is thereby transformed into the normalized IVIF decision matrix $\overline{\Gamma} = (\overline{\widetilde{\Upsilon}}_{\wp\zeta})_{\psi\times\hbar}$.
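The normalization of Step 1 can be sketched in a few lines. This is an illustrative sketch only: the nested-tuple representation `([bL, bU], [dL, dU])` of an IVIFN and all function names are assumptions, not taken from the paper.

```python
# Sketch of Step 1 (Equation (8)): normalize an IVIF decision matrix by
# replacing cost-attribute entries with their complements.

def complement(ivifn):
    """Complement of an IVIFN: swap the membership and non-membership intervals."""
    membership, non_membership = ivifn
    return (non_membership, membership)

def normalize(matrix, is_benefit):
    """matrix[p][z] is an IVIFN; is_benefit[z] is True for benefit attributes."""
    return [[cell if is_benefit[z] else complement(cell)
             for z, cell in enumerate(row)]
            for row in matrix]

# One alternative rated on a benefit attribute and a cost attribute.
matrix = [[((0.4, 0.5), (0.2, 0.3)), ((0.6, 0.7), (0.1, 0.2))]]
normalized = normalize(matrix, [True, False])
```

With this data, the cost-attribute entry becomes `((0.1, 0.2), (0.6, 0.7))` while the benefit-attribute entry is unchanged.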

**Step 2.** Use the decision data in the matrix $\overline{\Gamma}$ and the IVIFAAWG operator to obtain the overall preference value $\widetilde{\Upsilon}_\wp$ $(\wp = 1, 2, \dots, \psi)$ of each alternative $\Phi_\wp$, i.e.,

$$\begin{split} \widetilde{\Upsilon}_{\wp} &= IVIFAAWG_{\xi}(\widetilde{\Upsilon}_{\wp 1}, \widetilde{\Upsilon}_{\wp 2}, \dots, \widetilde{\Upsilon}_{\wp\hbar}) = \bigotimes_{\zeta=1}^{\hbar} \left(\widetilde{\Upsilon}_{\wp\zeta}\right)^{\xi_{\zeta}} \\ &= \left\langle \left[ e^{-\left(\sum_{\zeta=1}^{\hbar} \xi_{\zeta} \left(-\log \beta^{L}_{\partial_{\wp\zeta}}\right)^{\eth}\right)^{1/\eth}}, \; e^{-\left(\sum_{\zeta=1}^{\hbar} \xi_{\zeta} \left(-\log \beta^{U}_{\partial_{\wp\zeta}}\right)^{\eth}\right)^{1/\eth}} \right], \right. \\ &\qquad \left. \left[ 1 - e^{-\left(\sum_{\zeta=1}^{\hbar} \xi_{\zeta} \left(-\log\left(1 - \delta^{L}_{\partial_{\wp\zeta}}\right)\right)^{\eth}\right)^{1/\eth}}, \; 1 - e^{-\left(\sum_{\zeta=1}^{\hbar} \xi_{\zeta} \left(-\log\left(1 - \delta^{U}_{\partial_{\wp\zeta}}\right)\right)^{\eth}\right)^{1/\eth}} \right] \right\rangle. \end{split} \tag{9}$$

**Step 3.** Rank the alternatives in order of preference. Use the method of Definition 3 to rank the overall rating values $\widetilde{\Upsilon}_\wp$ $(\wp = 1, 2, \dots, \psi)$, and rank the alternatives $\Phi_\wp$ $(\wp = 1, 2, \dots, \psi)$ accordingly in descending order. Lastly, choose the best alternative(s), namely the one(s) with the highest rating value.

**Step 4.** End.
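Steps 2 and 3 can be sketched as follows. The score function used here, $(\beta^L + \beta^U - \delta^L - \delta^U)/2$, is a common choice for IVIFNs; the paper's Definition 3 is not reproduced in this section, so treat that choice, the data, and all names as assumptions.

```python
import math

# Sketch of Steps 2-3: aggregate each alternative's row of IVIFNs with the
# IVIFAAWG operator (Equation (9)) and rank by score. An IVIFN is a pair of
# intervals ([bL, bU], [dL, dU]).

def _aa_geo(values, weights, d):
    """Aczel-Alsina geometric merge: exp(-(sum_j w_j * (-ln v_j)^d)^(1/d))."""
    return math.exp(-sum(w * (-math.log(v)) ** d
                         for w, v in zip(weights, values)) ** (1.0 / d))

def ivifaawg(ivifns, weights, d):
    """Aggregate a row of IVIFNs into one overall IVIFN (Equation (9))."""
    bL = _aa_geo([x[0][0] for x in ivifns], weights, d)
    bU = _aa_geo([x[0][1] for x in ivifns], weights, d)
    dL = 1 - _aa_geo([1 - x[1][0] for x in ivifns], weights, d)
    dU = 1 - _aa_geo([1 - x[1][1] for x in ivifns], weights, d)
    return ((bL, bU), (dL, dU))

def score(ivifn):
    """Assumed score function for an IVIFN: (bL + bU - dL - dU) / 2."""
    (bL, bU), (dL, dU) = ivifn
    return (bL + bU - dL - dU) / 2

# Two alternatives rated on two attributes (hypothetical data).
weights = [0.4, 0.6]
rows = [
    [((0.5, 0.6), (0.2, 0.3)), ((0.7, 0.8), (0.10, 0.15))],
    [((0.4, 0.5), (0.3, 0.4)), ((0.6, 0.7), (0.20, 0.25))],
]
overall = [ivifaawg(row, weights, d=2.0) for row in rows]
ranking = sorted(range(len(rows)), key=lambda i: score(overall[i]), reverse=True)
```

A quick sanity check: with $\eth = 1$ the operator reduces to the plain IVIF weighted geometric, so the aggregated membership bound equals $\prod_\zeta (\beta_\zeta)^{\xi_\zeta}$.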

## **6. Numerical Example**

This section presents an illustrative example demonstrating the proposed methodology for choosing an appropriate car.

## *6.1. Problem Description*

Consider a consumer who intends to purchase a car. There are five distinct types of cars (alternatives) $\Phi_\wp$ $(\wp = 1, 2, \dots, 5)$. The consumer considers six attributes when deciding which car to buy (adapted from Herrera and Martinez [44]): $J_1$: fuel economy; $J_2$: aerodynamic degree; $J_3$: price; $J_4$: comfort; $J_5$: design; and $J_6$: security. The weight vector of the attributes $J_\zeta$ $(\zeta = 1, 2, \dots, 6)$ is $\xi = (0.15, 0.25, 0.14, 0.16, 0.20, 0.10)^T$. Suppose that the features of the alternatives $\Phi_\wp$ $(\wp = 1, 2, \dots, 5)$ are described by IVIFNs, as given in the IVIF decision matrix $\Gamma = (\widetilde{\Upsilon}_{\wp\zeta})_{5\times 6}$ (Table 1).

**Table 1.** IVIF decision matrix.


## *6.2. The IVIFAAWG Operator-Based Technique*

To determine the most suitable car $\Phi_\wp$ $(\wp = 1, 2, \dots, 5)$, we employ the IVIFAAWG operator to solve the MADM problem with interval-valued intuitionistic fuzzy information. The overall preference values are obtained as follows:

- $\widetilde{\Upsilon}_1 = ([0.637689, 0.762041], [0.154922, 0.224767])$,
- $\widetilde{\Upsilon}_2 = ([0.547075, 0.633999], [0.237723, 0.318131])$,
- $\widetilde{\Upsilon}_3 = ([0.516381, 0.595996], [0.241492, 0.328694])$,
- $\widetilde{\Upsilon}_4 = ([0.591841, 0.691298], [0.212673, 0.286312])$,
- $\widetilde{\Upsilon}_5 = ([0.574598, 0.669233], [0.226311, 0.324705])$.

**Table 2.** Normalized IVIF decision matrix.


## **7. The Impact of the Parameter ð in This Technique**

To show how different values of the parameter $\eth$ affect the results, we use various values of $\eth$ to rank the alternatives. The IVIFAAWG operator is used to rank the alternatives $\Phi_\wp$ $(\wp = 1, 2, \dots, 5)$; the results are shown in Table 3 and Figure 2. Clearly, as the value of $\eth$ for the IVIFAAWG operator grows, the score values of the alternatives decrease, but the ranking stays the same: $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$. Thus, the best alternative is $\Phi_1$.

**Table 3.** Ranking order of the alternatives for various values of the parameter $\eth$ by the IVIFAAWG operator.

| $\eth$ | $\hat{K}(\widetilde{\Upsilon}_1)$ | $\hat{K}(\widetilde{\Upsilon}_2)$ | $\hat{K}(\widetilde{\Upsilon}_3)$ | $\hat{K}(\widetilde{\Upsilon}_4)$ | $\hat{K}(\widetilde{\Upsilon}_5)$ | Ranking Order |
|---|---|---|---|---|---|---|
| 1 | 0.510021 | 0.312610 | 0.271095 | 0.392077 | 0.346408 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 2 | 0.432522 | 0.274768 | 0.232654 | 0.369262 | 0.315391 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 3 | 0.374209 | 0.245251 | 0.200569 | 0.344796 | 0.289366 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 4 | 0.332473 | 0.222023 | 0.174023 | 0.319924 | 0.267292 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 5 | 0.302133 | 0.203274 | 0.152149 | 0.295960 | 0.248489 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 6 | 0.279331 | 0.187738 | 0.134129 | 0.273905 | 0.232478 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 7 | 0.261630 | 0.174584 | 0.119238 | 0.254265 | 0.218860 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 8 | 0.247512 | 0.163268 | 0.106868 | 0.237122 | 0.207273 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 9 | 0.236003 | 0.153420 | 0.096520 | 0.222303 | 0.197390 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |
| 10 | 0.226454 | 0.144778 | 0.087797 | 0.209530 | 0.188927 | $\Phi_1 \succ \Phi_4 \succ \Phi_5 \succ \Phi_2 \succ \Phi_3$ |

**Figure 2.** Score values belonging to the alternatives for various values ð by IVIFAAWG operator.

Additionally, Figure 2 shows that the ranking results remain identical as the value of $\eth$ changes in the example, demonstrating the robustness of the IVIFAAWG operator.
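The trend just described can be checked numerically. The data below are hypothetical (Table 1's entries are not reproduced in this section), and the score function is the usual $(\beta^L + \beta^U - \delta^L - \delta^U)/2$, assumed rather than taken from the paper's Definition 3.

```python
import math

# Numerical check of the behaviour described above: as the Aczel-Alsina
# parameter d grows, the IVIFAAWG score of an alternative decreases.

def aa_geo(values, weights, d):
    """exp(-(sum_j w_j * (-ln v_j)^d)^(1/d)), the Aczel-Alsina geometric merge."""
    return math.exp(-sum(w * (-math.log(v)) ** d
                         for w, v in zip(weights, values)) ** (1.0 / d))

def score(row, weights, d):
    """Score (bL + bU - dL - dU) / 2 of the IVIFAAWG aggregate of one row."""
    bL = aa_geo([c[0][0] for c in row], weights, d)
    bU = aa_geo([c[0][1] for c in row], weights, d)
    dL = 1 - aa_geo([1 - c[1][0] for c in row], weights, d)
    dU = 1 - aa_geo([1 - c[1][1] for c in row], weights, d)
    return (bL + bU - dL - dU) / 2

weights = [0.5, 0.5]
row = [((0.5, 0.6), (0.2, 0.3)), ((0.7, 0.8), (0.1, 0.2))]
scores = [score(row, weights, d) for d in range(1, 11)]
```

Because the weighted power mean is non-decreasing in its order, the membership bounds shrink and the non-membership bounds grow as $\eth$ increases, so the score is non-increasing in $\eth$; this matches the pattern visible in Table 3.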

## **8. Sensitivity Analysis (SA) of Criteria Weights**

To investigate the effect of the criteria weights on the ranking order, we present a sensitivity analysis. It uses 24 different weight sets, $Q_1, Q_2, \dots, Q_{24}$ (Table 4), formed from combinations of the criteria weights $\eta_1 = 0.15$, $\eta_2 = 0.25$, $\eta_3 = 0.14$, $\eta_4 = 0.16$, $\eta_5 = 0.20$, and $\eta_6 = 0.10$. This provides a broader range of criteria weights for examining the sensitivity of the developed model. The scores of the alternatives are collected in Figure 3, and their respective ranking orders are listed in Table 5. Examining the ranking orders, we see that $\Phi_1$ holds the first rank in 100% of the scenarios when the IVIFAAWG operator (taking $\eth = 2$) is applied. Hence, the priority of alternatives obtained by our developed method is credible.

**Table 4.** Various weight sets of criteria.
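The weight-sensitivity idea can be illustrated in code. The paper's 24 specific weight sets $Q_1, \dots, Q_{24}$ are not reproduced here, so as a stand-in we sweep all distinct permutations of a small hypothetical weight vector, with $\eth = 2$ as in the text; data and names are illustrative assumptions.

```python
import itertools
import math

# Illustration of the sensitivity analysis: re-rank alternatives under many
# weight vectors and record whether the top choice changes.

def aa_geo(values, weights, d):
    """exp(-(sum_j w_j * (-ln v_j)^d)^(1/d)), the Aczel-Alsina geometric merge."""
    return math.exp(-sum(w * (-math.log(v)) ** d
                         for w, v in zip(weights, values)) ** (1.0 / d))

def score(row, weights, d=2.0):
    """Score of the IVIFAAWG aggregate of one alternative's row of IVIFNs."""
    bL = aa_geo([c[0][0] for c in row], weights, d)
    bU = aa_geo([c[0][1] for c in row], weights, d)
    dL = 1 - aa_geo([1 - c[1][0] for c in row], weights, d)
    dU = 1 - aa_geo([1 - c[1][1] for c in row], weights, d)
    return (bL + bU - dL - dU) / 2

# Two alternatives rated on three attributes (hypothetical data).
alternatives = [
    [((0.6, 0.7), (0.1, 0.2)), ((0.4, 0.5), (0.3, 0.4)), ((0.5, 0.6), (0.2, 0.3))],
    [((0.5, 0.6), (0.2, 0.3)), ((0.5, 0.6), (0.2, 0.3)), ((0.4, 0.5), (0.3, 0.4))],
]
winners = set()
for w in set(itertools.permutations((0.5, 0.3, 0.2))):
    s = [score(row, w) for row in alternatives]
    winners.add(max(range(len(s)), key=s.__getitem__))
```

If `winners` contains a single index, the top alternative is insensitive to these weight permutations, which is the kind of stability reported above for $\Phi_1$.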

**Figure 3.** Utility values of alternatives for distinct sets of weighted criteria.



## **9. Comparison Study**

Next, we compare our proposed approach with some conventional methods: the IVIF weighted averaging (IVIFWA) operator [5], the IVIF weighted geometric (IVIFWG) operator [39], the IVIF Einstein weighted geometric ($IVIFWG^{\varepsilon}$) operator [16], and the IVIF Einstein weighted averaging ($IVIFWA^{\varepsilon}$) operator [17]. The comparison results are given in Tables 6 and 7 and depicted visually in Figure 4. Tables 3 and 6 show that the IVIFWG operator is a special case of the IVIFAAWG operator, obtained when $\eth = 1$.

Consequently, our proposed procedures for solving IVIF MADM problems are more general and flexible than some of the techniques currently in use.


**Table 7.** Qualitative evaluations of the current methods.


**Figure 4.** Comparison analysis with a few prevailing techniques.

## **10. Conclusions**

We began this study by extending the Aczel–Alsina *t*-norm and *t*-conorm to IVIF scenarios, defining and examining several additional operating laws for IVIFNs. In light of these new operating laws, new aggregation operators, namely the IVIFAAWG, IVIFAAOWG, and IVIFAAHG operators, were devised to accommodate situations in which the given assertions are IVIFNs. The fundamental characteristics of the proposed operators were examined, as well as their special cases. We provided a practical approach to MADM problems with IVIFNs based on the IVIFAAWG operator. Furthermore, an illustrative scenario of choosing a suitable car was used to demonstrate the developed model, and a comparative study with other methods was undertaken to show the proposed operators' distinct advantages. In future studies, we plan to extend the challenge further by introducing new characteristics, including the use of probabilistic aggregations. Additionally, we will address further decision-making applications such as cluster analysis, performance analysis [45], sustainable city logistics [46], risk investment assessment [47], wireless sensor networks [48], capital budgeting techniques [49], the home buying process [50], and other domains in uncertain environments [51–58].

**Author Contributions:** Conceptualization, T.S. and R.M.; methodology, T.S. and V.S.; validation, V.S.; formal analysis, V.S. and R.A.; investigation, R.M.; data curation, R.C.; writing—original draft preparation, T.S. and A.I.; writing—review and editing, T.S. and R.C.; supervision, R.M.; project administration, V.S.; funding acquisition, A.I. and R.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** The author, Rifaqat Ali, extends his appreciation to Deanship of Scientific Research at King Khalid University, for funding this work through General Research Project under grant number (GRP/93/43). The author, Aiyared Iampan, is thankful to the revenue budget in 2022, School of Science, University of Phayao, for supporting this research. The author, Radko Mesiar, is thankful to the Slovak Research and Development Agency for supporting this research through Grant Number APVV-18-0052.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **List of Abbreviations**

The following abbreviations are used in this manuscript:


## **References**


## *Article* **New Fuzzy Extensions on Binomial Distribution**

**Gia Sirbiladze 1,\* , Janusz Kacprzyk <sup>2</sup> , Teimuraz Manjafarashvili <sup>1</sup> , Bidzina Midodashvili <sup>1</sup> and Bidzina Matsaberidze <sup>1</sup>**


**Abstract:** The use of discrete probabilistic distributions is relevant to many practical tasks, especially in present-day situations where the data on distribution are insufficient and expert knowledge and evaluations are the only instruments for the restoration of probability distributions. However, in such cases, uncertainty arises, and it becomes necessary to build suitable approaches to overcome it. In this direction, this paper discusses a new approach of fuzzy binomial distributions (BDs) and their extensions. Four cases are considered: (1) When the elementary events are fuzzy. Based on this information, the probabilistic distribution of the corresponding fuzzy-random binomial variable is calculated. The conditions of restrictions on this distribution are obtained, and it is shown that these conditions depend on the ratio of success and failure of membership levels. The formulas for the generating function (GF) of the constructed distribution and the first and second order moments are also obtained. The Poisson distribution is calculated as the limit case of a fuzzy-random binomial experiment. (2) When the number of successes is of a fuzzy nature and is represented as a fuzzy subset of the set of possible success numbers. The formula for calculating the probability of convolution of binomial dependent fuzzy events is obtained, and the corresponding GF is built. As a result, the scheme for calculating the mathematical expectation of the number of fuzzy successes is defined. (3) When the spectrum of the extended distribution is fuzzy. The discussion is based on the concepts of a fuzzy-random event and its probability, as well as the notion of fuzzy random events independence. The fuzzy binomial upper distribution is specifically considered. In this case the fuzziness is represented by the membership levels of the binomial and non-binomial events of the complete failure complex. 
The GF of the constructed distribution and the first-order moment of the distribution are also calculated. Sufficient conditions for the existence of a limit distribution and a Poisson distribution are also obtained. (4) As is known, based on the analysis of lexical material, the linguistic spectrum of the statistical process of word-formation becomes two-component when switching to vocabulary. For this, two variants of the hybrid fuzzy-probabilistic process are constructed, which can be used in the analysis of the linguistic spectrum of the statistical process of word-formation. A fuzzy extension of standard Fuchs distribution is also presented, where the fuzziness is reflected in the growing numbers of failures. For better representation of the results, the examples of fuzzy BD are illustrated in each section.

**Keywords:** fuzzy-sets; fuzzy-random variables; distribution generating function; fuzzy binomial distribution; Fuchs distribution

**MSC:** 03E72; 60A86

## **1. Introduction**

**Citation:** Sirbiladze, G.; Kacprzyk, J.; Manjafarashvili, T.; Midodashvili, B.; Matsaberidze, B. New Fuzzy Extensions on Binomial Distribution. *Axioms* **2022**, *11*, 220. https://doi.org/10.3390/axioms11050220

Academic Editor: Humberto Bustince

Received: 13 April 2022; Accepted: 5 May 2022; Published: 9 May 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In current practice, and especially in the creation of new technologies, the use of extensions of classical probabilistic distributions based on expert data and evaluations is becoming more and more common. In particular, the fuzzy extensions of discrete distributions are attracting attention, and fuzzy-stochastic distributions or fuzzy-stochastic processes often have no alternative when dealing with incomplete objective-experimental data [1–8]. Based on these considerations, the aim of our research was to develop a new approach to the extension of the BD under a fuzzy uncertainty environment. In the introduction, we first review the existing research directions on fuzzy BD extensions and then present the main principle of our approach.

Briefly, regarding the basic works studying fuzzy BD and its application in the different problems, practices, and research, the addition of two fuzzy Bernoulli distributions and the sum of subsequent fuzzy BDs have been discussed in [9]. Extensions of these ideas would be of use to study fuzzy randomness and the concept of measure. In [10], the authors assume that the probability of "success" *p* is not known exactly and is to be estimated from a random sample or from expert opinion. For the fuzzy BD, a fuzzy number *<sup>p</sup>*<sup>e</sup> instead of *p* is substituted. In [11], discrete probability distributions, where some of the probability values are uncertain, are considered. These uncertainties are modeled using fuzzy numbers. The basic laws of fuzzy probability theory are derived. Applications to the binomial probability distribution and queuing theory are considered. In [12], essential properties of fuzzy probability are derived to present the measurement of fuzzy conditional probability, fuzzy independency, and fuzzy Bayes theorem. Fuzzy discrete distributions, fuzzy binomials, and fuzzy Poisson distributions are introduced with different examples. Among intelligent techniques, the authors in [13] focus on the application of the fuzzy set theory in the acceptance sampling. Multi-objective mathematical models for fuzzy single and fuzzy double acceptance sampling plans with illustrative examples are proposed. The study illustrates how an acceptance sampling plan should be designed under fuzzy BD. The fuzzy set theory can be successfully used to cope with the vagueness in these linguistic expressions for acceptance sampling. In [14], the main distributions of acceptance sampling plans are handled with fuzzy parameters, and their acceptance probability functions are derived. Then, the characteristic curves of acceptance sampling are examined under fuzziness. Illustrative examples are given with binomial and other fuzzy distributions. 
In [15], the authors intend to generate some properties of the negative BD under imprecise measurement. These properties include the fuzzy mean, fuzzy variance, fuzzy moments, and fuzzy GF. The uncertainty in the observations may not be addressed with the classical approach to probability distribution; therefore, the fuzzy set theory helps to modify the classical approach. In [16], the authors discuss the single acceptance sampling plan when the proportion of nonconforming products is a fuzzy number. They showed that the operating characteristic (OC) curve of the plan is a band with high and low bounds and that, for a fixed sample size and acceptance number, the width of the band depends on the ambiguity of the proportion parameter in the lot. Illustrative examples are given with binomial and other fuzzy distributions. In [17], the portfolio consists of only options traded in the financial market. One of the most famous models of option pricing is the binomial Cox-Ross-Rubinstein (CRR) model. Using the fuzzy binomial CRR procedure, the option price is an interval with a specific membership degree, by which investors are allowed to adjust their portfolios. The authors construct a portfolio that is dynamically adjusted periodically, in which the membership degree of an option price determines the decision to buy or sell the option in the simulation. Classifiers based on the BD can be found in the scientific literature, but due to the uncertainty of epidemiological data, a fuzzy approach may be interesting. Reference [18] presents a new classifier named fuzzy binomial naive Bayes (FBiNB). The theoretical development is presented as well as the results of its application to simulated multidimensional data. A brief comparison among the FBiNB, a classical binomial naive Bayes classifier, and a naive Bayes classifier is performed. The results obtained showed that the FBiNB provided the best performance, according to the Kappa coefficient.
In [19], two main distributions of acceptance sampling plans are considered, namely binomial and Poisson distributions with fuzzy parameters, and their acceptance probability functions are derived. Then, fuzzy acceptance sampling plans were developed based on these distributions. In [20], the authors study the determination of the Quick Switching Single Double Sampling System using the fuzzy BD, where the acceptance number tightening method is used. In [21], fuzzy representations of a real-valued random variable are introduced for capturing relevant information on the distribution of the variable through the corresponding fuzzy-valued mean value. Specifically, characteristic fuzzy representations of a random variable allow us to capture the whole information on its distribution. As a result, tests about fuzzy means of fuzzy random variables can be applied to develop goodness-of-fit tests. In that work, empirical comparisons of goodness-of-fit tests based on some convenient fuzzy representations with well-known procedures are presented for the case where the null hypothesis relates to some specified BDs. As is known [22], the optimal hypothesis tests for the BD and some other discrete distributions are uniformly most powerful (UMP) one-tailed and UMP unbiased (UMPU) two-tailed randomized tests. Therefore, conventional confidence intervals are not dual to randomized tests and perform badly on discrete data at small and moderate sample sizes. In [22], a new confidence interval notion, called fuzzy confidence intervals, that is dual to and inherits the exactness and optimality of UMP and UMPU tests is introduced, together with a new P-value notion, called fuzzy P-values or abstract randomized P-values, that inherits the same exactness and optimality.

It should be noted that in almost all of the studies presented here, the use of binomial distribution (BD) in an uncertain environment may result in fuzziness for only one reason: the value-realization of a binomial value in an uncertain environment cannot be the result of exact measurements or calculations, and it must be represented by fuzzy variables [9–22]. In other words, we are dealing with a binomial experiment when the possible results are presented in fuzzy values, more often in triangular or trapezoidal fuzzy numbers [23]—i.e., the binomial distribution is a descriptor of a *random-fuzzy experiment* whose realizations or characteristic parameters are represented in fuzzy values. The problem presented in this article is different from those presented in the studies above. It refers to a generalization of binomial distribution when the results or characteristics of an experiment are described by fuzzy variables. These variables are defined on the universe of all the results of the experiment and not on a certain subset of real numbers, as discussed in the studies presented above—i.e., we are dealing with a *fuzzy-random experiment*, where the binomial variable is a fuzzy-random variable. It has both a probability distribution and a membership function on the universe of all results of the experiment. Of course, the use of such binomial models is in great demand. This was the main motivation for us, the authors, to explore some of the new fuzzy extensions of binomial distribution.

In this work, we present a new approach to the extension of a classical BD under different fuzzy environments. In contrast to the above approaches to the study of fuzzy BDs, a completely new approach is developed in this paper. Section 2 presents the fuzzy extension of the BD, where the Bernoulli fuzzy-random variable is considered instead of the Bernoulli random variable. Success and failure events have both probabilistic distributions and their implementation possibility in the form of compatibility levels. Based on this information, the probabilistic distribution of the corresponding binomial fuzzy-random variable is calculated. The conditions of restrictions on this distribution are obtained. The Poisson distribution is calculated as a limit case of the constructed binomial fuzzy-random experiment. Section 3 considers the fuzzy extension of a BD, where the number of successes, unlike the previous case, is of a fuzzy nature and is represented as a fuzzy subset of the set of possible success numbers. Formulas for calculating the probability of the occurrence and of the convolution of binomial dependent fuzzy events are obtained. The invariance principle of exponential distribution is applied, and the corresponding GF is constructed. As a result, a scheme for calculating the mathematical expectation of the number of fuzzy successes is created. Section 4 considers the fuzzy extension of the binomial upper distribution, where the fuzziness is represented in the compatibility levels of the binomial and non-binomial events of the complete failure complex. The GF of the constructed distribution and the first-order moment of the distribution are also calculated. Sufficient conditions for the existence of a corresponding limit distribution and the Poisson distribution are also obtained. Section 5 presents the fuzzy extension of the classical Fuchs distribution, where fuzziness is reflected in the number of increasing failures. The built distribution function and the first and second order moments of the distribution are also calculated. Sufficient conditions for the existence of a corresponding limit distribution and the Poisson distribution are obtained. For better representation of the results, examples of the fuzzy BD are illustrated in each section. Section 6 presents the main results obtained and prospects for future research. A sequential scheme of the key facts and obtained results is presented in Scheme 1.

**Scheme 1.** Sequential scheme of key facts and obtained results.

## **2. BD by Fuzzy Elementary Events**

Consider $P_1$ and $P_0 = 1 - P_1$ as the elementary a priori probabilities of the success ("1") and failure ("0") events, respectively. Let us also consider the membership levels $\mu_1$ and $\mu_2$ for (1) and (0), respectively. We thus obtain a fuzzy-random Bernoulli variable taking the fuzzy values $\widetilde{1}$ and $\widetilde{0}$, whose probabilities, according to [24,25], can be calculated by the formulas:

$$\mathcal{P}(\widetilde{1}) = \mu_1 P_1 \quad \text{and} \quad \mathcal{P}(\widetilde{0}) = \mu_2 P_0. \tag{1}$$
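Equation (1) is the membership-weighted probability of a fuzzy event in Zadeh's sense, $\mathcal{P}(\widetilde{A}) = \sum_x \mu_{\widetilde{A}}(x) P(x)$, applied to the singleton events. A minimal sketch with illustrative numerical values:

```python
# Sketch of Equation (1): probability of a fuzzy event as the
# membership-weighted sum of the elementary probabilities.

def fuzzy_event_prob(membership, probability):
    """P(A~) = sum_x mu_A(x) * P(x) over the points of the event."""
    return sum(m * p for m, p in zip(membership, probability))

P1, P0 = 0.6, 0.4      # a priori success / failure probabilities
mu1, mu2 = 0.9, 0.7    # membership levels of the fuzzy values 1~ and 0~

p_success = fuzzy_event_prob([mu1], [P1])   # mu1 * P1
p_failure = fuzzy_event_prob([mu2], [P0])   # mu2 * P0
```

Note that `p_success + p_failure` need not equal 1; the normed fuzzy probabilities only appear later, in Equation (9).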

For a sequence of *n* repetitive ordinary (non-fuzzy) trials in a binomial experiment, we introduce the notations

$$\begin{array}{ll} \mathbb{C}\_{1} \equiv (1, \ldots, 1), & \mathbb{C}\_{2} \equiv (1, \ldots, 1, 0), \ldots, \\ \mathbb{C}\_{2^{n}-1} \equiv (0, \ldots, 0, 1), & \mathbb{C}\_{2^{n}} \equiv (0, \ldots, 0), \end{array} \tag{2}$$

as there exist 2 *<sup>n</sup>* possible results by the combination of (1) and (0). For describing the "*n* repetitive fuzzy elementary experiments"

$$
\tilde{\mathbf{C}}\_1 \equiv (\tilde{\mathbf{1}}, \dots, \tilde{\mathbf{1}}), \dots, \tilde{\mathbf{C}}\_{2^n} \equiv (\tilde{\mathbf{0}}, \dots, \tilde{\mathbf{0}}) \tag{3}
$$

we refer to the notion of a fuzzy variable introduced in [24]. Suppose we have a fuzzy Bernoulli variable $\widetilde{\mathcal{X}} \equiv (X, U, \widetilde{\mathcal{R}}(x, u))$, where $X$ is a fuzzy elementary event, $U = \{0, 1\}$ is a universal set, and the restriction $\widetilde{\mathcal{R}}(x, u) \subset U$ means that

$$\widetilde{\mathcal{R}}(\mathfrak{x}, \mathfrak{u}) \equiv \widetilde{\mathcal{R}}(\widetilde{\mathcal{X}}) \equiv \widetilde{\mathbb{0}} \cup \widetilde{\mathbb{1}} \equiv \left\{ \widetilde{\mathbb{0}}, \widetilde{\mathbb{1}} \right\} \,\tag{4}$$

Consider an ordered set of *n* such variables (Xe <sup>1</sup>, . . . , Xe*n*) as a fuzzy binomial experiment. According to [24], the universal set of such a compound fuzzy variable is the Cartesian product *U*<sup>1</sup> × . . . × *Un*. Now, suppose that Xe <sup>1</sup>, . . . , Xe*<sup>n</sup>* are the same non-interactive variables, i.e.,

$$
\widetilde{\mathcal{R}}(\widetilde{\mathcal{X}}_1, \dots, \widetilde{\mathcal{X}}_n) = \overline{\widetilde{\mathcal{R}}}(\widetilde{\mathcal{X}}_1) \cap \dots \cap \overline{\widetilde{\mathcal{R}}}(\widetilde{\mathcal{X}}_n) \tag{5}
$$

where $\overline{\widetilde{\mathcal{R}}}(\widetilde{\mathcal{X}}_i)$ is the cylindrical extension of the marginal constraint $\widetilde{\mathcal{R}}(\widetilde{\mathcal{X}}_i)$, $i = 1, \dots, n$. We refer to the sequence of "$n$ repetitive fuzzy elementary experiments" as a fuzzy point $\widetilde{\mathcal{R}}(\widetilde{\mathcal{X}}_1, \dots, \widetilde{\mathcal{X}}_n)$. According to (5), we have:

$$\begin{aligned} \mu_{\widetilde{\mathbb{C}}_1} &= \min\{\mu_1, \dots, \mu_1\} = \mu_1, \qquad \mu_{\widetilde{\mathbb{C}}_{2^n}} = \min\{\mu_2, \dots, \mu_2\} = \mu_2, \\ \mu_{\widetilde{\mathbb{C}}_i} &= \min\{(\mu_1 \text{ or } \mu_2), \dots, (\mu_1 \text{ or } \mu_2)\} = \mu_1 \wedge \mu_2, \quad i = 2, 3, \dots, 2^n - 1. \end{aligned} \tag{6}$$

If we use the formula for calculating a fuzzy event probability, we obtain the following probabilities:

$$\begin{split} \mathcal{P}\left(\tilde{\mathbb{C}}\_{1}\right) & \equiv \mathcal{P}(\tilde{1}, \dots, \tilde{1}, \tilde{1}) = \mu\_{1} P(1, \dots, 1, 1) \\ \mathcal{P}\left(\tilde{\mathbb{C}}\_{2}\right) & \equiv \mathcal{P}(\tilde{1}, \dots, \tilde{1}, \tilde{0}) = (\mu\_{1} \wedge \mu\_{2}) P(1, \dots, 1, 0), \dots, \\ \mathcal{P}\left(\tilde{\mathbb{C}}\_{2^{n}-1}\right) & \equiv \mathcal{P}(\tilde{0}, \dots, \tilde{0}, \tilde{1}) = (\mu\_{1} \wedge \mu\_{2}) P(0, \dots, 0, 1) \\ \mathcal{P}\left(\tilde{\mathbb{C}}\_{2^{n}}\right) & \equiv \mathcal{P}(\tilde{0}, \dots, \tilde{0}, \tilde{0}) = \mu\_{2} P(0, \dots, 0, 0). \end{split} \tag{7}$$

As is well known, the projection of a relation on a given set of variables is a marginal sub-relation of that relation which applies only on these variables. It is considered on the Cartesian product of the universes of these variables. If we sum the distribution (7) by the projection of relation (5)

$$\operatorname*{Proj}_{U_{i_1} \times \dots \times U_{i_{n-1}}} \widetilde{\mathcal{R}}(\widetilde{\mathcal{X}}_1, \dots, \widetilde{\mathcal{X}}_n) = \widetilde{\mathcal{R}}_q\left(\widetilde{\mathcal{X}}_{i_1}, \dots, \widetilde{\mathcal{X}}_{i_{n-1}}\right), \quad q \equiv (i_1, \dots, i_{n-1}), \tag{8}$$

we obtain the normalized fuzzy probabilities of $\widetilde{1}$ and $\widetilde{0}$:

$$\mathcal{P}(\widetilde{1}) = \frac{\mu\_1 P\_1}{\sum\_{i=1}^{2^n} \mathcal{P}\left(\widetilde{\mathbb{C}}\_i\right)},\tag{9}$$

$$\mathcal{P}(\widetilde{0}) = 1 - \mathcal{P}(\widetilde{1}) = \frac{\mu\_1 P\_0 + (\mu\_2 - \mu\_1)P(0, \dots, 0)}{\sum\_{i=1}^{2^n} \mathcal{P}\left(\widetilde{\mathbb{C}}\_i\right)}.$$

After substituting (7) and (9) in the BD formula, we receive:

$$\begin{split} \mathcal{P}(\widetilde{\mathcal{C}}\_{(k)}) &= \frac{P\_1^k}{\left[1 + \left(\frac{\mu\_2}{\mu\_1} - 1\right) P(0, \dots, 0)\right]^{n-1}} \Big[1 + \left(\frac{\mu\_2}{\mu\_1} - 1\right) P(0, \dots, 0) - P\_1\Big]^{n-k}, \\ k &= 1, \dots, n. \end{split} \tag{10}$$

where the common notation $\widetilde{\mathbb{C}}\_{(k)}$ is introduced for those $\widetilde{\mathbb{C}}\_i$ to which the same number $k$ of successes corresponds, since the probabilities of such $\widetilde{\mathbb{C}}\_i$ are equal. Note that

$$\sum\_{i=1}^{2^n} \mathcal{P}(\widetilde{\mathbb{C}}\_i) = \mu\_1 + (\mu\_2 - \mu\_1) P(0, \dots, 0). \tag{11}$$

It is clear from (9) and (10) that if $\mu\_1 = \mu\_2$, then the conditions for the independence of fuzzy events degenerate to the corresponding conditions for ordinary events.

The constraint (7) for the probabilities $\mathcal{P}(\widetilde{\mathbb{C}}\_{(k)})$ leads to the relationship

$$\frac{\mu\_2}{\mu\_1} = \left[ 1 + \frac{1 - P\_1 \cdot \sqrt[n-1]{\frac{P\_1}{P(1, \dots, 1)}}}{P(1, \dots, 1) \left( \sqrt[n-1]{\frac{P\_1}{P(1, \dots, 1)}} - 1 \right)^n} \right]^{-1} \cdot \tag{12}$$

Substituting Formula (12) into (11) and assuming that $\mu\_2 \geq \mu\_1$ and $\mathcal{P}(\widetilde{\mathbb{C}}\_i) \geq 0$, we obtain a system of conditions

$$\mathcal{P}\left(\widetilde{\mathbb{C}}\_{(k)}\right) = P(1,\ldots,1) \left[ \sqrt[n-1]{\frac{P\_1}{P(1,\ldots,1)}} - 1 \right]^{n-k}, k = 1,\ldots,n;\tag{13}$$

$$0 \le \frac{P\_1 \cdot \sqrt[n-1]{\frac{P\_1}{P(1, \dots, 1)}} - 1}{P(1, \dots, 1) \left[ \sqrt[n-1]{\frac{P\_1}{P(1, \dots, 1)}} - 1 \right]^n} < 1; \qquad \frac{\mu\_2}{\mu\_1} = 1 + \frac{-1 + P\_1 \cdot \sqrt[n-1]{\frac{P\_1}{P(1, \dots, 1)}}}{P(0, \dots, 0)}.\tag{14}$$

The probabilities of the considered fuzzy events, normalized in $\widetilde{\mathcal{R}}(\widetilde{X}\_1, \dots, \widetilde{X}\_n) = \bigcup\_{i=1}^{2^n} \widetilde{\mathbb{C}}\_i$, are calculated by the formula

$$\mathcal{P}'(\widetilde{\mathbb{C}}\_i) = \frac{\mathcal{P}(\widetilde{\mathbb{C}}\_i)}{\sum\_{j=1}^{2^n} \mathcal{P}(\widetilde{\mathbb{C}}\_j)}, \quad i = 1, \dots, 2^n. \tag{15}$$

In deriving the BD with fuzzy elementary events, we proceed from the notion of the independence of fuzzy events [23], which is not equivalent to ordinary independence. This leads to certain conditions of independence, which we discuss below. For clarity, let $\mu\_1 \leq \mu\_2$; then, we obtain the fuzzy binomial distribution:

$$\mathcal{P}'\left(\tilde{\mathbb{C}}\_1\right) = \left[\mathcal{P}'(\tilde{1})\right]^n, \dots, \mathcal{P}'\left(\tilde{\mathbb{C}}\_{2^n}\right) = \left[\mathcal{P}'(\tilde{0})\right]^n$$

Thus, conditions (13)–(15) are equivalent to the existence of the $n$-ary fuzzy-random variable, which is a sequence of $n$ repetitive, fuzzy, non-interacting, and independent elementary events whose distribution is described by the BD with fuzzy elementary events

$$\begin{split} \mathcal{P}' \left( \tilde{\mathcal{B}}\_n^k \right) &= \mathsf{C}\_n^k \left[ \mathcal{P}'(\tilde{1}) \right]^k \left[ \mathcal{P}'(\tilde{0}) \right]^{n-k} = \mathsf{C}\_n^k \frac{\mu\_1^k P\_1^k \left[ \mu\_1 P\_0 + (\mu\_2 - \mu\_1) P(0...0) \right]^{n-k}}{\left[ \mu\_1 + (\mu\_2 - \mu\_1) P(0...0) \right]^n} = \\ \mathsf{C}\_n^k \frac{\mathsf{P}\_1^k \left[ P\_0 + \left( \frac{\mu\_2}{\mu\_1} - 1 \right) P(0...0) \right]^{n-k}}{\left[ 1 + \left( \frac{\mu\_2}{\mu\_1} - 1 \right) P(0...0) \right]^n} , \end{split} \tag{16}$$

where

$$
\widetilde{\mathcal{B}}\_n^k = \underset{\forall \ j \text{ with } k \text{ successes}}{\cup} \widetilde{C}\_j.
$$
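The construction (9)–(16) can be checked numerically. Below is a minimal Python sketch (ours, for illustration) that builds the fuzzy BD (16) for $\mu\_1 < \mu\_2$; it assumes independent non-fuzzy trials, so the joint probability $P(0, \dots, 0)$ is taken as $P\_0^n$ (our simplifying assumption). Since $\mathcal{P}'(\widetilde{1}) + \mathcal{P}'(\widetilde{0}) = 1$, the resulting distribution must sum to 1.

```python
from math import comb

# Sketch of the fuzzy binomial distribution (16) for mu1 < mu2.
# Assumption (ours, for illustration): the n-fold joint probability
# P(0,...,0) is taken as P0**n, i.e., independent non-fuzzy trials.
def fuzzy_binomial(n, P1, mu1, mu2):
    P0 = 1.0 - P1
    P00 = P0 ** n                               # P(0,...,0), assumed = P0^n
    Z = mu1 + (mu2 - mu1) * P00                 # normalizer, Eq. (11)
    p1 = mu1 * P1 / Z                           # P'(1~), Eq. (9)
    p0 = (mu1 * P0 + (mu2 - mu1) * P00) / Z     # P'(0~), Eq. (9)
    return [comb(n, k) * p1**k * p0**(n - k) for k in range(n + 1)]

dist = fuzzy_binomial(5, 0.3, 0.5, 0.6)
print(sum(dist))   # -> 1.0 (up to floating point), since P'(1~) + P'(0~) = 1
```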

If $\mu\_2 < \mu\_1$, then for the calculation of $\mathcal{P}'(\widetilde{\mathcal{B}}\_n^k)$ it is necessary to make the following changes in the final part of Equation (16): instead of $\frac{\mu\_2}{\mu\_1}$, we write $\frac{\mu\_1}{\mu\_2}$, and instead of $P(0, \dots, 0)$, we write $P(1, \dots, 1)$:

$$\frac{\mu\_2}{\mu\_1} \leftarrow \frac{\mu\_1}{\mu\_2} \,, \qquad P(0, \dots, 0) \leftarrow P(1, \dots, 1) .$$

To be more precise, we receive

$$\mathcal{P}'\left(\widetilde{\mathcal{B}}\_{n}^{k}\right) = C\_{n}^{k} \frac{\left[\mu\_{2} P\_{1} + (\mu\_{1} - \mu\_{2}) P(1, \dots, 1)\right]^{k} \mu\_{2}^{n-k} P\_{0}^{n-k}}{\left[\mu\_{2} + (\mu\_{1} - \mu\_{2}) P(1, \dots, 1)\right]^{n}} = C\_{n}^{k} \frac{P\_{0}^{n-k}\left[P\_{1} + \left(\frac{\mu\_{1}}{\mu\_{2}} - 1\right) P(1, \dots, 1)\right]^{k}}{\left[1 + \left(\frac{\mu\_{1}}{\mu\_{2}} - 1\right) P(1, \dots, 1)\right]^{n}}. \tag{17}$$

Note that in both cases, if *µ*<sup>2</sup> = *µ*1, then (16) and (17) transform to the usual BD.

From Formulas (16) and (17), we see that $\mathcal{P}'(\widetilde{\mathcal{B}}\_n^k)$ depends on the ratio $\frac{\mu\_2}{\mu\_1}$ if $\mu\_2 > \mu\_1$, and on the ratio $\frac{\mu\_1}{\mu\_2}$ if $\mu\_2 < \mu\_1$, while the condition of independence and non-interaction (14) allows us to express the normalized probability $\mathcal{P}'(\widetilde{\mathcal{B}}\_n^k)$ through the probabilities of the corresponding non-fuzzy events. Indeed, it is not difficult to show that

$$\mathcal{P}'(\widetilde{1}) = \begin{cases} \sqrt[n-1]{P\_1(\overbrace{1, \dots, 1}^{n-1})}, & \mu\_1 < \mu\_2, \\ \sqrt[n-1]{P\_0(\overbrace{1, \dots, 1}^{n-1})}, & \mu\_1 > \mu\_2, \end{cases} \qquad \mathcal{P}'(\widetilde{0}) = \begin{cases} \sqrt[n-1]{P\_1(\overbrace{0, \dots, 0}^{n-1})}, & \mu\_1 < \mu\_2, \\ \sqrt[n-1]{P\_0(\overbrace{0, \dots, 0}^{n-1})}, & \mu\_1 > \mu\_2. \end{cases} \tag{18}$$

If we substitute the values from Formula (18) into Formulas (16) and (17), we get

$$\mathcal{P}'\left(\widetilde{\mathcal{B}}\_{n}^{k}\right) = \begin{cases} C\_{n}^{k}\left[P\_{1}(\overbrace{1, \dots, 1}^{n-1})\right]^{\frac{k}{n-1}}\left[P\_{1}(\overbrace{0, \dots, 0}^{n-1})\right]^{\frac{n-k}{n-1}}, & \mu\_1 < \mu\_2, \\ C\_{n}^{k}\left[P\_{0}(\overbrace{1, \dots, 1}^{n-1})\right]^{\frac{k}{n-1}}\left[P\_{0}(\overbrace{0, \dots, 0}^{n-1})\right]^{\frac{n-k}{n-1}}, & \mu\_1 > \mu\_2. \end{cases} \tag{19}$$

Using the notion of the moment generating function of a discrete distribution, we analytically obtain the formula of the GF of the BD with fuzzy elementary events (the two cases presented above are considered):

$$\mathcal{G}\_{\mathcal{P}'(\mathfrak{d}\_n^k)}(y) = \begin{cases} \frac{[\mu\_1 \mathbb{P}\_0 + \mu\_1 \mathbb{P}\_1 y + (\mu\_2 - \mu\_1)P(0, \dots, 0)]^n}{[\mu\_1 + (\mu\_2 - \mu\_1)P(0, \dots, 0)]^n}, \mu\_1 < \mu\_2, \\\frac{[\mu\_2 \mathbb{P}\_1 y + (\mu\_1 - \mu\_2)P(1, \dots, 1)y + \mu\_2 P\_0]^n}{[\mu\_2 + (\mu\_1 - \mu\_2)P(1, \dots, 1)]^n}, \mu\_1 > \mu\_2. \end{cases} \tag{20}$$

As is well known, distribution moments are easily calculated from the generating function. Without presenting a long process of calculation, we give the analytical form of the first and second order moments of the BD with fuzzy elementary events

$$\overline{k} = \begin{cases} \frac{n\mu\_1 P\_1}{\mu\_1 + (\mu\_2 - \mu\_1)P(0, \dots, 0)}, & \mu\_1 < \mu\_2, \\ \frac{n[\mu\_2 P\_1 + (\mu\_1 - \mu\_2)P(1, \dots, 1)]}{\mu\_2 + (\mu\_1 - \mu\_2)P(1, \dots, 1)}, & \mu\_1 > \mu\_2, \end{cases} = \begin{cases} \frac{nP\_1}{1 + \left(\frac{\mu\_2}{\mu\_1} - 1\right)P(0, \dots, 0)}, & \mu\_1 < \mu\_2, \\ \frac{n\left[P\_1 + \left(\frac{\mu\_1}{\mu\_2} - 1\right)P(1, \dots, 1)\right]}{1 + \left(\frac{\mu\_1}{\mu\_2} - 1\right)P(1, \dots, 1)}, & \mu\_1 > \mu\_2. \end{cases} \tag{21}$$

$$\overline{k^2} = \overline{k}\left(1 + \frac{n-1}{n}\overline{k}\right) = \begin{cases} \frac{n\mu\_1 P\_1}{\mu\_1 + (\mu\_2 - \mu\_1)P(0, \dots, 0)}\left[1 + \frac{(n-1)\mu\_1 P\_1}{\mu\_1 + (\mu\_2 - \mu\_1)P(0, \dots, 0)}\right], & \mu\_1 < \mu\_2, \\ \frac{n[\mu\_2 P\_1 + (\mu\_1 - \mu\_2)P(1, \dots, 1)]}{\mu\_2 + (\mu\_1 - \mu\_2)P(1, \dots, 1)}\left[1 + \frac{(n-1)[\mu\_2 P\_1 + (\mu\_1 - \mu\_2)P(1, \dots, 1)]}{\mu\_2 + (\mu\_1 - \mu\_2)P(1, \dots, 1)}\right], & \mu\_1 > \mu\_2, \end{cases} = \begin{cases} \frac{nP\_1}{1 + \left(\frac{\mu\_2}{\mu\_1} - 1\right)P(0, \dots, 0)}\left[1 + \frac{(n-1)P\_1}{1 + \left(\frac{\mu\_2}{\mu\_1} - 1\right)P(0, \dots, 0)}\right], & \mu\_1 < \mu\_2, \\ \frac{n\left[P\_1 + \left(\frac{\mu\_1}{\mu\_2} - 1\right)P(1, \dots, 1)\right]}{1 + \left(\frac{\mu\_1}{\mu\_2} - 1\right)P(1, \dots, 1)}\left[1 + \frac{(n-1)\left[P\_1 + \left(\frac{\mu\_1}{\mu\_2} - 1\right)P(1, \dots, 1)\right]}{1 + \left(\frac{\mu\_1}{\mu\_2} - 1\right)P(1, \dots, 1)}\right], & \mu\_1 > \mu\_2. \end{cases} \tag{22}$$
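As a sanity check on (20) and (21), one can differentiate the GF numerically at $y = 1$ and compare with the closed-form first moment. The sketch below is ours, with the same illustrative assumption $P(0, \dots, 0) = P\_0^n$ and the parameter values of Example 1.

```python
# Numerical check (ours, illustrative) that the first moment in Eq. (21)
# equals G'(1) of the generating function in Eq. (20), case mu1 < mu2.
# Assumption: the joint probability P(0,...,0) is taken as P0**n.
n, P1, mu1, mu2 = 5, 0.3, 0.5, 0.6
P0 = 1 - P1
P00 = P0 ** n
Z = mu1 + (mu2 - mu1) * P00

def G(y):
    # Generating function (20), case mu1 < mu2; note G(1) = 1.
    return (mu1 * P0 + mu1 * P1 * y + (mu2 - mu1) * P00) ** n / Z ** n

h = 1e-6
kbar_gf = (G(1 + h) - G(1 - h)) / (2 * h)            # numerical G'(1)
kbar_closed = n * P1 / (1 + (mu2 / mu1 - 1) * P00)   # Eq. (21)
print(abs(kbar_gf - kbar_closed) < 1e-5)  # True
```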

Expressions (16), (17), and (21) allow us to prove the existence of Poisson limits for the BD with fuzzy elementary events. The limits below are not difficult to calculate using well-known techniques for limits of numerical sequences. Several cases are possible. (1) $\overline{k} = const$. In this case, we obviously have

$$\lim\_{\substack{n \to \infty \\ \overline{k} = const}} \mathcal{P}' \left( \widetilde{\mathcal{B}}\_n^k \right) = e^{-\overline{k}} \frac{\overline{k}^k}{k!}, \quad k = 0, 1, \ldots \tag{23}$$

(2) $\mu\_1$ and $\mu\_2$ are fixed and $nP\_1 = \lambda = const$. It is easy to show that:

$$\lim\_{\substack{n \to \infty \\ P\_1 \to 0 \\ nP\_1 = \lambda}} \mathcal{P}'\left(\widetilde{\mathcal{B}}\_n^k\right) = \begin{cases} e^{-c'} \frac{(c')^k}{k!}, & c' = \frac{\lambda}{1 + \left(\frac{\mu\_2}{\mu\_1} - 1\right) P(\overbrace{0, \dots, 0}^{n})}, & \mu\_1 < \mu\_2, \\ e^{-c''} \frac{(c'')^k}{k!}, & c'' = \lambda + \left(\frac{\mu\_1}{\mu\_2} - 1\right) P(\overbrace{1, \dots, 1}^{n}), & \mu\_1 > \mu\_2. \end{cases} \tag{24}$$

**Example 1.** *Let the fuzzy Bernoulli distribution be given:*
$$\widetilde{X} \sim \begin{pmatrix} \textit{values} & 1 & 0 \\ \textit{probabilities} & 0.3 & 0.7 \\ \textit{membership levels} & 0.5 & 0.6 \end{pmatrix}.$$
*Based on Formula (19), construct the fuzzy BD for $n = 5$. Use Formulas (21) and (22) to calculate the first- and second-order moments $\overline{k}$ and $\overline{k^2}$. Calculate the standard deviation of the distribution $SD = \sqrt{\overline{k^2} - (\overline{k})^2}$. Using the Poisson distribution Formula (24), calculate the distribution values for $k = 0, 1, \dots, 7$ when $nP\_1 \simeq const = 6$.*

Solution of Example 1. It is clear that for the calculations $P\_1 = 0.3$ and $P\_0 = 0.7$, $\mu\_1 = 0.5$ and $\mu\_2 = 0.6$, $n = 5$, $k = 0, 1, 2, 3, 4, 5$. In our case, $\mu\_1 < \mu\_2$. Let us assume that $P\_1(\overbrace{0, \dots, 0}^{n-1}) = P\_1(\overbrace{0, \dots, 0}^{4}) = (0.7)^4$ and $P\_1(\overbrace{1, \dots, 1}^{n-1}) = P\_1(\overbrace{1, \dots, 1}^{4}) = (0.3)^4$. We obtain (Table 1).

**Table 1.** Conditional fuzzy binomial probability distribution.


Using Formulas (21) and (22), we obtain $\overline{k} = 1.4512$, $\overline{k^2} = 3.1360$, and $SD = \sqrt{\overline{k^2} - (\overline{k})^2} = 1.0149$. For the Poisson distribution, if $nP\_1 \simeq const = 6$, then, for $k = 0, 1, 2, 3, 4, 5$, we obtain (Table 2).

**Table 2.** Fragment of Poisson distribution.
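The moment calculations of Example 1 can be reproduced with a few lines of Python. The sketch below is ours; it assumes $P(0, \dots, 0) = P\_0^n$ with $n = 5$ (our reading of the independence assumption), which reproduces the reported values.

```python
from math import sqrt

# Example 1 data: fuzzy Bernoulli with P1 = 0.3, mu1 = 0.5 < mu2 = 0.6.
n, P1, mu1, mu2 = 5, 0.3, 0.5, 0.6
P0 = 1 - P1
P00 = P0 ** n                                      # assumption: P(0,...,0) = P0^n
kbar = n * P1 / (1 + (mu2 / mu1 - 1) * P00)        # Eq. (21), case mu1 < mu2
k2 = kbar * (1 + (n - 1) / n * kbar)               # Eq. (22)
sd = sqrt(k2 - kbar**2)                            # standard deviation
print(round(kbar, 4), round(k2, 4), round(sd, 4))  # 1.4512 3.136 1.0149
```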


## **3. BDs with a Fuzzy Number of Successes**

Consider a set $\mathcal{A}\_n \equiv \{0, 1, \dots, n\}$. Let $\widetilde{k} \subset \mathcal{A}\_n$ be a fuzzy subset of $\mathcal{A}\_n$, $\widetilde{k}$ = "*approximately the number k*", with some membership function $\mu\_{\widetilde{k}}: \mathcal{A}\_n \to [0, 1]$ and $\widetilde{k} = \bigcup\_{l=0}^{n} \mu\_{\widetilde{k}}(l)/l$ [23,24].

If $\mathcal{A}\_n$ is the set of numbers of possible successes in $n$ trials of the binomial scheme, then it is well known that to each element of $\mathcal{A}\_n$ corresponds the probability $P(\mathcal{B}\_{n;p}^{k}) = C\_n^k p^k q^{n-k}$. Therefore, according to [24,25], for the BD with the fuzzy success number, we obtain the formula

$$\mathcal{P}\left(\mathcal{B}\_{n;p}^{\tilde{k}}\right) = \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{n;p}^{l}\right) \tag{25}$$

Here, $\mathcal{P}(\mathcal{B}\_{n;p}^{\widetilde{k}})$ is the probability measure of the fuzzy event $\mathcal{B}\_{n;p}^{\widetilde{k}}$, i.e., of the fuzzy subset $\widetilde{k}$.
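Formula (25) is a membership-weighted sum of binomial probabilities, which is easy to sketch. The membership values below are illustrative (not those of the paper's tables); when $\widetilde{k}$ collapses to an ordinary set $\{k\}$, (25) reduces to the usual binomial probability.

```python
from math import comb

def binom_pmf(n, p, l):
    # Ordinary binomial probability P(B^l_{n;p}).
    return comb(n, l) * p**l * (1 - p)**(n - l)

def fuzzy_success_prob(n, p, mu):
    # Eq. (25): mu is the membership function of k~ as a list over {0,...,n}.
    return sum(mu[l] * binom_pmf(n, p, l) for l in range(n + 1))

n, p = 6, 0.3
mu_crisp = [0, 0, 1, 0, 0, 0, 0]        # k~ reduced to the ordinary set {2}
mu_approx2 = [0, 0.5, 1, 0.5, 0, 0, 0]  # "approximately 2" (illustrative)
print(fuzzy_success_prob(n, p, mu_crisp) == binom_pmf(n, p, 2))  # True
print(fuzzy_success_prob(n, p, mu_approx2))
```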

Note that in the scheme under consideration, the fuzzy events $\mathcal{B}\_{n;p}^{\widetilde{k}}$ are not mutually exclusive. Therefore, according to the additivity property of a probability measure of a fuzzy event [24,25], we have

$$\begin{split} \mathcal{P}\left(\bigcup\_{k=0}^{n} \mathcal{B}\_{n;p}^{\widetilde{k}}\right) &= \sum\_{k=0}^{n} \mathcal{P}\left(\mathcal{B}\_{n;p}^{\widetilde{k}}\right) - \sum\_{k,k'} \mathcal{P}\left(\mathcal{B}\_{n;p}^{\widetilde{k}} \cap \mathcal{B}\_{n;p}^{\widetilde{k}'}\right) + \sum\_{k,k',k''} \mathcal{P}\left(\mathcal{B}\_{n;p}^{\widetilde{k}} \cap \mathcal{B}\_{n;p}^{\widetilde{k}'} \cap \mathcal{B}\_{n;p}^{\widetilde{k}''}\right) - \dots \\ &+ (-1)^{n} \mathcal{P}\left(\mathcal{B}\_{n;p}^{\widetilde{0}} \cap \dots \cap \mathcal{B}\_{n;p}^{\widetilde{n}}\right). \end{split} \tag{26}$$

Let $0 < p\_i < 1$, $i = 1, 2$, be two numbers. An important feature of the distribution (25) is that the following composition law is satisfied,

$$\mathcal{P}\left(\mathcal{B}\_{n;p\_1p\_2}^{\tilde{k}}\right) = \sum\_{m=0}^{n} P\left(\mathcal{B}\_{n;p\_1}^{m}\right) \mathcal{P}\left(\mathcal{B}\_{m,p\_2}^{\tilde{k}}\right) \tag{27}$$

which is easily verified by the simple calculations

$$\mathcal{P}\left(\mathcal{B}\_{n;p\_1 p\_2}^{\tilde{k}}\right) = \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{n;p\_1 p\_2}^{l}\right) = \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) C\_{n}^{l} (p\_1 p\_2)^{l} (1 - p\_1 p\_2)^{n-l}$$

and

$$\begin{split} &\sum\_{m=0}^{n} P\left(\mathcal{B}\_{n;p\_{1}}^{m}\right) \mathcal{P}\left(\mathcal{B}\_{m;p\_{2}}^{\tilde{k}}\right) = \sum\_{m=0}^{n} P\left(\mathcal{B}\_{n;p\_{1}}^{m}\right) \sum\_{l=0}^{m} \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{m;p\_{2}}^{l}\right) = \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) \sum\_{m=l}^{n} C\_{n}^{m} C\_{m}^{l} \, p\_{1}^{m} (1 - p\_{1})^{n-m} p\_{2}^{l} (1 - p\_{2})^{m-l} \\ &= \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) \frac{n!}{l!(n-l)!} (p\_{1} p\_{2})^{l} (1 - p\_{1} p\_{2})^{n-l} \sum\_{m=l}^{n} \frac{(n-l)!}{(n-m)!(m-l)!} \frac{p\_{1}^{m-l} (1 - p\_{1})^{n-m} (1 - p\_{2})^{m-l}}{(1 - p\_{1} p\_{2})^{n-l}} \\ &= \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) \frac{n!}{l!(n-l)!} (p\_{1} p\_{2})^{l} (1 - p\_{1} p\_{2})^{n-l} \sum\_{j=0}^{n-l} \frac{(n-l)!}{j!(n-l-j)!} \frac{[p\_{1}(1 - p\_{2})]^{j} (1 - p\_{1})^{n-l-j}}{(1 - p\_{1} p\_{2})^{n-l}} = \sum\_{l=0}^{n} \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{n;p\_{1} p\_{2}}^{l}\right), \end{split}$$

since $p\_1(1 - p\_2) + (1 - p\_1) = 1 - p\_1 p\_2$, so the inner binomial sum equals one.
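The composition law (27) is easy to verify numerically. A minimal sketch (ours; the membership function and probabilities are illustrative) compares both sides of (27) directly:

```python
from math import comb

def pmf(n, p, k):
    # Binomial pmf, defined as 0 when k is outside {0,...,n}.
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

n, p1, p2 = 6, 0.4, 0.5
mu = [0.1, 0.4, 1.0, 0.4, 0.1, 0.0, 0.0]  # illustrative membership of k~

# Left side of (27): fuzzy BD with success probability p1*p2.
lhs = sum(mu[l] * pmf(n, p1 * p2, l) for l in range(n + 1))
# Right side of (27): compose Binomial(n, p1) with the fuzzy BD over m trials.
rhs = sum(pmf(n, p1, m) * sum(mu[l] * pmf(m, p2, l) for l in range(n + 1))
          for m in range(n + 1))
print(abs(lhs - rhs) < 1e-12)  # True
```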

Based on the property of the invariability of the exponential distribution, let us extend the fuzzy subset $\widetilde{k}$ from the set $\mathcal{A}\_n$ to the set of non-negative integers $N \cup \{0\}$. In this case, the extended membership function $\mu\_{\widetilde{k}}(l)$, $l \in N \cup \{0\}$, is a mapping of the set of natural numbers into $[0, 1]$.

Consider the expression of the moments' GF of the fuzzy BD

$$G(\tilde{k}) = \sum\_{n=0}^{\infty} \mathcal{P}\left(\mathcal{B}\_{n;p}^{\tilde{k}}\right) f\_n(u),\tag{28}$$

where $f\_n(u) = (1 - u)u^n$, $0 < u < 1$. If we denote $v = \frac{pu}{1 - u + pu}$ and $g\_l(v) = (1 - v)v^l$, then

$$G(\tilde{k}) = \sum\_{l=0}^{\infty} \mu\_{\tilde{k}}(l) g\_l(v). \tag{29}$$

Indeed,

$$\begin{split} \mathbb{G}(\tilde{k}) &= \sum\_{n=0}^{\infty} f\_n(u) \mathcal{P}\left(\mathcal{B}\_{n;p}^{\tilde{k}}\right) = \sum\_{n=0}^{\infty} f\_n(u) \sum\_{l=0}^n \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{n;p}^{l}\right) = f\_0(u) \sum\_{l=0}^0 \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{0;p}^{l}\right) \\ &+ f\_1(u) \sum\_{l=0}^1 \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{1;p}^{l}\right) + f\_2(u) \sum\_{l=0}^2 \mu\_{\tilde{k}}(l) P\left(\mathcal{B}\_{2;p}^{l}\right) + \dots \\ &= \mu\_{\tilde{k}}(0) \left[ f\_0(u) P\left(\mathcal{B}\_{0;p}^{0}\right) + f\_1(u) P\left(\mathcal{B}\_{1;p}^{0}\right) + f\_2(u) P\left(\mathcal{B}\_{2;p}^{0}\right) + \dots \right] + \\ &\mu\_{\tilde{k}}(1) \left[ f\_1(u) P\left(\mathcal{B}\_{1;p}^{1}\right) + f\_2(u) P\left(\mathcal{B}\_{2;p}^{1}\right) + \dots \right] \\ &+ \mu\_{\tilde{k}}(2) \left[ f\_2(u) P\left(\mathcal{B}\_{2;p}^{2}\right) + f\_3(u) P\left(\mathcal{B}\_{3;p}^{2}\right) + \dots \right] + \dots \end{split}$$

Given that $P(\mathcal{B}\_{r;p}^{s}) = 0$ for $r < s$, we have

$$\begin{split} G\left(\widetilde{k}\right) &= \sum\_{n=0}^{\infty} \mu\_{\widetilde{k}}(n) \sum\_{l=n}^{\infty} f\_{l}(u) P\left(\mathcal{B}\_{l;p}^{n}\right) = \sum\_{n=0}^{\infty} \mu\_{\widetilde{k}}(n) \sum\_{l=n}^{\infty} \frac{l!}{n!(l-n)!} p^{n} (1-p)^{l-n} (1-u) u^{l} \\ &= \sum\_{n=0}^{\infty} \mu\_{\widetilde{k}}(n) (1-u) (pu)^{n} \sum\_{l=n}^{\infty} \frac{l!}{n!(l-n)!} [(1-p)u]^{l-n} = \sum\_{n=0}^{\infty} \mu\_{\widetilde{k}}(n) (1-u) (pu)^{n} \sum\_{j=0}^{\infty} \frac{(n+j)!}{n! \, j!} [(1-p)u]^{j}. \end{split}$$

The last sum is the expansion of the function $[1 - (1-p)u]^{-(n+1)}$ into a series in powers of $(1-p)u$. Considering the connection between $u$ and $v$, we finally obtain the expression of (29):

$$G\left(\widetilde{k}\right) = \sum\_{n=0}^{\infty} \mu\_{\widetilde{k}}(n) \frac{1-u}{1-(1-p)u} \left[\frac{pu}{1-(1-p)u}\right]^n.$$
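The GF identity (28)–(29) can also be checked numerically by truncating the series over $n$, since the terms decay like $u^n$. The sketch below is ours; the membership function and the value of $u$ are illustrative.

```python
from math import comb

def pmf(n, p, k):
    # Binomial pmf, 0 outside the support.
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

p, u = 0.3, 0.5
mu = {1: 0.5, 2: 1.0, 3: 0.5}   # membership of k~, finite support (illustrative)

# Left side, Eq. (28): sum over n of P(B^{k~}_{n;p}) f_n(u), truncated.
N = 300                          # the tail is bounded by u**N, negligible here
f = lambda n: (1 - u) * u**n
lhs = sum(f(n) * sum(w * pmf(n, p, l) for l, w in mu.items()) for n in range(N))

# Right side, Eq. (29): sum over l of mu(l) g_l(v) with v = pu / (1-(1-p)u).
v = p * u / (1 - (1 - p) * u)
rhs = sum(w * (1 - v) * v**l for l, w in mu.items())
print(abs(lhs - rhs) < 1e-9)  # True
```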

To determine the mean value of a fuzzy number of successes "with probability measure $\mathcal{P}$", let us do the following. Consider a set of ordinary (non-fuzzy) events $\mathcal{A} \subset \mathcal{A}\_n$. Define the set function $E(\cdot)$ in such a way that, to any subset $\mathcal{A}$, this function assigns the conditional mean, i.e., if $\mathcal{A} \subset \mathcal{A}\_n$, then $E(\mathcal{A}) = \overline{k}\_{\mathcal{A}}$. According to the principle of generalization [23], the domain of definition of $E(\cdot)$ can be extended to fuzzy subsets as well. Suppose we have a fuzzy subset $\widetilde{k}$ of $\mathcal{A}\_n$, and $\widetilde{k}$ is represented as

$$\widetilde{k} = \bigcup\_{\alpha} \mathcal{A}\_{\alpha}, \qquad \alpha \in [0, 1], \tag{30}$$

where A*<sup>α</sup>* denotes a cut set of level *α*. Then,

$$E(\widetilde{k}) = \bigcup\_{\alpha} E(\mathcal{A}\_{\alpha}) = \bigcup\_{\alpha} \overline{k}\_{\mathcal{A}\_{\alpha}}. \tag{31}$$

Here, $E(\widetilde{k})$ is a fuzzy subset on the set $\mathcal{E}$ of all conditional mean values. Relationships (30) and (31) define the calculation rule for the values of the characteristic functions of fuzzy subsets on the set of all conditional means $\overline{k}\_{\mathcal{A}}$ corresponding to ordinary subsets of $\mathcal{A}\_n$, given $\mu\_{\widetilde{k}}(l)$.

Define the mean value of the fuzzy success number as a convex combination [23] of the fuzzy subsets $E(\widetilde{k})$ with the weights $W\_n(\widetilde{k}) = \mathcal{P}\left(\mathcal{B}\_{n;p}^{\widetilde{k}}\right) / \sum\_{\widetilde{l}} \mathcal{P}\left(\mathcal{B}\_{n;p}^{\widetilde{l}}\right)$. We define a fuzzy subset with the following membership function

$$\mu\_{\widetilde{k}\_{\mathcal{P}}}\left(\overline{l}\_{\mathcal{A}}\right) = \sum\_{\widetilde{k} \subset \mathcal{A}\_n} W\_n(\widetilde{k}) \, \mu\_{E(\widetilde{k})}\left(\overline{l}\_{\mathcal{A}}\right), \quad \overline{l}\_{\mathcal{A}} \in \mathcal{E}. \tag{32}$$

Note that when $\mu\_{\widetilde{k}}(l) \to \delta\_l^k$, where $\delta\_l^k = 1$ if $l = k$ and $\delta\_l^k = 0$ if $l \neq k$ (that is, when $\widetilde{k}$ passes to the ordinary set $\{k\}$), "the average by the measure $\mathcal{P}$" tends to the mathematical expectation of the number of successes of the BD, $\widetilde{k}\_{\mathcal{P}} \to np$. The method given here can be used for the calculation of fuzzy moments $\widetilde{k^r}\_{\mathcal{P}}$ of any order, but when calculating higher-order moments, it is necessary to use a certain rule for multiplying fuzzy numbers; here, we use the rule derived from the principle of generalization [23].

The discussion of the Poisson and normal approximations for (25) reduces to substituting the corresponding approximate values of $P(\mathcal{B}\_{n;p}^{l})$ into this formula.
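This substitution is straightforward to illustrate: below we evaluate (25) once with exact binomial probabilities and once with their Poisson approximation. The parameters and the triangular membership function are our illustrative choices.

```python
from math import comb, exp, factorial

def binom_pmf(n, p, l):
    return comb(n, l) * p**l * (1 - p)**(n - l)

def poisson_pmf(lam, l):
    # Poisson approximation of P(B^l_{n;p}) with lam = n*p.
    return exp(-lam) * lam**l / factorial(l)

# Substituting the Poisson approximation into Eq. (25);
# mu is an illustrative triangular membership function of k~ around 3.
n, p = 60, 0.05
mu = [0 if l < 1 or l > 5 else 1 - abs(l - 3) / 3 for l in range(n + 1)]
exact = sum(mu[l] * binom_pmf(n, p, l) for l in range(n + 1))
approx = sum(mu[l] * poisson_pmf(n * p, l) for l in range(n + 1))
print(abs(exact - approx) < 0.02)  # True: the two measures agree closely
```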

**Example 2.** *Let the Bernoulli distribution be given:*
$$X \sim \begin{pmatrix} \textit{values} & 1 & 0 \\ \textit{probabilities} & 0.3 & 0.7 \end{pmatrix},$$
*and let a binomial experiment be created based on this Bernoulli experiment for $n = 6$. Let the following fuzzy subsets "approximately k successes" ($k = 0, \dots, 6$) be given (Table 3).*


**Table 3.** Fuzzy subsets "approximately *k* successes" (*k* = 0, 1, 2, 3, 4, 5, 6 ).

Use the results of this Section to calculate the numerical values of BD with fuzzy success numbers.

Solution of Example 2. Note that *p* = 0.3. Using expression (25) and the data of Table 3, we calculate the BD values presented in Table 4.


**Table 4.** Numerical values of BD with fuzzy success numbers $\mathcal{P}(\mathcal{B}\_{n;p}^{\widetilde{k}})$.

## **4. Fuzzy "Upper" BD**

As is well known, the discussion of the (non-fuzzy) "upper" BD is based on a model of the superposition of two processes: the binomial process $\mathcal{B}\_{n;p}^{k}$ and the process of "increasing the total number of failures" $\mathcal{B}\_0$, denoted by $\mathcal{B}\_0 \circ \mathcal{B}\_{n;p}^{k}$ and characterized by the a priori probability $P(\mathcal{B}\_0) = 1 - \gamma$ [26], where $p$ is the probability of the elementary event ("1"). Let $\mu\_0$ and $\mu\_0'$ be the values of the membership function that correspond to the complex event $(\overbrace{0, \dots, 0}^{n})$ when attempting to distinguish events of binomial and non-binomial origin. Then, as is easy to verify, the probability of $k$ successes in $n$ trials of the "upper" fuzzy binomial experiment, denoted by $\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k}$, has the form

$$\mathcal{P}\left(\widetilde{\mathcal{B}}\_{0} \circ \mathcal{B}\_{n;p}^{k}\right) = \frac{1}{Z} \begin{cases} \mu\_{0} P(\mathcal{B}\_{0}) + \mu\_{0}' P(\overline{\mathcal{B}}\_{0}) P\left(\mathcal{B}\_{n;p}^{0}\right), & k = 0, \\ P(\overline{\mathcal{B}}\_{0}) P\left(\mathcal{B}\_{n;p}^{k}\right), & k = 1, \dots, n, \end{cases} = \frac{1}{Z} \begin{cases} \mu\_{0}(1-\gamma) + \mu\_{0}' \gamma (1-p)^{n}, & k = 0, \\ \gamma C\_{n}^{k} p^{k} (1-p)^{n-k}, & k = 1, \dots, n, \end{cases} \tag{33}$$

where $Z$ is a constant determined by the normalization condition $\sum\_k \mathcal{P}\left(\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k}\right) = 1$ and

$$Z = \mu\_0 (1 - \gamma) + \mu\_0' \gamma (1 - p)^n + \gamma \left[ 1 - (1 - p)^n \right]. \tag{34}$$
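Formulas (33) and (34) can be sketched directly. The parameters below are those used in Example 3 ($P(\mathcal{B}\_0) = 0.65$, hence $\gamma = 0.35$; $\mu\_0 = 0.8$, $\mu\_0' = 0.4$); by construction the normalized distribution sums to 1.

```python
from math import comb

def upper_fuzzy_bd(n, p, gamma, mu0, mu0p):
    # Eq. (33)-(34): P(B0) = 1 - gamma is the a priori probability of the
    # "increasing failures" process; Z is the normalization constant.
    q = 1 - p
    Z = mu0 * (1 - gamma) + mu0p * gamma * q**n + gamma * (1 - q**n)
    dist = [(mu0 * (1 - gamma) + mu0p * gamma * q**n) / Z]              # k = 0
    dist += [gamma * comb(n, k) * p**k * q**(n - k) / Z
             for k in range(1, n + 1)]                                  # k >= 1
    return dist

d = upper_fuzzy_bd(n=6, p=0.3, gamma=0.35, mu0=0.8, mu0p=0.4)
print(abs(sum(d) - 1.0) < 1e-12)  # True: Z normalizes the distribution
```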

The corresponding GF and the first moment of this probabilistic distribution are as follows:

$$G\_{\mathcal{P}(\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k})}(y) = \frac{1}{Z}\left[\mu\_0 (1-\gamma) + \mu\_0' \gamma (1-p)^n + \gamma \left((1-p+py)^n - (1-p)^n\right)\right], \tag{35}$$

and $\overline{k} = Z^{-1} \gamma n p$.

Poisson's limit (*np* → *c* > 0, *n* → ∞, *p* → 0) is

$$P\_{\text{Poisson}}(k) = \frac{1}{Z} \begin{cases} \mu\_0 (1 - \gamma) + \mu\_0' \gamma e^{-c} \text{ , } k = 0, \\ \gamma e^{-c} \frac{c^k}{k!} \text{ , } \qquad k = 1, 2, \dots \text{ .} \end{cases} \tag{36}$$

$\overline{k}$ and $c$ are related by the ratio

$$\overline{k} = \left[\mu\_0 (1 - \gamma) + \mu\_0' \gamma e^{-c} + \gamma \left(1 - e^{-c}\right)\right]^{-1} \gamma c. \tag{37}$$

By integrating Formula (36) with respect to the membership levels $0 \leq \mu\_0 \leq 1$, $0 \leq \mu\_0' \leq 1$, we obtain the averaged Poisson distribution

$$\overline{P}\_{\text{Poiss}}(k) = \begin{cases} 1 - (1 - e^{-c})\xi, & k = 0, \\ \ \xi e^{-c} \frac{c^k}{k!}, & k = 1, 2, \dots, \end{cases} \tag{38}$$

where

$$\xi = \iint\_{0 \le \mu\_0, \mu\_0' \le 1} \gamma Z^{-1} \, d\mu\_0 \, d\mu\_0'. \tag{39}$$
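The integral (39) generally has no closed form; a midpoint-rule sketch is below (ours; $\gamma$ and $c$ are illustrative values, not taken from the paper). Note that the distribution (38) sums to 1 for any value of $\xi$, which the last lines verify.

```python
from math import exp, factorial

# Midpoint-rule approximation of xi in Eq. (39), using the Poisson-limit
# form Z = mu0*(1-gamma) + mu0'*gamma*e^{-c} + gamma*(1-e^{-c}) from (37).
gamma, c = 0.35, 1.8   # illustrative parameters
M = 200
xi = 0.0
for i in range(M):
    for j in range(M):
        mu0 = (i + 0.5) / M
        mu0p = (j + 0.5) / M
        Z = mu0 * (1 - gamma) + mu0p * gamma * exp(-c) + gamma * (1 - exp(-c))
        xi += gamma / Z / (M * M)

# The averaged Poisson distribution (38) built from this xi sums to 1:
p0 = 1 - (1 - exp(-c)) * xi
tail = sum(xi * exp(-c) * c**k / factorial(k) for k in range(1, 60))
print(abs(p0 + tail - 1.0) < 1e-9)  # True
```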

It is easy to show that the GF $G\_{\overline{P}\_{Poiss}}$ has the following form:

$$G\_{\overline{P}\_{Poiss}}(y) = 1 - \left(1 - e^{-c}\right)\xi + \xi e^{-c}\left(e^{cy} - 1\right), \tag{40}$$

and in this case, $\overline{k} = \xi c$. Therefore, we finally receive

$$\overline{P}\_{Poiss}(k) = \begin{cases} 1 - \left(1 - e^{-\overline{k}/\xi}\right)\xi, & k = 0, \\ \xi e^{-\overline{k}/\xi} \frac{\left(\overline{k}/\xi\right)^k}{k!}, & k = 1, 2, \dots \end{cases} \tag{41}$$

**Example 3.** *Let the binomial experiment with the same data as in Example 2 be given: $p = 0.3$, $q = 0.7$, $n = 6$. For the creation of the (non-fuzzy) "upper" BD $\mathcal{B}\_0 \circ \mathcal{B}\_{n;p}^{k}$ as a model of the superposition of two processes, the binomial process $\mathcal{B}\_{n;p}^{k}$ and the process of "increasing the total number of failures" $\mathcal{B}\_0$, we take the a priori probability value $P(\mathcal{B}\_0) = 1 - \gamma = 0.65$ [13] and the probability of the elementary event ("1"), $p = 0.3$. Let $\mu\_0 = 0.8$ and $\mu\_0' = 0.4$ be the levels of the membership function that correspond to the complex event $(\overbrace{0, \dots, 0}^{n})$ when we want to distinguish events of binomial and non-binomial origin. Calculate: 1. the probability distribution of $k$ successes of the fuzzy "upper" binomial experiment, denoted by $\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k}$; 2. the Poisson distribution $P\_{Poiss}(k)$; 3. the averaged Poisson distribution $\overline{P}\_{Poiss}(k)$.*

Solution of Example 3. Case 1. Using Formula (33), we obtain the numerical values of the probability distribution of $k$ successes of the fuzzy "upper" binomial experiment $\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k}$ with $p = 0.3$, $q = 0.7$, $n = 6$ (Table 5).

**Table 5.** The values of probabilities of the fuzzy "upper" BD $\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k}$.


Case 2. By Formula (36), we calculated the values of the Poisson distribution $P\_{Poiss}(k)$ for $k = 0, 1, \dots, 6$ successes (Table 6).

**Table 6.** The probabilities of $k$ successes of the Poisson distribution $P\_{Poiss}(k)$.


Case 3. Using Formula (37), we numerically calculated the value of the first-order moment of the distribution, $\overline{k}$. We then obtained analytical expressions for the function $Z$ (Formula (34)) and the GF $G\_{\mathcal{P}(\widetilde{\mathcal{B}}\_0 \circ \mathcal{B}\_{n;p}^{k})}(y)$, after which we numerically calculated the value of the integral $\xi$. Finally, we calculated the values of the averaged Poisson distribution $\overline{P}\_{Poiss}(k)$ for $k = 0, 1, \dots, 6$ successes (Table 7).

**Table 7.** The probabilities of $k$ successes of the averaged Poisson distribution $\overline{P}\_{Poiss}(k)$.


## **5. Fuzzy Fuchs Distribution**

Let us consider a hybrid fuzzy-random process in which the fuzzy process is the pre-placement one, while the random process is ordinary. Based on the analysis of lexical material, it has been established that the linguistic spectrum of the statistical process of word-formation (in conversation) becomes two-component when switching to vocabulary; this has been shown for several languages [24]. In this section, we construct two variants of such a process, which can be used in the analysis of the linguistic spectrum of the statistical process of word-formation. It is well known that, as in the case of the binomial "upper" distribution, all variants of the Fuchs distribution are based on a two-process superposition model, which, in the case under consideration, is interpreted as "determined" and binomial, $\Phi\_{n;\nu;p}^{k} = \mathcal{B}\_\nu \circ \mathcal{B}\_{n-\nu;p}^{k}$ [24].

The derivation of the Fuchs probability distribution function for the most characteristic cases discussed below actually coincides with the corresponding (non-fuzzy) probability distribution. Therefore, we will present only the final results. In addition, we use the Fuchs model and terminology [26]. We consider two cases:

**Case 1.** The pre-placement process is non-fuzzy, while the fuzziness of the binomial process is conditioned by the fuzziness of the elementary events. In this case, the fuzzy elementary event is characterized by a probability that depends on the number of pre-placed elements. As in Section 1, we consider a basic fuzzy-random Bernoulli variable
$$\widetilde{B} \sim \begin{pmatrix} \text{values} & 1 & 0 \\ \text{probabilities} & P\_1 & P\_0 \\ \text{membership levels} & \mu\_1 & \mu\_2 \end{pmatrix}$$
and a sequence of fuzzy-random Bernoulli variables
$$\widetilde{B}\_\nu \sim \begin{pmatrix} \text{values} & 1 & 0 \\ \text{probabilities} & P\_1^{(\nu)} & P\_0^{(\nu)} \\ \text{membership levels} & \mu\_1^{(\nu)} & \mu\_2^{(\nu)} \end{pmatrix}, \quad \nu = 0, \dots, n,$$
for the creation of a fuzzy Fuchs probability distribution. In this case, the Fuchs probabilistic distribution is as follows:

$$\mathcal{P}'\left(\widetilde{\mathcal{B}}\_{\upsilon}\circ\widetilde{\mathcal{B}}\_{n-\upsilon;p}^{k}\right) = \sum\_{\upsilon=0}^{n} \rho\_{\upsilon}\mathbb{C}\_{n-\upsilon}^{k-\upsilon} \left[\mathcal{P}'\_{n-\upsilon}(\widetilde{1})\right]^{k-\upsilon} \left[\mathcal{P}'\_{n-\upsilon}(\widetilde{0})\right]^{n-k},\tag{42}$$

where $\rho\_\nu$ are the proportions of those cells in which $\nu$ elements are pre-placed (according to (15)) for $\nu = 0, 1, \dots$; the parameters must meet the conditions $\mu\_1^{(\nu)} < \mu\_2^{(\nu)}$, $\nu = 0, \dots, n$, and

$$\mathcal{P}^{(\nu)}\left(\widetilde{\mathbb{C}}\_{(k)}\right) = P^{(\nu)}(\overbrace{1, \dots, 1}^{n-\nu})\left[\sqrt[n-\nu-1]{\frac{P\_1^{(\nu)}}{P^{(\nu)}(1, \dots, 1)}} - 1\right]^{n-\nu-k}, \quad k = 1, \dots, n-\nu; \tag{43}$$

$$0 \le \frac{P\_1^{(\nu)} \cdot \sqrt[n-\nu-1]{\frac{P\_1^{(\nu)}}{P^{(\nu)}(1, \dots, 1)}} - 1}{P^{(\nu)}(1, \dots, 1)\left[\sqrt[n-\nu-1]{\frac{P\_1^{(\nu)}}{P^{(\nu)}(1, \dots, 1)}} - 1\right]^{n-\nu}} < 1; \qquad \frac{\mu\_2^{(\nu)}}{\mu\_1^{(\nu)}} = 1 + \frac{-1 + P\_1^{(\nu)} \cdot \sqrt[n-\nu-1]{\frac{P\_1^{(\nu)}}{P^{(\nu)}(1, \dots, 1)}}}{P^{(\nu)}(\overbrace{0, \dots, 0}^{n-\nu})}.$$

The corresponding GF of the distribution (42) and the first two moments are as follows:

$$G\_{\mathcal{P}'(\widetilde{\mathcal{B}}\_\nu \circ \widetilde{\mathcal{B}}\_{n-\nu;p}^{k})}(y) = \sum\_{\nu=0}^{n} \rho\_\nu y^\nu \frac{\left[\mu\_1^{(\nu)} P\_0 + \mu\_1^{(\nu)} P\_1 y + (\mu\_2^{(\nu)} - \mu\_1^{(\nu)}) P^{(\nu)}(\overbrace{0, \dots, 0}^{n-\nu})\right]^{n-\nu}}{\left[\mu\_1^{(\nu)} + (\mu\_2^{(\nu)} - \mu\_1^{(\nu)}) P^{(\nu)}(0, \dots, 0)\right]^{n-\nu}}, \tag{44}$$

$$\overline{k} = \nu P\_1 + (1 - P\_0)\left(\sum\_{\nu=0}^{n} \mu\_1^{(\nu)} \rho\_\nu\right)^{-1}\left(\sum\_{\nu=0}^{n} \mu\_2^{(\nu)} \rho\_\nu\right),$$

$$\overline{k^2} = \left(\sum\_{\nu=0}^{n} \mu\_1^{(\nu)} \rho\_\nu\right)^{-1}\left[\sum\_{\nu=0}^{n} \mu\_2^{(\nu)} \rho\_\nu + P\_1 \sum\_{\nu=0}^{n}(n-\nu)\left(2\nu + 1 + P\_1(n-\nu-1)\right)\mu\_2^{(\nu)} \rho\_\nu\right]. \tag{45}$$

We can obtain similar expressions for $G\_{\mathcal{P}'(\widetilde{\mathcal{B}}\_\nu \circ \widetilde{\mathcal{B}}\_{n-\nu;p}^{k})}(y)$, $\overline{k}$, and $\overline{k^2}$ in the case $\mu\_1^{(\nu)} > \mu\_2^{(\nu)}$, $\nu = 0, \dots, n$ (omitted here).

**Case 2**. The pre-placement process is fuzzy, while the binomial process is non-fuzzy: $\widetilde{\Phi}\_{n;\nu;p}^{k} = \widetilde{\mathcal{B}}\_\nu \circ \mathcal{B}\_{n-\nu;p}^{k}$. Analogously to the previous case, we receive

$$\mathcal{P}'(\widetilde{\mathcal{B}}\_{\upsilon} \circ \mathcal{B}\_{n-\upsilon;p}^{k}) = \sum\_{\upsilon=0}^{n} \frac{\rho\_{\upsilon} p\_{\upsilon}}{\sum\_{s=0}^{n} \rho\_{s} p\_{s}} P\left(\mathcal{B}\_{n-\upsilon;p}^{k}\right),\tag{46}$$

where (*ν*, *ρν*, *pν*), *ν* = 0, 1, . . . , *n* is some fuzzy-random variable of the pre-placement process in the Fuchs distribution.
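Formula (46) is a mixture of ordinary binomial distributions with weights $\rho\_\nu p\_\nu / \sum\_s \rho\_s p\_s$, and the mixture structure guarantees normalization. The sketch below uses illustrative values of $\rho\_\nu$ and $p\_\nu$ (not those of Table 9).

```python
from math import comb

def pmf(n, p, k):
    # Binomial pmf P(B^k_{n;p}), 0 when k exceeds the number of trials.
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def fuchs_case2(n, p, rho, pnu):
    # Eq. (46): mixture of Binomial(n - nu, p) with weights rho_nu * p_nu.
    w = [rho[v] * pnu[v] for v in range(n + 1)]
    W = sum(w)
    return [sum(w[v] / W * pmf(n - v, p, k) for v in range(n + 1))
            for k in range(n + 1)]

n, p = 6, 0.3
rho = [0.3, 0.25, 0.2, 0.1, 0.08, 0.05, 0.02]  # illustrative pre-placement shares
pnu = [0.2, 0.3, 0.5, 0.4, 0.3, 0.2, 0.1]      # illustrative p_nu values
d = fuchs_case2(n, p, rho, pnu)
print(abs(sum(d) - 1.0) < 1e-12)  # True: each mixture component sums to 1
```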

Given the subjective nature of the spectral probabilities in the Fuchs distribution, we can argue that, in this case, the non-fuzzy and fuzzy distributions coincide.

**Example 4.** *Case 1. Calculate the first and second moments of the Fuchs distribution if, in the role of the fuzzy Bernoulli distribution, we select*

$$\widetilde{B} \sim \begin{pmatrix} \text{values} & 1 & 0 \\ \text{probabilities} & P_1 = 0.3 & P_0 = 0.7 \\ \text{membership levels} & \mu_1 = 0.5 & \mu_2 = 0.6 \end{pmatrix},$$

*and the sequence of membership levels of* $\widetilde{B}_\nu$ *is given in Table 8.*

**Table 8.** Sequence of the membership levels of $\widetilde{B}_\nu$.



Calculate the values of $\overline{k}$ and $\overline{k}^2$.

Case 2. Let the fuzzy binomial experiment and the fuzzy random variable be given on the set of all possible success values {0, 1, 2, 3, 4, 5, 6} (Table 9).

**Table 9.** The fuzzy random variable.


Let the fuzzy Bernoulli variable also be given:

$$\widetilde{X} \sim \begin{pmatrix} \text{values} & 1 & 0 \\ \text{probabilities} & P_1 = 0.3 & P_0 = 0.7 \\ \text{membership levels} & \mu_1 = 0.5 & \mu_2 = 0.6 \end{pmatrix}.$$

Calculate the numerical values of the Fuchs distribution.

Solution of Example 4.

In Case 1, the fuzzy character of the binomial process is conditioned by the fuzzy character of the elementary events. We therefore derived an expression for the corresponding GF and then calculated the values $\overline{k} = 2.3100$ and $\overline{k}^2 = 10.7470$ (Formulas (44) and (45)).

In Case 2, where the Fuchs experiment has a fuzzy pre-placement process while the binomial process is non-fuzzy ($\widetilde{\Phi}^k_{n;\nu;p} = \widetilde{B}_\nu \circ B^k_{n-\nu;p}$), we used Table 9 and Formula (46) to obtain the numerical values of the fuzzy Fuchs distribution, where $P(B^l_{n-\nu;p}) = C^l_{n-\nu}\, p^l (1-p)^{n-\nu-l}$, $p = P_1 = 0.3$, and $C^l_{n-\nu} = 0$ if $l > n - \nu$. The results are given in Table 10.

**Table 10.** Values of the Fuchs fuzzy distribution.


## **6. Conclusions**

The research presented in this paper is relevant today in terms of its applicability. Experimental, objective data are often insufficient to build discrete distributions in the study, analysis, and synthesis of complex phenomena; often, such data do not exist at all. Modern modeling, and simulation modeling in particular, is unthinkable without solving the problem of restoring discrete distributions. The research presented here differs from existing studies: it generalizes the binomial distribution to the case where the results of an experiment are described by fuzzy variables defined on the universe of all outcomes of the experiment. We thus deal with a binomial fuzzy-random variable, which has both a probability distribution and a membership function on this universe. This paper discusses four new and different cases of fuzzy extensions of the BD.

*Case 1*: The fuzzy extension of the BD is presented in which a Bernoulli fuzzy-random variable replaces the Bernoulli random variable; that is, the success and failure events have both probabilistic distributions and implementation capabilities in the form of compatibility levels. Based on this information, the probabilistic distribution of the corresponding binomial fuzzy-random variable is calculated, and the conditions restricting this distribution are obtained. These conditions are shown to depend on the ratio of the success and failure compatibility levels. The formulas for the GF of the built distribution and for its first- and second-order moments are also obtained, and the Poisson distribution is calculated as a limit case of the constructed binomial fuzzy-random experiment.

*Case 2*: The fuzzy extension of the BD is considered in which the number of successes, in contrast to the previous case, is of a fuzzy nature and is represented as a fuzzy subset of the set of possible success numbers. A formula for calculating the probability of the convolution of binomial dependent fuzzy events is obtained. Using the principle of the invariance of an exponential distribution, the corresponding GF is built, which yields a scheme for calculating the mathematical expectation of the number of fuzzy successes. Future studies may obtain Poisson and normal distributions as marginal cases of the fuzzy BDs constructed here.

*Case 3*: The fuzzy extension of the "upper" BD is considered, in which fuzziness is represented by the compatibility levels of the binomial and non-binomial events of the complete failure complex. The GF and the first-order moment of the built distribution are calculated, and sufficient conditions are obtained for the existence of an appropriate marginal distribution, a Poisson distribution.

*Case 4*: The fuzzy extension of the classical Fuchs distribution is presented, in which the fuzziness is reflected in the growing number of failures. The distribution function and the first- and second-order moments of the built distribution are calculated. Future studies may likewise obtain Poisson and normal distributions as marginal cases of the fuzzy Fuchs distribution.

In each section of the paper, examples of the built fuzzy BDs are considered to illustrate the obtained results. The practical application of the hybrid fuzzy-binomial models studied here is in great demand, which is the main motivation to continue research in this direction. Future work will be directed toward applied problems in which the distributions built in this paper, or their modifications and generalizations, will be used.

**Author Contributions:** Conceptualization, G.S. and B.M. (Bidzina Midodashvili); Formal analysis, T.M.; Methodology, J.K.; Software, B.M. (Bidzina Matsaberidze). The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Shota Rustaveli National Scientific Foundation of Georgia (SRNSF), grant number FR-21-2015.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The paper is original and, therefore, no data were used.

**Acknowledgments:** We would like to mention our deceased colleague Tamaz Gachechiladze, whose ideas were very helpful to us in this work. The authors are grateful to the anonymous reviewers for their valuable comments and suggestions in improving the quality of the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.
