**Fuzzy Decision Making and Soft Computing Applications**

Editors

**Giuseppe De Pietro and Marco Pota**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors*

Giuseppe De Pietro
National Research Council of Italy (CNR), Institute for High Performance Computing and Networking (ICAR), Italy

Marco Pota
National Research Council of Italy (CNR), Institute for High Performance Computing and Networking (ICAR), Italy

*Editorial Office*
MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Applied System Innovation* (ISSN 2571-5577) (available at: https://www.mdpi.com/journal/asi/special_issues/fdmaking?authAll=true).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-4929-3 (Hbk) ISBN 978-3-0365-4930-9 (PDF)**

© 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.



## *Editorial* **Special Issue "Fuzzy Decision Making and Soft Computing Applications"**

**Giuseppe De Pietro and Marco Pota \***

Institute for High Performance Computing and Networking–National Research Council of Italy (ICAR-CNR), 80131 Naples, Italy; giuseppe.depietro@icar.cnr.it

**\*** Correspondence: marco.pota@icar.cnr.it

Research on fuzzy logic [1] and soft computing for decision making has a long history. In many fields of application, rule-based fuzzy systems have been employed [2–4] for their unique properties in solving modelling problems. In particular, decision-making systems often deal with uncertain data. Moreover, in some fields of application, such as differential diagnosis in medicine, a meaningful confidence measure must be associated with the classification result in order to show all possible outcomes with their relative likelihoods. Finally, semantically meaningful systems are often required, providing a clear and logical interpretation of the inference process, in order to encapsulate them in interactive frameworks of cognitive systems, or to enable validation by domain experts. These requirements can be met, on the one hand, by encoding uncertain numerical data in terms of interpretable linguistic variables [5]. On the other hand, fuzzy rules provide a clear and logical justification for each conclusion [6,7]. Finally, if desired, fuzzy systems allow classification results to be presented together with a confidence measure, such as the probability of different classes [8].
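As an illustration of the ideas above, the following minimal sketch (not taken from any contribution in this Special Issue; all term names, breakpoints, and rules are invented for illustration) shows how a numeric input can be encoded as interpretable linguistic terms via fuzzy membership functions, and how rule activations can yield a confidence measure per class:

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def linguistic_terms(temp):
    """Encode a temperature reading as degrees of 'low', 'normal', 'high'.

    The breakpoints below are illustrative assumptions, not clinical values.
    """
    return {
        "low":    triangular(temp, 30.0, 35.0, 36.5),
        "normal": triangular(temp, 35.5, 36.8, 38.0),
        "high":   triangular(temp, 37.0, 39.0, 42.0),
    }

def classify(temp):
    """Toy rule base: map term memberships to normalized class confidences."""
    t = linguistic_terms(temp)
    scores = {"hypothermia": t["low"], "healthy": t["normal"], "fever": t["high"]}
    total = sum(scores.values()) or 1.0  # avoid division by zero
    return {cls: s / total for cls, s in scores.items()}

conf = classify(37.5)
```

Because a reading such as 37.5 partially belongs to both "normal" and "high", the classifier returns a graded confidence over classes rather than a single hard label, which is the behaviour the editorial highlights for differential diagnosis.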

The remarkable progress made by these approaches in various fields underlines their benefits and is stimulating further research. In particular, despite remarkable successes in different tasks, research on these approaches is a field of increasing interest [9], with regard to theoretical aspects, which are being deepened [10–12], as well as procedures for learning fuzzy systems that optimize accuracy and/or interpretability, or for solving mathematical tasks using fuzzy numbers and soft computing [13–18]. Moreover, these approaches lend themselves to being easily and effectively employed in a variety of new fields of application [19–21].

This Special Issue collects original research articles discussing cutting-edge work as well as perspectives on future directions in the whole range of theoretical and practical aspects in this research area. In particular, 12 contributions were selected for this Special Issue, representing progress in the following specifically addressed areas.


**Citation:** De Pietro, G.; Pota, M. Special Issue "Fuzzy Decision Making and Soft Computing Applications". *Appl. Syst. Innov.* **2022**, *5*, 54. https://doi.org/10.3390/asi5030054

Received: 1 June 2022 Accepted: 8 June 2022 Published: 10 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

triangle inequalities, which are used to investigate the convergence. Another contribution [14] proposes an adaptive-network-based fuzzy inference system (ANFIS) model for the accurate estimation of signal propagation. Results on benchmark data show that the proposed model outperforms nondeterministic models in terms of accuracy and presents flexibility, ease of use, robustness, generalization capability, and an alleviated training process for propagation prediction in complex scenarios. Some authors contributed with three different papers. In the first [15], to solve the Cauchy problem, three fuzzy numerical methods, based on the combination of fuzzy transform with one-step, two-step, and three-step numerical methods, are introduced. The error analysis of the new fuzzy methods is discussed, showing more accurate results compared with other existing methods. The other two papers by the same authors [16,17] report parts I and II of the same voluminous work, where new approximation methods for solving systems of ordinary differential equations (SODEs) using fuzzy transform are introduced and discussed. In particular, different modified numerical schemes and new representations of basic functions are proposed, the error analysis of the new approximation methods and the properties of the uniform fuzzy partition are examined, and numerical examples showing improved accuracy are presented. A further work [18] assesses how three shaking procedures affect the performance of a metaheuristic General Variable Neighbourhood Search algorithm. The different schemes were applied on benchmark instances of the Traveling Salesman Problem to examine the potential advantage of any of the three metaheuristic schemes, showing similarities and differences among different methods.

3. **Decision-making applications employing fuzzy logic and soft computing.** Contributions in this field show a variety of possible applications. In the context of the characterization of basmati rice product value using an image-based grading process, the authors in [19] propose a model for quality grade testing and identification, using a novel digital-image-processing- and knowledge-based ANFIS. This approach provides capabilities to simulate the behaviour of an expert in the characterization of rice grains using their physical properties, and compared to other machine learning techniques, its results are promising in terms of classification accuracy and efficiency. In the field of electron beam (EB) measurements, the author of [20] presents a novel method of restoring EB measurements that are degraded by linear motion blur. The author's approach is based on a fuzzy inference system and a Wiener inverse filter, providing autonomy, reliability, flexibility, and real-time execution in restoring highly degraded signals without requiring exact knowledge of the EB probe size, and a demonstration is given by comparing ground truth signals with restorations. Finally, in [21], the motion control of mobile robots in a cluttered environment with obstacles is considered. In particular, to control the motion of a mobile robot using eye gaze coordinates as inputs to the system, the paper presents an intelligent vision-based gaze-guided robot control, utilizing an overhead camera, an eye-tracking device, a differential drive mobile robot, vision, and an interval type-2 fuzzy inference tool. Experiments and simulation results indicate that the system can successfully carry out the operator's intention, modulating speed and direction accordingly.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Causal Graphs and Concept-Mapping Assumptions**

#### **Eli Levine <sup>1,\*</sup> and J. S. Butler <sup>2</sup>**


Received: 11 May 2018; Accepted: 20 July 2018; Published: 24 July 2018

**Abstract:** Determining what constitutes a causal relationship between two or more concepts, and how to infer causation, are fundamental questions in statistics and all of the sciences. Causation becomes especially difficult in the social sciences, where a myriad of factors, not always easily observed or measured, directly or indirectly influence the dynamic relationships between independent variables and dependent variables. This paper proposes a procedure for helping researchers explicitly understand what their underlying assumptions are, what kind of data and methodology are needed to understand a given relationship, and how to develop explicit assumptions with clear alternatives, such that researchers can then apply a process of probabilistic elimination. The procedure borrows from Pearl's concept of "causal diagrams" and from concept mapping to create a repeatable, step-by-step process for systematically researching complex relationships and, more generally, complex systems. The significance of this methodology is that it can help researchers determine what is more probably accurate and what is less probably accurate in a comprehensive fashion for complex phenomena. This can help resolve many of our current and future political and policy debates by eliminating that which has no evidence in support of it, and that which has evidence against it, from the pool of what can be permitted in research and debates. By defining and streamlining a process for inferring truth in a way that is graspable by human cognition, we can begin to have more productive and effective discussions around political and policy questions.

**Keywords:** causality; statistics; concept-mapping; causal graph

#### **1. Introduction**

Causal inference is a key goal for understanding the relationships among phenomena in the real world that researchers are attempting to study [1]. This becomes a challenging task when possible causal phenomena are numerous, highly interrelated, complex, and complicated to study with validity [2,3]. As things currently stand, there is no clear method for either promoting correct facts and high-quality, honestly treated evidence, or for eliminating incorrect facts, inferences of poor quality, and dishonestly treated evidence from the pool of knowledge that is acceptable in policy debates. This paper proposes a possible method to clarify researchers' intentions and work, determine what data are necessary to collect, guide the selection of the methodology for treating the evidence, and produce possible counterfactual arguments that can be tested to establish a greater probability that correct inferences are drawn from the data. The hope of this paper is to clarify what is more probably true from what is less probably true and to streamline the pool of evidence that is permissible in policy and political debates. High-quality, honestly treated evidence gains precedence in discussions and debates at the expense of poor-quality or dishonestly treated evidence.

#### **2. Literature Review**

"Causality" is defined as "the relationship between something that happens or exists and the thing that causes it" [4]. Determining causal relations among variables is a challenging and much-studied topic [1,5–10]. Much of the literature on causal relations comes from the medical field of epidemiology [11] and is used to infer causal relationships in disease diagnoses and treatment effects of medical regimens [12]. Causality is also a much-studied subject in the social sciences, where its inference is typically derived from a statistical method or technique or from qualitative analysis [6,13–17]. Granger causality, a statistical concept in which variables that cause effect variables contain information that predicts those effect variables, has been tested using "path diagrams" in the literature [18]. This paper specifically draws upon the concept of the "causal graph" described by Pearl [19] as the basis of this methodology. The causal graph is used alongside "concept mapping" in order to tease out the underlying assumptions about the nature of, and relationships among, the variables in question. Causal graphing and concept mapping promote better understandings of the researchers' assumptions, and they develop alternative counterfactual cases with different causal graphs. Causal graphing also could help design research to test the factual and causal validity of the causal graph and, by extension, the researchers' concept map [20]. In summary, researchers and other stakeholders may make different concept maps and causal graphs according to existing methodologies. The difference with this proposed method is that it actively seeks to remove all or parts of concept maps and causal graphs to infer what is more probably true in the real world itself.

A "causal graph is a directed graph that describes the variable dependencies" [21]. Causal graphs were first developed in the fields of mathematics, computer science, machine learning, and statistics [1,22,23] but have since been extended to the study of complex phenomena, such as epidemiology [9] and planning [21,24,25]. While causal graphs are not new tools in several academic fields and have been used in statistical analyses for developing causal relations after data collection, it does not appear that they have been widely used by researchers to sketch assumptions and hypotheses before the data have been collected.

The ideas expressed here are not new in the field of economics. One of the first two Nobel Laureates in Economics in 1969, Jan Tinbergen, collected all proposed macroeconomic models in the late 1930s and built models of the business cycle with a similar technique ([26] pp. 101–130). Tinbergen "explained his model building as an iterative process involving both hypotheses and statistical estimation" ([26], p. 103). Morgan (1990) points out that "Despite their usefulness, few copied his graphical methods" ([26], footnote 9, p. 111).

While Tinbergen's methods are similar to the concept of causal graph modeling described here, they are not quite the same. Tinbergen was aiming to understand economies and processes within economies, not to infer causal relations among different social, economic, and ecological variables and factual conditions. Indeed, the method described in this paper is more applicable to meta-analyses of existing studies and to guiding the direction of future research than as the centerpiece of individual topical studies. The intention behind this method is to understand what is more, and what is less, probably true, and to provide a quantitative method for deriving those truths and assessing the quality of the evidence behind them.

Another process that is similar to this one is known as "group model building" [27–29]. Group model building is a process that was created by system dynamics researchers to facilitate diverse stakeholders sharing information across different fields. This is done to solve problems that are common to these stakeholders by unifying, standardizing, and connecting the information that is presented by and for the stakeholders in question [27,29]. While this is a useful technique for helping groups understand problems from many different angles, it is not a generalized way of inferring causality and truth. Creating and testing different causal graphs with the evidence that is available is a separate process that aims to produce general knowledge of empirically inferred reality. The goal with causal graph analyses is to produce a coherent and accurate map of a given concept or problem that is more probably true than competing alternative maps. It is the process of weeding out models that are not supported by evidence, more than it is just the production of different models.

Most people have implicit assumptions about how the world works, in addition to possible desires about how they would like the world to work [15,30–32]. One method for determining the underlying assumptions that are implicit in a research project is to map them out through a process known as "cognitive mapping" [33–37]. Cognitive mapping has been used to understand the implicit assumptions and decisions made by policymakers in the past [38]. Cognitive maps give rise to different concept maps, which are then used to produce different causal graphs. It is logically plausible that the causal graphs used in causal modeling are produced from the concept maps that researchers, policymakers, and stakeholders form via their underlying cognitive maps. Indeed, cognitive mapping is implicit in some research concerning Bayesian networks, which map out the probabilities that a set of causal conditions relates to a set of observed variables [39–43]. It also has been linked to the modeling of ecological systems by researchers [37].

The hypothesis that underpins this perspective is that the implicit cognitive maps of researchers, policymakers, and stakeholders alike result in the production of different conceptual maps of the world. The interplay between cognitive maps and conceptual maps gives rise to different causal graphs being produced through the different perceived factual "nodes" (points) and connecting "edges" (relations or connections between factual nodes) of the researcher, policymaker, and stakeholder. This is different from existing methods for making the goals of researchers, policymakers, and stakeholders explicit, such as the logic model, which displays the connections between different inputs and activities and different outputs and outcomes [44]; this method is more free-form and allows the cognitive maps and implicit biases to be made more readily apparent instead of confining the maps to a preset form. Different assumptions, perspectives, levels, and degrees of awareness in the cognition of researchers, policymakers, and stakeholders result in the perception of different "facts", different interpretations of those "facts", and different edges among the "facts". This could be done implicitly and subconsciously by the researcher and policymaker, but it is also hypothetically possible for it to be done deliberately through the conscious choice and selection of facts, interpretations of facts, and edges among facts [10,32,45,46].

A hypothetical example of this is between people who identify as "conservatives" and "liberals" looking at the same situation facing their shared nation. The conservative may claim that the moral integrity of the society is eroding as time passes, while a progressive may have a different outlook on change and difference in a society from one time to another. The evidence suggests that people on both sides will look for, perceive, and interpret the situation differently, in mutually exclusive ways. For example, the environment cannot both be and not be affected by humans' economic activities, and it cannot both be and not be significant for human survival. Different problems are identified, different choices and preferences are made, and different actions are seen as more or less acceptable because of those differences between the general psychological phenotypes. The obvious problem with relying on these subconscious assumptions and biases alone is that the individual person who is making the policy decisions may not accurately understand, represent, or interpret the meaning of the world. Without an accurate map of how the world works, policymakers are less able to make the best possible choices for the people living in the society and for their own benefit as policymakers making decisions that affect the world they live in. One can think back to the times before navigational and weather/oceanographic sensory technology had advanced to the point where ships could orient themselves accurately on Earth. Without these technologies, which aid navigation and the ability to detect and predict conditions around the ships, sailors' lives were easily lost on the tempestuous oceans, and valuable cargo was lost and destroyed in transit around the world more often than now. The analogy carries over to the fate and condition of nations and human societies.

#### **3. The Methodology**

The goal of this paper is not to advocate a singular methodology or tool for studying complex phenomena in our universe. Rather, the goal is to propose a new tool that can be used to help determine the appropriate tool(s) for studying complex phenomena, and to at least partially overcome the deficits of human cognition and perception in research and decision-making. By making the assumptions explicit rather than implicit in research and policy designs, we can get a firmer grip on what healthy priorities are, and how to achieve them. Below is an example of a theoretical causal graph (Figure 1).

**Figure 1.** An example of a theoretical causal graph.

Here, the x factors cause the y effect when brought together in this combination. We see that x1 causes x2, which, when combined with x3, produces the y1 effect. The plus sign represents x2 having a positive feedback effect on y1 (more of x2 leads to more of y1), while x3 has an unknown effect on y1. Notice the error/unknown-factors variable, which accounts for anything else the model misses.
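For concreteness, the graph just described can be sketched as a signed directed graph in code. This is a minimal illustrative representation, not the paper's own formal notation; the node and edge names follow the figure description above:

```python
# Figure 1's hypothesized causal graph as a set of nodes plus signed,
# directed edges. "+" marks a positive feedback effect, "?" an unknown effect.
causal_graph = {
    "nodes": {"x1", "x2", "x3", "y1", "error/unknown"},
    "edges": {
        ("x1", "x2"): "+",              # x1 causes x2
        ("x2", "y1"): "+",              # more x2 leads to more y1
        ("x3", "y1"): "?",              # x3's effect on y1 is unknown
        ("error/unknown", "y1"): "?",   # catch-all for what the model misses
    },
}

def parents(graph, node):
    """Return the direct causes (parents) of a node in the graph."""
    return {src for (src, dst) in graph["edges"] if dst == node}
```

A helper such as `parents` makes the hypothesized dependencies queryable, e.g. `parents(causal_graph, "y1")` recovers exactly the factors the figure posits as direct causes of the y1 effect.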

The methodology is simple to describe and works as follows:

- (a) It is important to note that this paper is agnostic about the specifics of the designs of the research, so long as they are logically valid and testable;
- (b) This is where any number of qualitative and quantitative methods can be used;
- (c) It is also a good idea to use multiple methods on the same factual node or interaction edge to increase the probability of validity; that is often called robustness in research;
- (a) The probability of the population of causal graphs can never truly equal 1 for complete validity, because there is always an unknown quantity of potential error present in the population of models, i.e., the unknown unknowns;
- (b) The probabilities can be explicitly Bayesian, empirical Bayesian, or based on flat priors;

edges and nodes. Poorer quality evidence has less of an effect, or no effect, on the probability of demonstrating truth;


It is important to again note that this procedure is agnostic about the specific research techniques that are used to infer causality or the truthfulness of factual nodes. Notice how the factual validity for each of the variables (the nodes in the causal graph) and causal edges (the links in the causal graph) are not necessarily known, and are rather hypothesized to exist based on past evidence and the circumspection of the researcher. From this model, we can derive various other models to test for and identify possible methods for gathering and examining the data. We can see other possible models below (Figures 2 and 3).

**Figure 2.** An alternative graph to Figure 1.

**Figure 3.** An alternative graph to Figure 2.

Notice how parts of the graphs in Figures 2 and 3 changed from Figure 1, representing different and mutually exclusive hypothetical models that may or may not be more accurate than the original hypothesis.

These assumptions (that are different from the original causal graph) each then have their own theoretical and observational bases and their own interpretations of what is present and happening in the real world outside of the researcher's perspective and assumptions. With this technique, it is also possible to model unknown or hypothesized interactions and facts, such as the question mark between variables x3 and y1. Other models can be constructed using all of the possibilities. For simplicity's sake, most of these options in the research design space have been left out. However, if the researcher(s) are able to get the largest possible collection of causal graphs together while staying relevant to the topic(s) at hand, the larger design space should provide a rich environment for testing the factual assumptions and interactions among the variables. Researchers can then work together across disciplines to design experiments and determine which data to collect and how in order to "shave away" at the hypothesis space of the research topic. The surviving causal graphs, which withstand the scrutiny of the researchers' efforts, can be said to be more probably true and valid than the other causal graphs that have aspects that are not valid or which have little to no evidence in support of them. These surviving causal graphs correspond to Bayesian posteriors or unrejected frequentist hypotheses, in that they are the end products of analyses.
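The "shaving away" step described above can be sketched as a small probabilistic-elimination routine: candidate causal graphs carry prior probabilities, graphs contradicted by the evidence are removed, and the survivors' probabilities are renormalized. The candidate names and prior values below are invented for illustration, and the sketch assumes flat-prior-style renormalization rather than any particular Bayesian machinery:

```python
def eliminate(candidates, rejected):
    """Drop rejected causal graphs and renormalize the survivors.

    `candidates` maps graph names to probabilities; `rejected` is the set of
    graph names the evidence has ruled out. Within the modeled population the
    survivors' probabilities are renormalized to sum to 1, even though (as the
    text notes) the true total can never reach 1 because of unknown unknowns.
    """
    survivors = {g: p for g, p in candidates.items() if g not in rejected}
    total = sum(survivors.values())
    return {g: p / total for g, p in survivors.items()}

# Hypothetical priors over the three competing graphs of Figures 1-3:
priors = {"figure_1": 0.40, "figure_2": 0.35, "figure_3": 0.25}

# Suppose the collected data rule out the Figure 3 variant:
posterior = eliminate(priors, {"figure_3"})
```

Each round of data collection narrows the pool, so the surviving graphs' probabilities grow, mirroring the deductive process of elimination the paper proposes.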

Figure 4 is an example of a causal graph produced to clarify questions about education policy and the factors that combine to create academic, social, behavioral, and personal success in students. Using various data sets and methods of estimation, the most likely causal pathways could be found. Some researchers will add double-headed causal arrows and reversed arrows.

**Figure 4.** An example causal graph for hypotheses concerning outcomes in education.

There are two ideas that can be drawn from taking this holistic approach to education and educational success. First, interrelated subject areas, such as the defined pedagogy, territorial demographics, the political environment, and the parental/familial conditions that the child grows up in, can be broken out from the causal graph into their own interrelated clusters as part of the larger graph that contains the whole. This would enable collaboration among experts in these various fields to create a more accurate model of the whole picture of how children develop, learn, and grow into adults, which in turn yields a more accurate map for helping policymakers see where and how they might intervene in a given subject area. The second idea is that the whole causal graph is malleable to the perspective of the researcher in question, and alternatives for hypothesis testing can easily be developed by simply going through the graphical representation of the problem(s) at stake to find other possibilities and alternatives. Time stamps can be added to refine the temporal relationships among the variables.

#### **4. Implications of this Method for Policy Research**

This method would allow policy researchers, policymakers, and stakeholders conducting social science research to better understand not only their own implicit, subconscious biases and explicit, conscious biases, but also to move beyond those biases in order to perceive and study our complex social and environmental worlds accurately. Communication of divergent beliefs and models would be easier. It is feasible that policymakers, researchers, and stakeholders will be able to move beyond disagreements over what may be just different cognitive maps, so that better choices can be made faster, with less debate, and with greater efficacy than would ordinarily happen without explicitly using this methodology to understand, design, and analyze situations and conditions in our social and environmental worlds. At the least, it would be clearer which issues must be resolved and which models estimated.

The methodology can also be used as a technique for holding policymakers and researchers more accountable for their assumptions and their chosen research techniques. Even though the method itself is agnostic about the methods that are used in research, there are practices for testing validity and causality that are more or less effective than others. By explicitly drawing the causal graph, it is easier to tell whether more or less appropriate methods are being used to test the nodes and causal edges of the graph. By explicitly stating the implicit and explicit biases of the individual through the process of mapping out their factual and causal assumptions, human societies and organizations that adopt this method for making choices and understanding the world may be able to more effectively understand political opponents' concerns and perspectives, as well as to be more effectively able to challenge those perspectives and opinions that are not based in evidence both behind closed doors in negotiations and in front of the fora of the general public. Assumptions totally lacking empirical verification would stand out.

The most significant benefit of using this methodology is that mutually exclusive opinions on facts and relations can be more clearly examined. Most of the common controversies in policy debates stem from competing, mutually exclusive ideas on how the world works, and how it ought to work for human well-being and survival. From whether to have public sector involvement in the economy, to the necessity of protecting the environment, the different attitudes, biases, and opinions cannot all be called of equal value for ensuring human health and well-being. Causal graphs can be used to sort through those differences in policy preferences and opinions to deliver a clearer picture of common reality and what is needed for human societies at given times. Those opinions that are supported by quality evidence can then take priority over those that simply are not, or have evidence against their empirical validity.

#### **5. Caveats to this Method**

The most glaring problem with this methodology is that it does not give instructions on how to collect data, what data to collect, or how to treat the data when they are collected. It may help inform research decisions, but it does not give explicit instructions on what to use or when to use it. This leaves the design of the experiments still open to possible researcher bias and the usual difficulties with inferring causality when researchers have underlying assumptions and cognitive biases that they consciously or subconsciously prefer over other models and methods. By explicitly stating the researcher's hypothesis space and cognitive bias, measures of robustness can be developed for causal models to see whether researchers are truly ruling out other possibilities and whether they are honestly adhering to sound data identification, collection, and interpretation methods. Ignoring logical possibilities would be much more difficult.

Another caveat to this research methodology is the possibility for aspects of the causal graphs to change stochastically during the development of the models and throughout the experiments and analysis of the data. That is, the structure can change. A policymaker may be in the middle of developing a causal graph which is presently accurate, but which has dynamic aspects that can change in the near to distant future as aspects of our social world (such as technology and our understanding of the world itself) change. In addition to these probable knowledge-based changes, there may also be some aspects of our social world which change due to aesthetic preferences or changes in relative perspective and attitude. This further complicates the development of these causal graphs, as aspects and perspectives of them may change in ways that do not track neatly with the development and production of our knowledge and awareness. What may be in fashion and perceived as desirable now may not be viewed as such in the near to distant future, thus altering the perspectives on the causal graphs that are developed today or rendering them potentially useless for achieving the goals of the society in the future. Thus, the dynamic and evolving nature of consciousness and preferences over time may influence the development of these causal graphs, if not affect the actual graphs themselves in the content of their facts, interpretations of facts, and interactions among the variables. In response, more basic social factors related to group dynamics can be added to the models, such as fundamental psychological processing common to humans. Change itself can become a part of the model. As different edges and nodes can change over time, and their changing nature can theoretically be observed, the changes and their effects can be noted and tested. This gives the resulting models significantly greater empirical validity, and thus enriches our understanding of common reality to the fullest possible extent that we can achieve.

A third possible problem with this methodology is that it offers no method for keeping the model parsimonious and simple. While some complexity is unavoidable when working with large and interconnected topics, the models that researchers build could become too unwieldy for practical use. A simple way to resolve this without abandoning the subject's potential complexity is for the researcher to narrow their initial focus to a given factual node or a specific interaction and then grow the model from there, limiting it to the practical relevance of the research in question. That researcher, or other researchers, can then expand the web of knowledge in other directions at future times.

#### **6. Conclusions**

This paper presents a new tool for researchers and policymakers alike for understanding complex and interconnected topics of interest and importance to human society as a whole. By explicitly stating the assumptions behind a subject, researchers and policymakers can develop counterfactual alternative graphs for the subjects of their interest and research, identify data relevant to the subject, develop methods for collecting and analyzing the data, and then systematically shave away the factual assumptions and hypothesized interactions for which there is little or no quality evidence. Through this deductive process of elimination, researchers and policymakers can discard graphs for which all or part lacks evidence and thus be left with a pool of possibilities that shrinks in size and grows in its chances of accurately representing reality itself.

It is possible that some specific aspects of the graphs may change over time with people's attitudes, preferences, and perspectives. However, it is assumed explicitly in this paper that the underlying method of creating causal graphs with fact nodes and interaction edges can remain valid across time, space, and context, even if the specific models change over time. The process of shaving away at conceptual maps with this method can produce a more robust, accurate, and complete representation of reality that the human mind can comprehend and use for other purposes. By doing so, we can begin to constructively resolve key policy and political debates as they arise with this common process of gathering, analyzing, and evaluating evidence from our common reality. Political debates may ultimately be based in values and opinions; however, not all opinions and values are of equal value for human society's health and well-being. The proposed method aims to help resolve these debates in favor of what is factually true and healthful, at the expense of those opinions that are not true and are very likely unhealthful for humans in general.

**Author Contributions:** E.L. conceived and designed the methodology and its uses; E.L. wrote the paper and developed the graphical figures. J.S.B. helped edit the first draft.

**Funding:** This research received no external funding.

**Acknowledgments:** Special thanks to J.S. Butler for his assistance with the editing and for his advice to include Tinbergen's work in this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Relation "Greater than or Equal to" between Ordered Fuzzy Numbers**

#### **Krzysztof Piasecki**

Department of Investment and Real Estate, Poznań University of Economics and Business, Niepodległości 10, 61-875 Poznań, Poland; krzysztof.piasecki@ue.poznan.pl

Received: 13 July 2019; Accepted: 1 August 2019; Published: 3 August 2019

**Abstract:** The ordered fuzzy number (OFN) is determined as an ordered pair of a fuzzy number (FN) and its orientation. An FN is widely interpreted as an imprecise number approximating a real number. We interpret any OFN as an imprecise number equipped with additional information about the location of the approximated number. This additional information is given as the orientation of the OFN. The main goal of this paper is to determine the relation "greater than or equal to" on the space of all OFNs. This relation is unambiguously defined as an extension of the analogous relation on the space of all FNs. All properties of the introduced relation are investigated on the basis of the revised OFNs' theory. It is shown here that this relation is a fuzzy one. The relations "greater than" and "equal to" are also considered. It is proven that the introduced relations are independent of the orientation of the compared OFNs. This result makes it easier to solve optimization tasks using OFNs.

**Keywords:** ordered fuzzy number; fuzzy relation; preorder; strict order; equivalence relation

#### **1. Introduction**

The concept of the ordered fuzzy number (OFN) was intuitively introduced by Kosiński [1–4] as an extension of the notion of fuzzy number (FN), which is widely interpreted as an imprecise approximation of a real number. OFNs' usefulness follows from the fact that an OFN is interpreted as an FN with additional information about the location of the approximated number. Kosiński [1–4] determined arithmetic for OFNs as an extension of results obtained by Goetschel and Voxman [5] for FNs. For formal reasons, Kosiński's theory was revised [6] in such a way that the revised OFN definition fully corresponds to Kosiński's intuitive definition of OFN. OFNs are always defined without use of any ordering relation between FNs. Knowing this fact makes it easier to read the section on the ordering relationship between OFNs. This paper is linked to the revised OFNs' theory.

In decision analysis, economics and finance, OFNs are frequently employed to evaluate the alternatives in modelling a real-world problem [7–19]. On the other hand, the OFN theory has an important disadvantage. This disadvantage is due to the lack of formal mathematical models associated with OFNs. Therefore, an important goal of further formal research should be to fill these theoretical gaps.

If alternatives are evaluated by OFNs, then their ranking leads to an arrangement of OFNs, which is pre-given as an ordering relation "greater than or equal to" between OFNs.

Since the notion of OFN is interpreted as an extension of the notion of FN, any formal model of order between OFNs should be consistent with the fixed ordering relation between FNs. Unlike real numbers, FNs have no natural order. A straightforward approach to ordering FNs is to convert each compared FN into a real number. Any procedure of this conversion is called a "defuzzification method" [20]. Representative examples of FNs' arrangement using different defuzzification methods are presented in [20–56]. Each individual defuzzification method, however, pays attention to a special aspect of an FN. As a consequence, each approach suffers from the defect that only one real number is associated with each FN. Freeling [57] pointed out that "by reducing the whole of our analysis to a single number, we are losing much of the information we have purposely been keeping throughout our calculations".

Kosiński and Sztyma [58] introduced defuzzification methods for OFNs. Some applications of OFN arrangement using defuzzification methods are presented in [8,16,18,19]. On the other hand, in [17], it is shown that the choice of defuzzification method has a significant impact on the ordering of OFNs. In an extreme case, the use of defuzzification procedures can totally blur the true picture of the arrangement of OFNs. It can lead to results deviating from the real ranking of decision alternatives, which increases the hazard of making a wrong decision. For these reasons, the arrangement of OFNs should be described by a fuzzy relation which compares OFNs pairwise. In this way, we can compare OFNs without losing information about the imprecision and orientation of the evaluated OFNs. This approach is more realistic.

For FNs, fuzzy order relations can be defined in two ways. First, a fuzzy order of FNs can be determined using α-cuts. Representative examples of FNs' arrangement using α-cuts are presented in [59–61]. At present, an α-cuts theory dedicated to OFNs is unknown. Therefore, formal models of ordering that use α-cuts cannot currently be extended to the case of OFNs. Alternatively, Orlovsky [62] defined a fuzzy order of FNs by applying Zadeh's Extension Principle [63–65]. This method does not raise any objections.

Therefore, the main goal of the presented work is to define a fuzzy order relation between OFNs which is consistent with the fuzzy order introduced by Orlovsky. Setting such a relationship is needed to build any quantitative model based on a comparison of OFNs. In general, the relation *GE* can be applied in any quantitative model of the real world in which a comparison of imprecise numbers is used. A tentative approach to this subject was presented in [66]. The fuzzy order of OFNs obtained in this way is applied in [12,17]. The results presented here are the final, generalized version of a fuzzy ordering of OFNs fulfilling the assumed consistency condition.

The paper is organised in the following way. Section 2 presents the considered models of imprecise quantity. Section 2.1 describes the basic concepts of FNs and arithmetic operations on FNs. The revised notion of OFN and arithmetic operations on OFNs are presented in Section 2.2. It is pointed out here that OFNs are always defined without use of any ordering relation between FNs. In Section 2.3, the disorientation map is introduced. Moreover, some differences between FNs and OFNs are explained here. In Section 3, the author proves that some simple properties are fulfilled by Orlovsky's fuzzy order of FNs. In Section 4, the author introduces a relation "greater than or equal to" between OFNs which is consistent with Orlovsky's fuzzy order. Section 5 contains some basic problems linked with the ordering of OFNs. In Section 6, all theoretical considerations are illustrated by a case study devoted to the subject of investment decisions. Finally, Section 7 contains the final remarks.

#### **2. Imprecise Quantities—Considered Models**

Objects of any considerations may be given as elements of a predefined space $\mathbb{X}$. The basic tool for the imprecise classification of these elements is the notion of fuzzy set introduced by Zadeh [67]. Any fuzzy set $\mathcal{A}$ is unambiguously determined by means of its membership function $\mu_A \in [0,1]^{\mathbb{X}}$, as follows

$$\mathcal{A} = \{ (\mathbf{x}, \mu\_A(\mathbf{x})); \mathbf{x} \in \mathbb{X} \}. \tag{1}$$

In all our considerations we use the multivalued logic determined by Łukasiewicz [68]. From the point of view of multi-valued logic, the value $\mu_A(x)$ is interpreted as the truth value of the sentence "$x \in \mathcal{A}$". By the symbol $\mathcal{F}(\mathbb{X})$ we denote the family of all fuzzy sets in the space $\mathbb{X}$. Any fuzzy set $\mathcal{A} \in \mathcal{F}(\mathbb{X})$ may be described using the following notions:

For each $\alpha \in\; ]0, 1]$, the $\alpha$-cut $[\mathcal{A}]_{\alpha}$ is determined as follows

$$[\mathcal{A}]\_{\alpha} = \{ \mathbf{x} \in \mathbb{X} : \ \mu\_A(\mathbf{x}) \ge \alpha \};\tag{2}$$

The support closure $[\mathcal{A}]_{0^+}$ is given in the following way

$$[\mathcal{A}]_{0^{+}} = \lim_{\alpha \to 0^{+}} [\mathcal{A}]_{\alpha}. \tag{3}$$

An imprecise quantity is a family of real numbers, each of which belongs to it to a different degree. In this section, the notion of fuzzy set is applied to describe imprecise quantities.
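As a small, hypothetical illustration of the α-cut notion in identity (2), the following Python sketch computes α-cuts of a fuzzy set discretised over a finite universe; the universe and the membership values are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: alpha-cuts (identity (2)) of a discretised fuzzy set.
# The universe {1,...,5} and the membership values are illustrative assumptions.

def alpha_cut(membership, alpha):
    """Return the alpha-cut {x : mu_A(x) >= alpha} of a discretised fuzzy set."""
    return {x for x, mu in membership.items() if mu >= alpha}

mu_A = {1: 0.2, 2: 0.6, 3: 1.0, 4: 0.6, 5: 0.2}  # membership function mu_A

print(alpha_cut(mu_A, 0.5))   # {2, 3, 4}
print(alpha_cut(mu_A, 1.0))   # {3}
```

On a continuous universe the α-cuts of an FN are intervals; the discrete version above merely makes the defining inequality $\mu_A(x) \ge \alpha$ concrete.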

#### *2.1. Fuzzy Numbers—Some Basic Notions*

A commonly used model of an imprecise number is the FN, defined as a fuzzy set in the real line $\mathbb{R}$. The most general definition of an FN is given as follows:

**Definition 1 [69].** *The fuzzy number (FN) is a fuzzy subset* $\mathcal{L} \in \mathcal{F}(\mathbb{R})$ *with bounded support closure* $[\mathcal{L}]_{0^+}$ *that is represented by its upper semi-continuous membership function* $\mu_L \in [0,1]^{\mathbb{R}}$ *satisfying the conditions:*

$$\exists\_{\mathbf{x}\in\mathbb{R}}\,\,\mu\_{\mathcal{L}}(\mathbf{x}) = \mathbf{1} \tag{4}$$

$$\forall_{(x,y,z)\in\mathbb{R}^3}\; x \le y \le z \implies \mu_L(y) \ge \min\{\mu_L(x), \mu_L(z)\}. \tag{5}$$

The set of all FNs is denoted by the symbol $\mathbb{F}$. Let us consider any arithmetic operation $*$ defined on $\mathbb{R}$. The symbol $\circledast$ denotes the extension of the arithmetic operation $*$ to $\mathbb{F}$. In [70], arithmetic operations on FNs are introduced in such a way that they are coherent with Zadeh's Extension Principle. In line with it, for any pair $(\mathcal{K}, \mathcal{L}) \in \mathbb{F}^2$ represented by membership functions $\mu_K, \mu_L \in [0,1]^{\mathbb{R}}$, the FN

$$
\mathcal{M} = \mathcal{K} \circledast \mathcal{L} \tag{6}
$$

is described by its membership function $\mu_M \in [0,1]^{\mathbb{R}}$ determined by means of the identity:

$$
\mu_M(z) = \sup\{\min\{\mu_K(x), \mu_L(y)\} : z = x * y,\ (x, y) \in \mathbb{R}^2\}. \tag{7}
$$
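The sup–min construction in identity (7) can be made concrete on discretised membership functions. The following Python fragment is a hypothetical sketch: the finite grids and the triangular membership values are assumptions made only for the illustration.

```python
# Hypothetical sketch of the sup-min Extension Principle (identity (7)) on
# discretised membership functions; the grids and values are illustrative.
from itertools import product

def extend(op, mu_K, mu_L):
    """Extend a crisp operation `op` to discretised fuzzy sets via identity (7)."""
    out = {}
    for (x, mk), (y, ml) in product(mu_K.items(), mu_L.items()):
        z = op(x, y)
        out[z] = max(out.get(z, 0.0), min(mk, ml))   # sup of min over z = x * y
    return out

mu_K = {1: 0.5, 2: 1.0, 3: 0.5}     # "about 2"
mu_L = {10: 0.5, 20: 1.0, 30: 0.5}  # "about 20"

mu_M = extend(lambda x, y: x + y, mu_K, mu_L)
print(mu_M[22])   # 1.0  (only attainable as 2 + 20)
print(mu_M[21])   # 0.5
```

Because the grids are finite, the supremum becomes a maximum over the finitely many pairs $(x, y)$ with $z = x * y$.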

Thanks to the results obtained in [5], we have that any FN can be equivalently defined as follows:

**Theorem 1 [71].** *For any FN* $\mathcal{L}$ *there exists a non-decreasing sequence* $(a, b, c, d) \subset \mathbb{R}$ *such that* $\mathcal{L}(a, b, c, d, L_L, R_L) = \mathcal{L} \in \mathcal{F}(\mathbb{R})$ *is determined by its membership function* $\mu_L(\cdot|a, b, c, d, L_L, R_L) \in [0,1]^{\mathbb{R}}$ *described by the identity*

$$\mu\_L(\mathbf{x}|a, b, c, d, L\_L, R\_L) = \begin{cases} 0, & \mathbf{x} \notin [a, d], \\ L\_L(\mathbf{x}), & \mathbf{x} \in [a, b], \\ 1, & \mathbf{x} \in [b, c], \\ R\_L(\mathbf{x}), & \mathbf{x} \in [c, d], \end{cases} \tag{8}$$

*where the left reference function LL* ∈ [0, 1] [*a*,*b*] *and the right reference function RL* <sup>∈</sup> [0, 1] [*c*,*d*] *are upper semi-continuous monotonic ones meeting the conditions:*

$$L_L(b) = R_L(c) = 1, \tag{9}$$

$$[\mathcal{L}]\_{0^{+}} = [a, d]. \tag{10}$$

The FN $\mathcal{L}(a, a, a, a, L_L, R_L) = a$ represents the real number $a \in \mathbb{R}$. Therefore, we can write $\mathbb{R} \subset \mathbb{F}$. For any $z \in [b, c]$, an FN $\mathcal{L}(a, b, c, d, L_L, R_L)$ is a formal model of the linguistic variable "about $z$". Understanding the phrase "about $z$" depends on the applied pragmatics of the natural language. Let us note that an FN may be replaced by a generalized FN [72], which does not meet condition (4).

In line with identity (7), the unary minus operator "–" on $\mathbb{R}$ is extended to the minus operator $\ominus$ on $\mathbb{F}$ by the identity

$$\ominus\mathcal{L}(a, b, c, d, L_L, R_L) = \mathcal{L}\big({-d}, {-c}, {-b}, {-a}, R_L^{(-)}, L_L^{(-)}\big) \tag{11}$$

where

$$R\_L^{(-)}(\mathbf{x}) = R\_L(-\mathbf{x}),\tag{12}$$

$$L\_L^{(-)}(\mathbf{x}) = L\_L(-\mathbf{x}).\tag{13}$$

In further considerations, we will use the following concepts.

**Definition 2.** *For any upper semi-continuous non-decreasing function* $L \in [0,1]^{[u,v]}$*, its cut-function* $L^{\star} \in [u,v]^{[0,1]}$ *is determined by the identity*

$$L^{\star}(\alpha) = \min\{x \in [u, v] : L(x) \ge \alpha\}. \tag{14}$$

**Definition 3.** *For any upper semi-continuous non-increasing function* $R \in [0,1]^{[u,v]}$*, its cut-function* $R^{\star} \in [u,v]^{[0,1]}$ *is determined by the identity*

$$R^{\star}(\alpha) = \max\{x \in [u, v] : R(x) \ge \alpha\}. \tag{15}$$

**Definition 4.** *For any bounded, continuous, and non-decreasing function* $l \in [l(0), l(1)]^{[0,1]}$*, its pseudo-inverse* $l^{\sphericalangle} \in [0,1]^{[l(0), l(1)]}$ *is determined by the identity*

$$l^{\sphericalangle}(x) = \max\{\alpha \in [0, 1] : l(\alpha) = x\}. \tag{16}$$

**Definition 5.** *For any bounded, continuous, and non-increasing function* $r \in [r(0), r(1)]^{[0,1]}$*, its pseudo-inverse* $r^{\sphericalangle} \in [0,1]^{[r(1), r(0)]}$ *is determined by the identity*

$$r^{\sphericalangle}(x) = \min\{\alpha \in [0, 1] : r(\alpha) = x\}. \tag{17}$$
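Definitions 2–5 can be illustrated numerically. The sketch below is hypothetical: the grid resolution and the linear reference function are assumptions, not part of the paper. For a strictly increasing linear $L$ on $[u, v]$, the cut-function (14) has the closed form $L^{\star}(\alpha) = u + \alpha(v - u)$, which the grid search approximates.

```python
# Hypothetical numeric sketch of Definition 2 (cut-function, identity (14)).
# Grid resolution n and the linear reference function L are illustrative assumptions.

def cut_function(L, u, v, alpha, n=10_000):
    """Approximate L*(alpha) = min{x in [u, v] : L(x) >= alpha} on a grid."""
    xs = (u + (v - u) * i / n for i in range(n + 1))
    return min(x for x in xs if L(x) >= alpha)

u, v = 2.0, 6.0
L = lambda x: (x - u) / (v - u)      # linear, increasing from 0 at u to 1 at v

print(cut_function(L, u, v, 0.5))    # ~4.0, matching u + 0.5*(v - u)
```

For non-linear or merely semi-continuous reference functions the grid search still applies, but the closed form above does not.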

In reference [5], it is proved that the FNs' sum $\oplus$ is given by the identity

$$\mathcal{L}(a+e, b+f, c+g, d+h, L_J, R_J) = \mathcal{L}(a, b, c, d, L_K, R_K) \oplus \mathcal{L}(e, f, g, h, L_M, R_M), \tag{18}$$

where

$$\forall_{\alpha \in [0,1]}\; l_J(\alpha) = L_K^{\star}(\alpha) + L_M^{\star}(\alpha), \tag{19}$$

$$\forall_{\alpha \in [0,1]}\; r_J(\alpha) = R_K^{\star}(\alpha) + R_M^{\star}(\alpha), \tag{20}$$

$$\forall_{x \in [a+e,\, b+f]}\; L_J(x) = l_J^{\sphericalangle}(x), \tag{21}$$

$$\forall_{x \in [c+g,\, d+h]}\; R_J(x) = r_J^{\sphericalangle}(x). \tag{22}$$
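As a hypothetical numeric check of identities (18)–(22): in the trapezoidal case the arms are linear, the cut-function of the left arm of $Tr(a, b, \cdot, \cdot)$ is $L^{\star}(\alpha) = a + \alpha(b - a)$, and adding two such cut-functions as in (19) yields exactly the cut-function of the parameter-wise sum. All numbers below are invented for the illustration.

```python
# Hypothetical check of identities (18)-(22) in the trapezoidal case:
# adding the linear cut-functions L_K* and L_M* reproduces the cut-function
# of the left arm of the parameter-wise sum. Numbers are illustrative.

def left_cut(a, b, alpha):
    """Cut-function of the linear left arm of Tr(a, b, ., .): a + alpha*(b - a)."""
    return a + alpha * (b - a)

aK, bK = 1.0, 2.0
aM, bM = 0.0, 1.0

for alpha in (0.0, 0.25, 0.5, 1.0):
    lhs = left_cut(aK, bK, alpha) + left_cut(aM, bM, alpha)   # as in identity (19)
    rhs = left_cut(aK + aM, bK + bM, alpha)                   # left arm of the sum
    assert abs(lhs - rhs) < 1e-12
print("cut-function addition matches the trapezoidal sum")
```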

The difference between FNs is determined in the following way

$$\mathcal{L}(a, b, c, d, L_K, R_K) \ominus \mathcal{L}(e, f, g, h, L_M, R_M) = \mathcal{L}(a, b, c, d, L_K, R_K) \oplus (\ominus\mathcal{L}(e, f, g, h, L_M, R_M)). \tag{23}$$

Then identities (11)–(13) and (18)–(23) imply that

$$\mathcal{L}(a-h, b-g, c-f, d-e, L_W, R_W) = \mathcal{L}(a, b, c, d, L_K, R_K) \ominus \mathcal{L}(e, f, g, h, L_M, R_M), \tag{24}$$

where

$$\forall_{\alpha \in [0,1]}\; l_W(\alpha) = L_K^{\star}(\alpha) - R_M^{\star}(\alpha), \tag{25}$$

$$\forall_{\alpha \in [0,1]}\; r_W(\alpha) = R_K^{\star}(\alpha) - L_M^{\star}(\alpha), \tag{26}$$

$$\forall_{x \in [a-h,\, b-g]}\; L_W(x) = l_W^{\sphericalangle}(x), \tag{27}$$

$$\forall_{x \in [c-f,\, d-e]}\; R_W(x) = r_W^{\sphericalangle}(x). \tag{28}$$

The above arithmetic operators may be generalized to the case of intuitionistic FNs [73]. On the other hand, the dependencies (18)–(28) are not met for discrete FNs [74]. All the above identities show the high complexity of arithmetic operations on the space $\mathbb{F}$. Due to that, in many practical applications researchers limit the use of FNs to the kind distinguished below [75].

**Definition 6.** *For any non-decreasing sequence* (*a*, *<sup>b</sup>*, *<sup>c</sup>*, *<sup>d</sup>*) <sup>⊂</sup> <sup>R</sup>*, a trapezoidal FN (TrFN) is the FN* <sup>T</sup> <sup>=</sup> *Tr*(*a*, *<sup>b</sup>*, *<sup>c</sup>*, *<sup>d</sup>*) <sup>∈</sup> <sup>F</sup> *defined by its membership functions* <sup>μ</sup>*<sup>T</sup>* <sup>∈</sup> [0, 1] <sup>R</sup> *in the following way*

$$\mu_T(x) = \mu_{Tr}(x|a, b, c, d) = \begin{cases} 0, & x \notin [a, d], \\ \frac{x-a}{b-a}, & x \in [a, b], \\ 1, & x \in [b, c], \\ \frac{x-d}{c-d}, & x \in [c, d]. \end{cases} \tag{29}$$

The space of all TrFNs is denoted by the symbol F*Tr*. For any TrFN we have

$$Tr(-d, -c, -b, -a) = \ominus Tr(a, b, c, d), \tag{30}$$

$$Tr(a+e, b+f, c+g, d+h) = Tr(a, b, c, d) \oplus Tr(e, f, g, h), \tag{31}$$

$$Tr(a-h, b-g, c-f, d-e) = Tr(a, b, c, d) \ominus Tr(e, f, g, h). \tag{32}$$
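A minimal Python sketch of the trapezoidal case: the membership function (29) together with the parameter-wise arithmetic (30)–(32). The class name `TrFN` and the operator overloads are illustrative choices, and degenerate trapezoids with $a = b$ or $c = d$ are not handled.

```python
# Hypothetical sketch: trapezoidal FN Tr(a,b,c,d) with membership (29) and
# the parameter-wise arithmetic (30)-(32). Names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrFN:
    a: float; b: float; c: float; d: float   # non-decreasing sequence

    def mu(self, x):
        """Membership function mu_T(x) from identity (29); assumes a < b and c < d."""
        if x < self.a or x > self.d:
            return 0.0
        if self.a <= x < self.b:
            return (x - self.a) / (self.b - self.a)
        if self.b <= x <= self.c:
            return 1.0
        return (x - self.d) / (self.c - self.d)

    def __neg__(self):                        # identity (30)
        return TrFN(-self.d, -self.c, -self.b, -self.a)

    def __add__(self, o):                     # identity (31)
        return TrFN(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    def __sub__(self, o):                     # identity (32)
        return self + (-o)

K = TrFN(1, 2, 3, 4)
M = TrFN(0, 1, 1, 2)
print(K + M)      # TrFN(a=1, b=3, c=4, d=6)
print(K - M)      # TrFN(a=-1, b=1, c=2, d=4)
print(K.mu(1.5))  # 0.5
```

Note that the subtraction in (32) is not parameter-wise: it combines the left arm of the minuend with the reversed right arm of the subtrahend, which the composition of (30) and (31) captures automatically.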

#### *2.2. Ordered Fuzzy Numbers—Some Basic Facts*

The notion of OFN was intuitively introduced by Kosiński [1–4] as a model of imprecise number for which the subtraction of OFNs is the inverse operator to the addition of OFNs. Therefore, OFNs can contribute to specific problems concerning the solution of fuzzy linear equations, or help with the interpretation of specific improper fuzzy arithmetic results.

An important disadvantage of Kosiński's theory is that there exist OFNs which are not linked to any membership function [4]. For this reason, Kosiński's theory was revised in [6], where OFNs are defined as follows:

**Definition 7 [6].** *For any monotonic sequence* $(a, b, c, d) \subset \mathbb{R}$*, the ordered fuzzy number (OFN)* $\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L) = \stackrel{\leftrightarrow}{\mathcal{L}}$ *is the pair of the orientation* $\overrightarrow{a, d} = (a, d)$ *and the fuzzy set* $\mathcal{L} \in \mathcal{F}(\mathbb{R})$ *described by the membership function* $\mu_L(\cdot|a, b, c, d, S_L, E_L) \in [0,1]^{\mathbb{R}}$ *given by the identity*

$$\mu\_L(\mathbf{x}|a, b, c, d, S\_L, E\_L) = \begin{cases} 0, & \mathbf{x} \notin [a, d] \equiv [d, a], \\ S\_L(\mathbf{x}), & \mathbf{x} \in [a, b] \equiv [b, a], \\ 1, & \mathbf{x} \in [b, c] \equiv [c, b], \\ E\_L(\mathbf{x}), & \mathbf{x} \in [c, d] \equiv [d, c]. \end{cases} \tag{33}$$

*where the starting function SL* ∈ [0, 1] [*a*,*b*] *and the ending function EL* <sup>∈</sup> [0, 1] [*c*,*d*] *are upper semi-continuous monotonic ones meeting the conditions (6) and*

$$S_L(b) = E_L(c) = 1. \tag{34}$$

Identity (33) additionally describes the modified notation of numerical intervals which is applied in this work.

**Discussion about the terminology:** We see above that the notion of "ordered fuzzy number" is defined without applying any ordering relation between FNs. In Kosiński's original works, "ordered fuzzy number" is also defined without use of any ordering relation between FNs. In each of these cases, an "ordered fuzzy number" is defined as an FN completed by an orientation. Therefore, in my opinion, the term "ordered fuzzy number" should be replaced by the term "oriented fuzzy number". The following premises support such a proposal for change:


"Ordered fuzzy numbers" are the life's work of Professor Kosiński. Therefore, the proposed change to the term OFN should have been discussed with him. Because Professor Kosiński has passed away, this is not possible. Therefore, I agree with other scientists [76,77] that the OFN may be called a "Kosiński's number". Future scientific discussion will allow us to choose between "oriented fuzzy number", "Kosiński's number", or another term. Today we still use the term "ordered fuzzy number". Nevertheless, in this work the abbreviation OFN can be read as "ordered fuzzy number" or "oriented fuzzy number". The use of the second term makes it easier to read the section on the ordering relationship between OFNs.

The symbol $\mathbb{K}$ denotes the space of all OFNs. Any OFN describes an imprecise number with additional information about the location of the approximated number. This information is given as the orientation of the OFN. If $a < d$, then the OFN $\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L)$ has the positive orientation $\overrightarrow{a, d}$. For any $z \in [b, c]$, the positively oriented OFN $\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L)$ is a formal model of the linguistic variable "about or slightly above $z$". The symbol $\mathbb{K}_+$ denotes the space of all positively oriented OFNs. If $a > d$, then the OFN $\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L)$ has the negative orientation $\overrightarrow{a, d}$. For any $z \in [c, b]$, the negatively oriented OFN $\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L)$ is a formal model of the linguistic variable "about or slightly below $z$". The symbol $\mathbb{K}_-$ denotes the space of all negatively oriented OFNs. Understanding the phrases "about or slightly above $z$" and "about or slightly below $z$" depends on the applied pragmatics of the natural language. If $a = d$, the OFN $\stackrel{\leftrightarrow}{\mathcal{L}}(a, a, a, a, S_L, E_L) = a$ describes the unoriented number $a \in \mathbb{R}$. Summing up, we see that

$$
\mathbb{K} = \mathbb{K}_+ \cup \mathbb{R} \cup \mathbb{K}_-. \tag{35}
$$

The minus operator "–" on $\mathbb{R}$ is extended by Kosiński [4] to the minus operator on $\mathbb{K}$ by means of the identity

$$\ominus\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L) = \stackrel{\leftrightarrow}{\mathcal{L}}\big({-a}, {-b}, {-c}, {-d}, S_L^{(-)}, E_L^{(-)}\big), \tag{36}$$

where

$$S\_L^{(-)}(\mathbf{x}) = S\_L(-\mathbf{x})\tag{37}$$

$$E\_L^{(-)}(\mathbf{x}) = E\_L(-\mathbf{x})\tag{38}$$

Kosiński [1] defines the addition operator $\boxplus_K$ on $\mathbb{K}$ as the extension of the operator $\oplus$ from $\mathbb{F}$ to $\mathbb{K}$. This extension is determined by extending the domain of the identities (18)–(22) from $\mathbb{F}$ to $\mathbb{K}$. In this way, Kosiński defines the addition of OFNs as an extension of the results obtained by Goetschel and Voxman [5] for the addition of FNs. Moreover, Kosiński [4] has shown that there exist OFNs whose sum $\boxplus_K$ does not exist. For this reason, Kosiński's operator $\boxplus_K$ is replaced by the addition operator $\boxplus$ defined on $\mathbb{K}$ by the identity [6]

$$
\stackrel{\leftrightarrow}{\mathcal{L}}(a_K, b_K, c_K, d_K, S_K, E_K) \boxplus \stackrel{\leftrightarrow}{\mathcal{L}}(a_M, b_M, c_M, d_M, S_M, E_M) = \stackrel{\leftrightarrow}{\mathcal{J}} = \stackrel{\leftrightarrow}{\mathcal{L}}(a_J, b_J, c_J, d_J, S_J, E_J), \tag{39}
$$

where we have

$$\check{a}_J = a_K + a_M, \tag{40}$$

$$b_J = b_K + b_M, \tag{41}$$

$$c_J = c_K + c_M, \tag{42}$$

$$\check{d}_J = d_K + d_M, \tag{43}$$

$$a_J = \begin{cases} \min\{\check{a}_J, b_J\}, & (b_J < c_J) \vee (b_J = c_J \wedge \check{a}_J \le \check{d}_J), \\ \max\{\check{a}_J, b_J\}, & (b_J > c_J) \vee (b_J = c_J \wedge \check{a}_J > \check{d}_J), \end{cases} \tag{44}$$

$$d_J = \begin{cases} \max\{\check{d}_J, c_J\}, & (b_J < c_J) \vee (b_J = c_J \wedge \check{a}_J \le \check{d}_J), \\ \min\{\check{d}_J, c_J\}, & (b_J > c_J) \vee (b_J = c_J \wedge \check{a}_J > \check{d}_J), \end{cases} \tag{45}$$

$$\forall_{\alpha \in [0,1]}\; s_J(\alpha) = \begin{cases} S_K^{\star}(\alpha) + S_M^{\star}(\alpha), & a_J \neq b_J, \\ b_J, & a_J = b_J, \end{cases} \tag{46}$$

$$\forall_{\alpha \in [0,1]}\; e_J(\alpha) = \begin{cases} E_K^{\star}(\alpha) + E_M^{\star}(\alpha), & c_J \neq d_J, \\ c_J, & c_J = d_J. \end{cases} \tag{47}$$

$$\forall_{x \in [a_J, b_J]}\; S_J(x) = s_J^{\sphericalangle}(x), \tag{48}$$

$$\forall_{x \in [c_J, d_J]}\; E_J(x) = e_J^{\sphericalangle}(x). \tag{49}$$

In [6], the definition of the addition operator $\boxplus$ is justified in detail. Then, the difference between OFNs is given as follows

$$\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_K, E_K) \boxminus \stackrel{\leftrightarrow}{\mathcal{L}}(e, f, g, h, S_M, E_M) = \stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_K, E_K) \boxplus \left(\ominus\stackrel{\leftrightarrow}{\mathcal{L}}(e, f, g, h, S_M, E_M)\right). \tag{50}$$

In [1,6], it is shown that for any $\stackrel{\leftrightarrow}{\mathcal{L}} \in \mathbb{K}$ we have

$$
\stackrel{\leftrightarrow}{\mathcal{L}} \boxplus_K \big(\ominus\stackrel{\leftrightarrow}{\mathcal{L}}\big) = 0 = \stackrel{\leftrightarrow}{\mathcal{L}} \boxplus \big(\ominus\stackrel{\leftrightarrow}{\mathcal{L}}\big). \tag{51}
$$

We see that subtraction is the inverse operator for both addition operators $\boxplus_K$ and $\boxplus$. We can say that OFNs meet the intuitive postulate put forward by Kosiński.

Due to the high complexity of arithmetic operations on OFNs, in many practical applications researchers limit the use of OFNs to the kind distinguished below.

**Definition 8 [6].** *For any monotonic sequence* $(a, b, c, d) \subset \mathbb{R}$*, the trapezoidal OFN (TrOFN)* $\stackrel{\leftrightarrow}{Tr}(a, b, c, d) = \stackrel{\leftrightarrow}{\mathcal{T}}$ *is the pair of the orientation* $\overrightarrow{a, d} = (a, d)$ *and the fuzzy set* $\mathcal{T} \in \mathcal{F}(\mathbb{R})$ *determined explicitly by its membership function* $\mu_T \in [0,1]^{\mathbb{R}}$ *as follows*

$$\mu\_T(\mathbf{x}) = \mu\_{Tr}(\mathbf{x}|a, b, c, d) = \begin{cases} \begin{array}{ll} 0, & \mathbf{x} \notin [a, d] \equiv [d, a], \\ \frac{\mathbf{x} - \mathbf{a}}{b - a}, & \mathbf{x} \in [a, b] \equiv [b, a], \\ 1, & \mathbf{x} \in [b, c] \equiv [c, b], \\ \frac{\mathbf{x} - \mathbf{d}}{c - \mathbf{d}}, & \mathbf{x} \in [c, d] \equiv [d, c]. \end{array} \end{cases} \tag{52}$$

The symbol K*Tr* denotes the space of all TrOFNs. Identity (36) implies that the minus operator on K*Tr* is given by the identity

$$
\stackrel{\leftrightarrow}{Tr}(-a, -b, -c, -d) = \ominus\stackrel{\leftrightarrow}{Tr}(a, b, c, d). \tag{53}
$$

In line with (39), the sum of TrOFNs is determined as follows

$$\stackrel{\leftrightarrow}{Tr}(a, b, c, d) \boxplus \stackrel{\leftrightarrow}{Tr}(p-a, q-b, r-c, s-d) = \begin{cases} \stackrel{\leftrightarrow}{Tr}(\min\{p, q\}, q, r, \max\{r, s\}), & (q < r) \vee (q = r \wedge p \le s), \\ \stackrel{\leftrightarrow}{Tr}(\max\{p, q\}, q, r, \min\{r, s\}), & (q > r) \vee (q = r \wedge p > s). \end{cases} \tag{54}$$

Then the difference between TrOFNs is the TrOFN given as follows

$$\stackrel{\leftrightarrow}{Tr}(a, b, c, d) \boxminus \stackrel{\leftrightarrow}{Tr}(a-p, b-q, c-r, d-s) = \begin{cases} \stackrel{\leftrightarrow}{Tr}(\min\{p, q\}, q, r, \max\{r, s\}), & (q < r) \vee (q = r \wedge p \le s), \\ \stackrel{\leftrightarrow}{Tr}(\max\{p, q\}, q, r, \min\{r, s\}), & (q > r) \vee (q = r \wedge p > s). \end{cases} \tag{55}$$
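The case rule in identity (54) can be sketched in Python. This is a hypothetical implementation under the reconstruction of (54) given above: `TrOFN` and `add` are illustrative names, and p, q, r, s denote the parameter-wise sums of the two operands.

```python
# Hypothetical sketch of the case-based TrOFN sum, assuming the two cases of
# identity (54). Class and function names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrOFN:
    a: float; b: float; c: float; d: float

def add(K: TrOFN, M: TrOFN) -> TrOFN:
    """Sum of TrOFNs: the parameter-wise sum (p, q, r, s) is repaired so the
    result remains a well-formed (consistently oriented) trapezoid."""
    p, q, r, s = K.a + M.a, K.b + M.b, K.c + M.c, K.d + M.d
    if q < r or (q == r and p <= s):
        return TrOFN(min(p, q), q, r, max(r, s))   # non-negative orientation
    return TrOFN(max(p, q), q, r, min(r, s))       # negative orientation

X = TrOFN(1, 2, 3, 4)            # positively oriented
Y = TrOFN(5, 3, 2, 1)            # negatively oriented
print(add(X, Y))                 # TrOFN(a=6, b=5, c=5, d=5)
```

When both operands share the same orientation, the repair never triggers and the sum is simply parameter-wise, as in identity (31) for TrFNs.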

#### *2.3. Ordered Fuzzy Numbers vs. Fuzzy Numbers*

For the case $a \le d$, the membership function of the OFN $\stackrel{\leftrightarrow}{\mathcal{L}}(a, b, c, d, S_L, E_L)$ is equal to the membership function of the FN $\mathcal{L}(a, b, c, d, S_L, E_L)$. This fact implies the existence of the isomorphism $\Psi : (\mathbb{K}_+ \cup \mathbb{R}) \to \mathbb{F}$ given by the identity

$$\mathcal{L}(a, b, c, d, S_L, E_L) = \Psi(\overleftrightarrow{\mathcal{L}}(a, b, c, d, S_L, E_L)). \tag{56}$$

This isomorphism may be extended to the whole space $\mathbb{K}$ by the disorientation map $\overline{\Psi} : \mathbb{K} \rightarrow \mathbb{F}$ given as follows

$$\overline{\Psi}(\overleftrightarrow{\mathcal{X}}) = \begin{cases} \Psi(\overleftrightarrow{\mathcal{X}}), & \overleftrightarrow{\mathcal{X}} \in \mathbb{K}^+ \cup \mathbb{R}, \\ \ominus\Psi(\boxminus\overleftrightarrow{\mathcal{X}}), & \overleftrightarrow{\mathcal{X}} \in \mathbb{K}^-. \end{cases} \tag{57}$$

Let us note that the disorientation map $\overline{\Psi} : \mathbb{K} \rightarrow \mathbb{F}$ may be equivalently defined by the identity

$$\overline{\Psi}\left(\overleftrightarrow{\mathcal{L}}(a, b, c, d, S_L, E_L)\right) = \begin{cases} \mathcal{L}(a, b, c, d, S_L, E_L), & \overleftrightarrow{\mathcal{L}}(a, b, c, d, S_L, E_L) \in \mathbb{K}^+ \cup \mathbb{R}, \\ \mathcal{L}(d, c, b, a, E_L, S_L), & \overleftrightarrow{\mathcal{L}}(a, b, c, d, S_L, E_L) \in \mathbb{K}^-. \end{cases} \tag{58}$$
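For the trapezoidal case, the disorientation map (58) amounts to reversing a negatively oriented quadruple. A minimal Python sketch (our own illustration; for general OFNs the reference functions $S$ and $E$ are swapped at the same time, which a quadruple cannot capture):

```python
def disorient(q):
    # Disorientation map (58), trapezoidal case: a quadruple is
    # returned unchanged when it is nondecreasing (positively
    # oriented or crisp), and reversed when negatively oriented.
    a, b, c, d = q
    return (a, b, c, d) if a <= d else (d, c, b, a)
```

For the supports of the OFNs used in Example 1 below, `disorient((12, 14, 18, 20))` stays `(12, 14, 18, 20)`, while `disorient((13, 11, 6, 5))` becomes `(5, 6, 11, 13)`, matching (61) and (63).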

**Example 1.** *Let us consider the OFN* $\overleftrightarrow{X} = \overleftrightarrow{\mathcal{L}}(12, 14, 18, 20, S_X, E_X)$*, where*

$$\mu_X(x) = \begin{cases} 0, & x \notin [12, 20], \\ S_X(x), & x \in [12, 14], \\ 1, & x \in [14, 18], \\ E_X(x), & x \in [18, 20], \end{cases} = \begin{cases} 0, & x \notin [12, 20], \\ \frac{2x - 24}{x - 10}, & x \in [12, 14], \\ 1, & x \in [14, 18], \\ \frac{7x - 140}{2x - 50}, & x \in [18, 20]. \end{cases} \tag{59}$$

*and the OFN* $\overleftrightarrow{Y} = \overleftrightarrow{\mathcal{L}}(13, 11, 6, 5, S_Y, E_Y)$*, where*

$$\mu_Y(x) = \begin{cases} 0, & x \notin [13, 5], \\ S_Y(x), & x \in [13, 11], \\ 1, & x \in [11, 6], \\ E_Y(x), & x \in [6, 5], \end{cases} = \begin{cases} 0, & x \notin [13, 5], \\ \frac{2x - 26}{x - 15}, & x \in [13, 11], \\ 1, & x \in [11, 6], \\ \frac{6x - 30}{x}, & x \in [6, 5]. \end{cases} \tag{60}$$

*Since* $\overleftrightarrow{X} \in \mathbb{K}^+$*, using (57) we get*

$$\overline{\Psi}(\overleftrightarrow{X}) = \overline{\Psi}(\overleftrightarrow{\mathcal{L}}(12, 14, 18, 20, S_X, E_X)) = \mathcal{L}(12, 14, 18, 20, S_X, E_X) = \mathcal{L}(12, 14, 18, 20, L_U, R_U) = \mathcal{U}, \tag{61}$$

*where the FN* $\mathcal{U}$ *is explicitly determined by the following membership function*

$$\mu_U(x) = \begin{cases} 0, & x \notin [12, 20], \\ L_U(x), & x \in [12, 14], \\ 1, & x \in [14, 18], \\ R_U(x), & x \in [18, 20], \end{cases} = \begin{cases} 0, & x \notin [12, 20], \\ \frac{2x - 24}{x - 10}, & x \in [12, 14], \\ 1, & x \in [14, 18], \\ \frac{7x - 140}{2x - 50}, & x \in [18, 20]. \end{cases} \tag{62}$$

*Because* $\overleftrightarrow{Y} \in \mathbb{K}^-$*, using (57) we get*

$$\overline{\Psi}(\overleftrightarrow{Y}) = \overline{\Psi}(\overleftrightarrow{\mathcal{L}}(13, 11, 6, 5, S_Y, E_Y)) = \mathcal{L}(5, 6, 11, 13, E_Y, S_Y) = \mathcal{L}(5, 6, 11, 13, L_V, R_V) = \mathcal{V}, \tag{63}$$

*where the FN* $\mathcal{V}$ *is described by the membership function*

$$\mu_V(x) = \begin{cases} 0, & x \notin [5, 13], \\ L_V(x), & x \in [5, 6], \\ 1, & x \in [6, 11], \\ R_V(x), & x \in [11, 13], \end{cases} = \begin{cases} 0, & x \notin [5, 13], \\ \frac{6x - 30}{x}, & x \in [5, 6], \\ 1, & x \in [6, 11], \\ \frac{2x - 26}{x - 15}, & x \in [11, 13]. \end{cases} \tag{64}$$

The above example shows that the disorientation map is a simple transformation of the space $\mathbb{K}$ onto the space $\mathbb{F}$. This simplicity is only apparent. It follows from the fact that the arithmetic operations on $\mathbb{F}$ are consistent with Zadeh's Extension Principle, whereas the arithmetic operations on $\mathbb{K}$ are not. The main difficulties arise from the difference between the definition (11)–(13) of the minus operator $\ominus$ on $\mathbb{F}$ and the definition (36)–(38) of the minus operator $\boxminus$ on $\mathbb{K}$.

Let us compare the semigroups $\langle \mathbb{F}, \oplus \rangle$ and $\langle \mathbb{K}, \boxplus \rangle$. The identities (18)–(22) and (39)–(46) imply that the number $\llbracket 0 \rrbracket$ is the identity element in both of these semigroups.

In [6], it is shown that the addition $\boxplus$ is not associative. This implies that the semigroup $\langle \mathbb{K}, \boxplus \rangle$ is not a group. Moreover, the identity (51) implies that the subtraction $\boxminus$ is the inverse operator to the addition $\boxplus$.

The identities (18)–(22) imply that the addition $\oplus$ is associative. On the other hand, for any TrFN $\mathcal{T} = Tr(a, b, c, d) \in (\mathbb{F}_{Tr} \setminus \mathbb{R}) \subset \mathbb{F}$ we have

$$\mathcal{T} \ominus \mathcal{T} = Tr(a - d,\, b - c,\, c - b,\, d - a) \ne \llbracket 0 \rrbracket. \tag{65}$$

This shows that the subtraction $\ominus$ is not the inverse operator to the addition $\oplus$, which proves that the semigroup $\langle \mathbb{F}, \oplus \rangle$ is not a group.
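The failure described in (65) can be checked directly. A small Python illustration (ours): TrFN subtraction acts on $\alpha$-cuts as interval subtraction, so $Tr(a,b,c,d) \ominus Tr(e,f,g,h) = Tr(a-h, b-g, c-f, d-e)$, and a nondegenerate TrFN subtracted from itself is not the crisp zero.

```python
def trfn_sub(x, y):
    # TrFN subtraction via interval arithmetic on alpha-cuts:
    # Tr(a, b, c, d) - Tr(e, f, g, h) = Tr(a - h, b - g, c - f, d - e).
    (a, b, c, d), (e, f, g, h) = x, y
    return (a - h, b - g, c - f, d - e)
```

For example, `trfn_sub((1, 2, 4, 5), (1, 2, 4, 5))` returns `(-4, -2, 2, 4)`, a fuzzy number spread around zero rather than $\llbracket 0 \rrbracket$ itself, exactly as (65) states.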

All of the above simple conclusions imply that the semigroups $\langle \mathbb{F}, \oplus \rangle$ and $\langle \mathbb{K}, \boxplus \rangle$ are not isomorphic.


#### **3. Relation "Greater than or Equal to" for Fuzzy Numbers**

We consider the pair $(\mathcal{K}, \mathcal{L}) \in \mathbb{F}^2$ of FNs determined by their membership functions $\mu_K, \mu_L \in [0, 1]^{\mathbb{R}}$. On the space $\mathbb{F}$, we can consider the relation $\mathcal{K}.GE.\mathcal{L}$, which reads:

$$\text{"FN } \mathcal{K} \text{ is greater than or equal to FN } \mathcal{L}\text{"} \tag{66}$$

Orlovsky [61] shows that, in agreement with Zadeh's Extension Principle, this relation is a fuzzy preorder $[GE] \in \mathcal{F}(\mathbb{F}^2)$ described by a membership function $\nu_{[GE]} \in [0, 1]^{\mathbb{F}^2}$ determined as follows

$$\nu_{[GE]}(\mathcal{K}, \mathcal{L}) = \sup\{\min\{\mu_K(x), \mu_L(y)\} : x \ge y\}. \tag{67}$$

From the multivalued logic point of view, the value $\nu_{[GE]}(\mathcal{K}, \mathcal{L})$ is considered as a truth-value of the sentence (66). It means that we have

$$\nu_{[GE]}(\mathcal{K}, \mathcal{L}) = tv(\text{"}\mathcal{K}.GE.\mathcal{L}\text{"}). \tag{68}$$

We prove that the fuzzy preorder $[GE] \in \mathcal{F}(\mathbb{F}^2)$ fulfils the following well-known properties.

**Theorem 2.** *For any pair* $(\mathcal{K}, \mathcal{L}) \in \mathbb{F}^2$*, we have:*

$$\nu_{[GE]}(\mathcal{K}, \mathcal{L}) = \nu_{[GE]}(\ominus\mathcal{L}, \ominus\mathcal{K}), \tag{69}$$

$$\nu_{[GE]}(\mathcal{K}, \mathcal{L}) = \nu_{[GE]}(\mathcal{K} \ominus \mathcal{L}, \llbracket 0 \rrbracket). \tag{70}$$

**Proof.** Take into account the quadruple $(\mathcal{K}, \mathcal{L}, \mathcal{M}, \mathcal{N}) \in \mathbb{F}^4$ of FNs represented respectively by their membership functions $\mu_K, \mu_L, \mu_M, \mu_N \in [0, 1]^{\mathbb{R}}$.

Let us assume that $\mathcal{M} = \ominus\mathcal{K}$ and $\mathcal{N} = \ominus\mathcal{L}$. Using the identities (11), (12), and (13) we obtain:

$$\mu_M(y) = \mu_K(-y) \text{ and } \mu_N(x) = \mu_L(-x).$$

Then the identity (67) implies

$$\begin{aligned} \nu_{[GE]}(\ominus\mathcal{L}, \ominus\mathcal{K}) &= \nu_{[GE]}(\mathcal{N}, \mathcal{M}) = \sup\{\min\{\mu_N(x), \mu_M(y)\} : x \ge y\} = \\ &= \sup\{\min\{\mu_L(-x), \mu_K(-y)\} : -x \le -y\} = \sup\{\min\{\mu_L(u), \mu_K(v)\} : u \le v\} = \nu_{[GE]}(\mathcal{K}, \mathcal{L}). \end{aligned}$$

Let us assume now that $\mathcal{M} = \mathcal{K} \ominus \mathcal{L}$. Using the identity (7) we obtain:

$$\mu_M(z) = \sup\{\min\{\mu_K(x), \mu_L(y)\} : z = x - y,\ (x, y) \in \mathbb{R}^2\}.$$

Then the identity (67) implies

$$\begin{aligned} \nu_{[GE]}(\mathcal{K} \ominus \mathcal{L}, \llbracket 0 \rrbracket) &= \nu_{[GE]}(\mathcal{M}, \llbracket 0 \rrbracket) = \sup\{\mu_M(z) : z \ge 0\} = \\ &= \sup\{\sup\{\min\{\mu_K(x), \mu_L(y)\} : z = x - y,\ (x, y) \in \mathbb{R}^2\} : z \ge 0\} = \\ &= \sup\{\min\{\mu_K(x), \mu_L(y)\} : x - y \ge 0\} = \nu_{[GE]}(\mathcal{K}, \mathcal{L}). \text{ QED} \end{aligned}$$

**Theorem 3.** *For any FNs* $\mathcal{L}(a, b, c, d, L_K, R_K), \mathcal{L}(e, f, g, h, L_M, R_M) \in \mathbb{F}$ *we have*

$$\nu_{[GE]}(\mathcal{L}(a, b, c, d, L_K, R_K),\ \mathcal{L}(e, f, g, h, L_M, R_M)) = \begin{cases} 0, & d - e < 0, \\ R_W(0), & c - f < 0 \le d - e, \\ 1, & 0 \le c - f, \end{cases} \tag{71}$$

*where the function* $R_W : [c - f,\ d - e] \rightarrow [0, 1]$ *is given by the identity (28).*

**Proof.** For $e > d$, using (67) and (8) we get

$$\begin{aligned} \nu_{[GE]}(\mathcal{L}(a, b, c, d, L_K, R_K),\ \mathcal{L}(e, f, g, h, L_M, R_M)) &= \sup\{\min\{\mu_K(x), \mu_M(y)\} : x \ge y\} = \\ &= \max\{\sup\{\min\{0, \mu_M(y)\} : x \ge y,\ x > d\},\ \sup\{\min\{\mu_K(x), 0\} : y \le x \le d < e\}\} = 0. \end{aligned} \tag{72}$$

For $c \ge f$ we have

$$1 \ge \nu_{[GE]}(\mathcal{L}(a, b, c, d, L_K, R_K),\ \mathcal{L}(e, f, g, h, L_M, R_M)) = \sup\{\min\{\mu_K(x), \mu_M(y)\} : x \ge y\} \ge \min\{\mu_K(c), \mu_M(f)\} = \min\{1, 1\} = 1. \tag{73}$$

For $d \ge e$ and $f > c$ we have $c - f < 0 \le d - e$. Then from (24), (67) and (70) we obtain

$$\nu_{[GE]}(\mathcal{L}(a, b, c, d, L_K, R_K),\ \mathcal{L}(e, f, g, h, L_M, R_M)) = \nu_{[GE]}(\mathcal{L}(a - h,\, b - g,\, c - f,\, d - e,\, L_W, R_W),\ \llbracket 0 \rrbracket) = R_W(0). \text{ QED} \tag{74}$$

**Example 2.** *Let us take into account the FNs* $\mathcal{U} = \mathcal{L}(12, 14, 18, 20, L_U, R_U)$ *and* $\mathcal{V} = \mathcal{L}(5, 6, 11, 13, L_V, R_V)$ *respectively determined by the identities (62) and (64). We compare these FNs by means of the fuzzy preorder* $[GE] \in \mathcal{F}(\mathbb{F}^2)$*. We have here*

$$-3 = 11 - 14 < 0 \le 13 - 12 = 1.$$

*Therefore, we should establish the variability of the function* $R_W \in [0, 1]^{[-3, 1]}$ *determined by the identity (28). First, by using the identities (14) and (15), we assign the functions*

$$L_U^{\star}(\alpha) = \min\{x \in [12, 14] : L_U(x) \ge \alpha\} = L_U^{-1}(\alpha) = \frac{10\alpha - 24}{\alpha - 2}, \tag{75}$$

$$R_V^{\star}(\alpha) = \max\{x \in [11, 13] : R_V(x) \ge \alpha\} = R_V^{-1}(\alpha) = \frac{15\alpha - 26}{\alpha - 2}. \tag{76}$$

*In the next step, applying (25) and (27), we obtain*

$$r_W(\alpha) = R_V^{\star}(\alpha) - L_U^{\star}(\alpha) = \frac{15\alpha - 26}{\alpha - 2} - \frac{10\alpha - 24}{\alpha - 2} = \frac{5\alpha - 2}{\alpha - 2}, \tag{77}$$

$$R_W(x) = r_W^{\circledast}(x) = \min\{\alpha \in [0, 1] : r_W(\alpha) = x\} = r_W^{-1}(x) = \frac{2 \cdot (x - 1)}{x - 5}. \tag{78}$$

*Finally, using identity (71), we get*

$$\nu_{[GE]}(\mathcal{V}, \mathcal{U}) = R_W(0) = \frac{2}{5}. \tag{79}$$

The above example together with Theorem 3 shows that the fuzzy preorder $[GE] \in \mathcal{F}(\mathbb{F}^2)$ depends only on the interaction between the right reference function of the first compared FN and the left reference function of the second compared FN.
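The value (79) can also be confirmed numerically straight from the definition (67), without the $\alpha$-cut machinery. The sketch below (our own illustration) evaluates the membership functions (62) and (64) on a grid and maximizes $\min\{\mu_V(x), \mu_U(y)\}$ over $x \ge y$; the supremum is attained at $x = y = 12.5$, where the two arms cross at level $2/5$.

```python
def mu_U(x):
    # Membership function (62) of the FN U = L(12, 14, 18, 20, L_U, R_U).
    if x < 12 or x > 20:
        return 0.0
    if x <= 14:
        return (2 * x - 24) / (x - 10)
    if x <= 18:
        return 1.0
    return (7 * x - 140) / (2 * x - 50)

def mu_V(x):
    # Membership function (64) of the FN V = L(5, 6, 11, 13, L_V, R_V).
    if x < 5 or x > 13:
        return 0.0
    if x <= 6:
        return (6 * x - 30) / x
    if x <= 11:
        return 1.0
    return (2 * x - 26) / (x - 15)

def ge_grid(mu_K, mu_L, lo, hi, n=601):
    # Discretized version of (67): sup{min(mu_K(x), mu_L(y)) : x >= y}.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(min(mu_K(x), mu_L(y)) for x in xs for y in xs if x >= y)
```

Here `ge_grid(mu_V, mu_U, 5, 20)` returns `0.4`, i.e., $\nu_{[GE]}(\mathcal{V}, \mathcal{U}) = 2/5$ as in (79).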

Moreover, Theorem 3 immediately implies that for any TrFNs we have:

**Theorem 4.** *For any TrFNs* $Tr(a, b, c, d), Tr(e, f, g, h) \in \mathbb{F}_{Tr}$ *we have*

$$\nu_{[GE]}(Tr(a, b, c, d),\ Tr(e, f, g, h)) = \begin{cases} 0, & d - e < 0, \\ \frac{d - e}{d + f - c - e}, & c - f < 0 \le d - e, \\ 1, & 0 \le c - f. \end{cases} \tag{80}$$
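For the trapezoidal case, (80) is a three-branch formula that is trivial to implement. A Python sketch (ours):

```python
def ge_trfn(K, M):
    # Truth value nu_[GE](Tr(a,b,c,d), Tr(e,f,g,h)) per (80).
    a, b, c, d = K
    e, f, g, h = M
    if d - e < 0:      # support of K lies entirely below that of M
        return 0.0
    if c - f >= 0:     # the cores already satisfy the comparison
        return 1.0
    return (d - e) / (d + f - c - e)   # middle branch of (80)
```

For the disoriented return rates used later in (111) and (112), `ge_trfn((0.010, 0.010, 0.035, 0.040), (0.035, 0.050, 0.055, 0.065))` gives $1/4$ and `ge_trfn((0.020, 0.025, 0.030, 0.045), (0.035, 0.050, 0.055, 0.065))` gives $1/3$.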

#### **4. Relation "Greater than or Equal to" for Ordered Fuzzy Numbers**

Let us consider the pair $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in \mathbb{K}^2$ represented by the pair $(\mu_K, \mu_L) \in ([0, 1]^{\mathbb{R}})^2$ of their membership functions. On the space $\mathbb{K}$, we introduce the relation $\overleftrightarrow{\mathcal{K}}.\widetilde{GE}.\overleftrightarrow{\mathcal{L}}$, which reads:

$$\text{"OFN } \overleftrightarrow{\mathcal{K}} \text{ is greater than or equal to OFN } \overleftrightarrow{\mathcal{L}}\text{"} \tag{81}$$

This relation is a fuzzy preorder $\widetilde{GE} \in \mathcal{F}(\mathbb{K}^2)$ defined by its membership function $\nu_{GE} \in [0, 1]^{\mathbb{K}^2}$. From the point of view of multivalued logic, the value $\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}})$ is considered as a truth-value of the sentence (81). It means that we have

$$\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = tv(\text{"}\overleftrightarrow{\mathcal{K}}.\widetilde{GE}.\overleftrightarrow{\mathcal{L}}\text{"}). \tag{82}$$

The fuzzy preorder $\widetilde{GE} \in \mathcal{F}(\mathbb{K}^2)$ cannot be determined with use of Zadeh's Extension Principle, because this principle is not valid for OFNs. Therefore, we additionally assume that the membership function $\nu_{GE} \in [0, 1]^{\mathbb{K}^2}$ meets the following well-known conditions:

• for any pair $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^+ \cup \mathbb{R})^2$, the extension principle

$$\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = \nu_{[GE]}(\Psi(\overleftrightarrow{\mathcal{K}}), \Psi(\overleftrightarrow{\mathcal{L}})), \tag{83}$$

• for any pair $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^- \cup \mathbb{R})^2$, the sign exchange law

$$\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = \nu_{GE}(\boxminus\overleftrightarrow{\mathcal{L}}, \boxminus\overleftrightarrow{\mathcal{K}}), \tag{84}$$

• for any pair $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^+ \cup \mathbb{R}) \times (\mathbb{K}^- \cup \mathbb{R})$, the law of subtraction of sides

$$\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = \nu_{GE}(\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}}, \llbracket 0 \rrbracket). \tag{85}$$

Among other things, we prove here:

**Lemma 1.** *Any pair* $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^+ \cup \mathbb{R}) \times (\mathbb{K}^- \cup \mathbb{R})$ *satisfies the condition*

$$\Psi(\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}}) = \Psi(\overleftrightarrow{\mathcal{K}}) \ominus \overline{\Psi}(\overleftrightarrow{\mathcal{L}}). \tag{86}$$

**Proof.** Let $\overleftrightarrow{\mathcal{K}} = \overleftrightarrow{\mathcal{L}}(a, b, c, d, S_K, E_K) \in \mathbb{K}^+ \cup \mathbb{R}$ and $\overleftrightarrow{\mathcal{L}} = \overleftrightarrow{\mathcal{L}}(e, f, g, h, S_L, E_L) \in \mathbb{K}^- \cup \mathbb{R}$. Then we have $\boxminus\overleftrightarrow{\mathcal{L}},\ \overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}} \in \mathbb{K}^+$, because the sequences $(-e, -f, -g, -h)$ and $(a - e,\, b - f,\, c - g,\, d - h)$ are nondecreasing. Then, from (39), (50), and (58) we get

$$\Psi(\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}}) = \Psi(\overleftrightarrow{\mathcal{L}}(a - e,\, b - f,\, c - g,\, d - h,\, S_M, E_M)) = \mathcal{L}(a - e,\, b - f,\, c - g,\, d - h,\, S_M, E_M), \tag{87}$$

where

$$\forall_{\alpha \in [0, 1]}\ s_M(\alpha) = \begin{cases} S_K^{\star}(\alpha) - S_L^{\star}(\alpha), & a - e \ne b - f, \\ b - f, & a - e = b - f, \end{cases} \tag{88}$$

$$\forall_{\alpha \in [0, 1]}\ e_M(\alpha) = \begin{cases} E_K^{\star}(\alpha) - E_L^{\star}(\alpha), & c - g \ne d - h, \\ c - g, & c - g = d - h, \end{cases} \tag{89}$$

$$\forall_{x \in [a - e,\, b - f]}\ S_M(x) = s_M^{\circledast}(x), \tag{90}$$

$$\forall_{x \in [c - g,\, d - h]}\ E_M(x) = e_M^{\circledast}(x). \tag{91}$$

On the other hand, successively from (36), (57), (11), and (22), we obtain

$$\begin{aligned} \Psi(\overleftrightarrow{\mathcal{K}}) \ominus \overline{\Psi}(\overleftrightarrow{\mathcal{L}}) &= \Psi(\overleftrightarrow{\mathcal{L}}(a, b, c, d, S_K, E_K)) \ominus \left(\ominus\Psi(\boxminus\overleftrightarrow{\mathcal{L}}(e, f, g, h, S_L, E_L))\right) = \\ &= \mathcal{L}(a, b, c, d, S_K, E_K) \oplus \Psi(\overleftrightarrow{\mathcal{L}}(-e, -f, -g, -h, S_L^{(-)}, E_L^{(-)})) = \\ &= \mathcal{L}(a, b, c, d, S_K, E_K) \oplus \mathcal{L}(-e, -f, -g, -h, S_L^{(-)}, E_L^{(-)}) = \\ &= \mathcal{L}(a - e,\, b - f,\, c - g,\, d - h,\, S_M, E_M). \text{ QED} \end{aligned} \tag{92}$$

The conjunction of the assumptions (83)–(85) is a sufficient condition for the formulation of the following theorem:

**Theorem 5.** *For any pair* $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in \mathbb{K}^2$ *we have*

$$\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{\mathcal{K}}), \overline{\Psi}(\overleftrightarrow{\mathcal{L}})). \tag{93}$$

**Proof.** For any pair $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^+ \cup \mathbb{R})^2$ the identity (93) is obvious.

Let us assume that $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^- \cup \mathbb{R})^2$. Then $(\boxminus\overleftrightarrow{\mathcal{K}}, \boxminus\overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^+ \cup \mathbb{R})^2$, and successively from (84), (83), (69) and (57), we get

$$\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = \nu_{GE}(\boxminus\overleftrightarrow{\mathcal{L}}, \boxminus\overleftrightarrow{\mathcal{K}}) = \nu_{[GE]}(\Psi(\boxminus\overleftrightarrow{\mathcal{L}}), \Psi(\boxminus\overleftrightarrow{\mathcal{K}})) = \nu_{[GE]}(\ominus\Psi(\boxminus\overleftrightarrow{\mathcal{K}}), \ominus\Psi(\boxminus\overleftrightarrow{\mathcal{L}})) = \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{\mathcal{K}}), \overline{\Psi}(\overleftrightarrow{\mathcal{L}})). \tag{94}$$

Let us assume now that $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^+ \cup \mathbb{R}) \times (\mathbb{K}^- \cup \mathbb{R})$. Then $\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}} \in \mathbb{K}^+$, and successively from (85), (83), (86), (70) and (57), we get

$$\begin{aligned} \nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) &= \nu_{GE}(\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}}, \llbracket 0 \rrbracket) = \nu_{[GE]}(\Psi(\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}}), \llbracket 0 \rrbracket) = \nu_{[GE]}(\Psi(\overleftrightarrow{\mathcal{K}}) \ominus \overline{\Psi}(\overleftrightarrow{\mathcal{L}}), \llbracket 0 \rrbracket) = \\ &= \nu_{[GE]}(\Psi(\overleftrightarrow{\mathcal{K}}), \overline{\Psi}(\overleftrightarrow{\mathcal{L}})) = \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{\mathcal{K}}), \overline{\Psi}(\overleftrightarrow{\mathcal{L}})). \end{aligned} \tag{95}$$

Let us assume now that $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in (\mathbb{K}^- \cup \mathbb{R}) \times (\mathbb{K}^+ \cup \mathbb{R})$. Then $\overleftrightarrow{\mathcal{L}} \boxminus \overleftrightarrow{\mathcal{K}} \in \mathbb{K}^+$, and successively from (85), (84), (83), (86), (69), (70), and (57), we get

$$\begin{aligned} \nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) &= \nu_{GE}(\overleftrightarrow{\mathcal{K}} \boxminus \overleftrightarrow{\mathcal{L}}, \llbracket 0 \rrbracket) = \nu_{GE}(\llbracket 0 \rrbracket, \overleftrightarrow{\mathcal{L}} \boxminus \overleftrightarrow{\mathcal{K}}) = \nu_{[GE]}(\llbracket 0 \rrbracket, \Psi(\overleftrightarrow{\mathcal{L}} \boxminus \overleftrightarrow{\mathcal{K}})) = \\ &= \nu_{[GE]}(\llbracket 0 \rrbracket, \Psi(\overleftrightarrow{\mathcal{L}}) \ominus \overline{\Psi}(\overleftrightarrow{\mathcal{K}})) = \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{\mathcal{K}}) \ominus \Psi(\overleftrightarrow{\mathcal{L}}), \llbracket 0 \rrbracket) = \\ &= \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{\mathcal{K}}), \Psi(\overleftrightarrow{\mathcal{L}})) = \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{\mathcal{K}}), \overline{\Psi}(\overleftrightarrow{\mathcal{L}})). \text{ QED} \end{aligned} \tag{96}$$

**Example 3.** *Let us compare the OFN* $\overleftrightarrow{X} = \overleftrightarrow{\mathcal{L}}(12, 14, 18, 20, S_X, E_X)$ *determined by (59) and the OFN* $\overleftrightarrow{Y} = \overleftrightarrow{\mathcal{L}}(13, 11, 6, 5, S_Y, E_Y)$ *determined by (60). Using (93), (62), (64), and (79), we get*

$$\begin{aligned} \nu_{GE}(\overleftrightarrow{Y}, \overleftrightarrow{X}) &= \nu_{[GE]}(\overline{\Psi}(\overleftrightarrow{Y}), \overline{\Psi}(\overleftrightarrow{X})) = \nu_{[GE]}(\mathcal{L}(5, 6, 11, 13, E_Y, S_Y),\ \mathcal{L}(12, 14, 18, 20, S_X, E_X)) = \\ &= \nu_{[GE]}(\mathcal{L}(5, 6, 11, 13, L_V, R_V),\ \mathcal{L}(12, 14, 18, 20, L_U, R_U)) = \nu_{[GE]}(\mathcal{V}, \mathcal{U}) = \frac{2}{5}. \end{aligned} \tag{97}$$

The simplicity of the calculations in the above example is only apparent. In fact, Example 3 together with Theorem 5 shows that the fuzzy preorder $\widetilde{GE} \in \mathcal{F}(\mathbb{K}^2)$ is fully determined by the disoriented counterparts of the compared OFNs, and so it does not depend on their orientation.


#### **5. Relations "Greater Than" and "Equal to" for Ordered Fuzzy Numbers**

In the previous section, we explicitly defined the preorder "greater than or equal to" $\widetilde{GE}$ on the space $\mathbb{K}$ of all OFNs. This relation may be applied as a starting point for determining other basic relations on $\mathbb{K}$.

Let us consider any pair $(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) \in \mathbb{K}^2$. On the space $\mathbb{K}$ we introduce the relation $\overleftrightarrow{\mathcal{K}}.\widetilde{GT}.\overleftrightarrow{\mathcal{L}}$, which reads:

$$\text{"OFN } \overleftrightarrow{\mathcal{K}} \text{ is greater than OFN } \overleftrightarrow{\mathcal{L}}\text{"} \tag{98}$$

This relation is a fuzzy strict order $\widetilde{GT} \in \mathcal{F}(\mathbb{K}^2)$ defined by its membership function $\nu_{GT} \in [0, 1]^{\mathbb{K}^2}$. From the point of view of multivalued logic, the value $\nu_{GT}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}})$ is considered as a truth-value of the sentence (98), which is equivalent to the sentence:

$$\text{"OFN } \overleftrightarrow{\mathcal{L}} \text{ is not greater than or equal to OFN } \overleftrightarrow{\mathcal{K}}\text{"} \tag{99}$$

It means that we have

$$\nu_{GT}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = tv(\text{"}\overleftrightarrow{\mathcal{K}}.\widetilde{GT}.\overleftrightarrow{\mathcal{L}}\text{"}) = tv(\text{"}\neg(\overleftrightarrow{\mathcal{L}}.\widetilde{GE}.\overleftrightarrow{\mathcal{K}})\text{"}) = 1 - tv(\text{"}\overleftrightarrow{\mathcal{L}}.\widetilde{GE}.\overleftrightarrow{\mathcal{K}}\text{"}). \tag{100}$$

Therefore, the membership function $\nu_{GT} \in [0, 1]^{\mathbb{K}^2}$ is determined by the identity

$$\nu_{GT}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = 1 - \nu_{GE}(\overleftrightarrow{\mathcal{L}}, \overleftrightarrow{\mathcal{K}}). \tag{101}$$

Moreover, on the space $\mathbb{K}$ we introduce the relation $\overleftrightarrow{\mathcal{K}}.\widetilde{EQ}.\overleftrightarrow{\mathcal{L}}$, which reads:

$$\text{"OFN } \overleftrightarrow{\mathcal{K}} \text{ is equal to OFN } \overleftrightarrow{\mathcal{L}}\text{"} \tag{102}$$

The relation $\widetilde{EQ} \in \mathcal{F}(\mathbb{K}^2)$ is a fuzzy equivalence determined by the membership function $\nu_{EQ} \in [0, 1]^{\mathbb{K}^2}$. From the point of view of multivalued logic, the value $\nu_{EQ}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}})$ is considered as a truth-value of the sentence (102), which is equivalent to the sentence:

$$\text{"OFN } \overleftrightarrow{\mathcal{K}} \text{ is greater than or equal to OFN } \overleftrightarrow{\mathcal{L}} \text{ and OFN } \overleftrightarrow{\mathcal{L}} \text{ is greater than or equal to OFN } \overleftrightarrow{\mathcal{K}}\text{"} \tag{103}$$

It means that we have

$$\nu_{EQ}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = tv(\text{"}\overleftrightarrow{\mathcal{K}}.\widetilde{EQ}.\overleftrightarrow{\mathcal{L}}\text{"}) = tv(\text{"}\overleftrightarrow{\mathcal{K}}.\widetilde{GE}.\overleftrightarrow{\mathcal{L}} \wedge \overleftrightarrow{\mathcal{L}}.\widetilde{GE}.\overleftrightarrow{\mathcal{K}}\text{"}) = \min\{tv(\text{"}\overleftrightarrow{\mathcal{K}}.\widetilde{GE}.\overleftrightarrow{\mathcal{L}}\text{"}),\ tv(\text{"}\overleftrightarrow{\mathcal{L}}.\widetilde{GE}.\overleftrightarrow{\mathcal{K}}\text{"})\}. \tag{104}$$

Therefore, the membership function $\nu_{EQ} \in [0, 1]^{\mathbb{K}^2}$ is determined by the identity

$$\nu_{EQ}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}) = \min\{\nu_{GE}(\overleftrightarrow{\mathcal{K}}, \overleftrightarrow{\mathcal{L}}),\ \nu_{GE}(\overleftrightarrow{\mathcal{L}}, \overleftrightarrow{\mathcal{K}})\}. \tag{105}$$

For any finite set $A = \{\overleftrightarrow{\mathcal{K}}_1, \overleftrightarrow{\mathcal{K}}_2, \ldots, \overleftrightarrow{\mathcal{K}}_n\} \subset \mathbb{K}_{Tr}$ we can distinguish the set of maximal elements $\text{Max}\{A\} \in \mathcal{F}(A)$, which is described by the membership function $\mu_{\text{Max}\{A\}} \in [0, 1]^A$ determined in the following way [62]

$$\mu_{\text{Max}\{A\}}(\overleftrightarrow{\mathcal{K}}_i) = \min\{\nu_{GE}(\overleftrightarrow{\mathcal{K}}_i, \overleftrightarrow{\mathcal{K}}_j) : \overleftrightarrow{\mathcal{K}}_j \in A\}. \tag{106}$$

This set may be applied as a solution of optimization tasks using OFNs. Moreover, let us note that the set $\text{Max}\{A\}$ of maximal elements may be used as a fuzzy choice function [78].
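Formulas (101), (105), and (106) reduce everything to $\nu_{GE}$, which for TrOFNs can be computed by composing the disorientation map (58) with the trapezoidal formula (80), as Theorem 5 licenses. A self-contained Python sketch (our own illustration, restricted to TrOFN quadruples):

```python
def nu_ge(K, L):
    # nu_GE for TrOFNs via Theorem 5: disorient per (58), apply (80).
    def disorient(q):
        a, b, c, d = q
        return (a, b, c, d) if a <= d else (d, c, b, a)
    (a, b, c, d), (e, f, g, h) = disorient(K), disorient(L)
    if d - e < 0:
        return 0.0
    if c - f >= 0:
        return 1.0
    return (d - e) / (d + f - c - e)

def nu_gt(K, L):
    # Fuzzy strict order (101).
    return 1.0 - nu_ge(L, K)

def nu_eq(K, L):
    # Fuzzy equivalence (105).
    return min(nu_ge(K, L), nu_ge(L, K))

def max_set(A):
    # Membership grades (106) of the set Max{A} of maximal elements.
    return [min(nu_ge(Ki, Kj) for Kj in A) for Ki in A]
```

For the trapezoidal quadruples `X = (12, 14, 18, 20)` and `Y = (13, 11, 6, 5)` (TrOFN stand-ins built on the supports of Example 1; their piecewise-linear arms make the grade $1/4$ rather than the $2/5$ of (97)), `nu_ge(Y, X)` is `0.25`, `nu_eq(X, Y)` is `0.25`, and `max_set([X, Y])` is `[1.0, 0.25]`.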

In [17], the relation $\widetilde{GE} \in \mathcal{F}(\mathbb{K}_{Tr}^2)$ is applied for ordering negotiation packages [79]. The considered case study is fully described there. Moreover, let us look at a short case study applying the relation $\widetilde{GE} \in \mathcal{F}(\mathbb{K}^2)$ to financial effectivity analysis.

#### **6. Financial Effectivity Determined by Imprecise Return—A Numerical Example**

Let any financial security $\mathcal{Z} \in \mathbb{Z}$ be represented by the pair $(R_Z, \sigma_Z^2)$, where $R_Z \in \mathbb{R}$ is an expected return rate from this security and $\sigma_Z^2 \in \mathbb{R}$ is the variance of its return rate. The symbol $\mathbb{Z}$ denotes the family of all considered securities.

We introduce the relation $\mathcal{P}.NLE.\mathcal{Q}$, which reads

$$\text{"The security } \mathcal{P} \in \mathbb{Z} \text{ is no less effective than the security } \mathcal{Q} \in \mathbb{Z}\text{"}. \tag{107}$$

In financial practice, this relation is defined by the equivalence

$$\mathcal{P}.\text{NLE}.\mathcal{Q} \Leftrightarrow (\mathcal{R}\_P \ge \mathcal{R}\_Q \lor \sigma\_P^2 \le \sigma\_Q^2). \tag{108}$$

In [15], it is justified that the return rate may be evaluated by an OFN. In this case, any financial security $\mathcal{Z} \in \mathbb{Z}$ is represented by the pair $(\overleftrightarrow{R}_Z, \sigma_Z^2)$, where $\overleftrightarrow{R}_Z \in \mathbb{K}$ is an expected return rate evaluated by an OFN. Therefore, the relation $\mathcal{P}.NLE.\mathcal{Q}$ should be replaced by the relation $\mathcal{P}.\widetilde{NLE}.\mathcal{Q}$ defined by the equivalence

$$\mathcal{P}.\widetilde{NLE}.\mathcal{Q} \Leftrightarrow (\overleftrightarrow{R}_P.\widetilde{GE}.\overleftrightarrow{R}_Q \vee \sigma_P^2 \le \sigma_Q^2). \tag{109}$$

The relation $\mathcal{P}.\widetilde{NLE}.\mathcal{Q}$ also reads as the sentence (107). The relation $\widetilde{NLE}$ is a fuzzy one, determined by the membership function $\nu_{NLE} \in [0, 1]^{\mathbb{Z}^2}$. From the point of view of multivalued logic, the value $\nu_{NLE}(\mathcal{P}, \mathcal{Q})$ is considered as a truth-value of the sentence (107). It means that we have

$$\nu_{NLE}(\mathcal{P}, \mathcal{Q}) = tv(\text{"}\overleftrightarrow{R}_P.\widetilde{GE}.\overleftrightarrow{R}_Q \vee \sigma_P^2 \le \sigma_Q^2\text{"}) = \max\left\{tv(\text{"}\overleftrightarrow{R}_P.\widetilde{GE}.\overleftrightarrow{R}_Q\text{"}),\ tv(\text{"}\sigma_P^2 \le \sigma_Q^2\text{"})\right\} = \begin{cases} \nu_{GE}(\overleftrightarrow{R}_P, \overleftrightarrow{R}_Q), & \sigma_P^2 > \sigma_Q^2, \\ 1, & \sigma_P^2 \le \sigma_Q^2. \end{cases} \tag{110}$$

In order to increase the transparency of the considerations, we restrict our further considerations to the case of return rates evaluated by TrOFNs. We consider the securities $\mathcal{P}$, $\mathcal{Q}$ and $\mathcal{R}$ represented respectively by the pairs $(\overleftrightarrow{R}\_P, \sigma\_P^2) = (\overleftrightarrow{Tr}(0.010, 0.010, 0.035, 0.040), 0.00023)$, $(\overleftrightarrow{R}\_Q, \sigma\_Q^2) = (\overleftrightarrow{Tr}(0.020, 0.025, 0.030, 0.045), 0.00024)$ and $(\overleftrightarrow{R}\_R, \sigma\_R^2) = (\overleftrightarrow{Tr}(0.065, 0.055, 0.050, 0.035), 0.00012)$. The return rates $\overleftrightarrow{R}\_P$ and $\overleftrightarrow{R}\_Q$ are positively oriented TrOFNs. Therefore, we can anticipate an increase in the rates of return from the securities $\mathcal{P}$ and $\mathcal{Q}$. Moreover, we can predict a decrease in the rate of return from the security $\mathcal{R}$, because the return rate $\overleftrightarrow{R}\_R$ is a negatively oriented TrOFN. For these reasons, we consider two investment decisions:


Let us compare the financial effectivity of the securities $\mathcal{P}$ and $\mathcal{R}$. In line with (108), (93), (58) and (71), we get

$$\begin{split} \nu\_{NLE}(\mathcal{P},\mathcal{R}) &= \nu\_{GE}\left(\overleftrightarrow{Tr}(0.010, 0.010, 0.035, 0.040), \overleftrightarrow{Tr}(0.065, 0.055, 0.050, 0.035)\right) = \\ &= \nu\_{GE}(Tr(0.010, 0.010, 0.035, 0.040), Tr(0.035, 0.050, 0.055, 0.065)) = \\ &= \frac{0.040 - 0.035}{0.040 + 0.050 - 0.035 - 0.035} = \frac{1}{4}. \end{split} \tag{111}$$

In the same way, we can compare the financial effectivity of the securities $\mathcal{Q}$ and $\mathcal{R}$. We obtain

$$\begin{split} \nu\_{NLE}(\mathcal{Q},\mathcal{R}) &= \nu\_{GE}\left(\overleftrightarrow{Tr}(0.020, 0.025, 0.030, 0.045), \overleftrightarrow{Tr}(0.065, 0.055, 0.050, 0.035)\right) = \\ &= \nu\_{GE}(Tr(0.020, 0.025, 0.030, 0.045), Tr(0.035, 0.050, 0.055, 0.065)) = \\ &= \frac{0.045 - 0.035}{0.045 + 0.050 - 0.030 - 0.035} = \frac{1}{3}. \end{split} \tag{112}$$

Therefore, we can say that the investment decisions (A) and (B) are both partially justified. Because $\nu\_{NLE}(\mathcal{Q}, \mathcal{R}) > \nu\_{NLE}(\mathcal{P}, \mathcal{R})$, we ultimately recommend the investment decision (B).
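The comparison used in (110)–(112) can be sketched in code. The sketch below is not the author's full definition of $\nu\_{GE}$ (which is piecewise); the closed-form fraction covers only the overlapping case appearing in (111) and (112), and the helper names `disorient` and `nu_ge` are hypothetical.

```python
# A minimal sketch of the comparison from Equations (110)-(112): nu_GE for
# trapezoidal numbers in the overlapping case, with a negatively oriented
# TrOFN first "disoriented" (relation GE is independent of orientation).
def disorient(t):
    """Replace a negatively oriented TrOFN (first coordinate > last) by its
    positively oriented counterpart by reversing the tuple."""
    a, b, c, d = t
    return (d, c, b, a) if a > d else t

def nu_ge(p, q):
    """Truth value of "p is greater than or equal to q" for trapezoidal
    numbers p = Tr(a,b,c,d), q = Tr(e,f,g,h), in the case c < f, d > e
    used in Equations (111) and (112)."""
    a, b, c, d = disorient(p)
    e, f, g, h = disorient(q)
    if d <= e:
        return 0.0          # the support of p lies entirely below q
    if c >= f:
        return 1.0          # the cores already satisfy p >= q
    return (d - e) / ((d - c) + (f - e))

R_P = (0.010, 0.010, 0.035, 0.040)
R_Q = (0.020, 0.025, 0.030, 0.045)
R_R = (0.065, 0.055, 0.050, 0.035)   # negatively oriented

print(nu_ge(R_P, R_R))   # 0.25       (Equation (111))
print(nu_ge(R_Q, R_R))   # 0.333...   (Equation (112))
```

The two printed values reproduce the truth-values $\frac{1}{4}$ and $\frac{1}{3}$ obtained above, which again favours investment decision (B).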

#### **7. Final Remarks**

The relation "greater than or equal to" $\widetilde{GE}$ is explicitly defined on the space of all OFNs. To the best of my knowledge, it is the first fuzzy order determined for OFNs. The determined relation $\widetilde{GE}$ compares OFNs without losing information about the imprecision and orientation of the evaluated OFNs. From the point of view of application needs, this approach is desirable. Nevertheless, I have proved that the relation $\widetilde{GE}$ is independent of the orientation of the numbers being compared. This conclusion may be applied to simplify many OFN applications.

The first application of the relation $\widetilde{GE}$ is cited in Section 5. The next application is described in Section 6. In future work, we will employ the proposed relation to model imprecise decision-making problems from concrete applied fields, such as medical decision making, behavioural economics [11], management [15,16], telecommunications, and financial assessment [7,9–14]. These relations may then be used for decision-making problems with a scoring function evaluated by OFNs. In [15,16], such evaluation of the scoring function follows from the fact that partial ratings are evaluated by OFNs. Moreover, when studying multi-criteria group decision-making problems, we should take into account imprecise weights of criteria [80]. These weights may then be evaluated by OFNs, which implies that the scoring function is also evaluated by OFNs. In general, the relation $\widetilde{GE}$ can be applied in any quantitative model of the real world in which a comparison of imprecise numbers is used.

In Section 2.2, I point out some terminology problems connected with the notion of OFN. I believe this is a very important problem from an ethical point of view. I invite the scientific community to discuss this topic.

For any OFN, we can determine the family of oriented α-cuts, each defined as the pair of a usual α-cut and the OFN orientation. An important direction for further development is to propose a fuzzy order on OFNs determined by the family of all α-cuts, as is done for FNs. At present, the theory of oriented α-cuts is unknown.

**Funding:** This research did not receive external funding.

**Acknowledgments:** The author is very grateful to the anonymous reviewers for their insightful and constructive comments and suggestions, which allowed me to improve this article.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Mathematical Apparatus of Optimal Decision-Making Based on Vector Optimization**

#### **Yury K. Mashunin**

Far Eastern Federal University, 690068 Vladivostok, Russia; mashunin@mail.ru; Tel.: +79-143277508

Received: 17 April 2019; Accepted: 29 June 2019; Published: 11 October 2019

**Abstract:** We present the problem of "acceptance of an optimal solution" as a mathematical model in the form of a vector problem of mathematical programming. For the solution of this class of problems, we present the theory of vector optimization as a mathematical apparatus for the acceptance of optimal solutions. Methods for the solution of vector problems are directed at problem solving with equivalent criteria and with a given priority of a criterion. Following our research, the analysis and problem definition of decision making under the conditions of certainty and uncertainty are presented. We show the transformation of a mathematical model under the conditions of uncertainty into a model under the conditions of certainty. We present problems of acceptance of an optimal solution under the conditions of uncertainty with data that are represented by up to four parameters, and also show a geometrical interpretation of the results of the decision. Each numerical example includes input data (requirement specification) for modeling, transformation of a mathematical model under the conditions of uncertainty into a model under the conditions of certainty, making optimal decisions with equivalent criteria (solving a numerical model), and making an optimal decision with a given priority criterion.

**Keywords:** modeling; vector optimization; methods of solution of vector problems; optimal decision-making; numerical realization of decision-making

#### **1. Introduction**

The problem of making an optimal decision that meets the modern achievements of science and technology is connected, firstly, with the release of high-quality products, and, secondly, with the solution of problems of social and economic human development. The decision can be made under the conditions of certainty (when the functional dependence of the purpose on the parameters of the studied object and systems is known) [1–4], or under the conditions of uncertainty (when there is insufficient information on the functional dependence of the purpose on the parameters of the studied object and systems) [5–8]. The conditions of uncertainty are characterized by the fact that input data for decision-making can be presented as random, fuzzy, or incomplete data [1,2,9,10]. Research on this problem of decision-making began with the work of Keeney and Raiffa [11]. Analyses of modern decision-making approaches (i.e., "simple" methods) are presented in [6,12]. One of the areas of decision-making automation is associated with the creation of mathematical models and the adoption of an optimal solution based on them [12–14]. Currently, the most common mathematical apparatus for model-based decision making is vector optimization [6,12,14–19]. The purpose of this work is to build a mathematical model for an object or system of decision making in the form of a vector problem of mathematical programming. Vector optimization is considered as a mathematical apparatus for the solution of the problem of acceptance of an optimal solution.

For the realization of the goal of this work, the study considered and solved the following problems.

The construction of a mathematical model of the problem of finding an optimal solution in the form of a vector problem of mathematical programming has been shown previously [4,6,15,20]. In the current paper, the theory and a mathematical apparatus of problem solving using vector optimization are presented. The theory includes an axiomatic principle of optimality of the solution of vector problems. The mathematical apparatus of the solution of vector problems is intended for the solution of vector problems with equivalent criteria [6,13,21] and with a given priority of a criterion [15,20]. The research, analysis and problem definition of decision making under the conditions of certainty and uncertainty are conducted. The realization of the mathematical apparatus of vector optimization is presented for numerical problems of decision making with one, two, three and four parameters. The solution of the problem of decision making includes the creation of a numerical model of an object in the form of a vector problem, the solution of a problem of decision making with equivalent criteria, and the solution of a vector problem of decision making with a criterion priority.

#### **2. Statement of a Problem: Creation of the Mathematical Model "Acceptance of an Optimal Solution"**

As an "object for optimal decision-making," we use a "technical system". The problem of the choice of optimum parameters of technical (engineering) systems according to functional characteristics arises during the study, analysis, and design of technical systems, and is connected with quality production.

The problem includes the solution of the following tasks:


The technical system depends on *N*, a set of design data: *X* = {*x*1, *x*2, ... , *xN*}, where *N* is the number of parameters, each of which lies in the set limits:

$$x\_j^{\min} \le x\_j \le x\_j^{\max},\ j = \overline{1, N}, \quad \text{or} \quad X^{\min} \le X \le X^{\max},$$

where $x\_j^{\min}$, $x\_j^{\max}$, $\forall j \in N$, are the minimum and maximum limits of change of the vector of parameters of the technical system.

The result of the functioning of the technical system is defined by a set $K$ of technical characteristics $f\_k(X)$, $k = \overline{1, K}$, which functionally depend on the design data $X = \{x\_j, j = \overline{1, N}\}$; together these represent a vector function $F(X) = (f\_1(X)\ f\_2(X)\ \ldots\ f\_K(X))^T$.

The set of characteristics (criteria) is subdivided into two subsets $K\_1$ and $K\_2$: $K = K\_1 \cup K\_2$.

$K\_1$ is the subset of technical characteristics whose numerical values are desired to be as high as possible: $f\_k(X) \to \mathbf{max}$, $k = \overline{1, K\_1}$.

$K\_2$ is the subset of technical characteristics whose numerical values are desired to be as low as possible: $f\_k(X) \to \mathbf{min}$, $k = \overline{K\_1 + 1, K}$, $K\_2 \equiv \overline{K\_1 + 1, K}$.

The mathematical model should consider, firstly, the purposes of the technical system, which are represented by the characteristics $F(X)$, and, secondly, the restrictions $X^{\min} \le X \le X^{\max}$. The mathematical model of the technical system, which solves in general the problem of the choice of the optimum design decision (a choice of optimum parameters), is presented in the form of a vector problem of mathematical programming:

$$\text{Opt}\, F(X) = \{\mathbf{max}\, F\_1(X) = \langle \mathbf{max}\, f\_k(X), k = \overline{1, K\_1}\rangle, \tag{1}$$

$$\mathbf{min}\, F\_2(X) = \langle \mathbf{min}\, f\_k(X), k = \overline{1, K\_2}\rangle\}, \tag{2}$$

$$G(X) \le 0, \tag{3}$$

$$x\_j^{\min} \le x\_j \le x\_j^{\max},\ j = \overline{1, N}, \tag{4}$$

where $X$ is the vector of controlled variables (constructive parameters) in Equation (1), $F(X) = \{f\_k(X), k = \overline{1, K}\}$ represents the vector criterion, whose components are the characteristics of the technical system in Equation (2) and functionally depend on the vector of variables $X$, $G(X) = (g\_1(X)\ g\_2(X)\ \ldots\ g\_M(X))^T$ represents the vector of restrictions imposed on the functioning of the technical system, and $M$ is the set of restrictions.

Restrictions are defined in terms of technological, physical or similar processes, and can be presented by functional restrictions, for example, $f\_k^{\min} \le f\_k(X) \le f\_k^{\max}$, $k = \overline{1, K}$.

It is supposed that the functions $f\_k(X)$, $k = \overline{1, K}$, are differentiable and convex, the functions $g\_i(X)$, $i = \overline{1, M}$, are continuous, and Equations (3) and (4) define a non-empty set $S$ of admissible points, which can be represented as $S = \{X \in R^N \mid G(X) \le 0,\ X^{\min} \le X \le X^{\max}\} \ne \emptyset$.

Criteria and restrictions in Equations (1)–(4) form the mathematical model of the technical system. It is required to find a vector of parameters $X^0 \in S$ at which every component of the vector-function $F\_1(X) = \{f\_k(X), k = \overline{1, K\_1}\}$ takes the greatest possible value and every component of the vector-function $F\_2(X) = \{f\_k(X), k = \overline{1, K\_2}\}$ takes the minimum value.

For the substantial class of technical systems which can be represented by the vector problem of Equations (1)–(4), one can point to their numerous applications in various fields, such as electrical engineering [23], aerospace [10,13], metallurgy (the choice of the optimal structure of a material), and chemistry [24].

In this article, the technical system is considered static. However, technical systems can also be considered dynamic [23], using differential-difference methods of transformation conducted over a small discrete period $\Delta t \in T$.

#### **3. Theory. Axioms, the Principle of Optimality and Methods for Solving Vector Problems of Mathematical Programming**

The theory of vector optimization includes theoretical foundations (axioms) and methods of the solution of vector problems with equivalent criteria and with a given criterion priority. The theory is the basis of the mathematical apparatus of modeling an "object for optimal decision-making", which allows the selection of any point from the set of Pareto-optimal points and shows why the selection is optimal.

We present, first, the axioms and methods of the solution of problems of vector optimization with equivalent criteria (Section 3.1) and, second, with a specified priority of a criterion (Section 3.2).

#### *3.1. Vector Optimization with Equivalent Criteria*

3.1.1. Axioms and the Principle of Optimality of Vector Optimization with Equivalent Criteria

**Definition 1.** (Definition of the relative assessment of criteria).

In the vector problem of Equations (1)–(4), the relative estimate of a point $X \in S$ by the $k$th criterion is defined as $\lambda\_k(X) = \frac{f\_k(X) - f\_k^0}{f\_k^\* - f\_k^0}$, $\forall k \in K$, where $f\_k(X)$ is the $k$th criterion at the point $X \in S$; $f\_k^\*$ is the value of the $k$th criterion at the optimum point $X\_k^\*$, obtained in the vector problem of Equations (1)–(4) for the individual $k$th criterion; and $f\_k^0$ is the worst value of the $k$th criterion (anti-optimum) at the point $X\_k^0$ (superscript 0) on the admissible set $S$ of Equations (1)–(4). For a criterion maximized in Equation (1), $f\_k^0$ is the lowest value of the $k$th criterion, $f\_k^0 = \min\_{X \in S} f\_k(X)$, $\forall k \in K\_1$, and for a criterion minimized in Equation (2) it is the greatest, $f\_k^0 = \max\_{X \in S} f\_k(X)$, $\forall k \in K\_2$. The relative estimate $\lambda\_k(X)$, $\forall k \in K$, is, firstly, measured in relative units and, secondly, changes on the admissible set from zero at the point $X\_k^0$, $\forall k \in K$: $\lim\_{X \to X\_k^0} \lambda\_k(X) = 0$, to unity at the optimum point $X\_k^\*$, $\forall k \in K$: $\lim\_{X \to X\_k^\*} \lambda\_k(X) = 1$, i.e., $\forall k \in K$, $0 \le \lambda\_k(X) \le 1$, $X \in S$. This allows the comparison of criteria measured in relative units during joint optimization.
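As a minimal sketch (the function name `relative_estimate` is hypothetical), the normalization of Definition 1 can be written directly:

```python
# A sketch of the relative estimate from Definition 1, assuming the criterion
# value f_k(X), the individual optimum f_k^* and the anti-optimum f_k^0 are
# already known for the criterion in question.
def relative_estimate(f_x, f_best, f_worst):
    """lambda_k(X) = (f_k(X) - f_k^0) / (f_k^* - f_k^0); equals 0 at the
    anti-optimum X_k^0 and 1 at the individual optimum X_k^*."""
    return (f_x - f_worst) / (f_best - f_worst)

# Illustrative values only: the estimate hits its bounds at the two endpoints.
print(relative_estimate(20.0, 100.0, 20.0))   # 0.0
print(relative_estimate(100.0, 100.0, 20.0))  # 1.0
print(relative_estimate(60.0, 100.0, 20.0))   # 0.5
```

The same expression works for maximized and minimized criteria alike, since for a minimized criterion $f\_k^0 > f\_k^\*$ and the fraction still maps the admissible range onto $[0, 1]$.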

**Axiom 1.** (About equality and equivalence of criteria at an admissible point of vector problems of mathematical programming)

In vector problems of mathematical programming, two criteria with indexes $k \in K$, $q \in K$ shall be considered equal at a point $X \in S$ if the relative estimates of the $k$th and $q$th criteria are equal at this point, i.e., $\lambda\_k(X) = \lambda\_q(X)$, $k, q \in K$.

We will consider criteria equivalent at a point $X \in S$ in vector problems of mathematical programming if, when comparing the numerical values of the relative estimates $\lambda\_k(X)$, $k = \overline{1, K}$, of the criteria $f\_k(X)$, $k = \overline{1, K}$, no conditions about the priorities of the criteria are imposed.

**Definition 2.** (Definition of a minimum level among all relative estimates of criteria).

The relative level λ in a vector problem represents the lower assessment of a point of *X* ∈ *S* among all relative estimates of λ*k*(*X*), *k* = 1,*K*:

$$\forall X \in \mathcal{S}, \,\lambda \le \lambda\_k(X), k = \,\overline{1, K}, \tag{5}$$

The lower level for the performance of the condition of Equation (5) at an admissible point of *X* ∈ *S* is defined as:

$$\forall X \in \mathcal{S}, \lambda = \min\_{k \in K} \lambda\_k(X). \tag{6}$$

Equations (5) and (6) are interconnected. They serve as a transition from Equation (6) of the definition of a minimum to the restrictions of Equation (5), and vice versa.

The level λ allows the union of all criteria in a vector problem into one numerical characteristic λ, on which certain operations can be performed, thereby carrying out these operations on all criteria measured in relative units. The level λ functionally depends on the variable $X \in S$; by changing $X$, we can change the lower level λ. From here, we formulate the rule for searching for the optimum decision.

**Definition 3.** (The principle of optimality with equivalent criteria).

The vector problem of mathematical programming with equivalent criteria is solved if the point $X^0 \in S$ and the maximum level $\lambda^0$ (the superscript 0 denotes the optimum) among all relative estimates are found such that:

$$\lambda^0 = \max\_{X \in S} \min\_{k \in K} \lambda\_k(X). \tag{7}$$

Using the interrelation of Equations (5) and (6), we transform the maximin problem of Equation (7) into an extremal problem:

$$\lambda^0 = \max\_{X \in S} \lambda, \tag{8}$$

$$
\lambda \le \lambda\_k(X), k = \overline{1, K}. \tag{9}
$$

We can call the resulting problem of Equations (8) and (9) the λ-problem.

The λ-problem of Equations (8) and (9) has dimension $(N + 1)$. As a consequence, the solution of the λ-problem is an optimum vector $X^0 \in R^{N+1}$ whose $(N + 1)$th component is the value of $\lambda^0$, i.e., $X^0 = \{x\_1^o, x\_2^o, \ldots, x\_N^o, x\_{N+1}^o\}$ with $x\_{N+1}^o = \lambda^0$; the $(N + 1)$th component of the vector $X^0$ is singled out in view of its specificity.

The obtained pair $\{\lambda^0, X^0\} = \mathbf{X}^0$ characterizes the optimum solution of the λ-problem and, accordingly, of the vector problem of mathematical programming of Equations (1)–(4) with equivalent criteria, solved on the basis of normalization of criteria and the principle of the guaranteed result. In the optimum solution $\mathbf{X}^0 = \{X^0, \lambda^0\}$, $X^0$ is an optimal point and $\lambda^0$ is the maximum level.

An important result of the algorithm for solving the vector problems of Equations (1)–(4) with equivalent criteria is the following theorem.

**Theorem 1.** (The theorem of the two most contradictory criteria in the vector problem of mathematical programming with equivalent criteria).

In convex vector problems of mathematical programming with equivalent criteria that are solved on the basis of normalization of criteria and the principle of the guaranteed result, for an optimum point of *<sup>X</sup>*<sup>0</sup> <sup>=</sup> {λ0, *<sup>X</sup>*0}, two criteria are denoted by their indexes *<sup>q</sup>* <sup>∈</sup> *<sup>K</sup>*, *<sup>p</sup>* <sup>∈</sup> *<sup>K</sup>* (which in a sense are the most contradictory of the criteria *k* = 1,*K*), for which an equality is carried out:

$$\lambda^0 = \lambda\_q(X^0) = \lambda\_p(X^0),\ q \ne p,\ q, p \in K,\ X^0 \in S, \tag{10}$$

and other criteria are defined by inequalities:

$$\lambda^0 \le \lambda\_k(X^0)\ \forall k \in K,\ q \ne k \ne p. \tag{11}$$

#### 3.1.2. Mathematical Algorithm of the Solution of a Vector Problem with Equivalent Criteria

To solve the vector problems of mathematical programming of Equations (1)–(4), the methods based on axioms of the normalization of criteria and the principle of the guaranteed result [12,21] are offered. Methods follow from Axiom 1 and the principle of optimality (Definition 3). We will present this as a number of steps as follows:

The method of the solution of the vector problem of Equations (1)–(4) with equivalent criteria is presented in the form of the sequence of steps [25].

Step 1. The problem of Equations (1)–(4) is solved separately for each criterion, i.e., each criterion $\forall k \in K\_1$ is maximized and each criterion $\forall k \in K\_2$ is minimized. As a result of the decision, we obtain $X\_k^\*$, the optimum point for the corresponding criterion, $k = \overline{1, K}$, and $f\_k^\* = f\_k(X\_k^\*)$, the value of the $k$th criterion at this point, $k = \overline{1, K}$.

Step 2. We define the worst value of each criterion on $S$: $f\_k^0$, $k = \overline{1, K}$. For the problem of Equations (1)–(4), for each maximized criterion, a minimum is solved: $f\_k^0 = \min f\_k(X)$, $G(X) \le B$, $X \ge 0$, $k = \overline{1, K\_1}$.

In addition, for Equations (1)–(4), for each minimized criterion, a maximum is solved: $f\_k^0 = \max f\_k(X)$, $G(X) \le B$, $X \ge 0$, $k = \overline{K\_1 + 1, K}$.

As a result of the decision, we obtain $X\_k^0 = \{x\_j, j = \overline{1, N}\}$, the anti-optimum point for the corresponding criterion, and $f\_k^0 = f\_k(X\_k^0)$, the value of the $k$th criterion at the point $X\_k^0$, $k = \overline{1, K}$.

Step 3. A system analysis of the set of Pareto-optimal points is undertaken. For this purpose, at the optimum points $X^\* = \{X\_k^\*, k = \overline{1, K}\}$, the values of the criterion functions $F(X^\*)$ and the relative estimates $\lambda(X^\*)$, $\lambda\_k(X) = \frac{f\_k(X) - f\_k^0}{f\_k^\* - f\_k^0}$, $\forall k \in K$, are determined:

$$F(X^\*) = \left\langle f\_q(X^\*\_k),\ q = \overline{1, K},\ k = \overline{1, K} \right\rangle = \begin{pmatrix} f\_1(X^\*\_1) & \ldots & f\_K(X^\*\_1) \\ \ldots & \ldots & \ldots \\ f\_1(X^\*\_K) & \ldots & f\_K(X^\*\_K) \end{pmatrix}, \quad \lambda(X^\*) = \begin{pmatrix} \lambda\_1(X^\*\_1) & \ldots & \lambda\_K(X^\*\_1) \\ \ldots & \ldots & \ldots \\ \lambda\_1(X^\*\_K) & \ldots & \lambda\_K(X^\*\_K) \end{pmatrix}. \tag{12}$$

As a whole, for the problem, $\forall k \in K$, the relative estimate $\lambda\_k(X)$ lies within $0 \le \lambda\_k(X) \le 1$.
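As a sketch of Equation (12), the matrix of relative estimates can be computed from the matrix of criterion values at the individual optima. The numbers below are taken from the two-criterion numeric example solved in Section 3.1.3; with $f\_k^0 = 0$ the normalization reduces to a column-wise division.

```python
# Sketch of Equation (12): relative estimates lambda_k(X) = (f_k - f_k^0) /
# (f_k^* - f_k^0) at the individual optima, for the numeric example with
# f_k^0 = 0 (anti-optima at the origin).
import numpy as np

F = np.array([[99.29, 61.43],     # f1(X1*), f2(X1*)
              [90.0, 175.0]])     # f1(X2*), f2(X2*)
f_star = np.array([99.29, 175.0])  # column-wise optima f_k^*
f_0 = np.zeros(2)                  # anti-optima f_k^0

lam = (F - f_0) / (f_star - f_0)   # broadcast over the criterion columns
print(lam.round(3))   # diagonal is 1.0; off-diagonal approx. 0.351 and 0.906
```

The off-diagonal entries reproduce the values 0.351 and 0.907 (up to rounding) reported in the numeric example, and the diagonal is exactly 1, as Definition 1 requires at each individual optimum.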

Step 4. Creation of the λ-problem.

Creation of the λ-problem is carried out in two stages: initially, we build the maximin problem of optimization with the normalized criteria, which at the second stage is transformed into the standard problem of mathematical programming called the λ-problem.

For the construction of the maximin problem of optimization, we use Definition 2 of the relative level: $\forall X \in S$, $\lambda = \min\_{k \in K} \lambda\_k(X)$.

The bottom level λ is maximized on $X \in S$; as a result, we obtain a maximin problem of optimization with the normalized criteria:

$$
\lambda^0 = \max\_x \min\_k \lambda\_k(X), G(X) \le B, X \ge 0. \tag{13}
$$

At the second stage we transform the problem of Equation (13) into a standard problem of mathematical programming:

$$\lambda^0 = \max \lambda \ \to\ \lambda^0 = \max \lambda, \tag{14}$$

$$\lambda - \lambda\_k(X) \le 0,\ k = \overline{1, K}, \ \to\ \lambda - \frac{f\_k(X) - f\_k^0}{f\_k^\* - f\_k^0} \le 0,\ k = \overline{1, K}, \tag{15}$$

$$G(X) \le B, X \ge 0, \to G(X) \le B, X \ge 0,\tag{16}$$

where the vector of unknowns $X$ has dimension $N + 1$: $X = \{\lambda, x\_1, \ldots, x\_N\}$.

Step 5. Solution of the λ-problem.

The λ-problem of Equations (14)–(16) is a standard problem of convex programming, for whose solution standard methods can be used.

As a result of the solution of the λ-problem, the following are obtained:

*X***<sup>0</sup>** = {λ0, *X*0}, an optimum point,

$f\_k(X^0)$, $k = \overline{1, K}$, the values of the criteria at this point; $\lambda\_k(X^0) = \frac{f\_k(X^0) - f\_k^0}{f\_k^\* - f\_k^0}$, $k = \overline{1, K}$, the values of the relative estimates,

$\lambda^0$, the maximum relative estimate, which represents the maximum bottom level for all relative estimates $\lambda\_k(X^0)$, or the guaranteed result in relative units. $\lambda^0$ guarantees that all relative estimates $\lambda\_k(X^0)$ are greater than or equal to $\lambda^0$: $\lambda^0 \le \lambda\_k(X^0)$, $k = \overline{1, K}$, $X^0 \in S$, and, according to Theorem 1 [12,21], the point $X^0 = \{\lambda^0, x\_1, \ldots, x\_N\}$ is Pareto optimal.

3.1.3. Implementation of the Decision using the Example of a Vector Problem of Linear Programming with Equivalent Criteria

The use of the vector problem of Equations (1)–(4) for decision making is carried out in four stages: statement of the problem, construction of the mathematical model, software development for solving the vector problem, and solution of the vector problem.

These stages are carried out on the example of a model of an economic system presented by a vector problem of linear programming with equivalent criteria.

Stage 1. Statement of the problem.

As an economic system, a model of the production schedule of an enterprise is considered.

It is given that the company, which produces heterogeneous products of four types, $N = 4$, uses resources of three types, $M = 3$, in production: labor (various specialties), materials (different types of materials), and power (equipment: welding, turning, etc.).

The technological matrix of production is presented in Table 1. It also indicates the potential of the enterprise for each type of resource $b\_i$, $i = \overline{1, 3}$, as well as the income $c\_j^1$ and profit $c\_j^2$ from the sale of a unit of each type of product.


**Table 1.** The consumption of resources and operational performance.

It is required to make the production schedule of the enterprise, which includes indicators according to the nomenclature (by types of products) and on a volume basis, i.e., how many products of the corresponding type should be made by the enterprise so that income and profit can be realized as shown above. Construction and solution of the mathematical model follow.

Stage 2. Construction of a mathematical model.

As variables, we take the volume of products that the company produces: *X* = {*x*1, ... , *xN*}, *N* = 4. We express target orientation of the production schedule by means of a vector problem of linear programming (VPLP) which will take the form:

opt *F*(*X*) = {max *f* 1(*X*) = (4.0*x*<sup>1</sup> + 5.0*x*<sup>2</sup> + 9.0*x*<sup>3</sup> + 11.0*x*4),

max *f* 2(*X*) = (2*x*<sup>1</sup> + 10*x*<sup>2</sup> + 6*x*<sup>3</sup> + 20*x*4)},

with restrictions *x*<sup>1</sup> + *x*<sup>2</sup> + *x*<sup>3</sup> + *x*<sup>4</sup> ≤ 15,

7*x*<sup>1</sup> + 5*x*<sup>2</sup> + 3*x*<sup>3</sup> + 2*x*<sup>4</sup> ≤ 120,

3*x*<sup>1</sup> + 5*x*<sup>2</sup> +10*x*<sup>3</sup> +15*x*<sup>4</sup> ≤ 100, *x*<sup>1</sup> ≥ 0, *x*<sup>2</sup> ≥ 0, *x*<sup>3</sup> ≥ 0, *x*<sup>4</sup> ≥ 0.

In this VPLP the following is formulated: it is required to find the non-negative solution of *x*1, ... , *x*<sup>4</sup> in the system of inequalities at which the *f* 1(*X*) and *f* 2(*X*) functions obtain maximum values.

Stage 3. The software engineering of the solution of the VPLP.

The "Solution of a Vector Problem of Linear Programming" program is presented in the annex to this section.

Stage 4. Solution of a vector problem of linear programming.

We show the solution of the problem of linear programming in the MATLAB system according to the algorithm of the solution of the VPLP on the basis of normalization of criteria and the principle of the guaranteed result. At first, the input data are prepared, and the vector target function is formed as a matrix:

```matlab
disp('Solution of a vector problem of linear programming')
cvec = [-4.0  -5.0  -9.0 -11.0;   % sales volume (negated: linprog minimizes)
        -2.0 -10.0  -6.0 -20.0];  % profit volume (negated: linprog minimizes)
a = [1.0 1.0  1.0  1.0;
     7.0 5.0  3.0  2.0;
     3.0 5.0 10.0 15.0];          % matrix of linear restrictions
b = [15. 120. 100.];              % vector of restrictions (b_i)
Aeq = []; beq = [];               % no equality restrictions
lb = zeros(4,1); ub = [];         % bounds x_j >= 0, no upper bounds
X0 = [0. 0. 0. 0.];               % initial vector of variables
```

*The algorithm of the solution of VPLP* is represented as a sequence of steps.

Step 1. A decision on each criterion.

The decision on the first criterion of the VPLP: `[x1, f1] = linprog(cvec(1,:), a, b, Aeq, beq, lb, ub)`.

Decision on the first criterion: $X\_1^\* = \{x\_1 = 7.14, x\_2 = x\_4 = 0, x\_3 = 7.85\}$, $f1 = -99.286$, i.e., $f\_1^\* = 99.286$ (the sign changes because the criterion is negated for linprog).

Decision on the second criterion: $X\_2^\* = \{x\_1 = 0, x\_2 = 12.5, x\_3 = 0, x\_4 = 2.5\}$, $f\_2^\* = 175$.

Step 2. The worst (anti-optimum) point is determined for each criterion by multiplying the criterion by minus one. For the decisions on the first and second criteria:

$X\_1^0 = \{x\_1 = 0, \ldots, x\_4 = 0\}$, $f\_1^0 = 0$; $X\_2^0 = \{x\_1 = 0, \ldots, x\_4 = 0\}$, $f\_2^0 = 0$.
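Steps 1 and 2 can be cross-checked outside MATLAB. The sketch below (not part of the original program) reproduces the individual optima and anti-optima with SciPy's `linprog`, using the numeric model of Stage 2.

```python
# Sketch of Steps 1-2 (individual optima f_k^* and anti-optima f_k^0) with
# SciPy's linprog in place of MATLAB's, for the numeric model of Stage 2.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [7.0, 5.0, 3.0, 2.0],
              [3.0, 5.0, 10.0, 15.0]])   # resource restrictions
b = np.array([15.0, 120.0, 100.0])
criteria = np.array([[4.0, 5.0, 9.0, 11.0],    # f1: sales volume
                     [2.0, 10.0, 6.0, 20.0]])  # f2: profit volume

best, worst = [], []
for ck in criteria:
    # Step 1: maximize f_k  ->  minimize -f_k
    res_max = linprog(-ck, A_ub=A, b_ub=b, bounds=[(0, None)] * 4)
    best.append(-res_max.fun)   # f_k^*
    # Step 2: anti-optimum f_k^0 = min f_k on S (attained here at X = 0)
    res_min = linprog(ck, A_ub=A, b_ub=b, bounds=[(0, None)] * 4)
    worst.append(res_min.fun)   # f_k^0

print(best)    # approx. [99.286, 175.0]
print(worst)   # [0.0, 0.0]
```

The computed values agree with the MATLAB results above: $f\_1^\* \approx 99.286$, $f\_2^\* = 175$, and both anti-optima are zero.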

Step 3. The system analysis of the criteria in the VPLP is undertaken (i.e., the system of two criteria at optimum points is analyzed). For this purpose, optimum points of *X*∗ <sup>1</sup>, *X*<sup>∗</sup> <sup>2</sup> are defined sizes of criterion functions and relative estimates of: *F*(*X\**) = *fq*(*X*<sup>∗</sup> *k* ) *q* = 1,*K k* = 1,*K* , λ(*X\**) = λ*q*(*X*<sup>∗</sup> *k* ) *q* = 1,*K k* = 1,*K* , <sup>λ</sup>(*X\**) <sup>=</sup> *fk*(*X*∗)−*<sup>f</sup> <sup>o</sup> k f*∗ *<sup>k</sup>*−*<sup>f</sup> <sup>o</sup> k* , *k* = 1,*K***,** *F(X\*)* = *f*1 *X*∗ <sup>1</sup>) *f*<sup>1</sup> *X*∗ 1 *f*1 *X*∗ <sup>2</sup>) *f*<sup>2</sup> *X*∗ 2 = 99.29 61.43 90.0 175.0 , λ(*X\**) = λ1 *X*∗ <sup>1</sup>) λ<sup>2</sup> *X*∗ 1 λ1 *X*∗ <sup>2</sup>) λ<sup>2</sup> *X*∗ 2 = 1.0 0.351 0.907 1.0 . Step 4. The λ-problem is constructed as: λ<sup>0</sup> = *max* λ, with restrictions <sup>λ</sup> <sup>−</sup> (*<sup>f</sup>* 1(*X*) <sup>−</sup> *<sup>f</sup>* <sup>0</sup> <sup>1</sup> )/(*f*<sup>∗</sup> <sup>1</sup> <sup>−</sup> *<sup>f</sup>* <sup>0</sup> <sup>1</sup> ) <sup>≤</sup> 0, <sup>λ</sup> <sup>−</sup> (*<sup>f</sup>* 2(*X*) <sup>−</sup> *<sup>f</sup>* <sup>0</sup> <sup>2</sup> )/(*f*<sup>∗</sup> <sup>2</sup> <sup>−</sup> *<sup>f</sup>* <sup>0</sup> <sup>2</sup> ) ≤ 0, *G*(*X*) ≤ *B*, *X* ≥ 0. 
Substituting numerical data, we obtain: $\lambda^0 = \max \lambda$, with the restrictions:

$$\lambda - (4.0x_1 + 5.0x_2 + 9.0x_3 + 11.0x_4 - f_1^0)/(f_1^* - f_1^0) \le 0,\quad \lambda - (2x_1 + 10x_2 + 6x_3 + 20x_4 - f_2^0)/(f_2^* - f_2^0) \le 0,$$

$$x_1 + x_2 + x_3 + x_4 \le 15,\quad 7x_1 + 5x_2 + 3x_3 + 2x_4 \le 120,\quad 3x_1 + 5x_2 + 10x_3 + 15x_4 \le 100,\quad x_1 \ge 0,\ x_2 \ge 0,\ x_3 \ge 0,\ x_4 \ge 0.$$

Step 5. Solution of the λ-problem. The results of the solution are: $X^0 = \{x_1 = 0.9217914,\ x_2 = 0.0,\ x_3 = 11.73964,\ x_4 = 1.520722,\ x_5 = 1.739639\}$, the optimum values of the variables (in the solver's numbering, the first component holds $\lambda$ and the components $x_2, \dots, x_5$ hold the decision variables);

$\lambda^0 = 0.9218$, the optimum value of the criterion function.

We execute a check: at the optimum point $X^0$ we determine the values of the criterion functions $F(X^0) = \{f_k(X^0),\ k = \overline{1,K}\}$ and the relative estimates $\lambda(X^0) = \{\lambda_k(X^0),\ k = \overline{1,K}\}$.

As a result of the decision we obtain: $F(X^0) = [f_1(X^0) = 91.52,\ f_2(X^0) = 161.3]$,

$\lambda_1(X^0) = 0.9218$, $\lambda_2(X^0) = 0.9218$, i.e., $\lambda^0 \le \lambda_k(X^0)$, $k = 1, 2$.

These results show that at the point $X^0$ both criteria, in relative units, reach $\lambda^0 = 0.92$ of their optimum values. Any increase in one of the criteria above this level leads to a decrease in the other criterion, i.e., the point $X^0$ is Pareto optimal.

Here we present the text of the program in the MATLAB system.

```
%Vector linear programming problem, 2 criteria
% Author: Yury K. Mashunin (Машунин Юрий Константинович)
%The program is designed for training and research; for commercial purposes
%please contact: Mashunin@mail.ru
disp('Vector linear programming problem - 2 criteria')
disp(' opt F(X) = {max f1(X) = (4.0x1 + 5.0x2 + 9.0x3 + 11.0x4),')
disp('             max f2(X) = (2x1 + 10x2 + 6x3 + 20x4),')
disp(' x1 + x2 + x3 + x4 <= 15,')
disp('7x1 + 5x2 + 3x3 + 2x4 <= 120,')
disp('3x1 + 5x2 + 10x3 + 15x4 <= 100, x1 >= 0,..., x4 >= 0')
disp('Step 0. Input data of the vector problem')
cvec = [-4.0 -5.0 -9.0 -11.0; -2. -10. -6. -20.];
a = [1. 1. 1. 1.; 7. 5. 3. 2.; 3. 5. 10. 15.];
b = [15. 120. 100.];
Aeq = []; beq = [];
x0 = [0. 0. 0. 0.]; % passed to linprog as the lower bound, enforcing x >= 0
disp('Step 1. The solution for each criterion is the best')
[x1,f1] = linprog(cvec(1,:),a,b,Aeq,beq,x0)
[x2,f2] = linprog(cvec(2,:),a,b,Aeq,beq,x0)
disp('Step 2. Solution by criterion-best-worst')
[x1min,f1min] = linprog(-1*cvec(1,:),a,b,Aeq,beq,x0)
[x2min,f2min] = linprog(-1*cvec(2,:),a,b,Aeq,beq,x0)
disp('Step 3. System analysis of the results of the decision')
d1 = -f1- f1min; d2 = -f2- f2min;
f = [cvec(1,:)*x1 cvec(2,:)*x1; cvec(1,:)*x2 cvec(2,:)*x2]
L = [(-f(1,1) - f1min)/d1 (-f(1,2) - f2min)/d2;
(-f(2,1) - f1min)/d1 (-f(2,2) - f2min)/d2]
disp('Step 4. Solution of L-problem');
cvec0 = [-1. 0. 0. 0. 0.];
a0 = [1. -4.0/d1 -5.0/d1 -9.0/d1 -11.0/d1;
1. -2./d2 -10./d2 -6./d2 -20./d2;
0. 1. 1. 1. 1.;
0. 7. 5. 3. 2.;
0. 3. 5. 10. 15.];
b0 = [-f1min/d1 -f2min/d2 15. 120. 100.];
x00 = [0. 0. 0. 0. 0.];
[X0,L0] = linprog(cvec0,a0,b0,Aeq,beq,x00)
fX0 = [cvec(1,:)*X0(2:5) cvec(2,:)*X0(2:5)]
LX0 = [(-fX0(1)- f1min)/ d1 (-fX0(2) - f2min)/d2]
```
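For readers working outside MATLAB, the same five steps can be sketched in Python; this is our reimplementation of the listing above using `scipy.optimize.linprog` (HiGHS solver), not part of the author's program:

```python
import numpy as np
from scipy.optimize import linprog

# Step 0: criteria (both to be maximized) and the common constraints
C = np.array([[4.0, 5.0, 9.0, 11.0],    # f1(X)
              [2.0, 10.0, 6.0, 20.0]])  # f2(X)
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [7.0, 5.0, 3.0, 2.0],
              [3.0, 5.0, 10.0, 15.0]])
b = np.array([15.0, 120.0, 100.0])

# Step 1: the best value of each criterion (linprog minimizes, so negate)
f_best = np.array([-linprog(-C[k], A_ub=A, b_ub=b).fun for k in range(2)])

# Step 2: the worst (anti-optimum) value of each criterion
f_worst = np.array([linprog(C[k], A_ub=A, b_ub=b).fun for k in range(2)])

# Step 4: lambda-problem over (lam, x1..x4): maximize lam subject to
#   lam - (f_k(X) - f_k^0)/(f_k^* - f_k^0) <= 0 and G(X) <= B, X >= 0
d = f_best - f_worst
A0 = np.vstack([np.hstack([[1.0], -C[0] / d[0]]),
                np.hstack([[1.0], -C[1] / d[1]]),
                np.hstack([np.zeros((3, 1)), A])])
b0 = np.hstack([-f_worst / d, b])

# Step 5: solve; all variables are non-negative by linprog's default bounds
res = linprog([-1.0, 0.0, 0.0, 0.0, 0.0], A_ub=A0, b_ub=b0)
lam0, X0 = -res.fun, res.x[1:]
print(lam0)      # ~0.9218
print(C @ X0)    # ~[91.52, 161.31]
```

Up to solver rounding, the printed values reproduce the results of Step 5 of the example above.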
#### *3.2. Vector Optimization with a Criterion Priority*

3.2.1. Axioms and the Principle of Optimality of Vector Optimization with a Criterion Priority

To develop methods for the solution of vector optimization problems with a criterion priority, we use the following definitions:


**Definition 4.** (About the priority of one criterion over the other).

The criterion $q \in K$ in the vector problem of Equations (12) and (13) has priority at a point $X \in S$ over the other criteria $k = \overline{1,K}$ if the relative estimate $\lambda_q(X)$ of this criterion is greater than or equal to the relative estimates $\lambda_k(X)$ of the other criteria, i.e.:

$$
\lambda_q(X) \ge \lambda_k(X),\ k = \overline{1,K},$$

with a strict priority for at least one criterion $t \in K$:

$$
\lambda_q(X) > \lambda_t(X),\ t \neq q,\ \text{and for the other criteria}\ \lambda_q(X) \ge \lambda_k(X),\ k = \overline{1,K},\ k \neq t \neq q.
$$

The introduction of the definition of a criterion priority into the vector problem of Equations (1)–(4) redefines the earlier concept of priority. Previously, the importance of a criterion was an intuitive notion; now this "importance" is defined as a mathematical concept: the higher the relative estimate of the $q$th criterion compared to the others, the more important it is (i.e., the higher its priority), and the highest priority is attained at the optimum point $X_q^*$, $\forall q \in K$.

From the definition of the priority of a criterion $q \in K$ in the vector problem of Equations (1)–(4), it follows that it is possible to reveal a set of points $S_q \subset S$ characterized by $\lambda_q(X) \ge \lambda_k(X)$, $\forall k \neq q$, $\forall X \in S_q$. However, the question of how much more priority the criterion $q \in K$ has over the others at a point of the set $S_q$ remains open. To clarify this question, we define a communication coefficient between each pair of relative estimates $q$ and $k$; together, these coefficients form a vector: $P^q(X) = \{p_k^q(X) \mid k = \overline{1,K}\}$, $q \in K$, $\forall X \in S_q$.

**Definition 5.** (About numerical expression of a priority of one criterion over another).

In the vector problem of Equations (12) and (13) with a priority of the $q$th criterion over the other criteria $k = \overline{1,K}$, for $\forall X \in S_q$, the vector $P^q(X)$, which shows how many times the relative estimate $\lambda_q(X)$, $q \in K$, is greater than the other relative estimates $\lambda_k(X)$, $k = \overline{1,K}$, defines the numerical expression of the priority of the $q$th criterion over the other criteria:

$$P^q(X) = \left\{ p_k^q(X) = \lambda_q(X)/\lambda_k(X),\ k = \overline{1,K} \right\},\ p_k^q(X) \ge 1,\ \forall X \in S_q \subset S,\ \forall q \in K. \tag{17}$$

**Definition 6.** (About the set numerical expression of a priority of one criterion over another).

In the vector problem of Equations (1)–(4) with a priority of the criterion $q \in K$, for $\forall X \in S$, the vector $P^q = \{p_k^q,\ k = \overline{1,K}\}$ is considered to be set by the person making the decision (the decision-maker) if every component of this vector is set. The component $p_k^q$, set by the decision-maker, shows, from the decision-maker's point of view, how many times the relative estimate $\lambda_q(X)$ is greater than the relative estimate $\lambda_k(X)$, $k = \overline{1,K}$. The vector $P^q = \{p_k^q,\ k = \overline{1,K}\}$ is the numerical expression of the priority of the $q$th criterion over the other criteria $k = \overline{1,K}$:

$$P^q = \left\{ p_k^q,\ k = \overline{1,K} \right\},\ p_k^q \ge 1,\ \forall X \in S_q \subset S,\ \forall q \in K. \tag{18}$$

The vector problem of Equations (1)–(4) in which the priority of some criterion is set is called a vector problem with a set priority of criterion. The problem of setting a vector of priorities arises when it is necessary to determine the point $X^0 \in S$ corresponding to the set vector of priorities. In comparing relative estimates with a priority of the criterion $q \in K$, as in the problem with equivalent criteria, we define an additional numerical characteristic $\lambda$, which we call the level.

**Definition 7.** (About the lower level among all relative estimates with a criterion priority).

The level $\lambda$ is the lowest among all relative estimates with a priority of the criterion $q \in K$, such that:

$$
\lambda \le p_k^q \lambda_k(X),\ k = \overline{1,K},\ q \in K,\ \forall X \in S_q \subset S; \tag{19}
$$

The lower level for the performance of the condition in Equation (19) is defined as:

$$\lambda = \min\_{k \in K} p\_k^q \lambda\_k(X), q \in K, \forall X \in S\_q \subset S. \tag{20}$$

Equations (19) and (20) are interconnected and serve for the subsequent transition from the minimum operation to restrictions, and vice versa. In Section 3.1, we gave the definition of a Pareto optimal point $X^0 \in S$ with equivalent criteria. Taking this definition as the initial one, we construct a number of axioms dividing the admissible set $S$ into, first, a subset of Pareto optimal points $S^0$ and, second, subsets of points $S_q \subset S$, $q \in K$, with priority of the $q$th criterion.
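As a small numerical illustration of Definitions 5–7 (the relative estimates below are made-up values, not taken from the article's example): when the priority vector of Equation (17) is computed at the point $X$ itself, each product $p_k^q \lambda_k(X)$ equals $\lambda_q(X)$, so the level of Equation (20) coincides with $\lambda_q(X)$:

```python
import numpy as np

# Hypothetical relative estimates at some point X in S_q (values are made up)
lam = np.array([0.9, 0.6, 0.75])
q = int(np.argmax(lam))     # the criterion with the largest estimate has priority

# Definition 5 / Eq. (17): numerical expression of the priority of criterion q
p = lam[q] / lam            # p[k] = lam_q / lam_k >= 1 for all k
assert np.all(p >= 1.0)

# Definition 7 / Eq. (20): the lower level among the weighted estimates;
# p[k] * lam[k] = lam_q for every k, so the level equals lam_q itself
level = np.min(p * lam)
print(p)       # approximately [1.0, 1.5, 1.2]
print(level)   # approximately 0.9
```

A decision-maker-supplied vector $P^q$ (Definition 6) would generally differ from this computed one, and the level would then single out points where the set priorities are respected.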

**Axiom 2.** (About a subset of points, priority by criterion).

In the vector problem of Equations (12) and (13), the subset of points $S_q \subset S$ is called the area of priority of the criterion $q \in K$ over the other criteria if $\forall X \in S_q$, $\forall k \in K$: $\lambda_q(X) \ge \lambda_k(X)$, $q \neq k$.

This definition extends to a set of Pareto optimal points *S*<sup>0</sup> that is given by the following definition.

**Axiom 2a.** (About a subset of points with priority by criterion on the Pareto set in a vector problem). In a vector problem of mathematical programming, the subset of points $S_q^o \subset S^0 \subset S$ is called the area of priority of the criterion $q \in K$ over the other criteria if $\forall X \in S_q^o$, $\forall k \in K$: $\lambda_q(X) \ge \lambda_k(X)$, $q \neq k$.

In the following we provide explanations.

Axioms 2 and 2a allow the breaking of the admissible set of points $S$ of the vector problem of Equations (1)–(4), including the subset of Pareto optimal points $S^0 \subset S$, into subsets:

One subset of points $S' \subset S$ where the criteria are equivalent; its intersection with the subset of Pareto optimal points at equivalent criteria gives $S'' = S' \cap S^0$. As will be shown further, this intersection consists of one point $X^0 \in S$, i.e., $X^0 = S'' = S' \cap S^0$, $S' \subset S$, $S^0 \subset S$.

"$K$" subsets of points where each criterion $q = \overline{1,K}$ has priority over the other criteria $k = \overline{1,K}$, $q \neq k$; this breaks, first, the set of all admissible points $S$ into subsets $S_q \subset S$, $q = \overline{1,K}$, and, second, the set of Pareto optimal points $S^0$ into subsets $S_q^o \subset S^0 \subset S$, $q = \overline{1,K}$. This yields: $S'' \cup (\cup_{q \in K} S_q^o) = S^0$, $S_q^o \subset S^0 \subset S$, $q = \overline{1,K}$.

We note that the subset of points $S_q^o$, on the one hand, is included in the area (subset of points) of priority of the criterion $q \in K$ over the other criteria, $S_q^o \subset S_q \subset S$, and, on the other hand, in the subset of Pareto optimal points, $S_q^o \subset S^0 \subset S$.

Axiom 2 and the numerical expression of the priority of a criterion (Definition 5) allow the identification of each admissible point $X \in S$ (by means of the vector $P^q(X) = \{p_k^q(X) = \lambda_q(X)/\lambda_k(X),\ k = \overline{1,K}\}$), in order to form and choose:


Thus, full identification of all points in the vector problem of Equations (12) and (13) is executed in sequence as:


This is the most important result: it allows the derivation of the principle of optimality and the construction of methods for choosing any point of the Pareto set.

**Definition 8.** (Principle of optimality 2. The solution of a vector problem with the set criterion priority).

The vector problem of Equations (12) and (13) with a set priority $p_k^q$, $k = \overline{1,K}$, of the $q$th criterion is considered solved if the point $X^0$ and the maximum level $\lambda^0$ among all relative estimates are found such that:

$$\lambda^0 = \max\_{\mathbf{X} \in \mathcal{S}} \min\_{k \in \mathcal{K}} p\_k^q \lambda\_k(\mathbf{X}), q \in \mathcal{K}. \tag{21}$$

Using the interrelation of Equations (19) and (20), we can transform the maximin problem of Equation (21) into an extreme problem of the form:

$$
\lambda^0 = \max\_{\lambda \in \mathcal{S}} \lambda,\tag{22}
$$

$$
\lambda \le p\_k^q \lambda\_k(X), k = \overline{1, K}. \tag{23}
$$

We call Equations (22) and (23) the λ-problem with a priority of the *q*th criterion.

The solution of the λ-problem is the point $X^0 = \{X^0, \lambda^0\}$. This is also the result of the solution of the vector problem of Equations (1)–(4) with the set priority of the criterion, solved on the basis of normalization of criteria and the principle of the guaranteed result.

In the optimum solution $\{X^0, \lambda^0\}$, $X^0$ is the optimum point and $\lambda^0$ is the maximum bottom level; the point $X^0$ and the level $\lambda^0$ satisfy the restrictions of Equation (15), which can be written as: $\lambda^0 \le p_k^q \lambda_k(X^0)$, $k = \overline{1,K}$.

These restrictions are the basis of an assessment of the correctness of the results of a decision in practical vector problems of optimization.

From Definitions 1 and 2, the "principles of optimality", follows the possibility of formulating the concept of the operation "opt".

**Definition 9.** (Mathematical operation "opt").

In the vector problem of Equations (1)–(4), in which "max" and "min" are part of the criteria, the mathematical operation "opt" consists of the definition of a point *X*<sup>0</sup> and the maximum λ<sup>0</sup> bottom level to which all criteria measured in relative units are lifted:

$$
\lambda^0 \le \lambda_k(X^0) = \frac{f_k(X^0) - f_k^o}{f_k^* - f_k^o},\ k = \overline{1,K}, \tag{24}
$$

i.e., all relative estimates $\lambda_k(X^0)$, $k = \overline{1,K}$, are equal to or greater than the maximum level $\lambda^0$ (therefore $\lambda^0$ is also called the guaranteed result).

**Theorem 2.** (The theorem of the most inconsistent criteria in a vector problem with the set priority).

If in the convex vector problem of mathematical programming of Equations (1)–(4) the priority $p_k^q$, $k = \overline{1,K}$, $\forall q \in K$, of the $q$th criterion over the other criteria is set, then at the optimum point $X^0 \in S$, obtained on the basis of normalization of criteria and the principle of the guaranteed result, there are always two criteria with the indexes $r \in K$, $t \in K$ for which the following strict equality holds:

$$\lambda^0 = \left. p\_k^r \lambda\_l(\mathbf{X}^0) \right| = \left. p\_k^t \lambda\_l(\mathbf{X}^0), r, t, \in \mathcal{K}\_\prime \tag{25}$$

and other criteria are defined by inequalities:

$$
\lambda^0 \le p_k^q \lambda_k(X^0),\ k = \overline{1,K},\ \forall q \in K,\ k \neq r \neq t. \tag{26}
$$

Criteria with the indexes $r \in K$, $t \in K$ for which the equality of Equation (25) holds are called the most inconsistent.

**Proof.** Similar to the proof of Theorem 2 in [25].

We note that in Equations (25) and (26), the indexes of criteria *r*, *t* ∈ *K* can coincide with the *q* ∈ *K* index.

Corollary of Theorem 1 (on the equality of the optimum level and the relative estimates in a vector problem with two criteria when one of them has priority).

In a convex vector problem of mathematical programming with two criteria, solved on the basis of normalization of criteria and the principle of the guaranteed result, the following equality always holds at the optimum point $X^0$ when the first criterion has priority over the second:

$$
\lambda^0 = \lambda_1(X^0) = p_2^1(X^0)\,\lambda_2(X^0),\ X^0 \in S,\ \text{where}\ p_2^1(X^0) = \lambda_1(X^0)/\lambda_2(X^0), \tag{27}
$$

and at a priority of the second criterion over the first:

$$
\lambda^0 = p_1^2(X^0)\,\lambda_1(X^0) = \lambda_2(X^0),\ X^0 \in S,\ \text{where}\ p_1^2(X^0) = \lambda_2(X^0)/\lambda_1(X^0).
$$

3.2.2. Mathematical Method of the Solution of a Vector Problem with Criterion Priority

(Method for the solution of problems of vector optimization with a criterion priority) [25].

Step 1. We solve the vector problem with equivalent criteria. The algorithm of the solution is presented in Section 3.1.2. As a result of the decision we obtain:


$$
\lambda^0 \le \lambda\_k(X^0), k = \overline{1, K}, X^0 \in S. \tag{28}
$$

The person making the decision analyzes the results of the solution of the vector problem with equivalent criteria. If the results satisfy the decision-maker, the process concludes; otherwise, the subsequent calculations are performed.

In addition, we calculate:

• at each point $X_k^*$, $k = \overline{1,K}$, we determine the values of all criteria $q = \overline{1,K}$: $\{f_q(X_k^*),\ q = \overline{1,K}\}$, $k = \overline{1,K}$, and the relative estimates $\lambda(X^*) = \{\lambda_q(X_k^*),\ q = \overline{1,K},\ k = \overline{1,K}\}$, $\lambda_k(X) = \frac{f_k(X) - f_k^o}{f_k^* - f_k^o}$, $\forall k \in K$:

$$F(X^*) = \begin{bmatrix} f_1(X_1^*) & \dots & f_K(X_1^*) \\ & \dots & \\ f_1(X_K^*) & \dots & f_K(X_K^*) \end{bmatrix},\quad \lambda(X^*) = \begin{bmatrix} \lambda_1(X_1^*) & \dots & \lambda_K(X_1^*) \\ & \dots & \\ \lambda_1(X_K^*) & \dots & \lambda_K(X_K^*) \end{bmatrix}. \tag{29}$$

The matrices of criteria $F(X^*)$ and of relative estimates $\lambda(X^*)$ show the values of each criterion $k = \overline{1,K}$ upon transition from one optimum point $X_k^*$, $k \in K$, to another, $X_q^*$, $q \in K$, i.e., on the boundary of the Pareto set.

• at the optimum point with equivalent criteria $X^0$ we calculate the values of the criteria and the relative estimates:

$$\{f_k(X^0),\ k = \overline{1,K};\ \lambda_k(X^0),\ k = \overline{1,K}\}, \tag{30}$$

which satisfy the inequality of Equation (28). At other points $X \in S^0$, in relative units the level $\lambda = \min_{k \in K} \lambda_k(X)$ is always less than $\lambda^0$, by construction of the λ-problem of Equations (22) and (23).

This information also serves as a basis for further study of the structure of the Pareto set.

Step 2. Choice of priority criterion of *q* ∈ **K**.

From the theory (see Theorem 1) it is known that at the optimum point $X^0$ there are always two most inconsistent criteria, $q \in K$ and $v \in K$, for which the exact equality holds in relative units: $\lambda^0 = \lambda_q(X^0) = \lambda_v(X^0)$, $q, v \in K$, $X^0 \in S$. The others satisfy the inequalities: $\lambda^0 \le \lambda_k(X^0)$, $\forall k \in K$, $k \neq q$, $k \neq v$.

As a rule, the criterion which the decision-maker would like to improve is part of this pair; such a criterion is called the priority criterion, and we designate it $q \in K$.

Step 3. The numerical limits of the change of the size of the priority criterion $q \in K$ are defined.

For priority criterion *q* ∈ *K* from the matrix of Equation (29) we define the numerical limits of the change of the size of criterion:

• in physical units: $f_q(X^0) \le f_q(X) \le f_q(X_q^*)$, $q \in K$, (31)

where $f_q(X_q^*)$ is taken from the matrix $F(X^*)$ of Equation (29), whose entries are the criteria values measured in physical units, and $f_q(X^0)$ from Equation (30); and,

$$\bullet \quad \text{in relative units: } \lambda_q(X^0) \le \lambda_q(X) \le \lambda_q(X_q^*),\ q \in K, \tag{32}$$

where $\lambda_q(X_q^*)$ is taken from the matrix $\lambda(X^*)$, whose entries are the criteria values measured in relative units (we note that $\lambda_q(X_q^*) = 1$), and $\lambda_q(X^0)$ from Equation (30).

As a rule, Equations (31) and (32) are displayed for the analysis.

Step 4. Choice of the size of priority criterion (decision-making).

The person making the decision analyzes the results of the calculations of Equations (31) and (32) and from the inequality of Equation (31) chooses the numerical size $f_q$ of the criterion $q \in K$:

$$f\_q(\mathbf{X}^0) \le f\_q \le f\_q(\mathbf{X}\_q^\*), \ q \in \mathbf{K}.\tag{33}$$

For the chosen size of the criterion $f_q$ it is necessary to determine the vector of unknowns $X^o$. For this purpose, we carry out the subsequent calculations.

Step 5. Calculation of a relative assessment.

For the chosen size of the priority criterion of *fq* the relative assessment is calculated as:

$$
\lambda\_q = \frac{f\_q - f\_q^o}{f\_q^\* - f\_q^o} \tag{34}
$$

which, upon transition from the point $X^0$ to $X_q^*$, according to Equation (32), lies within the limits: $\lambda_q(X^0) \le \lambda_q \le \lambda_q(X_q^*) = 1$.

Step 6. Calculation of the coefficient of linear approximation.

Assuming a linear nature of the change of the criterion $f_q(X)$ in Equation (31), and accordingly of the relative estimate $\lambda_q(X)$ in Equation (32), using standard methods of linear approximation we calculate the proportionality coefficient between $\lambda_q(X^o)$ and $\lambda_q$, which we call $\rho$:

$$\rho = \frac{\lambda\_q - \lambda\_q(X^o)}{\lambda\_q(X\_q^\*) - \lambda\_q(X^o)}, q \in K \tag{35}$$

Step 7. Calculation of coordinates of priority criterion with the size *fq*.

In accordance with Equation (33), the coordinates of the point $X^q$ of the priority criterion lie within the limits $X^0 \le X^q \le X_q^*$, $q \in K$. Assuming a linear nature of the change of the vector $X^q = \{x_1^q, \dots, x_N^q\}$, we determine the coordinates of the point of the priority criterion with the size $f_q$ and the relative estimate of Equation (32):

$$X^q = \{x_1^q = x_1^0 + \rho(x_q^*(1) - x_1^0),\ \dots,\ x_N^q = x_N^0 + \rho(x_q^*(N) - x_N^0)\}, \tag{36}$$

where $X^0 = \{x_1^o, \dots, x_N^o\}$, $X_q^* = \{x_q^*(1), \dots, x_q^*(N)\}$.

Step 8. Calculation of the main indicators of the point $X^q$. For the obtained point $X^q$, we calculate:


Any point of the Pareto set $X_t^o = \{\lambda_t^o, X_t^o\} \in S^o$ can be calculated similarly.

Analysis of results. The calculated size of the criterion $f_q(X_t^o)$, $q \in K$, is usually not equal to the set $f_q$. The error of the choice, $\Delta f_q = |f_q(X_t^o) - f_q|$, is determined by the error of linear approximation.
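Steps 2–8 above can be sketched in Python for the two-criterion numerical example of Section 3.1, assuming $f_2$ is chosen as the priority criterion and a hypothetical decision-maker choice $f_q = 170$ (both assumptions are ours, for illustration; `scipy.optimize.linprog` stands in for the MATLAB `linprog`):

```python
import numpy as np
from scipy.optimize import linprog

# Data of the numerical example (criteria to be maximized, common constraints)
C = np.array([[4.0, 5.0, 9.0, 11.0],
              [2.0, 10.0, 6.0, 20.0]])
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [7.0, 5.0, 3.0, 2.0],
              [3.0, 5.0, 10.0, 15.0]])
b = np.array([15.0, 120.0, 100.0])

# Step 1: optimum of each criterion, then X0 from the lambda-problem
f_best = np.array([-linprog(-C[k], A_ub=A, b_ub=b).fun for k in range(2)])
A0 = np.vstack([np.hstack([[1.0], -C[k] / f_best[k]]) for k in range(2)]
               + [np.hstack([np.zeros((3, 1)), A])])
b0 = np.hstack([0.0, 0.0, b])          # f_k^0 = 0 in this example
X0 = linprog([-1.0, 0.0, 0.0, 0.0, 0.0], A_ub=A0, b_ub=b0).x[1:]

# Step 2: f2 is taken as the priority criterion; its own optimum point
q = 1
Xq_star = linprog(-C[q], A_ub=A, b_ub=b).x

# Steps 4-5: hypothetical chosen size f_q and its relative estimate (Eq. (34))
f_q = 170.0
lam_q = f_q / f_best[q]
lam_q_X0 = C[q] @ X0 / f_best[q]

# Step 6: linear-approximation coefficient rho (Eq. (35)); lam_q(Xq*) = 1
rho = (lam_q - lam_q_X0) / (1.0 - lam_q_X0)

# Step 7: coordinates of the priority-criterion point (Eq. (36))
Xq = X0 + rho * (Xq_star - X0)
print(C[q] @ Xq)   # ~170: f2 is linear, so the approximation error is negligible
```

Because both criteria are linear here, the linear approximation is exact up to solver rounding; for nonlinear criteria, the error $\Delta f_q$ described above would generally be nonzero.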

#### **4. Research, Analysis, and Formulation of the Problem of Decision-Making with Uncertain Data**

*4.1. Investigation of the Model of the "Object for Making an Optimal Decision" under Certainty and Uncertainty*

#### 4.1.1. Characteristics of Certainty and Uncertainty

Building a mathematical model of an "object or system for making an optimal decision" (Equations (1)–(4)) is possible both under conditions of certainty and under conditions of uncertainty, which often occur. Conditions of certainty are characterized by the fact that the functional dependence of each criterion $f_k$, $k = \overline{1,K}$ (Equation (1)) and of the constraints $G$ (Equation (3)) on the system parameters $x_j$, $j = \overline{1,N}$, is known [6,12,22].

To build a mathematical model of a system under certainty, studies of the physical processes occurring in the system are conducted. In the creation of a mathematical model of such processes, fundamental laws of physics are used, for example, models of magnetic and temperature profiles and the laws of conservation of energy and momentum. A complete list of all functional characteristics of the technical system, and of the parameters on which these characteristics depend, is formed, and their verbal description is given. The technical and informational interrelationships of all components of the technical system are established, i.e., the structure is constructed. At this stage, the problem of the choice of the best structure of a technical system (a problem of structural optimization) is solved [4,23].

As a result of the conducted research, the functional dependence of the set of characteristics $F_1(X)$, $F_2(X)$ and of the restrictions $G(X)$ on the parameters $X$ has to be constructed.

Conditions of uncertainty are characterized by the fact that there is insufficient information on the functional dependence of each characteristic and of the restrictions on the parameters [8,9,12,20].

At the same time, there are two problems associated with decision making.

The first problem is characterized by the fact that only data for some of the indicators are known (such a task is presented in the following section; see Equation (37)). The second problem is that data on some set of parameters, together with the corresponding data on some set of characteristics (criteria) of the problem, are known (Equation (38)).

Both problems arise when carrying out pilot studies based on the "input-output" principle. On the basis of the conducted pilot studies, the problem of making an acceptable decision arises. We present the analysis of these problems, and decision making on their basis, in the following sections.

Thus, under conditions of certainty the function $f_k(X)$, $k \in K$, is known; for the infinite set of parameters there is a corresponding infinite set of estimates of the function (criterion). Conversely, under conditions of uncertainty, only a finite set of parameters and the corresponding set of function (criterion) estimates are known; the smaller the set of parameters, the higher the uncertainty.

#### 4.1.2. Investigation of Condition of Uncertainty

Conditions of uncertainty are considered in two aspects. The first relates to a lack of sufficient information. This is the uncertainty associated with the variety of characteristics (criteria) of the object under study.

This aspect is defined by the fact that there is insufficient information on the functional dependence of the characteristic $f$ and of the restrictions $g$ on the parameters $X$ of the studied object. In this case, the input data characterizing the object are presented as:

(a) random data,

(b) fuzzy data,

(c) tabular (experimental) data.
For options (a) and (b), the input data have to be transformed into option (c) and presented in table form: column 1, the parameter value; column 2, the characteristic value. The methodology of the transformation of random and fuzzy data into tabular form is presented in the Special Issue "Fuzzy Decision Making and Soft Computing Applications". Tabular (experimental) data are transformed, using regression analysis, into the function $f(X)$, i.e., into terms of certainty.

In what follows, only variant (c) is considered in this work, together with its transformation by regression methods into conditions of certainty, i.e., into the function $f(X)$.
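A minimal sketch of variant (c): the tabular data below are hypothetical, and quadratic least squares via `numpy.polyfit` stands in for whatever regression model fits the actual experiment:

```python
import numpy as np

# Hypothetical "input-output" experiment: parameter x, measured characteristic f;
# the data are generated near f(x) = 2x^2 + 1 with small measurement noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([1.1, 2.9, 9.2, 18.8, 33.1, 51.0])

# Least-squares quadratic regression turns the table into a function f(X),
# i.e., transforms the problem into conditions of certainty
coef = np.polyfit(x, f, deg=2)
f_hat = np.poly1d(coef)
print(coef)        # close to [2, 0, 1]
print(f_hat(2.5))  # close to the "true" value 2*2.5**2 + 1 = 13.5
```

Once each characteristic has been recovered as a function in this way, the decision-making problem can be stated as the vector problem of Section 3.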

The second aspect of decision-making uncertainty is related to the fact that an object is characterized by many characteristics: $f_1(X), \dots, f_K(X)$. The set of characteristics $K$ is divided into two subsets, $K_1$ and $K_2$. The numerical values of the characteristics of the subset $K_1$ are desired to be as high as possible (maximum), and those of the subset $K_2$ as low as possible (minimum). Below it will be shown that the decision-making problem with a set of characteristics is reduced to a vector problem of mathematical programming, the solution of which is presented in Section 3.

#### *4.2. Conceptual Problem Definition of Decision Making under the Conditions of Uncertainty*

The conceptual definition of the decision-making problem was initially presented in general form in the work of R. L. Keeney and H. Raiffa [11], according to which we denote by $a_i$, $i = \overline{1,M}$, the admissible decision-making alternatives, and by $A = (a_1\ a_2\ \dots\ a_M)$ the vector of the set of admissible alternatives.

We match each alternative $a \in A$ to $K$ numerical indices (criteria) $f_1(a), \dots, f_K(a)$ that characterize the system. We can assume that this set of indices maps each alternative onto a point of the $K$-dimensional space of outcomes (consequences) of the decisions made, $F(a) = (f_1(a)\ f_2(a)\ \dots\ f_K(a))^T$. We use the same symbol $f_k(a)$ both for the criterion and for the function that performs the estimation with respect to this criterion. Note that we cannot directly compare the variables $f_v(a)$ and $f_k(a)$, $v \neq k$, at any point $F(a)$ of the $K$-dimensional space of consequences, since such a comparison would generally be meaningless because these criteria are, as a rule, measured in different units. Using these data, we can state the decision-making problem.

The decision maker is to choose an alternative *a* ∈ *A* to obtain the most suitable result, i.e., *F*(*a*) → min.

This definition means that the required estimating function should reduce the vector *F*(*a*) to a scalar preference or "value" criterion. In other words, it is equivalent to setting a scalar function *V* given the space of consequences and possessing the following property:

$$V(F(a)) \ge V(F(a')) \iff F(a) \succeq F(a'),$$

where the symbol $\succeq$ means "no less preferable than" [24]. We call the function $V(F(a))$ the value function; in other publications it may be called an order value function or a preference function. Thus, the decision maker is to choose $a \in A$ such that $V(F(a))$ is maximized. The value function allows an indirect comparison of the importance of particular values of the various criteria of the system. Thus, the matrix $F(a)$ of admissible outcomes of the alternatives takes the form:

$$F = \begin{bmatrix} a_1 & f_1^1 & \dots & f_1^K \\ \dots & & & \\ a_M & f_M^1 & \dots & f_M^K \end{bmatrix} \tag{37}$$

where $f_i^k = f_k(a_i)$, and all alternatives in it are represented by the vector of indices $F(a)$. For the sake of definiteness and without loss of generality, we assume that the first criterion (any criterion can be made the first) is arranged in increasing (decreasing) order, with the alternatives re-numbered $i = \overline{1, M}$.
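As a minimal illustration of how a value function orders the alternatives of the matrix (37), consider the following Python sketch. The numbers and the additive weighted form of $V$ are invented assumptions for illustration only, not taken from the paper; any monotone $V$ induces the same kind of ordering.

```python
import numpy as np

# Outcome vectors F(a) for M = 3 alternatives and K = 2 criteria
# (illustrative numbers; both criteria are to be maximized here).
F = np.array([
    [3.0, 40.0],   # a1
    [5.0, 10.0],   # a2
    [4.0, 30.0],   # a3
])

# One possible value function: an additive form with judgmental weights.
weights = np.array([0.7, 0.3])

def value(outcome, w=weights):
    """Scalar 'value' criterion V(F(a)) for one outcome vector."""
    return float(outcome @ w)

scores = [value(row) for row in F]
best = int(np.argmax(scores))  # index of the most preferable alternative
print(best, scores)
```

The difficulty discussed in the text remains visible here: the weights implicitly compare criteria measured in different units, which is exactly the judgmental step the value function is meant to formalize.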

Problem 1 (Equation (37)) implies that the decision maker is to choose the alternative $a^0 \in A$ that yields the "most suitable (optimal) result" [24].

For an engineering system, we can represent each alternative $a_i$ by the $N$-dimensional vector $X_i = \{x_{ij},\ j = \overline{1, N}\}$, $i = \overline{1, M}$, of its parameters, and its outcomes by the $K$-dimensional vector criterion $\{f_1(X_i), \dots, f_K(X_i)\}$, $i = \overline{1, M}$. Taking this into account, the matrix of outcomes (Equation (37)) takes the form:

$$I = \begin{bmatrix} X_1 & f_1(X_1) & \dots & f_K(X_1) \\ \dots & & \\ X_M & f_1(X_M) & \dots & f_K(X_M) \end{bmatrix} \tag{38}$$

Problem 2 (Equation (38)) for the decision maker consists of choosing the set of design data $X^0 = \{x_{ij},\ j = \overline{1, N}\}$, $i = \overline{1, M}$, that yields the optimal result [24].

#### *4.3. The Analysis of Modern Decision-Making Methods Applied to Experimental Data*

At present, the problems of Equations (37) and (38) are solved by a number of "simple" methods based on special criteria, such as the Wald, Savage, Hurwitz, and Bayes–Laplace criteria, which provide the basis for decision making.

The Wald criterion of maximizing the minimal component helps make the optimal decision that ensures the maximal gain among the minimal ones, $\max_{k=\overline{1,K}} \min_{i=\overline{1,M}} f_i^k$.

The Savage minimal-risk criterion chooses the optimal strategy so that the value of the risk $r_i^k$ is minimal among the maximal values of the risks over the columns, $\min_{i=\overline{1,M}} \max_{k=\overline{1,K}} r_i^k$. The risk $r_i^k$ is the difference between the decision that yields the maximal profit, $\max_{i=\overline{1,M}} f_i^k$, $k = \overline{1,K}$, and the current value $f_i^k$: $r_i^k = \left(\max_{i=\overline{1,M}} f_i^k\right) - f_i^k$, with their set forming the matrix of risks $R = \{r_i^k,\ k = \overline{1,K},\ i = \overline{1,M}\}$.

The Hurwitz criterion helps choose a strategy that lies somewhere between the absolutely pessimistic one and the optimistic one (i.e., the one involving the greatest risk):

$$\max\_{k=\overline{1,K}} \left( \alpha \min\_{i=\overline{1,M}} f\_i^k + (1-\alpha) \max\_{i=\overline{1,M}} f\_i^k \right).$$

where α is the pessimism coefficient, chosen in the interval 0 ≤ α ≤ 1.

The Bayes–Laplace criterion takes into account every possible consequence of all decision options, given their probabilities $p_k$: $\max_{i=\overline{1,M}} \sum_{k=1}^{K} f_i^k p_k$.

All these and other methods are widely described in publications on decision making [11]. All have certain drawbacks. For instance, if we analyze the Wald maximin criterion, we can see that by the problem's hypothesis all criteria are measured in different units. Hence, the first step, which is to choose the minimal component $f_k^{\min} = \min_{i=\overline{1,M}} f_i^k$, is quite reasonable. However, all $f_k^{\min}$, $k = \overline{1,K}$, are measured in different units, and therefore the second step, which is to maximize the minimal component, $\max_{k=\overline{1,K}} f_k^{\min}$, is pointless. Although a criteria measurement scale brings us slightly closer to a solution, it fails to solve the problem, since the choice of the criteria scales is judgmental.
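The four "simple" criteria above can be sketched in a few lines of Python. This is an illustration with an invented payoff matrix, following the index conventions of the text (i numbers alternatives, k numbers criteria); it is not code from the paper.

```python
import numpy as np

# Payoff matrix f[i, k]: M = 3 alternatives (rows), K = 2 criteria (columns).
# Illustrative numbers only.
f = np.array([
    [3.0, 7.0],
    [5.0, 2.0],
    [4.0, 6.0],
])

# Wald: max_k min_i f_i^k (index conventions as in the text above).
wald = f.min(axis=0).max()

# Savage: risk r_i^k = (max_i f_i^k) - f_i^k; choose min_i max_k r_i^k.
r = f.max(axis=0) - f
savage_alt = int(r.max(axis=1).argmin())

# Hurwitz: max_k (alpha * min_i f_i^k + (1 - alpha) * max_i f_i^k).
alpha = 0.5
hurwitz = (alpha * f.min(axis=0) + (1 - alpha) * f.max(axis=0)).max()

# Bayes-Laplace: max_i sum_k f_i^k p_k, with probabilities p_k (uniform here).
p = np.full(f.shape[1], 1.0 / f.shape[1])
bl_alt = int((f @ p).argmax())

print(wald, savage_alt, hurwitz, bl_alt)
```

Note that the drawback discussed in the text appears immediately: every aggregation above adds or compares entries of `f` across columns, i.e., across criteria measured in different units.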

We believe that to solve the problem of Equations (37) and (38), we need to form a measure that would allow the evaluation of any decision to be made, including the optimal one. In other words, we need to construct an axiom that shows, based on the set of *K* criteria, what makes one alternative better than the other. In turn, the axiom can help derive a principle that determines whether the chosen alternative is optimal. The optimality principle should become the basis for the constructive methods of choosing optimal decisions. We propose such an approach for the vector mathematical programming problem that is essentially close to the decision-making problem of Equations (37) and (38).

#### *4.4. Transforming the Decision-Making Problem into a Vector Problem*

We compare the decision-making problems (DMP) of Equations (37) and (38) with the vector problem of mathematical programming (VPMP). Table 2 shows the comparison.

**Table 2.** Comparison of the vector problem of mathematical programming (VPMP) with the decision-making problems (DMP).


The first, second, and third rows of Table 2 show that all three problems have the common objective of "making the best (optimal) decision". Both types of decision-making problems (row 4) involve some uncertainty: the functional dependences of the criteria and the restrictions on the problem's parameters are not known. At present, many mathematical methods of regression analysis are implemented in software (such as MATLAB) that allow a set of initial data (as in Equations (37) and (38)) to be used to construct the functional dependences $f_k(X)$, $k = \overline{1,K}$. For this reason, we use regression methods, including multiple regression, to construct the criteria and restrictions in decision-making problems of both types (row 5) [26]. Combining the criteria and restrictions, we represent decision-making problems of both types as a vector problem of mathematical programming (row 6).

We now perform these transformations. We use methods of regression analysis for the problem of Equation (37) and multiple regression for the problem of Equation (38) to transform each *k*th column of the matrix Ψ into the criterion function $f_k(X)$. We combine them in the vector function $F(X)$: $\max F_1(X) = \{f_k(X),\ k = \overline{1,K}\}$ in Equation (1), and $\min F_2(X) = \{f_k(X),\ k = \overline{1,K}\}$ in Equation (2). The inequalities:

$$f\_k^{\min} \le f\_k(X) \le f\_k^{\max}, k = \overline{1,K}$$

where $f_k^{\min} = \min_{i=\overline{1,M}} f_k(X_i)$ and $f_k^{\max} = \max_{i=\overline{1,M}} f_k(X_i)$ are the minimal and maximal values of each function, together with the parameters bounded by their minimal and maximal values, serve as functional restrictions (Equation (3)). The result is a VPMP (Equations (1)–(4)), which is solved for tantamount criteria with the same methods, based on normalizing criteria and the maximin principle, as for the ES model under complete certainty.
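The normalization underlying this construction can be illustrated as follows. This is a Python sketch with invented sample data: for a maximized criterion the relative estimate is $\lambda_k = (f_k - f_k^{\min})/(f_k^{\max} - f_k^{\min})$, so the units cancel, and the maximin principle then selects the alternative whose worst relative estimate is largest.

```python
import numpy as np

# Criterion values f_k(X_i) for M = 4 alternatives and K = 3 criteria,
# all to be maximized (illustrative data in different units).
F = np.array([
    [10.0, 200.0, 1.5],
    [40.0, 150.0, 3.0],
    [25.0, 400.0, 2.0],
    [30.0, 300.0, 2.5],
])

# Normalize each criterion to a relative estimate in [0, 1]:
# lambda_k = (f_k - f_k_min) / (f_k_max - f_k_min), so units cancel.
f_min = F.min(axis=0)
f_max = F.max(axis=0)
lam = (F - f_min) / (f_max - f_min)

# Maximin principle over the normalized criteria: the chosen alternative
# maximizes its worst relative estimate.
worst = lam.min(axis=1)
best = int(worst.argmax())
print(best, worst.round(4))
```

Unlike the Wald criterion applied to raw values, the maximin step here compares dimensionless quantities, which is the point of the normalization.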

#### **5. Statement and Optimal Decision Making with Experimental Data in Problems with One Parameter**

We use a particular example to illustrate the decision-making problem of the first type (Equation (37)) and the choice of the optimal decision. We also show that the proposed method is independent of the form of the sought extremum of the partial criteria.

#### *5.1. Problem Definition of Decision Making of the First Type*

The problem definition is carried out by the designer of the system based on experimental data.

It is given that the system is defined by one parameter: $X = \{x\}$ is the vector of controlled variables. The experimental data for the decision-making task are provided in Table 3.


**Table 3.** Experimental data (matrix I).

The system comprises one parameter {x} and four characteristics (criteria):

$$f\_1(\mathbf{x}) \to \max, f\_2(\mathbf{x}) \to \max, f\_3(\mathbf{x}) \to \min, f\_4(\mathbf{x}) \to \max,$$

used to make a choice. Taken together, they constitute a decision-making problem of the first type. The requirement is to make the best (optimal) decision given the experimental data available.

#### *5.2. The Solution of the Problem*

We find the optimal solution in two stages.

*Stage 1.* We transform the decision-making problem into a VPMP.

Step 1. We prepare the initial data of Table 3 in the form of the matrix *I* (Equation (38)). These points are shown in Figure 1, using the MATLAB operators:

xlabel('X'); ylabel('Y'); hold on; plot(I(:,1),I(:,2)/10,'k.');

plot(I(:,1),I(:,3),'go'); plot(I(:,1),I(:,4),'bp'); plot(I(:,1),I(:,5)\*10,'r\*').

For the sake of visualization, the order of the first criterion is decreased by one while the order of the fourth criterion is increased by one.

**Figure 1.** Polynomial approximation of four criteria.

Step 2. Using the method of least squares [12], we calculate the coefficients of the approximating polynomial of the second degree:

$$\min_{A} f(A, X) = \sum_{i=1}^{M} \left( y_i - (a_0 + a_1 \mathbf{x}_{1i} + a_2 \mathbf{x}_{1i}^2) \right)^2. \tag{39}$$

The approximation is carried out in the MATLAB system using the function polyfit(X,Y,N), where X is the vector of tabular values (nodes), Y the preset assessment values at the nodes, and N the degree of the polynomial.

The limits of variation of the parameter *x* along the lower and upper scales in Figure 1 are set; in the MATLAB system this is written as x = −600.:100.:19000.

We calculate the first criterion by means of the function c1 = polyfit(I(:,1),I(:,2)/10,2). The result is $c_1(1) = -3.1937 \times 10^{-6}$, $c_1(2) = 0.1947$, $c_1(3) = 365.1$, which corresponds to the polynomial of the second degree:

$$f\_1(\mathbf{x}) = -3.1937 \times 10^{-6} \text{ x}^2 + 0.1947 \mathbf{x} + 365.1. \tag{40}$$

We calculate the values of the polynomial with y5 = polyval(c1,x) and show it on the graph using plot(x,y5,'k-'), hold on.

Similarly, the rest of the criteria are:

$$f\_2(\mathbf{x}) = 9.467 \times 10^{-10} \text{ x}^3 - 2.7968 \times 10^{-5} \text{ x}^2 + 0.1090 \text{ x} + 1949.2,\tag{41}$$

$$f\_3(\mathbf{x}) = 1.0174 \times 10^{-5} \,\mathrm{x}^2 - 0.1707 \mathbf{x} + 1737.4,\tag{42}$$

$$f\_4(\mathbf{x}) = 1.6458 \times 10^{-7} \,\mathrm{x}^2 - 0.01309 \mathbf{x} + 247.83. \tag{43}$$

All the resulting points and functions are also shown in Figure 1.
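The polyfit/polyval workflow of this step has a direct NumPy analogue. The sketch below uses synthetic data standing in for the columns of the matrix *I* (the nodes and "true" coefficients are assumptions for illustration):

```python
import numpy as np

# Synthetic example: nodes x and values y of a known quadratic stand in
# for the experimental columns of the matrix I (hypothetical data).
x = np.linspace(630.0, 18810.0, 10)
true_coeffs = [-3.0e-6, 0.2, 360.0]          # a2, a1, a0 (assumed)
y = np.polyval(true_coeffs, x)

# Least-squares fit of a second-degree polynomial, the analogue of
# MATLAB's c1 = polyfit(I(:,1), I(:,2), 2).
c = np.polyfit(x, y, 2)

# Evaluate the fitted polynomial on a dense grid, the analogue of polyval.
xx = np.arange(-600.0, 19000.0, 100.0)
yy = np.polyval(c, xx)
print(c)
```

Because the sample values here are generated exactly from a quadratic, the fit recovers the coefficients to machine precision; with real experimental data the fit is a least-squares approximation as in Equation (39).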

Step 3. We form and solve the VPMP. Using the results of the previous stage, we represent the decision-making problem as the VPMP of Equations (1)–(4) with the vector criterion $F(x) = (-f_1(x)\ {-f_2(x)}\ f_3(x)\ {-f_4(x)})^T$ and the restrictions $630 \le x \le 18{,}810$:

$$Opt\,F(X) = \langle \mathbf{max}\,F\_1(X) = \langle \mathbf{max}\,f\_1(X), \mathbf{max}\,f\_2(X), \mathbf{max}\,f\_4(X) \rangle,\tag{44}$$

$$\min F\_2(X) = \langle \min f\_3(X) \rangle \rangle,\tag{45}$$

$$\text{at restrictions} \,\mathbf{x}\_j^{\text{min}} \le \mathbf{x}\_j \le \mathbf{x}\_j^{\text{max}}, j = \,\overline{1, N}. \tag{46}$$

*Stage 2.* We solve the VPMP of Equations (44)–(46) similarly to that shown in Section 2.

Step 1. We solve the problem of Equations (44)–(46) for each criterion separately. Since each is a unimodal function, we use the function [x,f] = fminbnd(c,a,b) to find its minimum or maximum on the segment (Equation (46)). Here, c, a, b are the input parameters, c is the given function, a and b are the beginning and the end of the interval, respectively, and *x* and f are the output parameters (the optimum point and the value of the objective function at the optimum, respectively). It takes the form:

> [x1max,f1max] = fminbnd('-(-3.1937e-5\*x.^2+1.9467\*x+3651.1)',I(1,1),I(10,1))

for the first criterion. We thus obtain the optimum point with respect to the first criterion, $x_1^{\max} = 18{,}810$, and the value of the criterion at this point, $f_1^* = f_1(x_1^{\max}) = -28{,}969$. Similarly, for the other criteria we have $x_2^{\max} = 2192.8$, $f_2^* = f_2(x_2^{\max}) = -2063.7$; $x_4^{\max} = 630.0$, $f_4^* = f_4(x_4^{\max}) = -239.65$; and $x_3^{\min} = 8389.0$, $f_3^* = f_3(x_3^{\min}) = 1021.4$.
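The role of fminbnd can be reproduced with a small golden-section search (a self-contained Python sketch; MATLAB's fminbnd additionally uses parabolic interpolation, which is omitted here). It maximizes the scaled first criterion of Equation (40) on the segment by minimizing its negative:

```python
# A tiny golden-section analogue of MATLAB's fminbnd (illustrative only).
def fminbnd(fun, a, b, tol=1e-8):
    """Minimize a unimodal function on [a, b] by golden-section search."""
    inv_phi = (5 ** 0.5 - 1) / 2
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if fun(c) < fun(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    x = 0.5 * (a + b)
    return x, fun(x)

# First criterion of Equation (40), scaled by 10 as in the text's call.
f1 = lambda x: -3.1937e-5 * x**2 + 1.9467 * x + 3651.1

# Maximize f1 on [630, 18810] by minimizing its negative.
x1max, f1max = fminbnd(lambda x: -f1(x), 630.0, 18810.0)
print(x1max, f1max)
```

Since the vertex of this parabola lies to the right of the segment, the search converges to the right endpoint, $x \approx 18{,}810$, with the minimized negative value $\approx -28{,}969$, matching the text.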

Step 2. We find the worst value of each criterion. For the first criterion we obtain the point $x_1^{\min} = 630$ and the value $f_1^o = f_1(x_1^{\min}) = 4864.9$ (the worst value of the first criterion). We use the operator plot(x1min,f1min/10,'kx') to represent this and the other points in Figure 2. Similarly, for the other criteria we have $x_2^{\min} = 17{,}502$, $f_2^o = f_2(x_2^{\min}) = 365.22$; $x_4^{\min} = 18{,}810$, $f_4^o = f_4(x_4^{\min}) = 59.838$; and $x_3^{\max} = 18{,}810$, $f_3^o = f_3(x_3^{\max}) = -2126.3$.

Step 3. We analyze the set of Pareto optimal points. At the optimum points $X^* = \{X_1^*, X_2^*, X_3^*, X_4^*\}$, the values of the criterion functions $F(X^*) = \{f_q(X_k^*),\ k = \overline{1,K},\ q = \overline{1,K}\}$ are determined. We calculate the vector $D = (d_1\ d_2\ d_3\ d_4)^T$ of deviations of each criterion on the admissible set $S$, $d_k = f_k^* - f_k^o$, $k = \overline{1,K}$, and the matrix of relative estimates $\lambda(X^*) = \{\lambda_q(X_k^*),\ k = \overline{1,K},\ q = \overline{1,K}\}$, where $\lambda_k(X) = (f_k(X) - f_k^o)/d_k$.

**Figure 2.** Approximations of four normalized criteria.

In the problem of Equations (44)–(46), criteria in the normalized form λ*k*(*Xo*), *k* = 1,*K* can be represented as shown in Table 4. We calculate the coefficients of the approximating polynomial for the normalized criteria to obtain:

$$\begin{aligned}
\lambda_1(x) &= -1.325 \times 10^{-9}\, x^2 + 8.0762 \times 10^{-5}\, x - 0.0504, \\
\lambda_2(x) &= 5.5735 \times 10^{-13}\, x^3 - 1.6467 \times 10^{-8}\, x^2 + 6.4179 \times 10^{-5}\, x + 0.9325, \\
\lambda_3(x) &= -9.2080 \times 10^{-9}\, x^2 + 1.5451 \times 10^{-4}\, x + 0.3519, \\
\lambda_4(x) &= 9.1530 \times 10^{-10}\, x^2 - 7.2811 \times 10^{-5}\, x + 1.0455.
\end{aligned}$$


**Table 4.** Normalized criteria.

Figure 2 shows the optimum points *X*<sup>0</sup> and normalized criteria.

The optimal point *X*<sup>0</sup> in Figure 2 can be chosen manually.

We solve the λ-problem to find the exact value of *X*0.

Step 4. We construct the λ-problem. Using the obtained functions and relative estimates λ1(*x*), λ2(*x*), λ3(*x*), λ4(*x*), we construct the λ-problem:

$$
\lambda^0 = \max \lambda,\tag{47}
$$

$$\begin{aligned} \lambda - (-3.1937 \times 10^{-5} \text{x}^2 + 1.9467 \text{x} + 3651.1 - f\_1^o)/d\_1 &\le 0, \\ \lambda - (9.467 \times 10^{-10} \text{x}^3 - 2.7968 \times 10^{-5} \text{x}^2 + 0.1090 \text{x} + 1949.2 - f\_2^o)/d\_2 &\le 0, \\ \lambda - (1.0174 \times 10^{-5} \text{x}^2 - 0.1707 \text{x} + 1737.4 + f\_3^o)/d\_3 &\le 0, \\ \lambda - (1.6458 \times 10^{-7} \text{x}^2 - 0.01309 \text{x} + 247.83 - f\_4^o)/d\_4 &\le 0, \end{aligned} \tag{48}$$

$$630 \le \mathbf{x} \le 18810 \tag{49}$$

Step 5. We solve the λ-problem of Equations (47)–(49). We use standard methods, in particular the MATLAB function fmincon ( ... ). From the solution we obtain:


It follows from this result that the first and fourth criteria are equal: λ1(*X*0) = λ4(*X*0) = λ<sup>0</sup> = 0.5163. According to Theorem 1, the first and fourth criteria are the most contradictory. The other criteria are greater than or equal to the maximum relative estimate λ<sup>o</sup>, which is the guaranteed result in relative units.
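Numerically, the λ-problem of Equations (47)–(49) amounts to maximizing the smallest of the normalized criteria over x. A dense grid search in Python over the fitted λ_k(x) polynomials (a sketch, with the constant term of λ1 read as −0.0504; the text solves the same problem with fmincon) reproduces the guaranteed estimate λ0 ≈ 0.5163:

```python
import numpy as np

# Normalized criteria lambda_k(x) as fitted in the text (lambda_1's
# constant term taken as -0.0504).
l1 = lambda x: -1.325e-9 * x**2 + 8.0762e-5 * x - 0.0504
l2 = lambda x: 5.5735e-13 * x**3 - 1.6467e-8 * x**2 + 6.4179e-5 * x + 0.9325
l3 = lambda x: -9.2080e-9 * x**2 + 1.5451e-4 * x + 0.3519
l4 = lambda x: 9.1530e-10 * x**2 - 7.2811e-5 * x + 1.0455

# lambda-problem: maximize lambda subject to lambda <= lambda_k(x),
# i.e. maximize min_k lambda_k(x) over 630 <= x <= 18810.
x = np.linspace(630.0, 18810.0, 200001)
lam_min = np.minimum.reduce([l1(x), l2(x), l3(x), l4(x)])
i = lam_min.argmax()
x0, lam0 = x[i], lam_min[i]
print(round(x0, 1), round(lam0, 4))
```

At the optimum the active constraints are λ1 and λ4 (λ1 increases and λ4 decreases over the segment, so their crossing is the maximin point), in agreement with the conclusion that the first and fourth criteria are the most contradictory.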

#### **6. Statement and Optimal Decision Making with Experimental Data in Problems with Two Parameters**

We use a particular example to illustrate the decision-making problem of the second type (Equation (38)) and the choice of the optimal decision. The decision-making problem of the second type with two parameters is solved in three stages: the problem statement, i.e., the formation of the initial data; the transformation of the experimental data into a vector problem of mathematical programming (VPMP); and the solution of the VPMP, i.e., making the best decision.

#### *6.1. Problem Definition of Decision Making of the Second Type with Two Parameters*

The problem setting is performed by the system designer based on experimental data.

It is given that we have an engineering system whose functioning is governed by a vector of two controlled variables (parameters) $X = (x_1, x_2)$ taking the values $x_1, x_2 \in \{0, 2.5, 5, 7.5, 10\}$.

Decision-making criteria are represented by five functions *f* 1(*X*), ... , *f* 5(*X*). For the first two, it is desirable to obtain values as high as possible (maximum) while for the other three it is desirable to obtain values as small as possible (minimum). Experimental data are given in Table 5.


**Table 5.** Experimental data (matrix I).

The requirement is to make the best (optimal) decision given the experimental data available.

#### *6.2. Transformation of Experimental Data into a Vector Problem of Mathematical Programming*

We construct a regression model and, based on it, the decision-making model of Equation (38), and solve it. This is done in two stages, each consisting of a number of steps.

*Stage 1.* We represent the decision-making problem as a VPMP.

Step 0. We prepare the initial data in MATLAB, forming the matrix *I* (Table 5).

Step 1. We approximate the initial data of the function (the third column) by piecewise (linear) interpolation using: zz = interp2(X,Y,Z,xx,yy,'linear').

We use the function surf(xx,yy,zz), hold on to represent the piecewise polynomial function *f* 1(*X*) in Figure 3. Similarly, we perform the same approximation of the four other functions that we also represent (in natural units) in Figure 3 together with their minimal and maximal values. Although it appears to be difficult to choose the optimal solution visually using Figure 3, it is easier than using the initial data of the matrix *I* to do so.

**Figure 3.** Approximation of five criteria of *f* 1(*X*), ... , *f* 5(*X*).

Step 2. Using the method of least squares [12], we calculate the coefficients of the approximating polynomial of the second degree:

$$\min_{A} f(A, X) = \sum_{i=1}^{M} \left( y_i - \left( a_0 + a_1 \mathbf{x}_{1i} + a_2 \mathbf{x}_{1i}^2 + a_3 \mathbf{x}_{2i} + a_4 \mathbf{x}_{2i}^2 + a_5 \mathbf{x}_{1i} \mathbf{x}_{2i} \right) \right)^2. \tag{50}$$

Based on data from columns 3–7 of the matrix *I*, we calculate the coefficients of the best approximating polynomial in the sense of minimum quadratic deviation at the nodes. This yields the polynomials of the second degree with two variables (four factors):

$$\begin{array}{l}f\_{1}(\mathbf{x}) = -0.4\mathbf{x}\_{1}^{2} - 8\mathbf{x}\_{1} - 0.4\mathbf{x}\_{2}^{2} - 8\mathbf{x}\_{2} - 80, \\ f\_{2}(\mathbf{x}) = -0.3\mathbf{x}\_{1}^{2} - 6\mathbf{x}\_{1} - 0.3\mathbf{x}\_{2}^{2} + 12\mathbf{x}\_{2} - 150, \\ f\_{3}(\mathbf{x}) = 0.5\mathbf{x}\_{1}^{2} - 20\mathbf{x}\_{1} + 0.5\mathbf{x}\_{2}^{2} - 8\mathbf{x}\_{2} + 232, \\ f\_{4}(\mathbf{x}) = 0.6\mathbf{x}\_{1}^{2} - 9.6\mathbf{x}\_{1} + 0.6\mathbf{x}\_{2}^{2} - 24\mathbf{x}\_{2} + 278.4, \\ f\_{5}(\mathbf{x}) = \mathbf{x}\_{1}^{2} - 40\mathbf{x}\_{1} + \mathbf{x}\_{2}^{2} + 20\mathbf{x}\_{2} + 500, \end{array} \tag{51}$$

$$\text{the restrictions } 0 \le \mathbf{x}\_1 \le 10 \text{, } 0 \le \mathbf{x}\_2 \le 10. \tag{52}$$
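The multiple regression of Equation (50) is an ordinary linear least-squares fit in the six monomial features. A Python sketch (generating exact data from the first polynomial of Equation (51) on the experimental grid, so the fit should recover the coefficients) looks like this:

```python
import numpy as np

# Grid of the two parameters, as in the experiment: x1, x2 in {0, 2.5, 5, 7.5, 10}.
vals = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
x1, x2 = np.meshgrid(vals, vals)
x1, x2 = x1.ravel(), x2.ravel()

# Values of the first criterion of Equation (51) at the grid nodes.
y = -0.4 * x1**2 - 8 * x1 - 0.4 * x2**2 - 8 * x2 - 80

# Design matrix of the monomials of Equation (50):
# [1, x1, x1^2, x2, x2^2, x1*x2].
Phi = np.column_stack([np.ones_like(x1), x1, x1**2, x2, x2**2, x1 * x2])

# Least-squares estimate of the coefficients a0..a5.
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(a.round(6))
```

With real experimental data the same design matrix is used; the recovered coefficients are then the least-squares approximations of Equation (50) rather than exact values.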

*Stage 2*. We form and solve the VPMP.

Step 0. Using the results of the previous stage, we represent the decision-making problem as the VPMP of Equations (1)–(4) with the vector criterion $F(X) = (-f_1(X)\ {-f_2(X)}\ f_3(X)\ f_4(X)\ f_5(X))^T$ for the stated restrictions:

$$\text{Opt } F(X) = \langle \max F_1(X) = \langle \max f_1(X), \max f_2(X) \rangle, \tag{53}$$

$$\min F_2(X) = \langle \min f_3(X), \min f_4(X), \min f_5(X) \rangle \rangle, \tag{54}$$

$$\text{at restrictions } 0 \le \mathbf{x}\_1 \le 10 \text{, } 0 \le \mathbf{x}\_2 \le 10. \tag{55}$$

#### *6.3. The Solution of a Vector Problem of Mathematical Programming–Decision-Making*

To solve the vector problem of mathematical programming, we use the algorithm based on the normalization of criteria described above.

Step 1. We solve the problem of Equations (53)–(55) with respect to each criterion separately using the function fmincon( ... ), resulting in the optimum points:

$$\begin{array}{l} X_1^* = \langle \mathbf{x}_1 = 0, \mathbf{x}_2 = 0 \rangle,\ X_2^* = \langle \mathbf{x}_1 = 0, \mathbf{x}_2 = 10 \rangle,\ X_3^* = \langle \mathbf{x}_1 = 10, \mathbf{x}_2 = 8 \rangle, \\ X_4^* = \langle \mathbf{x}_1 = 8, \mathbf{x}_2 = 10 \rangle,\ X_5^* = \langle \mathbf{x}_1 = 10, \mathbf{x}_2 = 0 \rangle. \end{array} \tag{56}$$

and values of the criteria at these points:

$$(1)\ f_1^* = -80,\ (2)\ f_2^* = -60,\ (3)\ f_3^* = 50,\ (4)\ f_4^* = 60,\ (5)\ f_5^* = 200. \tag{57}$$

Step 2. We find the worst value of each criterion by solving the problem of Equations (53)–(55) with respect to each criterion separately, i.e., we minimize the first two criteria and maximize the other three. The result is: (1) $X_1^o = (10, 10)$, $f_1^o = -320$; (2) $X_2^o = (10, 0)$, $f_2^o = -240$; (3) $X_3^o = (0, 0)$, $f_3^o = 232$; (4) $X_4^o = (0, 0)$, $f_4^o = 278.4$; (5) $X_5^o = (0, 10)$, $f_5^o = 800$.

We represent the domain of admissible points $S$ given by the restrictions of Equation (55) and the optimum points $X_1^*, \dots, X_5^*$ in Figure 4.

**Figure 4.** The solution of the vector problem of Equations (53)–(55).

Step 3. We analyze the set of Pareto optimal points using the matrix of the values of all objective functions, the column of deviations, and the matrix of the relative estimates at the optimal points:

$$F(X^\*) = \begin{bmatrix} 320.0 & 150.0 & 52 & 62.4 & 500 \\ 200.0 & 240.0 & 82 & 42.4 & 200 \\ 289.6 & 163.2 & 50 & 88.8 & 424 \\ 289.6 & 127.2 & 74 & 60.0 & 544 \\ 200.0 & 240.0 & 82 & 42.4 & 200 \end{bmatrix}, D = \begin{bmatrix} 240 \\ 180 \\ -182 \\ -218.4 \\ -600 \end{bmatrix}$$

$$\Lambda(X^\*) = \begin{bmatrix} 1.0000 & 0.5000 & 0.9890 & 0.9890 & 0.5000 \\ 0.5000 & 1.0000 & 0.8242 & 0.1648 & 1.0000 \\ 0.8733 & 0.5733 & 1.0000 & 0.8681 & 0.6267 \\ 0.8733 & 0.3733 & 0.8681 & 1.0000 & 0.4267 \\ 0.5000 & 1.0000 & 0.8242 & 0.1648 & 1.0000 \end{bmatrix}$$


We also show the relative estimates in Figure 4.

Step 4. We construct the λ-problem for the VPMP of Equations (53)–(55)

$$
\lambda^0 = \max \lambda,\tag{58}
$$

$$\begin{array}{l}\lambda - \left(-0.4\mathbf{x}_1^2 - 8\mathbf{x}_1 - 0.4\mathbf{x}_2^2 - 8\mathbf{x}_2 - 80 - f_1^o\right)/d_1 \le 0, \\ \lambda - \left(-0.3\mathbf{x}_1^2 - 6\mathbf{x}_1 - 0.3\mathbf{x}_2^2 + 12\mathbf{x}_2 - 150 - f_2^o\right)/d_2 \le 0, \\ \lambda - \left(0.5\mathbf{x}_1^2 - 20\mathbf{x}_1 + 0.5\mathbf{x}_2^2 - 8\mathbf{x}_2 + 232 - f_3^o\right)/d_3 \le 0, \\ \lambda - \left(0.6\mathbf{x}_1^2 - 9.6\mathbf{x}_1 + 0.6\mathbf{x}_2^2 - 24\mathbf{x}_2 + 278.4 - f_4^o\right)/d_4 \le 0, \\ \lambda - \left(\mathbf{x}_1^2 - 40\mathbf{x}_1 + \mathbf{x}_2^2 + 20\mathbf{x}_2 + 500 - f_5^o\right)/d_5 \le 0, \end{array} \tag{59}$$

$$0 \le \lambda \le 1,\ 0 \le \mathbf{x}_1 \le 10,\ 0 \le \mathbf{x}_2 \le 10. \tag{60}$$

Step 5. We solve the λ-problem of Equations (58)–(60) using the same MATLAB function fmincon( ... ). This results in:


Figure 5 shows all found points and relative estimates.

**Figure 5.** Solution of the λ-problem of Equations (58)–(60).
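The λ-problem of Equations (58)–(60) can likewise be checked by a max-min grid search over the square S (a Python sketch; fmincon in Step 5 solves the same constrained problem, so the grid value is only an approximation of λ0):

```python
import numpy as np

# Criteria of Equation (51).
f = [
    lambda x1, x2: -0.4*x1**2 - 8*x1 - 0.4*x2**2 - 8*x2 - 80,
    lambda x1, x2: -0.3*x1**2 - 6*x1 - 0.3*x2**2 + 12*x2 - 150,
    lambda x1, x2: 0.5*x1**2 - 20*x1 + 0.5*x2**2 - 8*x2 + 232,
    lambda x1, x2: 0.6*x1**2 - 9.6*x1 + 0.6*x2**2 - 24*x2 + 278.4,
    lambda x1, x2: x1**2 - 40*x1 + x2**2 + 20*x2 + 500,
]
# Worst values f_k^o (Step 2) and deviations d_k = f_k^* - f_k^o (vector D).
fo = np.array([-320.0, -240.0, 232.0, 278.4, 800.0])
d = np.array([240.0, 180.0, -182.0, -218.4, -600.0])

# Relative estimates lambda_k(X) = (f_k(X) - f_k^o)/d_k on a grid over S.
g = np.linspace(0.0, 10.0, 401)
x1, x2 = np.meshgrid(g, g)
lam = np.stack([(fk(x1, x2) - fo[k]) / d[k] for k, fk in enumerate(f)])

# lambda-problem: maximize the minimum relative estimate over the square.
lam_min = lam.min(axis=0)
i, j = np.unravel_index(lam_min.argmax(), lam_min.shape)
x0, lam0 = (x1[i, j], x2[i, j]), lam_min[i, j]
print(x0, round(lam0, 4))
```

The grid value of λ0 exceeds the worst relative estimates attained at the individual optimum points in Λ(X*), as expected of the guaranteed maximin result.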

The solution of the λ-problem of Equations (58)–(60) yields the optimal point *X*<sup>0</sup> and the maximum relative estimate λ<sup>0</sup>, which represent the decision with equivalent criteria. To solve the VPMP with a criterion priority, the point *X*<sup>00</sup> = {*x*1, *x*2} is chosen in the area of Figure 4 where the corresponding relative estimate is higher than the other relative estimates.

For example, if the second criterion has priority, then the point *X*<sup>00</sup> = {*x*1, *x*2} is chosen where λ<sup>2</sup> ≥ λ1, λ3, λ4, λ5. A more elaborate procedure for choosing a priority criterion is presented for decision-making problems with three and four criteria in the following sections.

#### **7. Statement and Optimal Decision Making with Experimental Data in Problems with Three Parameters**

We consider a conditional object, namely a technical system for which the following are known: functional dependences for some of the characteristics (certainty conditions), discrete experimental values for the other characteristics (uncertainty conditions), and the restrictions imposed on the functioning of the system [10]. The numerical problem of modelling the system is considered both with equivalent criteria and with a given criterion priority, and proceeds as follows:

- Statement of the problem of decision making in a system with three parameters;
- Construction of a numerical model of a system with three parameters in the form of a vector problem;
- Solution of the vector problem and decision making with equivalent criteria;
- Decision making in a system with three parameters with a criterion priority;
- Analysis of the results of the final decision.

#### *7.1. Statement of the Problem of Decision Making in a System with Three Parameters*

It is given that the technical system is defined by three parameters: $X = \{x_1, x_2, x_3\}$ is the vector of controlled variables. (Practical problems of the simulation of technical systems with this algorithm can be solved for parameter dimensionality greater than two, $N > 2$; the structure of the software becomes more complicated, and geometric interpretation for $N = 3, 4, \dots$ is not possible.) The basic data for the solution of the problem are the characteristics (criteria) $F(X) = \{f_1(X), f_2(X), f_3(X), f_4(X)\}$, whose values depend on the vector $X$. For the characteristics $f_3(X)$ and $f_4(X)$, the functional dependence on the parameters $X$ is known (a certainty condition):

$$\begin{aligned} f_3(\mathbf{X}) &= 55.7188 - 0.1187 \times \mathbf{x}_1 + 0.1844 \times \mathbf{x}_2 - 0.0438 \times \mathbf{x}_3 - 0.0002 \times \mathbf{x}_1 \times \mathbf{x}_2 \\ &- 0.0023 \times \mathbf{x}_1 \times \mathbf{x}_3 - 0.0011 \times \mathbf{x}_2 \times \mathbf{x}_3 + 0.0032 \times \mathbf{x}_1^2 + 0.0634 \times \mathbf{x}_2^2 - 0 \times \mathbf{x}_3^2 \end{aligned} \tag{61}$$

$$\begin{aligned} f_4(\mathbf{X}) &= 25.6484 - 0.2967 \times \mathbf{x}_1 - 0.3384 \times \mathbf{x}_2 + 0.1433 \times \mathbf{x}_3 - 0.0048 \times \mathbf{x}_1 \times \mathbf{x}_2 \\ &+ 0.0169 \times \mathbf{x}_1 \times \mathbf{x}_3 + 0.0009 \times \mathbf{x}_2 \times \mathbf{x}_3 + 0.012 \times \mathbf{x}_1^2 + 0.0014 \times \mathbf{x}_2^2 - 0.0018 \times \mathbf{x}_3^2 \end{aligned} \tag{62}$$

$$\text{Parametrical restrictions: } 25 \le \mathbf{x}_1 \le 100,\ 25 \le \mathbf{x}_2 \le 100,\ 25 \le \mathbf{x}_3 \le 100. \tag{63}$$

For the first and second characteristics, only experimental results are available: the values of the parameters and of the corresponding characteristics are known (an uncertainty condition).

The numerical values of parameters *X* and characteristics of *y*1(*X*), *y*2(*X*) are presented in Table 6.

**Table 6.** Numerical values of parameters and characteristics of the system.




In the decision, it is desirable to obtain values of the first and third characteristics (criteria) as high as possible, $f_1(X) \to \max$, $f_3(X) \to \max$, and values of the second and fourth characteristics as small as possible, $f_2(X) \to \min$, $f_4(X) \to \min$. The parameters $X = \{x_1, x_2, x_3\}$ change within the following limits: $x_1, x_2, x_3 \in \{25, 50, 75, 100\}$.

The following is required: to construct a model of the technical system in the form of a vector problem, to solve the vector problem with equivalent criteria, to choose a priority criterion, to establish a numerical value of the priority criterion, to make the best decision (optimum).

#### *7.2. Construction of a Numerical Model of a System with Three Parameters in the Form of a Vector Problem*

The construction of a numerical model of the system in the form of a vector problem includes three stages:


#### 7.2.1. Building a Model under the Conditions of Certainty

Construction under the conditions of definiteness is defined by the functional dependence of each characteristic and by the restrictions on the parameters of the technical system. In our example, two characteristics (Equations (61) and (62)) and the restrictions of Equation (63) are known. Uniting these, we obtain a vector problem with two criteria:

$$\begin{aligned} \text{opt } F(X) = \{\max F_1(X)\} = \{\max f_3(X)\} &= 55.7188 - 0.1187 \times \mathbf{x}_1 + 0.1844 \times \mathbf{x}_2 - 0.0438 \times \mathbf{x}_3 - 0.0002 \times \mathbf{x}_1 \times \mathbf{x}_2 \\ &- 0.0023 \times \mathbf{x}_1 \times \mathbf{x}_3 - 0.0011 \times \mathbf{x}_2 \times \mathbf{x}_3 + 0.0032 \times \mathbf{x}_1^2 + 0.0634 \times \mathbf{x}_2^2 - 0 \times \mathbf{x}_3^2 \end{aligned} \tag{64}$$

$$\begin{aligned} \min F_2(\mathbf{X}) = \langle \min f_4(\mathbf{X}) \rangle &= 25.6484 - 0.2967 \times \mathbf{x}_1 - 0.3384 \times \mathbf{x}_2 + 0.1433 \times \mathbf{x}_3 - 0.0048 \times \mathbf{x}_1 \times \mathbf{x}_2 \\ &+ 0.0169 \times \mathbf{x}_1 \times \mathbf{x}_3 + 0.0009 \times \mathbf{x}_2 \times \mathbf{x}_3 + 0.012 \times \mathbf{x}_1^2 + 0.0014 \times \mathbf{x}_2^2 - 0.0018 \times \mathbf{x}_3^2 \end{aligned} \tag{65}$$

$$\text{Parametrical restrictions: } 25 \le \mathbf{x}\_1 \le 100, \, 25 \le \mathbf{x}\_2 \le 100, \, 25 \le \mathbf{x}\_3 \le 100. \tag{66}$$

These data are used further to create a mathematical model of the technical system.

#### 7.2.2. Building a Model under the Conditions of Uncertainty

Construction under the conditions of uncertainty entails the use of the qualitative and quantitative descriptions of the technical system obtained by the "input-output" principle in Table 6. The transformation of this information (the basic data $y_1(X)$, $y_2(X)$) into the functional forms $f_1(X)$, $f_2(X)$ is carried out using mathematical methods (i.e., regression analysis).

The basic data of Table 6 are created in the MATLAB system in the form of a matrix:

$$I = |X, Y| = |X_i,\ y_{i1},\ y_{i2}|,\ i = \overline{1, M}. \tag{67}$$

For each experimental function $y_k$, $k$ = 1, 2, a regression using the method of least squares, $\min \sum_{i=1}^{M} (y_{ki} - \hat{y}_{ki})^2$, is performed in MATLAB. A polynomial $A_k$ defining the interrelation between the factors $X_i$ of Equation (67) and the function values $y_{ki} = f(X_i, A_k)$, $k$ = 1, 2, is constructed. As a result, we obtain a system of coefficients $A_k = \{A_{0k}, A_{1k}, \dots, A_{9k}\}$ which define the coefficients of a polynomial (function):

$$f\_k(X, A) = A\_{0k} + A\_{1k}\mathbf{x}\_1 + A\_{2k}\mathbf{x}\_1^2 + A\_{3k}\mathbf{x}\_2 + A\_{4k}\mathbf{x}\_2^2 + A\_{5k}\mathbf{x}\_3 + A\_{6k}\mathbf{x}\_3^2 + A\_{7k}\mathbf{x}\_1 \ast \mathbf{x}\_2 + A\_{8k}\mathbf{x}\_1 \ast \mathbf{x}\_3 + A\_{9k}\mathbf{x}\_2 \ast \mathbf{x}\_3 \tag{68}$$

As a result of the calculation of the coefficients *Ak*, *k* = 1, we obtain the *f* 1(*X*) function:

$$\begin{array}{ll} f\_1(\mathbf{X}) &=& 50.0 + 11.55 \times \mathbf{x}\_1 + 3.55 \times \mathbf{x}\_2 + 1.0 \times \mathbf{x}\_3 \\ &+& 0.0144 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0 \times \mathbf{x}\_2 \times \mathbf{x}\_3 - 0.07 \times \mathbf{x}\_1^2 \\ &-& 0.07 \times \mathbf{x}\_2^2 - 0 \times \mathbf{x}\_3^2. \end{array} \tag{69}$$

As a result of the calculations of the coefficients *Ak*, *k* =2, we obtain the *f* 2(*X*) function:

$$\begin{aligned} f\_2(\mathbf{X}) &= -53.875 + 0.7359 \times \mathbf{x}\_1 + 51.3703 \times \mathbf{x}\_2 \\ &+ 0.3516 \times \mathbf{x}\_3 + 0.0072 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.0519 \times \mathbf{x}\_1 \times \mathbf{x}\_3 \\ &+ 0.0005 \times \mathbf{x}\_2 \times \mathbf{x}\_3 - 0.0066 \times \mathbf{x}\_1^2 - 0.1454 \times \mathbf{x}\_2^2 + 0.0003 \times \mathbf{x}\_3^2 \end{aligned} \tag{70}$$

Parametric restrictions are similar to those of Equation (8).
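The regression step of Equation (68) can be sketched in Python with NumPy's least squares; this is an illustration only (the paper uses MATLAB), with synthetic noise-free data standing in for the experimental table, and the column layout following Equation (68):

```python
import numpy as np

def design_matrix(X):
    # Columns follow Eq. (68): 1, x1, x1^2, x2, x2^2, x3, x3^2, x1*x2, x1*x3, x2*x3
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x1**2, x2, x2**2,
                            x3, x3**2, x1 * x2, x1 * x3, x2 * x3])

rng = np.random.default_rng(0)
X = rng.uniform(25, 100, size=(40, 3))   # sample points inside the box of Eq. (66)

# Coefficients of f1(X) from Eq. (69), used here as the "true" polynomial A_k
A_true = np.array([50.0, 11.55, -0.07, 3.55, -0.07, 1.0, 0.0, 0.0144, 0.0, 0.0])
y = design_matrix(X) @ A_true            # noise-free "experimental" outputs

# Least-squares estimate of A_k, i.e. min sum_i (y_i - f(X_i, A))^2
A_hat, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```

With noise-free data the coefficients of Equation (69) are recovered up to numerical error; with real experimental data the fit would only approximate them.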

#### 7.2.3. Creation of a Mathematical Model of a Technical System under the Conditions of Definiteness and Uncertainty

For the creation of a mathematical model of the technical system we used: the functions obtained from conditions of definiteness (Equations (64) and (65)) and uncertainty (Equations (69) and (70)), and parametric restrictions (Equation (66)).

*Block 4.* We consider the functions of Equations (64), (65), (69) and (70) as the criteria defining the functioning of the technical system. The set of *K* = 4 criteria includes two criteria *f* 1*(X)*, *f* 3*(X)* → max and two criteria *f* 2*(X)*, *f* 4*(X)* → min. As a result, the model of the functioning of the technical system is presented as a vector problem of mathematical programming:

$$\begin{aligned} \text{opt } F(\mathbf{X}) = \{ \max F\_1(\mathbf{X}) = \{ \max f\_1(\mathbf{X}) &= 50.0 + 11.55 \times \mathbf{x}\_1 + 3.55 \times \mathbf{x}\_2 + 1.0 \times \mathbf{x}\_3 \\ &+ 0.0144 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &- 0.07 \times \mathbf{x}\_1^2 - 0.07 \times \mathbf{x}\_2^2 - 0 \times \mathbf{x}\_3^2, \\ \max f\_3(\mathbf{X}) &= 55.7188 - 0.1187 \times \mathbf{x}\_1 + 0.1844 \times \mathbf{x}\_2 - 0.0438 \times \mathbf{x}\_3 \\ &- 0.0002 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0.0023 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - 0.0011 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &+ 0.0032 \times \mathbf{x}\_1^2 + 0.0634 \times \mathbf{x}\_2^2 - 0 \times \mathbf{x}\_3^2 \}, \end{aligned} \tag{71}$$

$$\begin{aligned} \min F\_2(\mathbf{X}) = \{ \min f\_2(\mathbf{X}) &= -53.875 + 0.7359 \times \mathbf{x}\_1 + 51.3703 \times \mathbf{x}\_2 + 0.3516 \times \mathbf{x}\_3 \\ &+ 0.0072 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.0519 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0.0005 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &- 0.0066 \times \mathbf{x}\_1^2 - 0.1454 \times \mathbf{x}\_2^2 + 0.0003 \times \mathbf{x}\_3^2, \\ \min f\_4(\mathbf{X}) &= 25.6484 - 0.2967 \times \mathbf{x}\_1 - 0.3384 \times \mathbf{x}\_2 + 0.1433 \times \mathbf{x}\_3 \\ &- 0.0048 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.0169 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0.0009 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &+ 0.012 \times \mathbf{x}\_1^2 + 0.0014 \times \mathbf{x}\_2^2 - 0.0018 \times \mathbf{x}\_3^2 \}, \end{aligned} \tag{72}$$

$$\text{Parametric restrictions: } 25 \le \mathbf{x}\_1 \le 100, \, 25 \le \mathbf{x}\_2 \le 100, \, 25 \le \mathbf{x}\_3 \le 100. \tag{73}$$

The vector problem of mathematical programming in Equations (71)–(73) represents the model of optimal decision making under conditions of certainty and uncertainty in the aggregate.

#### *7.3. The Solution of the Vector Problem and Decision Making with Equivalent Criteria*

(Algorithm 1 of decision making in problems of vector optimization with equivalent criteria.)

The solution of the vector problem of Equations (71)–(73) is undertaken as a sequence of steps.

Step 1. Equations (71)–(73) are solved for each criterion separately, using the function *fmincon* ( ... ) of the MATLAB system; the use of *fmincon* ( ... ) is described in [12].

As a result, we obtain the optimum points *X*∗*k* and *f*∗*k* = *fk*(*X*∗*k*), *k* = 1, ..., *K*, the values of the criteria at these points, i.e., the best decision for each criterion:

*X*∗1 = {*x*1 = 86.02, *x*2 = 34.2, *x*3 = 100}, *f*∗1 = *f* 1(*X*∗1) = 707.47; *X*∗2 = {*x*1 = 25, *x*2 = 25, *x*3 = 25}, *f*∗2 = *f* 2(*X*∗2) = 1200.0;

*X*∗3 = {*x*1 = 100, *x*2 = 100, *x*3 = 25}, *f*∗3 = *f* 3(*X*∗3) = 724.69; *X*∗4 = {*x*1 = 25, *x*2 = 100, *x*3 = 25}, *f*∗4 = *f* 4(*X*∗4) = 9.16.
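Since *fmincon* is MATLAB-specific, the per-criterion optimization of Step 1 can be illustrated with a coarse grid search over the box of Equation (73); a Python sketch for the third criterion (Equation (64)), not the authors' code:

```python
import itertools
import numpy as np

def f3(x1, x2, x3):
    # Third criterion, Eq. (64)
    return (55.7188 - 0.1187 * x1 + 0.1844 * x2 - 0.0438 * x3
            - 0.0002 * x1 * x2 - 0.0023 * x1 * x3 - 0.0011 * x2 * x3
            + 0.0032 * x1 ** 2 + 0.0634 * x2 ** 2 - 0.0 * x3 ** 2)

grid = np.linspace(25, 100, 16)          # 5-unit steps over the box of Eq. (73)
best = max(itertools.product(grid, grid, grid), key=lambda p: f3(*p))
# The grid maximum lands at the corner {100, 100, 25}, matching X3* above
```

The criterion is convex in each variable, so the maximum sits at a corner of the box and even a coarse grid locates it.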

The restrictions in Equation (73) and points of an optimum of coordinates {*x*1, *x*2} are presented in Figure 6.

**Figure 6.** The Pareto set S0 ⊂ S in a two-dimensional coordinate system.

Step 2. We define the worst (unchangeable) value of each criterion, the anti-optimum, denoted by a superscript zero: *X*01 = {*x*1 = 25, *x*2 = 100, *x*3 = 25}, *f* 01 = *f* 1(*X*01) = 11.0; *X*02 = {*x*1 = 100, *x*2 = 100, *x*3 = 100}, *f* 02 = *f* 2(*X*02) = 4270.9; *X*03 = {*x*1 = 43.5, *x*2 = 20, *x*3 = 80}, *f* 03 = *f* 3(*X*03) = 85.0; *X*04 = {*x*1 = 100, *x*2 = 25, *x*3 = 100}, *f* 04 = *f* 4(*X*04) = 263.97.

Step 3. A system analysis of the set of Pareto optimal points is conducted (i.e., analysis by each criterion). At the optimal points *X*\* = {*X*∗1, *X*∗2, *X*∗3, *X*∗4}, the values of the criterion functions *F*(*X*\*) = ‖*fq*(*X*∗*k*)‖, *q*, *k* = 1, ..., *K* are determined. We calculate the vector *D* = (*d*1 *d*2 *d*3 *d*4)T of deviations of each criterion on the admissible set *S*: *dk* = *fk*∗ − *fk*0, *k* = 1, ..., 4, and the matrix of relative estimates λ(*X*\*) = ‖λ*q*(*X*∗*k*)‖, where λ*k*(*X*) = (*fk*(*X*) − *fk*0)/*dk*:

$$F(X^\*) = \begin{vmatrix} 707.5 & 2055.1 & 127.1 & 209.6\\ 374.0 & 1200.0 & 96.1 & 28.7\\ 329.0 & 3848.7 & 724.7 & 95.1\\ 11.0 & 3704.1 & 701.9 & 9.2 \end{vmatrix}, D = \begin{vmatrix} 696.5\\ -3070.9\\ 639.7\\ -254.8 \end{vmatrix}$$

$$\lambda(X^\*) = \begin{vmatrix} 1.0000 & 0.7216 & 0.0658 & 0.2132\\ 0.5212 & 1.0000 & 0.0174 & 0.9232\\ 0.4566 & 0.1375 & 1.0000 & 0.6628\\ 0 & 0.1846 & 0.9644 & 1.0000 \end{vmatrix}$$
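These matrices can be reproduced numerically from *fk*∗ (Step 1) and the anti-optimum values *fk*0 implied by the deviation vector *D*; a Python check (agreement is within the rounding of the published figures):

```python
import numpy as np

# Rows: criterion values at the optimum points X1*, ..., X4* (matrix F(X*) above)
F = np.array([[707.5, 2055.1, 127.1, 209.6],
              [374.0, 1200.0,  96.1,  28.7],
              [329.0, 3848.7, 724.7,  95.1],
              [ 11.0, 3704.1, 701.9,   9.2]])
f_star = np.array([707.5, 1200.0, 724.7, 9.2])    # best value of each criterion
f_zero = np.array([11.0, 4270.9, 85.0, 263.97])   # anti-optimum of each criterion
D = f_star - f_zero                               # deviations d_k = f_k* - f_k^0

lam = (F - f_zero) / D                            # lambda_q(X_k*) = (f_q - f_q^0) / d_q
```

The diagonal of `lam` is unity (each criterion is best at its own optimum point), as in the published matrix λ(*X*\*).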

**Discussion.** The analysis of the criteria in relative estimates shows that at each optimal point *X*\* = {*X*∗1, *X*∗2, *X*∗3, *X*∗4} the relative estimate of the corresponding criterion is equal to unity, while the other criteria are much less than unity. It is required to find points (parameters) at which all relative estimates are as close to unity as possible. Steps 4 and 5 are directed at the solution of this problem.

Step 4. Creation of the λ-problem is carried out in two stages: first, the maximin problem of optimization with normalized criteria is constructed:

$$
\lambda^0 = \max\_{\mathcal{X}} \min\_k \lambda\_k(X), \mathcal{G}(X) \le 0, X \ge 0,
$$

Second, this is transformed into a standard problem of mathematical programming (the λ-problem):

$$
\lambda^0 = \max \lambda,\tag{74}
$$

$$\begin{aligned} \text{at restrictions: } &\lambda - \frac{50.0 + 11.55 \times x\_1 \dots + 0.0144 \times x\_1 \times x\_2 \dots - 0.07 \times x\_1^2 \dots - f\_1^0}{f\_1^\* - f\_1^0} \le 0, \\ &\lambda - \frac{55.7188 - 0.1187 \times x\_1 \dots - 0.0002 \times x\_1 \times x\_2 \dots + 0.0032 \times x\_1^2 \dots - f\_3^0}{f\_3^\* - f\_3^0} \le 0, \\ &\lambda - \frac{-53.875 + 0.7359 \times x\_1 \dots + 0.0519 \times x\_1 \times x\_3 \dots - 0.0066 \times x\_1^2 \dots - f\_2^0}{f\_2^\* - f\_2^0} \le 0, \\ &\lambda - \frac{25.6484 - 0.2967 \times x\_1 \dots - 0.0048 \times x\_1 \times x\_2 \dots + 0.012 \times x\_1^2 \dots - f\_4^0}{f\_4^\* - f\_4^0} \le 0, \end{aligned} \tag{75}$$

$$25 \le x\_1 \le 100, \, 25 \le x\_2 \le 100, \, 25 \le x\_3 \le 100. \tag{76}$$

where the vector of unknowns has the dimension *N* + 1: *X* = {*x*1, ... , *xN*, λ}.

Step 5. The λ-problem solution.

Using the function fmincon( ... ) [12,15]:

[Xo,Lo] = fmincon('Z\_TehnSist\_4Krit\_L',X0,Ao,bo,Aeq,beq,lbo,ubo,'Z\_TehnSist\_LConst',options).

As a result, the solution of the vector problem of mathematical programming of Equations (71)–(73) with equivalent criteria and of the corresponding λ-problem of Equations (74)–(76) is obtained: *X*0 = {*x*1 = 33.027, *x*2 = 69.54, *x*3 = 25.0}, λ0 = 0.4459, which is the optimum point of the design data of the technical system. The point *X*0 is presented in Figure 6. *fk*(*X*0), *k* = 1, ..., *K* represents the values of the criteria (characteristics of the technical system):

$$f\_1(X^0) = 321.5, f\_2(X^0) = 2901.7, f\_3(X^0) = 370.2, f\_4(X^0) = 19.1,\tag{77}$$

and λ*k*(*X*0), *k* = 1, ..., *K* represents the values of the relative estimates:

$$
\lambda\_1(X^0) = 0.4459, \lambda\_2(X^0) = 0.4459, \lambda\_3(X^0) = 0.4459, \lambda\_4(X^0) = 0.9609. \tag{78}
$$

λ0 = 0.4459 is the maximum lower level among all relative estimates measured in relative units: λ0 = *min*(λ1(*X*0), λ2(*X*0), λ3(*X*0), λ4(*X*0)) = 0.4459. The relative assessment λ0 is called the guaranteed result in relative units: none of the relative estimates λ*k*(*X*0), and hence none of the characteristics *fk*(*X*0) of the technical system, can be improved without worsening another characteristic.
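The guaranteed result can be verified by substituting *X*0 into Equations (69), (70), (64) and (65) and normalizing; a Python check (a small drift from the published values is expected, since the printed coefficients are rounded):

```python
import numpy as np

x1, x2, x3 = 33.027, 69.54, 25.0   # the optimum point X0 of the lambda-problem

f = np.array([                      # f1 (69), f2 (70), f3 (64), f4 (65) at X0
    50.0 + 11.55 * x1 + 3.55 * x2 + 1.0 * x3 + 0.0144 * x1 * x2
        - 0.07 * x1 ** 2 - 0.07 * x2 ** 2,
    -53.875 + 0.7359 * x1 + 51.3703 * x2 + 0.3516 * x3 + 0.0072 * x1 * x2
        + 0.0519 * x1 * x3 + 0.0005 * x2 * x3 - 0.0066 * x1 ** 2
        - 0.1454 * x2 ** 2 + 0.0003 * x3 ** 2,
    55.7188 - 0.1187 * x1 + 0.1844 * x2 - 0.0438 * x3 - 0.0002 * x1 * x2
        - 0.0023 * x1 * x3 - 0.0011 * x2 * x3 + 0.0032 * x1 ** 2
        + 0.0634 * x2 ** 2,
    25.6484 - 0.2967 * x1 - 0.3384 * x2 + 0.1433 * x3 - 0.0048 * x1 * x2
        + 0.0169 * x1 * x3 + 0.0009 * x2 * x3 + 0.012 * x1 ** 2
        + 0.0014 * x2 ** 2 - 0.0018 * x3 ** 2,
])
f_star = np.array([707.5, 1200.0, 724.7, 9.2])   # best values (Step 1)
f_zero = np.array([11.0, 4270.9, 85.0, 263.97])  # anti-optima (Step 2)
lam = (f - f_zero) / (f_star - f_zero)           # relative estimates at X0
```

The minimum of `lam` reproduces the guaranteed level λ0 ≈ 0.4459, and the fourth estimate reproduces λ4(*X*0) ≈ 0.9609.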

**Discussion**. We note that according to Theorem 1, at the point *X*0 criteria 1, 2, 3 are contradictory. This contradiction is defined by the equality λ1(*X*0) = λ2(*X*0) = λ3(*X*0) = λ0 = 0.4459, and by the inequality {λ4(*X*0) = 0.9609} > λ0 for the remaining criterion.

Thus, Theorem 1 forms a basis for determining the correctness of the solution of a vector problem. In a vector problem of mathematical programming, as a rule, the equality holds for two criteria: λ0 = λ*q*(*X*0) = λ*p*(*X*0), *q*, *p* ∈ *K*, *X* ∈ *S* (in our example, three criteria), and for the other criteria an inequality holds: λ0 ≤ λ*k*(*X*0) ∀*k* ∈ *K*, *k* ≠ *q*, *k* ≠ *p*.

In the admissible set of points *S* formed by the restrictions of Equation (76), the optimum points *X*∗1, *X*∗2, *X*∗3, *X*∗4 are united in a contour and presented as the set of Pareto optimal points *S*0 ⊂ *S*. To specify the border of the Pareto set, additional points are calculated: *X*012, *X*013, *X*042, *X*043, which lie between the corresponding criteria. For the definition of the point *X*012, the vector problem was solved with the two criteria of Equations (71) and (72), λ1(*X*) and λ2(*X*), subject to Equation (76).

The results of the decision are:

*X*012 = {80.78, 25.0, 55.89}, λ0(*X*012) = 0.9264, F12 = {656.2, 1426.0, 101.7, 142.7}, L12 = {**0.9264**, **0.9264**, 0.0261, 0.4761}.

The other points *X*013, *X*042, *X*043 were similarly defined:

*X*013 = {93.29, 87.49, 100.0}, λ0(*X*013) = 0.7173, F13 = {510.6, 3924.4, 543.8, 206.2}, L13 = {**0.7173**, 0.1128, **0.7173**, 0.2267};

*X*042 = {25.0, 29.92, 25.0}, λ0(*X*042) = 0.9301, F42 = {374.3, 1414.5, 114.0, 27.0}, L42 = {0.5217, **0.9301**, 0.0454, **0.9301**};

*X*043 = {25.0, 100.0, 56.02}, λ0(*X*043) = 0.8366, F43 = {42.0, 3757.6, 695.4, 25.0}, L43 = {0.0445, 0.1672, **0.9541**, **0.9541**}.

The points *X*012, *X*013, *X*042, *X*043 are presented in Figure 6. The coordinates of these points and the characteristics of the technical system in relative units λ1(*X*), λ2(*X*), λ3(*X*), λ4(*X*) are shown in Figure 7 in the three-dimensional space {*x*1, *x*2, λ}, where the third axis λ is the relative assessment.

The solution of the λ-problem of Equations (74)–(76) is the optimal point *X*0, and the maximum relative assessment λ0 represents the result of the decision with equivalent criteria.

**Figure 7.** The solution of the λ-problem in a three-dimensional system of coordinates of x1, x2 and λ.

#### *7.4. Decision Making in a System with Three Parameters with a Criterion Priority*

(Method of decision making in problems of vector optimization with a criterion priority)

*Step 1*. We solve the vector problem with equivalent criteria. The numerical results of the solution of the vector problem are given above. The Pareto set *S*0 ⊂ *S* lies between the optimum points:

$$S^0 \ = \ \{X\_1^\* \, X\_{13}^0 \, X\_3^\* \, X\_{43}^0 \, X\_4^\* \, X\_{42}^0 \, X\_2^\* \, X\_{12}^0 \, X\_1^\*\}.$$

We carry out the analysis of the Pareto set *S*0 ⊂ *S*. For this purpose, we connect the auxiliary points *X*012, *X*013, *X*043, *X*042 with the point *X*0, which conditionally represents the center of the Pareto set. As a result, we obtain four subsets of points *X* ∈ *S*0*q* ⊂ *S*0 ⊂ *S*, *q* = 1, ..., 4. The subset *S*01 ⊂ *S*0 ⊂ *S* is characterized by the fact that in it the relative estimate λ1 ≥ λ2, λ3, λ4, i.e., in this part of *S* the first criterion has priority over the others. This applies similarly to the subsets *S*02, *S*03, *S*04, where the second, third or fourth criterion has priority over the others, respectively. We designate the set of Pareto optimal points *S*0 = *S*01 ∪ *S*02 ∪ *S*03 ∪ *S*04. The coordinates of all obtained points and relative estimates are presented in two-dimensional space in Figure 6. These coordinates are shown in the three-dimensional space {*x*1, *x*2, λ} from the point *X*∗4 in Figure 7, where the third axis λ is the relative assessment. The restriction contour of the set of Pareto optimal points in Figure 7 is lowered to −0.5 (so that the restrictions are visible). This information is also a basis for further research on the structure of the Pareto set.
The person making the decision is, as a rule, the designer of the technical system. If the results of the solution of the vector problem with equivalent criteria do not satisfy the decision maker, then the optimal solution is chosen from one of the subsets of points *S*01, *S*02, *S*03, *S*04.

Step 2. Choice of the priority criterion *q* ∈ *K*. From the theory (see Theorem 2) it is known that at an optimum point *X*0 there are always two most inconsistent criteria, *q* ∈ *K* and *v* ∈ *K*, for which the following equality holds in relative units: λ0 = λ*q*(*X*0) = λ*v*(*X*0), *q*, *v* ∈ *K*, *X* ∈ *S*. The others are subject to the inequalities: λ0 ≤ λ*k*(*X*0) ∀*k* ∈ *K*, *k* ≠ *q*, *k* ≠ *v*.

In the model of the technical system of Equations (71)–(73) and the corresponding λ-problem of Equations (74)–(76), such criteria are the first, second and third:

$$
\lambda^0 = \lambda\_1(X^0) = \lambda\_2(X^0) = \lambda\_3(X^0) = 0.4459. \tag{79}
$$

These are shown in Figure 8.

**Figure 8.** The solution of the λ-problem (1, 2, 3 criterion) in a three-dimensional system of coordinates of x1, x2 and λ.

As a rule, the criterion which the decision maker would like to improve is chosen from a pair of contradictory criteria. Such a criterion is called the "priority criterion", which we designate *q* = *2* ∈ *K*. This criterion is investigated in interaction with the first criterion, *k* = 1 ∈ *K*.

On the display the message is given:

q = input ('Enter the priority criterion (number) q = '); we entered: *q* = 2.

Step 3. The numerical limits of the change of the priority criterion *q* = 2 ∈ *K* are defined. For the priority criterion *q* = 2, the numerical limits in physical units upon transition from the optimal point *X*0 to the point *X*∗*q* obtained in the first step are defined. Information about the criterion *q* = 2 is displayed on the screen:

$$f\_q(\mathbf{X}^0) \;=\; 2901.68 \ge f\_q(\mathbf{X}) \ge 1200.0 \;=\; f\_q(\mathbf{X}\_q^\*), q \in \mathbf{K}.\tag{80}$$

In relative units the criterion of *q* = 2 changes according to the following limits:

λ*q*(*X*0) = 0.4459 ≤ λ*q*(*X*) ≤ 1 = λ*q*(*X*∗*q*), *q* = 2 ∈ *K*. These data are analyzed.

Step 4. Choice of the size of the priority criterion *q* ∈ *K* (decision making). The message is displayed: "Enter the size of the priority criterion *fq* ="; we enter, for example, *fq* = 1600.

Step 5. Calculation of relative assessment.

For the chosen size of priority criterion *fq* = 1600 the relative assessment is calculated:

$$
\lambda\_q = \frac{f\_q - f\_q^o}{f\_q^\* - f\_q^o} = \frac{1600 - 4279.9}{1200.0 - 4279.9} = 0.8697,\tag{81}
$$

which upon transition from point *X***<sup>0</sup>** to *X*<sup>∗</sup> *<sup>q</sup>* according to Equation (78) lies in the limits:

$$0.4459 \;=\; \lambda\_2(\mathbf{X}^0) \le \lambda\_2 \;=\; 0.8697 \le \lambda\_2(\mathbf{X}\_2^\*) \;=\; 1, q \in \mathcal{K}.$$

Step 6. Calculation of the coefficient of linear approximation.

Assuming a linear nature of the change of the criterion *fq*(*X*) in Equation (80), and correspondingly of the relative assessment λ*q*(*X*), we use standard methods of linear approximation to calculate the proportionality coefficient ρ between λ*q*(*X*0) and λ*q*:

$$\rho = \frac{\lambda\_q - \lambda\_q(X^0)}{\lambda\_q(X\_q^\*) - \lambda\_q(X^0)} = \frac{0.8697 - 0.4459}{1 - 0.4459} = 0.7649, \quad q = 2. \tag{82}$$

Step 7. Calculation of coordinates of the priority criterion with the size fq.

Assuming a linear nature of the change of the vector *Xq* = {*x*1, *x*2}, *q* = 2, we determine the coordinates of the point at which the priority criterion has the value *fq* = 1600, with the relative assessment of Equation (81):

*Xq* = {*x*1 = *X*0(1) + ρ(*X*∗*q*(1) − *X*0(1)), *x*2 = *X*0(2) + ρ(*X*∗*q*(2) − *X*0(2))}, where *X*0 = {*x*1 = 33.02, *x*2 = 69.54}, *X*∗2 = {*x*1 = 25, *x*2 = 25}.

As a result of these calculations we obtain the point coordinates:

$$X^{q} = \langle \mathbf{x}\_1 = 26.88, \mathbf{x}\_2 = 35.47 \rangle. \tag{83}$$
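Steps 5–7 are plain arithmetic and can be checked directly in Python (*f* 02 = 4270.9 is taken from Step 2; the values reproduce Equations (81) and (82)):

```python
f_q = 1600.0                         # chosen size of the priority criterion q = 2
f_q_star, f_q_zero = 1200.0, 4270.9  # f2* from Step 1, f2^0 from Step 2

# Step 5: relative assessment of the chosen value, Eq. (81)
lam_q = (f_q - f_q_zero) / (f_q_star - f_q_zero)

# Step 6: linear-approximation coefficient rho, Eq. (82)
lam_0 = 0.4459
rho = (lam_q - lam_0) / (1.0 - lam_0)

# Step 7: first coordinate of X^q, moving from X0 = {33.02, 69.54} toward X2* = {25, 25}
x1 = 33.02 + rho * (25.0 - 33.02)
```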

Step 8. Calculation of the main indicators of a point of *Xq*. For the obtained *X<sup>q</sup>* point, we calculate:


Any point of the Pareto set *X*0*t* = {λ0*t*, *X*0*t*} ∈ *S*0 can be calculated similarly.

#### *7.5. Analysis of the Results of the Final Decision*

The calculated value of the criterion *fq*(*X*0*t*), *q* ∈ *K* is usually not equal to the chosen value *fq*. The error of the choice, Δ*fq* = |*fq*(*X*0*t*) − *fq*| = |1651.5 − 1600| = 51.5, is defined by the error of the linear approximation, Δ*fq*% = 3.2%.

In the course of modeling, the parametric restrictions of Equation (73) can be changed, i.e., some set of optimum decisions is obtained, from which a final version can be chosen; in our example this set of optimum decisions includes:


We represent these parameters in the two-dimensional (*x*1, *x*2) and three-dimensional (*x*1, *x*2, λ) coordinate systems in Figures 6–8, and also in physical units for each function *f* 1(*X*), ... , *f* 4(*X*) in Figures 9–12, respectively.

The first characteristic *f* 1(*X*) in physical units is shown in Figure 9.

**Figure 9.** The first characteristic *f* 1(*X*) of the technical system in a natural indicator.

At the points *X*0 and *Xq*, the second characteristic *f* 2(*X*) appears as presented in Figure 10.

**Figure 10.** The second characteristic *f* 2(*X*) of the technical system in a natural indicator.

At the points *X*0 and *Xq*, the third characteristic *f* 3(*X*) appears as presented in Figure 11.

**Figure 11.** The third characteristic *f* 3(*X*) of the technical system in a natural indicator.

At the points *X*0 and *Xq*, the fourth characteristic *f* 4(*X*) appears as presented in Figure 12.

**Figure 12.** The fourth characteristic *f* 4(*X*) of the technical system in a natural indicator.

Collectively, for the submitted version, at the point *X*0 there exist the characteristics *f* 1(*X*0), ..., *f* 4(*X*0), the relative estimates λ1(*X*0), ..., λ4(*X*0), and the maximum relative level λ0 ≤ λ*k*(*X*0) ∀*k* ∈ *K*, such that *X*0 is the optimal solution with equivalent criteria (characteristics); the procedure above constitutes the adoption of the optimal solution with equivalent criteria.

At the point *Xq* there exist: the characteristics *f* 1(*Xq*), ..., *f* 4(*Xq*), the relative estimates λ1(*Xq*), ..., λ4(*Xq*), and the maximum relative level λ0 ≤ λ*k*(*Xq*) ∀*k* ∈ *K*, such that *Xq* is the optimal solution at the set priority of the second criterion (characteristic) in relation to the other criteria. The procedure for obtaining the point *Xq* is the adoption of the optimal solution at the set priority of the second criterion.

Based on the theory of vector optimization, methods of solution of vector problems with equivalent criteria and a given priority of criterion allow the choice of any point from the set of Pareto optimal points and demonstration of the optimality of this point.

**Conclusions.** The problem of the adoption of the optimum decision in a complex technical system, based on a set of functional characteristics, is one of the most important problems of system analysis and design.

#### **8. The Methodology of Making Optimal Decisions with the Functional and Experimental Data (For Example, a Problem with Four Parameters)**

For the studied object, a system, the following are known: data on the functional characteristics, discrete values of separate characteristics, and data on the restrictions imposed on the functioning of the system. The process of modeling such a system is presented in the form of a methodology: "The methodology of making the optimal decision based on the functional and experimental data".

The methodology includes a number of stages.


#### *8.1. Formation of Technical Specifications (Source Data) for the Numerical Simulation of the System*

We will consider a problem "Numerical modeling of the system" in which data on some set of functional characteristics (definiteness conditions), discrete values of characteristics (an uncertainty condition) and the restrictions imposed on the functioning of the technical system are known [6–10,13,15,20,22].

It is given that the functioning of the system is defined by four parameters *X* = {*x*1, *x*2, *x*3, *x*4}, a vector of (controlled) variables. The basic data for the solution of the problem are the four characteristics (criteria) *F*(*X*) = {*f* 1(*X*), *f* 2(*X*), *f* 3(*X*), *f* 4(*X*)}, whose values depend on the vector *X*.

*The definiteness condition*. For the first and third characteristics, *f* 1(*X*) and *f* 3(*X*), the functional dependence on the parameters *X* is known (formulas are indexed within the individual section (methods)):

$$\begin{aligned} f\_1(\mathbf{X}) &= 269.867 - 1.8746 \times \mathbf{x}\_1 - 1.7469 \times \mathbf{x}\_2 + 0.8939 \times \mathbf{x}\_3 \\ &+ 1.0937 \times \mathbf{x}\_4 + 0.0484 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0.0052 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - \\ &0.0141 \times \mathbf{x}\_1 \times \mathbf{x}\_4 + 0.0037 \times \mathbf{x}\_2 \times \mathbf{x}\_3 - 0.0052 \times \mathbf{x}\_2 \times \mathbf{x}\_4 - \\ &0.0002 \times \mathbf{x}\_3 \times \mathbf{x}\_4 + 0.0119 \times \mathbf{x}\_1^2 + 0.0035 \times \mathbf{x}\_2^2 - 0.002 \times \mathbf{x}\_3^2 \\ &- 0.0042 \times \mathbf{x}\_4^2 \end{aligned} \tag{84}$$

$$\begin{aligned} f\_3(\mathbf{X}) &= 19.253 - 0.0081 \times \mathbf{x}\_1 - 0.7005 \times \mathbf{x}\_2 - 0.3605 \times \mathbf{x}\_3 \\ &+ 0.9769 \times \mathbf{x}\_4 + 0.0126 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.0644 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - 0 \times \mathbf{x}\_1 \times \mathbf{x}\_4 \\ &+ 0.0396 \times \mathbf{x}\_2 \times \mathbf{x}\_3 + 0.0002 \times \mathbf{x}\_2 \times \mathbf{x}\_4 + 0.0004 \times \mathbf{x}\_3 \times \mathbf{x}\_4 - \\ &0.0016 \times \mathbf{x}\_1^2 + 0.0027 \times \mathbf{x}\_2^2 + 0.0045 \times \mathbf{x}\_3^2 - 0.0235 \times \mathbf{x}\_4^2 \end{aligned} \tag{85}$$

$$\text{Parametric restrictions: } 22 \le \mathbf{x}\_1 \le 88, \, 0 \le \mathbf{x}\_2 \le 66, \, 2.2 \le \mathbf{x}\_3 \le 8.8, \, 2.2 \le \mathbf{x}\_4 \le 8.8. \tag{86}$$
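As a quick sanity check of the definiteness part, the quadratic characteristic of Equation (84) can be scanned over the box of Equation (86); a Python sketch (the 7-point grid per axis is an arbitrary choice):

```python
import itertools
import numpy as np

def f1(x1, x2, x3, x4):
    # First characteristic, Eq. (84)
    return (269.867 - 1.8746 * x1 - 1.7469 * x2 + 0.8939 * x3 + 1.0937 * x4
            + 0.0484 * x1 * x2 - 0.0052 * x1 * x3 - 0.0141 * x1 * x4
            + 0.0037 * x2 * x3 - 0.0052 * x2 * x4 - 0.0002 * x3 * x4
            + 0.0119 * x1 ** 2 + 0.0035 * x2 ** 2 - 0.002 * x3 ** 2
            - 0.0042 * x4 ** 2)

# Coarse grid over the parametric restrictions of Eq. (86)
grids = [np.linspace(22, 88, 7), np.linspace(0, 66, 7),
         np.linspace(2.2, 8.8, 7), np.linspace(2.2, 8.8, 7)]
values = [f1(*p) for p in itertools.product(*grids)]
# min(values), max(values) preview the range of f1 over the admissible box
```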

*The uncertainty condition*. For the second and fourth characteristic results of the experimental data, the sizes of parameters and corresponding characteristics are known. Numerical values of parameters *X* and characteristics of *y*2(*X*) and *y*4(*X*) are presented in Table 7.


**Table 7.** Numerical values of parameters and characteristics of the system.



From the assessment of the first and third characteristics (criteria), it is required to obtain: *f* 1(*X*) → max, *f* 3(*X*) → max, and for the second and fourth characteristics: *y*2(*X*) → min, *y*4(*X*) → min. The parameters *X* = {*x*1, *x*2, *x*3, *x*4} change within the following limits:

*x*<sup>1</sup> ∈ [22. 55. 88.], *x*<sup>2</sup> ∈ [0. 33. 66.], *x*<sup>3</sup> ∈ [2.2 5.5 8.8], *x*<sup>4</sup> ∈ [2.2 5.5 8.8].

The requirements are as follows: to construct a model of the system in the form of a vector problem, to solve a vector problem with equivalent criteria, to choose a priority criterion, to establish the numerical value of the priority criterion, and to make the best decision (optimum) with a specified priority criterion.

Note that using the MATLAB system, the author developed the software for the decision making of a vector problem of mathematical programming. The vector problem includes four variables (parameters of the technical system), *X* = {*x*1, *x*2, *x*3, *x*4}, and four criteria (characteristics), *F*(*X*) = {*f* 1(*X*), *f* 2(*X*), *f* 3(*X*), *f* 4(*X*)}. However, for each new set of data (new system) the program is configured individually. In the software, the number of criteria *F*(*X*) = {*f* 1(*X*), *f* 2(*X*), ... , *f* 6(*X*)} constructed under uncertainty conditions (such as *y*2(*X*), *y*4(*X*) in Table 7) can vary between zero (i.e., all criteria are constructed under the conditions of determinacy) and six (i.e., all criteria are constructed under the conditions of uncertainty).

#### *8.2. Creation of a Mathematical and Numerical Model of the System under the Conditions of Definiteness and Indeterminacy*

Creating a numerical model of the system includes the following sections:


#### 8.2.1. Mathematical Model of the System

We will present the model of the system under the conditions of definiteness and uncertainty in total:

$$\text{opt } F(X) \,=\, \{\max F\_1(X) \,=\, \{\max f\_k(X), k \,=\, \overline{1, K\_1^{def}}\},\tag{87}$$

$$\max I\_1(X) \,=\, \{\max \{f\_k(X\_i), i = \overline{1, M}\}^T, k = \overline{1, K\_1^{unc}}\},\tag{88}$$

$$\min F\_2(X) \;=\; \{\min f\_k(X), k \;=\; \overline{1, K\_2^{def}}\},\tag{89}$$

$$\min I\_2(X) = \{\min \{f\_k(X\_i), i = \overline{1, M}\}^T, k = \overline{1, K\_2^{unc}}\}\},\tag{90}$$

$$\text{at restrictions } f\_k^{\min} \le f\_k(X) \le f\_k^{\max},\ k = \overline{1, K}, \ x\_j^{\min} \le x\_j \le x\_j^{\max},\ j = \overline{1, N}\tag{91}$$

where *X* = {*xj*, *j* = 1, ..., *N*} is the vector of controlled variables (design data); *F*(*X*) = {*F*1(*X*), *F*2(*X*), *I*1(*X*), *I*2(*X*)} is the vector criterion of Equations (87)–(91), in which each component represents a vector of criteria (characteristics) of the system that depend, functionally or through discrete values, on the vector of variables *X*; *F*1(*X*) = {*fk*(*X*), *k* = 1, ..., *K*1*def*} and *F*2(*X*) = {*fk*(*X*), *k* = 1, ..., *K*2*def*} are the sets of max and min functions, respectively; *I*1(*X*) = {{*fk*(*Xi*), *i* = 1, ..., *M*}T, *k* = 1, ..., *K*1*unc*} and *I*2(*X*) = {{*fk*(*Xi*), *i* = 1, ..., *M*}T, *k* = 1, ..., *K*2*unc*} are the sets of matrices of max and min, respectively; *K*1*def*, *K*2*def* (*definiteness*) and *K*1*unc*, *K*2*unc* (*uncertainty*) are the sets of max and min criteria created under the conditions of definiteness and uncertainty. In Equation (91), *fk*min ≤ *fk*(*X*) ≤ *fk*max, *k* = 1, ..., *K* is the vector function of the restrictions imposed on the functioning of the technical system, and *xj*min ≤ *xj* ≤ *xj*max, *j* = 1, ..., *N* are the parametric restrictions.

It is assumed that the functions *fk*(*X*), *k* = 1, *K* are differentiable and convex, *gi*(*X*), *i* = 1, *M* are continuous, and the set of admissible points *S* given by the constraints of Equation (5) is non-empty and compact: *S* = {*X* ∈ *RN* | *G*(*X*) ≤ 0, *X*min ≤ *X* ≤ *X*max} ≠ ∅.

#### 8.2.2. Building a Model under the Conditions of Certainty

Construction under the conditions of definiteness is defined by the functional dependence of each characteristic and the restrictions on the parameters of the technical system. In our example, two characteristics, Equations (92) and (93), and the restrictions of Equation (94) are known:

$$\begin{aligned} f\_1(\mathbf{X}) ={}& 269.867 - 1.8746 \times \mathbf{x}\_1 - 1.7469 \times \mathbf{x}\_2 + 0.8939 \times \mathbf{x}\_3 + 1.0937 \times \mathbf{x}\_4 \\ &+ 0.0484 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0.0052 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - 0.0141 \times \mathbf{x}\_1 \times \mathbf{x}\_4 + 0.0037 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &- 0.0052 \times \mathbf{x}\_2 \times \mathbf{x}\_4 - 0.0002 \times \mathbf{x}\_3 \times \mathbf{x}\_4 + 0.0119 \times \mathbf{x}\_1^2 + 0.0035 \times \mathbf{x}\_2^2 - 0.002 \times \mathbf{x}\_3^2 - 0.0042 \times \mathbf{x}\_4^2, \end{aligned} \tag{92}$$

$$\begin{aligned} f\_4(\mathbf{X}) ={}& 19.253 - 0.0081 \times \mathbf{x}\_1 - 0.7005 \times \mathbf{x}\_2 - 0.3605 \times \mathbf{x}\_3 + 0.9769 \times \mathbf{x}\_4 \\ &+ 0.0126 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.0644 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - 0 \times \mathbf{x}\_1 \times \mathbf{x}\_4 + 0.0396 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &+ 0.0002 \times \mathbf{x}\_2 \times \mathbf{x}\_4 + 0.0004 \times \mathbf{x}\_3 \times \mathbf{x}\_4 - 0.0016 \times \mathbf{x}\_1^2 + 0.0027 \times \mathbf{x}\_2^2 + 0.0045 \times \mathbf{x}\_3^2 - 0.0235 \times \mathbf{x}\_4^2, \end{aligned} \tag{93}$$
 
$$\text{restrictions: } 22 \le \mathbf{x}\_1 \le 88,\ 0 \le \mathbf{x}\_2 \le 66,\ 2.2 \le \mathbf{x}\_3 \le 8.8,\ 2.2 \le \mathbf{x}\_4 \le 8.8,\tag{94}$$

These data are further used in the creation of the mathematical model of the technical system.
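As a quick sanity check, the characteristics of Equations (92) and (93) can be evaluated directly. The Python sketch below is not part of the original MATLAB workflow; the coefficients are simply transcribed from the equations above, and the evaluation point is the lower corner of the box of Equation (94).

```python
# Characteristics of the technical system, transcribed from Equations (92)-(93).
# A sketch for checking values inside the box of restrictions (94).

def f1(x1, x2, x3, x4):
    """First characteristic, Equation (92)."""
    return (269.867 - 1.8746*x1 - 1.7469*x2 + 0.8939*x3 + 1.0937*x4
            + 0.0484*x1*x2 - 0.0052*x1*x3 - 0.0141*x1*x4
            + 0.0037*x2*x3 - 0.0052*x2*x4 - 0.0002*x3*x4
            + 0.0119*x1**2 + 0.0035*x2**2 - 0.002*x3**2 - 0.0042*x4**2)

def f4(x1, x2, x3, x4):
    """Fourth characteristic, Equation (93); the x1*x4 term has a zero coefficient."""
    return (19.253 - 0.0081*x1 - 0.7005*x2 - 0.3605*x3 + 0.9769*x4
            + 0.0126*x1*x2 + 0.0644*x1*x3 + 0.0*x1*x4
            + 0.0396*x2*x3 + 0.0002*x2*x4 + 0.0004*x3*x4
            - 0.0016*x1**2 + 0.0027*x2**2 + 0.0045*x3**2 - 0.0235*x4**2)

# Restrictions of Equation (94): 22 <= x1 <= 88, 0 <= x2 <= 66,
# 2.2 <= x3 <= 8.8, 2.2 <= x4 <= 8.8. Evaluate at the lower corner:
print(f1(22, 0, 2.2, 2.2), f4(22, 0, 2.2, 2.2))
```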

#### 8.2.3. Construction under the Conditions of Uncertainty

Construction under the conditions of uncertainty involves the use of the qualitative and quantitative descriptions of the technical system obtained by the "input–output" principle shown in Table 6. Transformation of the information (the basic data of *y*2(*X*), *y*3(*X*)) into the functional form of *f* 2(*X*), *f* 3(*X*) is carried out by the use of mathematical methods (i.e., regression analysis).

The basic data of Table 1 are created in the MATLAB system in the form of a matrix:

$$I = \|\mathbf{X}, \mathbf{Y}\| = \left\{ \mathbf{x}\_{i1}, \mathbf{x}\_{i2}, y\_{i3}, y\_{i4},\ i = \overline{1, M} \right\}.\tag{95}$$

For each experimental set function *yk*, *k* = 2, 3, regression was performed in MATLAB using the method of least squares, min ∑(*yi* − *y*ˆ*i*)2, *i* = 1, *M*. A polynomial *Ak*, defining the interrelationship of the parameters *Xi* = {*x*1*i*, *x*2*i*, *x*3*i*, *x*4*i*} and the functions *yki* = *f*(*Xi*, *Ak*), *k* = 2, 3, is formed for this purpose.

As a result of the calculations, we obtain the system of coefficients *Ak* = *{A0k, A1k,* ... , *A14k*} which define the coefficients of the quadratic polynomial (function):

$$\begin{aligned} f\_k(X, A) ={}& A\_{0k} + A\_{1k} \mathbf{x}\_1 + A\_{2k} \mathbf{x}\_2 + A\_{3k} \mathbf{x}\_3 + A\_{4k} \mathbf{x}\_4 + A\_{5k} \mathbf{x}\_1 \mathbf{x}\_2 + A\_{6k} \mathbf{x}\_1 \mathbf{x}\_3 + A\_{7k} \mathbf{x}\_1 \mathbf{x}\_4 + A\_{8k} \mathbf{x}\_2 \mathbf{x}\_3 \\ &+ A\_{9k} \mathbf{x}\_2 \mathbf{x}\_4 + A\_{10k} \mathbf{x}\_3 \mathbf{x}\_4 + A\_{11k} \mathbf{x}\_1^2 + A\_{12k} \mathbf{x}\_2^2 + A\_{13k} \mathbf{x}\_3^2 + A\_{14k} \mathbf{x}\_4^2,\quad k = 2, 3. \end{aligned} \tag{96}$$
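The regression step that produces the coefficients *Ak* of Equation (96) is an ordinary least-squares problem. The Python fragment below is illustrative only: it fits the 15-term quadratic polynomial of Equation (96) to synthetic, noise-free data (the actual values of Table 6 are not reproduced here) and recovers the generating coefficients.

```python
import numpy as np

# Sketch of the least-squares step: fit the 15-term quadratic polynomial of
# Equation (96) to synthetic "experimental" data. The design-matrix columns
# follow the term order of Equation (96).

rng = np.random.default_rng(0)

def design_row(x1, x2, x3, x4):
    return [1.0, x1, x2, x3, x4, x1*x2, x1*x3, x1*x4, x2*x3,
            x2*x4, x3*x4, x1**2, x2**2, x3**2, x4**2]

# Synthetic ground-truth coefficients A_k and sample points inside the box (94).
A_true = rng.normal(size=15)
X = np.column_stack([rng.uniform(22, 88, 60), rng.uniform(0, 66, 60),
                     rng.uniform(2.2, 8.8, 60), rng.uniform(2.2, 8.8, 60)])
Phi = np.array([design_row(*row) for row in X])
y = Phi @ A_true  # noise-free, so the fit should recover A_true

# min sum_i (y_i - f(X_i, A))^2 -- the analogue of the MATLAB regression step.
A_fit, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.max(np.abs(A_fit - A_true)))
```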

As a result of the calculations of the coefficients *Ak*, *k* = 2, we obtain the *f* 2(*X*) function:

$$\begin{array}{ll} f\_2(\mathbf{X}) &= 875.3 + 23.893 \times \mathbf{x}\_1 - 30.866 \times \mathbf{x}\_2 - 25.858 \times \mathbf{x}\_3 - 45 \times \mathbf{x}\_4 \\ &- 0.6984 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.4276 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0.6793 \times \mathbf{x}\_1 \times \mathbf{x}\_4 \\ &- 0.1167 \times \mathbf{x}\_2 \times \mathbf{x}\_3 + 0.2969 \times \mathbf{x}\_2 \times \mathbf{x}\_4 - 0.0093 \times \mathbf{x}\_3 \times \mathbf{x}\_4 \\ &+ 0.0362 \times \mathbf{x}\_1^2 + 0.0331 \times \mathbf{x}\_2^2 + 2.9158 \times \mathbf{x}\_3^2 + 2.4052 \times \mathbf{x}\_4^2 \end{array} \tag{97}$$

As a result of the calculations of coefficients *Ak*, *k* =3, we obtain the *f* 3(*X*) function:

$$\begin{aligned} f\_3(\mathbf{X}) ={}& 43.734 + 0.6598 \times \mathbf{x}\_1 + 0.4493 \times \mathbf{x}\_2 - 0.3094 \times \mathbf{x}\_3 - 1.8334 \times \mathbf{x}\_4 \\ &- 0.01 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0.0062 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0.0146 \times \mathbf{x}\_1 \times \mathbf{x}\_4 - 0.013 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &+ 0.0121 \times \mathbf{x}\_2 \times \mathbf{x}\_4 - 0.0004 \times \mathbf{x}\_3 \times \mathbf{x}\_4 - 0.0003 \times \mathbf{x}\_1^2 - 0.0002 \times \mathbf{x}\_2^2 + 0.0254 \times \mathbf{x}\_3^2 + 0.0939 \times \mathbf{x}\_4^2, \end{aligned} \tag{98}$$

$$\text{at restrictions } 22 \le \mathbf{x}\_1 \le 88,\ 0 \le \mathbf{x}\_2 \le 66,\ 2.2 \le \mathbf{x}\_3 \le 8.8,\ 2.2 \le \mathbf{x}\_4 \le 8.8.\tag{99}$$

The minimum and maximum values of the experimental data *y*1(*X*), *y*2(*X*), *y*4(*X*) are presented in the lower part of Table 1. The minimum and maximum values of the functions *f* 1(*X*), *f* 2(*X*), *f* 4(*X*) differ slightly from the experimental data. For comparison, the calculated values of the function *f* 4(*X*) at the specified points *X* are given in the right part of the eighth column of Table 7. The index of correlation and the coefficients of determination are presented in the lower lines of Table 7.

Results of the regression analysis of Equations (97)–(99) are further used in the creation of the mathematical model of the technical system.

#### 8.2.4. Construction of a Numerical Model of the System under Certainty and Uncertainty

For the creation of a numerical model of the system, we use the functions obtained under conditions of definiteness (Equations (92) and (93)) and uncertainty (Equations (97) and (98)), and parametric restrictions (Equations (94) and (99)).

We consider the functions of Equations (92), (93), (97) and (98) as the criteria defining the functioning of the system. The set of criteria *K* = 4 includes two criteria *f* 1(*X*), *f* 3(*X*) → max and two criteria *f* 2(*X*), *f* 4(*X*) → min. As a result, the model of the functioning of the system is presented as a vector problem of mathematical programming:

$$\begin{array}{l} \text{opt}\,F(X) = \langle \max F\_1(X) \rangle = \langle \max f\_1(X) \rangle \cong 269.867 - 1.8746 \times \mathbf{x}\_1 \\ -1.7469 \times \mathbf{x}\_2 + 0.8939 \times \mathbf{x}\_3 + 1.0937 \times \mathbf{x}\_4 + 0.0484 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - \\ 0.0052 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - 0.0141 \times \mathbf{x}\_1 \times \mathbf{x}\_4 + 0.0037 \times \mathbf{x}\_2 \times \mathbf{x}\_3 - 0.0052 \times \mathbf{x}\_2 \times \mathbf{x}\_4 \\ -0.0002 \times \mathbf{x}\_3 \times \mathbf{x}\_4 + 0.0119 \times \mathbf{x}\_1^2 + 0.0035 \times \mathbf{x}\_2^2 - 0.002 \times \mathbf{x}\_3^2 - 0.0042 \times \mathbf{x}\_4^2 \end{array} \tag{100}$$

$$\begin{array}{ll}\max f\_3(\mathbf{X}) \equiv 43.734 + 0.659 \times \mathbf{x}\_1 + 0.4493 \times \mathbf{x}\_2 - 0.3094 \times \mathbf{x}\_3\\ -1.8334 \times \mathbf{x}\_4 - 0.01 \times \mathbf{x}\_1 \times \mathbf{x}\_2 - 0.0062 \times \mathbf{x}\_1 \times \mathbf{x}\_3 + 0.0146 \times \mathbf{x}\_1 \times \mathbf{x}\_4\\ -0.013 \times \mathbf{x}\_2 \times \mathbf{x}\_3 + 0.0121 \times \mathbf{x}\_2 \times \mathbf{x}\_4 - 0.0004 \times \mathbf{x}\_3 \times \mathbf{x}\_4 - 0.0003 \times \mathbf{x}\_1^2\\ -0.0002 \times \mathbf{x}\_2^2 + 0.0254 \times \mathbf{x}\_3^2 + 0.0939 \times \mathbf{x}\_4^2, \end{array} \tag{101}$$

$$\begin{array}{l}\min F\_{2}(X) = \langle \min f\_{2}(X) \equiv 875.3 + 23.893 \times \mathbf{x}\_{1} - 30.866 \times \mathbf{x}\_{2} - \\\ 25.858 \times \mathbf{x}\_{3} - 45 \times \mathbf{x}\_{4} - 0.6984 \times \mathbf{x}\_{1} \times \mathbf{x}\_{2} + 0.4276 \times \mathbf{x}\_{1} \times \mathbf{x}\_{3} + \\\ 0.6793 \times \mathbf{x}\_{1} \times \mathbf{x}\_{4} - 0.1167 \times \mathbf{x}\_{2} \times \mathbf{x}\_{3} + 0.2969 \times \mathbf{x}\_{2} \times \mathbf{x}\_{4} - \\\ 0.0093 \times \mathbf{x}\_{3} \times \mathbf{x}\_{4} + 0.0362 \times \mathbf{x}\_{1}^{2} + 0.0331 \times \mathbf{x}\_{2}^{2} + 2.9158 \times \mathbf{x}\_{3}^{2} + 2.4052 \times \mathbf{x}\_{4}^{2}.\end{array} \tag{102}$$

$$\begin{aligned} \min f\_4(\mathbf{X}) \equiv{}& 19.25 - 0.008 \times \mathbf{x}\_1 - 0.7005 \times \mathbf{x}\_2 - 0.3605 \times \mathbf{x}\_3 + 0.977 \times \mathbf{x}\_4 \\ &+ 0.0126 \times \mathbf{x}\_1 \times \mathbf{x}\_2 + 0.0644 \times \mathbf{x}\_1 \times \mathbf{x}\_3 - 0 \times \mathbf{x}\_1 \times \mathbf{x}\_4 + 0.0396 \times \mathbf{x}\_2 \times \mathbf{x}\_3 \\ &+ 0.0002 \times \mathbf{x}\_2 \times \mathbf{x}\_4 + 0.0004 \times \mathbf{x}\_3 \times \mathbf{x}\_4 - 0.0016 \times \mathbf{x}\_1^2 + 0.0027 \times \mathbf{x}\_2^2 + 0.0045 \times \mathbf{x}\_3^2 - 0.0235 \times \mathbf{x}\_4^2, \end{aligned} \tag{103}$$

$$\text{at restrictions: } 22 \le \mathbf{x}\_1 \le 88,\ 0 \le \mathbf{x}\_2 \le 66,\ 2.2 \le \mathbf{x}\_3 \le 8.8,\ 2.2 \le \mathbf{x}\_4 \le 8.8.\tag{104}$$

The vector problem of mathematical programming of Equations (100)–(104) represents the model of decision making under certainty and uncertainty in the aggregate.

#### *8.3. The Solution of the Vector Problem of Mathematical Programming (VPMP)—Model of the System with Equivalent Criteria*

To solve the vector problem of mathematical programming of Equations (14)–(18), methods based on the axioms of the normalization of criteria and the principle of guaranteed results are presented, which follow from Axiom 1 and the principle of optimality 1.

The solution of the vector problem of Equations (14)–(18) follows a sequence of steps.

*Step 1*. Equations (100)–(104) are solved for each criterion separately, using the MATLAB function *fmincon* ( ... ); the use of *fmincon* ( ... ) is considered in [7–10,20,22].

As a result of the calculation for each criterion we obtain the optimum points *X*\**k* and *f*\**k* = *fk*(*X*\**k*), *k* = 1, *K*, the values of the criteria at these points, i.e., the best decision for each criterion:

*X*\*1 = {*x*1 = 88.0, *x*2 = 66.0, *x*3 = 8.8, *x*4 = 2.2}, *f*\*1 = *f* 1(*X*\*1) = −535.06;
*X*\*2 = {*x*1 = 22.0, *x*2 = 0.0, *x*3 = 2.83, *x*4 = 6.25}, *f*\*2 = *f* 2(*X*\*2) = 1301.2;
*X*\*3 = {*x*1 = 88.0, *x*2 = 0.0, *x*3 = 2.2, *x*4 = 8.8}, *f*\*3 = *f* 3(*X*\*3) = −100.15;
*X*\*4 = {*x*1 = 22.0, *x*2 = 62.17, *x*3 = 2.2, *x*4 = 2.2}, *f*\*4 = *f* 4(*X*\*4) = 12.247.

The restrictions of Equation (104) and the optimum points *X*\*1, *X*\*2, *X*\*3, *X*\*4 in coordinates {*x*1, *x*2} are presented in Figure 13.

**Figure 13.** The Pareto set *S*o ⊂ *S* in a two-dimensional system of coordinates {*x*1, *x*2}.

Step 2. We define the worst unchangeable part of each criterion (anti-optimum):


Step 3. We analyze the set of Pareto optimal points. At the optimal points *X*\* = {*X*\*1, *X*\*2, *X*\*3, *X*\*4}, the values of the criterion functions *F*(*X*\*) = {*fq*(*X*\**k*), *q* = 1, *K*, *k* = 1, *K*} are determined. We calculate the vector *D* = (*d*1, *d*2, *d*3, *d*4)T of deviations of each criterion on the admissible set *S*: *dk* = *fk*\* − *fk* 0, *k* = 1, 4, and the matrix of relative estimates λ(*X*\*) = {λ*q*(*X*\**k*), *q* = 1, *K*, *k* = 1, *K*}, where λ*k*(*X*) = (*fk*(*X*) − *fk* 0)/*dk*.

$$F(X^\*) = \begin{vmatrix} 535.1 & 1731.9 & 58.1 & 117.0 \\ 317.6 & 1301.2 & 51.3 & 26.5 \\ 192.5 & 3614.3 & 100.2 & 24.6 \\ 244.0 & 2458.2 & 67.7 & 12.2 \\ \end{vmatrix}, D = \begin{vmatrix} 291.8 \\ -2602.0 \\ 50.12 \\ -109.58 \\ \end{vmatrix}, \tag{105}$$
 
$$\lambda(X^\*) = \begin{vmatrix} 1.0000 & 0.8345 & 0.1603 & 0.0443 \\ 0.2548 & 1.0000 & 0.0244 & 0.8697 \\ -0.1740 & 0.1110 & 1.0000 & 0.8870 \\ 0.0027 & 0.5553 & 0.3532 & 1.0000 \\ \end{vmatrix} \tag{105}$$

The analysis of the values of the criteria in relative estimates shows that at each optimal point *X*\* = {*X*\*1, *X*\*2, *X*\*3, *X*\*4} the relative estimate of the corresponding criterion is equal to unity, while the other criteria are much less than unity. It is required to find the points (parameters) at which all the relative estimates are closest to unity. Step 4 is directed to the solution of this problem.
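The matrix of relative estimates in Equation (105) can be reproduced from *F*(*X*\*) and *D* alone. In the Python sketch below, the anti-optimum values *f* 0*k* are taken as *f*\**k* − *dk* (an assumption consistent with the definition *dk* = *fk*\* − *fk* 0), which makes the diagonal equal to unity by construction; the numbers are transcribed from Equation (105).

```python
# Reproducing the matrix of relative estimates of Equation (105) from F(X*)
# and D. Here f0_k = f*_k - d_k is taken as the anti-optimum value, so that
# lambda_k(X*_k) = 1 by construction; numbers are transcribed from the text.

F = [[535.1, 1731.9,  58.1, 117.0],
     [317.6, 1301.2,  51.3,  26.5],
     [192.5, 3614.3, 100.2,  24.6],
     [244.0, 2458.2,  67.7,  12.2]]
f_star = [535.1, 1301.2, 100.2, 12.2]      # best value of each criterion
d = [291.8, -2602.0, 50.12, -109.58]       # deviations D (negative for min criteria)
f0 = [f_star[k] - d[k] for k in range(4)]  # anti-optimum values

# lambda_q(X*_k) = (f_k(X*_q) - f0_k) / d_k
lam = [[(F[q][k] - f0[k]) / d[k] for k in range(4)] for q in range(4)]
for row in lam:
    print(["%.4f" % v for v in row])
```

The off-diagonal entries agree with the printed matrix to within rounding.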

Step 4. Creation of the λ-problem is carried out in two stages: first, the maximin problem of optimization with the normalized criteria is constructed:

$$
\lambda^0 = \max\_{\mathbf{x}} \min\_k \lambda\_k(X), G(X) \le 0, X \ge 0,\tag{106}
$$

Second, this is transformed into a standard problem of mathematical programming (the λ-problem):

$$
\lambda^0 = \max \lambda,\tag{107}
$$

$$\text{at restrictions } \lambda - \left(f\_1(X) - f\_1^o\right) / \left(f\_1^\* - f\_1^o\right) \le 0,\tag{108}$$

$$
\lambda - \left( f\_2(X) - f\_2^o \right) / \left( f\_2^\* - f\_2^o \right) \le 0 \tag{109}
$$

$$
\lambda - \left( f\_3(X) - f\_3^o \right) / \left( f\_3^\* - f\_3^o \right) \le 0 \tag{110}
$$

$$
\lambda - \left( f\_4(X) - f\_4^o \right) / \left( f\_4^\* - f\_4^o \right) \le 0 \tag{111}
$$

$$0 \le \lambda \le 1, \, 22 \le \mathbf{x}\_1 \le 88, \, 0 \le \mathbf{x}\_2 \le 66, \, 2.2 \le \mathbf{x}\_3 \le 8.8, \, 2.2 \le \mathbf{x}\_4 \le 8.8,\tag{112}$$

where the vector of unknowns has the dimension *N* + 1: *X* = {*x*1, ... , *xN*, λ}, and the functions *f* 1(*X*), *f* 2(*X*), *f* 3(*X*), *f* 4(*X*) correspond to Equations (100)–(103), respectively. Substituting the numerical values of the functions *f* 1(*X*), *f* 2(*X*), *f* 3(*X*), *f* 4(*X*), we obtain the λ-problem in the following form:

$$
\lambda^0 = \max \lambda,\tag{113}
$$

$$\text{At restrictions } \lambda - \frac{296.85 - 1.875 \times \mathbf{x}\_1 \ \dots + 0.0734 \times \mathbf{x}\_1 \times \mathbf{x}\_2 \ \dots - 0.0108 \times \mathbf{x}\_1^2 \ \dots - f\_1^o}{f\_1^\* - f\_1^o} \le 0,\qquad(114)$$

$$\lambda - \frac{43.73 + 0.659 \times \mathbf{x}\_1 \ \dots \ - \ 0.01 \times \mathbf{x}\_1 \times \mathbf{x}\_2 \ \dots - 0.0003 \times \mathbf{x}\_1^2 \ \dots - f\_3^o}{f\_3^\* - f\_3^o} \le 0,\tag{115}$$

$$\lambda - \frac{875.3 + 23.893 \times \mathbf{x}\_1 + \dots \dots - 0.6984 \times \mathbf{x}\_1 \times \mathbf{x}\_2 \dots + 0.036 \times \mathbf{x}\_1^2 \dots \dots - f\_2^o}{f\_2^\* - f\_2^o} \le 0,\tag{116}$$

$$\lambda - \frac{19.253 - 0.0081 \times \mathbf{x}\_1 \dots + 0.0126 \times \mathbf{x}\_1 \times \mathbf{x}\_2 \dots + (-0.0016 \times \mathbf{x}\_1^2) \dots - f\_4^o}{f\_4^\* - f\_4^o} \le 0,\tag{117}$$

$$\text{at } 0 \le \lambda \le 1,\ 22 \le \mathbf{x}\_1 \le 88,\ 0 \le \mathbf{x}\_2 \le 66,\ 2.2 \le \mathbf{x}\_3 \le 8.8,\ 2.2 \le \mathbf{x}\_4 \le 8.8.\tag{118}$$

Using function fmincon( ... ):

[Xo,Lo] = fmincon('Z\_TehnSist\_4Krit\_L',X0,Ao,bo,Aeq,beq,lbo,ubo,'Z\_TehnSist\_LConst',options). As a result of the solution of the vector problem of mathematical programming in Equations (14)–(18) with equivalent criteria and the λ-problem corresponding to Equations (113)–(118), we obtain:

$$X^0 = \{X^o, \lambda^o\} = \{\mathbf{x}\_1 = 52.9, \mathbf{x}\_2 = 36.097, \mathbf{x}\_3 = 8.8, \mathbf{x}\_4 = 2.2, \lambda^0 = 0.3179\},\tag{119}$$

i.e., the optimum point of the design data of the system, point *X*0, which is presented in Figure 8, and *fk*(*X*0), *k* = 1, *K*, the values of the criteria (characteristics of the technical system):

$$f\_1(X^0) = 336.0,\ f\_2(X^0) = 2239.5,\ f\_3(X^0) = 65.962,\ f\_4(X^0) = 58.435,\tag{120}$$

and λ*k*(*X*0), *k* = 1, *K*, the values of the relative estimates:

$$
\lambda\_1(X^0) = 0.3179,\ \lambda\_2(X^0) = 0.6394,\ \lambda\_3(X^0) = 0.3179,\ \lambda\_4(X^0) = 0.5785,\tag{121}
$$

λ0 = 0.3179 is the maximum lower level among all the relative estimates measured in relative units: λ0 = min(λ1(*X*0), λ2(*X*0), λ3(*X*0), λ4(*X*0)) = 0.3179. The relative estimate λ0 is called the guaranteed result in relative units: the relative estimates λ*k*(*X*0), and hence the characteristics *fk*(*X*0) of the technical system, cannot be improved without worsening other characteristics.
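The normalization and guaranteed-result scheme of Steps 1–4 can be sketched end-to-end. The Python fragment below is a crude stand-in for the *fmincon*-based computation: it evaluates the four criteria of Equations (100)–(103) on a coarse grid, normalizes each by its grid-wise best and worst value, and takes the maximin point. Being grid-based, its numbers only approximate the printed solution.

```python
import numpy as np
from itertools import product

# Rough sketch of Steps 1-4 (normalization of criteria and the maximin
# "guaranteed result") on a coarse grid, used as a stand-in for fmincon.
# Coefficients follow the term order 1, x1..x4, x1x2, x1x3, x1x4, x2x3,
# x2x4, x3x4, x1^2..x4^2 of Equations (100)-(103).

C = {
    1: [269.867, -1.8746, -1.7469, 0.8939, 1.0937, 0.0484, -0.0052, -0.0141,
        0.0037, -0.0052, -0.0002, 0.0119, 0.0035, -0.002, -0.0042],
    2: [875.3, 23.893, -30.866, -25.858, -45.0, -0.6984, 0.4276, 0.6793,
        -0.1167, 0.2969, -0.0093, 0.0362, 0.0331, 2.9158, 2.4052],
    3: [43.734, 0.6598, 0.4493, -0.3094, -1.8334, -0.01, -0.0062, 0.0146,
        -0.013, 0.0121, -0.0004, -0.0003, -0.0002, 0.0254, 0.0939],
    4: [19.253, -0.0081, -0.7005, -0.3605, 0.9769, 0.0126, 0.0644, 0.0,
        0.0396, 0.0002, 0.0004, -0.0016, 0.0027, 0.0045, -0.0235],
}
sense = {1: +1.0, 2: -1.0, 3: +1.0, 4: -1.0}  # +1: maximize, -1: minimize

def f(k, x):
    x1, x2, x3, x4 = x
    terms = [1.0, x1, x2, x3, x4, x1*x2, x1*x3, x1*x4, x2*x3, x2*x4,
             x3*x4, x1*x1, x2*x2, x3*x3, x4*x4]
    return sum(c * t for c, t in zip(C[k], terms))

grid = list(product(np.linspace(22, 88, 12), np.linspace(0, 66, 12),
                    np.linspace(2.2, 8.8, 5), np.linspace(2.2, 8.8, 5)))

# Steps 1-2: best (f*) and worst (f^0) value of each criterion on the grid.
best = {k: max(f(k, x) * sense[k] for x in grid) for k in C}
worst = {k: min(f(k, x) * sense[k] for x in grid) for k in C}

# Steps 3-4: relative estimates and the maximin (guaranteed-result) point.
def lam(k, x):
    return (f(k, x) * sense[k] - worst[k]) / (best[k] - worst[k])

x_opt = max(grid, key=lambda x: min(lam(k, x) for k in C))
lam0 = min(lam(k, x_opt) for k in C)
print(x_opt, lam0)
```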

We note that according to Theorem 1, at the point *X*0 criteria 1 and 3 are contradictory. This contradiction is defined by the equality λ1(*X*0) = λ3(*X*0) = λ0 = 0.3179, while the other criteria are subject to the inequality {λ2(*X*0) = 0.7954, λ4(*X*0) = 0.5557} > λ0.

Thus, Theorem 1 forms a basis for determining the correctness of the solution of a vector problem. In a vector problem of mathematical programming, as a rule, an equality holds for two criteria, λ0 = λ*q*(*X*0) = λ*p*(*X*0), *q*, *p* ∈ *K*, *X* ∈ *S*, and the other criteria are subject to the inequality λ0 ≤ λ*k*(*X*0) ∀*k* ∈ *K*, *k* ≠ *q*, *k* ≠ *p*.

#### *8.4. Geometric Interpretation of Results of the Decision in a Three-Dimensional Coordinate System in Relative Units*

In the admissible set of points *S* formed by the restrictions of Equation (32), the optimum points *X*\*1, *X*\*2, *X*\*3, *X*\*4 are united in a contour and presented as the set of Pareto optimal points *S*0 ⊂ *S* in Figure 13. The coordinates of these points, and the characteristics of the technical system in relative units λ1(*X*), λ2(*X*), λ3(*X*), λ4(*X*), are shown in Figure 14 in three-dimensional space, where the third axis λ is a relative estimate.

**Figure 14.** The solution of the λ-problem in a three-dimensional system of coordinates of *x*1, *x*<sup>2</sup> and λ.

**Discussion**. Looking at Figure 9, we can trace the changes of all the functions λ1(*X*), λ2(*X*), λ3(*X*), λ4(*X*) in four-dimensional space. We consider, for example, the optimum point *X*\*3. The function λ3(*X*) is created from the function *f* 3(*X*) with variable coordinates {*x*1, *x*2} and with constant coordinates {*x*3 = 8.8, *x*4 = 2.2}, taken from the optimum point *X*0 (33). At the point *X*\*3 the relative estimate λ3(*X*\*3) = 0.83 is shown in Figure 9 by a black point. However, we know that the relative estimate λ3(*X*\*3) obtained from the function *f* 3(*X*\*3) in the third step is equal to unity, which we designate as λΔ3(*X*\*3) = 1, and it is shown in Figure 9 by a red point. The difference between λΔ3(*X*\*3) = 1 and λ3(*X*\*3) = 0.83 is an error Δ = 0.17 due to the transition from four-dimensional (and generally *N*-dimensional) to two-dimensional space.

The point *X*\*1 and the corresponding relative estimates λ1(*X*\*1) and λΔ1(*X*\*1) are shown similarly.

Thus, for the first time in domestic and foreign practice, the transition from an *N*-dimensional to a two-dimensional measurement of a function, with the corresponding errors, is shown and geometrically illustrated in vector problems of mathematical programming.

#### *8.5. The Solution of a Vector Problem of Mathematical Programming—Model of the System at the Given Priority of the Criterion*

The decision maker is usually the system designer.

Step 1. We solve a vector problem with equivalent criteria. The algorithm of the decision is presented in Section 8.3. The numerical results of the solution of the vector problem are given above.

The Pareto set **S**0 ⊂ **S** lies between the optimum points X\*1, X\*3, X\*4, X\*2. We carry out the analysis of the Pareto set **S***o* ⊂ **S**. For this purpose, we connect the auxiliary points X\*1, X\*3, X\*4, X\*2, X\*1 with the point X0, which conditionally represents the center of the Pareto set. As a result, we obtain four subsets of points X ∈ **S***o q* ⊂ **S**0 ⊂ **S**, *q* = 1, 4. The subset **S***o*1 ⊂ **S**0 ⊂ **S** is characterized by the fact that the relative estimate λ1 ≥ λ2, λ3, λ4, i.e., in the region **S***o*1 the first criterion has priority over the others. Similarly, **S***o*2, **S***o*3, **S***o*4 are the subsets of points where the second, third and fourth criterion, respectively, has priority over the others. We designate the set of Pareto optimal points **S**0 = **S***o*1 ∪ **S***o*2 ∪ **S***o*3 ∪ **S***o*4. The coordinates of all the obtained points and relative estimates are presented in two-dimensional space {x1, x2} in Figure 13, and in three-dimensional space {x1, x2, λ} in Figure 14, where the third axis λ is a relative estimate. The restrictions of the set of Pareto optimal points in Figure 14 are lowered to −0.5 (so that the restrictions are visible). This information is also a basis for further research on the structure of the Pareto set.

If the results of the solution of the vector problem with equivalent criteria do not satisfy the decision maker, then the choice of the optimal solution is made from one of the subsets of points **S***o*1, **S***o*2, **S***o*3, **S***o*4. These subsets of Pareto points are shown in Figure 8 in the form of the functions *f* 1(*X*), *f* 2(*X*), *f* 3(*X*), *f* 4(*X*).

Step 2. Choice of the priority criterion *q* ∈ *K*. From the theory (see Theorem 1) it is known that at an optimum point *X*0 there are always two most inconsistent criteria, *q* ∈ *K* and *v* ∈ *K*, for which an equality holds in relative units: λ0 = λ*q*(*X*0) = λ*v*(*X*0), *q*, *v* ∈ *K*, *X* ∈ *S*. The others are subject to the inequalities λ0 ≤ λ*k*(*X*0) ∀*k* ∈ *K*, *k* ≠ *q*, *k* ≠ *v*.

In a model of the system in Equations (100)–(104) and the corresponding λ-problem in Equations (113)–(117), such criteria are the first and third:

$$
\lambda^0 = \lambda\_1(X^0) = \lambda\_3(X^0) = 0.3179. \tag{122}
$$

We show the λ1(*X*) and λ3(*X*) functions separately in Figure 15 for the optimum point *X*o = {*X*o, λ*o*}.

**Figure 15.** The solution of the λ-problem (first and third criteria) in a three-dimensional system of coordinates of *x*1, *x*<sup>2</sup> and λ.

All points and data are shown in Figure 14.

As a rule, the criterion which the decision maker would like to improve is taken from the pair of contradictory criteria. Such a criterion is called the "priority criterion", which we designate *q* = 3 ∈ *K*. This criterion is investigated in interaction with the first criterion, *k* = 1 ∈ *K*. We single out these two criteria from the set of all criteria *K* = 4, as shown in Figure 15.

On the display the message is given:

q=input ('Enter priority criterion (number) of *q* ='), have entered: *q* = 3.

Step 3. The numerical limits of the change of the size of the priority criterion *q* = 3 ∈ *K* are defined. For the priority criterion *q* = 3, the numerical limits in physical units upon transition from the optimal point *X*o (119) to the point *X*\**q* obtained in the first step are defined.

Information about the criterion for *q* = 3 is given on the screen:

$$f\_q(X^0) = 65.96 \le f\_q(X) \le 100.15 = f\_q(X\_q^\*),\ q = 3 \in K.\tag{123}$$

In relative units the criterion *q* = 3 changes within the following limits:

$$
\lambda\_q(X^0) = 0.3179 \le \lambda\_q(X) \le 1 = \lambda\_q(X^\*\_q), \; q = 3 \in \mathbb{K}.
$$

These data are analyzed.

Step 4. Choice of the size of priority criterion *q*∈*K* (decision making).

The message is displayed: "Enter the size of priority criterion *fq* = ", and we enter, for example, *fq* = 80.

Step 5. Calculation of a relative estimate.

For the chosen size of the priority criterion of *fq* =80 the relative assessment is calculated:

$$
\lambda\_q = \frac{f\_q - f\_q^o}{f\_q^\* - f\_q^o} = \frac{80 - 50.03}{100.15 - 50.03} = 0.5979,\tag{124}
$$

which upon transition from point *X*<sup>0</sup> to *X*<sup>∗</sup> *<sup>q</sup>* according to Equation (38) lies in the limits:

$$0.3179 = \lambda\_3(X^0) \le \lambda\_3 = 0.5979 \le \lambda\_3(X\_3^\*) = 1,\ q = 3 \in K.\tag{125}$$

Step 6. Calculation of the coefficient of linear approximation.

Assuming a linear nature of the change of the criterion *fq*(*X*) in Equation (123) and, accordingly, of the relative estimate λ*q*(*X*), using standard methods of linear approximation we calculate the coefficient of proportionality between λ*q*(*X*0) and λ*q*, which we call ρ:

$$\rho = \frac{\lambda\_q - \lambda\_q(X^0)}{\lambda\_q(X\_q^\*) - \lambda\_q(X^0)} = \frac{0.5979 - 0.3179}{1 - 0.3179} = 0.4106,\ q = 3 \in K.\tag{126}$$

Step 7. Calculation of the coordinates of priority criterion with the size *fq*.

Assuming a linear nature of the change of the vector *X*q = {*x*1, *x*2}, *q* = 3, we determine the coordinates of the point of the priority criterion with the size *fq* and the relative estimate (95):

$$X^q = \left\{ \mathbf{x}\_1 = X^0(1) + \rho(X\_q^\*(1) - X^0(1)),\ \mathbf{x}\_2 = X^0(2) + \rho(X\_q^\*(2) - X^0(2)) \right\},\tag{127}$$

where *X*0 = {*X*0(1) = 52.9, *X*0(2) = 36.097} and *X*\*3 = {*X*3\*(1) = 88.0, *X*3\*(2) = 0.0}.

As a result of the calculations, we obtain point coordinates: *Xq* = {*x*<sup>1</sup> = 67.31, *x*<sup>2</sup> = 21.27}.
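Steps 5–7 amount to a linear interpolation between *X*0 and *X*\**q*. The following Python sketch reproduces the numbers of Equations (124), (126) and (127); all the values are transcribed from the text.

```python
# Sketch of Steps 5-7: for the chosen size f_q = 80 of the priority
# criterion q = 3, compute the relative estimate, the coefficient of
# linear approximation and the point X^q, using the numbers of
# Equations (124), (126) and (127).

f_q, f_q0, f_q_star = 80.0, 50.03, 100.15  # chosen size, anti-optimum, optimum
lam_q = (f_q - f_q0) / (f_q_star - f_q0)   # Equation (124): ~0.5979

lam_X0 = 0.3179                            # lambda_q(X^0), Equation (121)
rho = (lam_q - lam_X0) / (1.0 - lam_X0)    # Equation (126): ~0.4106

X0 = (52.9, 36.097)                        # {x1, x2} of X^0, Equation (119)
X3 = (88.0, 0.0)                           # {x1, x2} of X*_3
Xq = tuple(X0[i] + rho * (X3[i] - X0[i]) for i in range(2))  # Equation (127)
print(lam_q, rho, Xq)
```

The result matches the coordinates *X*q = {*x*1 = 67.31, *x*2 = 21.27} obtained in the text to within rounding.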

Step 8. Calculation of the main indicators of a point *Xq*.

For the obtained point *Xq*, we calculate:


Any point from the Pareto set *X*o*t* = {λ*o t*, *X*o*t*} ∈ *S*o can be calculated similarly.

Step 9. Analysis of results. The calculated size of the criterion *fq*(*X*o*t*), *q* ∈ *K* is usually not equal to the set *fq*. The error of the choice, Δ*fq* = |*fq*(*X*o*t*) − *fq*| = |74.2 − 80| = 5.8, is defined by the error of linear approximation, Δ*fq*% = 7.25%.

In the course of modeling and simulation, as well as in the previous example of Section 7.5, the parametric restrictions of Equation (118) can be changed, i.e., some set of optimum decisions is obtained. In our example, we can choose a final version from this set of optimum decisions: the parameters of the technical system *X*0 = {*x*1 = 52.9, *x*2 = 36.097, *x*3 = 8.8, *x*4 = 2.2, λ0 = 0.3179}, and the parameters of the technical system at the given priority criterion *q* = 3: *X*q = {*x*1 = 67.31, *x*2 = 21.27}.

If the error Δ*fq* = |*fq*(*X*00) − *fq*| = |79.6 − 80| = 0.4, measured in physical units or as a percentage Δ*fq*% = Δ*fq*/*fq* × 100 = 0.5%, is more than the specified Δ*f* (Δ*fq* > Δ*f*), we pass to Step 2; otherwise, if Δ*fq* ≤ Δ*f*, the calculations come to an end.

*8.6. Geometric Interpretation of Results of the Decision in a Three-Dimensional Coordinate System in Physical Units*

In the course of modeling, the parametric restrictions of Equation (32) and functions can be changed, i.e., some set of optimum decisions is obtained. We can choose a final version from, in our example, this set of optimum decisions:


We represent these parameters in the two-dimensional system (*x*1, *x*2) in Figure 13, in the three-dimensional coordinate system (*x*1, *x*2 and λ) in Figure 14, and, in physical units, for each function *f* 1(*X*), ... , *f* 4(*X*) in Figures 16–19, respectively. The first characteristic *f* 1(*X*) in physical units is shown in Figure 16.

**Figure 16.** The first characteristic *f* 1(*X*) of the system in a natural indicator.

Indicators of the first characteristic of the system, *f* Δ1(*X*\*1) and *f* Δ1(*X*01) (highlighted in red), define the transition errors from the four-dimensional *X*0 = {*x*1, *x*2, *x*3, *x*4} to the two-dimensional *X*0 = {*x*1, *x*2} system of coordinates. The second characteristic *f* 2(*X*) in physical units is shown in Figure 17.

**Figure 17.** The second characteristic *f* 2(*X*) of the system in a natural indicator.

Indicators of the second characteristic of the system, *f* Δ2(*X*\*2) and *f* Δ2(*X*02) (highlighted in red), define the transition errors from the four-dimensional *X*0 = {*x*1, *x*2, *x*3, *x*4} to the two-dimensional *X*0 = {*x*1, *x*2} system of coordinates. The third characteristic *f* 3(*X*) in physical units is shown in Figure 18.

**Figure 18.** The third characteristic f3(X) of the system in a natural indicator.

Indicators of the third characteristic of the system, *f* Δ3(*X*\*3) and *f* Δ3(*X*03) (highlighted in red), define the transition errors from the four-dimensional *X*0 = {*x*1, *x*2, *x*3, *x*4} to the two-dimensional *X*0 = {*x*1, *x*2} system of coordinates. The fourth characteristic *f* 4(*X*) in physical units is shown in Figure 19.

**Figure 19.** The fourth characteristic *f* 4(*X*) of the system in a natural indicator.

Indicators of the fourth characteristic of the system, *f* Δ4(*X*\*4) and *f* Δ4(*X*04) (highlighted in red), define the transition errors from the four-dimensional *X*o = {*x*1, *x*2, *x*3, *x*4} to the two-dimensional *X*o = {*x*1, *x*2} system of coordinates.

Collectively, for the submitted version with:


there is an optimum decision with equivalent criteria (characteristics) and, for the procedure of obtaining the optimum decision with equivalent criteria (characteristics):


there is an optimal solution at the set priority of the *q*th criterion (characteristic) in relation to the other criteria. The procedure of obtaining a point *X<sup>q</sup>* is the adoption of the optimal solution at the set priority of the *q*th criterion.

Based on the theory of vector optimization and the methods of solving vector problems with equivalent criteria or with a given priority of a criterion, we can choose any point from the set of Pareto-optimal points and show the optimality of this point.

**Conclusions**. The development of mathematical methods of vector optimization and the adoption of an optimal solution for a complex technical system, on the basis of a set of experimental data and functional characteristics, are among the most important tasks of system analysis and design.

In this work, a methodology for constructing a mathematical model of a technical system under conditions of definiteness and indeterminacy, in the form of a vector problem of mathematical programming, is developed. New methods of vector optimization, based on the normalization of criteria and the principle of the guaranteed result, are developed for the solution of a vector problem. These methods allow making a decision, first, with equivalent criteria and, second, with a given priority of a criterion. In the construction of the characteristics under conditions of indeterminacy, regression methods of information transformation are used. The practice of "making optimal decisions" on the basis of a mathematical model is shown using a number of numerical examples of solutions of vector optimization problems. The problem of "acceptance of an optimal solution" is solved in examples with 1, 2, 3 and 4 variables.

These methods of processing experimental data and vector optimization can be used in the design of technical systems in various industries: electro-technical, aerospace, metallurgical, etc. The methodology is systemic in character and can also be used in modeling technical, economic and other systems. The author is ready to contribute to the solutions of vector problems of linear and nonlinear programming.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Using Dual Double Fuzzy Semi-Metric to Study the Convergence**

#### **Hsien-Chung Wu**

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan; hcwu@nknucc.nknu.edu.tw

Received: 6 March 2019; Accepted: 4 April 2019; Published: 11 April 2019

**Abstract:** Convergence using dual double fuzzy semi-metric is studied in this paper. Two types of dual double fuzzy semi-metric are proposed in this paper, which are called the infimum type of dual double fuzzy semi-metric and the supremum type of dual double fuzzy semi-metric. Under these settings, we also propose different types of triangle inequalities that are used to investigate the convergence using dual double fuzzy semi-metric.

**Keywords:** dual double fuzzy semi-metric; double fuzzy semi-metric; fuzzy semi-metric space; triangle inequality; triangular norm

#### **1. Introduction**

The concept of fuzzy metric space proposed by Kramosil and Michalek [1] was inspired by the Menger space that is a special kind of probabilistic metric space by referring to Schweizer and Sklar [2–4], Hadžić and Pap [5], and Chang et al. [6]. Kaleva and Seikkala [7] proposed another concept of fuzzy metric space by considering the membership degree of the distance between any two different points. George and Veeramani [8,9] studied some properties of fuzzy metric spaces in the sense of Kramosil and Michalek [1]. Gregori and Romaguera [10–12] also extended the study of the properties of fuzzy metric spaces and fuzzy quasi-metric spaces in which the symmetric condition was not assumed.

The Hausdorff topology induced by the fuzzy metric space was studied in Wu [13]. In this paper, we shall propose the concept of double fuzzy semi-metric in fuzzy semi-metric space and study its convergence properties. A potential application of the convergence of the dual double fuzzy semi-metric is the study of new types of fixed point theorems in fuzzy semi-metric space via Cauchy sequences; this is left for future research, and the reader may refer to the previous work of Wu [14] on common coincidence points and common fixed points in fuzzy semi-metric spaces. Wu [15] studied the so-called fuzzy semi-metric space without assuming the symmetric condition. In the fuzzy semi-metric space (*X*, *M*), the symmetric condition *M*(*x*, *y*, *t*) = *M*(*y*, *x*, *t*) for all *x*, *y* ∈ *X* and *t* > 0 is not assumed to be true. Therefore, four kinds of triangle inequalities should be considered.

In order to obtain the new type of fixed point theorems in fuzzy semi-metric space, we need to study the convergence using dual double fuzzy semi-metric. Based on the concept of t-norm ∗, we shall firstly define the double fuzzy semi-metric by considering the mapping *ζ* : *X*<sup>4</sup> × [0, +∞) → [0, 1] that is defined by:

$$\zeta(x, y; u, v, t) = M(x, y, t) \ast M(u, v, t),$$

where *ζ* is called a double fuzzy semi-metric.

The convergence using fuzzy semi-metric has been studied in Wu [16], where the infimum type of dual fuzzy semi-metric is the function Γ↓(*λ*, ·, ·) : *X* × *X* → [0, +∞) defined by:

$$\Gamma^\downarrow(\lambda, x, y) = \inf \left\{ t > 0 : M(x, y, t) \ge 1 - \lambda \right\},$$

and the supremum type of dual fuzzy semi-metric is the function Γ↑(*λ*, ·, ·) : *X* × *X* → [0, +∞) defined by:

$$\Gamma^\uparrow(\lambda, x, y) = \sup \left\{ t > 0 : M(x, y, t) \le 1 - \lambda \right\}.$$
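These two dual constructions can be illustrated numerically. The sketch below is only an assumption for illustration: it uses the standard fuzzy metric *M*(*x*, *y*, *t*) = *t*/(*t* + *d*(*x*, *y*)) induced by the ordinary distance *d*(*a*, *b*) = |*a* − *b*| and approximates both duals on a grid; for this *M*, both duals reduce to the closed form *d*(*x*, *y*)(1 − *λ*)/*λ*.

```python
def M(x, y, t, d=lambda a, b: abs(a - b)):
    """Standard fuzzy metric M(x, y, t) = t / (t + d(x, y)) for t > 0."""
    return t / (t + d(x, y))

def gamma_down(lam, x, y, ts):
    """Infimum type: inf{t > 0 : M(x, y, t) >= 1 - lam}, approximated on the grid ts."""
    hits = [t for t in ts if M(x, y, t) >= 1 - lam]
    return min(hits) if hits else float("inf")

def gamma_up(lam, x, y, ts):
    """Supremum type: sup{t > 0 : M(x, y, t) <= 1 - lam}, approximated on the grid ts."""
    hits = [t for t in ts if M(x, y, t) <= 1 - lam]
    return max(hits) if hits else 0.0

ts = [i / 2000 for i in range(1, 10001)]   # grid on (0, 5]
lam, x, y = 0.4, 0.2, 0.7                  # d(x, y) = 0.5
# For this M both duals coincide with the closed form d(x, y) * (1 - lam) / lam = 0.75.
print(gamma_down(lam, x, y, ts), gamma_up(lam, x, y, ts))
```

Since *M*(*x*, *y*, ·) is strictly increasing here, both dual values agree; in a general fuzzy semi-metric space they need not.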

In this paper, we shall consider the double fuzzy semi-metric *ζ* to define the infimum and supremum types of dual double fuzzy semi-metric. The infimum type of dual double fuzzy semi-metric is the function Ψ<sup>↓</sup>(*λ*, ·, ·; ·, ·) : *X*<sup>4</sup> → [0, +∞) defined by:

$$\Psi^\downarrow(\lambda, x, y; u, v) = \inf \left\{ t > 0 : \zeta(x, y; u, v, t) \ge 1 - \lambda \right\},$$

and the supremum type of dual double fuzzy semi-metric is the function Ψ<sup>↑</sup>(*λ*, ·, ·; ·, ·) : *X*<sup>4</sup> → [0, +∞) defined by:

$$\Psi^\uparrow(\lambda, x, y; u, v) = \sup \left\{ t > 0 : \zeta(x, y; u, v, t) \le 1 - \lambda \right\}.$$

Using the infimum and supremum types of dual fuzzy semi-metric Γ↓(*λ*, *x*, *y*) and Γ↑(*λ*, *x*, *y*), the convergence of sequences in (*X*, *M*) and the concept of Cauchy sequence in (*X*, *M*) have been studied in Wu [16]. In this paper, we study the extended convergence of sequences in (*X*, *M*) and the concept of joint Cauchy sequence in (*X*, *M*) using the infimum and supremum types of dual double fuzzy semi-metric Ψ↓(*λ*, *x*, *y*; *u*, *v*) and Ψ↑(*λ*, *x*, *y*; *u*, *v*). As we mentioned above, these convergences will be used in the near future to establish the new types of fixed point theorems in fuzzy semi-metric space (*X*, *M*).

In Section 2, we review some basic properties of fuzzy semi-metric space that will be used for further discussion. In Section 3, we introduce the concept of double fuzzy semi-metric and derive the related triangle inequalities. In Sections 4 and 5, the concepts of infimum and supremum types of dual double fuzzy semi-metric are proposed, and their convergent properties and triangle inequalities are studied.

#### **2. Fuzzy Semi-Metric Space**

Let *X* be a nonempty universal set, and let *M* be a mapping defined on *X* × *X* × [0, ∞) into [0, 1]. Then (*X*, *M*) is called a fuzzy semi-metric space if and only if the following conditions are satisfied:


We say that *M* satisfies the symmetric condition if and only if *M*(*x*, *y*, *t*) = *M*(*y*, *x*, *t*) for all *x*, *y* ∈ *X* and *t* > 0. We say that *M* satisfies the strongly symmetric condition if and only if *M*(*x*, *y*, *t*) = *M*(*y*, *x*, *t*) for all *x*, *y* ∈ *X* and *t* ≥ 0. Since the symmetric condition is not assumed to be true in fuzzy semi-metric space, four kinds of triangle inequalities, called the ◦-triangle inequality for ◦ ∈ {⋈, ⊳, ⊲, ⋄}, were proposed by Wu [15].

**Example 1.** *Let X be a universal set, and let d* : *<sup>X</sup>* <sup>×</sup> *<sup>X</sup>* <sup>→</sup> <sup>R</sup><sup>+</sup> *satisfy the following conditions:*


*Note that we do not assume d*(*x*, *y*) = *d*(*y*, *x*)*. For example, let X* = [0, 1]*. We define:*

$$d(\mathbf{x}, y) = \begin{cases} \ y - \mathbf{x} & \text{if } y \ge \mathbf{x} \\ 1 & \text{otherwise.} \end{cases}$$

*Then d*(*x*, *y*) ≠ *d*(*y*, *x*) *in general, and the above three conditions are satisfied. Now we take t-norm* ∗ *as a* ∗ *b* = *ab and define:*

$$M(\mathbf{x}, y, t) = \begin{cases} \frac{t}{t + d(\mathbf{x}, y)} & \text{if } t > 0 \\ 1 & \text{if } t = 0 \text{ and } d(\mathbf{x}, y) = 0 \\ 0 & \text{if } t = 0 \text{ and } d(\mathbf{x}, y) > 0 \end{cases} = \begin{cases} \frac{t}{t + d(\mathbf{x}, y)} & \text{if } t > 0 \\ 1 & \text{if } t = 0 \text{ and } \mathbf{x} = y \\ 0 & \text{if } t = 0 \text{ and } \mathbf{x} \neq y. \end{cases}$$

*It is clear to see that M*(*x*, *y*, *t*) ≠ *M*(*y*, *x*, *t*) *for t* > 0 *in general, since d*(*x*, *y*) ≠ *d*(*y*, *x*)*. It is not hard to check that* (*X*, *M*, ∗) *is a fuzzy semi-metric space satisfying the* ⋈*-triangle inequality.*
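A minimal computational sketch of this example, with *X* = [0, 1] as above, shows the asymmetry of *M* and spot-checks one instance of the triangle inequality *M*(*x*, *y*, *t*<sub>1</sub>) ∗ *M*(*y*, *z*, *t*<sub>2</sub>) ≤ *M*(*x*, *z*, *t*<sub>1</sub> + *t*<sub>2</sub>):

```python
def d(x, y):
    """Asymmetric distance on X = [0, 1]: d(x, y) = y - x if y >= x, else 1."""
    return y - x if y >= x else 1.0

def M(x, y, t):
    """Induced fuzzy semi-metric; the t-norm is the product a * b."""
    if t > 0:
        return t / (t + d(x, y))
    return 1.0 if d(x, y) == 0 else 0.0

# Asymmetry: d(0.2, 0.5) = 0.3 but d(0.5, 0.2) = 1, so M is not symmetric.
print(M(0.2, 0.5, 1.0))   # about 0.769 (= 1 / 1.3)
print(M(0.5, 0.2, 1.0))   # 0.5

# One concrete instance of M(x, y, t1) * M(y, z, t2) <= M(x, z, t1 + t2):
lhs = M(0.1, 0.5, 1.0) * M(0.5, 0.9, 1.0)
rhs = M(0.1, 0.9, 2.0)
print(lhs <= rhs)         # True
```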

The following interesting observations will be used in further study.

**Remark 1.** *Let* (*X*, *M*) *be a fuzzy semi-metric space.*

• *Suppose that M satisfies the* ⋈*-triangle inequality. Then:*

$$M(a,b,t\_1) \* M(b,c,t\_2) \* M(c,d,t\_3) \le M(a,c,t\_1+t\_2) \* M(c,d,t\_3) \le M(a,d,t\_1+t\_2+t\_3).$$

*In general, we have:*

$$M\left(\mathbf{x}\_{1},\mathbf{x}\_{2},t\_{1}\right) \* M\left(\mathbf{x}\_{2},\mathbf{x}\_{3},t\_{2}\right) \* \dots \* M\left(\mathbf{x}\_{p},\mathbf{x}\_{p+1},t\_{p}\right) \leq M\left(\mathbf{x}\_{1},\mathbf{x}\_{p+1},t\_{1} + t\_{2} + \dots + t\_{p}\right) . \tag{1}$$

• *Suppose that M satisfies the* ⊳*-triangle inequality. Since:*

$$M(a,b,t\_1) \* M(c,b,t\_2) \le \min\left\{M(a,c,t\_1+t\_2), M(c,a,t\_1+t\_2)\right\},$$

*which implies:*

$$M(a,b,t\_1) \* M(c,b,t\_2) \* M(d,c,t\_3) \le \min\left\{M(a,d,t\_1+t\_2+t\_3), M(d,a,t\_1+t\_2+t\_3)\right\}.\tag{2}$$

*In general, we have:*

$$\begin{aligned} &M\left(x_{1},x_{2},t_{1}\right) \ast M\left(x_{3},x_{2},t_{2}\right) \ast M\left(x_{4},x_{3},t_{3}\right) \ast \cdots \ast M\left(x_{p+1},x_{p},t_{p}\right) \\ &\leq \min\left\{M\left(x_{1},x_{p+1},t_{1}+t_{2}+\cdots+t_{p}\right),M\left(x_{p+1},x_{1},t_{1}+t_{2}+\cdots+t_{p}\right)\right\}.\end{aligned}$$

• *Suppose that M satisfies the* ⊲*-triangle inequality. Since:*

$$M(b,a,t\_1) \* M(b,c,t\_2) \le \min\left\{M(a,c,t\_1+t\_2), M(c,a,t\_1+t\_2)\right\},$$

*which implies:*

$$M(b,a,t\_1) \* M(b,c,t\_2) \* M(c,d,t\_3) \le \min\left\{M(a,d,t\_1+t\_2+t\_3), M(d,a,t\_1+t\_2+t\_3)\right\}.\tag{3}$$

*In general, we have:*

$$\begin{aligned} &M\left(x_{2},x_{1},t_{1}\right) \ast M\left(x_{2},x_{3},t_{2}\right) \ast M\left(x_{3},x_{4},t_{3}\right) \ast \cdots \ast M\left(x_{p},x_{p+1},t_{p}\right) \\ &\leq \min\left\{M\left(x_{1},x_{p+1},t_{1}+t_{2}+\cdots+t_{p}\right), M\left(x_{p+1},x_{1},t_{1}+t_{2}+\cdots+t_{p}\right)\right\}. \end{aligned}$$

• *Suppose that M satisfies the* ⋄*-triangle inequality. Then:*

$$M(a,b,t_1) \ast M(b,c,t_2) \ast M(d,c,t_3) = M(b,c,t_2) \ast M(a,b,t_1) \ast M(d,c,t_3)$$

$$\leq M(c,a,t\_1+t\_2) \* M(d,c,t\_3) \leq M(a,d,t\_1+t\_2+t\_3) \tag{4}$$

*and:*

$$M(b, a, t\_1) \* M(c, b, t\_2) \* M(c, d, t\_3) \le M(a, c, t\_1 + t\_2) \* M(c, d, t\_3)$$

$$= M(c, d, t\_3) \* M(a, c, t\_1 + t\_2) \le M(d, a, t\_1 + t\_2 + t\_3). \tag{5}$$

*From Equation (4), we also have:*

$$\begin{aligned} M(a,b,t\_1) \* M(c,b,t\_2) \* M(d,c,t\_3) &= M(d,c,t\_3) \* M(c,b,t\_2) \* M(a,b,t\_1) \\ &\le M(d,a,t\_1+t\_2+t\_3) \end{aligned} \tag{6}$$

*which implies:*

$$M(b, a, t_1) \ast M(b, c, t_2) \ast M(c, d, t_3) \le M(a, d, t_1 + t_2 + t_3) \tag{7}$$

*by referring to Equation (5). In general, we have the following cases:*

*(a) If p is even, then:*

$$\begin{aligned} M\left(x_1, x_2, t_1\right) \ast M\left(x_2, x_3, t_2\right) \ast M\left(x_4, x_3, t_3\right) \ast M\left(x_4, x_5, t_4\right) \ast M\left(x_6, x_5, t_5\right) \\ \ast M\left(x_6, x_7, t_6\right) \ast \cdots \ast M\left(x_{p}, x_{p+1}, t_p\right) \le M\left(x_{p+1}, x_1, t_1 + t_2 + \cdots + t_p\right) \end{aligned}$$

*and:*

$$\begin{split} &M\left(\mathbf{x}\_{2},\mathbf{x}\_{1},t\_{1}\right)\*M\left(\mathbf{x}\_{3},\mathbf{x}\_{2},t\_{2}\right)\*M\left(\mathbf{x}\_{3},\mathbf{x}\_{4},t\_{3}\right)\*M\left(\mathbf{x}\_{5},\mathbf{x}\_{4},t\_{4}\right)\*M\left(\mathbf{x}\_{5},\mathbf{x}\_{6},t\_{5}\right) \\ &\*M\left(\mathbf{x}\_{7},\mathbf{x}\_{6},t\_{6}\right)\*\cdots\*M\left(\mathbf{x}\_{p},\mathbf{x}\_{p+1},t\_{p}\right) \leq M\left(\mathbf{x}\_{1},\mathbf{x}\_{p+1},t\_{1}+t\_{2}+\cdots+t\_{p}\right) .\end{split}$$

*(b) If p is odd, then:*

$$\begin{aligned} M\left(\mathbf{x}\_{1},\mathbf{x}\_{2},t\_{1}\right) &\ast M\left(\mathbf{x}\_{2},\mathbf{x}\_{3},t\_{2}\right) \ast M\left(\mathbf{x}\_{4},\mathbf{x}\_{3},t\_{3}\right) \ast M\left(\mathbf{x}\_{4},\mathbf{x}\_{5},t\_{4}\right) \ast M\left(\mathbf{x}\_{6},\mathbf{x}\_{5},t\_{5}\right) \\ &\ast M\left(\mathbf{x}\_{6},\mathbf{x}\_{7},t\_{6}\right) \ast \cdots \ast M\left(\mathbf{x}\_{p},\mathbf{x}\_{p+1},t\_{p}\right) \leq M\left(\mathbf{x}\_{1},\mathbf{x}\_{p+1},t\_{1}+t\_{2}+\cdots+t\_{p}\right) \end{aligned}$$

*and:*

$$\begin{split} &M\left(\mathbf{x}\_{2},\mathbf{x}\_{1},t\_{1}\right) \ast M\left(\mathbf{x}\_{3},\mathbf{x}\_{2},t\_{2}\right) \ast M\left(\mathbf{x}\_{3},\mathbf{x}\_{4},t\_{3}\right) \ast M\left(\mathbf{x}\_{5},\mathbf{x}\_{4},t\_{4}\right) \ast M\left(\mathbf{x}\_{5},\mathbf{x}\_{6},t\_{5}\right) \\ &\ast M\left(\mathbf{x}\_{7},\mathbf{x}\_{6},t\_{6}\right) \ast \cdots \ast M\left(\mathbf{x}\_{p+1},\mathbf{x}\_{p},t\_{p}\right) \leq M\left(\mathbf{x}\_{p+1},\mathbf{x}\_{1},t\_{1}+t\_{2}+\cdots+t\_{p}\right) .\end{split}$$
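Inequality (1) can be spot-checked numerically for the fuzzy semi-metric of Example 1 under the product t-norm (a sanity check for one concrete chain, not a proof):

```python
def d(x, y):
    """Asymmetric distance from Example 1 on X = [0, 1]."""
    return y - x if y >= x else 1.0

def M(x, y, t):
    """Fuzzy semi-metric M(x, y, t) = t / (t + d(x, y)) for t > 0."""
    return t / (t + d(x, y))

# Chain x1 -> x2 -> x3 -> x4 with times t1, t2, t3; the t-norm is the product.
xs = [0.1, 0.7, 0.3, 0.9]
ts = [0.5, 1.0, 1.5]
lhs = 1.0
for a, b, t in zip(xs, xs[1:], ts):
    lhs *= M(a, b, t)
rhs = M(xs[0], xs[-1], sum(ts))
print(lhs <= rhs)  # True
```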

Let (*X*, *M*) be a fuzzy semi-metric space.


The following interesting results were modified from Wu [15] using a similar argument; they will be used in further discussion.

**Proposition 1.** *(Wu [15]) Let* (*X*, *M*) *be a fuzzy semi-metric space. Then we have the following properties:*


#### **3. Double Fuzzy Semi-Metric**

Let (*X*, *M*) be a fuzzy semi-metric space along with a t-norm ∗. Given any four elements *x*, *y*, *u*, *v* ∈ *X*, recall that the value *M*(*x*, *y*, *t*) means the membership degree of the distance that is less than *t* between *x* and *y*, and the value *M*(*u*, *v*, *t*) means the membership degree of the distance that is less than *t* between *u* and *v*. In this case, we can define a value:

$$\zeta(x, y; u, v, t) = \min\left\{M(x, y, t), M(u, v, t)\right\},$$

which means the membership degree of the distance that is simultaneously less than *t* between *x* and *y* and between *u* and *v*. In general, instead of considering the min function, we shall use the t-norm. The formal definition is given below.

**Definition 1.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We define the mapping <sup>ζ</sup>* : *<sup>X</sup>*<sup>4</sup> × [0, +∞) → [0, 1] *by:*

$$
\zeta(x, y; u, v, t) = M(x, y, t) \ast M(u, v, t) .
$$

*Then ζ is called a double fuzzy semi-metric.*

**Example 2.** *Continued from Example 1, we consider:*

$$M(\mathbf{x}, y, t) = \begin{cases} \frac{t}{t + d(\mathbf{x}, y)} & \text{if } t > 0 \\ 1 & \text{if } t = 0 \text{ and } \mathbf{x} = y \\ 0 & \text{if } t = 0 \text{ and } \mathbf{x} \neq y. \end{cases}$$

*If we take t-norm as a* ∗ *b* = *a* · *b, then the double fuzzy semi-metric can be obtained as:*

$$\begin{aligned} \zeta(x, y; u, v, t) &= M(x, y, t) \cdot M(u, v, t) \\ &= \begin{cases} \frac{t}{t + d(x, y)} \cdot \frac{t}{t + d(u, v)} & \text{if } t > 0 \\ 1 & \text{if } t = 0 \text{ and } x = y \text{ and } u = v \\ 0 & \text{if } t = 0 \text{ and } (x \neq y \text{ or } u \neq v). \end{cases} \end{aligned}$$
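A small sketch of this double fuzzy semi-metric under the product t-norm, continuing the distance *d* of Example 1:

```python
def d(x, y):
    """Asymmetric distance from Example 1 on X = [0, 1]."""
    return y - x if y >= x else 1.0

def M(x, y, t):
    if t > 0:
        return t / (t + d(x, y))
    return 1.0 if x == y else 0.0

def zeta(x, y, u, v, t):
    """Double fuzzy semi-metric zeta(x, y; u, v, t) = M(x, y, t) * M(u, v, t)."""
    if t > 0:
        return M(x, y, t) * M(u, v, t)
    return 1.0 if (x == y and u == v) else 0.0

print(zeta(0.2, 0.5, 0.1, 0.6, 1.0))   # (1/1.3) * (1/1.5), about 0.513
print(zeta(0.3, 0.3, 0.7, 0.7, 0.0))   # 1.0
print(zeta(0.3, 0.4, 0.7, 0.7, 0.0))   # 0.0
```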

The potential application for considering the double fuzzy semi-metric is to study the new type of fixed point theorems in fuzzy semi-metric space.

**Proposition 2.** *(Triangle Inequalities for Dual Fuzzy Semi-Metric) Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. Given any x*, *y*, *z*, *u*, *v*, *w* ∈ *X, we have the following properties:*

(i) *Suppose that M satisfies the* ⋈*-triangle inequality. Then we have the inequality:*

*ζ*(*x*, *z*; *u*, *w*, *t* + *s*) ≥ *ζ*(*x*, *y*; *u*, *v*, *t*) ∗ *ζ*(*y*, *z*; *v*, *w*,*s*)

*for s*, *t* > 0*.*

(ii) *Suppose that M satisfies the* ⊳*-triangle inequality. Then we have the inequality:*

$$\zeta(x, z; u, w, t + s) \ge \zeta(x, y; u, v, t) \ast \zeta(z, y; w, v, s)$$

*for s*, *t* > 0*.*

(iii) *Suppose that M satisfies the* ⊲*-triangle inequality. Then we have the inequality:*

$$\zeta(x, z; u, w, t + s) \ge \zeta(y, x; v, u, t) \ast \zeta(y, z; v, w, s)$$

*for s*, *t* > 0*.*

(iv) *Suppose that M satisfies the* ⋄*-triangle inequality. Then we have the inequality:*

$$\zeta(x, z; u, w, t + s) \ge \zeta(y, x; v, u, t) \ast \zeta(z, y; w, v, s)$$

*for s*, *t* > 0*.*

**Proof.** It suffices to prove part (i); we have:

$$\begin{aligned} \zeta(x, z; u, w, t + s) &= M(x, z, t + s) \ast M(u, w, t + s) \\ &\ge \left( M(x, y, t) \ast M(y, z, s) \right) \ast M(u, w, t + s) \end{aligned}$$

(using the ⋈-triangle inequality and the increasing property of t-norm)

≥ (*M*(*x*, *y*, *t*) ∗ *M*(*y*, *z*,*s*)) ∗ (*M*(*u*, *v*, *t*) ∗ *M*(*v*, *w*,*s*))

(using the ⋈-triangle inequality and the increasing property of t-norm)

= (*M*(*x*, *y*, *t*) ∗ *M*(*u*, *v*, *t*)) ∗ (*M*(*y*, *z*,*s*) ∗ *M*(*v*, *w*,*s*))

(using the associative and commutative properties of t-norm)

= *ζ*(*x*, *y*; *u*, *v*, *t*) ∗ *ζ*(*y*, *z*; *v*, *w*,*s*).

This completes the proof.
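Part (i) can be spot-checked numerically for the fuzzy semi-metric of Example 2, which satisfies the required triangle inequality under the product t-norm; the grid search below is only a sanity check, not a proof:

```python
import itertools

def d(x, y):
    """Asymmetric distance from Example 1 on X = [0, 1]."""
    return y - x if y >= x else 1.0

def M(x, y, t):
    return t / (t + d(x, y))

def zeta(x, y, u, v, t):
    """Double fuzzy semi-metric under the product t-norm."""
    return M(x, y, t) * M(u, v, t)

# Check zeta(x, z; u, w, t + s) >= zeta(x, y; u, v, t) * zeta(y, z; v, w, s)
# over a grid of points and two time splits (small tolerance for rounding).
pts = [0.0, 0.25, 0.5, 0.75, 1.0]
ok = all(
    zeta(x, z, u, w, t + s) >= zeta(x, y, u, v, t) * zeta(y, z, v, w, s) - 1e-12
    for x, y, z, u, v, w in itertools.product(pts, repeat=6)
    for t, s in [(1.0, 1.0), (0.5, 2.0)]
)
print(ok)  # True
```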

**Definition 2.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*, and let ζ be a double fuzzy semi-metric given by:*

$$
\zeta(x, y; u, v, t) = M(x, y, t) \ast M(u, v, t) .
$$

*Given any fixed x*, *y*, *u*, *v* ∈ *X, we define the following concepts of monotonicity:*


**Proposition 3.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. Given any fixed x*, *y*, *u*, *v* ∈ *X, the double fuzzy semi-metric ζ satisfies the following properties:*


**Proof.** Part (i) of Proposition 1 says that the mappings *M*(*x*, *y*, ·) and *M*(*u*, *v*, ·) from [0, ∞) into [0, 1] are nondecreasing. According to the increasing property of t-norm, we conclude that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) from [0, ∞) into [0, 1] is nondecreasing, which proves part (i).

Part (ii) can be obtained from part (ii) of Proposition 1, and part (iii) can be obtained from part (iii) of Proposition 1. This completes the proof.

By using the strictly increasing property of the t-norm, the proof of Proposition 3 remains valid and yields the following results.

**Proposition 4.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. Suppose that the t-norm satisfies the strictly increasing property. Given any fixed x*, *y*, *u*, *v* ∈ *X, the double fuzzy semi-metric ζ satisfies the following properties:*


Let (*X*, *M*) be a fuzzy semi-metric space. The motivation for considering the following two concepts can be found in Wu [16].

• *M* is said to satisfy the canonical condition if and only if:

$$\lim\_{t \to +\infty} M(x, y, t) = 1 \text{ for any fixed } x, y \in X.$$

• *M* is said to satisfy the rational condition if and only if:

$$\lim\_{t \to 0+} M(\mathbf{x}, y, t) = 0 \text{ for any fixed } \mathbf{x}, y \in X.$$

**Proposition 5.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*.*

(i) *Suppose that M satisfies the canonical condition. If the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument, then we have:*

$$\lim_{t \to +\infty} \zeta(x, y; u, v, t) = 1. \tag{8}$$

(ii) *Suppose that M satisfies the rational condition. If the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument, then we have:*

$$\lim_{t \to 0+} \zeta(x, y; u, v, t) = 0. \tag{9}$$

**Proof.** To prove part (i), the canonical condition says that:

$$\lim\_{t \to +\infty} M(\mathbf{x}, y, t) = 1 = \lim\_{t \to +\infty} M(\mathbf{u}, v, t).$$

The left-continuity of t-norm ∗ at 1 also says that:

$$\lim_{t \to +\infty} \zeta(x, y; u, v, t) = \left(\lim_{t \to +\infty} M(x, y, t)\right) \ast \left(\lim_{t \to +\infty} M(u, v, t)\right) = 1 \ast 1 = 1.$$

To prove part (ii), the rational condition says that:

$$\lim\_{t \to 0+} M(x, y, t) = 0 = \lim\_{t \to 0+} M(u, v, t).$$

The right-continuity of t-norm ∗ at 0 also says that:

$$\lim\_{t \to 0+} \zeta(x, y; u, v, t) = \left(\lim\_{t \to 0+} M(x, y, t)\right) \* \left(\lim\_{t \to 0+} M(u, v, t)\right) = 0 \* 0 = 0.$$

This completes the proof.

**Example 3.** *Continued from Example 1, it is not hard to check that M satisfies both the canonical and rational conditions. Suppose that we take t-norm as a* ∗ *b* = *a* · *b. Then Proposition 5 says that:*

$$\lim\_{t \to +\infty} \zeta(x, y; u, v, t) = 1 \text{ and } \lim\_{t \to 0+} \zeta(x, y; u, v, t) = 0.$$

#### **4. Convergence Based on the Infimum**

From Definition 1, we see that the double fuzzy semi-metric *ζ* is a mapping from *X*<sup>4</sup> × [0, ∞) into [0, 1]. Here, we shall consider its dual sense by considering the mapping from (0, 1] × *X*<sup>4</sup> into [0, ∞). The formal definition is given below.

**Definition 3.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X and any fixed λ* ∈ (0, 1]*, we consider the following set:*

$$\Pi^\downarrow(\lambda, x, y; u, v) = \left\{ t > 0 : \zeta(x, y; u, v, t) \ge 1 - \lambda \right\},$$

*which is used to define a mapping* Ψ<sup>↓</sup>(*λ*, ·, ·; ·, ·) : *X*<sup>4</sup> → [0, +∞) *by:*

$$\Psi^\downarrow(\lambda, x, y; u, v) = \inf \Pi^\downarrow(\lambda, x, y; u, v) = \inf \left\{ t > 0 : \zeta(x, y; u, v, t) \ge 1 - \lambda \right\}.$$

*In this case, the mapping* Ψ<sup>↓</sup> *from* (0, 1] × *X*<sup>4</sup> *into* [0, ∞) *is called the infimum type of dual double fuzzy semi-metric.*

**Example 4.** *Continued from Example 2, we have:*

$$\Pi^\downarrow(\lambda, x, y; u, v) = \left\{ t > 0 : \frac{t}{t + d(x, y)} \cdot \frac{t}{t + d(u, v)} \ge 1 - \lambda \right\} = \left\{ t > 0 : t \ge \frac{C + \sqrt{C^2 + 4D}}{2} \right\},$$

*where:*

$$C = \frac{(d(\mathbf{x}, \mathbf{y}) + d(\mathbf{u}, \mathbf{v}))(1 - \lambda)}{\lambda} \text{ and } D = \frac{d(\mathbf{x}, \mathbf{y}) \cdot d(\mathbf{u}, \mathbf{v}) \cdot (1 - \lambda)}{\lambda}.$$

*We also have:*

$$\Psi^\downarrow(\lambda, x, y; u, v) = \inf \Pi^\downarrow(\lambda, x, y; u, v) = \inf\left\{ t > 0 : t \ge \frac{C + \sqrt{C^2 + 4D}}{2} \right\} = \frac{C + \sqrt{C^2 + 4D}}{2}.$$
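This closed form can be sanity-checked numerically (a sketch under the assumed values *d*(*x*, *y*) = 0.3, *d*(*u*, *v*) = 0.5 and *λ* = 0.4; solving *λt*² − (1 − *λ*)(*d*(*x*, *y*) + *d*(*u*, *v*))*t* − (1 − *λ*)*d*(*x*, *y*)*d*(*u*, *v*) = 0 with *C* and *D* as defined above yields the threshold (*C* + √(*C*² + 4*D*))/2):

```python
def zeta(t, a=0.3, b=0.5):
    """zeta(x, y; u, v, t) for the product t-norm with d(x, y) = a, d(u, v) = b."""
    return (t / (t + a)) * (t / (t + b))

lam, a, b = 0.4, 0.3, 0.5
C = (a + b) * (1 - lam) / lam
D = a * b * (1 - lam) / lam
# Positive root of lam*t^2 - (1-lam)*(a+b)*t - (1-lam)*a*b = 0, i.e. of t^2 - C*t - D = 0:
t_star = (C + (C**2 + 4 * D) ** 0.5) / 2

# Numerical infimum of {t > 0 : zeta(t) >= 1 - lam} on a grid over (0, 5]:
ts = [i / 10000 for i in range(1, 50001)]
psi_down = min(t for t in ts if zeta(t, a, b) >= 1 - lam)
print(t_star, psi_down)   # both about 1.3649
```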

A potential application of the dual double fuzzy semi-metric is the study of fixed point theorems in fuzzy semi-metric space. However, we first need to claim that the set Π↓(*λ*, *x*, *y*; *u*, *v*) is nonempty. Suppose that Π↓(*λ*, *x*, *y*; *u*, *v*) = ∅. The definition says that *ζ*(*x*, *y*; *u*, *v*, *t*) < 1 − *λ* for all *t* > 0; that is:

$$\lim\_{t \to +\infty} \zeta(x, y; u, v, t) \le 1 - \lambda < 1,$$

which contradicts Equation (8). Therefore, Definition 3 is well-defined and Π↓(*λ*, *x*, *y*; *u*, *v*) ≠ ∅.

**Remark 2.** *The following observations will be useful for further discussion.*

• *For any λ* ∈ (0, 1]*, we have:*

$$\Psi^\downarrow(1, x, y; u, v) = \inf\left\{ t > 0 : \zeta(x, y; u, v, t) \ge 0 \right\} = \inf\{ t > 0 \} = 0,$$

*and:*

$$\begin{split} \Psi^{\downarrow}(\lambda, x, x; u, u) &= \inf \left\{ t > 0 : \zeta(x, x; u, u, t) \ge 1 - \lambda \right\} \\ &= \inf \left\{ t > 0 : 1 \ge 1 - \lambda \right\} = \inf \{ t > 0 \} = 0. \end{split} \tag{10}$$

• *Given any fixed x*, *y*, *u*, *v* ∈ *X, if λ*<sup>1</sup> > *λ*2*, then:*

$$\Pi^{\downarrow}(\lambda\_2, \mathbf{x}, y; u, v) \subseteq \Pi^{\downarrow}(\lambda\_1, \mathbf{x}, y; u, v) \text{ and } \Psi^{\downarrow}(\lambda\_1, \mathbf{x}, y; u, v) \le \Psi^{\downarrow}(\lambda\_2, \mathbf{x}, y; u, v). \tag{11}$$

**Proposition 6.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X, suppose that the following conditions are satisfied:*

$$\Pi^\downarrow(0+, x, y; u, v) \equiv \bigcap_{0 < \lambda \le 1} \Pi^\downarrow(\lambda, x, y; u, v) \neq \emptyset$$

*and:*

$$\left\{ t > 0 : \zeta(x, y; u, v, t) = 1 \right\} \neq \emptyset.$$

*Then we have:*

$$\Pi^\downarrow(0+, x, y; u, v) = \left\{ t > 0 : \zeta(x, y; u, v, t) = 1 \right\}. \tag{12}$$

*Moreover, the following limit exists:*

$$\lim\_{\lambda \to 0+} \Psi^{\downarrow}(\lambda, \mathbf{x}, \underline{y}; u, v) = \sup\_{0 < \lambda \le 1} \Psi^{\downarrow}(\lambda, \mathbf{x}, \underline{y}; u, v). \tag{13}$$

**Proof.** The assumption Π↓(0+, *x*, *y*; *u*, *v*) ≠ ∅ says that we can consider *t* ∈ Π↓(0+, *x*, *y*; *u*, *v*), i.e., *ζ*(*x*, *y*; *u*, *v*, *t*) ≥ 1 − *λ* for all *λ* ∈ (0, 1]. Then we obtain *ζ*(*x*, *y*; *u*, *v*, *t*) ≥ 1 by taking *λ* → 0+, which shows that *ζ*(*x*, *y*; *u*, *v*, *t*) = 1, i.e.,

$$\Pi^\downarrow(0+, x, y; u, v) = \bigcap_{0 < \lambda \le 1} \Pi^\downarrow(\lambda, x, y; u, v) \subseteq \left\{ t > 0 : \zeta(x, y; u, v, t) = 1 \right\}.$$

On the other hand, suppose that *ζ*(*x*, *y*; *u*, *v*, *t*) = 1. Then *ζ*(*x*, *y*; *u*, *v*, *t*) = 1 ≥ 1 − *λ* for all *λ* ∈ (0, 1]. Therefore, we obtain *t* ∈ Π↓(0+, *x*, *y*; *u*, *v*), which implies the desired equality (Equation (12)). Further, the inequality (Equation (11)) says that the limit (Equation (13)) exists. This completes the proof.

**Proposition 7.** *Suppose that* (*X*, *M*) *is a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical and rational conditions, and that the t-norm* ∗ *is left-continuous at* 1 *and right-continuous at* 0 *with respect to the first or second argument. If M satisfies the* ◦*-triangle inequality for* ◦ ∈ {⋈, ⊳, ⊲, ⋄}*, then, for any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, we have* Ψ↓(*λ*, *x*, *y*; *u*, *v*) > 0 *for λ* ∈ (0, 1)*.*

**Proof.** We first consider the case in which *M* satisfies the ◦-triangle inequality for ◦ ∈ {⋈, ⊳, ⊲}. From Equation (10), we need to consider *x* ≠ *y* or *u* ≠ *v*. We want to assume Ψ↓(*λ*, *x*, *y*; *u*, *v*) = 0 for *λ* ∈ (0, 1) to obtain a contradiction. Using the concept of infimum from Ψ↓(*λ*, *x*, *y*; *u*, *v*), given any *ε* > 0, there exists *t<sub>ε</sub>* > 0 such that *ζ*(*x*, *y*; *u*, *v*, *t<sub>ε</sub>*) ≥ 1 − *λ* and:

$$t_{\varepsilon} < \Psi^{\downarrow}(\lambda, x, y; u, v) + \varepsilon = \varepsilon.$$

Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) from [0, ∞) into [0, 1] is nondecreasing. Therefore, we obtain:

$$\zeta\left(x, y; u, v, \varepsilon\right) \ge \zeta\left(x, y; u, v, t_{\varepsilon}\right) \ge 1 - \lambda.$$

Since *ε* can be any positive real number, using Equation (9), we must have:

$$0 = \lim_{\varepsilon \to 0+} \zeta\left(x, y; u, v, \varepsilon\right) \ge 1 - \lambda,$$

which contradicts 0 < *λ* < 1.

Now we assume that *M* satisfies the ⋄-triangle inequality. Suppose that Ψ↓(*λ*, *y*, *x*; *v*, *u*) = 0 for *λ* ∈ (0, 1). Part (iii) of Proposition 3 says that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is symmetrically nondecreasing. Therefore, we can similarly obtain:

$$\zeta\left(x, y; u, v, \varepsilon\right) \ge \zeta\left(y, x; v, u, t_{\varepsilon}\right) \ge 1 - \lambda.$$

This completes the proof.

**Proposition 8.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X and λ* ∈ (0, 1)*, we have the following properties:*

(i) *If ε* > 0 *is sufficiently small, satisfying* Ψ↓(*λ*, *x*, *y*; *u*, *v*) > *ε, then we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) - \varepsilon\right) < 1 - \lambda. \tag{14}$$

(ii) *Suppose that M satisfies the* ◦*-triangle inequality for* ◦ ∈ {⋈, ⊳, ⊲}*. For any ε* > 0*, we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) + \varepsilon\right) \ge 1 - \lambda \tag{15}$$

(iii) *Suppose that M satisfies the* ◦*-triangle inequality for* ◦ ∈ {⊳, ⊲}*. For any ε* > 0*, we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, y, x; u, v) + \epsilon\right) \ge 1 - \lambda \tag{16}$$

*and:*

$$\zeta \left( x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; v, u) + \epsilon \right) \ge 1 - \lambda \tag{17}$$

(iv) *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*. For any ε* > 0*, we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, y, x; v, u) + \epsilon\right) \ge 1 - \lambda \tag{18}$$

**Proof.** To prove part (i), we assume that:

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) - \epsilon\right) \ge 1 - \lambda.$$

The definition of Ψ↓ says that Ψ↓(*λ*, *x*, *y*; *u*, *v*) ≤ Ψ↓(*λ*, *x*, *y*; *u*, *v*) − *ε*. This contradiction implies *ζ*(*x*, *y*; *u*, *v*, Ψ↓(*λ*, *x*, *y*; *u*, *v*) − *ε*) < 1 − *λ*.

To prove part (ii), using the concept of infimum from Ψ↓(*λ*, *x*, *y*; *u*, *v*), given any *ε* > 0, there exists *t*<sub>ε</sub> > 0 such that *ζ*(*x*, *y*; *u*, *v*, *t*<sub>ε</sub>) ≥ 1 − *λ* and *t*<sub>ε</sub> < Ψ↓(*λ*, *x*, *y*; *u*, *v*) + *ε*. Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is nondecreasing. Therefore, we obtain:

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) + \epsilon\right) \ge \zeta\left(x, y; u, v, t\_{\epsilon}\right) \ge 1 - \lambda.$$

To prove part (iii), using the concept of infimum from Ψ↓(*λ*, *y*, *x*; *u*, *v*), given any *ε* > 0, there exists *t*<sub>ε</sub> > 0 such that *ζ*(*y*, *x*; *u*, *v*, *t*<sub>ε</sub>) ≥ 1 − *λ* and *t*<sub>ε</sub> < Ψ↓(*λ*, *y*, *x*; *u*, *v*) + *ε*. Part (ii) of Proposition 3 says that the mapping *ζ*(*y*, *x*; *u*, *v*, ·) is -semisymmetrically nondecreasing. Therefore, we obtain:

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, y, x; u, v) + \epsilon\right) \ge \zeta\left(y, x; u, v, t\_{\epsilon}\right) \ge 1 - \lambda.$$

Since the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is also --semisymmetrically nondecreasing, we can similarly obtain another inequality.

Since the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is semisymmetrically nondecreasing, using parts (ii) and (iii) of Proposition 3, we can similarly obtain part (iv). This completes the proof.
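As a purely numerical sanity check of parts (i) and (ii), the following sketch (our own illustration, not part of the original development) assumes the standard fuzzy metric $M(x, y, t) = t/(t + |x - y|)$ on the real line with the product t-norm, takes $\zeta(x, y; u, v, t) = M(x, y, t) \ast M(u, v, t)$, and approximates $\Psi^{\downarrow}$ by bisection:

```python
# Hypothetical illustration of Proposition 8 (i)-(ii). Assumptions (ours):
# M(x, y, t) = t / (t + |x - y|) on the reals, the product t-norm, and
# zeta(x, y; u, v, t) = M(x, y, t) * M(u, v, t).

def M(x, y, t):
    return t / (t + abs(x - y))

def zeta(x, y, u, v, t):
    return M(x, y, t) * M(u, v, t)   # product t-norm

def psi_down(lam, x, y, u, v, lo=1e-12, hi=1e6, iters=200):
    """inf{t > 0 : zeta(...) >= 1 - lam}, found by bisection
    (zeta is continuous and nondecreasing in t for this M)."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if zeta(x, y, u, v, mid) >= 1.0 - lam:
            hi = mid
        else:
            lo = mid
    return hi

lam, eps = 0.4, 1e-3
x, y, u, v = 0.0, 2.0, 1.0, 4.0
t0 = psi_down(lam, x, y, u, v)

# Part (i): just below the infimum, the level 1 - lam is not yet reached.
assert zeta(x, y, u, v, t0 - eps) < 1.0 - lam
# Part (ii): just above the infimum, the level is reached.
assert zeta(x, y, u, v, t0 + eps) >= 1.0 - lam
```

Since *ζ*(*x*, *y*; *u*, *v*, ·) is continuous and strictly increasing for this choice of *M*, the infimum is attained exactly at the level 1 − *λ*, which is why both inequalities hold for every small *ε* here.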

**Proposition 9.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X and λ* ∈ (0, 1)*, the following statements hold true:*

	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*. Then we have ζ*(*x*, *y*; *u*, *v*, *t*) < 1 − *λ.*
	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*. Then we have ζ*(*y*, *x*; *u*, *v*, *t*) < 1 − *λ and ζ*(*x*, *y*; *v*, *u*, *t*) < 1 − *λ.*
	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*. Then we have ζ*(*y*, *x*; *v*, *u*, *t*) < 1 − *λ.*

**Proof.** To prove part (i), the inequality *t* > Ψ↓(*λ*, *x*, *y*; *u*, *v*) says that there exists *ε* > 0 satisfying *t* ≥ Ψ↓(*λ*, *x*, *y*; *u*, *v*) + *ε*. Therefore, we consider the following cases:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is nondecreasing. Therefore, using Equation (15), we obtain:

$$\zeta\left(x, y; u, v, t\right) \ge \zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) + \epsilon\right) \ge 1 - \lambda.$$

• Suppose that *M* satisfies the -triangle inequality. Part (iii) of Proposition 3 says that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is symmetrically nondecreasing. Therefore, using Equation (18), we obtain:

$$\zeta\left(x, y; u, v, t\right) \ge \zeta\left(y, x; v, u, \Psi^{\downarrow}(\lambda, x, y; u, v) + \epsilon\right) \ge 1 - \lambda.$$

To prove part (ii), the inequality 0 < *t* < Ψ↓(*λ*, *x*, *y*; *u*, *v*) says that there exists *ε* > 0 satisfying *t* ≤ Ψ↓(*λ*, *x*, *y*; *u*, *v*) − *ε*. Therefore, we consider the following cases:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is nondecreasing. Therefore, using Equation (14), we obtain:

$$\zeta\left(x, y; u, v, t\right) \le \zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) - \epsilon\right) < 1 - \lambda.$$

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, }. Part (ii) of Proposition 3 says that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is -semisymmetrically nondecreasing. Therefore, using Equation (14), we obtain:

$$\zeta\left(y, x; u, v, t\right) \le \zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) - \epsilon\right) < 1 - \lambda.$$

We can similarly obtain another inequality using the fact that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is also --semisymmetrically nondecreasing.

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, , }. Parts (ii) and (iii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is symmetrically nondecreasing. Therefore, using Equation (14), we obtain:

$$\zeta\left(y, x; v, u, t\right) \le \zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v) - \epsilon\right) < 1 - \lambda.$$

This completes the proof.

**Proposition 10.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X and λ* ∈ (0, 1)*, the following statements hold true:*

	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, , }*. Then we have t* ≤ Ψ↓(*λ*, *x*, *y*; *u*, *v*)*.*
	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*. Then we have t* ≤ Ψ↓(*λ*, *y*, *x*; *u*, *v*) *and t* ≤ Ψ↓(*λ*, *x*, *y*; *v*, *u*)*.*
	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*. Then we have t* ≤ Ψ↓(*λ*, *y*, *x*; *v*, *u*)*.*
	- *Suppose that M satisfies the strict* ◦*-triangle inequality for* ◦∈{-, -, }*. If* Ψ↓(*λ*, *x*, *y*; *u*, *v*) > 0*, then we have t* = Ψ↓(*λ*, *x*, *y*; *u*, *v*)*.*
	- *Suppose that M satisfies the strict* ◦*-triangle inequality for* ◦∈{-, }*. If* Ψ↓(*λ*, *y*, *x*; *u*, *v*) > 0*, then we have t* = Ψ↓(*λ*, *y*, *x*; *u*, *v*)*, and if* Ψ↓(*λ*, *x*, *y*; *v*, *u*) > 0*, then we have t* = Ψ↓(*λ*, *x*, *y*; *v*, *u*)*.*
	- *Suppose that M satisfies the strict* ◦*-triangle inequality for* ◦∈{-, , }*. If* Ψ↓(*λ*, *y*, *x*; *v*, *u*) > 0*, then we have t* = Ψ↓(*λ*, *y*, *x*; *v*, *u*)*.*

(iii) *If ζ*(*x*, *y*; *u*, *v*, *t*) ≥ 1 − *λ, then we have the following properties:*


**Proof.** To prove part (i), three cases are separately considered below:


$$t \le t\_{\epsilon} < \Psi^{\downarrow}(\lambda, y, x; u, v) + \epsilon.$$

Since *ε* can be any positive real number, we must have *t* ≤ Ψ↓(*λ*, *y*, *x*; *u*, *v*). We can similarly obtain another inequality using the fact that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is also --semisymmetrically nondecreasing.

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, , }. According to the concept of infimum, given any *ε* > 0, there exists *t*<sub>ε</sub> > 0 satisfying *ζ*(*y*, *x*; *v*, *u*, *t*<sub>ε</sub>) ≥ 1 − *λ* and *t*<sub>ε</sub> < Ψ↓(*λ*, *y*, *x*; *v*, *u*) + *ε*. Parts (ii) and (iii) of Proposition 3 say that if *t* > *t*<sub>ε</sub>, then *ζ*(*x*, *y*; *u*, *v*, *t*) ≥ *ζ*(*y*, *x*; *v*, *u*, *t*<sub>ε</sub>) ≥ 1 − *λ*, which contradicts *ζ*(*x*, *y*; *u*, *v*, *t*) < 1 − *λ*. It says that:

$$t \le t\_{\epsilon} < \Psi^{\downarrow}(\lambda, y, x; v, u) + \epsilon.$$

Since *ε* can be any positive real number, we must have *t* ≤ Ψ↓(*λ*, *y*, *x*; *v*, *u*).

To prove part (ii), three cases are separately considered below:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. According to the concept of infimum, given any *ε* > 0, there exists *t*<sub>ε</sub> > 0 satisfying *ζ*(*x*, *y*; *u*, *v*, *t*<sub>ε</sub>) ≥ 1 − *λ* and *t*<sub>ε</sub> < Ψ↓(*λ*, *x*, *y*; *u*, *v*) + *ε*. Regarding the strict property, parts (i) and (ii) of Proposition 4 say that if *t* > *t*<sub>ε</sub>, then *ζ*(*x*, *y*; *u*, *v*, *t*) > *ζ*(*x*, *y*; *u*, *v*, *t*<sub>ε</sub>) ≥ 1 − *λ*, which contradicts *ζ*(*x*, *y*; *u*, *v*, *t*) = 1 − *λ*. It says that:

$$t \le t\_{\epsilon} < \Psi^{\downarrow}(\lambda, x, y; u, v) + \epsilon.$$

Since *ε* can be any positive real number, we must have *t* ≤ Ψ↓(*λ*, *x*, *y*; *u*, *v*). Now we assume that *t* < Ψ↓(*λ*, *x*, *y*; *u*, *v*). The first case of part (ii) of Proposition 9 says that *ζ*(*x*, *y*; *u*, *v*, *t*) < 1 − *λ*, which also contradicts *ζ*(*x*, *y*; *u*, *v*, *t*) = 1 − *λ*. Therefore, we must have *t* = Ψ↓(*λ*, *x*, *y*; *u*, *v*).


Part (iii) can be obtained from the contrapositive statement of part (ii) of Proposition 9. This completes the proof.

**Proposition 11.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X and λ* ∈ (0, 1)*, the following statements hold true:*

(i) *Suppose that the mapping ζ*(*x*, *y*; *u*, *v*, ·) : (0, ∞) → [0, 1] *is left-continuous on* (0, ∞)*. If* Ψ↓(*λ*, *x*, *y*; *u*, *v*) > 0*, then we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; u, v)\right) \le 1 - \lambda. \tag{19}$$

	- *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*. If* Ψ↓(*λ*, *x*, *y*; *u*, *v*) > 0*, then we have:*

$$\zeta\left(\mathbf{x},\mathbf{y};\boldsymbol{u},\upsilon,\Psi^{\downarrow}(\lambda,\mathbf{x},\mathbf{y};\boldsymbol{u},\upsilon)\right)\geq 1-\lambda.\tag{20}$$

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*. If* Ψ↓(*λ*, *y*, *x*; *u*, *v*) > 0*, then we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, y, x; u, v)\right) \ge 1 - \lambda,\tag{21}$$

*and if* Ψ↓(*λ*, *x*, *y*; *v*, *u*) > 0*, then we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, x, y; v, u)\right) \ge 1 - \lambda. \tag{22}$$

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*. If* Ψ↓(*λ*, *y*, *x*; *v*, *u*) > 0*, then we have:*

$$\zeta\left(x, y; u, v, \Psi^{\downarrow}(\lambda, y, x; v, u)\right) \ge 1 - \lambda. \tag{23}$$

**Proof.** By applying *ε* → 0+ to the inequality in Equation (14), we obtain Equation (19), which proves part (i). By applying *ε* → 0+ to the inequality in Equation (15), we obtain Equation (20), which proves part (ii). The other inequalities can be similarly obtained by parts (iii) and (iv) of Proposition 8. This completes the proof.

In order to establish the triangle inequalities for the infimum type of dual double fuzzy semi-metric, we provide a useful lemma.

**Lemma 1.** (Wu [16]) *Suppose that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. For any a* ∈ (0, 1) *and any p* ∈ N*, there exists r* ∈ (0, 1) *such that:*

$$\overbrace{r\*r\*\cdots\*r}^{p\text{ times}} > a.$$
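For the product t-norm $a \ast b = ab$ (one concrete choice; the lemma itself only requires left-continuity at 1), an explicit witness is $r = ((1 + a)/2)^{1/p}$, since then $r \in (0, 1)$ and $r \ast \cdots \ast r = r^p = (1 + a)/2 > a$. A minimal sketch under this assumption:

```python
# Illustration of Lemma 1 for the product t-norm a * b = ab (an assumed
# concrete t-norm; the lemma holds for any t-norm left-continuous at 1).

def witness_r(a, p):
    """Return r in (0, 1) with r ** p > a, for a in (0, 1) and p >= 1."""
    return ((1.0 + a) / 2.0) ** (1.0 / p)

for a in (0.1, 0.5, 0.9, 0.999):
    for p in (1, 2, 5, 50):
        r = witness_r(a, p)
        assert 0.0 < r < 1.0
        assert r ** p > a        # r * r * ... * r (p times) exceeds a
```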

**Theorem 1.** (Triangle Inequalities for Dual Double Fuzzy Semi-Metric) *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Given any fixed μ* ∈ (0, 1] *and any fixed and distinct x*1, *x*2, ··· , *xp*, *y*1, *y*2, ··· , *yp* ∈ *X, we have the following inequalities:*

(i) *Suppose that M satisfies the* -*-triangle inequality. Then, there exists λ* ∈ (0, 1)*, satisfying:*

$$\begin{split} \Psi^{\downarrow}(\mu, x\_1, x\_p; y\_1, y\_p) &\le \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_1, y\_2) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_2, y\_3) + \cdots \\ &\quad + \Psi^{\downarrow}(\lambda, x\_{p-2}, x\_{p-1}; y\_{p-2}, y\_{p-1}) + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_{p-1}, y\_p) \end{split} \tag{24}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_1, x\_p; y\_p, y\_1) &\le \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_2, y\_1) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_3, y\_2) + \cdots \\ &\quad + \Psi^{\downarrow}(\lambda, x\_{p-2}, x\_{p-1}; y\_{p-1}, y\_{p-2}) + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_p, y\_{p-1}) \end{split} \tag{25}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_p, x\_1; y\_1, y\_p) &\le \Psi^{\downarrow}(\lambda, x\_p, x\_{p-1}; y\_{p-1}, y\_p) + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_{p-2}; y\_{p-2}, y\_{p-1}) + \cdots \\ &\quad + \Psi^{\downarrow}(\lambda, x\_3, x\_2; y\_2, y\_3) + \Psi^{\downarrow}(\lambda, x\_2, x\_1; y\_1, y\_2) \end{split}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_p, x\_1; y\_p, y\_1) &\le \Psi^{\downarrow}(\lambda, x\_p, x\_{p-1}; y\_p, y\_{p-1}) + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_{p-2}; y\_{p-1}, y\_{p-2}) + \cdots \\ &\quad + \Psi^{\downarrow}(\lambda, x\_3, x\_2; y\_3, y\_2) + \Psi^{\downarrow}(\lambda, x\_2, x\_1; y\_2, y\_1). \end{split}$$

(ii) *Suppose that M satisfies the* -*-triangle inequality. Then, there exists λ* ∈ (0, 1)*, satisfying:*

$$\begin{split} & \max \left\{ \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{1}, y\_{p}), \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{p}, y\_{1}), \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{1}, y\_{p}), \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{p}, y\_{1}) \right\} \\ & \leq \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{1}, \mathbf{x}\_{2}; y\_{1}, y\_{2}) + \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{3}, \mathbf{x}\_{2}; y\_{3}, y\_{2}) + \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{4}, \mathbf{x}\_{3}; y\_{4}, y\_{3}) \\ & \quad + \dots + \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{p}, \mathbf{x}\_{p-1}; y\_{p}, y\_{p-1}). \end{split}$$

(iii) *Suppose that M satisfies the -triangle inequality. Then, there exists λ* ∈ (0, 1)*, satisfying:*

$$\begin{split} & \max \left\{ \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{1}, y\_{p}), \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{p}, y\_{1}), \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{1}, y\_{p}), \Psi^{\downarrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{p}, y\_{1}) \right\} \\ & \leq \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{2}, \mathbf{x}\_{1}; y\_{2}, y\_{1}) + \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{2}, \mathbf{x}\_{3}; y\_{2}, y\_{3}) + \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{3}, \mathbf{x}\_{4}; y\_{3}, y\_{4}) \\ & \quad + \dots + \Psi^{\downarrow}(\boldsymbol{\lambda}, \mathbf{x}\_{p-1}, \mathbf{x}\_{p}; y\_{p-1}, y\_{p}). \end{split}$$

	- *If p is even, then:*

$$\begin{split} \Psi^{\downarrow}(\mu, x\_1, x\_p; y\_1, y\_p) &\le \Psi^{\downarrow}(\lambda, x\_2, x\_1; y\_2, y\_1) + \Psi^{\downarrow}(\lambda, x\_3, x\_2; y\_3, y\_2) + \Psi^{\downarrow}(\lambda, x\_3, x\_4; y\_3, y\_4) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_5, x\_4; y\_5, y\_4) + \Psi^{\downarrow}(\lambda, x\_5, x\_6; y\_5, y\_6) + \Psi^{\downarrow}(\lambda, x\_7, x\_6; y\_7, y\_6) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_{p-1}, y\_p), \end{split} \tag{26}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_1, x\_p; y\_p, y\_1) &\le \Psi^{\downarrow}(\lambda, x\_2, x\_1; y\_1, y\_2) + \Psi^{\downarrow}(\lambda, x\_3, x\_2; y\_2, y\_3) + \Psi^{\downarrow}(\lambda, x\_3, x\_4; y\_4, y\_3) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_5, x\_4; y\_4, y\_5) + \Psi^{\downarrow}(\lambda, x\_5, x\_6; y\_6, y\_5) + \Psi^{\downarrow}(\lambda, x\_7, x\_6; y\_6, y\_7) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_p, y\_{p-1}), \end{split} \tag{27}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_p, x\_1; y\_1, y\_p) &\le \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_2, y\_1) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_3, y\_2) + \Psi^{\downarrow}(\lambda, x\_4, x\_3; y\_3, y\_4) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_4, x\_5; y\_5, y\_4) + \Psi^{\downarrow}(\lambda, x\_6, x\_5; y\_5, y\_6) + \Psi^{\downarrow}(\lambda, x\_6, x\_7; y\_7, y\_6) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_p, x\_{p-1}; y\_{p-1}, y\_p), \end{split} \tag{28}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_p, x\_1; y\_p, y\_1) &\le \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_1, y\_2) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_2, y\_3) + \Psi^{\downarrow}(\lambda, x\_4, x\_3; y\_4, y\_3) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_4, x\_5; y\_4, y\_5) + \Psi^{\downarrow}(\lambda, x\_6, x\_5; y\_6, y\_5) + \Psi^{\downarrow}(\lambda, x\_6, x\_7; y\_6, y\_7) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_p, x\_{p-1}; y\_p, y\_{p-1}). \end{split} \tag{29}$$

• *If p is odd, then:*

$$\begin{split} \Psi^{\downarrow}(\mu, x\_1, x\_p; y\_1, y\_p) &\le \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_1, y\_2) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_2, y\_3) + \Psi^{\downarrow}(\lambda, x\_4, x\_3; y\_4, y\_3) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_4, x\_5; y\_4, y\_5) + \Psi^{\downarrow}(\lambda, x\_6, x\_5; y\_6, y\_5) + \Psi^{\downarrow}(\lambda, x\_6, x\_7; y\_6, y\_7) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_p, x\_{p-1}; y\_p, y\_{p-1}), \end{split} \tag{30}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_1, x\_p; y\_p, y\_1) &\le \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_2, y\_1) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_3, y\_2) + \Psi^{\downarrow}(\lambda, x\_4, x\_3; y\_3, y\_4) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_4, x\_5; y\_5, y\_4) + \Psi^{\downarrow}(\lambda, x\_6, x\_5; y\_5, y\_6) + \Psi^{\downarrow}(\lambda, x\_6, x\_7; y\_7, y\_6) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_p, x\_{p-1}; y\_{p-1}, y\_p), \end{split} \tag{31}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_p, x\_1; y\_1, y\_p) &\le \Psi^{\downarrow}(\lambda, x\_2, x\_1; y\_1, y\_2) + \Psi^{\downarrow}(\lambda, x\_3, x\_2; y\_2, y\_3) + \Psi^{\downarrow}(\lambda, x\_3, x\_4; y\_4, y\_3) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_5, x\_4; y\_4, y\_5) + \Psi^{\downarrow}(\lambda, x\_5, x\_6; y\_6, y\_5) + \Psi^{\downarrow}(\lambda, x\_7, x\_6; y\_6, y\_7) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_p, y\_{p-1}), \end{split} \tag{32}$$

$$\begin{split} \Psi^{\downarrow}(\mu, x\_p, x\_1; y\_p, y\_1) &\le \Psi^{\downarrow}(\lambda, x\_2, x\_1; y\_2, y\_1) + \Psi^{\downarrow}(\lambda, x\_3, x\_2; y\_3, y\_2) + \Psi^{\downarrow}(\lambda, x\_3, x\_4; y\_3, y\_4) \\ &\quad + \Psi^{\downarrow}(\lambda, x\_5, x\_4; y\_5, y\_4) + \Psi^{\downarrow}(\lambda, x\_5, x\_6; y\_5, y\_6) + \Psi^{\downarrow}(\lambda, x\_7, x\_6; y\_7, y\_6) \\ &\quad + \cdots + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_{p-1}, y\_p). \end{split} \tag{33}$$

**Proof.** To prove part (i), if *μ* = 1, then Ψ↓(1, *x*1, *xp*; *y*1, *yp*) = 0. Therefore, the result is obvious. Now we assume *μ* ∈ (0, 1). Using Lemma 1, there exists *λ* ∈ (0, 1), satisfying:

$$\overbrace{(1 - \lambda) \* \cdots \* (1 - \lambda)}^{(p-1) \text{ times}} > 1 - \mu. \tag{34}$$

Given any *ε* > 0, the first observation of Remark 1 says that:

$$\begin{split} &M\left(\mathbf{x}\_{1},\mathbf{x}\_{p},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{2},y\_{3}) + \dots + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}) + (p-1)\varepsilon\right) \\ &\geq M\left(\mathbf{x}\_{1},\mathbf{x}\_{2},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \varepsilon\right) \ast \dots \ast M\left(\mathbf{x}\_{p-1},\mathbf{x}\_{p},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}) + \varepsilon\right), \end{split} \tag{35}$$

and:

$$\begin{split} &M\left(y\_{1},y\_{p},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{2},y\_{3}) + \dots + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}) + (p-1)\varepsilon\right) \\ &\geq M\left(y\_{1},y\_{2},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \varepsilon\right) \ast \dots \ast M\left(y\_{p-1},y\_{p},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}) + \varepsilon\right). \end{split} \tag{36}$$

Now, applying the increasing property and commutativity of the t-norm to Equations (35) and (36), we obtain:

$$\begin{split} &\zeta\left(x\_1, x\_p; y\_1, y\_p, \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_1, y\_2) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_2, y\_3) + \dots + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_{p-1}, y\_p) + (p-1)\epsilon\right) \\ &\ge \zeta\left(x\_1, x\_2; y\_1, y\_2, \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_1, y\_2) + \epsilon\right) \ast \cdots \ast \zeta\left(x\_{p-1}, x\_p; y\_{p-1}, y\_p, \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_{p-1}, y\_p) + \epsilon\right) \\ &\ge (1 - \lambda) \ast \cdots \ast (1 - \lambda) \text{ (by Equation (15) and the increasing property of the t-norm)} \\ &> 1 - \mu \text{ (by Equation (34)).} \end{split}$$

The definition of Ψ↓ says that:

$$\begin{aligned} &\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{2},y\_{3}) + \dots + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}) + (p-1)\epsilon \\ &\geq \Psi^{\downarrow}(\mu,\mathbf{x}\_{1},\mathbf{x}\_{p};y\_{1},y\_{p}). \end{aligned}$$

By taking *ε* → 0+, we obtain the desired inequality (Equation (24)).

On the other hand, we also have:

$$\begin{split} &M\left(\mathbf{x}\_{1},\mathbf{x}\_{p},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{3},y\_{2}) + \dots + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + (p-1)\varepsilon\right) \\ &\geq M\left(\mathbf{x}\_{1},\mathbf{x}\_{2},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \varepsilon\right) \ast \dots \ast M\left(\mathbf{x}\_{p-1},\mathbf{x}\_{p},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + \varepsilon\right), \end{split} \tag{37}$$

and:

$$\begin{split} &M\left(y\_{p},y\_{1},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{3},y\_{2}) + \dots + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + (p-1)\varepsilon\right) \\ &\geq M\left(y\_{2},y\_{1},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \varepsilon\right) \ast \dots \ast M\left(y\_{p},y\_{p-1},\Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + \varepsilon\right). \end{split} \tag{38}$$

Now, applying the increasing property and commutativity of the t-norm to Equations (37) and (38), we also obtain:

$$\begin{split} &\zeta\left(x\_1, x\_p; y\_p, y\_1, \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_2, y\_1) + \Psi^{\downarrow}(\lambda, x\_2, x\_3; y\_3, y\_2) + \dots + \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_p, y\_{p-1}) + (p-1)\epsilon\right) \\ &\ge \zeta\left(x\_1, x\_2; y\_2, y\_1, \Psi^{\downarrow}(\lambda, x\_1, x\_2; y\_2, y\_1) + \epsilon\right) \ast \cdots \ast \zeta\left(x\_{p-1}, x\_p; y\_p, y\_{p-1}, \Psi^{\downarrow}(\lambda, x\_{p-1}, x\_p; y\_p, y\_{p-1}) + \epsilon\right) \\ &\ge (1 - \lambda) \ast \cdots \ast (1 - \lambda) \text{ (by Equation (15) and the increasing property of the t-norm)} \\ &> 1 - \mu \text{ (by Equation (34)).} \end{split}$$

The definition of Ψ↓ says that:

$$\begin{aligned} &\Psi^{\downarrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{3},y\_{2}) + \dots + \Psi^{\downarrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + (p-1)\epsilon \\ &\geq \Psi^{\downarrow}(\mu,\mathbf{x}\_{1},\mathbf{x}\_{p};y\_{p},y\_{1}). \end{aligned}$$

By taking *ε* → 0+, we obtain the desired inequality (Equation (25)). Since the other inequalities can be similarly obtained, we omit the details.

The above argument is still valid to obtain part (ii) by referring to the second observation of Remark 1. Further, we can use the third observation of Remark 1 to obtain part (iii). Finally, part (iv) can be obtained by referring to the fourth observation of Remark 1. This completes the proof.
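As an illustration of part (i) with $p = 3$, the following sketch (our own numerical check, assuming the standard fuzzy metric $M(x, y, t) = t/(t + |x - y|)$ on the real line, the product t-norm, and $\zeta(x, y; u, v, t) = M(x, y, t) \ast M(u, v, t)$) verifies this instance of Equation (24); λ is chosen so that $(1 - \lambda)^2 > 1 - \mu$, as in Equation (34):

```python
import math

# Numerical check of the p = 3 case of the triangle inequality (24).
# Assumptions (ours, for illustration only): M(x, y, t) = t / (t + |x - y|),
# product t-norm, zeta(x, y; u, v, t) = M(x, y, t) * M(u, v, t).

def M(x, y, t):
    return t / (t + abs(x - y))

def zeta(x, y, u, v, t):
    return M(x, y, t) * M(u, v, t)

def psi_down(lam, x, y, u, v, lo=1e-12, hi=1e9, iters=300):
    """inf{t > 0 : zeta(...) >= 1 - lam}, by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if zeta(x, y, u, v, mid) >= 1.0 - lam:
            hi = mid
        else:
            lo = mid
    return hi

mu = 0.5
lam = 1.0 - math.sqrt(1.0 - mu) - 1e-9   # ensures (1 - lam)**2 > 1 - mu
x1, x2, x3 = 0.0, 1.0, 5.0
y1, y2, y3 = 2.0, -1.0, 3.0

lhs = psi_down(mu, x1, x3, y1, y3)
rhs = psi_down(lam, x1, x2, y1, y2) + psi_down(lam, x2, x3, y2, y3)
assert lhs <= rhs + 1e-6     # Equation (24) with p = 3
```

The check relies on the classical fact that this particular *M* satisfies the triangle inequality *M*(*x*, *z*, *t* + *s*) ≥ *M*(*x*, *y*, *t*) ∗ *M*(*y*, *z*, *s*) under the product t-norm.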

Let (*X*, *M*) be a fuzzy semi-metric space, and let $\{x\_n\}\_{n=1}^{\infty}$ be a sequence in *X*. We write $x\_n \stackrel{M^{\flat}}{\longrightarrow} x$ as *n* → ∞ if and only if:

$$\lim\_{n \to \infty} M(x\_n, x, t) = 1 \text{ for all } t > 0.$$

We also write $x\_n \stackrel{M^{\circ}}{\longrightarrow} x$ as *n* → ∞ if and only if:

$$\lim\_{n \to \infty} M(x, x\_n, t) = 1 \text{ for all } t > 0.$$

The main convergence theorem is presented below. We first provide a useful lemma.

**Lemma 2.** *Let* ∗ *be a t-norm. If a* ∗ *b* > *k then a* > *k and b* > *k.*

**Proof.** Since *b* ≤ 1, the increasing property and boundary condition show that *b* ∗ *k* ≤ 1 ∗ *k* = *k*. Suppose that *a* ≤ *k*. Then we have *a* ∗ *b* ≤ *k* ∗ *b* and:

$$k < a \ast b \le k \ast b \le k.$$

A contradiction occurs. Therefore, we must have *a* > *k*. We can similarly show that *b* > *k*. This completes the proof.
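Since Lemma 2 only uses monotonicity and the boundary condition $1 \ast k = k$, it can be spot-checked for any concrete t-norm; the sketch below (our own illustration) exercises it for the minimum and product t-norms on a grid of values:

```python
# Spot check of Lemma 2 for two concrete t-norms (an illustration only;
# the lemma holds for every t-norm).

t_norms = {
    "minimum": min,
    "product": lambda a, b: a * b,
}

grid = [i / 20.0 for i in range(21)]   # 0.0, 0.05, ..., 1.0

for name, star in t_norms.items():
    for a in grid:
        for b in grid:
            for k in grid:
                if star(a, b) > k:
                    # Lemma 2: a * b > k forces a > k and b > k.
                    assert a > k and b > k, (name, a, b, k)
```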

**Theorem 2.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Let* $\{x\_n\}\_{n=1}^{\infty}$ *and* $\{y\_n\}\_{n=1}^{\infty}$ *be two sequences in X. Then we have the following properties:*


**Proof.** For any fixed *λ* ∈ (0, 1), using Lemma 1, it follows that there exists *λ*<sup>0</sup> ∈ (0, 1), satisfying:

$$(1 - \lambda\_0) \* (1 - \lambda\_0) > 1 - \lambda. \tag{39}$$

We just prove the first case, since the other cases can be similarly obtained. Suppose that *M*(*xn*, *x*, *t*) → 1 and *M*(*yn*, *y*, *t*) → 1 as *n* → ∞ for all *t* > 0. Then, given any *t* > 0 and *δ* > 0, there exist *n*<sub>*t*,*δ*</sub><sup>(1)</sup>, *n*<sub>*t*,*δ*</sub><sup>(2)</sup> ∈ N, satisfying |*M*(*xn*, *x*, *t*) − 1| < *δ* for *n* ≥ *n*<sub>*t*,*δ*</sub><sup>(1)</sup> and |*M*(*yn*, *y*, *t*) − 1| < *δ* for *n* ≥ *n*<sub>*t*,*δ*</sub><sup>(2)</sup>. Therefore, given any *ε* ∈ (0, 1), there exists *n*<sub>ε</sub> ∈ N, satisfying:

$$\left|M\left(x\_n, x, \frac{\epsilon}{2}\right) - 1\right| < \lambda\_0 \text{ and } \left|M\left(y\_n, y, \frac{\epsilon}{2}\right) - 1\right| < \lambda\_0,$$

for *n* ≥ *n*<sub>ε</sub>. We also have:

$$M\left(x\_n, x, \frac{\epsilon}{2}\right) > 1 - \lambda\_0 \text{ and } M\left(y\_n, y, \frac{\epsilon}{2}\right) > 1 - \lambda\_0,$$

for *n* ≥ *n*<sub>ε</sub>. The increasing property of the t-norm says that:

$$\zeta\left(x\_n, x; y\_n, y, \frac{\epsilon}{2}\right) = M\left(x\_n, x, \frac{\epsilon}{2}\right) \ast M\left(y\_n, y, \frac{\epsilon}{2}\right) \ge (1 - \lambda\_0) \ast (1 - \lambda\_0) > 1 - \lambda.$$

The definition of Ψ↓ says that:

$$\Psi^{\downarrow}(\lambda, x\_n, x; y\_n, y) \le \frac{\epsilon}{2} < \epsilon,$$

for *n* ≥ *n*<sub>ε</sub>. This shows that Ψ↓(*λ*, *xn*, *x*; *yn*, *y*) → 0 as *n* → ∞.

Conversely, assume that Ψ↓(*λ*, *xn*, *x*; *yn*, *y*) → 0 as *n* → ∞ for all *λ* ∈ (0, 1). Now, given any *δ* > 0 and *λ* ∈ (0, 1], there exists *n*<sub>*δ*,*λ*</sub> ∈ N, satisfying |Ψ↓(*λ*, *xn*, *x*; *yn*, *y*)| < *δ* for all *n* ≥ *n*<sub>*δ*,*λ*</sub>. Therefore, for any fixed *t* > 0 and given any *ε* ∈ (0, 1), there exists *n*<sub>ε</sub> ∈ N, satisfying:

$$\Psi^{\downarrow}\left(\frac{\epsilon}{2}, x\_n, x; y\_n, y\right) < t,$$

for *n* ≥ *n*<sub>ε</sub>, which implies:

$$\zeta(x\_n, x; y\_n, y, t) \ge 1 - \frac{\epsilon}{2} > 1 - \epsilon,$$

for *n* ≥ *n*<sub>ε</sub> by part (i) of Proposition 9, i.e.,

$$M\left(x\_n, x, t\right) \ast M\left(y\_n, y, t\right) > 1 - \epsilon,$$

for *n* ≥ *n*<sub>ε</sub>. Lemma 2 says that:

$$M\left(x\_n, x, t\right) > 1 - \epsilon \text{ and } M\left(y\_n, y, t\right) > 1 - \epsilon,$$

for *<sup>n</sup>* <sup>≥</sup> *<sup>n</sup>*. This shows that *xn <sup>M</sup>*- −→ *<sup>x</sup>* and *yn <sup>M</sup>*- −→ *y* as *n* → ∞, and the proof is complete.

**Example 5.** *From Example 1, we see that:*

$$x\_n \stackrel{M^{\flat}}{\longrightarrow} x \text{ if and only if } \lim\_{n \to \infty} d(x\_n, x) = 0,$$

*and:*

$$x\_n \stackrel{M^{\circ}}{\longrightarrow} x \text{ if and only if } \lim\_{n \to \infty} d(x, x\_n) = 0.$$

*From Example 4, we have:*

$$\Psi^{\downarrow}(\lambda, x, y; u, v) = \frac{C + \sqrt{C^2 + D}}{2},$$

*where:*

$$C = \frac{(d(x, y) + d(u, v))(1 - \lambda)}{\lambda} \text{ and } D = \frac{d(x, y) \cdot d(u, v) \cdot (1 - \lambda)}{\lambda}.$$

*It is clear that* $x\_n \stackrel{M^{\flat}}{\longrightarrow} x$ *and* $y\_n \stackrel{M^{\flat}}{\longrightarrow} y$ *as n* → ∞ *if and only if* Ψ↓(*λ*, *xn*, *x*; *yn*, *y*) → 0 *as n* → ∞ *for all λ* ∈ (0, 1)*. The other convergence properties presented in Theorem 2 can be similarly verified.*
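The convergence in Theorem 2 can be observed numerically from the closed form above: as *d*(*xn*, *x*) and *d*(*yn*, *y*) shrink, both *C* and *D* shrink, and hence Ψ↓(*λ*, *xn*, *x*; *yn*, *y*) → 0 for every fixed *λ*. A small sketch on the real line with $d(x, y) = |x - y|$ (our assumed underlying metric):

```python
import math

# Sketch of the convergence in Theorem 2 via the closed form of Example 4,
# on the real line with d(x, y) = |x - y| (assumed underlying metric).

def psi_down(lam, d_xy, d_uv):
    C = (d_xy + d_uv) * (1.0 - lam) / lam
    D = d_xy * d_uv * (1.0 - lam) / lam
    return (C + math.sqrt(C * C + D)) / 2.0

x, y = 1.0, -2.0
lam = 0.3
values = []
for n in range(1, 7):
    x_n = x + 1.0 / n          # x_n -> x
    y_n = y - 1.0 / n**2       # y_n -> y
    values.append(psi_down(lam, abs(x_n - x), abs(y_n - y)))

# Psi_down decreases along the sequence and tends to 0.
assert all(a > b for a, b in zip(values, values[1:]))
assert psi_down(lam, 1e-12, 1e-12) < 1e-10
```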

**Definition 4.** *Let* (*X*, *M*) *be a fuzzy semi-metric space, and let* $\{x\_n\}\_{n=1}^{\infty}$ *be a sequence in X.*


**Definition 5.** *Let* (*X*, *M*) *be a fuzzy semi-metric space such that M satisfies the canonical condition, and let* $\{x\_n\}\_{n=1}^{\infty}$ *and* $\{y\_n\}\_{n=1}^{\infty}$ *be two sequences in X.*


**Theorem 3.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the canonical condition, and that the t-norm* ∗ *is left-continuous at* 1 *with respect to the first or second argument. Let* $\{x\_n\}\_{n=1}^{\infty}$ *and* $\{y\_n\}\_{n=1}^{\infty}$ *be two sequences in X. Then, we have the following properties:*


**Proof.** It suffices to just prove part (i), since the other cases can be similarly obtained. Suppose that $\{x\_n\}\_{n=1}^{\infty}$ and $\{y\_n\}\_{n=1}^{\infty}$ are >-Cauchy sequences. Then, given any *t* > 0 and *δ* > 0, there exists *n*<sub>*t*,*δ*</sub> ∈ N such that *m* > *n* ≥ *n*<sub>*t*,*δ*</sub> implies *M*(*xm*, *xn*, *t*) > 1 − *δ* and *M*(*ym*, *yn*, *t*) > 1 − *δ*. Now, given any *ε* ∈ (0, 1), there exists *n*<sub>ε</sub> ∈ N such that *m* > *n* ≥ *n*<sub>ε</sub> implies:

$$M\left(x\_m, x\_n, \frac{\epsilon}{2}\right) > 1 - \lambda\_0 \text{ and } M\left(y\_m, y\_n, \frac{\epsilon}{2}\right) > 1 - \lambda\_0.$$

The increasing property of t-norm says that:

$$\begin{aligned} \zeta \left( \mathbf{x}\_{\mathrm{m}\_{\prime\prime}} \mathbf{x}\_{\mathrm{n}\_{\prime}} \mathbf{y}\_{\mathrm{m}\_{\prime}} \mathbf{y}\_{\mathrm{n}\_{\prime}} \frac{\boldsymbol{\varepsilon}}{2} \right) &= M \left( \mathbf{x}\_{\mathrm{m}\_{\prime}} \mathbf{x}\_{\mathrm{n}\_{\prime}} \frac{\boldsymbol{\varepsilon}}{2} \right) \ast M \left( \mathbf{y}\_{\mathrm{m}\_{\prime}} \mathbf{y}\_{\mathrm{n}\_{\prime}} \frac{\boldsymbol{\varepsilon}}{2} \right) \\ &\geq (1 - \lambda\_{0}) \ast (1 - \lambda\_{0}) > 1 - \lambda \text{ (using Equation (39)).} \end{aligned}$$

Further, by referring to the definition of Ψ↓, we obtain:

$$\, \_\prime \Psi^\downarrow \left( \lambda, \ge\_{m'} \ge\_{n'} \right) \le \frac{\epsilon}{2} < \epsilon\_{\prime\prime}$$

for *m* > *n* ≥ *n*.

Conversely, using the assumption, for any fixed *<sup>t</sup>* <sup>&</sup>gt; 0 and given any <sup>∈</sup> (0, 1), there exists *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> such that *m* > *n* ≥ *n* implies Ψ(/2, *xm*, *xn*; *ym*, *yn*) < *t*. Using Proposition 9, we obtain:

$$\mathcal{J}(\mathfrak{x}\_{m'}\mathfrak{x}\_{n'}\mathfrak{y}\_{m'}\mathfrak{y}\_{n'}\mathfrak{t}) \ge 1 - \frac{\epsilon}{2} > 1 - \epsilon\_{\prime}$$

for *m* > *n* ≥ *n*, i.e.,

$$\mathcal{M}\left(\mathfrak{x}\_{m\_{\prime\prime}}\mathfrak{x}\_{n\_{\prime}}t\right) \* \mathcal{M}\left(\mathfrak{y}\_{m\_{\prime\prime}}\mathfrak{y}\_{n\_{\prime}}t\right) > 1 - \epsilon\_{\prime\prime}$$

for *m* > *n* ≥ *n*. Lemma 2 says that:

$$M\left(\mathbf{x}\_{m\prime}, \mathbf{x}\_{n\prime}, t\right) > 1 - \epsilon \text{ and } M\left(y\_{m\prime}, y\_{n\prime}, t\right) > 1 - \epsilon,$$

for *<sup>m</sup>* <sup>&</sup>gt; *<sup>n</sup>* <sup>≥</sup> *<sup>n</sup>*, which shows that {*xn*}<sup>∞</sup> *<sup>n</sup>*=<sup>1</sup> and {*yn*}<sup>∞</sup> *<sup>n</sup>*=<sup>1</sup> are >-Cauchy sequences. This completes the proof.
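The criterion just proved can be illustrated on a concrete space. The sketch below (Python) assumes the standard fuzzy metric M(x, y, t) = t/(t + |x − y|) on the real line with the product t-norm, as in Example 6 later in this section; the helper name `psi_down` is ours. For this continuous, strictly increasing ζ, the infimum defining Ψ↓ is attained at the unique solution of ζ(t) = 1 − λ, so a closed form is available, and its values along the >-Cauchy sequence xn = yn = 1/n shrink to 0, as part (i) predicts.

```python
import math

def psi_down(lam, a, b):
    """Infimum-type dual double fuzzy semi-metric for the standard fuzzy
    metric M(x, y, t) = t/(t + d(x, y)) under the product t-norm.

    Here a = d(x_m, x_n) and b = d(y_m, y_n). Since
    zeta(t) = t/(t + a) * t/(t + b) is continuous and strictly increasing,
    inf{t > 0 : zeta(t) > 1 - lam} is the unique root of zeta(t) = 1 - lam."""
    if a == 0 and b == 0:
        return 0.0
    C = (a + b) * (1 - lam) / lam
    D = 4 * a * b * (1 - lam) / lam
    return (C + math.sqrt(C * C + D)) / 2

# x_n = y_n = 1/n is >-Cauchy in (R, |.|); the dual-metric values shrink to 0.
lam = 0.5
pairs = [(20, 10), (200, 100), (2000, 1000)]
vals = [psi_down(lam, abs(1/m - 1/n), abs(1/m - 1/n)) for (m, n) in pairs]
print(vals)
```

Each refinement of the index pair shrinks the value by roughly an order of magnitude, matching the Ψ↓-Cauchy criterion of part (i).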

#### **5. Convergence Based on the Supremum**

Using the infimum and assuming the canonical condition, the infimum type of dual double fuzzy semi-metric was proposed in the previous section. In this section, we instead consider the supremum, in order to propose the so-called supremum type of dual double fuzzy semi-metric.

Recall that the purpose of the canonical condition is to guarantee that the infimum type of dual fuzzy semi-metric is well-defined. Analogously, the rational condition guarantees that the supremum type of dual fuzzy semi-metric is well-defined. The formal definition is given below.

**Definition 6.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗ *such that M satisfies the rational condition, and such that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v and any fixed λ* ∈ [0, 1)*, we consider the following set:*

$$\Pi^{\uparrow}(\lambda, x, y; u, v) = \{t > 0 : \zeta(x, y; u, v, t) \leq 1 - \lambda\},$$

*which will be used to define a function* Ψ↑ *by:*

$$\Psi^{\uparrow}(\lambda, x, y; u, v) = \sup \Pi^{\uparrow}(\lambda, x, y; u, v) = \sup\{t > 0 : \zeta(x, y; u, v, t) \leq 1 - \lambda\}.$$

*The mapping* <sup>Ψ</sup><sup>↑</sup> *from* [0, 1) × *<sup>X</sup>*<sup>4</sup> *into* [0, +∞] *is called the supremum type of dual double fuzzy semi-metric.*

**Example 6.** *Continued from Example 1, we have:*

$$\Pi^{\uparrow}(\lambda, x, y; u, v) = \left\{t > 0 : \frac{t}{t + d(x, y)} \cdot \frac{t}{t + d(u, v)} \leq 1 - \lambda\right\} = \left\{t > 0 : t \leq \frac{C + \sqrt{C^2 + D}}{2}\right\},$$

*where:*

$$C = \frac{(d(x, y) + d(u, v))(1 - \lambda)}{\lambda} \text{ and } D = \frac{4 \, d(x, y) \cdot d(u, v) \cdot (1 - \lambda)}{\lambda}.$$

*We also have:*

$$\Psi^{\uparrow}(\lambda, x, y; u, v) = \sup \Pi^{\uparrow}(\lambda, x, y; u, v) = \sup\left\{t > 0 : t \leq \frac{C + \sqrt{C^2 + D}}{2}\right\} = \frac{C + \sqrt{C^2 + D}}{2}.$$
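The closed form above can be sanity-checked numerically. The sketch below (Python) assumes Example 1 uses M(x, y, t) = t/(t + d(x, y)) together with the product t-norm, which is what the first set description suggests; the names `zeta` and `psi_up` are ours. Rearranging λt² − (1 − λ)(d(x, y) + d(u, v))t − (1 − λ)d(x, y)d(u, v) ≤ 0 and applying the quadratic formula gives C as above and D = 4d(x, y)d(u, v)(1 − λ)/λ; the script compares this closed form with a brute-force supremum over a fine grid.

```python
import math

def zeta(t, a, b):
    # zeta(x, y; u, v, t) with a = d(x, y), b = d(u, v), product t-norm.
    return (t / (t + a)) * (t / (t + b))

def psi_up(lam, a, b):
    # Closed form of sup{t > 0 : zeta(t, a, b) <= 1 - lam} from Example 6.
    C = (a + b) * (1 - lam) / lam
    D = 4 * a * b * (1 - lam) / lam
    return (C + math.sqrt(C * C + D)) / 2

a, b, lam = 1.0, 2.0, 0.5
closed = psi_up(lam, a, b)
# Brute-force the supremum on a fine grid over (0, 20].
grid = (k * 1e-4 for k in range(1, 200001))
numeric = max(t for t in grid if zeta(t, a, b) <= 1 - lam)
print(closed, numeric)
```

At the closed-form value, ζ equals 1 − λ exactly, i.e., the value sits on the boundary of the defining set.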

For any *x* ≠ *y* or *u* ≠ *v*, we need to claim that the set Π↑(*λ*, *x*, *y*; *u*, *v*) is nonempty. Suppose that Π↑(*λ*, *x*, *y*; *u*, *v*) = ∅. The definition says that *ζ*(*x*, *y*; *u*, *v*, *t*) > 1 − *λ* for all *t* > 0. Therefore, we obtain:

$$\lim_{t \to 0+} \zeta(x, y; u, v, t) \geq 1 - \lambda,$$

which contradicts Equation (9). This says that Definition 6 is well-defined, and also that Ψ↑(*λ*, *x*, *y*; *u*, *v*) > 0. We also have:

$$\Psi^{\uparrow}(0, x, y; u, v) = \sup\{t > 0 : \zeta(x, y; u, v, t) \leq 1\} = \sup\{t > 0\} = +\infty.$$

Moreover, if *λ*1 > *λ*2, then:

$$\Pi^{\uparrow}(\lambda_1, x, y; u, v) \subseteq \Pi^{\uparrow}(\lambda_2, x, y; u, v) \text{ and } \Psi^{\uparrow}(\lambda_1, x, y; u, v) \leq \Psi^{\uparrow}(\lambda_2, x, y; u, v). \tag{40}$$
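Equation (40) can be observed directly for the closed form of Example 6. The sketch below (Python) assumes the Example 6 setting M(x, y, t) = t/(t + d(x, y)) with the product t-norm; the helper name `psi_up` is ours. It evaluates Ψ↑ over increasing values of λ:

```python
import math

def psi_up(lam, a, b):
    # Example 6 closed form with a = d(x, y), b = d(u, v):
    # Psi_up(lam, x, y; u, v) = (C + sqrt(C^2 + D)) / 2.
    C = (a + b) * (1 - lam) / lam
    D = 4 * a * b * (1 - lam) / lam
    return (C + math.sqrt(C * C + D)) / 2

a, b = 1.0, 2.0
lambdas = [0.1, 0.3, 0.5, 0.7, 0.9]
vals = [psi_up(lam, a, b) for lam in lambdas]
print(vals)  # nonincreasing, as Equation (40) predicts
```

Raising λ tightens the defining condition ζ ≤ 1 − λ, so the set shrinks and its supremum can only decrease.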

**Proposition 12.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, suppose that* Ψ↑(*λ*, *x*, *y*; *u*, *v*) = +∞*. Then, the following statements hold true:*


**Proof.** The fact Ψ↑(*λ*, *x*, *y*; *u*, *v*) = +∞ says that *ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ* for sufficiently large *t* > 0 in the sense of *t* → ∞. To prove part (i), we assume that there exists *t*0 > 0 satisfying *ζ*(*x*, *y*; *u*, *v*, *t*0) > 1 − *λ*. Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is nondecreasing. Therefore, if *t*1 > *t*0, then:

$$\zeta(x, y; u, v, t_1) \geq \zeta(x, y; u, v, t_0) > 1 - \lambda,$$

which contradicts *ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ* for sufficiently large *t* > 0.

To prove part (ii), we assume that there exists *t*0 > 0 satisfying *ζ*(*y*, *x*; *u*, *v*, *t*0) > 1 − *λ*. Part (ii) of Proposition 3 says that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is -semisymmetrically nondecreasing. Therefore, if *t*1 > *t*0, then:

$$\zeta(x, y; u, v, t_1) \geq \zeta(y, x; u, v, t_0) > 1 - \lambda,$$

which contradicts *ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ* for sufficiently large *t* > 0. We can similarly obtain the other inequality using the fact that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is --semisymmetrically nondecreasing.

To prove part (iii), we assume that there exists *t*0 > 0 satisfying *ζ*(*y*, *x*; *v*, *u*, *t*0) > 1 − *λ*. Parts (ii) and (iii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is symmetrically nondecreasing. Therefore, if *t*1 > *t*0, then:

$$\zeta(x, y; u, v, t_1) \geq \zeta(y, x; v, u, t_0) > 1 - \lambda,$$

which contradicts *ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ* for sufficiently large *t* > 0. This completes the proof.

**Proposition 13.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational and canonical conditions, and that the t-norm* ∗ *is right-continuous at* 0 *and left-continuous at* 1 *with respect to the first or second argument. Then, given any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, we have* Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞ *for λ* ∈ (0, 1)*.*

**Proof.** We assume that Ψ↑(*λ*, *x*, *y*; *u*, *v*) = +∞, which means that *ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ* for sufficiently large *t* in the sense of *t* → ∞. Using Equation (8), we obtain:

$$1 = \lim_{t \to \infty} \zeta(x, y; u, v, t) \leq 1 - \lambda,$$

which leads to a contradiction for 0 < *λ* < 1. This completes the proof.

**Proposition 14.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. Assume that M satisfies the canonical and rational conditions. We also assume that the t-norm* ∗ *is left-continuous at* 1 *and right-continuous at* 0 *with respect to the first or second argument, and that the t-norm* ∗ *also satisfies the strictly increasing property. For any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, the following statements hold true:*

(i) *Suppose that M satisfies the strict* ◦*-triangle inequality for* ◦∈{-, -, }*. Then we have:*

$$\Psi^\uparrow(\lambda, x, y; u, v) \le \Psi^\downarrow(\lambda, x, y; u, v)$$

*for each λ* ∈ (0, 1)*.*

(ii) *Suppose that M satisfies the strict* ◦*-triangle inequality for* ◦∈{-, }*. Then we have:*

$$\Psi^\uparrow(\lambda, \mathbf{x}, y; u, v) \le \Psi^\downarrow(\lambda, y, \mathbf{x}; u, v) \, and \, \Psi^\uparrow(\lambda, \mathbf{x}, y; u, v) \le \Psi^\downarrow(\lambda, \mathbf{x}, y; v, u)$$

*for each λ* ∈ (0, 1)*.*

(iii) *Suppose that M satisfies the strict* ◦*-triangle inequality for* ◦∈{-, , }*. Then we have:*

$$\Psi^{\uparrow}(\lambda, x, y; u, v) \leq \Psi^{\downarrow}(\lambda, y, x; v, u)$$

*for each λ* ∈ (0, 1)*.*

**Proof.** Proposition 13 says that Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞ for all *λ* ∈ (0, 1). According to the concept of supremum, given any *ε* > 0, there exists *tε* > 0 satisfying *ζ*(*x*, *y*; *u*, *v*, *tε*) ≤ 1 − *λ* and Ψ↑(*λ*, *x*, *y*; *u*, *v*) − *ε* < *tε*. To prove part (i), parts (i) and (ii) of Proposition 10 say that *tε* ≤ Ψ↓(*λ*, *x*, *y*; *u*, *v*), which implies Ψ↑(*λ*, *x*, *y*; *u*, *v*) − *ε* < Ψ↓(*λ*, *x*, *y*; *u*, *v*). Since *ε* can be any positive real number, we obtain the desired inequality.

To prove part (ii), parts (i) and (ii) of Proposition 10 say that *tε* ≤ Ψ↓(*λ*, *y*, *x*; *u*, *v*), which implies Ψ↑(*λ*, *x*, *y*; *u*, *v*) − *ε* < Ψ↓(*λ*, *y*, *x*; *u*, *v*). Since *ε* can be any positive real number, we obtain Ψ↑(*λ*, *x*, *y*; *u*, *v*) ≤ Ψ↓(*λ*, *y*, *x*; *u*, *v*). The other inequality can be similarly obtained.

To prove part (iii), parts (ii) and (iii) of Proposition 10 say that *tε* ≤ Ψ↓(*λ*, *y*, *x*; *v*, *u*), which implies Ψ↑(*λ*, *x*, *y*; *u*, *v*) − *ε* < Ψ↓(*λ*, *y*, *x*; *v*, *u*). Since *ε* can be any positive real number, we obtain the desired inequality. This completes the proof.
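The comparison between the two dual metrics can be illustrated numerically. The sketch below (Python) again assumes the Example 6 setting M(x, y, t) = t/(t + d(x, y)) with the product t-norm, and approximates both Ψ↑ and Ψ↓ by grid search; all names are ours. Here ζ is continuous and strictly increasing in t, so the supremum type and the infimum type coincide up to the grid resolution, consistent with Ψ↑ ≤ Ψ↓:

```python
def zeta(t, a, b):
    # zeta(x, y; u, v, t) with a = d(x, y), b = d(u, v), product t-norm.
    return (t / (t + a)) * (t / (t + b))

def psi_up_grid(lam, a, b, grid):
    # sup{t > 0 : zeta(t) <= 1 - lam}, approximated on a finite grid.
    return max(t for t in grid if zeta(t, a, b) <= 1 - lam)

def psi_down_grid(lam, a, b, grid):
    # inf{t > 0 : zeta(t) > 1 - lam}, approximated on a finite grid.
    return min(t for t in grid if zeta(t, a, b) > 1 - lam)

a, b, lam = 0.7, 1.3, 0.4
grid = [k * 1e-4 for k in range(1, 100001)]
up = psi_up_grid(lam, a, b, grid)
down = psi_down_grid(lam, a, b, grid)
print(up, down)
```

For a ζ with jumps, the two values could differ, which is exactly the room the inequality of Proposition 14 leaves.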

**Proposition 15.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. For any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, and any fixed λ* ∈ (0, 1)*, we assume* Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞*.*

(i) *For any ε* > 0*, we have the following inequality:*

$$\zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) + \varepsilon\right) > 1 - \lambda. \tag{41}$$

(ii) *For any ε* > 0 *with* Ψ↑(*λ*, *x*, *y*; *u*, *v*) > *ε, the following statements hold true:*

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*. Then we have:*

$$\zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda. \tag{42}$$

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*. Then we have:*

$$\zeta\left(y, x; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda \text{ and } \zeta\left(x, y; v, u, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda. \tag{43}$$

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*. Then we have:*

$$\zeta\left(y, x; v, u, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda. \tag{44}$$

**Proof.** To prove part (i), given any *ε* > 0, we assume that *ζ*(*x*, *y*; *u*, *v*, Ψ↑(*λ*, *x*, *y*; *u*, *v*) + *ε*) ≤ 1 − *λ*. The definition of Ψ↑ says that Ψ↑(*λ*, *x*, *y*; *u*, *v*) ≥ Ψ↑(*λ*, *x*, *y*; *u*, *v*) + *ε*. This contradiction shows that *ζ*(*x*, *y*; *u*, *v*, Ψ↑(*λ*, *x*, *y*; *u*, *v*) + *ε*) > 1 − *λ*.

To prove part (ii), according to the concept of supremum for Ψ↑(*λ*, *x*, *y*; *u*, *v*), given any *ε* > 0 with Ψ↑(*λ*, *x*, *y*; *u*, *v*) > *ε*, there exists *tε* > 0 satisfying *ζ*(*x*, *y*; *u*, *v*, *tε*) ≤ 1 − *λ* and *tε* > Ψ↑(*λ*, *x*, *y*; *u*, *v*) − *ε*. Therefore, we consider three cases below:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is nondecreasing. Therefore, we have:

$$\zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq \zeta(x, y; u, v, t_{\varepsilon}) \leq 1 - \lambda.$$

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, }. Part (ii) of Proposition 3 says that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is -semisymmetrically nondecreasing. Therefore, we have:

$$\zeta\left(y, x; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq \zeta(x, y; u, v, t_{\varepsilon}) \leq 1 - \lambda.$$

We can similarly obtain another inequality using the fact of the mapping *ζ*(*x*, *y*; *u*, *v*, ·) to be --semisymmetrically nondecreasing.

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, , }. Parts (ii) and (iii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is symmetrically nondecreasing. Therefore, we have:

$$\zeta\left(y, x; v, u, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq \zeta(x, y; u, v, t_{\varepsilon}) \leq 1 - \lambda.$$

This completes the proof.

**Proposition 16.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, and any fixed λ* ∈ (0, 1)*, the following statements hold true:*

(i) *Suppose that t* > Ψ↑(*λ*, *x*, *y*; *u*, *v*)*. Then:*

	- *If M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*, then ζ*(*x*, *y*; *u*, *v*, *t*) > 1 − *λ.*
	- *If M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*, then ζ*(*y*, *x*; *u*, *v*, *t*) > 1 − *λ and ζ*(*x*, *y*; *v*, *u*, *t*) > 1 − *λ.*
	- *If M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*, then ζ*(*y*, *x*; *v*, *u*, *t*) > 1 − *λ.*

(ii) *We have the following properties:*


**Proof.** To prove part (i), the fact *t* > Ψ↑(*λ*, *x*, *y*; *u*, *v*) says that there exists *ε* > 0 satisfying *t* ≥ Ψ↑(*λ*, *x*, *y*; *u*, *v*) + *ε*. We consider three cases below:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. Parts (i) and (ii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is nondecreasing. Therefore, using Equation (41), we obtain:

$$\zeta(x, y; u, v, t) \geq \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) + \varepsilon\right) > 1 - \lambda.$$

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, }. Part (ii) of Proposition 3 says that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is both -semisymmetrically nondecreasing and --semisymmetrically nondecreasing. Therefore, using Equation (41), we obtain:

$$\zeta(y, x; u, v, t) \geq \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) + \varepsilon\right) > 1 - \lambda,$$

and:

$$\zeta(x, y; v, u, t) \geq \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) + \varepsilon\right) > 1 - \lambda.$$

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, , }. Parts (ii) and (iii) of Proposition 3 say that the mapping *ζ*(*x*, *y*; *u*, *v*, ·) is symmetrically nondecreasing. Therefore, using Equation (41), we obtain:

$$\zeta(y, x; v, u, t) \geq \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) + \varepsilon\right) > 1 - \lambda.$$

To prove part (ii), we consider three cases below:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. Using part (i) of Proposition 12, if Ψ↑(*λ*, *x*, *y*; *u*, *v*) = +∞, then it is done. Now, for Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞, the fact *t* < Ψ↑(*λ*, *x*, *y*; *u*, *v*) says that there exists *ε* > 0 satisfying 0 < *t* ≤ Ψ↑(*λ*, *x*, *y*; *u*, *v*) − *ε*. Using Equation (42), we obtain:

$$\zeta(x, y; u, v, t) \leq \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda.$$

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, }. Using part (ii) of Proposition 12, if Ψ↑(*λ*, *y*, *x*; *u*, *v*) = +∞ or Ψ↑(*λ*, *x*, *y*; *v*, *u*) = +∞, then it is done. Now, for Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞, using Equation (43), we obtain:

$$\zeta(x, y; u, v, t) \leq \zeta\left(y, x; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda,$$

and:

$$\zeta(x, y; u, v, t) \leq \zeta\left(x, y; v, u, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda.$$

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, , }. Using part (iii) of Proposition 12, if Ψ↑(*λ*, *y*, *x*; *v*, *u*) = +∞, then it is done. Now, for Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞, using Equation (44), we obtain:

$$\zeta(x, y; u, v, t) \leq \zeta\left(y, x; v, u, \Psi^{\uparrow}(\lambda, x, y; u, v) - \varepsilon\right) \leq 1 - \lambda.$$

This completes the proof.

**Proposition 17.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, and any fixed λ* ∈ (0, 1)*, the following statements hold true:*

(i) *Suppose that ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ. Then:*

	- *If M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*, then t* ≤ Ψ↑(*λ*, *x*, *y*; *u*, *v*)*.*
	- *If M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*, then t* ≤ Ψ↑(*λ*, *y*, *x*; *u*, *v*) *and t* ≤ Ψ↑(*λ*, *x*, *y*; *v*, *u*)*.*
	- *If M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*, then t* ≤ Ψ↑(*λ*, *y*, *x*; *v*, *u*)*.*

(ii) *We have the following properties:*

	- **–** *If ζ*(*x*, *y*; *u*, *v*, *t*) > 1 − *λ, then* Ψ↑(*λ*, *y*, *x*; *v*, *u*) < +∞*.*
	- **–** *If ζ*(*x*, *y*; *u*, *v*, *t*) > 1 − *λ and* Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞*, then t* ≥ Ψ↑(*λ*, *x*, *y*; *u*, *v*)*.*

**Proof.** To prove part (i), we consider three cases below:

• Suppose that *M* satisfies the ◦-triangle inequality for ◦∈{-, -, }. It is clear that the fact Ψ↑(*λ*, *x*, *y*; *u*, *v*) = +∞ implies *t* ≤ Ψ↑(*λ*, *x*, *y*; *u*, *v*). Now, for Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞, using the contraposition of the first property of part (i) of Proposition 16, we see that if *ζ*(*x*, *y*; *u*, *v*, *t*) ≤ 1 − *λ*, then *t* ≤ Ψ↑(*λ*, *x*, *y*; *u*, *v*).


To prove part (ii), we consider three cases below:


This completes the proof.

**Proposition 18.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Given any fixed x*, *y*, *u*, *v* ∈ *X with x* ≠ *y or u* ≠ *v, and any fixed λ* ∈ (0, 1)*, the following statements hold true:*

(i) *Suppose that* Ψ↑(*λ*, *x*, *y*; *u*, *v*) < +∞*, and that the mapping ζ*(*x*, *y*; *u*, *v*, ·) : (0, ∞) → [0, 1] *is right-continuous on* (0, ∞)*. Then we have:*

$$\zeta\left(\mathbf{x},\mathbf{y};\boldsymbol{u},\boldsymbol{v},\Psi^{\uparrow}(\lambda,\mathbf{x},\mathbf{y};\boldsymbol{u},\boldsymbol{v})\right)\geq 1-\lambda.\tag{45}$$

(ii) *We have the following properties:*

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*.*

$$\text{If } \Psi^{\uparrow}(\lambda, x, y; u, v) < +\infty \text{, then } \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v)\right) \leq 1 - \lambda.$$

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, }*.*

$$\text{If } \Psi^{\uparrow}(\lambda, y, x; u, v) < +\infty \text{, then } \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, y, x; u, v)\right) \leq 1 - \lambda,$$

*and:*

$$\text{If } \Psi^{\uparrow}(\lambda, x, y; v, u) < +\infty \text{, then } \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; v, u)\right) \leq 1 - \lambda.$$

• *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, , }*.*

$$\text{If } \Psi^{\uparrow}(\lambda, y, x; v, u) < +\infty \text{, then } \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, y, x; v, u)\right) \leq 1 - \lambda.$$

(iii) *Suppose that M satisfies the* ◦*-triangle inequality for* ◦∈{-, -, }*, and that the mapping ζ*(*x*, *y*; *u*, *v*, ·) : (0, ∞) → [0, 1] *is continuous on* (0, ∞)*.*

$$\text{If } \Psi^{\uparrow}(\lambda, x, y; u, v) < +\infty \text{, then } \zeta\left(x, y; u, v, \Psi^{\uparrow}(\lambda, x, y; u, v)\right) = 1 - \lambda.$$

**Proof.** To prove part (i), by taking the limit *ε* → 0+ in the inequality (Equation (41)), we obtain Equation (45). To prove part (ii), by taking the limit *ε* → 0+ in the inequalities (Equations (42)–(44)), we also obtain the desired results. Part (iii) follows immediately from parts (i) and (ii). This completes the proof.

**Theorem 4.** (Triangle Inequalities for Dual Double Fuzzy Semi-Metric)*. Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *and left-continuous at* 1 *with respect to the first or second argument. Given any distinct fixed x*1, *x*2, ··· , *xp*, *y*1, *y*2, ··· , *yp* ∈ *X and any fixed μ* ∈ (0, 1]*, we have the following properties:*

(i) *Suppose that M satisfies the* -*-triangle inequality. There exists λ* ∈ (0, 1)*, satisfying:*

$$\begin{split} \Psi^{\uparrow}(\boldsymbol{\mu},\mathbf{x}\_{1},\mathbf{x}\_{p};y\_{1},y\_{p}) &\leq \Psi^{\uparrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{2},y\_{3}) + \cdots \\ &+ \Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-2},\mathbf{x}\_{p-1};y\_{p-2},y\_{p-1}) + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}), \\ \Psi^{\uparrow}(\boldsymbol{\mu},\mathbf{x}\_{1},\mathbf{x}\_{p};y\_{p},y\_{1}) &\leq \Psi^{\uparrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{3},y\_{2}) + \cdots \\ &+ \Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-2},\mathbf{x}\_{p-1};y\_{p-1},y\_{p-2}) + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}), \end{split} \tag{46}$$

$$\begin{split} \Psi^{\uparrow}(\mu, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{1}, y\_{p}) &\leq \Psi^{\uparrow}(\lambda, \mathbf{x}\_{p}, \mathbf{x}\_{p-1}; y\_{p-1}, y\_{p}) + \Psi^{\uparrow}(\lambda, \mathbf{x}\_{p-1}, \mathbf{x}\_{p-2}; y\_{p-2}, y\_{p-1}) \\ &+ \dots + \Psi^{\uparrow}(\lambda, \mathbf{x}\_{3}, \mathbf{x}\_{2}; y\_{2}, y\_{3}) + \Psi^{\uparrow}(\lambda, \mathbf{x}\_{2}, \mathbf{x}\_{1}; y\_{1}, y\_{2}), \\ \Psi^{\uparrow}(\mu, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{p}, y\_{1}) &\leq \Psi^{\uparrow}(\lambda, \mathbf{x}\_{p}, \mathbf{x}\_{p-1}; y\_{p}, y\_{p-1}) + \Psi^{\uparrow}(\lambda, \mathbf{x}\_{p-1}, \mathbf{x}\_{p-2}; y\_{p-1}, y\_{p-2}) \\ &+ \dots + \Psi^{\uparrow}(\lambda, \mathbf{x}\_{3}, \mathbf{x}\_{2}; y\_{3}, y\_{2}) + \Psi^{\uparrow}(\lambda, \mathbf{x}\_{2}, \mathbf{x}\_{1}; y\_{2}, y\_{1}). \end{split}$$

(ii) *Suppose that M satisfies the* -*-triangle inequality. There exists λ* ∈ (0, 1)*, satisfying:*

$$\begin{split} & \max \left\{ \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{1}, y\_{p}), \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{p}, y\_{1}), \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{1}, y\_{p}), \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{p}, y\_{1}) \right\} \\ & \leq \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{1}, \mathbf{x}\_{2}; y\_{1}, y\_{2}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{3}, \mathbf{x}\_{2}; y\_{3}, y\_{2}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{4}, \mathbf{x}\_{3}; y\_{4}, y\_{3}) \\ & \quad + \dots + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{p}, \mathbf{x}\_{p-1}; y\_{p}, y\_{p-1}). \end{split}$$

(iii) *Suppose that M satisfies the -triangle inequality. There exists λ* ∈ (0, 1)*, satisfying:*

$$\begin{split} & \max \left\{ \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{1}, y\_{p}), \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{p}, y\_{1}), \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{1}, y\_{p}), \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{p}, \mathbf{x}\_{1}; y\_{p}, y\_{1}) \right\} \\ & \leq \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{2}, \mathbf{x}\_{1}; y\_{2}, y\_{1}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{2}, \mathbf{x}\_{3}; y\_{2}, y\_{3}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{3}, \mathbf{x}\_{4}; y\_{3}, y\_{4}) \\ & \quad + \dots + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{p-1}, \mathbf{x}\_{p}; y\_{p-1}, y\_{p}). \end{split}$$

	- *If p is even and* Ψ↑(*μ*, *x*1, *xp*; *y*1, *yp*) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{1}, y\_{p}) &\leq \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{2}, \mathbf{x}\_{1}; y\_{2}, y\_{1}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{3}, \mathbf{x}\_{2}; y\_{3}, y\_{2}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{3}, \mathbf{x}\_{4}; y\_{3}, y\_{4}) \\ &+ \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{5}, \mathbf{x}\_{4}; y\_{5}, y\_{4}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{5}, \mathbf{x}\_{6}; y\_{5}, y\_{6}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{7}, \mathbf{x}\_{6}; y\_{7}, y\_{6}) \\ &+ \dots + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{p-1}, \mathbf{x}\_{p}; y\_{p-1}, y\_{p}). \end{split} \tag{48}$$

• *If p is even and* Ψ↑(*μ*, *x*1, *xp*; *yp*, *y*1) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\mu, x_1, x_p; y_p, y_1) &\leq \Psi^{\uparrow}(\lambda, x_2, x_1; y_1, y_2) + \Psi^{\uparrow}(\lambda, x_3, x_2; y_2, y_3) + \Psi^{\uparrow}(\lambda, x_3, x_4; y_4, y_3) \\ &+ \Psi^{\uparrow}(\lambda, x_5, x_4; y_4, y_5) + \Psi^{\uparrow}(\lambda, x_5, x_6; y_6, y_5) + \Psi^{\uparrow}(\lambda, x_7, x_6; y_6, y_7) \\ &+ \cdots + \Psi^{\uparrow}(\lambda, x_{p-1}, x_p; y_p, y_{p-1}). \end{split} \tag{49}$$

• *If p is even and* Ψ↑(*μ*, *xp*, *x*1; *y*1, *yp*) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\mu, x_p, x_1; y_1, y_p) &\leq \Psi^{\uparrow}(\lambda, x_1, x_2; y_2, y_1) + \Psi^{\uparrow}(\lambda, x_2, x_3; y_3, y_2) + \Psi^{\uparrow}(\lambda, x_4, x_3; y_3, y_4) \\ &+ \Psi^{\uparrow}(\lambda, x_4, x_5; y_5, y_4) + \Psi^{\uparrow}(\lambda, x_6, x_5; y_5, y_6) + \Psi^{\uparrow}(\lambda, x_6, x_7; y_7, y_6) \\ &+ \cdots + \Psi^{\uparrow}(\lambda, x_p, x_{p-1}; y_{p-1}, y_p). \end{split} \tag{50}$$

• *If p is even and* Ψ↑(*μ*, *xp*, *x*1; *yp*, *y*1) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\mu, x_p, x_1; y_p, y_1) &\leq \Psi^{\uparrow}(\lambda, x_1, x_2; y_1, y_2) + \Psi^{\uparrow}(\lambda, x_2, x_3; y_2, y_3) + \Psi^{\uparrow}(\lambda, x_4, x_3; y_4, y_3) \\ &+ \Psi^{\uparrow}(\lambda, x_4, x_5; y_4, y_5) + \Psi^{\uparrow}(\lambda, x_6, x_5; y_6, y_5) + \Psi^{\uparrow}(\lambda, x_6, x_7; y_6, y_7) \\ &+ \cdots + \Psi^{\uparrow}(\lambda, x_p, x_{p-1}; y_p, y_{p-1}). \end{split} \tag{51}$$

• *If p is odd and* Ψ↑(*μ*, *x*1, *xp*; *y*1, *yp*) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\boldsymbol{\mu}, \mathbf{x}\_{1}, \mathbf{x}\_{p}; y\_{1}, y\_{p}) &\leq \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{1}, \mathbf{x}\_{2}; y\_{1}, y\_{2}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{2}, \mathbf{x}\_{3}; y\_{2}, y\_{3}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{4}, \mathbf{x}\_{3}; y\_{4}, y\_{3}) \\ &+ \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{4}, \mathbf{x}\_{5}; y\_{4}, y\_{5}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{6}, \mathbf{x}\_{5}; y\_{6}, y\_{5}) + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{6}, \mathbf{x}\_{7}; y\_{6}, y\_{7}) \\ &+ \dots + \Psi^{\uparrow}(\boldsymbol{\lambda}, \mathbf{x}\_{p}, \mathbf{x}\_{p-1}; y\_{p}, y\_{p-1}). \end{split} \tag{52}$$

• *If p is odd and* Ψ↑(*μ*, *x*1, *xp*; *yp*, *y*1) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\mu, x_1, x_p; y_p, y_1) &\leq \Psi^{\uparrow}(\lambda, x_1, x_2; y_2, y_1) + \Psi^{\uparrow}(\lambda, x_2, x_3; y_3, y_2) + \Psi^{\uparrow}(\lambda, x_4, x_3; y_3, y_4) \\ &+ \Psi^{\uparrow}(\lambda, x_4, x_5; y_5, y_4) + \Psi^{\uparrow}(\lambda, x_6, x_5; y_5, y_6) + \Psi^{\uparrow}(\lambda, x_6, x_7; y_7, y_6) \\ &+ \cdots + \Psi^{\uparrow}(\lambda, x_p, x_{p-1}; y_{p-1}, y_p). \end{split} \tag{53}$$


• *If p is odd and* Ψ↑(*μ*, *xp*, *x*1; *yp*, *y*1) < +∞*, then:*

$$\begin{split} \Psi^{\uparrow}(\boldsymbol{\mu},\mathbf{x}\_{p},\mathbf{x}\_{1};y\_{p},y\_{1}) &\leq \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{2},\mathbf{x}\_{1};y\_{2},y\_{1}) + \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{3},\mathbf{x}\_{2};y\_{3},y\_{2}) + \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{3},\mathbf{x}\_{4};y\_{3},y\_{4}) \\ &+ \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{5},\mathbf{x}\_{4};y\_{5},y\_{4}) + \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{5},\mathbf{x}\_{6};y\_{5},y\_{6}) + \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{7},\mathbf{x}\_{6};y\_{7},y\_{6}) \\ &+ \dots + \Psi^{\uparrow}(\boldsymbol{\lambda},\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}). \end{split} \tag{55}$$

**Proof.** Lemma 1 says that there exists *λ* ∈ (0, 1), satisfying:

$$(1 - \lambda) \ast \dots \ast (1 - \lambda) > 1 - \mu. \tag{56}$$
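For the product t-norm the λ guaranteed by Lemma 1 can be written down explicitly. A minimal numeric sketch of Equation (56) (the product t-norm and the specific values are illustrative assumptions, not part of the original argument):

```python
import math

def lemma1_lambda(mu, factors):
    """Product t-norm case of Lemma 1: pick lam in (0, 1) with
    (1 - lam) * ... * (1 - lam) > 1 - mu over the given number of factors."""
    # (1 - lam)**factors > 1 - mu  iff  lam < 1 - (1 - mu)**(1 / factors);
    # take half that bound to stay strictly inside the admissible interval.
    return 0.5 * (1.0 - (1.0 - mu) ** (1.0 / factors))

mu, factors = 0.5, 3  # illustrative values
lam = lemma1_lambda(mu, factors)  # Equation (56) holds for this lam
```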

To prove part (i), we assume that $\Psi^{\uparrow}(\lambda, \mathbf{x}_i, \mathbf{x}_{i+1}; y_i, y_{i+1}) < +\infty$ for all $i = 1, \dots, p-1$. Given any $\varepsilon > 0$, the first observation of Remark 1 says that:

$$\begin{split} &M\left(\mathbf{x}_{1},\mathbf{x}_{p},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{1},y_{2}) + \Psi^{\uparrow}(\lambda,\mathbf{x}_{2},\mathbf{x}_{3};y_{2},y_{3}) + \dots + \Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p-1},y_{p}) + (p-1)\varepsilon \right) \\ &\geq M\left(\mathbf{x}_{1},\mathbf{x}_{2},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{1},y_{2}) + \varepsilon \right) \ast \dots \ast M\left(\mathbf{x}_{p-1},\mathbf{x}_{p},\Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p-1},y_{p}) + \varepsilon \right), \end{split} \tag{57}$$

and:

$$\begin{split} &M\left(y\_1, y\_p, \Psi^\uparrow(\lambda, \mathbf{x}\_1, \mathbf{x}\_2; y\_1, y\_2) + \Psi^\uparrow(\lambda, \mathbf{x}\_2, \mathbf{x}\_3; y\_2, y\_3) + \dots + \Psi^\uparrow(\lambda, \mathbf{x}\_{p-1}, \mathbf{x}\_p; y\_{p-1}, y\_p) + (p-1)\varepsilon\right) \\ &\geq M\left(y\_1, y\_2, \Psi^\uparrow(\lambda, \mathbf{x}\_1, \mathbf{x}\_2; y\_1, y\_2) + \varepsilon\right) \ast \dots \ast M\left(y\_{p-1}, y\_p, \Psi^\uparrow(\lambda, \mathbf{x}\_{p-1}, \mathbf{x}\_p; y\_{p-1}, y\_p) + \varepsilon\right) .\end{split} \tag{58}$$

Now, applying the increasing property and commutativity of t-norm to Equations (57) and (58), we obtain:

$$\begin{split} &\zeta\left(\mathbf{x}_{1},\mathbf{x}_{p};y_{1},y_{p},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{1},y_{2})+\Psi^{\uparrow}(\lambda,\mathbf{x}_{2},\mathbf{x}_{3};y_{2},y_{3})+\dots+\Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p-1},y_{p})+(p-1)\varepsilon\right) \\ &\geq \zeta\left(\mathbf{x}_{1},\mathbf{x}_{2};y_{1},y_{2},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{1},y_{2})+\varepsilon\right) \ast \dots \ast \zeta\left(\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p-1},y_{p},\Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p-1},y_{p})+\varepsilon\right) \\ &\geq (1-\lambda)\ast\dots\ast(1-\lambda) \text{ (by Equation (41) and the increasing property of t-norm)} \\ &> 1-\mu \text{ (by Equation (56)).} \end{split} \tag{59}$$

Therefore, we consider the following cases:

	- **–** If there exists $i_0$ satisfying $\Psi^{\uparrow}(\lambda, \mathbf{x}_{i_0}, \mathbf{x}_{i_0+1}; y_{i_0}, y_{i_0+1}) = +\infty$, then the inequality (Equation (46)) also holds true.
	- **–** We assume that $\Psi^{\uparrow}(\lambda, \mathbf{x}_i, \mathbf{x}_{i+1}; y_i, y_{i+1}) < +\infty$ for all $i = 1, \dots, p-1$. Using Equation (59) and part (ii) of Proposition 17 again, it follows that:

$$\begin{aligned} &\Psi^{\uparrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{1},y\_{2}) + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{2},y\_{3}) + \dots + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p-1},y\_{p}) + (p-1)\epsilon \\ &\geq \Psi^{\uparrow}(\mu,\mathbf{x}\_{1},\mathbf{x}\_{p};y\_{1},y\_{p}). \end{aligned}$$

By taking the limit $\varepsilon \to 0^{+}$, we obtain the desired inequality (Equation (46)).

On the other hand, we also have:

$$\begin{split} &M\left(\mathbf{x}_{1},\mathbf{x}_{p},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{2},y_{1}) + \Psi^{\uparrow}(\lambda,\mathbf{x}_{2},\mathbf{x}_{3};y_{3},y_{2}) + \dots + \Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p},y_{p-1}) + (p-1)\varepsilon\right) \\ &\geq M\left(\mathbf{x}_{1},\mathbf{x}_{2},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{2},y_{1}) + \varepsilon\right) \ast \dots \ast M\left(\mathbf{x}_{p-1},\mathbf{x}_{p},\Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p},y_{p-1}) + \varepsilon\right), \end{split} \tag{60}$$

and:

$$\begin{split} &M\left(y\_{p},y\_{1},\Psi^{\uparrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{2},\mathbf{x}\_{3};y\_{3},y\_{2}) + \dots + \Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + (p-1)\varepsilon\right) \\ &\geq M\left(y\_{2},y\_{1},\Psi^{\uparrow}(\lambda,\mathbf{x}\_{1},\mathbf{x}\_{2};y\_{2},y\_{1}) + \varepsilon\right) \ast \dots \ast M\left(y\_{p},y\_{p-1},\Psi^{\uparrow}(\lambda,\mathbf{x}\_{p-1},\mathbf{x}\_{p};y\_{p},y\_{p-1}) + \varepsilon\right) .\end{split} \tag{61}$$

Now, applying the increasing property and commutativity of t-norm to Equations (60) and (61), we obtain:

$$\begin{split} &\zeta\left(\mathbf{x}_{1},\mathbf{x}_{p};y_{p},y_{1},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{2},y_{1})+\Psi^{\uparrow}(\lambda,\mathbf{x}_{2},\mathbf{x}_{3};y_{3},y_{2})+\dots+\Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p},y_{p-1})+(p-1)\varepsilon\right) \\ &\geq \zeta\left(\mathbf{x}_{1},\mathbf{x}_{2};y_{2},y_{1},\Psi^{\uparrow}(\lambda,\mathbf{x}_{1},\mathbf{x}_{2};y_{2},y_{1})+\varepsilon\right) \ast \dots \ast \zeta\left(\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p},y_{p-1},\Psi^{\uparrow}(\lambda,\mathbf{x}_{p-1},\mathbf{x}_{p};y_{p},y_{p-1})+\varepsilon\right) \\ &\geq (1-\lambda)\ast\dots\ast(1-\lambda) \text{ (by Equation (41) and the increasing property of t-norm)} \\ &> 1-\mu \text{ (by Equation (56)).}\end{split}$$

The inequality (Equation (47)) can be similarly obtained using the above argument. Further, the other inequalities can be similarly obtained.

The above argument is still valid by applying the second observation of Remark 1 to obtain part (ii). We can also apply the third observation of Remark 1 to obtain part (iii). Finally, part (iv) can be obtained using the fourth observation of Remark 1. This completes the proof.

**Theorem 5.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Let* $\{x_n\}_{n=1}^{\infty}$ *and* $\{y_n\}_{n=1}^{\infty}$ *be two sequences in X. Then we have the following properties:*

	- *xn <sup>M</sup>*- −→ *<sup>x</sup> and yn <sup>M</sup>*- −→ *y as n* → ∞ *if and only if* Ψ↑(*λ*, *xn*, *x*; *yn*, *y*) → 0 *as n* → ∞ *for all λ* ∈ (0, 1)*.*
	- *xn <sup>M</sup>*- −→ *<sup>x</sup> and yn <sup>M</sup>* −→ *y as n* → ∞ *if and only if* Ψ↑(*λ*, *xn*, *x*; *y*, *yn*) → 0 *as n* → ∞ *for all λ* ∈ (0, 1)*.*
	- *xn <sup>M</sup>* −→ *<sup>x</sup> and yn <sup>M</sup>*- −→ *y as n* → ∞ *if and only if* Ψ↑(*λ*, *x*, *xn*; *yn*, *y*) → 0 *as n* → ∞ *for all λ* ∈ (0, 1)*.*
	- *xn <sup>M</sup>* −→ *<sup>x</sup> and yn <sup>M</sup>* −→ *y as n* → ∞ *if and only if* Ψ↑(*λ*, *x*, *xn*; *y*, *yn*) → 0 *as n* → ∞ *for all λ* ∈ (0, 1)*.*

(ii) *Suppose that M satisfies the -triangle inequality. Then the following statements hold true:*


**Proof.** For any fixed *λ* ∈ (0, 1), using Lemma 1, it follows that there exists *λ*<sup>0</sup> ∈ (0, 1), satisfying:

$$(1 - \lambda_0) \ast (1 - \lambda_0) > 1 - \lambda.$$

To prove part (i), we just prove the first case, since the other cases can be similarly obtained. Suppose that $M(x_n, x, t) \to 1$ and $M(y_n, y, t) \to 1$ as $n \to \infty$ for all $t > 0$. Then, given any $t > 0$ and $\delta > 0$, there exist $n_{t,\delta}^{(1)}, n_{t,\delta}^{(2)} \in \mathbb{N}$, satisfying $|M(x_n, x, t) - 1| < \delta$ for $n \geq n_{t,\delta}^{(1)}$ and $|M(y_n, y, t) - 1| < \delta$ for $n \geq n_{t,\delta}^{(2)}$. Given any $\varepsilon \in (0, 1)$, there exists $n_{\varepsilon} \in \mathbb{N}$, satisfying

$$\left| M\left(\mathbf{x}_n, \mathbf{x}, \frac{\varepsilon}{2}\right) - 1 \right| < \lambda_0 \text{ and } \left| M\left(y_n, y, \frac{\varepsilon}{2}\right) - 1 \right| < \lambda_0,$$

for $n \geq n_{\varepsilon}$. We also have:

$$M\left(\mathbf{x}_n, \mathbf{x}, \frac{\varepsilon}{2}\right) > 1 - \lambda_0 \text{ and } M\left(y_n, y, \frac{\varepsilon}{2}\right) > 1 - \lambda_0,$$

for $n \geq n_{\varepsilon}$. The increasing property of t-norm says that:

$$\zeta\left(\mathbf{x}_n, \mathbf{x}; y_n, y, \frac{\varepsilon}{2}\right) = M\left(\mathbf{x}_n, \mathbf{x}, \frac{\varepsilon}{2}\right) \ast M\left(y_n, y, \frac{\varepsilon}{2}\right) \geq (1 - \lambda_0) \ast (1 - \lambda_0) > 1 - \lambda.$$

The first result of part (ii) of Proposition 17 says that:

$$\Psi^{\uparrow}(\lambda, \mathbf{x}_n, \mathbf{x}; y_n, y) \leq \frac{\varepsilon}{2} < \varepsilon,$$

for $n \geq n_{\varepsilon}$. This shows that $\Psi^{\uparrow}(\lambda, x_n, x; y_n, y) \to 0$ as $n \to \infty$.

To prove the converse, suppose that $\Psi^{\uparrow}(\lambda, x_n, x; y_n, y) \to 0$ as $n \to \infty$ for all $\lambda \in (0, 1)$. Given any $\delta > 0$ and $\lambda \in (0, 1]$, there exists $n_{\delta,\lambda} \in \mathbb{N}$, satisfying $|\Psi^{\uparrow}(\lambda, x_n, x; y_n, y)| < \delta$ for all $n \geq n_{\delta,\lambda}$. For any fixed $t > 0$ and given any $\varepsilon \in (0, 1)$, there exists $n_{\varepsilon} \in \mathbb{N}$, satisfying:

$$\Psi^{\uparrow}\left(\varepsilon, \mathbf{x}_n, \mathbf{x}; y_n, y\right) = \left|\Psi^{\uparrow}\left(\varepsilon, \mathbf{x}_n, \mathbf{x}; y_n, y\right)\right| < t, \tag{62}$$

for $n \geq n_{\varepsilon}$, which implies:

$$\zeta(\mathbf{x}_n, \mathbf{x}; y_n, y, t) > 1 - \varepsilon,$$

for $n \geq n_{\varepsilon}$ by the first result of part (i) of Proposition 16, i.e.,

$$M\left(\mathbf{x}_n, \mathbf{x}, t\right) \ast M\left(y_n, y, t\right) > 1 - \varepsilon,$$

for $n \geq n_{\varepsilon}$. Lemma 2 says that:

$$M\left(\mathbf{x}_n, \mathbf{x}, t\right) > 1 - \varepsilon \text{ and } M\left(y_n, y, t\right) > 1 - \varepsilon,$$

for $n \geq n_{\varepsilon}$. This shows that the sequences $\{x_n\}_{n=1}^{\infty}$ and $\{y_n\}_{n=1}^{\infty}$ in *X* converge to *x* and *y*, respectively.

To prove part (ii), the first to the fourth results can be similarly obtained using the third result of part (ii) of Proposition 17. To prove the fifth result, the fact that $\Psi^{\uparrow}(\lambda, x_n, x; y_n, y) \to 0$ implies the inequality (Equation (62)). The third result of part (i) of Proposition 16 says that $\zeta(x, x_n; y, y_n, t) > 1 - \varepsilon$, which implies $M(x, x_n, t) > 1 - \varepsilon$ and $M(y, y_n, t) > 1 - \varepsilon$. In other words, we have $x_n \xrightarrow{M} x$ and $y_n \xrightarrow{M} y$ as $n \to \infty$. The remaining three results can be similarly obtained. This completes the proof.

**Example 7.** *From Example 1, we see that:*

$$x_n \xrightarrow{M^{\flat}} x \text{ if and only if } \lim_{n \to \infty} d(x_n, x) = 0,$$

*and:*

$$x_n \xrightarrow{M^{\circ}} x \text{ if and only if } \lim_{n \to \infty} d(x, x_n) = 0.$$

*From Example 6, we have:*

$$
\Psi^{\uparrow}(\lambda, x, y; u, v) = \frac{C + \sqrt{C^2 + 4D}}{2},
$$

*where:*

$$C = \frac{(d(x, y) + d(u, v))(1 - \lambda)}{\lambda} \text{ and } D = \frac{d(x, y) \cdot d(u, v) \cdot (1 - \lambda)}{\lambda}.$$

*It is clear that* $x_n \xrightarrow{M^{\flat}} x$ *and* $y_n \xrightarrow{M^{\flat}} y$ *as* $n \to \infty$ *if and only if* $\Psi^{\uparrow}(\lambda, x_n, x; y_n, y) \to 0$ *as* $n \to \infty$ *for all* $\lambda \in (0, 1)$*. The other convergence types presented in Theorem 5 can be similarly verified.*
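As a sanity check of the closed form above, the sketch below assumes the standard fuzzy metric $M(x, y, t) = t/(t + d(x, y))$ induced by a metric $d$, together with the product t-norm (these choices are assumptions matching the usual construction; Examples 1 and 6 are not restated here). With the stated $C$ and $D$, $\Psi^{\uparrow}$ is the positive root of $t^2 - Ct - D = 0$, and $\zeta$ evaluated there lands exactly on the boundary $1 - \lambda$:

```python
import math

def zeta(a, b, t):
    # zeta(x, y; u, v, t) = M(x, y, t) * M(u, v, t) under the product t-norm,
    # with the standard fuzzy metric M(x, y, t) = t / (t + d(x, y));
    # here a = d(x, y) and b = d(u, v).
    return (t / (t + a)) * (t / (t + b))

def psi_up(lam, a, b):
    # infimum of {t > 0 : zeta(t) > 1 - lam}: the positive root of
    # t^2 - C*t - D = 0, with C and D as defined in the text.
    C = (a + b) * (1 - lam) / lam
    D = a * b * (1 - lam) / lam
    return (C + math.sqrt(C * C + 4 * D)) / 2

lam, a, b = 0.4, 2.0, 3.0  # illustrative values
t_star = psi_up(lam, a, b)  # zeta(a, b, t_star) equals 1 - lam exactly
```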

According to Definition 5, we can similarly define the four types of joint Cauchy sequences with respect to Ψ↑. We omit the details.

**Theorem 6.** *Let* (*X*, *M*) *be a fuzzy semi-metric space along with a t-norm* ∗*. We also assume that M satisfies the rational condition, and that the t-norm* ∗ *is right-continuous at* 0 *with respect to the first or second argument. Let* $\{x_n\}_{n=1}^{\infty}$ *and* $\{y_n\}_{n=1}^{\infty}$ *be two sequences in X. Then we have the following properties:*

	- $\{x_n\}_{n=1}^{\infty}$ *and* $\{y_n\}_{n=1}^{\infty}$ *are two* >*-Cauchy sequences in a metric sense if and only if they are the joint* (*λ*, >, >)*-Cauchy sequences with respect to* Ψ<sup>↑</sup> *for any λ* ∈ (0, 1)*.*
	- $\{x_n\}_{n=1}^{\infty}$ *is a* >*-Cauchy sequence in a metric sense and* $\{y_n\}_{n=1}^{\infty}$ *is a* <*-Cauchy sequence in a metric sense if and only if they are the joint* (*λ*, >, <)*-Cauchy sequences with respect to* Ψ<sup>↑</sup> *for any λ* ∈ (0, 1)*.*
	- $\{x_n\}_{n=1}^{\infty}$ *is a* <*-Cauchy sequence in a metric sense and* $\{y_n\}_{n=1}^{\infty}$ *is a* >*-Cauchy sequence in a metric sense if and only if they are the joint* (*λ*, <, >)*-Cauchy sequences with respect to* Ψ<sup>↑</sup> *for any λ* ∈ (0, 1)*.*
	- $\{x_n\}_{n=1}^{\infty}$ *and* $\{y_n\}_{n=1}^{\infty}$ *are two* <*-Cauchy sequences in a metric sense if and only if they are the joint* (*λ*, <, <)*-Cauchy sequences with respect to* Ψ<sup>↑</sup> *for any λ* ∈ (0, 1)*.*

(ii) *Suppose that M satisfies the -triangle inequality. Then the following statements hold true:*


**Proof.** For any fixed *λ* ∈ (0, 1), using Lemma 1, it follows that there exists *λ*<sup>0</sup> ∈ (0, 1), satisfying:

$$(1 - \lambda_0) \ast (1 - \lambda_0) > 1 - \lambda.$$

To prove part (i), we just prove the first case, since the other cases can be similarly obtained. Suppose that $\{x_n\}_{n=1}^{\infty}$ and $\{y_n\}_{n=1}^{\infty}$ are >-Cauchy sequences in a metric sense. Therefore, given any $t > 0$ and $\delta > 0$, there exists $n_{t,\delta} \in \mathbb{N}$ such that $m > n \geq n_{t,\delta}$ implies $M(x_m, x_n, t) > 1 - \delta$ and $M(y_m, y_n, t) > 1 - \delta$. Now, given any $\varepsilon \in (0, 1)$, there exists $n_{\varepsilon} \in \mathbb{N}$ such that $m > n \geq n_{\varepsilon}$ implies:

$$M\left(\mathbf{x}_m, \mathbf{x}_n, \frac{\varepsilon}{2}\right) > 1 - \lambda_0 \text{ and } M\left(y_m, y_n, \frac{\varepsilon}{2}\right) > 1 - \lambda_0.$$

The increasing property of t-norm says that:

$$\begin{aligned} \zeta\left(\mathbf{x}\_{\mathrm{m}}, \mathbf{x}\_{\mathrm{n}}; y\_{\mathrm{m}}, y\_{\mathrm{n}}, \frac{\boldsymbol{\varepsilon}}{2}\right) &= M\left(\mathbf{x}\_{\mathrm{m}}, \mathbf{x}\_{\mathrm{n}}, \frac{\boldsymbol{\varepsilon}}{2}\right) \ast M\left(y\_{\mathrm{m}}, y\_{\mathrm{n}}, \frac{\boldsymbol{\varepsilon}}{2}\right) \\ &\geq \left(1 - \lambda\_{0}\right) \ast \left(1 - \lambda\_{0}\right) > 1 - \lambda. \end{aligned}$$

Further, the first result of part (ii) of Proposition 17 says that:

$$\Psi^{\uparrow}(\lambda, \mathbf{x}_m, \mathbf{x}_n; y_m, y_n) \leq \frac{\varepsilon}{2} < \varepsilon,$$

for $m > n \geq n_{\varepsilon}$.

To prove the converse, from the assumption, we see that for any fixed $t > 0$ and given any $\varepsilon \in (0, 1)$, there exists $n_{\varepsilon} \in \mathbb{N}$ such that $m > n \geq n_{\varepsilon}$ implies $\Psi^{\uparrow}(\varepsilon, x_m, x_n; y_m, y_n) < t$. Therefore, using the first result of part (i) of Proposition 16, we obtain $\zeta(x_m, x_n; y_m, y_n, t) > 1 - \varepsilon$ for $m > n \geq n_{\varepsilon}$, i.e.,

$$M\left(\mathbf{x}_m, \mathbf{x}_n, t\right) \ast M\left(y_m, y_n, t\right) > 1 - \varepsilon,$$

for $m > n \geq n_{\varepsilon}$. Lemma 2 says that:

$$M\left(\mathbf{x}_m, \mathbf{x}_n, t\right) > 1 - \varepsilon \text{ and } M\left(y_m, y_n, t\right) > 1 - \varepsilon,$$

for $m > n \geq n_{\varepsilon}$. This shows that $\{x_n\}_{n=1}^{\infty}$ and $\{y_n\}_{n=1}^{\infty}$ are >-Cauchy sequences in a metric sense.

To prove part (ii), the first to the fourth results can be similarly obtained using the third result of part (ii) of Proposition 17. To prove the fifth result, using the assumption, we see that for any fixed $t > 0$ and given any $\varepsilon \in (0, 1)$, there exists $n_{\varepsilon} \in \mathbb{N}$ such that $m > n \geq n_{\varepsilon}$ implies $\Psi^{\uparrow}(\varepsilon, x_m, x_n; y_m, y_n) < t$. The third result of part (i) of Proposition 16 says that $\zeta(x_n, x_m; y_n, y_m, t) > 1 - \varepsilon$ for $m > n \geq n_{\varepsilon}$. Therefore, we obtain:

$$M\left(\mathbf{x}_n, \mathbf{x}_m, t\right) \ast M\left(y_n, y_m, t\right) > 1 - \varepsilon,$$

for $m > n \geq n_{\varepsilon}$. This shows that $\{x_n\}_{n=1}^{\infty}$ and $\{y_n\}_{n=1}^{\infty}$ are <-Cauchy sequences in a metric sense. The remaining three results can be similarly obtained. This completes the proof.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **An Adaptive Neuro-Fuzzy Propagation Model for LoRaWAN**

#### **Salaheddin Hosseinzadeh 1, Hadi Larijani 1,\*, Krystyna Curtis <sup>1</sup> and Andrew Wixted <sup>2</sup>**


Received: 12 November 2018; Accepted: 13 March 2019; Published: 18 March 2019

**Abstract:** This article proposes an adaptive-network-based fuzzy inference system (ANFIS) model for accurate estimation of signal propagation using LoRaWAN. By using ANFIS, the basic knowledge of propagation is embedded into the proposed model. This reduces the training complexity of artificial neural network (ANN)-based models. Therefore, the size of the training dataset is reduced by 70% compared to an ANN model. The proposed model consists of an efficient clustering method to identify the optimum number of the fuzzy nodes to avoid overfitting, and a hybrid training algorithm to train and optimize the ANFIS parameters. Finally, the proposed model is benchmarked with extensive practical data, where superior accuracy is achieved compared to deterministic models, and better generalization is attained compared to ANN models. The proposed model outperforms the nondeterministic models in terms of accuracy, has the flexibility to account for new modeling parameters, is easier to use as it does not require a model for propagation environment, is resistant to data collection inaccuracies and uncertain environmental information, has excellent generalization capability, and features a knowledge-based implementation that alleviates the training process. This work will facilitate network planning and propagation prediction in complex scenarios.

**Keywords:** propagation modeling; adaptive-network-based fuzzy system; LoRa & LoRaWAN; radio wave propagation; artificial neural networks; subtractive clustering

#### **1. Introduction**

In recent years, the exponentially increasing number of wireless devices has made the maintenance and efficient planning of wireless networks crucial. Therefore, it is essential to gain a better understanding of propagation and to be able to readily predict it in various scenarios. The area of radio propagation has been studied over many decades, during which the physics of the electromagnetic wave and its propagation has remained unchanged. Hence, the same free space path loss (FSPL) formula holds to this date. However, telecommunication devices, network requirements and the propagation environment have undergone many developments. Adapting to these rapid changes, together with the lack of efficiency and generalization in current models, has arguably been the main research incentive in this field. It is clear that establishing adaptive, efficient and accurate modeling of modern wireless technologies, such as LoRaWAN, is of great importance. The main contributions of this paper are:


(c) Results from the models are critically analyzed with real-world measurements using a relatively new wireless technology.
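The FSPL formula referred to above is the standard one; a quick sketch (the 868 MHz LoRaWAN band and the 1 km distance are illustrative values only):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free space path loss in dB: FSPL = 20 * log10(4 * pi * d * f / c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C_LIGHT)

# e.g. a 1 km link in the EU 868 MHz LoRaWAN band
loss = fspl_db(1000.0, 868e6)  # roughly 91.2 dB
```

Note that every doubling of the distance adds about 6 dB of loss, which is the "trivial" monotone behavior a from-scratch ANN model would first have to learn.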

This novel implementation has several advantages, however, for the purpose of comparison, firstly, a brief review of current models would be required. The rest of the paper is organized as follows: Section 2 provides a critical review of the conventional propagation models. Section 3 explains the implementation details and advantages of the proposed model. In Section 4, the results of the practical and comparative analysis are demonstrated, and Section 5 provides the findings of this research along with final conclusions.

#### **2. Brief Review of Common Models**

There are numerous practical propagation reports and models for various scenarios, such as indoor, outdoor, urban, sea, foliage, underground and even tunnel environments. Reviewing all of these models would be out of the scope of this research. Instead, only some of the most influential and well-established models, based on their prevalence in the published literature, are reviewed. A more comprehensive review of the propagation models is provided in [1].

#### *2.1. Okumura and Hata Models*

Okumura's signal propagation investigation has been a cornerstone in this field. Okumura's model [2] was built upon a practical data collection in Tokyo, Japan, within the frequency range of 150–2000 MHz. In his model, the path loss estimation depends on the FSPL, antenna gain factor, propagation gains due to mobile/base-station antennae heights and collective correction gain. The latter, collective correction gain, is to compensate for the type of environment, average slope of terrain and finally land/sea parameters. This model is somewhat a more comprehensive version of the log-distance model. Like other nondeterministic methods, Okumura's model lacks the higher accuracy of deterministic models. In addition, independent parameters of this model, such as frequency, and mobile and base station antenna heights, were limited in their range. Okumura's findings then became the basis of the Hata model [3], also known as the Okumura–Hata model. This model introduces many more independent parameters, such as reflection, diffraction and scattering factors. It suggests new correction factors for suburban and rural environments, and extends the range of the parameters of Okumura's model, such as mobile and base station antenna heights. The International Telecommunication Union (ITU) model [4] was inspired by Okumura's and Hata's models.

#### *2.2. Walfisch, Ikegami and Ray Tracing Models*

Walfisch and Bertoni [5] proposed a semi-deterministic model that added new modeling parameters into Okumura's model. These parameters were added to account for the multiple-screen diffraction, caused by the buildings and structures. Ikegami et al. [6] took a new approach by using a simplified ray-tracing method. They assumed multiple-reflected/diffracted waves were highly attenuated and, therefore, only accounted for first-reflected/diffracted rays. The other compromise of this deterministic model was perhaps due to the limited computational power, since the reflection/diffraction losses were approximated with constant values. There are several 2D and 3D ray-tracing algorithms, based on the geometrical optics, uniform theory of diffraction and geometrical theory of diffraction. These models have complemented Ikegami's efforts. For instance, it is possible to consider two- and three-fold reflections/diffraction, and reflection/diffraction factors depend on the angle of incidence and permittivity of materials. Further detailed summaries of these models with their implementations are provided in [7–10]. Nevertheless, there are the following drawbacks to these deterministic models:

(a) Having precise building data is a prerequisite, meaning that, ideally, a 3D topographical model of the environment and structures is required [6,11]. However, such a database may not be readily available, especially for outdoor environments.


These drawbacks are even more aggravated when the radio frequency is increased.

#### *2.3. COST Action 231 Model*

The European Cooperation in Science and Technology (COST) Action 231 [15] proposed a new propagation model, plus an extension to the previous Hata model. The COST–Hata PCS Extension simply extended the original Hata model to be applicable to frequencies up to 2 GHz. The COST–Walfisch–Ikegami (COST W.I.) model, however, provided a revised version of the Walfisch and Ikegami models, except that it does not require a 2/3D model of the environment. Eliminating this requirement is perhaps the main advantage of COST W.I., making it the most commonly used outdoor model. COST W.I. has been recommended by ITU and the European Telecommunication Standards Institute (ETSI) [16]. The model still requires the height of the buildings, width of the streets and some other environmental information, and therefore, it is not a fully deterministic model [17]. Although COST W.I. (or models similar to it) may seem to be an acceptable compromise between accuracy and computational complexity, it is rather a granulated set of formulas which has the following disadvantages:


#### *2.4. Hybrid and Artificial Neural Network-Based Models*

The next generation of the propagation models relies on artificial neural networks (ANNs). These models are based on the training of an ANN with empirically collected data. Usually this involves training the ANN from scratch, where the ANN has to learn to derive the very basic mechanisms of radio propagation. For instance, the ANN even needs to learn the trivial fact that increasing the link distance has a negative impact on the signal strength. It is the ANN's responsibility to learn the FSPL from the distance and frequency of transmission or understand the attenuating impact of obstacles on the LoS. A review of the ANN-based models is provided in [19]. The drawbacks of this approach include:

(a) the time-consuming and exhaustive data collection process that is required for the training of ANNs. For instance, in [20], the authors collected 600,000 data samples in a relatively small indoor environment and trained the network for several hours. Not only is such data collection in a small environment tedious, but it also defeats the purpose of estimation [19].

(b) ANN requires considerable training time as it contains numerous neurons in each layer; furthermore, an overly complex ANN may lead to data overfitting and hence failing to reach a generalized solution [21,22].

To address these challenges, a hybrid model for a simple indoor environment was proposed [19]. The model comprised an optimized multiwall model (MWM), whose estimations were monitored by an ANN. Therefore, the ANN only had to learn and compensate the deficiencies of the MWM. This strategy drastically reduced the required training data samples and improved the accuracy of the estimation. A relatively similar strategy was employed in [23,24] using the COST W.I. There are, however, two drawbacks of the latter implementations:


#### **3. Proposed Model Description**

To address the explained shortfalls and challenges, an adaptive neuro-fuzzy model is proposed. This allows the user to initiate the system with incorporated linguistic knowledge of propagation and then train the system further to achieve a higher accuracy. Next, the essential fuzzy/linguistic input–output parameters of a propagation model are identified. Furthermore, to avoid overfitting and achieve a better generalization, an efficient clustering method to determine the optimum size of the nodes is used. Finally, the system is trained with a hybrid algorithm that tunes the input and output parameters of the fuzzy system. This section explains the implementation details.

#### *3.1. Adaptive Neuro-Fuzzy Inference System for Propagation Estimation*

Fuzzy systems are universal approximators of nonlinear dynamic systems [26,27]. The idea of fuzzy sets, fuzzy logics and consequently fuzzy inference systems was first proposed by Zadeh [28]. As stated by Zadeh, fuzzy systems "provide an approximate and yet effective means of describing the behavior of systems which are too complex or too ill-defined to admit of precise mathematical analysis" [29]. The humanistic nature of the fuzzy systems allows us to define a complex system with fuzzy/linguistic variables using a human-like reasoning instead of using conventional mathematical tools or precise quantitative analysis. Fuzzy systems provide some degree of resistance to handle vague, ambiguous, imprecise, noisy, missing and uncertain information [30–33]. This should provide the level of flexibility required to deal with data that is hard or rather impossible to accurately infuse into the model, such as the *φ* and *ws*. This resistance also relaxes the inevitable inaccuracies in data collection. This is mainly due to the fuzzification of continuous variables. The fuzzification process transforms the crisp value of the inputs (*x*) to degrees of membership *μ*(*x*) using a membership function (*μ*). Next, these membership functions *μ* are tuned using the gradient-descent algorithm to optimize the output. Changes in the *μ* are therefore affecting the degrees of membership *μ*(*x*) of the inputs (*x*).
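The fuzzification step just described can be sketched as follows; the Gaussian membership functions and the "link distance" linguistic terms below are illustrative assumptions, not parameters taken from the paper:

```python
import math

def gauss_mf(x, c, sigma):
    # Gaussian membership function: degree of membership mu(x)
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# hypothetical linguistic terms for a link-distance input, in km: (center, width)
terms = {"near": (0.5, 0.6), "medium": (2.0, 0.8), "far": (5.0, 1.5)}

def fuzzify(x):
    # crisp input x -> degrees of membership mu(x), one per linguistic term
    return {label: gauss_mf(x, c, s) for label, (c, s) in terms.items()}

degrees = fuzzify(1.6)  # mostly "medium", with a small degree of "near"
```

Tuning the centers and widths (the premise parameters) is exactly what shifts the degrees of membership μ(x) during training.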

The proposed ANFIS architecture comprises first-order Takagi–Sugeno (T-S)-type fuzzy systems [34], where the output membership functions are first-order polynomials. Therefore, a hybrid training allows a linear least-squares estimation to be used for the identification of the consequent parameters, and a gradient descent optimization is used to identify the premise parameters [30,35]. Compared to most neural networks, ANFIS also has fewer parameters, many of which can be tuned with linear least-squares. These features give ANFIS the advantage of fast training and computational speed; furthermore, since there are fewer tunable parameters, the pitfall of overfitting the data is avoided [36].
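The linear least-squares half of this hybrid scheme can be sketched as follows: holding the premise (membership) parameters fixed, the T-S output is linear in the consequent parameters, so they are identified in a single least-squares solve. The two-rule setup and the synthetic target below are illustrative assumptions:

```python
import numpy as np

def norm_firing(x, centers, sigmas):
    # Gaussian memberships act as rule firing strengths; normalize per sample
    w = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigmas[None, :] ** 2))
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 200)
y = np.where(x < 5.0, 2.0 * x + 1.0, -x + 16.0)  # synthetic piecewise-linear target

centers = np.array([2.5, 7.5])  # premise parameters, held fixed in this half-step
sigmas = np.array([1.5, 1.5])
wbar = norm_firing(x, centers, sigmas)  # normalized firing strengths, shape (200, 2)

# y = sum_i wbar_i * (p0_i + p1_i * x) is linear in the consequent parameters,
# so one least-squares solve identifies all of them at once.
A = np.column_stack([wbar[:, 0], wbar[:, 0] * x, wbar[:, 1], wbar[:, 1] * x])
params, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ params
```

In the full hybrid loop, a gradient-descent pass over the centers and widths would alternate with this solve.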

The T-S fuzzy implication (if–then rule) is analogous to defining a nonlinear input–output mapping. The process can be interpreted as the decomposition of a system into a finite number of subsystems, each of which is then approximated. The output of the T-S system is determined by the aggregation of the implications. Considering a number of implications *R<sup>i</sup>*, with antecedents (premises) *A<sup>i</sup>* and consequences *y<sup>i</sup>*, the *i*th implication (*i* = 1, . . . , *n*) has the format of Equation (1),

$$\begin{aligned} R^i &: \text{if } A^i \text{ then } y^i\\ A^i &: \text{if } x_1 \text{ is } \mu_1 \text{ and } \dots \text{ and } x_k \text{ is } \mu_k \equiv \mu_1(x_1) \wedge \dots \wedge \mu_k(x_k)\\ y^i &= p_0 + x_1 p_1 + \dots + x_k p_k\\ \overline{A}^i &= \frac{A^i}{\sum_i A^i} \end{aligned} \tag{1}$$

where *x* is the input vector (premise variables), *μ<sup>k</sup>* contains the membership functions of the *k*th input, *p<sup>k</sup>* is the consequence parameter vector, and $\overline{A}^i$ is the normalized firing strength, or truth value, of the implication *R<sup>i</sup>*.
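As an illustration only (a toy sketch, not the authors' implementation), the first-order T-S inference of Equation (1) can be written out in a few lines of Python; the Gaussian membership parameters and consequent vectors below are invented values:

```python
import numpy as np

def gaussmf(x, tau, sigma):
    """Gaussian membership function: peaks at tau, spread set by sigma."""
    return np.exp(-((x - tau) ** 2) / (2 * sigma ** 2))

def ts_inference(x, rules):
    """First-order Takagi-Sugeno inference for one input vector x.

    Each rule is (mfs, p): mfs is a list of (tau, sigma) pairs, one per
    input, and p = [p0, p1, ..., pk] are the consequent parameters.
    """
    firing, outputs = [], []
    for mfs, p in rules:
        # antecedent truth value: product t-norm over the fuzzified inputs
        firing.append(np.prod([gaussmf(xi, t, s) for xi, (t, s) in zip(x, mfs)]))
        # first-order consequent: y = p0 + p1*x1 + ... + pk*xk
        outputs.append(p[0] + np.dot(p[1:], x))
    firing = np.array(firing)
    # weight each local linear model by its normalized firing strength
    return np.dot(firing / firing.sum(), outputs)

# two toy rules over two inputs
rules = [
    ([(0.0, 1.0), (0.0, 1.0)], np.array([1.0, 2.0, 0.0])),
    ([(5.0, 1.0), (5.0, 1.0)], np.array([10.0, 0.0, 1.0])),
]
print(ts_inference(np.array([0.0, 0.0]), rules))  # ~1.0, dominated by rule 1
```

Gradient descent would tune the (tau, sigma) pairs, while the consequent vectors p lend themselves to linear least-squares; this is the hybrid split ANFIS exploits.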

#### *3.2. Model Input*

A set of inputs relatively similar to those defined in the COST231 model was considered; however, three additional inputs were added based on our knowledge of propagation and common sense. In addition, three of the COST231 inputs (base station height, mobile station height and their height difference) were combined into one. Many of these modeling inputs were acquired from Google Maps to further facilitate the modeling. The only output of the system is the received signal strength indicator (*RSSI*). These inputs are explained as follows:


where *hbs*, *hms*, *φ*, *ws* and *dw* are acquired from Google Maps images and, therefore, may have some inaccuracies. For certain scenarios, some of these parameters may not be very important, or may not exist, and therefore, would not apply at all.

#### *3.3. Model Identification*

Various membership functions including triangular, trapezoidal and sigmoidal were examined for the fuzzification; however, a normalized Gaussian membership function, with the general form of Equation (2), yielded the best result, where *σ* is the standard deviation and determines the spread of *μ*, and *τ* is the mean, which determines the center of the *μ*.

$$
\mu_{\sigma,\tau}(x) = e^{-(x-\tau)^2/2\sigma^2} \tag{2}
$$
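As a toy example (the term centers and spreads below are invented, not the tuned parameters of the model), Equation (2) can be used to encode a crisp input as degrees of membership in linguistic terms:

```python
import math

def gauss_mf(x, tau, sigma):
    # Eq. (2): tau is the center (mean), sigma controls the spread
    return math.exp(-((x - tau) ** 2) / (2 * sigma ** 2))

# fuzzify a crisp LoS distance (km) into two linguistic terms (toy parameters)
d = 1.2
short = gauss_mf(d, 0.5, 0.6)   # "short distance", centered at 0.5 km
long_ = gauss_mf(d, 3.0, 1.0)   # "long distance", centered at 3.0 km
print(round(short, 3), round(long_, 3))
```

A crisp 1.2 km distance thus belongs partly to both terms, which is exactly what makes the subsequent rule firing smooth rather than all-or-nothing.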

Two approaches were considered for the identification of the premise structure. The first approach was to define fuzzy if–then rules using all the possible permutations of all or some of the fuzzified inputs. For instance, using common sense knowledge of propagation one could define the following implication:

• "If *dlos* is short and *sm* is high and *clos* is clear then *RSSI* is good."

This states that "if the transmission was done over a short distance, with a high spreading factor, and the LoS was relatively clear of clutter, then reception should be good, regardless of other input parameters". However, this approach is prone to a combinatorial explosion of rules, especially for complex systems. Considering that there was a total of seven inputs, each with two membership functions, the total number of rules is 2<sup>7</sup> = 128.

The second approach was to use a clustering method [36]. A subtractive clustering [37] was chosen for the identification of the rules, since subtractive clustering does not require an initial estimate of the center or the number of clusters [38]. Other clustering algorithms could be used, where eventually, each cluster center forms a fuzzy rule.
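A compact sketch of subtractive clustering (following Chiu's potential-based formulation; the radius and stopping values here are illustrative, not those used in this study) shows how cluster centers, and hence rules, emerge without specifying their number in advance:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, eps=0.15):
    """Chiu-style subtractive clustering: pick centers by data density.

    X is (n_samples, n_features), assumed scaled to [0, 1]; ra is the
    cluster radius. No initial centers or cluster count are needed.
    """
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2          # suppression radius rb = 1.5 * ra
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-alpha * d2).sum(axis=1)   # density potential per point
    first = potential.max()
    centers = []
    while potential.max() > eps * first:
        c = int(potential.argmax())
        centers.append(X[c])
        # suppress the potential of points near the newly chosen center
        potential = potential - potential[c] * np.exp(-beta * d2[c])
    return np.array(centers)

# two well-separated blobs should yield two centers (and hence two rules)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.02, (50, 2)), rng.normal(0.8, 0.02, (50, 2))])
print(len(subtractive_clustering(X)))  # 2
```

Each returned center then seeds one fuzzy rule, with the rule's membership functions centered on the cluster coordinates.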

A first-order T-S model was selected, as it provided a higher accuracy compared to a zero-order T-S model. Hence, the output membership functions were of the form of *y* in Equation (1), where the output linear functions (*p<sup>k</sup>*) were identified by linear least-squares optimization.

#### **4. Analysis and Results**

About 5000 data samples were collected over a relatively large area (4.25 km × 2.7 km) in the commercial area of Glasgow, Scotland. Data was collected from three base stations at different locations, with 1931, 1820 and 1256 samples being collected from BS1, BS2 and BS3, respectively. Figure 1 shows the area of the investigation, where some of the measurement locations are pinpointed with markers, and gateways are labeled as BS1, BS2 and BS3. The base stations are equipped with the same antennae, mounted at relatively the same height from the ground. The data was analyzed to provide insight into the performance of the proposed model.

To check the goodness of fit and to benchmark against other models, the most commonly used measures in the literature were reported: RMSE (root mean square error, in dB), *E<sup>σ</sup>* (error standard deviation, in dB) and *Em* (mean error, in dB). Unfortunately, the first two measures depend on the range or scale of the data, and in *Em*, errors of opposite sign can cancel out. Therefore, to address these issues, the Nash-Sutcliffe efficiency (NSE) coefficient is used as a measure of the goodness of fit. Having a universal measure for performance benchmarking is especially important, as various wireless technologies have different sensitivities. This difference impacts the dynamic range of the measured data and, therefore, the scale of its RMSE; the NSE, however, is less sensitive to the dynamic range. NSE ranges from −∞ to 1, where 1 indicates a perfect match between the model predictions and the measurements [39].
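The NSE is straightforward to compute; a minimal sketch with made-up RSSI values:

```python
import numpy as np

def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; can fall to -inf."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    num = np.sum((observed - predicted) ** 2)
    den = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - num / den

rssi = np.array([-80.0, -95.0, -70.0, -105.0])      # made-up RSSI values (dB)
print(nse(rssi, rssi))                              # 1.0: perfect model
print(nse(rssi, np.full_like(rssi, rssi.mean())))   # 0.0: predicting the mean
```

Because the residual sum is normalized by the variance of the observations, the same NSE value is comparable across technologies whose RSSI ranges differ.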

In addition, to investigate the model's generalization capability, instead of training with one BS at a time, data from all three BSs was used to train and validate the model. For the purpose of comparison, an ANN model was also used to model the propagation. A feedforward ANN was chosen with three hidden layers of 7, 14 and 4 neurons, respectively. The best ANN structure was chosen heuristically after trying ANNs with two to five hidden layers of various neuron sizes. Results in Table 1 show the average of a 10-fold cross-validation analysis; 90% of the data was used for training.

**Figure 1.** Area of practical investigation 4.25 km × 2.7 km, and base station locations (BS1, BS2, BS3). A few measurement locations are pinpointed with markers.


**Table 1.** ANFIS and ANN model performance with 10-fold cross-validation.

The results of the COST W.I. model are tabulated in Table 2 to make a comparison with other practical investigations conducted in [15].


**Table 2.** COST W.I. model performance with optimization.

Figure 2 compares the measurements and estimation for BS1, and Figure 3 compares the overall estimation of all base stations with measurements.

**Figure 2.** ANFIS estimations vs practical measurements at BS1.

**Figure 3.** ANFIS estimation vs practical measurements at all BSs.

A series of models were benchmarked against similar practical measurements in COST Action 231 [15]. In that comparison, 970, 335 and 1031 samples were collected from three different base stations in Munich, Germany. Propagation models were then used to estimate the signal strength for each individual site only. Since the examined models were either deterministic or semi-deterministic, 2D or 3D building layout and building height information were provided to the models in the study conducted by COST, whereas the COST231 model in this article used only the collected environmental information explained earlier (Section 3.2). In Table 3, only the range of measures (maximum and minimum of *Eσ* and *Em*) of the COST performance analysis report is included. The detailed performance of each model can be found in Table 4.5.2 of COST Action 231 Chapter 4 [15]. Unfortunately, other performance metrics such as RMSE, NSE and MAE are not stated in this report. Therefore, the RMSE in Table 3 is extracted for instances with *Em* ≈ 0 (if *Em* = 0 then RMSE = *Eσ*). Table 3 is provided as a measure of the overall modelling accuracy that can be achieved given the availability of 2D or 3D environmental information.

**Table 3.** Range of standard deviation and mean in COST W.I. measurements [15].


To further observe the generalization capability of the ANFIS model, only 20% of the data was used for training. These results are tabulated in Table 4.


**Table 4.** ANFIS model performance with 20% training data.

#### **5. Discussion and Conclusions**

The decomposition of a propagation system into smaller subsystems has been the ultimate goal of the Okumura, Hata and COST models. In fact, the suggestion of the breakpoint-distance phenomenon in ITU recommendation P.1411 [40] follows the same idea. These attempts used crisp or Boolean logic to differentiate between a limited set of propagation conditions or scenarios. In contrast to such sudden transitions, fuzzy logic makes it possible to have smoother transitions, while mitigating the uncertainties within the data. ANFIS further allowed the implementation of an expert's knowledge into the system, which addressed some of the challenges of ANN models.

Comparison of the models used in this investigation indicated that the ANFIS and ANN models resulted in remarkably better estimation performance compared to the COST W.I. model. The ANFIS model performed better than the ANN model: *Eσ* and NSE were consistently improved by about 1 dB and 10%, respectively. ANFIS was also found to be the better candidate for generalization. In this study, the performance of the ANN in Table 1 was almost identical to the ANFIS results in Table 4, even though the ANN was trained with 90% of the data, whereas ANFIS achieved the same results with only 20% of the data.

Furthermore, two new parameters were added into the model without having to formulate them. *sm* was required due to the wireless technology of choice, and *dw* was added due to the features of the propagation environment. Inclusion of these parameters reduced the RMSE of the ANFIS model by 0.55 dB and improved its NSE by 7%. These two parameters, however, did not make a significant change to the ANN model results. This might be due to the limited number of measurements (380 samples) that included *dw*; this is the most likely explanation, given that ANFIS, with its better generalization, could benefit from this parameter.

In this investigation, the proposed ANFIS model was used for outdoor environments. However, it can be easily adapted for indoor propagation as well. This is as simple as providing the impactful propagation parameters to the system and roughly describing their effect using fuzzy linguistic reasoning. For instance, in an indoor environment, a higher number of walls, windows or doors in the LoS can increase the loss.

**Author Contributions:** S.H. and A.W. conceived, designed and performed the experiments; S.H. analyzed the data; H.L. and K.C. contributed materials/analysis tools; S.H., K.C. and H.L. wrote the paper.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors would like to thank Glasgow Caledonian University for funding this research, as well as Stream Technologies for facilitating the data collection and measurements, and Innovate UK (KTP).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **New Fuzzy Numerical Methods for Solving Cauchy Problems**

#### **Hussein ALKasasbeh <sup>1,\*</sup>, Irina Perfilieva <sup>2</sup>, Muhammad Zaini Ahmad <sup>1</sup> and Zainor Ridzuan Yahya <sup>1</sup>**


Received: 7 April 2018; Accepted: 3 May 2018; Published: 11 May 2018

**Abstract:** In this paper, new fuzzy numerical methods based on the fuzzy transform (F-transform or FT) for solving the Cauchy problem are introduced and discussed. Existing methods such as the trapezoidal rule and the Adams-Moulton methods are improved using the FT. We propose three new fuzzy methods in which the technique of the FT is combined with one-step, two-step, and three-step numerical methods. Moreover, the FT with respect to a generalized uniform fuzzy partition is able to reduce the error. Thus, new representation formulas for the generalized uniform fuzzy partition of the FT are introduced. As an application, all these schemes are used to solve Cauchy problems. Further, the error analysis of the new fuzzy methods is discussed. Finally, numerical examples are presented to illustrate these methods and are compared with the existing methods. It is observed that the new fuzzy numerical methods yield more accurate results than the existing methods.

**Keywords:** fuzzy partition; fuzzy transform; new iterative method; Cauchy problems

#### **1. Introduction**

Most mathematical models in engineering and science require the solution of ordinary differential equations (ODEs). Generally, it is difficult to obtain closed-form solutions for ODEs, especially for nonlinear and nonhomogeneous cases. Many models lead to ordinary differential equations in the form of Cauchy problems, an important branch of modern mathematics that arises naturally in different areas of applied sciences, physics, and engineering. Thus, methods for solving Cauchy problems are of particular importance, and many researchers have started developing them [1–3].

The FT was coined by Perfilieva, who developed it as a new mathematical method [4]. The core idea of the FT is a fuzzy partition of a universe into fuzzy subsets. The technique of the FT has been successfully applied to other mathematical problems as well, including image processing, the analysis of time series and elsewhere [5–7]. This idea was first applied to Cauchy problems, together with other classical numerical methods, in [8], which proposed generalized Euler and Euler-Cauchy methods; the mid-point FT method was then demonstrated in [9]. The success of these applications is due in part to the fact that the FT is capable of accurately approximating any continuous function. Thus, we propose new fuzzy numerical methods for Cauchy problems with the help of the FT and the new iterative method.

The motivation of the proposed study comes from the papers [3,8,10]. The numerical solution of the Cauchy problem was considered in [8,9], where the authors showed that the error can be reduced by using the FT with uniform fuzzy partitions. At the same time, in [10,11], the concept of a generalized fuzzy partition was proposed. Among other results, a necessary and sufficient condition making it possible to easily design the generalized fuzzy partition was provided [12]. This is important for various practical applications of the FT. Further, in [3], the authors proposed modifications of the trapezoidal rule and the Adams-Moulton methods (2- and 3-step) to solve ODEs based on the new iterative method introduced in [2].

In this paper, we discuss the problem considered in [8,9]. The triangular and raised cosine generating functions are replaced by new representation formulas for the generalized uniform fuzzy partition of the FT, namely powers of the triangular and raised cosine generating functions. We study the approximation properties of the FT and show that powers of the triangular and raised cosine generalized uniform fuzzy partitions can be constructed in such a way that the FT reduces the error. We also propose modifications of the FT introduced by I. Perfilieva [4] with respect to the new representation formulas for the generalized uniform fuzzy partition, and the technique of the FT is then combined with traditional methods based on the new iterative method [2,3] to solve Cauchy problems. It is observed that the proposed new methods yield more accurate results than the fuzzy approximation methods of [8,9].

This paper is organized as follows. In Section 2, we introduce the basic concepts and results of the FT with respect to the generalized uniform fuzzy partition needed throughout this paper. The main part of this paper is Sections 3 and 4: new representations for basic functions of the FT, followed by the modified one-step, 2-step, and 3-step methods based on the new representation formulas for the generalized uniform fuzzy partition of the FT. In Section 5, numerical examples are discussed. Concluding remarks are presented in Section 6.

Throughout the paper, we denote by N, N<sup>+</sup>, Z, R, and R<sup>+</sup> the sets of natural numbers (including zero), positive natural numbers, integers, real numbers, and positive real numbers, respectively.

#### **2. Basic Concepts**

In this section, we give some definitions and introduce the necessary notation from [10], which will be used throughout the paper. Throughout this section, we deal with an interval [*a*, *b*] ⊂ R of real numbers.

**Definition 1.** *(generalized uniform fuzzy partition) Let xi* ∈ [*a*, *b*] , *i* = 1, ... , *n*, *be fixed nodes such that a* = *x*<sup>1</sup> < ... < *xn* = *b*, *n* ≥ 2*. We say that the fuzzy sets Ai* : [*a*, *b*] → [0, 1] *constitute a generalized fuzzy partition of* [*a*, *b*] *if for every i* = 1, ... , *n there exists h* > 0 *such that x*<sup>0</sup> = *x*1, *xn* = *xn*+1, [*xi* − *h*, *xi* + *h*] ⊆ [*a*, *b*] *and the following conditions are fulfilled:*


*Fuzzy sets A*1, ... , *An are called basic functions. It is important to remark that, by the conditions of locality and continuity,* $\int_a^b A_i(x)\,dx > 0$*. A generalized uniform fuzzy partition of* [*a*, *b*] *is defined for equidistant nodes, i.e., for all i* = 1, ... , *n* − 1, *xi*+<sup>1</sup> = *xi* + *h*, *where h* = (*b* − *a*) / (*n* − 1)*, and two additional properties are satisfied,*

*4. Ai* (*xi* − *x*) = *Ai* (*xi* + *x*) *for all x* ∈ [0, *h*] , *i* = 2, . . . , *n* − 1*;*

$$5. \quad A_i(x) = A_{i-1}(x - h) \text{ and } A_{i+1}(x) = A_i(x - h) \text{ for all } x \in [x_i, x_{i+1}], \ i = 2, \dots, n-1;$$

*then the fuzzy partition is called h-uniform generalized fuzzy partition. Throughout this paper, we will write generalized uniform fuzzy partition instead of h-uniform generalized fuzzy partition.*

**Definition 2.** *(generating function) A function K* : [−1, 1] → [0, 1] *is called a generating function if it is assumed to be even, continuous and <sup>K</sup>* (*x*) <sup>&</sup>gt; <sup>0</sup> *if <sup>x</sup>* <sup>∈</sup> (−1, 1)*. The function <sup>K</sup>* : [−1, 1] <sup>→</sup> <sup>R</sup> *is even if for all x* ∈ [0, 1] , *K* (−*x*) = *K* (*x*)*.*

The following definition recalls the concept of a generalized fuzzy partition, which can be easily extended to the interval [*a*, *b*]. We assume that [*a*, *b*] is partitioned by *A*1, ... , *An*, according to Definition 1.

**Definition 3.** *A generalized uniform fuzzy partition of interval* [*a*, *b*]*, determined by the triplet* (*K*, *h*, *a*)*, can be defined using generating function K (Definition 2). Then, basic functions of a generalized uniform fuzzy partition are shifted copies of K defined by*

$$A_i(x) = K\left(\frac{x - x_i}{h}\right), \ x \in [x_i - h, \ x_i + h],$$

*for all i* = 1, ... , *n. The parameter h is called the bandwidth or the shift of the fuzzy partition, and the nodes xi* = *a* + *ih are called the central points of the fuzzy sets A*1,..., *An.*

**Remark 1.** *A fuzzy partition is called Ruspini if the following condition*

$$A\_i(\mathbf{x}) + A\_{i+1}(\mathbf{x}) = 1, \ i = 1, \ldots, n-1,\tag{1}$$

*holds for any x* ∈ [*xi*, *xi*+1]*. This condition is often called the Ruspini condition.*

#### **3. New Representations of Basic Functions for Particular Cases**

In this section, we propose, in two subsections, new representations of basic functions that constitute a generalized uniform fuzzy partition of the interval [*a*, *b*], and then the FT technique based on these new representations of basic functions.

#### *3.1. Power of the Triangular and Raised Cosine Generalized Uniform Fuzzy Partition*

Two types of basic functions, triangular and sinusoidal shaped membership functions, were proposed in [4,8]. Later, in [13], the authors considered different shapes for the basic functions of a fuzzy partition. Furthermore, a generalized fuzzy partition appeared in connection with the notion of a higher-degree F-transform [11]. An even weaker version was implicitly introduced to satisfy the requirements of image compression [14]. Recently, different conditions for generalized uniform fuzzy partitions were proposed in [10,12]. Table 1 provides the definitions of two types of generating functions, the triangular and raised cosine generating functions [7,10–12,15].

**Table 1.** Generating functions of strong uniform fuzzy partition.


In the following, we present new representations for generating functions. In particular, we present three new representations based on the triangular and raised cosine generating functions: two generating functions based on the triangular generating function and one based on the raised cosine generating function.

**Definition 4.** *(natural order triangular generating function) Let* $K_{T_i^m}: \mathbb{R} \to [0,1]$*, i* = 1, 2*, be defined by*

$$1. \ K_{T_1^m}(x) = \begin{cases} (1 - |x|)^m, & |x| \le 1, \\ 0, & \text{otherwise} \end{cases} = \max\left(1 - |x|, \, 0\right)^m, \tag{2}$$

$$2. \ K_{T_2^m}(x) = \begin{cases} 1 - |x|^m, & |x| \le 1, \\ 0, & \text{otherwise} \end{cases} = \max\left(1 - |x|^m, \, 0\right), \tag{3}$$

*are called powers of the triangular (shaped) generating functions, where m* ∈ N<sup>+</sup>*.*

**Definition 5.** *(odd natural order raised cosine generating function) Let KCm* : <sup>R</sup> <sup>→</sup> [0, 1] *be defined by*

$$K_{C^m}(x) = \begin{cases} \frac{1}{2}\left(1 + \cos^m(\pi x)\right), & |x| \le 1, \\ 0, & \text{otherwise}, \end{cases} \tag{4}$$

*is called the power of the raised cosine generating function, when m is an odd natural number* (*i.e., m* = 2*k* − 1, *k* ∈ N<sup>+</sup>)*.*

**Remark 2.** *Particularly, we can check the validity of Equation (4) using the following relation*

$$K_{C^m}(x) = \begin{cases} \frac{1}{2}\left(1 + \cos^m(\pi x)\right), & |x| \le 1, \\ 0, & \text{otherwise}, \end{cases} = \begin{cases} \frac{1}{2}\left(1 + \sin^m\left(\frac{\pi}{2}(2x + 1)\right)\right), & |x| \le 1, \\ 0, & \text{otherwise}. \end{cases}$$

**Lemma 1.** *If* $K_{T_i^n}(x)$*, i* = 1, 2 *(respectively* $K_{C^n}(x)$*) determines a power of the triangular (respectively raised cosine) generating function, then*

$$1. \ \int_{-1}^{1} K_{T_1^n}(x)\, dx = \frac{2}{n+1}, \qquad 2. \ \int_{-1}^{1} K_{T_2^n}(x)\, dx = \frac{2n}{n+1}, \qquad 3. \ \int_{-1}^{1} K_{C^n}(x)\, dx = 1,$$

*or equivalently*

$$1. \ \int_{-h}^{h} K_{T_1^n}\left(\frac{t}{h}\right) dt = \frac{2h}{n+1}, \qquad 2. \ \int_{-h}^{h} K_{T_2^n}\left(\frac{t}{h}\right) dt = \frac{2nh}{n+1}, \qquad 3. \ \int_{-h}^{h} K_{C^n}\left(\frac{t}{h}\right) dt = h,$$

*where* $0 \le \frac{2}{n+1} \le 1$*,* $1 \le \frac{2n}{n+1} \le 2$*, h is a positive real number, n* ∈ N<sup>+</sup>*, and n is an odd natural number in the raised cosine case.*

**Proof.** The proof can easily be obtained by using standard integration methods within the boundaries and then the substitution *x* = *t*/*h*.
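The closed forms in Lemma 1 can also be verified numerically; the sketch below (an independent check, not part of the original paper) uses a plain Riemann sum over a fine grid:

```python
import numpy as np

def K_T1(x, n):  # power of the triangular generating function, Eq. (2)
    return np.where(np.abs(x) <= 1, (1 - np.abs(x)) ** n, 0.0)

def K_T2(x, n):  # Eq. (3)
    return np.where(np.abs(x) <= 1, 1 - np.abs(x) ** n, 0.0)

def K_C(x, m):   # power of the raised cosine generating function, Eq. (4)
    return np.where(np.abs(x) <= 1, 0.5 * (1 + np.cos(np.pi * x) ** m), 0.0)

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
for n in (1, 2, 3):
    assert abs(np.sum(K_T1(x, n)) * dx - 2 / (n + 1)) < 1e-3
    assert abs(np.sum(K_T2(x, n)) * dx - 2 * n / (n + 1)) < 1e-3
for m in (1, 3, 5):  # the raised cosine identity needs odd powers
    assert abs(np.sum(K_C(x, m)) * dx - 1.0) < 1e-3
print("Lemma 1 integrals confirmed numerically")
```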

On the basis of Definitions 4 and 5, Lemma 1, and according to Definition 3, a generalized uniform fuzzy partition can also be defined using the generating function *αK* for *α* > 0 (which, in general, does not necessarily satisfy the Ruspini condition). Thus, basic functions of a generalized uniform fuzzy partition are shifted copies of *αK* defined by

$$A_k(x, x_0) = \alpha K\left(\frac{x - x_0}{h} - k\right), \ x \in [x_{k-1}, \ x_{k+1}].\tag{5}$$

In particular, let $K_{T_1^m}$, $K_{T_2^m}$ (and $K_{C^m}$) be the powers of the triangular (and raised cosine) generating functions defined above. We say that a generalized uniform fuzzy partition is a power of a triangular (or of a raised cosine) generalized uniform fuzzy partition if its generating function *K* belongs to $\alpha K_{T_1^m}$, $\alpha K_{T_2^m}$ (or $\alpha K_{C^m}$), where $\alpha = 1/\int_{-1}^{1} K(t)\,dt$. Indeed, the value of *α* immediately follows from $\int_{-1}^{1} \alpha K_{T_1^m}(t)\, dt = 1 \Rightarrow \alpha = 1/\int_{-1}^{1} K_{T_1^m}(t)\, dt$. In the following, we modify the definition of a triangular and raised cosine generalized uniform fuzzy partition by proposing that powers of the triangular and raised cosine generalized uniform fuzzy partitions can be obtained simply using the equality $\alpha = 1/\int_{-1}^{1} K(t)\,dt$.

**Definition 6.** *Let m* <sup>∈</sup> <sup>N</sup>+*. A system of fuzzy sets* {*Ak* <sup>|</sup> *<sup>k</sup>* <sup>∈</sup> <sup>Z</sup>} *defined by*

$$1. \ A_k(x, x_0) = \alpha K_{T_1^m}\left(\frac{x - x_0}{h} - k\right), \quad \alpha = \frac{m+1}{2}, \tag{6}$$

$$2. \ A_k(x, x_0) = \alpha K_{T_2^m}\left(\frac{x - x_0}{h} - k\right), \quad \alpha = \frac{m+1}{2m}, \tag{7}$$

*is called a power of the triangular generalized uniform fuzzy partition of the real line determined by the triplet* ($K_{T_i^m}$, *h*, *x*<sub>0</sub>), *i* = 1, 2*. Further, let m be an odd natural number. A system of fuzzy sets* {*Ak* | *k* ∈ Z} *defined by*

$$3. \ A_k(x, x_0) = \alpha K_{C^m}\left(\frac{x - x_0}{h} - k\right), \quad \alpha = 1, \tag{8}$$

*is called a power of the raised cosine generalized uniform fuzzy partition of the real line determined by the triplet* ($K_{C^m}$, *h*, *x*<sub>0</sub>)*. The parameter h is the bandwidth of the fuzzy partition and x*<sup>0</sup> + *kh* = *xk.*

**Definition 7.** *Let <sup>x</sup>*<sup>1</sup> <sup>&</sup>lt; ... <sup>&</sup>lt; *xn be fixed nodes within* [*a*, *<sup>b</sup>*] <sup>⊂</sup> <sup>R</sup>*, such that <sup>x</sup>*<sup>1</sup> <sup>=</sup> *a, xn* <sup>=</sup> *<sup>b</sup> and <sup>n</sup>* <sup>≥</sup> <sup>2</sup>*. We consider nodes x*1, ... , *xn are equidistant, with distance (shift) h* = (*b* − *a*) / (*n* − 1)*. A system of fuzzy sets B*1, ... , *Bn* : [*a*, *b*] → [0, 1] *be power of a triangular and raised cosine generalized uniform fuzzy partitions of* [*a*, *b*] *if it is defined by*

$$B_k(x) = \begin{cases} A_k(x, a), & x \in [a, b], \\ 0, & \text{otherwise}, \end{cases} \quad \text{or equivalently} \quad B_k(x) = \begin{cases} \alpha K\left(\frac{x - x_k}{h}\right), & x \in [a, b], \\ 0, & \text{otherwise}, \end{cases} \tag{9}$$

*where xk* = *a* + *kh. In the sequel, we denote by K a generating function determined by the Formulas (2)–(4). Further, α and Ak*(*x*, *a*), *k* = 1, . . . , *n, are determined by the Formulas (6)–(8).*

**Lemma 2.** *If Bk* (*x*) *determines a power of the raised cosine generalized uniform fuzzy partition of* [*a*, *b*]*, then Bk* (*x*) *satisfies the Ruspini condition (1) when m (see (4)) is an odd natural number.*

**Proof.** Indeed, if *x* ∈ [*a*, *b*], there exists *k* ∈ {1, . . . , *n* − 1} such that *x* ∈ [*xk*, *xk*<sup>+</sup>1]. By (4) and (8), and Remark 1, we get

$$\begin{aligned} B_k(x) + B_{k+1}(x) &= A_k(x, a) + A_{k+1}(x, a) = \alpha K_{C^m}\left(\frac{x - x_k}{h}\right) + \alpha K_{C^m}\left(\frac{x - x_{k+1}}{h}\right) \\ &= \frac{1}{2}\left(1 + \cos^m\left(\pi\left(\frac{x - x_k}{h}\right)\right)\right) + \frac{1}{2}\left(1 + \cos^m\left(\pi\left(\frac{x - x_{k+1}}{h}\right)\right)\right) \\ &= 1 + \frac{1}{2}\left(\cos^m\left(\frac{\pi}{h}(x - x_k)\right) + \cos^m\left(\frac{\pi}{h}(x - x_{k+1})\right)\right). \end{aligned}$$

By the properties of trigonometric functions, noticing that cos (*θ* + *π*) = − cos (*θ*), it is easy to see that

$$\cos^m\left(\pi\left(\frac{\mathbf{x}-\mathbf{x}\_k}{h}\right)\right) + \cos^m\left(\pi\left(\frac{\mathbf{x}-\mathbf{x}\_{k+1}}{h}\right)\right) = \cos^m\left(\pi\left(\frac{\mathbf{x}-\mathbf{x}\_{k+1}}{h}\right) + \pi\right) + \cos^m\left(\pi\left(\frac{\mathbf{x}-\mathbf{x}\_{k+1}}{h}\right)\right).$$

Thus, if *m* is an odd natural number, this sum is 0, and the result follows.
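Lemma 2 can be checked numerically; in this sketch (illustrative parameters, not from the paper), five raised cosine basic functions with odd power m = 3 are placed on [0, 4], and adjacent pairs sum to 1 across the whole interval:

```python
import numpy as np

def K_Cm(x, m):
    # power of the raised cosine generating function, Eq. (4); alpha = 1
    return np.where(np.abs(x) <= 1.0, 0.5 * (1.0 + np.cos(np.pi * x) ** m), 0.0)

a, b, n, m = 0.0, 4.0, 5, 3           # odd power m; nodes x_k = a + k*h, h = 1
h = (b - a) / (n - 1)
nodes = a + h * np.arange(n)
x = np.linspace(a, b, 1001)
B = np.array([K_Cm((x - xk) / h, m) for xk in nodes])  # B_k on [a, b]
print(np.allclose(B.sum(axis=0), 1.0))  # True: the Ruspini condition holds
```

Repeating the check with an even power (say m = 2) makes the sum deviate from 1, which is exactly why the lemma requires m odd.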

In the following, if *K* is a normal generating function (i.e., *K*(0) = 1, not necessarily satisfying the Ruspini condition), we use the generating function *αK* for *α* > 0, where (*αK*)(*x*) = *α* · *K*(*x*).

**Lemma 3.** *Let the basic functions Bk*, *k* = 1, ... , *n*, *of a generalized uniform fuzzy partition be shifted copies of αK*, *α* > 0, *defined by the Formula (5), and moreover, let K be normal. Then, for each k* = 1, . . . , *n*, *Bk*(*xk*) = *α*, *xk* ∈ [*xk* − *h*, *xk* + *h*]*.*

**Proof.** A generating function *K* is said to be normal if *K*(0) = 1. By the Formula (5) and the normality of *K*, we get $B_k(x_k) = \alpha K\left(\frac{x_k - x_k}{h}\right) = \alpha K(0) = \alpha > 0$.

**Corollary 1.** *Let the assumptions of Lemma 3 be fulfilled, but with the fuzzy sets Bk*, *k* = 1, ... , *n*, *n* ≥ 2, *determined by Definition 7. Then, for each k* = 1, ... , *n*, *Bk*(*xk*) = *α*, *xk* ∈ [*xk* − *h*, *xk* + *h*]*, where α is defined by Definition 7.*

**Proof.** Indeed, the proof immediately follows from Definition 7 and Lemma 3.

**Corollary 2.** *Let the assumptions of Lemma 3 be fulfilled, but with the fuzzy sets Bk*, *k* = 1, ... , *n*, *n* ≥ 2, *determined by Definition 3. Then, for each k* = 1, . . . , *n, Bk*(*xk*) = 1, *xk* ∈ [*xk* − *h*, *xk* + *h*]*.*

#### *3.2. New FT Based Power of the Triangular and Raised Cosine Generalized Uniform Fuzzy Partition*

In this subsection, we present the main principles of the F-transform detailed in [8,10,11], modified with respect to powers of the triangular and raised cosine generalized uniform fuzzy partitions. Further, we will show that the FT components with respect to these partitions can be simplified and approximate an original function, say *f*.

**Definition 8.** *Let f be a continuous function on* [*a*, *b*] *and Bk*(*t*)*, k* = 1, ... , *n, be power of the triangular and raised cosine generalized uniform fuzzy partition of* [*a*, *b*] *, n* ≥ 2*. A vector of real numbers F*[ *f* ] = (*F*1, *F*2,..., *Fn*) *given by*

$$F\_k = \frac{\int\_a^b f\left(t\right) B\_k(t) \, dt}{\int\_a^b B\_k(t) \, dt},\tag{10}$$

*for k* = 1, ... , *n is called the direct FT of f with respect to power of the triangular and raised cosine generalized uniform fuzzy partition Bk.*
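To make Definition 8 concrete, the following sketch numerically evaluates the components (10) for a power-of-triangular generating function $K\_{T\_1^m}(t) = (1 - |t|)^m$ on a uniform partition. The function names, the quadrature resolution, and the use of simple trapezoidal quadrature are our own illustrative choices, not part of the source.

```python
import numpy as np

def trapq(y, x):
    # Trapezoidal quadrature, kept explicit for NumPy-version independence
    dx = np.diff(x)
    return float(np.sum(dx * (y[:-1] + y[1:]) / 2.0))

def triangular_kernel(t, m=1):
    # Power-of-triangular generating function K_{T_1^m}(t) = (1 - |t|)^m on [-1, 1]
    return np.where(np.abs(t) <= 1.0, (1.0 - np.abs(t)) ** m, 0.0)

def direct_ft(f, a, b, n, m=1, qp=2001):
    # Direct FT components F_k of f per Formula (10), with basic functions
    # B_k centered at the uniform nodes t_k = a + (k-1)h, h = (b-a)/(n-1)
    h = (b - a) / (n - 1)
    nodes = a + h * np.arange(n)
    t = np.linspace(a, b, qp)
    ft = f(t)
    F = np.empty(n)
    for k, tk in enumerate(nodes):
        Bk = triangular_kernel((t - tk) / h, m)
        F[k] = trapq(ft * Bk, t) / trapq(Bk, t)
    return nodes, F
```

Note that the ratio in (10) normalizes away any constant factor *α* in the basic functions, so the interior components track *f*(*tk*) closely for small *h*, in line with Theorem 1 below.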

In the following, we assume a generating function *K* in the Formulas (2)–(4). We will simplify the representation (10).

**Lemma 4.** *Let f* ∈ *C* ([*a*, *b*]) *and according to Definition 7, fuzzy sets Bk*, *k* = 1, ... , *n*, *n* ≥ 2, *be power of a triangular and raised cosine generalized uniform fuzzy partition of* [*a*, *b*] *with a generating function K, then representation (10) of direct FT can be simplified as follows for k* = 1, . . . , *n*

$$F\_k = \frac{\int\_{-1}^{1} f\left(th + t\_k\right) K(t) \, dt}{\int\_{-1}^{1} K(t) \, dt} = \frac{\int\_{-h}^{h} f\left(t + t\_k\right) K\left(\frac{t}{h}\right) \, dt}{\int\_{-h}^{h} K\left(\frac{t}{h}\right) \, dt}.$$

**Proof.** In this proof, we will write a generating function *K* instead of (2)–(4). By Definition 7, we get

$$B\_k\left(t\right) = \alpha K \left(\frac{t - t\_k}{h}\right), \ t \in \left[t\_k - h, t\_k + h\right],$$

for *k* = 1, ... , *n*, with $t\_0 = t\_1$, $t\_{n+1} = t\_n$. Substituting $u = \frac{t - t\_k}{h}$ and then $t = s/h$, we get

$$\int\_{t\_{k-1}}^{t\_{k+1}} f\left(t\right) B\_k(t) \, dt = \alpha h \int\_{-1}^1 f\left(th + t\_k\right) K(t) \, dt = \alpha \int\_{-h}^h f\left(t + t\_k\right) K\left(\frac{t}{h}\right) dt$$

$$\int\_{t\_{k-1}}^{t\_{k+1}} B\_k(t) \, dt = \alpha h \int\_{-1}^1 K(t) \, dt = \alpha \int\_{-h}^h K\left(\frac{t}{h}\right) dt$$

and the corresponding result follows from representation (10).

Indeed, the previous lemma holds for every fuzzy partition generated by a kernel. Now, we will simplify the above expressions for the coefficients *F*[ *f* ] = (*F*1, *F*2,..., *Fn*) in the representation (10) even more. This simplification is very important for applications, making them more flexible and consequently easier to use.

**Lemma 5.** *Let the assumptions of Lemma 4 be fulfilled. Then, the coefficients F*[ *f* ] = (*F*1, *F*2,..., *Fn*) *in the expression (10) of the FT components Fk of f can be written as follows:*

$$F\_k = \frac{1}{h} \int\_a^b f\left(t\right) B\_k(t) \, dt = \frac{\alpha}{h} \int\_a^b f\left(t\right) K\left(\frac{t - t\_k}{h}\right) \, dt,\tag{11}$$

*for k* = 1, ... , *n, where interval* [*a*, *b*] *is partitioned by power of the triangular and raised cosine generalized uniform fuzzy partition B*1,..., *Bn and α is defined by Definition 7.*

**Proof.** Let *k* ∈ {1, . . . , *n*} and consider the fuzzy sets *Bk*(*x*) from the power of the triangular and raised cosine generalized uniform fuzzy partition of [*a*, *b*] in (9). We will prove the equality $\int\_{t\_{k-1}}^{t\_{k+1}} B\_k(t) \, dt = h$. By virtue of Lemmas 1 and 4, and (6), we get:

$$\int\_{t\_{k-1}}^{t\_{k+1}} B\_k(t) \, dt = \int\_{t\_{k-1}}^{t\_{k+1}} A\_k(t, \alpha) \, dt = \int\_{t\_k - h}^{t\_k + h} \left( \frac{m + 1}{2} \right) K\_{T\_1^m} \left( \frac{t - t\_k}{h} \right) \, dt = h \int\_{-1}^1 \left( \frac{m + 1}{2} \right) K\_{T\_1^m} (t) \, dt = h,$$

where *h* is the bandwidth of the fuzzy partition and *tk* = *a* + *kh*. The other Formulas (7) and (8) can be proved similarly, and the corresponding result in the expression (10) follows.

**Lemma 6.** *Let f* ∈ *C* ([*a*, *b*])*. Then for any ε* > 0 *there exist n<sub>ε</sub>* ∈ ℕ *and basic functions B*1, ... , *Bn<sub>ε</sub> forming a power of the triangular and raised cosine generalized uniform fuzzy partition of* [*a*, *b*]*. Let Fk*, *k* = 1, ... , *n<sub>ε</sub>*, *be the integral FT components of f with respect to B*1, ... , *Bn<sub>ε</sub>. Then for each k* = 1, ... , *n<sub>ε</sub>* − 1 *the following estimations hold:* | *f*(*t*) − *Fi*| ≤ *ε for each t* ∈ [*a*, *b*] ∩ [*tk*, *tk*+1] *and i* = *k*, *k* + 1*.*

**Proof.** See [4].

**Corollary 3.** *Let the conditions of Lemma 6 be fulfilled. Then for each k* = 1, ... , *n<sub>ε</sub>* − 1 *the following estimation holds:* |*Fk* − *Fk*+1| < *ε.*

**Proof.** According to [4,16], let *t* ∈ [*a*, *b*] ∩ [*tk*, *tk*+1]. Then by Lemma 6, for any *k* = 1, ... , *n* − 1 we obtain | *f*(*t*) − *Fk*| < *ε*/2 and | *f*(*t*) − *Fk*+1| < *ε*/2. Thus, |*Fk* − *Fk*+1| ≤ | *f*(*t*) − *Fk*| + | *f*(*t*) − *Fk*+1| < *ε*/2 + *ε*/2 = *ε*.

The following theorem estimates the difference between the original function and its direct FT with respect to power of the triangular and raised cosine generalized uniform fuzzy partition.

**Theorem 1.** *Let f* (*t*) ∈ *<sup>C</sup>*<sup>2</sup> [*a*, *<sup>b</sup>*] *and the conditions of Lemma <sup>5</sup> be fulfilled. Then for k* = 1, . . . , *<sup>n</sup>*

$$F\_k = \alpha f\left(t\_k\right) + \mathcal{O}\left(h^2\right),\tag{12}$$

*where α* > 0 *or α is defined by Definition 7.*

**Proof.** By the locality condition of Definition 1, Lemmas 3 and 5, and according to the proof of Lemma 9.3 in [8], applying the trapezoid formula with nodes *tk*−1, *tk*, *tk*+1 to the numerical computation of the integral, we get for *α* > 0

$$\begin{split} F\_{k} &= \frac{1}{h} \int\_{t\_{k-1}}^{t\_{k+1}} f\left(t\right) B\_{k}(t) \, dt, \\ &= \frac{1}{h} \frac{h}{2} \left( f\left(t\_{k-1}\right) B\_{k}(t\_{k-1}) + 2f\left(t\_{k}\right) B\_{k}(t\_{k}) + f\left(t\_{k+1}\right) B\_{k}(t\_{k+1}) \right) + \mathcal{O}\left(h^{2}\right), \\ &= f\left(t\_{k}\right) B\_{k}(t\_{k}) + \mathcal{O}\left(h^{2}\right) = f\left(t\_{k}\right) A\_{k}(t\_{k}, \alpha) + \mathcal{O}\left(h^{2}\right), \\ &= f\left(t\_{k}\right) \alpha K\left(0\right) + \mathcal{O}\left(h^{2}\right), \\ &= \alpha f\left(t\_{k}\right) + \mathcal{O}\left(h^{2}\right). \end{split}$$

**Definition 9.** *Let F*[ *f* ] = (*F*1, *F*2,..., *Fn*) *be the direct FT of a function f* ∈ *C* [*a*, *b*] *with respect to the fuzzy partition Bk*(*t*)*, k* = 1, . . . , *n of* [*a*, *b*]*. Then, the function* $\hat{f}$ *defined on* [*a*, *b*] *by*

$$\hat{f}\left(t\right) = \frac{\sum\_{k=1}^{n} F\_k B\_k(t)}{\sum\_{k=1}^{n} B\_k(t)},\tag{13}$$

*is called the inverse FT of f .*
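The inverse FT (13) can be sketched as follows for triangular basic functions on a uniform partition. In this illustration the components are simply taken as *Fk* = *f*(*tk*), which by Theorem 1 agrees with the direct FT components up to *O*(*h*²); the names and the test function are our own choices.

```python
import numpy as np

def inverse_ft(t, nodes, F, h, m=1):
    # Inverse FT (13): B_k-weighted mean of the components F_k, with
    # B_k(t) = (1 - |(t - t_k)/h|)^m on [t_k - h, t_k + h].
    # For m = 1 the weights are hat functions that sum to 1 (Ruspini).
    t = np.atleast_1d(np.asarray(t, dtype=float))
    u = (t[:, None] - nodes[None, :]) / h
    B = np.where(np.abs(u) <= 1.0, (1.0 - np.abs(u)) ** m, 0.0)
    return (B @ F) / B.sum(axis=1)

# Components taken as F_k = f(t_k) for f = sin on [0, pi]
a, b, n = 0.0, np.pi, 9
h = (b - a) / (n - 1)
nodes = a + h * np.arange(n)
F = np.sin(nodes)
tt = np.linspace(a, b, 501)
err = float(np.max(np.abs(inverse_ft(tt, nodes, F, h) - np.sin(tt))))
```

For *m* = 1 this reconstruction is piecewise-linear interpolation between the nodes, so the error behaves like *O*(*h*²), consistent with Lemma 7 below.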

**Corollary 4.** *Let the assumptions of Lemma 2 be fulfilled and let* $\hat{f}(t)$ *be the inverse FT of f with respect to the power of the raised cosine generating function. Then, for all t* ∈ [*a*, *b*] *the following holds:* $\hat{f}(t) = \sum\_{k=1}^{n} F\_k B\_k(t)$*.*

**Proof.** The proof immediately follows from Definition 9 and Lemma 2, using $\sum\_{k=1}^{n} B\_k(t) = 1$.

The following lemma estimates the difference between the original function and its inverse FT.

**Lemma 7.** *Let the assumptions of Theorem 1 be fulfilled and let* $\hat{f}(t)$ *be the inverse FT of f with respect to the fuzzy partition of* [*a*, *b*] *given by Definition 7. Then, for all t* ∈ [*a*, *b*] *the following estimation holds:*

$$
\hat{f}\left(t\right) = \alpha f\left(t\_k\right) + \mathcal{O}\left(h^2\right). \tag{14}
$$

**Proof.** Let *t* ∈ [*a*, *b*] be such that *t* ∈ [*tk*, *tk*+1] for some *k* = 1, . . . , *n*. By Theorem 1,

$$\begin{split} \hat{f}\left(t\right) - \alpha f\left(t\_{k}\right) &= \frac{\sum\_{k=1}^{n} F\_{k} B\_{k}\left(t\right)}{\sum\_{k=1}^{n} B\_{k}\left(t\right)} - \frac{\sum\_{k=1}^{n} \alpha f\left(t\_{k}\right) B\_{k}\left(t\right)}{\sum\_{k=1}^{n} B\_{k}\left(t\right)} \\ &= \frac{\sum\_{k=1}^{n} \left(F\_{k} - \alpha f\left(t\_{k}\right)\right) B\_{k}\left(t\right)}{\sum\_{k=1}^{n} B\_{k}\left(t\right)} = \mathcal{O}\left(h^{2}\right). \end{split}$$

**Corollary 5.** *Let the assumptions of Lemma 7 be fulfilled; then* $\left|\hat{f}(t) - f(t)\right| < \varepsilon$*.*

**Proof.** The proof easily follows from the proof of Lemma 7 and then using Lemma 6 as follows:

$$\left| \hat{f}\left(t\right) - f\left(t\right) \right| \le \frac{\sum\_{k=1}^{n} \left| F\_k - f\left(t\right) \right| B\_k(t)}{\sum\_{k=1}^{n} B\_k(t)} < \varepsilon.$$

**Remark 3.** *According to Definitions 1 and 2, if normality is considered to be an additional condition for the generating function (i.e., K*(0) = 1*) and the generalized uniform fuzzy partition of* [*a*, *b*] *satisfies Ak*(*xk*) = *α*, *α* > 0*, then it is easy to see that the inverse FT satisfies* $\hat{f}(t\_k) = F\_k$ *for all k* = 1, ... , *n. This is true for Definition 7. Moreover, if the orthogonality condition (Ruspini condition (1)) is replaced by the covering condition in Definition 1 and the generalized uniform fuzzy partition of* [*a*, *b*] *satisfies Ak*(*xk*) = *α* = 1*, then it is also easy to see that the inverse FT satisfies* $\hat{f}(t\_k) = F\_k$ *for all k* = 1, . . . , *n. This is true for Formula (8) only.*

An important property of the direct FT as well as of the inverse FT is their linearity: given *f* , *g* ∈ *C* [*a*, *b*] and *α*, *β* ∈ ℝ, if *h* = *α f* + *βg*, then *F* [*h*] = *αF* [ *f* ] + *βF* [*g*] and $\hat{h} = \alpha \hat{f} + \beta \hat{g}$. In the next section, we present new fuzzy numerical methods based on the FT and a new iterative method for the numerical solution of the Cauchy problem.

#### **4. New Fuzzy Numerical Methods for Cauchy Problem**

Consider the initial value problem (IVP) for the Cauchy problem:

$$y' = f(t, y), \ y(t\_1) = y\_1, \ a = t\_1 \le t \le t\_n = b. \tag{15}$$

where *y*1 ∈ ℝ and *f* is a continuous function on [*a*, *b*] × ℝ satisfying the Lipschitz condition. In fact, the analytical solution of problem (15) is often difficult and sometimes impossible to obtain. Instead, numerical analysis is concerned with obtaining approximate solutions with errors within reasonable bounds. Thus, the use of fuzzy numerical methods seems suitable.

In [8,9], the authors presented the Euler method and the Mid-point rule, based on the FT, for the numerical solution of the Cauchy problem (15). A new iterative method (NIM) has been proposed for solving linear (nonlinear) functional equations, ordinary differential equations and delay differential equations [2,3].

In this section, we present three new schemes to solve the Cauchy problem (15) that use the FT and NIM. Our motivation stems from the classical approach: the trapezoidal rule (1-step) and the Adams–Moulton methods (2- and 3-step). For the rest of this paper, suppose that we are given the Cauchy problem (15), where the function *f* on [*a*, *b*] is sufficiently smooth, and assume that all necessary requirements for constructing the FT of the solution of the Cauchy problem (15) are fulfilled. Now, we present numerical Schemes I, II, and III. The first scheme uses a 1-step method, the second a 2-step method, and the third a 3-step method.

#### *4.1. Numeric Scheme I: Modified Trapezoidal Rule Based on FT and NIM for Cauchy Problem*

In the present subsection, we construct a numeric scheme based on the more advanced method known as the trapezoidal rule. Recall that it is a one-step method with second-order accuracy, which can be considered a Runge–Kutta method. We propose a modification of the trapezoidal rule based on the FT and NIM for solving the Cauchy problem (15). The scheme provides formulas for the FT components, *Yk*, *k* = 2, ... , *n* − 1, of the unknown function *y*(*t*) with respect to a chosen power of the triangular (or raised cosine) generalized uniform fuzzy partition, *B*1, ... , *Bn*, of the interval [*a*, *b*] with parameter *h*, to approximate the solution of the Cauchy problem (15). First, choose the number *n* ≥ 2 and compute *h* = (*b* − *a*) / (*n* − 1); then construct the generalized uniform fuzzy partition of [*a*, *b*] using Definition 7. Note that each function *Bk* spans three nodes *tk*−1, *tk*, *tk*+1, *k* = 2, ... , *n* − 1, with *Bk*(*tk*−1) = *Bk*(*tk*+1) = 0 and *Bk*(*tk*) = 1. Now, we apply the FT and NIM to the Cauchy problem (15) and obtain the numeric Scheme I for *k* = 1, ... , *n* − 1 as follows (see [3,8] for technical details):

$$\begin{aligned} Y\_1 &= y\_1, \\ Y\_{k+1}^\* &= Y\_k + hF\_k / 2, \\ Y\_{k+1}^{\*\*} &= Y\_{k+1}^\* + hF\_{k+1}^\* / 2, \\ Y\_{k+1} &= Y\_k + h\left(F\_k + F\_{k+1}^{\*\*}\right) / 2, \end{aligned} \tag{16}$$

where

$$F\_{k} = \frac{\int\_{a}^{b} f(t, Y\_{k}) B\_{k}(t) dt}{\int\_{a}^{b} B\_{k}(t) dt}, \quad F\_{k+1}^{\*} = \frac{\int\_{a}^{b} f(t, Y\_{k+1}^{\*}) B\_{k+1}(t) dt}{\int\_{a}^{b} B\_{k+1}(t) dt}, \quad F\_{k+1}^{\*\*} = \frac{\int\_{a}^{b} f(t, Y\_{k+1}^{\*\*}) B\_{k+1}(t) dt}{\int\_{a}^{b} B\_{k+1}(t) dt}, \tag{17}$$

In the sequel, the approximate solution of Cauchy problem (15) can be obtained using the inverse FT as follows:

$$y\_n(t) = \sum\_{k=1}^n \mathbb{Y}\_k B\_k(t). \tag{18}$$
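The steps (16)–(17) above can be sketched as follows for triangular basic functions on a uniform partition, with the FT components evaluated by simple trapezoidal quadrature over each basic function's support (truncated at the boundaries of [*a*, *b*]). The helper names and quadrature resolution are our own; this is a sketch, not the authors' exact implementation.

```python
import numpy as np

def trapq(y, x):
    # Trapezoidal quadrature, kept explicit for NumPy-version independence
    dx = np.diff(x)
    return float(np.sum(dx * (y[:-1] + y[1:]) / 2.0))

def scheme1(f, a, b, y1, n, m=1, qp=401):
    # Scheme I (16)-(17): modified trapezoidal rule with FT components,
    # power-of-triangular basic functions B_k on a uniform partition
    h = (b - a) / (n - 1)
    tk = a + h * np.arange(n)

    def Fcomp(k, Yk):
        # FT component of t -> f(t, Yk) w.r.t. B_k, Formula (17)
        lo, hi = max(a, tk[k] - h), min(b, tk[k] + h)
        t = np.linspace(lo, hi, qp)
        Bk = (1.0 - np.abs((t - tk[k]) / h)) ** m
        return trapq(f(t, Yk) * Bk, t) / trapq(Bk, t)

    Y = np.empty(n)
    Y[0] = y1
    for k in range(n - 1):
        Fk = Fcomp(k, Y[k])
        Ys = Y[k] + h * Fk / 2.0          # Y*_{k+1}
        Fs = Fcomp(k + 1, Ys)             # F*_{k+1}
        Yss = Ys + h * Fs / 2.0           # Y**_{k+1}
        Fss = Fcomp(k + 1, Yss)           # F**_{k+1}
        Y[k + 1] = Y[k] + h * (Fk + Fss) / 2.0
    return tk, Y
```

Applied to Example 1 of Section 5 (*y*′ = *t*² − *y*, *y*(0) = 1), the components *Yk* track the exact solution with an error that shrinks as *h*² when *n* grows.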

#### *4.2. Numeric Scheme II: Modified 2-Step Adams Moulton Method Based on FT and NIM for Cauchy Problem*

The Scheme I uses a 1-step method for solving the Cauchy problem (15). In this subsection, we improve the 2-step Adams–Moulton method using the FT and NIM for solving the Cauchy problem (15). The 2-step Adams–Moulton method can be improved to effectively approximate the solution of (15) by the FT components, *Yk*, *k* = 2, ... , *n* − 1, of the unknown function *y*(*t*) with respect to a chosen power of the triangular (or raised cosine) generalized uniform fuzzy partition (9). Let *Y*1 = *y*1 and *Y*2 = *y*2 if possible; otherwise, we can compute the FT component *Y*2 from numeric Scheme I. Analogously to [3,8], we apply the FT and NIM to the Cauchy problem (15) and obtain the numeric Scheme II in the following form for *k* = 2, . . . , *n* − 1:

$$\begin{aligned} Y\_{k+1}^{\*} &= Y\_k + h \left( 8F\_k - F\_{k-1} \right) / 12, \\ Y\_{k+1}^{\*\*} &= Y\_{k+1}^{\*} + 5hF\_{k+1}^{\*} / 12, \\ Y\_{k+1} &= Y\_k + h \left( 8F\_k - F\_{k-1} + 5F\_{k+1}^{\*\*} \right) / 12, \end{aligned} \tag{19}$$

where

$$F\_{k-1} = \frac{\int\_{a}^{b} f(t, Y\_{k-1}) B\_{k-1}(t) dt}{\int\_{a}^{b} B\_{k-1}(t) dt}, \quad F\_{k} = \frac{\int\_{a}^{b} f(t, Y\_{k}) B\_{k}(t) dt}{\int\_{a}^{b} B\_{k}(t) dt},$$
$$F\_{k+1}^{\*} = \frac{\int\_{a}^{b} f(t, Y\_{k+1}^{\*}) B\_{k+1}(t) dt}{\int\_{a}^{b} B\_{k+1}(t) dt}, \quad \text{and} \quad F\_{k+1}^{\*\*} = \frac{\int\_{a}^{b} f(t, Y\_{k+1}^{\*\*}) B\_{k+1}(t) dt}{\int\_{a}^{b} B\_{k+1}(t) dt}.$$

Then, obtain the desired approximation for *y* by the inverse FT (18) applied to [*Y*1,...,*Yn*].
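Scheme II (19) can be sketched compactly if the FT components are replaced by their *O*(*h*²) approximations *Fk* ≈ *f*(*tk*, *Yk*) from Theorem 1 — an assumption of this sketch that avoids the quadratures of the full method; the names are illustrative.

```python
import numpy as np

def scheme2(f, a, b, y_init, n):
    # Sketch of Scheme II (19); FT components replaced by their O(h^2)
    # approximations F_k ~ f(t_k, Y_k) (Theorem 1)
    h = (b - a) / (n - 1)
    t = a + h * np.arange(n)
    Y = np.empty(n)
    Y[:2] = y_init                                 # Y_1, Y_2 given (or via Scheme I)
    for k in range(1, n - 1):
        Fk, Fkm1 = f(t[k], Y[k]), f(t[k - 1], Y[k - 1])
        Ys = Y[k] + h * (8.0 * Fk - Fkm1) / 12.0   # predictor Y*_{k+1}
        Fs = f(t[k + 1], Ys)
        Yss = Ys + 5.0 * h * Fs / 12.0             # NIM correction Y**_{k+1}
        Fss = f(t[k + 1], Yss)
        Y[k + 1] = Y[k] + h * (8.0 * Fk - Fkm1 + 5.0 * Fss) / 12.0
    return t, Y
```

The final update reproduces the 2-step Adams–Moulton formula with the implicit value *F*<sub>*k*+1</sub> approximated by the twice-corrected *F*\*\*<sub>*k*+1</sub>.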

#### *4.3. Numeric Scheme III: Modified 3-Step Adams Moulton Method Based on FT and NIM for Cauchy Problem*

In this subsection, we improve the 3-step Adams–Moulton method using the FT and NIM for solving the Cauchy problem (15). The 3-step Adams–Moulton method can be improved to effectively approximate the solution of (15) by the FT components, *Yk*, *k* = 2, ... , *n* − 1, of the unknown function *y*(*t*) with respect to a chosen power of the triangular (or raised cosine) generalized uniform fuzzy partition (see Definition 7), *B*1, ... , *Bn*, of the interval [*a*, *b*] with parameter *h* = (*b* − *a*) / (*n* − 1) , *n* ≥ 2. Let *Y*1 = *y*1, *Y*2 = *y*2 and *Y*3 = *y*3 if possible; otherwise, we can compute the FT components *Y*2 and *Y*3 from numeric Scheme I. Now, we apply the FT and NIM to the Cauchy problem (15) and obtain the following numeric Scheme III for *k* = 3, . . . , *n* − 1 (see [3,8] for technical details):

$$\begin{aligned} Y\_{k+1}^\* &= Y\_k + h \left( 19F\_k - 5F\_{k-1} + F\_{k-2} \right) / 24, \\ Y\_{k+1}^{\*\*} &= Y\_{k+1}^\* + 9hF\_{k+1}^\* / 24, \\ Y\_{k+1} &= Y\_k + h \left( 19F\_k - 5F\_{k-1} + F\_{k-2} + 9F\_{k+1}^{\*\*} \right) / 24, \end{aligned} \tag{20}$$

where

$$F\_{k-2} = \frac{\int\_{a}^{b} f(t, Y\_{k-2}) A\_{k-2}(t) dt}{\int\_{a}^{b} A\_{k-2}(t) dt}, \quad F\_{k-1} = \frac{\int\_{a}^{b} f(t, Y\_{k-1}) A\_{k-1}(t) dt}{\int\_{a}^{b} A\_{k-1}(t) dt}, \quad F\_{k} = \frac{\int\_{a}^{b} f(t, Y\_{k}) A\_{k}(t) dt}{\int\_{a}^{b} A\_{k}(t) dt},$$
$$F\_{k+1}^{\*} = \frac{\int\_{a}^{b} f(t, Y\_{k+1}^{\*}) A\_{k+1}(t) dt}{\int\_{a}^{b} A\_{k+1}(t) dt}, \quad \text{and} \quad F\_{k+1}^{\*\*} = \frac{\int\_{a}^{b} f(t, Y\_{k+1}^{\*\*}) A\_{k+1}(t) dt}{\int\_{a}^{b} A\_{k+1}(t) dt}.$$

In the sequel, the inverse FT (18) approximates the solution *y*(*t*) of the Cauchy problem (15).
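As with the previous scheme, Scheme III (20) can be sketched using the *O*(*h*²) approximation *Fk* ≈ *f*(*tk*, *Yk*) of the FT components (Theorem 1) in place of the quadratures — again an assumption of this sketch, with illustrative names.

```python
import numpy as np

def scheme3(f, a, b, y_init, n):
    # Sketch of Scheme III (20); FT components replaced by their O(h^2)
    # approximations F_k ~ f(t_k, Y_k) (Theorem 1)
    h = (b - a) / (n - 1)
    t = a + h * np.arange(n)
    Y = np.empty(n)
    Y[:3] = y_init                     # Y_1, Y_2, Y_3 given (or via Scheme I)
    for k in range(2, n - 1):
        Fk   = f(t[k], Y[k])
        Fkm1 = f(t[k - 1], Y[k - 1])
        Fkm2 = f(t[k - 2], Y[k - 2])
        Ys = Y[k] + h * (19.0 * Fk - 5.0 * Fkm1 + Fkm2) / 24.0   # predictor
        Fs = f(t[k + 1], Ys)
        Yss = Ys + 9.0 * h * Fs / 24.0                           # NIM correction
        Fss = f(t[k + 1], Yss)
        Y[k + 1] = Y[k] + h * (19.0 * Fk - 5.0 * Fkm1 + Fkm2 + 9.0 * Fss) / 24.0
    return t, Y
```

Here the final update matches the 3-step Adams–Moulton formula, with the implicit value approximated by the twice-corrected *F*\*\*<sub>*k*+1</sub>.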

#### *4.4. Error Analysis of Fuzzy Numeric Method for Cauchy Problem*

In this subsection, we present the error analysis for numeric Scheme I only, because the error analysis for the remaining numeric schemes (Schemes II and III) can be obtained analogously. Consider Formula (16). Let *y*(*tk*) = *yk* and *Yk* denote the exact and the numerical solution, respectively. Substituting the exact solution into Formula (16), we get

$$\begin{aligned} y\_{k+1}^{\*} &= y\_k + hF\_k^{\varepsilon}/2, \\ y\_{k+1}^{\*\*} &= y\_{k+1}^{\*} + hF\_{k+1}^{\varepsilon\*}/2, \\ y\_{k+1} &= y\_k + h\left(F\_k^{\varepsilon} + F\_{k+1}^{\varepsilon\*}\right)/2, \end{aligned} \tag{21}$$

where

$$F\_{k}^{\epsilon} = \frac{\int\_{a}^{b} f(t, y\_{k}) B\_{k}(t) dt}{\int\_{a}^{b} B\_{k}(t) dt}, \quad F\_{k+1}^{\epsilon \*} = \frac{\int\_{a}^{b} f(t, y\_{k+1}^{\*}) B\_{k+1}(t) dt}{\int\_{a}^{b} B\_{k+1}(t) dt}, \quad F\_{k+1}^{\epsilon \*\*} = \frac{\int\_{a}^{b} f(t, y\_{k+1}^{\*\*}) B\_{k+1}(t) dt}{\int\_{a}^{b} B\_{k+1}(t) dt}, \tag{22}$$

and the truncation error *Tk* of the Scheme I is given by

$$T\_k = \frac{y\_{k+1} - y\_k}{h} - \frac{1}{2} \left( F\_k^\epsilon + F\_{k+1}^{\epsilon \ast \ast} \right) \,. \tag{23}$$

Rearranging (16), we get

$$0 = \frac{Y\_{k+1} - Y\_k}{h} - \frac{1}{2} \left( F\_k + F\_{k+1}^{\ast \ast} \right) . \tag{24}$$

Denoting the error *ek*+1 = *Yk*+1 − *yk*+1 and subtracting (24) from (23), we get:

$$T\_k h = e\_{k+1} - e\_k - \frac{h}{2} \left( F\_k - F\_k^\epsilon \right) - \frac{h}{2} \left( F\_{k+1}^{\ast \ast} - F\_{k+1}^{\epsilon \ast \ast} \right) . \tag{25}$$

**Lemma 8.** *Let f be a sufficiently smooth function of its arguments on* [*a*, *b*] *satisfying the Lipschitz condition with constant L with respect to y. Then, for k* = 1, . . . , *n,*

$$|e\_{k+1}| \le |e\_k|(1 + c) + Th \quad \text{and} \quad \left| F\_k^{\epsilon} - F\_{k+1}^{\epsilon \*\*} \right| \le LhM\_2,$$

*where* $c = hL + \frac{h^2L^2}{2} + \frac{h^3L^3}{8}$*,* $T = \max\_{1 \le k \le n} |T\_k|$*, M*2 *is an upper bound for f , and* $F\_k^{\epsilon}$, $F\_{k+1}^{\epsilon \*\*}$ *are determined by Formula (22).*

**Proof.** By hypothesis, *f* satisfies the Lipschitz condition and using Lemma 5, Formulas (16), (17), (21) and (22), we get

$$\begin{aligned}
|F\_k - F\_k^{\epsilon}| &\le \frac{1}{h} \left| \int\_a^b f(t, Y\_k) B\_k(t) dt - \int\_a^b f(t, y\_k) B\_k(t) dt \right| \le L |e\_k|, \\
\left| F\_{k+1}^{\*} - F\_{k+1}^{\epsilon \*} \right| &\le \frac{1}{h} \left| \int\_a^b f(t, Y\_{k+1}^{\*}) B\_{k+1}(t) dt - \int\_a^b f(t, y\_{k+1}^{\*}) B\_{k+1}(t) dt \right| \le L \left| Y\_{k+1}^{\*} - y\_{k+1}^{\*} \right|, \\
\left| F\_{k+1}^{\*\*} - F\_{k+1}^{\epsilon \*\*} \right| &\le \frac{1}{h} \left| \int\_a^b f(t, Y\_{k+1}^{\*\*}) B\_{k+1}(t) dt - \int\_a^b f(t, y\_{k+1}^{\*\*}) B\_{k+1}(t) dt \right| \le L \left| Y\_{k+1}^{\*\*} - y\_{k+1}^{\*\*} \right|, \\
\left| Y\_{k+1}^{\*} - y\_{k+1}^{\*} \right| &\le \left| (Y\_k + hF\_k/2) - (y\_k + hF\_k^{\epsilon}/2) \right| \le |e\_k| \left( 1 + \frac{hL}{2} \right), \\
\left| Y\_{k+1}^{\*\*} - y\_{k+1}^{\*\*} \right| &\le \left| (Y\_{k+1}^{\*} + hF\_{k+1}^{\*}/2) - (y\_{k+1}^{\*} + hF\_{k+1}^{\epsilon \*}/2) \right| \le |e\_k| \left( 1 + \frac{hL}{2} \right)^2, \\
|e\_{k+1}| &\le |e\_k| + \frac{hL}{2} |e\_k| + \frac{hL}{2} \left| y\_{k+1}^{\*\*} - Y\_{k+1}^{\*\*} \right| + Th \\
&\le |e\_k| + \frac{hL}{2} |e\_k| + \frac{hL}{2} |e\_k| \left( 1 + \frac{hL}{2} \right)^2 + Th = |e\_k| \left( 1 + hL + \frac{h^2L^2}{2} + \frac{h^3L^3}{8} \right) + Th.
\end{aligned}$$

Furthermore, by using | *f*(*t*, *y*(*t*))| ≤ *M*2, we get

$$\begin{split} \left| F\_{k}^{\epsilon} - F\_{k+1}^{\epsilon \*\*} \right| &= \left| \frac{1}{h} \left( \int\_{a}^{b} \left( f(t, y\_{k}) - f(t + h, y\_{k+1}^{\*\*}) \right) B\_{k}(t) dt \right) \right| \\ &= \left| \frac{1}{h} \left( \int\_{a}^{b} \left( f(t, y\_{k}) - f(t, y\_{k+1}^{\*\*}) + f(t, y\_{k+1}^{\*\*}) - f(t + h, y\_{k+1}^{\*\*}) \right) B\_{k}(t) dt \right) \right| \\ &\leq L \left| y\_{k} - y\_{k+1}^{\*\*} \right| \\ &= \frac{Lh}{2} \left| -F\_{k}^{\epsilon} - F\_{k+1}^{\epsilon \*} \right| \\ &= \frac{Lh}{2} \left| \frac{1}{h} \int\_{a}^{b} \left( f(t, y\_{k}) + f(t, y\_{k+1}^{\*}) \right) B\_{k}(t) dt \right| \\ &\leq LhM\_{2} \end{split}$$

This completes the proof.

**Theorem 2.** *Consider the numeric Scheme I (16), where f* ∈ *C*<sup>2</sup> [*a*, *b*] *and satisfies the Lipschitz condition with the constant L with respect to y. Then, the solution Yk*, *k* = 1, ... , *n*, *obtained by the numeric Scheme I (16) for solving the Cauchy problem (15) satisfies*

$$|e\_k| = |Y\_k - y\_k| \le \frac{hM}{2L}e^{kc},\tag{26}$$

*where* $c = hL + \frac{h^2L^2}{2} + \frac{h^3L^3}{8}$*, M*1, *M*2 *are upper bounds for y'' and f , respectively, on* [*a*, *b*]*, and M*1 + *M*2*L* = *M.*

**Proof.** By hypothesis, *y''* exists and is bounded on [*a*, *b*] with $\max\_{a \le t \le b} |y''(t)| = M\_1$, since *f* ∈ *C*<sup>2</sup> [*a*, *b*]. Then, using Lemma 8, (23) and Taylor's theorem for *k* = 1, . . . , *n* − 1, we get

$$\begin{split} T\_{k} &= \frac{y\_{k+1} - y\_{k}}{h} - \frac{1}{2} \left( F\_{k}^{\epsilon} + F\_{k+1}^{\epsilon \*\*} \right) \\ &= \frac{1}{2} h y''\left( \xi\_{k} \right) + f\left( t\_{k}, y\_{k} \right) - \frac{1}{2} \left( F\_{k}^{\epsilon} + F\_{k+1}^{\epsilon \*\*} \right) \\ &= \frac{1}{2} h y''\left( \xi\_{k} \right) + f\left( t\_{k}, y\_{k} \right) - F\_{k}^{\epsilon} + \frac{1}{2} F\_{k}^{\epsilon} - \frac{1}{2} F\_{k+1}^{\epsilon \*\*} \\ &= \frac{1}{2} h y''\left( \xi\_{k} \right) + \frac{1}{2} \left( F\_{k}^{\epsilon} - F\_{k+1}^{\epsilon \*\*} \right) \end{split}$$

where *ξk* ∈ [*tk*, *tk*+1]. Now, using Lemma 8,

$$T = \max\_{1 \le k \le n} |T\_k| \le \frac{1}{2} h \left| y''\left( \xi\_k \right) \right| + \frac{LhM\_2}{2} \le \frac{h}{2} \left( M\_1 + LM\_2 \right) = \frac{hM}{2}$$

Now, by virtue of Lemma 8, using *e*1 = 0 and (1 + *c*)*<sup>k</sup>* ≤ *e<sup>kc</sup>*, we get for *k* = 1, . . . , *n*

$$\begin{aligned} |e\_k| &\le \frac{(1+c)^k - 1}{c} Th \le \frac{(1+c)^k}{L + \frac{hL^2}{2} + \frac{h^2L^3}{8}} T\\ &\le \frac{T}{L} e^{kc} \le \frac{hM}{2L} e^{kc} \end{aligned}$$

where $c = hL + \frac{h^2L^2}{2} + \frac{h^3L^3}{8}$. Thus, if the step length *h* → 0, then for all *k* the error |*ek*| converges to zero, so the method is convergent. This completes the proof.

#### **5. Numerical Examples**

In this section, we present examples of the Cauchy problem (15).

**Example 1.** *Consider the following initial value problem with initial condition y* (0) = 1 *and with a smooth right-hand function*

$$y'(t) = t^2 - y, \quad t \in \left[0, 2\right]. \tag{27}$$
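For reference, (27) is linear, and its exact solution can be obtained by a standard integrating-factor computation (not spelled out in the source):

$$y' + y = t^2, \qquad \left(e^t y\right)' = t^2 e^t, \qquad e^t y = \left(t^2 - 2t + 2\right)e^t + C,$$

so that $y(t) = t^2 - 2t + 2 + Ce^{-t}$, and the initial condition $y(0) = 1$ gives $C = -1$, i.e., $y(t) = t^2 - 2t + 2 - e^{-t}$.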

**Example 2.** *Consider the Cauchy problem (15) with oscillating right-hand function. We take* $f(t, y) = 1 + 2y \cos t^2 + \sin 2t^2$, $y\left(\frac{\pi}{2}\right) = 2.1951$, $a = \frac{\pi}{2}$ *and* $b = \frac{3\pi}{2}$*.*

The results are listed in Tables 2–4 for the fuzzy numerical methods proposed in this paper with respect to the case $K\_{T\_1^{201}}$, and in Table 5 with respect to the cases $K\_{T\_1^{1}}$, $K\_{T\_1^{3}}$, $K\_{T\_1^{201}}$ and $K\_{C^1}$. The Euclidean distance is given by the 2-norm, defined as $\|Y - y(t)\|\_2 = \sqrt{\sum\_k \left(Y\_k - y(t\_k)\right)^2}$, and the mean square error (MSE) is defined as $\text{MSE} = \frac{1}{n} \sum\_k \left(Y\_k - y(t\_k)\right)^2$. These are easily computable quantities for a particular sample. Concluding remarks are summarized as follows:
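The two error measures above can be computed directly; the following trivial sketch uses illustrative names:

```python
import numpy as np

def error_metrics(Y, y_exact):
    # 2-norm (Euclidean distance) and mean square error between the
    # approximate values Y_k and the exact values y(t_k)
    e = np.asarray(Y, dtype=float) - np.asarray(y_exact, dtype=float)
    return float(np.sqrt(np.sum(e ** 2))), float(np.mean(e ** 2))

norm2, mse = error_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 2.5])
# norm2 = 0.5, mse = 0.25 / 3
```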

• Table 2 presents a comparison between the Euler method (Euler-FT) [8], the Mid-point rule (Mid-FT), Schemes I and II of [9], and the three new schemes (16), (19) and (20) of this paper for Example 1. We can easily observe from Table 2 that better results (in comparison with the Euler-FT method [8]) are obtained by the three new schemes, and the best result among Schemes I, II and III is obtained by Scheme III. Also, better results (in comparison with the Mid-point rule (Mid-FT) and Schemes I and II of [9]) are obtained by Scheme II (19) and Scheme III (20), where all fuzzy numerical methods use the FT components, and the best approximation is given by Scheme III (20).

**Table 2.** Comparison of numeric results for Example 1. The columns contain the exact and seven approximate solutions of the Cauchy problem (27) with a smooth right-hand function: the first three approximate solutions are obtained by the three new schemes ((16), (19) and (20)), the fourth by the Euler-FT [8] with FT components and the last three by the schemes proposed in [9]. The best approximation is given by Scheme III (20) with FT components.


• Table 3 presents a comparison of the MSE and the 2-norm for Examples 1 and 2. We can easily observe that the best results are obtained by the three new schemes in this paper, and that better results (in comparison with the other classical numerical methods) are obtained by all fuzzy numerical methods using the FT components, except Euler-FT [8], for these examples.


**Table 3.** The values of MSE and the 2-norm for Examples 1 and 2.

• Table 4 presents a comparison between the three new schemes (16), (19) and (20) of this paper and the Trapezoidal Rule, the 2-step Adams–Moulton method and the 3-step Adams–Moulton method based on the Euler method for Example 2. We can easily observe from Table 4 that better results are obtained by the three new schemes, and the best result among Schemes I, II and III is obtained by Scheme III.

**Table 4.** Comparison of numeric results for Example 2. The columns contain the exact and six approximate solutions of the Cauchy problem (15) with oscillating right-hand function: the first three approximate solutions are obtained by the three new schemes ((16), (19), and (20)), the last three by the Trapezoidal Rule, the 2-step Adams–Moulton method and the 3-step Adams–Moulton method. The best approximation is given by Scheme III (20) with FT components.


<sup>1</sup> Trapezoidal Rule; <sup>2</sup> 2-Step Adams Moulton Method; <sup>3</sup> 3-Step Adams Moulton Method.

• Table 5 presents a comparison of computation errors for the three schemes based on the FT with respect to the power of the triangular and raised cosine generalized uniform fuzzy partition determined by Formula (9), where the advantage of $K\_{T\_1^m}$ for Examples 1 and 2 is evident.

**Table 5.** The values of MSE and the 2-norm for Examples 1 and 2 by the three schemes proposed in this paper with respect to the power of the triangular and raised cosine generalized uniform fuzzy partition. The best approximation is obtained using $K\_{T\_1^{201}}$.


This constitutes an important improvement over previous methods for Cauchy problems, among which only Euler-FT [8] and Mid-FT, Scheme I, and Scheme II of [9] provide such information, by offering a more efficient way of computing approximate solutions. Thus, this study is of particular importance.

#### **6. Conclusions**

We extended the applicability of fuzzy numeric methods to the initial value problem (the Cauchy problem). We proposed three new numeric methods based on the FT and NIM and then analyzed their suitability. We considered the case where the generalized uniform fuzzy partition is a power of the triangular (raised cosine) generalized uniform fuzzy partition and showed that the newly proposed schemes outperform the Euler-FT [8] and the Mid-FT, Scheme I, and Scheme II of [9], especially on examples where the partition is a power of the triangular generalized uniform fuzzy partition generated by the function (2). Also, the newly proposed schemes outperform the Trapezoidal Rule, the 2-step Adams–Moulton method and the 3-step Adams–Moulton method. Moreover, we proved that Scheme I determines an approximate solution which converges to the exact solution. This constitutes an important improvement over previous results coined by I. Perfilieva [8]. To conclude, the proposed schemes are more accurate and stable; in particular, they can be used for solving initial value problems.

**Author Contributions:** H. A. ALKasasbeh and M. Z. Ahmad proposed and designed the numerical methods. H. A. ALKasasbeh performed the numerical experiments. I. Perfilieva evaluated the results and supported this work. M. Z. Ahmad project administration. Z. R. Yahya provided software and data curation.

**Acknowledgments:** This work of Irina Perfilieva has been supported by the project "LQ1602 IT4Innovations excellence in science" and by the Grant Agency of the Czech Republic (project No. 16-09541S). Many thanks are also given to Universiti Malaysia Perlis for providing all the facilities needed to complete this work successfully.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **New Approximation Methods Based on Fuzzy Transform for Solving SODEs: I**

#### **Hussein ALKasasbeh 1,\*, Irina Perfilieva 2, Muhammad Zaini Ahmad <sup>1</sup> and Zainor Ridzuan Yahya <sup>1</sup>**


Received: 17 June 2018; Accepted: 14 August 2018; Published: 23 August 2018

**Abstract:** In this paper, new approximation methods for solving systems of ordinary differential equations (SODEs) by fuzzy transform (FzT) are introduced and discussed. In particular, we propose two modified numerical schemes to solve SODEs in which the technique of FzT is combined with one-stage and two-stage numerical methods. Moreover, the error analysis of the new approximation methods is discussed. Finally, numerical examples confirm the proposed approach, and applications are presented.

**Keywords:** fuzzy partition; fuzzy transform; numerical methods; systems of ordinary differential equations

#### **1. Introduction**

Differential equations have great potential to model and understand real-world problems in science and engineering. In many cases, differential equations cannot be solved analytically, so numerical methods are required. Accordingly, numerical methods for solving differential equations have been studied extensively in scientific research, for example [1–4]. In this connection, fuzzy approaches successfully cope with solving differential equations. One of the fuzzy approaches proposed in the literature is the fuzzy transform (FzT).

FzT is a general mathematical technique introduced and developed by Perfilieva [5]. The study of FzT is rapidly expanding as a new branch of approximation methods based on fuzzy sets. The main idea of FzT is to form a fuzzy partition of a universe into fuzzy subsets. Two shapes for the basic functions of a fuzzy partition, triangular- and sinusoidal-shaped membership functions, were considered in [6]. Applications of the FzT include the construction of approximate models, the approximation of functions and the solution of differential equations. FzT has been generalized from the case of constant components to the case of polynomial components [7]. Later, FzT was successfully used in [8] for a second order initial value problem. From this idea, FzT was proposed for numerical solutions of two-point boundary value problems, in a way more efficient than similar solutions obtained by the finite difference method [9].

Recently, in [10], a new numerical method based on FzT for solving a class of delay differential equations by means of a Picard-like numerical scheme was presented. The author demonstrated the stability of the method, and the obtained results are in good agreement with existing methods. Furthermore, in some cases a better approximation was achieved through sinusoidal-shaped basic functions, while Bernstein basis polynomials gave better results in other examples. On the other hand, a new approach to the fuzzy boundary value problem in the form of a fuzzy relation was investigated in [11]. Another approach for the second order linear differential equation with constant coefficients and Dirichlet boundary conditions was introduced in [12], where the ability of FzT to deal with boundary value problems affected by noise was demonstrated, and the results were compared with the finite difference method. To confirm again the ability of FzT with respect to noisy and non-noisy source functions, FzT based on the shooting method was introduced in [13] for solving a nonlinear second-order ODE, and the obtained results were better than those of the classical method, namely the second order Runge–Kutta method. Further, in [14], FzT was used to approximate the solution of boundary value problems by minimizing the integral squared error in the two-norm. Higher degree FzT based on trigonometric functions, whose approximation accuracy increases with the degree of the FzT, was presented in [15], while the weighted transform method was discussed in [16]. The conditions for quasi-consensus in a multi-agent system with sampled data based on FzT were proposed in [17]. In [18], the dynamical properties of a two-neuron system with respect to FzT and a single delay were investigated. Multi-step FzT was first studied in [19] for solving ODEs. From this perspective, FzT was considered for solving a class of ODEs.

In this regard, many approximation methods have been studied for solving ODEs, for example neural networks [20], embedded three- and two-pair nonlinear methods [21], electrical analogy [22], multi-general-purpose graphical processing units [23], the differential transform method [24] and a Galerkin finite element method [25]. A numerical method based on the trapezoidal rule for the Cauchy–Smoluchowski problem was discussed in [26]. In [27,28], the authors studied ODEs with the initial value given as a triangular fuzzy number. A new fuzzification of the classical Euler method, incorporating an unconstrained optimization technique, was proposed in [1]. Furthermore, most real-life problems involve systems of ODEs, for example the Lotka–Volterra prey-predator model based on an autonomous model [29], a non-autonomous model [30,31] and fuzzy initial populations [4].

In this study, our aim is to extend the applicability of the FzT to general coupled Systems of ODEs (SODEs), where this method works better than its classical counterpart. The motivation of the present research stems from the following fuzzy approaches. The first application of the FzT to solving ODEs was proposed in [6], where a generalization of the Euler method for an ordinary Cauchy problem was developed and its potential in comparison with the classical Euler method was demonstrated. The same approach was successfully used in [32] to solve the Cauchy problem, with more accurate results than both the classical method (the second-order Runge–Kutta method) and the FzT-based generalization of the Euler method proposed in [6]. Further, in [19], new fuzzy methods based on FzT for solving the Cauchy problem were presented, and the authors compared the results with the existing numerical results in [6,32] and with classical one-, two- and three-step methods. All these fuzzy approximation methods performed better than the classical trapezoidal rule (one step) and the classical Adams–Moulton methods (two and three steps) and outperformed the previous fuzzy methods in [6,32].

In this contribution, two new approximation methods are presented in detail to solve SODEs, in which the technique of FzT is combined with one-stage and two-stage numerical methods. The first approximation method improves the Euler method (one-stage), and the other approximation method improves the trapezoidal rule (two-stage). The primary focus of this contribution is to demonstrate the applicability of the FzT for functions of two variables based on the uniform fuzzy partition. The error analysis is discussed in the context of the uniform fuzzy partition. Algorithms inspired by the FzT are given for solving SODEs, and the two new approximation methods are applied to the Lotka–Volterra prey-predator model. This contribution is an important modification of the classical methods, namely the Euler method and the trapezoidal rule; thus, the new methods are compared with these two. Both approximation methods, with the help of FzT, provide better numerical solutions than the classical Euler method and the classical trapezoidal rule.

The paper is organized as follows. In Section 2, several related concepts and results associated with the FzT are reviewed. In Section 3, we construct procedures to obtain an approximate solution for SODEs by using the FzT method. Applications are discussed in Section 4. Finally, conclusions are given in Section 5.

#### **2. Basic Concepts**

Throughout this section, we deal with an interval $[a, b] \subset \mathbb{R}$ of real numbers. Fuzzy sets on $[a, b]$ will be identified with their membership functions mapping from $[a, b]$ into $[0, 1]$. In this section, we recall the definitions and claims that were introduced and proven in [5].

**Definition 1.** *(Fuzzy partition) Let $x_1 < \cdots < x_n$ be fixed nodes within $[a, b]$ such that $x_1 = a$, $x_n = b$ and $n \ge 2$. The fuzzy sets $A_1, \ldots, A_n$ are often called basic functions. We say that fuzzy sets $A_1, \ldots, A_n \subset [a, b]$ establish a fuzzy partition of $[a, b]$ if they fulfill the following conditions for $k = 1, \ldots, n$ (for the uniformity of notation, we set $x_0 = a$ and $x_{n+1} = b$):*


We say that a fuzzy partition of $[a, b]$ is *h*-uniform if its nodes $x_1, \ldots, x_n$, where $n \ge 2$, are equidistant. This means that $x_k = a + h(k - 1)$, $k = 1, \ldots, n$, where $h = \frac{b - a}{n - 1}$, $n \ge 2$, and the two additional properties are fulfilled:


Two uniform fuzzy partitions with triangular- and sinusoidal-shaped basic functions can be found in [5,6]. Throughout this paper, we will write uniform fuzzy partition instead of *h*-uniform fuzzy partition.
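As an illustration (not part of the original text), a triangular *h*-uniform fuzzy partition can be sketched in a few lines of Python; the function name `triangular_partition` and the grid sizes below are our own choices. The check verifies the covering property of Definition 1, i.e., that the basic functions sum to one on $[a, b]$:

```python
import numpy as np

def triangular_partition(a, b, n):
    """Nodes x_1..x_n and triangular basic functions A_1..A_n of an
    h-uniform fuzzy partition of [a, b], with h = (b - a)/(n - 1)."""
    h = (b - a) / (n - 1)
    nodes = a + h * np.arange(n)

    def make_A(k):                      # k is 0-based here
        xk = nodes[k]
        def A(x):                       # hat function centered at x_k
            return np.maximum(0.0, 1.0 - np.abs(x - xk) / h)
        return A

    return nodes, [make_A(k) for k in range(n)]

nodes, A = triangular_partition(0.0, 1.0, n=6)
xs = np.linspace(0.0, 1.0, 101)
cover = sum(Ak(xs) for Ak in A)         # should be identically 1 on [a, b]
assert np.allclose(cover, 1.0)
```

The same sketch adapts to sinusoidal-shaped basic functions by replacing the hat formula inside `A`.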

**Definition 2.** *Let $f$ be a continuous function on $[a, b]$ and $A_k(x)$, $k = 1, \ldots, n$, be a uniform fuzzy partition of $[a, b]$, $n \ge 2$. A vector of real numbers $F[f] = (F_1, F_2, \ldots, F_n)$ given by (to complete this notation, we set $x_0 = a$ and $x_{n+1} = b$):*

$$F_k[f] = \frac{\int_a^b f(x)\, A_k(x)\, dx}{\int_a^b A_k(x)\, dx} = \frac{\int_{x_{k-1}}^{x_{k+1}} f(x)\, A_k(x)\, dx}{\int_{x_{k-1}}^{x_{k+1}} A_k(x)\, dx}, \quad k = 1, \ldots, n, \tag{1}$$

*is called the direct FzT of f with respect to A*1,..., *An.*

**Remark 1.** *The elements F*1[ *f* ], ... , *Fn*[ *f* ] *are called components of the FzT. If A*1, ... , *An forms a uniform fuzzy partition, then the expression (1) can be simplified as follows:*

$$F_1[f] = \frac{2}{h} \int_{x_1}^{x_2} f(x)\, A_1(x)\, dx, \quad F_n[f] = \frac{2}{h} \int_{x_{n-1}}^{x_n} f(x)\, A_n(x)\, dx,$$

$$F_k[f] = \frac{1}{h} \int_{x_{k-1}}^{x_{k+1}} f(x)\, A_k(x)\, dx, \quad k = 2, \ldots, n - 1. \tag{2}$$

**Definition 3.** *Let $F[f] = (F_1, F_2, \ldots, F_n)$ be the direct FzT of $f \in C[a, b]$ with respect to $A_k(x)$, $k = 1, \ldots, n$. Then, the inverse FzT of $f$ is the function $\hat{f} : [a, b] \to \mathbb{R}$ given by:*

$$\hat{f}(\mathbf{x}) = \sum\_{k=1}^{n} F\_k \, A\_k(\mathbf{x}). \tag{3}$$

**Lemma 1.** *[6] Let f*(*x*) *be continuous on* [*a*, *b*] *and twice continuously differentiable in* (*a*, *b*)*, and let basic functions form a uniform fuzzy partition of* [*a*, *b*]*. Then, for each k* = 1, . . . , *n:*

$$F\_k = f(\mathbf{x}\_k) + \mathcal{O}(h^2). \tag{4}$$
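Definitions 2 and 3 and Lemma 1 can be checked numerically. The sketch below (our own illustration; the names `fzt_components` and `inverse_fzt` and the quadrature resolution `m` are hypothetical choices) computes the direct FzT components of $f = \cos$ on a triangular partition of $[0, \pi]$ and confirms that $F_k - f(x_k) = \mathcal{O}(h^2)$:

```python
import numpy as np

def _trapz(y, x):
    # explicit composite trapezoid rule (kept version-independent on purpose)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def fzt_components(f, a, b, n, m=2001):
    """Direct FzT components (Formula (1)) w.r.t. a triangular
    h-uniform fuzzy partition; integrals approximated by quadrature."""
    h = (b - a) / (n - 1)
    nodes = a + h * np.arange(n)
    x = np.linspace(a, b, m)
    F = np.empty(n)
    for k in range(n):
        Ak = np.maximum(0.0, 1.0 - np.abs(x - nodes[k]) / h)
        F[k] = _trapz(f(x) * Ak, x) / _trapz(Ak, x)
    return nodes, F

def inverse_fzt(nodes, F, x):
    """Inverse FzT (Formula (3)) for the same triangular partition."""
    h = nodes[1] - nodes[0]
    return sum(Fk * np.maximum(0.0, 1.0 - np.abs(x - xk) / h)
               for xk, Fk in zip(nodes, F))

nodes, F = fzt_components(np.cos, 0.0, np.pi, n=21)
h = np.pi / 20
# Lemma 1: each component deviates from f(x_k) by at most O(h^2)
assert np.max(np.abs(F - np.cos(nodes))) < h ** 2
```

Note that at each node $x_k$ the inverse FzT reproduces the component exactly, since $A_k(x_k) = 1$ and the neighboring basic functions vanish there.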

**Remark 2.** *An important property of the direct FzT, as well as of the inverse FzT, is their linearity: given $f, g \in C[a, b]$ and $\alpha, \beta \in \mathbb{R}$, if $h = \alpha f + \beta g$, then $F[h] = \alpha F[f] + \beta F[g]$ and $\hat{h} = \alpha \hat{f} + \beta \hat{g}$.*

#### **3. FzT for Solving SODEs**

In this section, we present methodological remarks and numerical schemes for solving SODEs.

#### *3.1. Methodological Remarks to Applications of the FzT*

Consider the Initial Value Problem (IVP) for the SODEs:

$$\begin{cases} x'(t) = f\left(t, x(t), y(t)\right), & x(t_1) = \alpha, \quad t_1 \le t \le t_n, \\ y'(t) = g\left(t, x(t), y(t)\right), & y(t_1) = \beta, \end{cases} \tag{5}$$

where $\alpha, \beta \in \mathbb{R}$, and $f$ and $g$ are continuous functions on $[t_1, t_n] \times \mathbb{R} \times \mathbb{R}$ satisfying the Lipschitz condition. Unfortunately, the analytical solution $(x(t), y(t))$ of Problem (5) is often difficult and sometimes impossible to obtain. Thus, approximate solutions by means of FzT are extremely important for solving (5). A numerical method for (5) is an algorithm that computes FzT components $X_k \approx x(t_k)$ and $Y_k \approx y(t_k)$ for each $k = 2, \ldots, n$ (to complete this notation, we set $\alpha = X_1 = x(t_1)$ and $\beta = Y_1 = y(t_1)$).

Below, we extend the main principles of FzT, detailed in Formula (6), that are needed later.

**Definition 4.** *Let f* (*g*) *be a continuous function on* [*t*1, *tn*] *and A*1, ... , *An be the fuzzy partition of* [*t*1, *tn*]*. A vector of real numbers Fk*[ *f* ] = (*F*1[ *f* ],..., *Fn*[ *f* ]) (*Gk*[*g*] = (*G*1[*g*],..., *Gn*[*g*])) *given by:*

$$F_k[f] = \frac{\int_{t_1}^{t_n} f\left(t, x(t), y(t)\right) A_k(t)\, dt}{\int_{t_1}^{t_n} A_k(t)\, dt} \quad \left(G_k[g] = \frac{\int_{t_1}^{t_n} g\left(t, x(t), y(t)\right) A_k(t)\, dt}{\int_{t_1}^{t_n} A_k(t)\, dt}\right), \tag{6}$$

*is called the direct FzT of f* (*g*) *that is extended with independent variable t and two dependent variables x and y.*

**Remark 3.** *We need a way to approximate the direct FzT components (6). This is discussed in Corollary 1.*

In the following, the necessary steps of the FzT are given.

	- (a) Specify the number *n* of components, and compute the step *h* = (*tn* − *t*1) / (*n* − 1).
	- (b) Construct the nodes *t*<sup>1</sup> < ... < *tn*, where *tk* = *t*<sup>1</sup> + *h*(*k* − 1).
	- (c) Select the shape of basic functions. We mostly use triangular- or sinusoidal-shaped basic functions. Recall that the shape of the basic functions determines the course of $\hat{f}$, that is, whether it is piecewise linear or nonlinear.
	- (d) Construct a uniform fuzzy partition of [*t*1, *tn*] by triangular- or sinusoidal-shaped basic functions [5].

	- (e) Compute the components of the FzT of *x* and *y* in the transformed space and then transfer them back by the inverse FzT, i.e., compute the approximation for *x* and *y* by the inverse FzT applied to $[X_1, \ldots, X_n]$ and $[Y_1, \ldots, Y_n]$.

In the next subsections, the schemes provide formulas for the computation of the components of the FzT.

#### *3.2. Numerical Scheme I for SODEs*

In this subsection, we present a modified scheme to solve SODEs using the FzT. Suppose that the functions *f* and *g* on [*t*1, *tn*] are sufficiently smooth in (5). For solving SODEs (5) on [*t*1, *tn*], the interval is divided into *n* − 1 subintervals. Let us choose some uniform fuzzy partition of interval [*t*1, *tn*] with parameter *h* = (*tn* − *t*1)/(*n* − 1), *n* ≥ 2, and basic functions *A*1, ... , *An*. In view of the methodological remarks in Subsection 3.1, we describe the complete sequence of steps, which leads to the approximation solution of SODEs (5) (see [6] for technical details). Before we apply the direct FzT to both parts of the differential equation, we will use the Taylor expansion and replace the first derivatives of the left-hand sides in (5) by their approximations, i.e.,

$$\begin{cases} \mathbf{x}(t+h) = \mathbf{x}(t) + h\mathbf{x}'(t) + \mathcal{O}(h^2), \\ \mathbf{y}(t+h) = \mathbf{y}(t) + h\mathbf{y}'(t) + \mathcal{O}(h^2). \end{cases} \tag{7}$$

Denote $x^+(t) = x(t + h)$ and $y^+(t) = y(t + h)$ as new functions, and then apply the direct FzT components $F_n$ ($G_n$) to both parts of Equation (7):

$$\begin{cases} F_n[x'(t)] = \frac{1}{h}\left(F_n[x^+] - F_n[x]\right) + \mathcal{O}(h), \\ G_n[y'(t)] = \frac{1}{h}\left(G_n[y^+] - G_n[y]\right) + \mathcal{O}(h), \end{cases}$$

where $F_n[x'] = [X'_1, \ldots, X'_{n-1}]$, $G_n[y'] = [Y'_1, \ldots, Y'_{n-1}]$; $F_n[x] = [X_1, \ldots, X_{n-1}]$, $G_n[y] = [Y_1, \ldots, Y_{n-1}]$; and $F_n[x^+] = [X^+_1, \ldots, X^+_{n-1}]$, $G_n[y^+] = [Y^+_1, \ldots, Y^+_{n-1}]$.

Now, we prove that:

$$\begin{cases} X\_1^+ = X\_2 + \mathcal{O}(h^2), \\ X\_k^+ = X\_{k+1}, \; k = 2, \dots, n-2, \end{cases} \quad \text{and} \quad \begin{cases} Y\_1^+ = Y\_2 + \mathcal{O}(h^2), \\ Y\_k^+ = Y\_{k+1}, \; k = 2, \dots, n-2. \end{cases}$$

For the values $k = 1$ and $k = n - 1$, by Lemma 1 we have $X^+_k = X_{k+1} + \mathcal{O}(h^2)$ and $Y^+_k = Y_{k+1} + \mathcal{O}(h^2)$, and for $k = 2, \ldots, n - 2$, we have:

$$\begin{cases} X\_k^+ = \frac{1}{\hbar} \int\_{t\_{k-1}}^{t\_{k+1}} x(t+h) . A\_k(t) dt = \frac{1}{\hbar} \int\_{t\_k}^{t\_{k+2}} x(s) . A\_{k+1}(s) ds = X\_{k+1}, \\ Y\_k^+ = \frac{1}{\hbar} \int\_{t\_{k-1}}^{t\_{k+1}} y(t+h) . A\_k(t) dt = \frac{1}{\hbar} \int\_{t\_k}^{t\_{k+2}} y(s) . A\_{k+1}(s) ds = Y\_{k+1}. \end{cases}$$

Then, we get:

$$\begin{cases} hX'_k = (X_{k+1} - X_k) + \mathcal{O}(h^2), \\ hY'_k = (Y_{k+1} - Y_k) + \mathcal{O}(h^2), \end{cases} \quad k = 1, \ldots, n - 1. \tag{8}$$

Therefore, we can introduce the (*n* − 1) × *n* matrix:

$$D = \frac{1}{h} \begin{pmatrix} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & -1 & 1 \end{pmatrix}.$$

Thus, according to (5), the equality (8) can be rewritten (up to $\mathcal{O}(h^2)$) as the matrix equality:

$$\begin{cases} F\_n[\mathbf{x'}] = DF\_n[\mathbf{x}],\\ G\_n[\mathbf{y'}] = DG\_n[\mathbf{y}], \end{cases} \tag{9}$$

$$\text{where } \begin{cases} F_n[x'] = [X'_1, \ldots, X'_{n-1}]^T, \\ G_n[y'] = [Y'_1, \ldots, Y'_{n-1}]^T, \end{cases} \text{ and } \begin{cases} F_n[x] = [X_1, \ldots, X_{n-1}]^T, \\ G_n[y] = [Y_1, \ldots, Y_{n-1}]^T. \end{cases}$$

Now, let us return to the problem (5) and apply the FzT to both sides of the differential equation. Based on the linearity of FzT and Formula (9), we obtain the following system with respect to the unknown *Fn*[*x*] and *Gn*[*y*]:

$$\begin{cases} DF_n[x] = F_n[f], \\ DG_n[y] = G_n[g], \end{cases} \tag{10}$$

where $F_n[f] = [F_1, \ldots, F_{n-1}]^T$ and $G_n[g] = [G_1, \ldots, G_{n-1}]^T$ are the FzT of $f(t, x(t), y(t))$ ($g(t, x(t), y(t))$) as a function of $t$ w.r.t. the chosen basic functions $A_1, \ldots, A_n$. Note that the last components $F_n$ and $G_n$ are not present in $F_n[f]$ and $G_n[g]$, respectively, due to the dimensionality limitation, and (10) does not include the two initial conditions of (5). Thus, we complete the matrix $D$ by adding a first row for the initial value, as follows:

$$D^* = \frac{1}{h} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & -1 & 1 \end{pmatrix},$$

so that *D*<sup>∗</sup> is an *n* × *n* nonsingular matrix. Based on the initial conditions and the matrix *D*∗, we also expand *Fn*[ *f* ] and *Gn*[*g*] by adding the first element, as follows:

$$\begin{cases} F_n^*[f] = \left[\frac{x_1}{h}, F_1, \ldots, F_{n-1}\right]^T, \\ G_n^*[g] = \left[\frac{y_1}{h}, G_1, \ldots, G_{n-1}\right]^T. \end{cases}$$

Then, the problem (5) can be completely represented by the following expression with respect to the unknown *Fn*[*x*] and *Gn*[*y*]:

$$\begin{cases} D^* F_n[x] = F_n^*[f], \\ D^* G_n[y] = G_n^*[g]. \end{cases} \tag{11}$$

The solution of (11) can be obtained by the following formula:

$$\begin{cases} F_n[x] = (D^*)^{-1} F_n^*[f], \\ G_n[y] = (D^*)^{-1} G_n^*[g]. \end{cases} \tag{12}$$
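The structure of $D^*$ is easy to check numerically: its inverse is $h$ times the lower-triangular matrix of ones, so solving (11) reduces to cumulative sums. A minimal sketch (our own illustration; the values of `n` and `h` are arbitrary):

```python
import numpy as np

n, h = 6, 0.1
# D* as defined in the text: (1/h) times a lower bidiagonal matrix whose
# first row is (1, 0, ..., 0) and whose remaining rows carry the
# forward-difference stencil (-1, 1).
Dstar = (np.eye(n) - np.eye(n, k=-1)) / h
inv = np.linalg.inv(Dstar)
# Its inverse is h times the lower-triangular matrix of ones, so applying
# (D*)^{-1} amounts to the cumulative sums that produce (13).
assert np.allclose(inv, h * np.tri(n))
```

This also confirms that $D^*$ is nonsingular for every $n$ and $h > 0$.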

In fact, to obtain the solution of (12), we should compute the inverse matrix of *D*∗. Therefore, we have:

$$(D^*)^{-1} = h \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 1 & 1 & \cdots & 1 & 1 \end{pmatrix},$$

and by (12), we get:

$$\begin{cases} X_{k+1} = X_k + hF_k, & X_1 = \alpha, \\ Y_{k+1} = Y_k + hG_k, & Y_1 = \beta, \end{cases} \quad k = 1, \ldots, n - 1, \tag{13}$$

where *Fk* (*Gk*) is given by Formula (6). Formula (13) can be applied to the computation of *X*2, ... , *Xn* and *Y*2, ... ,*Yn*. However, it cannot be applied directly by using the function *f*(*t*, *x*, *y*) or *g*(*t*, *x*, *y*), because it uses unknown functions *x* and *y*. Therefore, we will use the same trick as in [6] and replace functions by their FzT components:

$$\hat{F}_k = \frac{\int_{t_1}^{t_n} f(t, X_k, Y_k)\, A_k(t)\, dt}{\int_{t_1}^{t_n} A_k(t)\, dt}, \quad \hat{G}_k = \frac{\int_{t_1}^{t_n} g(t, X_k, Y_k)\, A_k(t)\, dt}{\int_{t_1}^{t_n} A_k(t)\, dt}, \quad k = 1, \ldots, n - 1. \tag{14}$$

Thus, the components of FzT of *x* and *y* can be approximated from the following Scheme I:

$$X_{k+1} = X_k + h\hat{F}_k, \quad X_1 = \alpha, \qquad Y_{k+1} = Y_k + h\hat{G}_k, \quad Y_1 = \beta, \quad k = 1, \ldots, n - 1. \tag{15}$$

Finally, the approximate solution of (5) can be obtained using the inverse FzT as follows:

$$\mathbf{x}\_n(t) = \sum\_{k=1}^n \mathbf{X}\_k \, A\_k(t), \ y\_n(t) = \sum\_{k=1}^n \mathbf{Y}\_k \, A\_k(t), \tag{16}$$

where $A_k(t)$, $k = 1, 2, \ldots, n$, are the given basic functions. The proposed method is similar to the well-known Euler method and, under similar assumptions, has the same degree of accuracy. In the next theorem, we obtain an error estimate in the context of a fuzzy partition and an error analysis of numerical Scheme I.
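Scheme I (14)–(15) is straightforward to prototype. The sketch below (our own illustration; the function name `scheme_one`, the quadrature resolution `m` and the test problem are not from the paper) computes $\hat{F}_k$, $\hat{G}_k$ by quadrature over the support of a triangular $A_k$ and performs the Euler-like update (15), applied to the system $x' = y$, $y' = -x$, whose exact solution is $(\sin t, \cos t)$:

```python
import numpy as np

def scheme_one(f, g, alpha, beta, t1, tn, n, m=41):
    """Scheme I (14)-(15): Euler-like steps whose right-hand sides are the
    components F^_k, G^_k of f and g w.r.t. a triangular fuzzy partition."""
    h = (tn - t1) / (n - 1)
    t = t1 + h * np.arange(n)
    X, Y = np.empty(n), np.empty(n)
    X[0], Y[0] = alpha, beta
    for k in range(n - 1):
        lo, hi = max(t1, t[k] - h), min(tn, t[k] + h)
        s = np.linspace(lo, hi, m)            # quadrature nodes for (14)
        Ak = np.maximum(0.0, 1.0 - np.abs(s - t[k]) / h)
        w = np.diff(s)
        def integ(vals):                      # composite trapezoid rule
            return float(np.sum((vals[1:] + vals[:-1]) * w) / 2)
        den = integ(Ak)
        Fk = integ(f(s, X[k], Y[k]) * Ak) / den   # F^_k of (14)
        Gk = integ(g(s, X[k], Y[k]) * Ak) / den   # G^_k of (14)
        X[k + 1] = X[k] + h * Fk                  # update (15)
        Y[k + 1] = Y[k] + h * Gk
    return t, X, Y

# x' = y, y' = -x, x(0) = 0, y(0) = 1 on [0, 1]; exact x = sin t, y = cos t
t, X, Y = scheme_one(lambda t, x, y: y + 0 * t,
                     lambda t, x, y: -x + 0 * t,
                     0.0, 1.0, 0.0, 1.0, n=101)
assert np.max(np.abs(X - np.sin(t))) < 0.05
assert np.max(np.abs(Y - np.cos(t))) < 0.05
```

Replacing the two update lines with a two-stage correction would give the trapezoidal-rule-type scheme of the second method.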

**Theorem 1.** *Let $f, g : [t_1, t_n] \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be twice continuously differentiable on $[t_1, t_n]$ and Lipschitz continuous with respect to $x$ and $y$, i.e., there exists a constant $L \in \mathbb{R}$ such that for all $t \in [t_1, t_n]$ and $x, x', y, y' \in \mathbb{R}$,*

$$|f(t,x,y) - f(t,x',y')| \le L(|x - x'| + |y - y'|),$$

$$|g(t,x,y) - g(t,x',y')| \le L(|x - x'| + |y - y'|). \tag{17}$$

*Assume that* {*Ak* | *k* = 1, . . . , *n*}*, n* ≥ 2, *is a uniform fuzzy partition of* [*t*1, *tn*]*. Then, the local (global) error of Scheme I (14)–(15) is of the order h*<sup>2</sup> *(h).*

**Proof.** Let us choose and fix some *k*, where 2 ≤ *k* ≤ *n*, and assume that *Xk* (*Yk*) is the *k*-th FzT component of *x* (*y*). We consider the SODEs (5) and their FzT representation by the system of equations (11). We start with the following easy consequences from the Taylor expansions:

$$\begin{aligned} x(t_{k+1}) &= x(t_k) + hf(t_k, x_k, y_k) + \mathcal{O}(h^2), \\ y(t_{k+1}) &= y(t_k) + hg(t_k, x_k, y_k) + \mathcal{O}(h^2). \end{aligned}$$

Let $x(t_k) = x_k$, $y(t_k) = y_k$, $e_k = x_k - X_k$ and $d_k = y_k - Y_k$; then, according to (14)–(15), we get:

$$\mathbf{e}\_{k+1} = \mathbf{e}\_k + h\left(f(t\_k, \mathbf{x}\_k, y\_k) - \hat{F}\_k\right) + \mathcal{O}\left(h^2\right),\tag{18}$$

$$d_{k+1} = d_k + h\left(g(t_k, x_k, y_k) - \hat{G}_k\right) + \mathcal{O}\left(h^2\right). \tag{19}$$

By assumption, $f$ and $g$ have continuous second order derivatives on $[t_1, t_n]$ and are Lipschitz continuous with respect to $x$ and $y$; therefore, using the trapezoid rule and Remark 1, we get:

$$\begin{aligned} |f(t_k, x_k, y_k) - \hat{F}_k| &= \left|f(t_k, x_k, y_k) - \frac{1}{h} \int_{t_{k-1}}^{t_{k+1}} f(t, X_k, Y_k) A_k(t)\, dt\right| \\ &= \left|f(t_k, x_k, y_k) - \frac{1}{h}\, \frac{h}{2}\, 2\, f(t_k, X_k, Y_k) + \mathcal{O}(h^2)\right| \\ &= \left|f(t_k, x_k, y_k) - f(t_k, X_k, Y_k) + \mathcal{O}(h^2)\right| \\ &\le L(|e_k| + |d_k|) + \mathcal{O}(h^2). \end{aligned} \tag{20}$$

Similarly,

$$|\mathcal{g}(t\_{k\prime}x\_k, y\_k) - \hat{\mathcal{G}}\_k| \le L(|e\_k| + |d\_k|) + \mathcal{O}(h^2).$$

Therefore,

$$\begin{aligned} |e_{k+1}| &\le |e_k| + hL(|e_k| + |d_k|) + \mathcal{O}(h^2), \\ |d_{k+1}| &\le |d_k| + hL(|e_k| + |d_k|) + \mathcal{O}(h^2). \end{aligned}$$

Denoting $\delta_k = \max(|e_k|, |d_k|)$ and using the obvious equality $\max(a + b, c + b) = \max(a, c) + b$, we obtain from the above:

$$\delta_{k+1} \le \delta_k + hL(|e_k| + |d_k|) + \mathcal{O}(h^2) \le \delta_k + 2hL\,\delta_k + \mathcal{O}(h^2), \tag{21}$$

and finally,

$$
\delta\_{k+1} \le \delta\_k (1 + 2hL) + \mathcal{O}(h^2).
$$

By iteration, we come to:

$$\delta_{k+1} \le \delta_1(1 + C)^k + \mathcal{O}(h^2)\left(1 + (1 + C) + \cdots + (1 + C)^{k-1}\right) = \delta_1(1 + C)^k + \mathcal{O}(h^2)\,\frac{(1 + C)^k - 1}{C},$$

where *C* = 2*hL*. For *k* = *n* − 1, we have:

$$\delta_n \lesssim e^{2L(t_n - t_1)}\,\delta_1 + \mathcal{O}(h), \tag{22}$$

where we made use of the fact that $h = (t_n - t_1)/(n - 1)$ and of the asymptotic equalities $(1 + 1/n)^n \sim e$ and $(1 + a)^n \sim 1 + na$.

By (21) and (22), we conclude that Scheme I has the local order *h*<sup>2</sup> and the global order *h*.
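The global order $h$ can also be observed empirically. The sketch below is our own illustration under the simplification, justified by (20), that $\hat{F}_k$ and $\hat{G}_k$ agree with $f(t_k, X_k, Y_k)$ and $g(t_k, X_k, Y_k)$ up to $\mathcal{O}(h^2)$; halving $h$ should then roughly halve the global error:

```python
import numpy as np

def global_error(n):
    """Scheme I with F^_k, G^_k replaced by their O(h^2)-accurate values
    f(t_k, X_k, Y_k), g(t_k, X_k, Y_k), applied to x' = y, y' = -x,
    x(0) = 0, y(0) = 1 on [0, 1] (exact solution sin t, cos t)."""
    h = 1.0 / (n - 1)
    t = h * np.arange(n)
    X, Y = np.empty(n), np.empty(n)
    X[0], Y[0] = 0.0, 1.0
    for k in range(n - 1):
        X[k + 1] = X[k] + h * Y[k]
        Y[k + 1] = Y[k] - h * X[k]
    return max(np.max(np.abs(X - np.sin(t))),
               np.max(np.abs(Y - np.cos(t))))

e1, e2 = global_error(101), global_error(201)
ratio = e1 / e2          # should be close to 2 for a first-order scheme
assert 1.7 < ratio < 2.3
```

A ratio near 2 under halving of $h$ is the numerical signature of the global order $h$ established in Theorem 1.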

**Remark 4.** *Theorem 1 extends Theorem* 9.1 *of [6] to the SODEs.*

**Corollary 1.** *Let the assumptions of Theorem 1 be fulfilled. Then, for each k* = 1, . . . , *n:*

$$F\_k - \hat{F}\_k = \mathcal{O}(h^2), \text{ and } G\_k - \hat{G}\_k = \mathcal{O}(h^2),$$

*where $F_k$, $G_k$ are defined by (6) and $\hat{F}_k$, $\hat{G}_k$ are defined by (14).*

**Proof.** By Lemma 1, $x_k - X_k = \mathcal{O}(h^2)$ and $y_k - Y_k = \mathcal{O}(h^2)$; together with (20), we get:

$$\hat{F}_k - f(t_k, x_k, y_k) = \mathcal{O}(h^2),$$

and using the trapezoid rule and Remark 1, we obtain:

$$\begin{aligned} F_k &= \frac{1}{h} \int_{t_{k-1}}^{t_{k+1}} f(t, x(t), y(t)) A_k(t)\, dt \\ &= \frac{1}{h}\, \frac{h}{2}\, 2\, f(t_k, x_k, y_k) + \mathcal{O}(h^2) \\ &= f(t_k, x_k, y_k) + \mathcal{O}(h^2), \end{aligned}$$

which together with the previous estimation proves that:

$$F_k - \hat{F}_k = \mathcal{O}(h^2).$$

Similarly,

$$g(t_k, x_k, y_k) - \hat{G}_k = \mathcal{O}(h^2) \text{ and } G_k - \hat{G}_k = \mathcal{O}(h^2). \quad \square$$

**Corollary 2.** *The approximation method for (5) is given by Scheme I (14)–(15) with the local error* O(*h*2)*. The approximate solution to (5) can be found by taking the inverse FzT (16) where A*1, ... , *An are fixed basic functions.*

**Theorem 2.** *Let $f, g \in C^2[t_1, t_n]$ be bounded on $I = [t_1, t_n]$. Let, moreover, the basic functions $\{A_k \mid k = 1, \ldots, n\}$, $n \ge 2$, form a uniform fuzzy partition of $I$. Assume that $F_k$ ($G_k$), $k = 1, \ldots, n$, and $F'_k$ ($G'_k$), $k = 1, \ldots, n$, are the FzT components of $f$ ($g$) and $f'$ ($g'$) with respect to $\{A_k \mid k = 1, \ldots, n\}$, respectively. Then, for $k = 1, \ldots, n - 1$:*

$$|F_{k+1} - F_k| \le h|F'_k| + \frac{h^2}{2}M_f, \quad |G_{k+1} - G_k| \le h|G'_k| + \frac{h^2}{2}M_g, \tag{23}$$

*where $M_f = \max_{t \in I} |f''(t, x(t), y(t))|$ and $M_g = \max_{t \in I} |g''(t, x(t), y(t))|$.*

**Proof.** Let us write the following result from Taylor's theorem:

$$\begin{aligned} f(t + h, x(t), y(t)) &= f(t, x(t), y(t)) + hf'(t, x(t), y(t)) + \frac{h^2}{2}f''(\varepsilon, x(\varepsilon), y(\varepsilon)), \\ g(t + h, x(t), y(t)) &= g(t, x(t), y(t)) + hg'(t, x(t), y(t)) + \frac{h^2}{2}g''(\xi, x(\xi), y(\xi)), \end{aligned}$$

where *tk* < *ε*, *ξ* < *tk*+1. Using the linearity of the FzT by Remark 2 and the properties of the uniform fuzzy partition by Definition 1, we get:

$$\begin{aligned} F_k[f(t + h, x, y)] &= \frac{1}{h} \int_{t_{k-1}}^{t_{k+1}} f(t + h, x, y) A_k(t)\, dt \\ &= \frac{1}{h} \int_{t_k}^{t_{k+2}} f(t, x, y) A_{k+1}(t)\, dt \\ &= F_{k+1}[f(t, x, y)], \quad k = 2, 3, \ldots, n - 1, \end{aligned}$$

and:

$$\begin{aligned} F_{k+1} &= F_k + hF'_k + \mathcal{O}\left(h^2\right), \quad G_{k+1} = G_k + hG'_k + \mathcal{O}\left(h^2\right), \\ F_{k+1} &\le F_k + hF'_k + \frac{h^2}{2}M_f, \quad G_{k+1} \le G_k + hG'_k + \frac{h^2}{2}M_g, \end{aligned}$$

which easily leads to (23).

**Lemma 2.** *Consider Scheme I (14)–(15). If the set of fuzzy sets* {*Ak* | *k* = 1, . . . , *n* − 1} , *n* ≥ 2, *is a uniform fuzzy partition of* [*t*1, *tn*]*, then we have for fixed k* = 1, . . . , *n* − 1*:*

$$X_{k+1} - X_k = \begin{cases} 2\int_{t_1}^{t_n} f(t, X_k, Y_k) A_1(t)\, dt & \text{if } k = 1, \\[4pt] \int_{t_1}^{t_n} f(t, X_k, Y_k) A_k(t)\, dt & \text{if } k = 2, \ldots, n - 1, \end{cases} \tag{24}$$

$$Y_{k+1} - Y_k = \begin{cases} 2\int_{t_1}^{t_n} g(t, X_k, Y_k) A_1(t)\, dt & \text{if } k = 1, \\[4pt] \int_{t_1}^{t_n} g(t, X_k, Y_k) A_k(t)\, dt & \text{if } k = 2, \ldots, n - 1. \end{cases} \tag{25}$$

**Proof.** The proof can be obtained from Remark 1; in particular, by the properties of the uniform fuzzy partition, $\int_{t_1}^{t_n} A_k(t)\, dt = h/2$ for $k = 1$ and $\int_{t_1}^{t_n} A_k(t)\, dt = h$ for $k = 2, \ldots, n - 1$; substituting this into (14)–(15) gives the result.
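The two integral values used in this proof are easy to confirm numerically for triangular basic functions (our own check, not from the paper): the boundary basic function integrates to $h/2$ and the interior ones to $h$:

```python
import numpy as np

a, b, n = 0.0, 1.0, 6
h = (b - a) / (n - 1)
x = np.linspace(a, b, 100001)
nodes = a + h * np.arange(n)
ints = []
for xk in nodes:
    Ak = np.maximum(0.0, 1.0 - np.abs(x - xk) / h)   # triangular A_k
    # composite trapezoid rule for the integral of A_k over [a, b]
    ints.append(float(np.sum((Ak[1:] + Ak[:-1]) * np.diff(x)) / 2))
assert abs(ints[0] - h / 2) < 1e-6 and abs(ints[-1] - h / 2) < 1e-6
assert all(abs(I - h) < 1e-6 for I in ints[1:-1])
```

Geometrically, each interior $A_k$ is a hat of base $2h$ and height $1$ (area $h$), while the two boundary basic functions are half hats (area $h/2$).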

**Lemma 3.** *Suppose that $f, g$ are continuous and bounded on $I = [t_1, t_n]$, and consider Scheme I (14)–(15) with respect to basic functions forming a uniform fuzzy partition of $I$. Then, we have for fixed $k = 1, \ldots, n - 1$:*

$$|X_{k+1} - X_k| \le M_1 h, \quad |Y_{k+1} - Y_k| \le M_2 h,$$

*where $M_1 = \max_{t \in [t_1, t_n]} |f(t, x, y)|$ and $M_2 = \max_{t \in [t_1, t_n]} |g(t, x, y)|$.*

**Proof.** Let us choose a value of *k* in the range 1 ≤ *k* ≤ *n* − 1. By using Lemma 2, we get:

$$\begin{aligned} |X_2 - X_1| &= \left|2\int_{t_1}^{t_2} f(t, X_1, Y_1) A_1(t)\, dt\right| \le 2\int_{t_1}^{t_2} |f(t, X_1, Y_1) A_1(t)|\, dt \le 2M_1 \int_{t_1}^{t_2} A_1(t)\, dt = M_1 h, \\ |X_{k+1} - X_k| &= \left|\int_{t_{k-1}}^{t_{k+1}} f(t, X_k, Y_k) A_k(t)\, dt\right| \le \int_{t_{k-1}}^{t_{k+1}} |f(t, X_k, Y_k) A_k(t)|\, dt \le M_1 \int_{t_{k-1}}^{t_{k+1}} A_k(t)\, dt = M_1 h. \end{aligned}$$

Similarly, |*Y*<sup>2</sup> − *Y*1| ≤ *M*2*h* and |*Yk*<sup>+</sup><sup>1</sup> − *Yk*| ≤ *M*2*h*.

Throughout the assumptions of Theorem 3, we consider $(t, x_1, x_2)$ instead of $(t, x, y)$.

**Theorem 3.** *Suppose that $f(t, x_1, x_2), g(t, x_1, x_2) \in C^2[t_1, t_n]$. Let $\left|\frac{\partial f}{\partial x_i}\right| \le L_f$, $\left|\frac{\partial g}{\partial x_i}\right| \le L_g$, $i = 1, 2$, and $|f(t, x_1, x_2)| \le M_1$ ($|g(t, x_1, x_2)| \le M_2$). Consider Scheme I (14)–(15) for some positive integer $k$, and let $\{A_k \mid k = 1, \dots, n-1\}$, $n \ge 2$, be a uniform fuzzy partition of $[t_1, t_n]$; then the following hold true:*

*1. for a value of k in the range* 1 ≤ *k* ≤ *n* − 1*:*

$$\left| \hat{F}_k - \hat{F}_{k-1} \right| \le L_f h\, U_{k,k-1}, \qquad \left| \hat{G}_k - \hat{G}_{k-1} \right| \le L_g h\, U_{k,k-1},$$

*where $U_{k,k-1} = |X_k - X_{k-1}| + |Y_k - Y_{k-1}|$.*

*2. for all k* = 1, . . . , *n* − 1*:*

$$|X\_{n+1} - X\_n| \le \frac{Mh}{2} e^{n\left(2Lh^2\right)}, \ |Y\_{n+1} - Y\_n| \le \frac{Mh}{2} e^{n\left(2Lh^2\right)},$$

*where $L = L_f + L_g$ and $M = \sum_{i=1}^{2} M_i$.*

**Proof.** 1. Using (6), we can get for each *k* = 2, . . . , *n* − 1 and *t* ∈ *I* ∩ [*tk*, *tk*<sup>+</sup>1]:

$$\left| \hat{F}_k - \hat{F}_{k-1} \right| = \left| \frac{\int_{t_{k-1}}^{t_{k+1}} f(t, X_k, Y_k) A_k(t)\,dt}{\int_{t_{k-1}}^{t_{k+1}} A_k(t)\,dt} - \frac{\int_{t_{k-2}}^{t_k} f(t, X_{k-1}, Y_{k-1}) A_{k-1}(t)\,dt}{\int_{t_{k-2}}^{t_k} A_{k-1}(t)\,dt} \right|.$$

Based on Remark 1 and Definition 1 (the properties of the uniform fuzzy partition), we replace $t$ by $t - h$ in the second integral, so that $A_{k-1}(t - h)$ becomes $A_k(t)$. Thus,

$$\begin{split} \left| \hat{F}_k - \hat{F}_{k-1} \right| &= \left| \frac{\int_{t_{k-1}}^{t_{k+1}} f(t, X_k, Y_k) A_k(t)\,dt}{\int_{t_{k-1}}^{t_{k+1}} A_k(t)\,dt} - \frac{\int_{t_{k-1}}^{t_{k+1}} f(t-h, X_{k-1}, Y_{k-1}) A_k(t)\,dt}{\int_{t_{k-1}}^{t_{k+1}} A_k(t)\,dt} \right| \\ &\le L_f \left( |X_k - X_{k-1}| + |Y_k - Y_{k-1}| \right) \int_{t_{k-1}}^{t_{k+1}} A_k(t)\,dt \\ &= L_f h\, U_{k,k-1}. \end{split} \tag{25}$$

In a similar way, $\left| \hat{G}_k - \hat{G}_{k-1} \right| \le L_g h\, U_{k,k-1}$.

2. We first prove the estimate for *k* = 1, and then for all *k* = 2, ... , *n* − 1. By using Lemma 2, for *k* = 1,

$$\begin{aligned} |X_2 - X_1| &= \left| \int_{t_1}^{t_2} f(t, X_1, Y_1) A_1(t)\,dt \right| \le \int_{t_1}^{t_2} |f(t, X_1, Y_1) A_1(t)|\,dt \le M_1 \int_{t_1}^{t_2} A_1(t)\,dt \le \frac{Mh}{2}, \end{aligned}$$

where $M = \sum_{i=1}^{2} M_i$. By (15), we get:

$$X_{k+1} - X_k = X_k - X_{k-1} + h\left(\hat{F}_k - \hat{F}_{k-1}\right), \quad Y_{k+1} - Y_k = Y_k - Y_{k-1} + h\left(\hat{G}_k - \hat{G}_{k-1}\right).$$

By using (25), we get:

$$\begin{split} |X_{k+1} - X_k| &\le |X_k - X_{k-1}| + Lh^2 \left( |X_k - X_{k-1}| + |Y_k - Y_{k-1}| \right), \\ |X_{k+1} - X_k| + |Y_{k+1} - Y_k| &\le |X_k - X_{k-1}| + |Y_k - Y_{k-1}| + 2Lh^2 \left( |X_k - X_{k-1}| + |Y_k - Y_{k-1}| \right) \\ &= \left( 1 + 2Lh^2 \right) \left( |X_k - X_{k-1}| + |Y_k - Y_{k-1}| \right) \\ &\le \left( 1 + 2Lh^2 \right)^k U_{2,1} \le \frac{Mh}{2} \left( 1 + 2Lh^2 \right)^k, \end{split}$$

where $U_{2,1} = |X_2 - X_1| + |Y_2 - Y_1|$, $L = L_f + L_g$ and $M = \sum_{i=1}^{2} M_i$. In particular,

$$|X_{n+1} - X_n| \le \frac{Mh}{2} \left( 1 + 2Lh^2 \right)^n \le e^{n\left(2Lh^2\right)} \frac{Mh}{2},$$

where we have used the inequality $\left(1 + 2h^2 L\right)^n \le e^{n\left(2h^2 L\right)}$, $n \ge 0$. Analogously, $|Y_{n+1} - Y_n| \le e^{n\left(2Lh^2\right)} \frac{Mh}{2}$, which concludes the proof.
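To make the construction concrete, here is a minimal Python sketch of the Euler-type update of Scheme I, assuming triangular basic functions and a trapezoidal quadrature for the F-transform components; the helper names (`ft_component`, `scheme1`) and the test system are illustrative assumptions, not from the paper:

```python
# Sketch of Scheme I: X_{k+1} = X_k + h * F_k, Y_{k+1} = Y_k + h * G_k, where
# F_k, G_k are F-transform components of f(., X_k, Y_k) and g(., X_k, Y_k)
# with respect to the triangular basic function centred at t_k.
def ft_component(fun, tk, h, t1, tn, m=400):
    """Weighted mean of fun over the triangular basic function centred at tk."""
    lo, hi = max(t1, tk - h), min(tn, tk + h)
    w = (hi - lo) / m
    num = den = 0.0
    for i in range(m + 1):
        t = lo + i * w
        wt = (0.5 if i in (0, m) else 1.0) * w  # trapezoidal weights
        a = max(0.0, 1.0 - abs(t - tk) / h)     # A_k(t)
        num += wt * fun(t) * a
        den += wt * a
    return num / den

def scheme1(f, g, t1, tn, n, x1, y1):
    h = (tn - t1) / (n - 1)
    X, Y = [x1], [y1]
    for k in range(n - 1):
        tk = t1 + k * h
        x, y = X[-1], Y[-1]
        X.append(x + h * ft_component(lambda t: f(t, x, y), tk, h, t1, tn))
        Y.append(y + h * ft_component(lambda t: g(t, x, y), tk, h, t1, tn))
    return X, Y

# test system: x' = y, y' = -x, x(0) = 1, y(0) = 0 (exact: cos t, -sin t)
X, Y = scheme1(lambda t, x, y: y, lambda t, x, y: -x, 0.0, 1.0, 101, 1.0, 0.0)
```

For a t-independent right-hand side, each component reduces to the point value, so on this example the scheme coincides with the explicit Euler method.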

**Remark 5.** *The following estimations are used in Theorem 4 for k* = 1, ... , *n* − 1*. Let f* (*g*) *satisfy the Lipschitz condition in the second and third arguments; we get:*

$$\begin{aligned} |f(t_k, x_k, y_k) - f(t_k, X_k, Y_k)| &\le L_1 |x_k - X_k| + L_2 |y_k - Y_k|, \\ |g(t_k, x_k, y_k) - g(t_k, X_k, Y_k)| &\le L_3 |x_k - X_k| + L_4 |y_k - Y_k|. \end{aligned}$$

*From (20), we get:*

$$\begin{aligned} \left| f(t_k, x_k, y_k) - \hat{F}_k \right| &\le \left| f(t_k, x_k, y_k) - f(t_k, X_k, Y_k) \right| + \frac{h^2}{12} M_2, \\ \left| f(t_k, x_k, y_k) - \hat{F}_k \right| &\le L_1 |x_k - X_k| + L_2 |y_k - Y_k| + \frac{h^2}{6} M_2, \\ \left| g(t_k, x_k, y_k) - \hat{G}_k \right| &\le L_3 |x_k - X_k| + L_4 |y_k - Y_k| + \frac{h^2}{6} M_2. \end{aligned}$$

*Thus,*

$$\left| f(t_k, X_k, Y_k) - \hat{F}_k \right| \le \frac{h^2}{6} M_2, \quad \text{and} \quad \left| g(t_k, X_k, Y_k) - \hat{G}_k \right| \le \frac{h^2}{6} M_2,$$

*where $M_2 = M_{2f} + M_{2g}$, $M_{2f} = \max_{t \in [t_1, t_n]} |f''(t, x, y)|$ and $M_{2g} = \max_{t \in [t_1, t_n]} |g''(t, x, y)|$.*

Now, we show that the proposed Scheme I is convergent.

**Theorem 4.** *Let the assumptions of Theorem 3 be fulfilled, and further assume that f* (*g*) *satisfies a Lipschitz condition in the second and third arguments. Consider Scheme I (14)–(15) for some positive integer k, and let* {*Ak* | *k* = 1, ... , *n* − 1}*, n* ≥ 2, *be a uniform fuzzy partition of* [*t*1, *tn*]*. Thus, if a sequence of h* = {*h*1, ..., *hm*} → 0*, m* > 0*, and with each h, we compute the components Xk*,*h*, *Yk*,*h, then* $|x(t_k) - X_{k,h}|$, $|y(t_k) - Y_{k,h}|$ *converge to zero for each k* = 1, . . . , *n* − 1*.*

**Proof.** Let us drop the *h* subscript in the errors, writing |*x*(*tk*) − *Xk*| and |*y*(*tk*) − *Yk*|. Now, when *k* = 1, the result is clearly true, since *x*(*t*1) = *X*<sup>1</sup> = *x*1, *y*(*t*1) = *Y*<sup>1</sup> = *y*1. By Taylor's theorem, we have:

$$\begin{aligned} x(t_{k+1}) &= x(t_k) + h f(t_k, x_k, y_k) + \frac{h^2}{2} f'(\varepsilon_k, x(\varepsilon_k), y(\varepsilon_k)), \\ y(t_{k+1}) &= y(t_k) + h g(t_k, x_k, y_k) + \frac{h^2}{2} g'(\xi_k, x(\xi_k), y(\xi_k)), \end{aligned}$$

where *tk* < *εk*, *ξ<sup>k</sup>* < *tk*+1. Denote *e*1*<sup>k</sup>* = *xk* − *Xk*, *e*2*<sup>k</sup>* = *yk* − *Yk*, *xk* = *x*(*tk*) and *yk* = *y*(*tk*). Then, by (15), we get:

$$\begin{split} e1_{k+1} &= e1_k + h \left( f(t_k, x_k, y_k) - f(t_k, X_k, Y_k) + f(t_k, X_k, Y_k) - \hat{F}_k \right) + \frac{h^2}{2} x''(\varepsilon_k), \\ e2_{k+1} &= e2_k + h \left( g(t_k, x_k, y_k) - g(t_k, X_k, Y_k) + g(t_k, X_k, Y_k) - \hat{G}_k \right) + \frac{h^2}{2} y''(\xi_k). \end{split}$$

By virtue of Remark 5, we get:

$$\begin{aligned} |e1_{k+1}| &\le |e1_k| + hL_1 |e1_k| + hL_2 |e2_k| + \frac{h^2}{2} \left( M_{1f} + \frac{h}{3} M_2 \right), \\ |e2_{k+1}| &\le |e2_k| + hL_4 |e2_k| + hL_3 |e1_k| + \frac{h^2}{2} \left( M_{1g} + \frac{h}{3} M_2 \right), \end{aligned}$$

where $M_{1f} = \max_{t \in [t_1, t_n]} |x''(t)|$ and $M_{1g} = \max_{t \in [t_1, t_n]} |y''(t)|$. Therefore,

*Case* 1. If $c = M_{1f} + M_{1g} + \frac{2h}{3} M_2$ and $L = \sum_{i=1}^{4} L_i$, we get:

$$\begin{aligned} |e1_{k+1}| &\le |e1_{k+1}| + |e2_{k+1}| \le (1 + 2hL) \left( |e1_k| + |e2_k| \right) + \frac{h^2}{2} c \\ &\le (1 + 2hL) U_k + \frac{h^2}{2} c \le (1 + 2hL)^k U_1 + \left( \sum_{j=0}^{k-1} (1 + 2hL)^j \right) \frac{h^2}{2} c \\ &= (1 + 2hL)^k U_1 + \left( \frac{(1 + 2hL)^k - 1}{2hL} \right) \frac{h^2}{2} c \le e^{k(2hL)} \left( U_1 + \frac{hc}{4L} \right) - \frac{hc}{4L}, \end{aligned}$$

where $U_k = |e1_k| + |e2_k|$. Indeed, we have used the inequality $(1 + 2hL)^k \le e^{k(2hL)}$, $k \ge 0$, and the quantity $\sum_{j=0}^{k-1} (1 + 2hL)^j$ is a finite geometric series, which can be calculated from:

$$2Lh\left(\sum_{j=0}^{k-1} (1 + 2hL)^j\right) = (1 + 2hL)\left(\sum_{j=0}^{k-1} (1 + 2hL)^j\right) - \left(\sum_{j=0}^{k-1} (1 + 2hL)^j\right) = (1 + 2hL)^k - 1.$$

In particular, when *U*<sup>1</sup> = 0, this implies that:

$$|\mathbf{x}\_{\mathrm{n}} - \mathbf{X}\_{\mathrm{n}}| \le \frac{hc}{4L} \left( e^{2L(t\_n - t\_1)} - 1 \right), \ |y\_{\mathrm{n}} - \mathbf{Y}\_{\mathrm{n}}| \le \frac{hc}{4L} \left( e^{2L(t\_n - t\_1)} - 1 \right). \tag{26}$$

*Case* 2. In view of Remark 5, let *L* = *Li*, *i* = 1, . . . , 4

$$\begin{aligned} \left| f(t_k, x_k, y_k) - \hat{F}_k \right| &\le \frac{h^2}{6} M_2 + 2L \max \left\{ |x_k - X_k|, |y_k - Y_k| \right\}, \\ \left| g(t_k, x_k, y_k) - \hat{G}_k \right| &\le \frac{h^2}{6} M_2 + 2L \max \left\{ |x_k - X_k|, |y_k - Y_k| \right\}. \end{aligned}$$

Thus, with $e1_k = x_k - X_k$ and $e2_k = y_k - Y_k$,

$$\begin{aligned} |e1_{k+1}| &\le |e1_k| + 2Lh \max \left\{ |e1_k|, |e2_k| \right\} + \frac{h^2}{2} c_1, \\ |e2_{k+1}| &\le |e2_k| + 2Lh \max \left\{ |e1_k|, |e2_k| \right\} + \frac{h^2}{2} c_2, \end{aligned}$$

where $c_1 = M_{1f} + \frac{h}{3} M_2$ and $c_2 = M_{1g} + \frac{h}{3} M_2$. Consequently,

$$\begin{aligned} |e1_k| &\le (1 + 4Lh)^k |U_1| + h^2 c_1 \frac{(1 + 4Lh)^k - 1}{4Lh}, \\ |e2_k| &\le (1 + 4Lh)^k |U_1| + h^2 c_2 \frac{(1 + 4Lh)^k - 1}{4Lh}, \end{aligned}$$

where $U_1 = |e1_1| + |e2_1|$. In particular, when $U_1 = 0$, this implies that:

$$|x_n - X_n| \le h c_1 \frac{e^{4L(t_n - t_1)} - 1}{4L}, \quad |y_n - Y_n| \le h c_2 \frac{e^{4L(t_n - t_1)} - 1}{4L}, \tag{27}$$

and if a sequence of *<sup>h</sup>* <sup>→</sup> 0, we get *e*1*n*,*<sup>h</sup>* <sup>→</sup> 0, *e*2*n*,*<sup>h</sup>* → 0, which concludes the proof.

#### *3.3. Numerical Scheme II for SODEs*

In this subsection, we construct numerical Scheme II, a more accurate method than Scheme I. The components of the FzT of *x* and *y* can be approximated by averaging two methods, Scheme I (14)–(15) and the backward (implicit) Scheme I; the FzT is then given by:

$$\begin{array}{llll} X_p = X_k + h\hat{F}_k, & X_1 = \alpha, & Y_p = Y_k + h\hat{G}_k, & Y_1 = \beta, \\ X_c = X_k + h\hat{F}_{k+1}, & & Y_c = Y_k + h\hat{G}_{k+1}, & \\ X_{k+1} = \frac{1}{2}\left(X_p + X_c\right), & & Y_{k+1} = \frac{1}{2}\left(Y_p + Y_c\right), & \end{array} \tag{28}$$

for *k* = 1, . . . , *n* − 1.

The problem with the previous scheme (28) is that the unknown quantities $\hat{F}_{k+1}$ and $\hat{G}_{k+1}$ appear on both sides (an implicit method). Therefore, one solution to this problem is to use an explicit method based on another fuzzy approach. The following is Scheme II, for *k* = 1, ... , *n* − 1, with *X*<sup>1</sup> = *α*, *Y*<sup>1</sup> = *β*:

$$\begin{array}{ll} X_{k+1}^{*} = X_k + h\hat{F}_k, & Y_{k+1}^{*} = Y_k + h\hat{G}_k, \\ X_{k+1} = X_k + \frac{h}{2}\left(\hat{F}_k + \hat{F}_{k+1}^{*}\right), & Y_{k+1} = Y_k + \frac{h}{2}\left(\hat{G}_k + \hat{G}_{k+1}^{*}\right), \end{array} \tag{29}$$

where:

$$\begin{array}{ll} \hat{F}_k[f] = \dfrac{\int_{t_1}^{t_n} f(t, X_k, Y_k) A_k(t)\,dt}{\int_{t_1}^{t_n} A_k(t)\,dt}, & \hat{G}_k[g] = \dfrac{\int_{t_1}^{t_n} g(t, X_k, Y_k) A_k(t)\,dt}{\int_{t_1}^{t_n} A_k(t)\,dt}, \\[2ex] \hat{F}_{k+1}^{*}[f] = \dfrac{\int_{t_1}^{t_n} f(t, X_{k+1}^{*}, Y_{k+1}^{*}) A_{k+1}(t)\,dt}{\int_{t_1}^{t_n} A_{k+1}(t)\,dt}, & \hat{G}_{k+1}^{*}[g] = \dfrac{\int_{t_1}^{t_n} g(t, X_{k+1}^{*}, Y_{k+1}^{*}) A_{k+1}(t)\,dt}{\int_{t_1}^{t_n} A_{k+1}(t)\,dt}. \end{array} \tag{30}$$

This method computes the approximate coordinates [*X*1, ... , *Xn*] and [*Y*1, ... ,*Yn*] of the direct FzT of the functions *x*(*t*) and *y*(*t*), respectively. In the sequel, the inverse FzT (16) approximates the solution *x*(*t*) (*y*(*t*)) of the SODEs (5).
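A minimal Python sketch of the explicit Scheme II (29)–(30), under the same assumptions as before (triangular basic functions, trapezoidal quadrature for the component integrals; the names `ft_component` and `scheme2` are illustrative, not from the paper):

```python
# Sketch of Scheme II: an Euler predictor followed by a trapezoidal corrector,
# with slopes given by F-transform components over A_k and A_{k+1}.
def ft_component(fun, tk, h, t1, tn, m=400):
    """Weighted mean of fun over the triangular basic function centred at tk."""
    lo, hi = max(t1, tk - h), min(tn, tk + h)
    w = (hi - lo) / m
    num = den = 0.0
    for i in range(m + 1):
        t = lo + i * w
        wt = (0.5 if i in (0, m) else 1.0) * w  # trapezoidal weights
        a = max(0.0, 1.0 - abs(t - tk) / h)
        num += wt * fun(t) * a
        den += wt * a
    return num / den

def scheme2(f, g, t1, tn, n, x1, y1):
    h = (tn - t1) / (n - 1)
    X, Y = [x1], [y1]
    for k in range(n - 1):
        tk = t1 + k * h
        x, y = X[-1], Y[-1]
        Fk = ft_component(lambda t: f(t, x, y), tk, h, t1, tn)
        Gk = ft_component(lambda t: g(t, x, y), tk, h, t1, tn)
        xs, ys = x + h * Fk, y + h * Gk                      # predictor X*, Y*
        Fs = ft_component(lambda t: f(t, xs, ys), tk + h, h, t1, tn)
        Gs = ft_component(lambda t: g(t, xs, ys), tk + h, h, t1, tn)
        X.append(x + 0.5 * h * (Fk + Fs))                    # corrector (29)
        Y.append(y + 0.5 * h * (Gk + Gs))
    return X, Y

# test system: x' = y, y' = -x, x(0) = 1, y(0) = 0 (exact: cos t, -sin t)
X, Y = scheme2(lambda t, x, y: y, lambda t, x, y: -x, 0.0, 1.0, 101, 1.0, 0.0)
```

On this t-independent example the components reduce to point values, so the errors shrink at the second-order rate established below.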

In the next theorem, we obtain an error estimate in the context of a fuzzy partition and error analysis of approximation Scheme II.

**Theorem 5.** *Suppose that $f(t, x_1, x_2), g(t, x_1, x_2) \in C^2[t_1, t_n]$. Let $\left|\frac{\partial f}{\partial x_i}\right| \le L_f$, $\left|\frac{\partial g}{\partial x_i}\right| \le L_g$, $i = 1, 2$, and $|f(t, x_1, x_2)| \le M_1$ ($|g(t, x_1, x_2)| \le M_2$). Consider Scheme II (29)–(30) for some positive integer $k$, and let $\{A_k \mid k = 1, \dots, n-1\}$, $n \ge 2$, be a uniform fuzzy partition of $[t_1, t_n]$; then the following hold true:*

*1. for a value of k in the range* 1 ≤ *k* ≤ *n* − 1*:*

$$\left| \hat{F}_{k+1}^{*} - \hat{F}_k^{*} \right| \le Lh \left( 1 + 2Lh^2 \right) U_{k,k-1}, \qquad \left| \hat{G}_{k+1}^{*} - \hat{G}_k^{*} \right| \le Lh \left( 1 + 2Lh^2 \right) U_{k,k-1},$$

*where $U_{k,k-1} = |X_k - X_{k-1}| + |Y_k - Y_{k-1}|$.*

*2. for all k* = 1, . . . , *n* − 1

$$|X\_{n+1} - X\_n| \le \frac{Mh}{2} e^{n\left(2Lh^2\right)}, \ |Y\_{n+1} - Y\_n| \le \frac{Mh}{2} e^{n\left(2Lh^2\right)}.$$

*where $L = L_f + L_g$ and $M = \sum_{i=1}^{2} M_i$.*

**Proof.** The proof is similar to that of Theorem 3, so we only outline the procedure. The proof of Part 1 is as follows.

$$\begin{split} \left| X_{k+1}^{*} - X_k^{*} \right| + \left| Y_{k+1}^{*} - Y_k^{*} \right| &\le |X_k - X_{k-1}| + h \left| \hat{F}_k - \hat{F}_{k-1} \right| + |Y_k - Y_{k-1}| + h \left| \hat{G}_k - \hat{G}_{k-1} \right|, \\ \left| \hat{F}_{k+1}^{*} - \hat{F}_k^{*} \right| &\le L_f h \left( \left| X_{k+1}^{*} - X_k^{*} \right| + \left| Y_{k+1}^{*} - Y_k^{*} \right| \right), \\ \left| \hat{F}_k - \hat{F}_{k-1} \right| &\le L_f h\, U_{k,k-1}, \ \text{and} \ \left| \hat{G}_k - \hat{G}_{k-1} \right| \le L_g h\, U_{k,k-1}. \end{split}$$

The proof of Part 2, using (29), gives:

$$\begin{aligned} |X_{k+1} - X_k| &\le |X_k - X_{k-1}| + Lh^2 \left(1 + Lh^2\right) U_{k,k-1}, \\ |X_{k+1} - X_k| &\le |X_{k+1} - X_k| + |Y_{k+1} - Y_k| \le \left(1 + 2Lh^2 \left(1 + Lh^2\right)\right)^k U_{2,1} \\ &\le \frac{Mh}{2} \left(1 + 2Lh^2 \left(1 + Lh^2\right)\right)^k, \end{aligned}$$

where $U_{k,k-1} = |X_k - X_{k-1}| + |Y_k - Y_{k-1}|$ and $M = M_1 + M_2$. In particular:

$$|X_{n+1} - X_n| \le \exp\left(n\left(2Lh^2\right)\left(1 + Lh^2\right)\right)\frac{Mh}{2}, \quad |Y_{n+1} - Y_n| \le \exp\left(n\left(2Lh^2\right)\left(1 + Lh^2\right)\right)\frac{Mh}{2},$$

which concludes the proof.

**Lemma 4.** *Let f and g have continuous second-order derivatives on* [*t*1, *tn*]*, and let f* (*g*) *satisfy a Lipschitz condition in the second and third arguments. Then the local error of Scheme II (29)–(30) is of order $h^3$.*

**Proof.** We consider the SODEs (5). We start with the Taylor expansion and the forward divided difference approximation of the second derivative (please see Appendix A for more details), i.e.,

$$\begin{aligned} x(t_{k+1}) &= x(t_k) + h x'(t_k) + \frac{h^2}{2} \left( \frac{x'(t_{k+1}) - x'(t_k)}{h} - \frac{h}{2} x'''(\varepsilon_{2k}) \right) + \frac{h^3}{6} x'''(\varepsilon_{1k}), \\ x(t_{k+1}) &= x(t_k) + \frac{h}{2} x'(t_k) + \frac{h}{2} x'(t_{k+1}) + h^3 \left[ \frac{1}{6} x'''(\varepsilon_{1k}) - \frac{1}{4} x'''(\varepsilon_{2k}) \right], \end{aligned}$$

where *εik* ∈ (*tk*, *tk*<sup>+</sup>1), *i* = 1, 2. The first derivative can be replaced by the right-hand side of the differential Equation (5). The Taylor expansion becomes:

$$x(t_{k+1}) = x(t_k) + \frac{h}{2} \left( f(t_k, x_k, y_k) + f\left(t_{k+1}, x_k + h f(t_k, x_k, y_k),\, y_k + h g(t_k, x_k, y_k) \right) \right) + e_f h^3,$$

where $e_f = \frac{1}{6} x'''(\varepsilon_{1k}) - \frac{1}{4} x'''(\varepsilon_{2k}) + \frac{1}{4} f_2$ and $f_2 = \frac{\partial}{\partial x} f(\varepsilon_{3k}, x(\varepsilon_{3k}), y(\varepsilon_{3k})) + \frac{\partial}{\partial y} f(\varepsilon_{3k}, x(\varepsilon_{3k}), y(\varepsilon_{3k}))$. We can write this as:

$$\begin{aligned} x_{k+1} &= x_k + \frac{h}{2} \left( K_0 + K_f \right) + e_f h^3, \\ y_{k+1} &= y_k + \frac{h}{2} \left( K_1 + K_g \right) + e_g h^3, \end{aligned}$$

where:

$K_0 = f(t_k, x_k, y_k)$, $K_1 = g(t_k, x_k, y_k)$, $K_f = f(t_{k+1}, x_k + hK_0, y_k + hK_1)$, $K_g = g(t_{k+1}, x_k + hK_0, y_k + hK_1)$, $e_g = \frac{1}{6} y'''(\varepsilon_{1k}) - \frac{1}{4} y'''(\varepsilon_{2k}) + \frac{1}{4} g_2$ and $g_2 = \frac{\partial}{\partial x} g(\varepsilon_{3k}, x(\varepsilon_{3k}), y(\varepsilon_{3k})) + \frac{\partial}{\partial y} g(\varepsilon_{3k}, x(\varepsilon_{3k}), y(\varepsilon_{3k}))$.

Now, let *f* (*g*) satisfy a Lipschitz condition in the second and third arguments. By Lemma 1 and Remark 5, we have:

$$\begin{aligned} |f(t_k, x_k, y_k) - f(t_k, X_k, Y_k)| &\le L\left(|x_k - X_k| + |y_k - Y_k|\right) \le \alpha_f h^2 \le \alpha h^2, \\ |g(t_k, x_k, y_k) - g(t_k, X_k, Y_k)| &\le \alpha_g h^2 \le \alpha h^2, \end{aligned}$$

where *α* = *α<sup>f</sup>* + *α<sup>g</sup>* is a positive constant. Once again, using Remark 5 and according to (29)–(30), we obtain for fixed *k* = 1, . . . , *n* − 1:

$$x_{k+1} - X_{k+1} = x_k - X_k + \frac{h}{2} \left( K_0 - \hat{F}_k \right) + \frac{h}{2} \left( K_f - \hat{F}_{k+1}^{*} \right) + e_f h^3.$$

$$\left| K_0 - \hat{F}_k \right| \le \left| f(t_k, x_k, y_k) - f(t_k, X_k, Y_k) \right| + \left| f(t_k, X_k, Y_k) - \hat{F}_k \right| \le \alpha h^2 + \frac{1}{6} M_2 h^2,$$

$$\left| K_1 - \hat{G}_k \right| \le \left| g(t_k, x_k, y_k) - g(t_k, X_k, Y_k) \right| + \left| g(t_k, X_k, Y_k) - \hat{G}_k \right| \le \alpha h^2 + \frac{1}{6} M_2 h^2.$$

By the trapezium formula, we have:

$$\left| f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) - \hat{F}_{k+1}^{*} \right| \le \frac{1}{6} M_{2f} h^2 \le \frac{1}{6} M_2 h^2.$$

Note that:

$$\left| f(t_{k+1}, x_k + hK_0, y_k + hK_1) - f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) \right| \le L \left[ |x_k - X_k| + h \left| K_0 - \hat{F}_k \right| + |y_k - Y_k| + h \left| K_1 - \hat{G}_k \right| \right].$$

Next,

$$\begin{aligned} \left| K_f - \hat{F}_{k+1}^{*} \right| &\le \left| f(t_{k+1}, x_k + hK_0, y_k + hK_1) - f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) \right| + \left| f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) - \hat{F}_{k+1}^{*} \right| \\ &\le \left[\alpha h^2 + \frac{1}{6} M_2 h^2\right] \left(2Lh + 1\right). \end{aligned}$$

This leads to:

$$|x_{k+1} - X_{k+1}| \le |x_k - X_k| + \frac{h}{2} \left( \alpha h^2 + \frac{1}{6} M_2 h^2 \right) + \frac{h}{2} \left( \left[ \alpha h^2 + \frac{1}{6} M_2 h^2 \right] (2Lh + 1) \right) + e_f h^3 = |x_k - X_k| + E_f h^3. \tag{31}$$

Similarly, $|y_{k+1} - Y_{k+1}| \le |y_k - Y_k| + E_g h^3$, where $E_f$ and $E_g$ are appropriate constants that depend on $f$ and $g$, respectively. Therefore, the error of this method is $O(h^3)$.
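The O(h³) local (hence O(h²) global) behaviour can also be observed numerically. In the simplified sketch below, the F-transform components are replaced by point values of f and g (their zeroth-order approximation), so Scheme II reduces to a Heun-type predictor–corrector; halving h should then divide the global error by roughly four. The test system x' = y, y' = −x is an illustrative assumption:

```python
import math

def heun(f, g, t1, tn, n, x1, y1):
    """Heun-type predictor-corrector (Scheme II with point-value slopes)."""
    h = (tn - t1) / (n - 1)
    t, x, y = t1, x1, y1
    for _ in range(n - 1):
        kx, ky = f(t, x, y), g(t, x, y)
        xs, ys = x + h * kx, y + h * ky           # predictor X*, Y*
        x += 0.5 * h * (kx + f(t + h, xs, ys))    # trapezoidal corrector
        y += 0.5 * h * (ky + g(t + h, xs, ys))
        t += h
    return x, y

f = lambda t, x, y: y
g = lambda t, x, y: -x
e1 = abs(heun(f, g, 0.0, 1.0, 101, 1.0, 0.0)[0] - math.cos(1.0))  # h = 0.01
e2 = abs(heun(f, g, 0.0, 1.0, 201, 1.0, 0.0)[0] - math.cos(1.0))  # h = 0.005
ratio = e1 / e2   # close to 4 for a globally second-order method
```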

To see that Scheme II is globally a second-order method, we need to establish its convergence.

**Theorem 6.** *Let the assumptions of Lemma 4 be fulfilled. Consider Scheme II (29)–(30) for some positive integer k, and let* {*Ak* | *k* = 1, ... , *n* − 1}*, n* ≥ 2, *be a uniform fuzzy partition of* [*t*1, *tn*]*. Thus, if a sequence of h* → 0*, and with each h, we compute the components Xk*,*h*, *Yk*,*h, then* $|x(t_k) - X_{k,h}|$, $|y(t_k) - Y_{k,h}|$ *converge to zero for each k* = 1, . . . , *n* − 1*.*

**Proof.** The proof is similar to the proof of Theorem 4, so we just write out the procedure. According to Remark 5 and (31), we get:

$$x_{k+1} - X_{k+1} = x_k - X_k + \frac{h}{2} \left( K_0 - \hat{F}_k \right) + \frac{h}{2} \left( K_f - \hat{F}_{k+1}^{*} \right) + e_f h^3,$$

$$\left| K_0 - \hat{F}_k \right| \le L_1 |x_k - X_k| + L_2 |y_k - Y_k| + \frac{1}{6} M_2 h^2,$$

$$\left| K_1 - \hat{G}_k \right| \le L_3 |x_k - X_k| + L_4 |y_k - Y_k| + \frac{1}{6} M_2 h^2,$$

$$\left| f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) - \hat{F}_{k+1}^{*} \right| \le \frac{1}{6} M_{2f} h^2 \le \frac{1}{6} M_2 h^2,$$

$$\left| f\left(t_{k+1}, x_k + hK_0, y_k + hK_1\right) - f\left(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}\right) \right| \le L_1 \left( |x_k - X_k| + h \left| K_0 - \hat{F}_k \right| \right) + L_2 \left( |y_k - Y_k| + h \left| K_1 - \hat{G}_k \right| \right),$$

$$\begin{split} \left| K_f - \hat{F}_{k+1}^{*} \right| &\le \left| f(t_{k+1}, x_k + hK_0, y_k + hK_1) - f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) \right| + \left| f(t_{k+1}, X_{k+1}^{*}, Y_{k+1}^{*}) - \hat{F}_{k+1}^{*} \right| \\ &\le L_1 |x_k - X_k| + hL_1 \left| K_0 - \hat{F}_k \right| + L_2 |y_k - Y_k| + hL_2 \left| K_1 - \hat{G}_k \right| + \frac{1}{6} M_2 h^2. \end{split}$$

Once again, using Remark 5 gives:

$$\begin{split} |x_{k+1} - X_{k+1}| &\le |x_k - X_k| + \frac{h}{2} \left( L_1 |x_k - X_k| + L_2 |y_k - Y_k| + \frac{1}{6} M_2 h^2 \right) \\ &\quad + \frac{h}{2} \left( L_1 |x_k - X_k| + hL_1 \left| K_0 - \hat{F}_k \right| + L_2 |y_k - Y_k| + hL_2 \left| K_1 - \hat{G}_k \right| + \frac{1}{6} M_2 h^2 \right) \\ &\quad + \left( \frac{1}{4} S M_1 - \frac{1}{12} M_2 \right) h^3, \end{split}$$

where

$$e_f \le \frac{1}{6} M_{2f} - \frac{1}{4} M_{2f} + \frac{1}{4} S M_{1f} = \frac{1}{4} S M_{1f} - \frac{1}{12} M_{2f},$$

and $M_1 = M_{1f} + M_{1g}$, $M_{1f} = \max_{t \in [t_1, t_n]} |f'(t, x, y)|$, $M_{1g} = \max_{t \in [t_1, t_n]} |g'(t, x, y)|$, $M_2 = M_{2f} + M_{2g}$, $M_{2f} = \max_{t \in [t_1, t_n]} |f''(t, x, y)|$, $M_{2g} = \max_{t \in [t_1, t_n]} |g''(t, x, y)|$, and $S = S_f + S_g$ is the upper bound of $\frac{\partial f}{\partial x_i}$ $\left(\frac{\partial g}{\partial x_i}\right)$, $i = 1, 2$, with $x = x_1$ and $y = x_2$.

Simplifying, then:

$$\begin{aligned} |\mathbf{x}\_{k+1} - \mathbf{X}\_{k+1}| &\leq |\mathbf{x}\_k - \mathbf{X}\_k| + \\ &+ \frac{h}{2} \left( 2L\_1 + hL\_1L\_1 + hL\_2L\_3 \right) |\mathbf{x}\_k - \mathbf{X}\_k| + \frac{h}{2} \left( 2L\_2 + hL\_1L\_2 + hL\_2L\_4 \right) |y\_k - \mathbf{Y}\_k| \\ &+ \left( \frac{1}{4}SM\_1 + \frac{1}{12} \left[ hL\_1 + hL\_2 + 1 \right] M\_2 \right) h^3. \end{aligned}$$

Similarly,

$$\begin{aligned} \left|y\_{k+1} - Y\_{k+1}\right| &\leq \left|y\_k - Y\_k\right| \\ &+ \frac{h}{2} \left(2L\_4 + hL\_3L\_2 + hL\_4L\_4\right) \left|y\_k - Y\_k\right| + \frac{h}{2} \left(2L\_3 + hL\_3L\_1 + hL\_4L\_3\right) \left|x\_k - X\_k\right| \\ &+ \left(\frac{1}{4}SM\_1 + \frac{1}{12}\left[hL\_3 + hL\_4 + 1\right]M\_2\right)h^3. \end{aligned}$$

Therefore,

*Case* 1. If $L = \sum_{i=1}^{4} L_i$ and $c = \frac{1}{2} S M_1 + \frac{1}{6} \left[2hL + 1\right] M_2$, we get

$$|x_{k+1} - X_{k+1}| \le |x_{k+1} - X_{k+1}| + |y_{k+1} - Y_{k+1}| \le \left[1 + 2hL\left(1 + hL\right)\right]\left(|x_k - X_k| + |y_k - Y_k|\right) + \frac{h^3}{2}c,$$

$$|y_{k+1} - Y_{k+1}| \le |x_{k+1} - X_{k+1}| + |y_{k+1} - Y_{k+1}| \le \left[1 + 2hL\left(1 + hL\right)\right]\left(|x_k - X_k| + |y_k - Y_k|\right) + \frac{h^3}{2}c.$$

By using the proof of Theorem 4, when *U*<sup>1</sup> = 0, this implies that:

$$|x\_n - X\_n| \le \frac{h^2 c}{4L\left(1 + \frac{L(t\_n - t\_1)}{n}\right)} \left[\exp\left(2L\left(t\_n - t\_1\right)\left(1 + \frac{L\left(t\_n - t\_1\right)}{n}\right)\right) - 1\right].\tag{32}$$

In a similar way,

$$|y\_n - Y\_n| \le \frac{h^2 c}{4L\left(1 + \frac{L(t\_n - t\_1)}{n}\right)} \left[\exp\left(2L\left(t\_n - t\_1\right)\left(1 + \frac{L\left(t\_n - t\_1\right)}{n}\right)\right) - 1\right].\tag{33}$$

*Case* 2. If *L* = *Li*, *i* = 1, . . . , 4 and:

$$\begin{aligned} \left| f(t_k, x_k, y_k) - \hat{F}_k \right| &\le \frac{h^2}{6} M_2 + 2L \max\left\{ |x_k - X_k|, |y_k - Y_k| \right\}, \\ \left| g(t_k, x_k, y_k) - \hat{G}_k \right| &\le \frac{h^2}{6} M_2 + 2L \max\left\{ |x_k - X_k|, |y_k - Y_k| \right\}. \end{aligned}$$

Thus,

$$\begin{cases} |\mathbf{x}\_{k+1} - \mathbf{X}\_{k+1}| \le |\mathbf{x}\_k - \mathbf{X}\_k| + 2Lh\left(1 + hL\right) \max\left\{ |\mathbf{x}\_k - \mathbf{X}\_k|, |y\_k - \mathbf{Y}\_k| \right\} + \frac{h^3}{2}c, \\\ |y\_{k+1} - \mathbf{Y}\_{k+1}| \le |y\_k - \mathbf{Y}\_k| + 2Lh\left(1 + hL\right) \max\left\{ |\mathbf{x}\_k - \mathbf{X}\_k|, |y\_k - \mathbf{Y}\_k| \right\} + \frac{h^3}{2}c. \end{cases}$$

where $c = \frac{1}{2} S M_1 + \frac{1}{6} \left[2hL + 1\right] M_2$. Consequently,

$$\begin{cases} \left| \mathbf{x}\_{k} - \mathbf{X}\_{k} \right| \le \left( 1 + 4Lh\left(1 + hL\right) \right)^{k} \left| \boldsymbol{\mathcal{U}}\_{1} \right| + h^{3} c \frac{\left( 1 + 4Lh\left(1 + hL \right) \right)^{k} - 1}{4Lh\left(1 + hL \right)}, \\\ \left| \mathbf{y}\_{k} - \mathbf{Y}\_{k} \right| \le \left( 1 + 4Lh\left(1 + hL \right) \right)^{k} \left| \boldsymbol{\mathcal{U}}\_{1} \right| + h^{3} c \frac{\left( 1 + 4Lh\left(1 + hL \right) \right)^{k} - 1}{4Lh\left(1 + hL \right)}. \end{cases}$$

where $U\_k = |x\_k - X\_k| + |y\_k - Y\_k|$. In particular, when $U\_1 = 0$, we get:

$$\begin{cases} & |\mathbf{x}\_{n} - \mathbf{X}\_{n}| \le h^{2}c \frac{\left[\exp\left(4L(t\_{n} - t\_{1})\left(1 + \frac{L\left(t\_{n} - t\_{1}\right)}{n}\right)\right) - 1\right]}{4L\left(1 + \frac{L\left(t\_{n} - t\_{1}\right)}{n}\right)},\\ & |y\_{n} - \mathbf{Y}\_{n}| \le h^{2}c \frac{\left[\exp\left(4L(t\_{n} - t\_{1})\left(1 + \frac{L\left(t\_{n} - t\_{1}\right)}{n}\right)\right) - 1\right]}{4L\left(1 + \frac{L\left(t\_{n} - t\_{1}\right)}{n}\right)},\end{cases} \tag{34}$$

and if $h = \{h\_1, \ldots, h\_m\} \to 0$, $m > 0$, in (32), (33) and (34), we get $x\_n - X\_{n,h} \to 0$, $y\_n - Y\_{n,h} \to 0$, which concludes the proof.

#### **4. Applications**

A central difficulty appears when variable coefficients are added to the model, that is, when *α* (*t*), *β* (*t*), *δ* (*t*), *γ* (*t*) are analytic functions. The resulting differential equations form a non-autonomous SODE. In this model, the growth rate of the prey, the efficiency of the predator (its ability to capture prey), the death rate of the predator and the growth rate of the predator are all allowed to vary in time. Since the coefficients are time varying, careful attention must be paid to obtaining the correct recurrence equation system of the model. The model, incorporating the above functions, is as follows [30,33,34]:

$$\begin{aligned} \frac{dx}{dt} &= \alpha\left(t\right)x\left(t\right) - \beta\left(t\right)x\left(t\right)y\left(t\right), & x\left(0\right) = x\_1,\\ \frac{dy}{dt} &= \delta\left(t\right)x\left(t\right)y\left(t\right) - \gamma\left(t\right)y\left(t\right), & y\left(0\right) = y\_1. \end{aligned} \tag{35}$$
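As a classical baseline for (35), the two-stage trapezoidal (Heun) rule that the paper compares against can be sketched in a few lines of Python. This is an illustrative sketch only (the names `heun`, `f`, `g` are ours, not the paper's); as a check it uses the coefficients of Example 2 below, whose exact solution $x(t) = y(t) = 2/(2 - \exp(t^2/2))$ is stated in the text.

```python
import math

# Coefficients of Example 2: alpha(t) = beta(t) = -t, gamma(t) = delta(t) = t.
def f(t, x, y):
    return -t * x - (-t) * x * y      # dx/dt = alpha(t) x - beta(t) x y

def g(t, x, y):
    return t * x * y - t * y          # dy/dt = delta(t) x y - gamma(t) y

def heun(f, g, x0, y0, a, b, n):
    """Classical two-stage (trapezoidal/Heun) integrator for the SODE (35)."""
    h = (b - a) / n
    t, x, y = a, x0, y0
    for _ in range(n):
        k0, k1 = f(t, x, y), g(t, x, y)
        xp, yp = x + h * k0, y + h * k1            # Euler predictor
        x += h / 2 * (k0 + f(t + h, xp, yp))       # trapezoidal corrector
        y += h / 2 * (k1 + g(t + h, xp, yp))
        t += h
    return x, y

x_num, y_num = heun(f, g, 2.0, 2.0, 0.0, 0.5, 50)
exact = 2.0 / (2.0 - math.exp(0.5 ** 2 / 2))       # exact x(0.5) = y(0.5)
```

With step size $h = 0.01$ the second-order scheme reproduces the exact value to a few decimal places, which is the benchmark the fuzzy schemes are measured against.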

Three examples are discussed in order to demonstrate the results obtained by Scheme I (14)–(15) and Scheme II (29)–(30): two examples for the numerical solution of the model (35) and one example for the linear case.

**Example 1.** *Consider the problem of the Lotka–Volterra prey-predator model (35). We take*

$\alpha\left(t\right) = 4 + \tan\left(t\right)$, $\beta\left(t\right) = \exp(2t)$, $\gamma\left(t\right) = -2$, $\delta\left(t\right) = \cos(t)$, $x(0) = -4$ and $y(0) = 4$.

*The exact solution for these coefficients is* $x(t) = \frac{-4}{\cos(t)}$, $y(t) = 4\exp(-2t)$*, as proposed by [30,33,34].*
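These closed forms can be sanity-checked by substituting them into (35) with the stated coefficients; the sketch below does this numerically with central-difference derivatives (all function names are illustrative).

```python
import math

# Coefficients of Example 1 and the proposed exact solution.
alpha = lambda t: 4 + math.tan(t)
beta  = lambda t: math.exp(2 * t)
gamma = lambda t: -2.0
delta = lambda t: math.cos(t)
x = lambda t: -4 / math.cos(t)                 # exact prey solution
y = lambda t: 4 * math.exp(-2 * t)             # exact predator solution

def residuals(t, h=1e-5):
    """Residuals of both equations of (35) at time t (central differences)."""
    dx = (x(t + h) - x(t - h)) / (2 * h)
    dy = (y(t + h) - y(t - h)) / (2 * h)
    r1 = dx - (alpha(t) * x(t) - beta(t) * x(t) * y(t))
    r2 = dy - (delta(t) * x(t) * y(t) - gamma(t) * y(t))
    return r1, r2

r1, r2 = residuals(0.5)                        # both residuals should be ~0
```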

**Example 2.** *Consider the problem of the Lotka–Volterra prey-predator model (35) with*

$\alpha\left(t\right) = -t$, $\beta\left(t\right) = -t$, $\gamma\left(t\right) = t$, $\delta\left(t\right) = t$, $x\left(0\right) = 2$ and $y\left(0\right) = 2$.

*The exact solution for these coefficients is* $x(t) = \frac{2}{2 - \exp(t^2/2)}$, $y(t) = \frac{2}{2 - \exp(t^2/2)}$*, as proposed by [30,33].*

**Example 3.** *Consider the following non-autonomous SODEs with initial values (5):*

$$\begin{cases} \mathbf{x}'(t) = \mathbf{x}(t) - y(t) + 2t - t^2 - t^3 & , \mathbf{x}(0) = 1 \ , t \in [0, 1] \\ y'(t) = \mathbf{x}(t) + y(t) - 4t^2 + t^3 & , y(0) = 0. \end{cases} \tag{36}$$

*The exact solution of (36) is given by* $x(t) = e^t \cos(t) + t^2$ *and* $y(t) = e^t \sin(t) - t^3$*.*

The results obtained by the proposed fuzzy approximation methods with the raised cosine generating function are listed in Tables 1–7, and Table 8 reports the proposed fuzzy approximation methods with both the triangular and the raised cosine generating functions. The proposed fuzzy approximation methods are generated by Algorithms A1 and A2 (please see Appendix B). The mean square error (MSE) is defined as $\text{MSE} = \frac{1}{n}\sum\_{k=1}^{n}\left(Y\_k - y(t\_k)\right)^2$, an easily computable quantity for a particular sample. From the numerical tests, the results are summarized as follows:
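In code, the MSE between the approximations $Y\_k$ and the exact values $y(t\_k)$ is a one-liner (names are illustrative):

```python
def mse(Y, y_exact, ts):
    """MSE = (1/n) * sum_k (Y_k - y(t_k))^2."""
    return sum((Yk - y_exact(tk)) ** 2 for Yk, tk in zip(Y, ts)) / len(Y)

# Toy check: the constant 0 approximating y(t) = t on nodes 0, 1, 2
# gives MSE = (0 + 1 + 4)/3 = 5/3.
err = mse([0.0, 0.0, 0.0], lambda t: t, [0.0, 1.0, 2.0])
```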


Better results (in comparison with the nonlinear cases) are obtained in the linear case, the non-autonomous SODE of Example 3. Further, the results obtained using the proposed fuzzy approximation methods for Examples 1–3 with the raised cosine generating function are shown in Figure 1, which compares the numerical Schemes I and II with the exact solutions: for Examples 1 and 2 the comparison uses *h* = 0.01, while for Example 3 it uses *h* = 0.1. All the graphs are plotted using MATLAB. This constitutes an important improvement over the previous fuzzy approach, which did not provide such information for SODEs.

**Remark 6.** *We compare the new FzT-based results with conventional numerical methods. For a discussion of the conventional numerical methods used here, the Euler method and the trapezoidal rule for solving SODEs, see, for example, [35,36].*


**Table 1.** The values of MSE for Examples 1–3.

**Table 2.** Comparison of numerical results of *x*(*t*) for Example 3.


**Table 3.** Comparison of numerical results of *y*(*t*) for Example 3.



**Table 4.** Comparison of numerical results of *x*(*t*) for Example 1.

**Table 5.** Comparison of numerical results of *y*(*t*) for Example 1.



**Table 6.** Comparison of numerical results of *x*(*t*) for Example 2.

**Table 7.** Comparison of numerical results of *y*(*t*) for Example 2.


**Figure 1.** A comparison between three fuzzy numerical methods and the exact solution for three examples.


**Table 8.** The values of MSE for Examples 1–3 by the different types of fuzzy partitions.

<sup>1</sup> Triangular generating function; <sup>2</sup> Raised cosine generating function.

#### **5. Conclusions**

We extended the applicability of fuzzy-based numerical methods to problems of conventional mathematics, contributing in particular to approximation methods for SODEs. Two approximation methods based on the FzT were proposed and their error estimates analyzed. Moreover, we proved that the two approximation methods, Schemes I and II, determine approximate solutions converging to the exact solution, and that the local truncation error of Scheme I (Scheme II) is $O(h^2)$ ($O(h^3)$). As an application, a system of nonlinear differential equations was solved using Schemes I and II. From the numerical results, it is observed that the new fuzzy approximation methods yield more accurate results than the classical Euler method (one-stage) and the classical trapezoidal rule (two-stage). Hence, the new fuzzy approximation methods provide alternative techniques for solving differential equations with better results, and the objective of this research was achieved.

As a consequence, it should be noted that the numerical solutions depend on the type of uniform fuzzy partition. For the cases $1 - \left|\frac{x - x\_k}{h}\right|$ and $\frac{1}{2}\left(1 + \cos\left(\pi \frac{x - x\_k}{h}\right)\right)$, the shape of the basic functions determines the form of representation (linear or nonlinear) of the numerical solution. This agrees with the results proposed by [5,6] using uniform fuzzy partitions. It is also worth pointing out that the results in this research are better, in comparison with the classical numerical methods, using uniform fuzzy partitions for both linear and nonlinear cases. Thus, the proposed method is well suited for solving SODEs (5), in the linear or nonlinear case, under the assumption that $f$ and $g$ satisfy the Lipschitz condition. To obtain the best possible approximation of $f$ and $g$, the number $n$ of components should be large. It should be stressed that the FzT can also be used for removing noise from the given data, which is especially important for various practical applications of FzT. The proposed methods can also be applied to the $n$-dimensional system of first-order coupled differential equations with a non-noisy or noisy right-hand side. The discussion will continue in [37] with more details about the fuzzy partition and the modification of multiple steps.

**Author Contributions:** Conceptualization and performed the numerical experiments, H.A. ALKasasbeh; Evaluated the results and supported this work, I. Perfilieva; Project administration and designed the numerical methods, M. Z. Ahmad; Software and data curation, Z. R. Yahyaa.

**Funding:** The work of Irina Perfilieva has been supported by the project "IT4Innovations excellence in science, LQ1602" and by the Grant Agency of the Czech Republic (Project No. 16-09541S).

**Acknowledgments:** The authors would like to express their thanks to the editors and the anonymous referees for their valuable comments and suggestions that contributed to the paper. Many thanks are given to Universiti Malaysia Perlis for providing all facilities until this work was completed successfully.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **Appendix A. Taylor Series**

By Taylor series expansion,

$$\begin{split} \mathbf{x}(t\_{k+1}) &= \mathbf{x}(t\_k) + h\mathbf{x}'(t\_k) + \frac{h^2}{2}\mathbf{x}''(t\_k) + \frac{h^3}{6}\mathbf{x}'''(\varepsilon\_{1k}), \\ &= \mathbf{x}(t\_k) + h\mathbf{x}'(t\_k) + \frac{h^2}{2}\left(\frac{\mathbf{x}'(t\_{k+1}) - \mathbf{x}'(t\_k)}{h} - \frac{h}{2}\mathbf{x}'''(\varepsilon\_{2k})\right) + \frac{h^3}{6}\mathbf{x}'''(\varepsilon\_{1k}), \\ &= \mathbf{x}(t\_k) + \frac{h}{2}\mathbf{x}'(t\_k) + \frac{h}{2}\mathbf{x}'(t\_{k+1}) + h^3\left[\frac{1}{6}\mathbf{x}'''(\varepsilon\_{1k}) - \frac{1}{4}\mathbf{x}'''(\varepsilon\_{2k})\right], \end{split} \tag{A1}$$

where $x''(t\_k) = \frac{x'(t\_{k+1}) - x'(t\_k)}{h} - \frac{h}{2}x'''(\varepsilon\_{2k})$. The chain rule can be used to derive that:

$$\begin{aligned} \mathbf{x}'(t\_{k+1}) &= f\left(t\_{k+1}, \mathbf{x}(t\_{k+1}), \mathbf{y}(t\_{k+1})\right) \\ &= f\left(t\_{k+1}, \mathbf{x}(t\_k) + h f(t\_k, \mathbf{x}(t\_k), \mathbf{y}(t\_k)), \mathbf{y}(t\_k) + h \mathbf{g}(t\_k, \mathbf{x}(t\_k), \mathbf{y}(t\_k))\right) \\ &+ \frac{h^2}{2} f\_2(\varepsilon\_{3k}, \mathbf{x}(\varepsilon\_{3k}), \mathbf{y}(\varepsilon\_{3k})) \mathbf{x}''(\varepsilon\_{3k}), \end{aligned}$$

where $f\_2(\varepsilon\_{3k}, x(\varepsilon\_{3k}), y(\varepsilon\_{3k})) = \frac{\partial}{\partial x} f(\varepsilon\_{3k}, x(\varepsilon\_{3k}), y(\varepsilon\_{3k})) + \frac{\partial}{\partial y} f(\varepsilon\_{3k}, x(\varepsilon\_{3k}), y(\varepsilon\_{3k}))$.

Substituting this into Equation (A1) yields:

$$\begin{split} x(t\_{k+1}) &= x(t\_k) + \frac{h}{2} x'(t\_k) \\ &+ \frac{h}{2} f\left(t\_{k+1}, x(t\_k) + h f(t\_k, x(t\_k), y(t\_k)), y(t\_k) + h g(t\_k, x(t\_k), y(t\_k))\right) \\ &+ \frac{h}{2} \frac{h^2}{2} f\_2(\varepsilon\_{3k}, x(\varepsilon\_{3k}), y(\varepsilon\_{3k})) x''(\varepsilon\_{3k}) \\ &+ h^3 \left[\frac{1}{6} x'''(\varepsilon\_{1k}) - \frac{1}{4} x'''(\varepsilon\_{2k})\right], \\ x(t\_{k+1}) &= x(t\_k) \\ &+ \frac{h}{2} \left(x'(t\_k) + f\left(t\_{k+1}, x(t\_k) + h f(t\_k, x(t\_k), y(t\_k)), y(t\_k) + h g(t\_k, x(t\_k), y(t\_k))\right)\right) \\ &+ h^3 \left[\frac{1}{6} x'''(\varepsilon\_{1k}) - \frac{1}{4} x'''(\varepsilon\_{2k}) + \frac{1}{4} f\_2(\varepsilon\_{3k}, x(\varepsilon\_{3k}), y(\varepsilon\_{3k})) x''(\varepsilon\_{3k})\right]. \end{split}$$

It can be rewritten as:

$$x(t\_{k+1}) = x(t\_k) + \frac{h}{2} \left( K\_0 + K\_f \right) + e\_f h^3,$$

where:

$$\begin{aligned} K\_0 &= x'(t\_k) = f(t\_k, x(t\_k), y(t\_k)), \quad K\_1 = y'(t\_k) = g(t\_k, x(t\_k), y(t\_k)), \quad K\_f = f(t\_{k+1}, x(t\_k) + hK\_0, y(t\_k) + hK\_1), \\ e\_f &= \frac{1}{6} x'''(\varepsilon\_{1k}) - \frac{1}{4} x'''(\varepsilon\_{2k}) + \frac{1}{4} f\_2(\varepsilon\_{3k}, x(\varepsilon\_{3k}), y(\varepsilon\_{3k})) x''(\varepsilon\_{3k}), \quad \text{and } t\_k < \varepsilon\_{1k}, \varepsilon\_{2k}, \varepsilon\_{3k} < t\_{k+1}. \end{aligned}$$

Similarly,

$$y(t\_{k+1}) = y(t\_k) + \frac{h}{2} \left( K\_1 + K\_g \right) + e\_g h^3,$$

where:

$$\begin{split} K\_g &= g(t\_{k+1}, x(t\_k) + hK\_0, y(t\_k) + hK\_1), \quad e\_g = \frac{1}{6} y'''(\xi\_{1k}) - \frac{1}{4} y'''(\xi\_{2k}) + \frac{1}{4} g\_2(\xi\_{3k}, x(\xi\_{3k}), y(\xi\_{3k})) y''(\xi\_{3k}), \\ g\_2(\xi\_{3k}, x(\xi\_{3k}), y(\xi\_{3k})) &= \frac{\partial}{\partial x} g(\xi\_{3k}, x(\xi\_{3k}), y(\xi\_{3k})) + \frac{\partial}{\partial y} g(\xi\_{3k}, x(\xi\_{3k}), y(\xi\_{3k})), \end{split}$$

and $t\_k < \xi\_{1k}, \xi\_{2k}, \xi\_{3k} < t\_{k+1}$.

#### **Appendix B. Algorithms**

In this Appendix, the algorithms of the approximation methods based on FzT for Sections 3.2 and 3.3 are explained in detail. The algorithms are described in simplified, easy-to-read pseudocode, which specifies the form of the input to be supplied and the form of the desired output. A stopping technique independent of the numerical technique is incorporated into each algorithm to avoid infinite loops. Two punctuation symbols are used in the algorithms: a period (.) indicates the termination of a step, and a semicolon (;) separates tasks within a step. The notation integral(function, lower limit, upper limit) denotes a definite integral. The steps in the algorithms follow the rules of structured program construction, and they have been arranged so that there should be minimal difficulty translating the pseudocode into any programming language suitable for scientific applications. We approximate the solution of SODEs (5) at (*N* + 1) equally spaced numbers in the interval [*a*, *b*] as follows.

**Algorithm A1.** One-stage (modified Euler method) algorithm for the system of ODEs.

```
INPUT: f(t, x, y) and g(t, x, y) in Equation (5); endpoints a, b; integer N; initial condition y1.
Step 1 Set h = (b − a)/N; X1 = x1; Y1 = y1; t1 = a; k = 1, . . . , N + 1; tk = a + (k − 1)h.
Step 2 Define the generalized uniform fuzzy partition as Ak(t) = (1/2)(1 + cos(pi (t − t(k))/h)).
Step 3 For k = 1 to N, do Steps 4–7.
              Step 4 F(k) =integral(f(t, X(k),Y(k))Ak(t), t(k − 1), t(k + 1))/integral(Ak(t), t(k − 1), t(k + 1)).
              Step 5 G(k) =integral(g(t, X(k),Y(k))Ak(t), t(k − 1), t(k + 1))/integral(Ak(t), t(k − 1), t(k + 1)).
              Step 6 X(k + 1) =X(k) + hF(k).
              Step 7 Y(k + 1) =Y(k) + hG(k).
     end.
OUTPUT: Approximation X and Y to x and y, respectively, at the (N + 1) values of t.
```
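Algorithm A1 translates almost line for line into Python. The sketch below is an illustrative transcription (names such as `ft_component` are ours, not the paper's), with the definite integrals of Steps 4–5 approximated by a simple midpoint quadrature; it is applied to Example 3, whose exact solution is known.

```python
import math

def basic_fn(t, tk, h):
    """Raised cosine basic function A_k centred at t_k with bandwidth h."""
    return 0.5 * (1 + math.cos(math.pi * (t - tk) / h)) if abs(t - tk) < h else 0.0

def ft_component(func, tk, h, a, b, m=64):
    """F-transform component: weighted mean of func over [t_k - h, t_k + h] within [a, b]."""
    lo, hi = max(a, tk - h), min(b, tk + h)
    dt = (hi - lo) / m
    num = den = 0.0
    for j in range(m):                       # midpoint quadrature for both integrals
        t = lo + (j + 0.5) * dt
        w = basic_fn(t, tk, h)
        num += func(t) * w
        den += w
    return num / den

def scheme1(f, g, x1, y1, a, b, N):
    """Algorithm A1 (Scheme I): X(k+1) = X(k) + h*F(k), Y(k+1) = Y(k) + h*G(k)."""
    h = (b - a) / N
    X, Y = [x1], [y1]
    for k in range(N):
        tk = a + k * h
        Fk = ft_component(lambda t: f(t, X[k], Y[k]), tk, h, a, b)
        Gk = ft_component(lambda t: g(t, X[k], Y[k]), tk, h, a, b)
        X.append(X[k] + h * Fk)
        Y.append(Y[k] + h * Gk)
    return X, Y

# Example 3: x' = x - y + 2t - t^2 - t^3, y' = x + y - 4t^2 + t^3.
f = lambda t, x, y: x - y + 2 * t - t ** 2 - t ** 3
g = lambda t, x, y: x + y - 4 * t ** 2 + t ** 3
X, Y = scheme1(f, g, 1.0, 0.0, 0.0, 1.0, 100)
ex, ey = math.e * math.cos(1) + 1, math.e * math.sin(1) - 1   # exact x(1), y(1)
```

As expected of a one-stage scheme, the global error behaves like $O(h)$, so with $h = 0.01$ the endpoint values agree with the exact solution to roughly two decimal places.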
**Algorithm A2.** Two-stage (modified trapezoidal rule) algorithm for the system of ODEs.

```
INPUT: f(t, x, y); g(t, x, y); endpoints a, b; integer N; initial condition y1.
Step 1 Set h = (b − a)/N; X1 = x1; Y1 = y1; t1 = a; k = 1, . . . , N + 1; tk = a + (k − 1)h.
Step 2 Define the generalized uniform fuzzy partition as Ak(t) = (1/2)(1 + cos(pi (t − t(k))/h)).
Step 3 For k = 1 to N, do Steps 4–11.
              Step 04 F(k) =integral(f(t, X(k),Y(k))Ak(t), t(k − 1), t(k + 1))/integral(Ak(t), t(k − 1), t(k + 1)).
              Step 05 G(k) =integral(g(t, X(k),Y(k))Ak(t), t(k − 1), t(k + 1))/integral(Ak(t), t(k − 1), t(k + 1)).
              Step 06 Xstar(k + 1) =X(k) + hF(k).
              Step 07 Ystar(k + 1) =Y(k) + hG(k).
              Step 08 Fstar(k + 1) =integral(f(t, Xstar(k + 1),Ystar(k + 1))Ak+1(t), t(k), t(k + 2))/integral(Ak+1(t), t(k), t(k + 2)).
              Step 09 Gstar(k + 1) =integral(g(t, Xstar(k + 1),Ystar(k + 1))Ak+1(t), t(k), t(k + 2))/integral(Ak+1(t), t(k), t(k + 2)).
              Step 10 X(k + 1) =X(k) + h (F(k) + Fstar(k + 1)) /2.
              Step 11 Y(k + 1) =Y(k) + h (G(k) + Gstar(k + 1)) /2.
     end.
OUTPUT: Approximation X and Y to x and y, respectively, at the (N + 1) values of t.
```
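A corresponding illustrative transcription of Algorithm A2 (again with our own names and a midpoint quadrature for the integrals of Steps 4–5 and 8–9) looks as follows; on Example 3 it is markedly more accurate than the one-stage scheme, consistent with its $O(h^3)$ local truncation error.

```python
import math

def basic_fn(t, tk, h):
    """Raised cosine basic function A_k centred at t_k with bandwidth h."""
    return 0.5 * (1 + math.cos(math.pi * (t - tk) / h)) if abs(t - tk) < h else 0.0

def ft_component(func, tk, h, a, b, m=64):
    """Weighted mean of func against A_k over [t_k - h, t_k + h] within [a, b]."""
    lo, hi = max(a, tk - h), min(b, tk + h)
    dt = (hi - lo) / m
    num = den = 0.0
    for j in range(m):
        t = lo + (j + 0.5) * dt
        w = basic_fn(t, tk, h)
        num += func(t) * w
        den += w
    return num / den

def scheme2(f, g, x1, y1, a, b, N):
    """Algorithm A2 (Scheme II): fuzzy predictor-corrector (modified trapezoidal rule)."""
    h = (b - a) / N
    X, Y = [x1], [y1]
    for k in range(N):
        tk = a + k * h
        Fk = ft_component(lambda t: f(t, X[k], Y[k]), tk, h, a, b)
        Gk = ft_component(lambda t: g(t, X[k], Y[k]), tk, h, a, b)
        xs, ys = X[k] + h * Fk, Y[k] + h * Gk                 # predictor, Steps 6-7
        Fs = ft_component(lambda t: f(t, xs, ys), tk + h, h, a, b)
        Gs = ft_component(lambda t: g(t, xs, ys), tk + h, h, a, b)
        X.append(X[k] + h * (Fk + Fs) / 2)                    # corrector, Steps 10-11
        Y.append(Y[k] + h * (Gk + Gs) / 2)
    return X, Y

f = lambda t, x, y: x - y + 2 * t - t ** 2 - t ** 3           # Example 3 again
g = lambda t, x, y: x + y - 4 * t ** 2 + t ** 3
X, Y = scheme2(f, g, 1.0, 0.0, 0.0, 1.0, 100)
ex, ey = math.e * math.cos(1) + 1, math.e * math.sin(1) - 1
```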
#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **New Approximation Methods Based on Fuzzy Transform for Solving SODEs: II**

#### **Hussein ALKasasbeh <sup>1,</sup>\*, Irina Perfilieva <sup>2</sup>, Muhammad Zaini Ahmad <sup>1</sup> and Zainor Ridzuan Yahya <sup>1</sup>**


Received: 17 June 2018; Accepted: 14 August 2018; Published: 23 August 2018

**Abstract:** In this research, three approximation methods are used in the new generalized uniform fuzzy partition to solve the system of differential equations (SODEs) based on fuzzy transform (FzT). New representations of basic functions are proposed based on the new types of a uniform fuzzy partition and a subnormal generating function. The main properties of a new uniform fuzzy partition are examined. Further, the simpler form of the fuzzy transform is given alongside some of its fundamental results. New theorems and lemmas are proved. In accordance with the three conventional numerical methods: Trapezoidal rule (one step) and Adams Moulton method (two and three step modifications), new iterative methods (NIM) based on the fuzzy transform are proposed. These new fuzzy approximation methods yield more accurate results in comparison with the above-mentioned conventional methods.

**Keywords:** fuzzy partition; fuzzy transform; numerical methods; NIM; systems of ordinary differential equations

#### **1. Introduction**

Differential equations are particularly useful in many areas of applied science and engineering. Many differential equations have no closed-form solutions, so many researchers have developed approximation methods for solving them, for example [1–3]. In this paper, we continue the study of approximation methods based on FzT for solutions of differential equations.

The core idea of FzT is a fuzzy partition of a universe into fuzzy subsets. The first fuzzy partition of FzT with the Ruspini condition was introduced by [4] and was extensively investigated by [5]. This condition implies normality of the fuzzy partition. In addition, the fuzzy partition with the generalized Ruspini condition (fuzzy *r*-partition) was introduced by [6]. This fuzzy partition was achieved by replacing the partition of unity by fuzzy *r*-partition. This type of partition was used by [6,7] for smoothing or filtering data based on the inverse FzT. Further, a generalized fuzzy partition appeared in connection with the notion of the FzT, where FzT components are polynomials of degree *m* [8]. By [9], different types of fuzzy partitions are taken into consideration such as B-splines, Shepard kernels, Bernstein basis polynomials and Favard-Szasz-Mirakjan operators. Later, the higher degree FzT based on B-splines was proposed [10] to improve the quality of the function approximation of two variables.

A generalized fuzzy partition was implicitly introduced by [11] with the purpose of meeting the requirements of image compression. In addition, a generalized fuzzy partition can also be considered in connection with radial membership functions [12]. Further, necessary and sufficient conditions for modeling the generalized fuzzy partition were provided by [13]. Recently, a new representation formula for basic functions of FzT and a new fuzzy numerical method based on block pulse functions for the numerical solution of integral equations were presented by [14]. The approximation method based on the FzT with Shepard-type basic functions for linear Fredholm integral equations was discussed by [15]. New representations of the generalized uniform fuzzy partitions for the normal case, which yield better approximation solutions for solving Cauchy problems, were presented by [16].

FzT is a soft computing method developed by Perfilieva [5] that has many applications, for example, in differential and integral equations. FzT for solving ordinary Cauchy problems with one variable was initiated by [4]. A generalization of the Euler method for solving ordinary Cauchy problems was discussed by [17], where the author applied this technique to models of reef growth and sea level variations. Further, FzT was generalized from the case of constant components to the case of polynomial components by [8]. Later, first and second degree FzT-based mid-point rules for solving the Cauchy problem and the uncertain initial value problem were proposed by [18]. Furthermore, an algorithm to obtain the approximate solutions of second order initial value problems was constructed by [19]. From this idea, FzT for numerical solutions of two point boundary value problems was proposed by [20].

FzT of two variables based on the finite difference method was used by [21] for solving a type of partial differential equation with Dirichlet boundary conditions and initial conditions. In addition, the first degree FzT of two variables was introduced by [22]. In [23], partial derivatives were approximated using the first FzT, and a modification of the Canny edge detector was proposed. Furthermore, a uniform stability result for the vibrations of a telegraph equation using FzT of two variables was proposed by [24]. The composition of the inverse and direct discrete FzT was extended to the numerical solution of Fredholm integral equations and Volterra Fredholm integral equations [25]. The general form of the higher order FzT was constructed by [26] for solving differential and integral equations using arbitrary basis functions. The FzT was investigated for solving the Volterra population growth model using an approximation of the Caputo derivative [27]. A new numerical method based on FzT was demonstrated to solve a class of delay differential equations by means of a Picard-like numerical scheme [28]. FzT was considered to approximate the solution of boundary value problems by minimizing the integral squared error in the 2-norm [29]. In [30], the dynamical properties of a two neuron system with respect to FzT and a single delay were investigated. Conditions under which quasi-consensus is achieved in a multi-agent system with sampled data based on FzT were proposed by [31].

NIM was proposed for solving nonlinear functional equations, together with the existence of solutions of nonlinear Volterra integral equations [2]. At the same time, NIM was introduced for solving nonlinear equations using a different decomposition technique [32]. From this conception, NIM was considered with terms up to fourth order in the Taylor series for solving nonlinear equations [33]. Sufficient conditions for convergence of the NIM have been presented [34]. A new predictor-corrector approach based on NIM was developed for fractional differential equations [35]. Classical methods were modified by [3] to derive numerous formulas for solving differential equations.

The motivation of the proposed study comes from [16,36,37]. In [16], new fuzzy numerical methods to solve the Cauchy problem were considered, and the authors showed that the error can be reduced by FzT and NIM with respect to new generalized uniform fuzzy partitions, namely the power of the triangular and raised cosine generalized uniform fuzzy partitions, where the generating functions are normal (see also [37] for another approach). In addition, two basic approximation methods, the modified Euler method and the trapezoidal rule, based on FzT for solving SODEs are analyzed in detail by [36]. For this purpose, more general new generalized uniform fuzzy partitions are proposed in this study, where a generating function is not normal.

The membership functions in underlying fuzzy partitions are often called basic functions. There has been growing interest in investigating the properties of fuzzy partitions. However, the problem arises of how one can effectively construct the basic functions of fuzzy partitions. In this paper, new representations of basic functions are proposed. This is achieved by introducing new generalized uniform fuzzy partitions, where a generating function is not normal. Further, new fuzzy numerical methods based on NIM and FzT for solving SODEs are introduced and discussed. In particular, we consider functions of two variables with initial conditions. In accordance with the existing methods, the Trapezoidal rule and Adams Moulton methods are improved using FzT and NIM; the methods are combined with one-step, two-step and three-step variants. As an application, all these methods are used to solve a general model of a dynamical system, i.e., the Lotka–Volterra equation with derivatives and with variable coefficients. Furthermore, numerical examples are presented. It is observed that the new fuzzy numerical methods yield more accurate results than the classical Trapezoidal rule and the classical Adams Moulton methods (2- and 3-step).

The paper is organized as follows. The main parts of the paper are Sections 3 and 4, which provide new representations for basic functions of FzT, followed by the modified one-step, 2-step and 3-step methods based on NIM and FzT with respect to the new representation formulas for the generalized uniform fuzzy partition of FzT. In Section 5, numerical examples are discussed. Finally, conclusions are given in Section 6.

#### **2. Basic Concepts**

In this section, we give some definitions and introduce the necessary notation following [38], which will be used throughout the paper. Throughout this section, we deal with an interval $[a, b] \subset \mathbb{R}$ of real numbers.

**Definition 1.** *(generalized uniform fuzzy partition) Let $t\_i \in [a, b]$, $i = 1, \ldots, n$, be fixed nodes such that $a = t\_1 < \ldots < t\_n = b$, $t\_0 = t\_1$, $t\_{n+1} = t\_n$, $n \ge 2$ and $[t\_i - h, t\_i + h] \subseteq [a, b]$. We say that the fuzzy sets $A\_i : [a, b] \to [0, 1]$ constitute a generalized fuzzy partition of $[a, b]$ if the following conditions are fulfilled:*


*Fuzzy sets $A\_1, \ldots, A\_n$ are called basic functions. It is important to remark that, by the conditions of locality and continuity, $\int\_a^b A\_i(t)\,dt > 0$. A generalized uniform fuzzy partition of $[a, b]$ is defined for equidistant nodes, i.e., for all $i = 1, \ldots, n-1$, $t\_{i+1} = t\_i + h$, where $h = (b - a)/(n - 1)$, and two additional properties are satisfied:*

*4. $A\_i(t\_i - t) = A\_i(t\_i + t)$ for all $t \in [0, h]$, $i = 2, \ldots, n-1$;*

$$\text{5.}\qquad A\_i\left(t\right) = A\_{i-1}\left(t - h\right) \text{ and } A\_{i+1}\left(t\right) = A\_i\left(t - h\right) \text{ for all } t \in \left[t\_i, t\_{i+1}\right], \ i = 2, \ldots, n-1;$$

*then the fuzzy partition is called h-uniform generalized fuzzy partition.*

**Definition 2.** *(generating function) A function $K : [-1, 1] \to [0, 1]$ is called a generating function if it is even, continuous and $K(t) > 0$ for $t \in (-1, 1)$. The function $K : [-1, 1] \to \mathbb{R}$ is even if $K(-t) = K(t)$ for all $t \in [0, 1]$.*

The following definition recalls the concept of the generalized fuzzy partition which can be easily extended to the interval [*a*, *b*]. We assume that [*a*, *b*] is partitioned by *A*1, ... , *An*, according to Definition 1.

**Definition 3.** *An h-uniform generalized fuzzy partition of the interval $[a, b]$, determined by the triplet $(K, h, a)$, can be defined using a generating function K (Definition 2). The basic functions of an h-uniform generalized fuzzy partition are then shifted copies of K defined by*

$$A\_i(t) = K \left( \frac{t - t\_i}{h} \right), \ t \in \left[ t\_i - h, t\_i + h \right],$$

*for all $i = 1, \ldots, n$. The parameter h is called the bandwidth or shift of the fuzzy partition, and the nodes $t\_i = a + (i - 1)h$ are called the central points of the fuzzy sets $A\_1, \ldots, A\_n$.*

**Remark 1.** *An h-uniform fuzzy partition is called Ruspini if the following condition*

$$A\_i \left( t \right) + A\_{i+1} \left( t \right) = 1, \ i = 1, \ldots, n - 1,\tag{1}$$

*holds for any $t \in [t\_i, t\_{i+1}]$. This condition is often called the Ruspini condition.*

*New Iterative Method*

NIM was proposed by [2] for solving linear and nonlinear functional equations of the form

$$
\mathfrak{u} = f\_1 + \mathcal{N}\left(\mathfrak{u}\right),
\tag{2}
$$

where $f\_1$ is a known function and $N$ a nonlinear operator. Solutions obtained by this method are in the form of rapidly converging infinite series, which can be effectively approximated by calculating only the first few terms. In this method, the nonlinear operator $N$ is decomposed as $N(u) = N(u\_0) + \sum\_{i=1}^{\infty}\left[ N\left(\sum\_{n=0}^{i} u\_n\right) - N\left(\sum\_{n=0}^{i-1} u\_n\right)\right]$. In [2], the authors defined the recurrence relation:

$$\begin{cases} u\_0 = f\_1, \\ u\_1 = N\left(u\_0\right), \\ u\_{m+1} = N\left(u\_0 + \dots + u\_m\right) - N\left(u\_0 + \dots + u\_{m-1}\right), \quad m = 1, 2, \dots \end{cases} \tag{3}$$

Then $u\_1 + \cdots + u\_{m+1} = N\left(u\_0 + \cdots + u\_m\right)$, $m = 1, 2, 3, \ldots$, and $u = f\_1 + \sum\_{i=1}^{\infty} u\_i = f\_1 + N(u\_0) + \left[N(u\_0 + u\_1) - N(u\_0)\right] + \cdots = f\_1 + N(u)$. Hence, $u$ satisfies the functional Equation (2).
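The recurrence (3) is easy to exercise on a concrete contraction. The sketch below is illustrative: we choose $f\_1 = 1$ and $N(u) = \sin(u)/2$ (our choice, not from the paper), accumulate the terms $u\_m$, and check that the partial sums converge to a solution of $u = f\_1 + N(u)$.

```python
import math

f1 = 1.0
N = lambda u: math.sin(u) / 2          # illustrative contraction, |N'| <= 1/2

# Recurrence (3): u0 = f1, u1 = N(u0),
# u_{m+1} = N(u0 + ... + u_m) - N(u0 + ... + u_{m-1}).
terms = [f1, N(f1)]
for _ in range(40):
    s = sum(terms)
    terms.append(N(s) - N(s - terms[-1]))

u = sum(terms)                          # approximate solution of u = f1 + N(u)
residual = abs(u - (f1 + N(u)))
```

Since the partial sums satisfy $u\_0 + \cdots + u\_{m+1} = f\_1 + N(u\_0 + \cdots + u\_m)$, they coincide with Picard iterates and converge geometrically for a contraction, so the residual becomes negligible after a few dozen terms.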

#### **3. New Representations for Basic Functions of FzT**

Let us recall the basic facts of the FzT of a continuous real function $f$ as presented by [5,17]. The first step in the definition of the FzT of $f$ involves the selection of a fuzzy partition of the domain $[a, b]$ by a finite number $n \ge 2$ of fuzzy sets $B\_k(t)$, $k = 1, \ldots, n$. In those papers, five axioms specified $B\_k(t)$, $k = 1, \ldots, n$, in the fuzzy partition: normality, locality, continuity, unimodality (monotonicity) and orthogonality (Ruspini condition). A fuzzy partition is called uniform if the fuzzy sets $B\_k(t)$, $k = 2, \ldots, n-1$, are shifted copies of the symmetrized $B\_1$ (more details can be found in [17]). The membership functions $B\_k(t)$, $k = 1, \ldots, n$, in a fuzzy partition are called basic functions. Later, a generalized fuzzy partition appeared in connection with the notion of a higher-degree FzT [8]; both notions are summarized in [38]. Three axioms specify $B\_k(t)$, $k = 1, \ldots, n$, in the generalized fuzzy partition: positivity and locality, continuity, and covering. Recently, different conditions for generalized uniform fuzzy partitions were proposed [13,38], while another approach was demonstrated by [37], where a function can be reconstructed from its F-transform components. In the following, we modify the definition of the h-uniform generalized fuzzy partition.

#### *3.1. Generalized Uniform Fuzzy Partitions with the Generalized Normal Case*

Let us recall that an $h$-uniform generalized fuzzy partition of the real line can be defined using a generating function $K$; the basic functions of the $h$-uniform generalized fuzzy partition are then shifted copies of $K$. On the basis of Definition 1, it can also be defined using a generating function $\lambda\beta K(t)$, where $\beta = 1/K(0)$, $K(0) \neq 0$, $\beta > 0$ and $\lambda > 0$ (in general, not necessarily satisfying the normality and Ruspini conditions), where $K(t)$ is assumed to be even and continuous with $K(t) > 0$ for $t \in (-1, 1)$. Therefore, we modify the basic functions of the $h$-uniform generalized fuzzy partition so that they are shifted copies of $\lambda\beta K$ defined by

$$A\_k\left(t, t\_0\right) = \lambda \beta K\left(\frac{t - t\_0}{h} - k\right), \ t \in \left[t\_k - h, t\_k + h\right], \ k \in \mathbb{Z}.\tag{4}$$

The parameter $h$ is the bandwidth of the fuzzy partition and $t_k = t_0 + kh$. The concept of the $h$-uniform generalized fuzzy partition can easily be extended to the interval $[a, b]$ as follows.

**Definition 4.** *Let $t_1 < \ldots < t_n$ be fixed nodes within $[a, b] \subset \mathbb{R}$, such that $t_1 = a$, $t_n = b$ and $n \ge 2$. We consider the nodes $t_1, \ldots, t_n$ to be equidistant, with distance (shift) $h = (b-a)/(n-1)$. A system of fuzzy sets $B_1, \ldots, B_n \colon [a, b] \to [0, 1]$ is a generalized uniform fuzzy partition of $[a, b]$ if it is defined by*

$$B\_k(t) = \begin{cases} A\_k(t, a), & t \in [a, b], \\ 0, & \text{otherwise,} \end{cases} \quad = \begin{cases} \lambda \beta K\left( \frac{t - t\_k}{h} \right), & t \in [a, b], \\ 0, & \text{otherwise,} \end{cases} \tag{5}$$

*where $t_k = a + (k-1)h$, $\beta = 1/K(0)$, $K(0) \neq 0$, $\beta > 0$ and $\lambda > 0$. In the sequel, the generating function is denoted by $K$ and the basic functions of the FzT by $B_k$, $k = 1, \ldots, n$.*
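Definition 4 can be sketched numerically. In the sketch below, the triangular generating function is an illustrative assumption (any even, continuous $K$ with $K(0) \neq 0$ and $K > 0$ on $(-1, 1)$ would do), and $\lambda$ is approximated by trapezoidal quadrature of $\int_{-1}^{1}\beta K(t)\,dt$:

```python
import numpy as np

def make_partition(a, b, n, K):
    """Basic functions B_k of Definition 4 for a generating function K.

    K is assumed even, continuous, positive on (-1, 1), with K(0) != 0.
    beta = 1/K(0), and lambda is chosen as 1/int_{-1}^{1} beta*K(t) dt,
    so that (by Lemma 1) B_k(t_k) = lambda."""
    h = (b - a) / (n - 1)
    beta = 1.0 / K(0.0)
    s = np.linspace(-1.0, 1.0, 2001)
    y = beta * K(s)
    lam = 1.0 / float(np.sum((y[:-1] + y[1:]) / 2.0) * (s[1] - s[0]))  # trapezoid rule
    nodes = a + h * np.arange(n)

    def B(k, t):
        u = (np.asarray(t, dtype=float) - nodes[k]) / h
        return np.where(np.abs(u) <= 1.0, lam * beta * K(u), 0.0)

    return nodes, h, lam, B

# Triangular generating function (an illustrative assumption, not from the paper)
K = lambda t: np.maximum(0.0, 1.0 - np.abs(t))
nodes, h, lam, B = make_partition(0.0, 1.0, 5, K)
```

For the triangular $K$, $\beta = 1$ and $\int_{-1}^{1} K = 1$, so $\lambda = 1$ and the partition is a Ruspini one.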

**Lemma 1.** *If the basic functions $B_k$, $k = 1, \ldots, n$, of an $h$-uniform generalized fuzzy partition are shifted copies of $\lambda\beta K$ as defined by (5), then for each $k = 1, \ldots, n$, $B_k(t_k) = \lambda$, $t_k \in [t_k - h, t_k + h]$.*

**Proof.** By (5), we get $B_k(t_k) = \lambda\beta K\left(\frac{t_k - t_k}{h}\right) = \lambda\beta K(0) = \lambda$. $\Box$

*3.2. Simpler Form of F-Transform Components Based on Generalized Uniform Fuzzy Partitions with the Generalized Normal Case*

In this subsection, we present the main principles of the FzT with respect to the new representations of the $h$-uniform generalized fuzzy partition. Further, we will show that the FzT components with respect to these new representations can be simplified and used to approximate an original function, say $f$.

**Definition 5.** *Let $f$ be a continuous function on $[a, b]$ and $B_k(t)$, $k = 1, \ldots, n$, be an $h$-uniform generalized fuzzy partition of $[a, b]$, $n \ge 2$. A vector of real numbers $F[f] = (F_1, F_2, \ldots, F_n)$ given by*

$$F\_k = \frac{\int\_a^b f\left(t\right) B\_k(t) \, dt}{\int\_a^b B\_k(t) \, dt},\tag{6}$$

*for k* = 1, . . . , *n is called the direct FzT of f with respect to Bk.*
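A sketch of the direct FzT (6) by numerical quadrature follows. Note that the common factor $\lambda\beta$ of the basic functions cancels between numerator and denominator of (6), so only $K$ itself is needed; the triangular $K$ and the linear test function are illustrative assumptions:

```python
import numpy as np

def direct_fzt(f, a, b, n, K, m=4001):
    """Direct F-transform components F_k of Eq. (6) by numerical quadrature.

    The basic functions B_k come from Definition 4 with generating function K
    (assumed even, continuous, K(0) != 0).  The factor lambda*beta cancels in
    the ratio (6), so it is omitted."""
    h = (b - a) / (n - 1)
    nodes = a + h * np.arange(n)
    t = np.linspace(a, b, m)
    w = np.full(m, (b - a) / (m - 1))   # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    ft = f(t)
    F = []
    for tk in nodes:
        u = (t - tk) / h
        Bk = np.where(np.abs(u) <= 1.0, K(u), 0.0)
        F.append(float(np.sum(w * ft * Bk) / np.sum(w * Bk)))
    return nodes, np.array(F)

# Illustration with a triangular K and a linear f (both assumptions):
K = lambda t: np.maximum(0.0, 1.0 - np.abs(t))
nodes, F = direct_fzt(lambda t: 2.0 * t + 1.0, 0.0, 1.0, 5, K)
```

For a linear $f$ and a symmetric $K$, each interior component $F_k$ coincides with $f(t_k)$, as the symmetric weighted average preserves linear functions.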

In the following, we will simplify the representation (6).

**Lemma 2.** *Let $f \in C([a, b])$ and, according to Definition 4, let the fuzzy sets $B_k$, $k = 1, \ldots, n$, $n \ge 2$, be an $h$-uniform generalized fuzzy partition of $[a, b]$ with a generating function $K$. Then the representation (6) of the direct FzT can be simplified for $k = 1, \ldots, n$ as follows:*

$$F\_k = \frac{\int\_{-1}^{1} f\left(th + t\_k\right) K(t) \, dt}{\int\_{-1}^{1} K(t) \, dt} = \frac{\int\_{-h}^{h} f\left(t + t\_k\right) K(\frac{t}{h}) \, dt}{\int\_{-h}^{h} K(\frac{t}{h}) \, dt}. \tag{7}$$

**Proof.** By Definition 4, we get

$$B\_k\left(t\right) = \lambda \beta K \left(\frac{t - t\_k}{h}\right), \ t \in \left[t\_k - h, t\_k + h\right],$$

for $k = 1, \ldots, n$, with $t_0 = t_1$ and $t_{n+1} = t_n$. Substituting $u = \frac{t - t_k}{h}$ and then $t = s/h$, we get

$$\int\_{t\_{k-1}}^{t\_{k+1}} f\left(t\right) B\_k(t) \, dt = \lambda \beta h \int\_{-1}^1 f\left(th + t\_k\right) K(t) \, dt = \lambda \beta \int\_{-h}^h f\left(t + t\_k\right) K(\frac{t}{h}) \, dt,$$

$$\int\_{t\_{k-1}}^{t\_{k+1}} B\_k(t) \, dt = \lambda \beta h \int\_{-1}^1 K(t) \, dt = \lambda \beta \int\_{-h}^h K(\frac{t}{h}) \, dt,$$

and taking the ratio of these two integrals in the representation (6) yields (7). $\Box$

If $\lambda > 0$, Lemma 1 still holds by choosing a suitable constant $\lambda$ satisfying $\lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$, where $\int_{-1}^{1} \beta K(t)\,dt > 0$. So, we will restrict ourselves to $h$-uniform generalized fuzzy partitions with $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$, where $\int_{-1}^{1} \beta K(t)\,dt \neq 0$. In the following, we simplify the above expressions for the coefficients $F[f] = (F_1, F_2, \ldots, F_n)$ in the representation (6). This fact is very important for applications, which become more flexible and consequently easier to use.

**Corollary 1.** *Let the assumptions of Lemma 2 be fulfilled and $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$, where $\int_{-1}^{1} \beta K(t)\,dt \neq 0$. Then the coefficients $F[f] = (F_1, F_2, \ldots, F_n)$ in the expression (6) of the FzT component $F_k$ of $f$ are as follows:*

$$F\_k = \frac{1}{h} \int\_a^b f\left(t\right) B\_k(t) \, dt = \frac{\lambda \beta}{h} \int\_a^b f\left(t\right) K\left(\frac{t - t\_k}{h}\right) \, dt,\tag{8}$$

*for k* = 1, . . . , *n, where interval* [*a*, *b*] *is partitioned by the h-uniform generalized fuzzy partition B*1,..., *Bn.*

**Proof.** Let $k \in \{1, \ldots, n\}$ and let the fuzzy sets $B_k(t)$ form the $h$-uniform generalized fuzzy partition of $[a, b]$ defined by (5). Using the proof of Lemma 2, we get

$$\int\_{t\_{k-1}}^{t\_{k+1}} B\_k(t) \, dt = \int\_{t\_{k-1}}^{t\_{k+1}} A\_k(t, a) \, dt = \int\_{t\_k - h}^{t\_k + h} \lambda \beta K \left( \frac{t - t\_k}{h} \right) \, dt = h \lambda \int\_{-1}^1 \beta K(t) \, dt = h,\tag{9}$$

where $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$, $\int_{-1}^{1} \beta K(t)\,dt \neq 0$, $h$ is the bandwidth of the fuzzy partition and $t_k = a + (k-1)h$; the claim then follows from the expression (6). $\Box$

**Lemma 3.** *Let $f \in C[a, b]$. Then for any $\varepsilon > 0$ there exist $n_\varepsilon \in \mathbb{N}$ and basic functions $B_1, \ldots, B_{n_\varepsilon}$ forming an $h$-uniform generalized fuzzy partition of $[a, b]$. Let $F_k$, $k = 1, \ldots, n_\varepsilon$, be the integral FzT components of $f$ with respect to $B_1, \ldots, B_{n_\varepsilon}$. Then for each $k = 1, \ldots, n_\varepsilon - 1$ the following estimation holds: $|f(t) - F_i| \le \varepsilon$ for each $t \in [a, b] \cap [t_k, t_{k+1}]$ and $i = k, k+1$.*

**Proof.** See [5]. $\Box$

**Corollary 2.** *Let the conditions of Lemma 3 be fulfilled. Then for each $k = 1, \ldots, n_\varepsilon - 1$ the following estimation holds: $|F_k - F_{k+1}| < \varepsilon$.*

**Proof.** Following [5,39], let $t \in [a, b] \cap [t_k, t_{k+1}]$. Then by Lemma 3, for any $k = 1, \ldots, n_\varepsilon - 1$ we obtain $|f(t) - F_k| < \varepsilon/2$ and $|f(t) - F_{k+1}| < \varepsilon/2$. Thus,

$$|F\_k - F\_{k+1}| \le |f\left(t\right) - F\_k| + |f\left(t\right) - F\_{k+1}| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon. \quad \Box$$

The following theorem estimates the difference between the original function and its direct FzT with respect to the *h*-uniform generalized fuzzy partition.

**Theorem 1.** *Let $f \in C^2[a, b]$ and let the conditions of Lemma 2 be fulfilled. Then for $k = 1, \ldots, n$,*

$$F\_k = \lambda f\left(t\_k\right) + \mathcal{O}\left(h^2\right),\tag{10}$$

*where $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$ and $\int_{-1}^{1} \beta K(t)\,dt \neq 0$.*

**Proof.** By the locality condition in the definition of the $h$-uniform generalized fuzzy partition, Corollary 1, Lemma 1, and, following [17], applying the trapezoid formula with nodes $t_{k-1}, t_k, t_{k+1}$ to the numerical computation of the integral, we get for $k = 1, \ldots, n$ and $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$

$$\begin{split} F\_k &= \frac{1}{h} \int\_{t\_{k-1}}^{t\_{k+1}} f\left(t\right) B\_k(t) \, dt, \\ &= \frac{1}{h} \frac{h}{2} \left( f\left(t\_{k-1}\right) B\_k(t\_{k-1}) + 2f\left(t\_k\right) B\_k(t\_k) + f\left(t\_{k+1}\right) B\_k(t\_{k+1}) \right) + \mathcal{O}\left(h^2\right), \\ &= f\left(t\_k\right) B\_k(t\_k) + \mathcal{O}\left(h^2\right) = \lambda f\left(t\_k\right) + \mathcal{O}\left(h^2\right). \quad \Box \end{split} \tag{11}$$

**Corollary 3.** *Let $f \in C^2[a, b]$ and let the conditions of Lemma 2 be fulfilled. Moreover, let $f$ be Lipschitz continuous with respect to $t$, i.e., there exists a constant $L \in \mathbb{R}$ such that for all $t, t' \in [a, b]$,*

$$|f(t) - f(t')| \le L|t - t'|.\tag{12}$$

*Then for k* = 1, . . . , *n*

$$\left| f\left( t \right) - \frac{1}{\lambda} F\_k \right| \le Lh + \frac{h^2}{6\lambda} M,$$

*where $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$, $\int_{-1}^{1} \beta K(t)\,dt \neq 0$, $M = \max_{t \in [t_{k-1}, t_{k+1}]} |f''(t)|$ and $|t - t_k| < h$ whenever $t \in [t_{k-1}, t_{k+1}]$.*

**Proof.** By assumption, $f$ has continuous second-order derivatives on $[a, b]$ and is Lipschitz continuous with respect to $t$. Therefore, using the trapezoid rule and choosing a value of $k$ in the range $1 \le k \le n$ and $t \in [t_{k-1}, t_{k+1}]$, we get for $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$

$$\begin{aligned} \left| f\left( t \right) - \frac{1}{\lambda} F\_k \right| &= \left| f\left( t \right) - \frac{1}{h\lambda} \int\_{t\_{k-1}}^{t\_{k+1}} f\left( t \right) B\_k(t) \, dt \right|, \\ &= \left| f\left( t \right) - \frac{1}{h\lambda} \left[ h\lambda f\left( t\_k \right) - \frac{h^3}{12} \left( f''\left( \xi\_{k-1} \right) + f''\left( \xi\_{k+1} \right) \right) \right] \right|, \\ &\le \left| f\left( t \right) - f\left( t\_k \right) \right| + \frac{h^2}{12\lambda} 2M, \\ &\le L \left| t - t\_k \right| + \frac{h^2}{6\lambda} M \le Lh + \frac{h^2}{6\lambda} M, \end{aligned} \tag{13}$$

where $\xi_{k-1} \in (t_{k-1}, t_k)$, $\xi_{k+1} \in (t_k, t_{k+1})$ and $M = \max_{t \in [t_{k-1}, t_{k+1}]} |f''(t)|$. $\Box$

**Remark 2.** *In view of (13), if $0 < \lambda \le 1$, then $\left| f(t) - \frac{1}{\lambda} F_k \right| \le Lh + \frac{h^2}{6} M$.*

**Definition 6.** *Let $F[f] = (F_1, F_2, \ldots, F_n)$ be the direct FzT of a function $f \in C[a, b]$ with respect to the fuzzy partition $B_k(t)$, $k = 1, \ldots, n$, of $[a, b]$. Then the function $\hat{f}$ defined on $[a, b]$ by*

$$\hat{f}\left(t\right) = \frac{\sum\_{k=1}^{n} F\_k B\_k(t)}{\sum\_{k=1}^{n} B\_k(t)}, \tag{14}$$

*is called the inverse FzT of f.*
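The inverse FzT (14) can be sketched in the same way as the direct one; the common factor $\lambda\beta$ of the basic functions again cancels in the ratio. The triangular generating function and the linear components below are illustrative assumptions:

```python
import numpy as np

def inverse_fzt(F, nodes, h, K, t):
    """Inverse F-transform (14): f_hat(t) = sum_k F_k B_k(t) / sum_k B_k(t).

    K is assumed even, continuous, K(0) != 0; the factor lambda*beta of the
    basic functions cancels in the ratio, so K itself is used."""
    t = np.asarray(t, dtype=float)
    num = np.zeros_like(t)
    den = np.zeros_like(t)
    for Fk, tk in zip(F, nodes):
        u = (t - tk) / h
        Bk = np.where(np.abs(u) <= 1.0, K(u), 0.0)
        num += Fk * Bk
        den += Bk
    return num / den

# With a triangular (Ruspini) K, the inverse FzT of the node values of a
# linear function reproduces that function on [a, b]:
K = lambda t: np.maximum(0.0, 1.0 - np.abs(t))
nodes = np.linspace(0.0, 1.0, 5)
fhat = inverse_fzt(2.0 * nodes + 1.0, nodes, 0.25, K, np.array([0.3, 0.5]))
```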

The following lemma estimates the difference between the original function and its inverse FzT.

**Lemma 4.** *Let the assumptions of Theorem 1 hold and let $\hat{f}(t)$ be the inverse FzT of $f$ with respect to the fuzzy partition of $[a, b]$ given by Definition 4. Then the following estimation holds for $t \in [a, b]$ and $k = 1, \ldots, n$:*

$$
\hat{f}\left(t\right) = \lambda f\left(t\_k\right) + \mathcal{O}\left(h^2\right),
\tag{15}
$$

*where $0 < \lambda = 1/\left(\int_{-1}^{1} \beta K(t)\,dt\right)$ and $\int_{-1}^{1} \beta K(t)\,dt \neq 0$.*

**Proof.** Let $t \in [a, b]$ so that $t \in [t_k, t_{k+1}]$ for some $k = 1, \ldots, n$. By Theorem 1,

$$\begin{split} \hat{f}\left(t\right) - \lambda f\left(t\_{k}\right) &= \frac{\sum\_{k=1}^{n} F\_{k} B\_{k}(t)}{\sum\_{k=1}^{n} B\_{k}(t)} - \lambda f\left(t\_{k}\right) = \frac{\sum\_{k=1}^{n} F\_{k} B\_{k}(t)}{\sum\_{k=1}^{n} B\_{k}(t)} - \frac{\sum\_{k=1}^{n} \lambda f\left(t\_{k}\right) B\_{k}(t)}{\sum\_{k=1}^{n} B\_{k}(t)} \\ &= \frac{\sum\_{k=1}^{n} \left(F\_{k} - \lambda f\left(t\_{k}\right)\right) B\_{k}(t)}{\sum\_{k=1}^{n} B\_{k}(t)} = \mathcal{O}\left(h^{2}\right). \quad \Box \end{split}$$

**Theorem 2.** *Let $f \in C[a, b]$. Then for any $\varepsilon > 0$ there exist $n_\varepsilon \in \mathbb{N}$ and $B_1, \ldots, B_{n_\varepsilon}$, an $h$-uniform generalized fuzzy partition of $[a, b]$ defined by (5), such that the following estimation holds: $\left|\hat{f}(t) - f(t)\right| < \varepsilon$ for each $t \in [a, b] \cap [t_k, t_{k+1}]$.*

**Proof.** From the proof of Lemma 4 and then using Lemma 3, in the sense that for all $k = 1, \ldots, n$,

$$\left| \hat{f}\left(t\right) - f\left(t\right) \right| \le \frac{\sum\_{k=1}^{n} \left| F\_k - f\left(t\right) \right| B\_k(t)}{\sum\_{k=1}^{n} B\_k(t)} < \varepsilon. \quad \Box$$

**Remark 3.** *According to Definition 4, it is easy to see that the inverse FzT satisfies $\hat{f}(t_k) = F_k$ for all $k = 1, \ldots, n$.*

On the basis of Definition 4, the necessary steps of a new method to construct generalized uniform fuzzy partitions of $[-1, 1]$, covering the case in which $K$ is not normal, are given in the following.


**Example 1.** *Let K* : <sup>R</sup> <sup>→</sup> [0, 1] *be defined by*

$$K(t) = (1 + \cos\left(\pi t\right))^m \dots$$

*One can see in Table 1 the h-uniform generalized fuzzy partition of* [*a*, *b*] *determined by Definition 4.*
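The constants of Definition 4 for the generating function of Example 1 can be checked numerically; $\beta = 1/K(0) = 2^{-m}$, and for $m = 1$ one recovers the Ruspini case $\lambda = 1$, while for $m = 2$ the integral $\int_{-1}^{1}\beta K(t)\,dt = 3/4$ gives $\lambda = 4/3$:

```python
import numpy as np

def beta_lambda_raised_cosine(m, samples=20001):
    """beta and lambda of Definition 4 for K(t) = (1 + cos(pi t))^m (Example 1).

    beta = 1/K(0) = 2**-m; lambda = 1 / int_{-1}^{1} beta*K(t) dt,
    approximated here with the trapezoidal rule."""
    t = np.linspace(-1.0, 1.0, samples)
    K = (1.0 + np.cos(np.pi * t)) ** m
    beta = 1.0 / 2.0 ** m                 # K(0) = 2^m
    y = beta * K
    integral = float(np.sum((y[:-1] + y[1:]) / 2.0) * (t[1] - t[0]))
    return beta, 1.0 / integral

beta1, lam1 = beta_lambda_raised_cosine(1)   # Ruspini case: lambda = 1
beta2, lam2 = beta_lambda_raised_cosine(2)   # lambda = 4/3
```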



The following remark is used for the modified trapezoidal rule based on the FzT and NIM to solve SODEs.

**Remark 4.** *In view of Equation (9), $\int_{t_{k-1}}^{t_{k+1}} B_k(t)\,dt = h$. This means that $\int_{t_k}^{t_{k+1}} B_k(t)\,dt = \frac{h}{2}$.*

An important property of both the direct and the inverse FzT is their linearity; namely, given $f, g \in C[a, b]$ and $\alpha, \beta \in \mathbb{R}$, if $h = \alpha f + \beta g$, then $F[h] = \alpha F[f] + \beta F[g]$ and $\hat{h} = \alpha \hat{f} + \beta \hat{g}$.

#### **4. New Fuzzy Numerical Methods for Solving SODEs**

Consider the initial value problem (IVP) for the SODEs:

$$\begin{cases} x'(t) = f\left(t, x, y\right), & x\left(t\_1\right) = x\_1, \ a = t\_1 \le t \le t\_n = b, \\ y'(t) = g\left(t, x, y\right), & y\left(t\_1\right) = y\_1, \end{cases} \tag{16}$$

where $x_1, y_1 \in \mathbb{R}$ and $f$, $g$ are continuous functions on $D = [a, b] \times \mathbb{R} \times \mathbb{R}$. If $f$ ($g$) satisfies a Lipschitz condition on $D$ in the variable $x$ ($y$), then the initial-value problem (16) has a unique solution $x(t)$ ($y(t)$) for $a \le t \le b$. In many cases, the problem (16) cannot be solved analytically, so numerical solutions are required. In [16], new representations of basic functions based on the FzT were constructed for solving generalized Cauchy problems with the help of NIM, the FzT and classical (one-step, two-step and three-step) methods, while an Euler method and a mid-point rule based on the FzT for solving the Cauchy problem were proposed in [17,18]. Further, the NIM has been proposed for solving ODEs and delay differential equations [3]. Moreover, the Adams–Bashforth and Adams–Moulton methods are known in the literature as two families of multistep methods, i.e., methods that use several values from previous steps. The Adams–Bashforth methods were presented by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth, and the Adams–Moulton methods were developed by Forest Ray Moulton as improved multistep methods for solving ballistic equations. The Adams–Moulton method is similar to the Adams–Bashforth method, but being implicit it requires, for example, Newton's method to solve the implicit equation. Clearly, the Adams–Bashforth methods are explicit and the Adams–Moulton methods are implicit; see, for example, ([40], p. 111).

The necessary steps of the construction of the generalized uniform fuzzy partitions can be summarized as follows.


To begin the derivation of a modified trapezoidal rule (1-step) and Adams–Moulton methods (2- and 3-step), integrate (16) on the interval $[t_k, t_{k+1}]$, $k = 1, \ldots, n-1$, to obtain

$$x(t\_{k+1}) = x(t\_k) + \int\_{t\_k}^{t\_{k+1}} f\left(s, x(s), y\left(s\right)\right) ds,$$

$$y(t\_{k+1}) = y(t\_k) + \int\_{t\_k}^{t\_{k+1}} g\left(s, x(s), y\left(s\right)\right) ds.\tag{17}$$

Consider the following integrals

$$I\_f = \int\_{t\_k}^{t\_{k+1}} f\left(s, x(s), y\left(s\right)\right) ds,$$

$$I\_g = \int\_{t\_k}^{t\_{k+1}} g\left(s, x(s), y\left(s\right)\right) ds.\tag{18}$$

However, we cannot integrate $f(s, x(s), y(s))$ and $g(s, x(s), y(s))$ without knowing $x(s)$ and $y(s)$. So, the integrals (18) can be approximated by the following approach

$$I\_f \approx \int\_{t\_k}^{t\_{k+1}} f\_1\left(s, x(s), y\left(s\right)\right) ds,$$

$$I\_g \approx \int\_{t\_k}^{t\_{k+1}} g\_1\left(s, x(s), y\left(s\right)\right) ds,\tag{19}$$

where $f_1$ and $g_1$ are approximations of $f$ and $g$ on the interval $[t_k, t_{k+1}]$. Choosing a different $f_1$ ($g_1$) leads to a different scheme. In particular, we choose $f_1$ ($g_1$) so as to obtain the one-, two- and three-step methods based on the FzT; these methods are then modified using the FzT and NIM.

In this section, we present three new schemes to solve the SODEs (16) that use the F-transform and NIM; we suppose that the functions $f$ and $g$ are sufficiently smooth on $[a, b]$. The first scheme uses a 1-step method, the second a 2-step method, and the last a 3-step method.

#### *4.1. Numerical Scheme I: Modified Trapezoidal Rule Based on FzT and NIM for SODEs*

According to the necessary steps of construction of the generalized uniform fuzzy partitions in Section 4, we contribute approximation methods for the SODEs (16). The scheme provides formulas for the FzT components $X_k$ ($Y_k$), $k = 2, \ldots, n-1$, of the unknown function $x(t)$ ($y(t)$) with respect to a chosen $h$-uniform generalized fuzzy partition $B_1, \ldots, B_n$ of the interval $[a, b]$ with parameter $h$, which are used to approximate the solution of the SODEs (16). As an initial step, choose the number $n \ge 2$ and compute $h = (b-a)/(n-1)$, then construct the $h$-uniform generalized fuzzy partition of $[a, b]$ using Definition 4. Let $X_1 = x_1$ and $Y_1 = y_1$. In the following, we apply the FzT and NIM to the SODEs (16) to obtain the numerical Scheme I, where $k = 1, \ldots, n-1$.

First, let $f_1$ ($g_1$) in Equation (19) be chosen as

$$\begin{aligned} f\_1 &= B\_k F\_k + B\_{k+1} F\_{k+1}, \\ g\_1 &= B\_k G\_k + B\_{k+1} G\_{k+1}, \end{aligned} \tag{20}$$

where

$$F\_k = \frac{\int\_a^b f\left(t, X\_k, Y\_k\right) B\_k(t) \, dt}{\int\_a^b B\_k(t) \, dt}, \; G\_k = \frac{\int\_a^b g\left(t, X\_k, Y\_k\right) B\_k(t) \, dt}{\int\_a^b B\_k(t) \, dt},\tag{21}$$

and *Bk* represents the generalized uniform fuzzy partitions that are defined by Definition 4. Then, substituting (20) into (19) for *k* = 1, . . . , *n* − 1

$$\begin{aligned} I\_f &\approx \int\_{t\_k}^{t\_{k+1}} B\_k F\_k \, ds + \int\_{t\_k}^{t\_{k+1}} B\_{k+1} F\_{k+1} \, ds, \\ I\_g &\approx \int\_{t\_k}^{t\_{k+1}} B\_k G\_k \, ds + \int\_{t\_k}^{t\_{k+1}} B\_{k+1} G\_{k+1} \, ds. \end{aligned}$$

By Remark 4 in the interval [*tk*, *tk*<sup>+</sup>1], we have

$$I\_f \approx \frac{h}{2} \left( F\_k + F\_{k+1} \right), \ I\_g \approx \frac{h}{2} \left( G\_k + G\_{k+1} \right).$$

Hence, the one-step method based on the FzT for (17) is derived as follows, where $k = 1, \ldots, n-1$:

$$X\_{k+1} = X\_k + \frac{h}{2} \left( F\_k + F\_{k+1} \right),$$

$$Y\_{k+1} = Y\_k + \frac{h}{2} \left( G\_k + G\_{k+1} \right),\tag{22}$$

where *Fk* and *Gk* are defined by (21).

This method computes the approximate coordinates $[X_1, \ldots, X_n]$ and $[Y_1, \ldots, Y_n]$ of the FzT for the functions $x(t)$ and $y(t)$. The problem with the scheme (22) is the unknown quantities $F_{k+1}$ and $G_{k+1}$, which means that $X_{k+1}$ ($Y_{k+1}$) appears on both sides, making it an implicit method. Therefore, one solution to this problem is to obtain an explicit method by another approach, namely the NIM; this yields Scheme I. For this purpose, the scheme (22) is written in the form

$$X\_{k+1} = f\_x + N(X\_{k+1}), \quad Y\_{k+1} = f\_y + N(Y\_{k+1}),$$

and can be solved by NIM (2), where

$$f\_x = X\_k + \frac{h}{2} F\_k \ \text{ and } \ N(X\_{k+1}) = \frac{h}{2} F\_{k+1},$$

$$f\_y = Y\_k + \frac{h}{2} G\_k \ \text{ and } \ N(Y\_{k+1}) = \frac{h}{2} G\_{k+1}.$$

The three-term approximation of the NIM (3) gives the following formulas for solving the SODEs (16):

$$\begin{cases} u\_{x0} = X\_k + \frac{h}{2} F\_k, & u\_{y0} = Y\_k + \frac{h}{2} G\_k, \\ u\_{x1} = N\left(u\_{x0}\right), & u\_{y1} = N\left(u\_{y0}\right), \\ u\_{x2} = N\left(u\_{x0} + u\_{x1}\right) - N\left(u\_{x0}\right), & u\_{y2} = N\left(u\_{y0} + u\_{y1}\right) - N\left(u\_{y0}\right). \end{cases}$$

Hence, the three-term approximate solution is

$$u\_x = u\_{x0} + u\_{x1} + u\_{x2} = u\_{x0} + N\left(u\_{x0} + u\_{x1}\right),$$

and

$$u\_y = u\_{y0} + N\left(u\_{y0} + u\_{y1}\right),$$

which leads to the following formulas.

$$\begin{array}{ll} X\_{k+1}^{\ast} = X\_k + hF\_k/2, & Y\_{k+1}^{\ast} = Y\_k + hG\_k/2, \\ X\_{k+1}^{\ast\ast} = X\_{k+1}^{\ast} + hF\_{k+1}^{\ast}/2, & Y\_{k+1}^{\ast\ast} = Y\_{k+1}^{\ast} + hG\_{k+1}^{\ast}/2, \\ X\_{k+1} = X\_k + h\left(F\_k + F\_{k+1}^{\ast\ast}\right)/2, & Y\_{k+1} = Y\_k + h\left(G\_k + G\_{k+1}^{\ast\ast}\right)/2, \end{array} \tag{23}$$

where

$$\begin{array}{ll} F\_k = \frac{\int\_a^b f(t, X\_k, Y\_k) B\_k(t)\, dt}{\int\_a^b B\_k(t)\, dt}, & G\_k = \frac{\int\_a^b g(t, X\_k, Y\_k) B\_k(t)\, dt}{\int\_a^b B\_k(t)\, dt}, \\ F\_{k+1}^{\ast} = \frac{\int\_a^b f(t, X\_{k+1}^{\ast}, Y\_{k+1}^{\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, & G\_{k+1}^{\ast} = \frac{\int\_a^b g(t, X\_{k+1}^{\ast}, Y\_{k+1}^{\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, \\ F\_{k+1}^{\ast\ast} = \frac{\int\_a^b f(t, X\_{k+1}^{\ast\ast}, Y\_{k+1}^{\ast\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, & G\_{k+1}^{\ast\ast} = \frac{\int\_a^b g(t, X\_{k+1}^{\ast\ast}, Y\_{k+1}^{\ast\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}. \end{array} \tag{24}$$

In the sequel, the approximate solution of SODEs (16) can be obtained using the inverse FzT as follows:

$$x\_{\rm tr}\left(t\right) = \sum\_{k=1}^{n} X\_k B\_k\left(t\right), \ y\_{\rm tr}\left(t\right) = \sum\_{k=1}^{n} Y\_k B\_k\left(t\right). \tag{25}$$
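The predictor-corrector structure of Scheme I, Eqs. (22) and (23), can be sketched as follows. To keep the sketch short, the FzT components (21) are replaced by the point-value surrogates $F_k \approx f(t_k, X_k, Y_k)$ suggested by Theorem 1 with $\lambda = 1$; the paper's full scheme uses the integrals (21) instead:

```python
import math

def scheme1(f, g, a, b, n, x1, y1):
    """Sketch of Scheme I, Eqs. (22)-(23), for the SODEs (16).

    Point-value surrogates F_k ~ f(t_k, X_k, Y_k) replace the FzT
    integrals (21).  Structure per step:
      X*  = X_k + h F_k / 2
      X** = X*  + h F*_{k+1} / 2
      X_{k+1} = X_k + h (F_k + F**_{k+1}) / 2   (likewise for Y)."""
    h = (b - a) / (n - 1)
    t = [a + k * h for k in range(n)]
    X, Y = [x1], [y1]
    for k in range(n - 1):
        Fk, Gk = f(t[k], X[k], Y[k]), g(t[k], X[k], Y[k])
        xs, ys = X[k] + h * Fk / 2, Y[k] + h * Gk / 2            # X*, Y*
        Fs, Gs = f(t[k + 1], xs, ys), g(t[k + 1], xs, ys)
        xss, yss = xs + h * Fs / 2, ys + h * Gs / 2              # X**, Y**
        Fss, Gss = f(t[k + 1], xss, yss), g(t[k + 1], xss, yss)
        X.append(X[k] + h * (Fk + Fss) / 2)
        Y.append(Y[k] + h * (Gk + Gss) / 2)
    return t, X, Y

# Hypothetical test problem: x' = y, y' = -x, x(0) = 1, y(0) = 0
# (exact solution x = cos t, y = -sin t)
t, X, Y = scheme1(lambda t, x, y: y, lambda t, x, y: -x, 0.0, 1.0, 101, 1.0, 0.0)
```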

#### *4.2. Numerical Scheme II: Modified 2-Step Adams Moulton Method Based on FzT and NIM for SODEs*

Scheme I uses a 1-step method for solving the SODEs (16). In this subsection, we improve the 2-step Adams–Moulton method using the FzT and NIM for solving the SODEs (16). Let us recall that the modified 2-step Adams–Moulton method was proposed in [16]. From this idea, the modified 2-step Adams–Moulton method can be extended to approximate the solution of (16) by the necessary steps of construction of the generalized uniform fuzzy partitions in Section 4. It is worth noting that three terms of the NIM were used in [16], while four terms of the NIM are used in this study. Let $X_k$ ($Y_k$), $k = 2, \ldots, n-1$, be the FzT components of the unknown function $x(t)$ ($y(t)$) with respect to a chosen $h$-uniform generalized fuzzy partition (5), and let $X_1 = x_1$, $Y_1 = y_1$, $X_2 = x_2$ and $Y_2 = y_2$ if possible; otherwise, we can compute the FzT components $X_2$ and $Y_2$ from the numerical Scheme I. In the following, we apply the F-transform and NIM to the SODEs (16) to obtain the numerical Scheme II, where $k = 2, \ldots, n-1$. First, $f_1$ in Equation (19) is approximated by

$$f\_1 = \left(p\_0 + p\_1\right)F\_{k+1} + p\_2F\_k + p\_3F\_{k-1},\tag{26}$$

where

$$F\_k = \frac{\int\_a^b f\left(t, X\_k, Y\_k\right) B\_k(t) \, dt}{\int\_a^b B\_k(t) \, dt},$$

$p_k = (-1)^k \int_0^1 \binom{-s+1}{k}\, ds$. Substituting (26) into (19), then for $k = 1, \ldots, n-1$

$$I\_f \approx \frac{h}{12} \left( 5F\_{k+1} + 8F\_k - F\_{k-1} \right).$$

Similarly,

$$I\_g \approx \frac{h}{12} \left( 5G\_{k+1} + 8G\_k - G\_{k-1} \right).$$

Thus, the two-step method based on the FzT for (17) is given for $k = 1, \ldots, n-1$ as

$$X\_{k+1} = X\_k + h\left(8F\_k - F\_{k-1} + 5F\_{k+1}\right)/12, \quad Y\_{k+1} = Y\_k + h\left(8G\_k - G\_{k-1} + 5G\_{k+1}\right)/12, \tag{27}$$

where

$$F\_k = \frac{\int\_a^b f(t, X\_k, Y\_k) B\_k(t)\, dt}{\int\_a^b B\_k(t)\, dt}, \quad G\_k = \frac{\int\_a^b g(t, X\_k, Y\_k) B\_k(t)\, dt}{\int\_a^b B\_k(t)\, dt}.$$

The problem with the scheme (27) is again the unknown quantities $F_{k+1}$ and $G_{k+1}$. Therefore, one solution to this problem is to obtain an explicit method. For this purpose, the scheme (27) is written in the form

$$X\_{k+1} = f\_x + N(X\_{k+1}), \quad Y\_{k+1} = f\_y + N(Y\_{k+1}),$$

and can be solved by NIM (2), where

$$f\_x = X\_k + \frac{h}{12} \left( 8F\_k - F\_{k-1} \right) \ \text{ and } \ N(X\_{k+1}) = \frac{5h}{12} F\_{k+1},$$

$$f\_y = Y\_k + \frac{h}{12} \left( 8G\_k - G\_{k-1} \right) \ \text{ and } \ N(Y\_{k+1}) = \frac{5h}{12} G\_{k+1}.$$

The four-term approximation of the NIM (3) gives the following formulas for solving the SODEs (16):

$$\begin{cases} u\_{x0} = X\_k + \frac{h}{12}\left(8F\_k - F\_{k-1}\right), & u\_{y0} = Y\_k + \frac{h}{12}\left(8G\_k - G\_{k-1}\right), \\ u\_{x1} = N\left(u\_{x0}\right), & u\_{y1} = N\left(u\_{y0}\right), \\ u\_{x2} = N\left(u\_{x0} + u\_{x1}\right) - N\left(u\_{x0}\right), & u\_{y2} = N\left(u\_{y0} + u\_{y1}\right) - N\left(u\_{y0}\right), \\ u\_{x3} = N\left(u\_{x0} + u\_{x1} + u\_{x2}\right) - N\left(u\_{x0} + u\_{x1}\right), & u\_{y3} = N\left(u\_{y0} + u\_{y1} + u\_{y2}\right) - N\left(u\_{y0} + u\_{y1}\right). \end{cases}$$

Hence, the four-term approximate solution is

$$u\_x = u\_{x0} + u\_{x1} + u\_{x2} + u\_{x3} = u\_{x0} + N\left(u\_{x0} + N\left(u\_{x0} + u\_{x1}\right)\right)$$

and

$$u\_y = u\_{y0} + N\left(u\_{y0} + N\left(u\_{y0} + u\_{y1}\right)\right),$$

which leads to the following formulas.

$$\begin{array}{ll} X\_{k+1}^{\ast} = X\_k + h\left(8F\_k - F\_{k-1}\right)/12, & Y\_{k+1}^{\ast} = Y\_k + h\left(8G\_k - G\_{k-1}\right)/12, \\ X\_{k+1}^{\ast\ast} = X\_{k+1}^{\ast} + 5hF\_{k+1}^{\ast}/12, & Y\_{k+1}^{\ast\ast} = Y\_{k+1}^{\ast} + 5hG\_{k+1}^{\ast}/12, \\ X\_{k+1}^{\ast\ast\ast} = X\_{k+1}^{\ast} + 5hF\_{k+1}^{\ast\ast}/12, & Y\_{k+1}^{\ast\ast\ast} = Y\_{k+1}^{\ast} + 5hG\_{k+1}^{\ast\ast}/12, \\ X\_{k+1} = X\_k + h\left(8F\_k - F\_{k-1} + 5F\_{k+1}^{\ast\ast\ast}\right)/12, & Y\_{k+1} = Y\_k + h\left(8G\_k - G\_{k-1} + 5G\_{k+1}^{\ast\ast\ast}\right)/12, \end{array} \tag{28}$$

where

$$\begin{array}{ll} F\_{k-1} = \frac{\int\_a^b f(t, X\_{k-1}, Y\_{k-1}) B\_{k-1}(t)\, dt}{\int\_a^b B\_{k-1}(t)\, dt}, & G\_{k-1} = \frac{\int\_a^b g(t, X\_{k-1}, Y\_{k-1}) B\_{k-1}(t)\, dt}{\int\_a^b B\_{k-1}(t)\, dt}, \\ F\_k = \frac{\int\_a^b f(t, X\_k, Y\_k) B\_k(t)\, dt}{\int\_a^b B\_k(t)\, dt}, & G\_k = \frac{\int\_a^b g(t, X\_k, Y\_k) B\_k(t)\, dt}{\int\_a^b B\_k(t)\, dt}, \\ F\_{k+1}^{\ast} = \frac{\int\_a^b f(t, X\_{k+1}^{\ast}, Y\_{k+1}^{\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, & G\_{k+1}^{\ast} = \frac{\int\_a^b g(t, X\_{k+1}^{\ast}, Y\_{k+1}^{\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, \\ F\_{k+1}^{\ast\ast} = \frac{\int\_a^b f(t, X\_{k+1}^{\ast\ast}, Y\_{k+1}^{\ast\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, & G\_{k+1}^{\ast\ast} = \frac{\int\_a^b g(t, X\_{k+1}^{\ast\ast}, Y\_{k+1}^{\ast\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, \\ F\_{k+1}^{\ast\ast\ast} = \frac{\int\_a^b f(t, X\_{k+1}^{\ast\ast\ast}, Y\_{k+1}^{\ast\ast\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}, & G\_{k+1}^{\ast\ast\ast} = \frac{\int\_a^b g(t, X\_{k+1}^{\ast\ast\ast}, Y\_{k+1}^{\ast\ast\ast}) B\_{k+1}(t)\, dt}{\int\_a^b B\_{k+1}(t)\, dt}. \end{array}$$

Then, obtain the desired approximation for *x* and *y* by the inverse FzT (25) applied to [*X*1,..., *Xn*] and [*Y*1,...,*Yn*].
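Scheme II, Eq. (28), can be sketched in the same spirit as Scheme I. As before, the FzT integrals are replaced by the point-value surrogates $F_k \approx f(t_k, X_k, Y_k)$ (an assumption for brevity; the paper uses the integrals), and the starting value $(X_2, Y_2)$ is produced by a single trapezoidal predictor-corrector step:

```python
import math

def scheme2(f, g, a, b, n, x1, y1):
    """Sketch of Scheme II, Eq. (28): modified 2-step Adams-Moulton method
    with NIM corrections, using point-value surrogates for the FzT
    components."""
    h = (b - a) / (n - 1)
    t = [a + k * h for k in range(n)]
    X, Y = [x1], [y1]
    # startup step for (X_2, Y_2): Euler predictor + trapezoid corrector
    F0, G0 = f(t[0], X[0], Y[0]), g(t[0], X[0], Y[0])
    xp, yp = X[0] + h * F0, Y[0] + h * G0
    X.append(X[0] + h * (F0 + f(t[1], xp, yp)) / 2)
    Y.append(Y[0] + h * (G0 + g(t[1], xp, yp)) / 2)
    for k in range(1, n - 1):
        Fk, Gk = f(t[k], X[k], Y[k]), g(t[k], X[k], Y[k])
        Fm, Gm = f(t[k - 1], X[k - 1], Y[k - 1]), g(t[k - 1], X[k - 1], Y[k - 1])
        xs = X[k] + h * (8 * Fk - Fm) / 12                          # X*
        ys = Y[k] + h * (8 * Gk - Gm) / 12                          # Y*
        Fs, Gs = f(t[k + 1], xs, ys), g(t[k + 1], xs, ys)
        xss, yss = xs + 5 * h * Fs / 12, ys + 5 * h * Gs / 12       # X**, Y**
        Fss, Gss = f(t[k + 1], xss, yss), g(t[k + 1], xss, yss)
        xsss, ysss = xs + 5 * h * Fss / 12, ys + 5 * h * Gss / 12   # X***, Y***
        Fsss, Gsss = f(t[k + 1], xsss, ysss), g(t[k + 1], xsss, ysss)
        X.append(X[k] + h * (8 * Fk - Fm + 5 * Fsss) / 12)
        Y.append(Y[k] + h * (8 * Gk - Gm + 5 * Gsss) / 12)
    return t, X, Y

# Hypothetical test problem: x' = y, y' = -x (x = cos t, y = -sin t)
t, X, Y = scheme2(lambda t, x, y: y, lambda t, x, y: -x, 0.0, 1.0, 101, 1.0, 0.0)
```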

#### *4.3. Numerical Scheme III: Modified 3-Step Adams Moulton Method Based on FzT and NIM for SODEs*

In this subsection, we improve the 3-step Adams–Moulton method using the FzT and NIM for solving the SODEs (16). The modified 3-step Adams–Moulton method was proposed in [16] for solving Cauchy problems. From this idea, we can approximate the solution of (16) by the NIM and the FzT components $X_k$ ($Y_k$), $k = 2, \ldots, n-1$, of the unknown function $x(t)$ ($y(t)$) with respect to a chosen $h$-uniform generalized fuzzy partition (see Definition 4), $B_1, \ldots, B_n$, of the interval $[a, b]$ with parameter $h = (b-a)/(n-1)$, $n \ge 2$. Let $X_1 = x_1$, $Y_1 = y_1$, $X_2 = x_2$, $Y_2 = y_2$, $X_3 = x_3$ and $Y_3 = y_3$ if possible; otherwise, we can compute the FzT components $X_2$, $Y_2$, $X_3$ and $Y_3$ from the numerical Scheme I. Now, we apply the F-transform and NIM to the SODEs (16) and obtain the following numerical Scheme III for $k = 3, \ldots, n-1$:

Following the steps used to derive Equation (27), together with the NIM steps of Subsection 4.2, we obtain the four-term approximation of the NIM as follows.

$$\left.\begin{array}{ll} u\_{x0} = X\_k + \frac{h}{24}\left(19F\_k - 5F\_{k-1} + F\_{k-2}\right), & u\_{y0} = Y\_k + \frac{h}{24}\left(19G\_k - 5G\_{k-1} + G\_{k-2}\right),\\ u\_{x1} = N(u\_{x0}), & u\_{y1} = N(u\_{y0}),\\ u\_{x2} = N(u\_{x0} + u\_{x1}) - N(u\_{x0}), & u\_{y2} = N(u\_{y0} + u\_{y1}) - N(u\_{y0}),\\ u\_{x3} = N(u\_{x0} + u\_{x1} + u\_{x2}) - N(u\_{x0} + u\_{x1}), & u\_{y3} = N(u\_{y0} + u\_{y1} + u\_{y2}) - N(u\_{y0} + u\_{y1}). \end{array}\right\}$$

Hence, the four-term approximate solution is

$$
u\_x = u\_{x0} + u\_{x1} + u\_{x2} + u\_{x3} = u\_{x0} + N\left(u\_{x0} + u\_{x1} + u\_{x2}\right)
$$

and

$$
u\_y = u\_{y0} + u\_{y1} + u\_{y2} + u\_{y3} = u\_{y0} + N\left(u\_{y0} + u\_{y1} + u\_{y2}\right),
$$
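The collapse of the four-term sums above is the standard telescoping of the NIM terms; explicitly, using the definitions of *u*<sub>*x*1</sub>, *u*<sub>*x*2</sub>, *u*<sub>*x*3</sub>,

```latex
u_{x1} + u_{x2} + u_{x3}
  = N(u_{x0})
  + \bigl[N(u_{x0}+u_{x1}) - N(u_{x0})\bigr]
  + \bigl[N(u_{x0}+u_{x1}+u_{x2}) - N(u_{x0}+u_{x1})\bigr]
  = N(u_{x0}+u_{x1}+u_{x2}),
```

and analogously for the *u*<sub>*y*</sub> terms.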

which leads to the following formulas.

$$\begin{array}{ll} X\_{k+1}^{\*} = X\_{k} + \frac{h}{24}\left(19F\_{k} - 5F\_{k-1} + F\_{k-2}\right), & Y\_{k+1}^{\*} = Y\_{k} + \frac{h}{24}\left(19G\_{k} - 5G\_{k-1} + G\_{k-2}\right),\\ X\_{k+1}^{\*\*} = X\_{k+1}^{\*} + \frac{9h}{24}F\_{k+1}^{\*}, & Y\_{k+1}^{\*\*} = Y\_{k+1}^{\*} + \frac{9h}{24}G\_{k+1}^{\*},\\ X\_{k+1}^{\*\*\*} = X\_{k+1}^{\*} + \frac{9h}{24}F\_{k+1}^{\*\*}, & Y\_{k+1}^{\*\*\*} = Y\_{k+1}^{\*} + \frac{9h}{24}G\_{k+1}^{\*\*},\\ X\_{k+1} = X\_{k} + \frac{h}{24}\left(19F\_{k} - 5F\_{k-1} + F\_{k-2} + 9F\_{k+1}^{\*\*\*}\right), & Y\_{k+1} = Y\_{k} + \frac{h}{24}\left(19G\_{k} - 5G\_{k-1} + G\_{k-2} + 9G\_{k+1}^{\*\*\*}\right),\end{array}\tag{29}$$

where

$$\begin{array}{ll} F\_{k-2} = \frac{\int\_{a}^{b} f(t, X\_{k-2}, Y\_{k-2}) B\_{k-2}(t)\,dt}{\int\_{a}^{b} B\_{k-2}(t)\,dt}, & G\_{k-2} = \frac{\int\_{a}^{b} g(t, X\_{k-2}, Y\_{k-2}) B\_{k-2}(t)\,dt}{\int\_{a}^{b} B\_{k-2}(t)\,dt},\\ F\_{k-1} = \frac{\int\_{a}^{b} f(t, X\_{k-1}, Y\_{k-1}) B\_{k-1}(t)\,dt}{\int\_{a}^{b} B\_{k-1}(t)\,dt}, & G\_{k-1} = \frac{\int\_{a}^{b} g(t, X\_{k-1}, Y\_{k-1}) B\_{k-1}(t)\,dt}{\int\_{a}^{b} B\_{k-1}(t)\,dt},\\ F\_{k} = \frac{\int\_{a}^{b} f(t, X\_{k}, Y\_{k}) B\_{k}(t)\,dt}{\int\_{a}^{b} B\_{k}(t)\,dt}, & G\_{k} = \frac{\int\_{a}^{b} g(t, X\_{k}, Y\_{k}) B\_{k}(t)\,dt}{\int\_{a}^{b} B\_{k}(t)\,dt},\\ F\_{k+1}^{\*} = \frac{\int\_{a}^{b} f(t, X\_{k+1}^{\*}, Y\_{k+1}^{\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt}, & G\_{k+1}^{\*} = \frac{\int\_{a}^{b} g(t, X\_{k+1}^{\*}, Y\_{k+1}^{\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt},\\ F\_{k+1}^{\*\*} = \frac{\int\_{a}^{b} f(t, X\_{k+1}^{\*\*}, Y\_{k+1}^{\*\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt}, & G\_{k+1}^{\*\*} = \frac{\int\_{a}^{b} g(t, X\_{k+1}^{\*\*}, Y\_{k+1}^{\*\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt},\\ F\_{k+1}^{\*\*\*} = \frac{\int\_{a}^{b} f(t, X\_{k+1}^{\*\*\*}, Y\_{k+1}^{\*\*\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt}, & G\_{k+1}^{\*\*\*} = \frac{\int\_{a}^{b} g(t, X\_{k+1}^{\*\*\*}, Y\_{k+1}^{\*\*\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt}.\end{array}$$

Finally, the inverse FzT (25) yields the approximation of the solution *x*(*t*) (*y*(*t*)) of the problem (16).
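The inversion step can be sketched in code. The following Python function assumes a normalized form of the inverse F-transform (a weighted average of the components with the raised-cosine basis functions of the partition); it is an illustrative assumption, not a verbatim transcription of Formula (25), and the function name is hypothetical.

```python
import math

def inverse_fzt(X, t_nodes, h, m, t):
    """Illustrative inverse F-transform: combine components X[k] using the
    raised-cosine basis functions, normalized so the weights sum to one
    (assumed normalized form of the inverse FzT)."""
    def B(tk, s):
        # raised-cosine basis function centered at node tk with bandwidth h
        return (1 + math.cos(math.pi * (s - tk) / h)) ** m if abs(s - tk) <= h else 0.0
    w = [B(tk, t) for tk in t_nodes]
    return sum(Xk * wk for Xk, wk in zip(X, w)) / sum(w)
```

For constant components the reconstruction reproduces the constant, which is a quick way to sanity-check an implementation.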

#### *4.4. Error Analysis of Numerical Scheme I for SODEs*

In this subsection, we present the error analysis for numerical Scheme I and consider Formula (23). Let *x*(*tk*) = *xk* and *y*(*tk*) = *yk* denote the exact solution and *Xk*, *Yk* the numerical solution. Substituting the exact solution into Formula (23), we get

$$\begin{array}{ll} x\_{k+1}^{\*} = x\_{k} + hF\_{k}^{\varepsilon}/2, & y\_{k+1}^{\*} = y\_{k} + hG\_{k}^{\varepsilon}/2,\\ x\_{k+1}^{\*\*} = x\_{k+1}^{\*} + hF\_{k+1}^{\varepsilon\*}/2, & y\_{k+1}^{\*\*} = y\_{k+1}^{\*} + hG\_{k+1}^{\varepsilon\*}/2,\\ x\_{k+1} = x\_{k} + h\left(F\_{k}^{\varepsilon} + F\_{k+1}^{\varepsilon\*\*}\right)/2, & y\_{k+1} = y\_{k} + h\left(G\_{k}^{\varepsilon} + G\_{k+1}^{\varepsilon\*\*}\right)/2,\end{array}\tag{30}$$

where

$$\begin{array}{ll} F\_{k}^{\varepsilon} = \frac{\int\_{a}^{b} f(t, x\_{k}, y\_{k}) B\_{k}(t)\,dt}{\int\_{a}^{b} B\_{k}(t)\,dt}, & G\_{k}^{\varepsilon} = \frac{\int\_{a}^{b} g(t, x\_{k}, y\_{k}) B\_{k}(t)\,dt}{\int\_{a}^{b} B\_{k}(t)\,dt},\\ F\_{k+1}^{\varepsilon\*} = \frac{\int\_{a}^{b} f(t, x\_{k+1}^{\*}, y\_{k+1}^{\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt}, & G\_{k+1}^{\varepsilon\*} = \frac{\int\_{a}^{b} g(t, x\_{k+1}^{\*}, y\_{k+1}^{\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt},\\ F\_{k+1}^{\varepsilon\*\*} = \frac{\int\_{a}^{b} f(t, x\_{k+1}^{\*\*}, y\_{k+1}^{\*\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt}, & G\_{k+1}^{\varepsilon\*\*} = \frac{\int\_{a}^{b} g(t, x\_{k+1}^{\*\*}, y\_{k+1}^{\*\*}) B\_{k+1}(t)\,dt}{\int\_{a}^{b} B\_{k+1}(t)\,dt},\end{array}\tag{31}$$

and the truncation errors *Txk* and *Tyk* of Scheme I are given by

$$\begin{array}{ll} Tx\_{k} = \frac{x\_{k+1} - x\_{k}}{h} - \frac{1}{2}\left(F\_{k}^{\varepsilon} + F\_{k+1}^{\varepsilon\*\*}\right), & Ty\_{k} = \frac{y\_{k+1} - y\_{k}}{h} - \frac{1}{2}\left(G\_{k}^{\varepsilon} + G\_{k+1}^{\varepsilon\*\*}\right). \end{array}\tag{32}$$

Rearranging (23), we get

$$\begin{array}{ll} 0 = \frac{X\_{k+1} - X\_{k}}{h} - \frac{1}{2}\left(F\_{k} + F\_{k+1}^{\*\*}\right), & 0 = \frac{Y\_{k+1} - Y\_{k}}{h} - \frac{1}{2}\left(G\_{k} + G\_{k+1}^{\*\*}\right). \end{array}\tag{33}$$

Let *ek*<sup>+</sup><sup>1</sup> = *Xk*<sup>+</sup><sup>1</sup> − *xk*<sup>+</sup><sup>1</sup> and *dk*<sup>+</sup><sup>1</sup> = *Yk*<sup>+</sup><sup>1</sup> − *yk*+1, then subtracting (33) from (32), we get

$$\begin{aligned} Tx\_k h &= e\_{k+1} - e\_k - \frac{h}{2}\left(F\_k - F\_k^{\varepsilon}\right) - \frac{h}{2}\left(F\_{k+1}^{\*\*} - F\_{k+1}^{\varepsilon\*\*}\right),\\ Ty\_k h &= d\_{k+1} - d\_k - \frac{h}{2}\left(G\_k - G\_k^{\varepsilon}\right) - \frac{h}{2}\left(G\_{k+1}^{\*\*} - G\_{k+1}^{\varepsilon\*\*}\right). \end{aligned}\tag{34}$$

Analogously to Lemma 8 and Theorem 2 of [16], we have the following results.

**Lemma 5.** *Let f and g be sufficiently smooth functions of their arguments on* [*a*, *b*] *and Lipschitz continuous with respect to x and y, i.e., there exists a constant L* ∈ R*, such that for all t* ∈ [*a*, *b*] *and x*, *x*′, *y*, *y*′ ∈ R*,*

$$|f(t, \mathbf{x}, y) - f(t, \mathbf{x'}, y')| \le L(|\mathbf{x} - \mathbf{x'}| + |y - y'|),$$

$$|g(t, \mathbf{x}, y) - g(t, \mathbf{x'}, y')| \le L(|\mathbf{x} - \mathbf{x'}| + |y - y'|). \tag{35}$$

*Assume that* {*Bk* | *k* = 1, . . . , *n*}*, n* ≥ 2, *is an h-uniform generalized fuzzy partition of* [*a*, *b*]*. Then, for k* = 1, . . . , *n, we get*

$$|e\_{k+1}| \le |e\_k|\,(1+c) + Th \quad \text{and} \quad \left|F\_k^{\varepsilon} - F\_{k+1}^{\varepsilon\*\*}\right| \le LhM\_2,$$

$$|d\_{k+1}| \le |d\_k|\,(1+c) + Th \quad \text{and} \quad \left|G\_k^{\varepsilon} - G\_{k+1}^{\varepsilon\*\*}\right| \le LhM\_3,$$

*where c* = *hL* + *h*<sup>2</sup>*L*<sup>2</sup>/2 + *h*<sup>3</sup>*L*<sup>3</sup>/8*, T* = max<sub>1≤*k*≤*n*</sub>{|*Txk*|, |*Tyk*|}*, the quantities F*<sup>ε</sup><sub>*k*</sub>, *F*<sup>ε∗∗</sup><sub>*k*+1</sub>, *G*<sup>ε</sup><sub>*k*</sub>, *G*<sup>ε∗∗</sup><sub>*k*+1</sub> *are determined by Formula (31), and M*<sub>2</sub>*, M*<sub>3</sub> *are upper bounds for f and g, respectively, on* [*a*, *b*]*.*

**Theorem 3.** *Let f* , *g* : [*a*, *b*] → R *be twice continuously differentiable on* [*a*, *b*]*. Moreover, let f* , *g* : [*a*, *b*] × R × R → R *be Lipschitz continuous with respect to x and y. Assume that* {*Bk* | *k* = 1, . . . , *n*}*, n* ≥ 2, *is an h-uniform generalized fuzzy partition of* [*a*, *b*]*. Then Scheme I (23) is convergent.*
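To see how Lemma 5 yields the convergence claimed in Theorem 3, the error recursion can be unrolled in the standard discrete Gronwall manner. Writing *c* = *h* *L̃* with *L̃* = *L* + *hL*²/2 + *h*²*L*³/8 and assuming *e*₁ = 0, a sketch of the argument is

```latex
|e_{k+1}| \le (1+c)\,|e_k| + Th
          \le (1+c)^k\,|e_1| + Th\sum_{j=0}^{k-1}(1+c)^j
          = Th\,\frac{(1+c)^k - 1}{c}
          \le \frac{T}{\tilde{L}}\left(e^{(b-a)\tilde{L}} - 1\right),
```

using (1 + *c*)^*k* ≤ e^{*kc*} and *kh* ≤ *b* − *a*; the right-hand side tends to zero as *h* → 0 because the truncation error *T* vanishes with *h*. The same bound applies to |*d*<sub>*k*+1</sub>|.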

The error analysis for the remaining schemes can be carried out analogously to that of numerical Scheme I.

#### **5. Applications**

A general model for the dynamical system may be written as *dx*/*dt* = *xg*(*x*, *y*), *dy*/*dt* = *yh*(*x*, *y*), where *g* and *h* are arbitrary functions of the prey and predator species, whose populations are *x*(*t*) and *y*(*t*) at time *t*. However, the Lotka–Volterra problem with variable coefficients *α*(*t*), *β*(*t*), *δ*(*t*), *γ*(*t*) as functions of time *t* has not yet been solved by any fuzzy numerical method. The resulting differential equations constitute a non-autonomous system of ordinary differential equations. The model incorporating the above functions is as follows [41–43]:

$$\begin{aligned} \frac{dx}{dt} &= \alpha(t)\,x(t) - \beta(t)\,x(t)\,y(t), & x(0) = x\_1,\\ \frac{dy}{dt} &= \delta(t)\,x(t)\,y(t) - \gamma(t)\,y(t), & y(0) = y\_1. \end{aligned}\tag{36}$$

Two examples are discussed in order to demonstrate the results obtained by Schemes I (23), II (28) and III (29) for the numerical solution of the model (36).

**Example 2.** *Consider the problem of the Lotka–Volterra prey-predator model (36). We take α*(*t*) = 4 + tan(*t*), *β*(*t*) = exp(2*t*), *γ*(*t*) = −2, *δ*(*t*) = cos(*t*), *x*(0) = −4 *and y*(0) = 4*.*

*The exact solution for these coefficients is x*(*t*) = −4 cos(*t*), *y*(*t*) = 4 exp(−2*t*)*, as proposed in [41–43].*

**Example 3.** *Consider the problem of the Lotka–Volterra prey-predator model (36) with α*(*t*) = −*t*, *β*(*t*) = −*t*, *γ*(*t*) = *t*, *δ*(*t*) = *t*, *x*(0) = 2 *and y*(0) = 2*. The exact solution for these coefficients is x*(*t*) = 2/(2 − exp(*t*<sup>2</sup>/2))*, y*(*t*) = 2/(2 − exp(*t*<sup>2</sup>/2))*, as proposed in [41,42].*
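Example 3's exact solution can be verified directly against the right-hand side of (36). The following small numerical check, using central differences, is illustrative (the helper names are hypothetical):

```python
import math

def exact(t):
    # exact solution of Example 3: x(t) = y(t) = 2 / (2 - exp(t^2/2))
    return 2.0 / (2.0 - math.exp(t * t / 2.0))

def rhs(t, x, y):
    # right-hand side of system (36) with alpha = beta = -t, gamma = delta = t
    alpha, beta, delta, gamma = -t, -t, t, t
    return alpha * x - beta * x * y, delta * x * y - gamma * y

eps = 1e-6
for t in (0.1, 0.3, 0.5):
    x = y = exact(t)
    dxdt = (exact(t + eps) - exact(t - eps)) / (2 * eps)   # central difference
    fx, fy = rhs(t, x, y)
    # both components of the system are satisfied by the exact solution
    assert abs(dxdt - fx) < 1e-6 and abs(dxdt - fy) < 1e-6
```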

The results obtained by the proposed fuzzy approximation methods with respect to the case *KC*<sub>2</sub><sup>201</sup> defined in Example 1 are listed in Tables 2–6. The proposed fuzzy approximation methods are generated by Algorithms A1–A3 (Appendix A). The mean square error (MSE) is defined as MSE = (1/*n*) ∑<sub>*k*</sub> (*Yk* − *y*(*tk*))<sup>2</sup>, an easily computable quantity for a particular sample. From the numerical tests, the results are summarized as follows.
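In code, the MSE just defined can be computed as follows (a minimal sketch; the function and argument names are illustrative):

```python
def mse(Y, y_exact, t):
    # mean square error between numerical values Y[k] and the exact
    # solution y evaluated at the nodes t[k]
    n = len(Y)
    return sum((Y[k] - y_exact(t[k])) ** 2 for k in range(n)) / n
```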



**Table 2.** Comparison of numerical results of *x*(*t*) for Example 2.

<sup>1</sup> Trapezoidal rule; <sup>2</sup> 2-Step Adams Moulton Method; <sup>3</sup> 3-Step Adams Moulton Method.

**Table 3.** Comparison of numerical results of *y*(*t*) for Example 2.


<sup>1</sup> Trapezoidal rule; <sup>2</sup> 2-Step Adams Moulton Method; <sup>3</sup> 3-Step Adams Moulton Method.


**Table 4.** Comparison of numerical results of *x*(*t*) for Example 3.

<sup>1</sup> Trapezoidal rule; <sup>2</sup> 2-Step Adams Moulton Method; <sup>3</sup> 3-Step Adams Moulton Method.

**Table 5.** Comparison of numerical results of *y*(*t*) for Example 3.


<sup>1</sup> Trapezoidal rule; <sup>2</sup> 2-Step Adams Moulton Method; <sup>3</sup> 3-Step Adams Moulton Method.


**Table 6.** The values of MSE for Examples 2 and 3.

Further, the results obtained using the proposed fuzzy approximation methods for Examples 2 and 3 are shown in Figures 1–3, using *KC*<sub>2</sub><sup>201</sup>. Figure 1 compares the three proposed fuzzy numerical methods (Schemes I, II and III) with the exact solutions, while Figures 2 and 3 show the comparison between each numerical scheme and the exact solution of Examples 2 and 3, respectively, separated from each other for clarity, for *h* = 0.01. All the graphs are plotted using MATLAB.

**Figure 1.** A comparison between three fuzzy numerical methods and exact solution for two examples. (**a**) Example 2; (**b**) Example 3.

**Figure 2.** The graphical solution of Example 2. (**a**) Scheme I; (**b**) Scheme II; (**c**) Scheme III.

**Figure 3.** The graphical solution of Example 3. (**a**) Scheme I; (**b**) Scheme II; (**c**) Scheme III.

#### **6. Conclusions**

Three approximation methods based on the new generalized uniform fuzzy partitions are used for solving SODEs. In accordance with the three approximation methods for the Cauchy problem by [16], the Trapezoidal rule (one step) and the Adams Moulton methods (two and three steps) are improved using FzT and NIM. The results proved that the first approximation method converges to the exact solution. As an application, a predator-prey model is solved using the three proposed approximation methods. From the numerical results, it is observed that the new fuzzy approximation methods yield more accurate results in comparison with the classical Trapezoidal rule (one step) and the classical Adams Moulton methods (two and three steps). Hence, the proposed methods are recommended for solving differential equations.

In this regard, it is well known that FzT has a certain advantage in coping with problems affected by noise, because the FzT components of the original and noisy functions are very similar to each other. In addition, a higher-order differential equation can be reduced to a system of first-order differential equations by relabeling the variables. Thus, the proposed methods can also be applied to a higher-order differential equation in the case of a non-noisy or noisy right-hand side. From Algorithms A1–A3 (Appendix A), it is observed that the new fuzzy approximation methods are more time consuming than the considered Trapezoidal rule and Adams Moulton methods. In future research, we plan to give more details about the running time of the proposed methods. Further, we plan to solve a boundary value problem for a second-order ordinary differential equation with fuzzy boundary conditions; see preliminary results in [44].
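The reduction mentioned above is purely a relabeling of variables. As an illustrative sketch (not taken from the paper), the second-order equation *u*″ = −*u* with *u*(0) = 1, *u*′(0) = 0 (exact solution *u* = cos *t*) becomes a first-order system of the form (16) by setting *x* = *u*, *y* = *u*′; here a classical RK4 step is used only to show that the relabeled system works:

```python
import math

# relabel x = u, y = u', so u'' = -u becomes x' = y, y' = -x
f = lambda t, x, y: y
g = lambda t, x, y: -x

def rk4_step(t, x, y, h):
    # one classical fourth-order Runge-Kutta step for the pair (f, g)
    k1, l1 = f(t, x, y), g(t, x, y)
    k2, l2 = f(t + h/2, x + h*k1/2, y + h*l1/2), g(t + h/2, x + h*k1/2, y + h*l1/2)
    k3, l3 = f(t + h/2, x + h*k2/2, y + h*l2/2), g(t + h/2, x + h*k2/2, y + h*l2/2)
    k4, l4 = f(t + h, x + h*k3, y + h*l3), g(t + h, x + h*k3, y + h*l3)
    return x + h*(k1 + 2*k2 + 2*k3 + k4)/6, y + h*(l1 + 2*l2 + 2*l3 + l4)/6

x, y, h = 1.0, 0.0, 0.01
for k in range(100):                 # integrate to t = 1
    x, y = rk4_step(k * h, x, y, h)
assert abs(x - math.cos(1.0)) < 1e-6   # x tracks u = cos t
```

Any of the proposed fuzzy schemes can be applied to the pair (f, g) in exactly the same way.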

**Author Contributions:** Conceptualization and performed the numerical experiments, H.A.A.; Evaluated the results and supported this work, I.P.; Project administration and designed the numerical methods, M.Z.A.; Software and data curation, Z.R.Y.

**Funding:** This work of Irina Perfilieva has been supported by the project "LQ1602 IT4Innovations excellence in science" and by the Grant Agency of the Czech Republic (project No. 16-09541S).

**Acknowledgments:** The authors would like to express their deep gratitude to the editors and the anonymous referees for their valuable comments and criticism towards the improvement of the paper. Also, many thanks given to Universiti Malaysia Perlis for providing all facilities until this work was completed successfully.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **Appendix A. Algorithms**

In this appendix, the algorithms of the approximation methods based on FzT and NIM for Sections 4.1–4.3 are explained in detail. Pseudocode is used to describe the algorithms in a simplified form that is easy to read. This pseudocode specifies the form of the input to be supplied and the form of the desired output. A stopping technique independent of the numerical technique is incorporated into each algorithm to avoid infinite loops. Two punctuation symbols are used in the algorithms: a period (.) indicates the termination of a step, and a semicolon (;) separates tasks within a step. In the algorithms, with the help of MATLAB, the definite integral is specified by integral(function, lower limit, upper limit). The steps in the algorithms follow the rules of structured program construction. They have been arranged so that there should be minimal difficulty translating the pseudocode into any programming language suitable for scientific applications. To approximate the solution of SODEs (16) at (*N* + 1) equally spaced numbers in the interval [*a*, *b*], proceed as follows.

**Algorithm A1.** One-step algorithm for system of ODEs.


```
INPUT: f(t, x, y); g(t, x, y); endpoints a, b; integer N; initial conditions x1, y1; m.
Step 1 Set h = (b − a)/N; X1 = x1; Y1 = y1; t1 = a; k = 1, . . . , N + 1; tk = a + (k − 1)h.
Step 2 Define the generalized uniform fuzzy partition as
       Bk(t) = (√πΓ(m + 1)/(2Γ(m + 1/2))) (1/2^m) (1 + cos(π(t − t(k))/h))^m.
Step 3 for k = 1 to N do Steps 04–15.
              Step 04 F(k) = integral(f(t, X(k),Y(k))Bk(t), t(k − 1), t(k + 1))/integral(Bk(t), t(k − 1), t(k + 1)).
              Step 05 G(k) = integral(g(t, X(k),Y(k))Bk(t), t(k − 1), t(k + 1))/integral(Bk(t), t(k − 1), t(k + 1)).
              Step 06 Xstar(k + 1) = X(k) + hF(k)/2.
              Step 07 Ystar(k + 1) = Y(k) + hG(k)/2.
              Step 08 Fstar(k + 1) = integral(f(t, Xstar(k + 1),Ystar(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 09 Gstar(k + 1) = integral(g(t, Xstar(k + 1),Ystar(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 10 Xstar2(k + 1) = Xstar(k + 1) + hFstar(k + 1)/2.
              Step 11 Ystar2(k + 1) = Ystar(k + 1) + hGstar(k + 1)/2.
              Step 12 Fstar2(k + 1) = integral(f(t, Xstar2(k + 1),Ystar2(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 13 Gstar2(k + 1) = integral(g(t, Xstar2(k + 1),Ystar2(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 14 X(k + 1) = X(k) + h (F(k) + Fstar2(k + 1)) /2.
              Step 15 Y(k + 1) = Y(k) + h (G(k) + Gstar2(k + 1)) /2.
     end.
OUTPUT: Approximation X and Y to x and y, respectively at the (N + 1) values of t.
```
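Algorithm A1 can be translated into executable code. The following Python sketch is illustrative rather than a verbatim port of the authors' MATLAB implementation: composite Simpson quadrature replaces MATLAB's `integral`, each basis function is integrated over its own support clipped to [a, b], and the partition's normalization constant is omitted because it cancels in the F(k), G(k) ratios.

```python
import math

def simpson(f, a, b, n=100):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def scheme1(f, g, a, b, N, x1, y1, m=1):
    """Sketch of Algorithm A1 (numerical Scheme I) for x' = f, y' = g."""
    h = (b - a) / N
    t = [a + k * h for k in range(N + 1)]

    def B(j):
        # raised-cosine basis; the normalization constant
        # sqrt(pi)*Gamma(m+1)/(2*Gamma(m+1/2)) cancels in the ratios below
        return lambda s: (1 + math.cos(math.pi * (s - t[j]) / h)) ** m \
            if abs(s - t[j]) <= h else 0.0

    def comp(fun, xv, yv, j):
        # FzT component: weighted average of fun over the clipped support of B_j
        lo, hi = max(a, t[j] - h), min(b, t[j] + h)
        Bj = B(j)
        return simpson(lambda s: fun(s, xv, yv) * Bj(s), lo, hi) / simpson(Bj, lo, hi)

    X, Y = [x1], [y1]
    for k in range(N):
        Fk, Gk = comp(f, X[k], Y[k], k), comp(g, X[k], Y[k], k)
        xs, ys = X[k] + h * Fk / 2, Y[k] + h * Gk / 2            # Steps 06-07
        Fs, Gs = comp(f, xs, ys, k + 1), comp(g, xs, ys, k + 1)
        xs2, ys2 = xs + h * Fs / 2, ys + h * Gs / 2              # Steps 10-11
        Fs2, Gs2 = comp(f, xs2, ys2, k + 1), comp(g, xs2, ys2, k + 1)
        X.append(X[k] + h * (Fk + Fs2) / 2)                      # Step 14
        Y.append(Y[k] + h * (Gk + Gs2) / 2)                      # Step 15
    return t, X, Y
```

For instance, with f(t, x, y) = y and g(t, x, y) = −x on [0, 1], the scheme closely reproduces x = cos t and y = −sin t.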

**Algorithm A2.** Two-step algorithm for system of ODEs.
```
INPUT: f(t, x, y); g(t, x, y); endpoints a, b; integer N; initial conditions x1, y1; m.
Step 1 Set h = (b − a)/N; X1 = x1; Y1 = y1; t1 = a; k = 1, . . . , N + 1; tk = a + (k − 1)h.
Step 2 Define the generalized uniform fuzzy partition as
       Bk(t) = (√πΓ(m + 1)/(2Γ(m + 1/2))) (1/2^m) (1 + cos(π(t − t(k))/h))^m.
Step 3 Set X2 = x2; Y2 = y2. (In the case of no exact solutions, compute X2 and Y2 using Algorithm 1.)
Step 4 for k = 2 to N do Steps 05–18.
              Step 05 F(k − 1) =integral(f(t, X(k − 1),Y(k − 1))Bk−1(t), t(k − 1), t(k + 1))/integral(Bk−1(t), t(k − 1), t(k + 1)).
              Step 06 G(k − 1) =integral(g(t, X(k − 1),Y(k − 1))Bk−1(t), t(k − 1), t(k + 1))/integral(Bk−1(t), t(k − 1), t(k + 1)).
              Step 07 F(k) =integral(f(t, X(k),Y(k))Bk(t), t(k − 1), t(k + 1))/integral(Bk(t), t(k − 1), t(k + 1)).
              Step 08 G(k) =integral(g(t, X(k),Y(k))Bk(t), t(k − 1), t(k + 1))/integral(Bk(t), t(k − 1), t(k + 1)).
              Step 09 Xstar(k + 1) =X(k) + h(8F(k) − F(k − 1))/12.
              Step 10 Ystar(k + 1) =Y(k) + h(8G(k) − G(k − 1))/12.
              Step 11 Fstar(k + 1) =integral(f(t, Xstar(k + 1),Ystar(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 12 Gstar(k + 1) =integral(g(t, Xstar(k + 1),Ystar(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 13 Xstar2(k + 1) =Xstar(k + 1) + 5hFstar(k + 1)/12.
              Step 14 Ystar2(k + 1) =Ystar(k + 1) + 5hGstar(k + 1)/12.
              Step 15 Fstar2(k + 1) =integral(f(t, Xstar2(k + 1),Ystar2(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 16 Gstar2(k + 1) =integral(g(t, Xstar2(k + 1),Ystar2(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 17 X(k + 1) =X(k) + h(8F(k) − F(k − 1) + 5Fstar2(k + 1))/12.
              Step 18 Y(k + 1) =Y(k) + h(8G(k) − G(k − 1) + 5Gstar2(k + 1))/12.
     end.
OUTPUT: Approximation X and Y to x and y, respectively at the (N + 1) values of t.
```
**Algorithm A3.** Three-step algorithm for system of ODEs.

```
INPUT: f(t, x, y); g(t, x, y); endpoints a, b; integer N; initial conditions x1, y1; m.
Step 1 Set h = (b − a)/N; X1 = x1; Y1 = y1; t1 = a; k = 1, . . . , N + 1; tk = a + (k − 1)h.
Step 2 Define the generalized uniform fuzzy partition as
       Bk(t) = (√πΓ(m + 1)/(2Γ(m + 1/2))) (1/2^m) (1 + cos(π(t − t(k))/h))^m.
Step 3 Set X2 = x2; Y2 = y2; X3 = x3; Y3 = y3. (In the case of no exact solutions, compute X2, Y2, X3 and Y3 using Algorithm 1 or 2.)
Step 4 for k = 3 to N do Steps 05–20.
              Step 05 F(k − 2) =integral(f(t, X(k − 2),Y(k − 2))Bk−2(t), t(k − 1), t(k + 1))/integral(Bk−2(t), t(k − 1), t(k + 1)).
              Step 06 G(k − 2) =integral(g(t, X(k − 2),Y(k − 2))Bk−2(t), t(k − 1), t(k + 1))/integral(Bk−2(t), t(k − 1), t(k + 1)).
              Step 07 F(k − 1) =integral(f(t, X(k − 1),Y(k − 1))Bk−1(t), t(k − 1), t(k + 1))/integral(Bk−1(t), t(k − 1), t(k + 1)).
              Step 08 G(k − 1) =integral(g(t, X(k − 1),Y(k − 1))Bk−1(t), t(k − 1), t(k + 1))/integral(Bk−1(t), t(k − 1), t(k + 1)).
              Step 09 F(k) =integral(f(t, X(k),Y(k))Bk(t), t(k − 1), t(k + 1))/integral(Bk(t), t(k − 1), t(k + 1)).
              Step 10 G(k) =integral(g(t, X(k),Y(k))Bk(t), t(k − 1), t(k + 1))/integral(Bk(t), t(k − 1), t(k + 1)).
              Step 11 Xstar(k + 1) =X(k) + h(19F(k) − 5F(k − 1) + F(k − 2))/24.
              Step 12 Ystar(k + 1) =Y(k) + h(19G(k) − 5G(k − 1) + G(k − 2))/24.
              Step 13 Fstar(k + 1) =integral(f(t, Xstar(k + 1),Ystar(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 14 Gstar(k + 1) =integral(g(t, Xstar(k + 1),Ystar(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 15 Xstar2(k + 1) =Xstar(k + 1) + 9hFstar(k + 1)/24.
              Step 16 Ystar2(k + 1) =Ystar(k + 1) + 9hGstar(k + 1)/24.
              Step 17 Fstar2(k + 1) =integral(f(t, Xstar2(k + 1),Ystar2(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 18 Gstar2(k + 1) =integral(g(t, Xstar2(k + 1),Ystar2(k + 1))Bk+1(t), t(k − 1), t(k + 1))/integral(Bk+1(t), t(k − 1), t(k + 1)).
              Step 19 X(k + 1) =X(k) + h(19F(k) − 5F(k − 1) + F(k − 2) + 9Fstar2(k + 1))/24.
              Step 20 Y(k + 1) =Y(k) + h(19G(k) − 5G(k − 1) + G(k − 2) + 9Gstar2(k + 1))/24.
     end.
OUTPUT: Approximation X and Y to x and y, respectively at the (N + 1) values of t.
```
#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Performance Study of the Impact of Different Perturbation Methods on the Efficiency of GVNS for Solving TSP**

#### **Christos Papalitsas 1,\*,† , Panayiotis Karakostas 2,\*,† and Theodore Andronikos 1,†**


Received: 18 August 2019; Accepted: 18 September 2019; Published: 20 September 2019

**Abstract:** The purpose of this paper is to assess how three shaking procedures affect the performance of a metaheuristic GVNS algorithm. The first shaking procedure is generally known in the literature as the intensified shaking method, the second is a quantum-inspired perturbation method, and the third is a shuffle method. The GVNS schemes are evaluated using both the First and the Best Improvement search strategies, with time limits of one and two minutes. The formed GVNS schemes were applied to Traveling Salesman Problem benchmark instances (sTSP, nTSP) from the well-known TSPLib. To examine the potential advantage of any of the three metaheuristic schemes, extensive statistical analysis was performed on the reported results. The experimental data show that for aTSP instances the first two methods perform roughly equivalently and, in any case, much better than the shuffle approach. In addition, the first method performs better than the other two when the First Improvement strategy is used, while the second method gives results quite similar to the third. However, no significant deviations were observed when different perturbation methods were used for Symmetric TSP instances (sTSP, nTSP).

**Keywords:** variable neighborhood search; experimental comparison; statistical analysis; traveling salesman problem; soft computing

#### **1. Introduction**

Variable Neighborhood Search (VNS) is a metaheuristic approach proposed by Mladenovic and Hansen to solve combinatorial and global optimization problems [1,2]. This framework is primarily designed to systematically modify the neighborhood structure, to reach an optimal (or near-optimal) solution [3]. VNS and its extensions have demonstrated their effectiveness in solving many problems in the combinatorial and global optimization field [4,5].

Each VNS heuristic consists of three parts. The first is a process of shaking (phase of diversification) used to escape from local optimal solutions. The next one is changing the neighborhood, where the next neighborhood structure to be searched will be determined; an approval or rejection criterion will also be applied to the last solution found during this part. The third part is the phase of improvement (intensification) achieved by exploring neighborhood structures by applying various local search moves. This exploration is carried out primarily through one of the following steps to change the neighborhood:


• Skewed neighborhood change step: Accept as the new incumbent not only solutions that improve on the current incumbent, but also some that are worse. This neighborhood change step is intended to allow the exploration of valleys away from the incumbent solution. A trial solution is evaluated taking into consideration not only the objective values of the trial and the incumbent solution, but also their distance.

**Variable neighborhood search variants.** Many VNS variants have already been developed and used to solve hard optimization problems [6,7]. The most commonly used variants are the Basic VNS (BVNS), the Variable Neighborhood Descent (VND), the General VNS (GVNS), and the Reduced VNS (RVNS). In the BVNS, a method of diversification is alternated with a local search operator. VND consists of an improvement procedure, in which neighborhood structures are systematically explored, and a neighborhood change step. According to their neighborhood change step, there are different variants of VND. The pipe-VND, which uses the pipe neighborhood change step, appears to be the most efficient for solving computational problems [6]. General Variable Neighborhood Search (GVNS) is a VNS variant that uses a VND method as its improvement phase. GVNS has been successfully tested in many applications, as several recent works have shown [8,9].

The efficiency of metaheuristics depends on the efficiency of their components. Performance studies are a prerequisite for evaluating different metaheuristics [10] or different components of a metaheuristic algorithm [11]. In this direction and based on the VNS, Huber and Geiger (2017) [12] examined the impact of different orderings of local search operators in the improvement component of a VNS algorithm. There are similar studies on the impact of the initial solution [13] or the use of different neighborhood change strategies [2] on the overall performance of a VNS algorithm. However, there is a lack of contributions studying the impact of the shaking components on the overall performance of a VNS algorithm. Papalitsas et al. (2019) [14] attempted an initial study on the impact of diversification methods on the performance of GVNS by focusing on asymmetric TSP instances.

This work is a substantial extension of our recent conference paper [14], in which we investigated the impact of three shaking methods on a GVNS metaheuristic applied to asymmetric Traveling Salesman Problem (TSP) instances from the TSPLib. In an effort to build a comprehensive view of the potential impact of diversification methods, the findings of the previous work are integrated with further analysis of the obtained solutions on symmetric and world TSP instances from TSPLib. To examine this potential impact of the different perturbation strategies, the three shaking methods were examined within the same improvement step. Moreover, the resulting GVNS schemes were executed with both the First and Best Improvement search strategies, and two different time limits were used as the main stopping criteria: 60 s and 120 s. The obtained experimental results were analyzed statistically to establish whether the use of different perturbation methods affects the performance of the GVNS algorithm. Our findings demonstrate that the use of different perturbation strategies clearly affects the solution quality on aTSP instances, while no significant differences were observed for sTSP instances, with the exception of the experiments conducted using Best Improvement and the 120 s run time limit. Moreover, to examine the efficiency of the formed methods, a comparison is performed between the obtained results and other recent metaheuristic solution approaches for the TSP in the literature. As our experimental results confirm, the proposed GVNS schemes produce better solutions than the other metaheuristics.

#### *Organization*

This paper is organized as follows. In Section 2 the proposed GVNS solution methods and their technical components are explained. Section 3 contains the experimental results of our performance analysis, while the statistical tests applied to our numerical results are presented in Section 4. Section 5 provides a comparative study between our algorithms and other metaheuristic solution approaches in the recent literature. Finally, conclusions and ideas for future work are given in Section 6.

#### **2. GVNS Heuristics**

The formed GVNS methods use the pipe-VND scheme as their improvement phase, meaning that the search continues in the same neighborhood in which the improvement occurs.

#### *2.1. Neighborhood Structures*

Three local search operators are considered for exploring different solutions:

- **1-0 Relocate**: a node is removed from its current position and reinserted at another position in the tour.
- **2-Opt**: two edges of the tour are removed and the segment between them is reversed.
- **1-1 Exchange**: the positions of two nodes in the tour are swapped.
All three neighborhood structures are incorporated in a pipe-VND scheme, as illustrated in Algorithm 1, where *lmax* = 3 denotes the number of neighborhood structures.

**Algorithm 1** pipe-VND.

```
1: procedure PVND(S, lmax)
2:   l = 1
3:   while l <= lmax do
4:     select case(l)
5:       case(1): S' ← 1-0 Relocate(S)
6:       case(2): S' ← 2-Opt(S)
7:       case(3): S' ← 1-1 Exchange(S)
8:     end select
9:     if f(S') < f(S) then
10:      S ← S'
11:    else
12:      l = l + 1
13:    end if
14:  end while
15:  return S
16: end procedure
```
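The pipe-VND logic of Algorithm 1 can be sketched in Python. This is an illustrative reimplementation, not the authors' Fortran code: the move functions, the sampled local search with a `tries_per_level` budget, and the distance-matrix representation are all assumptions made for the sketch.

```python
import random

def tour_cost(tour, dist):
    """Cost of a closed tour under a distance matrix (illustrative representation)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def relocate_1_0(tour):
    """1-0 Relocate: move one randomly chosen node to another position."""
    t = tour[:]
    node = t.pop(random.randrange(len(t)))
    t.insert(random.randrange(len(t) + 1), node)
    return t

def two_opt(tour):
    """2-Opt: reverse a randomly chosen segment of the tour."""
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    t[i:j + 1] = reversed(t[i:j + 1])
    return t

def exchange_1_1(tour):
    """1-1 Exchange: swap two randomly chosen nodes."""
    t = tour[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def pipe_vnd(tour, dist, tries_per_level=50):
    """Pipe-VND sketch: on improvement, stay in the same neighborhood;
    otherwise advance to the next one (l_max = 3)."""
    neighborhoods = [relocate_1_0, two_opt, exchange_1_1]
    l = 0
    while l < len(neighborhoods):
        improved = False
        for _ in range(tries_per_level):  # sampled local search (an assumption)
            candidate = neighborhoods[l](tour)
            if tour_cost(candidate, dist) < tour_cost(tour, dist):
                tour, improved = candidate, True  # improvement: same neighborhood
                break
        if not improved:
            l += 1  # no improvement found: move to the next neighborhood
    return tour
```

The returned tour is always a permutation of the input and never costs more than it, which mirrors the descent behavior of the pseudocode.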
#### *2.2. Shaking Methods*

To avoid local optimum traps, three different shaking procedures are examined. These perturbation methods are the following:

**Shake\_1**. This diversification method randomly selects one of the predefined neighborhood structures and applies it *k* times (1 < *k* < *kmax*, where *kmax* is the maximum number of shaking iterations) to the current solution. The method is summarized in Algorithm 2.
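A hedged Python sketch of this shaking idea (not the authors' implementation): one of the three neighborhood moves is picked at random and applied *k* times, with *k* drawn from 1 to *kmax*; the move implementations are illustrative stand-ins.

```python
import random

def shake_1(tour, kmax=12):
    """Shake_1 (sketch): pick one neighborhood move at random and apply
    it k times (k drawn uniformly from 1..kmax) to the incumbent tour."""
    def relocate(t):  # 1-0 Relocate: move one node elsewhere
        t = t[:]
        node = t.pop(random.randrange(len(t)))
        t.insert(random.randrange(len(t) + 1), node)
        return t

    def two_opt(t):  # 2-Opt: reverse a random segment
        t = t[:]
        i, j = sorted(random.sample(range(len(t)), 2))
        t[i:j + 1] = reversed(t[i:j + 1])
        return t

    def exchange(t):  # 1-1 Exchange: swap two nodes
        t = t[:]
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
        return t

    move = random.choice([relocate, two_opt, exchange])
    for _ in range(random.randint(1, kmax)):
        tour = move(tour)
    return tour
```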

**Shake\_2** [15]. The scientific community has shown a growing interest in new unconventional computing methods. Broadly, unconventional computing covers a wide range of proposed new or unusual computing models, among them natural computing [16]. Nature-inspired computing has emerged as an efficient paradigm for designing and simulating innovative computational models, inspired by natural phenomena, to solve complex nonlinear, dynamic problems. Some well-known nature-inspired computational systems and algorithms are [17]:


**Algorithm 2** Shake\_1.


#### Quantum Computing Principles

Quantum-inspired methods imitate the fundamental principles of quantum computing, a subfield of natural computing introduced by Feynman in the 1980s. Feynman realized that an effective simulation of an actual quantum system on a standard computer is not possible, because the simulation of actual quantum processes would be exponentially slowed down [18,19]. Quantum computing is an important addition to the existing standard computing models: a general framework that treats computation itself as a quantum phenomenon. Beyond computer science, it combines mathematical abstractions, mainly from linear algebra, with physics, mainly quantum mechanics.

The *qubit* is the quantum analogue of the classical bit. Similarly, the *quantum register*, a collection of qubits, is the quantum analogue of the classical processor register. In each call of this shaking method, a simulated quantum *n*-qubit register generates a normalized complex *n*-dimensional unit vector. In this context, normalized means that if (*z*1, ... , *zn*) is the complex vector, then |*z*1|<sup>2</sup> + ... + |*zn*|<sup>2</sup> = 1. The dimension *n* of the complex unit vector is greater than or equal to the dimension of the problem. The complex *n*-dimensional vector is converted into a real *n*-dimensional vector whose components are real numbers in the interval [0, 1]. If *zi* and *ri* are the *i*th components of the complex and real vectors respectively, then *ri* = |*zi*|<sup>2</sup>, i.e., *ri* is equal to the modulus squared of *zi*. Moreover, each selected component of the real vector corresponds to a node of the current solution, acting as a flag for that node. Sorting the real vector therefore reorders the corresponding nodes of the tour, which drives the exploration effort to another point in the search space. This shaking procedure's pseudocode is given in Algorithm 3.

**Algorithm 3** Shake\_2.
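The register-based reordering described above can be simulated in ordinary Python. This sketch is an assumption-laden stand-in for the listing, not the authors' Fortran code: Gaussian components stand in for the simulated qubit amplitudes, and sorting by *ri* reorders the tour.

```python
import random

def shake_2(tour):
    """Shake_2 (sketch): a simulated n-qubit register yields a normalized
    complex vector; r_i = |z_i|^2 maps each node to [0, 1], and sorting
    by r_i drags the associated nodes into a new tour order."""
    n = len(tour)
    z = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    norm = sum(abs(v) ** 2 for v in z) ** 0.5
    z = [v / norm for v in z]         # now sum of |z_i|^2 equals 1
    r = [abs(v) ** 2 for v in z]      # real vector with components in [0, 1]
    # Sorting the real vector reorders the corresponding solution nodes.
    return [node for _, node in sorted(zip(r, tour))]
```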


**Shake\_3**. This shaking method is a shuffle method, in which the nodes of the tour are placed in a random order at each iteration. The method is shown in Algorithm 4.


*2.3. GVNS Schemes*

For each perturbation method a GVNS scheme is formed. Specifically, GVNS\_1 contains Shake\_1 as its shaking method, GVNS\_2 uses Shake\_2 to diversify solutions, and GVNS\_3 adopts the Shake\_3 perturbation method. In all GVNS schemes the initial solution is produced by the Nearest Neighbor heuristic. The pseudocode for the three GVNS approaches is given in Algorithms 5–7, respectively.


**Algorithm 6** GVNS\_2.

```
1: procedure GVNS_2(S, n, max_time)
2:   while time ≤ max_time do
3:     S* ← Shake_2(S, n)
4:     S' ← pVND(S*)
5:     if f(S') < f(S) then
6:       S ← S'
7:     end if
8:   end while
9:   return S
10: end procedure
```

#### **Algorithm 7** GVNS\_3.

```
1: procedure GVNS_3(S, max_time)
2:   while time ≤ max_time do
3:     S* ← Shake_3(S)
4:     S' ← pVND(S*)
5:     if f(S') < f(S) then
6:       S ← S'
7:     end if
8:   end while
9:   return S
10: end procedure
```

It should be mentioned that in all three GVNS methods the neighborhoods are searched with both the First and Best improvement search strategies.
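A compact Python skeleton of the overall GVNS loop, under stated assumptions: Shake\_3 (a shuffle) is used as the perturbation, a plain 2-Opt descent stands in for the pipe-VND improvement phase, and all names and the `max_time` handling are illustrative rather than the authors' implementation.

```python
import random
import time

def gvns(tour, dist, max_time=1.0):
    """GVNS skeleton (sketch): shake, improve, accept on improvement,
    repeat until the time limit expires."""
    def cost(t):
        return sum(dist[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

    def two_opt_descent(t):  # stands in for pipe-VND (an assumption)
        improved = True
        while improved:
            improved = False
            for i in range(1, len(t) - 1):
                for j in range(i + 1, len(t)):
                    cand = t[:i] + t[i:j + 1][::-1] + t[j + 1:]
                    if cost(cand) < cost(t):
                        t, improved = cand, True
        return t

    best = two_opt_descent(tour)
    start = time.monotonic()
    while time.monotonic() - start < max_time:
        shaken = best[:]
        random.shuffle(shaken)               # Shake_3: pure random reordering
        candidate = two_opt_descent(shaken)  # improvement phase
        if cost(candidate) < cost(best):
            best = candidate                 # move-or-not: accept improvements
    return best
```

Swapping `random.shuffle` for a Shake\_1- or Shake\_2-style perturbation yields the other two schemes without touching the rest of the loop.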

#### **3. Computational Analysis**

#### *3.1. Computing Environment & Parameter Settings*

The aforementioned methods were implemented in Fortran and executed on a PC running 64-bit Windows on an Intel Core i7-6700 CPU at 2.6 GHz with 16 GB RAM. The code was compiled using the Intel Fortran 64 compiler XE with the optimization option /O3. The maximum execution time limit was set to *max*\_*time* = 60 s and *max*\_*time* = 120 s, and the maximum number of shaking iterations in Shake\_1 was experimentally set to *kmax* = 12.

#### *3.2. Computational Results*

This section presents the computational results of the different perturbation strategies for each class of experiments. The GVNS schemes with the different shaking methods were applied to TSPLIB instances. The TSP is one of the most famous NP-hard combinatorial optimization problems. Solving the TSP means finding the minimum cost route such that the salesman starts from a particular node and returns to it after visiting every other node exactly once.

All experiments were executed 5 times and the average value of all runs was computed. Tables 1 and 2 contain the aggregated experimental results. Specifically, they show the benchmark name, the optimal value (zOpt), the cost of the three GVNS schemes (GVNS\_1, GVNS\_2 and GVNS\_3) and their corresponding gaps from the optimal value. The results depicted in Table 1 were obtained using the First Improvement search strategy and an execution time limit of 1 min, whereas the results in Table 2 were obtained using the Best Improvement search strategy and the same execution time of 1 min. As mentioned earlier, the cost of each GVNS scheme is the average of 5 runs for each problem. The reported gap is computed as follows: given the outcome *x*, its gap from the optimal value *OV* is 100 × (*x* − *OV*)/*OV*.
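The gap formula can be written as a one-line helper. The ftv47 values used below are taken from the discussion of Table 1 (GVNS\_1 cost 1821 against the optimum 1778).

```python
def gap(x, optimal):
    """Percentage gap of outcome x from the optimal value OV:
    gap = 100 * (x - OV) / OV."""
    return 100.0 * (x - optimal) / optimal

# e.g., for benchmark ftv47: gap(1821, 1778) is roughly 2.42 (percent)
```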


**Table 1.** The results shown here were obtained using the First Improvement search strategy and an execution time limit of 1 min [14].

The results in Table 1 indicate a definite pattern, namely that both GVNS\_1 and GVNS\_2 outperform GVNS\_3 in most cases. Recall that GVNS\_3 is a shuffle perturbation strategy. For example, consider benchmark ftv47; we can see that the cost of GVNS\_1 is 1821, the cost of GVNS\_2 is 1992 and of GVNS\_3 is 2101. GVNS\_1 and GVNS\_2 both outperform GVNS\_3 and are also relatively close to the optimal value (1778).


**Table 2.** The results depicted here were obtained using the Best Improvement search strategy and an execution time limit of 1 min [14].

In addition, the results provided in Table 2 lead to the same conclusion, namely that both GVNS\_1 and GVNS\_2 produce better results than GVNS\_3 in most cases. Table 3 shows the results of the GVNS schemes within a 2 min run time limit and with First Improvement as the search strategy. The results of Table 3 indicate that GVNS\_1 outperforms GVNS\_2 and GVNS\_3 in most cases. However, the main difference from the results of Tables 1 and 2 is that now the behavior of GVNS\_2 is closer to that of the GVNS\_3 solution approach. Table 4 shows the results achieved by the GVNS schemes within a 2 min run time limit and the Best Improvement search strategy. The results of Table 4 corroborate the conclusion of Tables 1 and 2 that both GVNS\_1 and GVNS\_2 outperform GVNS\_3 in most cases.


**Table 3.** Results using the First Improvement search strategy and an execution time limit of 2 min [14].


**Table 4.** Results using the Best Improvement search strategy and an execution time limit of 2 min [14].

Tables 5–8 contain the aggregated experimental results for symmetric TSP instances. Specifically, they contain the benchmark name, the optimal value (zOpt), the cost of the three GVNS variations (GVNS\_1, GVNS\_2 and GVNS\_3) and their corresponding gaps from the optimal value. Table 5 depicts GVNS using the First Improvement search strategy and an execution time limit of 1 min. Table 6 shows GVNS using the Best Improvement search strategy and an execution time of 1 min. Tables 7 and 8 report runs of 120 s with the First and Best improvement search strategies respectively. The reported results show that GVNS\_3 produces better solutions than the other two algorithms, and that GVNS\_1 and GVNS\_2 do not differ significantly.

**Table 5.** Results using the First Improvement search strategy and an execution time limit of 1 min.








**Table 6.** Results using the Best Improvement search strategy and an execution time limit of 1 min.








**Table 7.** Results using the First Improvement search strategy and an execution time limit of 2 min.

In some cases, GVNS\_3 with a time limit of 1 min produced better results than with the 2 min time limit when solving sTSP instances. This might happen due to the use of a purely random diversification method such as the shuffle operator. More precisely, when GVNS\_3 runs longer, the shuffle operator is also executed more times and can consequently shift the search into less promising areas. Thus, the search may be trapped in low quality local optima.


**Table 8.** Results using the Best Improvement search strategy and an execution time limit of 2 min.



**Average 253944.1262 272756.94 277586.89 273027.78 3.55 4.49 4.24**



Tables 9–12 contain the aggregated experimental results for the National TSP instances. They contain the benchmark name, the optimal value (zOpt), the cost of the three GVNS algorithms (GVNS\_1, GVNS\_2 and GVNS\_3) and the solution gaps from the optimal value for each method. Table 9 depicts GVNS using the First Improvement search strategy and an execution time limit of 1 min. Table 10 shows GVNS using the Best Improvement search strategy and an execution time limit of 1 min. Tables 11 and 12 provide the results achieved by the developed GVNS algorithms with a 2 min time limit under the First and Best improvement search strategies respectively. A notable observation is that, in general, there are no significant differences between the methods. However, we notice that with First Improvement, for both 1 and 2 min, GVNS\_3 outperforms the GVNS\_1 and GVNS\_2 implementations. Conversely, with Best Improvement all three methods generally perform better than with First Improvement, though there is no significant difference between them.


**Table 9.** Results using the First Improvement search strategy and an execution time limit of 1 min.


**Table 10.** Results using the Best Improvement search strategy and an execution time limit of 1 min.

**Table 11.** Results using the First Improvement search strategy and an execution time limit of 2 min.



**Table 12.** Results using the Best Improvement search strategy and an execution time limit of 2 min.

#### **4. Statistical Analysis on Computational Results**

This section presents the statistical tests which were performed on the computational results to evaluate the performance of the three different GVNS methods. Different statistical tests apply to different data structures. In particular, statistical analysis methods can be divided into parametric and non-parametric tests: the former assume normally distributed variables, whereas the latter concern non-normal variables [20].

Initially, the application of a normality test showed that the numerical data do not follow the normal distribution. Consequently, we applied the Kruskal–Wallis test to check for a statistically significant difference between the methods; in this test, a *p*-value below 0.05 indicates a statistically significant difference.
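For illustration, the Kruskal–Wallis H statistic can be computed with a short rank-based routine. This is a textbook sketch, not the statistical package the authors used; ties receive average ranks, and no tie correction is applied.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (sketch): rank all observations jointly,
    then H = 12 / (N (N + 1)) * sum(R_g^2 / n_g) - 3 (N + 1),
    where R_g is the rank sum of group g and N the total sample size."""
    pooled = sorted((v, g) for g, grp in enumerate(groups) for v in grp)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1                       # find the extent of a tie block
        avg_rank = (i + 1 + j) / 2.0     # average rank over ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n_total * (n_total + 1)) * sum(
        r * r / len(grp) for r, grp in zip(rank_sums, groups)
    ) - 3.0 * (n_total + 1)
```

The resulting H is compared against a chi-squared distribution with (number of groups − 1) degrees of freedom to obtain the *p*-value.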

At this point, it should be mentioned that the statistical analysis was performed on median values, to eliminate the potential influence of extreme values on the averages. Also, the aTSP analysis is taken from our previous work [14] and is presented here to build a more comprehensive view.

#### *4.1. Statistical Analysis on aTSP Results*

In Table 13 we can see that the *p*-value is less than 0.05 in all cases, which means that there is a statistically significant difference between the three methods. For further examination, pairwise Wilcoxon tests were performed.


**Table 13.** Kruskal–Wallis rank sum test.

In accordance with the pairwise tests summarized in Table 14, it is clear that GVNS\_1 differs significantly from the other two schemes in all cases. GVNS\_2 and GVNS\_3 perform equivalently using the First Improvement search strategy, independently of the execution time limit, while using the Best Improvement strategy they show significant differences with both time limits [14].


**Table 14.** Pairwise comparisons using the Wilcoxon signed rank test.

Following this Kruskal–Wallis analysis, four box plots are illustrated in Figures 1a,b and 2a,b. Each one depicts either the First or the Best Improvement strategy, for the one-minute and two-minute runs.

Moreover, the box plots corresponding to the previous statistical summary confirm that GVNS\_1 produces much better results than the other two algorithms in all cases for the aTSP, while GVNS\_2 outperforms GVNS\_3, also in all cases. Also, by checking the medians in the box plots, it can be seen that GVNS\_2 performs significantly better with Best Improvement, as it produces results that are "close" to those of GVNS\_1.

**Figure 1.** Statistical test for aTSP (1/2). (**a**) Statistical test box plots for aTSP 1min FI; (**b**) Statistical test box plots for aTSP 1min BI.

**Figure 2.** Statistical test for aTSP (2/2). (**a**) Statistical test box plots for aTSP 2mins FI; (**b**) Statistical test box plots for aTSP 2mins BI.

#### *4.2. Statistical Analysis on sTSP*

In this subsection the statistical analysis on the results achieved by the three GVNS schemes on sTSP instances is provided.

According to the values in Table 15, statistically significant differences exist only when using the Best Improvement search strategy within a 2 min execution time limit.

**Table 15.** Kruskal–Wallis rank sum test.


More specifically, the values in Table 16 highlight a difference between GVNS\_1 and the other two GVNS algorithms. In particular, based on the following box plots, GVNS\_1 is slightly better than GVNS\_2 and GVNS\_3, which perform almost equivalently.

**Table 16.** Pairwise comparisons using the Wilcoxon signed rank test.


Following this Kruskal–Wallis analysis, four box plots are illustrated in Figures 3a,b and 4a,b. Each one depicts either the First or the Best Improvement strategy, for the one-minute and two-minute runs.

**Figure 3.** Statistical test for sTSP (1/2). (**a**) Statistical test box plots for sTSP 1min FI; (**b**) Statistical test box plots for sTSP 1min BI.

**Figure 4.** Statistical test for sTSP (2/2). (**a**) Statistical test box plots for sTSP 2min FI; (**b**) Statistical test box plots for sTSP 2min BI.

#### *4.3. Statistical Analysis on nTSP*

In the case of the National TSP instances, the values given in Table 17 show that there is no statistically significant difference between the three methods.


**Table 17.** Kruskal–Wallis rank sum test.

As a result of this Kruskal–Wallis analysis, four box plots are illustrated in Figures 5a,b and 6a,b. Each one depicts either the First or the Best Improvement strategy, for the one-minute and two-minute runs. By checking the median value of each method, it is clear that the three algorithms perform equivalently.

**Figure 5.** Statistical test for nTSP (1/2). (**a**) Statistical test box plots for nTSP 1min FI; (**b**) Statistical test box plots for nTSP 1min BI.

**Figure 6.** Statistical test for nTSP (2/2). (**a**) Statistical test box plots for nTSP 2min FI; (**b**) Statistical test box plots for nTSP 2min BI.

#### **5. Comparison with Recent Similar Works**

In their recent work, Halim et al. presented an extensive analysis of the performance of different heuristic and metaheuristic algorithms on some TSPLib instances [21]. In addition, Hore et al. [22] proposed an improved hybrid VNS algorithm for solving TSP instances. In this section, a comparison is performed between our proposed GVNS schemes (GVNS\_1 and GVNS\_2) and the algorithms presented in [21,22].

Table 18 shows the comparison of our GVNS\_1 and GVNS\_2, with the Best Improvement search strategy and a time limit of 120 s, against all metaheuristic solution approaches presented in the work of Halim et al. [21]. The results show that our methods produce better results than the previously mentioned metaheuristics, except for instance rat195.tsp, in which the GA and the TS perform better than GVNS\_2.



Table 19 shows the comparison of our GVNS\_1 and GVNS\_2, with the Best Improvement search strategy and a time limit of 120 s, against the results obtained by the hybrid VNS in the second mentioned work [22]. It is clearly observed that our methods outperform the hybrid VNS. More specifically, GVNS\_1 outperforms the hybrid VNS algorithm in all tested instances, while GVNS\_2 produces better results than the approach of Hore et al. in seven out of 10 problem instances.

**Table 19.** Comparisons between GVNS\_1 and GVNS\_2 with BI for 2 min with the recent VNS work of Hore et al. [22].


Table 20 shows the abbreviations regarding the metaheuristic algorithms presented in [21].

#### **Table 20.** Abbreviations.


#### **6. Conclusions and Future Work**

A thorough and comprehensive performance analysis of the efficiency of three GVNS algorithms has been presented in this work. The main difference between them lies in the perturbation strategy used. Our comparative performance analysis involves problems modelled as asymmetric and symmetric TSP instances, which are solved using GVNS. The well-known TSP benchmarks from the TSPLIB are used for extensive experimental testing. We believe that the experimental results are quite conclusive, as they confirm that for asymmetric TSP instances GVNS\_1 outperforms the other two methods, while GVNS\_2 consistently provides better solutions than GVNS\_3 in all cases. At the same time, the perturbation strategy does not seem to critically affect the solutions of symmetric TSP instances.

It is also worth emphasizing that, even though the improvement stage of the GVNS schemes has been given limited attention, the present paper shows that the tested solution approaches can be quite promising. This is justified by the produced solutions, which are significantly better than those of other metaheuristic approaches in the recent literature.

The investigation of alternative neighborhood structures and neighborhood change movements in VND under the GVNS framework could be a possible direction for future work. In the same vein, one might study modifications or combinations of more than one perturbation strategy during the perturbation phase, in an effort to determine whether solutions even closer to the optimum can be achieved, especially on large asymmetric benchmarks.

**Author Contributions:** All of the authors have contributed extensively to this work. C.P. and P.K. conceived the initial algorithm and worked on the first prototypes. P.K. and C.P. thoroughly analyzed the current literature gathering all the necessary material. T.A. assisted C.P. in designing the methods used in the main part. T.A. was responsible for supervising the construction of this work. C.P. was responsible for the interlinking between the theoretic model and the actual application. C.P. contributed to the appropriate typing of the formal definitions and the math used in the paper.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Adaptive Neuro-Fuzzy Inference System Based Grading of Basmati Rice Grains Using Image Processing Technique**

#### **Dipankar Mandal**

Centre of Studies in Resources Engineering, Indian Institute of Technology Bombay, Mumbai, India; dipankar\_mandal@iitb.ac.in; Tel.: +91-22-2576-4654

Received: 10 April 2018; Accepted: 15 June 2018; Published: 20 June 2018

**Abstract:** Grading of rice intends to discriminate broken from whole grain in a sample. Standard techniques for image-based rice grading using advanced statistical methods seldom take into account the domain knowledge associated with the data. In the context of a high-product-value basmati rice with an image-based grading process, one ought to consider the physical properties of the grain and the associated knowledge. In the present work, a model of quality grade testing and identification is proposed using digital image processing and a knowledge-based adaptive neuro-fuzzy inference system (ANFIS). The rationale behind adopting a grading system based on fuzzy rules relies on the capability of ANFIS to simulate the behaviour of an expert in the characterization of rice grain using the physical properties of rice grains. The rice kernels are characterized with the help of morphological descriptors and geometric features derived from sample images of milled basmati rice. The predictive capability of the proposed technique has been tested on a sufficient number of training and test images of basmati rice grain. The proposed method outperforms standard machine learning techniques, viz. the support vector machine (SVM) and K-nearest neighbour (KNN), with a promising >98.5% classification accuracy for broken and whole grain. The milling efficiency is also assessed using the ratio between head rice and broken rice percentage; it is 77.27% for the test sample. The overall results of the adopted methodology are promising in terms of classification accuracy and efficiency.

**Keywords:** ANFIS; basmati rice; image processing; grading; quality assessment; fuzzy inference system

#### **1. Introduction**

India is the leading exporter of basmati rice (*Oryza sativa*) to the global market. The annual export of basmati rice was ∼4.05 million MT during the year 2015–2016 [1]. Basmati rice is a long slender grain variety of aromatic rice grown in the Indian sub-continent. It has a high product value due to its flavour, delicate texture, delightful fragrance, and softness. A basmati grain is considerably longer than it is wide, and it grows even longer during cooking [2].

The high-value basmati rice grain has to go through several operations (such as threshing, handling, de-husking, milling and whitening of grains), starting from the harvesting of paddy to the final production of rice grains, by means of several mechanical systems [3]. Thereby, the grade of the produced grain depends exclusively on the adjustment of the equipment used in the various mentioned operations. In general, in a rice milling facility, the quality grade of the product is monitored by visual inspection by experienced quality control personnel at 2–3 h intervals, rather than through a continuous operational measurement method. This means that the operator, based on his experience and proficiency with the processing machinery, assesses the quality grade of the product by mere visual inspection of rice grain appearance and makes the required adjustments, which is time-consuming and subjective.

Alternatively, image-based grading approaches are nondestructive and rapid. With suitable statistical or machine learning techniques, image-based approaches have proven to be an efficient way to achieve automatic inspection and grade evaluation [4,5]. During the last decade, researchers have investigated several techniques based on machine vision and digital image processing for quality assessment of rice kernels, which are fast, non-destructive, accurate, and cost-effective compared to traditional methods [6,7]. Image-based approaches have been applied to characterize rice grains using morphological, colour, and textural features, either individually or in combination. However, in order to find a suitable rice grain descriptor and to improve the classification accuracy, it is imperative that key features be selected that describe the grain exactly.

The marketing value of rice depends on its physical qualities after processing. The major axis/minor axis ratio of the rice kernel is reported as a key feature of basmati rice which might identify the adulteration of basmati rice with other rice varieties [8]. Vaingankar and Kulkarni [8] reported a major axis/minor axis ratio of 3.92–4.09 as an indicator of the pure Basmati-370 variety. In the context of grading of rice, the percentage of head (whole) grain and broken grain is the most important factor determining the milling efficiency. To date, several studies have reported improvements in the classification accuracy of rice grain using machine vision and image processing techniques [9–12].
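As a toy illustration of such a morphological criterion, the 3.92–4.09 axis-ratio band reported in [8] could be checked as follows; the function name and the millimetre inputs are hypothetical, only the threshold band comes from the cited range.

```python
def is_pure_basmati_370(major_axis_mm, minor_axis_mm, low=3.92, high=4.09):
    """Sketch: flag a kernel whose major/minor axis ratio falls within the
    3.92-4.09 band reported for pure Basmati-370 [8]."""
    ratio = major_axis_mm / minor_axis_mm
    return low <= ratio <= high

# e.g., a 7.0 mm x 1.75 mm kernel has ratio 4.0, inside the band
```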

Pazoki et al. [13] illustrated that determining grain variety using a simple mathematical function is difficult because grains have various morphologies, colours, and textures. Alternatively, artificial neural network (ANN) techniques have been applied for grain quality control and discrimination of grain variety. Chen et al. [14] proposed a methodology to identify five corn varieties with an accuracy of more than 90% using pattern recognition techniques and neural networks. In a comparative analysis [11] of artificial neural networks, support vector machines, decision trees and Bayesian networks for classifying milled rice samples, the ANN produced the highest classification accuracy. Despite promising results, several problems might arise with ANN training and design [15–18]. The assignment of the weights in the ANN structure is one of the most important problems [19], with a direct effect on performance. Moreover, the uncertainty in ANN output has proven to be a challenging issue [20].

ANN optimization is limited in practice by a finite training sample and is accomplished through a stochastic training process, which gives the ANN the ability to avoid being trapped at local minima. On the other hand, this stochastic process makes ANN optimization empirical and subject to strong influence from statistical variations [20]. To overcome these issues, a hybrid approach with a fuzzy system has been introduced. Fuzzy systems are quite good at handling uncertainties and can interpret the relationship between input and output by producing rules. Therefore, to increase their joint capability, hybridization of ANNs and fuzzy systems is usually implemented. Sabanci et al. [21] used ANFIS for wheat grain classification with 99.46% classification accuracy. Zareiforoush et al. [22] coupled a fuzzy inference system (FIS) with an image processing technique in a decision-support system for qualitative grading of milled rice. The reported results show 89.8% agreement between the grading results obtained from the FIS system and those determined by the experts.

In the context of rice grading, some head grains are easily misclassified as broken grain due to the resemblance in a single feature (e.g., eccentricity) extracted from digital images, and the two classes are not deterministically separable. In such cases, a fuzzy approach [23] is more convenient for discrimination of head and broken rice grains [24]. Shiddiq et al. [25] investigated the rice milling degree using colour features (RGB) with an adaptive network-based fuzzy inference model; an error of 3.55–5.62% in milling degree was reported with this process. However, morphological features can improve the classification efficiency. In this context, a grading system based on fuzzy rules can simulate the behaviour of an expert in the evaluation and classification of the physical properties of rice grains. In the present work, the predictive capacity of ANFIS is assessed for quality testing and identification of basmati rice based on morphological features. This has been motivated by the fact that well-documented knowledge regarding rice kernels is usually available [26,27]. This knowledge has been incorporated in forming the rules of the fuzzy inference system used to identify head and broken rice kernels. Moreover, the proposed ANFIS-based classification method provides a rationale grounded in the knowledge of morphological features and their underlying dependencies with rice grains. Subsequently, the milling efficiency is estimated from the broken grain and head grain ratio for test images. Furthermore, the proposed classification method is compared with standard data-driven machine learning techniques, viz., the support vector machine (SVM) and k-nearest neighbour (KNN) classifiers.

The rest of the manuscript is organized as follows: Section 2 briefly describes the materials and methods. Section 3 explains the results in detail, and finally the work is succinctly summarized and concluded in Section 4.

#### **2. Materials and Methods**

The schematic workflow of the proposed ANFIS based grading of basmati rice grains is given in this section. Subsequently, the steps involved in the technique are detailed in the following subsections.

#### *2.1. Sample Preparation*

Basmati rice grains of different grades (Pusa basmati 1121), were used in this study. This variety of basmati rice possesses extra-long slender milled grains (∼9.0 mm), pleasant aroma, and an exceptionally high cooked kernel elongation ratio of ∼2.5 [28]. It is the most common Basmati rice variety in rice grain quality research for developing standard Basmati quality traits. The grades of the rice sample are based on the percentage of broken rice content (e.g., 5% broken rice).

The rice grain samples can be imaged either heaped together or in a scattered arrangement. The arrangement matters because the grain characterization method relies on visual attributes of grains obtained by image processing. Heaped grain images suffer from certain disadvantages: the boundaries of the grains are not completely visible and distinguishable, and when grains overlap each other, noise can appear more prominent than the boundaries [21]. It is therefore desirable to take samples in a scattered arrangement on a black background (which improves the image contrast in the scattered configuration), while ensuring that not too many rice grains cluster together.

#### *2.2. Imaging System and Image Acquisition*

A schematic diagram of the image acquisition system is shown in Figure 1. Typically, a vision system consists of an illumination component to illuminate the sample under test; a camera to acquire the image; and a personal computer or microprocessor system to provide disk storage of the images and computational capability.

In an image acquisition system, choosing the right lighting strategy remains difficult because there is no specific guideline for integrating lighting into a machine vision application. Some rules of thumb exist [29], which suggest that fluorescent bulbs are inherently more efficient and produce more intense illumination at specific wavelengths. Moreover, fluorescent light provides a more even, uniform dispersion of light from the emitting surface [30]. A 25–40 kHz ring-shaped compact fluorescent light is used for illumination in this setup. Apart from the illuminant, the surface geometry is also important in the illumination design. In the present work, a diffuse illuminator is used to produce uniform lighting, as shown in Figure 1. Such a setup is extremely useful for visual inspection of grains and oilseeds, with a success rate approaching 100% [31].

The system was enclosed in a dark chamber to prevent exposure to stray light. The digital camera (Canon EOS 1300D) was set to manual mode for image acquisition, with an ISO of 400 and a shutter speed of 1/30 s. Images of the basmati rice samples were taken on a black background, with different orientations and qualities, in a scattered arrangement for training and testing. The camera aperture and focus were adjusted to make individual grain boundaries distinguishable in the picture. A total of 40 images were acquired and saved in raw format, with no adjustment (e.g., white balance) applied.

**Figure 1.** Schematic diagram of image acquisition system equipped with a camera, illumination source and geometry, and connected PC.

#### *2.3. Image Processing*

The image processing was carried out in MATLAB to acquire the feature data. First, the acquired RGB image was separated into single R, G, and B channels in grayscale mode. Each channel grey image was then converted to a binary image using Otsu's method [32], which thresholds the grayscale image based on clustering of its grey levels. The grey levels are normalized from 0–255 to 0–1, and the threshold is optimally determined between 0 and 1. The method splits the normalized image into two classes with grey levels lower or higher than the threshold: each pixel is set to white (1) if its grey level exceeds the threshold, and to black (0) otherwise. Segmentation then identifies the objects within the image using an edge detection algorithm, which finds the boundary of each individual object and labels its centre for further processing. Eventually, each grain's position is fixed and tagged through the segmentation process.
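As a rough illustration of the thresholding step, Otsu's between-class-variance search can be sketched in a few lines of NumPy (a simplified stand-in for MATLAB's `graythresh`; the 256-bin histogram and exhaustive search are assumptions of this sketch, not the paper's implementation):

```python
import numpy as np

def otsu_threshold(gray):
    """Threshold in [0, 1] maximizing between-class variance (Otsu),
    for a grayscale image already normalized to [0, 1]."""
    hist, edges = np.histogram(gray.ravel(), bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()                  # grey-level probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 (background) weight
    mu = np.cumsum(p * centers)            # cumulative mean
    mu_total = mu[-1]
    best_t, best_var = 0.0, -1.0
    for k in range(1, 255):
        w1 = 1.0 - w0[k]
        if w0[k] == 0 or w1 == 0:
            continue
        mu0 = mu[k] / w0[k]                # class means on either side of k
        mu1 = (mu_total - mu[k]) / w1
        var_between = w0[k] * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

# Binarize: pixels above the threshold become white (1), the rest black (0).
```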

The noise in each image is then eliminated with a morphological process: a morphological opening operation [33] with a 'disk'-type structuring element, applied via the 'imopen' function, followed by hole filling and border clearing. Opening generally smooths the objects of an image, eliminates thin protrusions, and removes any object that cannot completely accommodate the structuring element; it thus removes the noise from the image. Each remaining object was then labelled and the objects were counted.

Feature extraction involves the retrieval of quantitative information from the segmented images. Here, parameters such as eccentricity, equivalent diameter, area, perimeter, major axis length, and minor axis length were extracted with the 'regionprops' function to differentiate head grains from broken grains. The aspect ratio, i.e., major axis length/minor axis length, was computed from the object feature set. The schematic processing chain is shown in Figure 2. The same operations were applied to the other training images of the basmati rice samples and to the test images. The feature dataset for training and testing was created from the object properties together with the object class (whole grain = 1, broken grain = 0).
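The ellipse-based features can be derived from the second central moments of a binary object mask; the pure-NumPy sketch below mirrors (as an assumption) the conventions of MATLAB's `regionprops`, including the 1/12 pixel-extent correction commonly used for the ellipse axis lengths:

```python
import numpy as np

def shape_features(mask):
    """Morphological features of a single-object binary mask
    (minimal sketch of regionprops-style ellipse fitting)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    x0, y0 = xs.mean(), ys.mean()
    # Central second moments of the pixel coordinates (+1/12 pixel extent).
    mxx = ((xs - x0) ** 2).mean() + 1 / 12
    myy = ((ys - y0) ** 2).mean() + 1 / 12
    mxy = ((xs - x0) * (ys - y0)).mean()
    # Eigenvalues of the covariance matrix give the ellipse axes.
    common = np.sqrt((mxx - myy) ** 2 + 4 * mxy ** 2)
    major = 4 * np.sqrt((mxx + myy + common) / 2)
    minor = 4 * np.sqrt((mxx + myy - common) / 2)
    return {
        "area": area,
        "equiv_diameter": np.sqrt(4 * area / np.pi),
        "major_axis": major,
        "minor_axis": minor,
        "aspect_ratio": major / minor,
        "eccentricity": np.sqrt(1 - (minor / major) ** 2),
    }
```

An elongated object yields a high aspect ratio and an eccentricity close to 1, which is exactly the signature used below to separate head grains from broken ones.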

**Figure 2.** Schematic workflow for image processing.

#### *2.4. Features of Basmati Rice*

The physical properties of the rice kernel are characterised using morphological features. Rice grains are generally modelled as ellipses, as shown in Figure 3. Based on this assumption, the following morphological features, as reported by Shantaiya and Ansari [34], are considered in the present study. These features are also reported in [35] to be optimal morphological features for rice kernel identification, selected using the standard sequential forward selection (SFS) algorithm. The features are:


These parameters were extracted with image processing techniques as discussed in Section 2.3.

**Figure 3.** Rice grain properties.

#### *2.5. Fuzzy Inference System*

A Fuzzy Inference System (FIS) incorporates the knowledge of an expert while designing a model between input and output parameters. In an FIS, the input-output relations are defined by a set of fuzzy rules, e.g., IF-THEN rules [36]. Fuzzy reasoning involves assigning membership functions to the input and output parameters, and a rule base that maps fuzzy values of the inputs to fuzzy values of the outputs. The accurate selection of these membership functions and rules is one of the most critical stages of an FIS and requires expert knowledge. An FIS consists of three segments, viz. fuzzification, the inference engine, and the defuzzifier. Fuzzification converts the numeric value of an input to a linguistic variable with the help of membership functions, e.g., triangular, trapezoidal, Gaussian, etc. The inference engine maps the degrees of membership of the input variables (the premises) to the fuzzy consequent parts using the fuzzy IF-THEN rules; each conditional statement contains a premise, the if-part, and a conclusion, the then-part [37]. The knowledge embodied in a fuzzy inference system comprises a group of several such rules [38]. Finally, the defuzzifier converts the fuzzy output into a crisp value. The fuzzy inference engine is the core of the FIS and can represent the human decision-making process [36].

The Takagi-Sugeno (T-S) FIS has fuzzy inputs and a crisp output, which is either a linear combination of the inputs or a constant. This method is computationally efficient and well suited to optimization and adaptive techniques [39]. The T-S method provides a systematic approach for generating fuzzy rules from a given input-output data set (Figure 4). It uses the membership functions of the input variables to produce the consequent (then-part) via rules of the form *IF x is A AND y is B THEN z is f*(*x*, *y*), where *x*, *y*, and *z* are linguistic variables, *A* and *B* are fuzzy sets, and *f*(*x*, *y*) is a mathematical function [39]. The T-S FIS uses a weighted average to generate the crisp output.
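The weighted-average defuzzification of a zero-order T-S system can be sketched as follows. This is a Python illustration; the two rules on (eccentricity, aspect ratio) and the generalized-bell parameters are hypothetical, not the paper's fitted values:

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership function ('gbellmf')."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def sugeno_zero_order(x, y, rules):
    """Zero-order Takagi-Sugeno inference: each rule is (mf_x, mf_y, z)
    with a constant consequent z; the crisp output is the
    firing-strength-weighted average."""
    w = np.array([mf_x(x) * mf_y(y) for mf_x, mf_y, _ in rules])  # AND = product
    z = np.array([zc for _, _, zc in rules])
    return float(np.dot(w, z) / w.sum())

# Hypothetical two-rule system; parameters are illustrative only.
high_ecc = lambda e: gbell(e, 0.1, 2, 0.95)
low_ecc  = lambda e: gbell(e, 0.1, 2, 0.55)
high_ar  = lambda r: gbell(r, 0.5, 2, 3.0)
low_ar   = lambda r: gbell(r, 0.5, 2, 1.5)
rules = [(high_ecc, high_ar, 1.0),   # IF ecc High AND a/b High THEN whole (1)
         (low_ecc,  low_ar,  0.0)]   # IF ecc Low  AND a/b Low  THEN broken (0)
```

A slender grain (high eccentricity, high aspect ratio) then defuzzifies to a value near 1, a broken grain to a value near 0.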

In the present work, a zero-order T-S system was adopted for grading basmati rice. The membership functions were taken as 'gbellmf' [40] for all inputs, viz. eccentricity, equivalent diameter, perimeter, and the major axis length/minor axis length ratio (*a*/*b*); the outputs were taken as constants (whole grain: output = 1; broken grain: output = 0). An example of a fuzzy membership function is shown in Figure 5. The rules of the T-S method were taken as follows:


**Figure 4.** Takagi-Sugeno type FIS system with premise and consequent part.

**Figure 5.** Membership function for Eccentricity feature.

#### *2.6. Adaptive Neuro-Fuzzy Inference System (ANFIS)*

The adaptive neural network-based fuzzy inference system (ANFIS) is a hybrid system. It combines the self-adaptability and learning competence of a neural network with the ability of a fuzzy system to take into account the uncertainty and imprecision prevailing in real systems. The neuro-fuzzy modeling approach extracts a model from numerical data representing the dynamic behaviour of the system. In the ANFIS method, an initial fuzzy model is generated with rules extracted from the input-output data; a neural network is then used to tune the rules of the initial fuzzy model to produce the final ANFIS model. The formulation and discussion of the ANFIS architecture can be found in [41,42]. Unlike a plain ANN, ANFIS has a higher capability to adapt to its environment during learning. It can therefore automatically adjust the membership function parameters and reduce the error rate in determining the fuzzy rules.

The ANFIS architecture shown in Figure 6 is an adaptive network that uses a supervised learning algorithm and functions like the Takagi-Sugeno fuzzy inference model discussed in Section 2.5. Assume the architecture has two inputs, *x* and *y*, and one output, *f*. Two If-Then rules of the Takagi–Sugeno type are used, as follows:


where *A*<sub>1</sub>, *A*<sub>2</sub> and *B*<sub>1</sub>, *B*<sub>2</sub> are the membership functions of the inputs *x* and *y* (premises), while *p*<sub>1</sub>, *q*<sub>1</sub>, *r*<sub>1</sub> and *p*<sub>2</sub>, *q*<sub>2</sub>, *r*<sub>2</sub> are linear parameters in the consequent part of the Takagi–Sugeno fuzzy inference model. The ANFIS architecture has five layers. The first and fourth layers contain adaptive nodes, while the other layers contain fixed nodes.

**Figure 6.** ANFIS architecture with input, hidden and output layer [43].

**Layer 1:** Each node in this layer is adaptive, with parameters of a membership function. The output of each node is the degree of membership of the input given by its membership function. The membership function used in this study is the generalized bell function (cf. Section 2.5):

$$\mu_{A_i}(x) = \frac{1}{1 + \left| \frac{x - c}{a} \right|^{2b}} \tag{1}$$

where *μ*<sub>*Ai*</sub> is the degree of membership of the fuzzy set *A<sub>i</sub>*, and {*a*, *b*, *c*} are the parameters of the membership function, which change its shape as shown in Figure 5.

**Layer 2:** Each node in this layer is fixed (non-adaptive) and is represented by the product operator Π. Each node outputs the firing strength *w<sub>i</sub>* of one rule.

**Layer 3:** Each node in this layer is fixed (non-adaptive) and labeled *N*. It normalizes the firing strengths as *w̄<sub>i</sub>* = *w<sub>i</sub>*/∑<sub>*j*</sub> *w<sub>j</sub>*.

**Layer 4:** Each node in this layer is adaptive, with node function *w̄<sub>i</sub> f<sub>i</sub>* = *w̄<sub>i</sub>*(*p<sub>i</sub>x* + *q<sub>i</sub>y* + *r<sub>i</sub>*). The parameters in this layer are referred to as consequent parameters.

**Layer 5:** The single node in this layer is fixed (non-adaptive) and computes the overall output as the summation of all incoming signals from the previous layer, ∑<sub>*i*</sub> *w̄<sub>i</sub> f<sub>i</sub>*.
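The five layers can be traced in a short forward-pass sketch for the two-rule Takagi–Sugeno model described above (Python illustration; all parameter values passed in are assumptions for demonstration, not trained values):

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership function."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """One forward pass through the five ANFIS layers (two rules)."""
    # Layer 1: degrees of membership (adaptive premise parameters).
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3), (a4, b4, c4) = premise
    mu_A1, mu_A2 = gbell(x, a1, b1, c1), gbell(x, a2, b2, c2)
    mu_B1, mu_B2 = gbell(y, a3, b3, c3), gbell(y, a4, b4, c4)
    # Layer 2: firing strengths (product node).
    w1, w2 = mu_A1 * mu_B1, mu_A2 * mu_B2
    # Layer 3: normalized firing strengths.
    w1n, w2n = w1 / (w1 + w2), w2 / (w1 + w2)
    # Layer 4: rule outputs f_i = p_i*x + q_i*y + r_i (adaptive consequent).
    (p1, q1, r1), (p2, q2, r2) = consequent
    f1 = p1 * x + q1 * y + r1
    f2 = p2 * x + q2 * y + r2
    # Layer 5: overall output = sum of the weighted rule outputs.
    return w1n * f1 + w2n * f2
```

During training, only the Layer-1 premise parameters and the Layer-4 consequent parameters are adjusted; Layers 2, 3, and 5 stay fixed.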

In the ANFIS architecture, the first and fourth layers contain the parameters tuned during the training phase. The number of training epochs, the membership functions, and the number of fuzzy rules should be selected carefully when designing an ANFIS model [41], as poor choices may lead the system to overfit the data. The tuning uses a hybrid algorithm combining the least-squares method and gradient descent, with a mean-square-error criterion [44]. The training error and test error are assessed on the rice grain sample data. A threshold was applied to the ANFIS output to obtain a binary class of whole grain or broken grain.

#### *2.7. Design of Experiment*

The features (cf. Section 2.4) were obtained from all the basmati rice sample images (40 in total) as discussed in the preceding sections. Features obtained from 30 images were used to train the ANFIS, and features derived from the remaining 10 images were used for testing. The classification accuracy was then estimated from the actual classes of the test data and the ANFIS outputs. Furthermore, the performance of ANFIS was compared with the standard classification techniques of support vector machines (SVM) and k-nearest neighbours (KNN). The optimal margin and kernel parameters (radial basis function) of the soft-margin SVM classifier were determined using grid search with 10-fold cross-validation. Each image contains ∼15 rice kernels (objects). For illustration, only one image is shown with all the image processing steps in the results section (cf. Section 3).
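For reference, the KNN baseline amounts to only a few lines; the sketch below is a minimal Euclidean-distance, majority-vote version (the paper's actual implementation and chosen *k* are not specified, so this is purely illustrative):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbour classifier: Euclidean distance,
    majority vote over the k closest training samples."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)     # distances to all samples
        nearest = y_train[np.argsort(d)[:k]]        # labels of k nearest
        preds.append(np.bincount(nearest).argmax()) # majority vote
    return np.array(preds)
```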

Furthermore, the effectiveness of the morphological features was analyzed for a segmented test image (for each object). Histograms of the individual features provide a rationale by relating the features, derived with the image-based technique, to the physical properties associated with each grain class. In addition to the classification accuracy assessment, the milling efficiency (the ratio of broken grains to whole grains) was assessed using the broken-grain and whole-grain objects derived from a test image.

#### **3. Results and Discussion**

This section explains the results of the ANFIS-based rice grading technique. The morphological features are generated in Section 3.1 using the image processing steps described in Section 2.3. In Section 3.2, the ANFIS classification results based on these features are analyzed and compared with the standard classification methods. A histogram analysis of the features and the estimation of the milling efficiency follow in Sections 3.3 and 3.4.

#### *3.1. Image Processing Outputs*

The sample images were processed as described in Section 2.3. An example of the processed images is shown in Figure 7. These images were used for the feature extraction that feeds the fuzzy model generation. The images contain both whole and broken rice grains. After segmentation and the morphological operations, each object of an individual image is labelled, as shown in Figure 7h–l. The features associated with each object (e.g., Object 1 in Figure 7h) are stored with the associated object ID in a tuple. This data set is then used for training and testing the classifier.

#### *3.2. Classification Performance*

The features extracted from the processed training images were used to build the ANFIS model. The training error over 20 epochs of ANFIS training is shown in Figure 8. After thresholding the ANFIS output (threshold at 2), the error on the test data was zero. The training and testing results, shown in Figure 8, are further analyzed in Table 1.

From Table 1 it is observed that for all the objects (rice grains in the test image) the thresholded ANFIS output class matches the actual class (i.e., 0 or 1). The classification accuracy of ANFIS on these test objects is therefore 100%.

It is important to note that the proposed approach categorizes basmati rice grains into two classes: (1) whole grain and (2) broken grain. One may ask what happens with imperfect grains, i.e., imperfect whole grains. For an imperfect grain, the morphological parameters differ from those of a whole grain. The present study considers a binary classification ('whole grain' = 1, 'broken grain and/or others' = 0), with the membership functions taken as 'High' and 'Low' (cf. Section 2.5). For an imperfect grain, the eccentricity is 'Low' and (*a*/*b*) is 'Low', while the perimeter might be 'Medium'; since the membership functions only distinguish 'High' and 'Low', such a grain is classified as 'broken grain'. An imperfect grain is thus in any case separated from 'whole grain' and does not affect the milling efficiency. Figure 9 illustrates a similar scenario analyzed from a test image, clearly showing the dissimilarity of the grain physical parameters for the three grain types.

**Figure 7.** Image processing outputs. (**a**) Raw sample image (RGB); (**b**) Red channel image; (**c**) Green channel image; (**d**) Blue channel image; (**e**) Binary image; (**f**) Image after morphological opening; (**g**) Hole filled and border cleared image; (**h**) Labelled objects in training sample1 image; (**i**) Labelled objects in training sample 2 image; (**j**) Labelled objects in training sample 3 image; (**k**) Labelled objects in training sample 4 image; (**l**) Labelled objects in test sample image.

**Figure 8.** Training and test error for sample images. During training, after 11 epochs the error significantly reduces. In test error plot, the blue dots are actual output, and the red stars are ANFIS output corresponding to each object.

**Figure 9.** Variations in features in between different grains. (**a**) Whole rice grain; (**b**) Broken rice grain; (**c**) Imperfect rice grain.



From the experimental results, it is observed that ANFIS performs satisfactorily in evaluating the percentage of broken rice, with an overall accuracy of >98.5%. In comparison, SVM and KNN are less favourable, with accuracies <95% over the 10 test images, as shown in the spider plot in Figure 10.

**Figure 10.** Classification performance of ANFIS, SVM and KNN for 10 test image samples. I1-I10 represents the test image IDs.

#### *3.3. Histogram of Features in Testing Images*

Histograms of the features extracted from the test image objects are shown in Figure 11. All the features are positively skewed, reflecting a positive relation with basmati grain size (head grain). An eccentricity >0.9 is found for more than 16 rice objects in the test image, and an aspect ratio >2.4 for more than 17 objects. Similar results hold for the area, perimeter, and major axis length. The histograms in Figure 11 also exemplify the head-grain and broken-grain percentages in the test images.

**Figure 11.** Histogram of eccentricity, aspect ratio, equivalent diameter, area, perimeter and major axis length of test image objects.

#### *3.4. Milling Efficiency*

From the test image output, the number of whole grains = 17 out of 22 objects, whereas the number of broken-grain objects = 5 out of 22. The percentage of whole grains is therefore (17/22) × 100 = 77.27%. This milling efficiency (*η*) was found for a specific roller characteristic (rpm = 2000, gap between rollers ∼0.5 mm) of the milling machine [45]. The milling efficiencies were evaluated in the same way for the 10 test images, and their average was determined as ∼77.3% with a standard deviation (*σ*) of 1.5. The milling efficiency (*η<sub>avg</sub>*) derived using the image-based method was in accordance with the manual calculation result (*η* = 76.87%).
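The efficiency computation itself is a one-liner; a minimal sketch using the object counts reported above:

```python
def milling_efficiency(whole_grains, total_objects):
    """Milling efficiency as the percentage of whole (head) grains
    among all grain objects detected in an image."""
    return whole_grains / total_objects * 100.0

eta = milling_efficiency(17, 22)   # 17 whole grains of 22 objects -> 77.27%
```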

#### **4. Summary and Conclusions**

Standard classification techniques seldom incorporate domain knowledge associated with the physical properties of rice grain. Hence, a knowledge-based neuro-fuzzy classification technique was proposed in this study for grading basmati rice grains. The technique classifies whole and broken grains using physical properties of the grain derived with an image-based method.

A novel image processing technique was adopted for morphological feature extraction, followed by ANFIS model building for the discrimination of grains. The classification accuracy for the test images was >98.6%, which is comparatively better than the standard SVM and KNN classifiers (<95%). Moreover, the proposed ANFIS classification results appear more reliable than those obtained from SVM and KNN, since ANFIS deals with uncertainty in the output. It is also important to note that the standard techniques do not take into account any domain knowledge associated with the grain's physical properties, which, as analyzed in this study, are essential for grain grading and characterization.

The milling efficiency was estimated in terms of the percentage of whole (head) grains and was 77.27% for the test sample. Colour- and texture-based quality grading was, however, not considered during feature selection. The overall results of the adopted methodology are promising in terms of classification accuracy and efficiency. This work can be extended to the discrimination of different rice varieties for determining the degree of adulteration, and to real-time image-processing-based grading. It could further be extended to the optimization of milling machine characteristics (roller speed, gap between rollers) for process automation using micro-controller units.

**Funding:** This research received no external funding.

**Acknowledgments:** The author would like to thank Rice Processing Unit, Department of Postharvest Engineering, Bidhan Chandra Krishi Viswavidyalaya, India for providing sample images of basmati rice.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Fuzzy Inference System for Unsupervised Deblurring of Motion Blur in Electron Beam Calibration**

#### **Salaheddin Hosseinzadeh**

Department of Engineering, Design and Physical Sciences, Brunel University London, Kingston Lane, Uxbridge, London UB8 3PH, UK; Salaheddin.Hosseinzadeh@gmail.com

Received: 18 October 2018; Accepted: 25 November 2018; Published: 4 December 2018

**Abstract:** This paper presents a novel method of restoring electron beam (EB) measurements that are degraded by linear motion blur. It is based on a fuzzy inference system (FIS) and a Wiener inverse filter, together providing autonomy, reliability, flexibility, and real-time execution. The system is capable of restoring highly degraded signals without requiring exact knowledge of the EB probe size. The FIS comprises three inputs, eight fuzzy rules, and one output. It monitors the restoration results, grades their validity, and chooses the one that yields the better grade; these grades are produced autonomously by analyzing the results of a Wiener inverse filter. To benchmark the performance of the system, ground truth signals obtained using an 18 μm wire probe were compared with the restorations. The main aims are therefore to: (a) provide unsupervised deblurring for device-independent EB measurement; (b) improve the reliability of the process; and (c) apply deblurring without knowing the probe size. These facilitate the deployment and manufacturing of EB probes as well as accurate, probe-independent EB characterization. The findings also make it easier to restore previously collected EB measurements for which the probe sizes were neither known nor recorded.

**Keywords:** fuzzy inference system; fuzzy logics; linear motion blur; fuzzy deblurring; electron beam calibration; signal and image processing

#### **1. Introduction**

The main goal of fuzzy systems is to define and control sophisticated processes by incorporating and taking advantage of human knowledge and experience. Nowadays, fuzzy logic is widely used in industry for applications ranging from cameras to cement kilns, trains, and vacuum cleaners [1]. Furthermore, deblurring techniques have versatile applications and are performed either in the spatial [2] or the frequency domain [3–5]. Hosseinzadeh [6] modeled the electron beam (EB) measurement process as a linear motion blur and evaluated three well-established deblurring techniques for EB restoration, using a Wiener inverse filter and blind Richardson-Lucy deconvolution to restore the EB distribution and correct the measurements through deblurring. A simple motion blur is formulated in Equation (1).

$$g(x) = (f * h)(x) + n(x) = \int f(\tau)\, h(x - \tau)\, d\tau + n(x), \tag{1}$$

where, in the spatial domain, *f*, *g*, *h*, and *n* are the ground truth signal (EB distribution) of length *Lf*, the degraded signal (measurement from the probe), the point spread function (PSF) of length *Lh*, and the noise, respectively. Their frequency-domain counterparts are represented by the uppercase letters *F*, *G*, and *H*. In the case of electron beam measurements, the ground truth signal is the distribution of the EB and the degraded signal is the measurement acquired from the probe. The electron absorption of a slit or wire probe of size *Lh* is modeled with a PSF kernel [6].

The linear motion blur point spread function has two distinct characteristics: motion direction and length (*L*) [7]. The PSF is known to have harmonically spaced vanishing magnitudes in the frequency domain owing to its limited length in the spatial domain [8]. Several approaches exist to estimate *Lh*, such as the log power spectrum, cepstrum, bispectrum, and pitch detection algorithms. In image deblurring, it is usually assumed that the frequency spectrum of *F* is smooth and does not contain vanishing frequencies, hence any vanishing frequencies in *G* are associated with *H* [9,10]. However, this assumption usually does not hold for EB measurements, especially where *Lf* is of the same order as *Lh*. This similarity makes it complicated to distinguish between *Lf* and *Lh* and therefore compromises the deblurring process through incorrect detection of null frequencies. Such an erroneous deblurring process is likely to produce an incorrect but convincing result, notably when *f* and *h* have remarkable cross-correlation. This ambiguity is likely in EB measurements because: (a) *f* and *h* are usually of the same order of magnitude and have relatively high cross-correlation; and (b) *Lf* can be inconsistent. In Reference [6], prior knowledge of *Lh* is used to estimate the position of the null frequency of *h* from the spectrum analysis of *G*. Hosseinzadeh limited the spectrum of *G* to ±15% of the nominal *Lh* by applying a window to its log-power spectrum, thereby ignoring vanishing frequencies outside this interval; the algorithm is available in Reference [11]. This strategy relies on knowing *Lh*, and is therefore a good approach only when *Lh* is known accurately. It has a few limitations owing to the varying nature of *Lf* during the calibration and measurement process: the beam's vanishing frequency (or its harmonics) can be located within the applied window and cause a false detection.
Furthermore, if the inaccuracy of *Lh* exceeds 15%, the null frequency of *h* is excluded by the window, resulting in an erroneous restoration. In addition, any inaccuracy of more than ±15% cannot be compensated.
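The harmonically spaced nulls of a boxcar (linear motion blur) PSF are easy to verify numerically: in an *N*-point DFT they fall at bin indices *k* = *mN*/*L*. A short NumPy sketch (the values of *N* and *L* are assumptions for illustration):

```python
import numpy as np

N, L = 1024, 16                 # N-point DFT, boxcar PSF of length L samples
h = np.zeros(N)
h[:L] = 1.0 / L                 # normalized linear-motion-blur PSF
mag = np.abs(np.fft.rfft(h))    # magnitude spectrum |H|
# Nulls of the boxcar spectrum fall at bin indices k = m * N / L.
null_bins = [m * N // L for m in range(1, 4)]
```

Detecting these null bins in the measured spectrum |*G*| is precisely how *Lh* is estimated, and it fails when a beam null falls among them.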

One solution to effectively address this uncertainty is to use fuzzy systems. Fuzzy inference systems are widely used to address instrumental uncertainties. A comprehensive review and explanation of fuzzy inference systems are provided in Reference [12].

It is known that a wrong estimation of *Lh* can lead to drastic noise-like errors in the restorations [13]. Furthermore, utilizing deblurring techniques for industrial purposes requires real-time, reliable, and unsupervised methods. To satisfy these requirements, this article proposes a Wiener filter that is monitored by a fuzzy inference system. A Wiener filter is selected for its simplicity, real-time execution, and superior performance in the restoration of linear motion blur [6]. The fuzzy inference system deals with the uncertainty of the deconvolution by monitoring the entire restoration process. The FIS comprises three crisp inputs: the PSF length or probe size (*Lh*) deviation, the attenuation of the vanishing frequencies, and the deconvolution residue.

The probe size deviation is an optional input based on prior rough knowledge of *Lh*. If *Lh* is roughly known, it serves as a reference point from which the PSF length deviation is calculated; therefore, unlike Reference [6], prior knowledge of *Lh* does not limit the inaccuracy compensation to ±15%. It is demonstrated in Reference [6] that the spatial domain of *h* has a sharper transition compared to the EB distribution *f*, owing to the semi-Gaussian distribution of *f*. Therefore, the vanishing frequencies of *h* are expected to show higher attenuation (lower magnitude) compared to those of *f*. Hence, the normalized magnitudes of the detected null frequencies in *G* are the second crisp input to the fuzzy inference system. The last input of the system is the quantified deblurring artifacts introduced during the restoration of *f* from *g*; the restored beam distributions are denoted *f̂*. These residual artifacts are inevitable and increase as *h* deviates from its mathematical definition. The extraction of residues from *f̂* is explained in Section 2. The output of the fuzzy system (*E<sub>i</sub>*) is defuzzified to represent the quality of the restorations and is generated based on the fuzzy rules explained in the next section.

The rest of this paper is arranged as follows: Section 2 illustrates the details of the FIS implementation, including specifying the crisp inputs and fuzzifying them, defining the membership functions, formulating the fuzzy sets, identifying the fuzzy rules, and making an inference to generate the output. Section 3 presents the practical results of the proposed method and the ability of the system to distinguish the correct deblurring results. The values of the membership function parameters are provided, and a comparison is made between implementing the fuzzy system with and without knowledge of the probe size (*Lh*).

#### **2. Modeling and Implementation**

As mentioned, when *Lf* and *Lh* are similar it is difficult to discriminate between their null frequencies just by looking at *G*. This introduces an uncertainty and makes it hard to decide which null frequency belongs to the probe (*H*), because null frequencies can belong to either the beam (*F*) or the probe (*H*). To address the uncertainty of unsupervised *Lh* detection, all the null frequencies in *G* are identified and only the first two nulls with the lowest frequencies are extracted, avoiding the harmonics. This implies that a maximum of two null frequencies (*ω<sub>i=1,2</sub>*) are extracted from *G*. There are three possibilities based on the number of extracted null frequencies: (a) If no null frequency is detected (*Lh* ≪ *Lf*), the motion blur effect is negligible and deconvolution is not necessary; (b) if a single null frequency is detected (*Lh* ≫ *Lf*), the deconvolution can progress without involving the fuzzy system, as the null frequency belongs to *Lh*; (c) if two null frequencies are extracted (*ω*<sub>1</sub>, *ω*<sub>2</sub>), two deconvolutions are performed, each adjusted to its corresponding *L̂<sub>i=1,2</sub>* (*L̂<sub>i=1,2</sub>* ∝ 1/*ω<sub>i=1,2</sub>*), because both *ω*<sub>1</sub> and *ω*<sub>2</sub> could belong to an *h* of a different size.

The FIS is defined with three merits to grade the deblurrings. The deblurrings are performed by two individual Wiener filters that use *L̂*<sub>1</sub> and *L̂*<sub>2</sub>, resulting in *f̂*<sub>1</sub> and *f̂*<sub>2</sub> respectively. The fuzzy system produces a single crisp output, the deconvolution grade (*E<sub>i=1,2</sub>*), for each restoration. The restoration process that produces the higher *E<sub>i</sub>* is then chosen as the correct process, with its corresponding *L̂<sub>i</sub>* being the correct probe size (*Lh* ← *L̂<sub>i</sub>*). A single-layer (non-hierarchical) fuzzy inference system with three inputs and a single output is designed to evaluate the overall deblurring process. The inputs are the PSF length deviation, the null frequency magnitude, and the residue; the deconvolution grade is the only output. The inputs and the output are explained in detail as follows.

#### *2.1. PSF Length Deviation*

As mentioned, *ω*<sub>1</sub> and *ω*<sub>2</sub> are extracted to accurately adjust *Lh* during the restoration process. Given rough prior knowledge of the probe size (*Lh*) and the sizes (*L̂<sub>i</sub>*) estimated from *G*, the PSF length deviation can be defined as the distance between the expected and the estimated sizes (|*Lh* − *L̂<sub>i</sub>*|). This deviation converges to zero if the estimation is close to the prior knowledge, whereas it increases as *L̂<sub>i</sub>* deviates from *Lh*. Two fuzzy sets (*A<sub>far</sub>* and *A<sub>close</sub>*), with membership functions *μ′<sub>m</sub>* and *μ<sub>m</sub>* respectively, are defined to account for the probe inaccuracy and assign a degree of membership to each *L̂<sub>i</sub>* based on its deviation from *Lh*. The membership functions are defined by the polynomial-Z (zmf) and polynomial-S (smf) functions. The *A<sub>close</sub>* fuzzy set and its membership function are formulated in Equation (2). A thorough evaluation of fuzzy membership functions is provided in Reference [14].

$$A_{close} = \left\{ \left(\hat{L}_i,\ \mu_{m(\hat{L}_i)}\right) \;\middle|\; 0 < \hat{L}_i < \infty,\ m(\hat{L}_i) = \frac{2|L_h - \hat{L}_i|}{\hat{L}_i} \right\},$$

$$\mu_m = \begin{cases} 1 & m \le a_m \\ 1 - 2\left(\frac{m - a_m}{c_m - a_m}\right)^2 & a_m < m \le \frac{a_m + c_m}{2} \\ 2\left(\frac{m - c_m}{c_m - a_m}\right)^2 & \frac{a_m + c_m}{2} < m \le c_m \\ 0 & m > c_m \end{cases} \tag{2}$$

where *am* and *cm* are the membership function parameters that are found heuristically through analysis of several measurements.
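A minimal NumPy sketch of the zmf/smf pair used for the membership functions, assuming the standard polynomial Z/S spline forms matching the piecewise definition in Equation (2):

```python
import numpy as np

def zmf(x, a, c):
    """Polynomial Z-shaped membership function (as in Equation (2)):
    1 for x <= a, a smooth quadratic spline in between, 0 for x > c."""
    x = np.asarray(x, dtype=float)
    mid = (a + c) / 2.0
    return np.where(x <= a, 1.0,
           np.where(x <= mid, 1.0 - 2.0 * ((x - a) / (c - a)) ** 2,
           np.where(x <= c, 2.0 * ((x - c) / (c - a)) ** 2, 0.0)))

def smf(x, a, c):
    """Polynomial S-shaped membership function: the mirror image of zmf."""
    return 1.0 - zmf(x, a, c)
```

With `a = 0` and `c = 1`, `zmf` falls smoothly from 1 to 0 over [0, 1] and passes through 0.5 at the midpoint; `smf` rises over the same interval.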

#### *2.2. Null Frequency Magnitude*

The second input of the fuzzy system is the magnitude of the extracted null frequencies. This is extracted from the normalized log-power spectrum of *g* and has a dynamic range of 0 to 1 dB, as demonstrated in Figure 1.

**Figure 1.** Normalized power spectrum of *G* exhibits *ω*<sup>1</sup> and *ω*<sup>2</sup> at 0.12 and 0.165 MHz frequencies with their harmonics at higher frequencies.

As explained, *h* is most likely to have rapid spatial transitions compared to *f*. This implies that *H* is likely to produce the nulls with the highest attenuation in *G* (nulls with the lowest magnitude). As a result, two fuzzy sets (*B<sub>high</sub>* and *B<sub>Low</sub>*), with membership functions *μ<sub>o</sub>* and *μ′<sub>o</sub>*, are defined to assign a higher membership value to the nulls with more attenuation (lower magnitude), whereas a lower degree of membership is assigned to less attenuated (higher magnitude) nulls. The membership functions are defined with zmf and smf. *B<sub>Low</sub>* is formulated in Equation (3), where *G<sub>N</sub>* is the normalized frequency spectrum of the degraded signal, and *a<sub>o</sub>* and *c<sub>o</sub>* are the membership function parameters. The *A<sub>far</sub>* membership function definition is similar to that of *B<sub>Low</sub>*, as they are both defined by smf.

$$B_{Low} = \left\{ \left(\hat{L}_i,\ \mu'_{o(\hat{L}_i)}\right) \;\middle|\; 0 < \hat{L}_i < \infty,\ o(\hat{L}_i) = \log\left(|G_N(\hat{L}_i)| + 1\right) \right\},$$

$$\mu'_o = \begin{cases} 0 & o \le a_o \\ 2\left(\frac{o - a_o}{c_o - a_o}\right)^2 & a_o < o \le \frac{a_o + c_o}{2} \\ 1 - 2\left(\frac{o - c_o}{c_o - a_o}\right)^2 & \frac{a_o + c_o}{2} < o \le c_o \\ 1 & o > c_o \end{cases} \tag{3}$$

#### *2.3. Deconvolution Artifact Residues*

Deconvolutions are performed using the Wiener inverse filtering process in Equation (4).

$$\hat{F}_i = \frac{1}{H_i(\omega)}\left[\frac{|H_i(\omega)|^2}{|H_i(\omega)|^2 + \frac{1}{SNR(\omega)}}\right] G(\omega), \tag{4}$$

where, in the frequency domain, *F̂<sub>i</sub>* is the restored ground truth signal and *SNR* is the signal-to-noise ratio. After the deconvolutions, *f̂<sub>i=1,2</sub>* have shorter lengths in the spatial domain compared to *g*. First, *g* and both of the restorations (*f̂<sub>i=1,2</sub>*) are normalized between [−1, 0]; *g<sub>N</sub>* is then shifted so that its minimum matches the minimum of each *f̂<sub>i</sub>* in the spatial domain, obtaining *ĝ<sub>N</sub>*. Finally, every restoration residue (*r<sub>i</sub>*) is quantified as in Equation (5).

$$r_i = \frac{4}{\int g(x)\,dx} \cdot \int \hat{f}_i(\tau)\,d\tau, \qquad \{\tau \in x \mid \hat{g}_N(\tau) > -0.05\}, \tag{5}$$

The deconvolution process using both of the extracted PSFs, with their corresponding residues, is shown in Figure 2. The deconvolution was performed with a Wiener inverse filter, where *h* is formulated in Equation (6).

$$h_{\hat{L}_i}(x) = \begin{cases} 1 & |x| < \frac{\hat{L}_i}{2} \\ 0 & \text{otherwise} \end{cases} \tag{6}$$
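The Wiener inverse filter of Equation (4) with the rectangular PSF of Equation (6) can be sketched as below. This is a simplified sketch: the function name is hypothetical and a constant `snr` is assumed, whereas the paper uses a frequency-dependent SNR(ω).

```python
import numpy as np

def wiener_deconvolve(g, L_hat, dx, snr=100.0):
    """Wiener inverse filter (Equation (4)) with a rectangular PSF of
    length L_hat (Equation (6)). `snr` is a constant SNR assumed here
    for simplicity."""
    n = len(g)
    # rectangular PSF h(x) = 1 for |x| < L_hat/2, sampled on the grid of g
    x = (np.arange(n) - n // 2) * dx
    h = (np.abs(x) < L_hat / 2).astype(float)
    h /= h.sum()                           # unit-area blur kernel
    H = np.fft.fft(np.fft.ifftshift(h))    # zero-phase transfer function
    G = np.fft.fft(g)
    # (1/H) * |H|^2 / (|H|^2 + 1/SNR) * G  ==  conj(H) / (|H|^2 + 1/SNR) * G
    F_hat = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * G
    return np.real(np.fft.ifft(F_hat))
```

Blurring a pulse with a rectangular kernel of length `L_hat` and calling `wiener_deconvolve` with the same length recovers the pulse up to the regularization error introduced by the 1/SNR term.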

**Figure 2.** Deconvolution of the degraded pulse in Figure 1, using two different point spread function (PSF) lengths and demonstration of their deconvolution residues.

Two fuzzy sets (*C<sub>low</sub>* and *C<sub>high</sub>*) are defined with membership functions *μ<sub>r</sub>* and *μ′<sub>r</sub>*, using zmf and smf respectively, where the overall shape of the functions is determined by *a<sub>r</sub>* and *c<sub>r</sub>*. These functions are designed to assign a higher degree of membership to the *L̂<sub>i</sub>* that produces a smaller residue after restoration.

#### *2.4. Deconvolution Grade*

All the combinations of the aforementioned inputs are used to form eight if–then rule statements with different weights. These statements, with their corresponding weights, are provided in Table 1. The fuzzy AND operator is then used for the implication of the fuzzy consequences.


**Table 1.** Rule base formation criteria.

The rule weight is added to scale the consequences and account for the certainty of the rules. The consequence is the restoration quality, with two fuzzy sets (*D<sub>good</sub>* and *D<sub>bad</sub>*) and membership functions *μ<sub>q</sub>* and *μ′<sub>q</sub>*, defined by smf and zmf respectively. Aggregation of the rules is performed using the Zadeh T-norm, and defuzzification is carried out by the mean of maximum (MoM) method [15]. The resulting crisp values are the deconvolution grades (*E<sub>i=1,2</sub>*); there is one grade for each deconvolution. In other words, for each *f̂<sub>i=1,2</sub>* that is deblurred by its corresponding *h<sub>L̂<sub>i=1,2</sub></sub>*, there is an overall grade of restoration (*E<sub>i=1,2</sub>*). According to the definition of the consequence membership functions, a greater value of *E<sub>i</sub>* represents a better restoration, whereas a lower value of *E<sub>i</sub>* represents a possibly erroneous process (*E<sub>i</sub>* ranges from 0 to 1). With this proposed system, if by mistake *Lf* is used instead of *Lh* in the formation of *h* (Equation (6)), then the resulting *E<sub>i</sub>* will be lower. Overall, *E*<sub>1</sub> and *E*<sub>2</sub> are used comparatively to select the best restoration between *f̂*<sub>1</sub> and *f̂*<sub>2</sub>, which emerge from restoring a degraded sample (*g*). The proposed system and its overall restoration process are demonstrated in Figure 3.
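As a purely illustrative sketch of this evaluation pipeline (min implication, max aggregation, MoM defuzzification), the grade computation can be laid out as below. The ramp consequents, the rule weights and the good/bad assignment are hypothetical stand-ins: the actual rule base and tuned weights are those of Table 1 and are not reproduced here.

```python
import numpy as np

def deconvolution_grade(mu_close, mu_low_atten, mu_low_res):
    """Illustrative Mamdani-style computation of a grade E_i in [0, 1].

    Inputs: membership degrees of one restoration in the 'favourable'
    fuzzy set of each merit (PSF deviation close, null attenuation high,
    residue low). Consequents, weights and rule outcomes are hypothetical.
    """
    y = np.linspace(0.0, 1.0, 101)   # universe of the restoration quality
    mu_good = y                      # stand-in for the smf consequent D_good
    mu_bad = 1.0 - y                 # stand-in for the zmf consequent D_bad
    agg = np.zeros_like(y)
    # the eight rules: every combination of favourable (1) / unfavourable (0)
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                fire = min(mu_close if a else 1.0 - mu_close,
                           mu_low_atten if b else 1.0 - mu_low_atten,
                           mu_low_res if c else 1.0 - mu_low_res)  # Zadeh AND
                weight = 0.5 + (a + b + c) / 6.0   # illustrative rule weight
                cons = mu_good if (a + b + c) >= 2 else mu_bad
                agg = np.maximum(agg, np.minimum(weight * fire, cons))
    # mean-of-maximum (MoM) defuzzification
    return float(np.mean(y[agg == agg.max()]))
```

Computing the grade for each restoration and keeping the one with the larger value reproduces the comparative use of *E*<sub>1</sub> and *E*<sub>2</sub> described above.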

**Figure 3.** Process diagram, *L*ˆ*<sup>i</sup>* connections to the fuzzy inference system (FIS) are optional.

#### **3. Practical Results**

#### *Membership Function Parameters*

The membership function parameters were investigated pragmatically by testing the explained algorithm on various degraded EB measurement samples. In all degraded measurements, *h* and *f* had approximately similar sizes, as a result of which *L̂*<sub>1</sub> ≈ *L̂*<sub>2</sub>. The membership functions were designed with smooth transitions to provide a general solution and more flexibility, except for the attenuation. To further discriminate between *E*<sub>1</sub> and *E*<sub>2</sub>, the attenuation membership function parameters were adjusted to place more emphasis on the interval of 0 to 0.3 dB. This intuitive definition was based on observing the magnitude of null frequencies in several degraded signals, where the attenuation of the null frequencies was always under 0.3 dB. The membership function parameters are presented in Table 2.

**Table 2.** Membership function definition details.


The membership functions of the attenuation (*B<sub>high</sub>* and *B<sub>Low</sub>*) and residue (*C<sub>low</sub>* and *C<sub>high</sub>*) fuzzy sets are depicted in Figure 4, according to their values in Table 2. The fuzzy sets of the PSF deviation and restoration quality were defined with membership functions similar to those of the residues.

**Figure 4.** Attenuation and deconvolution residue membership functions.

The analysis of a few of the samples is shown in Figures 5 and 6. For these EB measurements, the probe sizes (*Lh*) were known to be 1.00, 0.20, and 0.40 mm respectively. The crisp fuzzy inputs and deconvolution grades *E<sub>i</sub>* are also provided for every sample. The restoration that resulted in the higher *E<sub>i</sub>* was selected by the system as the correct solution, and its corresponding *L̂<sub>i</sub>* therefore represents the probe size (*L̂<sub>h</sub>* ← *L̂<sub>i</sub>*). To validate the proposed system against the ground truth signal (*f*) [6], both restorations (*f̂*<sub>1,2</sub>) were compared with the ground truth signal using cross-correlation. For the *f̂<sub>i</sub>* with the higher *E<sub>i</sub>*, the cross-correlation of *f̂<sub>i</sub>* and *f* also produced greater coefficients, supporting the accuracy and reliability of the system. As another benchmark, full width at half maximum (FWHM) analysis was used, as it is a popular measure in EB calibration. The FWHM of *f* and of the *f̂<sub>i</sub>* with the higher *E<sub>i</sub>* produced similar results, further confirming that the FIS had successfully identified the correct restoration process.

**Figure 5.** Null frequencies in the spectrum of the degraded pulse. Result of restoration with detected null frequencies, expected PSF length of 1 mm on the left and 0.2 mm on the right.

**Figure 6.** Null frequencies in the spectrum of the degraded pulse. Result of restoration with detected null frequencies, expected PSF length of 0.4 mm.

#### **4. Conclusions and Discussion**

The algorithm showed superior performance when rough prior knowledge of *Lh* was provided to the fuzzy inference system: Δ*E* = |*E*<sub>1</sub> − *E*<sub>2</sub>| was greater than 0.5, thereby clearly identifying and segregating the correct deconvolution process. The algorithm was also tested without the PSF knowledge, in which case Δ*E* was in the interval of 0.1 to 0.5, which was still enough to confidently separate the correct deconvolution process.

Figure 6 depicts a special case where *H* had a null frequency at *ω<sub>h</sub>* = 120 kHz with a normalized magnitude of 0.09 dB, whereas the *F* null was at *ω<sub>f</sub>* = 170 kHz with a magnitude of 0.02 dB, i.e., with four times higher attenuation. Although the magnitude of *ω<sub>f</sub>* was in its favor, its PSF deviation of 0.51 was not; the PSF deviation outweighed the low magnitude, and the correct restoration was successfully distinguished with a 14% separation in the deconvolution grades (|*E*<sub>1</sub> − *E*<sub>2</sub>| = 0.14). The high attenuation of *ω<sub>f</sub>* was most likely due to it being close to the second harmonic of *ω<sub>h</sub>*, so that it experienced further attenuation. Nevertheless, owing to the FIS implementation, the correct restoration process was identified. All the possible rules were considered for the implementation of this FIS, and its tuning was performed heuristically by an expert. However, clustering algorithms could be used to determine the optimum number of rules for an FIS with multiple inputs and membership functions. Furthermore, adaptive FISs could be used to automate the tuning and learning process in more complicated and complex scenarios.

**Funding:** This research received no external funding.

**Acknowledgments:** The author would like to thank A. Ferhati, A. Faghihi for their help and cooperation, C. Longman for reviewing the article and V. Jefimovs for his laboratory assistance. Many thanks to NSIRC, TWI Ltd. and Brunel University for providing the measurement facilities and research funds.

**Conflicts of Interest:** The author declares no conflicts of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Gaze-Guided Control of an Autonomous Mobile Robot Using Type-2 Fuzzy Logic**

**Mahmut Dirik 1,\*, Oscar Castillo <sup>2</sup> and Adnan Fatih Kocamaz <sup>1</sup>**


Received: 24 March 2019; Accepted: 16 April 2019; Published: 24 April 2019

**Abstract:** Motion control of mobile robots in a cluttered environment with obstacles is an important problem. It is unsatisfactory to control a robot's motion using traditional control algorithms in a complex environment in real time. Gaze tracking technology has brought an important perspective to this issue, as gaze-guided driving based on eye movements supplies a natural way to realize such tasks. This paper presents an intelligent vision-based gaze guided robot control (GGC) platform with a user–computer interface based on gaze tracking, which enables a user to control the motion of a mobile robot using eye gaze coordinates as inputs to the system. An overhead camera, an eye-tracking device, a differential drive mobile robot, and vision and interval type-2 fuzzy inference system (IT2FIS) tools are utilized. The methodology incorporates two basic behaviors: map generation and go-to-goal behavior. The go-to-goal behavior based on an IT2FIS handles uncertainties in data processing more smoothly and steadily, generating better performance. The algorithms are implemented in an indoor environment with obstacles present. Experiments and simulation results indicated that the intelligent vision-based GGC system can be successfully applied, and that the IT2FIS can successfully interpret operator intention and modulate speed and direction accordingly.

**Keywords:** eye gaze tracking; interval type-2 fuzzy logic; vision system; mobile robots; intelligent control

#### **1. Introduction**

Mobile robot motion control problems have attracted considerable interest from researchers. The environments where a robot moves may contain static or dynamic obstacles. In such an uncertain environment, without prior information about the robot, target and obstacles, the basic aim is to safely move the robot to the target point without colliding with obstacles. The problem is the design of a safe and efficient algorithm that will guide robots to the target. The control system of these robots includes a set of algorithms. Recently, various methodologies have been utilized for mobile robot navigation [1–3]. These are artificial potential fields (APF) [4–6], vision-based gaze guided control (GGC) [7–15], fuzzy logic control [12,16–18], vector field histogram (VFH) [19,20], rapidly-exploring random trees [21,22], and obstacle avoidance and path planning [23] algorithms. The fuzzy logic controller is one of the most efficient techniques and tends to be used for steering control in unpredictable environments [3,24,25]. Fuzzy logic can smoothly be adapted to handle various types of linear and nonlinear systems, like the human perception process under uncertainty [26]. Motion control of a mobile robot in an unstructured dynamic or static environment is usually characterized by the numerous uncertainties of real-world environments; exact and complete prior knowledge of these environments is not attainable. As the membership sets of type-2 fuzzy systems are themselves fuzzy, they offer a stronger solution than type-1 sets for representing and addressing kinds of uncertainty that type-1 fuzzy control cannot handle. For this reason, the type-2 fuzzy system [24,27], introduced by Zadeh [28,29], is preferred to handle uncertainties and increase the performance of the control navigation system. Type-2 fuzzy set theory has been further developed by Mendel and Karnik [24,30,31]. The theoretical background and computational methods of the IT2FIS and its design principles were developed for type reduction [32–35].

In this paper, a gaze guided robot control system based on eye gaze and a type-2 fuzzy controller is presented. It aims to determine the robot's direction based on the target location, to create wheel speeds based on gaze direction, and to demonstrate how gaze can be used for automatic control of a mobile robot: that is, to obtain a model of the robot moving towards objects as the user looks directly at the object of interest in the real world. The interaction methods are expected to benefit users with limited mobility in their daily tasks. The gaze direction amplitude and angle variables are applied for designing the motion control system's input. The input variables are image coordinates in the 2D image space of the scene monitored by the overhead camera. Eye movements such as looking up and down in *y* (eye height) coordinates and left and right in *x* (eye width) coordinates were mapped to mobile robot steering commands. The approximate width (X) and height (Y) of the eye were taken into account for the inputs. Gaze-based control provides an effective hands-free input for remote vehicle control, yet work done on gaze-based robot control is relatively scarce. Combining eye-based systems with other types of technology [36–40] can be useful for disabled and elderly people, especially patients with neuro-motor disabilities. With these technologies, they can remotely control a moving platform [11,12,41,42] in complex daily tasks [14,43]. A wheelchair is a good example [44]: its orientation commands are created by a wheelchair-mounted eye movement tracking system [45–47].

The proposed gaze guided robot control (GGC) system consists of an EV3 robotic platform, which is highly customizable and provides advanced LabVIEW programming features [48,49]. In this platform, two high-resolution cameras are used. The first camera is mounted on the moving robot platform for transmitting live video to the user screen. The second camera is used to identify the user's eye movement and translate the gaze direction to the host system, which calculates the robot's steering control using a soft computing technique. The goal is to direct the robot's movements to wherever the user is looking on the display screen. To make the robot move around in a cluttered environment under the visibility of an overhead camera, the work focuses on remote settings with a 2D gaze guided driving system providing commands such as forward, backward, left and right.

This paper is organized as follows. Section 2 presents the procedure of the gaze guided system, including the vision system for gaze tracking, the type-2 fuzzy control mechanism, and its application and experimental platform. The experimental design and results of the proposed methods are given in Section 3. Section 4 includes the discussion. Finally, Section 5 concludes the paper with recommendations for future work.

#### **2. Method of Gaze Guided System**

The intention of the system architecture is to achieve an effective integration of eye tracking technology with a practical GGC system. The overall concept of the gaze guided robot control system is demonstrated in Figure 1, which illustrates how the different physical elements are attached and communicate. The system architecture is divided into three parts. The first part (user side) comprises the user interfaces, which are related to the eye tracking subsystem and the translation of eye movements into robotic platform commands (1); the second part (host-IT2FIS) is data processing and command execution (2); the third part (robot side) is the robot moving environment under the visibility of an overhead camera (3). This framework includes four components: an eye tracking system which can track the user's eyes, an overhead camera system that supplies video feedback to the user, a wheeled mobile robot, and the host computer system, which is responsible for collecting the gaze data and translating it into robot motion commands. The mobile robot used in this platform has an ARM9-based processor running at 300 MHz with a Linux-based operating system; detailed properties can be viewed in [50]. A well-developed programming interface based on the LabVIEW programming language allows movement information, depending on the direction of the gaze, to be transmitted to the robotic platform via Bluetooth. The image acquisition camera [51] is able to provide live video with a resolution of 480 × 640 at 30 fps. The overhead camera height from the floor is approximately 2 m. The system software principally consists of two parts: vision detection (eye tracking, robot tracking) and the motion control algorithm. Subsections and details of the system are given below.

**Figure 1.** Interaction of the gaze-guided robot control system (1. User side, 2. Control system, 3. Robot side).

#### *2.1. Eye-Gaze Tracking GGC System*

The GGC control system architecture is shown in Figure 1. In the first block (1), a user sits in a chair and watches the live video from the overhead camera. The movement of the robot is monitored on this screen. Wherever the robot is required to move, the user looks at that side, and the visual attention on that side is extracted from the gaze data. To perform the robot navigation, the visual attention is converted from image space to 2D Cartesian space, and control commands are produced from this coordinate system. The point of gaze as the user observes the video frames is utilized as the robot motion control input. The direction and speed are regulated by the distance from the center point of the eye, as seen in Figure 2: the X-axis regulates steering and the Y-axis regulates robot speed.

**Figure 2.** Robot control input parameters range (the X-axis regulates steering and the Y-axis regulates speed).

A high-resolution webcam [51] is used to track where the user is looking, forming a video-based remote eye tracking system. A shape-adapted mean shift algorithm [52], with asymmetric and anisotropic kernels for object tracking, is utilized to process the image and calculate the point-of-gaze coordinates (see Figure 3).

**Figure 3.** Structure and functionality of real-time eye tracking algorithm.

A series of raw images is continuously captured in the image acquisition step (see Figure 4). The image handled in this step is important in order to get the relevant region of interest (ROI). The algorithms have been implemented using NI Vision Builder programming tools. The images are sent to the NI Vision system for machine vision and image processing applications. Image pre-processing makes it easier to extract the ROI; the analysis aims to remove uninteresting information so as to reduce the processing area. The original image is transformed into a grayscale, masked image, suppressing areas that are not needed for detecting an eye feature point. A morphological operation has been applied to increase the detection accuracy, and the shape-adapted mean shift algorithm provides angular as well as linear offsets of object shapes.

**Figure 4.** Image Analysis Steps.

This algorithm allows tracking eye templates with changing shapes and sizes through a built-in function of the NI Vision Assistant. The tracked template position is taken as the reference and its coordinates are taken into account. From the final image analysis, the eye coordinate system is obtained (see Figure 5).

**Figure 5.** Real-image analysis.

#### *2.2. Overhead Camera System-Robot Tracking*

In Figure 1 (3, robot side), the graphical representation of the overhead-camera-based robot tracking field is shown. The environment where the robot moves is characterized by static obstacles, and gaze-based collision-free motion control of the robot in such a static environment is important. The environment model is captured by an overhead camera and sent to the NI Vision system, and the mobile robot on the field is tracked by a standardized vision system. The center of the robot's moving area is taken as (0,0); the horizontal (x) and vertical (y) axes represent the robot direction and wheel velocity. The information about the robot motion control environment is received by the host computer, and the robot tracking algorithm is executed. The robot's position is continuously tracked and updated considering the information acquired from the sequentially captured images. The decision strategy is implemented in the host system to plan the robot motion and the corresponding velocity commands. The host computer regulates the rotational velocities of the wheels and sends these commands to the robot. An experiment image shown in Figure 6 was captured by the overhead camera; this image is sent to the NI Vision system and the user monitor.

The robot steering control command is produced by eye gaze movement, as explained before. After the eye gaze coordinates are defined, the output signal for the robot motor speeds is calculated using type-2 fuzzy control and sent via Bluetooth.

**Figure 6.** The graphical representation of the robot field.

#### *2.3. Interval Type2 Fuzzy Control System*

The design and theoretical basis of a type-2 fuzzy model for mobile robot motion control are presented in this section. This is intended to supply the basic ideas needed to explain the algorithm, using the gaze input variables and a rule base to determine the value of the system output. Type-2 fuzzy logic has been verified to be a powerful tool for controlling complex systems because of its robustness in controlling nonlinear systems with uncertain characteristics [33,53]. The concept of the type-2 fuzzy set was proposed by Zadeh [28,54] as an extension of type-1 fuzzy logic, able to model uncertainties in a much better way for control applications. The appearance of uncertainties in nonlinear system control is handled by using the highest and lowest values of a parameter, extending type-1 fuzzy sets (Figure 7a) to type-2 (Figure 7b). Uncertainty is a characteristic of information, which may be incomplete, inaccurate, undefined, inconsistent, and so on. These uncertainties are represented by a region called the footprint of uncertainty (FOU), a bounded region between the upper and lower type-1 membership functions.

**Figure 7.** (**a**) Type-1 membership function. (**b**) Type-2 membership function.

An interval type-2 fuzzy set, denoted by *Ã*, is expressed as in Equation (1) or (2).

$$\widetilde{A} = \{ ((x, u),\ \mu_{\widetilde{A}}(x, u)) \mid \forall x \in X,\ \forall u \in J_x \subseteq [0, 1] \}. \tag{1}$$

Hence, *μ<sub>Ã</sub>*(*x*, *u*) = 1, ∀*u* ∈ *J<sub>x</sub>* ⊆ [0, 1]; this is considered an interval type-2 membership function, as shown in Figure 8.

$$\widetilde{A} = \int_{x \in X} \int_{u \in J_x} 1/(x, u), \quad J_x \subseteq [0, 1] \tag{2}$$

where ∫∫ denotes the union over all admissible *x* and *u*. An IT2FIS can be explained in terms of an upper membership function *μ̄<sub>Ã</sub>*(*x*) and a lower membership function *μ<sub>Ã</sub>*(*x*); *J<sub>x</sub>* is just the interval [*μ<sub>Ã</sub>*(*x*), *μ̄<sub>Ã</sub>*(*x*)]. A type-2 FIS is characterized by IF-THEN rules, where the antecedent and consequent sets are of type-2. The fundamental blocks used for designing the type-2 controller are the same as those used with type-1. As shown in Figure 8, a type-2 FLS includes a fuzzifier, a rule base, a fuzzy inference engine, and an output processor. The output processor includes a type-reducer and a defuzzifier; it produces a type-1 fuzzy set output (from the type-reducer) or a crisp number (from the defuzzifier) [53]. The type reducer is added because of its association with the nature of the membership grades of the elements [35].
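A common concrete instance of such an FOU is a Gaussian primary membership function with an uncertain mean. The sketch below (with illustrative parameters, not the paper's) returns the lower and upper membership bounds that delimit the interval *J<sub>x</sub>* at each *x*:

```python
import numpy as np

def it2_gaussian(x, m1, m2, sigma):
    """Interval type-2 Gaussian MF with uncertain mean in [m1, m2]
    (an illustrative FOU, not the paper's tuned parameters).
    Returns the lower and upper membership bounds (mu_lower, mu_upper)."""
    x = np.asarray(x, dtype=float)
    g = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    g1, g2 = g(m1), g(m2)
    # upper bound: left Gaussian on the left, 1 inside [m1, m2], right Gaussian on the right
    upper = np.where(x < m1, g1, np.where(x > m2, g2, 1.0))
    # lower bound: the smaller of the two extreme-mean Gaussians
    lower = np.minimum(g1, g2)
    return lower, upper
```

Plotting `lower` and `upper` over a range of `x` reproduces the shaded FOU band of Figure 7b between two type-1 Gaussian curves.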

**Figure 8.** Structure of a type-2 fuzzy logic system.

#### 2.3.1. Fuzzifier

In this step, the input fuzzy sets are described. There are two input parameters used in the proposed system, named robot direction and robot speed respectively. The width (X) and height (Y) of the eye were taken into account for the value ranges of the input membership functions (see Figure 9). As mentioned before, the X-axis represents the robot direction and the Y-axis represents the robot speed; these are used to determine the two crisp input variables. Table 1 shows the width and height information of the eye. These data are taken into account for the input functions: the table illustrates that the start and end coordinates of the eyes in 2D space are used for arranging the membership functions. The membership functions consist of one or several type-2 fuzzy sets. The fuzzifier maps a numerical vector *x* into a type-2 set *Ã*. Type-2 fuzzy singletons are considered; in singleton fuzzification, the inputs are crisp values with nonzero membership. We contemplate three fuzzy membership functions for the robot direction, with labels *L̃*, *S̃*, *R̃*, indicating left, straight, and right respectively, as illustrated in Figure 10. We worked with Gaussian and sigmoidal membership functions. A Gaussian type-2 fuzzy set is one in which the membership grade of every domain point is a Gaussian type-1 set contained in [0, 1]. Gaussian functions are unable to specify asymmetric membership functions while preserving smoothness, which is important in certain applications. The sigmoidal membership function, which is either open to the left or right or asymmetrically closed, is appropriate for representing concepts such as "very large" or "very negative". In the same manner, we contemplate three membership functions for the robot speed, with labels *Ñ*, *M̃*, *F̃*, indicating near, medium, and far respectively, as illustrated in Figure 10. The variable ranges of the functions are not infinite (see Figure 10).

**Figure 9.** Eye tracking region variable.

**Table 1.** Eye region information.

**Figure 10.** Membership functions of robot direction and speed.

The fuzzy controller outputs are the left and right wheel speeds. The linguistic variables are implemented with three membership functions each; for both the right and left wheel speed, these are labeled *S̃*, *M̃*, *F̃* (slow, medium and fast respectively), as illustrated in Figure 11.

**Figure 11.** Membership functions of the wheels speed.

#### 2.3.2. Fuzzy Inference Engine

The inference engine combines the rules and gives a mapping from input type-2 fuzzy sets to output type-2 fuzzy sets. Figure 8 shows a graphical representation of the relationship between input and output. It is necessary to compute the intersection and union of type-2 sets and to implement compositions of type-2 relations. The desired behavior is defined by a set of linguistic rules, and it is necessary to set the rules adequately for the desired result. For instance, a type-2 fuzzy logic system with *p* inputs (*x*<sub>1</sub> ∈ *X*<sub>1</sub>, ... , *x<sub>p</sub>* ∈ *X<sub>p</sub>*) and one output (*y*) with *M* rules has the following form.

$$R^l : \text{IF } x\_1 \text{ is } \widetilde{F}\_1^l \text{ and } \dots \text{ and } x\_p \text{ is } \widetilde{F}\_p^l \text{ THEN } y \text{ is } \widetilde{G}^l, \quad l = 1, \dots, M.$$

The knowledge bases related to the robot wheels speed are reported in Table 2. The approximate locations of the rules formed in the knowledge base in the coordinate plane are shown in Figure 12.


**Table 2.** Rule base for robot wheel speed fuzzy controller.

**Figure 12.** Graphical distribution of the rule table.

In this experiment, we used interval type-2 fuzzy sets and the minimum t-norm operation. The firing strength *F<sup>l</sup>*(**x**′) of rule *l* for the crisp input vector **x**′ is given by the interval type-1 set

$$F^l(\mathbf{x'}) = \left[\underline{f}^l(\mathbf{x'}),\ \overline{f}^l(\mathbf{x'})\right] \equiv \left[\underline{f}^l,\ \overline{f}^l\right] \tag{3}$$

where

$$\underline{f}^l(\mathbf{x'}) = \underline{\mu}\_{\widetilde{F}\_1^l}(x'\_1) \* \dots \* \underline{\mu}\_{\widetilde{F}\_p^l}(x'\_p), \tag{4}$$

$$\overline{f}^l(\mathbf{x'}) = \overline{\mu}\_{\widetilde{F}\_1^l}(x'\_1) \* \dots \* \overline{\mu}\_{\widetilde{F}\_p^l}(x'\_p). \tag{5}$$

The graphical representation of the system's rules is shown in Figure 12. The rules are composed of input and output linguistic variables. Nine inference rules are designed to determine how the mobile robot should be steered and at what velocity. In each rule, a logical AND operation is used to deduce the output. Table 2 presents the rule set, whose format is as follows:

Rule 1: IF direction (X) is left and speed (Y) is medium, THEN left wheel speed (Lws) is slow and right wheel speed (Rws) is medium.
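A minimal sketch of how such a rule base can be encoded and how the interval firing strength of Equations (3)–(5) is evaluated with the minimum t-norm; the rule consequents below are illustrative placeholders, not the actual entries of Table 2:

```python
# Hypothetical 3x3 rule base: (direction, speed) -> (Lws, Rws).
# The real consequents come from the paper's knowledge base (Table 2).
RULES = {
    ("L", "N"): ("S", "S"), ("L", "M"): ("S", "M"), ("L", "F"): ("S", "F"),
    ("S", "N"): ("S", "S"), ("S", "M"): ("M", "M"), ("S", "F"): ("F", "F"),
    ("R", "N"): ("S", "S"), ("R", "M"): ("M", "S"), ("R", "F"): ("F", "S"),
}

def firing_strength(antecedent, grades):
    """Interval firing strength per Eqs. (3)-(5): the minimum t-norm is
    applied separately to the lower and upper membership grades of each
    antecedent term, yielding the interval [f_lower, f_upper]."""
    lowers = [grades[label][0] for label in antecedent]
    uppers = [grades[label][1] for label in antecedent]
    return min(lowers), min(uppers)

# Example interval grades of the two crisp inputs against each label.
grades = {"L": (0.6, 0.8), "S": (0.1, 0.3), "R": (0.0, 0.0),
          "N": (0.2, 0.4), "M": (0.5, 0.9), "F": (0.0, 0.1)}
print(firing_strength(("L", "M"), grades))  # -> (0.5, 0.8)
```

Each of the nine rules is evaluated this way, and the resulting intervals feed the type reducer described next.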

#### 2.3.3. Type Reducer

The type reducer creates a type-1 fuzzy set output, which is then transformed into a crisp output by the defuzzifier, which combines the output sets to acquire a single output using one of the existing type-reduction methods. Type reduction was proposed by Karnik and Mendel [32,34,55]. In our experiments, we used the center-of-sets (cos) type-reduction method, expressed by Equation (6).

$$Y\_{\rm cos}(\mathbf{x}) = [y\_l, y\_r] = \int\_{y^1 \in [y\_l^1, y\_r^1]} \dots \int\_{y^M \in [y\_l^M, y\_r^M]} \int\_{f^1 \in [\underline{f}^1, \overline{f}^1]} \dots \int\_{f^M \in [\underline{f}^M, \overline{f}^M]} 1 \Bigg/ \frac{\sum\_{i=1}^M f^i y^i}{\sum\_{i=1}^M f^i}. \tag{6}$$

The consequent set of the interval type-2 system is determined by two endpoints (*y<sub>l</sub>*, *y<sub>r</sub>*). If the values of *f<sup>i</sup>* and *y<sup>i</sup>* associated with *y<sub>l</sub>* are denoted *f<sub>l</sub><sup>i</sup>* and *y<sub>l</sub><sup>i</sup>*, respectively, and the values of *f<sup>i</sup>* and *y<sup>i</sup>* associated with *y<sub>r</sub>* are denoted *f<sub>r</sub><sup>i</sup>* and *y<sub>r</sub><sup>i</sup>*, respectively, these endpoints are given by Equations (7) and (8).

$$y\_l = \frac{\sum\_{i=1}^{M} f\_l^i y\_l^i}{\sum\_{i=1}^{M} f\_l^i} \tag{7}$$

$$y\_r = \frac{\sum\_{i=1}^{M} f\_r^{\;i} y\_r^{\;i}}{\sum\_{i=1}^{M} f\_r^{\;i}} \tag{8}$$

where *y<sub>l</sub>* and *y<sub>r</sub>* are the outputs of the IT2FIS, which can be used to verify the data (training or testing) contained in the output of the fuzzy system.

#### 2.3.4. Defuzzifier

The interval fuzzy set *Y*cos(*x*) obtained from the type reducer is defuzzified; the average of *y<sub>l</sub>* and *y<sub>r</sub>* is used to defuzzify the output of an interval singleton type-2 fuzzy logic system. The equation is written as

$$y(\mathbf{x}) = \frac{y\_l + y\_r}{2}.\tag{9}$$
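The type-reduction and defuzzification steps of Equations (6)–(9) can be sketched as follows. For simplicity, each rule's consequent centroid is taken as a single point (*y<sub>l</sub><sup>i</sup>* = *y<sub>r</sub><sup>i</sup>*), and the endpoints are found by exhaustive search over the switch point, which the Karnik-Mendel iterations would locate more efficiently; the numbers are illustrative, not taken from the experiments:

```python
def cos_type_reduce(y, f_low, f_up):
    """Center-of-sets type reduction (Eq. (6)): the left and right
    endpoints (Eqs. (7) and (8)) are attained at some switch point k
    along the sorted rule centroids, so we try all of them.
    Assumes at least one lower firing strength is nonzero."""
    order = sorted(range(len(y)), key=lambda i: y[i])
    ys = [y[i] for i in order]
    fl = [f_low[i] for i in order]
    fu = [f_up[i] for i in order]
    M = len(ys)

    def centroid(f):
        return sum(fi * yi for fi, yi in zip(f, ys)) / sum(f)

    # Eq. (7): y_l uses upper grades for small centroids, lower for large.
    y_l = min(centroid(fu[:k] + fl[k:]) for k in range(M + 1))
    # Eq. (8): y_r uses lower grades for small centroids, upper for large.
    y_r = max(centroid(fl[:k] + fu[k:]) for k in range(M + 1))
    return y_l, y_r

def defuzzify(y_l, y_r):
    """Eq. (9): the crisp output is the midpoint of [y_l, y_r]."""
    return (y_l + y_r) / 2

# Illustrative rule centroids and interval firing strengths.
y_l, y_r = cos_type_reduce([1.0, 3.0, 5.0],
                           [0.2, 0.4, 0.2],
                           [0.6, 0.8, 0.6])
print(round(y_l, 3), round(y_r, 3), defuzzify(y_l, y_r))
```

The midpoint returned by `defuzzify` is the crisp wheel-speed command sent to the robot.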

#### **3. Experiments and Results**

In this paper, we focused on an intelligent vision-based gaze-guided robot control system. The method was evaluated and validated with several experiments, performed using an overhead camera image and the type-2 fuzzy control system. The aim is to plan strategically and implement remote control of the robot based on the coordinates of where the user is looking. The experiments included two stages: evaluation and determination of the gaze coordinates, and use of this information as an effective input command for robot control. In our proposed method, we designed an interface in which the user looks at the experimental field, viewed from the overhead camera, on the computer monitor. The eye gaze tracker is calibrated based on real-world eye viewing fields; the human eye's field of view is an essential factor in establishing the coordinate system. A closed-eye situation is also identified by the computer program and is used to stop the robot. The gaze coordinates are then utilized to control the robot remotely, so the robot is controlled directly after calibration of the gaze tracker. The robot direction and speed are modulated linearly by the distance from the center of the gaze coordinate. In the eye's horizontal plane, the x-axis represents the movement of the eye gaze coordinate, and the vertical coordinate represents the mobile robot's wheel speed input variable; this coordinate system is used to determine robot steering. Commands for robot motion control are extracted and updated continuously, every 250 ms.
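The 250 ms command loop just described can be sketched as follows; `read_gaze`, `infer`, and `set_wheels` are hypothetical callbacks standing in for the eye tracker, the IT2FIS controller, and the robot link, since the implementation itself is not published:

```python
import time

def control_loop(read_gaze, infer, set_wheels, ticks, period=0.25):
    """Sample the gaze and update the wheel commands every `period`
    seconds (250 ms in the experiments), for `ticks` iterations."""
    for _ in range(ticks):
        gaze = read_gaze()             # (x, y) or None for closed eyes
        if gaze is None:
            set_wheels(0.0, 0.0)       # closed eyes -> stop the robot
        else:
            set_wheels(*infer(gaze))   # IT2FIS output -> wheel speeds
        time.sleep(period)

# Example with dummy callbacks: two open-eye frames, then a closed eye.
frames = iter([(120, 40), (150, 60), None])
log = []
control_loop(lambda: next(frames),
             lambda g: (g[0] / 100, g[1] / 100),  # placeholder controller
             lambda l, r: log.append((l, r)),
             ticks=3, period=0.0)
print(log)  # -> [(1.2, 0.4), (1.5, 0.6), (0.0, 0.0)]
```

In the real system, `infer` would be the full fuzzification, inference, type-reduction, and defuzzification pipeline of Section 2.3.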

The simulation results show the ability of the type-2 fuzzy logic controller to determine human intention from the combined viewpoint and eye gaze. Moreover, the proposed method is able to determine robot speed and orientation and to avoid obstacles. Figure 13 illustrates the simulation of the robot's initial and goal points, and the robot motion data for this behavior are shown in Table 3.

**Figure 13.** Robot trajectory from eye gaze based go-to-goal behavior.


**Table 3.** Robot trajectories data (inputs, outputs) for two scenarios from Figure 13.

The outcomes of the experiment and simulation are illustrated graphically and plotted separately in Figures 14–19 to elucidate the effect of gaze based on the type-2 fuzzy rule set. Table 3 shows that the robot wheel speeds change according to the gaze coordinates; the table contains only a few examples, and the details of these data are shown graphically in Figures 15, 17 and 18. Various scenarios were performed to test the proposed method in a real environment. The experimental setup, shown in Figures 14 and 16, includes the robot, an obstacle, and a target token; six different frames obtained from the experimental environment are shown. The adaptability, robustness, accuracy, and efficiency of the proposed method can be observed from these experimental results. The eye gaze coordinates, robot path coordinates, and robot wheel speeds were stored during the movement of the robot, and these data are illustrated graphically. The relationship between robot speed and gaze point is illustrated in Figure 18. The effectiveness of the fuzzy rules and the suitability of the fuzzy controller for collision-free behavior are shown in Figure 19.

In the first scenario (see Figure 14), the robot navigates in an uncomplicated environment with one obstacle. During its motion to the target, the mobile robot encounters the obstacle and moves around it smoothly and without colliding. The real experiment for this scenario is illustrated in Figure 14, and the eye gaze coordinates are shown in Figure 15.

**Figure 14.** Experimental setup; 1. Scenario—robot motion control from start (1) and goal (6) position.

**Figure 15.** Eye gaze coordination for 1. Scenario (Figure 14) (2D line graph (**a**), scattered (**b**)).

In the second scenario, the user directed the robot to the left side with eye movements, and the mobile robot behaved similarly to the first scenario. The real experimental results are shown in Figure 16, and the plot of this scenario is shown in Figure 17. The objective of this drive is to confirm the efficacy of the method in different cases. As can be seen, the mobile robot navigates successfully around the obstacles without collision and reaches the target.

**Figure 16.** Experiment setup; 2. Scenario—robot motion control from start (1) and goal (6) position.

**Figure 17.** Eye gaze coordination for 2. Scenario (Figure 16) (2D line graph (**a**), scattered (**b**)).

The robot wheel speeds for the two scenarios are shown in Figure 18, and the robot trajectory from the experiment of Figure 14 is shown in Figure 19.

**Figure 18.** Robot wheel speed for the two scenarios (1. Scenario (**a**), 2. Scenario (**b**)) from Figures 14–16.

**Figure 19.** Robot trajectory from the experiment of Figure 14 (1. Scenario). (**a**) Inputs (Gaze Coordinate); (**b**) robot path trajectory.

#### **4. Discussion**

From the experiments, the human gaze has proven to be a promising modality for GGC. Our proposed method was extended to explore motion control by direct gaze input, enabling hands-free control of the mobile robot. The gaze-based control system has good potential for future work; we believe this technology can play a significant role in facilitating the lives of disabled people. Gaze-based control is still in its infancy, and its functions and features need to be developed further before it can be used in more complex situations. The use of an eye-based interaction modality can reduce both the physical and mental burden. The main aim will be to increase the usability of the system by improving learnability and ease of use while considering overall efficiency. The low-cost eye-tracking device used in our experiments is a crucial factor for stability: the user's head must be kept stable, and the eye tracker must capture the eye frame in certain positions. Although this setup is difficult to arrange, it is preferred because it is inexpensive and robust in providing gaze positions for motion control commands. Highly accurate eye trackers can provide vehicle control that is as good, in terms of speed and error, as mouse control. The main difference between our model and gaze-guided wheelchair driving is the use of overhead camera images. This type of motion control could be profitable in remote control situations, such as hazardous or poisonous places, or where the hands are needed for other tasks. A significant limitation of the current work is that the surveillance system is managed from a fixed location. The correct determination of the gaze width and height positions (start and end) is important for the designed controller. When the eyes are closed, the motors are driven to zero and the robot is stopped; for this purpose, a sub-program is executed without consulting the rule base. This is considered the mobile robot's go-to-goal behavior, and for this reason the robot's retraction behavior has not been taken into account. This choice was made so that a novel idea for socially assistive robots could be feasibly tested, and we expect this idea to be implemented for target audiences who need socially assistive robots.

The proposed method was implemented in an indoor experiment. As the work in this paper progressed, many ideas came to mind that could be explored in future research, such as outdoor experiments and investigations of gaze-based control of other devices, including how it compares with traditional methods such as omnidirectional wheels, which are more troublesome than conventional ones. Lastly, our proposed method may serve as a simple, secure, and affordable basis for future gaze-guided vehicle control prototypes.

#### **5. Conclusions**

In this paper, we proposed vision-based gaze-guided mobile robot control. The application of gaze interaction, wearable, and mobile technology has mainly been aimed at controlling the movement of robots. Users were enabled to control the robot remotely and hands-free by using their eyes to specify the target to which they want the robot to move. A central processing unit executes the data communication between the user and the robotic platform; it continuously monitors the state of the robot through visual feedback and sends commands to control the motion of the robot. Our experiments involved an overhead camera, an eye-tracking device, a differential drive mobile robot, and vision and IT2FIS tools. The controller produces the wheel velocity commands required to drive the mobile robot to its destination along a smooth path. To achieve this, our methodology incorporates two basic behaviors: map generation and go-to-goal behavior. Go-to-goal behavior based on an IT2FIS is smoother and more uniform in handling data uncertainties, producing better performance. The algorithms were implemented in an indoor environment with obstacles present. Furthermore, the IT2FIS controller was used to control a real-time mobile robot using exact gaze data obtained from human users with an eye-tracking system. The differential drive mobile robot (EV3) was successfully commanded by the user's gaze. This method of interaction is available to most people, including people with disabilities and the elderly, whose motor abilities may be impaired; thus, we would like to emphasize that this technology should be developed further so that it can be used in many fields. This system can also be an alternative or a supplement to a conventional control interface such as a mouse or joystick. The results of the proposed technique have been illustrated via simulation and experiments. They indicate that the intelligent vision-based gaze-guided robot control (GGC) system is applicable and that the IT2FIS can successfully infer operator intent and modulate speed and direction accordingly. The experimental results obtained are very adequate and verify the efficacy of the proposed approach.

**Author Contributions:** M.D. analyzed the original Mobile Robot motion control, visual servoing system and Gaze Guide control methods and applied Type-2 fuzzy logic for dynamic parameter adaptation applied for tracking trajectories, and contributed to performing the experiments and wrote the paper; O.C. reviewed the state of the art, analyzed the data and proposed the method; A.F.K. contributed to the discussion and analysis of the results.

**Funding:** This research was funded by TUBITAK-BIDEB under grant 2214/A.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
