# **Dynamics under Uncertainty: Modeling Simulation and Complexity**

Edited by Dragan Pamucar, Dragan Marinkovic and Samarjit Kar

Printed Edition of the Special Issue Published in *Mathematics*

www.mdpi.com/journal/mathematics

## **Dynamics under Uncertainty: Modeling Simulation and Complexity**


Editors

**Dragan Pamucar, Dragan Marinkovic, Samarjit Kar**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors*

Dragan Pamucar
Military Academy, Department of Logistics
University of Defence in Belgrade
Belgrade, Serbia

Dragan Marinkovic
Faculty of Mechanical Engineering and Transport Systems
Technische Universitaet Berlin
Berlin, Germany

Samarjit Kar
Department of Mathematics
National Institute of Technology Durgapur
Durgapur, India

*Editorial Office*
MDPI
St. Alban-Anlage 66
4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: www.mdpi.com/journal/mathematics/special_issues/Dynamics_Uncertainty_Modeling_Simulation_Complexity).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-1576-2 (Hbk) ISBN 978-3-0365-1575-5 (PDF)**

© 2021 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


### **Chao Fu, Guojin Feng, Jiaojiao Ma, Kuan Lu, Yongfeng Yang and Fengshou Gu**

Predicting the Dynamic Response of Dual-Rotor System Subject to Interval Parametric Uncertainties Based on the Non-Intrusive Metamodel
Reprinted from: *Mathematics* **2020**, *8*, 736, doi:10.3390/math8050736 . . . **165**

#### **Miomir Stanković, Željko Stević, Dillip Kumar Das, Marko Subotić and Dragan Pamučar**


## **About the Editors**

#### **Dragan Pamucar**

Dr. Dragan Pamucar is an Associate Professor at the University of Defence in Belgrade, Department of Logistics, Serbia. Dr. Pamucar received a Ph.D. in Applied Mathematics, with a specialization in multi-criteria modeling and soft computing techniques, from the University of Defence in Belgrade, Serbia in 2013, and an M.Sc. degree from the Faculty of Transport and Traffic Engineering in Belgrade in 2009. His research interests are in the fields of computational intelligence, multi-criteria decision-making problems, neuro-fuzzy systems, fuzzy, rough and intuitionistic fuzzy set theory, and neutrosophic theory. Application areas include a wide range of logistics, management, and engineering problems.

#### **Dragan Marinkovic**

Dr. Dragan Marinkovic is a Full Professor at TU Berlin, Department of Structural Analysis, Germany. Dr. Marinkovic received Ph.D. and M.Sc. degrees from the University of Niš, Serbia. His research interests are in the fields of computational intelligence, multi-criteria decision-making problems, neuro-fuzzy systems, finite element analysis, nanomaterials, nanotechnology, materials science, mathematical modeling, experimentation, ANSYS, and LabVIEW. Application areas include a wide range of logistics and engineering problems.

#### **Samarjit Kar**

Dr. Samarjit Kar is a Professor at the National Institute of Technology Durgapur, West Bengal, India. His research interests are in the fields of computational intelligence, multi-criteria decision-making problems, neuro-fuzzy systems, fuzzy, rough and intuitionistic fuzzy set theory, and neutrosophic theory. Application areas include a wide range of logistics, management, and engineering problems.

## **Preface to "Dynamics under Uncertainty: Modeling Simulation and Complexity"**

Dear Colleagues,

The dynamics of systems have proven to be very powerful tools in understanding the behavior of different natural phenomena throughout the last two centuries. However, the attributes of natural systems are observed to deviate from their classical state due to the effect of different types of uncertainties. In fact, randomness and impreciseness are the two major sources of uncertainty in natural systems. Randomness is modeled by different stochastic processes, while impreciseness can be modeled by fuzzy sets, rough sets, Dempster–Shafer theory, etc.

Generally, symmetry, asymmetry, and antisymmetry are basic characteristics of the binary relations used when modeling dynamical systems. Moreover, the notion of symmetry appears in many articles on fuzzy sets, rough sets, Dempster–Shafer theory, etc., which are employed in dynamical systems. Hence, the behavior of dynamical systems with uncertain variables, parameters, and functions has attracted academic attention in the recent past. Similarly, the study of the dynamics manifested in complex networks, or interaction networks of individuals, has become popular in the last few decades. The study of collective dynamics in complex interaction networks has proven useful for understanding collective dynamic phenomena such as the emergence of cooperation between rational agents, the synchronization of signals (as in flashing fireflies), rumor spreading, or consensus forming in a social network. Different methods of statistical mechanics are also successfully applied to study such complex systems and to understand the emergence of different collective behaviors.

When randomness and imprecision coexist in a system, the system is called a hybrid uncertain system. In such a system, the overall uncertainty is an aggregation of both types of uncertainties. However, in the context of modeling the behavior of complex natural systems, it is extremely important to analyze the effect of the appropriate uncertainty to understand the predictability of different phenomena. Examples of such uncertain dynamical systems can be found at different levels of the universe, ranging from the interaction of quantum particles to the complex interaction of biochemical molecules, such as signaling in the brain, or even complex social interactions like opinion forming.

Potential topics include, but are not limited to, the following:


**Dragan Pamucar, Dragan Marinkovic, Samarjit Kar** *Editors*

## *Editorial* **Dynamics under Uncertainty: Modeling Simulation and Complexity**

**Dragan Pamučar 1,\*, Dragan Marinković 2 and Samarjit Kar 3**


This issue contains the successful invited submissions [1–11] to a Special Issue of *Mathematics* on the subject area of "Dynamics under Uncertainty: Modeling Simulation and Complexity".

The dynamics of systems have proven to be very powerful tools in understanding the behavior of different natural phenomena throughout the last two centuries. However, the attributes of natural systems are observed to deviate from their classical state due to the effects of different types of uncertainties. In actuality, randomness and impreciseness are the two major sources of uncertainties in natural systems. Randomness is modeled by different stochastic processes, and impreciseness could be modeled by fuzzy sets, rough sets, the Dempster–Shafer theory, etc.

Hence, the behavior of dynamical systems with uncertain variables, parameters, and functions has attracted academic attention in the recent past. Similarly, the study of the dynamics manifested in complex networks, or interaction networks of individuals, has become popular in the last few decades. The study of collective dynamics in complex interaction networks has proven useful for understanding collective dynamic phenomena such as the emergence of cooperation between rational agents, the synchronization of signals (as in flashing fireflies), rumor spreading, or consensus forming in a social network. Different methods of statistical mechanics are also successfully applied to study such complex systems and to understand the emergence of different collective behaviors. When randomness and imprecision coexist in a system, the system is called a hybrid uncertain system. In such a system, the overall uncertainty is an aggregation of both types of uncertainties. However, in the context of modeling the behavior of complex natural systems, it is extremely important to analyze the effect of the appropriate uncertainty to understand the predictability of different phenomena. Examples of such uncertain dynamical systems can be found at different levels of the universe, ranging from the interaction of quantum particles to the complex interaction of biochemical molecules, such as signaling in the brain, or even complex social interactions, such as opinion forming.

This Special Issue includes the most important forecasting techniques applied to modeling simulation and complexity in dynamic systems, such as fuzzy multi-criteria techniques, artificial intelligence, the Dempster–Shafer approach, and heuristics.

The response to our call is summarized by the statistics in Figure 1.

**Citation:** Pamučar, D.; Marinković, D.; Kar, S. Dynamics under Uncertainty: Modeling Simulation and Complexity. *Mathematics* **2021**, *9*, 1416. https://doi.org/10.3390/math9121416

Received: 11 June 2021; Accepted: 16 June 2021; Published: 18 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

**Figure 1.** Special Issue statistics.

The geographical distribution of the authors (published papers) is presented in Table 1.

**Table 1.** Publications by country.


Published submissions are related to road traffic risk analysis [1], dual-rotor systems [2], multi-criteria decision making [3,5,6,8,9], MIMO discrete-time systems [4], the classification and diagnosis of brain disease [7], data mining [10], and empathic building [11].

This Special Issue presents 11 models, which are briefly described below. Stanković et al. [1] proposed the fuzzy Measurement Alternatives and Ranking according to the COmpromise Solution (fuzzy MARCOS) method for road traffic risk analysis. In addition, they used the fuzzy PIvot Pairwise RElative Criteria Importance Assessment (fuzzy PIPRECIA) method to determine the weights of the criteria on the basis of which road network sections were evaluated. Fu et al. [2] investigated the non-probabilistic steady-state dynamics of a dual-rotor system with parametric uncertainties under two-frequency excitations. Žižović et al. [3] presented a new method for determining weight coefficients by forming a non-decreasing series at criteria significance levels (the NDSL method). Li et al. [4] investigated the problems of the state feedback and static output feedback preview controllers for uncertain discrete-time multiple-input multiple-output systems based on the parameter-dependent Lyapunov function and the linear matrix inequality technique. Pribićević et al. [5] developed a new multi-criteria methodology that enables the objective processing of fuzzy linguistic information in the pairwise comparison of criteria, which they called the fuzzy DEMATEL-D method. Žižović et al. [6] presented a new MADM method called RAFSI (Ranking of Alternatives through Functional mapping of criterion Sub-intervals Into a single interval), which successfully eliminates the rank reversal problem. Hamzenejad et al. [7] introduced a new robust algorithm using three methods for the classification of brain disease: (1) the Wavelet-Generalized Autoregressive Conditional Heteroscedasticity-K-Nearest Neighbor method; (2) the Wavelet-GARCH-KNN method; and (3) the Wavelet Local Linear Approximation. Pamučar et al. [8] presented an improved Best Worst Method for determining criteria weights in multi-criteria decision making. Ulutaş et al. [9] proposed a multiple-criteria decision-making approach for the selection of the optimal equipment for performing logistics activities. To define the objective weights of the criteria, they applied the correlation coefficient and the standard deviation, and for the final ranking of the alternatives, they utilized the MARCOS method. Aleksić et al. [10] developed a prediction model that determines the most important factors for bleeding in liver cirrhosis. Salmeron and Ruiz-Celma [11] proposed an artificial intelligence-based approach to detect synthetic emotions based on Thayer's emotional model and Fuzzy Cognitive Maps.

We found the submissions and selections of papers for this issue very inspiring and rewarding. We also thank the editorial staff and reviewers for their efforts and help during the process.

**Author Contributions:** Conceptualization, D.P., D.M. and S.K.; methodology, D.P. and D.M.; formal analysis, S.K.; investigation, D.P.; supervision, D.M. and D.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Synthetic Emotions for Empathic Building**

**Jose L. Salmeron 1,2,\* and Antonio Ruiz-Celma <sup>3</sup>**


**Abstract:** Empathic buildings are intelligent buildings that aim to measure and deliver the best user experience. A smoother, more intuitive environment leads to a better mood. The system gathers data from sensors that measure things like air quality, occupancy, and noise, and analyses them for a better user experience. This research proposes an artificial intelligence-based approach to detect synthetic emotions based on Thayer's emotional model and Fuzzy Cognitive Maps. This emotional model is based on a biopsychological approach to the analysis of humans' emotional state. In this research, Fuzzy Grey Cognitive Maps (FGCMs) are used, an extension of fuzzy cognitive maps that uses grey systems theory to model uncertainty. FGCMs have become a very valuable theory for modeling high-uncertainty systems when small and incomplete discrete data sets are available. This research includes experiments with a couple of synthetic case studies for testing this proposal. This proposal provides an innovative way of simulating synthetic emotions and designing an empathic building.

**Keywords:** empathic building; fuzzy grey cognitive maps; Thayer's emotion model; artificial emotions; affective computing

**Citation:** Salmeron, J.L.; Ruiz-Celma, A. Synthetic Emotions for Empathic Building. *Mathematics* **2021**, *9*, 701. https://doi.org/10.3390/math9070701

Academic Editor: Dragan Pamucar

Received: 16 December 2020; Accepted: 12 March 2021; Published: 24 March 2021


#### **1. Introduction**

Autonomous systems are designed to interact with one or more targets in an environment primarily without human intervention [1,2]. Some systems are capable of operating in an environment given only high-level objectives, while others do not require any human involvement at all [3,4]. The complexity of this type of system with a high degree of autonomy makes the result of its interaction with the environment uncertain, and it is not possible to guarantee the desired behavior [5]. For this reason, approaches such as Off-Line Reinforcement Learning arise, which train agents in a controlled environment.

Regardless of the technique used for the design of autonomous systems, highly specialized tasks may require the inclusion of affective behaviors to improve their performance [6]. The role of emotions in human reasoning, daily activities, and decision-making is critical; in other words, emotions have a huge impact on human intelligence. If their emotions are not working properly, human beings will not make decisions properly. Therefore, there is a strong interrelation between embedding emotions in systems and making systems that include intelligence. Artificial emotion is an emerging research subject that aims to endow machines with artificial emotions [7].

According to [8], affective forecasting studies have shown that people are biased in making both random and systematic errors when anticipating their own future emotional states. Because of the divergence between experienced and anticipated reactions, it is worth examining artificial intelligence methods to avoid these problems.

An empathic building is an intelligent building that aims to measure and deliver the best user experience. A smoother, more intuitive environment leads to a better mood. The system gathers the relevant data from IoT sensors that measure things like air quality, occupancy, and noise, and analyzes them for a better user experience.

The main contribution of this paper is to propose Fuzzy Grey Cognitive Maps (FGCMs) as an innovative technique for predicting artificial emotions in systems with a certain degree of autonomy in complex environments with high uncertainty. In addition, the dynamic analysis mapping of the FGCM uses Thayer's model of emotion within an emotional space. Thayer's model defines the categories of emotions in a matrix with four quadrants. This proposal translates that matrix into two-dimensional Cartesian coordinates according to valence and excitation.

The remainder of the paper is organized as follows. Section 2 presents the theoretical background. Section 3 shows the fundamentals of FGCMs. Section 4 describes the methodological proposal. The next section details the experimental approach with two case studies, and conclusions are finally drawn.

#### **2. Theoretical Background**

Affective computing seeks to bring computers and humans closer together on an emotional level. Affective computing tries to give systems the human-like capabilities of observing, interpreting, and generating emotions [9]. As the authors explained previously, emotions have a huge impact on human physical states, beliefs, motivations, activities, decisions, and even wishes. An appropriate balance of emotions gives human beings flexibility and creativity in solving problems [10].

Affective computing focuses on the recognition and processing of human emotions. Emotion processing is useful for analyzing human reactions, eliciting behavioral intentions, and generating reasonable responses from systems. Over the last few years, emotion research has become a multi-disciplinary field of growing interest [11]. Moreover, emotions play a fundamental role in human-machine interaction. The simulation or automatic detection of emotional states aims to improve the interaction between humans and machines. Such simulation or detection would therefore allow systems to follow alternative operating paths in accordance with current human emotions.

It could be valuable in many real-life applications, such as fear-type emotion recognition for audio-based surveillance systems [12], real-life emotion detection within a medical emergency call centre [13], semi-automatic diagnosis of psychiatric diseases [14], detection of children's emotional states in conversational computer games [15], and so on.

On the other hand, relevant advances have been made in speech synthesis as well [16]. Biosignals (e.g., the electrocardiogram (ECG or EKG), electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG)), as well as face and body images, are options for detecting emotional states [17,18]. However, those kinds of methods are more invasive and too complex to apply in many real applications [11]. This research proposes a non-invasive soft computing-based method for simulating emotions in real-world applications.

So far, there are many emotion-based theories, such as the OCC model [19] and Thayer's emotion model. The OCC model comprises a classification of twenty-two emotion types within a hierarchy. The hierarchy includes three branches, namely emotions concerning the consequences of events, the actions of agents, and aspects of objects. The emotions identified in the OCC model include joy, hope, relief, pride, gratitude, and like, as well as distress, fear, disappointment, remorse, anger, and dislike [20].

Furthermore, some branches combine to form a set of composed emotions, specifically emotions concerning the consequences of events. According to the OCC model, all emotions can be grouped in terms of the event that provokes each emotion. The scenarios that drive emotions can be folded into three kinds. The first kind of scenario that drives emotions is the consequences of events. The second kind is the actions of the agents. The third is the appearance of objects.

Thayer's emotional model is the affective framework that supports this research. Next, the fundamentals of that model are shown.

#### *2.1. Thayer's Emotion Model*

Thayer's model [21] is based on mood analysis as a biopsychological concept [22]. Thayer considers mood an affective state highly related to psychophysiological and biochemical elements. Moreover, individual cognitive actions and casual events play a critical role in how mood is understood at any given moment.

Thayer's emotion model is frequently used to avoid the ambiguity of adjectives [23]. Thayer's model organizes the major categories of emotions in a matrix according to their arousal (how calming versus exciting they are) and valence (how negative versus positive). The emotion categories can be separated into the four quadrants of the common two-dimensional Cartesian coordinate system (Figure 1), with valence on the *x*-axis and arousal on the *y*-axis. The origin models the lack of emotions.

**Figure 1.** Emotional model.

Three emotions are located in each quadrant. The first quadrant (positive valence and positive arousal) is made up of the emotions excited, happy, and pleased. The second quadrant (negative valence and positive arousal) includes the emotions annoyed, angry, and nervous. The third quadrant (negative valence and negative arousal) contains sadness, boredom, and sleepiness. Finally, the last quadrant (positive valence and negative arousal) contains the emotions calm, peace, and relaxation. According to this, the emotional space is made up of twelve emotions.

The distance to the origin reflects the intensity of the emotion. Emotions closer to the origin are less intense, while those further away from the origin represent more intense emotions.
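The quadrant-plus-intensity reading of Thayer's plane described above can be sketched in a few lines of Python. This is an illustrative mapping of ours, not code from the paper; the per-quadrant emotion labels follow the description in the text and are placeholders.

```python
import math

# Hypothetical per-quadrant emotion labels, following the text's description.
QUADRANT_EMOTIONS = {
    1: ["pleased", "happy", "excited"],   # +valence, +arousal
    2: ["annoyed", "angry", "nervous"],   # -valence, +arousal
    3: ["sad", "bored", "sleepy"],        # -valence, -arousal
    4: ["calm", "peaceful", "relaxed"],   # +valence, -arousal
}

def classify(valence: float, arousal: float):
    """Return (quadrant, intensity) for a point in the emotional plane.

    Intensity is the Euclidean distance to the origin; the origin itself
    models the absence of emotion (quadrant None).
    """
    intensity = math.hypot(valence, arousal)
    if intensity == 0.0:
        return None, 0.0
    if valence >= 0 and arousal >= 0:
        quadrant = 1
    elif valence < 0 and arousal >= 0:
        quadrant = 2
    elif valence < 0:
        quadrant = 3
    else:
        quadrant = 4
    return quadrant, intensity

q, i = classify(0.6, 0.8)  # positive valence and arousal -> first quadrant
```

The same point can then be mapped to one of the twelve labels by how far along the quadrant its intensity falls.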

#### *2.2. Emotional-Based AI Systems*

Research has been done to develop emotional systems in various settings, such as emotions in music, art, and so on. The authors present some efforts below.

Marreiros et al. [20] design a Ubiquitous Group Decision Support System (u-GDSS) that enables asynchronous and distributed computational services. One of the most interesting elements of this research is a multiagent-based simulator of emotional group decision-making. Zhou et al. [24] incorporate affective computing and emotion ontology within an emotion-aware service-oriented architecture. This framework allows emotion-sensitive services to be published. Sharada and Ramanaiah [25] propose an intelligent agent framework based on a neuro-fuzzy system to process events. Its emotion generation is based on a Hopfield network.

Setiono et al. [26] propose a game design with affective computing in which the experience of the players is improved through the collection and understanding of the players' emotions. Han et al. [27] propose a human-centric lifelong learning framework whose added value is affective computing. The results of their research show that the incorporation of affective computing greatly improves on the conventional alternatives. Kratzwald et al. [28] propose a personalized transfer learning approach that uses sentiment analysis to achieve significant performance improvements.

In addition, Fuzzy Cognitive Maps have also been used as an interface between the emotions, mood, and behavior of human beings. Salmeron [29] builds emotional robots that operate in near real time and improve their sensitivity. Salmeron and Lopez [30] and Salmeron [31] present FCM-based proposals for generating synthetic emotions. This is the starting point of this research. FCMs have several valuable elements for the generation of synthetic emotions, such as flexible and adaptive reasoning and a high abstraction level [32,33]. Furthermore, this technique has been widely used to model and analyze complex dynamic systems [34–36]. As a cognition tool, an FCM is easy to use, and it can model knowledge and reasoning in an efficient way.

#### **3. Fuzzy Grey Cognitive Maps**

#### *3.1. Fundamentals*

Grey Systems Theory (GST) is a set of problem-solving tools for environments with high uncertainty and discrete, small, incomplete data sets [37]. GST was created to analyze small data samples with poor information quality. GST has found successful applications in energy, transportation, military science, business, meteorology, medicine, agriculture, industry, and other fields.

The Fuzzy Grey Cognitive Map (FGCM) is based on FCMs and GST, and it has become a very worthy theory for solving problems within domains with high uncertainty [38]. FGCMs offer an intuitive way to model and reason about concepts without loss of precision. An advantage of FGCMs is that non-technical decision-makers can understand all the problems in a given scenario using decision models represented as causal graphs. Furthermore, an FGCM makes it possible to locate the most critical factor impacting the expected target or output concept.

FGCM nodes represent concepts relevant to the problem. The influences between concepts are modeled by directed edges: an edge linking two nodes represents the grey causal impact of the cause concept on the effect concept. As with FCMs, FGCM models are represented by a grey adjacency matrix $A^{\pm}$.

$$A^{\pm} = \begin{pmatrix} \omega_{11}^{\pm} & \dots & \omega_{1n}^{\pm} \\ \vdots & \ddots & \vdots \\ \omega_{n1}^{\pm} & \dots & \omega_{nn}^{\pm} \end{pmatrix} \tag{1}$$

where the entry $\omega_{ij}^{\pm}$ in row $c_i$ and column $c_j$ is the grey weight of the edge from concept $c_i$ to concept $c_j$.

FGCMs can be considered a special type of dynamic system that includes feedback, where the effect of a change in one node can impact other nodes, which in turn can impact the concept that initiated the change. An FGCM models unstructured knowledge through grey concepts and the grey causal relationships between them, building on FCMs [38–43].

Because FGCMs are hybrid methods that combine grey systems theory and neural networks, the causal influence between each pair of nodes (concepts) is measured by a grey weight as follows

$$\omega_{ij}^{\pm} = [\underline{\omega}_{ij}, \overline{\omega}_{ij}] \mid \underline{\omega}_{ij} \wedge \overline{\omega}_{ij} \in [-1, +1] \vee [0, +1] \tag{2}$$

where *i* is the pre-synaptic (cause) node and *j* is the post-synaptic (effect) one. Note that if the FGCM is unipolar, then the upper $\overline{\omega}_{ij}$ and lower $\underline{\omega}_{ij}$ weights belong to the range [0, +1]. However, if the FGCM is bipolar, then the upper and lower weights belong to the range [−1, +1].

FGCM dynamics begin with an initial grey vector state $c^{\pm}(0)$, which models proposed initial imprecise stimuli. The initial grey vector state with $n$ nodes is denoted as

$$\begin{aligned} c^{\pm}(0) &= \left( c_{1}^{\pm}(0), c_{2}^{\pm}(0), \dots, c_{n}^{\pm}(0) \right) \\ &= \left( [\underline{c}_{1}(0), \overline{c}_{1}(0)], [\underline{c}_{2}(0), \overline{c}_{2}(0)], \dots, [\underline{c}_{n}(0), \overline{c}_{n}(0)] \right) \end{aligned} \tag{3}$$

The updated node states are computed in an iterative inference process with an activation function (usually the sigmoid or hyperbolic tangent function) [38,44,45], which monotonically maps the grey node value into a normalized range, [0, +1] or [−1, +1], depending on the selected function. Note that grey arithmetic is detailed in [38]. Each single node would be updated as follows

$$c_{j}^{\pm}(t+1) \in f^{\pm}\left(\sum_{i=1}^{n} \omega_{ij}^{\pm} \cdot c_{i}^{\pm}(t)\right) \in \left[\underline{c}_{j}(t+1), \overline{c}_{j}(t+1)\right] \tag{4}$$

If the nodes have memory of their previous state, the updating equation is as follows

$$c_{j}^{\pm}(t+1) \in f^{\pm}\left( c_{j}^{\pm}(t) \oplus \sum_{i=1}^{n} \omega_{ij}^{\pm} \cdot c_{i}^{\pm}(t) \right) \tag{5}$$

where ⊕ is the summation of grey numbers.
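As an illustration of the iteration in Equations (4) and (5), one inference step can be sketched with plain interval arithmetic. This Python sketch is ours, not from the paper; grey values are modeled as `(lower, upper)` tuples, and all function names are illustrative.

```python
import math

def interval_mul(a, b):
    """Grey (interval) product: take min/max over all endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def interval_add(a, b):
    """Grey summation (the circled-plus of Equation (5))."""
    return (a[0] + b[0], a[1] + b[1])

def grey_sigmoid(c, lam=1.0):
    """Unipolar sigmoid applied to both endpoints of a grey value."""
    f = lambda x: 1.0 / (1.0 + math.exp(-lam * x))
    return (f(c[0]), f(c[1]))

def fgcm_step(state, W, lam=1.0, memory=True):
    """One FGCM inference iteration following Equations (4)-(5).

    state[i] is the grey value of node i; W[i][j] is the grey weight
    of the edge from (pre-synaptic) node i to (post-synaptic) node j.
    """
    n = len(state)
    nxt = []
    for j in range(n):
        acc = state[j] if memory else (0.0, 0.0)  # memory term of Eq. (5)
        for i in range(n):
            acc = interval_add(acc, interval_mul(W[i][j], state[i]))
        nxt.append(grey_sigmoid(acc, lam))
    return nxt

# Toy two-node map: c1 excites c2 with an uncertain weight [0.4, 0.7].
W = [[(0.0, 0.0), (0.4, 0.7)],
     [(0.0, 0.0), (0.0, 0.0)]]
state = [(0.8, 1.0), (0.0, 0.0)]
state = fgcm_step(state, W)
```

After one step, each node's value remains an ordered interval inside [0, 1], so the uncertainty of the weight propagates into the uncertainty of the effect node.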

The most commonly used activation function in FGCMs is the unipolar sigmoid, used when the nodes' values map into the range [0, 1]. If $f^{\pm}(\cdot)$ is a sigmoid, then the $i$-th component of the grey vector state at iteration $t+1$, $c^{\pm}(t+1)$, would be as follows

$$c_{i}^{\pm}(t+1) \in \left[ \left( 1 + e^{-\lambda \cdot \underline{c}_{i}(t)} \right)^{-1}, \left( 1 + e^{-\lambda \cdot \overline{c}_{i}(t)} \right)^{-1} \right] \tag{6}$$

Moreover, the activation function $f^{\pm}(\cdot)$ would be the hyperbolic tangent when the nodes' states map into the range [−1, +1]. It is computed as follows

$$c_{i}^{\pm}(t+1) \in \left[\frac{e^{2\cdot\lambda\cdot\underline{c}_{i}(t)} - 1}{e^{2\cdot\lambda\cdot\underline{c}_{i}(t)} + 1}, \frac{e^{2\cdot\lambda\cdot\overline{c}_{i}(t)} - 1}{e^{2\cdot\lambda\cdot\overline{c}_{i}(t)} + 1}\right] \tag{7}$$
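The hyperbolic tangent of Equation (7) is applied to each endpoint of the grey value in the same endpoint-wise fashion as the sigmoid. A small illustrative helper (our naming, not from the paper):

```python
import math

def grey_tanh(c, lam=1.0):
    """Hyperbolic tangent activation applied to both endpoints of a grey
    value (lo, hi), per Equation (7); maps each endpoint into [-1, +1]."""
    f = lambda x: (math.exp(2.0 * lam * x) - 1.0) / (math.exp(2.0 * lam * x) + 1.0)
    return (f(c[0]), f(c[1]))

# A fully unknown bipolar input [-1, +1] stays an ordered interval
# after activation, symmetric about zero.
lo, hi = grey_tanh((-1.0, 1.0))
```

Because $(e^{2\lambda x} - 1)/(e^{2\lambda x} + 1) = \tanh(\lambda x)$ and tanh is monotonic, the ordering of the endpoints is preserved.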

The nodes' states evolve along the FGCM dynamics, and this could lead to three different scenarios.


FGCMs include greyness as an uncertainty measurement. Higher values of greyness mean that the results have a higher degree of uncertainty. It is computed as follows

$$\phi(c\_i^{\pm}) = \frac{\ell(c\_i^{\pm})}{\ell(\psi)}\tag{8}$$

where $\ell(c_i^{\pm}) = |\overline{c}_i - \underline{c}_i|$ is the absolute length of the grey node state value $c_i^{\pm}$, and $\ell(\psi)$ is the absolute length of the range of the information space, denoted by $\psi$. It is computed as follows

$$\ell(\psi) = \begin{cases} 1 & \text{if } \{c_i^{\pm}, \omega_{ij}^{\pm}\} \subseteq [0, +1] \text{ (unipolar)} \\ 2 & \text{if } \{c_i^{\pm}, \omega_{ij}^{\pm}\} \subseteq [-1, +1] \text{ (bipolar)} \end{cases} \tag{9}$$
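Equations (8) and (9) reduce to a short computation: the interval length of the grey value divided by the length of the information space. The sketch below is an illustrative helper of ours that treats a grey value as a `(lower, upper)` tuple and selects $\psi$ by whether the map is unipolar or bipolar.

```python
def greyness(c, bipolar=False):
    """Greyness per Equations (8)-(9): interval length over the length of
    the information space (1 for unipolar [0, +1], 2 for bipolar [-1, +1])."""
    psi = 2.0 if bipolar else 1.0
    return abs(c[1] - c[0]) / psi

# A crisp (white) value carries no greyness; a totally unknown bipolar
# value spans the whole information space and has greyness 1.
g_white = greyness((0.3, 0.3))
g_black = greyness((-1.0, 1.0), bipolar=True)
```

Greyness is therefore 0 for a fully determined state and 1 for a fully unknown one, matching the interpretation that higher greyness means higher uncertainty in the result.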

#### *3.2. FGCM Advantages over FCM*

FGCMs have several advantages over conventional FCMs [32,33,38,41,46,47]. An FGCM allows us to calculate the desired steady states while managing the uncertainty and hesitation present in the raw data (for instance, due to source noise), both in the causal relationships between nodes and in the states of the initial nodes.

Unlike FCMs, FGCMs weight their states and relationships with grey numbers. In this way, FGCMs are able to model multiple layers of uncertainty in the relationships between concepts.

FGCM is an FCM generalization and it is considered closer to human decision-making than FCM is. It handles the inner hesitancy and uncertainty in complex systems by including greyness in edges and nodes. Indeed, the FGCMs' reasoning process output includes a degree of greyness expressed in grey values representing the certainty of the results.

FGCMs are also able to model more types of relationships than FCMs can. For example, FGCMs can successfully run models with edges whose intensity is only partially known, or even not known at all (e.g., $\omega\_{ij}^{\pm} \in [-1, +1]$).

It is important to note that, even when the dynamics of an FCM end in the same vector state as an FGCM after whitenization, FGCMs still handle the grey uncertainty and inner fuzziness of human emotions better.

#### **4. Proposal**

#### *Methodology*

Figure 2 shows the flowchart of our methodological proposal. The starting point is the input data, of which there are three kinds. First, the environment is a set of variables representing the influence of the surroundings on the affective state. In addition, the mood and the temperament are input data, because each individual has its own mood and temperament, with the differences between them detailed above.

The affective engine is composed of FGCM-based models for building synthetic emotions. The reactions influence the mood. Afterwards, the highest state is selected and the arousal and the valence are computed. The affective state is then derived from arousal and valence. If the system keeps running, the process is executed again. Note that the environment data change over time and therefore have an impact on the affective state.

**Figure 2.** Architecture of the proposal.

#### **5. Experiments and Discussion**

With the intention of testing the proposal, this research analyzes two case studies of an artificial experiment. The objective is the simulation of the emotions of an autonomous system produced by the environmental conditions in a hospital facility.

It should be noted that the objective of the model is not to design a real-world emotional system, but only to test the FGCM approach for simulating the synthetic emotions of people queueing in a theoretical empathic building. For that reason, the authors have designed an FGCM-based emotional model, shown in Figure 3. The concepts in this model are detailed in Table 1. Nodes $c\_1^{\pm}$ and $c\_2^{\pm}$ model arousal and valence, respectively. They are the output concepts because those nodes are used to identify the emotions.

In each case of this experiment, the authors have designed a different initial vector state. In this test case, the initial grey vector state $c\_1^{\pm}(0)$ models the initial grey state values of the events at a given time of the process. As a result of the FGCM dynamics, the final grey vector $c^{\pm}(t)$ models the achieved steady state, i.e., the steady vector in the convergence region. The steady states of nodes $c\_1^{\pm}$ and $c\_2^{\pm}$, their greyness, and the detected emotion are analysed.

Moreover, the authors analyze the FGCM dynamics in both cases with different settings. Each setting is defined by the memory and the slope. If the nodes do not have memory, then the updating equation is Equation (4); if the nodes have memory, it is Equation (5). The activation function is the hyperbolic tangent because the emotional model needs negative values. The slope is the *λ* parameter of Equation (7). Following the literature [48], the slopes applied are 1, 3, and 5.
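Since Equations (4) and (5) are not reproduced in this excerpt, the following Python sketch assumes the standard cognitive-map update on whitenized (crisp) state values: without memory, each node becomes the activation of the weighted sum of its inputs; with memory, the node's own current value is added to that sum. The weight matrix and initial state are illustrative, not the experimental model.

```python
import math

def fcm_step(state, W, lam=1.0, memory=False):
    """One update of a (whitenized) cognitive-map state vector.
    Without memory, the new value of node i is tanh(lam * sum_j w_ji * c_j);
    with memory, the node's own current value is added to the sum
    (the Equation (4)/(5) distinction discussed in the text)."""
    n = len(state)
    new = []
    for i in range(n):
        s = sum(W[j][i] * state[j] for j in range(n))
        if memory:
            s += state[i]
        new.append(math.tanh(lam * s))
    return new

W = [[0.0, 0.6], [0.4, 0.0]]   # illustrative 2-node weight matrix
state = [0.5, -0.2]
for _ in range(50):            # iterate until (near) convergence
    state = fcm_step(state, W, lam=1.0, memory=True)
```

With these weights the iteration settles into a steady state inside (-1, +1), illustrating the convergence-region scenario of the FGCM dynamics.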

**Figure 3.** Fuzzy Cognitive Grey Map (FGCM)-based experimental model.


**Table 1.** FGCM nodes and description.

#### *5.1. Experiment 1*

For the first synthetic case study, the initial grey vector state is shown in Equation (10). Table 2 shows the results of this experiment with the different settings. Figure 4 shows a graphical representation of the emotions achieved with each setting.

$$c\_1^{\pm}(0) = ([0,0], [0,0], [.2,.2], [0,0], [0,0], [.2,.3], [-.2,-.1], [.1,.3], [.3,.4])\tag{10}$$


#### **Table 2.** Results of experiment 1.

Note that *m* means memory, *F* false and *T* true. Higher values of greyness are highlighted.

The achieved emotion with the hyperbolic tangent as activation function is strongly related to the selected setting, especially the memory of the updating function. If the function has no memory (Equation (4)), then the emotion is almost neutral. However, if the function has memory (Equation (5)), then the emotion goes from pleased to happy as the slope increases.

The lowest greyness values for nodes $c\_1^{\pm}$ and $c\_2^{\pm}$ are achieved without memory (Equation (4)) and a slope of 1.0. The highest greyness value for node $c\_1^{\pm}$ is achieved with memory (Equation (5)) and a slope of 3.0, and for node $c\_2^{\pm}$ with memory (Equation (5)) and a slope of 1.0.

#### *5.2. Experiment 2*

For the second synthetic case study, the initial grey vector state is shown in Equation (11). Table 3 shows the results of this experiment with the different settings. Figure 5 shows a graphical representation of the emotions achieved with each setting.

$$c\_2^{\pm}(0) = ([0,0], [0,0], [0,0], [-.7,-.4], [.2,.2], [-.4,0], [0,0], [.6,.7], [-.4,-.1])\tag{11}$$

The values of arousal ($c\_1^{\pm}$) and valence ($c\_2^{\pm}$) with the hyperbolic tangent as activation function identify peaceful and neutral as the achieved emotions. The achieved emotion is strongly related to the selected setting, especially the memory of the updating function. If the updating function has no memory (Equation (4)), then the emotion is mostly neutral. However, if it has memory (Equation (5)), then the emotion is peaceful, increasing in intensity as the slope increases.

The lowest greyness values for nodes $c\_1^{\pm}$ and $c\_2^{\pm}$ are achieved without memory (Equation (4)) and a slope of 1.0. The highest greyness values for both nodes are achieved with memory (Equation (5)) and a slope of 1.0.


**Table 3.** Results of experiment 2.

Note that *m* means memory, *F* false and *T* true. Higher values of greyness are highlighted.

**Figure 4.** Experiment 1.

**Figure 5.** Experiment 2—Hyperbolic tangent.

#### **6. Conclusions**

This paper presents an FGCM-based system for synthetic emotions. An FGCM is a grey graph for modeling causal reasoning in complex problems with high uncertainty. This research shows that it is possible to generate or simulate emotions from sensors' raw data.

Note that this research is not an empirical one. An FGCM-based proposal built on sensors' raw data, concepts, and output nodes is presented. Indeed, the aim is not to model a real-world system; rather, this research proposes an FGCM-based theoretical framework that practitioners can apply to generate or simulate synthetic emotions within their own applications or systems.

The experiments' results show that the outcome of this proposal is strongly related to the setting applied. According to the results, FGCMs with memory nodes are the best option for emotion modeling, and lower slopes yield emotions with less intensity. As a limitation, FGCMs depend strongly on their own settings, and validation is not straightforward.

**Author Contributions:** conceptualization, J.L.S.; data curation, J.L.S.; formal analysis, J.L.S.; funding acquisition, A.R.-C.; investigation, J.L.S.; methodology, J.L.S.; project administration, A.R.-C.; validation, J.L.S. and A.R.-C.; writing—original draft, J.L.S.; writing—review and editing, J.L.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** Ruiz-Celma was funded by the Government of Extremadura and the European Regional Development Fund (Una manera de hacer Europa) through grants GR18137 and IB18008.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** Salmeron would like to thank Tessella (Altran group, part of Capgemini) for their kind support.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


#### *Article*

## **Prediction of Important Factors for Bleeding in Liver Cirrhosis Disease Using Ensemble Data Mining Approach**

**Aleksandar Aleksić <sup>1</sup>, Slobodan Nedeljković <sup>2</sup>, Mihailo Jovanović <sup>3</sup>, Miloš Ranđelović <sup>4</sup>, Marko Vuković <sup>5</sup>, Vladica Stojanović <sup>6</sup>, Radovan Radovanović <sup>7</sup>, Milan Ranđelović <sup>8</sup> and Dragan Ranđelović <sup>1,\*</sup>**


Received: 17 September 2020; Accepted: 13 October 2020; Published: 30 October 2020

**Abstract:** The main motivation for the study presented in this paper is the fact that, owing to the development of improved solutions for predicting the risk of bleeding, and thus a faster and more accurate diagnosis of complications in cirrhotic patients, the mortality of cirrhosis patients caused by variceal bleeding fell at the turn of the 21st century. Because of this, additional research in this field is needed. The objective of this paper is to develop a prediction model that determines the most important factors for bleeding in liver cirrhosis, which is useful for the diagnosis and future treatment of patients. To achieve this goal, the authors propose an ensemble data mining methodology, currently the most modern approach in the field of prediction, which integrates in a new way the two most commonly used prediction techniques: classification preceded by attribute-number reduction, and multiple logistic regression for calibration. The method was evaluated in a study that analyzed the occurrence of variceal bleeding in 96 patients from the Clinical Center of Nis, Serbia, using 29 parameters ranging from clinical data to color Doppler data. The obtained results show that the proposed method, with such a large number and different types of data, demonstrates better characteristics than each individual technique integrated into it.

**Keywords:** ensemble techniques; data mining; classification and discrimination; linear regression; applied mathematics general; prediction theory; theory of mathematical modeling; medical applications

#### **1. Introduction**

Determination of relevant predictors in many fields of human life, including medicine, is an important research challenge. The research described in this paper is motivated, on the one hand, by the fact that liver disease causes about 3.5% of all deaths, a large share of the approximately two million deaths per year worldwide, and that variceal bleeding is the most common complication hindering the successful treatment of liver cirrhosis [1]. On the other hand, the development of improved computer solutions for predicting bleeding factors at the beginning of the 21st century enables a significantly more comprehensive, accurate, and fast diagnosis. Namely, the best way to detect esophageal varices is gastrointestinal endoscopy. However, since less than 50% of cirrhotic patients have varices and endoscopy is an uncomfortable intervention, a noninvasive methodology that first predicts the patients at the highest risk of bleeding and only then applies endoscopy is the right choice [2]. In this way, good prediction indirectly reduces the mortality of cirrhosis patients caused by variceal bleeding, so further research in this area presents itself as a serious challenge [3].

The basic idea of the authors was to apply the concept of a classification algorithm, one of a group of machine learning algorithms, so that a two-class classifier sorts the results into two classes. Each such classification procedure is completely defined by a suitable 2 × 2 confusion matrix containing the numbers of true and false positive and true and false negative classification attempts, and it can be applied to the prediction of significant factors for bleeding in liver cirrhosis. The concepts of diagnostic sensitivity and specificity are commonly used in laboratory medicine [4]. Diagnostic test results are classified as positive or negative, where positive results imply the possibility of illness, whereas negative results indicate a higher probability of absence of the illness. However, most of these tests are conducted by instruments with high but not perfect accuracy, thus introducing certain errors into the diagnosis and causing false positive and false negative results. Diagnostic sensitivity, also known as the true positive rate, represents the ability to detect patients who are actually ill; it is defined as the number of true positives over the total number of ill patients, i.e., the true positives plus the false negatives. Hence, proper detection should yield positive results for ill patients. On the other hand, specificity, also known as the true negative rate, represents the ability to detect healthy patients; it is defined as the number of true negatives over the total number of healthy patients, i.e., the true negatives plus the false positives. Thus, proper determination should also yield a negative result for healthy patients. If a test returned only positive results, its sensitivity would be 100%, but healthy patients would then be falsely identified as ill [5].
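The definitions of sensitivity and specificity above can be checked with a small arithmetic example; the counts below are hypothetical, chosen only for illustration.

```python
# Hypothetical counts for a two-class diagnostic test: 40 ill patients
# (30 true positives, 10 false negatives) and 56 healthy patients
# (50 true negatives, 6 false positives).
tp, fn, tn, fp = 30, 10, 50, 6

sensitivity = tp / (tp + fn)   # true positive rate: 30 / 40 = 0.75
specificity = tn / (tn + fp)   # true negative rate: 50 / 56
```

A test that always returns "positive" would have fn = tn = 0: sensitivity 1.0 but specificity 0.0, the degenerate case described in the paragraph above.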
In the theory of statistics, experiments can be used to confirm hypotheses on differences and relationships between two or more groups of variables (such experiments are called tests), or to determine the influence of variables on dependent variable(s) (such multifactor experiments are called evaluations [6]); one of the latter is applied in the case study presented in this paper. The data mining approach, to which classification methodology belongs, has been widely used in different fields of human life, such as economics [7], justice [8], and medicine [9]. Data mining has also been applied to various problems, especially diagnosis in medicine [10], including the diagnosis of liver cirrhosis [11]. Bioinformatics and data mining have been hot research topics in computational science [12–16]. Data mining is generally a two-stage methodology: the first stage involves the collection and management of a large amount of data, which in the second stage is used to determine patterns and relationships in the collected data using machine learning algorithms [17–20].

It is known that esophageal bleeding is not only the most frequent but also the most severe complication in cirrhotic patients, directly threatening the patient's life [21–24]. Because of this, the main objective of this paper is to analyze as many factors as possible that cause this bleeding; in this study, we consider 29 factors belonging to different types of data, from clinical and biochemical parameters, through endoscopic and ultrasound data, to color Doppler data. In this way, we aimed to be as comprehensive as possible in determining and ranking these factors as risk indicators of variceal bleeding. Consequently, given the high mortality caused by variceal bleeding, assessing the bleeding risk is crucial for proper therapy admission. The case study included 96 cirrhotic patients from the Clinical Center of Nis, Serbia. It examined the risks of initial variceal bleeding in cirrhosis patients, as well as the risks of early and late bleeding recurrence. As the main result of this study, the authors proposed a model that predicts the assessment of

the significance of the individual parameters for variceal bleeding and the survival probability of cirrhotic patients, which, in addition to supporting adequate therapy, is very important for determining patient priority on liver transplant waiting lists. Namely, in the literature and practice concerning the problem of bleeding in liver cirrhosis, there is a gap, from the medical standpoint, between the need to include many different types of parameters and the use of, e.g., uncomfortable endoscopy, which may be cost-ineffective because less than 50% of cirrhosis patients have varices [25]. From the mathematical side, there is a gap between the need to include as many factors as possible when considering bleeding problems in liver cirrhosis, which causes undesirable noise in the data, and the consequent need to reduce their number while maintaining prediction accuracy [26]. For this reason, it is becoming a more common requirement to use as many noninvasive factors as possible, which is usually addressed with data mining techniques. Several articles deal with different data mining techniques for determining risk indicators of various complications of liver cirrhosis [27–29] and of the risk of variceal bleeding, as in [30,31]. Because two main data mining methodologies are used in this paper, a classification technique with feature selection and logistic regression for the prediction of variceal bleeding in cirrhotic patients, it is necessary to present the state of the art of these methodologies for the considered problem. This enabled the authors to produce a new ensemble data mining model whose validity is proven by the results obtained in the case study.
In the literature, we find few papers dealing with machine learning approaches to general complications of liver cirrhosis, e.g., [32,33], or to the prediction of esophageal varices [34–38]; we found different forms of their integration, but not the integration that we propose in this paper.

As the subject of the paper, the authors set out to answer the research question, i.e., to prove the hypothesis, that it is possible to integrate a classification method with attribute reduction and regression into one ensemble method with better characteristics than each of them applied individually. To confirm the hypothesis and answer the research question, the authors used the results obtained by applying their novel model in the case study described in the previous paragraph of this section.

The remainder of this paper is organized as follows. Section 1, the Introduction, briefly explains the motivation for this work, describes the concept, objectives, existing research gap, contribution, and organization of the paper, and reviews the literature dealing with bleeding problems in liver cirrhosis as well as with the application of classification and logistic regression in prediction models. Section 2, Materials and Methods, presents the background that enables the considered problem to be solved and introduces the methodology adopted in the proposed solution. Section 3, Results, presents the results obtained with the proposed methodology in a concrete case study performed at the Clinical Center of Nis, Serbia. In Section 4, Discussion, the authors discuss the possibilities of their proposed approach, especially the clinical interpretation of the results. Concluding remarks are given in Section 5.

#### **2. Materials and Methods**

#### *2.1. Materials*

#### 2.1.1. Determination of Relevant Predictors of Bleeding Problems

The aim of this paper is to apply the integrated data mining methodology to the prediction of risk indicators of variceal bleeding using a comprehensive analysis of different types of clinical, biochemical, endoscopic, ultrasound, and color Doppler data [36]. As mentioned previously, the study included 96 cirrhotic patients. In order to conduct the case study more efficiently, two groups of patients were formed according to whether they had previously bled. The group of patients with episodes of variceal bleeding was divided into two subgroups, namely, patients with and without

endoscopic sclerosis of esophageal varicosity. Clinical and biochemical parameters (Child–Pugh and MELD scores) were analyzed along with endoscopic parameters (size, localization, and varicosity appearance) and ultrasound and color Doppler parameters. Such a large number of factors, 29 in total across 5 different types of parameters, is considered because, given the high mortality rate due to variceal bleeding, a precise risk assessment of bleeding is necessary for the timely implementation of therapeutic interventions, as well as a precise prognosis and survival rate of patients with cirrhosis, which is important for appropriate therapy and for good patient prioritization on the waiting list for liver transplantation.

Benedeto-Stojanov et al. [37] considered the bleeding problem in cirrhotic patients with the aim of evaluating the survival prognosis of patients with liver cirrhosis using the Model for End-stage Liver Disease (MELD) and Child–Pugh scores, and of analyzing the prognostic value of the MELD score in patients with both liver cirrhosis and variceal bleeding. In [38], Benedeto-Stojanov et al. studied variceal bleeding as the most common life-threatening complication in cirrhotic patients, with the aim of analyzing the sources of gastroesophageal bleeding and identifying the risk factors for bleeding from esophageal varices. Durand and Valla [39] introduced the MELD score, originally designed for assessing the prognosis of cirrhotic patients who underwent the transjugular intrahepatic portosystemic shunt (TIPS), and defined it as a continuous score relying on three objective variables. In the case of TIPS, the MELD score has proven to be a robust marker of early mortality across a wide spectrum of causes of cirrhosis, but even so, 10–20% of patients are still misclassified. In [40], the authors described their Rockall risk scoring system for predicting the outcome of upper gastrointestinal (GI) bleeding, including variceal bleeding, with the aim of investigating the mortality rate of first variceal bleeding and the predictive power of each scoring system. Kleber and Sauerbruch [41] studied hemodynamic and endoscopic parameters, liver function, coagulation status, and patient history regarding the incidence of bleeding. The following parameters were found to correlate with an increased risk of bleeding: the first year after the diagnosis of varices, a positive history of variceal bleeding, the presence of varices with large diameters, high blood pressure or a red color sign, concomitant gastric varices, and the development of a liver cell carcinoma.
The authors concluded in [42] that, under MELD score-based allocation, many current transplant recipients show advanced end-stage liver disease with an elevated international normalized ratio (INR).

The relationship between abnormalities in coagulation tests and the risk of bleeding has recently been investigated in patients with liver disease. In [32], it is noted that the risk factors for mortality and rebleeding following acute variceal hemorrhage (AVH) had not been well or completely established, and the authors tried to determine the risk factors for 6-week mortality and for rebleeding within 5 days in cirrhotic patients with AVH.

#### 2.1.2. Methods of Aggregation in Classification and Prediction Models

Boosting, an ensemble algorithm, is one of the most important recent techniques in classification methodology. Boosting sequentially applies a classification algorithm to reweighted versions of the training data and then takes a weighted majority vote of the resulting series of classifiers. Despite its simplicity, this strategy significantly improves the performance of many classification algorithms. For a two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using the maximum Bernoulli likelihood as a criterion [43]. Over the past few years, boosting has emerged as one of the most powerful methods for predictive analytics. Implementations of powerful boosting algorithms [44] can be used for solving regression and classification problems with continuous and/or categorical predictors [45,46]. Finally, the use of predictive analytics with gradient boosting in clinical medicine is discussed in [47].
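The reweighting idea behind boosting can be sketched as follows. This is a generic AdaBoost-style step, not the specific implementation of [44]; the sample weights and the correctness vector of the current classifier are illustrative assumptions.

```python
import math

def adaboost_round(weights, correct):
    """One boosting reweighting step: compute the weighted error of the
    current classifier, its vote weight alpha, and the readjusted
    (normalized) sample weights that emphasize misclassified samples."""
    err = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - err) / err)   # classifier's vote in the majority
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    z = sum(new)                              # renormalize to a distribution
    return alpha, [w / z for w in new]

w0 = [0.25] * 4                # uniform initial weights over 4 samples
# suppose the current classifier misclassifies only the last sample
alpha, w1 = adaboost_round(w0, [True, True, True, False])
```

After the update, the misclassified sample's weight rises (here from 0.25 to 0.5), so the next classifier in the sequence concentrates on it, which is the mechanism behind the performance gains described above.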

Different kinds of such ensemble algorithms can be found in the prediction of the most important factors using other methodologies, as well as aggregation methods in decision-making problems, e.g., [48,49].

In computer science, for example, a logistic model tree (LMT) is a classification model, with an associated supervised training algorithm, in which logistic regression and decision tree learning are combined [50].

#### *2.2. Methods*

#### 2.2.1. Classification Method for Relevant Predictor Determination

Classification is a frequently studied methodology in the field of machine learning. A classification algorithm, as a predictive method, represents a supervised machine learning technique; it implies the existence of a group of labeled instances for each class of objects and predicts the value of a (categorical) attribute (i.e., the class) based on the values of the other attributes, called predicting attributes [51]. The algorithm tries to discover relationships between the attributes in order to achieve accurate prediction of the outcome. The prediction result depends on the input and the discovered relationships between the attributes. Some of the most common classification methods are classification and decision trees (e.g., ID3, C4.5, CART, SPRINT, THAID, and CHAID), Bayesian classifiers (e.g., Naive Bayes and Bayes Net), artificial neural networks (Single-Layer Perceptron, Multilayer Perceptron, Radial Base Function Network, and Support Vector Machine), the k-nearest neighbor classifier (K-NN), regression-based methods (e.g., Linear Regression and Simple Logistic), and classifiers based on association rules (e.g., RIPPER, CN2, Holte's 1R, and C4.5) [52]. Selection of the most appropriate classification algorithm for a given application is one of the crucial points in data mining-based applications and processes.

Consider a classifier that classifies the results into two classes, positive and negative. Then, the possible prediction results are as shown in Table 1.


**Table 1.** The confusion matrix of a two-class classifier.

It should be noted that in Table 1, TP + FN + FP + TN = N, where N is the total number of members of the considered set to be classified. The matrix presented in Table 1 is called a 2 × 2 confusion matrix. As presented in Table 1, there are four results: true positive (TP), false positive (FP), true negative (TN), and false negative (FN). It is important to note that these numbers are counts, i.e., integers, not ratios. Based on the possible results presented in Table 1, for a two-class classifier, the accuracy, precision, recall (sensitivity), and specificity can be calculated, respectively, as:

$$\text{Accuracy} = (\text{TP} + \text{TN}) / \text{N} \tag{1}$$

$$\text{Precision} = \text{TP} / (\text{TP} + \text{FP}) \tag{2}$$

$$\text{Recall}(\text{Sensitivity}) = \text{TP}/(\text{TP} + \text{FN}) \tag{3}$$

$$\text{Specificity} = \text{TN} / (\text{TN} + \text{FP}). \tag{4}$$
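Equations (1)–(4) can be computed directly from the four confusion-matrix counts. The sketch below uses hypothetical counts, chosen only so that they sum to N = 96, matching the size of the study's patient set.

```python
def classifier_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall (sensitivity), and specificity of a
    two-class classifier from its 2x2 confusion matrix, as in
    Equations (1)-(4)."""
    n = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / n,        # Eq. (1)
        "precision": tp / (tp + fp),      # Eq. (2)
        "recall": tp / (tp + fn),         # Eq. (3)
        "specificity": tn / (tn + fp),    # Eq. (4)
    }

# hypothetical confusion matrix with N = 96
m = classifier_metrics(tp=40, fp=10, tn=35, fn=11)
```

Note that recall and specificity are computed over different column totals (all truly positive vs. all truly negative members), which is why a classifier can score well on one and poorly on the other.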

Methods based on Receiver Operating Characteristic (ROC) curves are widely used in the evaluation of the prediction performance of a classifier. These curves represent the rate of false positive cases on the x-axis and the rate of true positive cases on the y-axis [53].

The ROCs of five classifiers denoted A–E are displayed in Figure 1. A discrete classifier outputs only a class label. Such a classifier produces an (*FP\_Rate*, *TP\_Rate*) pair, which corresponds to a single point in the ROC space, where *FP\_Rate* represents the false positive rate and *TP\_Rate* the true positive rate. A binary classifier is thus represented by a point (*FP\_Rate*, *TP\_Rate*) on the graph, as follows [54]:


**Figure 1.** The Receiver Operating Characteristic (ROC) graph of five discrete classifiers.

Generally, in the ROC space, a point is classified more accurately when its true positive rate is higher and its false positive rate is lower. Classifiers appearing on the left-hand side of the ROC graph, near the *y*-axis, are considered conservative: they make positive classifications only on strong evidence, so they make few false positive errors, but they also have a low true positive rate. On the other hand, classifiers on the upper right-hand side of the ROC graph are considered liberal: they make positive classifications on weak evidence, so they classify almost all positives correctly, but they often have a high false positive rate. For instance, in Figure 1, classifier A is more conservative than classifier B.

In the case considered in this paper, decision trees or rule sets only decide which of two classes a sample belongs to. When a discrete classifier is applied to a sample set, it yields a single confusion matrix, which in turn corresponds to one ROC point. Thus, a discrete classifier produces only a single point in ROC space. On the other hand, the output of a Naive Bayes classifier or a neural network is a probability or a score, i.e., a numeric value representing the degree to which a particular instance is a member of a certain class [55].

Many classifiers can yield incorrect results. For instance, logistic regression provides approximately well-calibrated probabilities, whereas the outputs of the Support Vector Machine (SVM) and similar methods have to be converted into reasonable probabilities. Regression analysis establishes a relationship between a dependent (outcome) variable and a set of predictors. Regression, as a data mining technique, belongs to supervised learning. Supervised learning partitions the data into training and validation sets, so the regression model is constructed using only a part of the original data, namely the training data.

The classification performance of a classifier can be evaluated using:


The data are divided into two sets, a training set and a test set. The training set is used to train the selected classification algorithm, and the test set is used to test the trained algorithm. If the classifier classifies most instances of the training set correctly, it is considered able to classify other data correctly as well. However, if many samples are incorrectly classified, the trained model is considered unreliable. In addition to training and testing as a common approach, model validation is most often used [56] to:


In summary, the classification model is defined by its true positive rate, false positive rate, precision, F1 measure, and confusion matrix, which represent the basic parameters for evaluating the precision of the implemented classifier.

#### 2.2.2. Calibration Method

Calibration is applicable when a classifier's output is a probability value. It refers to the adjustment of the posterior probability output by a classification algorithm towards the true prior probability distribution of the target classes. In many studies [57–59], machine learning and statistical models were calibrated to predict, for every given data row, the probability that the outcome is 1. In classification, calibration is used to transform classifier scores into class membership probabilities [11,60]. Univariate calibration methods, such as logistic regression, exist for this transformation in the two-class case [61]. Logistic regression is a statistical method for analyzing a dataset with one or more independent variables that determine an outcome measured by a dichotomous variable with only two possible values: data coded as 1 represent a positive result (TRUE, success, pregnant, etc.), and data coded as 0 a negative result (FALSE, failure, nonpregnant, etc.).

Logistic regression generates the coefficients, and the corresponding standard errors and significance levels, to predict a logit transformation of the probability of presence of a characteristic of interest, which can be expressed as:

$$\text{logit}(p) = b_0 + b_1 X_1 + b_2 X_2 + b_3 X_3 + \dots + b_k X_k \tag{5}$$

where *p* denotes the probability of presence of the characteristic of interest. The logit transformation is defined as the logged odds as follows:

$$\text{odds} = \frac{p}{1 - p} = \frac{\text{probability of presence of characteristics}}{\text{probability of absence of characteristics}} \tag{6}$$

$$\text{logit}(p) = \ln\left(\frac{p}{1 - p}\right). \tag{7}$$

Logistic regression selects parameters that maximize the probability of observing the sample values, instead of selecting parameters that minimize the sum of squared errors (as in ordinary regression). The regression coefficients are b<sub>0</sub>, b<sub>1</sub>, . . . , b<sub>k</sub> in regression Equation (5). In logistic regression, a coefficient indicates the change (an increase when b<sub>i</sub> > 0, or a decrease when b<sub>i</sub> < 0) in the predicted logged odds of the characteristic of interest for a one-unit change in the corresponding independent variable. When independent variables X<sub>a</sub> and X<sub>b</sub> are dichotomous (e.g., smoking and sex), the influence of these variables on the dependent variable can be compared by matching their regression coefficients b<sub>a</sub> and b<sub>b</sub>. By applying the exponential function to both sides of Equation (5), and considering Equations (6) and (7) as well, Equation (7) can be rewritten as the following equation:

$$\text{odds} = \frac{p}{1 - p} = e^{b_0} \times e^{b_1 X_1} \times e^{b_2 X_2} \times e^{b_3 X_3} \times \dots \times e^{b_k X_k}.\tag{8}$$

Thus, according to (8), when a variable *X<sub>i</sub>* increases by one, while all other factors remain unchanged, the odds increase by a factor $e^{b_i}$, which is expressed as:

$$\frac{e^{b_i(1+X_i)}}{e^{b_i X_i}} = e^{b_i(1+X_i) - b_i X_i} = e^{b_i + b_i X_i - b_i X_i} = e^{b_i}.\tag{9}$$

The odds ratio (OR) of an independent variable *X<sub>i</sub>* is denoted by the factor $e^{b_i}$, and it represents the relative amount by which the odds of the outcome increase (OR greater than 1) or decrease (OR less than 1) when the value of the independent variable is increased by one.
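The interpretation in Equation (9) can be checked numerically. The short Python sketch below uses illustrative coefficient values (not taken from the study) and verifies that the ratio of the odds for a one-unit increase in a variable equals $e^{b_1}$ regardless of the starting value.

```python
import math

# Odds ratio interpretation of a logistic coefficient (Eq. (9)):
# increasing X by one unit multiplies the odds by e^{b1}.
def odds(logit):
    return math.exp(logit)

b0, b1 = -1.0, 0.7          # illustrative intercept and coefficient
x = 2.0                     # any starting value of the variable
odds_at_x  = odds(b0 + b1 * x)
odds_at_x1 = odds(b0 + b1 * (x + 1))
odds_ratio = odds_at_x1 / odds_at_x   # equals e^{b1}, independent of x
```

Repeating the computation with a different `x` yields the same `odds_ratio`, which is exactly the cancellation shown in Equation (9).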

#### 2.2.3. Aggregation Method of Boosting using Classification and Calibration

Boosting, an ensemble algorithm that typically combines several different supervised machine learning algorithms, with a minimum of two drawn from decision tree, classification, and regression algorithms, has become one of the most powerful and popular approaches in the knowledge discovery and data mining field. It is commonly applied in science and technology when exploring large and complex data to discover useful patterns, as it allows different ways of modeling knowledge extraction from large data sets [62].

In supervised learning, feature selection is most often viewed as a search problem in a space of feature subsets. To conduct this search, we must determine a starting point, a strategy for traversing the space of subsets, an evaluation function, and a stopping criterion. This formulation allows a variety of solutions to be developed, but usually two method types are considered: filter methods and wrapper methods. Filter methods use an evaluation function that relies solely on data properties and is therefore independent of any particular algorithm, whereas wrapper methods use an inductive algorithm to estimate the value of a given subset. In our approach, these method types are combined: filter (information gain, gain ratio, and four other classifiers) and wrapper (search guided by the accuracy) [63].

As mentioned previously, ROC analysis has been used in medicine, radiology, biometrics, and other areas for many decades, and recently it has been increasingly used in machine learning and data mining research. In this study, the authors used the areas under the ROC curves to identify the classification accuracy of several classifiers, which is most important for the proposed model in order to determine the minimal number of attributes sufficient to reach the maximum value on the ROC curve [64].

In addition, the most popular calibrating methods use isotonic regression to fit a piecewise-constant nondecreasing function. Isotonic regression is a useful nonparametric regression technique for fitting an increasing function to a given dataset. An alternative is to use a parametric model, most commonly univariate logistic regression. The model is defined as:

$$l = \log(p/(1 - p)) = a + bf,\tag{10}$$

where *f* denotes a prediction score and *p* denotes the corresponding estimated probability for predicted binary response variable *y*. Equation (10) shows that the logistic regression model is essentially a linear model with intercept *a* and coefficient *b*, so it can be rewritten as:

$$p = \frac{1}{1 + e^{-(a + bf)}}.\tag{11}$$

Assume *f<sub>i</sub>* is the prediction score on the training set, and let *y<sub>i</sub>* ∈ {0, 1} be the true label of the predicted variable. The parameters *a* and *b* are chosen to minimize the total sum $\sum_i l(p_i, y_i)$.
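A minimal sketch of this univariate logistic calibration in Python is given below. It fits *a* and *b* by plain gradient descent on the total log-loss; actual implementations typically use Newton-type optimizers, and the scores and labels here are made up for illustration.

```python
import math

# Univariate logistic (Platt-style) calibration, Eqs. (10)-(11):
# fit intercept a and slope b by gradient descent on the total log-loss.
def calibrate(scores, labels, lr=0.1, iters=5000):
    a, b = 0.0, 1.0
    n = len(scores)
    for _ in range(iters):
        ga = gb = 0.0
        for f, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a + b * f)))
            ga += p - y            # d(log-loss)/da for this instance
            gb += (p - y) * f      # d(log-loss)/db for this instance
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]   # raw classifier scores (made up)
labels = [0, 0, 0, 1, 1, 1]                  # true binary outcomes
a, b = calibrate(scores, labels)
prob = 1.0 / (1.0 + math.exp(-(a + b * 1.0)))  # calibrated P(y = 1 | f = 1.0)
```

The fitted slope is positive (higher scores map to higher probabilities), and the calibrated probability for a clearly positive score lies well above 0.5.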

For example, paper [35], which deals with predicting the risk of bleeding from varices using 25 attributes, proposes a model aggregating six classification algorithms and six feature selection classifiers (three from the wrapper group and three from the filter group), which gives a better solution than each of the aggregated methods individually.

In this paper, the authors propose a model which:


**Algorithm 1:** Procedure of obtaining significant predictors of bleeding in cirrhotic patients.


The calibration process is given in Algorithm 2.

**Algorithm 2:** The pseudocode of the calibration process using the most significant predictors of bleeding in cirrhotic patients.

```
// Sort the 15 predictive variables, which are of great importance for
// further treatment and prevention of bleeding, by their odds ratio (OR)
repeat
    changed = false
    for i = 1 to (n − 1) inclusive do:
        /* if odds ratio value pair is out of order */
        if OR[i] > OR[i + 1] then
            /* swap attributes in subset A′ and remember something changed */
            swap(A[i], A[i + 1])
            swap(OR[i], OR[i + 1])
            changed = true
        end if
    end for
until not changed
```
The ignored predictive variables are those whose accuracy is less than 0.85.
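For clarity, the sorting pass of Algorithm 2 can be written in Python as follows. The attribute names and OR values below are a small illustrative subset taken from the univariate analysis, and repeated passes are applied until no swap occurs.

```python
# One bubble-sort pass of Algorithm 2: reorder the predictor subset A'
# by the odds ratios OR; repeating until no swap yields the sorted order.
def bubble_pass(attrs, ors):
    changed = False
    for i in range(len(ors) - 1):
        if ors[i] > ors[i + 1]:                      # pair out of order
            attrs[i], attrs[i + 1] = attrs[i + 1], attrs[i]
            ors[i], ors[i + 1] = ors[i + 1], ors[i]
            changed = True
    return changed

# Illustrative subset of four predictors with their odds ratios
attrs = ["A15", "A14", "A17", "A23"]
ors   = [194.997, 24.589, 10.153, 1.562]
while bubble_pass(attrs, ors):
    pass
```

After the passes complete, the predictors are ordered by ascending OR, i.e., `["A23", "A17", "A14", "A15"]` for this subset.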

#### **3. Results**

The study by Benedeto-Stojanov et al. [36] involved 96 subjects: 76 (79.2%) male and 20 (20.8%) female participants. There were 55 patients without bleeding, of whom 44 (80.0%) were male and 11 (20.0%) were female. The group of 41 patients with bleeding included 32 (78.0%) male and 9 (22.0%) female participants. The average age of all patients was (56.99 ± 11.46) years. The youngest and oldest patients were 14 and 80 years old, respectively.

The data used in the study were obtained by the Clinical Center of Nis, Serbia. The original feature vector of patient data consisted of 29 features that were predictive variables. As the thirtieth variable, there was a two-case class variable result (yes/no), which was considered as a dependent variable. All predictive variables and dependent variable are shown in Table 2, where it can be seen that they were of numerical data type.



In the case study, five classification algorithms were implemented for designing prediction models: Naive Bayes, J48 decision trees, HyperPipes, LogitBoost, and PART, chosen as well-known representatives of different groups of classifiers. The training-set evaluation mode was applied for the proposed classification algorithms; the combination of a training set with a separate test set, as well as 10-fold cross-validation, was not used here because of the small number of instances in the case study.

The performance indicators of five classification algorithms are given in Table 3, where it can be seen that the LogitBoost classifier achieved the most accurate prediction results among all the models.

As presented in Table 3, the LogitBoost classifier achieved the F1 measure of 97.9%, accuracy of 98.0% (0.980), and the ROC of 0.999.


**Table 3.** Performance indicators obtained by the classification algorithms.

In Table 4, *CCI* denotes the number of correctly classified inputs, and *ICI* denotes the number of incorrectly classified inputs.

**Table 4.** Accuracy of the LogitBoost algorithm.


The LogitBoost classifier achieved relatively good performance on classification tasks due to the boosting algorithm [65]. The boosting process is based on the principle that finding many rough rules of thumb can be much easier than finding a single, highly accurate prediction rule. This classifier is a general method for improving the accuracy of learning algorithms. In WEKA [66], the LogitBoost classifier is implemented as a class that performs additive logistic regression, carrying out classification using a regression scheme as the base learner; it can also handle multiclass problems.
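To make the boosting idea concrete, the toy Python sketch below performs additive logistic boosting with one-node regression stumps on a made-up one-dimensional dataset. It illustrates the principle of combining many rough rules, not WEKA's exact LogitBoost class.

```python
import math

# Fit a one-node regression stump (decision stump) to the residuals:
# find the threshold split minimizing squared error.
def fit_stump(x, residuals):
    best = None
    for t in sorted(set(x)):
        left  = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

# Additive logistic boosting: each round fits a stump to the gradient
# residuals y - p and adds it (shrunk by lr) to the ensemble.
def boost(x, y, rounds=20, lr=0.5):
    F = [0.0] * len(x)
    stumps = []
    for _ in range(rounds):
        p = [1 / (1 + math.exp(-f)) for f in F]
        residuals = [yi - pi for yi, pi in zip(y, p)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        F = [f + lr * stump(xi) for f, xi in zip(F, x)]
    return lambda xi: 1 / (1 + math.exp(-sum(lr * s(xi) for s in stumps)))

x = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5]   # illustrative feature values
y = [0, 0, 0, 1, 1, 1]               # illustrative binary labels
predict = boost(x, y)
```

Each individual stump is a very weak rule, yet their additive combination separates the two classes cleanly.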

Feature selection is normally realized by searching the space of attribute subsets and evaluating each subset. This is achieved by combining an attribute subset evaluator with a search method. In this paper, five filter feature subset evaluation methods with a rank search or greedy search method were applied to determine the best feature sets, listed as follows:


The feature ranks obtained by the above five methods on the training data are presented in Table 5.


**Table 5.** Results of the five ranking methods (a bigger number marks a higher rank).

The ROC value shows the relationship between sensitivity, which measures the proportion of positives that are correctly identified (TP), and specificity, which measures the proportion of negatives that are correctly identified, both in the executed classification process. The evaluation measures with variations of ROC values were generated with the open source data mining tool WEKA, which offers a comprehensive set of state-of-the-art machine learning algorithms as well as a set of autonomous feature selection and ranking methods. The generated evaluation measures are shown in Figure 2, where the *x*-axis represents the number of features and the *y*-axis represents the ROC value of each feature subset generated by the five filter classifiers. The maximum ROC values of all the algorithms and the corresponding cardinalities illustrated in Figure 2 are given numerically in Table 5. This is quite useful for finding an optimal size of the feature subsets with the highest ROC values. As given in Table 5, the highest ROC values were achieved by the CH and IG classifiers. Although both CH and IG resulted in a ROC value of 0.999, IG/CH could attain the maximum ROC value when the number of attributes reached 15. Thus, it was concluded that IG yields an optimal dimensionality for the dataset of patients.

**Figure 2.** ROC value as a function of attribute number.
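The ROC values plotted in Figure 2 are areas under ROC curves. For a scored feature subset, the AUC can be computed with the standard rank-based (Mann–Whitney) formula, sketched below in Python on made-up scores.

```python
# Area under the ROC curve (AUC) via the Mann-Whitney statistic:
# the fraction of positive/negative pairs ranked correctly,
# counting ties as half a win.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # illustrative classifier scores
labels = [1, 1, 0, 1, 0, 0]               # illustrative true classes
a = auc(scores, labels)
```

Of the nine positive/negative pairs, eight are ranked correctly, so the AUC here is 8/9 ≈ 0.889; an AUC of 0.999, as in Table 5, means almost every such pair is ordered correctly.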

The top ranking features in Table 5 that were obtained by CH and IG classifiers were used for further predictive analysis as significant predictors of bleeding, and they were as follows: (A15)—Red color signs, (A14)—large esophageal varices, (A17)—congestive gastropathy, (A7)—international normalized ratio, (A23)—collateral circulation, (A24)—flow speed in portal vein, (A9)—ascites, (A8)—creatinine, (A6)—prothrombin time, (A29)—MELD score, (A2)—age, (A5)—albumin, (A11)—platelet count/spleen diameter ratio, (A3)—etiology, and (A25)—flow speed in lienal vein.

This study analyzes the risks of initial bleeding of varices in cirrhotic patients, and the risks of early and late bleeding recurrence. The obtained results are important for further treatment and prevention of bleeding from esophageal varices, the most common and life-threatening complication of cirrhosis. Coauthors of this manuscript, Randjelovic and Bogdanovic, also used univariate logistic regression analysis to identify the most significant predictors of bleeding. The results of this analysis, obtained using the same input data, are given in Table 6 [67].


**Table 6.** Odds ratio values for bleeding risk factors (univariate logistic regression).



After conducting the experiment with the real medical data, important predictors of bleeding were determined by performing logistic regression analysis.

Univariate logistic regression analysis indicated the most significant predictors of bleeding in cirrhotic patients: the value of the Child–Pugh/Spleen Diameter ratio, platelet count, as well as the presence of large esophageal varices, red color signs, gastric varices, congestive gastropathy, and collateral circulation. Approximate values of the relative risk (odds ratio, OR) and their 95% confidence intervals were calculated. The statistical significance was estimated by calculating the Wald values for the OR.

The increase in the value of the Child–Pugh/Spleen Diameter ratio by one unit resulted in a reduction in the risk of bleeding by 0.2% (from 0.1% to 0.3%, *p* < 0.05), while an increase in platelet count by 1 × 10<sup>9</sup>/L led to a decrease in the risk of bleeding by 0.8% (from 0.1% to 1.5%, *p* < 0.05). The following factors indicated an increased risk of bleeding: large esophageal varices 24.589 (7.368–82.060, *p* < 0.001), red color signs 194.997 (35.893–1059.356, *p* < 0.001), gastric varices 4.110 (1.187–14.235, *p* < 0.05), congestive gastropathy 10.153 (3.479–29.633, *p* < 0.001), and collateral circulation 1.562 (1.002–2.434, *p* < 0.05).
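Odds ratios and 95% confidence intervals of this kind follow directly from the logistic coefficients and their standard errors, as the Python sketch below shows; the coefficient and standard error used are hypothetical, not the study's fitted values.

```python
import math

# From a logistic coefficient b with standard error se, the odds ratio
# is e^b and the 95% confidence interval is e^(b +/- 1.96*se).
def odds_ratio_ci(b, se, z=1.96):
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

b, se = 3.202, 0.615          # hypothetical coefficient and standard error
or_, lo, hi = odds_ratio_ci(b, se)
```

The interval is asymmetric around the OR because the symmetry holds on the log-odds scale, which is why the reported intervals above (e.g., 7.368–82.060 around 24.589) are skewed.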

Following the univariate logistic regression analysis, the previously acquired set of 15 attributes, with attribute ranks given in columns one (CH) and three (IG) of Table 5, can be calibrated using the OR results given in column three (OR) of Table 6. The calibration process is shown in Table 7. It was carried out as follows: the set of 15 attributes from Table 5 is given in the first row of Table 7; those of the 15 attributes for which OR > 1 in Table 6 are extracted into the second row of Table 7; and those for which OR < 1 in Table 6 are extracted into the third row of Table 7.

According to the results in Table 6, the significant independent (predictive) variables were those of A1–A29 with a *p*-value smaller than 0.05, which significantly influenced the dependent binary variable A30—bleeding.

According to the results in Table 7, there are 15 significant predictors, given in the first row. On the one hand, the second row contains 12 of these 15 predictors with OR greater than one; for these, when the predictive variable increased, the risk that the binary variable, bleeding, would acquire the value Yes also increased.


**Table 7.** Top ranking feature subsets.

On the other hand, the third row contains 3 of these 15 predictors with OR smaller than one; for these, when the predictive variable increased, the risk that the binary variable would acquire the value Yes decreased.

#### **4. Discussion**

In machine learning and statistics, dimensionality reduction is the process of reducing the number of random variables under consideration and can be divided into feature selection and attribute importance determination.

Feature selection approaches [68–70] try to find a subset of the original variables. In this work, two strategies, namely, filter (Chi-square, information gain, gain ratio, relief, and symmetrical uncertainty) and wrapper (search guided by the accuracy) approaches, are combined. The performance of a classification algorithm is commonly examined by evaluating the classification accuracy.

In this study, the ROC curves are used to evaluate the classification accuracy. By analyzing the experimental results presented in Table 3, it can be observed that the LogitBoost classification technique achieved better results than the other techniques under the training mode of evaluation applied to the classification algorithms.

The results of the comparative application of different classifiers conducted in described case study on feature subsets generated by the five different feature selection procedures are shown in Table 5. The LogitBoost classification algorithm is trained using decision stumps (one node decision trees) as weak learners. The IG attribute evaluation can be used to filter features, thus reducing the dimensionality of the feature space [71].

The experimental results presented in Table 8 show that IG feature selection significantly improves all observed performance measures of the LogitBoost classification technique, despite the fact that a decision tree has the inherent ability to focus on relevant features while ignoring irrelevant ones (refer to Section 3).


**Table 8.** Performance of LogitBoost classification before and after information-gain attribute evaluation/Chi-square attribute evaluation (IG/CH) feature selection.

The authors performed 10-fold cross-validation of the proposed model using the Weka software, which confirmed the validity of the proposed model defined by the procedures given in Algorithm 1 and Algorithm 2, as shown in Table 9.


**Table 9.** Performance of LogitBoost before/after IG/CH feature selection using 10-fold cross-validation.

As mentioned in Section 3, the results of univariate regression on the same data set [67] were used for fine calibration in the proposed model. That paper compared the use of classic and gradual regression in predicting the risk of variceal bleeding in cirrhotic patients.

Table 10 shows the results obtained using multivariate gradual regression, which recognizes only two factors as significant for the risk of variceal bleeding; compared with the results of the proposed model, this is evidently a worse outcome in terms of the requested prediction.


**Table 10.** Odds ratio values for bleeding risk factors (multivariate logistic regression).

The regression calibration is a simple way to improve estimation accuracy of the errors-in-variables model [72].

It has been shown that, when the variables are small, regression calibration using response variables outperforms conventional regression calibration.

An expert clinical interpretation of the obtained results for predicting the risk of bleeding in cirrhotic patients can be given using the decision tree diagram with the feature subset of 15 attributes, which is practically equivalent to the set of 29 attributes in the case without feature selection, but more precise and accurate.

The run information part contains general information about the scheme used, the number of instances (96 patients), and the attribute names; the case of 15 attributes is shown in Figure 3, and the case of 29 attributes in Figure 4.

**Figure 3.** The decision tree with feature subsets (using 15 attributes).

**Figure 4.** The decision tree without feature subsets (using 29 attributes).

The output for the predicted variable represented by the decision tree given in Figure 3 can be interpreted using 14 If-Then rules, the first of which is given in Equation (12):

$$\text{If } (A15 \leq 0.0) \text{ and } (A24 \leq 0.12) \text{ then } A30 = \text{No} \ (39,\ 75/96). \tag{12}$$
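Rule (12) can be written as a simple predicate. In the sketch below, `a15` and `a24` stand for the red color signs and portal-vein flow speed attributes, with the thresholds taken from Equation (12).

```python
# Rule (12) from the decision tree in Figure 3, as a predicate:
# a15 = red color signs, a24 = flow speed in the portal vein.
def predicts_no_bleeding(a15, a24):
    return a15 <= 0.0 and a24 <= 0.12
```

A patient with no red color signs and a low portal-vein flow speed is classified as No (no bleeding); any attribute exceeding its threshold sends the instance down another branch of the tree.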

The authors' contribution is demonstrated in the obtained results through the application of the proposed new ensemble boosting data mining model, which integrates a classification algorithm with attribute reduction and a regression method for calibration. The proposed model has better characteristics than each individually applied model, and the authors could not find such a model in the existing literature.

The authors confirmed the originality of the proposed ensemble model by reviewing the state of the art, both generally and especially in liver disease prediction, as given in the introduction of the paper; this can be confirmed by observing the updated state of the art in both disciplines:


An advantage of the proposed model is that it was evaluated on a case study including a large number of different types of factors. Finally, another advantage is that it could be applied worldwide, where it would generate predictions suited to the specificity of each individual locality, so the paper is suitable for broad international interest and application.

In this way, the authors confirmed the hypothesis and answered the research question set in the introduction of this paper, thus contributing to the creation of a tool that can successfully and practically serve to close the identified research gap.

This described study has several limitations that must be addressed:

First, we collected data from only one medical center (as given in [78] as Supplementary Material), which reflects its particularities; the sample would be more representative if it came from many different localities, so that the results could be generalized. Second, we evaluated a small sample of only 96 patients' records, although most of the variables were statistically significant. Third, we have not included all possible factors that could cause bleeding. Finally, we must note that noninvasive markers may be useful only as a first step to identify possible varices in cirrhosis patients and, in this way, to reduce the number of endoscopies.

In further work, the authors plan to test the proposed model on the data set obtained over the last 10 years at the Clinical Center of Nis, Serbia. The authors also intend to include at least two other clinical centers in Serbia or the Western Balkans that are topologically distant and located in areas with different features (hilly versus lowland coastal localities), where the population has other habits and characteristics, and in this way to obtain a larger and more representative sample of cirrhotic patients for evaluating the proposed model. The authors also plan to determine more precisely the type and number of classification algorithms and of feature selection classifiers used in the proposed model. Finally, the proposed model can be suggested for the prediction and monitoring of the risk of bleeding in cirrhotic patients, e.g., by implementing it as a software tool.

#### **5. Conclusions**

Analyzing the significance of the factors that influence an event is a very complex task. Factors can be independent or dependent, quantitative or qualitative, of deterministic or probabilistic nature, and there can be complex relationships between them. Given the importance of determining risk factors for bleeding in cirrhotic patients, and the fact that early prediction of variceal bleeding over the last 20 years has helped reduce this complication, it is clearly necessary to develop an accurate algorithm for selecting the most significant factors of the mentioned problem.

Among all techniques of statistics, operations research, and data mining, in this work, statistical univariate regression and the data mining technique of classification are aggregated to obtain one boosting method, which has better characteristics than each of them individually. Data mining is used to find a subset of the original set of variables. In addition, two strategies, filter (information gain and others) and wrapper (search guided by the accuracy) approaches, are combined. Regression calibration is utilized to improve the estimation performance of the errors-in-variables model. Application of the bleeding risk factors-based univariate regression presented in this paper can help decision-making and the identification of higher risk of bleeding in cirrhotic patients.

The proposed method uses the advantages of the data mining decision tree method to make a good initial classification of the considered predictors, and then univariate regression is utilized for fine calibration of the obtained results, yielding a high-accuracy risk prediction model.

It is evident that the proposed ensemble model can be useful and extensible to other hospitals in the world treating this illness, liver cirrhosis, and its consequences, such as the bleeding of varices studied in this case.

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2227-7390/8/11/1887/s1, Table S1: The data in study described in (Benedeto-Stojanov, 2010) 29 attributes—involved 96 subjects by Clinical Center of Nis, Serbia.

**Author Contributions:** Conceptualization: A.A.; Project administration: S.N.; Validation: M.J.; Writing—original draft: M.V.; Writing—review & editing: M.R. (Miloš Ranđelović); Formal analysis: M.R. (Milan Ranđelović); Software: V.S.; Investigation: R.R.; Supervision, Methodology: D.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**


#### **References**

1. Liu, Y.; Meric, G.; Havulinna, A.S.; Teo, M.S.; Ruuskanen, M.; Sanders, J.; Zhu, Q.; Tripathi, A.; Verspoor, K.; Cheng, S.; et al. Early prediction of liver disease using conventional risk factors and gut microbiome-augmented gradient boosting. *medRxiv* **2020**. [CrossRef]


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Development of a Novel Integrated CCSD-ITARA-MARCOS Decision-Making Approach for Stackers Selection in a Logistics System**

**Alptekin Ulutaş <sup>1</sup> , Darjan Karabasevic 2,\* , Gabrijela Popovic <sup>2</sup> , Dragisa Stanujkic <sup>3</sup> , Phong Thanh Nguyen <sup>4</sup> and Çağatay Karaköy <sup>1</sup>**


Received: 8 September 2020; Accepted: 28 September 2020; Published: 1 October 2020

**Abstract:** The main goal of this paper is to propose a Multiple-Criteria Decision-Making (MCDM) approach that will facilitate decision-making in the field of logistics—i.e., in the selection of the optimal equipment for performing a logistics activity. For defining the objective weights of the criteria, the correlation coefficient and the standard deviation (CCSD method) are applied. Furthermore, for determining the semi-objective weights of the considered criteria, the indifference threshold-based attribute ratio analysis method (ITARA) is used. In this way, by combining these two methods, the weights of the criteria are determined with a higher degree of reliability. For the final ranking of the alternatives, the measurement of alternatives and ranking according to the compromise solution method (MARCOS) is utilized. For demonstrating the applicability of the proposed approach, an illustrative case study pointing to the selection of the best manual stacker for a small warehouse is performed. The final results are compared with the ones obtained using other proven MCDM methods, which confirmed the reliability and stability of the proposed approach. The proposed integrated approach shows itself to be a suitable technique for application in the process of logistics equipment selection, because it defines the most influential criteria and the optimal choice with regard to all of them in a relatively easy and comprehensive way. Additionally, conceiving the determination of the criteria weights as a combination of objective and semi-objective methods enables defining the weights with respect to the attitudes of the involved decision-makers, which finally leads to more reliable results.

**Keywords:** MCDM; the CCSD method; the ITARA method; the MARCOS method; stackers; logistics

#### **1. Introduction**

Logistics has long been considered a key factor in economic development, spatial integration, and market integration in the developed world [1]. During the 1960s, logistics as a concept of the integration of the process of the distribution of goods gained its place in the theory and practice of business management. Within the logistics sector, there are three basic approaches: physical distribution management, materials management, and business logistics. The important issue that logistics is faced with is certainly the question of the selection of adequate equipment for dealing with material resources.

The efficiency of the performance of logistics activities strongly depends on the use of optimal equipment in the warehouse or for the transportation of the goods. Bad choices could lead to the damage or contamination of goods, delays in delivery, and an increase in costs [2]. Furthermore, the selection of equipment directly influences the performance of the company, so this kind of decision could be considered as strategic and of great importance [3]. In the case of manufacturing equipment selection, the selection of the equipment needed for performing logistics activities requires defining the crucial features of the equipment, comparing them with the equipment offered on the market, and selecting the most suitable one [4]. The costs are considered as the most influential criterion in equipment selection, but they could not be treated as the only one.

Decisions regarding equipment purchasing affect various criteria that are often mutually opposing. Besides this, making a decision based on only one or a few criteria as well as making a decision based on previous experience and intuition will not lead to a reasonable decision. The use of techniques based on mathematics and statistics increases the reliability of the decision and contributes to the assurance of the selection that is made. The utilization of the Multiple-Criteria Decision-Making (MCDM) method could be a suitable means for the facilitation of a decision process regarding logistics equipment selection.

Recently, the field of Multi-Criteria Decision-Making (MCDM) has been rapidly evolving, thanks to the large number of scientific publications dealing with the adoption of individual decisions based on employed techniques and methods that belong to the specified domain [5]. MCDM is quite a suitable tool for solving complex decision-making problems because of its ability to evaluate different alternatives using a specific set of criteria [6].

The main aim of this paper is to develop a novel integrated MCDM-based approach for equipment selection in a logistics system. The correlation coefficients (CC) and standard deviations (SD), i.e., the CCSD method [7], will be applied for determining the objective weights of the criteria. Besides that, the indifference threshold-based attribute ratio analysis method (ITARA) [8], as a semi-objective method, will also be applied for determining the weights of the criteria. Therefore, the weights of the criteria will be determined by applying a combined CCSD-ITARA approach in order to make an objective determination of criteria significance in which the subjectivity, i.e., the perspective of the decision-makers, is included to a moderate degree. When it comes to the ranking of the alternatives, the measurement of alternatives and ranking according to the compromise solution method (MARCOS) [9] will be applied. The applicability of the proposed approach will be demonstrated through an illustrative case study pointing to the selection of a suitable type of stacker for purchasing. The proposed approach enables the facilitation of the selection process regarding the purchasing of logistics equipment, which is a manual stacker in the considered case. Thus, practitioners can observe all the involved criteria and, based on them, select the most appropriate alternative. Scientifically, the proposed combination of methods is completely new, and its possibilities have not been fully tested yet. In this case, it is used for the facilitation of decision and selection processes in the logistics field, but its potential could be further explored in other areas as well.

The rest of the paper is organized as follows: In Section 1, introductory considerations are given. A literature review is presented in Section 2. Section 3 demonstrates the methodology. An illustrative case study is described in Section 4. Finally, at the end of the manuscript the conclusions are given.

#### **2. Literature Review**

Decision-making is a process as old as humanity itself. Every day, each of us makes a large number of decisions. However, one of the problems that arises is choosing, from the multitude of possible solutions, the one that achieves the desired goal to the greatest degree, taking into account the objective limitations that, to a greater or lesser extent, constrain our freedom of judgment [10–12]. As can be inferred, the decision process involves the synergy of the human factor, mathematical methods, and IT tools [13]. In every study on the issue of decision-making, attention is focused on three general concepts, namely the decision-making process, the decision-maker, and the decision itself, with the constant attempt to find a suitable way to make an appropriate decision. Intending to facilitate the decision process, scholars have proposed various methods that belong to the MCDM field.

MCDM has been developed as an integral part of operational research in order to create mathematical tools that support the subjective evaluation of criteria by decision-makers [14,15]. MCDM is designed to facilitate the selection of the most desirable alternative, the classification of alternatives into a smaller number of categories, and the ranking of alternatives according to subjective requirements [16,17]. As already mentioned, there is a whole range of MCDM techniques that have been applied to solving different types of complex problems. Each of the developed MCDM methods has its advantages, disadvantages, and limitations, and an adequate technique must therefore be chosen according to the problem being solved [18–20].

Thus, MCDM considers situations in which the decision-maker must choose one of the alternatives from a set of available alternatives, which are judged based on several often-conflicting criteria [17,20]. The remarkably extensive development of the field of decision-making theory over the past few decades certainly has contributed to the presence of a multitude of MCDM methods. Perhaps the best-known and most widely applied MCDM methods are: simple additive weighting (SAW) [21]; the analytic hierarchy process (AHP) [22]; the analytic network process (ANP) [23]; élimination et choix traduisant la réalité (ELECTRE) [24]; the preference ranking organization method for enrichment evaluation (PROMETHEE) [25]; the technique for order preference by similarity to ideal solution (TOPSIS) [26]; Višekriterijumska optimizacija i kompromisno rešenje (VIKOR) [27]; the complex proportional assessment of alternatives (COPRAS) [28]; and so forth.

In order to cope with a wider spectrum of problems, there is a new generation of newly developed MCDM methods and MCDM-based approaches, such as a new additive ratio assessment method (ARAS) [29]; multi-objective optimization on the basis of the ratio analysis method (MOORA) [30]; multi-objective optimization by ratio analysis plus full multiplicative form (MULTIMOORA) [31]; the step-wise weight assessment ratio analysis method (SWARA) [32]; the pivot pair-wise relative criteria importance assessment method (PIPRECIA) [33]; the multi-attributive ideal-real comparative analysis method (MAIRCA) [34]; the full consistency method (FUCOM) [35]; the evaluation based on distance from the average solution method (EDAS) [36]; a combined compromise solution method (CoCoSo) [37]; and so on. It is important to note that some of the aforementioned methods are used for weight determination and some of them for the ranking of alternatives.

Until now, MCDM methods have been used in the logistics field to contribute to and simplify the decision process regarding various issues. A very popular theme that has occupied scientific attention is certainly the question of reverse logistics [38–40]. Furthermore, authors have examined the problem of selecting a logistics center or warehouse location [41,42]. The issue of humanitarian logistics has been addressed by applying different MCDM techniques as well [43,44]. The selection of partners suitable for performing logistics activities has also been carried out by applying MCDM methods [45,46].

The topic of equipment selection for material handling is also present in the works of various authors. For example, Mathew and Sahu [47] used four methods (CODAS, EDAS, MOORA, and WASPAS) for resolving the problem of equipment selection. Other authors based the selection of equipment on fuzzy axiomatic design principles [48], and suitable equipment has been selected in a fuzzy environment as well [49]. Saputro and Rouyendegh [50] used the TOPSIS and MOMILP methods to find the best solution regarding warehouse equipment. It can be concluded that there is ample room for studying the selection of appropriate equipment for a logistics center. With that aim, in this paper an integrated approach based on the CCSD, ITARA, and MARCOS methods is proposed. The main reason for involving the CCSD and ITARA methods in the procedure of determining the criteria weights is that they enable the criteria weights to be defined in an objective way while incorporating a hint of the subjectivity of the decision-maker. In some cases, it is necessary to incorporate the requirements of the decision-maker to a suitable extent, because the decision-maker knows what his/her possibilities and requests are. The MARCOS method, which is utilized for the final ranking of the considered alternatives, is a relatively recently proposed method whose possibilities have not yet been completely examined. The mentioned method enables the selection of a compromise solution that is optimal for the present conditions and fulfills all the given criteria to a satisfying degree.

#### **3. Methodology**

In this study, an integrated model including the CCSD, ITARA, and MARCOS methods is applied to determine the best stacker (Figure 1).

**Figure 1.** The computational procedure of the integrated CCSD-ITARA-MARCOS approach.

The CCSD and ITARA methods are used to determine weights of the criteria, whereas the MARCOS method is used to rank the alternatives—i.e., in our case, stackers—and to select the best one.

#### *3.1. The CCSD Method*

The CCSD method was developed by Wang and Luo [7]. It is an objective weighting method. To date, the CCSD method has been used for solving a variety of problems, such as problems in the supply chain [51,52], technological forecasting [53], financial performance evaluation [54], environmental issues [55], and so forth.

The steps of this method are as follows [7,53]:

Step 1: A decision matrix (*G*) is constructed. This matrix includes *m* alternatives, *B*1, . . . , *Bm*, evaluated with respect to *n* criteria, *T*1, . . . , *Tn*.

$$G = \left[ g_{ij} \right]_{m \times n}. \tag{1}$$

In Equation (1), *gij* denotes the performance of the *i*th alternative on the *j*th criterion.

Step 2: This matrix is normalized using Equation (2) (for beneficial criteria) and Equation (3) (for cost criteria).

$$h_{ij} = \frac{g_{ij} - \min\{g_{ij}\}}{\max\{g_{ij}\} - \min\{g_{ij}\}},\tag{2}$$

$$h_{ij} = \frac{\max\{g_{ij}\} - g_{ij}}{\max\{g_{ij}\} - \min\{g_{ij}\}}.\tag{3}$$
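Equations (2) and (3) are the standard min-max normalization. A minimal NumPy sketch (the matrix values are illustrative, not taken from the case study):

```python
import numpy as np

def normalize(G, beneficial):
    """Min-max normalize a decision matrix per Eqs. (2)-(3):
    beneficial criteria via Eq. (2), cost criteria via Eq. (3)."""
    G = np.asarray(G, dtype=float)
    lo, hi = G.min(axis=0), G.max(axis=0)
    return np.where(beneficial, (G - lo) / (hi - lo), (hi - G) / (hi - lo))

# 3 alternatives, 2 criteria: column 0 is a benefit, column 1 is a cost
H = normalize([[4, 20], [6, 10], [8, 15]], [True, False])
```

After normalization, every column lies in [0, 1], with 1 always denoting the most desirable value, regardless of the criterion's orientation.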

Step 3: The criterion *Tj* is removed in order to assess its impact on decision-making. Without criterion *Tj*, the overall performance value of each alternative is computed using Equation (4) [56].

$$d\_{ij} = \sum\_{k=1,\ k \neq j}^{n} h\_{ik} w\_k. \tag{4}$$

In Equation (4), *wk* denotes the weight of the *k*th criterion, calculated using some method for subjective criteria weight determination, such as the AHP, SWARA, or PIPRECIA methods.

Step 4: The correlation coefficient (*R<sup>j</sup>* ) between *T<sup>j</sup>* criterion's value and *dij* is computed using Equation (5).

$$R\_j = \frac{\sum\_{i=1}^{m} \left( h\_{ij} - \overline{h}\_j \right) \left( d\_{ij} - \overline{d}\_j \right)}{\sqrt{\sum\_{i=1}^{m} \left( h\_{ij} - \overline{h}\_j \right)^2 \sum\_{i=1}^{m} \left( d\_{ij} - \overline{d}\_j \right)^2}},\tag{5}$$

where:

$$
\overline{h}_j = \frac{\sum_{i=1}^{m} h_{ij}}{m},\tag{6}
$$

$$
\overline{d}_j = \frac{\sum_{i=1}^{m} d_{ij}}{m}.\tag{7}
$$
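Equations (4)–(7) amount to a leave-one-out Pearson correlation. A sketch with NumPy, using the small normalized matrix from above (the equal weights passed in are an illustrative starting choice, not prescribed by the method):

```python
import numpy as np

def ccsd_correlations(H, w):
    """For each criterion j: d_i = sum_{k != j} h_ik * w_k (Eq. 4), then
    the Pearson correlation R_j between column j of H and d (Eqs. 5-7)."""
    H = np.asarray(H, dtype=float)
    w = np.asarray(w, dtype=float)
    n = H.shape[1]
    R = np.empty(n)
    for j in range(n):
        d = H @ w - H[:, j] * w[j]        # drop criterion j's contribution
        R[j] = np.corrcoef(H[:, j], d)[0, 1]
    return R

R = ccsd_correlations([[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]], [0.5, 0.5])
```

A low *Rj* means that removing criterion *Tj* noticeably changes the overall assessment, so that criterion should receive a larger weight in model (8).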

Step 5: In order to determine the objective weights (*wjC*) of criteria, a non-linear optimization model is written as:

$$\begin{array}{c} \text{Minimize } I = \sum_{j=1}^{n} \left( w_{jC} - \frac{\sigma_j \sqrt{1 - R_j}}{\sum_{k=1}^{n} \sigma_k \sqrt{1 - R_k}} \right)^{2}, \\ \text{s.t. } \sum_{j=1}^{n} w_{jC} = 1. \end{array} \tag{8}$$

In Equation (8), σ*<sup>j</sup>* denotes *T<sup>j</sup>* criterion's standard deviation, and it can be calculated using Equation (9).

$$
\sigma_j = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( h_{ij} - \overline{h}_j \right)^2}.\tag{9}
$$

The non-linear model indicated in Equation (8) can be solved using MS Excel Solver (Microsoft Corp., Redmond, WA, USA), Lingo 16 (Lindo Systems, Chicago, IL, USA), or MATLAB (The MathWorks, Inc., Natick, MA, USA).
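Instead of a general-purpose solver, the model can also be approached by simple fixed-point iteration: since *Rj* depends on the weights through Equation (4), one can repeatedly apply the closed-form minimizer of model (8) until the weights stabilize. The sketch below assumes this iterative reading and equal starting weights; both are implementation choices, not part of the method's formal statement:

```python
import numpy as np

def ccsd_weights(H, tol=1e-10, max_iter=500):
    """CCSD Steps 3-5 on a normalized matrix H (m x n): iterate
    w_j <- sigma_j*sqrt(1-R_j) / sum_k sigma_k*sqrt(1-R_k) (Eq. 8)."""
    H = np.asarray(H, dtype=float)
    n = H.shape[1]
    w = np.full(n, 1.0 / n)               # neutral starting weights
    sigma = H.std(axis=0)                 # Eq. (9): population std (1/m)
    for _ in range(max_iter):
        R = np.empty(n)
        for j in range(n):
            d = H @ w - H[:, j] * w[j]    # Eq. (4): leave criterion j out
            R[j] = np.corrcoef(H[:, j], d)[0, 1]
        target = sigma * np.sqrt(np.clip(1.0 - R, 0.0, None))
        w_new = target / target.sum()     # minimizer of Eq. (8)
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w_new

w_ccsd = ccsd_weights([[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]])
```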

#### *3.2. The ITARA Method*

The ITARA method was recently developed by Hatefi [8] and is a semi-objective method for determining the weights of criteria. The steps of the ITARA method are as follows [8]:

Step 1: A decision matrix (*G*) is constructed. This matrix was indicated in Equation (1).

Step 2: Normalized values and *NIT<sup>j</sup>* (Normalized Indifference Threshold) are obtained using Equations (10) and (11), respectively.

$$e_{ij} = \frac{g_{ij}}{\sum_{i=1}^{m} g_{ij}},\tag{10}$$

$$NIT_j = \frac{IT_j}{\sum_{i=1}^{m} g_{ij}}.\tag{11}$$

In Equation (11), *IT<sup>j</sup>* denotes the Indifference Threshold of the *j*th criterion.

Step 3: The normalized values of each criterion are sorted in ascending order and denoted ρ*ij*, such that ρ*ij* ≤ ρ*i*+1,*j*.

Step 4: The distance (γ*ij*) between ρ*i*+1,*<sup>j</sup>* and ρ*ij* is computed as follows.

$$
\gamma_{ij} = \rho_{i+1,j} - \rho_{ij}. \tag{12}
$$

Step 5: The difference (ε*ij*) between γ*ij* and *NIT<sup>j</sup>* is calculated as follows.

$$
\varepsilon_{ij} = \begin{cases} \gamma_{ij} - NIT_j & \text{for } \gamma_{ij} > NIT_j, \\ 0 & \text{for } \gamma_{ij} \le NIT_j, \end{cases} \quad \forall i \in M,\ \forall j \in N.\tag{13}
$$

Step 6: The weights of the criteria (*wjI*) are computed as follows.

$$w_{jI} = \frac{w_j}{\sum_{j=1}^{n} w_j},\tag{14}$$

where:

$$w_j = \left(\sum_{i=1}^{m-1} \varepsilon_{ij}^{\,p}\right)^{1/p},\tag{15}$$

where *p* is the norm parameter of the method.
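Steps 2–6 of ITARA can be condensed into a few vectorized lines. In this sketch the decision matrix, the thresholds *ITj*, and the choice *p* = 2 are all illustrative assumptions:

```python
import numpy as np

def itara_weights(G, IT, p=2):
    """ITARA Steps 2-6. `p` is the norm order in Eq. (15); p=2 here is
    an assumed choice, not prescribed by the text."""
    G = np.asarray(G, dtype=float)
    col_sum = G.sum(axis=0)
    E = G / col_sum                            # Eq. (10)
    NIT = np.asarray(IT, dtype=float) / col_sum  # Eq. (11)
    rho = np.sort(E, axis=0)                   # Step 3: ascending per criterion
    gamma = np.diff(rho, axis=0)               # Eq. (12): consecutive gaps
    eps = np.maximum(gamma - NIT, 0.0)         # Eq. (13): only gaps above NIT count
    w = (eps ** p).sum(axis=0) ** (1.0 / p)    # Eq. (15)
    return w / w.sum()                         # Eq. (14)

w_itara = itara_weights([[4, 20], [6, 10], [8, 15]], IT=[1, 2])
```

Intuitively, a criterion whose values are spread out by more than the indifference threshold discriminates well between alternatives and therefore earns a larger weight.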

The CCSD and ITARA weights are then combined into the final weights (*wjCO*) using Equation (16) [57]:

$$w_{jCO} = \frac{w_{jC}\, w_{jI}}{\sum_{j=1}^{n} w_{jC}\, w_{jI}}.\tag{16}$$
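Assuming the combined weight is the normalized product of the CCSD and ITARA weights (a common combination rule; the exact form of the combination equation should be checked against [57]), the computation is a one-liner:

```python
import numpy as np

def combine_weights(w_c, w_i):
    """Combine CCSD and ITARA weights multiplicatively and renormalize
    (assumed reading of the combination rule)."""
    prod = np.asarray(w_c, dtype=float) * np.asarray(w_i, dtype=float)
    return prod / prod.sum()

w_co = combine_weights([0.4, 0.6], [0.7, 0.3])   # illustrative inputs
```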

#### *3.3. The MARCOS Method*

The MARCOS method was developed by Stević et al. [9]. Although the method is new, it has already been applied to solving different decision-making problems, such as the assessment of project management software [58], supplier selection [59], the evaluation of human resources [60], road traffic analysis [61], and so on.

In our study, the MARCOS method is used to rank stackers and to determine the best one. The steps of this method are as follows [9]:

Step 1: The decision matrix is constructed. This matrix was indicated in Equation (1).

Step 2: An extended decision matrix (*U*) is formed.

$$U = \begin{bmatrix} g_{aa1} & g_{aa2} & \dots & g_{aan} \\ g_{11} & g_{12} & \dots & g_{1n} \\ g_{21} & g_{22} & \dots & g_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ g_{m1} & g_{m2} & \dots & g_{mn} \\ g_{a1} & g_{a2} & \dots & g_{an} \end{bmatrix}, \tag{17}$$

where the columns correspond to the criteria *T*1, . . . , *Tn*, the first row to the anti-ideal solution (*AAI*), and the last row to the ideal solution (*AI*).

While the ideal solution (*AI*) is the best alternative, the anti-ideal solution (*AAI*) is the worst alternative. These values are computed as follows.

$$AI = \max\{g_{ij}\} \text{ if } j \in BN \text{ and } AI = \min\{g_{ij}\} \text{ if } j \in CS,\tag{18}$$

$$AAI = \min\{g_{ij}\} \text{ if } j \in BN \text{ and } AAI = \max\{g_{ij}\} \text{ if } j \in CS.\tag{19}$$

In Equations (18) and (19), *BN* denotes the set of beneficial criteria and *CS* the set of cost criteria.

Step 3: The extended decision matrix is normalized using Equations (20) and (21).

$$y_{ij} = \frac{g_{aj}}{g_{ij}} \text{ if } j \in CS,\tag{20}$$

$$y_{ij} = \frac{g_{ij}}{g_{aj}} \text{ if } j \in BN.\tag{21}$$

In Equations (20) and (21), *yij* is an element of the normalized matrix (*Y* = [*yij*]*m*×*n*) and *gaj* is the element of the ideal solution row for the *j*th criterion.

Step 4: The normalized values are multiplied by the weights (*wjCO*) of the criteria using Equation (22) to obtain the weighted matrix (*C* = [*cij*]*m*×*n*).

$$c_{ij} = y_{ij} \times w_{jCO}.\tag{22}$$

Step 5: The utility degrees (*Z<sup>i</sup>* ) of the alternatives are computed concerning the anti-ideal and ideal solution, respectively.

$$Z_i^- = \frac{S_i}{S_{aai}},\tag{23}$$

$$Z_i^+ = \frac{S_i}{S_{ai}},\tag{24}$$

where:

$$S_i = \sum_{j=1}^{n} c_{ij}. \tag{25}$$

Step 6: The utility functions (*f*(*Zi*)) of the alternatives are determined using Equation (26).

$$f(Z_i) = \frac{Z_i^+ + Z_i^-}{1 + \frac{1 - f(Z_i^+)}{f(Z_i^+)} + \frac{1 - f(Z_i^-)}{f(Z_i^-)}},\tag{26}$$

where:

$$f(Z_i^-) = \frac{Z_i^+}{Z_i^+ + Z_i^-},\tag{27}$$

$$f(Z_i^+) = \frac{Z_i^-}{Z_i^+ + Z_i^-}.\tag{28}$$

In Equation (27), *f*(*Zi*−) denotes the utility function with respect to the anti-ideal solution, while in Equation (28), *f*(*Zi*+) denotes the utility function with respect to the ideal solution.

Step 7: The alternatives are ranked with respect to the final utility function. The alternative with the highest final utility function is determined as the best one.
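Steps 2–7 can be sketched end to end in NumPy. The decision matrix, weights, and criterion orientations below are illustrative, not the case-study data:

```python
import numpy as np

def marcos(G, w, beneficial):
    """MARCOS Steps 2-7 (Eqs. 17-28) for a decision matrix G (m x n),
    combined weights w, and a boolean mask of beneficial criteria."""
    G = np.asarray(G, dtype=float)
    w = np.asarray(w, dtype=float)
    ben = np.asarray(beneficial, dtype=bool)
    ai = np.where(ben, G.max(axis=0), G.min(axis=0))    # Eq. (18)
    aai = np.where(ben, G.min(axis=0), G.max(axis=0))   # Eq. (19)
    U = np.vstack([aai, G, ai])                          # Eq. (17)
    Y = np.where(ben, U / ai, ai / U)                    # Eqs. (20)-(21)
    C = Y * w                                            # Eq. (22)
    S = C.sum(axis=1)                                    # Eq. (25)
    z_minus = S[1:-1] / S[0]                             # Eq. (23)
    z_plus = S[1:-1] / S[-1]                             # Eq. (24)
    f_minus = z_plus / (z_plus + z_minus)                # Eq. (27)
    f_plus = z_minus / (z_plus + z_minus)                # Eq. (28)
    return (z_plus + z_minus) / (1.0 + (1.0 - f_plus) / f_plus
                                 + (1.0 - f_minus) / f_minus)  # Eq. (26)

scores = marcos([[4, 20], [6, 10], [8, 15]], [0.5, 0.5], [True, False])
best = int(np.argmax(scores))   # index of the highest-utility alternative
```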

#### **4. An Illustrative Case Study**

In this study, the best manual stacker will be selected for small warehouses. For this, two logistics experts were asked to identify suitable alternatives for small warehouses and evaluation criteria. The logistics experts identified eight alternatives and five criteria, which are the Price of Stacker (PS) (USD), Capacity (CPC) (kg), Lift Height (LH) (mm), Warranty Period (WRP) (Month), and Fork Length (FL) (mm). All the data are obtained from websites selling stackers. Table 1 indicates the decision matrix.


**Table 1.** Decision matrix.

First of all, the CCSD method is applied to the above matrix to determine the objective weights of the criteria. The results of the CCSD are indicated in Table 2.


Then, the value of *IT<sup>j</sup>* for each criterion is determined by the experts. These values are indicated in Table 3.


The steps of the ITARA method are applied to the decision matrix to achieve the weights of the criteria. The results of the ITARA method, the results of the CCSD, and the combined weights of criteria are indicated in Table 4.

**Table 4.** The results of the ITARA, CCSD, and combined weights.


The combined weights are transferred to the MARCOS method. Then, the extended decision matrix is formed using step 2 of the MARCOS method. Table 5 indicates the extended decision matrix.


**Table 5.** The extended decision matrix.

Then, the extended decision matrix is normalized using Equations (20) and (21). Table 6 presents the normalized matrix.


**Table 6.** The normalized matrix.

Then, the normalized values are multiplied by weights (*wjCO*) of the criteria using Equation (22) to determine the weighted matrix. Table 7 indicates the weighted matrix.

**Table 7.** The weighted matrix.


Using Equations (23)–(28), the results of the MARCOS method are obtained. The results of the MARCOS method and the rankings of stackers are indicated in Table 8.


**Table 8.** The results of the MARCOS method.

According to the results of the MARCOS method, the ranking of the stackers is as follows: Stc8, Stc3, Stc7, Stc4, Stc5, Stc1, Stc2, and Stc6. As can be seen from the input data presented in Table 1, the parameters of Stc8 are consistently medium to high, which supports this choice as a compromise, optimal solution.

In order to confirm the stability and reliability of the proposed model, the gained results are compared with the results obtained using the following MCDM methods: the weighted aggregated sum product assessment (WASPAS) method [62], additive ratio assessment (ARAS) [29], and grey relational analysis (GRA) [63]. The comparison of the gained ranking orders of the alternatives is shown in Figure 2.

**Figure 2.** Testing the stability of the proposed approach.

#### **5. Discussion and Conclusions**

The selection of equipment for handling materials during the logistics process is a very important task for decision-makers because this choice has a significant impact on the future operation of a logistics center. This kind of decision could be treated as strategic because the selected type of equipment could contribute to decreasing costs, shortening the time needed for performing an activity, and providing higher security for goods and products. All of the points mentioned lead to the conclusion that these decisions require an analytical approach that involves all the criteria that are important for performing the evaluation process. For that matter, MCDM methods could be a suitable and useful tool that facilitates the decision process and enables a proper decision for the given conditions. To facilitate the decision-making process regarding equipment selection in the logistics field, in this paper we proposed the application of a novel integrated CCSD-ITARA-MARCOS MCDM model. The usefulness of this integrated model is demonstrated through an illustrative case study concerning the selection of an appropriate manual stacker. Additionally, two domain experts were involved in the evaluation process, identifying suitable alternatives, i.e., manual stackers for small warehouses, according to the given set of evaluation criteria.

In the conducted case study, the weights of the evaluation criteria were determined by applying and combining the CCSD method and the ITARA method. Both methods are convenient and easy to apply for determining weights. The main difference between them is their orientation: the CCSD method is objective, whereas the ITARA method is semi-objective. Additionally, the CCSD method does not require a specific normalization method and can include more data on criteria weights [7], whereas the ITARA method belongs to a group of methods based on measuring data dispersion. The reason for employing objective and semi-objective methods for determining the criteria weights is that subjective methods often lead to a decrease in the accuracy of evaluation as the number of criteria increases [8]. The main advantage of this combination is that the standpoint of the decision-maker is appreciated to a certain degree. Namely, every decision-maker has a particular attitude regarding the criteria, meaning that something is more important for one person than for another. If the significance of criteria is determined only on an objective basis, this individual dimension is lost. In this case, combining the objective and semi-objective methods for obtaining criteria significance reflects the intention to preserve the objectiveness of evaluation while acknowledging the preferences of decision-makers, without disturbing the reliability of the criteria significance determination.

The final ranking order is obtained by utilizing the newly developed MARCOS method, which is primarily based on testing the reference values of alternatives relative to ideal values [9]. Thus, the method emphasizes the alternative that represents a compromise solution with respect to the given requirements. The final evaluation and ranking order are strongly influenced by the determination of criteria significance. In the present case, as previously stated, the significance of the criteria was determined very thoughtfully and carefully in order to obtain the most reliable results. The MARCOS method is undeniably easy to use and facilitates the decision process, and in combination with the CCSD and ITARA methods applied for determining the importance of the considered criteria, the reliability of the performed evaluation and the obtained ranking order increases.

Following the results of the applied integrated model, the stacker designated as Stc8 is the best in terms of the evaluated criteria. With the aim of testing the proposed approach based on the mentioned MCDM methods, the obtained results are compared with the results determined using the WASPAS, ARAS, and GRA methods. The computing procedures of all three methods use the same criteria weights, obtained by applying the CCSD-ITARA combination. In all observations, the stacker Stc8 is in first place and represents the best choice for the given conditions. Besides this, the stacker Stc3 is in second place, except in the case of ARAS, where it is in third place. Thus, the reliability and stability of the proposed approach are confirmed.

The proposed integrated CCSD-ITARA-MARCOS model proved to be very successful in solving a problem in a logistics system, i.e., a stacker selection problem. The use of the CCSD-ITARA-MARCOS model is beneficial because it is comprehensive and empowers us to make confident judgments. However, the applicability of the proposed model should not be limited only to the logistics field. Its potential and possibilities should be examined in other fields, such as information technologies, strategy selection, personnel selection, etc. In that way, all the aspects of the proposed model would be observed and potential shortcomings could be resolved.

The key advantages of the introduced integrated model are its simplicity, ease of use, and objectivity, which appreciates the standpoint of decision-makers to an acceptable degree. However, the main limitation of the proposed model is that it deals with crisp numbers. The decision-making environment is characterized by uncertainty and vagueness, so it is very difficult to correctly express the evaluation criteria through crisp numbers. In other words, the reliability of the performed evaluation decreases because unexpected changes could cause a situation where, for example, the first-ranked alternative is no longer acceptable because the conditions have changed. In order to better incorporate uncertainty into the evaluation process, an extension with fuzzy, grey, and neutrosophic numbers is proposed. In this way, the proposed model would be improved and the possibility of making inappropriate decisions would be reduced. Furthermore, by involving a greater number of decision-makers, the subjective dimension could be incorporated to a greater extent and interesting results could be obtained.

Despite the mentioned shortcomings, the CCSD-ITARA-MARCOS model proved its applicability and ability to help in the process of decision-making. Overall, the proposed hybrid model is flexible, adaptable, and effective, and it can help decision-makers solve problems in other areas as well. Additionally, the model is quite simple and can be easily modified depending on the problem one wants to solve.

**Author Contributions:** Conceptualization, A.U., D.K., and G.P.; methodology, A.U., D.K., D.S., and Ç.K.; validation, P.T.N.; data curation, G.P.; writing—original draft preparation, D.S. and Ç.K.; writing—review and editing, A.U., and P.T.N.; supervision, D.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Application of Improved Best Worst Method (BWM) in Real-World Problems**

#### **Dragan Pamučar 1,\*, Fatih Ecer <sup>2</sup>, Goran Ćirović <sup>3</sup> and Melfi A. Arlasheedi <sup>4</sup>**


Received: 12 July 2020; Accepted: 7 August 2020; Published: 11 August 2020

**Abstract:** The Best Worst Method (BWM) represents a powerful tool for multi-criteria decision-making and defining criteria weight coefficients. However, while solving real-world problems, there are specific multi-criteria problems where several criteria exert the same influence on decision-making. In such situations, the traditional postulates of the BWM imply the defining of one best criterion and one worst criterion from within a set of observed criteria. In this paper, an improvement of the traditional BWM that eliminates this problem is presented. The improved BWM (BWM-I) offers the possibility for decision-makers to express their preferences even in cases where there is more than one best and worst criterion. The development enables the following: (1) the BWM-I enables us to express experts' preferences irrespective of the number of the best/worst criteria in a set of evaluation criteria; (2) the application of the BWM-I reduces the possibility of making a mistake while comparing pairs of criteria, which increases the reliability of the results; and (3) the BWM-I is characterized by its flexibility, which is expressed through the possibility of the realistic processing of experts' preferences irrespective of the number of the criteria that have the same significance and the possibility of the transformation of the BWM-I into the traditional BWM (should there be a unique best/worst criterion). To present the applicability of the BWM-I, it was applied to defining the weight coefficients of the criteria in the field of renewable energy and their ranking.

**Keywords:** BWM; BWM-I; criteria weights; multi-criteria; renewable energy

#### **1. Introduction**

In everyday life, we meet and analyze problems to find an optimal solution, i.e., the task of optimization. We meet them almost everywhere—in technical and economic systems, in the family, and elsewhere. The decision-making process and the choice of "the best" alternative is most frequently based on the analysis of more than one criterion and a series of limitations. When speaking about decision-making with the application of several criteria, decision-making may be referred to as multi-criteria decision-making (MCDM) [1,2]. The essence of the problem of MCDM is reduced to the ranking of an alternative from within the considered set by applying specific mathematical tools and/or logical preferences. Finally, a decision is made on the choice of the best alternative, taking into consideration different evaluation criteria. MCDM is an integral part of the contemporary science of decision-making and the science of management and systems engineering, which has broadly been applied in many fields, such as engineering, economics, medicine, logistics, the military field, and management [3,4].

While solving MCDM problems, the inevitable phase implies the determination of criteria weight coefficients. Studying the available literature enables us to note that there is no unique division of the methods for determining criteria weights and that, for the most part, their division has been made per the authors' understanding of and needs for solving a real-world problem. According to [5], one of the classification methods for determining criteria weights is implying their division into objective and subjective models. Objective models imply the calculation of criteria weight coefficients based on the criteria value in the initial decision-making matrix. The most well-known objective models include the Entropy Method [6], the CRITIC method (CRiteria Importance Through Intercriteria Correlation) [7], and the FANMA method, which is named after the authors of the method [8].

On the other hand, subjective models imply the application of the methodology, implying the direct participation of decision-makers who express their preferences according to the significance of criteria. Subjective models differ from each other in the number of participants and the techniques applied, as well as how the criteria final weights are formed. A big group of subjective models consists of the models based on pairwise comparisons. Thurstone [9] was the first to introduce the pairwise comparison method, which represents a structured manner of defining the decision-making matrix. Pairwise comparisons are used to show the relative significances of *m* actions in situations when it is impossible or senseless to assign rates to actions in relation to criteria. One of the most frequently used methods based on pairwise comparisons is the Analytic Hierarchy Process (AHP) method [10].

#### *Motivation for the Modification of the Traditional Best Worst Method*

In the last few years, the Best Worst Method (BWM) has gained a significant position in the field of MCDM as a model providing reliable and relevant results for optimal decision-making. Rezaei [11] developed the BWM to overcome some shortcomings of the AHP, which first of all pertain to the large number of pairwise comparisons of criteria. By applying the BWM, optimal values of weight coefficients are obtained with only *2n-3* pairwise comparisons of criteria. A small number of pairwise comparisons reduces inconsistency during the comparison of criteria, which in turn yields more reliable results (in relation to the AHP), since transitivity relations are less undermined. Differently from the AHP, the BWM carries out only reference comparisons: the advantage of the best criterion over all other criteria and the advantage of all other criteria over the worst criterion. This procedure is much simpler and more accurate, and it eliminates redundant (secondary) comparisons.

The BWM implies that, for every MCDM problem, one best criterion and one worst criterion are defined within the set of evaluation criteria, serving as reference points for pairwise comparisons with the other criteria. However, in numerous real-world problems there are situations in which there is no unique best and/or worst criterion, but two or more best and/or worst criteria instead. Such situations cannot be solved by the traditional BWM [11]; a consensus of the decision-maker on defining a unique best and/or worst criterion is required instead. We illustrate this problem with the following example. The decision-maker observes a set of four criteria ordered by significance as C1 = C2 > C3 > C4, for which weight coefficients need to be defined. The traditional BWM implies that the decision-maker should adapt (modify) his/her preferences to the BWM's algorithm, defining a unique best criterion against which the three remaining criteria are compared. In that manner, objectivity in the decision-making process is undermined. If, based on a consensus, we were to define criterion C1 as the best criterion, then, since the difference between C1 and C2 is minimal, we would take the smallest value from the 9-degree scale, namely *a*12 = 2. This means that the weight coefficients of the criteria C1 and C2 would be in a 2:1 ratio, which does not represent the decision-maker's real preference. Solving this problem by applying the traditional BWM yields weight coefficients in the approximate ratio *w*1 ≈ 2 · *w*2. In this paper, the authors have developed an improved BWM (BWM-I), which enables us to solve this and similar problems. The BWM-I enables us to realistically capture the decision-maker's preferences irrespective of the number of best/worst criteria in a problem. Besides, in the case of a larger number of best/worst criteria, the number of pairwise comparisons decreases in the BWM-I from *2n-3* to *2n-5*. In that way, the model's algorithm is simplified and the reliability of the results is increased. When there is a unique best/worst criterion, the BWM-I transforms into the traditional BWM with *2n-3* comparisons. This flexibility recommends the application of the BWM-I in complex studies in which experts' preferences over the criteria differ.

#### **2. Applications of BWM: A Literature Review**

In order to calculate weights of evaluation criteria in an MCDM problem, some MCDM methods can be utilized, such as stepwise weight assessment ratio analysis (SWARA) [12], the analytic hierarchy process (AHP) [13–15], the analytic network process (ANP), the full consistency method (FUCOM) [16,17], criteria importance through intercriteria correlation (CRITIC) [18], Entropy [19], level-based weight assessment (LBWA) [20], and so on. As one of the latest weighting methods, BWM is based on pairwise comparisons to extract criteria weights. By only conducting *2n-3* comparisons, as mentioned before, the BWM overcomes the inconsistency problem encountered during pairwise comparisons.

During the past five years, the BWM has already been utilized in numerous real-world problems, such as energy, supply chain management, transportation, manufacturing, education, investment, performance evaluation, airline industry, communication, healthcare, banking, technology, and tourism. Moreover, there are numerous studies in which only the BWM method is used (singleton integration), as well as the papers employing this method together with other methods (multiple integrations).

Van de Kaa et al. [21] used the BWM to compare three communication factors, and [22] applied the method to the evaluation of technical and performance criteria in supply chain management. Similarly, [23–25] used the BWM to determine sustainable criteria weights in sustainable supply chain management. Both [26,27] applied the BWM to mobile phone selection. In another study, the BWM was employed to evaluate cars [28]. Ghaffari [29] employed the method to evaluate the key success factors in the development of technological innovation. In addition, [30] applied the BWM in the development of a strategy for overcoming barriers to energy efficiency in buildings. The method was used by [31] to assess the factors influencing information-sharing arrangements. Furthermore, [24] employed the BWM to evaluate the research and development (R&D) performance of firms. Yadollahi et al. [32] applied the BWM in order to prioritize the factors of the service experience in the banking industry. Finally, [33] applied the method to the selection of a bioethanol facility location.

As mentioned above, the BWM has been combined with other robust techniques in order to obtain better results. For instance, fuzzy information and interval values have been integrated with the method. To represent uncertainty in the BWM, [34,35] used fuzzy sets in manufacturing and performance evaluation, respectively, while [36] applied triangular fuzzy sets in performance evaluation. Similarly, [37,38] employed the method with variants of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) in supply chain management, the energy sector, and investment. Furthermore, researchers have integrated the Multicriteria Optimization and Compromise Solution (VIKOR) method with the BWM. For instance, [39–41] applied the BWM–VIKOR integration to supplier selection and the green performance of airports. In another study, [42] proposed a BWM-interval type-2 fuzzy TOPSIS framework for the selection of the most proper green supplier. In order to select a location for wind plants, [43] used the BWM and the Multi-Attributive Ideal-Real Comparative Analysis (MAIRCA) integration. Moreover, [44] studied a rough BWM and Simple Additive Weighting (SAW) approach to wagon selection. In order to assess firms' performance in product development, [45] applied the fuzzy BWM and the fuzzy Analytic Network Process (ANP) methodologies. Another study suggested the fuzzy BWM and the fuzzy COPRAS methodologies for the analysis of the key factors of sustainable architecture [46]. In order to assess and rank foreign companies, [47] proposed the BWM, ELimination Et Choice Translating REality (ELECTRE) III, and Preference Ranking Organization METHod for Enrichment of Evaluations (PROMETHEE) II multi-criteria models. Another study by [48] introduced the interval rough BWM-based Weighted Aggregated Sum Product ASsessment (WASPAS) and Multi-Attributive Border Approximation area Comparison (MABAC) models for the evaluation of third-party logistics providers. An integrated model including the BWM, TOPSIS, Gray Relational Analysis (GRA), and the Weighted Sum Approach (WSA) was proposed for turning operations [49]. For web service selection, [50] employed the BWM, VIKOR, SAW, TOPSIS, and COmplex PRoportional ASsessment (COPRAS). Finally, [51] proposed the BWM-based MAIRCA multi-criteria methodology for neighborhood selection.

What is common to all these studies is that they apply the traditional algorithm of the BWM, which implies that one best criterion and one worst criterion are defined through a consensus. In the literature, there are numerous examples of studies implying the defining of criteria weight coefficients irrespective of whether there are one best or worst criterion, or several best or worst criteria [52–55]. In such studies, the algorithm of the traditional BWM would not be able to provide objective results, since it requires the adaptation of experts' preferences to one best/worst criterion. For that reason, the BWM-I that eliminates this problem and enables us to define criteria weights through a realistic perception of experts' preferences has been developed in this paper. The algorithm of the BWM-I is presented in the following section.

#### **3. Improved Best Worst Method (BWM-I)**

The BWM-I provides decision-makers with the possibility of choosing as many best/worst criteria as there are in the real decision-making problem. The determination of evaluation criteria weight coefficients by the application of the BWM-I implies the following steps:

Step 1. Defining a set of evaluation criteria *C* = {*c*1, *c*2, . . . *cn*}, where *n* represents the total number of the criteria.

Step 2. Determining the best and the worst criteria, i.e., as many best and worst criteria as there are in the decision-making model. Simultaneously, *m<sup>b</sup>* and *m<sup>w</sup>* denote the number of the best and the worst criteria in the model, respectively.

Step 3. Determining the advantages of the best criterion/criteria from within the set *C* over the other criteria. A 9-degree numeric scale is used to determine the advantage(s). If the criteria *C*<sup>1</sup> and *C*<sup>2</sup> are marked as the best criterion, then an improved best-to-others vector (M-BO) is obtained by the application of expression (1), namely:

$$A\_B = (m\_b a\_{BB}, a\_{B(m\_b+1)}, a\_{B(m\_b+2)}, \dots, a\_{Bn}) \tag{1}$$

where *aBj* represents the advantage of the best criterion *B* over the criterion *j*, and *m<sup>b</sup>* represents the number of the best criteria in the model, whereas *aBB* = 1. It is clear that for *m<sup>b</sup>* = 1, expression (1) transforms into a classical best-to-others (BO) vector, as in the traditional BWM.

Step 4. Determining the advantages of all the criteria from within the set *C* over the worst criterion/criteria. In order to determine the advantage(s), as in Step 3, a 9-degree numeric scale is used. If we mark the criterion *Cn*−<sup>1</sup> and the criterion *Cn*, i.e., *m<sup>w</sup>* = 2, as the worst criterion, then a modified others-to-worst vector (M-OW) is obtained by the application of expression (2), as follows:

$$A\_W = (a\_{1W}, a\_{2W}, \dots, a\_{(n-3)W}, a\_{(n-2)W}, m\_w a\_{WW}) \tag{2}$$

where *ajW* represents the advantage of the criterion *j* over the worst criterion *W*, *m<sup>w</sup>* represents the number of the worst criteria in the model, whereas *aWW* = 1. For *m<sup>w</sup>* = 1, expression (2) transforms into a classical OW vector, as in the traditional BWM.
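To make Steps 3 and 4 concrete, the M-BO and M-OW vectors of expressions (1) and (2) can be assembled directly from the raw comparisons. The following is a minimal Python sketch; the helper names and the sample comparison values are illustrative assumptions, not taken from the paper.

```python
# Build the improved best-to-others (M-BO) and modified others-to-worst
# (M-OW) vectors of expressions (1) and (2). Helper names are illustrative.

def m_bo_vector(m_b, advantages):
    """M-BO vector: the first entry is m_b * a_BB (with a_BB = 1), followed
    by the advantages a_Bj of the merged best criterion over the others."""
    return [m_b * 1] + list(advantages)

def m_ow_vector(m_w, advantages):
    """M-OW vector: the advantages a_jW of the other criteria over the
    merged worst criterion, closed by m_w * a_WW (with a_WW = 1)."""
    return list(advantages) + [m_w * 1]

# Hypothetical data: five criteria, two best (m_b = 2), one worst (m_w = 1);
# the merged best is preferred 2, 4, and 9 times over the remaining criteria,
# which are in turn preferred 4 and 2 times over the worst criterion.
A_B = m_bo_vector(2, [2, 4, 9])
A_W = m_ow_vector(1, [4, 2])
print(A_B, A_W)  # [2, 2, 4, 9] [4, 2, 1]
```

For *m<sup>b</sup>* = *m<sup>w</sup>* = 1, both helpers reduce to the classical BO and OW vectors, mirroring the transformation noted in the text.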

Step 5. Calculating the optimal values of the weight coefficients of the criteria from within the set *C*, $(w_1^*, w_2^*, \ldots, w_n^*)$. Since the BWM algorithm considered here defines weight coefficients when there is one or more than one best and/or worst criterion (i.e., *m<sup>b</sup>* ≥ 1 and *m<sup>w</sup>* ≥ 1), the postulates for solving the optimization model must be defined.

The optimal values of the weight coefficients are obtained once the condition stipulating that, for each pair *wB*/*w<sup>j</sup>* and *wj*/*wW*, we have *wB*/*w<sup>j</sup>* = *aBj* and *wj*/*w<sup>W</sup>* = *ajW* is met. Since we are considering the case where *m<sup>b</sup>* ≥ 1 and/or *m<sup>w</sup>* ≥ 1, these conditions must be revised to *wB*/*w<sup>j</sup>* = *mbaBj* and *wj*/*w<sup>W</sup>* = *mwajW*, where the weight coefficients *w<sup>B</sup>* and *w<sup>W</sup>* represent the weights of the unique best and the unique worst criteria. The unique best and worst criteria (*C<sup>B</sup>* and *CW*) represent all the criteria that are marked as the best and the worst criteria in the set *C* = {*c*1, *c*2, . . . *cn*}. In addition, since $w_B/w_W = m_b a_{BW}/m_w$, we obtain $\frac{w_B}{w_W}\frac{m_w}{m_b} = a_{BW}$. It follows from the above that the weight coefficient of the unique best criterion (*wB*) represents the sum of all the weight coefficients of the criteria that are marked as the best criteria in the set *C* = {*c*1, *c*2, . . . *cn*}, i.e.,

$$w\_B = \sum\_{l=1}^{b} w\_l \tag{3}$$

where *w<sup>l</sup>* represents the weight coefficients of all the criteria in the set *C* = {*c*1, *c*2, . . . *cn*} that are marked as the best criteria, whereas *b* represents the total number of the best criteria from the set *C*.

The unique worst criterion is defined similarly. The weight coefficient of the unique worst criterion (*wW*) represents the sum of all weight coefficients of the criteria that are marked as the worst criteria in the set *C* = {*c*1, *c*2, . . . *cn*}, i.e.,

$$w\_W = \sum\_{k=1}^{v} w\_k \tag{4}$$

where *w<sup>k</sup>* represents the weight coefficients of all the criteria that are marked as the worst criteria in the set *C* = {*c*1, *c*2, . . . *cn*}, and *v* represents the total number of the worst criteria from within the set *C*. Since the optimal values of the weight coefficients should minimize the maximum of the absolute differences $\left| \frac{w_B}{m_b \cdot w_j} - a_{Bj} \right|$ and $\left| \frac{w_j}{m_w \cdot w_W} - a_{jW} \right|$, all such absolute values must be minimized for each *j*, i.e.,

$$\begin{aligned} \min\_{w} \max\_{j} \; & \left\{ \left| \frac{w\_B}{m\_b \cdot w\_j} - a\_{Bj} \right|, \left| \frac{w\_j}{m\_w \cdot w\_W} - a\_{jW} \right| \right\} \\ \text{s.t.} \quad & w\_B + w\_W + \sum\_{j=1}^{n-(m\_b + m\_w)} w\_j = 1 \\ & w\_B, w\_W, w\_j \geq 0 \quad \forall j \end{aligned} \tag{5}$$

The model presented in (5) is equivalent to the following model.


$$\begin{array}{ll} \min \xi \\ \text{s.t.} & \left| \frac{w\_B}{m\_b \cdot w\_j} - a\_{Bj} \right| \leq \xi, \quad \forall w\_j \neq w\_W \\ & \left| \frac{w\_j}{m\_w \cdot w\_W} - a\_{jW} \right| \leq \xi, \quad \forall w\_j \neq w\_B \\ & \left| \frac{w\_B}{w\_W} \frac{m\_w}{m\_b} - a\_{BW} \right| \leq \xi \\ & w\_B + w\_W + \sum\_{j=1}^{n-(m\_b + m\_w)} w\_j = 1 \\ & w\_B, w\_W, w\_j \geq 0 \quad \forall j \end{array} \tag{6}$$

Should *m<sup>b</sup>* > 1 and/or *m<sup>w</sup>* > 1, the total number of the criteria in the model is reduced by the introduction of the unique best and the unique worst criteria. We thereby obtain a smaller number of comparisons; i.e., the total number of comparisons in the model is reduced from 2*n* − 3 (in the traditional BWM) to 2*n* − 5 (in the BWM-I). It is clear that, should *m<sup>b</sup>* = *m<sup>w</sup>* = 1, models (5) and (6) transform into the classical optimization BWM model [11].
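The effect on the number of pairwise comparisons can be sketched as follows; the counting rule for general *m<sup>b</sup>* and *m<sup>w</sup>* is our assumption generalizing the stated reduction from 2*n* − 3 to 2*n* − 5 (one comparison saved per extra best/worst criterion).

```python
# Count pairwise comparisons in the traditional BWM vs. the BWM-I.

def bwm_comparisons(n):
    """Traditional BWM: 2n - 3 pairwise comparisons for n criteria."""
    return 2 * n - 3

def bwm_i_comparisons(n, m_b, m_w):
    """BWM-I (assumed generalization): each extra best/worst criterion is
    absorbed into the unique best/worst criterion, saving one comparison."""
    return 2 * n - 3 - (m_b - 1) - (m_w - 1)

# Example 1: n = 8 criteria with two best and two worst criteria.
print(bwm_comparisons(8), bwm_i_comparisons(8, 2, 2))  # 13 11
```

With *m<sup>b</sup>* = *m<sup>w</sup>* = 1, the count reduces to the traditional 2*n* − 3, matching the transformation of models (5) and (6) into the classical BWM model.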

**Example 1.** *If a set of eight criteria C*1, *C*2, . . . , *C*<sup>8</sup> *is observed, in which there are two best and two worst criteria, and if we know that the criteria C*<sup>1</sup> = *C*<sup>2</sup> *are marked as the best, then the unique best criterion (CB) that represents both criteria in model (6) is introduced. If the criteria C*<sup>7</sup> = *C*<sup>8</sup> *are marked as the worst, then the unique worst criterion (CW) represents the criteria C*<sup>7</sup> *and C*<sup>8</sup> *in model (6). Then, the total number of the criteria in the model is reduced to six, since C*<sup>1</sup> = *C*<sup>2</sup> = *C<sup>B</sup> and C*<sup>7</sup> = *C*<sup>8</sup> = *CW. Thus, the total number of pairwise criteria comparisons is reduced from 13 (i.e., 2 · 8 − 3) to 11 (i.e., 2 · 8 − 5).*

*Should m<sup>b</sup> > 1 and/or m<sup>w</sup> > 1, then, based on conditions (3) and (4), solving model (6) yields the weight coefficients of the best and the worst criteria multiplied by the number of the best and the worst criteria. Therefore, after solving model (6), the obtained weights w<sup>B</sup> and w<sup>W</sup> need to be divided by m<sup>b</sup> and m<sup>w</sup>, respectively, in order to obtain the final values of the weight coefficients of the best and the worst criteria. For example, if m<sup>b</sup> = m<sup>w</sup> = 2, the final values obtained for the best and the worst criteria are $w_{B1}^* = w_{B2}^* = w_B/m_b = w_B/2$ and $w_{W1}^* = w_{W2}^* = w_W/m_w = w_W/2$. The weights of the remaining criteria remain unchanged and are taken from the solution to model (6).*

In order to more easily understand the algorithm of the BWM-I, the following part is dedicated to solving a simple example including five criteria taken from a study by [28]; then, a complex model implying the defining of the weight coefficients of a total of 28 criteria grouped into six clusters is considered in the case study (Section 4).

**Example 2.** *While buying a car, the buyer applies five criteria for the evaluation of the alternative (the car): Quality (C1), Price (C2), Comfort (C3), Safety (C4), and Style (C5). The buyer has evaluated the criteria per the algorithm of the traditional BWM, as shown in Table 1.*


**Table 1.** The best-to-others and others-to-worst pairwise comparison vectors.

*Based on the data accounted for in Table 1, it is possible to conclude that the buyer considers the criteria Price (C2) and Safety (C4) as the most significant, whereas the criterion Style (C5) is rated as the least significant. The problem that appears here cannot be solved through the application of the traditional BWM, which requires the defining of the unique best and worst criteria. If we were to insist on the defining of the unique best criterion (as is required by the traditional BWM), then we would have to revise the BO vector to define a single best criterion. However, by doing so, we would exert an influence on the buyer's preferences, i.e., the buyer would not express his real preferences. Those revised preferences would further exert an influence on a non-objective choice of alternatives, which should be avoided. If the expert (in this case, the buyer) requires a high degree of rationality during the evaluation of the criteria, the multi-criteria decision-making methods also need to be used as support to such rational decision-making in order to meet that very same condition. Therefore, since it was impossible to apply the traditional BWM, the BWM-I was applied.*

Based on the data from Table 1, we conclude that the number of the best criteria is *m<sup>b</sup>* = 2, whereas the number of the worst criteria is *m<sup>w</sup>* = 1. Based on that and model (6), it is possible to define the model for the calculation of the optimal values of the weight coefficients of the BWM-I as follows:

$$\begin{array}{l} \min \xi \\ \text{s.t.} \\ \left| \frac{w\_B}{2 \cdot w\_1} - 2 \right| \le \xi, \left| \frac{w\_B}{2 \cdot w\_3} - 4 \right| \le \xi, \left| \frac{w\_B}{2 \cdot w\_W} - 9 \right| \le \xi, \\ \left| \frac{w\_1}{w\_W} - 4 \right| \le \xi, \left| \frac{w\_3}{w\_W} - 2 \right| \le \xi, \\ w\_B + w\_W + w\_1 + w\_3 = 1 \\ w\_B, w\_W, w\_1, w\_3 \ge 0 \end{array} \tag{7}$$

By solving the presented model, the values of the weights *w<sup>B</sup>* = 0.7088, *w<sup>W</sup>* = 0.0400, $w_1^*$ = 0.1656, and $w_3^*$ = 0.0856, as well as ξ = 0.140, are obtained. Based on condition (3), we obtain $w_{B1}^* = w_2^*$ = 0.7088/2 = 0.3544 and $w_{B2}^* = w_4^*$ = 0.7088/2 = 0.3544. Since *m<sup>w</sup>* = 1, *w<sup>W</sup>* = $w_5^*$ = 0.0400 is obtained. So, the optimal values of the weight coefficients $w_j = (0.1656, 0.3544, 0.0856, 0.3544, 0.0400)^T$ are obtained, characterized by a high consistency ratio:
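The reported solution can be verified by substituting it back into model (7). The short Python check below is our illustration, not the authors' code; the weights are the rounded values quoted above, and CI = 5.23 is the consistency index associated with *a<sup>BW</sup>* = 9.

```python
# Check the reported BWM-I weights against the constraints of model (7).
m_b, m_w = 2, 1                      # two best criteria (C2, C4), one worst (C5)
w_B, w_1, w_3, w_W = 0.7088, 0.1656, 0.0856, 0.0400

deviations = [
    abs(w_B / (m_b * w_1) - 2),      # merged best vs. C1
    abs(w_B / (m_b * w_3) - 4),      # merged best vs. C3
    abs(w_B / (m_b * w_W) - 9),      # merged best vs. worst
    abs(w_1 / (m_w * w_W) - 4),      # C1 vs. worst
    abs(w_3 / (m_w * w_W) - 2),      # C3 vs. worst
]
xi = max(deviations)                  # attained objective value of model (7)
assert abs(w_B + w_1 + w_3 + w_W - 1) < 1e-6   # weights sum to one

w_best = w_B / m_b                    # condition (3): each best weight = w_B / 2
CR = xi / 5.23                        # CI = 5.23 for a_BW = 9
print(round(xi, 3), round(w_best, 4), round(CR, 4))  # 0.14 0.3544 0.0268
```

All five deviations come out at roughly 0.14, i.e., every constraint is active, which is consistent with ξ = 0.140 being the optimum.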

$$CR = \frac{\xi}{CI} = \frac{0.140}{5.23} = 0.0268$$

Had the model of the traditional BWM [27] been applied to the presented example, optimization model (8) would have been obtained.

$$\begin{array}{l} \min \xi \\ \text{s.t.} \\ \left| \frac{w\_2}{w\_1} - 2 \right| \le \xi, \left| \frac{w\_2}{w\_3} - 4 \right| \le \xi, \left| \frac{w\_2}{w\_4} - 1 \right| \le \xi, \left| \frac{w\_2}{w\_5} - 9 \right| \le \xi, \\ \left| \frac{w\_1}{w\_5} - 4 \right| \le \xi, \left| \frac{w\_3}{w\_5} - 2 \right| \le \xi, \left| \frac{w\_4}{w\_5} - 9 \right| \le \xi, \\ w\_1 + w\_2 + w\_3 + w\_4 + w\_5 = 1 \\ w\_1, w\_2, w\_3, w\_4, w\_5 \ge 0 \end{array} \tag{8}$$

By solving model (8), the vector of the weight coefficients $w_j = (0.1638, 0.3505, 0.0847, 0.3616, 0.0396)^T$ and ξ = 0.1401 are obtained. Based on these results, we perceive that even though both best criteria (*C2* and *C4*) were defined as being of the same significance, the obtained values of their weight coefficients differ, i.e., *w*<sup>2</sup> = 0.3505 and *w*<sup>4</sup> = 0.3616. The different values of the weight coefficients of the criteria *C2* and *C4* are a consequence of the violated transitivity of the relations between the criteria. This is confirmed by the value of the consistency ratio, *CR* = 0.026, just as in model (7).

The shown example has demonstrated that the traditional BWM model can be applied to the determination of the weights of a larger number of the best/worst criteria, but only when the consistency ratio is ideal, i.e., when *CR* = 0.00. However, we may realistically expect that more than one best/worst criterion and a value of *CR* > 0 will appear when solving real-world problems, especially those with a greater number of criteria. In such cases, the BWM-I must be applied. Given the fact that the BWM-I transforms into the traditional BWM when *m<sup>b</sup>* = *m<sup>w</sup>* = 1, its application is also a logical choice for objectively perceiving and solving real-world multi-criteria problems in the future.

#### **4. Case Study: The Application of BWM-I**

In this chapter, the application of the BWM-I to a renewable energy source evaluation problem with a larger number of best/worst criteria within the dimensions/criteria is presented. The most common criteria for a renewable energy source evaluation involve technical, environmental, social, risk, political, and economic aspects. Thus, we introduce a six-dimensional model in order to define the weights of the drivers for renewable energy sources, as shown in Figure 1, in which several criteria are considered for each dimension. The six dimensions are technical (*C1*), economic (*C2*), social (*C3*), environmental (*C4*), risk (*C5*), and political (*C6*); each dimension comprises three to six criteria. The criteria for the evaluation of renewable energy sources were derived by reviewing the existing literature [56–64]. Consequently, the evaluation comprises six dimensions and 28 criteria. The criteria and their descriptions are listed in Table 2.

**Figure 1.** The local weights of the criteria according to the considered dimensions.




**Table 2.** The evaluation criteria and their descriptions.

After defining the set of the evaluation criteria, the following steps of the BWM-I (Steps 3 and 4) imply the formation of the M-BO and M-OW vectors of the dimensions/sub-criteria, as shown in Table 3.


**Table 3.** The best-to-others (M-BO) and modified others-to-worst (M-OW) vectors of the dimensions/ sub-criteria.

From Table 3, we note that some M-BO and M-OW vectors contain several best and worst criteria. Based on the M-BO and M-OW vectors of the dimensions, we notice one best criterion (Environmental—*C4*), whereas there are two worst criteria (Risk—*C5* and Political—*C6*). In the Economic Sub-Criteria group, there are three best criteria (Investment cost—*C21*, Operation and maintenance cost—*C22*, and Energy cost—*C24*) and one worst criterion (Return of investment—*C23*). In the Social Sub-Criteria group, there is one best criterion (Social acceptance—*C31*) and two worst criteria (Noise—*C34* and Visual impact—*C35*). The Environmental Sub-Criteria group is distinctive, since it contains two best criteria (Impact on the environment and humans—*C43* and Climate change—*C45*) and two worst criteria (GHG Emissions—*C41* and Water use—*C44*). In the Political Sub-Criteria group, there are two best criteria (Compatibility with the national energy policy—*C62* and Compatibility with the public policy—*C63*) and one worst criterion (Government support—*C64*). In the remaining sub-criteria groups (the Technical Sub-Criteria and the Risk Sub-Criteria), there are unique best and worst criteria, for which reason the traditional postulate of the BWM is used to define the weight coefficients of these sub-criteria groups.

Based on the M-BO and M-OW vectors (Table 3), the optimization models for the calculation of the weight coefficients of the dimensions/sub-criteria were defined. A total of seven BWM-I models were defined, some of which are shown in the next part.


By solving the presented models, the optimal values of the weight coefficients of the dimensions/sub-criteria are obtained, as shown in Table 4.


**Table 4.** The optimal values of the weight coefficients of the dimensions/sub-criteria.



In Table 4, the global and local values of the weight coefficients of the criteria are presented. The global weight of each criterion was obtained by multiplying the weight coefficient of its dimension by the weight coefficient of the sub-criterion. By solving model (6), the following values of ξ* were obtained: ξ*<sub>C1−C6</sub> = 0.6277, ξ*<sub>C11−C15</sub> = 0.2984, ξ*<sub>C21−C26</sub> = 0.3542, ξ*<sub>C31−C35</sub> = 0.3542, ξ*<sub>C41−C45</sub> = 0.8939, ξ*<sub>C51−C53</sub> = 0.2087, and ξ*<sub>C61−C64</sub> = 0.2087. The values of ξ* are used to determine the consistency ratio, as shown in Table 5.

**Table 5.** The consistency index and the consistency ratio of our modified Best Worst Method (BWM-I).


The analysis of the results of the BWM-I from Table 5 allows us to conclude that the values of the consistency ratio are satisfactory [27].

According to the findings shown in Table 4, the environmental dimension is the most crucial dimension, with a significance of 0.3972, followed by the economic and technical dimensions, with weights of 0.2823 and 0.1674, respectively. According to Figure 1, in the pairwise comparison of the evaluation criteria, both "Impact on the environment and humans" and "Climate change" ranked as the priority factors from the environmental aspect, followed by "Land use". Furthermore, three criteria (Investment cost, Operation and maintenance cost, and Energy cost) ranked first within the economic dimension. "Technology maturity" and "Social acceptance" were the most important criteria in the technical and social dimensions, respectively. Overall, according to the global weights, the most important criteria were "Climate change" (0.1199), "Impact on the environment and humans" (0.1199), "Land use" (0.1084), and "Technology maturity" (0.0716); these four represent the most crucial evaluation criteria for the determination of a suitable renewable energy source.

In order to show the sensitivity analysis of the BWM-I model, in the next section, we simulated the changes in the input parameters of the BO and OW vectors. In each group of criteria, another best or worst criterion was added, while the values of the remaining criteria in BO and OW vectors remained unchanged.

In the Dimensions group, two best criteria were selected (C4 and C2), while the values of the remaining criteria remained unchanged. In the Technical Sub-Criteria group, two criteria, C12 and C11, were selected as the worst criteria. In the Economic Sub-Criteria group, in addition to the three best criteria, two worst criteria were selected (C23 and C25). In the Social Sub-Criteria group, two best criteria, C31 and C32, were added to the input BO and OW vectors. In the Risk Sub-Criteria group, in addition to the best criterion C51, criterion C52 was also selected as a best criterion. In the Political Sub-Criteria group, in addition to C64, criterion C61 was also chosen as a worst criterion. After the implementation of these changes, the results shown in Figure 2 were obtained.

**Figure 2.** Results of modified M-BO and M-OW vectors of the dimensions/sub-criteria.

By analyzing the results from Figure 2, we notice that the model is sensitive to changes in the number of best and worst criteria in the input data. Despite the changes in the input data, the degree of consistency of the considered models remained within acceptable limits. The authors believe that the presented analysis shows the stability and robustness of the modified BWM methodology.

#### **5. Managerial Implications**

Integrating methods into decision-making methodologies makes a significant contribution to the relevant body of knowledge. Furthermore, it is valuable that existing methods are made more efficient by remedying their deficiencies. In decision theory, MCDM methods are utilized to solve many real-world problems. Improving the deficient aspects of an existing approach is always appreciated, as it continuously advances this branch of operations research, because businesses, politicians, researchers, and industries need such tools to make more reliable decisions.

The aim of this paper stems from the fact that the BWM, which is one of the new approaches in the field of MCDM, is ineffective if there is more than one best/worst criterion. Thus, this work suggests a novel strategy for solving such an MCDM problem via specific modifications to the main structure of the traditional BWM. As a result, decision-makers will be able to easily cope with the problem of more than one best/worst criterion, which is often encountered in real-world problems. Furthermore, by making fewer pairwise comparisons (only *2n-5*), they will not only mitigate the problem of inconsistency but also save time. Therefore, it is also believed that the present article will offer a different point of view for future works.

The presented methodology eliminates deviations in expert preferences that occur as a consequence of adapting to the traditional BWM algorithm. The previous analysis showed apparent advantages, so it is expected that the proposed methodology will be accepted by the management when solving real-world problems. Most decision-makers readily accept tools that are logical and easy to understand. The BWM-I methodology can be included in the category of easy-to-understand decision-making tools. In particular, it is expected to be accepted and used by decision-makers who know the algorithm of the traditional BWM, as well as its advantages and disadvantages. In addition, the use of the BWM-I methodology as part of the set of tools that make up the decision support system will make it more acceptable to management structures. This tool will be acceptable for managers who require a more realistic view of the mutual relations between the criteria, as well as a realistic and rational view of expert preferences.

A few insights are extracted to increase the applicability of the proposed BWM-I methodology in real cases. Thereby, the implications are as follows:


Knowing that the decision-making process is accompanied by greater or lesser uncertainties caused by a dynamic environment, such a system eliminates further adjustment and deviation of expert preferences. As a result of this feature, the demonstrated methodology can help companies establish a rational, systematic approach to evaluating the internal and external factors that affect their business. The flexibility of the methodology in terms of reducing the number of pairwise comparisons is also valuable. It is expected that the flexibility of the BWM-I methodology will enable its application in complex studies in which criteria and expert preferences differ and in which no consensus is required in expert preferences.

#### **6. Conclusions**

The BWM represents a very powerful tool for multi-criteria decision-making and for defining criteria weight coefficients. Generally, while solving real-world problems, there are specific multi-criteria problems in which several criteria exert the same influence on decision-making. The traditional postulate of the BWM implies that, while defining the priority vectors (BO and OW), one best criterion and one worst criterion should be chosen from within the set of the observed criteria. Then, the criteria are compared in pairs by defining the best-to-others (BO) and others-to-worst (OW) vectors. While defining the BO and OW vectors, the decision-maker may assign the same preference to several criteria, which means that there may be several criteria with the same significance. However, the traditional BWM does not permit the defining of several best/worst criteria of the same significance, although this is frequently the case in real-world problems. Consequently, when applying the traditional BWM, decision-makers are required to define one best/worst criterion even if they believe that there are two or more best/worst criteria. In that way, the decision-maker's preferences are distorted to a certain extent, and objective results are not obtained. If the small flexibility of the 9-degree scale is added to that, the obtained values of the criteria weights may deviate significantly from the preferences expressed by the decision-maker.

In this paper, an improvement of the traditional BWM is presented. The improved BWM (BWM-I) eliminates the shortcomings of the traditional BWM: it allows decision-makers to express their preferences even when there is more than one best or worst criterion. The BWM-I was successfully tested on two examples in this paper. The first example (Section 3) presents a case with two best criteria; the algorithms of both the traditional BWM and the BWM-I were applied to it, showing that the BWM-I offers greater flexibility in expressing experts' preferences than the traditional BWM. In the second example (Section 4), the BWM-I was applied to defining the weight coefficients of criteria in the field of renewable energy and to their ranking. All 28 criteria, grouped into six dimensions, were evaluated. Through a combination of the seven BWM-I models, the advantages of the developed model and the possibilities of the objective processing of experts' preferences are demonstrated.

In comparison with the traditional BWM, the proposed BWM-I has several advantages:


#### *Future Research*

The proposed BWM-I represents a tool that can be successfully integrated with other MCDM techniques. The development of hybrid multi-criteria models for group decision-making based on the integration of the BWM-I with other MCDM tools is one future direction of its application. A second logical step for the future improvement of the BWM-I is its application in an uncertain environment, such as fuzzy, rough, grey, or neutrosophic settings [67,68]. In the last few years, numerous linguistic approaches, such as expansions of linguistic variables in a neutrosophic environment and the unbalanced linguistic approach, have been developed. These approaches have attracted considerable attention in the decision-making field through the possibility of applying linguistic variables in the decision-making process. Connecting these linguistic approaches with the BWM-I, and research into the possibility of the linguistic modeling of preferences, are interesting and promising topics for future research.

**Author Contributions:** Conceptualization, methodology, validation, D.P. and F.E.; writing—original draft preparation, review and editing, D.P., F.E., G.C. and M.A.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


68. Li, J.; Wang, J.Q.; Hu, J.H. Multi-criteria decision-making method based on dominance degree and BWM with probabilistic hesitant fuzzy information. *Int. J. Mach. Learn. Cybern.* **2019**, *10*, 1671–1685.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **A Robust Algorithm for Classification and Diagnosis of Brain Disease Using Local Linear Approximation and Generalized Autoregressive Conditional Heteroscedasticity Model**

#### **Ali Hamzenejad <sup>1</sup> , Saeid Jafarzadeh Ghoushchi <sup>2</sup> , Vahid Baradaran <sup>1</sup> and Abbas Mardani 3,4,\***


Received: 5 July 2020; Accepted: 24 July 2020; Published: 2 August 2020

**Abstract:** Accurate region detection improves the treatment of brain tumors, yet existing algorithms struggle to diagnose tumors reliably at an early stage. In this paper, we introduce a new robust algorithm comprising three methods for the classification of brain disease. The first method is Wavelet-Generalized Autoregressive Conditional Heteroscedasticity-K-Nearest Neighbor (W-GARCH-KNN): the Two-Dimensional Discrete Wavelet Transform (2D-DWT) is applied to the input images, the sub-banded wavelet coefficients are modeled using the GARCH model, and the features of the GARCH model are taken as the main feature vector. The second method is the Developed Wavelet-GARCH-KNN (D-WGK), which resolves the incompatibility of the WGK method with the low-pass sub-band. The third method is the Wavelet Local Linear Approximation (LLA)-KNN, in which the LLA is used for modeling the wavelet sub-bands. The extracted features were applied separately to classify images as normal or tumorous, and the classification was then extended to diagnose tumor types. The empirical results showed that the proposed algorithm obtained a high classification rate and outperformed recently introduced algorithms while requiring fewer classification features. According to the results, the Low-Low (LL) sub-bands are not compatible with the GARCH model; with the use of homomorphic filtering, this limitation is overcome. The results also showed that the presented LLA method was better than the GARCH model for modeling wavelet sub-bands.

**Keywords:** Magnetic Resonance Imaging (MRI); wavelet transform; GARCH; LLA; LDA; KNN

#### **1. Introduction**

Electromagnetic imaging techniques provide valuable information about the human body. One of these methods is Magnetic Resonance Imaging (MRI) of the brain [1]. One major area of research that has expanded in medical engineering involves machine-controlled diagnostic tools for quicker and easier inference, which can be of great help to physicians in clinical medicine. Therefore, in recent years, mathematical methods have attracted much attention in the analysis of neural network data [2]. Brain images are an interesting subject for mathematical applications and for the diagnosis of brain disorders in a patient [3]. The MRI can be used to examine the status of the brain tissue and discover whether or not there is a disease [4]. In MRI imaging, the patient is exposed to a strong

magnetic field, after which radio waves are transmitted toward the patient. The body's tissues emit radio waves in response. By receiving the radio waves emitted from the patient's body and analyzing them with a powerful computer, images are created on the device monitor that show sections of the target organ. The next step involves extracting features.

The Two-Dimensional Discrete Wavelet Transform (2D-DWT) and Principal Component Analysis (PCA) are methods that have been used to extract features from the images [5,6]; classification methods were then used to diagnose the type of brain disease [7,8]. Chaplot, et al. [9] used two-dimensional DWT sub-bands to extract features in their research on Alzheimer's Disease (AD), with Daubechies filters used as the filtering technique. The outcome illustrated that the Support Vector Machine (SVM) with a radial basis function or polynomial kernel has a higher performance than linear neural networks and SVM [10,11]. Hackmack, et al. [12] used multidimensional complex wavelet transformations, followed by a linear SVM, to classify multi-scale brain images. The results showed that low-frequency scales carry more information than high-frequency ones. Maitra and Chatterjee [13] presented the Slantlet transform, a development of the DWT, to extract features from brain images. The Fuzzy C-Means (FCM) method has been used to analyze brain MRI based on the characteristics of the image histogram, in order to distinguish a healthy subject from one with Alzheimer's disease. Ramathilagam, et al. [14] used a modified fuzzy c-means algorithm to segment T1-T2-weighted brain MRI images. Since the standard c-means objective is highly sensitive to noise-induced artifacts during extraction, the authors proposed running the dist-max algorithm before executing the method.

Rivest-Hénault and Cheriet [15] used a local linear representation to model the brain tissue, after which regional models were embedded in the framework of the surface set in order to control the spatial integrity of the segmentation. Hussain, et al. [16] classified images as normal or abnormal using feed-forward Back-Propagation Neural Networks (BPNN), with characteristics derived from dynamic statistics and the 2D-DWT. Bhattacharyya and Kim [17] presented an image segmentation technique for detecting a tumor in MRI images; since existing thresholding techniques produced different results on each image, they presented a methodology that located the tumor uniquely. Kim, et al. [18] studied the diagnosis of Alzheimer's disease based on the Electroencephalogram (EEG) signal of the brain using genetic algorithms and neural networks. One of the remarkable points of this study was the ability to differentiate patients in the mild stage of Alzheimer's disease from healthy subjects with 82% accuracy using a single-channel EEG signal [19]. In another study, the EEG signal disorder of healthy people was compared with that of brain tumor patients using entropy; according to the results, the low rhythms of the EEG signal, such as the Delta and Theta bands, have a higher power spectrum in patients than in healthy people. Gholipour, et al. [20] described a fully automated software algorithm using standard MRI sequences before and after contrast T1. The pre- and post-contrast T1-weighted images are automatically co-registered and normalized, and the volume of tumor growth is calculated automatically. In their study, they were able to test a method for calculating the size of the tumor when it was enlarged by the collapse of a cavity and when the enlarged tumor was covered with semi-autogenous blood in a cutaneous cavity.
Their method detected increases in tumor volume among blood products that would rarely be measured using other techniques, and their approach appears to overcome many challenges in assessing the response of growing brain tumors, pending further validation. Zacharaki, et al. [21] studied machine-learning algorithms that automatically identify the features relevant for brain tumor differentiation. They examined various machine-learning techniques for classifying brain tumors based on features extracted from conventional and perfusion MRI. Their study was performed with Leave-One-Out (LOO) cross-validation on 101 brain tumors; using a feature-subset evaluation combined with a best-first search algorithm and K-Nearest Neighbour (KNN) classification, the accuracy reached 96.9% when differentiating gliomas and 94.5% when distinguishing low-grade neoplasms. Fritzsche, et al. [22] completed a study of 15 patients

with brain tumors and 18 patients with Mild Cognitive Impairment (MCI); eight remained stable in a three-year follow-up, and 15 were healthy individuals. The classification was also improved by limiting the analysis to the left-brain hemisphere. Devanand, et al. [23], using morphometric mapping of MRI, evaluated local changes of the hippocampus and entorhinal cortex in predicting the conversion from MCI to AD. In the MCI sample, Cox regression models of the time to conversion were constructed for converters to AD (n = 31) and 99 non-converters, controlling for age, sex, and education. In Zöllner, et al. [24], the performance of feature-reduction methods such as the Pearson correlation coefficient, principal component analysis, and independent component analysis in the classification of glioma was analyzed using a support vector machine classifier.

Afshar, et al. [25] studied classification using CapsNets for the detection of brain tumors in order to present a developed architecture with higher accuracy. Their findings indicated that the presented method could successfully outperform Convolutional Neural Networks (CNNs). Mohan and Subashini [26] provided a clinical study of brain tumor imaging related to gliomas, using related methods of segmentation and classification. Huang, et al. [27] proposed an algorithm based on the rough set method, presenting a hybrid method with the use of FCM. Initially, the feature table was set based on the FCM clustering values; then, the relationships among features were used as similarity criteria within each cluster.

In this paper, we presented three algorithms, named WGK, D-WGK, and WLK. The first presented method is Wavelet-GARCH-KNN (WGK). In this method, we first used a two-stage 2D-DWT to decompose the input images into sub-bands of wavelets. The resulting wavelet coefficients served as classification features. The GARCH model was then used for feature extraction from the first-stage HH1, HL1, and LH1 and second-stage HH2, HL2, and LH2 sub-bands. Because of the incompatibility of the Low-Low (LL) sub-band with the GARCH model, this sub-band was ignored [28]. To reduce the number of features, the PCA and PCA + LDA methods were then used, and the extracted features of brain lesions were classified via the KNN method. The results are illustrated in the results section. The second presented method is named Developed Wavelet-GARCH-KNN (D-WGK). In the second method, we overcame the limitation of the WGK algorithm using homomorphic filtering before the wavelet transformation. Therefore, the LL2 sub-band participated in the GARCH model. Then, similarly to the WGK method, the KNN method was used for the classification of brain tumors. The third method is Wavelet-LLA-KNN (WLK). In this method, all sub-bands of the wavelet decomposition were modeled with the LLA algorithm. The remaining part of the third method is also similar to the WGK and D-WGK methods.

#### **2. Methods and Materials**

#### *2.1. Image Processing*

Digital images can now be readily analyzed and stored [29]. To obtain better results, it is sometimes necessary to make changes to these images, with three main purposes: processing, analysis, and image perception. For this reason, computer image-processing systems have been developed to perform these operations with better speed and accuracy. In these systems, four major processes occur: pre-processing, image quality enhancement, image transformation, and classification and segmentation. In these methods, mathematical rules implemented on computers simulate elements of human vision, an aspect of image analysis used for specific purposes. Computer vision is the analysis of images in various scientific branches such as medicine, engineering, molecular imaging, astronautics, security, etc. Modern digital technology has made it possible to manipulate multidimensional signals with systems ranging from simple digital circuits to massively parallel computers [30,31].

#### *2.2. Discrete Wavelet Transform (DWT)*

Let *f*(*x*) ∈ *L*<sup>2</sup>(*R*) be a function with a wavelet expansion in terms of the wavelet ψ(*x*) and the scaling function ϕ(*x*) [20]; we then have:

$$f(\mathbf{x}) = \frac{1}{\sqrt{M}} \sum\_{k} \mathcal{W}\_{\phi}(j\_0, k)\, \phi\_{j\_0,k}(\mathbf{x}) + \frac{1}{\sqrt{M}} \sum\_{j=j\_0}^{J-1} \sum\_{k} \mathcal{W}\_{\psi}(j, k)\, \psi\_{j,k}(\mathbf{x}) \tag{1}$$

$$\mathcal{W}\_{\phi}(j\_0, k) = \frac{1}{\sqrt{M}} \sum\_{\mathbf{x}=0}^{M-1} f(\mathbf{x})\, \phi\_{j\_0,k}(\mathbf{x}) \tag{2}$$

$$\mathcal{W}\_{\psi}(j, k) = \frac{1}{\sqrt{M}} \sum\_{\mathbf{x}=0}^{M-1} f(\mathbf{x})\, \psi\_{j,k}(\mathbf{x}) \tag{3}$$

$$\phi\_{j,k}(\mathbf{x}) = 2^{\frac{j}{2}} \phi(2^j \mathbf{x} - k) \tag{4}$$

$$
\psi\_{j,k}(\mathbf{x}) = 2^{\frac{j}{2}} \psi(2^j \mathbf{x} - k) \tag{5}
$$


where *f*(*x*) is the input vector, and ϕ*j*0,*<sup>k</sup>* (*x*) and ψ*j*,*<sup>k</sup>* (*x*) are the scaling and wavelet functions, respectively; *x* = 0, 1, . . . , *M* − 1, *j* = 0, 1, . . . , *J* − 1, and *k* = 0, 1, 2, . . . , *M* − 1, where *M* is the number of samples to be transformed, equal to 2*<sup>J</sup>* , *J* is the number of transformation levels, and *j*<sup>0</sup> is an arbitrary starting scale. The expansion coefficients form a series of discrete numbers, which is why the expansion is called the discrete wavelet transform of *f*(*x*). The discrete function *f*(*x*) can thus be written as a weighted summation of the wavelets ψ*j*,*<sup>k</sup>* (*x*) and scaling functions ϕ*j*0,*<sup>k</sup>* (*x*), as shown in Equation (1). In this equation, *W*φ(*j*0, *k*) and *W*ψ(*j*, *k*) are the approximation and detail coefficients, respectively, computed as shown in Equations (2) and (3).

Figure 1 shows a two-step wavelet transformation that generates four sub-bands, where ψ *<sup>H</sup>*, ψ *<sup>V</sup>*, and ψ *<sup>D</sup>* indicate variations along the horizontal, vertical, and diagonal edges, respectively. In this diagram, 2 ↓ denotes downsampling. The 2D-DWT can be executed with digital filters and downsamplers; the other sub-bands are generated with discrete 2D scaling functions, applying the 1D FWT to *f*(*x*, *y*) [32]. For the computation of the DWT coefficients, we consider the multiresolution refinement equations, shown in Equations (6) and (7):

**Figure 1.** The two-dimensional DWT diagram.

$$\phi\_{j,k}(\mathbf{x}) = \sum\_{n} 2^{\frac{j}{2}} \phi(2^j \mathbf{x} - n)\, h\_{\phi}(n) \tag{6}$$

*Mathematics* **2020**, *8*, 1268

$$\psi\_{j,k}(\mathbf{x}) = \sum\_{n} 2^{\frac{j}{2}} \psi(2^j \mathbf{x} - n) h\_{\psi}(n) \tag{7}$$

where *h*<sup>φ</sup> and *h*<sup>ψ</sup> are the scaling and wavelet vectors, respectively; they may be considered weights for the summations in Equations (6) and (7). Substituting Equations (6) and (7) into Equations (2) and (3) yields the following equations.

$$\mathcal{W}\_{\phi}(j,k) = h\_{\phi}(-n) \ast \mathcal{W}\_{\phi}(j+1,n), \quad (n = 2k, \ k \ge 0) \tag{8}$$

$$\mathcal{W}\_{\psi}(j,k) = h\_{\psi}(-n) \ast \mathcal{W}\_{\phi}(j+1,n), \quad (n = 2k, \ k \ge 0) \tag{9}$$
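In practice, Equations (8) and (9) amount to convolving the finer-scale approximation with the order-reversed filters and keeping every second sample. A minimal sketch, assuming Haar filters purely for illustration (the paper's filter bank may differ):

```python
import numpy as np

def fwt_step(approx, h_phi, h_psi):
    """One analysis step of the fast wavelet transform (Equations (8)-(9)):
    convolve the finer-scale approximation with the order-reversed scaling
    and wavelet filters, then keep every second sample (n = 2k)."""
    lo = np.convolve(approx, h_phi[::-1])[1::2]  # W_phi(j, k)
    hi = np.convolve(approx, h_psi[::-1])[1::2]  # W_psi(j, k)
    return lo, hi

# Haar filters -- an assumed, illustrative choice of h_phi and h_psi.
h_phi = np.array([1.0, 1.0]) / np.sqrt(2)
h_psi = np.array([1.0, -1.0]) / np.sqrt(2)

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
lo, hi = fwt_step(signal, h_phi, h_psi)  # 4 approximation + 4 detail coefficients
```

For orthonormal filters such as these, the signal energy is preserved across the two output bands.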

The scaling and wavelet coefficient of a certain scale *j* may be obtained via the convolution of the scaling coefficients of the next scale *j* + 1 (with finer detail) with the order-reversed scaling and wavelet vectors *h*φ(−*n*) and *h*ψ(−*n*). Based on Figure 1, the results of the first level of transformation for the column of an input image are as follows:

$$\begin{aligned} \mathcal{W}\_{\phi}(j+1,m,n) = {}& h\_{\phi}(-n) \ast \left( h\_{\phi}(-m) \ast \mathcal{W}\_{\phi}^{2\uparrow}(j,m) + h\_{\psi}(-m) \ast \mathcal{W}\_{\psi}^{2\uparrow}(j,m) \right) \\ &+ h\_{\psi}(-n) \ast \left( h\_{\phi}(-m) \ast \mathcal{W}\_{\phi}^{2\uparrow}(j,m) + h\_{\psi}(-m) \ast \mathcal{W}\_{\psi}^{2\uparrow}(j,m) \right), \quad (k \ge 0) \end{aligned} \tag{10}$$

$$\begin{aligned} \mathcal{W}\_{\phi}(j+1,m,n) = {}& h\_{\phi}(-n)h\_{\phi}(-m) \ast \mathcal{W}\_{\phi}^{2\uparrow}(j,m) + h\_{\phi}(-n)h\_{\psi}(-m) \ast \mathcal{W}\_{\psi}^{2\uparrow}(j,m) \\ &+ h\_{\psi}(-n)h\_{\phi}(-m) \ast \mathcal{W}\_{\phi}^{2\uparrow}(j,m) + h\_{\psi}(-n)h\_{\psi}(-m) \ast \mathcal{W}\_{\psi}^{2\uparrow}(j,m), \quad (k \ge 0) \end{aligned} \tag{11}$$

Generally, a 2D scaling function ϕ(*x*, *y*) and three 2D wavelets ψ *<sup>H</sup>*(*x*, *y*), ψ *<sup>V</sup>*(*x*, *y*), and ψ *<sup>D</sup>*(*x*, *y*) are generated from the 1D scaling function ϕ and its associated wavelet ψ [20].

$$
\phi(\mathbf{x}, y) = \phi(\mathbf{x})\phi(y) \tag{13}
$$

$$
\psi^H(\mathbf{x}, y) = \psi(\mathbf{x})\phi(y) \tag{14}
$$

$$
\psi^V(\mathbf{x}, y) = \phi(\mathbf{x})\psi(y) \tag{15}
$$

$$
\psi^D(\mathbf{x}, y) = \psi(\mathbf{x})\psi(y) \tag{16}
$$
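The separable construction of Equations (13)–(16) implies that one level of the 2D-DWT can be computed by filtering the rows and then the columns of the image. A minimal sketch in Python, assuming the Haar basis as an illustrative choice:

```python
import numpy as np

def haar_step(x):
    """One 1D Haar analysis step along the last axis: pairwise sums
    (scaling band) and differences (wavelet band), scaled by 1/sqrt(2)."""
    lo = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    hi = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return lo, hi

def dwt2_haar(img):
    """One level of the separable 2D-DWT implied by Equations (13)-(16):
    filter the rows, then the columns, yielding LL, LH, HL, HH sub-bands."""
    lo_r, hi_r = haar_step(img)   # filter along rows
    ll, lh = haar_step(lo_r.T)    # filter the low band along columns
    hl, hh = haar_step(hi_r.T)    # filter the high band along columns
    return ll.T, lh.T, hl.T, hh.T

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = dwt2_haar(img)  # each sub-band is 2 x 2
```

Because each 1D Haar step is orthonormal, the total energy of the four sub-bands equals that of the input image.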

#### *2.3. Generalized Autoregressive Conditional Heteroscedasticity*

Bollerslev was the first researcher to develop the GARCH method [33]. GARCH treats the variance of a time series as time-varying, for example, to capture volatility. "Conditional" indicates a dependence on the immediate past observations, and "autoregressive" describes how past data are combined into the present value. GARCH models are statistical methods that are most common in economics. Engle [34] introduced the Autoregressive Conditional Heteroscedasticity (ARCH) process, in which the conditional variance changes over time as a function of past errors while the unconditional variance remains constant. The GARCH process (Algorithm 1) is a generalization of ARCH and is a time-series modeling technique that uses past variances to predict future variances.

#### **Algorithm 1. GARCH**

1: **Input:** *y<sub>t</sub>*, *P*, *Q*, *dist*
2: **Output:** *a<sub>i</sub>*, ε<sub>t</sub>
3: Step 1: Estimate the AR(*q*) model:
4: *y<sub>t</sub>* = *a*<sub>0</sub> + *a*<sub>1</sub>*y<sub>t−1</sub>* + · · · + *a<sub>q</sub>y<sub>t−q</sub>* + ε<sub>t</sub>
5: εˆ<sup>2</sup><sub>t</sub> = *a*ˆ<sub>0</sub> + Σ<sup>q</sup><sub>i=1</sub> *a*ˆ<sub>i</sub> εˆ<sup>2</sup><sub>t−i</sub>
6: Step 2: Compute and plot the autocorrelations of ε<sup>2</sup>:
7: ρ = Σ<sup>T</sup><sub>t=i+1</sub>(εˆ<sup>2</sup><sub>t</sub> − σˆ<sup>2</sup><sub>t</sub>)(εˆ<sup>2</sup><sub>t−i</sub> − σˆ<sup>2</sup><sub>t−i</sub>) / Σ<sup>T</sup><sub>t=1</sub>(εˆ<sup>2</sup><sub>t</sub> − σˆ<sup>2</sup><sub>t</sub>)<sup>2</sup>
8: Step 3: Test the null hypothesis that there are no ARCH or GARCH errors
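As an illustration of the recursion underlying the GARCH process, the following is a minimal sketch of the GARCH(1,1) conditional-variance update; parameter estimation (as in Algorithm 1) is omitted, and the residuals and parameter values are illustrative assumptions, not fitted to data:

```python
import numpy as np

def garch11_variance(eps, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion:
    sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Illustrative residual series and parameter values.
rng = np.random.default_rng(0)
eps = 0.1 * rng.standard_normal(500)
sigma2 = garch11_variance(eps, omega=0.05, alpha=0.1, beta=0.8)
```

In feature extraction, the fitted parameters (omega, alpha, beta) of each sub-band would serve as the feature values.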

#### *2.4. Local Linear Approximation*

The Local Linear Approximation is calculated as in [35]. In this method, the first and second derivatives are estimated so as to fit a function to the observed data.

Let *x* have three values *x*(1), *x*(2), and *x*(3). The LLA of the derivative of *x* at *x*(2) is the mean of the two slopes, between *x*(1) and *x*(2) and between *x*(2) and *x*(3). The results are stored in a matrix *y*, whose *k*th row is:

$$y\_{k1} = \mathbf{x}\_{k2} \tag{17}$$

$$y\_{k2} = \frac{\mathbf{x}\_{k3} - \mathbf{x}\_{k1}}{2\tau\Delta t},\tag{18}$$

$$y\_{k3} = \frac{\mathbf{x}\_{k1} - 2\mathbf{x}\_{k2} + \mathbf{x}\_{k3}}{\left(\tau \Delta t\right)^{2}}.\tag{19}$$

$$\frac{d\mathbf{x}(1+\tau)}{dt} \approx \frac{\mathbf{x}(1+2\tau) - \mathbf{x}(1)}{2\tau\Delta t} \tag{20}$$

where the first column of *y* is the value of *x* at the moment of measurement indexed in the second column of *x*(3), and the second and third columns of *y* are the approximated first and second derivatives, respectively, at that same moment of measurement. In this case, τ = 1 since *x*(1), *x*(2), and *x*(3) are successive measures, and ∆*t* is the time interval between the measures. Other spacings (for instance *x*(1), *x*(3), and *x*(5)) can be calculated by substituting τ = 2 into Equation (20).
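Equations (17)–(19) can be sketched directly; the test signal x(t) = t², whose first and second derivatives (2t and 2) are known exactly, is an illustrative choice:

```python
import numpy as np

def lla(x, tau=1, dt=1.0):
    """Local Linear Approximation (Equations (17)-(19)): for each window of
    three samples spaced tau steps apart, return the midpoint value, the
    central-difference first derivative, and the second derivative."""
    x1, x2, x3 = x[:-2 * tau], x[tau:-tau], x[2 * tau:]
    value = x2                                   # Eq. (17)
    d1 = (x3 - x1) / (2 * tau * dt)              # Eq. (18)
    d2 = (x1 - 2 * x2 + x3) / (tau * dt) ** 2    # Eq. (19)
    return np.column_stack([value, d1, d2])

t = np.linspace(0.0, 1.0, 101)
y = lla(t ** 2, tau=1, dt=t[1] - t[0])  # columns: x, dx/dt ~ 2t, d2x/dt2 ~ 2
```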

#### *2.5. K-Nearest Neighbour Algorithm*

KNN is a simple form of machine learning [31,36]. In this algorithm, an object is classified by the values of its *k* (∈ N<sup>+</sup>) nearest neighbors [37]. The similarity of each object in a class is utilized as the weight of the class. If several of the *k* nearest neighbors share a category, the per-neighbor weights of that category are added together, and the resulting weighted sum is used as the probability score of the candidate categories; a ranked list is thus obtained for the test sample. By thresholding these scores, binary category assignments are obtained.
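The voting scheme described above can be sketched as follows; this is a minimal unweighted variant with Euclidean distance, not necessarily the exact weighting used in the paper:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance); the per-class vote counts play the role of the
    weighted scores described above."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Illustrative 2-D training data with two well-separated classes.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y_train = np.array([0, 0, 0, 1, 1])
label = knn_predict(X_train, y_train, np.array([4.8, 5.2]), k=3)  # -> 1
```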

#### *2.6. Proposed Method*

In this paper, we aim to use mathematical methods to diagnose brain diseases. We implemented three methods for the classification and diagnosis of brain tumors (Algorithm 2). The first presented method is Wavelet-GARCH-KNN (WGK). In this method, we first used a two-stage 2D-DWT to decompose the input images into sub-bands of wavelets. The obtained wavelet coefficients serve as classification features. The GARCH model was then used for feature extraction from the first-stage HH1, HL1, and LH1 and second-stage HH2, HL2, and LH2 sub-bands. Because of the incompatibility of the LL sub-band with the GARCH model, this sub-band was ignored. To reduce the number of features, the PCA and PCA + LDA method

was then used, and the extracted features of the brain lesions were classified with the KNN method. The results are illustrated in the results section.

The second presented method is named Developed Wavelet-GARCH-KNN (D-WGK). In the second method, we overcame the limitation of the WGK algorithm by using homomorphic filtering before a wavelet transformation. Therefore, the LL2 sub-band participated in the GARCH model. Then, similarly to the WGK method, the KNN method was designated for the classification of brain tumors.

The third method is Wavelet-LLA-KNN (WLK). In this method, all sub-bands of wavelet decomposition were used for modeling with the LLA algorithm. The remaining part of the third method was also similar to the WGK and D-WGK method. The results of each algorithm are depicted in the below sections. The structure and proposed model in this study are shown in Figure 2.

**Figure 2.** The block diagram of the proposed method.

#### **Algorithm 2. Presented**

```
1: Input: y_{m×m} ∈ R^{m×m}
2: Switch:
```


#### **3. Results and Discussion**

#### *3.1. Datasets*

In this paper, we used images of seven brain diseases to implement and test the presented methods: Alzheimer's disease, Alzheimer's disease with visual agnosia, Glioma, Huntington's disease, Meningioma, Pick's disease, and Sarcoma. These diseases, together with normal brain images, comprise 240 MRI images from the Harvard Medical School website. All images are T2-weighted MR brain images in the axial plane with 256 × 256 pixels. The images were saved in separate folders and studied separately; after feature extraction, the features were aggregated into a single dataset.

#### *3.2. Two-dimensional Discrete Wavelet Transforms (2D-DWT)*

In this paper, we used the 2D-DWT to separate the sub-bands of the images. In this transformation, the 256 × 256-pixel input images are decomposed into 131 × 131 first-stage sub-bands and 69 × 69 second-stage sub-bands. An example of a wavelet transformation is shown in Figure 3. Additionally, for the second aforementioned method, we needed homomorphic filtering.


**Figure 3.** The sub-bands of the wavelet transformation.

The results of the wavelet discretization when using homomorphic filtering (σ = 5) are shown in Figure 4. The top images show the original image without and with the filtration, and the first and second transformations are shown on the left and right sides, respectively. According to the literature, the GARCH model is not compatible with the LL2 sub-band [38]; this situation is clearly shown in Figure 3 (LL2). Because the brain sections of the images were almost uniform, the coefficients of the GARCH (1, 1) model fitted to the LL2 sub-band were not significant. In this paper, we overcame this limitation and made the LL2 sub-band compatible with the GARCH (1, 1) model: we applied homomorphic filtering to the original image before performing the 2D-DWT. With this method, we increased the contrast of the LL2 sub-band, as can be seen in Figure 4 (LL2).
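The paper does not give the exact homomorphic filter, so the sketch below uses a common formulation: the log of the image is filtered with a Gaussian high-frequency-emphasis transfer function in the Fourier domain and then exponentiated. The parameters sigma, gamma_l, and gamma_h are assumed illustrative values:

```python
import numpy as np

def homomorphic_filter(img, sigma=5.0, gamma_l=0.5, gamma_h=2.0):
    """Homomorphic filtering sketch: take the log of the image, apply a
    Gaussian high-frequency-emphasis transfer function in the Fourier
    domain, and exponentiate the result."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d2 = u ** 2 + v ** 2
    # gamma_l (< 1) attenuates the illumination at low frequencies;
    # gamma_h (> 1) boosts reflectance detail at high frequencies.
    H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-(sigma ** 2) * d2))
    log_img = np.log1p(img.astype(float))
    filtered = np.real(np.fft.ifft2(H * np.fft.fft2(log_img)))
    return np.expm1(filtered)

img = np.outer(np.linspace(1.0, 10.0, 64), np.linspace(1.0, 10.0, 64))
out = homomorphic_filter(img)  # same shape, contrast-enhanced
```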

**Figure 4.** The sub-bands of the wavelet transformation using homomorphic filtering.

#### *3.3. Feature Reduction*

In this section, we used nonconvulsive status epilepticus (NCSE) to extract features and classify them. Via this method, we can classify the features in two states: two-class and eight-class. In the two-class state, we classify the features into two classes to distinguish normal from abnormal MRI images; using this state, we can separate patient from healthy brain images. Moreover, using the eight-class state, we can classify the brain images into seven disease classes in addition to the normal brain images.

In this paper, we studied different methods to reduce features, consisting of:

- WGK: using GARCH without LL2 + PCA
- WGK: using GARCH without LL2 + PCA + LDA
- D-WGK: using homomorphic filtering + GARCH with LL2 + PCA
- WLK: using LLA + PCA
- WLK: using LLA + PCA + LDA

The results are depicted in Figures 5–8. Figure 5 shows the feature-reduction plots used to find the best number of features. In this figure, we used the 2D-DWT and GARCH (1, 1) methods, with and without the LL2 sub-band. The results show that adding LL2 to the GARCH method improves the model. Furthermore, the results of the PCA method show that 14 features suffice for the classification of the images.

This enhancement is also shown in Figure 6 for the two-class state. In this state as well, the method is improved, and the number of features is reduced from 6 to 5. This can speed up the classification and increase the accuracy of the methods, because all sub-bands of the 2D-DWT are used for classification with fewer features.
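The curves in Figures 5–8 are normalized cumulative sums of eigenvalues; the number of retained features is where the curve crosses a chosen threshold. A minimal PCA sketch of this selection rule (the data, threshold, and names are illustrative assumptions):

```python
import numpy as np

def n_components_for(X, threshold=0.95):
    """Number of principal components whose normalized cumulative
    eigenvalue sum (the quantity plotted in Figures 5-8) reaches
    the given threshold."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    eigvals = s ** 2 / (len(X) - 1)           # covariance eigenvalues
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cumulative, threshold) + 1)

# Illustrative data: 3 strong directions embedded in 10 dimensions.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 10))
X += 0.01 * rng.standard_normal((200, 10))
k = n_components_for(X, threshold=0.95)  # at most 3 for this data
```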

**Figure 5.** Normalized cumulative summation of eigenvalues of training data for GARCH (1, 1) with and without LL2 (Eight-classes).

**Figure 6.** Normalized cumulative summation of eigenvalues of training data for GARCH (1, 1) with and without LL2 (two-classes).


**Figure 7.** Normalized cumulative summation of eigenvalues of training data for the GARCH and LLA methods for different PCA and PCA + LDA methods (eight-classes).

**Figure 8.** Normalized cumulative summation of eigenvalues of training data for the GARCH and LLA methods for different PCA and PCA + LDA methods (two-classes).

Figures 7 and 8 show the feature-reduction results for the presented LLA and GARCH models using the PCA and PCA + LDA methods. In the eight-class state (Figure 7), the best number of features for the GARCH + PCA method was 20, which decreased to 10 for GARCH + PCA + LDA. Using the presented method, the number of features for LLA + PCA decreased to 7; for LLA + PCA + LDA, the best number of features was 3. The results show that the last presented method reduced the number of features to 3, making it excellent for feature reduction.

For the two-class state (Figure 8), this reduction is conspicuous. The separation between the classes is high, which indicates the ability of LLA + PCA + LDA to increase the inter-class distances and decrease the intra-class distances.

#### *3.4. The Classification Results*

In this paper, we used the K-Nearest Neighbor (KNN) method to classify the input features. KNN is a non-parametric method used in data mining, machine learning, and pattern recognition, and is among the ten most used algorithms in various machine learning and data mining projects in industry. The KNN algorithm can be used for both classification and regression problems, although it is most often used for classification.

The value of K in the KNN method is one of the parameters that affects classification. The mean classification accuracy was determined for values of K increased from 1 to 11 in steps of two for both states. The results are depicted in Figure 9. They show that, for K ≤ 5, the classifier has good efficiency. Furthermore, the accuracy of the LLA method with the KNN classifier is greater than that of the GARCH method.

**Figure 9.** The plots of the accuracy of the two- and eight-class methods for different K values.

In statistics, the sensitivity and specificity indicators are used to evaluate the result of a binary (two-class) classification. When the data can be divided into positive and negative groups, the accuracy of a test that divides the information into these two categories can be measured and described using the sensitivity and specificity indicators. Sensitivity is the proportion of positive cases that the test correctly identifies as positive. Specificity is the proportion of negative cases that the test correctly marks as negative.

True Positive (TP): the disease is diagnosed correctly.

False Positive (FP): a healthy person is mistakenly diagnosed as diseased.

True Negative (TN): a healthy person is correctly identified as healthy.

False Negative (FN): the disease is missed, i.e., a diseased person is diagnosed as healthy.

The sensitivity parameter is calculated by dividing the number of TP cases by the sum of the TP and FN cases.

$$\text{sensitivity, TPR} = \frac{\text{TP}}{\text{TP} + \text{FN}}\tag{21}$$

Similarly, the specificity is calculated by dividing the number of TN cases by the sum of the FP and TN cases.

$$\text{specificity, TNR} = \frac{\text{TN}}{\text{TN} + \text{FP}} \tag{22}$$

Other classification criteria, such as precision, accuracy, and fall-out, are defined in Equations (23)–(25).

$$\text{Precision}, \text{PPV} = \frac{\text{TP}}{\text{TP} + \text{FP}} \tag{23}$$

$$accuracy(\text{ACC}) = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \tag{24}$$

$$\text{fall-out, FPR} = \frac{\text{FP}}{\text{FP} + \text{TN}}\tag{25}$$
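The five criteria of Equations (21)–(25) can be collected into one small helper. The example counts below are an assumption consistent with the two-class results reported around Figure 11 (192 of 210 abnormal images correct, all normal images correct, and 30 normal images, which reproduces the stated 92.5% overall accuracy); fall-out uses the standard definition FPR = FP/(FP + TN), the complement of specificity:

```python
def binary_metrics(tp, fp, tn, fn):
    """Confusion-matrix criteria of Equations (21)-(25)."""
    return {
        "sensitivity (TPR)": tp / (tp + fn),                   # Eq. (21)
        "specificity (TNR)": tn / (tn + fp),                   # Eq. (22)
        "precision (PPV)":   tp / (tp + fp),                   # Eq. (23)
        "accuracy (ACC)":    (tp + tn) / (tp + tn + fp + fn),  # Eq. (24)
        "fall-out (FPR)":    fp / (fp + tn),                   # Eq. (25)
    }

# Counts assumed from the two-class results: "abnormal" is the positive
# class, 192 of 210 abnormal correct (TP), 18 missed (FN), and all of
# the (assumed 30) normal images correct (TN), giving ACC = 222/240.
for name, value in binary_metrics(192, 0, 30, 18).items():
    print(f"{name}: {value:.4f}")
```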

The results of the classification using the LLA method are shown in Tables 1 and 2. Table 1 compares the presented methods. The results show that the maximum accuracy belongs to the presented LLA method combined with feature reduction by PCA and LDA, while the minimum accuracy belongs to the GARCH method with PCA alone.


**Table 1.** The comparison between the presented methods.

**\*** Martínez, et al. [39]; \*\* Kalbkhani, Shayesteh and Zali-Vargahan [38].

**Table 2.** The results of the classification for eight-class states using PCA + LDA (WLK).


Therefore, the presented methods can be ranked by maximum sensitivity and minimum fall-out as PCA + LDA (WLK), PCA (WLK), PCA + LDA (WGK), and PCA (WGK) (see Figure 10). This shows that the LLA method is better than the GARCH model in terms of robustness, sensitivity, and accuracy. Moreover, the combination of PCA and LDA produces better results than PCA alone. One of the main reasons why the GARCH model does not produce a good model is its incompatibility with some images. Table 2 shows the results of the classification in the eight-class state; they indicate acceptable performance for most diseases. The diagnosis of Pick and Sarcoma is somewhat less accurate than that of the others because of the complex images of these diseases.

Figure 11 shows the confusion matrix of the presented model of the hybrid PCA, LDA, and LLA methods for the diagnosis of normal and abnormal images. The main diagonal of the matrix shows the number of images detected correctly. Of the 210 abnormal images, 18 (8.57%) were recognized as normal lesions, while 192 (91.43%) were diagnosed correctly. All of the normal images were detected, so 92.5% of all images were correctly classified (accuracy), while 7.5% were misclassified.


**Figure 10.** The Receiver Operating Characteristic (ROC) curve of the presented method.

**Figure 11.** The confusion matrix for the two-class state using PCA + LDA (LLA).

Figure 12 shows the confusion matrix of the presented method for the eight-class classification. The bottom row of the matrix shows the percentage of each disease that was detected correctly (sensitivity). The maximum detection percentage belonged to normal images, followed by Huntington and Meningioma. However, only 93.3% of Sarcomas were diagnosed correctly. Overall, 92.5% of all images were classified in the proper class (accuracy), while 7.3% could not be recognized and were incorrectly classed. The red cells show incorrect classifications. In each column, the sum of the elements equals the number of images of each disease; for example, for Alzheimer's (first column), 28 of 30 images were diagnosed correctly, while two were classified into the Alzheimer-plus category.


**Figure 12.** The confusion matrix for the eight-class state using PCA + LDA (WLK).

#### **4. The Complexity Analysis**

In the proposed method, we used five major approaches, so we should calculate the complexity of each. The complexity of PCA is *O*(*min*(*p*<sup>3</sup>, *n*<sup>3</sup>)), where *p* is the number of features and *n* is the data abundance (image size 256 × 256) [40]. The complexity of the LDA method for feature extraction is *O*(*np*<sup>2</sup>) if *n* > *p*; otherwise, it is *O*(*p*<sup>3</sup>). The complexity of the 2D-DWT is *O*(4*Mn*<sup>2</sup> *log*<sub>2</sub>*n*), where *M* is the number of vanishing moments of the mother wavelet used. The complexity of the GARCH (1, 1) method depends on the autocorrelation complexity and is *O*(*n*), where, in this case, *n* is 256 × 256. The complexity of the LLA method is *O*(*n*′*n*), where *n*′ is the order of the derivative in the method; in this case, *n*′ = 2. The complexity of the KNN method is *O*(*npk*), where, in this case, *k* = 1. Therefore, the complexities of the presented methods are as follows: PCA (GARCH) is *O*(*min*(*p*<sup>3</sup>, *n*<sup>3</sup>) + *n*), PCA + LDA (GARCH) is *O*(*min*(*p*<sup>3</sup>, *n*<sup>3</sup>) + *np*<sup>2</sup> + *n*), PCA (LLA) is *O*(*min*(*p*<sup>3</sup>, *n*<sup>3</sup>) + 2*n*), and PCA + LDA (LLA) is *O*(*min*(*p*<sup>3</sup>, *n*<sup>3</sup>) + *np*<sup>2</sup> + 2*n*). The LLA group is therefore somewhat more complex than the GARCH group; however, its results are remarkable and compatible with all of the images.
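To get a feel for these magnitudes, the dominant terms can be evaluated numerically. The values n = 256 × 256 (image size) and p = 20 (features before reduction) are taken from the text; constant factors are ignored by O-notation, so this is only an order-of-magnitude sketch:

```python
# Rough magnitude comparison of the complexity terms quoted above,
# with n = 256*256 (image size) and p = 20 features (values from the text).
n, p = 256 * 256, 20

pca   = min(p**3, n**3)               # O(min(p^3, n^3))
lda   = n * p**2 if n > p else p**3   # O(np^2) if n > p, else O(p^3)
garch = n                             # O(n)
lla   = 2 * n                         # O(n'n) with n' = 2

methods = {
    "PCA (GARCH)":     pca + garch,
    "PCA+LDA (GARCH)": pca + lda + garch,
    "PCA (LLA)":       pca + lla,
    "PCA+LDA (LLA)":   pca + lda + lla,
}
for name, cost in methods.items():
    print(f"{name:16s} ~ {cost:,}")
```

As the text states, each LLA variant is only slightly more expensive than the corresponding GARCH variant (one extra factor of n′ = 2 on the modeling term), while adding LDA dominates both.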

#### **5. Conclusions**

In this paper, a hybrid algorithm for the diagnosis of brain disease in MRIs is presented. Initially, the two-level 2D-DWT of the input images was calculated. The sub-band wavelet coefficients were then modeled using the GARCH and LLA models. We conducted five studies in this paper. After applying the 2D-DWT and separating the image into six sub-bands, we modeled the sub-bands with GARCH (1, 1) without using the Low-Low sub-band of the second wavelet level (the WGK method), because this sub-band is incompatible with the GARCH (1, 1) method. To overcome this condition, we applied homomorphic filtering before the 2D-DWT (the D-WGK method). The results showed that, with homomorphic filtering, the LL2 sub-band, which carries the most image information, could be utilized in the GARCH (1, 1) method with high performance. Moreover, we used the LLA method to model the 2D-DWT sub-bands; in this method, all of the sub-bands were used to model the features (the WLK method). The results showed that, using the LLA method, we could reduce the number of features from 20 to 3. The images were then classified using the KNN method. The results demonstrated the high accuracy and robustness of the presented methods and showed that the WLK method was better than the WGK and D-WGK models in terms of robustness, sensitivity, and accuracy. Furthermore, the hybrid of PCA and LDA produced better results than PCA alone. One of the main reasons why the GARCH model did not produce a good model is its incompatibility with some images; we overcame this problem in D-WGK with the use of homomorphic filtering. The results of the eight-class classification (diagnosis of disease type) showed acceptable performance for most diseases. The diagnosis of Pick and Sarcoma was somewhat less accurate than that of the others because of the complex images of these diseases. Of the abnormal images, 8.57% were recognized as normal lesions, while 91.43% were diagnosed correctly. All of the normal images were detected, and the overall accuracy was 92.5%. The maximum detection percentage belonged to normal images, followed by Huntington and Meningioma, while only 93.3% of Sarcomas were classified correctly. In the end, 92.5% of all images were classified in their proper class, with a 7.3% error. Future work should focus on increasing the dataset volume for the diagnosis of brain tumors. The method could also be applied to other MRI images, such as those for breast cancer and prostate cancer. Novel deep learning methods could also be enriched with this feature extraction method, which increases processing speed and accuracy.

**Author Contributions:** Conceptualization, A.H.; Formal analysis, A.H. and S.J.G.; Methodology, A.H. and V.B.; Project administration, S.J.G. and V.B.; Resources, A.H. and A.M.; Software, A.H.; Supervision, S.J.G.; Validation, A.H.; Visualization, S.J.G. and V.B.; Writing—original draft, A.M.; Writing—review & editing, A.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** The funding sources had no involvement in the study design, collection, analysis or interpretation of data, writing of the manuscript or in the decision to submit the manuscript for publication.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Ethical Approval:** This article does not contain any studies with human participants or animals performed by any of the authors.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Eliminating Rank Reversal Problem Using a New Multi-Attribute Model—The RAFSI Method**

#### **Mališa Žižović <sup>1</sup>, Dragan Pamučar <sup>2,</sup>\*, Miloljub Albijanić <sup>3</sup>, Prasenjit Chatterjee <sup>4</sup> and Ivan Pribićević <sup>5</sup>**


Received: 16 May 2020; Accepted: 19 June 2020; Published: 21 June 2020

**Abstract:** Multi-attribute decision-making (MADM) methods represent reliable ways to solve real-world problems for various applications by providing rational and logical solutions. In reaching such a goal, MADM methods are expected to eliminate inconsistencies such as rank reversal in a given solution. In this paper, an endeavor is made to put forward a new MADM method, called RAFSI (Ranking of Alternatives through Functional mapping of criterion sub-intervals into a Single Interval), which successfully eliminates the rank reversal problem. The developed RAFSI method has three major advantages that recommend it for further use: (i) its simple algorithm helps in solving complex real-world problems, (ii) the RAFSI method has a new approach for data normalization, which transfers data from the starting decision-making matrix into any interval suitable for making rational decisions, and (iii) the mathematical formulation of the RAFSI method eliminates the rank reversal problem, which is one of the most significant shortcomings of existing MADM methods. A real-time case study that shows the advantages of the RAFSI method is presented. An additional comprehensive analysis is also carried out, including a comparison with three other traditional MADM methods that use different ways of normalizing data and a test of the resistance of the RAFSI method and the other MADM methods to the rank reversal problem.

**Keywords:** multi-criteria optimization; RAFSI method; performance comparison; rank reversal

#### **1. Introduction**

Multi-criteria optimization (MCO) methods represent powerful tools for making rational decisions while being engaged in various types of activities. Studies in MCO problems have particularly been prevalent in recent decades [1]. The reasons for such developments lie both in theoretical and practical points of view. In a theoretical sense, MCO is attractive as it studies insufficiently structured problems, while, in a practical sense, MCO represents a powerful way for choosing adequate actions. Furthermore, MCO methods are unavoidable for designing appropriate tools to explore diverse systems.

MCO methods can be classified into five groups [2]: (1) methods for determining non-inferior solutions, which determine the set of non-inferior solutions while leaving it to the decision-makers (DMs) to adopt the final solution based on their preferences (this group includes the weighting coefficient methods, the restriction method in the criteria functions environment, and the Simplex method); (2) methods with a predetermined preference, which are used to form a synthesizing (resultant) criterion function (this group includes almost all multi-attribute decision-making (MADM) methods); (3) interactive methods, in which DMs express their preferences interactively; (4) stochastic methods, where indicators of uncertainty are included in the optimization model; and (5) methods for emphasizing a subset of non-inferior solutions, which narrow down the subset of non-inferior results by introducing additional elements for making rational decisions.

MADM methods involve sound mathematical steps for processing information to evaluate alternatives concerning a predetermined set of criteria, which is the main focus of this paper. It is performed to establish a ranking of solutions and the best choice. Some of the most predominant representative methods of this group are


MADM methods play a significant role in solving real-world problems in several areas. Let us mention some interesting studies, which show the diversity of applications of MADM methods. Orji and Wei [13] applied a hybrid decision-making trial and evaluation laboratory (DEMATEL)-TOPSIS model for sustainable supplier selection. Rabbani et al. [14] modified traditional MADM methods using fuzzy sets and demonstrated their application in logistics. Mahdi Paydar et al. [15] applied the fuzzy Multi-Objective Optimization Method by Ratio Analysis (MOORA) and Failure Mode and Effects Analysis (FMEA) methods in the Iranian chemical industry application. Zhou and Xu [16] used DEMATEL, Analytic Network Process (ANP), and VIKOR methods in sustainable supplier selection. Lu et al. [17] extended the ELECTRE method using a rough set theory. Si et al. [18] showed the possibilities of applying picture fuzzy numbers in MADM. Noureddine and Ristic [19] combined the Full Consistency Method (FUCOM), TOPSIS, and MABAC with the Dijkstra algorithm for optimizing the transport of dangerous cargo. Badi et al. [20] used a gray-based assessment model to evaluate healthcare waste treatment alternatives in Libya. Krmac and Djordjevic [21] applied the TOPSIS method for evaluating the influence of the Train Control Information System on capacity utilization.

One of the most important problems that occur in most MADM methods with predetermined preferences is the lack of resistance to rank reversal problems. If unexpected changes in the ranking of alternatives occur when any non-optimal alternative is added or deleted from the existing set of alternatives, this indicates serious mathematical issues in the applied MADM method. This problem can be illustrated with the following example in which three candidates are examined (candidates A, B, and C) who applied for the same work position. A MADM method is used to rank the candidate alternatives and the method suggested the following ranking of the candidates: A > B > C. Furthermore, it is assumed that candidate B (with the second rank) is replaced with a poor candidate D, which kept candidates A and C unchanged. If this new set of alternatives (A, D, and C) is now ranked by the same method under the same criteria weights, it is expected that the applied MADM method would again suggest candidate A as the best solution under the new conditions. However, in actual practice, some unwanted changes in the ranking order of the alternatives occur for the majority of the MADM methods [22].

The rank reversal problem was noticed and presented for the first time by Belton and Gear [22], who analyzed the use of Analytic Hierarchy Process (AHP) for ranking alternatives. In their research, they conducted a simple experiment in which three alternatives and two criteria were analyzed. After the initial ranking of the alternatives, they formed a new set of alternatives by introducing a copy of the non-optimal alternative. After evaluating this new set of alternatives while keeping the same criteria weights, inconsistencies were observed as the ranking order of the best alternative was changed. Thus, they concluded that AHP suffers from rank reversal phenomena. A few years later, Triantaphyllou and Mann [23] noticed the same problem again in AHP when the worst alternative was replaced by a non-optimal alternative. Triantaphyllou and Mann [23] also conducted the same experiment on two other methods, which included the Weighted Sum Model (WSM) and Weighted Product Model (WPM), and concluded that none of these methods were efficient in solving the rank reversal problem. Afterward, Triantaphyllou and Lin [24] further tested five MADM methods, including WSM, WPM, AHP, revised AHP, and TOPSIS in terms of the same two evaluative criteria in the fuzzy environment and came to the same conclusions. Then, many authors pointed out the rank reversal problem in many other MADM methods [25–30].

Furthermore, there is a large number of MADM methods already developed in the past few years, which give successful results for solving practical problems [31]. Nevertheless, most of these methods are not able to successfully eliminate the rank reversal problem. Among such methods, only the lattice MADM method can successfully eliminate the rank reversal problem [12]. However, this method has a complex mathematical algorithm and requires profound knowledge in net theory [32]. The complexity of the lattice algorithm significantly limits its broader use [33]. Moreover, several studies have shown that the rank reversal problem can be solved when traditional methods are substantially modified [34–36]. Keeping in mind that MADM methods are often used in the condition of dynamic changes in the initial decision matrix, authors of this research have paid attention to the development of a new MADM method, called Ranking of Alternatives through Functional mapping of criterion sub-intervals into a Single Interval (RAFSI) method that eliminates rank reversal problems. Besides eliminating the rank reversal problem, RAFSI method is also characterized by simple mathematical formulations that can be easily used for solving complex problems. RAFSI method integrates three starting points for making consistent decisions, which encompass (1) defining referential criteria points including ideal and anti-ideal criteria values, (2) defining relations between the considered alternatives and ideal/anti-ideal values, and (3) using a new technique for data normalization, based on defining criteria functions that map criteria sub-intervals into a unique criteria interval.

According to the results shown in this paper, three main advantages of the RAFSI method distinguish it from the other traditional MADM methods, which include (1) a simple algorithm of RAFSI method that enables DMs to solve complex problems, (2) use a new data normalization technique that converts an initial decision matrix into a unique criterion interval, and (3) resistance of the RAFSI method to rank reversal problems. We are emphasizing this phenomenon since it can be especially seen in dynamic conditions of decision-making where some alternatives often change during the process of making decisions, and MADM methods are often used in such conditions. Based on these advantages of RAFSI method, one of the most important contributions of this paper is to enrich the MADM research domain by developing a new method, which enables the DMs to make stable and coherent decisions in dynamic and uncertain environments.

After the introductory discussion on motivation, goals, and contributions, the content of the paper is presented as follows. In Section 2, the mathematical formulation of the RAFSI method is presented. Section 3 covers the application of RAFSI method for a real-time case study by considering six alternatives and five criteria. Results' validation and performance comparisons are presented in Section 4. Lastly, Section 5 concludes the paper with future research directions.

#### **2. RAFSI Method**

Let us assume that the DMs have to rank *m* alternatives on the basis of *n* criteria *C*1, *C*2, . . . , *Cn*. The criteria weights (*w<sup>j</sup>*, *j* = 1, 2, . . . , *n*) meet the condition *w*<sup>1</sup> + *w*<sup>2</sup> + . . . + *w<sup>n</sup>* = 1. The criteria *C*1, *C*2, . . . , *C<sup>n</sup>* can be of maximizing type (*max*) or minimizing type (*min*). The alternatives *Ai* (*i* = 1, 2, . . . , *m*) are defined by their respective values (*aij*) on each criterion (*C<sup>j</sup>*). The initial decision matrix is shown below.

$$N = \begin{array}{c} \mathbb{C}\_1 \ \mathbb{C}\_2 \ \dots \ \mathbb{C}\_n \\ \begin{matrix} A\_1 \\ A\_2 \\ \vdots \\ A\_m \end{matrix} \left[ \begin{array}{cccc} n\_{11} & n\_{12} & \cdots & n\_{1n} \\ n\_{21} & n\_{22} & \cdots & n\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ n\_{m1} & n\_{m2} & \cdots & n\_{mn} \end{array} \right] \end{array} \tag{1}$$

The RAFSI method has the following steps.

Step 1: Define ideal and anti-ideal values. For each criterion *Cj*(*j* = 1, 2, . . . , *n*), the DM defines two values *aI<sup>j</sup>* and *aN<sup>j</sup>* , where *aI<sup>j</sup>* represents the ideal value of criterion *C<sup>j</sup>* , while *aN<sup>j</sup>* represents an anti-ideal value of criterion *C<sup>j</sup>* . It is clear that *aI<sup>j</sup>* > *aN<sup>j</sup>* for max criteria and *aI<sup>j</sup>* < *aN<sup>j</sup>* for min criteria. 

Step 2: Mapping of the elements of the initial decision matrix into criteria intervals. Based on the ideal and anti-ideal values defined in the previous step, the criteria intervals are formed.


In order to make all criteria of the initial decision matrix commensurable, that is, to transfer them into the criteria interval [*n*1, *n*2*<sup>k</sup>*], we form a sequence of numbers by inserting *k*−1 points between the lowest and the highest values of the criteria interval.

$$n\_1 < n\_2 \le n\_3 < n\_4 \le n\_5 < n\_6 \dots \le n\_{2k-1} < n\_{2k} \tag{2}$$

The criteria interval is constant for all criteria and it has *n*<sup>1</sup> and *n*2*<sup>k</sup>* fixed points. Then we can map sub-intervals of the criteria into criteria intervals using functions *f*1, *f*2, *f*3, that is *f<sup>s</sup>* , as shown in Figure 1.

**Figure 1.** Mapping of sub-intervals into the criteria interval.

The map of minimum value *aN<sup>j</sup>* (for *max* criteria) and *aI<sup>j</sup>* (for *min* criteria) is *n*1. Additionally, the map of maximum value *aI<sup>j</sup>* (for *max* criteria) and *aN<sup>j</sup>* (for *min* criteria) is *n*2*<sup>k</sup>* . It is suggested that the ideal value is at least six times better than the anti-ideal (barely acceptable value), or *n*<sup>1</sup> = 1 and *n*2*<sup>k</sup>* = 6. However, the DM can use other preferred values such as *n*<sup>1</sup> = 1 and *n*2*<sup>k</sup>* = 9.


We define a function *fs*(*x*), which maps sub-intervals into the criteria interval [*n*1, *n*2*<sup>k</sup>* ] by Formula (3) below. The endpoints of the interval [*n*1, *n*2*<sup>k</sup>* ] determine the ratio of a barely acceptable alternative to the ideal alternative. This ratio is set up by the DM.

$$f\_{\mathbf{s}}(\mathbf{x}) = \frac{n\_{2\mathbf{k}} - n\_{1}}{a\_{I\_{j}} - a\_{N\_{j}}} \mathbf{x} + \frac{a\_{I\_{j}} \cdot n\_{1} - a\_{N\_{j}} \cdot n\_{2\mathbf{k}}}{a\_{I\_{j}} - a\_{N\_{j}}} \tag{3}$$

where *n*2*<sup>k</sup>* and *n*<sup>1</sup> represent the relation that shows the extent to which the ideal value is preferred over the anti-ideal value, and where *aI<sup>j</sup>* and *aN<sup>j</sup>* represent ideal and anti-ideal values of criteria *C<sup>j</sup>* , respectively.

Expression (3) can act as one part of a function that maps a part of the interval [*aN<sup>j</sup>*, *aI<sup>j</sup>*] into the interval [*n*1, *n*2*<sup>k</sup>*]. In this case, all of these parts, that is, all of the functions *f*1(*x*), *f*2(*x*), . . . , *fn*(*x*), together represent a function *fs*(*x*) that maps the entire criterion interval into the defined numerical interval. Thus, Expression (3) can represent a function that maps a part of an interval, but it can also map a complete criterion interval into the corresponding numerical interval. Therefore, the numbers *aI<sup>j</sup>* and *aN<sup>j</sup>* can represent (1) values from inside the criterion interval or (2) the endpoints of the criterion interval. The second possibility is used in this paper.
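Expression (3) is a simple affine map and can be sketched directly. The ideal/anti-ideal pair 200/120 below corresponds to the first max-type criterion of the case study in Section 3:

```python
def f_s(x, a_I, a_N, n1=1.0, n2k=6.0):
    """Expression (3): affine map sending the anti-ideal value a_N to n1
    and the ideal value a_I to n2k (sketch of the RAFSI mapping step)."""
    return (n2k - n1) / (a_I - a_N) * x + (a_I * n1 - a_N * n2k) / (a_I - a_N)

# Criterion C1 of the Section 3 case study (max type, a_I = 200, a_N = 120):
print(f_s(120, 200, 120))  # anti-ideal maps to 1.0
print(f_s(200, 200, 120))  # ideal maps to 6.0
print(f_s(180, 200, 120))  # 4.75, the value computed for A1 in Section 3
```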

In this way, the standardized decision matrix *S* = [*sij*]*m*×*n* (*i* = 1, 2, . . . , *m*, *j* = 1, 2, . . . , *n*) is obtained, in which all elements of the matrix are mapped into the interval [*n*1, *n*2*<sup>k</sup>*]. After the functional mapping of the elements of the initial decision matrix into the criteria interval [*n*1, *n*2*<sup>k</sup>*], the condition *n*<sup>1</sup> ≤ *sij* ≤ *n*2*<sup>k</sup>* holds for every *i*, *j*.

$$S = \begin{array}{c} \mathcal{C}\_1 \ \mathcal{C}\_2 \ \dots \ \mathcal{C}\_n \\ \begin{matrix} A\_1 \\ A\_2 \\ \vdots \\ A\_m \end{matrix} \left[ \begin{array}{cccc} s\_{11} & s\_{12} & \cdots & s\_{1n} \\ s\_{21} & s\_{22} & \cdots & s\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ s\_{m1} & s\_{m2} & \cdots & s\_{mn} \end{array} \right] \end{array} \tag{4}$$

In the above formula, the elements *sij* of the matrix are obtained using Expression (3), that is, *sij* = *fA<sup>i</sup>*(*C<sup>j</sup>*).

Note the following:


Step 3: Calculate the arithmetic and harmonic means. Using Expressions (5) and (6), the arithmetic and harmonic means of the minimum and maximum elements *n*<sup>1</sup> and *n*2*<sup>k</sup>* are calculated.

$$A = \frac{n\_1 + n\_{2k}}{2} \tag{5}$$

$$H = \frac{2}{\frac{1}{n\_1} + \frac{1}{n\_{2k}}} \tag{6}$$

Step 4: Form the normalized decision matrix *S*ˆ = [*s*ˆ*ij*]*m*×*n* (*i* = 1, 2, . . . , *m*, *j* = 1, 2, . . . , *n*). Using Expressions (7) and (8), the elements of the matrix *S* are normalized and transferred into the interval [0, 1].

(a) for the criteria *C<sup>j</sup>* (*j* = 1, 2, . . . , *n*) max type:

$$\hat{s}\_{ij} = \frac{s\_{ij}}{2A} \tag{7}$$

(b) for the criteria *C<sup>j</sup>* (*j* = 1, 2, . . . , *n*) min type:

$$\hat{s}\_{ij} = \frac{H}{2s\_{ij}} \tag{8}$$

In this way, a new normalized decision matrix is created, as shown below.

$$\hat{S} = \begin{array}{c} \mathsf{C}\_{1} \ \mathsf{C}\_{2} \ \dots \ \mathsf{C}\_{n} \\ \begin{matrix} A\_{1} \\ A\_{2} \\ \vdots \\ A\_{m} \end{matrix} \left[ \begin{array}{cccc} \hat{s}\_{11} & \hat{s}\_{12} & \dots & \hat{s}\_{1n} \\ \hat{s}\_{21} & \hat{s}\_{22} & \dots & \hat{s}\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{s}\_{m1} & \hat{s}\_{m2} & \dots & \hat{s}\_{mn} \end{array} \right] \end{array} \tag{9}$$

where *s*ˆ*ij* ∈ [0, 1] represents the normalized elements of *S*ˆ.
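Steps 3 and 4 can be sketched together as one small helper; the default boundary values n1 = 1 and n2k = 6 are the ones suggested in Step 2:

```python
def normalize(s_ij, crit_type, n1=1.0, n2k=6.0):
    """Expressions (5)-(8): map a standardized value s_ij from [n1, n2k]
    into [0, 1], via the arithmetic mean A for max-type criteria and the
    harmonic mean H for min-type criteria (sketch of RAFSI Steps 3-4)."""
    A = (n1 + n2k) / 2           # arithmetic mean, Eq. (5)
    H = 2 / (1 / n1 + 1 / n2k)   # harmonic mean, Eq. (6)
    if crit_type == "max":
        return s_ij / (2 * A)    # Eq. (7)
    return H / (2 * s_ij)        # Eq. (8)

# With n1 = 1 and n2k = 6: A = 3.5 and H = 12/7. A standardized max-type
# value of 4.75 maps to 4.75/7, and a min-type value of 2 maps to 3/7.
print(normalize(4.75, "max"), normalize(2.0, "min"))
```

Note how Expression (8) inverts min-type criteria: the smaller the standardized value, the larger its normalized score.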

For the elements of the normalized decision matrix *S*ˆ = [*s*ˆ*ij*]*m*×*n*, which are defined using Expressions (7) and (8), the following relations apply.

(a) For max type criteria *C<sup>j</sup>* (*j* = 1, 2, . . . , *n*), we have the following condition.

$$0 < \frac{n\_1}{2A} \le \hat{s}\_{ij} \le \frac{n\_{2k}}{2A} < 1\tag{10}$$

Proof of (10):

$$\frac{n\_{2k}}{2A} = \frac{n\_{2k}}{2\frac{n\_1 + n\_{2k}}{2}} = \frac{n\_{2k}}{n\_1 + n\_{2k}} < \frac{n\_{2k} + n\_1}{n\_1 + n\_{2k}} = 1$$

(b) for min type criteria *C<sup>j</sup>* (*j* = 1, 2, . . . , *n*), we have the following condition.

$$0 < \frac{H}{2n\_{2k}} \le \hat{s}\_{ij} \le \frac{H}{2n\_1} < 1\tag{11}$$

Proof of (11):

$$\frac{H}{2n\_1} = \frac{\frac{2}{\frac{1}{n\_{2k}} + \frac{1}{n\_1}}}{2n\_1} = \frac{1}{n\_1(\frac{1}{n\_{2k}} + \frac{1}{n\_1})} = \frac{1}{1 + \frac{n\_1}{n\_{2k}}} < 1$$

Additionally, for the boundary values of the criteria intervals *n*<sup>1</sup> and *n*2*<sup>k</sup>*, we have the following equalities (12) and (13).

$$\frac{n\_1}{2A} = \frac{H}{2n\_{2k}}\tag{12}$$

Proof of (12):

$$\frac{n\_1}{2A} = \frac{H}{2n\_{2k}} \Rightarrow \frac{n\_1}{A} = \frac{H}{n\_{2k}}$$

$$\frac{n\_1}{A} = \frac{n\_1}{\frac{n\_1 + n\_{2k}}{2}} = \frac{2}{\frac{n\_1 + n\_{2k}}{n\_1}} = \frac{2}{1 + \frac{n\_{2k}}{n\_1}} = \frac{2}{\frac{n\_{2k}}{n\_{2k}} + \frac{n\_{2k}}{n\_1}} = \frac{2}{n\_{2k}\left(\frac{1}{n\_{2k}} + \frac{1}{n\_1}\right)} = \frac{1}{n\_{2k}} \cdot \frac{2}{\frac{1}{n\_{2k}} + \frac{1}{n\_1}} = \frac{H}{n\_{2k}}$$

$$\frac{n\_{2k}}{2A} = \frac{H}{2n\_1} \tag{13}$$

Proof of equality (13):

$$\frac{n\_{2k}}{2A} = \frac{H}{2n\_1} \Rightarrow \frac{n\_{2k}}{A} = \frac{H}{n\_1}$$

$$\begin{split} \frac{n\_{2k}}{A} &= \frac{n\_{2k}}{\frac{n\_1 + n\_{2k}}{2}} = \frac{2}{\frac{n\_1 + n\_{2k}}{n\_{2k}}} = \frac{2}{\frac{n\_1}{n\_{2k}} + 1} \\ &= \frac{2}{\frac{n\_1}{n\_{2k}} + \frac{n\_1}{n\_1}} = \frac{2}{n\_1 \left(\frac{1}{n\_{2k}} + \frac{1}{n\_1}\right)} = \frac{1}{n\_1} \frac{2}{\frac{1}{n\_{2k}} + \frac{1}{n\_1}} = \frac{H}{n\_1} \end{split}$$
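Both equalities are easy to verify numerically for the boundary values n1 = 1 and n2k = 6 suggested in Step 2:

```python
# Numeric spot-check of equalities (12) and (13) for n1 = 1, n2k = 6.
n1, n2k = 1.0, 6.0
A = (n1 + n2k) / 2            # arithmetic mean, Eq. (5): 3.5
H = 2 / (1 / n1 + 1 / n2k)    # harmonic mean, Eq. (6): 12/7

lhs12, rhs12 = n1 / (2 * A), H / (2 * n2k)   # Eq. (12): both equal 1/7
lhs13, rhs13 = n2k / (2 * A), H / (2 * n1)   # Eq. (13): both equal 6/7
assert abs(lhs12 - rhs12) < 1e-12 and abs(lhs13 - rhs13) < 1e-12
print(lhs12, lhs13)
```

In other words, the normalized scores of the two boundary points agree regardless of whether a criterion is treated as max type or min type, which is what makes the normalization consistent.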

Step 5: Calculate criteria functions of the alternatives *V*(*Ai*). Criteria functions of the alternatives (*V*(*A<sup>i</sup>* )) are calculated according to Equation (14) below. Alternatives are then ranked according to the descending order of the calculated (*V*(*A<sup>i</sup>* )) values.

$$V(A\_i) = w\_1 \hat{s}\_{i1} + w\_2 \hat{s}\_{i2} + \dots + w\_n \hat{s}\_{in} \tag{14}$$
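Putting Steps 1–5 together, the whole RAFSI procedure fits into one short function. The decision matrix, weights, and ideal/anti-ideal points below are illustrative stand-ins, not the paper's case study; for min-type criteria, the mapping is oriented so that the smaller (ideal) criterion value goes to n1, matching the worked functions in Section 3, after which Expression (8) inverts the score:

```python
def rafsi(matrix, weights, ideal, anti_ideal, types, n1=1.0, n2k=6.0):
    """Sketch of RAFSI Steps 1-5: map, normalize, and aggregate.

    matrix: one row per alternative A_i; types: "max" or "min" per
    criterion. The value mapped to n1 is the anti-ideal for max criteria
    and the ideal for min criteria (min criteria are then inverted by
    the harmonic-mean normalization, Expression (8)).
    """
    A = (n1 + n2k) / 2            # arithmetic mean, Eq. (5)
    H = 2 / (1 / n1 + 1 / n2k)    # harmonic mean, Eq. (6)
    scores = []
    for row in matrix:
        v = 0.0
        for a, w, aI, aN, t in zip(row, weights, ideal, anti_ideal, types):
            lo, hi = (aN, aI) if t == "max" else (aI, aN)
            # Step 2: functional mapping into [n1, n2k], Expression (3)
            s = (n2k - n1) / (hi - lo) * a + (hi * n1 - lo * n2k) / (hi - lo)
            # Step 4: normalization into [0, 1], Expressions (7)/(8)
            v += w * (s / (2 * A) if t == "max" else H / (2 * s))
        scores.append(v)          # Step 5: V(A_i), Eq. (14)
    return scores

# Hypothetical 2-alternative, 3-criterion example (not the paper's data):
matrix = [[180, 10, 12],   # A1
          [165,  8, 14]]   # A2
scores = rafsi(matrix, [0.5, 0.3, 0.2],
               ideal=[200, 12, 10], anti_ideal=[120, 6, 20],
               types=["max", "max", "min"])
print(scores)  # A1 scores higher than A2, so A1 is ranked first
```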

#### **3. Case Study and Results**

In this section, the application of the newly developed RAFSI method is presented through an example that considers the evaluation of six alternatives *A<sup>i</sup>* (*i* = 1, 2, . . . , 6) in relation to five criteria *C<sup>j</sup>* (*j* = 1, 2, . . . , 5). Suppose that the alternatives represent researchers who applied for a job at a scientific research center. The evaluation of the researchers is performed using five criteria, arranged in two groups: (1) criteria of maximizing type (*max*): C1, C2, and C5, and (2) criteria of minimizing type (*min*): C3 and C4. The criteria weights are estimated by the Level-Based Weight Assessment (LBWA) model [26] as *w<sup>j</sup>* = (0.35, 0.25, 0.15, 0.15, 0.1). The initial decision matrix (*N* = [*nij*]*m*×*n*, *i* = 1, 2, . . . , *m*, *j* = 1, 2, . . . , *n*) is given below.


Application of RAFSI method is illustrated by following the steps described in Section 2.

Step 1: In the first step, the DM defines the set of ideal (*aI<sup>j</sup>*) and anti-ideal (*aN<sup>j</sup>*) values for the considered criteria. In this example, the following ideal and anti-ideal points are defined by consensus.

$$a\_{I\_j} = \{200, 12, 10, 100, 8\}$$

$$a\_{\mathbf{N}\_j} = \{120, 6, 20, 200, 2\}$$

Step 2: Based on the defined ideal and anti-ideal points, criteria intervals are formed.


To transfer the values of all criteria into a unique interval, a sequence of numbers is chosen where *n*<sup>1</sup> < *n*<sup>2</sup> ≤ *n*<sup>3</sup> < *n*<sup>4</sup> ≤ *n*<sup>5</sup> < *n*<sup>6</sup> . . . ≤ *n*2*k*−<sup>1</sup> < *n*2*<sup>k</sup>* . The final points of the sequence *n*<sup>1</sup> and *n*2*<sup>k</sup>* define the values determining the number of times the ideal value is better than the anti-ideal value. In other words, points *n*<sup>1</sup> and *n*2*<sup>k</sup>* determine the boundary values of the interval in which all values of the initial decision matrix are transferred. In this paper, it is assumed that the ideal value is six times better than the barely acceptable value (anti-ideal value). Now, the functions for criteria standardization

are defined using expression (3). It helps to transfer the values of the initial decision matrix into the interval [1, 6]. Therefore, we consider the following functions.

$$f\_{A\_i}(\mathbf{C}\_1) = \frac{6-1}{200-120} \mathbf{C}\_1 + \frac{200 \cdot 1 - 120 \cdot 6}{200 - 120} = 0.06 \cdot \mathbf{C}\_1 - 6.50$$

$$f\_{A\_i}(\mathbf{C}\_2) = \frac{6-1}{12 - 6} \mathbf{C}\_2 + \frac{12 \cdot 1 - 6 \cdot 6}{12 - 6} = 0.83 \cdot \mathbf{C}\_2 - 4.00$$

$$f\_{A\_i}(\mathbf{C}\_3) = \frac{6-1}{20 - 10} \mathbf{C}\_3 + \frac{20 \cdot 1 - 10 \cdot 6}{20 - 10} = 0.50 \cdot \mathbf{C}\_3 - 4.00$$

$$f\_{A\_i}(\mathbf{C}\_4) = \frac{6-1}{200 - 100} \mathbf{C}\_4 + \frac{200 \cdot 1 - 100 \cdot 6}{200 - 100} = 0.05 \cdot \mathbf{C}\_4 - 4.00$$

$$f\_{A\_i}(\mathbf{C}\_5) = \frac{6-1}{8 - 2} \mathbf{C}\_5 + \frac{8 \cdot 1 - 2 \cdot 6}{8 - 2} = 0.83 \cdot \mathbf{C}\_5 - 0.67$$

Based on the defined functions, the elements of the initial decision matrix are mapped into the interval [1, 6], and the standardized decision matrix ($S = \left[s\_{ij}\right]\_{6 \times 5}$, $i = 1, 2, \dots, 6$, $j = 1, 2, \dots, 5$) is obtained, in which all elements lie in the interval [1, 6].
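Expression (3) is a single linear map; a minimal sketch of the standardization step (the function and parameter names are ours; the boundary values are the ideal and anti-ideal points defined in Step 1, with the two points swapped for min-type criteria, as in the functions for C3 and C4):

```python
def standardize(x, at_n2k, at_n1, n1=1.0, n2k=6.0):
    """Map x linearly into [n1, n2k]: the raw value at_n2k is sent to n2k
    and at_n1 is sent to n1 (expression (3) with [n1, n2k] = [1, 6])."""
    slope = (n2k - n1) / (at_n2k - at_n1)
    intercept = (at_n2k * n1 - at_n1 * n2k) / (at_n2k - at_n1)
    return slope * x + intercept

# Criterion C1 (max type): ideal 200 maps to 6, anti-ideal 120 maps to 1.
print(round(standardize(180, 200, 120), 2))  # 4.75, the A1-C1 entry
# Criterion C3 (min type): the two points are swapped, giving 0.5*C3 - 4.
print(standardize(20, 20, 10))  # 6.0
```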


The elements in column $C\_1$ are obtained using the function $f\_{A\_i}(C\_1) = 0.06 \cdot C\_1 - 6.50$:

$$f\_{A\_1}(180) = 0.06 \cdot 180 - 6.50 = 4.75, \ f\_{A\_2}(165) = 0.06 \cdot 165 - 6.50 = 3.81$$

$$f\_{A\_3}(160) = 0.06 \cdot 160 - 6.50 = 3.50, \ f\_{A\_4}(170) = 0.06 \cdot 170 - 6.50 = 4.13$$

$$f\_{A\_5}(185) = 0.06 \cdot 185 - 6.50 = 5.06, \ f\_{A\_6}(167) = 0.06 \cdot 167 - 6.50 = 3.94$$

Substituting the values from the initial matrix into the functions $f\_{A\_i}(C\_2)$, $f\_{A\_i}(C\_3)$, $f\_{A\_i}(C\_4)$, and $f\_{A\_i}(C\_5)$, we obtain the remaining elements $s\_{ij}$.

Step 3: The arithmetic and harmonic means of the minimum and maximum elements $n\_1 = 1$ and $n\_{2k} = 6$ are calculated.

$$A = (n\_1 + n\_{2k})/2 = (1+6)/2 = 3.5$$

$$H = \frac{2}{\frac{1}{n\_1} + \frac{1}{n\_{2k}}} = \frac{2}{\frac{1}{6} + \frac{1}{1}} = 1.71$$

The arithmetic mean for *n*<sup>1</sup> = 1 and *n*2*<sup>k</sup>* = 6 is 3.5, while the harmonic mean is 1.71.

Step 4: Using expressions (7) and (8), the elements of matrix *S* are normalized and transformed, depending on whether they belong to min- or max-type criteria. In this way, we obtain a new matrix $\hat{S} = \left[\hat{s}\_{ij}\right]\_{6 \times 5}$ ($i = 1, 2, \dots, 6$, $j = 1, 2, \dots, 5$).

$$
\begin{array}{cccccc}
 & \text{C1} & \text{C2} & \text{C3} & \text{C4} & \text{C5} \\
 A1 & 0.68 & 0.68 & 0.23 & 0.21 & 0.35 \\
 A2 & 0.54 & 0.52 & 0.20 & 0.34 & 0.50 \\
 A3 & 0.50 & 0.48 & 0.29 & 0.38 & 0.44 \\
 A4 & 0.59 & 0.56 & 0.21 & 0.31 & 0.31 \\
 A5 & 0.72 & 0.62 & 0.26 & 0.27 & 0.42 \\
 A6 & 0.56 & 0.49 & 0.24 & 0.29 & 0.39 \\
 & \text{max} & \text{max} & \text{min} & \text{min} & \text{max}
\end{array}
$$

For example, the element of the matrix $\hat{S}$ in position A1–C1 is $\hat{s}\_{11} = \frac{4.75}{2 \cdot 3.5} = 0.68$. Moreover, for a min-type criterion, the element in position A1–C3 is $\hat{s}\_{13} = \frac{1.71}{2 \cdot 3.75} = 0.23$.
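Steps 3 and 4 can be sketched together; the inputs 4.75 and 3.75 are the standardized A1 entries from the worked example above:

```python
def normalize(s, is_max, n1=1.0, n2k=6.0):
    """Normalize a standardized value s (expressions (7) and (8)):
    max-type criteria divide by twice the arithmetic mean A of n1 and n2k;
    min-type criteria divide the harmonic mean H by twice the value."""
    A = (n1 + n2k) / 2              # arithmetic mean, here 3.5
    H = 2 / (1 / n1 + 1 / n2k)      # harmonic mean, here about 1.71
    return s / (2 * A) if is_max else H / (2 * s)

print(round(normalize(4.75, is_max=True), 2))   # A1-C1: 0.68
print(round(normalize(3.75, is_max=False), 2))  # A1-C3: 0.23
```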

Step 5: Using expression (14), the criteria functions $V(A\_i)$ of the alternatives are calculated, as exhibited in Table 1. The ranking pre-order of the alternatives is derived in descending order of the $V(A\_i)$ values, where the alternative with the higher $V(A\_i)$ value is always preferred.


**Table 1.** The function criteria and the final ranking of the researchers/alternatives.

Based on the above findings, the researcher A5 is selected as the best alternative candidate for the considered case study.

#### **4. Validation of the Results**

#### *4.1. Comparing the Results with Other MADM Methods*

For validation, the results of the RAFSI method are now compared with those of other traditional MADM methods, namely TOPSIS [6], VIKOR [4,5], and COPRAS [10]. The same decision matrix and criteria weights are used for this performance comparison. The results of this comparison are shown in Figure 2.

**Figure 2.** Comparing RAFSI method with other MADM methods.

The ranking orders obtained by the VIKOR and RAFSI methods are in complete agreement, whereas for the COPRAS and TOPSIS methods, rank similarity is observed only for the first two alternatives {A5, A1} and the last-ranked alternative (A6). For the remaining three alternatives (A2, A3, and A4), the COPRAS and TOPSIS methods suggested different rankings. Such a result is a consequence of using different data normalization techniques, such as vector normalization (in the TOPSIS method) and additive normalization (in the COPRAS method). To confirm this, a further experiment comprising the following two stages is conducted.


#### *4.2. Rank Reversal Problem*

One of the ways to check the stability of MADM methods is by introducing new alternatives into the original set or by eliminating poor alternatives from the set. In such conditions, it is expected that an MADM method will not show any drastic change in the ranking of the alternatives. This phenomenon is the well-known rank reversal problem [13], and considerable attention has already been paid to it in the literature [21,25]. The resistance of the developed RAFSI method to the rank reversal problem is now tested through two experiments. In the first experiment, five scenarios are considered: in each scenario, the worst alternative is eliminated from the set of alternatives, and the impact of this change on the ranking and criteria functions of the remaining alternatives is analyzed. In the second experiment, the set of alternatives is expanded by introducing a new alternative, and the impact of this inclusion on the alternatives' ranks is analyzed.

The first experiment: After applying the RAFSI method, the researchers are ranked according to the results shown in scenario S0 (the original rank). In the next scenario (S1), the worst-ranked researcher is eliminated, and the remaining five candidates are ranked again. Thus, a total of five scenarios (S1–S5) are formed, whereby, in each subsequent scenario, the worst-ranked researcher is eliminated from the set. At the same time, we analyzed any changes in the criteria function values and rankings of the remaining alternatives for each newly formed scenario. The rankings of the alternatives in all five scenarios are shown in Table 2.
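The elimination loop of the first experiment can be sketched generically; the `rank` argument stands in for a full RAFSI (or TOPSIS, VIKOR, COPRAS) implementation, and the scores below are purely illustrative:

```python
def rank_reversal_scenarios(alternatives, rank):
    """Repeatedly drop the worst-ranked alternative and re-rank, recording
    the ordering of each scenario. `rank` maps a list of alternatives to
    the same list ordered best-first."""
    scenarios, current = [], list(alternatives)
    while len(current) > 1:
        ordering = rank(current)
        scenarios.append(ordering)
        current = ordering[:-1]  # eliminate the worst-ranked alternative
    return scenarios

# Toy scoring rule standing in for a real MADM method (values are made up):
scores = {"A1": 0.42, "A2": 0.38, "A3": 0.36, "A4": 0.37, "A5": 0.45, "A6": 0.35}
rank_by_score = lambda alts: sorted(alts, key=scores.get, reverse=True)
for ordering in rank_reversal_scenarios(list(scores), rank_by_score):
    print(ordering)  # with this toy rule the leader stays "A5" throughout
```

A method is resistant to rank reversal when each scenario's ordering is simply the previous ordering with its last entry removed, as in Table 2.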


**Table 2.** The ranking of the alternatives in scenarios.

From Table 2, it is easy to observe that the RAFSI method gives valid results in a dynamic environment. This is also confirmed by the criteria function values of the alternatives (*f(Ai)*): in all these scenarios, the criteria functions of the alternatives remained unchanged. The TOPSIS, VIKOR, and COPRAS methods were applied under the same conditions. These methods also showed stability and resistance to rank reversal; however, changes in their criteria function values were observed.

The second experiment: In the second experiment, another candidate (A7), who achieved the same test results as candidate A6, is added to the six existing candidates. The new decision matrix is shown below.


After evaluating the new set of candidates by RAFSI, TOPSIS, VIKOR, and COPRAS methods with the same criteria weights, it was observed that the rankings and criteria functions of certain alternatives are changed, as shown in Table 3. To compare the results more comprehensively, a parallel presentation of the results is given using RAFSI, TOPSIS, VIKOR, and COPRAS methods on the new and old set of alternatives.


**Table 3.** Ranking pre-orders for the old and new set of the alternatives.

After analyzing the results in Table 3, we can conclude the following:


Based on these analyses, we can conclude that the rank reversal problem exists in the COPRAS, TOPSIS, and VIKOR methods and can lead to irrational results when the initial parameters of the decision matrix change. At the same time, we can conclude that the developed RAFSI method is resistant to the rank reversal problem, which contributes to achieving stable and reliable evaluation results while solving complex real-world problems.

#### **5. Discussion and Conclusions**

In this paper, a new MADM method, called RAFSI, is suggested, which shows a high level of reliability in its results. This makes the method suitable for solving real-time MADM problems in different areas. The mathematical formulation of the RAFSI method does not use traditional data normalization expressions. Instead, a new standardization technique is suggested that enables data transformation from the initial decision matrix into any interval, which makes this method suitable for rational decision making. The mapping of criteria sub-intervals from the initial decision matrix into a unique criteria interval is done by using criteria functions. After forming a unique criteria interval, using arithmetic and harmonic means, the criteria interval is transformed into a normalized criteria interval; this mapping depends on the criteria type. Therefore, we can highlight the following contributions of this paper: (1) the development of a new MADM method for solving real problems in the business world, (2) the presentation of a new method based on coherently defined relations between ideal and anti-ideal criteria values, (3) the elimination of the rank reversal problem, which offers reliable results for making rational decisions, and (4) the development of a new data normalization method, which can be used in various areas, from MADM to heuristic algorithms and artificial intelligence-based methods.

The RAFSI method is validated through a comparison of its results with those of traditional MADM methods and by checking its resistance to the rank reversal problem. The performance of the RAFSI method is compared with the TOPSIS, COPRAS, and VIKOR methods. These methods are chosen because they use different kinds of data normalization: vector normalization (TOPSIS), linear normalization (VIKOR), and additive normalization (COPRAS). The goal of the comparison with different methods is to confirm the validity of the new method against traditional MADM methods that have already shown high efficiency in solving real-world problems. The performance comparison showed a very high level of positive correlation between the results of the RAFSI method and the other widely used MCO methods.

In the second phase, the resistance of the RAFSI, TOPSIS, COPRAS, and VIKOR methods to the rank reversal problem is examined. In these experiments, a change in the number of alternatives is simulated. In the first experiment, the number of alternatives is reduced across five scenarios, while in the second experiment, the set of alternatives is expanded by introducing one non-optimal alternative. The results showed that the RAFSI method is resistant to the rank reversal problem, whereas the conventional TOPSIS, COPRAS, and VIKOR methods did not show satisfying results. The achieved results confirm the validity of the RAFSI method, which can be recommended for use in future research for solving different multi-criteria problems.

Future research should be directed toward applying the RAFSI method to other real-world problems, as well as combining it with objective and subjective criteria weighting techniques. Furthermore, one of the goals of future research lies in extending the RAFSI method with different uncertainty theories, which would enable the use of linguistic variables for the rational expression of human preferences. In addition, the use of the new data normalization technique in heuristic algorithms and other MADM methods is a possible future research direction.

**Author Contributions:** Conceptualization, M.Ž. and D.P. Methodology, M.Ž. and D.P. Validation, M.Ž. and D.P. Writing—original draft preparation, M.Ž. and D.P. Review and editing, M.A., P.C., and I.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Novel Extension of DEMATEL Method by Trapezoidal Fuzzy Numbers and D Numbers for Management of Decision-Making Processes**

**Ivan Pribićević <sup>1</sup>, Suzana Doljanica <sup>2</sup>, Oliver Momčilović <sup>3</sup>, Dillip Kumar Das <sup>4</sup>, Dragan Pamučar <sup>5,\*</sup> and Željko Stević <sup>6</sup>**


Received: 8 April 2020; Accepted: 14 May 2020; Published: 17 May 2020

**Abstract:** The decision-making trial and evaluation laboratory (DEMATEL) method is one of the most significant multi-criteria techniques for defining the relationships among criteria and for defining the weight coefficients of criteria. Since multi-criteria models are very often used in management and decision-making under conditions of uncertainty, the fuzzy DEMATEL model has been extended in this paper by D numbers (fuzzy DEMATEL-D). The aim of this research was to develop a multi-criteria methodology that enables the objective processing of fuzzy linguistic information in the pairwise comparison of criteria. This aim was achieved through the development of the fuzzy DEMATEL-D method. Combining D numbers with trapezoidal fuzzy linguistic variables (LVs) allows for the additional processing of uncertainties and ambiguities that exist in experts' preferences when comparing criteria with each other. In addition, the fuzzy DEMATEL-D methodology has a unique reasoning algorithm that allows for the rational processing of uncertainties when using fuzzy linguistic expressions for pairwise comparisons of criteria. The fuzzy DEMATEL-D methodology provides an original uncertainty management framework that is rational and concise. In order to illustrate the effectiveness of the proposed methodology, a case study with the application of the proposed multi-criteria methodology is presented.

**Keywords:** D numbers; fuzzy sets; DEMATEL; multi-criteria decision-making; criteria weights

#### **1. Introduction**

A dynamic environment in which almost all scientific and professional fields operate requires the timely and precise management of processes, which involves decision-making at each of its stages. Decisions are made on the basis of a number of inputs that integrate qualitative and quantitative criteria. If a certain number of experts with different preferences in group decision-making are added, the problem becomes complicated in multiple ways. Therefore, it is necessary to take into account all possible uncertainties that arise in group decision-making in order to obtain better and more accurate output. Certainly, an extremely important stage in a decision-making process

is determining the significance of the criteria by which the most acceptable solution or ranking of solutions is defined in the further process of solving multi-criteria problems. Therefore, the aim of this paper was to develop a new methodology for determining the significance of criteria that takes aspects of uncertainty and diversity in decision-makers' preferences into account. Accordingly, an extension of the fuzzy decision-making trial and evaluation laboratory (DEMATEL) model is performed by D numbers (fuzzy DEMATEL-D), which is explained in detail in the following section. The DEMATEL method was developed by Gabus and Fontela [1], and it has thus far been widely applied in its basic or extended form, as confirmed in the study [2]. The authors carried out a comprehensive review of the literature published over a decade in terms of developing various extensions of this method and its applications in different decision-making areas. Taking into account the evidently wide application of this method and the need to adequately handle uncertain situations and determine precise criteria weights, fuzzy set theory is integrated with D numbers. In that way, an overall synergistic effect is achieved in decision-making processes.

Dempster–Shafer evidence theory [3,4] is an area of artificial intelligence because it processes and analyzes uncertainties and inaccuracies in information. It is also a convenient algorithm for reasoning in a dynamic and uncertain environment, which is recommended for use in expert systems. Since Dempster–Shafer evidence theory (DST) allows for the processing of nonspecific, ambiguous, and juxtaposed information, numerous researchers favor DST over traditional approaches, such as Bayesian probability theory [5,6]. In addition to the benefits that DST possesses for solving various real-world problems, such as network problems [7], decision-making problems [8–10], and risk theory [11], there are also limitations to DST that represent a kind of barrier to its wider application. One of the most well-known limitations that restricts the wider practical application of DST is the exclusivity of elements when parsing elements of a subset [12,13]. This limitation is shown through the following example. Giving a diagnosis in medicine is a typical area that includes different types of uncertainty [12]. Say there is a patient with the symptoms of fever, polypnea, and cough; these symptoms are likely caused by the flu (*F*), a bacterial infection (*B*), or an upper respiratory infection (*U*). Two independent diagnostic reports were submitted by two doctors. The first doctor diagnosed the patient with *F* with a possibility of 0.7 and with *B* or *U* with a possibility of 0.2; the remaining possibility of 0.1 is for an unknown diagnosis: *m*1(*F*) = 0.7, *m*1(*B*, *U*) = 0.2, and *m*1(*F*, *B*, *U*) = 0.1. The second doctor's diagnosis showed: *m*2(*F*) = 0.5, *m*2(*B*) = 0.3, and *m*2(*F*, *B*, *U*) = 0.2. The question is: What disease does the patient have? DST in this scenario gives the following results: *m*(*F*) = 0.7826, *m*(*B*) = 0.1304, *m*(*B*, *U*) = 0.058, and *m*(*F*, *B*, *U*) = 0.0290.
It can be seen that there is an invisible hypothesis that the possibility of the unknown is equal to that of {*F*, *B*, *U*}. Based on the presented results, it could be concluded that the set of all diseases manifested through the considered symptoms can be presented as the set {*F*, *B*, *U*}. However, the set {*F*, *B*, *U*} contains only the three types of diseases considered in this example; obviously, this unseen hypothesis is not reasonable. Such a problem cannot be addressed by applying DST (Figure 1a) because DST implies the exclusivity of the elements, in our case the diagnosed diseases. This problem can be successfully eliminated by D numbers [12,13]. After applying D numbers, *D*(*F*) = 0.6147 and *D*(*B*) = 0.1054 are obtained. The result shows that flu is the most probable diagnosis. In comparison to DST, in D number theory the unknown is inherited during the reasoning.
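The DST figures quoted in this example follow from Dempster's rule of combination; a minimal sketch, with the disease labels as illustrative stand-ins for *F*, *B*, and *U*:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of all focal-element pairs, keep
    non-empty intersections, and renormalize by the total conflict."""
    combined, conflict = {}, 0.0
    for (b1, v1), (b2, v2) in product(m1.items(), m2.items()):
        inter = b1 & b2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {b: v / (1.0 - conflict) for b, v in combined.items()}

F, B, U = "flu", "bacterial", "upper-respiratory"
m1 = {frozenset({F}): 0.7, frozenset({B, U}): 0.2, frozenset({F, B, U}): 0.1}
m2 = {frozenset({F}): 0.5, frozenset({B}): 0.3, frozenset({F, B, U}): 0.2}
m = dempster_combine(m1, m2)
print(round(m[frozenset({F})], 4))  # 0.7826
print(round(m[frozenset({B})], 4))  # 0.1304
```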

According to Xiao [14], D numbers, as a reliable and effective representation of uncertain information, are good at handling these types of uncertainties. Deng and Jiang [15] developed a decision-making model to solve the adversarial problem under uncertainty with D numbers. Their model integrated fuzzy set theory, game theory, and D number theory (DNT). The same authors in [16] showed the advantages of using D numbers in green supply chain management in a fuzzy environment.

The potential to overcome this problem was recognized by Zhou et al. [17], who integrated crisp DEMATEL with D numbers to identify the critical success factors (CSFs) in emergency management. The same method was applied in [18] for the risk identification and analysis of an energy power system. The advantages of the D-DEMATEL method are reflected in its simultaneous consideration of ambiguity and subjectivity, which is impossible with classical approaches, as stated by Zhou et al. [17]. By developing an extension of the DEMATEL method with trapezoidal fuzzy numbers (TrFNs) and D numbers in this paper, uncertainties are considered at a higher level, with input parameters manifested through output functions.

**Figure 1.** The frame of discernment in Dempster–Shafer evidence theory (DST) and in D numbers.

In addition to the needs and aims presented in the introduction, the paper has several other sections. Section 2 presents the preliminaries that outline the basics of D numbers and fuzzy theory. Section 3 is an extension of TrFN DEMATEL with D numbers, while Section 4 shows the application of the developed methodology with a specific example. Section 5 summarizes the contributions of the paper, with an overview of further research related to this paper.

#### **2. Background**

#### *2.1. D Numbers*

D numbers represent an extension of DST that aims to present uncertainties in the processed information more effectively. As shown in Figure 1b, D numbers do not require the exclusivity of the elements of a set, which significantly broadens the domain of their practical application.

 

**Definition 1** ([12])**.** *Let* Υ *be a finite nonempty set. A D number is a mapping* $D : 2^{\Upsilon} \to [0, 1]$ *such that:*

$$\sum\_{A \subseteq \Upsilon} D(A) \le 1 \quad \text{and} \quad D(\emptyset) = 0 \tag{1}$$

 

*where* ∅ *is the empty set and A is any subset of* Υ*. As stated in the previous section of the paper, the theory of D numbers does not require the elements of a set* Υ *to be mutually exclusive. The information presented by D numbers is called complete information if the condition* $\sum\_{A \subseteq \Upsilon} D(A) = 1$ *is fulfilled. If* $\sum\_{A \subseteq \Upsilon} D(A) < 1$*, the information is incomplete.*

*If* Υ *is a discrete set of elements* $\Upsilon = \{b\_1, b\_2, \dots, b\_i, b\_j, \dots, b\_n\}$*, where* $b\_i \in R$ *and* $b\_i \neq b\_j$ *when* $i \neq j$*, then we can express a D number by:*

$$D(b\_1) = v\_1, \ D(b\_2) = v\_2, \ \dots, \ D(b\_i) = v\_i, \ D(b\_j) = v\_j, \ \dots, \ D(b\_n) = v\_n \tag{2}$$

*In addition to expressing D numbers using Equation (2), there is another, simplified way to express them:* $D = \{(b\_1, v\_1), (b\_2, v\_2), \dots, (b\_i, v\_i), (b\_j, v\_j), \dots, (b\_n, v\_n)\}$*. This presentation also satisfies the conditions* $v\_i > 0$ *and* $\sum\_{i=1}^{n} v\_i \le 1$*.*

**Definition 2** ([12])**.** *Let two D numbers* $D\_1 = \{(b\_1, v\_1), \dots, (b\_i, v\_i), \dots, (b\_n, v\_n)\}$ *and* $D\_2 = \{(b'\_1, v'\_1), \dots, (b'\_i, v'\_i), \dots, (b'\_n, v'\_n)\}$ *be given. Then, we can define the rule for the combination of D numbers* $D = D\_1 \odot D\_2$ *as follows:*

$$\begin{cases} D(\emptyset) = 0 \\ D(B) = \frac{1}{1 - K\_D} \sum\limits\_{B\_1 \cap B\_2 = B} D\_1(B\_1) D\_2(B\_2), \ B \neq \emptyset \\ K\_D = \frac{1}{Q\_1 Q\_2} \sum\limits\_{B\_1 \cap B\_2 = \emptyset} D\_1(B\_1) D\_2(B\_2) \end{cases} \tag{3}$$

$$Q\_1 = \sum\_{B\_1 \subseteq \Upsilon} D\_1(B\_1), \quad Q\_2 = \sum\_{B\_2 \subseteq \Upsilon} D\_2(B\_2)$$

*Rule (3) is a generalization of Dempster's rule [8]. If* $D\_1$ *and* $D\_2$ *are defined in the frame of discernment and* $Q\_1 = 1$ *and* $Q\_2 = 1$*, then the rule for combining D numbers (Rule (3)) is transformed into Dempster's rule. Rule (3) is an algorithm for the combination and fusion of uncertain information presented by D numbers.*

*For a discrete D number* $D = \{(b\_1, v\_1), (b\_2, v\_2), \dots, (b\_i, v\_i), (b\_j, v\_j), \dots, (b\_n, v\_n)\}$*, we can define the integration operator as follows:*

$$I(D) = \sum\_{i=1}^{n} d\_i v\_i \tag{4}$$

*where* $d\_i \in R^{+}$*,* $v\_i > 0$*, and* $\sum\_{i=1}^{n} v\_i \le 1$*.*
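A minimal sketch of the integration operator in Equation (4); the (b_i, v_i) pairs below are hypothetical:

```python
def integrate(d_number):
    """Integration operator I(D): the sum of b_i * v_i over the pairs of a
    discrete D number, given as a list of (b_i, v_i) tuples."""
    assert all(v > 0 for _, v in d_number), "each v_i must be positive"
    assert sum(v for _, v in d_number) <= 1 + 1e-12, "sum of v_i must be <= 1"
    return sum(b * v for b, v in d_number)

# A complete D number (the v_i sum to 1), with hypothetical values:
D = [(1.0, 0.2), (2.0, 0.5), (3.0, 0.3)]
print(round(integrate(D), 2))  # 2.1
```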

#### *2.2. Fuzzy Set Theory*

Fuzzy set theory is widely used to model uncertainties [19–23]. In some decision-making models, qualitative assessments are given in natural language. These linguistic variables (LVs) can be presented by linguistic expressions [24–26].

**Definition 3.** *Let X be a crisp universe of generic elements containing a fuzzy set* $\widetilde{A}$ *as a subset. For each element* $x \in X$*, the number* $\mu\_{\widetilde{A}}(x) \in [0, 1]$ *is called the grade of membership of x in* $\widetilde{A}$ *[27].*

**Definition 4.** *A fuzzy set* $\widetilde{A}$ *of the universe of discourse X is convex if and only if, for all* $x\_1, x\_2 \in X$*:*

$$
\mu\_{\widetilde{A}}(\lambda \mathbf{x}\_1 + (1 - \lambda)\mathbf{x}\_2) \ge \min \{ \mu\_{\widetilde{A}}(\mathbf{x}\_1), \mu\_{\widetilde{A}}(\mathbf{x}\_2) \} \tag{5}
$$

*where* λ ∈ [0, 1]*.*

**Definition 5.** *A trapezoidal fuzzy number* $\widetilde{A}$ *can be defined as* $\widetilde{A} = (a\_1, a\_2, a\_3, a\_4)$*, as shown in Figure 2, with the membership function:*

$$\mu\_{\widetilde{A}}(x) = \begin{cases} \frac{x - a\_1}{a\_2 - a\_1} & a\_1 \le x \le a\_2 \\ 1 & a\_2 \le x \le a\_3 \\ \frac{a\_4 - x}{a\_4 - a\_3} & a\_3 \le x \le a\_4 \\ 0 & \text{otherwise} \end{cases} \tag{6}$$

 

 

**Figure 2.** Trapezoidal number membership function.

The concept of an LV is very appropriate in activities where the processing of complex or poorly defined information that cannot be well described by traditional quantitative formulations is needed. LVs are expressed by words, sentences, or artificial languages. Each linguistic value can be presented by a fuzzy set [28]. Linguistic modelling permits experts to express themselves by labels belonging to a specific linguistic label set [29]. In this paper, experts' preferences according to the different criteria were considered as linguistic variables. LVs can be expressed by positive TrFNs, as shown in Table 1, as was the case in our study.

**Table 1.** Linguistic variables.


Basic arithmetic operations with the TrFNs $\widetilde{A}\_1 = (a\_1, a\_2, a\_3, a\_4)$ and $\widetilde{A}\_2 = (b\_1, b\_2, b\_3, b\_4)$ are presented below [30,31]:

(1) Addition:

$$\widetilde{A}\_1 \oplus \widetilde{A}\_2 = (a\_1, a\_2, a\_3, a\_4) \oplus (b\_1, b\_2, b\_3, b\_4) = (a\_1 + b\_1, a\_2 + b\_2, a\_3 + b\_3, a\_4 + b\_4) \tag{7}$$

(2) Multiplication:

$$
\widetilde{A}\_1 \otimes \widetilde{A}\_2 = (a\_1, a\_2, a\_3, a\_4) \otimes (b\_1, b\_2, b\_3, b\_4) = (a\_1 \times b\_1, a\_2 \times b\_2, a\_3 \times b\_3, a\_4 \times b\_4) \tag{8}
$$

(3) Subtraction:

$$
\widetilde{A}\_1 - \widetilde{A}\_2 = (a\_1, a\_2, a\_3, a\_4) - (b\_1, b\_2, b\_3, b\_4) = (a\_1 - b\_4, a\_2 - b\_3, a\_3 - b\_2, a\_4 - b\_1) \tag{9}
$$

(4) Division:

$$
\widetilde{A}\_1 \div \widetilde{A}\_2 = (a\_1, a\_2, a\_3, a\_4) \div (b\_1, b\_2, b\_3, b\_4) = (a\_1 \div b\_4, a\_2 \div b\_3, a\_3 \div b\_2, a\_4 \div b\_1) \tag{10}
$$

(5) Reciprocal values:

$$\widetilde{A}\_1^{-1} = (a\_1, a\_2, a\_3, a\_4)^{-1} = \left(\frac{1}{a\_4}, \frac{1}{a\_3}, \frac{1}{a\_2}, \frac{1}{a\_1}\right) \tag{11}$$
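Operations (7)–(11) can be sketched with plain 4-tuples; note that the multiplication and division rules as written assume positive TrFNs:

```python
def trfn_add(a, b):  # expression (7)
    return tuple(x + y for x, y in zip(a, b))

def trfn_mul(a, b):  # expression (8), component-wise for positive TrFNs
    return tuple(x * y for x, y in zip(a, b))

def trfn_sub(a, b):  # expression (9): lower limits pair with upper limits
    return (a[0] - b[3], a[1] - b[2], a[2] - b[1], a[3] - b[0])

def trfn_div(a, b):  # expression (10)
    return (a[0] / b[3], a[1] / b[2], a[2] / b[1], a[3] / b[0])

def trfn_inv(a):     # expression (11)
    return (1 / a[3], 1 / a[2], 1 / a[1], 1 / a[0])

A1, A2 = (1, 2, 3, 4), (2, 3, 4, 5)
print(trfn_add(A1, A2))  # (3, 5, 7, 9)
print(trfn_sub(A1, A2))  # (-4, -2, 0, 2)
```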

#### **3. TrFN DEMATEL-D Methodology**

Due to the imprecision and subjectivity evident in group decision-making, an extension of the fuzzy DEMATEL methodology was made using D numbers. The use of D numbers makes it possible to: (1) take the uncertainties that exist in experts' comparisons of criteria into account and (2) define the intervals of fuzzy linguistic expressions on the basis of uncertainties and inaccuracies that exist in experts' judgment. Numerous multi-criteria models imply the introduction of fuzzy numbers to express the uncertainties that exist in group decision-making [32–37]. The introduction of D numbers makes it possible to take additional uncertainties that arise when selecting fuzzy linguistic variables from a predefined set into account. In addition to fuzzy linguistic variables, D numbers introduce the probability of choosing a fuzzy linguistic variable, thus increasing the objectivity and quality of existing data in group decision-making. Since it is a new extension of the fuzzy DEMATEL methodology by D numbers, the following section details the algorithm which includes six steps:

Step 1: Experts' analysis of factors: Suppose that there are *m* experts divided into two homogeneous expert groups EG1 and EG2, and there are *n* criteria considered in a comparison matrix. Let the fuzzy linguistic variables used to compare the criteria be expressed by trapezoidal fuzzy numbers *l* = {*l<sup>b</sup>* , *b* = 1, 2, . . . , *t*}, where *t* represents the total number of fuzzy linguistic variables.

Each expert group defines the degree of influence of the criterion *i* on the criterion *j*. The comparative analysis of the pair of *i*th and *j*th criterion by the expert group is denoted by the D number

$$D^1\_{ij} = \left\langle (l^1\_{ij(1)}, v^1\_{ij(1)}), \dots, (l^1\_{ij(i)}, v^1\_{ij(i)}), \dots, (l^1\_{ij(t)}, v^1\_{ij(t)}) \right\rangle \text{ and } D^2\_{ij} = \left\langle (l^2\_{ij(1)}, v^2\_{ij(1)}), \dots, (l^2\_{ij(i)}, v^2\_{ij(i)}), \dots, (l^2\_{ij(t)}, v^2\_{ij(t)}) \right\rangle, \tag{12}$$

where $D^1\_{ij}$ and $D^2\_{ij}$ represent the D numbers used to express the preferences of EG1 and EG2, respectively, and *t* represents the number of fuzzy linguistic variables used to compare the criteria. As a result of the comparison, two nonnegative matrices of rank $n \times n$ are obtained, $X^1 = \left[D^1\_{ij}\right]\_{n \times n}$ and $X^2 = \left[D^2\_{ij}\right]\_{n \times n}$, in which each element is a D number. The diagonal elements of the matrices $X^1$ and $X^2$ are zero because a factor has no effect on itself. Thus, one matrix is obtained for each expert group.

Step 2: Forming a single fuzzy direct-relation matrix *X*e: The transformation of D matrices into a single matrix of fuzzy linguistic values is carried out through three phases.

Phase 1: In the first phase, the uncertainties presented in the initial experts' preferences are fused. Applying the rule for the combination of D numbers, $D\_{ij} = D^1\_{ij} \odot D^2\_{ij}$ (Equation (3)), the analysis and synthesis of the data provided by the D numbers in the expert matrices $X^1 = \left[D^1\_{ij}\right]\_{n \times n}$ and $X^2 = \left[D^2\_{ij}\right]\_{n \times n}$ are performed.

Phase 2: After implementing the rules for the combination of D numbers, the uncertainties presented at the intersection of fuzzy linguistic variables (FLVs) (Figure 3) are transformed into unique fuzzy linguistic variables.

We can define the FLVs as the term-set $L = \{l\_b \mid b = 0, \dots, B\}$, where $l\_b$ is an FLV presented in $D^1\_{ij}$ and $D^2\_{ij}$. Each term $l\_b$ is presented as a trapezoidal fuzzy number $\widetilde{z} = \left(z^{(l)}, z^{(m\_1)}, z^{(m\_2)}, z^{(u)}\right)$, where $z^{(m\_1)}$ and $z^{(m\_2)}$ represent the middle points of the trapezoidal fuzzy number (TrFN), and $z^{(l)}$ and $z^{(u)}$ are the lower and upper limits, respectively, of the fuzzy interval.

FLV transformation is performed on the basis of the ratio of the area at the intersection, $S_{i,i+1}$, to the corresponding areas of the FLVs:

$$D_{\mathrm{FLVT}}(H_i) = D(H_i) + D(H_i, H_{i+1}) \frac{\frac{S_{i,i+1}}{S_i}}{\frac{S_{i,i+1}}{S_i} + \frac{S_{i,i+1}}{S_{i+1}}} \tag{13}$$

$$D_{\mathrm{FLVT}}(H_{i+1}) = D(H_{i+1}) + D(H_i, H_{i+1}) \frac{\frac{S_{i,i+1}}{S_{i+1}}}{\frac{S_{i,i+1}}{S_i} + \frac{S_{i,i+1}}{S_{i+1}}} \tag{14}$$

where $S_{i,i+1}$ represents the intersection between the linguistic variable $l_i$ and the linguistic variable $l_{i+1}$, while $S_i$ and $S_{i+1}$ represent the areas of the linguistic variables $l_i$ and $l_{i+1}$, respectively.

After the FLV transformation, we obtain a single D matrix $X = \left[D_{ij}\right]_{n \times n}$.

**Figure 3.** Fuzzy linguistic variables.

Phase 3: The elements of the D matrix $X = \left[D_{ij}\right]_{n \times n}$ are transformed into a single fuzzy direct-relation matrix $\widetilde{X} = \left[\widetilde{x}_{ij}\right]_{n \times n}$, where $\widetilde{x}_{ij} = \left(x_{ij1}, x_{ij2}, x_{ij3}, x_{ij4}\right)$ represents an element of the matrix $\widetilde{X}$ expressed by a trapezoidal fuzzy number. The elements of the matrix $\widetilde{X} = \left[\widetilde{x}_{ij}\right]_{n \times n}$ are obtained by applying the operator of integration of D numbers (Equation (4)), i.e., $\widetilde{x}_{ij} = \sum_{i=1}^{e} l_i v_i$, where $e$ represents the number of FLVs contained in the D number.

Step 3: Computing the elements of a normalized fuzzy direct-relation matrix: After forming the single fuzzy direct-relation matrix $\widetilde{X} = \left[\widetilde{x}_{ij}\right]_{n \times n}$, we obtain the elements of the normalized fuzzy direct-relation matrix (Equation (15)) by applying Equations (16) and (17).

$$N = \begin{bmatrix} 0 & \widetilde{d}_{12} & \cdots & \widetilde{d}_{1n} \\ \widetilde{d}_{21} & 0 & \cdots & \widetilde{d}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \widetilde{d}_{n1} & \widetilde{d}_{n2} & \cdots & 0 \end{bmatrix} \tag{15}$$

where $\widetilde{d}_{ij} = \left(d_{ij1}, d_{ij2}, d_{ij3}, d_{ij4}\right)$ represents the normalized values of the matrix $\widetilde{X} = \left[\widetilde{x}_{ij}\right]_{n \times n}$, which are obtained by applying Equations (16) and (17):

$$\widetilde{d}_{ij} = \frac{\widetilde{x}_{ij}}{\widetilde{s}} = \left( \frac{x_{ij1}}{s_4}, \frac{x_{ij2}}{s_3}, \frac{x_{ij3}}{s_2}, \frac{x_{ij4}}{s_1} \right) \tag{16}$$

$$\widetilde{s} = \max_i \left\{ \sum_{j=1}^{n} \widetilde{x}_{ij} \right\} = \left( \max_i \left\{ \sum_{j=1}^{n} x_{ij1} \right\}, \max_i \left\{ \sum_{j=1}^{n} x_{ij2} \right\}, \max_i \left\{ \sum_{j=1}^{n} x_{ij3} \right\}, \max_i \left\{ \sum_{j=1}^{n} x_{ij4} \right\} \right) \tag{17}$$
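The normalization in Equations (16) and (17) can be sketched in code as follows; the TrFNs are stored as 4-tuples, and the matrix entries are illustrative, not taken from the paper.

```python
# Sketch of Step 3 (Equations (15)-(17)): normalizing a fuzzy
# direct-relation matrix whose entries are trapezoidal fuzzy numbers
# (x_ij1, x_ij2, x_ij3, x_ij4). The 3x3 matrix below is illustrative.
def normalize_fuzzy_matrix(X):
    """X: n x n list of TrFN 4-tuples; returns the normalized matrix N."""
    n = len(X)
    # Equation (17): s_k = max_i sum_j x_ijk, component by component
    s = [max(sum(X[i][j][k] for j in range(n)) for i in range(n))
         for k in range(4)]
    # Equation (16): fuzzy division x~_ij / s~ crosses the components,
    # dividing lower ends by upper sums and vice versa
    return [[(X[i][j][0] / s[3], X[i][j][1] / s[2],
              X[i][j][2] / s[1], X[i][j][3] / s[0])
             for j in range(n)] for i in range(n)]

X = [[(0, 0, 0, 0), (1, 2, 3, 4), (2, 3, 4, 5)],
     [(1, 2, 3, 4), (0, 0, 0, 0), (1, 1, 2, 3)],
     [(0, 1, 1, 2), (2, 3, 4, 5), (0, 0, 0, 0)]]
N = normalize_fuzzy_matrix(X)
```

Note that the crossed division keeps each normalized entry an ordered 4-tuple, since $x_{ij1} \leq x_{ij2} \leq x_{ij3} \leq x_{ij4}$ and $s_1 \leq s_2 \leq s_3 \leq s_4$.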

Step 4: Determining the fuzzy number-based total relation matrices: By applying Equations (18)–(20), we obtain a total influence matrix $T = \left[\widetilde{t}_{ij}\right]_{n \times n}$, where $I$ is an $n \times n$ identity matrix. Since the matrix $N = \left[\widetilde{d}_{ij}\right]_{n \times n}$ consists of trapezoidal fuzzy numbers, we can form four submatrices $N = (N_1, N_2, N_3, N_4)$, where $N_1 = \left[d_{ij1}\right]_{n \times n}$, $N_2 = \left[d_{ij2}\right]_{n \times n}$, $N_3 = \left[d_{ij3}\right]_{n \times n}$, and $N_4 = \left[d_{ij4}\right]_{n \times n}$. In addition, $\lim_{m \to \infty}(N_1)^m = O$, $\lim_{m \to \infty}(N_2)^m = O$, $\lim_{m \to \infty}(N_3)^m = O$, and $\lim_{m \to \infty}(N_4)^m = O$, where $O$ represents the zero matrix.

$$\begin{array}{l}\lim\_{m\to\infty}\left(I+N\_{1}+N\_{1}^{2}+\ldots+N\_{1}^{m}\right)=\left(I-N\_{1}\right)^{-1}\\\lim\_{m\to\infty}\left(I+N\_{2}+N\_{2}^{2}+\ldots+N\_{2}^{m}\right)=\left(I-N\_{2}\right)^{-1}\\\lim\_{m\to\infty}\left(I+N\_{3}+N\_{3}^{2}+\ldots+N\_{3}^{m}\right)=\left(I-N\_{3}\right)^{-1}\\and\\\lim\_{m\to\infty}\left(I+N\_{4}+N\_{4}^{2}+\ldots+N\_{4}^{m}\right)=\left(I-N\_{4}\right)^{-1}\end{array}\tag{18}$$

The total relation fuzzy matrix *T* is obtained by computing each of the sub-elements:

$$\begin{aligned} T_1 &= \lim_{m \to \infty} \left( I + N_1 + N_1^2 + \dots + N_1^m \right) = \left( I - N_1 \right)^{-1} = \left[ t_{ij1} \right]_{n \times n} \\ T_2 &= \lim_{m \to \infty} \left( I + N_2 + N_2^2 + \dots + N_2^m \right) = \left( I - N_2 \right)^{-1} = \left[ t_{ij2} \right]_{n \times n} \\ T_3 &= \lim_{m \to \infty} \left( I + N_3 + N_3^2 + \dots + N_3^m \right) = \left( I - N_3 \right)^{-1} = \left[ t_{ij3} \right]_{n \times n} \\ T_4 &= \lim_{m \to \infty} \left( I + N_4 + N_4^2 + \dots + N_4^m \right) = \left( I - N_4 \right)^{-1} = \left[ t_{ij4} \right]_{n \times n} \end{aligned} \tag{19}$$

where $N_1 = \left[d_{ij1}\right]_{n \times n}$, $N_2 = \left[d_{ij2}\right]_{n \times n}$, $N_3 = \left[d_{ij3}\right]_{n \times n}$, and $N_4 = \left[d_{ij4}\right]_{n \times n}$. Submatrices $T_1$, $T_2$, $T_3$, and $T_4$ form the single fuzzy total relation matrix $T = (T_1, T_2, T_3, T_4)$, which is presented as follows:

$$T = \begin{bmatrix} \widetilde{t}_{11} & \widetilde{t}_{12} & \cdots & \widetilde{t}_{1n} \\ \widetilde{t}_{21} & \widetilde{t}_{22} & \cdots & \widetilde{t}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \widetilde{t}_{n1} & \widetilde{t}_{n2} & \cdots & \widetilde{t}_{nn} \end{bmatrix}_{n \times n} \tag{20}$$

where $\widetilde{t}_{ij} = \left(t_{ij1}, t_{ij2}, t_{ij3}, t_{ij4}\right)$ is the total assessment of the experts' effect for each pair of criteria $i$ and $j$, thus expressing their mutual influence and dependence.
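Step 4 can be sketched in code as follows; per Equations (18) and (19), each crisp submatrix $N_k$ is mapped to $(I - N_k)^{-1}$. The numbers below are illustrative only.

```python
import numpy as np

# Sketch of Step 4 (Equations (18)-(20)): each crisp submatrix N_k of the
# normalized fuzzy matrix is processed independently, and, following the
# limit in Equation (18), T_k = (I - N_k)^{-1}.
def total_relation(N_k):
    """Return (I - N_k)^(-1), the limit of I + N_k + N_k^2 + ..."""
    return np.linalg.inv(np.eye(N_k.shape[0]) - N_k)

N1 = np.array([[0.0, 0.2],
               [0.1, 0.0]])
T1 = total_relation(N1)

# Sanity check: the truncated Neumann series approaches the same matrix,
# since the powers of N1 vanish (Equation (18))
series = sum(np.linalg.matrix_power(N1, m) for m in range(51))
assert np.allclose(T1, series)
```

The series converges because the spectral radius of each $N_k$ is below one, which is what the condition $\lim_{m \to \infty}(N_k)^m = O$ expresses.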

Step 5: Computing the sums of the rows and columns of the total relation matrix: These sums are represented by the vectors $R$ and $C$ of rank $n \times 1$, given by Equations (21) and (22):

$$R = \left[\sum_{j=1}^{n} \widetilde{t}_{ij}\right]_{n \times 1} = \left[\left(\sum_{j=1}^{n} t_{ij1}, \sum_{j=1}^{n} t_{ij2}, \sum_{j=1}^{n} t_{ij3}, \sum_{j=1}^{n} t_{ij4}\right)\right]_{n \times 1} \tag{21}$$

$$C = \left[\sum_{i=1}^{n} \widetilde{t}_{ij}\right]_{1 \times n} = \left[\left(\sum_{i=1}^{n} t_{ij1}, \sum_{i=1}^{n} t_{ij2}, \sum_{i=1}^{n} t_{ij3}, \sum_{i=1}^{n} t_{ij4}\right)\right]_{1 \times n} \tag{22}$$

The value $R_i$ represents the sum of the $i$th row of the matrix $T$; it quantifies the total direct and indirect effects that criterion $i$ exerts on the other criteria. Meanwhile, the value $C_j$ represents the sum of the $j$th column of the matrix $T$ and shows the effects that criterion $j$ receives from the other criteria [37].

Step 6. Determining the weight coefficients of the criteria ($w_j$): This is achieved via Equation (23):

$$
\widetilde{W}_j = \sqrt{\left(\widetilde{R}_i + \widetilde{C}_i\right)^2 + \left(\widetilde{R}_i - \widetilde{C}_i\right)^2} \tag{23}
$$

where the values $\widetilde{R}_i + \widetilde{C}_i$ and $\widetilde{R}_i - \widetilde{C}_i$ are obtained using Equations (24) and (25):

$$
\widetilde{R}_i + \widetilde{C}_i = \left(\sum_{j=1}^{n} t_{ij1} + \sum_{i=1}^{n} t_{ij1},\ \sum_{j=1}^{n} t_{ij2} + \sum_{i=1}^{n} t_{ij2},\ \sum_{j=1}^{n} t_{ij3} + \sum_{i=1}^{n} t_{ij3},\ \sum_{j=1}^{n} t_{ij4} + \sum_{i=1}^{n} t_{ij4}\right) \tag{24}
$$

$$
\widetilde{R}_i - \widetilde{C}_i = \left(\sum_{j=1}^{n} t_{ij1} - \sum_{i=1}^{n} t_{ij1},\ \sum_{j=1}^{n} t_{ij2} - \sum_{i=1}^{n} t_{ij2},\ \sum_{j=1}^{n} t_{ij3} - \sum_{i=1}^{n} t_{ij3},\ \sum_{j=1}^{n} t_{ij4} - \sum_{i=1}^{n} t_{ij4}\right) \tag{25}
$$

The normalization of the weight coefficients is carried out by Equation (26):

$$w_j = \frac{\widetilde{W}_j}{\sum_{j=1}^{n} \widetilde{W}_j} \tag{26}$$

where $n$ is the number of criteria and $\widetilde{w}_j$ denotes the fuzzy value of the criterion weight. The criterion weights take the form $\widetilde{w}_j = \left(w_{j1}, w_{j2}, w_{j3}, w_{j4}\right)$, where the condition $0 \leq w_{j1} \leq w_{j2} \leq w_{j3} \leq w_{j4} \leq 1$ is fulfilled for each evaluation criterion. In addition, the requirement that the sum of the weight coefficients of the criteria be equal to one must be fulfilled. Since these are fuzzy weight coefficients, Equation (26) yields weight coefficients for which $0 \leq \sum_{j=1}^{n} w_{j1} \leq \sum_{j=1}^{n} w_{j2} \leq \sum_{j=1}^{n} w_{j3} \leq 1$ and $\sum_{j=1}^{n} w_{j4} \geq 1$. This fulfills the condition that the criterion weights lie in the interval $w_j \in [0, 1]$, $(j = 1, 2, \ldots, n)$.
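Steps 5 and 6 can be sketched on already-defuzzified values of $R_i$ and $C_i$; the paper carries trapezoidal fuzzy numbers through these steps, so defuzzifying first is a simplifying assumption of this sketch, and the numbers are illustrative.

```python
import math

# Sketch of Steps 5-6 (Equations (21)-(26)) on defuzzified R_i and C_i.
R = [2.1, 1.8, 1.9]   # row sums of T (total effects exerted)
C = [1.7, 2.0, 2.1]   # column sums of T (total effects received)

# Equation (23): W_j = sqrt((R_j + C_j)^2 + (R_j - C_j)^2)
W = [math.sqrt((r + c) ** 2 + (r - c) ** 2) for r, c in zip(R, C)]

# Equation (26): normalize so that the weights sum to one
w = [wj / sum(W) for wj in W]
```

The term $R + C$ captures the overall prominence of a criterion and $R - C$ its net influence; the Euclidean combination in Equation (23) weights both.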

#### **4. Application of TrFN D-DEMATEL Method**

This section describes the application of the TrFN D-DEMATEL method for determining the quality of logistics services in order to obtain an adequate insight into the management processes of the service provider. The research by Prentkovskis et al. [38] was used to test the methodology presented. The dimensions that affect the measurement of logistics service quality were taken from the study [38], and they were evaluated using the TrFN D-DEMATEL methodology. There were five defined dimensions: reliability (C1), assurance (C2), tangibles (C3), empathy (C4), and responsiveness (C5). The study involved six experts who evaluated the dimensions. A detailed description of applying the TrFN D-DEMATEL methodology is presented in the following section.

Step 1: Experts' analysis of factors.

Six experts participated in the study, and they were divided into two homogeneous expert groups: EG1 and EG2. The expert groups expressed their preferences when comparing dimensions using a nine-degree fuzzy linguistic scale; see Table 1. Each expert group defined the mutual degree of influence of the criteria by D numbers; see Table 2.

Table 2 shows the experts' comparisons of dimensions using D numbers, where the D number $D^1$ represents the preferences of the EG1 expert group and $D^2$ represents the preferences of the EG2 expert group.

Step 2: Forming a single fuzzy direct-relation matrix.

Phase I: In order to obtain aggregated experts' preferences, a fusion of the uncertainties expressed in the group preferences $D^1$ and $D^2$ is performed. For the uncertainty fusion, the rule for the combination of D numbers, $D_{ij} = D^1_{ij} \odot D^2_{ij}$ (Equation (3)), is used. Thus, an aggregated D matrix of experts' preferences is obtained; see Table 3.

In order to clarify the application of the rules for combining D numbers, the following section shows the application of the rules for the combination of D numbers for position C2−C1 in the experts' analysis of dimensions (Table 2).

Based on the data in Table 2, for position C2–C1, we can distinguish two D numbers that represent the preferences of the homogeneous expert groups: $D^1$ = {(VH, 0.2), (VH;EH, 0.35), (EH, 0.4)} (where VH is 'very high' and EH is 'extremely high') and $D^2$ = {(VH, 0.25), (VH;EH, 0.45), (EH, 0.1)}. Table 4 provides an analysis of the data on the D numbers whose combination was considered, $D = D^1_{C2-C1} \odot D^2_{C2-C1}$.

By applying Equation (4), we can calculate the relationships defined by the rule for the combination of D numbers.

$$K_D = \frac{1}{Q_1 Q_2} \left( D^1_{C2-C1}(\mathrm{VH}) \cdot D^2_{C2-C1}(\mathrm{EH}) + D^1_{C2-C1}(\mathrm{EH}) \cdot D^2_{C2-C1}(\mathrm{VH}) \right) = 0.158$$

$$Q\_{1} = D\_{\text{C2-C1}}^{1} \text{(VH)} + D\_{\text{C2-C1}}^{1} \text{(VH;EH)} + D\_{\text{C2-C1}}^{1} \text{(EH)} = 0.2 + 0.35 + 0.4 = 0.95$$

$$Q_2 = D^2_{C2-C1}(\mathrm{VH}) + D^2_{C2-C1}(\mathrm{VH;EH}) + D^2_{C2-C1}(\mathrm{EH}) = 0.25 + 0.45 + 0.1 = 0.80$$

Thus, we can obtain:

$$D_{C2-C1}(\mathrm{VH}) = \frac{1}{1-K_D} \begin{pmatrix} D^1_{C2-C1}(\mathrm{VH}) D^2_{C2-C1}(\mathrm{VH}) + D^1_{C2-C1}(\mathrm{VH}) D^2_{C2-C1}(\mathrm{VH;EH})\ + \\ D^1_{C2-C1}(\mathrm{VH;EH}) D^2_{C2-C1}(\mathrm{VH}) \end{pmatrix} = 0.270$$

$$D_{C2-C1}(\mathrm{VH;EH}) = \frac{1}{1-K_D} \left( D^1_{C2-C1}(\mathrm{VH;EH}) D^2_{C2-C1}(\mathrm{VH;EH}) \right) = 0.187$$

$$D_{C2-C1}(\mathrm{EH}) = \frac{1}{1-K_D} \begin{pmatrix} D^1_{C2-C1}(\mathrm{VH;EH}) D^2_{C2-C1}(\mathrm{EH}) + D^1_{C2-C1}(\mathrm{EH}) D^2_{C2-C1}(\mathrm{VH;EH})\ + \\ D^1_{C2-C1}(\mathrm{EH}) D^2_{C2-C1}(\mathrm{EH}) \end{pmatrix} = 0.303$$
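The combination above can be reproduced in a few lines of code; the dictionary representation of a D number is our own illustrative choice, not the paper's notation.

```python
# Worked check of the combination rule for D numbers at position C2-C1.
# A D number is a dict mapping a frozenset of FLVs to a mass; masses may
# sum to less than 1 (incomplete information is permitted).
D1 = {frozenset({'VH'}): 0.20, frozenset({'VH', 'EH'}): 0.35,
      frozenset({'EH'}): 0.40}
D2 = {frozenset({'VH'}): 0.25, frozenset({'VH', 'EH'}): 0.45,
      frozenset({'EH'}): 0.10}

Q1, Q2 = sum(D1.values()), sum(D2.values())   # 0.95 and 0.80

# Degree of conflict K_D: mass on pairs of focal elements whose
# intersection is empty, rescaled by Q1 * Q2
K_D = sum(m1 * m2 for f1, m1 in D1.items() for f2, m2 in D2.items()
          if not f1 & f2) / (Q1 * Q2)

# Combined masses: accumulate m1 * m2 over non-empty intersections,
# then rescale by 1 / (1 - K_D)
D = {}
for f1, m1 in D1.items():
    for f2, m2 in D2.items():
        if f1 & f2:
            D[f1 & f2] = D.get(f1 & f2, 0.0) + m1 * m2
D = {f: m / (1 - K_D) for f, m in D.items()}
```

Running this sketch reproduces the values derived above: $K_D \approx 0.158$ and the combined masses 0.270, 0.187, and 0.303.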


**Table 3.** Aggregated D matrix of experts' preferences.

**Table 4.** Intersection table to combine $D^1_{C2-C1}$ and $D^2_{C2-C1}$.


Phase II: After applying the rule for the combination of D numbers, we can obtain a D number located between the fuzzy linguistic variables VH and EH, and so it is necessary to transform the uncertainty found between the fuzzy variables VH and EH into unique FLVs. The transformation of uncertainty is performed by applying Equations (13) and (14). The following section presents the procedure for the transformation of uncertainty between the fuzzy variables VH and EH. A graphical display of the fuzzy linguistic variables VH and EH is given in Figure 4.

**Figure 4.** Fuzzy linguistic variables VH (very high) and EH (extremely high).

The transformation of FLVs is performed on the basis of the ratio of the area at the intersection, $S_{\mathrm{VH,EH}}$, to the areas of the fuzzy variables VH and EH, i.e., $S_{\mathrm{VH}} = 0.5 \times 4 \times 1 = 2.00$, $S_{\mathrm{EH}} = 0.5 \times 3 \times 1 = 1.50$, and $S_{\mathrm{VH,EH}} = 0.5 \times 2 \times 1 = 1.00$. Using Equations (13) and (14), we can obtain the finite values of the D numbers for the fuzzy variables VH and EH:

$$\begin{array}{l} D_{C2-C1}(\mathrm{VH}) = 0.270 + 0.187 \dfrac{1/2}{1/2 + 1/1.5} = 0.350 \\ D_{C2-C1}(\mathrm{EH}) = 0.303 + 0.187 \dfrac{1/1.5}{1/2 + 1/1.5} = 0.410 \end{array}$$


Thus, we obtain the D number $D_{C2-C1}$ = {(VH, 0.350), (EH, 0.410)}, which occupies position C2−C1. The remaining values of the aggregated D matrix of experts' preferences are obtained in a similar way (Table 3).
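The transformation step can be checked numerically; the areas follow from the trapezoids VH = (7, 8, 9, 10) and EH = (8, 9, 10, 10), as stated above.

```python
# Worked check of the FLV transformation (Equations (13) and (14)) for
# the C2-C1 example. The masses 0.270, 0.187, and 0.303 come from the
# combination step; S_VH = 2.0, S_EH = 1.5, and the intersection area is 1.0.
S_int, S_VH, S_EH = 1.0, 2.0, 1.5

denom = S_int / S_VH + S_int / S_EH              # shared denominator
D_VH = 0.270 + 0.187 * (S_int / S_VH) / denom    # Equation (13)
D_EH = 0.303 + 0.187 * (S_int / S_EH) / denom    # Equation (14)
```

This reproduces the finite values 0.350 and 0.410 obtained above.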

Phase III: Using Equation (4), the values of the aggregated D matrix of experts' preferences are integrated into the corresponding fuzzy values; see Table 5. By this procedure, the uncertainties expressed by D numbers are transformed into unique trapezoidal fuzzy numbers.


**Table 5.** Single fuzzy direct-relation matrix.

Using Equations (4), (7), and (8), the element C2−C1 of the single fuzzy direct-relation matrix (Table 5) is obtained as follows:

$$\widetilde{x}_{21} = 0.350 \cdot (7, 8, 9, 10) + 0.410 \cdot (8, 9, 10, 10) = (5.73, 6.49, 7.25, 7.60)$$
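This integration step is a mass-weighted sum of the FLV trapezoids and can be checked directly:

```python
# Worked check of the integration step (Phase III): the combined D number
# D_C2-C1 = {(VH, 0.350), (EH, 0.410)} is mapped to a single trapezoidal
# fuzzy number as the mass-weighted sum of the FLV trapezoids.
VH = (7, 8, 9, 10)    # TrFN for 'very high'
EH = (8, 9, 10, 10)   # TrFN for 'extremely high'

x21 = tuple(0.350 * v + 0.410 * e for v, e in zip(VH, EH))
```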

Similarly, we can obtain the remaining elements of the single fuzzy direct-relation matrix (Table 5).

Steps 3 and 4: Computing the elements of the normalized fuzzy direct-relation matrix and the total fuzzy influence matrix.

By applying Equations (16) and (17), we can obtain the elements of the normalized fuzzy direct-relation matrix; see Table 6.

In the next step, by using Equations (18)–(20), we can obtain the total influence matrix $T = \left[\widetilde{t}_{ij}\right]_{5 \times 5}$; see Table 7.

Steps 5 and 6: Computing the sum of rows and columns of the fuzzy total relation matrix and determining the optimal values of the weight coefficients of dimensions.

The optimal values of the weight coefficients of the dimensions are defined on the basis of the total direct/indirect effects that criterion $i$ exerts on the other criteria ($R_i$) and the total direct/indirect effects that criterion $j$ receives from the other criteria ($C_j$). The values of $R_i$ and $C_j$ are obtained using Equations (21) and (22). After calculating the values of $R_i$ and $C_j$ (Table 8), we can obtain the optimal values of the dimensions by using Equations (23)–(26).


**Table 6.** Normalized fuzzy direct-relation matrix.

**Table 7.** Total fuzzy influence matrix.


**Table 8.** Ranking the weight coefficients of the dimensions.


The final values of the weight coefficients of the dimensions are: reliability ($w_1$ = 0.226), assurance ($w_2$ = 0.211), tangibles ($w_3$ = 0.213), empathy ($w_4$ = 0.160), and responsiveness ($w_5$ = 0.190). Based on the presented results, we can define the final ranking as C1 > C3 > C2 > C5 > C4.

Compared to crisp DEMATEL, the proposed method has two main advantages. The first advantage of the proposed model is that it eliminates the disadvantage of DST whereby the elements in the frame of discernment are required to be independent. While both evidential DEMATEL [39] and DEMATEL-D can decrease the subjectivity of expert preferences, DST is not well suited to the representation of linguistic estimates in conditions where the elements of the frame of discernment must be mutually exclusive. As shown in Figure 1a, the variables must have crisp boundaries in DST. However, this demand is difficult to satisfy for LVs such as "F", "B", and "U". As shown in Figure 1b, the theory of D numbers overcomes this shortcoming and permits overlap between LVs, which makes it more applicable for linguistic assessments. Furthermore, in DST, the basic probability assignment must present complete information, i.e., the sum of the probabilities must be 1. However, in the theory of D numbers, the information can be incomplete, which is more practical and realistic.

The second advantage of the proposed DEMATEL-D model is related to reducing the experts' subjectivity. Even though both fuzzy DEMATEL [40] and the TrFN DEMATEL consider fuzziness, the developed method is more objective than fuzzy DEMATEL because it can reduce the impact of expert subjectivity by fusing group opinions.

The strength of the theory of D numbers is its ability to integrate group information. Therefore, in order to verify and validate the developed method, this study computed the result for each expert group and compared the opinions of these two expert groups with the final result, as shown in Table 8. In all three cases (the first group, the second group, and the aggregated values), C1 was the most significant element, but the ranking of the other elements varied considerably, showing that the final rank was sensitive to the knowledge of the experts. Consequently, the need for the integration of expert information in various fields using the theory of D numbers has been demonstrated.

The theory of D numbers is used to fuse the expert preferences in decision-making processes. Therefore, it is reasonable to expect the aggregated values to be close to the values represented by the expert preferences. The final values of the criteria obtained using the DEMATEL-D model in this study lay between the results proposed by the individual expert evaluations. This shows that the proposed model respects the uncertainties that exist in group decision-making and that it gives results that are valid and reasonable.

#### **5. Conclusions**

In this paper, the fuzzy DEMATEL methodology was extended with D numbers to overcome the uncertainties and subjectivities that are inevitable in group decision-making processes, especially with numerous decision-makers. The integration of fuzzy DEMATEL with D numbers allows for the consideration of the uncertainties that exist in experts' comparisons of criteria, and the intervals of the fuzzy linguistic expressions are defined based on the uncertainty and imprecision in the experts' judgments. The introduction of D numbers makes it possible to take into account the additional uncertainties that arise when selecting fuzzy linguistic variables from a predefined set. In addition to fuzzy linguistic variables, D numbers introduce the probability of choosing a fuzzy linguistic variable, thus increasing the objectivity and quality of the available data in group decision-making. This was demonstrated, for example, by determining the quality of logistics services in order to obtain an adequate insight into the management processes. Considering that this is a new extension of the fuzzy DEMATEL method by D numbers, demonstrated on a real study, it can be concluded that the development of the presented methodology is justified. Future research may be based on the wider application of MCDM methods with D numbers. In addition, it is possible to integrate rough numbers with D numbers, which could provide a more comprehensive concept for managing decision-making processes.

**Author Contributions:** Conceptualization, I.P. and Ž.S.; methodology, I.P.; D.P., and Ž.S.; validation, S.D. and D.K.D.; writing—original draft preparation, D.P and O.M.; writing—review and editing, D.K.D. and S.D.; supervision, Ž.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

1. Gabus, A.; Fontela, E. *World Problems, an Invitation to Further Thought within the Framework of DEMATEL*; Battelle Geneva Research Centre: Geneva, Switzerland, 1972.


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Preview Control for MIMO Discrete-Time System with Parameter Uncertainty**

#### **Li Li <sup>1</sup> and Fucheng Liao 2,\***


Received: 16 April 2020; Accepted: 6 May 2020; Published: 9 May 2020

**Abstract:** We consider the problems of state feedback and static output feedback preview controller (PC) for uncertain discrete-time multiple-input multiple output (MIMO) systems based on the parameter-dependent Lyapunov function and the linear matrix inequality (LMI) technique in this paper. First, for each component of a reference signal, an augmented error system (AES) containing previewed information is constructed via the difference operator and state augmentation technique. Then, for the AES, the state feedback and static output feedback are introduced, and when considering the output feedback, a previewable reference signal is utilized by modifying the output equation. The preview controllers' parameter matrices can be achieved from the solution of LMI problems. The superiority of the PC is illustrated via two numerical examples.

**Keywords:** AES; PC; MIMO discrete-time system; state feedback and output feedback; parameter dependence

#### **1. Introduction**

In the field of control, there are many effective control methods, for example, optimal control [1], learning control [2], tracking control [3], and repetitive control [4]. In many practical problems, future information is known completely or partially, such as a vehicle driving path, the scheduled flight route of an aircraft, and the machining rules of a machine tool. Preview control can fully utilize the future values of these previewed signals to improve the control performance [5,6]. Preview control was first proposed by Sheridan in 1966 [7], and Bender [8] applied preview control theory to a vehicle suspension system. The field of preview control has attracted researchers and has been studied since the 1970s (see the papers [9–13]). For a linear constant preview control system, LQR-based design methods have been most widely studied, e.g., [14–20]. However, the presence of an unknown disturbance or an uncertain system model can cause degraded performance or even loss of closed-loop stability. To deal with this problem, robust preview control has received considerable attention [21–27]. In recent years, the integration of preview control with other control methods has attracted much attention. For example, in [28,29], the analysis and design problems of preview repetitive control for discrete systems were investigated. Fault-tolerant control theory was combined with preview control in [30,31]. In [32], the preview control concept was added to a Lipschitz non-linear system to consider preview tracking control problems. Preview control has also attracted researchers for its applications in varied areas, e.g., wind turbine blade-pitch control [33], autonomous vehicle guidance [34], and robotics [35].

With the rapid development of computer, electronics and information technology, industrial systems are becoming larger and more complex. Therefore, it is more interesting to consider the control problem of MIMO systems. For example, the preview control problem of MIMO systems was studied in [36] by combining linear quadratic optimal theory with the AES method. However, the dimension of an AES is high and the calculation is complex. In addition, through numerical simulations, we find that the preview control effect is not ideal when the reference signal is a vector, as in [11,13,15,36,37]. The components of the reference signal influence each other, and the influence is often negative. However, for a high-dimensional reference signal, the AES constructed in [11,13,15,36,37] not only has a high dimension, but the component signals also share the same preview length.

In this paper, robust PC design methods are proposed for MIMO discrete systems. First, the construction of an AES including previewable signals is carried out. Then, sufficient conditions of closed-loop systems and the PC design methods are proposed. The main contributions of our preview control scheme are summarized as follows: (i) The AES of a MIMO uncertain discrete-time system is successfully constructed from a new perspective. It not only constructs a lower-dimensional error system, but it also provides optional preview lengths. (ii) Our desired PC design method can avoid the negative influence of reference signal components on each other, and then effectively improve the tracking performance. (iii) Our design additionally allows the system output matrices to be non-common and have uncertainties. Finally, the simulation results clearly validate the superiority of the proposed PC.

Notation. $A > 0$: symmetric and positive definite matrix $A$. $A^T$ denotes the transpose of $A$. The symbol $*$ denotes the entries of matrices implied by symmetry. $sym(A)$ means $A + A^T$. $I$ and $0$: identity matrix and zero matrix of appropriate dimensions, respectively.

#### **2. Problem Formulation**

Consider the uncertain discrete-time system

$$\begin{cases} x(k+1) = A(\theta)x(k) + B(\theta)u(k), \\ y(k) = C(\theta)x(k) + D(\theta)u(k), \end{cases} \tag{1}$$

where $x(k) \in R^n$, $u(k) \in R^m$, and $y(k) \in R^q$ are the state vector, control input vector, and output vector, respectively.

Let $y(k) = \left[\begin{array}{cccc} y_1(k) & y_2(k) & \cdots & y_q(k) \end{array}\right]^T$, and let $C^i(\theta)$ and $D^i(\theta)$ represent the $i$th $(i = 1, 2, \cdots, q)$ row of the matrices $C(\theta)$ and $D(\theta)$, respectively. Then, we have

$$y_i(k) = C^i(\theta)x(k) + D^i(\theta)u(k) \tag{2}$$

**A1**: The uncertain matrices are given by

$$\left[\begin{array}{cccc} A(\theta) & B(\theta) & \mathbf{C}(\theta) & D(\theta) \end{array}\right] = \sum\_{j=1}^{s} \theta\_{j} \begin{bmatrix} A\_{j} & B\_{j} & \mathbf{C}\_{j} & D\_{j} \end{bmatrix} \tag{3}$$

where $A_j$, $B_j$, $C_j$, and $D_j$ $(j = 1, 2, \cdots, s)$ are matrices with appropriate dimensions. $\theta = \left[\begin{array}{cccc} \theta_1 & \theta_2 & \cdots & \theta_s \end{array}\right]^T \in R^s$ is the parameter vector and satisfies

$$\theta \in \Theta := \left\{ \theta \in R^s \,\middle|\, \theta_j \ge 0,\ (j = 1, 2, \ldots, s),\ \sum_{j=1}^s \theta_j = 1 \right\} \tag{4}$$

**A2**: Let $r(k) = \left[\begin{array}{cccc} r_1(k) & r_2(k) & \cdots & r_q(k) \end{array}\right]^T \in R^q$ be the reference signal. Assume that the component reference signal $r_i(k)$ $(i = 1, 2, \cdots, q)$ is available from the current time $k$ to $k + h_i$. The future values are assumed not to change beyond $k + h_i$, namely,

$$r\_i(k+j) = r\_i(k+\mathbf{h}\_i), \ (\ j \ge \mathbf{h}\_i+1).$$

where $h_i$ is the preview length.

**Remark 1.** *It should be noted that A2 is an assumption about $r_i(k)$ $(i = 1, 2, \cdots, q)$ rather than $r(k)$. There are two advantages of A2: (1) each component $r_i(k)$ can have its own preview length $h_i$ instead of sharing one preview length $h$; (2) it can avoid the negative effects of the other signals.*

The objective is to design preview controller such that

(i) The output tracks the reference signal without steady-state error, that is,

$$\lim\_{k \to \infty} e\_i(k) = 0 \tag{5}$$

where $e_i(k) = y_i(k) - r_i(k)$.

(ii) The closed-loop system is robustly stable and exhibits acceptable transient responses for all $\theta \in \Theta$.

#### **3. Derivation of AES**

Here, we derive an AES that contains previewed information. Employing the difference operator $\Delta$ defined as:

$$
\Delta \delta(k) = \delta(k+1) - \delta(k) \tag{6}
$$

and applying the difference operator to (1) and (2), one obtains:

$$\begin{cases} \Delta \mathbf{x}(k+1) = A(\theta) \Delta \mathbf{x}(k) + B(\theta) \Delta u(k), \\\ \Delta y\_i(k) = \mathbf{C}^i(\theta) \Delta \mathbf{x}(k) + D^i(\theta) \Delta u(k). \end{cases} \tag{7}$$

Considering (5)–(7), it is obtained that:

$$e_i(k+1) = e_i(k) + C^i(\theta)\Delta x(k) + D^i(\theta)\Delta u(k) - \Delta r_i(k) \tag{8}$$

It follows from (6) and (8) that:

$$
\widetilde{\mathbf{x}}\_{i}(k+1) = \widetilde{A}\_{i}(\boldsymbol{\theta})\widetilde{\mathbf{x}}\_{i}(k) + \widetilde{B}\_{i}(\boldsymbol{\theta})\Delta u\_{i}(k) + \mathbf{G}\Delta r\_{i}(k) \tag{9}
$$

where

$$
\widetilde{x}_i(k) = \begin{bmatrix} e_i(k) \\ \Delta x(k) \end{bmatrix},\ \widetilde{A}_i(\theta) = \begin{bmatrix} I & C^i(\theta) \\ 0 & A(\theta) \end{bmatrix},\ \widetilde{B}_i(\theta) = \begin{bmatrix} D^i(\theta) \\ B(\theta) \end{bmatrix},\ G = \begin{bmatrix} -1 \\ 0 \end{bmatrix}
$$

From A1, $\widetilde{A}_i(\theta)$ and $\widetilde{B}_i(\theta)$ can be given by:

$$\widetilde{A}\_{i}(\theta) = \begin{bmatrix} I & \sum\_{j=1}^{s} \theta\_{j} \mathbf{C}\_{j}^{i} \\ \mathbf{0} & \sum\_{j=1}^{s} \theta\_{j} A\_{j} \end{bmatrix} = \sum\_{j=1}^{s} \theta\_{j} \begin{bmatrix} I & \mathbf{C}\_{j}^{i} \\ \mathbf{0} & A\_{j} \end{bmatrix} = \sum\_{j=1}^{s} \theta\_{j} \widetilde{A}\_{i,j} \tag{10}$$

$$\widetilde{B}_i(\theta) = \begin{bmatrix} \sum_{j=1}^s \theta_j D_j^i \\ \sum_{j=1}^s \theta_j B_j \end{bmatrix} = \sum_{j=1}^s \theta_j \begin{bmatrix} D_j^i \\ B_j \end{bmatrix} = \sum_{j=1}^s \theta_j \widetilde{B}_{i,j} \tag{11}$$

Note that, in (10) and (11), $C_j^i$ and $D_j^i$ denote the $i$th row of the matrices $C_j$ and $D_j$, respectively, where $i \in \{1, 2, \cdots, q\}$ and $j \in \{1, 2, \cdots, s\}$.

From A2, $r_i(k), r_i(k+1), \cdots, r_i(k+h_i)$ are available at time $k$. Defining

$$x_{ri}(k) = \begin{bmatrix} \Delta r_i(k) \\ \Delta r_i(k+1) \\ \vdots \\ \Delta r_i(k+h_i) \end{bmatrix},\quad A_{R,i} = \begin{bmatrix} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1 \\ & & & & 0 \end{bmatrix}$$

we obtain:

$$x_{ri}(k+1) = A_{R,i}\, x_{ri}(k) \tag{12}$$

where $x_{ri}(k) \in R^{h_i+1}$ and $A_{R,i} \in R^{(h_i+1)\times(h_i+1)}$.

Each component $r_i(k)$ can have its own preview length $h_i$; therefore, $h_i$ can be selected appropriately as needed.
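The reference subsystem (12) is simply an upshift of the stacked preview vector: each step, the previewed increments move up one slot and the slot beyond the horizon becomes zero. A minimal numpy sketch (function names are ours, not the paper's):

```python
import numpy as np

def build_shift(h_i: int) -> np.ndarray:
    """(h_i+1) x (h_i+1) matrix of Eq. (12): ones on the first superdiagonal."""
    return np.eye(h_i + 1, k=1)

h_i = 3
A_R = build_shift(h_i)

# Stack Delta r_i(k), ..., Delta r_i(k+h_i) for a toy reference increment.
dr = np.array([1.0, 2.0, 3.0, 4.0])   # x_{ri}(k)
x_next = A_R @ dr                     # x_{ri}(k+1) per Eq. (12)
print(x_next)                         # [2. 3. 4. 0.]
```

Note that the last entry of the propagated vector is zero: increments beyond the preview horizon $h_i$ are treated as unknown and set to zero.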

Based on (8) and (12), we obtain:

$$\hat{x}_i(k+1) = \hat{A}_i(\theta)\hat{x}_i(k) + \hat{B}_i(\theta)\Delta u_i(k) \tag{13}$$

where

$$\hat{x}_i(k) = \begin{bmatrix} \widetilde{x}_i(k) \\ x_{ri}(k) \end{bmatrix},\quad \hat{A}_i(\theta) = \begin{bmatrix} \widetilde{A}_i(\theta) & W_i \\ 0 & A_{R,i} \end{bmatrix},\quad \hat{B}_i(\theta) = \begin{bmatrix} \widetilde{B}_i(\theta) \\ 0 \end{bmatrix},\quad W_i = \begin{bmatrix} G & 0 & \cdots & 0 \end{bmatrix}$$

System (13) is the AES, and the future information of $r_i(k)$ is included in it. Based on (10) and (11), $\hat{A}_i(\theta)$ and $\hat{B}_i(\theta)$ can be written as:

$$\hat{A}_i(\theta) = \begin{bmatrix} \sum_{j=1}^s \theta_j \widetilde{A}_{i,j} & W_i \\ 0 & A_{R,i} \end{bmatrix} = \sum_{j=1}^s \theta_j \begin{bmatrix} \widetilde{A}_{i,j} & W_i \\ 0 & A_{R,i} \end{bmatrix} = \sum_{j=1}^s \theta_j \hat{A}_{i,j} \tag{14}$$

$$\hat{B}_i(\theta) = \begin{bmatrix} \sum_{j=1}^s \theta_j \widetilde{B}_{i,j} \\ 0 \end{bmatrix} = \sum_{j=1}^s \theta_j \begin{bmatrix} \widetilde{B}_{i,j} \\ 0 \end{bmatrix} = \sum_{j=1}^s \theta_j \hat{B}_{i,j} \tag{15}$$

**Remark 2.** *System (13) is the so-called AES. Only the future information of $r_i(k)$ is added to the AES (13), rather than the future information of all of $r_1(k), r_2(k), \cdots, r_q(k)$. The benefits of this treatment are: (i) the size of the AES in this paper is smaller: our proposed AES has* $1 + n + (h_i + 1)$ *states, whereas the AES in refs. [5,10,11,26,27] has* $q + n + (h + 1)q$ *states; (ii) based on theoretical analysis and numerical simulations, we found that, if the future information of the whole of $r(k)$ is added to the AES as usual, the control effect of the PC is poor.*
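As a sanity check on the dimension count in Remark 2, the block structures of (9) and (13) can be assembled with numpy. All matrix values below are toy placeholders, not the paper's example data:

```python
import numpy as np

n, m, h_i = 2, 1, 2                      # toy dimensions
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Ci = np.array([[1.0, 0.0]])              # i-th row of C
Di = np.zeros((1, m))                    # i-th row of D

# Eq. (9): tilde-A, tilde-B, and G
A_t = np.block([[np.eye(1), Ci], [np.zeros((n, 1)), A]])
B_t = np.vstack([Di, B])
G = np.vstack([-np.eye(1), np.zeros((n, 1))])

# Eq. (13): hat-A, hat-B with the reference subsystem A_{R,i} of Eq. (12)
A_R = np.eye(h_i + 1, k=1)
W = np.hstack([G, np.zeros((n + 1, h_i))])
A_hat = np.block([[A_t, W], [np.zeros((h_i + 1, n + 1)), A_R]])
B_hat = np.vstack([B_t, np.zeros((h_i + 1, m))])
print(A_hat.shape, B_hat.shape)          # (6, 6) (6, 1)
```

The AES state dimension is $1 + n + (h_i + 1) = 6$ here, matching Remark 2.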

#### **4. PC Design**

Consider the following system

$$
\hat{x}_i(k+1) = \hat{A}_i(\theta)\hat{x}_i(k) \tag{16}
$$

**Lemma 1.** *System (16) is asymptotically stable if there exist $P_i(\theta) > 0$ and matrices $F_{1i}$ and $F_{2i}$ with appropriate dimensions such that:*

$$\Omega_i(\theta) = \begin{bmatrix} -P_i(\theta) - F_{1i}\hat{A}_i(\theta) - \hat{A}_i(\theta)^T F_{1i}^T & * \\ F_{1i}^T - F_{2i}\hat{A}_i(\theta) & P_i(\theta) + F_{2i} + F_{2i}^T \end{bmatrix} < 0,\ (i = 1, 2, \cdots, q) \tag{17}$$

**Proof.** Consider the Lyapunov function

$$V_i(k) = \hat{x}_i(k)^T P_i(\theta)\hat{x}_i(k).$$

We have

$$
\Delta V_i(k) = \hat{x}_i(k+1)^T P_i(\theta)\hat{x}_i(k+1) - \hat{x}_i(k)^T P_i(\theta)\hat{x}_i(k) \tag{18}
$$

From (16), the following equation holds for any matrices $F_{1i}$ and $F_{2i}$ with appropriate dimensions:

$$2\left[\hat{x}_i(k)^T F_{1i} + \hat{x}_i(k+1)^T F_{2i}\right]\left[\hat{x}_i(k+1) - \hat{A}_i(\theta)\hat{x}_i(k)\right] = 0 \tag{19}$$


Adding the left-hand side of (19) to (18) gives $\Delta V_i(k) = \xi_i(k)^T\Omega_i(\theta)\xi_i(k)$ with $\xi_i(k) = \left[\hat{x}_i(k)^T\ \hat{x}_i(k+1)^T\right]^T$. Hence, if (17) holds, then $\Delta V_i(k) < 0$, which implies that System (16) is asymptotically stable. This completes the proof.
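The Lyapunov decrease in (18) can be illustrated numerically. The sketch below uses a hypothetical Schur-stable matrix in place of $\hat{A}_i(\theta)$ and takes $P_i = I$; it illustrates the stability argument, not the LMI (17) itself:

```python
import numpy as np

# Toy Schur-stable matrix (spectral radius < 1) and P_i = I (made-up data).
A_hat = np.array([[0.5, 0.2], [0.0, 0.4]])
P = np.eye(2)

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    x_next = A_hat @ x
    dV = x_next @ P @ x_next - x @ P @ x   # Delta V of Eq. (18)
    assert dV < 0                          # V decreases along trajectories
print("Delta V < 0 on all sampled states")
```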

#### *4.1. State Feedback PC*

The state feedback control is presented as follows:

$$
\Delta u_i(k) = \left(\sum_{j=1}^s \gamma_j K_{i,j}\right)\hat{x}_i(k),\ (i = 1, 2, \cdots, q) \tag{20}
$$

where, *Ki*,*<sup>j</sup>* and γ*<sup>j</sup>* (*i* = 1, 2, · · · ,*s*) are matrices and adjustable variables to be determined, and γ*<sup>j</sup>* ≥ 0, P*s j*=1 <sup>γ</sup>*<sup>j</sup>* <sup>=</sup> 1. For convenience, we note that *<sup>K</sup>i*(γ) = <sup>P</sup>*<sup>s</sup> j*=1 γ*jKi*,*<sup>j</sup>* .

Substituting (20) into (13), we obtain:

$$\hat{x}_i(k+1) = \left[\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\gamma)\right]\hat{x}_i(k) \tag{21}$$

**Theorem 1.** *If there exist matrices Xi*(θ) > 0*, Yi*(γ)*, and H<sup>i</sup> and scalars* α*<sup>i</sup> and* β*<sup>i</sup> such that*

$$\Pi_i(\theta,\gamma) = \begin{bmatrix} -\alpha_i^2 X_i(\theta) - \mathrm{sym}(\alpha_i\hat{A}_i(\theta)H_i + \alpha_i\hat{B}_i(\theta)Y_i(\gamma)) & * \\ -\beta_i H_i^T - \alpha_i(\hat{A}_i(\theta)H_i + \hat{B}_i(\theta)Y_i(\gamma)) & \beta_i^2 X_i(\theta) - 2\beta_i H_i \end{bmatrix} < 0,\ (i = 1, 2, \cdots, q) \tag{22}$$

*then System (21) is asymptotically stable.*

**Proof.** For the closed-loop System (21), from Lemma 1 we know that System (21) is asymptotically stable if there exist $P_i(\theta) > 0$ and matrices $F_{1i}$ and $F_{2i}$ with appropriate dimensions satisfying:

$$\begin{bmatrix} -P_i(\theta) - \mathrm{sym}(F_{1i}(\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\gamma))) & * \\ F_{1i}^T - F_{2i}(\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\gamma)) & P_i(\theta) + F_{2i} + F_{2i}^T \end{bmatrix} < 0 \tag{23}$$

To obtain LMI conditions [38,39], let

$$F_{1i} = a_i R_i,\quad F_{2i} = -b_i R_i \tag{24}$$

where $a_i \neq 0$ and $b_i \neq 0$. Then, applying the congruence transformation $\mathrm{diag}\{F_{1i}^{-1}, F_{2i}^{-1}\}$ to (23) and denoting $R_i^{-T} = H_i$, $R_i^{-T}P_i(\theta)^{-1}R_i^{-1} = X_i(\theta)$, $K_i(\gamma)R_i^{-T} = Y_i(\gamma)$, $\alpha_i = 1/a_i$, and $\beta_i = 1/b_i$, we arrive at the condition in Theorem 1.
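The congruence step in this proof relies on the fact that $M < 0$ if and only if $T^T M T < 0$ for any invertible $T$, so negativity of the transformed LMI is equivalent to that of (23). A toy numerical check (all matrix values are made up):

```python
import numpy as np

M = np.array([[-2.0, 0.3], [0.3, -1.5]])   # a negative definite matrix
T = np.array([[2.0, 1.0], [0.0, -0.5]])    # invertible (det = -1)
M2 = T.T @ M @ T                           # congruence transformation

# Congruence preserves the signs of eigenvalues (Sylvester's law of inertia).
print(np.all(np.linalg.eigvalsh(M) < 0),
      np.all(np.linalg.eigvalsh(M2) < 0))  # True True
```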

**Theorem 2.** *Given scalars* α*<sup>i</sup> and* β*<sup>i</sup> , if there exist matrices Xi*,*<sup>j</sup>* > 0*, Yi*,*<sup>d</sup> , and H<sup>i</sup> such that:*

$$\Pi_{j,d}^i < 0,\ (i, j, d : i \in \{1, 2, \cdots, q\},\ j, d \in \{1, 2, \cdots, s\}) \tag{25}$$

*then System (21) is robustly stabilizable via (20), and the control input is given by*

$$
\Delta u_i(k) = K_i(\gamma)\hat{x}_i(k) = \sum_{d=1}^s \gamma_d Y_{i,d} H_i^{-1}\hat{x}_i(k) \tag{26}
$$

*In (25),*

$$\Pi_{j,d}^i = \begin{bmatrix} -\alpha_i^2 X_{i,j} - \mathrm{sym}(\alpha_i\hat{A}_{i,j}H_i + \alpha_i\hat{B}_{i,j}Y_{i,d}) & * \\ -\beta_i H_i^T - \alpha_i(\hat{A}_{i,j}H_i + \hat{B}_{i,j}Y_{i,d}) & \beta_i^2 X_{i,j} - 2\beta_i H_i \end{bmatrix}$$

**Proof.** Multiplying (25) by $\theta_j\gamma_d$ for $1 \leq j \leq s$ and $1 \leq d \leq s$ and summing, according to (14) and (15), we obtain

$$\Pi_i(\theta,\gamma) = \sum_{j=1}^s\sum_{d=1}^s \theta_j\gamma_d\, \Pi_{j,d}^i \tag{27}$$

and, thus, (25) implies $\Pi_i(\theta,\gamma) < 0$. From (22), Theorem 2 holds.
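The vertex-to-polytope argument in this proof is just convexity of negative definiteness: any convex combination of negative definite matrices is negative definite. A small numerical illustration with made-up symmetric vertices:

```python
import numpy as np

def is_neg_def(M: np.ndarray) -> bool:
    """True if the symmetric matrix M is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# Toy stand-ins for the vertex matrices Pi^i_{j,d} of (25).
vertices = [np.array([[-2.0, 0.5], [0.5, -1.0]]),
            np.array([[-1.0, -0.3], [-0.3, -3.0]]),
            np.array([[-4.0, 1.0], [1.0, -2.0]])]
assert all(is_neg_def(V) for V in vertices)

weights = [0.2, 0.5, 0.3]                   # nonnegative, sum to one
combo = sum(w * V for w, V in zip(weights, vertices))
print(is_neg_def(combo))                    # True
```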

If the system model parameter $\theta$ is available, the state feedback controller can be designed as

$$
\Delta u_i(k) = \left(\sum_{j=1}^s \theta_j K_{i,j}\right)\hat{x}_i(k) \tag{28}
$$

The matrices $K_{i,j}$ $(j = 1, 2, \cdots, s)$ are gain matrices, and we let $K_i(\theta) = \sum_{j=1}^s \theta_j K_{i,j}$. Applying (28) to System (13) yields

$$\hat{x}_i(k+1) = \left[\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\theta)\right]\hat{x}_i(k) \tag{29}$$

Based on Theorems 1 and 2, the following corollaries are presented.

**Corollary 1.** *The System (29) is asymptotically stable if there exist matrices Xi*(θ) > 0 *and Yi*(θ) *and scalars* α*<sup>i</sup> and* β*<sup>i</sup>* ∈ (0, 2)*, such that:*

$$\Pi_i(\theta) = \begin{bmatrix} -\alpha_i^2 X_i(\theta) - \mathrm{sym}(\alpha_i\hat{A}_i(\theta)X_i(\theta) + \alpha_i\hat{B}_i(\theta)Y_i(\theta)) & * \\ -\beta_i X_i(\theta) - \alpha_i(\hat{A}_i(\theta)X_i(\theta) + \hat{B}_i(\theta)Y_i(\theta)) & (\beta_i^2 - 2\beta_i)X_i(\theta) \end{bmatrix} < 0,\ (i = 1, 2, \cdots, q) \tag{30}$$

**Proof.** In Theorem 1, let $F_{1i}(\theta) = a_i P_i(\theta)$, $F_{2i}(\theta) = b_i P_i(\theta)$, $P_i(\theta)^{-1} = X_i(\theta)$, $K_i(\theta)X_i(\theta) = Y_i(\theta)$, $\alpha_i = 1/a_i$, and $\beta_i = 1/b_i$; then (30) is obtained.

**Corollary 2.** *For known scalars* β*<sup>i</sup>* ∈ (0, 2) *and* α*<sup>i</sup> , if there exist matrices Xi*,*<sup>d</sup>* > 0 *and Yi*,*<sup>d</sup> such that*

$$\Pi_{j,d}^i + \Pi_{d,j}^i < 0,\ (j \leq d :\ j, d \in \{1, 2, \cdots, s\},\ i \in \{1, 2, \cdots, q\}) \tag{31}$$

*then the System (29) is asymptotically stable, and the control input is given by*

$$
\Delta u_i(k) = \left(\sum_{d=1}^s \theta_d Y_{i,d}\right)\left(\sum_{d=1}^s \theta_d X_{i,d}\right)^{-1}\hat{x}_i(k) \tag{32}
$$

*In (31),*

$$\Pi_{j,d}^i = \begin{bmatrix} -\alpha_i^2 X_{i,d} - \mathrm{sym}(\alpha_i\hat{A}_{i,j}X_{i,d} + \alpha_i\hat{B}_{i,j}Y_{i,d}) & * \\ -\beta_i X_{i,d} - \alpha_i(\hat{A}_{i,j}X_{i,d} + \hat{B}_{i,j}Y_{i,d}) & (\beta_i^2 - 2\beta_i)X_{i,d} \end{bmatrix}$$

*The gain matrix Ki*,*<sup>j</sup> in (20) is divided as follows:*

$$K_{i,j} = \begin{bmatrix} K_{ej}^i & K_{xj}^i & K_{Rj}^i(0) & K_{Rj}^i(1) & \cdots & K_{Rj}^i(h_i) \end{bmatrix} \tag{33}$$

*Equation (20) is then written as*

$$\Delta u_i(k) = \sum_{j=1}^s \gamma_j\left[K_{ej}^i e_i(k) + K_{xj}^i\Delta x(k) + \sum_{d=0}^{h_i} K_{Rj}^i(d)\Delta r_i(k+d)\right]$$

*Therefore, the control input of System (1) is given by*

$$u_i(k) = K_e^i\sum_{h=0}^{k-1} e_i(h) + K_x^i x(k) + \sum_{d=0}^{h_i} K_R^i(d) r_i(k+d) \tag{34}$$

*where* $K_e^i = \sum_{j=1}^s \gamma_j K_{ej}^i$, $K_x^i = \sum_{j=1}^s \gamma_j K_{xj}^i$, *and* $K_R^i(d) = \sum_{j=1}^s \gamma_j K_{Rj}^i(d)$.
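The three-term structure of (34) can be sketched as a plain function: integral action on the tracking error, state feedback, and feedforward over the previewed reference samples. The gains and signals below are illustrative toy values, not the gains computed in Section 5:

```python
import numpy as np

def preview_input(K_e, K_x, K_R, err_hist, x, r_future):
    """u_i(k) = K_e * sum(e_i) + K_x @ x(k) + sum_d K_R[d] * r_i(k+d), Eq. (34)."""
    u = K_e * sum(err_hist) + K_x @ x
    for d, Kd in enumerate(K_R):
        u += Kd * r_future[d]
    return u

K_e, K_x = 0.5, np.array([0.1, -0.2])   # toy integral and state gains
K_R = [0.3, 0.2, 0.1]                   # toy preview gains for d = 0..h_i
u = preview_input(K_e, K_x, K_R, err_hist=[0.0, 1.0],
                  x=np.array([1.0, 2.0]), r_future=[1.0, 1.0, 1.0])
print(u)   # 0.5 - 0.3 + 0.6, i.e. approximately 0.8
```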

#### *4.2. Static Output Feedback PC*

To obtain the control law with preview compensation, for System (13), the output equation is modified as

$$z_i(k) = C_{Zi}(\theta)\hat{x}_i(k) \tag{35}$$

where

$$C_{Zi}(\theta) = \begin{bmatrix} I_{q_i} & & \\ & \sum_{j=1}^s \theta_j C_j^i & \\ & & I_{h_i+1} \end{bmatrix} = \sum_{j=1}^s \theta_j C_{Zi,j} \tag{36}$$

We consider an output feedback controller

$$
\Delta u_i(k) = \left(\sum_{j=1}^s \gamma_j K_{i,j}\right)z_i(k),\ (i = 1, 2, \cdots, q) \tag{37}
$$

Based on (13), (35), and (37), we obtain the following system:

$$\hat{x}_i(k+1) = \left[\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\gamma)C_{Zi}(\theta)\right]\hat{x}_i(k) \tag{38}$$

**Lemma 2** ([40])**.** *For appropriately dimensioned matrices $F$, $R$, $S$, and $N$ and scalar* $\beta$, $F + S^T R^T + RS < 0$ *is fulfilled if the following condition holds:*

$$
\begin{bmatrix}
F & * \\
\beta R^T + NS & -\beta N - \beta N^T
\end{bmatrix} < 0
$$

**Theorem 3.** *For given* α*<sup>i</sup> ,* β*<sup>i</sup> , and* ρ*<sup>i</sup> , the System (38) is asymptotically stable if there exist matrices Xi*(θ) > 0 *and matrices Q<sup>i</sup> , Li*(γ)*, U<sup>i</sup> , and H<sup>i</sup> , such that:*

$$\Pi_i(\theta,\gamma) = \begin{bmatrix} -\alpha_i^2 X_i(\theta) - \alpha_i\,\mathrm{sym}(\hat{A}_i(\theta)H_i + \hat{B}_i(\theta)L_i(\gamma)Q_i) & * & * \\ -\beta_i H_i^T - \alpha_i(\hat{A}_i(\theta)H_i + \hat{B}_i(\theta)L_i(\gamma)Q_i) & \beta_i^2 X_i(\theta) - 2\beta_i H_i & * \\ -\rho_i\alpha_i L_i(\gamma)^T\hat{B}_i(\theta)^T + C_{Zi}(\theta)H_i - U_iQ_i & -\rho_i\alpha_i L_i(\gamma)^T\hat{B}_i(\theta)^T & -\rho_i\,\mathrm{sym}(U_i) \end{bmatrix} < 0,\ (i = 1, 2, \cdots, q) \tag{39}$$

**Proof.** Equation (39) can be written in the form of Lemma 2 as

$$\begin{bmatrix} F & * \\ \rho_i R^T + NS & -\rho_i\,\mathrm{sym}(U_i) \end{bmatrix} < 0 \tag{40}$$

where $\beta = \rho_i$, $N = U_i$, $F$ is the leading $2\times 2$ block of (39), $R^T = -\alpha_i L_i(\gamma)^T\hat{B}_i(\theta)^T\begin{bmatrix} I & I \end{bmatrix}$, and $S = U_i^{-1}(C_{Zi}(\theta)H_i - U_iQ_i)\begin{bmatrix} I & 0 \end{bmatrix}$.

According to Lemma 2, (40) guarantees $F + S^T R^T + RS < 0$, that is,

$$\begin{bmatrix} -\alpha_i^2 X_i(\theta) - \alpha_i\,\mathrm{sym}(\hat{A}_i(\theta)H_i + \hat{B}_i(\theta)L_i(\gamma)Q_i) & * \\ -\beta_i H_i^T - \alpha_i(\hat{A}_i(\theta)H_i + \hat{B}_i(\theta)L_i(\gamma)Q_i) & \beta_i^2 X_i(\theta) - 2\beta_i H_i \end{bmatrix} - \mathrm{sym}\left(\alpha_i\begin{bmatrix} I \\ I \end{bmatrix}\hat{B}_i(\theta)L_i(\gamma)U_i^{-1}(C_{Zi}(\theta)H_i - U_iQ_i)\begin{bmatrix} I & 0 \end{bmatrix}\right) < 0. \tag{41}$$

Letting $K_i(\gamma) = L_i(\gamma)U_i^{-1}$, we have

$$
\begin{bmatrix} -\alpha_i^2 X_i(\theta) - \alpha_i\,\mathrm{sym}(\hat{A}_i(\theta)H_i) & * \\ -\beta_i H_i^T - \alpha_i\hat{A}_i(\theta)H_i & \beta_i^2 X_i(\theta) - 2\beta_i H_i \end{bmatrix} - \alpha_i\,\mathrm{sym}\left(\begin{bmatrix} I \\ I \end{bmatrix}\hat{B}_i(\theta)K_i(\gamma)C_{Zi}(\theta)H_i\begin{bmatrix} I & 0 \end{bmatrix}\right) < 0,
$$

and therefore

$$\begin{bmatrix} -\alpha_i^2 X_i(\theta) - \alpha_i\,\mathrm{sym}((\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\gamma)C_{Zi}(\theta))H_i) & * \\ -\beta_i H_i^T - \alpha_i(\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\gamma)C_{Zi}(\theta))H_i & \beta_i^2 X_i(\theta) - 2\beta_i H_i \end{bmatrix} < 0$$

From Theorem 1, Theorem 3 holds.

**Theorem 4.** *For given scalars* α*<sup>i</sup> ,* β*<sup>i</sup> , and* ρ*<sup>i</sup> and matrix Q<sup>i</sup> , if there exist Xi*,*<sup>j</sup>* > 0*, Li*,*<sup>d</sup> , H<sup>i</sup> , and U<sup>i</sup> such that*

$$\Pi_{j,d}^i < 0,\ (i, j, d : i \in \{1, 2, \cdots, q\},\ j, d \in \{1, 2, \cdots, s\}) \tag{42}$$

*then the System (38) is robust asymptotically stable. The controller is given by*

$$
\Delta u_i(k) = K_i(\gamma)z_i(k) = \sum_{d=1}^s \gamma_d L_{i,d}U_i^{-1}z_i(k) \tag{43}
$$

*In (42),*

$$\Pi_{j,d}^i = \begin{bmatrix} -\alpha_i^2 X_{i,d} - \alpha_i\,\mathrm{sym}(\hat{A}_{i,j}H_i + \hat{B}_{i,j}L_{i,d}Q_i) & * & * \\ -\beta_i H_i^T - \alpha_i(\hat{A}_{i,j}H_i + \hat{B}_{i,j}L_{i,d}Q_i) & \beta_i^2 X_{i,d} - 2\beta_i H_i & * \\ -\rho_i\alpha_i L_{i,d}^T\hat{B}_{i,j}^T + C_{Zi,j}H_i - U_iQ_i & -\rho_i\alpha_i L_{i,d}^T\hat{B}_{i,j}^T & -\rho_i\,\mathrm{sym}(U_i) \end{bmatrix}$$

*Similarly, if the uncertain parameters of the system model are known, we consider the following form of the parameter-dependent output controller:*

$$
\Delta u_i(k) = \left(\sum_{d=1}^s \theta_d K_{i,d}\right)z_i(k) \tag{44}
$$

*where Ki*,*<sup>d</sup>* (*<sup>d</sup>* <sup>=</sup> 1, 2, · · · ,*s*) *are gain matrices, and Ki*(θ) = <sup>P</sup>*<sup>s</sup> d*=1 θ*dKi*,*<sup>d</sup> .*

Based on (13) and (44), we obtain

$$\hat{x}_i(k+1) = \left[\hat{A}_i(\theta) + \hat{B}_i(\theta)K_i(\theta)C_{Zi}(\theta)\right]\hat{x}_i(k) \tag{45}$$

According to Theorems 3 and 4, Corollaries 3 and 4 are given as follows:

**Corollary 3.** *For given scalars* $\alpha_i$, $\rho_i$, *and* $\beta_i \in (0, 2)$, *the uncertain discrete-time closed-loop System (45) under the proposed controller (44) is asymptotically stable if there exist matrices* $X_i(\theta) > 0$, $L_i(\theta)$, $Q_i$, *and* $U_i$ *such that (46) holds:*

$$\Pi_i(\theta) = \begin{bmatrix} -\alpha_i^2 X_i(\theta) - \alpha_i\,\mathrm{sym}(\hat{A}_i(\theta)X_i(\theta) + \hat{B}_i(\theta)L_i(\theta)Q_i) & * & * \\ -\beta_i X_i(\theta) - \alpha_i(\hat{A}_i(\theta)X_i(\theta) + \hat{B}_i(\theta)L_i(\theta)Q_i) & (\beta_i^2 - 2\beta_i)X_i(\theta) & * \\ -\rho_i\alpha_i L_i(\theta)^T\hat{B}_i(\theta)^T + C_{Zi}(\theta)X_i(\theta) - U_iQ_i & -\rho_i\alpha_i L_i(\theta)^T\hat{B}_i(\theta)^T & -\rho_i\,\mathrm{sym}(U_i) \end{bmatrix} < 0,\ (i = 1, 2, \cdots, q) \tag{46}$$

**Corollary 4.** *For given* β*<sup>i</sup>* ∈ (0, 2)*,* α*<sup>i</sup> ,* ρ*<sup>i</sup> , and matrix Q<sup>i</sup> , if there exist matrices Xi*,*<sup>d</sup>* > 0*, Li*,*<sup>d</sup> and U<sup>i</sup> such that*

$$\Pi^i\_{j,d} + \Pi^i\_{d,j} < 0, \ (j \le d : j, d \in \{1, 2, 3, \dots, s\}, i \in \{1, 2, 3, \dots, q\})\tag{47}$$

*hold, then the closed-loop System (45) is robustly asymptotically stable, and the controller is given by*

$$
\Delta u_i(k) = \sum_{d=1}^s \theta_d L_{i,d}U_i^{-1}z_i(k) \tag{48}
$$

In (47),

$$\Pi_{j,d}^i = \begin{bmatrix} -\alpha_i^2 X_{i,d} - \alpha_i\,\mathrm{sym}(\hat{A}_{i,j}X_{i,d} + \hat{B}_{i,j}L_{i,d}Q_i) & * & * \\ -\beta_i X_{i,d} - \alpha_i(\hat{A}_{i,j}X_{i,d} + \hat{B}_{i,j}L_{i,d}Q_i) & (\beta_i^2 - 2\beta_i)X_{i,d} & * \\ -\rho_i\alpha_i L_{i,d}^T\hat{B}_{i,j}^T + C_{Zi,j}X_{i,d} - U_iQ_i & -\rho_i\alpha_i L_{i,d}^T\hat{B}_{i,j}^T & -\rho_i\,\mathrm{sym}(U_i) \end{bmatrix}$$

We decompose the gain matrix *Ki*,*<sup>j</sup>* as

$$K_{i,j} = \begin{bmatrix} K_{ej}^i & K_{yj}^i & K_{Rj}^i(0) & K_{Rj}^i(1) & \cdots & K_{Rj}^i(h_i) \end{bmatrix} \tag{49}$$

and then (37) is

$$\Delta u_i(k) = \sum_{j=1}^s \gamma_j\left[K_{ej}^i e_i(k) + K_{yj}^i\Delta y_i(k) + \sum_{d=0}^{h_i} K_{Rj}^i(d)\Delta r_i(k+d)\right]$$

The controller of System (1) can be taken as

$$u_i(k) = K_e^i\sum_{h=0}^{k-1} e_i(h) + K_y^i y_i(k) + \sum_{d=0}^{h_i} K_R^i(d) r_i(k+d) \tag{50}$$

where

$$K_e^i = \sum_{j=1}^s \gamma_j K_{ej}^i,\quad K_y^i = \sum_{j=1}^s \gamma_j K_{yj}^i,\quad K_R^i(d) = \sum_{j=1}^s \gamma_j K_{Rj}^i(d)$$

**Remark 3.** *In light of (34) and (50), it is clear that the preview controller of System (1) consists of three terms: the first term is the integral action on the tracking error, the second term represents the state feedback or output feedback, and the third term represents the feedforward or preview action based on the future information of $r_i(k)$.*

**Remark 4.** *Suppose the construction method of the AES proposed in [11,13,14,26] were used in this paper; in other words, suppose the future information of the whole reference signal $r(k)$ were added to the augmented state vector. The preview compensation term in the PC would then take the form*

$$\sum_{d=0}^h K_R(d)r(k+d) = \sum_{d=0}^h K_R(d)\begin{bmatrix} r_1(k+d)^T & r_2(k+d)^T & \cdots & r_q(k+d)^T \end{bmatrix}^T \tag{51}$$

*It follows from the theoretical analysis and numerical simulations that the future information of $r_1(k), r_2(k), \cdots, r_q(k)$ interacts across channels, which may lead to poor tracking performance.*

#### **5. Numerical Example**

In (1), let

$$A(\theta) = \begin{bmatrix} 1 & -0.6 & -0.8 & -1 \\ 0 & 0 & -0.1 & 0.5 \\ 0.2 & 0 & 0.9 & -0.3 \\ 0.1 & -0.3 & -0.3 & 0.1 \end{bmatrix} \theta\_1 + \begin{bmatrix} 0.9 & 1.2 & 0.4 & -0.3 \\ 0 & 1 & 0 & 0.2 \\ -0.6 & 0.3 & 1 & 0 \\ 0.3 & -0.5 & 0 & 1 \end{bmatrix} \theta\_2$$

$$B(\theta) = \begin{bmatrix} -0.5 & 0.1 \\ -0.2 & 0.1 \\ 0.5 & 0 \\ 0 & 0.5 \end{bmatrix} \theta\_1 + \begin{bmatrix} -0.3 & 0.2 \\ -0.1 & 0 \\ -0.6 & 0.2 \\ 0.2 & 0.5 \end{bmatrix} \theta\_2$$

$$\mathcal{C}(\theta) = \theta\_1 \begin{bmatrix} 0.2 & 1.2 & 0.3 & 0 \\ -0.1 & 1.5 & 0.2 & 0.4 \end{bmatrix} + \theta\_2 \begin{bmatrix} 0.3 & 0.8 & 0 & 0 \\ -0.7 & -2 & 0.5 & -0.3 \end{bmatrix}, D(\theta) = 0.$$

For $s = 2$, the scalars were taken as $\alpha_1 = 4$, $\beta_1 = 0.6$, $\alpha_2 = 0.8$, $\beta_2 = 0.5$, $\gamma_1 = 0.3$, and $\gamma_2 = 0.7$. In this example, three pairs of preview lengths were considered: $h_1 = 6$, $h_2 = 5$; $h_1 = 2$, $h_2 = 1$; and $h_1 = 0$, $h_2 = 0$. By solving the LMIs (25) using the MATLAB LMI Control Toolbox, the gains were obtained as follows.

When h<sup>1</sup> = 2, we had

$$K_1 = \begin{bmatrix} 0.31429 & 0.94275 & 2.01847 & -0.33257 & -0.82234 & -0.31382 & -0.31125 & -0.29639 \\ -0.36601 & 1.14616 & -1.09108 & -1.71517 & -3.31321 & 0.36385 & 0.34914 & 0.28267 \end{bmatrix}.$$

When h<sup>1</sup> = 6, we obtained

$$K_1 = \begin{bmatrix} 0.30753 & 0.95324 & 1.98948 & -0.34762 & -0.82702 & -0.30735 & -0.30500 & -0.29540 & -0.28696 & -0.26726 & -0.24118 & -0.20619 \\ -0.45000 & 1.08942 & -1.29829 & -1.74390 & -3.308732 & 0.4487631 & 0.43397 & 0.36963 & 0.27278 & 0.22070 & 0.18295 & 0.16303 \end{bmatrix}.$$

When h<sup>1</sup> = 0, we had

$$K\_1 = \begin{bmatrix} 0.29017 & 0.95022 & 1.99563 & -0.37000 & -0.83099 \\ -0.37170 & 1.20180 & -1.19541 & -1.80608 & -3.39788 \end{bmatrix}$$

When h<sup>2</sup> = 1, we had

$$K\_{2} = \begin{bmatrix} -0.11796 & 0.80069 & 1.46557 & -0.53132 & -0.62061 & 0.11759 & 0.11951 \\ -0.12550 & 0.91854 & -0.11027 & -1.29944 & -2.73462 & 0.11732 & 0.12856 \end{bmatrix}.$$

When h<sup>2</sup> = 5, we obtained

$$K_2 = \begin{bmatrix} -0.11978 & 0.79657 & 1.46443 & -0.53304 & -0.61610 & 0.11975 & 0.11903 & 0.11962 & 0.11649 & 0.11441 & 0.10747 \\ -0.12802 & 0.91029 & -0.11878 & -1.29739 & -2.73190 & 0.12520 & 0.13164 & 0.13769 & 0.12484 & 0.12282 & 0.11164 \end{bmatrix}.$$

When h<sup>2</sup> = 0, we had

$$K\_{2} = \begin{bmatrix} -0.11994 & 0.81063 & 1.46822 & -0.53765 & -0.62579 \\ -0.13216 & 0.94764 & -0.09604 & -1.31084 & -2.74898 \end{bmatrix}$$

The reference signal was selected as

$$r\_1(k) = \begin{cases} 0, & k \le 10, \\ 0.05(k - 10), & 10 < k < 50, \\ 2, & k \ge 50. \end{cases} \tag{52}$$

$$r_2(k) = \begin{cases} 0, & k \le 40, \\ 0.0375(k - 40), & 40 < k < 80, \\ 1.5, & k \ge 80. \end{cases} \tag{53}$$
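The ramp references (52) and (53) are easy to reproduce for simulation purposes; a small sketch:

```python
def r1(k: int) -> float:
    """Reference signal of Eq. (52): zero, ramp, then hold at 2."""
    if k <= 10:
        return 0.0
    if k < 50:
        return 0.05 * (k - 10)
    return 2.0

def r2(k: int) -> float:
    """Reference signal of Eq. (53): zero, ramp, then hold at 1.5."""
    if k <= 40:
        return 0.0
    if k < 80:
        return 0.0375 * (k - 40)
    return 1.5

print(r1(30), r2(60))   # mid-ramp values, approximately 1.0 and 0.75
```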

The outputs and the reference signals are depicted in Figure 1. Figure 2 plots the control input. As can be seen in Figures 1 and 2, the existence of the preview compensation accelerated the response speed, which reduced the tracking error.

To assess the robustness of the proposed PC, the simulations were run with different $\theta_1$ and $\theta_2$ satisfying A1. Here, the results for the two extreme cases $\theta_1 = 0$ and $\theta_1 = 1$ are given separately. Figures 3 and 4 depict the output and control input of System (1) with $\theta_1 = 0$ and $\theta_2 = 1$. Figure 5 plots the output of System (1) with $\theta_1 = 1$ and $\theta_2 = 0$, and the corresponding control input is shown in Figure 6. One can see from Figures 3–6 that the PC gave the closed-loop system a faster dynamic response than the controller without preview.

**Figure 1.** The output response and the reference signals.

**Figure 2.** The control input.

**Figure 3.** The output response of System (1) with $\theta_1 = 0$.

**Figure 4.** The control input of System (1) with $\theta_1 = 0$.

**Figure 5.** The output response of System (1) with $\theta_1 = 1$.

**Figure 6.** The control input of System (1) with $\theta_1 = 1$.

For comparison, the construction methods of the AES in [11,13,15,26] were employed; equivalently, the future information of $r(k)$ was added to the augmented state vector to derive the AES, and simulations were performed for the same example. From Figures 1 and 7, it can be seen that the future information of the signal components $r_1(k)$ and $r_2(k)$ interacted with each other, which led to poor tracking performance of System (1). In addition, from Figures 1, 2, 7 and 8, we can easily see that our proposed PC provided better tracking performance than those in [11,13,15,26].

**Figure 7.** The output response.

**Figure 8.** The control input.

*Output Feedback Case*

In System (1), we let


$$A(\theta) = \begin{bmatrix} 1 & -0.4 & 0.8 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \theta\_1 + \begin{bmatrix} 1.2 & -0.6 & 0.7 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \theta\_2.$$

$$B(\theta) = \begin{bmatrix} 1 & 1 \\ 0.3 & 0.3 \\ 0 & 0 \end{bmatrix} \theta\_1 + \begin{bmatrix} 1 & 1 \\ 1.1 & 1 \\ 0 & 0 \end{bmatrix} \theta\_2. \text{ C}(\theta) = \theta\_1 \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} + \theta\_2 \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}.$$

$$D(\theta) = 0$$

Letting $\alpha_1 = \alpha_2 = 4$, $\beta_1 = \beta_2 = 0.6$, $\rho_1 = \rho_2 = 1$, $\gamma_1 = 0.3$, and $\gamma_2 = 0.7$, we took the matrices $Q_1 = 6(C_{Z1,1} + C_{Z2,1})$ and $Q_2 = 6(C_{Z1,2} + C_{Z2,2})$. According to Theorem 4, the static output feedback gain matrices were obtained as follows.

When h<sup>1</sup> = h<sup>2</sup> = 2, we obtained

$$\begin{array}{ccccc} \ K\_1 = \begin{bmatrix} 1.26358 & -0.40042 & -1.26358 & -1.26358 & -1.26358 \\ -1.32911 & -0.14526 & 1.32911 & 1.32911 & 1.32911 \end{bmatrix} \\\\ K\_2 = \begin{bmatrix} 1.26358 & -0.40042 & -1.26358 & -1.26358 & -1.26358 \\ -1.32911 & -0.14526 & 1.32911 & 1.32911 & 1.32911 \end{bmatrix} \end{array}$$

When h<sup>1</sup> = h<sup>2</sup> = 6, *K*<sup>1</sup> and *K*<sup>2</sup> are given, respectively, by

$$K_1 = \begin{bmatrix} 1.31567 & -0.30661 & -1.31567 & -1.31567 & -1.31567 & -1.31567 & -1.31567 & -1.31567 & -1.31567 \\ -1.38506 & -0.24404 & 1.38506 & 1.38506 & 1.38506 & 1.38506 & 1.38506 & 1.38506 & 1.38506 \end{bmatrix},$$

$$K_2 = \begin{bmatrix} 1.31567 & -0.30661 & -1.31567 & -1.31567 & -1.31567 & -1.31567 & -1.31567 & -1.31567 & -1.31567 \\ -1.38506 & -0.24404 & 1.38506 & 1.38506 & 1.38506 & 1.38506 & 1.38506 & 1.38506 & 1.38506 \end{bmatrix}$$

When h<sup>1</sup> = h<sup>2</sup> = 0, we obtained

$$\begin{array}{l} \text{K}\_1 = \begin{bmatrix} 1.18156 & -0.45238 \\ -1.24364 & -0.09158 \end{bmatrix} \\\\ \text{K}\_2 = \begin{bmatrix} 1.18156 & -0.45238 \\ -1.24364 & -0.09158 \end{bmatrix} \end{array}$$

Figure 9 depicts the outputs together with the reference Signals (52) and (53). Figure 10 shows the control input for different preview lengths. From Figures 9 and 10, we find that the output response reached a steady state faster when using the output controller with preview compensation.

**Figure 9.** The output of System (1) with different *MR*.

**Figure 10.** The control input of System (1) with different *MR*.

For the static output feedback case, two extreme cases, namely, θ<sup>1</sup> = 1 and θ<sup>1</sup> = 0, have also been considered. Figures 11 and 12, respectively, show the output response and control input of System (1) with the static output controller under θ<sup>1</sup> = 0. When θ<sup>1</sup> = 1, Figures 13 and 14 show the response and the control input curves, respectively. It is evident from Figures 11–14 that the tracking effect was still remarkable under the reference input preview compensation.


**Figure 11.** θ<sup>1</sup> = 0, θ<sup>2</sup> = 1, output response of System (1) with different *MR*.

**Figure 12.** θ<sup>1</sup> = 0, θ<sup>2</sup> = 1, control input of System (1) with different *MR*.

**Figure 13.** θ<sup>1</sup> = 1, θ<sup>2</sup> = 0, output response of System (1) with different *MR*.


**Figure 14.** θ<sup>1</sup> = 1, θ<sup>2</sup> = 0, control input of System (1) with different *MR*.

Similarly, for the output feedback case, simulations were carried out using the design methods in [11,13,15,26]. These simulation results showed that the proposed output feedback PC offers clear advantages. Simulations and analyses were performed separately for different values of the parameters θ<sup>1</sup> and θ<sup>2</sup>. For reasons of length, the figures for these results are not provided here.

#### **6. Conclusions**

The PC problem for MIMO discrete-time systems with polytopic uncertainties was discussed in this paper. We derived the AES, including previewed information on *ri*(*k*), by using the classical difference method. Parameter-dependent state feedback and output feedback controllers were proposed, and the conditions of the design methods of PCs were given by using parameter-dependent quadratic Lyapunov functions and the LMI approach. The robust controllers with preview actions were presented in terms of LMIs.

**Author Contributions:** Supervision, F.L.; Writing—review and editing, L.L. and F.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China [61903130], the Hubei Provincial Natural Science Foundation of China [2019CFB227], and the Hubei Provincial Department of Education [18G046 and B2018129].

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **A Model for Determining Weight Coefficients by Forming a Non-Decreasing Series at Criteria Significance Levels (NDSL)**

**Mališa Žižović <sup>1</sup>, Dragan Pamučar <sup>2,\*</sup>, Goran Ćirović <sup>3</sup>, Miodrag M. Žižović <sup>4</sup> and Boža D. Miljković <sup>5</sup>**


Received: 8 April 2020; Accepted: 6 May 2020; Published: 8 May 2020

**Abstract:** In this paper, a new method for determining weight coefficients by forming a non-decreasing series at criteria significance levels (the NDSL method) is presented. The NDSL method includes the identification of the best criterion (i.e., the most significant and most influential criterion) and the ranking of criteria in a decreasing series from the most significant to the least significant criterion. Criteria are then grouped as per the levels of significance within the framework of which experts express their preferences in compliance with the significance of such criteria. By employing this procedure, fully consistent results are obtained. In this paper, the advantages of the NDSL model are singled out through a comparison with the Best Worst Method (BWM) and Analytic Hierarchy Process (AHP) models. The advantages include the following: (1) the NDSL model requires a significantly smaller number of pairwise comparisons of criteria, involving only *n* − 1 comparisons, whereas the AHP requires *n*(*n* − 1)/2 comparisons and the BWM 2*n* − 3 comparisons; (2) it enables us to obtain reliable (consistent) results, even in the case of a larger number of criteria (more than nine criteria); (3) the NDSL model applies an original algorithm for grouping criteria according to the levels of significance, through which the deficiencies of the 9-degree scale applied in the BWM and AHP models are eliminated. By doing so, the small range and inconsistency of the 9-degree scale are eliminated; (4) while the BWM includes the defining of one unique best/worst criterion, the NDSL model eliminates this limitation and gives decision-makers the freedom to express the relationships between criteria in accordance with their preferences. In order to demonstrate the performance of the developed model, it was tested on a real-world problem and the results were validated through a comparison with the BWM and AHP models.

**Keywords:** NDSL model; AHP; criteria weights; pairwise comparisons

#### **1. Introduction**

The determination of the relative weights of criteria in multi-criteria decision-making models represents a specific problem that is inevitably accompanied by subjectivities. This procedure is very significant, since it exerts a great influence on the final decision in the decision-making process [1]. Multi-criteria optimization methods use normalized values of weights, which meet the condition

that $\sum_{i=1}^{n} w_i = 1$, *w<sup>i</sup>* ≥ 0. In many models for perceiving the relative ratios of weights, however, non-normalized values are used in the form of whole numbers or amounts in percentages [2]. The percentage value of the weight of one criterion denotes a part of the overall preference attributed to that criterion.

The determination of the values of criteria weights is a special problem in multi-criteria optimization, so numerous models have been developed to solve it. Multi-criteria optimization models are well-known for their sensitivity to change in the vector of weight coefficients, so minor modifications in the values of the mentioned vector can cause a major change in the order of the significance of alternatives in the model. Therefore, special attention has been devoted to studying these models in the literature dealing with multi-criteria optimization [3–6].

Studying the available literature allows us to notice that there is no unique classification of methods used for the determination of criteria weights; their classification was, for the most part, performed in compliance with the authors' understanding of and needs regarding the solving of a concrete practical problem. Therefore, in [7], a classification of criteria weight determination methods is given that groups them into objective and subjective approaches. Objective models imply the calculation of criteria weight coefficients based on the value(s) of the criterion/criteria in the initial decision-making matrix. The most well-known objective models include the Entropy method [8], the CRiteria Importance Through Intercriteria Correlation (CRITIC) method [9], and the FANMA method (named after the authors Fan and Ma) [10].

On the other hand, subjective models include the application of a methodology implying the direct participation of decision-makers, who express their preferences according to the significance of criteria. There are several ways in which weights of criteria are obtained through the subjective approach, which may differ from each other in terms of the number of participants in the process of the determination of weights, the methods applied, and the manner in which the final criteria weights are formed. The group of subjective models used to aggregate partial values in multi-attribute analysis methods includes the trade-off method [11], which enables identification of the decision-maker's dilemmas through pairwise comparisons; the swing weight method [12], which involves the construction of two extreme hypothetical scenarios; the worst (*W*) and the best (*B*) method, in which the first scenario (*W*) is constructed based on the worst values of all criteria, and the second scenario (*B*) corresponds to the best values; the Simple Multi-Attribute Rating Technique (SMART) method [13], which includes a procedure for the determination of criteria weights based on comparing criteria with the best and the criteria from within the defined set of criteria; and SMART Exploiting Ranks (SMARTER), which was developed by [13] and which represents a new version of the SMART method. SMARTER uses the centroid method for ranking the criteria for the determination of weight coefficients.

Apart from the above-mentioned subjective approaches, there are also approaches exclusively based on criteria pairwise comparisons, and such approaches are referred to as pairwise comparison methods. The pairwise comparison method was first introduced by Thurstone [14], and it represents a structured way of producing a decision matrix. Pairwise comparisons (performed by an expert or a team of experts) are used to demonstrate the relative significance of *m* actions in situations in which it is impossible or senseless to assign marks to actions in relation to criteria. In pairwise comparison methods, the decision-maker compares the observed criterion/action with other criteria/actions, and determines the level of significance of the observed criterion/action. An ordinal scale is used to help determine the magnitude of the preference for one criterion over another. One of the most frequently used methods based on pairwise comparisons is the method of the Analytic Hierarchy Process (AHP) [15]. Apart from the AHP, the pairwise comparison methods include the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method [16]; the Best Worst Method (BWM) [17]; the resistance to change method [18], which has elements of the swing method and pairwise comparison methods; and the Step-Wise Weight Assessment Ratio Analysis (SWARA) method [19]. In pairwise comparison methods, for example, in the AHP, weights are determined based on pairwise comparisons of criteria, and the results are generated from pairwise comparisons of alternatives with criteria. After that, by

means of the usefulness function, the final values of alternatives are calculated. A very significant challenge in pairwise comparison methods arises from a lack of consistency of comparison matrices, which is frequently the case in practice [20]. Each of these methods has a wide application in various areas of science and technology, as well as in solving real-life problems. The AHP method is used in [21] to make a strategic decision in a transport system. In [22], this method is employed to determine the significance of criteria in evaluating different transitivity alternatives in transport in Catania. In [23], the AHP method is used to identify and evaluate defects in the passenger transport system, whereas in [24], it is applied to select an alternative to the electronic payment system. Stević et al. [25] carried out site selection of a logistics center by applying the AHP method. In [26], the DEMATEL method is employed to analyze the risk in mutual relations in logistics outsourcing. Additionally, in [27], the authors proposed a two-phase model which aims to evaluate and select suppliers using integrated Fuzzy AHP and Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS) methods. Integration of the DEMATEL method is not rare, so in [27], along with the Analytic Network Process (ANP) and Data Envelopment Analysis (DEA), a decision is made on the choice of the 3PL logistics provider. The SWARA method is used in [28] to select the 3PL in the sustainable network of reverse logistics, and in a rough form in [29] for the purpose of determining the significance of criteria for the procurement of railroad wagons. Moreover, the application of the SWARA method can be seen in [30–46]. BWM is a method that has increasingly been applied in a short period of time [47–70]. Some authors [55–57,61,67,71–73] see this method as an adequate substitute for the AHP.
Its major advantage is the smaller number of pairwise comparisons (2*n* − 3) involved compared to the AHP.

Weight coefficients represent a means of calibrating decision-making models, and the quality of a decision made directly depends on the quality of their definition. The reason for studying this problem lies in the fact that each subjective method used for the determination of criteria weights has both advantages and disadvantages. In this research study, subjective methods based on pairwise comparisons of criteria, more precisely, the BWM and AHP models, as the most prominent representatives of this group of methods, are analyzed, together with their advantages and disadvantages. Based on the identification of the weaknesses of these models, a new approach to the determination of weight coefficients that involves forming a non-decreasing series at criteria significance levels (the NDSL model) is proposed. The NDSL model includes the application of an original algorithm to the grouping of criteria according to significance levels, through which the need to predefine the ordinal scale for the pairwise comparison of criteria is eliminated. Criteria are grouped according to significance levels in relation to the most significant criterion. After their grouping according to significance levels, the numerical values of the significance of the criteria are determined in accordance with the decision-maker's preferences. By employing this procedure, results which are fully consistent and also represent the real relationships defined by experts' preferences are obtained. The proposed model eliminates the deviations from experts' preferences that appear in the AHP model, since the NDSL's results are always consistent. We highlight this since an increase in the consistency ratio in the AHP leads to the distortion of experts' preferences, and the values of weight coefficients deviate from the optimal values.
This is what frequently appears in the mentioned models and most often, it is a consequence of using the 9-degree scale characterized by limited possibilities of expressing experts' preferences [74].

This paper has several goals. The first goal of the paper is to present a new model for the determination of criteria weight coefficients which enables a rational expression of the decision-maker's preferences with a minimal number of comparisons (*n* − 1). The second goal of the paper is to develop a model for the determination of criteria weight coefficients which always generates consistent results. The third goal of the paper is to eliminate the 9-degree scale for the expression of experts' preferences in pairwise comparison models through defining an original algorithm for comparing criteria according to the levels of significance. By forming significance levels, the shortcomings of the 9-degree scale, which include (1) its limited flexibility while expressing experts' preferences and (2) inconsistencies during criteria pairwise comparisons, are eliminated [74].
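The comparison counts quoted above can be checked with a short sketch (Python; the function name `comparison_counts` is ours, while the formulas are taken directly from the text):

```python
# Pairwise-comparison counts for n criteria, as stated in the text:
# NDSL needs n - 1, AHP needs n(n - 1)/2, and BWM needs 2n - 3.
def comparison_counts(n: int) -> dict:
    return {
        "NDSL": n - 1,
        "AHP": n * (n - 1) // 2,
        "BWM": 2 * n - 3,
    }

for n in (5, 9, 15):
    print(n, comparison_counts(n))
```

For nine criteria, for instance, the AHP already requires several times more comparisons than the NDSL model, which illustrates the scale of the saving.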

The rest of the paper is organized in the following manner: in the next section (Section 2), the mathematical bases of the NDSL model are presented, and the algorithm demonstrating the performance of the seven steps for defining criteria weight coefficients is presented; in Section 3, the NDSL model is tested on a real-world problem, and a comparison of the results with those of the BWM and AHP models is made; conclusive considerations and directions for future research studies are given in Section 4.

#### **2. Model for Determining Weight Coe**ffi**cients by Forming a Non-Decreasing Series at Criteria Significance Levels**

Allow us to assume that, in a multi-criteria model, there is a set *S* containing *n* evaluation criteria, *S* = {*C*1, *C*2, . . . , *Cn*}, and that the weight coefficients of the criteria have not been predefined, i.e., that the weight coefficients need to be determined. Allow us also to assume that, in that multi-criteria problem, the criteria *C*1, *C*2, . . . , *C<sup>n</sup>* are ordered according to their significance (strength). Therefore, the weight coefficients of the criteria satisfy the relationships *w*<sup>1</sup> ≥ *w*<sup>2</sup> ≥ . . . ≥ *w<sup>n</sup>* ≥ 0, with the condition that the criteria weights are normalized, i.e., that $\sum_{i=1}^{n} w_i = 1$.

**Theorem 1.** *For a randomly chosen real (natural) number N such that N* > *n (where n represents the number of criteria in the multi-criteria model), if the criteria C*<sup>1</sup> *and Cx, x* ∈ {1, 2, . . . , *n*}*, are assigned the sum* 2*N, then it is possible to determine the number* α*<sup>x</sup> such that it fulfils the ratio between the criteria:*

$$C\_1 : C\_x = (N + \alpha\_x) : (N - \alpha\_x). \tag{1}$$

**Proof.** The proof of this ratio is obvious, since, for *x* = 1, we evidently obtain *C*<sup>1</sup> : *C*<sup>1</sup> = (*N* + α1) : (*N* − α1), which gives α<sup>1</sup> = 0, i.e., *C*<sup>1</sup> : *C*<sup>1</sup> = *N* : *N* = 1.

If we assume that the ratio *C*<sup>1</sup> : *C<sup>x</sup>* = *t<sup>x</sup>*, then we also obtain $\frac{N + \alpha_x}{N - \alpha_x} = t_x \geq 1$, from which it follows that *N* + α*<sup>x</sup>* = *Nt<sup>x</sup>* − α*<sup>x</sup>t<sup>x</sup>*, i.e.,

$$\alpha\_x = N \cdot \frac{t\_x - 1}{t\_x + 1}, \tag{2}$$

where α*<sup>x</sup>* represents a non-negative number for the given *x* ∈ {1, 2, . . . , *n*}.
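As a minimal numerical illustration of the relation (2) (Python; the helper name `alpha_from_ratio` and the sample ratios are ours):

```python
# Equation (2): recover the criterion significance alpha_x from the
# preference ratio t_x = C1 : Cx (t_x >= 1) for a chosen constant N.
def alpha_from_ratio(t_x: float, N: float = 50.0) -> float:
    if t_x < 1:
        raise ValueError("t_x must be >= 1, since C1 is the most significant criterion")
    return N * (t_x - 1) / (t_x + 1)

print(alpha_from_ratio(1.0))  # comparing C1 with itself gives alpha = 0
print(alpha_from_ratio(3.0))  # a 3:1 preference over some criterion Cx, with N = 50
```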

**Corollary 1.** If the criterion *C<sup>y</sup>* has a greater or equal significance (weight) than the criterion *Cx*, then the condition *t<sup>x</sup>* ≥ *t<sup>y</sup>* is met, from which it follows that α*<sup>x</sup>* ≥ α*y*.

**Proof.** The proof for Corollary 1 is evident, since it arises from Theorem 1:

If *C<sup>y</sup>* ≥ *Cx*, then we have *C*<sup>1</sup> : *C<sup>x</sup>* ≥ *C*<sup>1</sup> : *Cy*. Since *C*<sup>1</sup> : *C<sup>x</sup>* = *t<sup>x</sup>* and *C*<sup>1</sup> : *C<sup>y</sup>* = *ty*, we have *t<sup>x</sup>* ≥ *t<sup>y</sup>* and, from (2), α*<sup>x</sup>* ≥ α*y*.

It follows from Corollary 1 that a non-decreasing series of numbers can be attributed to the series of the criteria ordered according to significance, i.e.,

$$
\alpha\_1, \alpha\_2, \alpha\_3, \dots, \alpha\_n. \tag{3}
$$

Based on Theorem 1, it is possible to conclude that α<sup>1</sup> = 0. Since we have

$$C\_1 : C\_1 = (N + \alpha\_1) : (N - \alpha\_1) \Rightarrow C\_1(N - \alpha\_1) = C\_1(N + \alpha\_1),$$

then *N* − α<sup>1</sup> = *N* + α<sup>1</sup> and we have α<sup>1</sup> = 0.

*Mathematics* **2020**, *8*, 745

Additionally, based on Theorem 1, a new series (4) can be formed from the already formed series of elements (3), i.e.,

$$\frac{N+\alpha\_1}{N-\alpha\_1}, \frac{N+\alpha\_2}{N-\alpha\_2}, \dots, \frac{N+\alpha\_n}{N-\alpha\_n}.\tag{4}$$

The non-decreasing series of elements that is presented by the expression (4) represents a series of ratios of the significance (strength) of the criterion *C*<sup>1</sup> against the other criteria from within the *S* set of criteria. Based on the condition (1), the series of elements (4) can be represented as a non-decreasing series of weight coefficients of the criteria of the multi-criteria model, which is such that

$$w\_1 = w\_1, \quad w\_2 = \frac{N - \alpha\_2}{N + \alpha\_2} \cdot w\_1, \quad w\_3 = \frac{N - \alpha\_3}{N + \alpha\_3} \cdot w\_1, \quad \dots, \quad w\_n = \frac{N - \alpha\_n}{N + \alpha\_n} \cdot w\_1. \tag{5}$$

Based on the expression (5) and the condition that the sum of all weight coefficients of the criteria of the multi-criteria model is equal to one, i.e., $\sum_{j=1}^{n} w_j = 1$, the following is obtained:

$$w\_1 \cdot \left(1 + \sum\_{j=2}^{n} \frac{N - \alpha\_j}{N + \alpha\_j} \right) = 1. \tag{6}$$

Therefore, it follows from this that the weight coefficient of the most influential (best) criterion is obtained as

$$w\_1 = \frac{1}{1 + \sum\_{j=2}^{n} \frac{N - \alpha\_j}{N + \alpha\_j}}.\tag{7}$$

It follows from the condition (1) that *w*<sup>1</sup> : *w<sup>i</sup>* = (*N* + α*i*) : (*N* − α*i*), i.e., *w<sup>i</sup>* = *w*<sup>1</sup> · (*N* − α*i*)/(*N* + α*i*), from which the weight coefficients of the remaining criteria are obtained:

$$w\_i = \frac{\frac{N - \alpha\_i}{N + \alpha\_i}}{1 + \sum\_{j=2}^{n} \frac{N - \alpha\_j}{N + \alpha\_j}}, \quad i = 2, 3, \dots, n. \tag{8}$$
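A minimal sketch of the computation in (7) and (8) (Python; the helper name `ndsl_weights` and the sample significance values are illustrative):

```python
# Weight coefficients from the non-decreasing significance series
# alpha_1 = 0 <= alpha_2 <= ... <= alpha_n, following Equations (7) and (8).
def ndsl_weights(alphas, N=50.0):
    ratios = [(N - a) / (N + a) for a in alphas]  # (N - alpha_i)/(N + alpha_i); equals 1 for alpha_1 = 0
    w1 = 1.0 / (1.0 + sum(ratios[1:]))            # Equation (7)
    return [w1 * r for r in ratios]               # Equation (8): w_i = w_1 * (N - alpha_i)/(N + alpha_i)

weights = ndsl_weights([0, 10, 20, 30])
print([round(w, 4) for w in weights])  # a non-increasing series summing to 1
```

By construction, the resulting weights are normalized and preserve the ordering of the criteria, so no consistency check is needed afterwards.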

#### *2.1. Forming a Non-Decreasing Series at Criteria Significance Levels*

The basic idea of forming a criteria classification level precisely reflects the need to determine the significance of criteria and eliminate the limitations of using predefined scales for expressing experts' preferences. The basic limitation of using scales for expressing experts' preferences in subjective models, such as the AHP, BWM, and DEMATEL, relates to the small range of values of such scales, as well as the nonlinearity of the scale (in the AHP). The insufficient range of values makes the development of an objective expression of experts' preferences more difficult, which is particularly pronounced when comparing a larger number of criteria. Thus, for example, the range of values for the scale employed in the AHP and BWM is from 1 to 9. Should there be a larger number of criteria (for example, seven) in the considered problem, experts' comparisons are made more difficult due to the small number of values in the scales. The 9-degree scale also implies that the greatest ratio between the weights of the best (*CB*) and worst (*CW*) criteria is limited to 9, i.e., *CB*:*C<sup>W</sup>* = 9:1. If, however, an expert considers the ratio *CB*:*C<sup>W</sup>* to be greater than 9:1, for example, *CB*:*C<sup>W</sup>* = 15:1, then such a preference cannot be presented. In order for experts to express preferences of this kind by applying the 9-degree scale, they are forced to distort their preferences, which leads to the deviation of weight values from the optimal values.

By introducing the level of criteria significance, experts are given a possibility to form as many criteria significance levels *L<sup>j</sup>* , *j* ∈ {1, 2, . . . , *k*} as they need for expressing their preferences. Within the framework of significance levels, criteria are roughly classified according to experts' preferences. Forming significance levels, i.e., grouping criteria according to levels, is performed by adhering to the following rules:


After grouping criteria as per the levels *L<sup>j</sup>* , *j* ∈ {1, 2, . . . , *k*}, experts express their preferences through a numerical comparison of the criteria by means of the significance of the criteria (α*<sup>i</sup>* ). Therefore, based on the value α*<sup>i</sup>* , a fine classification of the criteria is conducted within the observed level. The values of the significance of the criteria (α*<sup>i</sup>* ) within every level *L<sup>j</sup>* , *j* ∈ {1, 2, . . . , *k*}, are defined based on experts' preferences; the final values α*<sup>i</sup>* within every level *L<sup>j</sup>* need to be defined. In the following part, the boundary values of the significance of the criteria (α*<sup>i</sup>* ) within the level *L<sup>j</sup>* , *j* ∈ {1, 2, . . . , *k*} are defined.

If the significance of the criterion *C<sup>i</sup>* is expressed as α*<sup>i</sup>*, where *i* ∈ {1, 2, . . . , *n*}, then a subset of the criteria is formed for each significance level, and these subsets together make up the criteria set *S*. Then, it follows that *S* = *L*<sup>1</sup> ∪ *L*<sup>2</sup> ∪ · · · ∪ *L<sup>k</sup>*, and for every level *j* ∈ {1, 2, . . . , *k*},

$$L\_j = \left\{ C\_{j\_1}, C\_{j\_2}, \dots, C\_{j\_l} \right\} = \left\{ C\_i \in S : j \leq t\_i < j + 1 \right\}, \tag{9}$$

where *t<sup>i</sup>* = *C*<sup>1</sup> : *C<sup>i</sup>* denotes the significance ratio defined in Theorem 1.
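Assuming the levels correspond to unit intervals of the significance ratio (Level *L*1 for a ratio in [1, 2), Level *L*2 for [2, 3), and so on, as in Step 4 below), the rough grouping can be sketched as follows (Python; the criterion names and sample ratios are illustrative):

```python
# Rough grouping of criteria into significance levels: criterion C_i falls
# into level L_j when its preference ratio t_i = C1 : Ci lies in [j, j + 1).
import math
from collections import defaultdict

def group_by_level(criteria, ratios):
    levels = defaultdict(list)
    for name, t in zip(criteria, ratios):
        levels[math.floor(t)].append(name)
    return dict(levels)

print(group_by_level(["C1", "C2", "C3", "C4", "C5"], [1.0, 1.5, 2.2, 2.8, 3.4]))
```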

Based on the previously defined relations, it is possible to define the boundaries within which the values of the significance of the criteria (α*<sup>i</sup>*) lie for each observed level *L<sup>j</sup>*, *j* ∈ {1, 2, . . . , *k*}. If the fact that the criterion *C<sup>i</sup>* belongs to the level *L<sup>j</sup>*, *j* ∈ {1, 2, . . . , *k*}, is presented as *C<sup>i</sup>* ∈ [*t<sup>j</sup>*, *t<sup>j</sup>* + 1), *i* ∈ {1, 2, . . . , *n*}, then, based on the relation (2), we can obtain the following:


**Example 1.** *If we assume that the criteria are grouped at three levels L<sup>j</sup> , j* ∈ {1, 2, 3}, *and if we take that N* = 50, *then we can define the interval in which the values of the significance of the criterion C<sup>i</sup> within the level L<sup>j</sup> should range. By applying the previously defined relationships, we obtain the result that the values* α*<sup>i</sup> range within the level L<sup>j</sup> , j* ∈ {1, 2, 3}*, in the following intervals:*



From the relations presented for the determination of the boundary values of the significance of the criteria (α*<sup>i</sup>*), i.e., from experts' preferences within the level *L<sup>j</sup>*, *j* ∈ {1, 2, . . . , *k*}, we may perceive that the breadth of the interval *N* · (*k* − 1)/(*k* + 1) ≤ α*<sup>x</sup>* < *N* · *k*/(*k* + 2) depends on the value of the real (natural) number *N*. A broader interval and, simultaneously, a more comfortable scale with fewer decimal values for expressing experts' preferences are obtained for greater values of the number *N*, such as *N* ≥ *n*<sup>2</sup>; vice versa, a scale with a larger number of decimal values for expressing experts' preferences is obtained for smaller values of the number *N*, such as *n* < *N* < *n*<sup>2</sup>.
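The dependence of the interval breadth on *N* can be illustrated numerically (Python; `alpha_bounds` is our helper name, implementing the boundary formula quoted above):

```python
# Boundary formula from the text: within level L_k the significances satisfy
# N*(k - 1)/(k + 1) <= alpha < N*k/(k + 2); the interval widens as N grows.
def alpha_bounds(k: int, N: float):
    return N * (k - 1) / (k + 1), N * k / (k + 2)

for N in (10.0, 50.0):
    lo, hi = alpha_bounds(2, N)
    print(N, round(lo, 4), round(hi, 4), round(hi - lo, 4))
```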

Based on Theorem 1, while performing a comparison of any criterion *C<sup>i</sup>* with the criterion *C*<sup>1</sup> (where *C*<sup>1</sup> is the most influential criterion), the NDSL model ensures that the number 2*N* is distributed between the two criteria. Simultaneously, a part greater than or equal to *N* belongs to the criterion *C*1, as the most significant criterion, whereas a smaller or equal part belongs to the criterion *C<sup>i</sup>*. If the problem of defining the number *N* is observed from an economic standpoint, and if we take *N* = 50, then this problem can be observed in ordinary economic terms, i.e., in percentages (*p*%). If *p* ≥ 50, then *p*% belongs to the criterion *C*1, while (100 − *p*)% belongs to the criterion *C<sup>i</sup>*. Since expressing preferences in percentages is a natural thing to do during a pairwise comparison, the authors propose that *N* = 50 should be taken as the value of the number *N* for solving real problems.

#### *2.2. Steps of the NDSL Model*

Based on the previously demonstrated mathematical bases of the NDSL model, the steps that should be taken in order to obtain the weight coefficients of criteria are systematized in this section. In Phase One, a set of evaluation criteria is formed, and the criteria are further ranked in accordance with experts' preferences. In Phase Two, the levels of significance of the criteria are formed and the criteria significance level is determined within each level. Finally, the weight coefficients of the criteria are calculated in Phase Three. Figure 1 schematically presents the phases through which the NDSL model is implemented.

The NDSL model includes the calculation of the weight coefficients of criteria through the seven steps presented in the next part of the paper.

Step 1: Determining the most significant criterion from within the set of criteria *S* = {*C*1, *C*2, . . . , *Cn*}. Allow us to assume that the decision-maker has chosen the criterion *C*<sup>1</sup> as the most significant, and allow us to assume that *C*<sup>1</sup> is a criterion from within the set *S* = {*C*1, *C*2, . . . , *Cn*}, which is the most significant in the decision-making process.

Step 2: Ranking the criteria from within the defined set of evaluation criteria *S* = {*C*1, *C*2, . . . , *Cn*}. Ranking is performed according to the significance of the criteria, i.e., from the most significant criterion to the criterion of the least significance. In that manner, we obtain the criteria ranked according to the expected values of weight coefficients:

$$
C\_1 > C\_2 > \dots > C\_n, \tag{10}
$$

where *n* represents the total number of the criteria. If it is estimated that there are two or several criteria with the same significance, instead of the sign ">", the sign "=" is placed in-between those criteria in the expression (10).

**Figure 1.** The non-decreasing series at criteria significance levels (NDSL) model.

Step 3: Grouping the criteria according to the significance levels. Allow us to assume that experts have grouped the criteria as per levels in accordance with their preferences, depending on the significance of the criteria. Grouping criteria as per levels is performed according to the rules defined in the previous section of the paper, namely:


By grouping criteria as per levels, rough expert preferences for the criteria from within the set *S* = {*C*1, *C*2, . . . , *Cn*} are expressed. The precise definition of experts' preferences is expressed via the significance of the criteria (α*<sup>i</sup>* ). The boundary values of α*<sup>i</sup>* as per levels are presented in the next step.

Step 4: Defining the boundary values of the significance of criteria (α*<sup>i</sup>* ) as per levels. When defining the boundary values of the significance of criteria, the following relations should be adhered to:

• Level *L*1: For *C<sup>i</sup>* ∈ [1, 2), the values of the significance of criteria (α*<sup>i</sup>* ) range in the interval 0 ≤ α*<sup>i</sup>* < *N*/3, i.e., *C<sup>i</sup>* ∈ [1, 2) ⇒ 0 ≤ α*<sup>i</sup>* < *N*/3;


Step 5: Presenting experts' preferences as per levels. Based on the defined boundary values α*<sup>i</sup>* , experts express their preferences in accordance with the significance of the criteria. Every criterion *C<sup>i</sup>* ∈ *S* within the level *L<sup>j</sup>* , *j* ∈ {1, 2, . . . , *k*} is assigned the value α*<sup>i</sup>* . Therefore, since it is the most significant criterion, the criterion *C*<sup>1</sup> is assigned the value α<sup>1</sup> = 0. The rest of the criteria are assigned appropriate values α*<sup>i</sup>* in compliance with the significances of the criteria. If the criterion *C<sup>i</sup>* has a greater significance than the criterion *Ci*+1, then it is considered that α*<sup>i</sup>* < α*i*+1, or if the criterion *C<sup>i</sup>* has a significance equal to that of the criterion *Ci*+1, then it is considered that α*<sup>i</sup>* = α*i*+1.

Step 6: Defining the criteria significance functions *f*(*Ci*). The criteria significance function *f* : *S* → *R* is defined in this manner: for each criterion *C<sup>i</sup>* ∈ *S*, the criteria significance function is obtained by applying the following expression:

$$f(C\_i) = \frac{N - \alpha\_i}{N + \alpha\_i}, \tag{11}$$

where *i* ∈ {1, 2, . . . , *n*}, α*<sup>i</sup>* represents the significance of the criterion assigned to the criterion *C<sup>i</sup>* within the observed level, whereas *N* represents a real (natural) number.

Step 7: Calculating the optimal values of criteria weight coefficients. If the most influential criterion is marked as *C*1, then, by applying the expression (12), it is possible for us to calculate the weight coefficient of the criterion *C*1, i.e.,

$$w\_1 = \frac{1}{1 + \sum\_{j=2}^{n} f(C\_j)}, \tag{12}$$

where *f*(*Cj*) represents the criteria significance function.

The weight coefficients of the remaining criteria from within the set *S* are obtained by applying the following expression (13):

$$w\_i = \frac{f(\mathbb{C}\_i)}{1 + \sum\_{j=2}^n f(\mathbb{C}\_j)},\tag{13}$$

where *f*(*Ci*) represents the function of the significance of criteria whose weight coefficient is being calculated, whereas *f*(*Cj*) represents the functions of the significance of all criteria (without the function of the significance of the most significant criterion).

The application of all multi-criteria models is aimed at selecting the alternative with the best final value of the criteria function. The total value of the criteria function *f<sup>l</sup>* (*l* = 1, 2, . . . , *m*) for alternative *l* can be obtained through the transformation of the NDSL model into a classical multi-criteria model by applying the expression (14). The simple additive weighted value function (14) is the basic aggregation model for the majority of MCDM methods; by applying it, the algorithm of the NDSL model transforms into a classical multi-criteria model, which can be used to evaluate *m* alternative solutions as per *n* optimization criteria.

$$f\_l = \sum\_{i=1}^{n} w\_i x\_{li},\tag{14}$$

where *w<sup>i</sup>* represents the values of the weight coefficients, whereas *x<sup>li</sup>* represents the value of alternative *l* as per optimization criterion *i* in the initial decision-making matrix *X* = [*xij*]*m*×*n*.
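Expression (14) is a plain weighted sum. As a minimal sketch (the helper name is ours, not from the paper), the aggregation can be written as:

```python
def saw_score(weights, alternative_row):
    """Simple additive weighting, expression (14): f_l = sum_i w_i * x_li."""
    return sum(w * x for w, x in zip(weights, alternative_row))
```

For the car example below, `weights` would be the NDSL weight vector and `alternative_row` the normalized criterion values of one car.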

#### **3. Application of the NDSL Model**

This section is a demonstration of the application of the presented model for solving a real-world problem. With the aim of understanding the presented algorithm as easily as possible, the application of the NDSL model for solving the simple problem of evaluating a car, which a large number of people are faced with every day, is presented. The subject matter of consideration was the problem of selecting an optimal car from a set of cars by applying a larger number of criteria. For the purpose of this study, the criteria defined in the study [74] were considered.

The subject matter of consideration was the example in which the car buyer evaluates the alternatives by observing the following five criteria: the quality (C1), the price (C2), convenience/comfort (C3), the safety level (C4), and the interior (C5). If we accept the condition that *N* ≥ *n*<sup>2</sup>, i.e., *N* = 25, then we can determine the weight coefficients of the criteria by the NDSL model as follows:

Step 1: Determining the most significant criterion from within the set of criteria *S* = {*C*1, *C*2, . . . , *C*5}. Let C2 be selected as the most significant criterion;

Step 2: The criteria from within the set of criteria *S* = {*C*1, *C*2, . . . , *C*5} are ranked as follows: C2 > C1 = C4 > C3 > C5;

Step 3: Grouping the criteria as per significance levels. The criteria are grouped into sets at four levels, as follows:


At the first level, the criterion C2 is positioned as the most significant criterion, i.e., *C*<sup>2</sup> ∈ [1, 2). Since it has been estimated that the significance of the remaining criteria is more than two times as small as that of the criterion C2, they are classified as the other significance levels. At the second level, there are the criteria C1 and C4, because they have been estimated to have a weight coefficient which is two to three times as small as that of the criterion C2, i.e., *C*1,*C*<sup>4</sup> ∈ [2, 3). The criterion C3 is at the third level, since its weight coefficient is three to four times as small as that of the criterion C2, i.e., *C*<sup>3</sup> ∈ [3, 4). The criterion C5 is at the eighth level, since its weight coefficient has been estimated to be between eight and nine times as small as the weight coefficient of the most significant criterion (C2), i.e., *C*<sup>5</sup> ∈ [8, 9);

Step 4: Based on the relations for defining the boundary values of the criteria significance (α*<sup>i</sup>* ), we can determine the intervals for α*<sup>i</sup>* at every significance level, as follows:

> Level *L*<sup>1</sup> : α*<sup>i</sup>* ∈ [0.00, 8.33); Level *L*<sup>2</sup> : α*<sup>i</sup>* ∈ [8.33, 12.5); Level *L*<sup>3</sup> : α*<sup>i</sup>* ∈ [12.5, 15.0); Level *L*<sup>8</sup> : α*<sup>i</sup>* ∈ [19.44, 20.0).
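For reference, the four intervals above are consistent with inverting expression (11): a criterion at level *L<sup>j</sup>* has *f*(*Ci*) ∈ (1/(*j* + 1), 1/*j*], which yields the following boundaries (a reconstruction matching the listed values, not stated explicitly in the text):

```latex
% Criterion C_i at level L_j  <=>  f(C_i) \in (1/(j+1),\, 1/j]
% Inverting f = (N - \alpha)/(N + \alpha) gives \alpha = N(1 - f)/(1 + f), hence
L_j:\quad \alpha_i \in \left[\frac{N(j-1)}{j+1},\; \frac{N\,j}{j+2}\right)
% With N = 25: L_1 = [0, 8.33), L_2 = [8.33, 12.5), L_3 = [12.5, 15.0), L_8 = [19.44, 20.0)
```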

Step 5: Based on the defined intervals of the criteria significance (α*<sup>i</sup>* ), the experts' preferences as per levels are presented:

> Level *L*<sup>1</sup> : α<sup>2</sup> = 0; Level *L*<sup>2</sup> : α<sup>1</sup> = α<sup>4</sup> = 8.33; Level *L*<sup>3</sup> : α<sup>3</sup> = 14.9; Level *L*<sup>8</sup> : α<sup>5</sup> = 19.5.

Based on the presented values of α*<sup>i</sup>* , it is possible to conclude the following:

(1) For Level One: Since the criterion C2 is the most significant criterion, it has been assigned the value α<sup>2</sup> = 0;


Step 6: By applying the expression (11), the functions of the significance of the criteria *f*(*Ci*), *i* = 1, 2, . . . , 5, were defined as follows:

$$\begin{array}{l}f(\mathsf{C}\_{2}) = 1.000\\f(\mathsf{C}\_{1}) = 0.500\\f(\mathsf{C}\_{4}) = 0.500\\f(\mathsf{C}\_{3}) = 0.253\\f(\mathsf{C}\_{5}) = 0.124\end{array}$$

Step 7: Since the criterion C2 is defined as the most influential criterion, by applying the expression (12), it is possible to calculate the weight coefficient of the most significant criterion:

$$w\_2 = \frac{1}{1 + \sum\_{j=2}^{5} f(\mathbb{C}\_j)} = \frac{1}{1 + 0.500 + 0.500 + 0.253 + 0.124} = 0.421$$

The weight coefficients of the remaining criteria are obtained by applying the following expression (13):

$$\begin{array}{rcl} w\_1 &=& \frac{0.500}{1 + 0.500 + 0.500 + 0.253 + 0.124} = 0.210 \\ w\_4 &=& \frac{0.500}{1 + 0.500 + 0.500 + 0.253 + 0.124} = 0.210 \\ w\_3 &=& \frac{0.253}{1 + 0.500 + 0.500 + 0.253 + 0.124} = 0.106 \\ w\_5 &=& \frac{0.124}{1 + 0.500 + 0.500 + 0.253 + 0.124} = 0.052 \end{array}$$

In that way, the vector of the weight coefficients *w<sup>i</sup>* = (0.210, 0.421, 0.106, 0.210, 0.052) *T* is obtained.
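The Step 1–7 computation above can be reproduced with a short script (a sketch under the definitions of expressions (11)–(13); the function name is ours). Because the most significant criterion has α = 0 and thus *f* = 1, expressions (12) and (13) reduce to normalizing the significance-function values:

```python
def ndsl_weights(alphas, N):
    """NDSL weight coefficients from the assigned significance values alpha_i.

    alphas: dict criterion -> alpha_i (the most significant criterion has 0).
    N:      natural number satisfying N >= n**2.
    """
    # Expression (11): criteria significance functions
    f = {c: (N - a) / (N + a) for c, a in alphas.items()}
    # Expressions (12)/(13): since f(best) = 1, the common denominator
    # 1 + sum of the remaining f-values equals the sum of all f-values.
    denom = sum(f.values())
    return {c: f[c] / denom for c in f}

w = ndsl_weights({"C1": 8.33, "C2": 0, "C3": 14.9, "C4": 8.33, "C5": 19.5}, N=25)
# w is approximately {"C1": 0.210, "C2": 0.421, "C3": 0.106, "C4": 0.210, "C5": 0.052}
```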

#### **4. Comparison and Discussion**

In this section, based on the presented methodology, the advantages of the NDSL model that make the model a reliable and interesting multi-criteria model are singled out. The advantages of the NDSL model are presented through a comparison with known methodologies employed for the determination of criteria weight coefficients. The BWM and AHP methods were singled out for the purpose of the comparison, since the validity of both methodologies is based on the satisfaction of the condition of the transitivity of relations and a pairwise comparison. Additionally, other reasons for comparing the model with the BWM and AHP methods are the quality of the results and the widespread use of the BWM and AHP models by the scientific community for successfully solving numerous real world problems. Bearing in mind the fact that the NDSL model is methodologically based on an assessment of the comparative significance of criteria and satisfaction of the condition of transitivity, a comparison with the BWM and AHP models is a logical step for conducting a comparison of the results and validation of the model. In the following part, the application of the BWM and AHP methods is presented for the same example in which the NDSL model was tested in the previous chapter.

The algorithm of the BWM implies the formation of the Best-to-Others (BO) and the Others-to-Worst (OW) vector [75]: *A<sup>B</sup>* = (2, 1, 4, 2, 8) *T* and *A<sup>W</sup>* = (4, 8, 2, 4, 1) *T* , respectively. By applying the BWM, the optimal values of the weight coefficients were obtained, namely,

*w*<sup>1</sup> = 0.2105, *w*<sup>2</sup> = 0.4211, *w*<sup>3</sup> = 0.1053, *w*<sup>4</sup> = 0.2105, *w*<sup>5</sup> = 0.0526, and a consistency ratio (CR) of 0.00.

Based on the data from [75], a pairwise comparison matrix of the AHP model (Table 1) was formed, and the values of the weight coefficients of criteria, with a consistency ratio CR = 0.029, were obtained.


**Table 1.** Criteria pairwise comparison—the Analytic Hierarchy Process (AHP) method.

By applying the AHP method, values of the weight coefficients of criteria similar to those of the BWM were obtained, but with a significantly larger number of pairwise comparisons. The differences in the values of the weight coefficients between the AHP and BWM are a consequence of the incomplete consistency of the results in the AHP model (CR<sub>AHP</sub> = 0.029 and CR<sub>BWM</sub> = 0.000). A comparative presentation of the results of all three approaches is shown in Table 2.

**Table 2.** A comparative presentation of the results obtained by applying the NDSL, Best Worst Method (BWM), and AHP methods.


Table 2 allows us to notice that identical values of the weight coefficients of criteria were obtained by applying the BWM and NDSL models. The values obtained by applying the AHP deviate to a certain extent from the weights of the BWM and NDSL models. The solution obtained by the AHP model is also acceptable, since the values of the consistency ratio are within the permitted boundaries, i.e., CR ≤ 0.1. We need to emphasize the fact that, by applying the BWM and NDSL models to this example, completely consistent results were obtained, which was also confirmed by the calculation made, i.e., CR<sub>BWM</sub> = 0.00. Comparing criteria by applying a 9-degree scale (in the BWM), however, often leads to inconsistent results. Unlike the BWM and AHP models, the NDSL model always yields consistent results because it applies an original methodology for grouping criteria as per significance, within which transitivity relations between criteria are retained. In the next part of the paper, a discussion is presented through a comparison of the NDSL model with the BWM and AHP models. The discussion aims to point to the limitations of the BWM and AHP models, which are eliminated by the application of the NDSL model. The discussion is organized through the following: (1) a comparative presentation of the number of criteria pairwise comparisons needed in the analyzed models; (2) the impact of the measuring scale on the results of the BWM, AHP, and NDSL models; (3) the consistency of the results of the analyzed models; (4) the problem of defining the best and worst criteria in the BWM and NDSL models; and (5) the problem of multi-optimality in the BWM.

In the AHP method, *n*(*n* − 1)/2 pairwise comparisons need to be made, whereas the algorithm of the BWM implies 2*n* − 3 comparisons. An increase in the number of criteria in the BWM and AHP models leads to a significant increase in the number of pairwise comparisons, through which the mathematical formulation of the mentioned models is, to a great extent, made more complex. This complicates the validation of the results and makes it more difficult to obtain satisfactory values of the CR. On the other hand, in relation to the presented subjective models (the AHP method and BWM), the NDSL requires only *n* − 1 pairwise comparisons of criteria, so the mathematical formulation of the model does not become significantly more complex as the number of criteria increases. Apart from that, the presented methodology enables the transfer of mathematical transitivity across significance levels, which produces maximally consistent comparison results.
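The comparison counts quoted above can be tabulated directly (an illustrative helper; the function name is ours):

```python
def pairwise_comparisons(n):
    """Comparisons required for n criteria: AHP n(n-1)/2, BWM 2n-3, NDSL n-1."""
    return {"AHP": n * (n - 1) // 2, "BWM": 2 * n - 3, "NDSL": n - 1}

# For the five-criterion car example: AHP needs 10, BWM 7, and NDSL 4 comparisons.
```

The gap widens quickly: for ten criteria the counts are 45, 17, and 9, respectively.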

In the case of a larger number of criteria (more than eight), it is difficult to obtain fully consistent results in the BWM and AHP models. That is a consequence of the small range of the 9-degree scale used in these models. The 9-degree scale limits the expression of experts' preferences to a maximum ratio of 9:1. This limitation further imposes an inconsistency in comparisons. This assertion will be illustrated by the example of an evaluation of suppliers A, B, and C. If suppliers B and C differ from each other only slightly in terms of the quality of the delivery, the company has the possibility to assign them the values 9 and 8 when comparing them with supplier A. Now, given the fact that there is a small difference between suppliers B and C, that difference cannot consistently be expressed by means of the 9-degree scale. In that situation, there is no other possibility but to assign the value 1, through which the same significance is assigned to suppliers B and C [76]. Another example is as follows: if alternative A is preferable to B, and B is better than C (mark: 7), then once A is compared with C, the highest available mark is 9, which creates an inconsistency. Similar inconsistencies caused by the 9-degree scale also appear in the BWM, but they can be eliminated by the implementation of different scales.

These inconsistencies in comparisons are eliminated in the NDSL model. The NDSL model applies a different logic for criteria comparison, which is performed in two steps. The first step involves grouping criteria according to the significance levels, whereas in the second step, an expert evaluation of criteria is carried out through the scale defined for every level individually. By forming a criteria significance level, the shortcoming of the predefined scale of values is eliminated. The NDSL model enables us to form the needed number of such levels, which implies that experts have a sufficient freedom to express the realistic advantages of the most significant criterion in relation to other criteria.

The results of the NDSL model do not require the consistency of the results to be checked because, in the first step of the model, weight coefficients are ranked in relation to the most significant criterion. Therefore, transitivity relations between criteria are formed in the first step. Those relations are retained throughout the model by forming a non-decreasing series as per significance levels, so the results of the model are simultaneously also always consistent. On the other hand, the BWM and AHP models require the consistency of solution(s) to be checked and validation of the results obtained. The 9-degree scale and a large number of comparisons frequently undermine the transitivity between criteria in both models, which leads to an increase in the CR and the boundary values being exceeded.

#### **5. Conclusions**

In this paper, a new model for determining the weight coefficients of criteria in multi-criteria models by forming a non-decreasing series at criteria significance levels (NDSL) is presented. The NDSL model involves forming a non-decreasing series based on criteria significance levels. The mathematical formulation of the NDSL model is systematized in the second section of the paper, and an algorithm, which is implemented through seven steps, is proposed. With the aim of presenting the applicability of the new model, its application in decision-making in a real-world problem is demonstrated. A comparison of the results of the NDSL model and the results of the BWM and AHP models is also presented in the paper. It was demonstrated through a comparison with the mentioned models that the NDSL model generates the same results as the existing models and enables elimination of the weaknesses that exist in the BWM and AHP models.

The NDSL model has several interesting characteristics that make it a robust and interesting model to apply in multi-criteria decision-making, namely due to the following facts: (1) the NDSL model requires a significantly smaller number of pairwise comparisons of criteria, needing only *n* − 1 comparisons, whereas the AHP requires *n*(*n* − 1)/2 comparisons and the BWM requires 2*n* − 3 comparisons; (2) the model enables us to obtain consistent results, even in the case of a larger number of criteria (more than nine criteria); (3) the NDSL model applies an original algorithm for grouping criteria as per significance levels, through which the shortcomings of the 9-degree scale applied in the BWM and AHP models are eliminated. In that way, the small range and inconsistency of the 9-degree scale are eliminated; (4) while the BWM includes defining a unique best/worst criterion, the NDSL model eliminates this limitation and gives decision-makers the freedom to express relationships between criteria in accordance with their preferences, irrespective of the number of best/worst criteria in the model.

The NDSL model represents a tool which helps managers cope with their own subjectivity when prioritizing criteria through a simple and logical algorithm. By employing the presented model, the appearance of the inconsistency of experts' preferences is eliminated through an original algorithm requiring a small number of comparisons (*n* − 1). The authors believe that this approach gives experts the opportunity to express their preferences in a natural way, by forming the level of significance of criteria. Accordingly, it is expected that by forming the criteria significance level, the shortcomings and limitations that exist in predefined assessment scales are eliminated. For example, when comparing the best (CB) criterion with the C<sup>x</sup> criterion, an expert knows that the C<sup>B</sup> criterion is 2.5 times more significant than the C<sup>x</sup> criterion. In pairwise comparison methods that use the Saaty scale, such a relationship cannot be represented directly, since the Saaty scale involves only integer values. Through the formation of significance levels, the expert is given the opportunity to classify the C<sup>x</sup> criterion as belonging to another level in a logical manner, or based on their preferences, since they already know that the C<sup>B</sup> criterion is 2.5 times more important than the C<sup>x</sup> criterion. From this, we can conclude that the experts indirectly form the significance levels of the criteria. However, the mathematical formulation of existing models for pairwise comparisons requires experts to represent the significance of criteria by defining relationships over a numerical scale. In this way, criteria are indirectly grouped into levels of significance. However, such a procedure can lead to a misrepresentation of the significance of the criterion, which may be due to a misunderstanding of the mathematical apparatus of the method. 
Bearing all of the above in mind, the authors believe that this formulation of the interrelation between criteria enables the rational and logical expression of expert preferences, which further contributes to objective decision making.

Bearing in mind the mentioned advantages of the NDSL, there is a need for the development and implementation of software for real-world applications. Through such work, the model will be brought significantly closer to users and will enable the exploitation of all of the advantages mentioned in the paper. This limitation has already been partly addressed: the authors developed a software solution in Microsoft Excel while working on this study. We also propose the application of the model to other real-world problems, in which the NDSL model would be used together with other developed MCDM tools. One of the directions of future research studies should be working towards the extension of this model through the application of different theories of uncertainty, such as neutrosophic sets, fuzzy sets, rough numbers, grey theory, and so forth. The extension of the NDSL through the application of theories of uncertainty will enable the processing of experts' preferences, even when comparisons are made based on partly known or even very little-known data. This would enable an easier expression of the decision-maker's preferences, simultaneously respecting the subjectivities and shortcomings of information about certain phenomena.

**Author Contributions:** Conceptualization, M.Ž. and D.P.; methodology, M.Ž., D.P., and G.C.; validation, B.D.M. and M.M.Ž.; writing—original draft preparation, D.P.; writing—review and editing, M.Ž. and D.P.; supervision, D.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Predicting the Dynamic Response of Dual-Rotor System Subject to Interval Parametric Uncertainties Based on the Non-Intrusive Metamodel**

#### **Chao Fu <sup>1,2</sup>, Guojin Feng <sup>2</sup>, Jiaojiao Ma <sup>2,3</sup>, Kuan Lu <sup>1,</sup>\*, Yongfeng Yang <sup>1</sup> and Fengshou Gu <sup>2</sup>**


Received: 12 April 2020; Accepted: 3 May 2020; Published: 7 May 2020

**Abstract:** In this paper, the non-probabilistic steady-state dynamics of a dual-rotor system with parametric uncertainties under two-frequency excitations are investigated using the non-intrusive simplex form mathematical metamodel. The Lagrangian formulation is employed to derive the equations of motion (EOM) of the system. The simplex form metamodel without the distribution functions of the interval uncertainties is formulated in a non-intrusive way. In the multi-uncertain cases, strategies aimed at reducing the computational cost are incorporated. In numerical simulations for different interval parametric uncertainties, the special propagation mechanism is observed, which cannot be found in single rotor systems. Validations of the metamodel in terms of efficiency and accuracy are also carried out by comparisons with the scanning method. The results will be helpful to understand the dynamic behaviors of dual-rotor systems subject to uncertainties and provide guidance for robust design and analysis.

**Keywords:** dual-rotor; multi-frequency excitation; non-intrusive calculation; metamodel

#### **1. Introduction**

Risk analyses and optimization of engineering mechanical systems always play an important role in design and maintenance [1,2]. To optimize and improve the dynamic performance, a dual-rotor system is widely employed in modern aero-engines for its large surge margin. It is more complicated than single-rotor systems in both the structural and dynamical regimes. Researchers have paid attention to the vibrations of dual-rotors under faults, such as imbalance and rub-impact [3–5]. The design and modeling of dual-rotors were also intensively studied over the past few decades [6–9]. The application of rotor-bearing structures in dual-rotor systems was investigated both theoretically and experimentally [10]. To improve the fidelity, the differences between 1D and 3D models of dual-rotor systems were studied [11].

The reported contributions provide guidance for the dynamical assessments of dual-rotor systems. However, an important feature of practical engineering mechanical systems has been ignored, which is that the physical parameters of the models and working conditions will behave in an inherently uncertain way [12–15]. For a complex engineering dual-rotor system, this problem will be more prominent. In recent literature [16–19], the sources and causes of parametric uncertainties in rotor systems were explained in detail, especially the complex stiffness of the connecting structures. It is gradually recognized that the inherent uncertainty should not be overlooked for robust design and dynamic behavior prediction. Efforts have already been made to investigate the effects of uncertainties in rotor dynamical systems. The polynomial chaos expansion in combination with the harmonic balance method was used to quantify the effects of different random parametric uncertainties on the linear and non-linear dynamical characteristics [20–22]. More recently, Kriging metamodeling was applied to the prediction of uncertain behaviors of flexible rotor systems [17]. The nonparametric modeling [14] was also introduced into the uncertainty quantification for a Jeffcott rotor [23], as well as analyses in terms of balancing and unbalancing [24]. Considering random excitations, the power spectral density of the unbalance response of an aero-engine dual-rotor was analyzed in [25]. The modeling and stochastic frequency response functions of rotors subject to random uncertainties were studied by using the Karhunen–Loève decomposition [26].

As can be observed, the widely adopted uncertainty analysis methods mostly belong to the probabilistic domain. In practical situations, it is generally difficult or too expensive to gather enough prior data for the uncertain parameters. The interval analysis procedures are more versatile and easier to implement due to their non-probabilistic characteristics [13]. The Chebyshev inclusion function proposed by Wu et al. [27] has attracted wide attention in the past few years due to its simplicity in concept and non-intrusiveness. Several improved forms have been developed and applied to uncertain mechanical systems [28,29]. Although interval analysis has been widely used in the structural dynamics of truss and multibody systems, it was not applied to uncertain rotor dynamics until recent years [30,31]. Some meaningful results have been obtained in these contributions using the interval models. However, formulations and applications of metamodeling methodologies based on non-probabilistic descriptions have not attracted sufficient attention, and the computational burden also needs to be reduced. The vibration characteristics of dual-rotor systems subject to multi-frequency excitation and interval variables remain to be revealed.

This paper presents the non-intrusive metamodeling for the uncertainty quantification of a dual-rotor system. The major purposes are to calculate the steady-state dynamic responses of such a system under interval uncertainties and illustrate the effectiveness of the metamodel. First, the dual-rotor model and its motion equations will be described in Section 2. Then, in Section 3, the formulation of the metamodel for single and multi-uncertain variables is explained. Next, propagations of uncertainties of different physical parameters are studied and discussed in Section 4. Finally, the concluding remarks are drawn in Section 5.

#### **2. Model Description and Motion Equations**

A dual-rotor system often consists of a high-pressure (HP) rotor and a low-pressure (LP) rotor, which are connected by the inter-shaft bearing and rotate at different angular speeds. They can also be referred to as the inner and outer rotors [8,32]. Figure 1 shows the schematic diagram of a typical dual-rotor system. The rotors are mounted on massless shafts and supported by three rigid isotropic bearings with stiffness and damping *k*1, *c*1; *k*2, *c*2; and *k*3, *c*3. Here, *m*1, *Jd*1, *Jp*<sup>1</sup> and *m*2, *Jd*2, *Jp*<sup>2</sup> are the mass, diameter moment of inertia and polar moment of inertia of the LP and HP rotors, respectively. There are mass imbalances on both of the rotors, denoted by *e*<sup>1</sup> and *e*2. The angular rotating speeds of the LP and HP rotors are ω<sup>1</sup> and ω2. The span of the system is *L*, and the other locations are measured by their corresponding distances *L<sup>i</sup>*, *i* = 1, 2, 3, 4 from the left end, as shown in Figure 1.

**Figure 1.** Configuration of a typical dual-rotor system.

The system can be described by eight degrees-of-freedom (DOFs), four for each rotor, i.e., two lateral displacements and two rotational angles [33,34]. The displacement vector is obtained as


$$\mathbf{q} = [x\_1, y\_1, \theta\_{y1}, \theta\_{x1}, x\_2, y\_2, \theta\_{y2}, \theta\_{x2}]^T \tag{1}$$

where subscripts 1 and 2 correspond to the LP and HP rotors. After this, the coordinates of the three bearing centers can be derived using the eight basic DOFs

$$\begin{cases} x\_{b1} = x\_1 - L\_1\theta\_{y1} \\ y\_{b1} = y\_1 + L\_1\theta\_{x1} \end{cases} \quad \begin{cases} x\_{b2} = x\_1 + (L - L\_1)\theta\_{y1} \\ y\_{b2} = y\_1 - (L - L\_1)\theta\_{x1} \end{cases} \quad \begin{cases} x\_{b3} = x\_2 - (L\_3 - L\_2)\theta\_{y2} \\ y\_{b3} = y\_2 + (L\_3 - L\_2)\theta\_{x2} \end{cases} \tag{2}$$
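Equation (2) is a set of rigid-kinematics relations and can be transcribed directly (function and variable names are ours, for illustration only):

```python
def bearing_centers(q, L, L1, L2, L3):
    """Bearing-center coordinates from the eight DOFs, Equation (2).

    q = [x1, y1, theta_y1, theta_x1, x2, y2, theta_y2, theta_x2].
    Returns ((x_b1, y_b1), (x_b2, y_b2), (x_b3, y_b3)).
    """
    x1, y1, ty1, tx1, x2, y2, ty2, tx2 = q
    b1 = (x1 - L1 * ty1, y1 + L1 * tx1)
    b2 = (x1 + (L - L1) * ty1, y1 - (L - L1) * tx1)
    b3 = (x2 - (L3 - L2) * ty2, y2 + (L3 - L2) * tx2)
    return b1, b2, b3
```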

Modeling rotating systems based on the energy analyses is widely employed in the community of rotor dynamics [34]. For the system under study, the kinetic energy can be calculated as

$$\begin{cases} T = T\_1 + T\_2 \\ T\_i = \frac{1}{2}m\_i(\dot{x}\_i^2 + \dot{y}\_i^2) + \frac{1}{2}J\_{di}(\dot{\theta}\_{xi}^2 + \dot{\theta}\_{yi}^2) + \frac{1}{2}J\_{pi}\omega\_i^2 - J\_{pi}\omega\_i\dot{\theta}\_{yi}\theta\_{xi}, \; i = 1, \; 2 \end{cases} \tag{3}$$

The potential energy is contributed by the three bearings and can be denoted by

$$\begin{cases} V = V\_1 + V\_2 + V\_3 \\ V\_i = \frac{1}{2}k\_i(x\_{bi}^2 + y\_{bi}^2), \; i = 1, \; 2, \; 3 \end{cases} \tag{4}$$

Accordingly, the dissipation energy can be expressed as

$$\begin{cases} D = D\_1 + D\_2 + D\_3 \\ D\_i = \frac{1}{2} c\_i (\dot{\boldsymbol{x}}\_{bi}^2 + \dot{\boldsymbol{y}}\_{bi}^2), \; i = 1, \; 2, \; 3 \end{cases} \tag{5}$$

If the connection of the inner and outer rotors is modeled as a linear spring with stiffness *kc*, the reacting forces of the inter-shaft bearing are as follows

$$\begin{cases} F\_x = k\_c[\mathbf{x}\_1 + (L\_4 - L\_1)\theta\_{y1} - \mathbf{x}\_2 - (L\_4 - L\_3)\theta\_{y2}] \\\ F\_y = k\_c[y\_1 - (L\_4 - L\_1)\theta\_{x1} - y\_2 + (L\_4 - L\_3)\theta\_{x2}] \end{cases} \tag{6}$$

When rotating, the unbalance forces on the two rotors are obtained by

$$\begin{cases} \mathbf{F}\_{\mathsf{u}1}(t) = \left[ m\_1 \mathbf{e}\_1 \omega\_1^2 \cos(\omega\_1 t), \ m\_1 \mathbf{e}\_1 \omega\_1^2 \sin(\omega\_1 t), \ \mathbf{0}, \ \mathbf{0}, \ \mathbf{0}, \ \mathbf{0}, \ \mathbf{0}\right]^T\\ \mathbf{F}\_{\mathsf{u}2}(t) = \left[ \mathbf{0}, \ \mathbf{0}, \ \mathbf{0}, \ \mathbf{0}, \ m\_2 \mathbf{e}\_2 \omega\_2^2 \cos(\omega\_2 t), \ m\_2 \mathbf{e}\_2 \omega\_2^2 \sin(\omega\_2 t), \ \mathbf{0}, \ \mathbf{0}\right]^T \end{cases} \tag{7}$$

The Lagrangian equation considering dissipation effects can be applied to the system as

$$\frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{\partial T}{\partial \dot{q}\_j} \right) + \frac{\partial D}{\partial \dot{q}\_j} - \frac{\partial T}{\partial q\_j} + \frac{\partial V}{\partial q\_j} = Q\_j, \text{ } j = 1, \text{ 2, ..., 8} \tag{8}$$

Substituting Equations (1)–(7) into Equation (8) and rearranging the results into matrix form, the motion equations of the dual-rotor system can be obtained as

$$\mathbf{M}\ddot{\mathbf{q}}(t) + \mathbf{\tilde{C}}\dot{\mathbf{q}}(t) + \mathbf{K}\mathbf{q}(t) = \mathbf{F}(t) \tag{9}$$

where **M** and **K** are the mass and stiffness matrices of the system, **C̃** includes the damping and gyroscopic effects, and **F**(*t*) integrates the unbalance forces and the reacting forces in the inter-shaft bearing. A dot over the displacement vector **q** denotes differentiation with respect to time. The rotational speeds or frequencies of the inner and outer rotors are incommensurable, making Equation (9) a system excited by two frequencies. Its solution can be obtained by numerical methods; the fourth-order Runge–Kutta method with variable step size is used in this paper.
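The paper uses a variable-step fourth-order Runge–Kutta scheme; as an illustrative sketch only (fixed step, generic matrices, names ours, not the authors' solver), integrating a system of the form of Equation (9) in state-space form looks like:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(M, C, K, F, q0, v0, t_end, h):
    """Integrate M q'' + C q' + K q = F(t) rewritten as first-order ODEs."""
    Minv = np.linalg.inv(M)
    n = len(q0)

    def rhs(t, y):
        q, v = y[:n], y[n:]
        a = Minv @ (F(t) - C @ v - K @ q)
        return np.concatenate([v, a])

    y, t = np.concatenate([q0, v0]), 0.0
    while t < t_end - 1e-12:
        y = rk4_step(rhs, t, y, h)
        t += h
    return y  # final state [q, q_dot]
```

For a two-frequency excitation as in Equation (9), `F(t)` would return the sum of the two unbalance force vectors of Equation (7).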

#### **3. Non-Intrusive Interval Analysis of the System Based on Meta-Modeling**

As a practical problem, the accurate distribution model of the uncertainty is difficult to establish. In other words, the problem is small sample-sized. Therefore, the interval methods may be more suitable to implement. However, the intrusive interval methods need to modify the deterministic solution packages and are complicated in mathematical formulation. The surrogate methods [17,28,30] popular nowadays should be a good choice. These methodologies are simple in deduction, and they work in a non-intrusive way because the deterministic dynamic problem can be used as a black box. Importantly, their computational cost should be carefully managed to ensure economic and feasible analyses. In this paper, we establish a simplex metamodel for the dynamic responses of the dual-rotor system considering non-probabilistic interval uncertainties. The small-range constraint in the conventional perturbation method is relaxed. Moreover, the surrogates need not find the gradient direction of parametric uncertainties to track their propagations, which is essential in the Taylor-based interval methods. In the latter, the difficulties in obtaining the high-order derivative information will also limit the applications. Without loss of generality, we first consider the case where only one interval parameter is included. The interval parameter can be expressed as

$$a^I = [\underline{a}, \overline{a}] = [a^c - \beta a^c, a^c + \beta a^c] \tag{10}$$

where the superscript $I$ designates an interval variable, and the bars over and under a quantity denote its upper and lower limits. The notations $a^c$ and $\beta$ are the mid-value and uncertain degree of $a^I$. An interval quantity is completely defined when its lower and upper bounds are given, which are much easier to obtain than a precise probability model. The following relationships can be further obtained

$$\begin{cases} a^c = (\underline{a} + \overline{a}) / 2 \\ \Delta a = (\overline{a} - \underline{a}) / 2 \\ \beta = \Delta a / a^c \end{cases} \tag{11}$$

Taking the interval uncertainty into consideration, the motion equations of the dual-rotor system evolve to interval ordinary differential equations (IODEs). Due to the interval input, the system outputs should also be interval quantities. Consequently, we can rewrite the displacement vector of the dual-rotor system in interval form

$$\mathbf{q}^I(t) = [\underline{\mathbf{q}}(t), \overline{\mathbf{q}}(t)] = \left\{ \mathbf{q}(a, t) | a \in a^I \right\} \tag{12}$$

Efforts should be made to find the distribution limits of the uncertain displacements. Direct interval arithmetic would introduce enormous errors, making the results meaningless. Here, we consider Equation (12) as a constraint on the system given in Equation (9) and formulate the metamodel based on approximation theory. Equation (10) can be written in another form using the standard interval

$$a^I = a^c + \Delta a [-1, \ 1] \tag{13}$$

It is clear that for any possible value of the uncertain parameter $a \in a^I$, we can find an equivalent variable $\xi \in [-1, 1]$ through a linear projection. Therefore, the uncertain problem can be handled on the standard interval, and the actual value of the uncertain parameter can be recovered by the reverse projection. A simplex radial basis is thus established

$$\Xi = \begin{bmatrix} 1, \ \xi, \ \xi^2, \ \cdots, \ \xi^m, \ \cdots \end{bmatrix}^{\mathrm{T}} \tag{14}$$

Based on the polynomial basis, a simplex form metamodel of the uncertain responses of the dual-rotor can be constructed as

$$S(\xi) = \sum_{i=0}^{\infty} \Upsilon_i \xi^i = \Upsilon \Xi, \quad i = 0, \ 1, \ 2, \ \dots \tag{15}$$

Equation (15) attempts to approximate the actual uncertain system with the weighted sum of a series of simplex terms. In practical calculation, only a finite number of terms can be considered, and we truncate the series at order $k$ herein. The weight coefficient vector $\Upsilon = \{\Upsilon_i\}, \ i = 0, 1, 2, \dots, k$ needs to be determined to fully formulate the metamodel. To this end, samples of the responses of the dual-rotor should be drawn. The roots of orthogonal polynomials are effective sample candidates in the parameter space and are widely adopted in stochastic and non-probabilistic computations [29,35]. Here, the Chebyshev roots will be used, which can be calculated as

$$\vartheta_i = \cos\left[\frac{2i - 1}{2(k + 1)}\pi\right], \ i = 1, 2, \dots, k + 1 \tag{16}$$

Subsequently, the sampled responses of the deterministic system can be obtained by simulations of the model with the uncertain parameter set to the sample values and all others kept at their mid-values.

$$\widetilde{a}_i = a^c + \beta a^c \vartheta_i, \ i = 1, 2, \cdots, k + 1 \tag{17}$$

$$\widetilde{\mathbf{q}}_i(t) = \left\{ \mathbf{q}(a, t) \,\middle|\, a = \widetilde{a}_i \in a^I \right\}, \ i = 1, 2, \cdots, \ k + 1 \tag{18}$$
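As a sketch of the sampling step in Equations (16)–(18), here is a minimal Python illustration (the paper's computations were carried out in MATLAB; the parameter values below are illustrative only):

```python
import numpy as np

def chebyshev_roots(k):
    """Roots of the (k+1)-th order Chebyshev polynomial on [-1, 1], Equation (16)."""
    i = np.arange(1, k + 2)
    return np.cos((2 * i - 1) / (2 * (k + 1)) * np.pi)

def parameter_samples(a_c, beta, k):
    """Project the standard-interval roots onto the physical interval, Equation (17)."""
    return a_c + beta * a_c * chebyshev_roots(k)

# Example: a third-order metamodel (k = 3) of a parameter with 10% uncertainty
# needs k + 1 = 4 deterministic solver runs.
samples = parameter_samples(a_c=3e-5, beta=0.10, k=3)
print(samples)
```

Each sample in turn is fed to the deterministic solver to produce the responses of Equation (18).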

Subsequently, Equation (15) evolves to

$$\widetilde{\mathbf{q}}(\vartheta) = \Upsilon \widetilde{\Xi}(\vartheta) \tag{19}$$

where $\widetilde{\mathbf{q}}$ and $\widetilde{\Xi}$ are the sample response vector from the dual-rotor and the value matrix of the radial basis vector at the uncertainty sample series $\vartheta = \{\vartheta_i\}, \ i = 1, 2, \cdots, k + 1$. The $\widetilde{\Xi}$ is expressed as

$$
\widetilde{\Xi} = \begin{bmatrix}
1 & \vartheta_1 & \vartheta_1^2 & \cdots & \vartheta_1^k \\
1 & \vartheta_2 & \vartheta_2^2 & \cdots & \vartheta_2^k \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \vartheta_{k+1} & \vartheta_{k+1}^2 & \cdots & \vartheta_{k+1}^k
\end{bmatrix} \tag{20}
$$

In Equation (19), there are *k* + 1 unknown weight coefficients and the number of equations is the same. Thus, the coefficient vector can be directly solved

$$\Upsilon = \widetilde{\mathbf{q}}(\vartheta) \left[ \widetilde{\Xi}(\vartheta) \right]^{-1} \tag{21}$$

Once the coefficient vector is obtained, the metamodel is fully determined. It is a simplex form function aimed at representing the actual distribution model of the uncertain dynamic response, whose mathematical description is unknown. As the lower and upper bounds of the system responses are of interest, the metamodel can be used to derive these values, which is straightforward.
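The single-parameter workflow of Equations (16)–(21), together with the bound search, can be sketched as follows; `solve_response` is a hypothetical stand-in for the deterministic dual-rotor solver treated as a black box:

```python
import numpy as np

def fit_metamodel(solve_response, a_c, beta, k):
    """Build the order-k simplex metamodel of one interval parameter.

    solve_response(a): deterministic solver treated as a black box,
    returning the response quantity of interest for parameter value a.
    """
    i = np.arange(1, k + 2)
    theta = np.cos((2 * i - 1) / (2 * (k + 1)) * np.pi)   # Eq. (16)
    a_samples = a_c + beta * a_c * theta                   # Eq. (17)
    q = np.array([solve_response(a) for a in a_samples])   # Eq. (18)
    Xi = np.vander(theta, k + 1, increasing=True)          # Eq. (20): rows [1, th, ..., th^k]
    return np.linalg.solve(Xi, q)                          # Eqs. (19) and (21)

def response_bounds(coeffs, p=201):
    """Scan the metamodel over [-1, 1] to estimate the response bounds."""
    xi = np.linspace(-1.0, 1.0, p)
    s = np.polyval(coeffs[::-1], xi)                       # metamodel S(xi), Eq. (15)
    return s.min(), s.max()
```

For a response that varies smoothly with the parameter, a low order (the paper uses order three) already interpolates the k + 1 deterministic runs exactly at the Chebyshev nodes.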

For multiple uncertain variables, the basic idea is the same, but some strategies to reduce the computational cost should be incorporated. For example, in the case that the dual-rotor contains *n* interval uncertainties, the radial basis vector can be rewritten in ascending order as

$$\Xi = \begin{bmatrix} 1, \ \xi\_1, \cdots, \ \xi\_n, \xi\_1^2, \xi\_1 \xi\_2, \cdots, \ \xi\_n^2, \cdots, \ \xi\_1^k, \xi\_1^{k-1} \xi\_2, \cdots, \ \xi\_n^k \end{bmatrix}^T \tag{22}$$

The number of elements in Ξ is

$$N = \frac{(n+k)!}{n!k!} \tag{23}$$

The metamodel is expressed by the weighted sum of terms whose order is no greater than *k*

$$S(\xi) = \sum_{0 \le i_1 + \dots + i_n \le k} \Upsilon_{i_1 \cdots i_n} \xi_1^{i_1} \xi_2^{i_2} \cdots \xi_n^{i_n} = \Upsilon \Xi, \ i_1, \dots, i_n = 0, 1, \dots, k \tag{24}$$

where $\Upsilon_{i_1, \cdots, i_n}$ is the multi-dimensional coefficient. There will be $(k + 1)^n$ samples based on Equation (16) when all the sample candidates are chosen for every uncertain dimension.
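The growth of the full tensor sample grid against the basis size of Equation (23) is easy to check numerically; a small sketch:

```python
from math import comb

def basis_size(n, k):
    """Number of basis terms of total order <= k in n variables, Equation (23)."""
    return comb(n + k, k)

def full_grid_size(n, k):
    """Samples needed when every Chebyshev root is kept in each dimension."""
    return (k + 1) ** n

# Three interval parameters, third-order basis:
print(basis_size(3, 3))      # 20 basis terms
print(full_grid_size(3, 3))  # 64 full-grid samples
```

With three parameters and a third-order basis, the reduced scheme described next keeps only 2N = 40 samples instead of 64, and the gap widens rapidly as n grows.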

In problems with a relatively large number of interval parameters, the computational cost will be high. It has been demonstrated that when the number of samples used is twice the number of unknowns in the metamodel, the model is robust and the efficiency is enhanced [36]. In this way, the number of samples kept will be 2*N*. Since the number of unknown coefficients is no longer equal to the number of equations, the least squares method can be introduced to evaluate the regression coefficients

$$\Upsilon = \widetilde{\mathbf{q}}(\widetilde{\vartheta}) \widetilde{\Xi} \left( \widetilde{\Xi}^{\mathrm{T}} \widetilde{\Xi} \right)^{-1} \tag{25}$$

where $\widetilde{\mathbf{q}}(\widetilde{\vartheta})$ is the $8 \times 2N$ matrix of the deterministic sample responses drawn from the dual-rotor system based on the uncertain parameter sample sets

$$\begin{cases} \widetilde{\vartheta} = [\vartheta_1, \vartheta_2, \dots, \vartheta_{2N}] \\ \vartheta_j = \{\vartheta_{i,j}\}, \ i = 1, 2, \dots, n \end{cases} \tag{26}$$

In Equation (26), there are 2*N* sets of samples and each set contains *n* elements. The matrix $\widetilde{\Xi}$ in Equation (25) represents the values of the radial basis vector calculated at the parameter sample sets

$$
\widetilde{\Xi} = \begin{bmatrix}
1 & \vartheta_{1,1} & \cdots & \vartheta_{n,1} & \vartheta_{1,1}^2 & \vartheta_{1,1}\vartheta_{2,1} & \cdots & \vartheta_{n,1}^2 & \cdots & \vartheta_{1,1}^k & \vartheta_{1,1}^{k-1}\vartheta_{2,1} & \cdots & \vartheta_{n,1}^k \\
1 & \vartheta_{1,2} & \cdots & \vartheta_{n,2} & \vartheta_{1,2}^2 & \vartheta_{1,2}\vartheta_{2,2} & \cdots & \vartheta_{n,2}^2 & \cdots & \vartheta_{1,2}^k & \vartheta_{1,2}^{k-1}\vartheta_{2,2} & \cdots & \vartheta_{n,2}^k \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \vartheta_{1,2N} & \cdots & \vartheta_{n,2N} & \vartheta_{1,2N}^2 & \vartheta_{1,2N}\vartheta_{2,2N} & \cdots & \vartheta_{n,2N}^2 & \cdots & \vartheta_{1,2N}^k & \vartheta_{1,2N}^{k-1}\vartheta_{2,2N} & \cdots & \vartheta_{n,2N}^k
\end{bmatrix}_{2N \times N} \tag{27}
$$

in which the first subscript refers to the different uncertain variables and the second to the sample sets expressed in Equation (26). The above derivation addresses interval problems involving multiple parametric uncertainties, with strategies embedded to alleviate the computational burden.
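The regression of Equation (25) reduces to a few matrix products; a minimal sketch, assuming the sample response matrix and the basis-value matrix of Equation (27) have already been assembled (names and shapes here are illustrative):

```python
import numpy as np

def regress_coefficients(q_samples, Xi):
    """Least-squares solve of Equation (25).

    q_samples: (n_dof, 2N) matrix of deterministic sample responses
    Xi:        (2N, N) values of the radial basis vector at the 2N sample sets
    Returns the (n_dof, N) coefficient matrix.
    """
    # Literal form of Eq. (25); numerically, np.linalg.lstsq(Xi, q_samples.T)
    # would be preferred for ill-conditioned bases.
    return q_samples @ Xi @ np.linalg.inv(Xi.T @ Xi)
```

With twice as many samples as unknowns, the normal-equation solve above fits the metamodel in the least-squares sense rather than interpolating.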

When the explicit metamodel is constructed, the bounds of the dynamic response, i.e., the extreme values of the metamodel, can be easily solved. Since it is in simplex form, the scanning method can be applied to the metamodel to find the bounds efficiently when the dimension of uncertainties is not too high (no greater than three, for example). It can be expressed as

$$\begin{cases} \mathbf{s}_i = S(\hat{\xi}_i), \ i = 1, 2, \dots, p \\ \underline{\mathbf{q}}(t) = \min([\mathbf{s}_1, \dots, \mathbf{s}_p]), \ \overline{\mathbf{q}}(t) = \max([\mathbf{s}_1, \dots, \mathbf{s}_p]) \end{cases} \tag{28}$$

where $\hat{\xi}_i$ represents the grid parametric points produced in the scanning and *p* is the total number of them. If many uncertainties are involved (greater than three), the max/min values of the meta-function should be evaluated by optimization methods, such as the genetic algorithm [28].

#### **4. Results and Discussions**

In this section, numerical simulations of the dual-rotor system based on the previous approaches will be presented. The model has the following physical parameter values: $m_1 = 16.25$ kg, $J_{p1} = 0.134\ \mathrm{kg \cdot m^2}$, $J_{d1} = 0.0698\ \mathrm{kg \cdot m^2}$; $m_2 = 8.4$ kg, $J_{p2} = 0.0793\ \mathrm{kg \cdot m^2}$, $J_{d2} = 0.0405\ \mathrm{kg \cdot m^2}$; $e_1 = 3 \times 10^{-5}$ m, $e_2 = 8 \times 10^{-5}$ m; $L_1 = 0.2$ m, $L_2 = 0.24$ m, $L_3 = 0.44$ m, $L_4 = 0.54$ m, $L = 0.62$ m; $c_1 = c_2 = c_3 = 14.69\ \mathrm{N \cdot s/m}$, $k_1 = k_2 = k_3 = 5 \times 10^6$ N/m, $k_b = 8 \times 10^7$ N/m. The rotation speed ratio between the HP and LP rotors is 1.2. In this paper, we use the maximum vibration deflections of the two rotors at every rotating speed for demonstration, calculated as $\max(\sqrt{x_i^2 + y_i^2}), \ i = 1, 2$. The deterministic steady-state dynamic responses of the HP and LP rotors excluding uncertainty are given in Figure 2. It is observed that the first two peaks appear at 738.4 rad/s and 886.1 rad/s for both rotors, and the amplitudes of the LP rotor are higher than those of the HP rotor.
It should be noted that the simplified model used in the current study is sufficient for uncertainty propagation analysis and excludes irrelevant factors that may cause response variability. In order to capture detailed natural characteristics, however, sophisticated physical models should be developed for the comprehensive modal analysis of engineering dual-rotor systems. We refer interested readers to [37] for more information. In the next few sections, different physical parameters are considered uncertain and their effects are investigated based on the third-order metamodel.

#### *4.1. Effect of Interval Mass Eccentricity*

Firstly, we treat the uncertainties in the two mass eccentricities of the rotors. In an engineering context, the balancing status often worsens after assembling the well-balanced rotors. The reasons may include assembly errors and the difficulty of measurement. Moreover, the imbalance can be influenced by material degradation and wear. Therefore, the imbalance or mass eccentricity should be considered uncertain in the analysis. We take the uncertain degree to be $\beta = 10\%$. If $e_1 = [2.7, 3.3] \times 10^{-5}$ m, the interval response can be analyzed using the metamodel established in Section 3 and the results for the LP rotor are plotted in Figure 3. For comparison, the results for uncertain imbalance on the HP rotor are given in Figure 4 when $e_2 = [7.2, 8.8] \times 10^{-5}$ m. To provide a reference for the uncertain effects, the deterministic curves shown in Figure 2 are added in all the uncertain cases.


**Figure 2.** Deterministic steady-state responses of the dual-rotor system.

**Figure 3.** Dynamic response of the Lower Pressure (LP) rotor with uncertain imbalance on the LP rotor.

**Figure 4.** Dynamic response of the LP rotor with uncertain imbalance on the Higher Pressure (HP) rotor.

A major difference between Figures 3 and 4 is that the response intervals occur in different rotation speed bands. Uncertainty in either of the two imbalances will not cause significant deviations of the system responses over the whole speed range. More specifically, the uncertainty in the imbalance on the LP rotor has effects mainly in the speed range around the second peak, while interval imbalance on the HP rotor influences the first resonance area. That is because the mass imbalances on the two rotors correspond to their respective vibration peaks. Similar characteristics are also found in the parametric investigations of dual-rotor systems [8]. This phenomenon indicates the different sensitivities of the system in different speed ranges to the two imbalances, which is not observed in single rotor systems. In the respective effective ranges of the two uncertainties, the deterministic response is symmetrically deviated and the enveloped ranges are related to the magnitude of the uncertain degree. Due to the presence of uncertainties, the dynamic response of the system can take any possible value within the response interval.

#### *4.2. Effect of Interval Bearing Stiffness*

The stiffness of bearing #1 is taken as an interval quantity to cover its variability [15–18]. Generally, it is difficult to define the accurate value of the stiffness of a support. In this case, the uncertain degree of the interval uncertainty is 5%. Subsequently, the stiffness can be expressed as $k_1 = [4.75, 5.25] \times 10^6$ N/m. The response range of the HP rotor is shown in Figure 5.

As can be seen from Figure 5, the uncertain behaviors of the dual-rotor are totally different from the cases with uncertain imbalances. The deterministic response curve is significantly deviated, and the lower and upper bounds are asymmetric. Near the first peak, a sloped peak in the upper bound and a sharp one in the lower bound are found. In addition, the positions of the peaks are shifted compared with the deterministic one, with the lower to the left and the upper to the right. There is an observable flat band in the upper bound around the second peak. These features are introduced by the alterations in the intrinsic characteristics caused by the uncertainty. The results also prove that the dual-rotor is sensitive to the support stiffness of bearing #1. It may be considered a key factor for the design and maintenance of such engineering systems.

**Figure 5.** Dynamic response of the HP rotor with uncertain bearing stiffness.

In Figure 5, the reference solutions obtained from the conventional scanning method are also provided to verify the accuracy of the interval results. The scanning procedure generates evenly scattered samples in the uncertain parameter interval and then searches for the bounds among all the response samples. It serves as a reference similar to the Monte Carlo simulation in the probabilistic frame [38,39]. The 50-point reference solutions (green dotted line) plotted in Figure 5 agree well with the results obtained by the metamodel, which validates its accuracy. To gain further insight, comparisons of the vibration time history of the LP rotor at the rotation speed 768.7 rad/s obtained from the two methods are illustrated in Figure 6. It is further demonstrated that the bounds calculated from the metamodel are in accordance with the scanning method at different time instants. The peak shifts are observable as well. From the vibration pattern in Figure 6, we can also identify that the dynamic responses contain multiple frequencies, which are introduced by the multi-frequency excitations. It should be noted that only order three is used in the metamodel, which means the deterministic model runs only four times; the underlying computational cost is much lower than that of the scanning method. The simulations were carried out in MATLAB R2019b on a computer equipped with 16 GB RAM and an Intel® Core™ i7-8550U@1.8 GHz. It should be noted that the actual speed interval calculated is 0–1400 rad/s and only a part of the results is presented in Figure 5. Moreover, the increment between two consecutive speed steps is small, and the initial 300 periods of the vibration are skipped at every rotational speed to eliminate transient effects. These conditions make the calculation time of a single deterministic simulation relatively long. However, the difference in computation time between the two methods still reflects their relative efficiency.
For the steady-state dynamic response calculations, the average CPU time elapsed was 28.23 min for the metamodel, while it was 351.87 min for the scanning method. The computational cost is thus significantly reduced by the metamodel. The above analyses verify the accuracy and efficiency of the developed interval method in predicting the uncertain responses of the dual-rotor system.

**Figure 6.** Time history of the LP rotor with uncertain bearing stiffness.

#### *4.3. Effect of Interval Geometric Length*

In this subsection, we assume the geometric length of shaft $L_1$ to be uncertain as a result of different assembly conditions. The uncertain degree is chosen as 10%. Figure 7 presents the interval responses of the HP rotor under uncertain shaft length. We find that the uncertainty influences the whole speed range even though the physical parameter is related to the inner rotor. There are slight shifts of both resonance peaks as well. However, the impacts of the uncertain length are weaker than those of the bearing stiffness, which suggests that the dual-rotor is insensitive to the length. In the speed range right after the first peak, the bounds of the response and the deterministic curve overlap each other. The fact that the deterministic curve is rigorously enclosed in this narrow range further proves the ability of the metamodel to predict the interval response of the system.

**Figure 7.** Dynamic response of the HP rotor with uncertain geometric length.

#### *4.4. Effect of Multi Interval Parameters*

This subsection pays attention to the influences of multiple uncertain parameters [40,41] on the dynamic behaviors of the dual-rotor. Consider the uncertainties in the two imbalances and the bearing stiffness as studied in the previous subsections. The first set of uncertain degrees is 5% for the two imbalances and 2.5% for the stiffness of bearing #1. We then double the respective uncertain degrees for the second case. Figure 8 shows the results for the two cases, with (a) for the HP rotor and (b) for the LP rotor. The dynamic response is significantly affected by the multiple uncertain parameters, and larger uncertain degrees lead to wider response ranges. Peak shifts are observed. In the upper bounds for the HP and LP rotors, there is both a sloped peak and a high-amplitude band, but their locations are switched and the slopes are in opposite directions. These features correspond to the influence mechanism of the bearing stiffness on the two rotors, since the imbalances affect the vibration amplitudes linearly. A few minor unstable estimates can be observed in the upper bounds of Figure 8, which are caused by the small errors of the metamodel as only order three is used.

**Figure 8.** Dynamic responses of the dual-rotor with multi uncertain parameters: (**a**) Interval responses of the HP rotor; (**b**) Interval responses of the LP rotor.

In large-scale dual-rotor systems, the number of uncertain parameters that should be considered simultaneously may occasionally be very large. Although cost-alleviating strategies are already incorporated, the non-intrusive metamodel used in the current research needs further improvement to cope with the exponentially growing computational burden. Alternatively, one can undertake sensitivity analyses using dedicated algorithms, or investigations with individual interval parameters based on the metamodel, to capture their respective contributions to the dynamic response variability and then discard those of trivial importance. Moreover, the dual-rotor system analyzed in this paper is linear. The nonlinear vibration responses of such systems under multiple interval uncertainties, which can occur in dual-rotors undergoing rub-impact [42] or with a cracked rotating shaft, are much more complicated and difficult to predict. The established method is capable of estimating the interval time history of such nonlinear vibrations, but further evaluation should be completed to verify the effectiveness of the metamodel when the steady-state frequency response has turning points.

#### **5. Conclusions**

The uncertain dynamics of a dual-rotor system under interval uncertainties are studied via non-intrusive computations. The governing motion equations are established using the Lagrangian method. A simplex form metamodel for problems subject to single and multiple uncertain variables is constructed without modification of the deterministic solver. The calculation accuracy and efficiency are verified against the scanning method. It is found that the imbalances on the inner and outer rotors each affect speed ranges near only one vibration peak. Peak shifts are observed when the bearing stiffness is considered an uncertain quantity. Moreover, the response interval of the dual-rotor is relatively small for the uncertain geometric length. These characteristics also indicate the sensitivities of the dual-rotor to the physical parameters. The multi-uncertain-variable simulations suggest that cumulative uncertainty propagation will significantly influence the dynamics of the dual-rotor system. The main findings of this paper offer insights into the vibration characteristics of dual-rotor systems considering non-probabilistic uncertainties.

**Author Contributions:** Conceptualization, C.F. and K.L.; Methodology, C.F. and J.M.; Software, G.F. and K.L.; Validation, Y.Y. and K.L.; Formal analysis, J.M. and G.F.; Investigation, K.L. and F.G.; Writing—Original draft preparation, C.F. and J.M.; Writing—Review and editing, K.L. and G.F.; Project administration, Y.Y. and F.G.; Funding acquisition, K.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Natural Science Foundation of China, grant number 11802235 and 11972295, and Joint Doctoral Training Foundation of HEBUT, grant number 2018HW0005.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A New Fuzzy MARCOS Method for Road Traffic Risk Analysis**

**Miomir Stanković <sup>1</sup>, Željko Stević <sup>2,</sup>\*, Dillip Kumar Das <sup>3</sup>, Marko Subotić <sup>2</sup> and Dragan Pamučar <sup>4</sup>**


Received: 20 February 2020; Accepted: 20 March 2020; Published: 24 March 2020

**Abstract:** In this paper, a new fuzzy multi-criteria decision-making model for traffic risk assessment was developed. A part of a main road network of 7.4 km with a total of 38 sections was analyzed with the aim of determining the degree of risk on them. For that purpose, a fuzzy Measurement Alternatives and Ranking according to the COmpromise Solution (fuzzy MARCOS) method was developed. In addition, a new fuzzy linguistic scale quantified into triangular fuzzy numbers (TFNs) was developed. The fuzzy PIvot Pairwise RElative Criteria Importance Assessment (fuzzy PIPRECIA) method was used to determine the criteria weights on the basis of which the road network sections were evaluated. The results clearly show that there is a dominant section with the highest risk for all road participants, which requires corrective actions. In order to validate the results, a comprehensive validity test was created consisting of variations in the significance of the model input parameters, testing the influence of dynamic factors (reverse rank), and applying the fuzzy Simple Additive Weighing (fuzzy SAW) method and the fuzzy Technique for Order of Preference by Similarity to Ideal Solution (fuzzy TOPSIS). The validation tests show the stability of the results obtained and the justification for the development of the proposed model.

**Keywords:** Fuzzy MARCOS; Fuzzy PIPRECIA; traffic risk; TFN; MCDM

#### **1. Introduction**

Multi-criteria decision-making (MCDM) methods [1–4], especially in integration with fuzzy theory [5–7], are a very powerful and useful tool for reliable decision-making in different fields. The decision-making process, according to Stojić et al. [8], requires the prior definition and fulfillment of certain factors, especially when it comes to solving problems in complex areas. The theory of multi-criteria decision-making, according to Zavadskas et al. [9], holds a special place in the field of science. The application of fuzzy MCDM methods contributes to a more precise determination of an acceptable solution, especially since considering solutions with respect to different factors is a very demanding and difficult task. This is even more pronounced in group decision-making processes [10]. The main motivation of this study can be stated as follows: determining the risk level of road sections requires the inclusion of many different variables. After experimental data collection, the variables must be assessed in a clear and precise way. For this

purpose, a new fuzzy linguistic scale was developed with quantification in TFNs. Moreover, a new fuzzy Measurement Alternatives and Ranking according to COmpromise Solution (fuzzy MARCOS) method was developed in this paper to evaluate sections of road infrastructure with the aim of determining the degree of risk on them. The development of the new fuzzy MARCOS method and the definition of a new linguistic scale based on TFNs represent the main contributions of this paper. The advantages of the fuzzy MARCOS method are as follows: consideration of fuzzy reference points through the fuzzy ideal and fuzzy anti-ideal solution at the very beginning of model formation; more precise determination of the degree of utility with respect to both set solutions; the proposal of a new way of determining utility functions and their aggregation; and the possibility of considering a large set of criteria and alternatives, as demonstrated through a realistic example. This paper considers one of the four most important factors that affect traffic accidents and lead to hazardous situations, namely the road. There are frequent situations in which geometric characteristics can negatively affect the creation of potentially hazardous situations and increase the risk for each traffic participant. The geometric characteristics of the road on particular sections have a major impact on increasing the risk of traffic accidents. Morency et al. [11] analyzed the extent to which road geometry and traffic volume influence social inequalities in pedestrian, cyclist and motorcyclist injuries in wealthy and poor urban areas. Based on their observational study, it was concluded that there were more injured pedestrians, cyclists and motorcyclists at intersections in poorer areas than in wealthier areas. Nevertheless, studies have shown that the two most important road factors affecting accident rates are the pavement condition and the geometric characteristics of the road [12].
Therefore, in this paper, on the basis of causal factors, the degree of risk was determined on sections of 200 m. In her research, Nenadić [13] carried out the evaluation of sections, i.e., three locations, based on seven criteria. In contrast to this research, the optimality criterion was set in such a way that the safest section was considered instead of the one with the highest risk. In the study performed by Bao et al. [14], a fuzzy methodology was used for similar purposes: an improved hierarchical fuzzy TOPSIS model was defined for evaluating road safety performance. The evaluation of safety performance indicators (SPIs) was performed by Khorasani et al. [15] as an MCDM problem in a methodological sense. The TOPSIS method in combination with other techniques was also used in Haghighat's research [16] to determine the safety position of the roads of Bushehr province.

The rest of the paper is organized as follows. Section 2 provides the preliminaries necessary to develop the fuzzy MARCOS method. They refer to displaying basic operations with fuzzy numbers and presenting the steps of the crisp MARCOS method. Section 3 presents the development of the fuzzy MARCOS method algorithm (Figure 1), which consists of a total of 10 steps. In Section 4 of the paper, the MCDM model is formed and a detailed calculation of each step of the developed fuzzy MARCOS method is presented. The calculation of the criterion weight using the fuzzy PIPRECIA method is also summarized. The following Section 5 presents an overview of validation tests to verify the stability of the proposed model. Finally, Section 6 provides a brief overview of the most important tasks accomplished and the contributions of the paper along with future research guidelines.


**Figure 1.** Algorithm of the newly developed fuzzy MARCOS method.

#### **2. Preliminaries**

A fuzzy number *A*˜ on R is said to be a TFN if its membership function µ*A*˜(*x*): R→[0,1] is equal to Equation (1):

$$\mu_{\tilde{A}}(x) = \begin{cases} \dfrac{x - l}{m - l} & l \le x \le m \\ \dfrac{u - x}{u - m} & m \le x \le u \\ 0 & \text{otherwise} \end{cases} \tag{1}$$

where *l* represents the lower and *u* the upper bound of the fuzzy number *A*˜, and *m* is the modal value. The TFN can be denoted as *A*˜ = (*l*, *m*, *u*).

The operations on the TFNs *A*˜ <sup>1</sup> = (*l*1, *m*1, *u*1) and *A*˜ <sup>2</sup> = (*l*2, *m*2, *u*2) are as follows [17,18].

Addition:

$$
\tilde{A}\_1 \oplus \tilde{A}\_2 = (l\_1, m\_1, u\_1) + (l\_2, m\_2, u\_2) = (l\_1 + l\_2, m\_1 + m\_2, u\_1 + u\_2). \tag{2}
$$

Multiplication:

$$
\tilde{A}_1 \otimes \tilde{A}_2 = (l_1, m_1, u_1) \otimes (l_2, m_2, u_2) = (l_1 \times l_2, m_1 \times m_2, u_1 \times u_2). \tag{3}
$$

Subtraction:

$$
\tilde{A}\_1 - \tilde{A}\_2 = (l\_1, m\_1, u\_1) - (l\_2, m\_2, u\_2) = (l\_1 - u\_2, m\_1 - m\_2, u\_1 - l\_2). \tag{4}
$$

Division:

$$\frac{\tilde{A}\_1}{\tilde{A}\_2} = \frac{(l\_1, m\_1, u\_1)}{(l\_2, m\_2, u\_2)} = \left(\frac{l\_1}{u\_2}, \frac{m\_1}{m\_2}, \frac{u\_1}{l\_2}\right). \tag{5}$$

Reciprocal:

$$
\tilde{A}\_1^{-1} = (l\_1, m\_1, u\_1)^{-1} = \left(\frac{1}{u\_1}, \frac{1}{m\_1}, \frac{1}{l\_1}\right). \tag{6}
$$
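Under the assumption of positive TFNs (the case used throughout this paper, where all bounds are positive), Equations (2)–(6) can be sketched as a small Python class; the `TFN` type and its method names are illustrative conveniences, not part of the original method:

```python
from typing import NamedTuple

class TFN(NamedTuple):
    """Triangular fuzzy number (l, m, u), assumed positive where division is used."""
    l: float
    m: float
    u: float

    def __add__(self, other):       # Eq. (2)
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def __sub__(self, other):       # Eq. (4): note the crossed bounds
        return TFN(self.l - other.u, self.m - other.m, self.u - other.l)

    def __mul__(self, other):       # Eq. (3), valid for positive TFNs
        return TFN(self.l * other.l, self.m * other.m, self.u * other.u)

    def __truediv__(self, other):   # Eq. (5): divide by the opposite bound
        return TFN(self.l / other.u, self.m / other.m, self.u / other.l)

    def inverse(self):              # Eq. (6)
        return TFN(1 / self.u, 1 / self.m, 1 / self.l)

a, b = TFN(1, 2, 3), TFN(2, 4, 6)
print(a + b)  # TFN(l=3, m=6, u=9)
```

The crossed bounds in subtraction and division ensure that the result again spans the full range of possible outcomes.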

The following section provides a brief overview of the crisp MARCOS method defined by Stević et al. [19]:

Step 1: Designing of an initial decision-making matrix.

Step 2: Designing of an *extended* initial matrix, performed by defining the anti-ideal (*AAI*) and ideal (*AI*) solution.

$$
X = \begin{array}{c} AAI \\ A\_1 \\ A\_2 \\ \vdots \\ A\_m \\ AI \end{array}
\left[ \begin{array}{cccc}
x\_{aa1} & x\_{aa2} & \dots & x\_{aan} \\
x\_{11} & x\_{12} & \dots & x\_{1n} \\
x\_{21} & x\_{22} & \dots & x\_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x\_{m1} & x\_{m2} & \dots & x\_{mn} \\
x\_{ai1} & x\_{ai2} & \dots & x\_{ain}
\end{array} \right]
\tag{7}
$$

where the columns correspond to the criteria $C\_1, C\_2, \dots, C\_n$.

*AAI* is the worst alternative, while *AI* is the best one. Depending on the type of criteria, *AAI* and *AI* are defined by applying Equations (8) and (9):

$$AAI = \min\_{i} x\_{ij} \text{ if } j \in B \text{ and } \max\_{i} x\_{ij} \text{ if } j \in C \tag{8}$$

$$AI = \max\_{i} x\_{ij} \text{ if } j \in B \text{ and } \min\_{i} x\_{ij} \text{ if } j \in C \tag{9}$$

*B* belongs to the maximization group of criteria, while *C* belongs to the minimization group of criteria.

Step 3: Normalization of the previous matrix (*X*). The elements of the normalized matrix $N = [n\_{ij}]\_{m \times n}$ are obtained using Equations (10) and (11):

$$n\_{ij} = \frac{x\_{ai}}{x\_{ij}} \text{ if } j \in C \tag{10}$$

$$n\_{ij} = \frac{x\_{ij}}{x\_{ai}} \text{ if } j \in B \tag{11}$$

where elements *xij* and *xai* represent the elements of the matrix *X.*

Step 4: Determination of the weighted matrix $V = [v\_{ij}]\_{m \times n}$, Equation (12).

$$v\_{ij} = n\_{ij} \times w\_{j} \tag{12}$$

Step 5: Computation of the utility degree of alternatives *Ki*—Equations (13) and (14).

$$K\_i^- = \frac{\mathcal{S}\_i}{\mathcal{S}\_{aai}}\tag{13}$$

$$K\_i^+ = \frac{\mathcal{S}\_i}{\mathcal{S}\_{ai}} \tag{14}$$

where $S\_i$ ($i$ = 1, 2, . . . , $m$) represents the sum of the elements of the *i*-th row of the weighted matrix $V$, Equation (15).

$$S\_{i} = \sum\_{j=1}^{n} v\_{ij} \tag{15}$$

Step 6: Determination of the utility function of alternatives *f(K<sup>i</sup> )* defined by Equation (16).

$$f(\mathbf{K}\_i) = \frac{\mathbf{K}\_i^+ + \mathbf{K}\_i^-}{1 + \frac{1 - f\left(\mathbf{K}\_i^+\right)}{f\left(\mathbf{K}\_i^+\right)} + \frac{1 - f\left(\mathbf{K}\_i^-\right)}{f\left(\mathbf{K}\_i^-\right)}};\tag{16}$$

where $f(K\_i^-)$ represents the utility function in relation to the anti-ideal solution (*AAI*), while $f(K\_i^+)$ represents the utility function in relation to the ideal solution (*AI*).

Utility functions in relation to the (*AI*) and (*AAI*) solutions are determined using Equations (17) and (18).

$$f\left(\mathbf{K}\_i^-\right) = \frac{\mathbf{K}\_i^+}{\mathbf{K}\_i^+ + \mathbf{K}\_i^-} \tag{17}$$

$$f\left(\mathbf{K}\_i^+\right) = \frac{\mathbf{K}\_i^-}{\mathbf{K}\_i^+ + \mathbf{K}\_i^-} \tag{18}$$

Step 7: Ranking the alternatives.
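The crisp MARCOS steps above can be sketched in Python as follows. This is an illustrative implementation of Equations (8)–(18) only; the function name, the matrix layout and the `benefit` flags are our own conventions:

```python
def marcos(X, weights, benefit):
    """Crisp MARCOS sketch (Steps 1-7). X is an m x n decision matrix,
    benefit[j] is True when criterion j should be maximized.
    Returns the utility function f(K_i) for each alternative."""
    cols = list(zip(*X))
    # Step 2: anti-ideal (AAI) and ideal (AI) solutions, Eqs. (8)-(9)
    aai = [min(c) if b else max(c) for c, b in zip(cols, benefit)]
    ai = [max(c) if b else min(c) for c, b in zip(cols, benefit)]

    def s(row):
        # Steps 3-5: normalize against AI (Eqs. (10)-(11)), weight (Eq. (12))
        # and sum the row (Eq. (15))
        return sum(w * (x / a if b else a / x)
                   for x, a, w, b in zip(row, ai, weights, benefit))

    s_aai, s_ai = s(aai), s(ai)
    scores = []
    for row in X:
        si = s(row)
        k_minus, k_plus = si / s_aai, si / s_ai       # Eqs. (13)-(14)
        f_minus = k_plus / (k_plus + k_minus)         # Eq. (17)
        f_plus = k_minus / (k_plus + k_minus)         # Eq. (18)
        scores.append((k_plus + k_minus)
                      / (1 + (1 - f_plus) / f_plus
                           + (1 - f_minus) / f_minus))  # Eq. (16)
    return scores

# Hypothetical 2-alternative, 2-criterion example, both criteria of benefit type
scores = marcos([[3, 5], [4, 2]], [0.6, 0.4], [True, True])
```

Ranking (Step 7) then follows by sorting the values `f(K_i)` in descending order.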

#### **3. A New Fuzzy MARCOS Method**

Step 1: Creating an initial fuzzy decision-making matrix. MCDM models include the definition of a set of n criteria and m alternatives.

Step 2: Creating an *extended* initial fuzzy matrix. The extension is performed by determining the fuzzy anti-ideal *A*˜(*AI*) and fuzzy ideal *A*˜(*ID*) solution.

$$
\tilde{X} = \begin{array}{c} \tilde{A}(AI) \\ \tilde{A}\_1 \\ \tilde{A}\_2 \\ \vdots \\ \tilde{A}\_m \\ \tilde{A}(ID) \end{array}
\left[ \begin{array}{cccc}
\tilde{x}\_{ai1} & \tilde{x}\_{ai2} & \dots & \tilde{x}\_{ain} \\
\tilde{x}\_{11} & \tilde{x}\_{12} & \dots & \tilde{x}\_{1n} \\
\tilde{x}\_{21} & \tilde{x}\_{22} & \dots & \tilde{x}\_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\tilde{x}\_{m1} & \tilde{x}\_{m2} & \dots & \tilde{x}\_{mn} \\
\tilde{x}\_{id1} & \tilde{x}\_{id2} & \dots & \tilde{x}\_{idn}
\end{array} \right]
\tag{19}
$$

where the columns correspond to the criteria $C\_1, C\_2, \dots, C\_n$.

The fuzzy $\tilde{A}(AI)$ is the worst alternative, while the fuzzy $\tilde{A}(ID)$ is the alternative with the best performance. Depending on the type of criteria, $\tilde{A}(AI)$ and $\tilde{A}(ID)$ are defined by applying Equations (20) and (21):

$$\tilde{A}(AI) = \min\_{i} \tilde{x}\_{ij} \text{ if } j \in B \text{ and } \max\_{i} \tilde{x}\_{ij} \text{ if } j \in C \tag{20}$$

$$\tilde{A}(ID) = \max\_{i} \tilde{x}\_{ij} \text{ if } j \in B \text{ and } \min\_{i} \tilde{x}\_{ij} \text{ if } j \in C \tag{21}$$

*B* belongs to the maximization group of criteria while *C* belongs to the minimization group of criteria.

Step 3: Creating a normalized fuzzy matrix $\tilde{N} = [\tilde{n}\_{ij}]\_{m \times n}$, obtained by applying Equations (22) and (23):

$$\tilde{n}\_{ij} = \left( n\_{ij}^{l}, n\_{ij}^{m}, n\_{ij}^{u} \right) = \left( \frac{x\_{id}^{l}}{x\_{ij}^{u}}, \frac{x\_{id}^{l}}{x\_{ij}^{m}}, \frac{x\_{id}^{l}}{x\_{ij}^{l}} \right) \text{ if } j \in C \tag{22}$$

$$\tilde{n}\_{ij} = \left( n\_{ij}^{l}, n\_{ij}^{m}, n\_{ij}^{u} \right) = \left( \frac{x\_{ij}^{l}}{x\_{id}^{u}}, \frac{x\_{ij}^{m}}{x\_{id}^{u}}, \frac{x\_{ij}^{u}}{x\_{id}^{u}} \right) \text{ if } j \in B \tag{23}$$

where $x\_{ij}^{l}$, $x\_{ij}^{m}$, $x\_{ij}^{u}$ and $x\_{id}^{l}$, $x\_{id}^{m}$, $x\_{id}^{u}$ represent the elements of the matrix $\tilde{X}$.

Step 4: Computation of the weighted fuzzy matrix $\tilde{V} = [\tilde{v}\_{ij}]\_{m \times n}$. Matrix $\tilde{V}$ is calculated by multiplying matrix $\tilde{N}$ with the fuzzy weight coefficients of the criteria $\tilde{w}\_j$, Equation (24).

$$\tilde{v}\_{ij} = \left( v\_{ij}^{l}, v\_{ij}^{m}, v\_{ij}^{u} \right) = \tilde{n}\_{ij} \otimes \tilde{w}\_{j} = \left( n\_{ij}^{l} \times w\_{j}^{l}, n\_{ij}^{m} \times w\_{j}^{m}, n\_{ij}^{u} \times w\_{j}^{u} \right) \tag{24}$$

Step 5: Calculation of the fuzzy matrix $\tilde{S}\_i$ using the following Equation (25):

$$\tilde{S}\_{i} = \sum\_{j=1}^{n} \tilde{v}\_{ij} \tag{25}$$

where $\tilde{S}\_i = \left( s\_i^{l}, s\_i^{m}, s\_i^{u} \right)$ represents the sum of the elements of row *i* of the weighted fuzzy matrix $\tilde{V}$.

Step 6: Calculation of the utility degree of alternatives $\tilde{K}\_i$ by applying Equations (26) and (27).

$$\tilde{K}\_{i}^{-} = \frac{\tilde{S}\_{i}}{\tilde{S}\_{ai}} = \left( \frac{s\_{i}^{l}}{s\_{ai}^{u}}, \frac{s\_{i}^{m}}{s\_{ai}^{m}}, \frac{s\_{i}^{u}}{s\_{ai}^{l}} \right) \tag{26}$$

$$\tilde{K}\_{i}^{+} = \frac{\tilde{S}\_{i}}{\tilde{S}\_{id}} = \left( \frac{s\_{i}^{l}}{s\_{id}^{u}}, \frac{s\_{i}^{m}}{s\_{id}^{m}}, \frac{s\_{i}^{u}}{s\_{id}^{l}} \right) \tag{27}$$

Step 7: Calculation of the fuzzy matrix $\tilde{T}\_i$ using Equation (28):

$$\tilde{T}\_{i} = \tilde{t}\_{i} = \left( t\_{i}^{l}, t\_{i}^{m}, t\_{i}^{u} \right) = \tilde{K}\_{i}^{-} \oplus \tilde{K}\_{i}^{+} = \left( k\_{i}^{-l} + k\_{i}^{+l}, k\_{i}^{-m} + k\_{i}^{+m}, k\_{i}^{-u} + k\_{i}^{+u} \right) \tag{28}$$

Then, it is necessary to determine a new fuzzy number *D*˜ using Equation (29)

$$\tilde{D} = \left( d^{l}, d^{m}, d^{u} \right) = \max\_{i} \tilde{t}\_{i} \tag{29}$$

and then, it is necessary to defuzzify the number $\tilde{D}$ by using the expression $df\_{crisp} = \frac{l + 4m + u}{6}$, obtaining the crisp number $df\_{crisp}$.

Step 8: Determination of utility functions in relation to the ideal $f(\tilde{K}\_i^+)$ and anti-ideal $f(\tilde{K}\_i^-)$ solutions by applying Equations (30) and (31).

$$f\left(\tilde{\mathcal{K}}\_i^+\right) = \frac{\tilde{\mathcal{K}}\_i^-}{df\_{crisp}} = \left(\frac{k\_i^{-l}}{df\_{crisp}}, \frac{k\_i^{-m}}{df\_{crisp}}, \frac{k\_i^{-u}}{df\_{crisp}}\right) \tag{30}$$

$$f(\tilde{\mathcal{K}}\_i^-) = \frac{\tilde{\mathcal{K}}\_i^+}{df\_{crisp}} = \left(\frac{k\_i^{+l}}{df\_{crisp}}, \frac{k\_i^{+m}}{df\_{crisp}}, \frac{k\_i^{+u}}{df\_{crisp}}\right) \tag{31}$$

After that, it is necessary to perform defuzzification for $\tilde{K}\_i^-$, $\tilde{K}\_i^+$, $f(\tilde{K}\_i^+)$ and $f(\tilde{K}\_i^-)$, and apply the following step:

Step 9: Determination of the utility function of alternatives $f(K\_i)$ by Equation (32).

$$f(\mathbb{K}\_i) = \frac{\mathbb{K}\_i^+ + \mathbb{K}\_i^-}{1 + \frac{1 - f\left(\mathbb{K}\_i^+\right)}{f\left(\mathbb{K}\_i^+\right)} + \frac{1 - f\left(\mathbb{K}\_i^-\right)}{f\left(\mathbb{K}\_i^-\right)}};\tag{32}$$

Step 10: Ranking the alternatives based on the final values of utility functions. It is desirable that an alternative have the highest possible value of the utility function.
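As an illustrative sketch, the fuzzy MARCOS steps can be written in Python for benefit-type criteria (the case used in Section 4, where Equation (23) applies to every column). Forming the fuzzy anti-ideal and ideal rows componentwise, and all names below, are our own simplifying assumptions:

```python
def defuzz(t):
    """Graded mean defuzzification df_crisp = (l + 4m + u) / 6."""
    l, m, u = t
    return (l + 4 * m + u) / 6

def fuzzy_marcos(X, W):
    """Fuzzy MARCOS sketch for benefit criteria only (Eq. (23) normalization).
    X: m x n matrix of TFN tuples (l, m, u); W: fuzzy weights, one per criterion.
    Returns the crisp utility f(K_i) for each alternative."""
    n = len(W)
    # Step 2: fuzzy anti-ideal / ideal rows, formed componentwise (Eqs. (20)-(21))
    aai = [tuple(min(r[j][k] for r in X) for k in range(3)) for j in range(n)]
    aid = [tuple(max(r[j][k] for r in X) for k in range(3)) for j in range(n)]

    def S(row):
        # Steps 3-5: normalize by the ideal upper bound (Eq. (23)),
        # weight (Eq. (24)) and sum (Eq. (25))
        return tuple(sum(x[k] / d[2] * w[k] for x, d, w in zip(row, aid, W))
                     for k in range(3))

    s_aai, s_aid = S(aai), S(aid)
    ratio = lambda s, t: (s[0] / t[2], s[1] / t[1], s[2] / t[0])   # Eqs. (26)-(27)
    ks = [(ratio(S(r), s_aai), ratio(S(r), s_aid)) for r in X]     # (K-, K+)
    ts = [tuple(km[k] + kp[k] for k in range(3)) for km, kp in ks]  # Eq. (28)
    d_crisp = defuzz(tuple(max(t[k] for t in ts) for k in range(3)))  # Eq. (29)

    scores = []
    for km, kp in ks:
        k_m, k_p = defuzz(km), defuzz(kp)
        f_p = defuzz(tuple(v / d_crisp for v in km))  # Eq. (30), defuzzified
        f_m = defuzz(tuple(v / d_crisp for v in kp))  # Eq. (31), defuzzified
        scores.append((k_p + k_m)
                      / (1 + (1 - f_p) / f_p + (1 - f_m) / f_m))  # Eq. (32)
    return scores
```

A dominated alternative receives a lower score than one that dominates it, which is the behavior exploited in the validation tests of Section 5.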

In addition to developing the new fuzzy MARCOS method, a new linguistic scale for evaluating alternatives has been defined, shown in Table 1. A total of nine linguistic terms are defined, and each term is assigned its triangular fuzzy number.


**Table 1.** A newly defined scale for evaluating potential solutions.

#### **4. Results**

In order to determine the degree of risk on the roads in Bosnia and Herzegovina, the Rudanka-Doboj section of the M-17 road (length 7.4 km) was considered, and a list of six criteria was formed on the basis of which the evaluation was carried out. The analysis covered 38 sections of two-lane main roads of the first order in Bosnia and Herzegovina as potential alternatives. The following potential criteria affecting traffic risk were taken as a starting point: longitudinal gradient (upgrade/downgrade) (C1), the number of access points on each section (left and right) (C2), the number of traffic accidents with fatalities (C3), the number of traffic accidents with slightly injured persons (C4), the number of traffic accidents with seriously injured persons (C5) and the number of traffic accidents with material damage (C6). The number of traffic accidents for all four classes was taken from a sample covering the last four years.

As mentioned above, the Rudanka-Doboj M-17 section covers a total length of 7.4 km and was divided into 200-m sections that represent alternatives. After forming the MCDM model with 38 alternatives and six criteria in the first step, the fuzzy anti-ideal *A*˜(*AI*) and fuzzy ideal *A*˜(*ID*) solutions are defined in the second step on the basis of Equations (20) and (21). Thus, an extended fuzzy initial decision matrix is formed. Table 2 shows the extended initial fuzzy decision matrix with linguistic ratings, as well as quantified values in triangular fuzzy numbers.


**Table 2.** Extended initial fuzzy decision matrix.



The ranges of values of the observed road sections according to each individual criterion are as follows. For the first criterion, the values range from 0% to 1.4%, which generally represents favorable topographic conditions. For the second criterion, relating to the total number of access points, the values range from zero to 23. Considering that there are over 20 access points on particular sections, a potential danger to traffic participants and an impact on traffic flow complexity can be noticed. When considering alternatives in relation to the third criterion, it is important to note that there are a total of five sections with one fatal accident each. For traffic accidents with minor injuries, the values are in the range of 0–7, traffic accidents with serious injuries in the range of 0–3 and material damage in the range of 0–19.

After forming the fuzzy initial decision matrix, it is necessary to determine the significance of input parameters, i.e., their values. For this purpose, the fuzzy PIPRECIA method developed in [20] was applied. As this is an already exploited method, detailed procedures for calculating the values of criteria are not shown, but rather, the summarized results by each step (Table 3).


**Table 3.** Calculation and results of applying the fuzzy PIPRECIA method for determining the criterion values.

Based on the aggregation of the values $w\_j$ shown in Table 3, the final criterion values are obtained: $\tilde{w}\_1$ = (0.098, 0.174, 0.336), $\tilde{w}\_2$ = (0.133, 0.254, 0.471), $\tilde{w}\_3$ = (0.100, 0.203, 0.411), $\tilde{w}\_4$ = (0.064, 0.119, 0.234), $\tilde{w}\_5$ = (0.083, 0.149, 0.263), $\tilde{w}\_6$ = (0.060, 0.102, 0.185).

The elements of the fuzzy normalized matrix (Table 4) were obtained by applying Equation (23) since all the criteria are of benefit type, i.e., they need to be maximized. An example of normalization is

$$\tilde{n}\_{11} = \left( \frac{1.000}{9.000}, \frac{1.000}{9.000}, \frac{1.000}{9.000} \right) = (0.111, 0.111, 0.111), \quad \tilde{n}\_{15} = \left( \frac{3.000}{7.000}, \frac{5.000}{7.000}, \frac{5.000}{7.000} \right) = (0.429, 0.714, 0.714)$$


The values of the weighted normalized matrix shown in Table 5 are obtained using Equation (24): $\tilde{v}\_{11} = \left( n\_{11}^{l} \times w\_{1}^{l}, n\_{11}^{m} \times w\_{1}^{m}, n\_{11}^{u} \times w\_{1}^{u} \right) = (0.111 \times 0.098, 0.111 \times 0.174, 0.111 \times 0.336) = (0.011, 0.019, 0.037)$.



The fuzzy matrix $\tilde{S}\_i$ is obtained by applying Equation (25)


as follows:

$$\tilde{S}\_1 = \begin{pmatrix} 0.011 + 0.015 + 0.011 + 0.021 + 0.035 + 0.020, \\ 0.019 + 0.028 + 0.023 + 0.066 + 0.106 + 0.034, \\ 0.037 + 0.157 + 0.046 + 0.130 + 0.188 + 0.103 \end{pmatrix} = (0.113, 0.276, 0.660)$$

Using Equation (26), the matrix $\tilde{K}\_i^-$ is obtained:

$$
\begin{array}{llll}
\tilde{k}^{-}\_{1} = (0.517, 2.386, 10.607) & \tilde{k}^{-}\_{23} = (1.304, 6.064, 23.546) & \tilde{k}^{-}\_{36} = (0.284, 1.367, 9.105) \\
\tilde{k}^{-}\_{2} = (0.284, 1.367, 8.270) & \tilde{k}^{-}\_{24} = (0.418, 1.487, 8.745) & \tilde{k}^{-}\_{37} = (0.542, 2.229, 11.329) \\
\tilde{k}^{-}\_{3} = (0.383, 1.333, 8.264) & \tilde{k}^{-}\_{25} = (0.418, 2.307, 8.920) & \tilde{k}^{-}\_{38} = (0.383, 1.333, 8.264) \\
\end{array}
$$

as follows:

$$\tilde{k}\_{1}^{-} = \frac{\tilde{S}\_{1}}{\tilde{S}\_{ai}} = \left( \frac{s\_{1}^{l}}{s\_{ai}^{u}}, \frac{s\_{1}^{m}}{s\_{ai}^{m}}, \frac{s\_{1}^{u}}{s\_{ai}^{l}} \right) = \left( \frac{0.113}{0.219}, \frac{0.276}{0.116}, \frac{0.660}{0.062} \right) = (0.517, 2.386, 10.607)$$

Using Equation (27), the matrix $\tilde{K}\_i^+$ is obtained:

$$\begin{array}{lll} \tilde{k}^{+}\_{1} = (0.060, 0.284, 1.602) & \tilde{k}^{+}\_{23} = (0.151, 0.721, 3.557) & \tilde{k}^{+}\_{36} = (0.033, 0.163, 1.375) \\ \tilde{k}^{+}\_{2} = (0.033, 0.163, 1.249) & \tilde{k}^{+}\_{24} = (0.048, 0.177, 1.321) & \tilde{k}^{+}\_{37} = (0.063, 0.265, 1.711) \\ \tilde{k}^{+}\_{3} = (0.044, 0.159, 1.248) & \tilde{k}^{+}\_{25} = (0.048, 0.275, 1.348) & \tilde{k}^{+}\_{38} = (0.044, 0.159, 1.248) \end{array}$$

as follows:

$$\tilde{k}\_{1}^{+} = \frac{\tilde{S}\_{1}}{\tilde{S}\_{id}} = \left( \frac{s\_{1}^{l}}{s\_{id}^{u}}, \frac{s\_{1}^{m}}{s\_{id}^{m}}, \frac{s\_{1}^{u}}{s\_{id}^{l}} \right) = \left( \frac{0.113}{1.900}, \frac{0.276}{0.974}, \frac{0.660}{0.412} \right) = (0.060, 0.284, 1.602)$$

In Step 7, the matrix $\tilde{T}\_i$ is calculated using Equation (28):

 $\tilde{t}\_1 = (0.577, 2.670, 12.210)$ ,  $\tilde{t}\_{23} = (1.454, 6.785, 27.103)$ ,  $\tilde{t}\_{36} = (0.317, 1.530, 10.481)$ ,  $\tilde{t}\_2 = (0.317, 1.530, 9.519)$ ,  $\tilde{t}\_{24} = (0.466, 1.664, 10.066)$ ,  $\tilde{t}\_{37} = (0.605, 2.494, 13.040)$ ,  $\tilde{t}\_3 = (0.427, 1.492, 9.512)$ ,  $\tilde{t}\_{25} = (0.466, 2.582, 10.268)$ ,  $\tilde{t}\_{38} = (0.427, 1.492, 9.512)$ .

The elements of the matrix $\tilde{T}\_i$ are obtained as follows:

$$\vec{r}\_1 = (0.517 + 0.060, 2.386 + 0.284, 10.607 + 1.602) = (0.577, 2.670, 12.210)$$

Then, it is necessary to determine a new fuzzy number $\tilde{D} = \left( d^{l}, d^{m}, d^{u} \right) = \max\_{i} \tilde{t}\_{i}$ using Equation (29), which gives $\tilde{D}$ = (1.454, 6.785, 27.103). Defuzzifying with $df\_{crisp} = \frac{l + 4m + u}{6}$ yields $df\_{crisp}$ = 9.283. The calculation of the last two steps and the final results obtained using the fuzzy MARCOS method are shown in Table 6.


**Table 6.** Calculation of the two last steps and results of applied fuzzy MARCOS.

Utility functions in relation to the ideal $f(\tilde{K}\_i^+)$ and anti-ideal $f(\tilde{K}\_i^-)$ solutions are determined by applying Equations (30) and (31):

$$f(\tilde{K}\_1^+) = \frac{\tilde{K}\_1^-}{df\_{crisp}} = \left( \frac{0.517}{9.283}, \frac{2.386}{9.283}, \frac{10.607}{9.283} \right), \qquad f(\tilde{K}\_1^-) = \frac{\tilde{K}\_1^+}{df\_{crisp}} = \left( \frac{0.060}{9.283}, \frac{0.284}{9.283}, \frac{1.602}{9.283} \right)$$

After that, it is necessary to perform defuzzification for $\tilde{K}\_i^-$, $\tilde{K}\_i^+$, $f(\tilde{K}\_i^+)$ and $f(\tilde{K}\_i^-)$, which is given in Table 6.

The utility function of alternatives $f(K\_i)$ is determined by Equation (32):

$$f(\mathbf{K\_1}) = \frac{\mathbf{K\_1^+} + \mathbf{K\_1^-}}{1 + \frac{1 - f\left(\mathbf{K\_1^+}\right)}{f\left(\mathbf{K\_1^+}\right)} + \frac{1 - f\left(\mathbf{K\_1^-}\right)}{f\left(\mathbf{K\_1^-}\right)}} = \frac{0.446 + 3.445}{1 + \frac{1 - 0.371}{0.371} + \frac{1 - 0.050}{0.050}} = 0.181$$

The ranking represents the sorting of obtained values in descending order, where A23 represents the most hazardous 200-m section and is dominant over the others.

#### **5. Validation Tests**

#### *5.1. Changing the Significance of Input Parameters*

In the first phase of the validation test, the impact of changing the three most significant criteria (C1, C2 and C3) on the ranking results was analyzed. Using Equation (33), a total of 30 scenarios were created.

$$
\tilde{W}\_{n\beta} = \left( 1 - \tilde{W}\_{n\alpha} \right) \frac{\tilde{W}\_{\beta}}{\left( 1 - \tilde{W}\_{n} \right)} \tag{33}
$$

The scenarios were formed through three different groups of 10 sets each. In the first group of scenarios, the first criterion was changed, criterion C2 was changed in the second group and criterion C3 was changed in the third group. If Equation (33) is observed, *W*˜ *<sup>n</sup>*<sup>β</sup> represents the fuzzy corrected value of the criteria C2, C3, C4, C5 and C6, then C1, C3, C4, C5 and C6, i.e., C1, C2, C4, C5 and C6, respectively by groups. *W*˜ *<sup>n</sup>*<sup>α</sup> represents the reduced fuzzy value of the criteria C1, C2, and C3 respectively by groups, *W*˜ <sup>β</sup> represents the original fuzzy value of the criterion considered and *W*˜ *n* represents the original fuzzy value of the criterion whose value is reduced, in this case, C1, C2 and C3.

In the first scenario, the fuzzy value of criterion C1 was reduced by 5% while the values of the remaining criteria were proportionally corrected by applying Equation (33). In each subsequent scenario, the value of criterion C1 was reduced by a further 10%, while the values of the remaining criteria were corrected so that they met the condition $\sum\_{j=1}^{n} w\_{j}^{m} = 1$. These 10 scenarios represent the first group. Scenarios 11–20 represent the second group, in which criterion C2 was corrected. The third group consists of scenarios 21–30, with the change in the value of the third criterion. After forming 30 new vectors of the weight coefficients of the criteria (Table 7), new model results were obtained, as presented in Figures 2–4.
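The proportional correction of Equation (33) can be sketched on crisp (modal) weights as follows; in the fuzzy case the same correction is applied bound by bound. The weight vector below is hypothetical, chosen only so that the weights sum to one:

```python
def correct_weights(weights, idx, reduction):
    """Reduce weights[idx] by the given fraction and proportionally correct
    the remaining weights so that they still sum to 1 (a crisp analogue of
    Eq. (33))."""
    w_n = weights[idx]
    w_na = w_n * (1 - reduction)       # reduced value of the chosen criterion
    factor = (1 - w_na) / (1 - w_n)    # Eq. (33) scaling for the other criteria
    out = [w * factor for w in weights]
    out[idx] = w_na
    return out

# Hypothetical modal weights summing to 1; scenario S1 reduces C1 by 5%
w0 = [0.30, 0.25, 0.20, 0.15, 0.10]
w1 = correct_weights(w0, 0, 0.05)
print(round(sum(w1), 6))  # 1.0 -- the condition on the weight sum still holds
```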

**Table 7.** New criterion values across 30 scenarios.


After forming scenarios as described above, results were obtained for each of the groups. Figure 2 shows the obtained ranks of the alternatives and comparison of initial results with the results in the first group of scenarios, i.e., S1–S10.

The change in the significance of the first criterion affects the ranks of road network sections, which is, in a way, understandable since there is a large number of alternatives. It is important to note that alternatives A23, A18, A20, A12, A5 and A16, which take the first, second, third, eighth, ninth and tenth places, respectively, do not change their positions in any scenario. With the elimination of the significance of the first criterion, since its value is 0.05 in the tenth scenario, there is a change in ranks by five positions for some alternatives.

**Figure 2.** Comparison of initial results with scenarios S1–S10.

**Figure 3.** Comparison of initial results with the second group of scenarios, i.e., scenarios S11–S20.

Figure 3 shows the obtained ranks of the alternatives and comparison of initial results with the results in the second group of scenarios, i.e., S11–S20. In Figure 3, it can be noticed that there have been major changes in the ranks of alternatives across scenarios. The reason for this is the fact that the most significant criterion C2 has been changed, which has a significant impact on the output. However, alternative A23 remains in the first place, despite the fact that the influence of the most significant criterion is minimized. Additionally, alternatives A31 and A34 do not change their position and are ranked 35th and 38th, respectively. Alternative A18 in scenarios S11–S17 retains its second position, while in the remaining three scenarios (S18–S20), it is in the third place. Alternative A20 is in the third position in scenarios S11–S13, fourth in S14–S16 and fifth in S17–S20. With the decrease in the impact of criterion C2, results and ranks change to a maximum of 37%.

Figure 4 shows the obtained ranks of the alternatives and a comparison of initial results with the results in the third group of scenarios, i.e., S21–S30.

**Figure 4.** Comparison of initial results with third group of scenarios i.e., S21–S30.

In Figure 4, it can be noticed that there were also some changes in the ranks of the alternatives across the scenarios. The reason for this is the fact that the second most significant criterion C3 was changed, which has a slightly lower impact than C2, but is also very important in obtaining the output. However, alternatives A23, A34 and A35 do not change their ranks and they are still ranked as first, 38th and 37th, respectively. Alternative A20 retains its third position in scenarios S21 and S22, while holding second place in the remaining scenarios. Alternative A30 changes its position by one place only in the last S30 scenario.

#### *5.2. Impact of Reverse Rank Matrices*

One of the ways to test the validity of the obtained results of the model for decision-making is to construct dynamic matrices and then analyze the solutions that the model provides under newly formed conditions. If the solutions show some logical contradictions that are expressed in the form of undesirable changes in the ranks of alternatives, then one may express concern that there is a problem with the mathematical apparatus of the applied method. In line with this goal, a test in which the resistance of the model to the rank reversal problem is considered was conducted.

A change in the number of alternatives was made for each scenario, eliminating the worst alternative from further consideration. After defining a new set of alternatives, the ranking of the remaining alternatives is performed under the newly formed conditions using the proposed model. In the test, 35 scenarios were formed in which the change in the elements of the decision matrix was simulated. As a rule, 37 scenarios should be formed (one less than the total number of alternatives). However, in this case, there were two scenarios in which two alternatives were eliminated at once. In scenario S7, alternatives A3 and A38 share the same position, and both were eliminated in the next (eighth) scenario. The same applies to scenario S14, in which A13 and A27 were eliminated.
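The elimination procedure described above can be sketched as a generic loop; `score_fn` stands in for any MCDM scoring model (e.g., the fuzzy MARCOS model) and is our own abstraction:

```python
def rank_reversal_test(X, score_fn):
    """Repeatedly eliminate the worst-ranked alternative(s) and re-rank the
    rest. score_fn maps a decision matrix to a list of scores (higher is
    better). Returns the ranking history as lists of original indices; a
    rank-reversal-resistant model keeps the relative order of the survivors."""
    alts = list(range(len(X)))
    history = []
    while len(alts) > 1:
        scores = score_fn([X[i] for i in alts])
        order = sorted(range(len(alts)), key=lambda k: scores[k], reverse=True)
        history.append([alts[k] for k in order])
        worst = min(scores)
        # Bottom ties are eliminated together, as happened in scenarios S7
        # (A3 and A38) and S14 (A13 and A27)
        alts = [alts[k] for k in range(len(alts)) if scores[k] > worst]
    return history

# Toy example with a sum-based scorer: alternative 0 always stays on top
history = rank_reversal_test([[3], [1], [2]], lambda M: [sum(r) for r in M])
print(history)  # [[0, 2, 1], [0, 2]]
```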

Based on the results obtained (Figure 5) and taking into account the complexity of the MCDM model, it can be concluded that the model is stable. In the first 14 scenarios, there is no change in ranks for any alternative. Only eight alternatives change their ranks after the fifteenth scenario has been formed when Alternative A9 is eliminated from the model. It is important to note that after obtaining the results in S15 when the change occurs, the alternatives retain their ranks until the last scenario.


**Figure 5.** Results of the test of reverse rank matrix.

#### *5.3. Comparison with Other Approaches*

In this section, a validation test is performed involving comparison with two other methods in fuzzy form: the fuzzy SAW method [21] and the fuzzy TOPSIS method [22]. The results are presented in Figure 6.

**Figure 6.** Results of comparison with Fuzzy SAW and fuzzy TOPSIS methods.

In this case study, we compared the fuzzy MARCOS technique with MCDM models that use a linear normalization. When observing the comparisons of the applied methods, it can be seen that in 15 cases, there is no change of ranks. It is of primary importance to note that the first 11 alternatives (A23, A18, A20, A21, A33, A17, A19, A12, A5, A16, and A26) do not change positions regardless of the approach taken. In addition, alternatives A8, A32, A3 and A38 do not change their ranks and are in 17th, 18th and 30th places, respectively; the last two mentioned alternatives share the 30th position. Compared to the fuzzy SAW method, eight other alternatives do not change their ranks, while compared to the fuzzy TOPSIS method, three other alternatives do not change positions. The biggest difference is with alternative A25, which is ranked 20th applying the fuzzy MARCOS method and 25th using the fuzzy TOPSIS method. The observed small differences in the rankings between the fuzzy MARCOS results and the fuzzy SAW/fuzzy TOPSIS results (see Figure 6) do not limit the usefulness of the study: since it is impossible to know in advance the outcomes of an applied methodology and the extent of possible deviation of the rankings, the comparison process is significant for validation.

The advantages of the fuzzy MARCOS method are as follows: consideration of fuzzy reference points through the fuzzy ideal and fuzzy anti-ideal solutions at the very beginning of model formation; more precise determination of the degree of utility with respect to both solutions; a new way of determining utility functions and their aggregation; and the possibility of considering a large set of criteria and alternatives, as demonstrated through a realistic example. Compared with other methods, this method is simple and effective, and makes it easy to rank alternatives and optimize the process.

#### **6. Conclusions**

In this paper, a fuzzy MARCOS algorithm was developed to support multi-criteria decision-making, especially when considering parameters in an uncertain environment. Considering the relationships of indicators, presented through TFNs, between the ideal and the anti-ideal solution can positively affect making valid decisions. In addition, this paper defines a new fuzzy linguistic scale for parameter evaluation by decision-makers. The fuzzy MARCOS model was tested using the example of determining the degree of risk on short sections of a first-order main road. A part of the road network with a length of 7.4 km, divided into 38 short sections of 200 m each, was analyzed. Thanks to the previously formed adequate database regarding all necessary parameters, the MCDM model was created. The results of the proposed model show that the 23rd section, i.e., the section between 4.2 and 4.4 km, represents the most hazardous section, since the value obtained is drastically higher than the others. This result is caused by the fact that this section has an undesirable value for almost all factors, and an adequate reaction in terms of increasing surveillance and traffic safety on this section is required. The obtained results in terms of risk can be used for improving road safety. The results can help decision-makers take these indicators into account as input parameters for all planning, design and operational analyses, as well as indicators for the development of regulatory plans for a given area in local conditions. For the purpose of validation, an extensive analysis was carried out, which involved changing the significance of the input parameters, testing the factors of a dynamic environment and comparing the results with two other methods in fuzzy form. The validation tests support the development and application of the fuzzy MARCOS method.
In order to improve the robustness of MCDM in fuzzy environment, a new fuzzy MARCOS method was developed in this study, which uses the ratio method and the reference point method to obtain a scheme of basic comprehensive decision information. The fuzzy MARCOS method is a powerful tool for optimizing multiple goals. Fuzzy MARCOS refreshes the MCDM domain by introducing an algorithm for analyzing the relationship between alternatives and reference points. The Fuzzy MARCOS method integrates the following points to provide a robust decision: defining reference points (fuzzy ideal and fuzzy anti-ideal values), determining the relationship between alternatives and fuzzy ideal/anti-ideal values, defining the utility degree of alternatives in relation to the fuzzy ideal and fuzzy anti-ideal solutions. The results obtained by the fuzzy MARCOS method are more reasonable due to the fusion of the results of the ratio approach and reference point sorting approach. The Fuzzy MARCOS method shows the significant stability and reliability of the results in a dynamic environment. Moreover, it is important to note that in numerous scenarios, the fuzzy MARCOS method shows stability in processing large data sets, which was proven in the performed research.

Future research may be based on the integration of particular short sections and their evaluation, and on the development of the MARCOS method with other theories such as neutrosophic sets [23], single-valued intuitionistic fuzzy numbers [24], grey theory [25] and others. Moreover, the approach of building consensus in group decision-making with information granularity [26], or the concept of a granular fuzzy preference relation where each pairwise comparison is formed as a certain information granule, can be implemented [27].

**Author Contributions:** Conceptualization, M.S. (Miomir Stanković) and M.S. (Marko Subotić); methodology, Ž.S.; validation, D.K.D. and D.P.; investigation, M.S. (Marko Subotić); writing—original draft preparation, Ž.S.; writing—review and editing, D.K.D. and D.P.; supervision, M.S. (Miomir Stanković). All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The paper is a part of the research done within the project No. 19.032/961-58/19 "Influence of Geometric Elements of Two-lane Roads in Traffic Risk Analysis Models" supported by the Ministry of Scientific and Technological Development, Higher Education and Information Society of the Republic of Srpska.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Mathematics* Editorial Office E-mail: mathematics@mdpi.com www.mdpi.com/journal/mathematics
