**Multi-Criteria Decision-Making Techniques for Improvement Sustainability Engineering Processes** Volume 1

### Edited by

Edmundas Kazimieras Zavadskas, Dragan Pamučar, Željko Stević and Abbas Mardani

Printed Edition of the Special Issue Published in *Symmetry*

www.mdpi.com/journal/symmetry

## **Multi-Criteria Decision-Making Techniques for Improvement Sustainability Engineering Processes**


**Volume 1**

Editors

**Edmundas Kazimieras Zavadskas, Dragan Pamučar, Željko Stević and Abbas Mardani**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors* Edmundas Kazimieras Zavadskas Vilnius Gediminas Technical University Lithuania

Dragan Pamučar University of Defence, Military Academy, Department of Logistics, Belgrade, Serbia

Željko Stević Faculty of Transport and Traffic Engineering, University of East Sarajevo, Doboj, Bosnia and Herzegovina, Republic of Srpska

Abbas Mardani Muma College of Business at University of South Florida (USF), Tampa, FL, USA

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Symmetry* (ISSN 2073-8994) (available at: https://www.mdpi.com/journal/symmetry/special_issues/Sustainability_Engineering_Processes).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Article Number*, Page Range.

**Volume 1 ISBN 978-3-03936-778-8 (Hbk) ISBN 978-3-03936-779-5 (PDF)**

**Volume 1-2 ISBN 978-3-03936-794-8 (Hbk) ISBN 978-3-03936-795-5 (PDF)**

© 2020 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


### **Jarosław Górecki**



Reprinted from: *Symmetry* **2019**, *11*, 682, doi:10.3390/sym11050682 ................. **459**

## **About the Editors**

**Edmundas Kazimieras Zavadskas**, Ph.D., DSc, is a Professor at the Department of Construction Management and Real Estate, Chief Research Fellow at the Laboratory of Operational Research, Research Institute of Sustainable Construction, Vilnius Gediminas Technical University, Lithuania. He received his Ph.D. in Building Structures (1973) and Dr. Sc. (1987) in Building Technology and Management. He is a member of the Lithuanian and several foreign Academies of Sciences, Doctor Honoris Causa of Poznan, Saint Petersburg, and Kiev Universities, and Honorary International Chair Professor at the National Taipei University of Technology. For his significant contribution to the field of Grey Systems, Zavadskas was awarded by the International Association of Grey System and Uncertain Analysis (GSUA) and elected to an Honorary Fellowship of the Association, a part of IEEE (2016); he was awarded by "Neutrosophic Science—International Association" for distinguished achievements in neutrosophics and conferred an honorary membership (2016), and received the Thomson Reuters certificate as a most highly cited scientist (2014). A highly cited researcher in Cross-Field (2018, 2019), Zavadskas is recognized for exceptional research performance demonstrated by the production of multiple highly cited papers that rank in the top 1% by citations for field and year in the Web of Science. Zavadskas' main research interests include multi-criteria decision-making, operations research, decision support systems, and multiple-criteria optimization in construction technology and management.
With over 517 publications in Clarivate Analytics Web of Science, h-index = 65, a number of monographs in Lithuanian, English, German, and Russian, Zavadskas is also Founding Editor-in-Chief of the journals *Technological and Economic Development of Economy* and *Journal of Civil Engineering and Management*, as well as Guest Editor of over 15 Special Issues related to decision-making in engineering and management.

**Dragan Pamučar** is an Associate Professor at the University of Defence in Belgrade, Department of Logistics, Serbia. Dr. Pamučar received a Ph.D. in Applied Mathematics with a specialization in multi-criteria modeling and soft computing techniques from the University of Defence in Belgrade, Serbia in 2013, and an M.Sc. degree from the Faculty of Transport and Traffic Engineering in Belgrade in 2009. His research interests are in the fields of Computational Intelligence, Multi-Criteria Decision-Making problems, Neuro-Fuzzy Systems, Fuzzy, Rough and Intuitionistic Fuzzy Set Theory, and Neutrosophic Theory. Application areas include a wide range of logistics problems. Dr. Pamučar has authored/co-authored over 80 papers published in SCI-indexed international journals including Expert Systems with Applications, Applied Soft Computing, Soft Computing, Computational Intelligence, Computers & Industrial Engineering, Technical Gazette, Sustainability, Symmetry, Water, Asia-Pacific Journal of Operational Research, Operational Research, Journal of Intelligent and Fuzzy Systems, Land Use Policy, Environmental Impact Assessment Review, International Journal of Physical Sciences, Economic Computation and Economic Cybernetics Studies and Research, and many more. In the last three years, Prof. Pamučar has been recognized as a top and outstanding reviewer for numerous Elsevier journals, such as Sustainable Production and Consumption, Measurement, Egyptian Informatics Journal, and International Journal of Hydrogen Energy.

**Željko Stević** is an Assistant Professor at the University of East Sarajevo, Faculty of Transport and Traffic Engineering, Doboj. He received a Ph.D. in Transport and Traffic Engineering from the University of Novi Sad, Faculty of Technical Sciences, in 2018. His interests include logistics, supply chain management, transport, traffic engineering, soft computing, multi-criteria decision-making problems, rough set theory, sustainability, fuzzy set theory, and neutrosophic set theory. He has published over 120 papers in his areas of interest and has contributed outstanding research to these fields. In all his research, he has provided strong application studies and practical contributions, solving different problems in transportation, logistics, supply chain management, traffic engineering, the economy, etc. His published studies are well cited by other researchers, as can be seen on ResearchGate. He has an h-index of 18 in Google Scholar, 11 in Scopus, and 11 in WoS. Dr. Stević has authored/co-authored papers published in refereed international journals including Applied Soft Computing, Neural Computing and Applications, Sustainability, Symmetry, Engineering Economics, Soft Computing, Transport, Scientometrics, Information, ECECSR, Technical Gazette, SIC, Mathematics (MDPI), and more. He is a member of the Program Committee for specific programs for the Republic of Srpska in Horizon 2020, and he is Editor-in-Chief of the journal Operational Research in Engineering Sciences: Theory and Applications. Moreover, he serves as Guest Editor for several journals.


**Abbas Mardani**, Ph.D., is with the Informetrics Research Group and Faculty of Business Administration, Ton Duc Thang University, Vietnam. Abbas has published more than 130 articles in high-quality journals. He is Editor and Guest Editor of several journals published by Elsevier, Springer, Emerald, and MDPI, including the International Journal of Physical Distribution & Logistics Management, Applied Soft Computing, Technological and Economic Development of Economy, Technological Forecasting and Social Change, Computers & Industrial Engineering, Journal of Enterprise Information Management, International Journal of Fuzzy Systems, Symmetry, Energies, Soft Computing Letters, etc. Abbas' h-indexes in Scopus and Google Scholar are 18 and 22, respectively. His research interests include quality management, sustainable development, fuzzy sets, decision-making, TQM, SCM, sustainability, service quality, and optimization.

## *Editorial* **Multi-Criteria Decision-Making Techniques for Improvement Sustainability Engineering Processes**

### **Edmundas Kazimieras Zavadskas 1, Dragan Pamučar 2, Željko Stević 3,\* and Abbas Mardani 4,5**


Received: 25 May 2020; Accepted: 25 May 2020; Published: 9 June 2020

**Abstract:** The success of any activity or process depends fundamentally on the possibility of balancing (symmetry between) needs and their satisfaction, that is, on the ability to properly define a set of success indicators. The application of newly developed multi-criteria decision-making (MCDM) methods can eliminate or decrease decision-makers' subjectivity, which leads to consistency or symmetry in the weight values of the criteria. In this Special Issue, 40 research papers and one review study co-authored by 137 researchers from 23 different countries explore aspects of multi-criteria modeling and optimization in crisp or uncertain environments. The papers propose new approaches and elaborate case studies in the following areas of application: MCDM optimization in sustainable engineering, environmental sustainability in engineering processes, sustainable multi-criteria production and logistics process planning, integrated approaches for modeling processes in engineering, new trends in the multi-criteria evaluation of sustainable processes, and multi-criteria decision-making in strategic management based on sustainability criteria.

**Keywords:** multi-criteria decision-making; sustainability; engineering; optimization

### **1. Introduction**

Decision making on complex engineering problems, including individual process decisions, requires an appropriate and reliable decision support system. Fuzzy set theory, rough set theory and neutrosophic set theory, which are widely used within MCDM techniques, are very useful for modeling complex engineering problems with imprecise, ambiguous or vague data. Sustainability in engineering is one of the most discussed topics in recent years and represents one of the key factors in the sustainable development and optimization of engineering. Sustainable multidisciplinary approaches based on MCDM techniques will enable easier handling of process technologies in the future.

Engineering is the application of scientific and mathematical principles to practical objectives such as the processes, manufacture, design and operation of products, while accounting for constraints invoked by environmental, economic and social factors. Various factors need to be considered in order to address engineering sustainability, which is critical for the overall sustainability of human development and activity. In this regard, in recent decades, decision-making theory has been a subject of intense research activity due to its wide applications in different areas, such as sustainable engineering and environmental sustainability. The decision-making theory approach has become an important means of providing real-time solutions to uncertainty problems, especially for sustainable engineering and environmental sustainability problems in engineering processes. This Special Issue has stimulated both theoretical and applied research in the related fields of sustainability engineering processes. It is certainly impossible to provide in this short editorial a more comprehensive description of all the articles in this Special Issue. However, we can say with certainty that the effort in compiling these articles has enriched our readers and will inspire researchers with regard to the seemingly common, but actually important, issue of decision-making and fuzzy decision-making approaches for sustainable engineering processes.

### **2. Contributions**

The Special Issue collects 40 original research papers and one review paper gathered by the Guest Editors. The papers contribute to the various fields mentioned above, and much of the research proposes new methodologies for treating uncertainty.

The topics of the Special Issue attracted the attention of a wide scientific community: 137 researchers from 23 countries contributed to the Issue. The distribution of authors by country is shown in Figure 1.

**Figure 1.** Number of authors from different countries.

The largest number of authors were from Serbia (29 authors). China had 21 authors, while 18 researchers came from Lithuania and 17 from India. Bosnia and Herzegovina and Poland contributed almost equally, with 11 and 9 authors, respectively. Next came Iran, with six authors, while five contributors were from Malaysia, four from Slovenia and three from Taiwan. Two authors came from Hungary and two from Austria. The following eleven countries contributed one author each: Australia, Chile, Denmark, Finland, Pakistan, Russia, Saudi Arabia, South Africa, UAE, USA and Vietnam. It is important to note that one researcher contributed under different affiliations from two different countries, publishing three different papers.

The distribution of papers according to authors' affiliations is presented in Table 1. Authors and co-authors from Lithuania contributed 12 papers: five papers without international collaboration [1–5] and seven papers with international cooperation: Bosnia and Herzegovina–Lithuania–Serbia–Malaysia [6], Lithuania–Bosnia and Herzegovina–Serbia [7], Iran–Lithuania [8], China–Lithuania [9], Malaysia–Lithuania [10], Serbia–South Africa–Lithuania–Bosnia and Herzegovina [11] and Chile–Iran–Lithuania–Australia [12]. Authors from China contributed eight papers in total: four without international collaboration [13–16] and four more in international cooperation: [9], China–USA [17], China–Pakistan [18] and Russia–China–Serbia [19]. Authors from Poland contributed four papers, all in national cooperation only [20–23]. Researchers from Serbia published 15 papers: three without international cooperation [24–26], four with authors from Bosnia and Herzegovina [27–30], three with authors from India [31–33], the aforementioned [6,7,11,19], and India–Finland–Serbia [34]. Authors from India published seven papers; apart from the mentioned [31–34], they cooperated as follows: India–UAE [35], Chile–India [36] and India–Denmark–Vietnam–Saudi Arabia [37]. Authors from Slovenia and Taiwan contributed without international cooperation: two papers from Slovenia [38,39] and one from Taiwan [40]. In addition, one study is the result of cooperation between authors from Hungary and Austria [41]. Two papers come from Malaysia and two from Chile, while authors from the following countries each contributed to one study in the collaborations mentioned above: UAE, Iran, South Africa, Finland, USA, Pakistan, Australia, Denmark, Vietnam, Saudi Arabia and Russia.


**Table 1.** Publications by country.

### **3. Conclusions**

The Guest Editors are very happy that the Special Issue on multi-criteria decision-making techniques for the improvement of sustainability engineering processes has attracted researchers from Europe, Asia and America; papers involving 137 researchers from 23 countries were published.

The Special Issue showed that MCDM techniques are an important tool for solving various problems in the field of sustainability engineering processes. Decision making in real systems requires flexible decisions and respect for the mutual influence between the attributes of the decision. Therefore, the authors have shown the importance of aggregation operators for information fusion in MCDM problems.

Through the 40 published papers, the authors have shown the possibilities of applying multi-criteria techniques for processing information represented by crisp values, as well as various theories of uncertainty. Uncertainty theories applied in this Special Issue include traditional fuzzy sets, intuitionistic type-2 fuzzy sets, q-rung orthopair fuzzy sets, q-rung interval-valued orthopair fuzzy sets, rough sets and rough numbers, probabilistic linguistic term sets and neutrosophic sets. The application areas of the proposed MCDM techniques mainly covered production/manufacturing engineering, logistics and transportation, and construction engineering and management.

**Author Contributions:** All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Use of Determination of the Importance of Criteria in Business-Friendly Certification of Cities as Sustainable Local Economic Development Planning Tool**

### **Milan Ranđelović 1, Slobodan Nedeljković 2, Mihailo Jovanović 3, Milan Čabarkapa 4, Vladica Stojanović 5, Aleksandar Aleksić <sup>6</sup> and Dragan Ranđelović 6,\***


Received: 30 December 2019; Accepted: 17 February 2020; Published: 6 March 2020

**Abstract:** One of the essential activities for sustainable local economic development is the continuous improvement of the business environment, which can be carried out through business-friendly certification as an objective benchmarking process. This process is influenced by many factors (criteria) that can be analyzed using multi-criteria decision-making methods. Determining the criteria weights is the most important task in these methods, and a number of methodologies based on different approaches have been developed for it. These methodologies can generally be divided into two groups: subjective and objective. In short, they quantify given preferences using the knowledge of experts if they are subjective, or using calculations from available data if they are objective. Methodologies from these two groups give different results over a wide range of values. It is therefore useful to create composite indicators through the aggregation of both approaches in order to reduce the influence of their individual shortcomings and thereby achieve a balanced, symmetrical approach. The purpose of this paper is to construct an efficient model that solves the problem of planning sustainable local economic development in the Republic of Serbia. Our approach uses the aggregation of the entropy method, as an objective approach, and the analytic hierarchy process, as a subjective approach, in executing the business-friendly certification process. The implementation of the proposed approach has been demonstrated as part of a business-to-government (B2G) platform called "Multi-Criteria Support System for Analysis of the Local Economic Environment" in the City of Niš.

**Keywords:** entropy; AHP; aggregation; economic development; business-friendly certification; sustainable development

### **1. Introduction**

Business Friendly Certification (BFC) of local self-government units (LSGU) is a process that introduces standards for efficient and transparent governance and allows evaluation of the quality and relevance of services and information for investors and entrepreneurs. This process should be used as one of the most important procedures in the planning of sustainable local economic development. The current application of this procedure in the Republic of Serbia aims to improve the business environment through institutional reforms especially in public administration at the local level with active participation and cooperation of economic subjects, citizens, and municipalities of one local self-government unit. This process involves the determination and evaluation of various criteria on the local self-government level [1–5].

Therefore, the BFC process is complemented by another process carried out by the local self-government, the local BFC (LBFC), which involves the determination and evaluation of various factors (criteria) according to their impact on the fulfilment of local business environment demands as well as on attracting potential investors. It should be noted that these factors depend both on those regulated by the republic and on those regulated by local authorities. The BFC process, together with the mentioned LBFC process, must be implemented in accordance with the defined model of a cyclic algorithm of strategic planning of sustainable local economic development (LED), which is framed by yearly cycle dynamics [6,7].

Multi-criteria decision-making (MCDM) methods of criteria weighting have a significant influence on the final outcome of decision-making and on the ranking of the alternatives that participate in the model. That is why it is necessary to determine these weights properly [8–10]. Determining the criteria weights can be done using various methods, all of which can be classified into two groups: subjective and objective methods. Subjective methods are those that chiefly respect the subjective preferences of the decision-maker or expert in the process of evaluating criteria relevance. In this group, the analytic hierarchy process (AHP) and its various forms of improvement and aggregation with other methods (for example fuzzy logic, BWM (Best Worst Method), conjoint analysis, and the Delphi method, according to references [11–25]) are the most used in MCDM. The main aim of paper [11] is a systematic review of possible ways of determining the criteria weights by the decision-maker or by other participants in the decision-making process. In articles [12–22], one can find descriptions of the application of different subjective methods: AHP, IR-AHP-MABAC, AHP-DEMATEL, IR-DEMATEL-COPRAS, DEMATEL-MAIRCA, rough DEMATEL-ANP-MAIRCA, fuzzy and rough BWM-WASPAS, BWM-MAIRCA, DELPHI-SWARA-MABAC, and WASPAS. In paper [23], the authors propose the novel Full Consistency Method (FUCOM) as more efficient than AHP and BWM. This method implies the definition of two groups of constraints that the optimal values of the weight coefficients need to satisfy. The first group of constraints is based on the fact that the criteria weight coefficients should be equal to the comparative priorities of the criteria, while the second group is based on the conditions of mathematical transitivity.
Both groups are solved using the FUCOM model and, in addition to the optimal weight values, a deviation from full consistency (DFC) is obtained; its degree is the deviation of the obtained weight coefficients from the estimated comparative priorities of the criteria. Paper [24] is based on the description of the main difference between conceptual and theoretical frameworks, as well as on a literature review of comparative studies of AHP and conjoint analysis. The authors of article [25] deal with an application of the Delphi method for ranking Iranian medical schools. Objective methods, on the other hand, assign weight coefficients on the basis of the analysis of given data, which are used for solving specific mathematical models of selected MCDM methods, without taking into account the attitude of decision-makers or experts. The most commonly used objective methods are: entropy, CRITIC (Criteria Importance Through Inter-criteria Correlation), FANMA, data envelopment analysis (DEA), correlation analysis, regression analysis, factor analysis, etc.; examples can be found in the literature [7,26–34]. The main aim of paper [26] is a review of possible ways of objectively determining the criteria weights by algorithms in which the relative importance of criteria reflects the amount of information contained in each of them, which is associated with the contrast intensity of each criterion. In paper [27], an analysis of several simpler methods for objective determination of the criteria weights is explained in more detail using the more complex CRITIC, entropy, and FANMA methods, with their advantages and disadvantages in

comparison with the subjective methods of evaluating criteria, while paper [28] deals with a comparative analysis of two different types of objective techniques for criteria weighting, entropy and CRITIC (article [29] considers the CRITIC method in detail). In paper [30], the authors deal with the entropy method, whereas in paper [31], the authors present models to generate favorable weights from a pairwise comparison matrix combining the AHP and DEA models. In addition, paper [32] again considers the usage of the DEA methodology. Papers [7,33,34] present the possibilities and results of implementing regression, factor and correlation analysis, respectively, in criteria weighting; as classical statistical methods, these belong to the objective approach.
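To make the entropy method mentioned above concrete, the following sketch derives objective weights directly from a decision matrix using standard Shannon-entropy weighting; the matrix is purely hypothetical and this is not necessarily the exact formulation applied later in this paper.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via the Shannon entropy method.

    X: (m alternatives x n criteria) decision matrix of positive values.
    Criteria whose values vary more across the alternatives (lower
    entropy) carry more information and receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                       # column-wise normalization
    safe = np.where(P > 0, P, 1.0)              # avoid log(0); 0*log(0) := 0
    terms = np.where(P > 0, P * np.log(safe), 0.0)
    e = -terms.sum(axis=0) / np.log(m)          # entropy of each criterion
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()

# Hypothetical decision matrix: criterion 2 varies most across alternatives
X = np.array([[7.0, 3.0, 120.0],
              [6.5, 9.0, 115.0],
              [7.1, 4.0, 119.0]])
w = entropy_weights(X)
```

Because the second column is far more dispersed than the nearly uniform first and third columns, its entropy is lowest and it dominates the resulting weight vector.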

As mentioned previously, the main aim of this article is to present an aggregated methodology using the two mentioned approaches of criteria weighting in MCDM to achieve a balanced, symmetric approach with higher accuracy, reducing the influence of the disadvantages of each method (which are presented in [7,34–42]) and thus making it suitable for the BFC and LBFC processes. Paper [35] considers obtaining criteria weights in MCDM from objective and subjective approaches, mainly from the perspective of more participative methods in the construction of composite indicators. In paper [7], the aggregation of AHP and ANOVA is given on a case study of business-friendly certification processes of five cities in the Republic of Serbia. In article [36], a generalized combinative weighting framework is given to show how different types of weightings may be combined to find more reliable rankings of alternatives after analyses using different subjective and objective approaches, while paper [37] proposes an integrated approach to determine attribute weights in MCDM problems, which makes use of the subjective information provided by a decision-maker and the objective information obtained by a two-objective programming model. In papers [38,39], the same case was used to identify and determine the weights of first-year subjects in order to predict students' future success across all study programs. In paper [38], an aggregated measure is defined for determining the subject weights as a combination of weights obtained by DEA, as an objective method, and AHP, as a subjective method. Paper [39] presents a methodology for calculating an aggregated measure as the arithmetic mean for quantitative decision-making, combining statistical methodology based on regression and factor analysis on the one hand with the operational research approaches DEA, as an objective method, and AHP, as a subjective method, on the other.
Also, in paper [34], the authors combine AHP and correlation analysis approaches for the determination of weights in a multi-criteria model of a contemporary problem of business-friendly certification in five local self-government units in the Republic of Serbia. In paper [40], we can find the aggregation of AHP and entropy, paper [41] takes a statistical approach to the sensitivity analysis of MCDM, and article [42] deals with an aggregation that uses a model of a behavior mechanism.
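For contrast with the objective entropy method, a minimal sketch of the subjective AHP step is shown below. The pairwise comparison matrix is purely hypothetical, and the row geometric-mean approximation of the principal eigenvector is used instead of a full eigenvalue computation; the random-index values are Saaty's standard constants.

```python
import numpy as np

# Saaty's random consistency index for matrix orders 1..5 (standard values)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(A):
    """Subjective weights from a pairwise comparison matrix A on the
    1-9 Saaty scale, using the geometric-mean (row) approximation of
    the principal eigenvector; also returns the consistency ratio CR."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)        # row geometric means
    w /= w.sum()                               # normalize to sum to 1
    lam_max = float(np.mean(A @ w / w))        # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)               # consistency index
    return w, ci / RI[n]                       # CR < 0.10 is acceptable

# Hypothetical judgments: C1 moderately preferred to C2, strongly to C3
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, cr = ahp_weights(A)
```

For a reciprocal matrix of this kind, the geometric-mean weights closely match the exact eigenvector, and the consistency ratio confirms whether the expert's judgments are acceptably transitive.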

Therefore, the methodology proposed in this paper is applied to a case study in the City of Niš to assess its quality, i.e., its improved characteristics relative to each individual methodology, with respect to the basic methodological framework of composite indices, which is described in detail in papers [43,44].

For solving this task, the paper is organized as follows. After this Introduction as the first section, Section 2 describes the proposed method of aggregation in short individual subsections: AHP as the subjective method, entropy as the objective method, and the proposed method, which aggregates both of them. In Section 3, 'Brief introduction to the modelling of strategic planning of LED', the model based on the BFC and LBFC processes is proposed in the first subsection, while its implementation as the Support System for Analysis of the Economic Environment of the Local Self-government in the City of Niš, the Republic of Serbia, is presented in the second subsection. In the Results section, the obtained results for the three previously described methods are given. Section 5 presents the discussion of the obtained results, and the 'Conclusion' section, as the last section of the paper, provides the conclusions and underlines the main contributions of this work.

### **2. Proposed Method of Aggregation for Criteria Weighting in an MCDM Method**

In order to find the most suitable methodology for MCDM, whose presence is necessary for the proposed model of strategic planning of LED in Section 3, we start from the general multi-criteria analysis model (1), in which it is necessary to evaluate one of the *m* alternatives (*Ai*, *i* = 1, 2, ... , *m*) in accordance with *n* different criteria that are relevant (*Cj*, *j* = 1, 2, ... , *n*). The importance of each criterion *Cj* is determined using the weight coefficients *wj*, and the criterion values of the alternatives are denoted *aij*. Regardless of the method used for their determination or calculation, all relative weights in the multi-criteria decision-making (MCDM) model must meet the following requirements:

$$0 \le w\_j \le 1$$

$$\sum\_{j=1}^n w\_j = 1$$

$$\begin{array}{c@{\quad}cccc} & \mathbf{C}\_1 & \dots & \mathbf{C}\_n \\ & w\_1 & \dots & w\_n \\ A\_1 & a\_{11} & \dots & a\_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ A\_m & a\_{m1} & \dots & a\_{mn} \end{array} \tag{1}$$

where the following is known: *Ai*, *i* = 1, 2, ... , *m* are the alternatives; *Cj*, *j* = 1, 2, ... , *n* are the criteria; *wj* are the criteria weight coefficients; and *aij* is the value of alternative *Ai* with respect to criterion *Cj*.


For solving this model, several taxonomies [45] exist but, as we mentioned in the introduction of this paper, after choosing a concrete model it is very important to also choose the appropriate criteria weighting method for it.
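As an illustration of model (1) and of the requirements on the weights *wj*, the sketch below validates a weight vector and then scores alternatives with simple additive weighting (SAW). SAW and the example data are used here only as a generic, hypothetical stand-in, not as the method proposed in this paper.

```python
import numpy as np

def check_weights(w, tol=1e-9):
    """Verify the requirements every MCDM weight vector must satisfy:
    0 <= w_j <= 1 for all j, and the weights sum to 1."""
    w = np.asarray(w, dtype=float)
    return bool(np.all(w >= -tol) and np.all(w <= 1 + tol)
                and abs(w.sum() - 1.0) < tol)

def saw_scores(X, w):
    """Score the alternatives of decision matrix X (rows = alternatives
    A_i, columns = criteria C_j) by simple additive weighting, assuming
    all criteria are benefit-type (larger is better)."""
    X = np.asarray(X, dtype=float)
    X = X / X.max(axis=0)               # normalize each column to [0, 1]
    scores = X @ np.asarray(w, dtype=float)
    return scores, np.argsort(-scores)  # ranking, best alternative first

w = [0.5, 0.3, 0.2]                     # hypothetical criteria weights
X = [[8.0, 6.0, 9.0],                   # hypothetical a_ij values
     [9.0, 5.0, 7.0],
     [7.0, 8.0, 8.0]]
assert check_weights(w)
scores, order = saw_scores(X, w)
```

Any weighting method discussed in this section, subjective or objective, must produce a vector that passes `check_weights` before it can be plugged into a ranking procedure of this kind.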

The classification of developed criteria of weighting methods can be done in several ways, with the criteria importance irrespective to the classification, which usually implies that the comparison or ranking criteria are applied in determining the weight of the criteria. Thus, for example, methods can be distinguished by the number of participants in the weighting process, when distinct individual or group methods are applied to theoretical concepts or when the statistical and algebraic methods are distinguished based on the concept of compensation or exchange between criteria, where possible methods are divided into a compensatory or no compensative method of combining individual weight criteria like in [13]. Thus, in the literature [46], the method of determining the weight of the criteria was divided into: statistical and algebraic, holistic and decomposed, direct and indirect, compensatory and no compensative. In algebraic methods, the *n* weight is calculated based on the set of *n* − 1 judgments by solving the equation system. Statistical procedures use a regression analysis of a redundant set of judgments. The decommissioned procedures are based on the comparison of a one-to-one pair of criteria, while in holistic methods, the decision-maker, when expressing his preferences, considers the criteria and variants and performs the overall assessment of the variants. In direct methods, the decision-maker compares two criteria using a relational scale, while in indirect methods, on the basis of preferences, the decision-makers calculate the weight of the criteria. Compensation methods involve strict compensation between the criterion and the severity of the criteria. They are used to aggregate partial values in the methods of multi attribute theory of values while the no compensative methods are without compensation, and the weight of the criteria are used as coefficients of importance used for aggregation of partial values in higher-order MCDM methods. 
As noted in the Introduction, most approaches for determining criteria weights can be divided into subjective and objective ones. Subjective approaches determine the weights of the criteria mainly from information obtained from the decision-maker or from the experts involved in the decision-making process; they reflect the subjective thinking and intuition of the decision-maker, who thus influences the outcome of the process. Objective approaches determine the weights of the criteria from the information contained in the decision matrix using certain mathematical models. To improve the quality and precision of the determined criteria weights, aggregation methods are commonly used as a form of composite evaluation. Aggregation is a unification process that combines and merges several, usually numerical, values into one value; any function that yields a single output value from a vector of input values is called an aggregation function. The earliest example of aggregation is the arithmetic mean, used throughout the development of physics and the other experimental sciences. Aggregation plays an important role in optimization tasks in many current scientific fields, especially where data fusion is required, as in our case of applying multi-criteria decision-making to the choice of location of production capacities: determining the significance of the location determinants of production facilities in the LBFC process and of the factors considered at the republic level in the BFC process, which determine the rank of a local self-government, i.e., a city, at the state level and thus show with how much intensity the local self-government must improve its location determinants in the LBFC process.

To obtain a good aggregation, it is not enough to use just any aggregation operator; on an axiomatic approach, the choice depends on certain conditions. These conditions may be imposed by the nature of the values to be aggregated, owing to their multidimensionality and inconsistency. In Section 3.1 of this paper, where the algorithm of the double-sided model of strategic planning of LED is described, we propose the aggregation of the two most-used methods for criteria weighting: AHP as a subjective method and entropy as an objective one. We propose this because the aggregation of exactly these two methods suits the BFC and LBFC processes, which require taking into account expert opinion on the significance of particular factor-criteria in both processes as the initial, obligatory step, and then objectifying the expert estimates against the data from the decision matrix to eliminate subjectivity and possible errors in the expert assessment. By aggregating these two methods of different approaches, we obtain a more accurate method than either one individually. The authors bear in mind that AHP, although an old method, is still the most-used subjective method; its one important deficiency is possible inconsistency when more than nine criteria are weighted, which occurs in the BFC process but not in the LBFC process. In our case study, AHP performs completely consistent pairwise comparisons [23,47,48]. The entropy method was chosen because the type of criteria is not important for it and it eliminates possible errors of subjective criteria weighting, as described in [26,27,29]. AHP-entropy aggregation has been discussed in many papers [40,49–54], where different solutions were proposed and evaluated on different problems.
As already mentioned in the Introduction, in paper [40] the different criteria for the construction of an agricultural water resources management scheme in the Aras basin in north-western Iran are integrated using a decision support system to evaluate the performance of nine indicators of structural alternatives; the aggregation of AHP as the subjective and entropy as the objective method was used for criteria weighting. The same aggregation was used in [49] to determine the criteria weights for evaluating regional social and economic development in Lithuania, for estimating the strategic potential of enterprises, and for determining the efficiency of economic development and commercial activities of various enterprises. In paper [50], a simple spreadsheet-based application for AHP and entropy aggregation is developed for deriving criteria weights by synthesizing decision elements and ranking decision alternatives. In [51], an aggregated AHP and entropy method is used for biomass selection in Fischer–Tropsch reactors. A joint fuzzy comprehensive method and AHP method were proposed in paper [52] to demonstrate a variable weighting scheme for landslide susceptibility evaluation modelling, which combines subjective and objective weights and is tested on a case study with eight influencing factors selected from the study of Zhen'an County, Shanxi Province in the People's Republic of China. In [53], the aggregation of a fuzzy AHP and entropy method is applied to the evaluation of website usability, since fuzzy MCDM approaches are widely adopted to measure and rank website usability, which is subjective in nature; along with these measures, the objective design dimension, as well as other support features that help the user reach the desired information in the stipulated time, was incorporated in the proposed metric. Finally, article [54] describes smart electricity utilization (SEU), one of the most important components of a smart grid, which is of crucial importance when evaluating the safety, efficiency, and demand response capability of electricity users; to take into account the uncertainty of expert scoring and user data, the authors proposed a hybrid interval analytic hierarchy process and interval entropy method for electricity user evaluation.

### *2.1. AHP Method*

AHP is a multi-criteria analysis method that provides a scientific basis for decision-making problems. It has been widely applied since the 1980s, whether the decision-maker is an individual or a group [55,56]. AHP has been used as a quantitative technique in almost all kinds of problems related to multi-criteria decision-making, and its applications span more than 150 different areas [57]. The AHP method is used for formulating and analyzing decisions and can measure the influence of many factors relevant to the possible outcomes of a decision. It can also be used for forecasting, for instance, the relative probability distribution of outcomes.

According to Saaty [58], the AHP algorithm is based on three principles:


At the first level, the problem is decomposed into a hierarchical structure. The goal is at the top of the structure, while the criteria on which the decision is based are treated at the lower levels. At the lowest hierarchical level are the alternatives between which comparisons must be made. The hierarchical structure of the National Alliance for Local Economic Development (NALED) BFC problem for the evaluation of five LSGUs is given in Figure 1.

**Figure 1.** Hierarchical structure of analytic hierarchy process (AHP) model for National Alliance for Local Economic Development (NALED) Business Friendly Certification (BFC) of five local self-government units (LSGUs).

The next phase involves a pair-wise comparison of the criteria, as well as of the alternatives at a specific level of the hierarchy, in relation to the criteria of the level directly above them in the hierarchical structure. A pair-wise comparison of alternatives answers the question of which of the two observed alternatives better meets the specific criterion and contributes more to the objective. The strength of preference is expressed on a ratio scale with increments from 1 to 9: the lowest preferential level, 1, indicates equality of the observed attributes, and the preferential level 9 indicates the highest preference of one attribute over another [59,60].

This scale was created by Saaty [58], and it is used in the essential AHP method, as well as in its later advanced variant, the Analytic Network Process (ANP) [61]. The defined scale allows comparisons in a limited scope, while perception is sensitive enough to distinguish differences in the importance of alternatives.
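A reciprocal comparison matrix built from the Saaty 1–9 scale can be sketched as follows. The judgments below are purely illustrative assumptions for three hypothetical criteria, not values from the paper's case study; the point is the reciprocity condition *aji* = 1/*aij* that every pair-wise comparison matrix must satisfy.

```python
import numpy as np

# Illustrative Saaty-scale judgments for 3 hypothetical criteria:
# criterion 0 is moderately preferred to 1 (3), strongly to 2 (5), etc.
judgments = {(0, 1): 3.0, (0, 2): 5.0, (1, 2): 2.0}

n = 3
A = np.ones((n, n))           # diagonal: every criterion equals itself
for (i, j), v in judgments.items():
    A[i, j] = v
    A[j, i] = 1.0 / v         # reciprocity: a_ji = 1 / a_ij

# Each entry times its mirror must equal 1
assert np.allclose(A * A.T, np.ones((n, n)))
```

The same construction scales to the 12 BFC criteria; only the upper-triangular judgments need to be supplied, and the rest of the matrix follows from reciprocity.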

### *2.2. Method of Entropy*

Following Sanchez and Soyer [62], and also Shannon [63], let us assume that *Cj* = (*a*1*j*, *a*2*j*, ... , *amj*) denotes a priority vector according to a certain criterion *j*, *j* = 1, ... , *n*. The entropy of this vector may be defined as:

$$H(w\_j) = -\sum\_{i=1}^{m} a\_{ij} \ln(a\_{ij}), \; j = 1, \ldots, n \tag{2}$$

In information theory, the entropy *H*(*wj*) can be defined as a measure of the uncertainty of a discrete random variable *X* that takes values from the finite set (*x*1, *x*2, ... , *xn*), such that the probability that *X* equals *xj* is *wj*, denoted as

$$P(X = x\_j) = w\_j \tag{3}$$

In the multi-criteria context, entropy is mainly used to determine the priority of an alternative. The priority *pi*, in accordance with the procedure given for the weights *wj*, can be interpreted as the probability that the *i*th alternative will be preferred by the decision-maker (Uden and Kwiesielewicz [64]). One of the most important characteristics of entropy is that *H*(*X*) ≥ 0: the entropy of a discrete distribution with finite support always has a non-negative value. It equals zero only when every term in Equation (2) is zero, which is possible only when one value of the discrete random variable has probability one and all the other values have probability zero; such a situation excludes uncertainty.

Entropy achieves its maximum value for the uniform distribution, as presented by the following equation:

$$H(1/n, \dots, \ 1/n) = \ln\left(n\right) \tag{4}$$

This characteristic of entropy is consistent with the definition of entropy as a measure of uncertainty, which means that the maximum is reached when all values of random variable *X* have equal probability.
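The two boundary cases above, zero entropy for a degenerate distribution and the maximum ln(*n*) for the uniform one, can be checked numerically. This is a minimal sketch of Equations (2) and (4); the helper name `entropy` and the sample distributions are our own illustration, with the usual convention 0·ln 0 = 0.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution; 0*ln(0) treated as 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                     # skip zero-probability terms
    return -np.sum(nz * np.log(nz))

n = 4
uniform = np.full(n, 1.0 / n)         # maximal uncertainty
degenerate = np.array([1.0, 0.0, 0.0, 0.0])  # no uncertainty

assert np.isclose(entropy(uniform), np.log(n))  # Eq. (4): H = ln(n)
assert entropy(degenerate) == 0.0               # certainty => H = 0
```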

### *2.3. Aggregation Method*

As stated at the beginning of this section, a good aggregation cannot be obtained with just any aggregation operator from the groups mentioned above; on an axiomatic approach, the choice depends on several conditions. These conditions can be imposed by the nature of the values to be aggregated, and they also arise in the multi-criteria problem addressed in this paper, where the goal is to obtain a globally optimal estimate despite the large range of differences among the obtained criteria values. As also concluded earlier in this section, because of their suitability to the nature of the considered BFC and LBFC MCDM problems, the authors propose the aggregation of the subjective AHP and the objective entropy method.

Namely, the calculated objective criteria weights determine the influence, i.e., the effect of each particular criterion on the rationality of possible alternatives, whereas the subjective criteria weights indicate their importance in terms of the rationality of the considered alternatives. In some cases, these subjective and objective weights differ considerably, which has a negative effect on the accuracy of determining the alternatives in the considered BFC and LBFC MCDM processes.

A variety of methodologies for aggregating the subjective AHP and objective entropy methods are found in the literature. The integration of the individual preferences of decision-makers in AHP criteria weighting can be performed using various mathematical approaches, which can be categorized into four groups [64]. Because, from a mathematical standpoint, no unique taxonomy exists for the aggregation of the AHP and entropy methods, the authors distinguish in the literature the three groups of approaches presented in this paper.

The first approach is a simple algebraic aggregation of the individual judgments obtained by each method, in different algebraic forms, as in [26,49,65–67]. In paper [26], one possible way of algebraically aggregating objectively and subjectively determined criteria weights is given in the form of a kind of arithmetic mean (the same aggregation approach is proposed in paper [49] using the AHP and entropy methods, in [65] for material selection, and in [66] for selecting sheet hydroforming process parameters). In contrast, paper [67] considers two possible ways of algebraic aggregation of objectively and subjectively determined criteria weights, in the form of a kind of arithmetic and of geometric mean. The second approach uses different mathematical programming algorithms to aggregate the individually obtained AHP and entropy criteria weights, as in paper [37], which, as mentioned in the Introduction, proposes an integrated approach to determining attribute weights in MCDM problems that combines the subjective information provided by a decision-maker with the objective information in the form of a two-objective programming model. Paper [68] points out that the objective weight introduced in [37] can be very different from the objective weight obtained by the entropy method. In [69], the authors proposed a programming model using AHP and entropy to solve the problem of criteria weighting in the process of certifying cities and municipalities as business-friendly, applied to the City of Niš in the Republic of Serbia, while in paper [70], the authors present several versions of a mathematical programming model that determines attribute weights for each consumer; these weights are empirically evaluated using databases of dry cereals and automobiles as examples. The third approach is based on fuzzy logic, one of the most-used artificial intelligence techniques in MCDM, as described in [52–54,71–73].
In papers [52–54], as presented at the beginning of Section 2 of this paper, a fuzzy comprehensive method and AHP were proposed to demonstrate a variable weighting scheme for landslide susceptibility evaluation modelling, for the evaluation of website usability, and as a hybrid interval analytic hierarchy process and interval entropy method for electricity user evaluation, respectively. In paper [71], it is assumed that in AHP the rankings of factors under particular attributes are given; then, based on entropy, the amount of information associated with each ranking is evaluated and an attribute ranking is fixed. In [72], the authors proposed a new decision-making method using fuzzy AHP based on entropy weights, since the traditional AHP is mainly used in non-fuzzy (crisp) decision applications with an unbalanced scale of judgments, and the proposed method overcomes these problems. In paper [73], the authors apply a fuzzy comprehensive evaluation algorithm to optimize the decision-making process of logistics center location, in which fuzzy AHP is proposed to determine the ranks in the logistics center location evaluation; to include domain expertise in the decision-making process and improve evaluation performance, a fuzzy MCDM model is developed that modifies the fuzzy AHP weights based on the entropy technique, using the decision matrix information through the Delphi method. The authors of this paper choose the algebraic approach and consider two of its variants that could be implemented for solving the BFC and LBFC processes in the model for LED planning:

• The aggregated weight of each criterion is obtained from the weight *zi*, *i* = 1, ..., *n*, given by the AHP method and the weight *wi*, *i* = 1, ..., *n*, given by the entropy method [65], in the form:

$$\alpha\_i = \frac{z\_i w\_i}{\sum\_{i=1}^n z\_i w\_i}, \ i = 1, \dots, n$$

• The aggregated weight of each criterion is obtained from the weight *zi*, *i* = 1, ..., *n*, given by the AHP method and the weight *wi*, *i* = 1, ..., *n*, given by the entropy method [67], in the form:

$$\alpha\_i = \frac{\sqrt{z\_i w\_i}}{\sum\_{i=1}^n \sqrt{z\_i w\_i}}, \; i = 1, \dots, n$$
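The two algebraic aggregation formulas above can be sketched as follows. The weight vectors `z` (AHP) and `w` (entropy) are illustrative values, not the case-study results; the geometric form is read here with the numerator √(*zi wi*) normalized by the sum of the same terms, so that both variants yield weights summing to one.

```python
import numpy as np

def aggregate_product(z, w):
    """Multiplicative form [65]: alpha_i = z_i*w_i / sum_j z_j*w_j."""
    zw = np.asarray(z) * np.asarray(w)
    return zw / zw.sum()

def aggregate_geometric(z, w):
    """Geometric-mean form [67]: alpha_i = sqrt(z_i*w_i) / sum_j sqrt(z_j*w_j)."""
    s = np.sqrt(np.asarray(z) * np.asarray(w))
    return s / s.sum()

z = np.array([0.5, 0.3, 0.2])  # illustrative subjective (AHP) weights
w = np.array([0.2, 0.3, 0.5])  # illustrative objective (entropy) weights

a1 = aggregate_product(z, w)
a2 = aggregate_geometric(z, w)
assert np.isclose(a1.sum(), 1.0) and np.isclose(a2.sum(), 1.0)
```

Note that the geometric form dampens disagreements between the two methods: a criterion rated high by one method and low by the other keeps a larger aggregated weight than under the multiplicative form.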

### **3. Brief Introduction to the Modeling of Strategic Planning of LED**

Since 2000, LED has predominantly been initiated by international development agencies, implying changes in organizations, the introduction of new codes of conduct, institutional reform, the implementation or development of new or existing strategies, etc., making the local community more competitive [74,75]. According to the authors' proposal, strategic planning of LED for a period of over five years is a creative process in which a cyclical algorithm determines the key areas of development, harmonizes stakeholders around the most important goals, and assesses the impact of significant factors on LED. Among the several schools of thought, the authors adhere to the one which holds that the three most important things in LED are: location, location, and, once again, location.

The problem of choosing the location of production capacities for sustainable LED is one in which a large number of locations (alternatives) must be compared on the basis of a large number of location determinants (criteria), some of which are managed locally and some at the republic level. It should be kept in mind that each individual case, depending on the particular investor, has distinct preferences, i.e., different criteria have different significance (weight coefficients), so this problem can obviously be treated as a classic MCDM problem [76].

### *3.1. Double-Sided Model of Strategic Planning of LED*

The National Alliance for Local Economic Development (NALED) introduced, implemented, and has actively carried out the BFC procedure in Serbia for more than ten years. The results are very concrete and have had positive implications for the improvement of the general business climate in Serbia: almost half of all local self-government units in Serbia have participated and improved their business environment through this certification program. The certification criteria provide unique guidance to LSGUs on the type and quality of services and information they should provide, as well as on the infrastructure that investors and the local business community expect LSGUs to provide. A local self-government that is part of the certification process receives specific recommendations on the governance reforms that need to be implemented in order to create a favorable, improved business environment and investment climate at the local level. A positive business environment includes transparent local governance, efficient administration, adequate infrastructure, and major strategic development decisions made in partnership with the business community. The BFC process encompasses a wide range of activities, as represented in Figure 2.

**Figure 2.** Schematic representation of procedures in the BFC process in the Republic of Serbia [75].

The ultimate goal of the certification process is to increase the competitiveness of individual local governments, which contributes to national level competitiveness, as well as investment promotion and attraction, employment generation, and increasing living standards and quality of life in the Republic of Serbia [77]. The certification process has been adjusted, improved, and extended through the years, but mainly, it included 12 criteria and from 80 to more than 100 sub-criteria by which local self-governments and independent NALED-engaged experts could assess whether and to what extent a municipality meets the standards of a friendly business environment [78]: (C1) strategic planning of LED in partnership with businesses; (C2) special department in charge of LED, FDI promotion and business support—LED Office; (C3) business council for economic issues—advisory body to the mayor and local governments; (C4) efficient and transparent system for acquiring construction permits; (C5) economic data and information relevant for starting and developing a business; (C6) multilingual marketing materials and website; (C7) balanced structure of budget revenues/debt management; (C8) investing into the development of local workforce; (C9) cooperation and joint projects with local business on fostering LED; (C10) adequate infrastructure and reliable communal services; (C11) transparent policies on local taxes and incentives for doing business, and (C12) electronic communication and on-line services.

Assessment of the fulfilment of the certification criteria and the collection of additional information in cooperation with the municipality team and the business community are done through the following set of steps:


According to NALED, the importance of each criterion is defined as the average score of the previous level of evaluation and as such, can be called the relative importance of observed criteria *Cj*.

Using the given data for self-government of some cities, it is possible to apply different MCDM methods to determine the importance for any city in the Republic of Serbia.

Local self-government can collect data from several subjects of the local business community who assess the location with a few criteria from the local and national level with a similar LBFC procedure using different MCDM methods to determine which type of job must be introduced in order to fulfil the requests of the local business community [79].

Therefore, in this paper, the authors propose programmatically supported strategic planning of LED using a two-dimensional, i.e., double-sided, model that realizes a cyclic algorithm, as presented in Figure 3.

On the one hand, the data provided by NALED make it possible to apply different MCDM methods to determine the level of fulfilment of each criterion of a good business environment for the considered cities in the Republic of Serbia and to determine the intensity with which certain factors should be addressed in a concrete local self-government.

On the other hand, the local government can use MCDM procedures to solve the mentioned LBFC problem by collecting data from as many local business community members as possible, assessing each possible location against several criteria at the local and republic level, and determining the significance of each of these criteria. It can thus determine which criteria must be met for better implementation of LED and, in addition, which branches of economic development should be involved, because the significance of the criteria that determine the quality of a site suitable for eventual investment depends on the branch [79].

In the proposed model, the strategic plan of LED should be adopted every five years (determined by date-4), while each year the LBFC procedure determines the most important branches of economic development (related to date-1) and the most important location factor-criteria on which the local self-government should work (based on date-2), with the required intensity of investment determined in the previous step using the location factors every three months (related to date-3). The proposed model is valid regardless of which city is considered; however, it is used in those that are in the BFC process [67].

**Figure 3.** The algorithm of the double-sided model of strategic planning of LED [67].

To solve the criteria weighting, which is multidimensional and inconsistent in both of the mentioned MCDM problems in the proposed model, i.e., in the implementation of the BFC and LBFC processes, we propose in this paper the aggregation of the two most-used methods, the AHP method as a subjective approach and the entropy method as an objective approach of MCDM, in order to improve the quality of the factor-criteria weighting, as described in detail in Section 2. The proposed methodology is applied to the case study of the City of Niš in the Republic of Serbia. Bearing in mind that the authors have already considered another methodology of such aggregation on a similar case study for the BFC procedure in article [69], we present only the results obtained by the implementation of the BFC procedure, especially because the implementation of LBFC is essentially the same, naturally with other criteria factors.

### *3.2. Multi-Criteria Support System for Analysis Economic Environment of the Local Self-government*

The implementation of the proposed aggregated algorithm is conducted on a business-to-government (B2G) platform called the "Multi-Criteria Support System for Analysis of the Local Economic Environment". The focus of this technical solution is the investigation of the business environment in the City of Niš, the third-largest local government in the Republic of Serbia and one of the cities involved in the business-friendly certification process.

The platform's main goal is to deliver feedback from the business community regarding their preferences on the key issues for improving the local business scene. The solution of the MCDM problem lies in identifying which of the considered criteria is of the highest importance for creating a satisfactory business environment. This analysis is based on the insights and attitudes of the community (several enterprises and entrepreneurs operating on the territory of the City of Niš), and its result is information for the local authorities on the current concerns and observations of those doing business under the local government. The system was developed in 2015 at the Faculty of Economics, University of Niš, and at the Criminalistics and Police Academy, Belgrade, and implemented on the official web page of the Society of Economists of Niš (http://den.org.rs/). It is a decision support system used in the research conducted by the society. Apart from the society, the second beneficiary of this system is the Local Economic Development Office of the City of Niš. The main idea is to offer accurate information to the representatives of the Local Economic Development Office in a timely manner and to keep them informed of the research results so that they can integrate these results into the strategic plans of local economic development. Finally, the system's users are the business entities involved in the research, which provide the system with their attitudes on key concerns for improving the local business climate and thereby take an active part in the creation of local economic policies.

The study consists of two segments:


The first analysis is focused exclusively on the criteria whose improvement is in the jurisdiction of the local government (LBFC); the criteria that are within the competence of national institutions (such as tax incentives or subsidies) are not included in the study. By completing the questionnaire, the business community's opinion is included in the analysis of this important issue. The identity of the participants is anonymous and visible only to the researchers. The main goal of the research conducted by the Society of Economists of Niš is to identify an economic policy at the local level that would allow the local authorities to contribute, independently of the state institutions, to the enhancement of the business climate and thus to a rise in economic activity. An example of the online questionnaire (anketa.investnis.rs) for collecting the data from clients is given in Figure 4.

The second analysis is focused exclusively on the criteria whose improvement is in the jurisdiction of the state government; the criteria listed as relevant are exactly the same as in the BFC process presented in Section 3.1. Both studies involve collecting data from many business entities to determine which factors are important (first analysis), and, for many cities, collecting data to determine the required intensity of improving the specific criteria identified by the first analysis (second analysis). The importance of the criteria is thus determined on the basis of a collective judgment, and the main practical problem is determining the relative importance of the criteria in the multiple-criteria decision-making model.


**Figure 4.** Overview of a part of the on-line questionnaire [77].

### **4. Results**

According to NALED, the importance of the criteria in the process of business-friendly certification of local self-governments in the Republic of Serbia is defined as the average score of the previous level of evaluation, and it can be defined as the relative importance of observed criteria Cj (row named NALED's evaluation of criteria importance in Table 1). The results related to the level of fulfilment for all observed criteria for five specific cities in the Republic of Serbia, which are not named, are given in Table 1. Using these data, we applied two different methods—AHP method as a subjective approach, and entropy as an objective approach—to determine the weights of the criteria.

**Table 1.** The level of criteria fulfilment in municipalities surveyed according to the BFC program.


### *4.1. Criteria Weights Determination Using AHP*

Regarding the particular problems of the case study of business-friendly certification of local self-government in the Republic of Serbia, as discussed in this paper, it is necessary to perform a comparison on the level of criteria in order to determine the weights of the MCDM model using AHP described in Section 2.1.

Using the evaluation of criteria importance defined by NALED [78], shown in the first row of Table 1, on the basis of pair-wise comparison, a reciprocal matrix is formulated (Table 2). Based on the pair-wise comparison, reciprocal matrix, and an algorithm from the AHP method, a vector of priorities has been calculated. The rounded values for *j* = 1, 2, ... , 12 are given in Table 3.
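The priority vector can be derived from a reciprocal pair-wise comparison matrix in several ways; Saaty's original algorithm uses the principal eigenvector, and the row geometric mean is a common, closely agreeing approximation. The 3×3 matrix below is a hypothetical stand-in (the paper's actual 12×12 matrix is given in Table 2), used only to sketch the computation:

```python
import numpy as np

# Hypothetical reciprocal comparison matrix for 3 criteria
# (illustrative only; see Table 2 for the case-study matrix).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

# Row geometric mean, then normalization to sum to 1,
# approximating the principal eigenvector of A.
g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
priorities = g / g.sum()

assert np.isclose(priorities.sum(), 1.0)
assert priorities[0] > priorities[1] > priorities[2]
```

For a fully consistent matrix (as in the case study, where the comparisons are derived from a single importance scale), the geometric-mean and eigenvector priorities coincide exactly.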


**Table 2.** Pair-wise comparison of criteria.

**Table 3.** Criteria importance of business-friendly certification model obtained using AHP.


### *4.2. Criteria Weights Determination Using Method of Entropy*

Based on the entropy calculation procedure described in Section 2.2, the weights of the criteria are determined. Determining the weights *wj*, *j* = 1, 2, ... , 12, is carried out in three steps using the coefficients *aij*, *i* = 1, 2, ... , 5; *j* = 1, 2, ... , 12, from Table 1. In the first step, the criteria values of the variants are normalized using the standard form:

$$r\_{ij} = \frac{a\_{ij}}{\sum\_{i=1}^{n} a\_{ij}} \text{ for } i = 1, 2, \dots, 5; \; j = 1, 2, \dots, 12 \tag{5}$$

Thus, a normalized decision matrix is obtained for *n* = 5 and *m* = 12 as follows:

$$\begin{array}{ccccc} & \mathbf{C}\_{1} & & \dots & \mathbf{C}\_{m} \\ & \omega\_{1} & & \dots & \omega\_{m} \\ A\_{1} & r\_{11} & & \dots & r\_{1m} \\ \vdots & \vdots & & \ddots & \vdots \\ A\_{n} & r\_{n1} & & \dots & r\_{nm} \end{array} \tag{6}$$

The quantity of information contained in the normalized decision matrix and carried by each criterion *Cj* can be measured by its entropy value *ej*:

$$e_j = -k \sum_{i=1}^{5} r_{ij} \ln r_{ij}, \quad j = 1, 2, \dots, 12 \tag{7}$$

By introducing the constant *k* = 1/ln(*n*), it is ensured that all values of *ej* are in the interval [0,1]. The second step determines the degree of divergence *dj* in relation to the average amount of information contained in each criterion:

$$d_j = 1 - e_j, \quad j = 1, 2, \dots, 12 \tag{8}$$

where *dj* represents the intrinsic contrast intensity of criterion *Cj*. The greater the divergence of the initial values of the variants *Ai* for a given criterion *Cj*, the higher the value of *dj*, and therefore the greater the importance of *Cj* for the given decision problem. If all variants have similar values for a particular criterion, that criterion is less important for the given decision problem. Moreover, if all variants have identical values for a certain criterion, the criterion may be omitted because it brings no new information to the decision-maker. Since *dj* is a specific measure of the contrast intensity of criterion *Cj*, the final relative weight of the criterion is obtained in the third step of the method by simple additive normalization:

$$w_j = \frac{d_j}{\sum_{j=1}^{m} d_j}, \quad j = 1, 2, \dots, 12 \tag{9}$$

The obtained results for *j* = 1, 2, ... , 12 are presented in Table 4.
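The three-step entropy procedure above can be sketched in a few lines of NumPy. This is an illustrative implementation of Equations (5)–(9), not the authors' original code, and it assumes all entries of the decision matrix are strictly positive.

```python
import numpy as np

def entropy_weights(A):
    """Objective criteria weights via the entropy method, Equations (5)-(9).

    A is an (alternatives x criteria) matrix of strictly positive scores.
    """
    n, m = A.shape
    R = A / A.sum(axis=0)                    # Eq. (5): column-wise normalization
    k = 1.0 / np.log(n)                      # constant ensuring e_j in [0, 1]
    E = -k * (R * np.log(R)).sum(axis=0)     # Eq. (7): entropy of each criterion
    D = 1.0 - E                              # Eq. (8): degree of divergence
    return D / D.sum()                       # Eq. (9): additive normalization

# Toy example: the first criterion has identical scores for all alternatives,
# so it carries no information and receives zero weight.
w = entropy_weights(np.array([[1.0, 2.0], [1.0, 8.0], [1.0, 5.0]]))
```

Note how a criterion on which all variants agree is automatically discarded, exactly as the discussion of *dj* above requires.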

**Table 4.** The relative importance of criteria determined by the objective approach using the method of entropy.


### *4.3. Criteria Weights Determination Using Aggregation Method*

During this work, as mentioned in Section 2.3, we have considered the algebraic aggregation operators and two aggregation methods.

In the first method, the aggregated weight of each criterion is obtained as the normalized product of the subjective weight determined by the AHP method (*wi sub* = *zi*, *i* = 1, ... , *n* = 12) and the normalized average weight obtained by the entropy method (*wi obj* = *wi*, *i* = 1, ... , *n* = 12) [65]:

$$w_i^{(10)} = \alpha_i = \frac{z_i w_i}{\sum_{i=1}^{n} z_i w_i}, \quad i = 1, \dots, n \tag{10}$$

The obtained results for aggregation method I are given in Table 5.

**Table 5.** The relative importance of criteria determined by the aggregated approach using Equation (10).


In the second method, the aggregated weight of each criterion is obtained as the normalized geometric mean of the subjective weight determined by the AHP method (*wi sub* = *zi*, *i* = 1, ..., *n* = 12) and the normalized average weight obtained by the entropy method (*wi obj* = *wi*, *i* = 1, ..., *n* = 12) [67]:

$$w_i^{(11)} = \alpha_i = \frac{\sqrt{z_i w_i}}{\sum_{i=1}^{n} \sqrt{z_i w_i}}, \quad i = 1, \dots, n \tag{11}$$

The obtained results by aggregated approach II using Equation (11) are given in Table 6.
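Both aggregation rules, Equations (10) and (11), can be sketched as follows. This is an illustrative implementation with hypothetical three-element weight vectors, not the paper's actual twelve-criteria data from Tables 3 and 4.

```python
import numpy as np

def aggregate_product(z, w):
    """Eq. (10): normalized product of subjective (AHP) and objective (entropy) weights."""
    p = np.asarray(z) * np.asarray(w)
    return p / p.sum()

def aggregate_geometric(z, w):
    """Eq. (11): normalized geometric mean of the two weight vectors."""
    g = np.sqrt(np.asarray(z) * np.asarray(w))
    return g / g.sum()

# Hypothetical weight vectors (for illustration only):
z = np.array([0.5, 0.3, 0.2])   # subjective (AHP) weights
w = np.array([0.2, 0.3, 0.5])   # objective (entropy) weights
w10 = aggregate_product(z, w)
w11 = aggregate_geometric(z, w)
```

Both functions return vectors that again sum to one, so the aggregated weights remain directly comparable with the individual AHP and entropy weights.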

**Table 6.** The relative importance of criteria determined by the aggregated approach II using Equation (11).


Comparing the results in Tables 3 and 4 with those obtained by the aggregated methods in Tables 5 and 6 indicates that, in the BFC process for cities in the Republic of Serbia, the integrated method based on the normalized geometric mean of the subjective and the objective weights, Equation (11), should be used, because this aggregated weight always lies between *wi sub* = *zi* and *wi obj* = *wi* [67,80].

### **5. Discussion**

The authors have presented improvements in the theoretical and practical framework for planning sustainable local economic development through the application of multi-criteria analysis. The proposed methodology bridges a gap in the literature on indicator evaluation for sustainable local economic development by introducing a new approach for obtaining criteria weight coefficients: a new type of aggregation of subjective and objective weighting methodologies. The efficiency of the proposed approach has been demonstrated experimentally on a web application, with the City of Niš, the Republic of Serbia, as an example. The new aggregation approach for obtaining criteria weights, tested in the process of local economic development, is a required part of the two phases:


The results obtained in this work have shown that the proposed approach is a very powerful tool that could notably improve the decision-making process related to sustainable local economic development. To the best knowledge of the authors, this is one of the first papers related to the field of sustainable local economic development that gives a multi-criteria framework practically implemented on the web application we have used for the verification of the proposed methodology.

The significant differences between the results obtained with the two MCDM approaches considered here, AHP and entropy (compare Tables 3 and 4), show that some kind of aggregation of these two methods is necessary to obtain a better methodology.

The difference in the results given in Tables 5 and 6 indicates:


Namely, entropy is an adequate objective approach for weight determination because it makes it possible to estimate the homogeneity of the individual judgments of all enterprises and entrepreneurs involved in the research. For this reason, the entropy method was used to determine individual importance in both analyses, while the AHP method was used as the subjective approach because it can exploit the subjective knowledge of experts in this field.

The higher the homogeneity of the individual judgments on an observed criterion, the lower the importance of that criterion, assuming the weights are calculated from the degree of diversification of information *Hj* by the following equation [65]:

$$H_j = 1 - e_j, \quad j = 1, \dots, 12 \tag{12}$$

Subsequently, the weights are determined as $w_j = H_j / \sum_{j=1}^{n} H_j$, where $0 \le w_j \le 1$ and $\sum_{j=1}^{n} w_j = 1$.

Finally, we performed the Wilcoxon signed-rank test on the results obtained with Equations (10) and (11).

This test shows that there is no significant difference between the two methodologies, and therefore that the proposed methodology is applicable (Table 7).
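The paper runs the test in SPSS; an equivalent check can be reproduced with SciPy's `wilcoxon` function. The weight vectors below are hypothetical stand-ins for the results of Equations (10) and (11), whose actual values are in Tables 5 and 6.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 12-element weight vectors (illustration only; the paper's
# actual values are in Tables 5 and 6).
w_eq10 = np.array([0.10, 0.08, 0.12, 0.09, 0.07, 0.11,
                   0.06, 0.08, 0.09, 0.07, 0.06, 0.07])
w_eq11 = np.array([0.09, 0.08, 0.11, 0.10, 0.07, 0.10,
                   0.07, 0.08, 0.09, 0.07, 0.07, 0.07])

stat, p = wilcoxon(w_eq10, w_eq11)
# A p-value above the chosen significance level (e.g. 0.05) indicates no
# significant difference between the two aggregation methods.
```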


**Table 7.** The Wilcoxon Signed Ranks Test in SPSS.

### **6. Conclusions**

In conclusion, the authors note two facts that are important from the standpoint of sustainable local economic development. The platform of the considered decision support system is based on the proposed double-sided model of strategic planning of LED used in its obligatory BFC and LBFC processes, which ensures the sustainability of the proposed solution. For the realization of these two processes, MCDM with an aggregation approach has been used, in the form of the geometric mean of the results obtained by AHP as the subjective methodology and entropy as the objective one, which provides a balanced approach to the final solution. The general limitations of each type of methodology included in the aggregated solution are thereby minimized. In this way, the system captures the preferences of the business community with respect to the criteria for assessing the local business environment (first analysis) and the necessary corrections estimated for the city in the Republic of Serbia (second analysis, based on the implementation of the proposed model). The application has proven to be a useful tool for dynamic observation of the local business environment, realized as a software tool on a local government website (http://investnis.rs). The results obtained in this article enable further work by the authors and other researchers dealing with the aggregation of criteria weights obtained by AHP as a subjective and entropy as an objective method: more efficient aggregations of one subjective and one objective method could be researched and tested. They should bear in mind new subjective methods, for example, BWM, SWARA, or FUCOM, and also new objective methods, for example, CRITIC and FANMA, which may enable better results in solving the considered and similar problems in other areas of human life.

The contribution of this paper can be considered from two points of view. The theoretical work has provided a new approach of the aggregation of subjective and objective methods of multi-criteria decision-making, which ensures that the results obtained by its application are always between the results when they are applied individually. This eliminates the possible bad effects of each of the methods from the group of subjective and objective methods. The practical work has dealt with the practical implementation of the proposed method in the concrete system for planning of sustainable local economic development of the City of Niš in the Republic of Serbia.

**Author Contributions:** M.R. contributed to this paper by providing data and field work in the analysis of the specific case of the City of Niš. He also contributed his research background in the identification and study of methods for the objectification and improvement of multi-criteria decision-making processes through the integration of different existing methods for determining criteria weights. He participated directly in the business certification process and brought field-work experience in local economic development and the improvement of the business environment. S.N. contributed his extensive experience and research in different decision-making systems and, in this specific case, the development of the algorithm for sustainable local economic development presented in this paper. M.J. contributed his extensive experience and research in different decision-making systems, the development of the algorithm for sustainable local economic development presented in this paper, and the research and practical implementation of the model in the Republic of Serbia. M.Č., with a strong background in software engineering, contributed his experience and research in different multi-criteria decision-making methods, especially in the part of the research concerning the current achievements and findings of other authors in this particular area. V.S., with a strong mathematical background, contributed his extensive experience and research in mathematical modeling and in different models of multi-criteria decision-making, especially in the part of the research concerning the current achievements and findings of other authors in this particular area. A.A. contributed his experience and research in different multi-criteria decision-making methods, especially in the part of the research concerning the current achievements and findings of other authors in this particular area. D.R., as the most experienced of the authors, led the team that worked on this paper and contributed most significantly by identifying specific areas of sustainable development planning where multi-criteria decision-making methods could be implemented. He has extensive experience in the research of decision-making models and provided the idea of which existing models could contribute the most through integration, having in mind the authors' specific mathematical education. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Application of the AHP-BWM Model for Evaluating Driver Behavior Factors Related to Road Safety: A Case Study for Budapest**

### **Sarbast Moslem 1, Danish Farooq 1, Omid Ghorbanzadeh 2,\* and Thomas Blaschke <sup>2</sup>**


Received: 1 January 2020; Accepted: 26 January 2020; Published: 5 February 2020

**Abstract:** Driver behavior analysis has been considered a complex way to address road safety complications. Car drivers are often involved in risky driving that leads to accidents in which people are fatally or seriously injured. The present study aims to analyze and rank the significant driver behavior factors related to road safety by applying an integrated multi-criteria decision-making (MCDM) model, which is structured as a hierarchy with at least one 5 × 5 (or bigger) pairwise comparison matrix (PCM). A real-world, complex decision-making problem, driver behavior preferences related to road safety, was selected to evaluate the possible application of the proposed model. Applying the analytic hierarchy process (AHP) alone might cause a loss of reliable information in decision-making systems with big PCMs by precluding layman participants. To avoid this issue, we used the Best Worst Method (BWM) to make the layman evaluator's task easier and less time-consuming. The AHP-BWM model was therefore found to be a suitable integration for evaluating risky driver behavior factors within a designed three-level hierarchical structure. The model identified the most significant driver behavior factors influencing road safety at each level, based on evaluator responses to the driver behavior questionnaire (DBQ). Moreover, the output vector of weights in the integrated model is more consistent for 5 × 5 or bigger PCMs. The proposed AHP-BWM model can be used for PCMs with scientific data organized by traditional means.

**Keywords:** analytic hierarchy process; best worst method; decision-making; driver behavior questionnaire; road safety

### **1. Introduction**

According to the Hungarian Central Statistical Office data, there were 625 road fatalities in 2017, a 2.9% increase when compared to 2016 [1]. The investigations of the Road Safety Action Program declare that human-related factors caused most of the accidents. Therefore, handling them befits the highly dynamic objective of road safety action. The Road Safety Action Program (2014–2016) was incorporated into the Hungarian Transport Strategy, which also set objectives to reduce the number of road fatalities by 50% between 2010 and 2020 [2].

The previous research done to estimate drivers' perceptions of accident risk revealed that the main factors related to the driver which directly affect road safety included driving behavior, driving experience, and the driver's perception of traffic risks [3]. Drivers are generally involved in actions that cause safety problems for both themselves and other road users. Many driver behavior factors were observed as being dynamic, intentional rule violations, and errors due to less driving experience, while others were the result of inattention, momentary mistakes, or failure to conduct a function, the latter often related to age [4,5].

Driving behavior identification has been observed as a central condition for traffic research and investigations, which generally give practical information in three principal subjects: road safety analysis, microscopic traffic simulation, and intelligent transportation systems (ITS) [6]. Identifying the driver's characteristics is essential to ease the driver's workload and enhance the essential services of active vehicle safety systems. However, these systems, based on the average of the driver's performance and the individual driver's attitudes, were seldom taken into consideration [7].

There have been ample efforts made to detect and remediate behaviors that reduce driving safety. Among the many tools developed to identify problematic driver behaviors, the driver behavior questionnaire (DBQ) stands out for its longevity and dominant employment [8]. In order to analyze risky driving behavior for road safety, the DBQ was first used as an operational tool in related studies in the 1990s [9,10].

Multi-criteria decision-making (MCDM) models have provided decision-makers with the best solutions for complex problems involving several criteria [11]. MCDM models have been successfully used in a wide range of problems in varied fields, including complex transportation systems [12,13]. The analytic hierarchy process (AHP) is considered one of the most common MCDM models supporting decision-makers in identifying and solving real-world problems. However, the conventional AHP suffers from assorted impediments, such as the tedious pairwise comparisons (PCs) and the lack of consistency in some cases. Furthermore, it requires more effort from the evaluators [14]. There are also some gaps, such as the eigenvector optimality problem and the fact that it ignores the interrelations among the different factors at different levels [15–19].

However, it is inconceivable to neglect the inconsistency in the pairwise comparison matrix (PCM), since inconsistency usually occurs in practice [20,21]. The inconsistency issue of the PCM is the notable drawback of the AHP, and this may lead to fallacious results. This would occur mainly if the PCM is 5 × 5 or bigger in the decision structure, where the tolerably consistent filling of this sized matrix requires significant cognitive effort from non-expert evaluators [16].

To overcome the lack of consistency in the conventional AHP and minimize the number of PCs in the questionnaire survey, Rezaei created the Best Worst Method (BWM) [22], which aims to unburden decision-makers by requiring fewer pairwise comparisons than the conventional AHP procedure. As it is a new technique, it only has a few applications so far, and some questions remain open regarding the conditions and limitations of the parsimonious AHP. For the BWM itself, the satisfactory consistency ratio value and the inconsistency amelioration methods can be addressed. Within other contexts, uncertainty in the BWM could be investigated. The multi-optimality of the BWM model could also be resolved from other perspectives [23].

This paper proposes an integrated AHP-BWM model to overcome the disadvantages of the traditional AHP method in the case of 5 × 5 or larger PCMs in the decision structure. Additionally, this model requires only a few PCs and allows for high consistency of the PCM. Moreover, we attempt to apply BWM to PCMs equal to or larger than 5 × 5, which might be demanding for layman evaluators. Even though Saaty stated that consistency becomes very poor when the number of factors exceeds 7 ± 2 [14], this is originally a theoretical verification of Miller's psychological investigation [24]. The proposed model leads to more consistent and reliable results, with a smaller number of PCs for the designated matrices.

### **2. Materials and Methods**

### *2.1. Driver Behavior Questionnaire (DBQ) Survey*

The questionnaire survey technique is a predefined series of questions used to collect data from individuals [9,10]. Some recent studies applied the driver behavior questionnaire to assess the real-world situation, by considering the significant traffic safety factors and interrelations between the observed factors [17,25]. This study utilized the driver behavior questionnaire (DBQ) as a tool to collect driver behavior data based on perceived road safety issues. The case study has been conducted using experienced drivers in the Hungarian capital city, Budapest. To do so, car drivers with at least fifteen years of driving experience were asked to fill in the DBQ face-to-face, which enhanced its reliability. The questionnaire survey was designed in two parts: The first part intended to accumulate demographic data about the participants, and these results are tabulated in Table 1. The results state the mean and standard deviation (SD) values of observed characteristics such as age, gender and driving experience based on drivers' responses to the DBQ.


**Table 1.** Pattern characteristics.

The second part of the DBQ, which has a design based on the Saaty scale, is used to analyze the significant driver behavior factors related to road safety. The previous study identified three types of deviant driving behavior, i.e., errors, lapses and violations, and investigated the relationship between driving behavior and accident involvement [9]. Some previous studies utilized the extended version of the DBQ to measure aberrant driver behaviors such as aggressive and ordinary violations, lapses and errors. Accordingly, these driver behaviors were defined as "errors" in unintended acts or decisions, while "slips and lapses" are tasks which we do without much conscious consideration. Furthermore, "violations" are intentional failures—intentionally doing incorrect action [26–28]. An "aggressive violation" was defined as inconsistent behavior towards other road users [27]. For evaluation purposes, the driver behavior factors are designed in a three-level hierarchical structure and each factor is symbolized with an 'F', as shown in Figure 1. These driver behavior factors have a significant influence on road safety, as discussed in the foregoing study [17].

**Figure 1.** The hierarchical model of the driver behavior factors related to road safety [17].

### *2.2. Overview of the Conventional Analytic Hierarchy Process (AHP)*

The conventional AHP method is based on a hierarchical decision structure constructed from the elements of the complex decision issue, and it has been applied extensively in many areas [29–31]. The hierarchical structure generally consists of multiple levels where the principal elements and sub-elements are located, and the importance of the linkages among the elements at the different levels determines the global scores for the elements at the last level. The main steps of the conventional AHP are:


The PCM is a positive square matrix *A*, where *xij* > 0 is the subjective ratio between the weights *wi* and *wj*, and *wx* is the weight vector derived from *A*. Saaty's eigenvector method, defined for PCMs, is given by the following equation:

$$A \cdot w_x = \lambda_{max} \cdot w_x \iff (A - \lambda_{max} \cdot I) \, w_x = 0 \tag{1}$$

where the maximum eigenvalue of the *A* matrix is λ*max*.

The structure of a consistent theoretical (6 × 6) PCM is defined as (2).


For every PCM, reciprocity holds: *xji* = 1/*xij*, with *xii* = 1 ensured. However, consistency generally does not hold for empirical matrices. The consistency criterion is:

$$x_{ik} = x_{ij} \cdot x_{jk} \tag{3}$$

The questionnaire surveys were filled out by the evaluators, who rated the driver behavior factors related to road safety on the Saaty scale. This scale ranges from 1, when two elements have the same importance, to 9, when one element is favored by at least an order of magnitude [14]. Empirical matrices filled in by evaluators are generally not consistent under the eigenvector method.

Saaty introduced the following check to examine the consistency of the PCM and thereby ensure robust outcomes:

$$\text{CI} = \frac{\lambda\_{\text{max}} - m}{m - 1} \tag{4}$$

where CI is the consistency index, λ*max* is the maximum eigenvalue of the PCM, and *m* is the number of rows in the matrix. The consistency ratio (CR) is then computed by the following equation:

$$\text{CR} = \text{CI}/\text{RI} \tag{5}$$

where RI is the random index, the average consistency index of randomly generated matrices; its values are presented in Table 2. The acceptable value of CR in the AHP approach is CR < 0.1.
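Equations (1), (4), and (5) can be sketched together in a short NumPy routine. This is an illustrative implementation, not the authors' code; the RI values hard-coded below are the standard Saaty random indices and are assumed to match Table 2.

```python
import numpy as np

# Standard Saaty random index values (assumed to match Table 2).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Priority vector and consistency ratio of a PCM, Equations (1), (4), (5)."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam_max = vals.real[k]                   # maximum eigenvalue of A
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalized principal eigenvector
    if m < 3:                                # CR is only meaningful for m >= 3
        return w, 0.0
    CI = (lam_max - m) / (m - 1)             # Eq. (4)
    return w, CI / RI[m]                     # Eq. (5)

# Perfectly consistent 3x3 example with weight ratios 4:2:1, so CR = 0.
w, CR = ahp_weights([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
```

For a consistent matrix λ*max* = *m*, so CI and CR vanish; an empirical matrix with CR ≥ 0.1 should be re-evaluated.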


**Table 2.** RI indices from randomly generated matrices.

Sensitivity analysis reveals the effects of changes in the main element on the sub-element ranking, and helps the decision-maker check the stability of the results throughout the process.

### *2.3. Overview of the Best Worst Method (BWM)*

The general BWM method for criteria and alternatives has been used to derive the weights of the criteria with a smaller number of comparisons and a more consistent comparison procedure. The best criterion is the one with the most vital role in making the decision, while the worst criterion has the opposite role [22]. Furthermore, not only does BWM derive the weights independently, but it can also be combined with other multi-criteria decision-making methods [32–34].

The main steps can be summarized as the following:


We consider a set of elements (*e*1, *e*2, ... , *en*), select the most important (best) element, and compare it to the others using the Saaty scale (1–9). This yields the best-to-others vector *Ea* = (*ea*1, *ea*2, ... , *ean*), where obviously *eaa* = 1. Similarly, comparing the others to the least important (worst) element on the same scale yields the others-to-worst vector *Eb* = (*e*1*b*, *e*2*b*, ... , *enb*)*<sup>T</sup>*.

After deriving the optimal weight scores, the consistency was checked through computing the consistency ratio from the following formula:

$$CR = \frac{\xi^\*}{\text{Consistency Index}}\tag{6}$$

where Table 3 provides us with the consistency index values:

**Table 3.** Consistency index (CI) values.


To obtain optimal weights for all elements, the maximum of the absolute differences $\left| \frac{w_a}{w_j} - e_{aj} \right|$ and $\left| \frac{w_j}{w_b} - e_{jb} \right|$ over all *j* is minimized. Assuming non-negative weights that sum to one, the following problem is solved:

$$\begin{aligned} \min_{w} \max_{j} \; & \left\{ \left| \frac{w_a}{w_j} - e_{aj} \right|, \left| \frac{w_j}{w_b} - e_{jb} \right| \right\} \\ \text{s.t.} \quad & \sum_j w_j = 1 \\ & w_j \ge 0, \text{ for all } j \end{aligned} \tag{7}$$

The problem can be transformed into the following problem:

$$\begin{aligned} & \min \xi \\ \text{s.t.} \quad & \left| \frac{w_a}{w_j} - e_{aj} \right| \le \xi, \text{ for all } j \\ & \left| \frac{w_j}{w_b} - e_{jb} \right| \le \xi, \text{ for all } j \\ & \sum_j w_j = 1 \\ & w_j \ge 0, \text{ for all } j \end{aligned} \tag{8}$$

By solving this problem, we obtain the optimal weights and ξ*.
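The min-max model of Equation (8) can be solved numerically with a generic constrained optimizer, as sketched below with SciPy's SLSQP. This is only an illustrative approach; the BWM literature usually formulates the problem as a dedicated (non)linear program. Each absolute-value constraint is split into two smooth inequalities.

```python
import numpy as np
from scipy.optimize import minimize

def bwm_weights(best_to_others, others_to_worst, best, worst):
    """Numerically solve the min-max BWM model of Eq. (8)."""
    e_b = np.asarray(best_to_others, dtype=float)   # best-to-others vector E_a
    e_w = np.asarray(others_to_worst, dtype=float)  # others-to-worst vector E_b
    n = len(e_b)

    cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]
    for j in range(n):
        # |w_best/w_j - e_bj| <= xi, split into two smooth inequalities
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] - (x[best] / x[j] - e_b[j])})
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] + (x[best] / x[j] - e_b[j])})
        # |w_j/w_worst - e_jw| <= xi
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] - (x[j] / x[worst] - e_w[j])})
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] + (x[j] / x[worst] - e_w[j])})

    w0 = (1.0 / e_b) / (1.0 / e_b).sum()    # start from reciprocal best-to-others
    x0 = np.append(w0, 0.0)                 # last decision variable is xi
    res = minimize(lambda x: x[n], x0, method="SLSQP",
                   bounds=[(1e-6, 1.0)] * n + [(0.0, None)], constraints=cons)
    return res.x[:n], res.x[n]              # optimal weights and xi*

# Perfectly consistent toy data: best = criterion 0, worst = criterion 2.
w, xi = bwm_weights([1, 2, 4], [4, 2, 1], best=0, worst=2)
```

For consistent comparison vectors, as in the toy data, the optimal ξ* is zero and the weights reproduce the stated ratios exactly.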

### *2.4. The Proposed AHP-BWM Model*

Let us have *n* criteria structured into *m* levels of a decision problem, so that *k* = 1, ... , *m* indexes the levels of the decision. On a certain level we might have *h* criteria, indexed *j* = 1, ... , *h*, and *J* = {1, ... , *n*} is the set of all criteria of the decision. Consequently, each criterion is identified by its position and its level: the first criterion on the first level, and so forth.

The first step of the suggested method is to select the level or levels on which the parsimonious AHP will be conducted. We propose selecting level(s) with enough criteria *h* that the evaluators are relieved from numerous pairwise comparisons. Moreover, following Saaty's 7 ± 2 rule for a PCM, it is recommended to select level(s) for which PCMs of size 5 × 5 or larger must be evaluated [14]. Based on our own experience, the pairwise comparisons for a 5 × 5 matrix can already be demanding for layman evaluators.

In the simple AHP approach, a complete matrix for *n* factors requires *n*(*n* − 1)/2 PCs to be evaluated, whereas the BWM method requires only 2*n* − 3 pairwise comparisons. This shows that BWM is an efficient approach that saves time and effort for both evaluators and analyzers. For example, with ten criteria, simple AHP requires 45 comparisons, whereas BWM requires just 17.
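The comparison counts above follow directly from the two formulas and can be checked in a couple of lines:

```python
def ahp_comparisons(n):
    # A full pairwise comparison matrix needs n(n-1)/2 judgments.
    return n * (n - 1) // 2

def bwm_comparisons(n):
    # Best-to-others (n-1) plus others-to-worst (n-1), minus the
    # best-to-worst pair counted twice: 2n - 3 judgments.
    return 2 * n - 3

# Savings quoted in the text: 45 vs. 17 comparisons for ten criteria,
# and 15 vs. 9 for the 6x6 matrix at the third level of the hierarchy.
```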

At the first level of the hierarchical model, the evaluators filled the matrix by factors with symbols *f*12, *f*13, *f*<sup>23</sup> in order to compare F1, F2 and F3.

In the pure AHP, the evaluator has to evaluate twenty-eight comparisons (12 comparisons for four (3 × 3) matrices + 1 comparison for one (2 × 2) matrix + 15 comparisons for (6 × 6) matrix).

However, in the proposed AHP-BWM model, the evaluator has to evaluate only twenty comparisons. The main steps of the proposed AHP-BWM model are discussed in Figure 2.

**Figure 2.** The main steps of the proposed BWM-AHP model.

### **3. Results and Discussion**

The AHP-BWM model was applied based on the size of matrices to more effectively evaluate the driver behavior factors related to road safety. Firstly, the analytic hierarchy process which was used for the hierarchical structure contains four (3 × 3) matrices and one (2 × 2) matrix. The best worst method was applied at the third level for the (6 × 6) matrix to compute the weight scores; this method helps us to perform nine comparisons for the (6 × 6) matrix instead of 15 comparisons if the AHP was applied. Furthermore, the reliability of the PCs' consistency in AHP and BWM was checked, and it was acceptable for all of them. The aggregated scores for F121, F122, F123, F124, F125 and F126 are depicted in Table 4.

**Table 4.** The weight scores for driver behavior factors related to road safety.


For the first level, the AHP results showed that "violations" (F1) is the highest ranked driver behavior factor related to road safety, based on evaluator answers on the DBQ. A previous study observed that "road traffic violations (RTVs)" are the most critical behavior causing risk to other road users [35]. Another study noticed that "violations", along with "errors", positively correlated with self-reported accident involvement [36]. After that, the results placed "errors" (F3) as the second-ranked factor, followed by "lapses" (F2), as shown in Table 5.


**Table 5.** The final weight scores for the factors at the first level.

For the second level, the AHP results indicated that "aggressive violation" (F12) is the most important driver behavior factor related to road safety. The previous study found an important relationship between aggressive violations and the number of accidents for Finland and Iran [37]. In addition, a recent study observed a positive correlation between the more aggressive violations and accident involvement [38]. Furthermore, the results evaluated "fail to apply brakes in road hazards" (F33) as the second most significant factor compared to other related factors.

The previous study noted that more fatalities and a high number of impact speed crashes could occur if the driver does not apply the brakes [39,40]. Meanwhile, "visual scan wrongly" (F32) is observed to be the lowest ranked driver behavior factor related to road safety, as shown in Table 6.



**Table 6.** The final weight scores for the factors at the second level.

For the third level, the combined AHP-BWM model was applied to one (3 × 3) matrix (AHP) and one (6 × 6) matrix (BWM). The model results showed "drive with alcohol use" (F126) as the most significant driver behavior factor related to road safety. This result can be justified by looking at Hungarian driving laws; there is a zero-tolerance policy towards drinking and driving [41]. Subsequently, the model results observed "disobey traffic lights" (F123) as the second ranked factor, followed by "failing to yield pedestrian" (F122). The previous study observed "beating traffic lights" as one of the frequent causes for the high number of crashes and injuries [42]. The results showed "fail to use personal intelligence" (F111) as the lowest ranked driver behavior factor compared to other related factors, as shown in Table 7.

**Table 7.** The final weight scores for the factors at the third level.


A recent DBQ study also rated "failing to use personal intelligent assistant" as the least respected driver behavior among Budapest and Islamabad drivers [43]. The application of the AHP-BWM model resulted in more consistent weights and made the prepared questionnaires easier for the decision-makers to complete.

### **4. Conclusions**

The use of driver behavior identification has been considered an important and complex way to address road safety issues because of the large amount of driver behavior data and its variation. In this paper, we discussed some problematic aspects of the conventional AHP and then designed an advanced AHP-BWM model for evaluating the driver behavior factors related to road safety. The study used a driver behavior questionnaire designed on the Saaty scale to collect driver behavior data from experienced drivers with at least fifteen years of driving experience. The proposed AHP-BWM model evaluates the driver behavior factors in a three-level hierarchical structure, with less evaluation time and better understandability for evaluators, since it requires fewer comparisons than the conventional AHP. The obtained results are more reliable than those of conventional models due to more consistent PCs (pairwise comparisons), which increases the efficiency of the proposed model.

The study evaluation outcomes show that "violations" is the most significant factor influencing road safety, followed by "errors", for the first level of the hierarchical structure. For the second level, the AHP results found that "aggressive violations" is the most important driver behavior factor related to road safety, followed by "fail to apply brakes in road hazards", based on the drivers' responses. The results showed "visual scan wrongly" as the least significant driver behavior factor related to road safety. For the third level, the AHP-BWM model identified "drive with alcohol use" as the most important factor, followed by "disobey traffic lights", compared to the other related factors. In contrast, the model results found "failing to use personal intelligence" to be the least significant driver behavior factor related to road safety.

Regarding further research, many other applications of the AHP-BWM model are needed to explore different real-world characteristics. The measurable benefits are clear: the model provides a faster and cheaper survey process, and the survey pattern can be extended more easily with this technique than with the conventional AHP and its complex PC questionnaire. This paper provides only one example, but many other applications can ultimately verify the technique. The combined AHP-BWM model will help researchers improve their future research by increasing consistency with fewer PCs and saving time in analyzing collected data.

**Author Contributions:** Conceptualization, S.M. and D.F.; methodology, S.M.; software, S.M.; validation, formal analysis, S.M.; resources, D.F., S.M. and O.G.; data curation, D.F. and S.M.; writing—original draft preparation, D.F., S.M., and O.G.; writing—review and editing, D.F., S.M., O.G., and T.B.; funding, T.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research is partly funded by the Austrian Science Fund (FWF) through the GIScience Doctoral College (DK W 1237-N23).

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **A Novel Integrated Subjective-Objective MCDM Model for Alternative Ranking in Order to Achieve Business Excellence and Sustainability**

### **Vladimir Marković <sup>1</sup>, Ljubiša Stajić <sup>2</sup>, Željko Stević <sup>3,</sup>\*, Goran Mitrović <sup>4</sup>, Boris Novarlić <sup>5</sup> and Zoran Radojičić <sup>6</sup>**


Received: 23 December 2019; Accepted: 10 January 2020; Published: 14 January 2020

**Abstract:** Achieving sustainability in constant development in every area in today's modern business has become a challenge on the one hand, and an imperative on the other. If the aspect of business excellence achievement is also added to it, the complexity of the system increases significantly, and it is necessary to model a system considering several parameters and satisfying the multi-criteria function. This paper develops a novel integrated model that involves the application of a subjective-objective model in order to achieve business sustainability and excellence. The model consists of fuzzy PIPRECIA (fuzzy pivot pairwise relative criteria importance Assessment) as a subjective method, CRITIC (criteria importance through intercriteria correlation) and I-distance method as objective methods. The goal is to take the advantages of these approaches and allow for more accurate and balanced (symmetric) decision-making through their integration. The integrated subjective-objective model has been applied in a narrow geographical area to consider and evaluate banks as a significant factor in improving the social aspect of sustainability. An additional contribution of the paper is a critical overview of multi-criteria problems in which the levels of the hierarchical structure contain a different (asymmetric) number of elements. A specific example has also been used to prove that only a hierarchical structure with an equal number of lower-level elements provides precise weights of criteria in accordance with the preferences of decision-makers referring to subjective models. The results obtained are verified throughout the calculation of Spearman and Pearson correlation coefficients, and throughout a sensitivity analysis involving a dynamic reverse rank matrix.

**Keywords:** sustainability; fuzzy PIPRECIA; CRITIC; I-distance; business excellence

### **1. Introduction**

The success and sustainability of each economic system is directly dependent on the ability to finance all capital projects considering different areas. In such a system, the banking sector can play a key role, especially when it comes to developing countries. The banking sector is the most important part of both the financial and economic systems of every country. Banks play a key role in financial intermediation through the following processes: asset mobilization, asset allocation, investment of national savings and other forms of capital. The extent to which the banking sector is developed within the country's financial system influences how efficient the allocation of capital will be, how dynamic the growth of the enterprise will be, and how expansive the overall economic development will be. It is evident that the banking sector in many countries has been experiencing financial difficulties of varying intensity in recent decades. These phenomena have been particularly pronounced in the past two decades, during which serious financial troubles have plagued many countries.

One of the ways to overcome the problems of financial crises and maintain confidence in the banking sector of countries is to determine the real quality of individual banks that are participants in the market. A clear picture of participants allows both the corporate and the retail sectors to avoid the pitfalls of malicious marketing and problematic business behavior. In addition, the reliance on high-quality participants in the banking sector, and the elimination of those who are not, is the first prerequisite for the smooth functioning of the economy and its continued growth and development. Accordingly, there is a real requirement to form an adequate model that will best assess the quality of each unit and, as a result, provide a complete ranking list of all the banks considered. The problem of ranking alternatives, i.e., banks in this particular case, is based on a large number of "angles" from which it is necessary to consider the quality of a bank and then, on that basis, to construct a "global (integral) quality index" that enables cross-comparisons. The large amount of business data possessed by all companies, and thus by banks, further complicates this analysis and requires the selection of only the most significant indicators that realistically reflect the performance and quality of the business. In addition, the model would need to enable ranked participants to identify their weaknesses and, on that basis, make certain adjustments that would improve the lagging segments of the business.

In accordance with the above, and the methodology developed and applied in this paper, the following goals can be identified. The first goal is to enrich the area that addresses multi-criteria problems through the formation of an integrated subjective-objective model. The second goal of the paper is the possibility of constructing a multi-criteria model that should enable the ranking of banks as economic units, based on the most significant indicators of their business performance. The third goal of the paper involves a brief critique of previous MCDM (multi-criteria decision-making) problems with an unbalanced hierarchical structure at lower levels of the hierarchy, which in practice has a great influence on final decisions. The fourth goal of the paper is to enhance the integration of uncertainty theory, such as fuzzy logic, with other approaches, and the integration of subjective-objective models in order to achieve more accurate and approximately optimal results. As previously mentioned, one main goal of this research can be synthesized: the developed model should provide precise answers to various questions and offer approximately optimal solutions in various fields, taking into account different constraints.

In addition to the introduction, the conducted research is described in five more sections. Section 2 focuses on the importance of a new approach to business excellence and a brief review of the situation in the field. Section 3 presents the defined methodology of the paper and the research flow. This section integrates different approaches. Section 4 summarizes the results, i.e., the ranking of banks on the basis of a previously extensive analysis and the multi-phase determination of the significance of the criteria used for the ranking. In Section 5, the proposed model is verified, i.e., the results are tested through a sensitivity analysis. Section 6 includes concluding considerations with an overview of instructions for future research.

### **2. Literature Review**

In recent decades, a new approach has been introduced to the business world, called "business excellence" (BE) in the literature. Facing an increasingly unstable and asymmetric business market, a large number of organizations are implementing BE strategies and quality systems as key elements of their business concept [1] that lead to improved business results [2]. The design, creation, implementation, and evaluation of these strategies require the reconsideration of how organizations work. The increasing application of different methods and tools, such as business process rearrangement, continuous monitoring of results, enterprise resource planning (ERP), lean management, or Six Sigma model management, has imposed the need for an integrated model in order to achieve BE at all levels in companies [3].

BE can be seen not only as a new understanding of the quality system [4], but also as an umbrella term that takes into account a broader range of issues like sustainability [5].

Increasing competition in the banking sector [6] has led banks to become actively involved in developing quality systems and business excellence in their business. This also appears as an inevitability since, according to Navid and Shabantaheri [7], many people believe that the traditional method of banking is not flexible. By raising the quality level, banks seek to retain their customers and build a certain level of loyalty. In this way, through the improvement of quality, each bank develops its business [8].

In order to determine the quality and the way of doing business, reports with indicators are formed for each bank separately according to different criteria. Following the adoption of reports, multi-criteria analysis methods are often applied to rank the banks and provide a clearer picture of comparative business. Stanujkic et al. [9] ranked five commercial banks in Serbia using various methods, such as: simple additive weighting (SAW), additive ratio assessment (ARAS), COmplex PRoportional ASsessment (COPRAS), multi-objective optimization on the basis of ratio analysis (MOORA), gray relational analysis (GRA), compromise programming (CP), VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). In their research, they used the four most important criteria: liquidity, efficiency, profitability, and capital adequacy, each consisting of three sub-criteria, with four sub-criteria for capital adequacy. They concluded that various aggregation and normalization procedures in some cases lead to different optimal solutions. Considering this limitation and its impact on the final results, a more accurate subjective-objective model has been applied in this research.

Ratković et al. [10] carried out a comparative quality analysis of the banking and postal sectors in Serbia by applying the SERVQUAL model based on five basic dimensions of customer expectations and perceptions. The findings of this study indicate that it is important to improve the situation regarding quality in the banking sector significantly. All dimensions of the SERVQUAL model need to be improved.

An integrated MCDM model consisting of four methods, fuzzy AHP, TOPSIS, VIKOR and ELECTRE, together with the balanced scorecard (BSC), was applied in [11] to determine the performance of three banks in Iran on the basis of 21 evaluation indexes. In the study [12], bank loan default classification models were evaluated using the TOPSIS method and the K-nearest neighbor algorithm. Wanke et al. [13] evaluated the performance of the Association of Southeast Asian Nations banks based on an integrated MCDM model involving fuzzy AHP, TOPSIS, and neural networks. In contrast to this research, they performed an evaluation based on CAMELS input parameters. CAMELS includes capital adequacy (C), asset quality (A), management quality (M), earnings (E), liquidity (L), and sensitivity to market risk (S). Gökalp [14] applied the PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) method to compare state, private and foreign banks located in the territory of Turkey, analyzing a six-year period in order to obtain better and clearer results. He obtained different results depending on the observation period, so it can be concluded that such an evaluation needs to be performed periodically. The evaluation was based on the input parameters of the CAMEL system. The integration of the AHP and TOPSIS methods is common, and the model has also been used in [15] to evaluate private banks in Turkey. That research included 21 banks evaluated on the basis of financial parameters. Önder and Hepsen [16] also applied the same combined AHP–TOPSIS model to evaluate Turkish banks over a nine-year period. A model of 57 criteria was created, based on which 17 banks were evaluated. The same model, but in fuzzy form [17], was applied for the evaluation of 12 banks in the territory of Serbia, including 19 evaluation parameters.
The combination of the entropy and TOPSIS methods was used in [18] to evaluate 12 banks based on five criteria: growth rate, number of branches, number of ATMs, net income, and lending. Beheshtinia and Omidi [19] applied AHP and modified digital logic (MDL) tools to determine the weight values of 23 criteria, while fuzzy TOPSIS and fuzzy VIKOR methods were implemented in the evaluation process of four banks in Iran. Since different approaches also produce differences in the rankings of alternatives, the Copeland method was used in the end to aggregate the results, i.e., the rankings of the alternatives. Ginevičius and Podvezko [20] state that effective performance and bank reliability are among the most significant criteria influencing the economic development of any organization. Accordingly, they carried out a comparative analysis of ten commercial banks in Lithuania on the basis of 15 criteria, applying various MCDM methods: SR (sum of ranks), SAW (simple additive weighting), TOPSIS and COPRAS (complex proportional assessment). In [21], the integration of the AHP method with VIKOR was applied in order to evaluate banks in India.

The use of different MCDM methods in integration with other approaches, such as fuzzy logic, is evidenced in a large number of examples in the literature [22]. For example, such an approach has been integrated with the analytic network process (ANP) in order to improve quality in the airline industry [23]. Dinçer and Yüksel [24] evaluated BSC criteria with an integrated hybrid multi-criteria decision-making approach using fuzzy AHP, fuzzy ANP, and fuzzy VIKOR methods. Ecer [25] combined fuzzy AHP with ARAS to develop a new integrated fuzzy MCDM model for evaluating M-banking services.

### **3. Methods**

Methods belonging to the multi-criteria decision-making field have been developed as mathematical tools to support decision-makers in solving complex problems [26–28]. Figure 1 shows the proposed methodology, consisting of three phases and a total of ten steps.

**Figure 1.** Proposed methodology and the research flow.

Creating a subjective-objective model for the multi-criteria ranking of the alternatives required certain activities divided into three phases. Each phase involved a number of steps, all equally essential for the proper formation of the model. The initial phase, related to data collection and preparation, involved as a first step becoming acquainted with the most important literature and achievements in the field of MCDM. As part of that step, it was necessary to address all the significant studies conducted so far in the field of ranking business entities, including banks. Since the model entailed consulting experts in one of its segments and accepting their preferences regarding various aspects (criteria) of bank performance, the second step involved careful selection and consultation in relation to the criteria and sub-criteria. The selection of criteria and alternatives was the following step, within which the angles from which the quality of a business entity would be viewed were determined. As banks represent one of the most important segments of both society and the economy, and are bearers of stable and sustainable economic growth, they were selected as the alternatives in this MCDM model. The last step in this phase required the selected experts to evaluate the criteria and sub-criteria on the basis of their preferences, in order to further determine the significance of each of them individually.

The second phase comprised the two most significant processes: the application of the fuzzy PIPRECIA and CRITIC methods, and the application of the I-distance method. The first process was to determine the weight coefficients for each of the main criteria and sub-criteria; thus, the first step in this phase was to convert expert preferences into numerical indicators, the next step involved calculating the weights of all criteria, and the last step concerned the elimination of the least significant sub-criteria within the main criteria. The second part of the phase considered the application of I-distance and, within three steps, the final list of criteria and alternatives was formed. It included the calculation of the I-distance values in the order based on the weights of the criteria, the ranking of the alternatives in accordance with each of the main criteria, and a comprehensive ranking list of the observed alternatives. The last, third, phase involved the application of a sensitivity analysis within which a reverse rank matrix was calculated. The procedure tested the sensitivity of the final results to eliminating the lowest-ranked alternative from the set of observed alternatives.

### *3.1. Fuzzy PIvot Pairwise RElative Criteria Importance Assessment—Fuzzy PIPRECIA Method*

The PIPRECIA method has some advantages compared to other methods, for example, SWARA [29,30]. PIPRECIA allows the criteria to be evaluated without first sorting them by significance. In addition, fuzzy PIPRECIA can readily handle MCDM problems with a large number of decision-makers (DMs) involved in the assessment of criteria. The fuzzy PIPRECIA method was developed by Stević et al. [31]. It consists of 11 steps, shown below.

Step 1. Forming a multi-criteria decision-making model, including a set of criteria and a team of decision-makers.

Step 2. In order to determine the relative importance *sj* of criterion *j* (*Cj*) in relation to the previous criterion *j* − 1 (*Cj*−1), each DM evaluates the criteria starting from the second criterion, Equation (1).

$$
\overline{s\_j^r} = \begin{cases}
> \overline{1} & \text{if } & \mathbb{C}\_j > \mathbb{C}\_{j-1} \\
= \overline{1} & \text{if } & \mathbb{C}\_j = \mathbb{C}\_{j-1} \\
< \overline{1} & \text{if } & \mathbb{C}\_j < \mathbb{C}\_{j-1}
\end{cases}
\tag{1}
$$

where *s<sup>r</sup><sub>j</sub>* denotes the evaluation of the criteria by DM *r*.

To obtain the matrix *sj*, the matrices *s<sup>r</sup><sub>j</sub>* need to be averaged using a geometric or arithmetic mean. DMs evaluate the criteria using the linguistic scales developed and defined in [31].

Step 3. Determining the coefficient *kj*

$$
\overline{k\_j} = \begin{cases}
= \overline{1} & \text{if } \quad j = 1 \\
2 - \overline{s\_j} & \text{if } \quad j > 1
\end{cases}
\tag{2}
$$

Step 4. Determining the fuzzy weight *qj*

$$
\overline{q\_j} = \begin{cases}
= \overline{1} & \text{if} \quad j = 1 \\
\frac{\overline{q\_{j-1}}}{\overline{k\_j}} & \text{if} \quad j > 1
\end{cases}
\tag{3}
$$

Step 5. Determining the relative weight of the criterion *wj*

$$
\overline{w\_j} = \frac{\overline{q\_j}}{\sum\_{j=1}^n \overline{q\_j}} \tag{4}
$$

In the following steps, it is necessary to apply the inverse methodology of the fuzzy PIPRECIA method.

Step 6. Evaluation using the scale defined above, but this time starting from the penultimate criterion.

$$
\overline{s\_j^r}' = \begin{cases}
> \overline{1} & \text{if } \quad \mathbb{C}\_{j} > \mathbb{C}\_{j+1} \\
= \overline{1} & \text{if } \quad \mathbb{C}\_{j} = \mathbb{C}\_{j+1} \\
< \overline{1} & \text{if } \quad \mathbb{C}\_{j} < \mathbb{C}\_{j+1}
\end{cases}
\tag{5}
$$

where *s<sup>r</sup><sub>j</sub>*' denotes the evaluation of the criteria by DM *r* in the inverse methodology.

It is again necessary to average the matrices *s<sup>r</sup><sub>j</sub>*'.

Step 7. Determining the coefficient *kj* 

$$
\overline{k\_j}' = \begin{cases}
= \overline{1} & \text{if } \quad j = n \\
2 - \overline{s\_j}' & \text{if } \quad j < n
\end{cases}
\tag{6}
$$

where *n* denotes the total number of criteria. This means that the coefficient of the last criterion is equal to (1, 1, 1).

Step 8. Determining the fuzzy weight *qj* 

$$
\overline{q\_j}' = \begin{cases}
= \overline{1} & \text{if} \quad j = n \\
\frac{\overline{q\_{j+1}}'}{\overline{k\_j}'} & \text{if} \quad j < n
\end{cases}
\tag{7}
$$

Step 9. Determining the relative weight of the criterion *wj* 

$$
\overline{w\_j}' = \frac{\overline{q\_j}'}{\sum\_{j=1}^{n} \overline{q\_j}'} \tag{8}
$$

Step 10. Determination of the final weights of the criteria.

$$\overline{w\_j}'' = \frac{1}{2}\left(\overline{w\_j} + \overline{w\_j}'\right) \tag{9}$$

Step 11. Check the obtained results using the Spearman and Pearson correlation coefficients.
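As a minimal illustration of Steps 2–10, the forward and inverse passes can be sketched in crisp (non-fuzzy) form; the fuzzy variant applies the same recursions to triangular fuzzy numbers and defuzzifies at the end. This is a hypothetical sketch, not the authors' implementation, and the function name and input layout are assumptions:

```python
def piprecia_weights(s, s_inv):
    """Crisp PIPRECIA sketch (Equations (1)-(9) with crisp numbers
    instead of triangular fuzzy numbers).
    s:     forward evaluations s_j for j = 2..n (length n - 1)
    s_inv: inverse evaluations s'_j for j = 1..n-1 (length n - 1)
    """
    n = len(s) + 1
    # Forward pass: k_j (Eq. (2)), q_j (Eq. (3)), w_j (Eq. (4))
    k = [1.0] + [2.0 - sj for sj in s]
    q = [1.0]
    for j in range(1, n):
        q.append(q[j - 1] / k[j])
    w = [qj / sum(q) for qj in q]
    # Inverse pass: k'_j (Eq. (6)), q'_j (Eq. (7)), w'_j (Eq. (8)),
    # starting from the last criterion, whose coefficient is 1
    k_inv = [2.0 - sj for sj in s_inv] + [1.0]
    q_inv = [0.0] * n
    q_inv[n - 1] = 1.0
    for j in range(n - 2, -1, -1):
        q_inv[j] = q_inv[j + 1] / k_inv[j]
    w_inv = [qj / sum(q_inv) for qj in q_inv]
    # Final weights (Eq. (9)): average of the two passes
    return [0.5 * (w[j] + w_inv[j]) for j in range(n)]
```

Both passes produce weights that sum to one, so the averaged final weights also sum to one; the Spearman and Pearson checks of Step 11 then compare the weights obtained from the two passes.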

### *3.2. CRiteria Importance through Intercriteria Correlation—CRITIC Method*

In DM problems, the criteria, as a source of information, possess weights reflecting the amount of information contained in each of them. These weights are referred to as "objective weights". Diakoulaki et al. [32] introduced the CRITIC method for determining the objective weights of criteria in MCDM problems based on two principles: the contrast intensity of each criterion, measured by its standard deviation, and the conflict between criteria, measured by the correlation coefficients between them [33].

The following steps describe the CRITIC method. It is assumed that there is a set of *n* feasible alternatives *Ai* (*i* = 1, 2, ... , *n*) and *m* evaluation criteria *Cj* (*j* = 1, 2, ... , *m*).

Step 1. Forming the decision matrix *X*, expressed as follows.

$$X = \begin{bmatrix} x\_{11} & x\_{12} & \dots & x\_{1m} \\ x\_{21} & x\_{22} & \dots & x\_{2m} \\ \dots & \dots & \dots & \dots \\ x\_{n1} & x\_{n2} & \dots & x\_{nm} \end{bmatrix}; \; i = 1, 2, \dots, n; \; j = 1, 2, \dots, m \tag{10}$$

The elements *xij* of the decision matrix (*X*) represent the performance value of *i*th alternative for *j*th criterion.

Step 2. Normalization of the original decision matrix using the following equation for benefit criteria:

$$r\_{ij} = \frac{x\_{ij} - \min\_{i} x\_{ij}}{\max\_{i} x\_{ij} - \min\_{i} x\_{ij}} \tag{11}$$

and for cost criteria:

$$r\_{ij} = \frac{\max\_i x\_{ij} - x\_{ij}}{\max\_i x\_{ij} - \min\_i x\_{ij}} \tag{12}$$

Step 3: Calculation of the symmetric linear correlation matrix *mjj'*, whose elements are the linear correlation coefficients between pairs of criteria.

Step 4: Determination of the objective weight of a criterion using Equation (13).

$$w\_{j} = \frac{C\_{j}}{\sum\_{j=1}^{m} C\_{j}} \tag{13}$$

where *Cj* is the quantity of information contained in the criterion *j* and is determined as follows:

$$C\_{j} = \sigma\_{j} \sum\_{j'=1}^{m} \left(1 - m\_{jj'}\right) \tag{14}$$

where σ*j* is the standard deviation of the *j*th criterion and *mjj'* is the linear correlation coefficient between criteria *j* and *j'*.
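Steps 1–4 can be condensed into a short NumPy sketch (a hypothetical illustration; the function name and the `benefit` flag are assumptions, not part of the original method description):

```python
import numpy as np

def critic_weights(X, benefit):
    """CRITIC sketch. X: n alternatives x m criteria (Eq. (10));
    benefit: boolean array of length m, True for benefit criteria."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Step 2: normalization, Eq. (11) for benefit and Eq. (12) for cost criteria
    R = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    sigma = R.std(axis=0, ddof=1)       # contrast intensity (standard deviation)
    M = np.corrcoef(R, rowvar=False)    # Step 3: symmetric linear correlation matrix
    C = sigma * (1.0 - M).sum(axis=0)   # information content C_j, Eq. (14)
    return C / C.sum()                  # Step 4: objective weights w_j, Eq. (13)
```

Criteria with higher contrast (larger standard deviation) and lower correlation with the remaining criteria thus receive larger objective weights.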

### *3.3. I-Distance Method*

In this paper, we have decided to apply the I-distance method as one of the most complete methods for measuring the distance between units of a basic set. Namely, this method respects the fact that the indicators do not have the same importance and that they are interdependent, which causes some duplication of information in the ranking process. Due to the application of partial correlation coefficients in the calculation process (explained later in this section), this method prevents the duplication of information contained in multiple indicators, while still valuing their individual importance by determining the order in which the indicators are introduced into the analysis. The I-distance method was originally introduced and defined in the publications of Professor Branislav Ivanovic in the 1960s and 1970s. Professor Ivanovic designed the method to rank countries by their level of development, described by various socio-economic indicators. Using this method, one can determine the relative position of a unit in relation to the others within the dataset. Linear (clustered and non-clustered) and quadratic distances were worked out in the method, and further research in this field led to the development of a multi-stage I-distance, which will be used in this paper [34–36].

The construction of the I-distance is iterative, and the number of iterations depends on the number of indicators included in the analysis. If we observe a set of indicators *C<sup>T</sup>* = (*C*1, *C*2, ... , *Ck*), which in this case describe the quality of a certain field of operations, the I-distance between two observed units (banks) *er* = (*c*1*r*, *c*2*r*, ... , *ckr*) and *es* = (*c*1*s*, *c*2*s*, ... , *cks*) is calculated as follows [35]:

$$D(r,s) = \sum\_{i=1}^{k} \frac{\left| d\_i(r,s) \right|}{\sigma\_i} \prod\_{j=1}^{i-1} \left( 1 - r\_{ji \cdot 12\dots j-1} \right) \tag{15}$$

where *di*(*r*,*s*) = *cir* − *cis* is the difference between the values of indicator *Ci* for units *er* and *es* (the discriminatory effect), σ*i* is the standard deviation of indicator *Ci*, and *rji*·12...*j*−1 is the partial correlation coefficient between indicators *Cj* and *Ci*, controlling for the first *j* − 1 indicators.
As pointed out above, the calculation of the I-distance is a procedure consisting of several iterations. The process first involves the entire discriminatory effect of indicator *C*1, the indicator that carries the most information about the level of "quality" of the unit. After that, the part of the discriminatory effect of the second indicator that was not covered by the first indicator is added. Similarly, the part of the information provided by the third indicator that was not covered by the first two is added. The whole process continues until, finally, the level of "quality" of the unit *ej*, defined by the set of indicators *C*, is obtained as:

$$D\_{j} = \sum\_{i=1}^{k} D\_{ji} \tag{16}$$

If the variables have different signs, resulting in negative correlation coefficients between the variables, it is necessary to use the square I-distance in the analysis [35]. The involvement of indicators with less information is larger in the square than in the plain distance, which is another reason to use the square I-distance when there is a large number of indicators. The square I-distance is calculated as follows:

$$D^2(r,s) = \sum\_{i=1}^k \frac{d\_i^2(r,s)}{\sigma\_i^2} \prod\_{j=1}^{i-1} \left(1 - r^2\_{ji \cdot 12\ldots j-1}\right) \tag{17}$$

Bank ranking in this paper will be carried out using the square I-distance, because of the occurrence of negative partial correlation coefficients between the observed indicators. Due to the specific problem being solved, a two-stage I-distance method will be applied, which involves calculating the I-distance for the units of the set in several stages, in this case two. First, the I-distance results will be obtained within each segment of the banks' performance measurement (liquidity, profitability, efficiency, solvency); the same method will then be applied again to the obtained results in order to get the final bank ranking in the RS. This approach will allow us to determine the best-performing banks for each of these segments, as well as the most successful one overall [36,38].
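The iterative construction of Equation (17) can be sketched as follows, measuring each unit's square I-distance from a fictive unit with minimal indicator values. This is a hypothetical one-stage illustration (the paper applies the method in two stages), and the partial correlation coefficients are obtained here from regression residuals:

```python
import numpy as np

def partial_corr(X, a, b):
    """Partial correlation of columns a and b, controlling for columns 0..a-1."""
    if a == 0:
        return np.corrcoef(X[:, a], X[:, b])[0, 1]
    Z = np.column_stack([np.ones(len(X)), X[:, :a]])
    ra = X[:, a] - Z @ np.linalg.lstsq(Z, X[:, a], rcond=None)[0]
    rb = X[:, b] - Z @ np.linalg.lstsq(Z, X[:, b], rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

def square_i_distance(X):
    """Square I-distance (Eq. (17)) of each unit from the fictive minimal unit.
    X: n units x k indicators, columns ordered by decreasing importance."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    ref = X.min(axis=0)                  # fictive unit with minimal values
    d2 = (X - ref) ** 2                  # squared differences d_i^2(r, s)
    var = X.var(axis=0, ddof=1)          # sigma_i^2
    D = np.zeros(n)
    for i in range(k):
        term = d2[:, i] / var[i]
        for j in range(i):               # strip information already used
            r = partial_corr(X, j, i)    # r_{ji.12...j-1}
            term *= (1.0 - r ** 2)
        D += term
    return D
```

Each indicator thus contributes only the part of its discriminatory effect not already captured by the more important indicators introduced before it.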

### **4. Results**

### *4.1. Forming the MCDM Model*

For the purpose of this paper, the RS banking sector, in which eight banks with a dominant share of foreign capital currently operate, has been analyzed. Their business performance was measured through the four most significant aspects of bank performance, which also represented the ranking criteria: liquidity, efficiency, profitability and solvency. The indicators used for each of the performance criteria, which represented the sub-criteria in the further analysis, are listed and explained below.

Liquidity of a bank is a complex concept, usually interpreted as the bank's ability to meet its obligations on maturity. The bank's management is required to continuously monitor liquidity from both the static and the dynamic aspect. A disruption of the liquidity of only one bank can bring the survival of the entire financial system into question. If a bank is unable to service its obligations, general confidence in the financial system is lost, which leads to the erosion of the monetary assets of all banks. The following indicators are used in theory and practice to assess liquidity [39]:


During the bank's liquidity management, indicators L1, L2, and L5 need to be maximized, i.e., a higher value of these ratios indicates better liquidity. Indicators L3 and L4 have the opposite meaning: a low value of these indicators implies high liquidity, and vice versa. When analyzing a bank, it should not be forgotten that excessively high liquidity causes low profitability.

Efficiency is captured by the phrase "do things right"; in this specific case it means that banks must manage their assets with the best possible strategy. A bank is efficient when it produces larger effects at the lowest possible cost, increasing productive assets by placing liabilities in the best way under the current circumstances [39]. Productive assets bring interest income, from which banks increase capital, provided they achieve a positive financial result. The indicators that provide information on efficiency are:


The data for this calculation are taken from the income statement, and banks tend to minimize indicators E1 and E2—a lower value reflects greater efficiency, and vice versa. Indicator E3 has the opposite interpretation: a higher value indicates greater efficiency.

Profitability indicators are crucial for business analysis and are defined as the bank's earning ability, i.e., its ability to generate income from invested assets and increase them over business cycles. They are used to evaluate the bank's profitability at a given time, usually at the end of the accounting period [41]:


Higher values of the profitability indicators signal greater earning power and thus the possibility of increasing share capital. Caution is needed when interpreting the profitability indicators, because the numbers can distort the true picture: the indicators should be maximized through an increase in net profit before tax, not through a reduction of capital, assets, interest income and the like.

Solvency, or capital adequacy, is an indicator that deserves more attention in banking practice. It is supported by the statutory minimum capital adequacy ratio of 12% and represents the bank's ability ultimately to fulfill all its obligations, even from the bankruptcy estate. "A bank is considered insolvent when its liabilities exceed the value of its assets or when realized losses exceed its equity capital". In that case, the bank does not have enough capital to cover the incurred losses, part of its assets consists of non-performing loans and receivables, and it cannot fulfill all its obligations [39,42]. The criteria used to test the solvency (capital adequacy) of a bank are:


When managing solvency, the bank should tend to minimize indicators S1 and S2 and maximize the others. Instead of total assets and total resources, operating assets and business assets are included in the calculation of these indicators. Banks are for-profit organizations, and business assets—the funds arising from operations—participate directly in making a profit, so they are justifiably included in the calculation. Total assets are the sum of operating assets and off-balance sheet assets, where the off-balance sheet items are sureties, guarantees, acceptances, bills of exchange and other forms of guarantees, uncovered letters of credit, irrevocable approved but undrawn loans, and the like. A characteristic of off-balance sheet positions is that they are potential liabilities or claims, and there is some uncertainty as to whether and when those contingent liabilities and receivables will be realized. Banks often use off-balance sheet transactions to earn additional income through commission fees. In conclusion, off-balance sheet assets are excluded from the calculation because the aim of the research is to show the real rank and position of the banks in the RS banking sector on the basis of their core business.

The criteria and sub-criteria described above were evaluated by banking experts and assigned significance according to the experts' preferences. The expert team comprised five members with years of experience in banking, finance, accounting and auditing: a university professor with 25 years of experience in bank management; a university professor with 20 years of experience in accounting and auditing, holding the title of certified accountant and auditor; two members with over 10 years of experience in senior banking positions, primarily in corporate banking; and an expert with many years of experience in auditing, currently employed at one of the four leading auditing companies in the world.

Considering that the model was based on measuring the performance of banks and that the above criteria aimed to measure the quality of each individual segment of bank operations in the best possible way, it is logical that banks operating in the market of the Republic of Srpska were taken as alternatives. The performance indicators of the banks referred to 2018 and data were taken from the official financial and audit reports. Table 1 shows the quantitatively expressed values of indicators for observed banks in 2018.


**Table 1.** Performance indicators of the banks in 2018.

### *4.2. The Evaluation of Criteria Using the Fuzzy PIPRECIA Method*

The evaluation of the criteria has been performed using a linguistic scale that involves quantification into fuzzy triangular numbers. Table 2 shows the evaluation of the criteria for fuzzy PIPRECIA and inverse fuzzy PIPRECIA by decision-makers and the average values (AV) which are used for further calculation. It is important to note that, compared to the original method developed in [31], the average value (AV) is used here to average decision-makers' preferences, which in this specific case contributed to the more accurate input parameters of the model. Whether a geometric mean or an average value is applied depends directly on a particular case. Both methods of averaging are valid.

**Table 2.** Evaluation of the main criteria by DMs for the fuzzy PIPRECIA and Inverse fuzzy PIPRECIA methods.


Based on the evaluation of the criteria and their averaging using Equation (1), the matrix *sj* is formed.

$$s\_j = \begin{bmatrix} \dots \\ 0.860, 0.950, 1.047 \\ 1.120, 1.263, 1.370 \\ 0.807, 0.990, 1.043 \end{bmatrix}$$

Using Equation (2), those values are subtracted from number two. Following the rules of operations with fuzzy numbers, the *kj* matrix

$$k\_j = \begin{bmatrix} 1.000, 1.000, 1.000 \\ 0.953, 1.050, 1.140 \\ 0.630, 0.737, 0.880 \\ 0.957, 1.010, 1.193 \end{bmatrix}$$

is obtained as follows:

According to Equation (2), the value $k\_1 = (1.000, 1.000, 1.000)$ and

$$k\_2 = (2 - 1.047, 2 - 0.950, 2 - 0.860) = (0.953, 1.050, 1.140)$$

Applying Equation (3), the value *qj*

$$q\_j = \begin{bmatrix} 1.000, 1.000, 1.000 \\ 0.877, 0.952, 1.049 \\ 0.997, 1.293, 1.665 \\ 0.835, 1.280, 1.741 \end{bmatrix}$$


is obtained as follows:

$$\overline{q\_1} = (1.000, 1.000, 1.000)$$

$$\overline{q\_2} = \left(\frac{1.000}{1.140}, \frac{1.000}{1.050}, \frac{1.000}{0.953}\right) = (0.877, 0.952, 1.049)$$

Applying Equation (4), the relative weights are calculated:

$$\overline{w\_1} = \left(\frac{1.000}{5.455}, \frac{1.000}{4.525}, \frac{1.000}{3.709}\right) = (0.183, 0.221, 0.270)$$
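As an illustration, the chain of calculations above (Equations (1)–(4)) can be sketched in a few lines of Python. The triangular fuzzy numbers are stored as (l, m, u) tuples, the *sj* values are the averaged expert assessments shown above, and the helper names are ours, not from the paper:

```python
# Sketch of the fuzzy PIPRECIA weight calculation (Equations (1)-(4)).
# Triangular fuzzy numbers are (l, m, u) tuples; s holds the averaged
# expert assessments for criteria 2-4 from the matrix s_j above.

def sub_from_two(s):
    # k_j = 2 - s_j; fuzzy subtraction reverses the bounds
    l, m, u = s
    return (2 - u, 2 - m, 2 - l)

def fuzzy_div(a, b):
    # a / b = (l_a / u_b, m_a / m_b, u_a / l_b)
    return (a[0] / b[2], a[1] / b[1], a[2] / b[0])

s = {2: (0.860, 0.950, 1.047), 3: (1.120, 1.263, 1.370), 4: (0.807, 0.990, 1.043)}

k = {1: (1.0, 1.0, 1.0)}
q = {1: (1.0, 1.0, 1.0)}
for j in (2, 3, 4):
    k[j] = sub_from_two(s[j])
    q[j] = fuzzy_div(q[j - 1], k[j])

# relative weights: each q_j divided by the (bound-reversed) column sums
sl, sm, su = (sum(v[i] for v in q.values()) for i in (0, 1, 2))
w = {j: (q[j][0] / su, q[j][1] / sm, q[j][2] / sl) for j in q}
print({j: tuple(round(x, 3) for x in w[j]) for j in w})
```

Running the sketch reproduces the values derived above, e.g. the relative weight of the first criterion comes out as approximately (0.183, 0.221, 0.270).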

For determining the final weights of the criteria, Equations (5)–(9), i.e., the methodology of the inverse fuzzy PIPRECIA method, are applied. Based on the evaluation by the DMs and the application of the average value, the matrix *sj'* is obtained.

$$s\_j' = \begin{bmatrix} 0.847, 1.053, 1.140 \\ 0.534, 0.643, 0.700 \\ 0.993, 1.090, 1.130 \\ \cdots \end{bmatrix}$$

Applying Equation (6), the values of matrix *kj*' are obtained:

$$k\_j' = \begin{bmatrix} 0.860, 0.947, 1.153 \\ 1.300, 1.357, 1.466 \\ 0.870, 0.910, 1.007 \\ 1.000, 1.000, 1.000 \end{bmatrix}$$

$$k\_4' = (1.000, 1.000, 1.000)$$

$$\overline{k\_3 \prime} = (2 - 1.130, 2 - 1.090, 2 - 0.993) = (0.870, 0.910, 1.007)$$

Applying Equation (7), the following values are obtained:

$$q\_j' = \begin{bmatrix} 0.588, 0.856, 1.028 \\ 0.678, 0.810, 0.884 \\ 0.993, 1.099, 1.149 \\ 1.000, 1.000, 1.000 \end{bmatrix}$$

$$\overline{q\_4'} = (1.000, 1.000, 1.000)$$

$$\overline{q\_3'} = \left(\frac{1.000}{1.007}, \frac{1.000}{0.910}, \frac{1.000}{0.870}\right) = (0.993, 1.099, 1.149)$$

After that, it is necessary to apply Equation (8) to obtain relative weights for the fuzzy Inverse PIPRECIA method.

$$\overline{w\_4'} = \left(\frac{1.000}{4.062}, \frac{1.000}{3.765}, \frac{1.000}{3.259}\right) = (0.246, 0.266, 0.307)$$

The results of the applied methodology are presented in Table 3. These results refer only to the calculation of the main criteria: liquidity, efficiency, profitability and capital adequacy. The weights for all sub-criteria across all levels of the hierarchy are calculated in the same way.


**Table 3.** Calculation and results obtained by the application of fuzzy PIPRECIA and Inverse fuzzy PIPRECIA for the main criteria.

Using Equation (9), the final weights of the main criteria are obtained. Before using this equation, it is necessary to defuzzify the values of the criteria. Table 3 shows the complete previous calculation, and the last column shows the defuzzified values of the relative weights of the criteria.
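The paper does not state which defuzzification operator is used before applying Equation (9); as a hedged illustration, the common expected-value formula (l + 4m + u)/6 for a triangular fuzzy number would look like this (the function name and the choice of operator are our assumptions):

```python
# Illustrative defuzzification of a triangular fuzzy weight.
# (l + 4m + u) / 6 is one common operator; the paper does not specify
# its choice, so the exact crisp values in Table 3 may differ slightly.

def defuzzify(t):
    l, m, u = t
    return (l + 4 * m + u) / 6

w1 = (0.183, 0.221, 0.270)      # fuzzy weight of C1 computed above
print(round(defuzzify(w1), 3))  # 0.223
```

The crisp values obtained this way feed into Equation (9), which combines the fuzzy PIPRECIA and inverse fuzzy PIPRECIA weights.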

Figure 2 shows the final result of the procedure for determining the individual significance of each main criterion. As explained above, the significance of the observed criteria was obtained from the personal preferences of the experts using the fuzzy PIPRECIA method; these are the "blue" values in Figure 2. The defuzzification of these values was then carried out to obtain the final weights of the main criteria, marked in gray. Based on them, the most significant criterion is C3 (profitability) with a weight coefficient of 0.295, followed by C4 (solvency or capital adequacy) with 0.282, then C1 (liquidity) with 0.226, and finally C2 (efficiency) with 0.215. This procedure is of great importance for the further analysis, since it determines the order in which the main criteria are introduced into the I-distance calculation.

**Figure 2.** Final values of the main criteria obtained using the fuzzy PIPRECIA method.

Spearman's correlation coefficient for the ranks obtained with fuzzy PIPRECIA and Inverse fuzzy PIPRECIA is 1.00, which means that these ranks are in complete correlation. Additionally, Pearson's correlation coefficient has been calculated for the weights of the criteria obtained using these approaches and is 0.966. In the same manner as previously shown, the values of all sub-criteria have been obtained, as shown in Tables 4–7.


**Table 4.** Calculation and results for the sub-criteria of the liquidity group.

**Table 5.** Calculation and results for the sub-criteria of the efficiency group.


**Table 6.** Calculation and results for the sub-criteria of the profitability group.



**Table 7.** Calculation and results for the sub-criteria of the capital adequacy group.

Spearman's correlation coefficient for the ranks obtained within the liquidity group is 1.00, which means that these ranks are in complete correlation. Pearson's correlation coefficient for the weights of the criteria, which is 0.938, has also been calculated.

Spearman's correlation coefficient for the ranks obtained within the efficiency group is also 1.00, while Pearson's correlation coefficient for the weights of the criteria is 0.987.

Spearman and Pearson correlation coefficients for the ranks obtained within the profitability group are 1.00.

Spearman's correlation coefficient for the ranks obtained within the capital adequacy group is 1.00, which means that these ranks are in complete correlation. Pearson's correlation coefficient for the weights of the criteria, which is 0.984, has also been calculated.

The final weights of the criteria were created as the product of the main criterion weights and the values of the weights obtained within individual groups. Table 8 presents the final weight results using the fuzzy PIPRECIA method. Since there are four sets of criteria that include a total of 18 sub-criteria not distributed equally across the groups, a further calculation is made in order to obtain as accurate results as possible. This is demonstrated through the following subsection, which outlines the way in which the formation of a hierarchical structure influences the weights of the criteria.


**Table 8.** The final values of the criteria using the fuzzy PIPRECIA method.

### *4.3. The Influence of Hierarchical Structure on Determining the Values of Criteria*

The purpose of this subsection is to point out the substantial influence of the hierarchical structure on the final values of the criteria. In most studies, if the evaluation is based on more than nine criteria, the criteria are arranged into hierarchical levels, as is the case in this paper. However, if the criteria at the first level have different numbers of sub-criteria at the second level, certain criteria are given a preference they do not really deserve: the results obtained then depend directly on the number of sub-criteria in each group rather than on the actual preferences of the decision-makers. To demonstrate the problem as clearly as possible and propose a solution, we take a simple example in which the weights of all criteria and sub-criteria are equal.

Example: Suppose we have four main criteria at the first level of the hierarchy, each with a different number of sub-criteria: the first has five, the second three, the third four and the fourth six. As stated, all are equally important, meaning that the value of each main criterion is w1 = w2 = w3 = w4 = 0.250 so that their sum equals one. If the values of the sub-criteria within each group are also equal, the results are as presented in Table 9.


**Table 9.** The values of the weights of criteria if they are all of the equal importance.

If all the criteria are equally important, then the values of all sub-criteria should be identical. In the same example, the value of each sub-criterion should therefore be approximately 0.056 (i.e., 1/18), which, we notice, is not the case. The sub-criteria of the second group have the highest values, since that group contains the fewest of them, which means that a hierarchical structure formed in this way does not allow objective results and criterion weights. Considering that in almost all cases the main criteria have different values, the influence of the sub-criteria of the group with the fewest elements is even greater. Thus, certain criteria undeservedly gain large values and have a greater impact on the final decision than they really should.
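The imbalance described above is easy to reproduce; the following sketch (our own illustration, not code from the paper) uses the group sizes from the example:

```python
# Four main criteria of equal weight (0.250) with 5, 3, 4 and 6 sub-criteria,
# all equally important within their group.
group_sizes = [5, 3, 4, 6]
main_w = 0.250

local = [main_w / n for n in group_sizes]
print([round(x, 4) for x in local])    # [0.05, 0.0833, 0.0625, 0.0417]

# If all 18 sub-criteria were truly equal, each should weigh 1/18:
print(round(1 / sum(group_sizes), 4))  # 0.0556
```

The group of three sub-criteria receives 0.0833 each, twice the 0.0417 of the group of six, even though all sub-criteria were declared equally important.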

Since the hierarchical structure in this paper was formed with an unequal number of elements in the different groups, a selection of criteria was made in order to obtain adequate and realistic criterion values for the evaluation of the alternatives. Within each group, the three most important criteria were selected, so that 12 of the 18 criteria were included in the further calculation. To maintain the constraint that the sum of all criterion weights equals one, the following equation was applied:

$$w\_{j}^{\prime} = w\_{m} + \left(\frac{\left(\sum\_{1}^{n} w\_{n}\right)}{m}\right) \tag{18}$$

where *wm* denotes the criteria remaining in the model and *wn* denotes the criteria exiting the model. *n* is the total number of criteria exiting the model and *m* is the total number of criteria remaining in the model.

In order to obtain the values of the 12 criteria that are equally represented in the hierarchical structure (Figure 3), the values of the criteria exiting the model are equally distributed to the criteria remaining in the model by applying Equation (18). Since the efficiency and profitability groups have had three sub-criteria each from the beginning, they remain unchanged. For the liquidity group, the procedure for obtaining the weights of the criteria is as follows:

$$0.074 = 0.046 + \left(\frac{0.041 + 0.044}{3}\right), \quad 0.082 = 0.054 + \left(\frac{0.041 + 0.044}{3}\right), \quad 0.076 = 0.048 + \left(\frac{0.041 + 0.044}{3}\right)$$

while for the capital adequacy group it is as follows:

$$0.110 = 0.069 + \left(\frac{0.027 + 0.029 + 0.032 + 0.036}{3}\right), \quad 0.097 = 0.056 + \left(\frac{0.027 + 0.029 + 0.032 + 0.036}{3}\right),$$

$$0.101 = 0.060 + \left(\frac{0.027 + 0.029 + 0.032 + 0.036}{3}\right)$$
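The redistribution in Equation (18) can be sketched as follows, using the liquidity-group weights from the worked example above (the helper name is ours):

```python
# Equation (18): the weights of eliminated sub-criteria are spread equally
# over the m retained sub-criteria, so the group's total weight is preserved.

def redistribute(kept, removed):
    bonus = sum(removed) / len(kept)
    return [w + bonus for w in kept]

kept = [0.046, 0.054, 0.048]   # weights of the three retained liquidity sub-criteria
removed = [0.041, 0.044]       # weights of the two eliminated sub-criteria

print([round(w, 3) for w in redistribute(kept, removed)])  # [0.074, 0.082, 0.076]
```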

**Figure 3.** The values of the criterion weights after creating the balanced hierarchical structure.

In order to ensure objectivity in determining the significance of the criteria, as a complement to the subjective fuzzy PIPRECIA method, the weights of the criteria were also calculated using the objective CRITIC method. The final criterion weights and their ranks, which represented the input parameters for ranking the alternatives with the I-distance method, were then obtained by averaging the two sets of weights. The outcomes of the previous analysis are the weights of all criteria and sub-criteria, but also the basis for eliminating certain less important criteria. Accordingly, only the main criteria, each with three sub-criteria, were retained in the further analysis. The sub-criteria L1 and L2 (marked as C11 and C12 in the analysis) were eliminated from further consideration within the liquidity criterion, since their weight coefficients were the lowest within the main criterion. Similarly, the sub-criteria S1, S2, S3 and S4 (marked as C41, C42, C43 and C44 in the analysis) were eliminated from the main solvency (capital adequacy) criterion. Within the efficiency and profitability criteria there was no need to eliminate any sub-criteria, since they already contained three each, so the hierarchical structure was not disrupted.

### *4.4. Determining the Significance of the Criteria Using the CRITIC Method*

The objective CRITIC method for obtaining the weights of the criteria uses the initial matrix of the multi-criteria model, which is shown in Table 10.


**Table 10.** The initial matrix of the multi-criteria model.

After that, it is required to perform normalization by applying Equations (11) and (12) as in the case of normalization in the MABAC method [43–45]. The normalized matrix with the calculated standard deviation is shown in Table 11.
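The benefit/cost min–max normalization used in the MABAC method [43–45] can be sketched as follows; this is a minimal illustration with made-up values, assuming the standard form of Equations (11) and (12):

```python
# Min-max normalization for benefit ("higher is better") and cost
# ("lower is better") criteria, as commonly used in MABAC.

def normalize(column, benefit=True):
    lo, hi = min(column), max(column)
    if benefit:
        return [(x - lo) / (hi - lo) for x in column]
    return [(x - hi) / (lo - hi) for x in column]

col = [0.12, 0.30, 0.21]                             # one illustrative criterion column
print([round(x, 2) for x in normalize(col)])         # [0.0, 1.0, 0.5]
print([round(x, 2) for x in normalize(col, False)])  # [1.0, 0.0, 0.5]
```

Both variants map the column onto [0, 1]; the cost variant simply reverses the direction so that the best (lowest) raw value becomes 1.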

**Table 11.** Normalized initial decision matrix and standard deviation.


After normalizing the matrix and calculating the standard deviation, it is necessary to determine the correlation among the criteria, which is shown in Table 12. Essentially, a 12 × 12 symmetric matrix is obtained.


**Table 12.** Correlation among the criteria.

Subtracting the elements of the correlation matrix from one and summing these values creates a 1 × 12 matrix; the summed values are then multiplied by the standard deviations to obtain the matrix *cj*. Summing these values and dividing each individual element by the sum yields the final criterion weights of the CRITIC method.
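For readers unfamiliar with CRITIC, the steps just described can be sketched on a small made-up normalized matrix (pure Python with ordinary Pearson correlation; this is not the paper's data):

```python
# CRITIC weighting sketch: c_j = sigma_j * sum_k (1 - r_jk), then normalize.
from statistics import mean, stdev

X = [[0.20, 0.80, 0.50],     # rows: alternatives, columns: normalized criteria
     [0.60, 0.10, 0.90],
     [0.90, 0.40, 0.30],
     [0.40, 0.60, 0.70]]
cols = list(zip(*X))

def corr(a, b):
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / ((sum((x - ma) ** 2 for x in a)
                   * sum((y - mb) ** 2 for y in b)) ** 0.5)

sigma = [stdev(c) for c in cols]                 # standard deviation per criterion
c_j = [sigma[j] * sum(1 - corr(cols[j], cols[k]) for k in range(len(cols)))
       for j in range(len(cols))]                # information content per criterion
w = [c / sum(c_j) for c in c_j]                  # final CRITIC weights
print([round(x, 3) for x in w])
```

By construction the weights are non-negative and sum to one; criteria with high dispersion and low correlation to the others receive the largest weights.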

$$w\_1 = 0.081,\; w\_2 = 0.087,\; w\_3 = 0.089,\; w\_4 = 0.076,\; w\_5 = 0.102,\; w\_6 = 0.078,$$

$$w\_7 = 0.075,\; w\_8 = 0.068,\; w\_9 = 0.068,\; w\_{10} = 0.089,\; w\_{11} = 0.091,\; w\_{12} = 0.097$$

### *4.5. Determination of the Final Values of the Criteria by Applying an Integrated Subjective-Objective Model*

Based on all previous calculations and the implementation of the integrated model, the final values of the criteria (Table 13), which are further used to evaluate the alternatives, are obtained.


**Table 13.** The final values of the criteria obtained by using the subjective-objective model.

As shown in Table 13, 12 sub-criteria were retained in the further analysis, three within each main criterion. The calculated weight coefficients determined the order in which the criteria and sub-criteria were introduced into the further analysis and the calculation of the I-distance values. The values in the table indicate that the weights of the individual sub-criteria are fairly uniform, ranging from 0.062 to 0.100.

### *4.6. Evaluation of the Alternatives by Applying I-Distance Method*

In this paper, the ranking of banks is performed by applying the squared I-distance, due to the appearance of negative partial correlation coefficients among the observed ranking indicators; owing to the specificity of the problem, the two-stage I-distance method is applied. The method involves calculating the I-distance for the units of a set in several stages, in this particular case two. Namely, the I-distance results within each segment of measuring the banks' performance (liquidity, profitability, efficiency, solvency) are obtained first, and the same method is then applied to the obtained results in order to reach the final ranking of the banks in the RS. The method allows us to determine which banks are the most successful in each of the above segments, but also which of them is the most successful bank overall [46].

The order of introducing the criteria, determined by their weights, was as follows: the profitability criterion was first introduced into the analysis, and within the criterion the order of introduction of the sub-criterion was C33, then C31, and finally C32. The next criterion of importance was solvency, whose sub-criterion C41 was first introduced, followed by C43 and last C42. After solvency, the criterion of liquidity was introduced into the analysis, and the significance and order of the sub-criteria was as follows: C12, C13 and finally C11. The last in terms of importance was the efficiency criterion, whose sub-criteria were introduced in the following order: C22, C21 and finally C23.

Before analyzing the results, for a better understanding of the I-distance calculation, the value of alternative A1 for indicator C1 is calculated:

$$D\_{11}^2 = \frac{(x\_{11} - x\_{l1})^2}{\sigma\_1^2} + \left(1 - r\_{21}^2\right)\frac{(x\_{12} - x\_{l2})^2}{\sigma\_2^2} + \left(1 - r\_{31}^2\right)\left(1 - r\_{32\cdot1}^2\right)\frac{(x\_{13} - x\_{l3})^2}{\sigma\_3^2}$$

$$= \frac{(0.0069 - (-0.0681))^2}{0.028^2} + \left(1 - (-0.0197)^2\right) \cdot \frac{(0.088 - (-0.391))^2}{0.205^2} + \left(1 - 1^2\right)\left(1 - 0.219^2\right) \cdot \frac{(0.006 - (-0.061))^2}{0.025^2} = 12.401$$

The order of introducing the criteria and sub-criteria has been explained previously, and accordingly this calculation refers to the profitability criterion and its corresponding sub-criteria. The same method of calculation has been applied for other criteria and sub-criteria, and the final values of I-distance used for ranking are given in the following section of the paper.
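As a hedged sketch of the calculation just illustrated—using plain pairwise correlations in place of the partial correlations for brevity, and toy data rather than the paper's—the squared I-distance can be written as:

```python
# Squared I-distance sketch: D^2 = sum_i d_i^2/sigma_i^2 * prod_{j<i} (1 - r_ji^2),
# with criteria ordered by decreasing importance. Plain correlations stand in
# for the partial correlations of the full method, which simplifies the sketch.
from statistics import mean, pstdev

def sq_i_distance(unit, reference, data):
    cols = list(zip(*data))
    sigma = [pstdev(c) for c in cols]

    def corr(a, b):
        ma, mb = mean(a), mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den

    total = 0.0
    for i in range(len(unit)):
        term = (unit[i] - reference[i]) ** 2 / sigma[i] ** 2
        for j in range(i):
            term *= 1 - corr(cols[j], cols[i]) ** 2  # discount shared information
        total += term
    return total

units = [[0.5, 0.2], [0.9, 0.7], [0.1, 0.4]]   # toy data: 3 units, 2 criteria
ref = [min(c) for c in zip(*units)]            # fictive minimal unit as reference
print(round(sq_i_distance(units[1], ref, units), 3))
```

The first-introduced criterion contributes its full discriminatory effect, while each later criterion is discounted by the correlation it shares with the criteria already introduced.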

All of the above-described methods aimed to enable the ranking of the observed alternatives (banks) and to show the quality of each observed unit in relation to others. Data on the quality of alternatives within each of the main criteria observed, as well as overall, are presented in Table 14.


**Table 14.** The calculated values of I-distance and the rankings of the banks by each main criterion.

The data in the table show the results for each of the main criteria, as well as the final ranking of banks' performance in 2018. According to the first and most important criterion, the most profitable bank in the RS was NLB Bank, followed by Unicredit Bank, with Komercijalna Bank in third place. The lowest-ranked bank by this criterion was Pavlovic Bank, which achieved the worst results on all sub-criteria. The next criterion analyzed was solvency, where Hypo Bank was ranked best, followed by Komercijalna and Pavlovic Bank; the lowest-ranked banks were MF and NLB Bank. The third criterion in terms of importance was liquidity, where Sberbank, followed by MF and Pavlovic Bank, had the best results; the last-ranked was Komercijalna Bank, with much lower liquidity than the other banks. The last criterion by which the banks were ranked was efficiency: the most efficient was Unicredit Bank, followed by MF and NLB Bank, while Pavlovic Bank had the worst efficiency of all. Based on the results for each criterion, the final ranking of banks' performance was obtained: the most successful bank in the RS in 2018 was Unicredit Bank, followed by NLB and Hypo Bank, while Komercijalna and MF Bank took fourth and fifth place with a slight difference between them. Sberbank followed in terms of performance, then Nova Bank, and Pavlovic Bank was by far in last place.

### **5. Sensitivity Analysis and Discussion**

As part of the sensitivity analysis (SA), the dynamic impact on the rankings of the alternatives was checked, as shown in Figure 4.

**Figure 4.** Results of sensitivity analysis through a reverse rank matrix.

Modifying certain elements of the MCDM matrix, such as introducing a new alternative or eliminating an existing one, can influence the final preferences. Therefore, several scenarios are created in which modifications of the elements of the MCDM matrix are simulated. In every scenario the number of alternatives is changed and the model is then applied under the new conditions: each scenario removes the worst option from consideration, after which the model is applied to the new decision-making matrix. The objective of this part of the SA is to analyze the output of the proposed model under dynamic conditions. The ranking obtained by applying the novel subjective-objective model is A3 > A2 > A4 > A8 > A6 > A5 > A1 > A7. Alternative A7 is the worst, so in the first scenario it is removed and a new initial matrix with a total of seven alternatives is created. A new solution is generated with the following result: A3 > A2 > A4 > A8 > A5 > A6 > A1. The first scenario shows that A3 remains the best alternative and A1 the worst; in this set, only A5 and A6 exchange positions. The subsequent scenario eliminates alternative A1, as the worst, and the result is A3 > A2 > A4 > A8 > A5 > A6; no alternative has changed its rank compared to the previous scenario. Subsequently, alternative A6 is eliminated as the worst. Five alternatives remain in the further analysis, and the preferences are exactly the same as in the previous scenario: A3 > A2 > A4 > A8 > A5. The elimination of the worst-ranked alternative continues: after A5 is eliminated, the analysis is performed with the four remaining alternatives, whose ranking is A3 > A2 > A4 > A8. In this step, alternatives A4 and A8 have switched their places and A8 becomes the worst ranked. Accordingly, it is eliminated in the next step, and the order of the remaining three alternatives remains unchanged compared to the previous step: A3 > A2 > A4. In the last step, only alternatives A3 and A2 remain, and in their mutual comparison alternative A2 proves to be the better one.

Based on this analysis, we can conclude that eliminating the last-ranked alternative has not resulted in a significant change in the final order of alternatives. Alternatives A5 and A6, and alternatives A4 and A8, switched their places, but observing the original ranking list and the I-distance values, the differences between these alternatives are almost negligible (10.017 > 9.939 and 12.508 > 12.364, respectively). The change between alternatives A3 and A2 is due to the order of introducing the criteria into the I-distance calculation, in which the discriminatory effect of the first-introduced criterion is fully evaluated, while the others are reduced in proportion to the partial correlation coefficients (as explained in the section on the I-distance method).
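The elimination procedure used in the SA can be sketched generically as follows. The score dictionary below is purely illustrative; in the paper the whole model is recomputed after each elimination, which is why ranks such as A5/A6 can flip, whereas fixed scores keep the order static:

```python
# Sensitivity-analysis loop: repeatedly drop the worst-ranked alternative
# and re-rank the remaining set. Scores here are illustrative stand-ins
# for the recomputed I-distance values.

scores = {"A3": 14.1, "A2": 13.0, "A4": 12.5, "A8": 12.4,
          "A6": 10.0, "A5": 9.9, "A1": 7.2, "A7": 3.1}

remaining = dict(scores)
orders = []
while len(remaining) > 1:
    order = sorted(remaining, key=remaining.get, reverse=True)
    orders.append(order)
    remaining.pop(order[-1])      # eliminate the worst-ranked alternative

for o in orders:
    print(" > ".join(o))
```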

The concept of this paper implied that the individual preferences of experts in specific fields were merged with objective multi-criteria decision-making methods in one model. Their preferences were transformed into numerical indicators through various mathematical transformations that determined the significance of each of the four main criteria, on the basis of which they were further introduced into the analysis. The area of interest was the RS banking sector, and the research included the eight banks headquartered in the RS. Although most economists consider solvency the most important aspect of bank performance, newer studies and expert views have confirmed that profitability is now more important than solvency, since good profitability also guarantees good solvency. It is important to emphasize that, according to the experts, the differences in the significance of the four main criteria were extremely small: the most significant criterion (profitability) had a weight coefficient of 0.295, while the weight coefficient of the least significant (efficiency) was 0.215. After the formation of the hierarchical structure, certain sub-criteria were eliminated in order to obtain the same number of sub-criteria within each main criterion. This step is extremely important since it equalizes the importance of the individual sub-criteria, i.e., ensures that no criterion is in a subordinate position in the further analysis. Subsequently, using the two-stage squared I-distance method, the calculations were made and the final ranking lists of the alternatives (banks) were obtained for each of the main criteria, as well as a comprehensive ranking list. The results show that banks with a long tradition that operate in several European countries achieved the best results in this ranking.
If average values are taken as a threshold of success, only four banks are above the threshold in terms of profitability; Nova Bank, Hypo Bank, Sberbank and Pavlovic Bank are below the level considered the lower profitability limit. When it comes to solvency, again only three banks have higher-than-average solvency, in this case Hypo, Komercijalna and Pavlovic Bank. The next observed criterion is liquidity, within which four banks have higher-than-average values, namely Sberbank, MF Bank, Hypo Bank and Pavlovic Bank. The last analyzed main criterion is efficiency, where only Unicredit and MF Bank are above average. It is important to note that within this criterion Unicredit Bank differs drastically from the rest, which is the result of high-quality placements that keep provisioning costs low. In addition, high-quality sources of capital allow it a ratio of costs to interest income that favors the revenue side almost sevenfold.

### **6. Conclusions**

In this paper, a novel integrated subjective–objective methodology for solving MCDM problems has been created. Its development represents one of the main contributions of the paper, enriching the field of multi-criteria problem solving. In addition, a brief critique has been offered of previous MCDM models with an unbalanced hierarchical structure at the lower levels of the hierarchy, which in practice has a great influence on the final decision-making, i.e., on determining the weights of the criteria. This is explained and proven through a specific example. The foregoing reflects the scientific contribution of the study. In addition, the professional contribution of the research is the application of the developed multi-criteria model, which enables the ranking of banks as economic units based on the most significant indicators of their business performance.

One of the main ideas of this paper is to form a comprehensive model that accommodates both the subjective views and preferences of experts and objective multi-criteria decision-making methods. The reason for this approach lies in the fact that the symbiosis of the two can produce more accurate results, applicable in various situations and available to decision-makers at all levels. The model answers several questions: with respect to the hierarchical structure of the observed criteria, it determines their significance, and it balances their importance so that no particular criterion or sub-criterion is given undue preference. The results of applying this model have also been verified by the market. Namely, as the data are from 2018, the worst-ranked bank, Pavlovic Bank, ran into serious difficulties in 2019, during which it had to undergo a significant recapitalization and even a change of ownership and, consequently, of its management structure. The model ranked Unicredit and NLB as the best banks, and these are certainly the most stable players in the banking market, both in the RS and B&H and at the European level. These are banks with firm sources of capital, for which they do not pay high interest costs, and given the markets in which they operate, they are able to generate significant interest income and thus show exceptional results in terms of profitability and efficiency. The results of the model would be even more significant if the selected indicators were monitored at intervals shorter than one year, so that changes and possible improvements or deteriorations in individual banks' performance could be tracked. All of the above speaks in favor of the accuracy and applicability of the model, not only to banks but also to other economic units (enterprises, insurance companies, local governments, etc.) whose business performance can be measured by different quality indicators.

Future research may take several directions. Considering that every business, especially in banking, operates in a dynamic market and is subject to change, the model should adapt to new trends and challenges in banking. This primarily refers to research on the indicators, the expansion of existing ones or the introduction of new ones, as well as continuous consultation with experts in the field to identify changes in the significance of the criteria. New times bring new challenges for all business units, and therefore opportunities for changes in the importance of particular aspects of business excellence, which certainly requires permanent alignment of the subjective segment of the model. Further research also concerns the integration of uncertainty theories, such as fuzzy logic and rough sets, with other approaches, and the integration of subjective–objective models in order to achieve more accurate and approximately optimal results.

**Author Contributions:** Conceptualization, V.M., and Ž.S.; methodology, V.M., and Ž.S.; validation, L.S., B.N.; formal analysis, V.M., Z.R., and Ž.S.; investigation, G.M., and B.N.; writing—original draft preparation, V.M. and Ž.S.; writing—review and editing, L.S., and G.M.; supervision, Z.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Modelling of Autonomous Search and Rescue Missions by Interval-Valued Neutrosophic WASPAS Framework**

### **Rokas Semenas and Romualdas Bausys \***

Department of Graphical Systems, Vilnius Gediminas Technical University, Sauletekio al. 11, LT-10223 Vilnius, Lithuania; rokas.semenas@vgtu.lt

**\*** Correspondence: romualdas.bausys@vgtu.lt

Received: 20 December 2019; Accepted: 9 January 2020; Published: 13 January 2020

**Abstract:** The application of autonomous robots in search and rescue (SAR) missions is a complex task which requires a robot to make robust decisions in unknown and dangerous environments. However, imprecise robot movements and small measurement errors in the data obtained by robot sensors can affect the quality of autonomous environment exploration and should therefore be addressed when designing SAR robots. In this paper, a novel frontier evaluation strategy is proposed that addresses the technical, economic, social, and environmental factors of the sustainable environment exploration process, and a new extension of the weighted aggregated sum product assessment (WASPAS) method, modelled under interval-valued neutrosophic sets (IVNS), is introduced for autonomous mobile robots. The general-purpose Pioneer 3-AT robot platform is applied in simulated search and rescue missions, and the conducted experimental assessment shows the efficiency of the proposed method in the exploration of commercial and public-type buildings. By addressing the estimated measurement errors in the initial data obtained by the robot sensors, the proposed decision-making framework provides additional reliability when comparing and ranking candidate frontiers. The interval-valued multi-criteria decision-making method, combined with the proposed frontier evaluation strategy, enables the robot to exhaustively explore and map smaller SAR mission environments, as well as to ensure robot safety and efficient energy consumption in relatively larger public-type building environments.

**Keywords:** interval-valued neutrosophic sets; multi-criteria decision-making; WASPAS-IVNS; autonomous mobile robots; search and rescue

### **1. Introduction**

Nowadays, autonomous robot systems, such as industrial robots [1], autonomous cars [2], and social [3] and service robots [4], are increasingly applied to solve real-life problems, and therefore represent a constant object of discussion, not only from the technical but also from the social and ethical perspectives [5]. The steadily growing decision-making capabilities of autonomous robots enable such systems to replace humans in labour-intensive and dangerous tasks, such as infrastructure maintenance and inspection [6,7], or in environment exploration and data-gathering tasks, such as search and rescue missions [8,9].

In general, search and rescue missions are complex tasks in which autonomous robots must safely explore the disaster sites and provide rescue teams with important information, such as victim locations and status, environmental conditions, and the locations of dangerous objects [8,10,11]. While designing robots capable of addressing these tasks, several strategies can be taken into consideration. The robot's physical structure can be modified to address specific navigation requirements (e.g., opening doors [12]), or the robot's software components can be improved, namely, the environment perception module, the self-localisation module, the motion control module, and the decision-making module [13]. Generally, the same type of software components can be transferred and applied between different robots. Therefore, this research is aimed at improving the decision-making module, which is responsible for interpreting sensor-obtained data and converting it into the expected environment exploration behaviour in search and rescue missions.

Considering the lasting effects [14] that autonomous robots have when applied in search and rescue missions, the decision-making module is required to make efficient decisions by taking varying mission criteria into consideration. This problem is modelled from two different viewpoints: which criteria are applied for modelling an effective autonomous environment exploration module, and which strategy is applied for the decision-making process. Hence, in this paper a novel frontier evaluation strategy is proposed that addresses the technical, economic, social, and environmental factors of autonomous search and rescue missions. Also, a new extension of the weighted aggregated sum product assessment (WASPAS) multi-criteria decision-making method, modelled under interval-valued neutrosophic sets (IVNS) and named WASPAS-IVNS, is proposed for candidate frontier evaluation and selection in autonomous search and rescue missions.

This paper is structured in the following manner. The review of environment exploration methodology is presented in Section 2. Section 3 introduces a new WASPAS-IVNS framework which is the core part of robot decision-making module. Also, in this section, criteria selection and calculation process, robot architecture, and the proposed search and rescue strategy are explained in detail. The experimental evaluation methodology, results, and discussion are presented in Section 4. Conclusions and future work are presented in Section 5.

### **2. Autonomous Environment Exploration in SAR Mission**

### *2.1. Environment Exploration Methodology*

Environment exploration by autonomous mobile robots is a process through which an unknown environment is analysed and mapped by visiting all available areas. Many recent studies have aimed at improving this process [15,16], including research in extreme environments [11,17] and planetary exploration [18,19]. Although many different strategies have been introduced to address the unknown environment exploration problem, a popular and easy-to-implement basis for autonomous mobile robot testing remains the frontier-based environment exploration method, originally proposed by Yamauchi [20]. This strategy defines frontiers as the boundary between the known and unknown portions of the environment. By continuously directing the robot to previously unvisited frontiers, an exhaustive environment analysis can be achieved. Several papers address and implement this strategy; for example, an exploration planning strategy for large-scale unknown environments was proposed in [21], and a frontier point selection strategy based on frontier point optimisation and multistep path planning was proposed in [22].

In autonomous SAR missions, robots are expected to make robust decisions by addressing a number of conflicting requirements set by the stakeholders. Hence, the original frontier-based environment exploration approach, which evaluates a single criterion (distance to the frontier), is not effective in this context. A more effective approach is to compose a set of criteria and evaluate each candidate frontier accordingly. In search and rescue missions, autonomous robots are deployed as data-collecting tools that provide information to the rescue teams [9]. In this situation, firefighting teams, medic teams, robot providers, and victims can be considered stakeholders with different preferences [14]. Naturally, victims hope to be saved, and medic teams prefer robots to detect, reach, and constantly monitor the state of injured or trapped victims. On the other hand, firefighting teams may prefer robots to scan the disaster site and provide data about the general environment layout and dangerous objects in the vicinity to ensure the safety of rescue personnel. Finally, robot providers would prefer to keep their property economically viable and safe (e.g., to optimise energy usage and avoid direct environmental damage to the robot to reduce its maintenance costs). All these preferences can be denoted as environmental, technical, social, and economic factors that the robot's decision-making module must address during the environment exploration process. By analysing and balancing the contradicting stakeholder preferences, a set of criteria can be composed and applied to evaluate candidate frontiers [23].

### *2.2. Candidate Frontier Evaluation Methodology*

Considering the iterative frontier-based environment exploration approach and the heavily criteria-based nature of the problem, we argue that multi-criteria decision-making (MCDM) frameworks can be applied to improve the robot's decision-making module.

MCDM frameworks are robust tools that can be applied to model and solve real-world problems, such as various selection problems, performance evaluations, and safety assessments [24]. For example, MCDM methods for the lead-zinc flotation circuit selection problem and for the location selection problem for waste incineration plants were proposed in [25,26]. MCDM frameworks were also successfully applied to the house shape evaluation problem by Juodagalviene et al. [27]. Stojić et al. [28] proposed a methodology for supplier selection in manufacturing chains, and, more recently, an MCDM-based safety evaluation methodology for urban parks was introduced in [29]. Several survey papers also address MCDM framework applications in sustainable development [30,31].

In the field of robotics and autonomous robots, MCDM frameworks have also been the focus of several decision-making studies; for example, a selection method for automatically guided vehicles for warehouse automation was proposed in [32]. Ghorabaee [33] proposed a method for industrial robot selection, and a similar industrial robot selection problem was addressed in [34]. However, these papers focus on the robot selection problem and not on actual environment exploration by autonomous mobile robots. Autonomous search and rescue missions driven by an MCDM-based decision-making module were first introduced by Amigoni and Basilico [35]. Following this research, a PROMETHEE II method was proposed in [36] to improve the robot's decision-making ability, and a recent study by Bausys et al. [23] introduced a WASPAS extension by single-valued neutrosophic sets, directed at incorporating sustainability principles in autonomous environment exploration by mobile robots.

Although these strategies show efficiency in the iterative decision-making process, the capabilities of MCDM applications in autonomous search and rescue missions are yet to be exhaustively studied. In the discussed MCDM approaches, the decision-making module evaluates criteria based on the raw data obtained by the robot sensors, without evaluating measurement errors. However, in real-world scenarios, every sensor can produce small measurement errors, which can be addressed by the decision-making module of autonomous mobile robots to improve the exploration process. This is the main motivation to extend the MCDM frameworks and introduce a new decision-making strategy that takes advantage of interval-valued neutrosophic sets and reduces the impact of measurement errors in autonomous search and rescue missions.

### **3. Methods**

### *3.1. Autonomous Mobile Robot Architecture*

The proposed MCDM approach is applied to extend the decision-making module of the general-purpose four-wheel mobile ground robot Pioneer 3-AT [37]. This platform is chosen due to its applicability in search and rescue missions [35,38]. The robot movement and environment perception functions, namely the control of robot movement and rotation velocity, odometry information publishing, interpretation of sensor data, construction of two-dimensional environment map, and path planning, are managed by the robot operating system (ROS) [39]. However, the robot decision-making module is controlled explicitly by applying the proposed MCDM framework.

In this research, an autonomous search and rescue mission is modelled and simulated in the Gazebo software [40]. Hence, the software-provided pre-made Pioneer 3-AT robot is imported into the simulation and equipped with a virtual Hokuyo laser range scanner sensor. This sensor has a 260° line of sight and is the main perception device used by the robot to detect physical structures and obstacles in the exploration environment. Considering that victim and dangerous object recognition poses a set of problems that are out of the scope of this research, it is assumed that the robot has the sensing capabilities to identify these problem-related objects of interest (OOIs) with perfect accuracy.

To track its current position in space and the positions of detected OOIs, the autonomous robot builds a two-dimensional grid map [41] with a 0.1 m cell resolution, based on the data obtained by the laser range scanner sensor. In this environment representation model, the exploration area is divided into square cells, where each cell corresponds to a real-world geometrical space and contains the cell's occupancy probability. These probability values are in the range of [0, 100], where 0 denotes a free cell containing no visible obstacles, and 100 indicates that the corresponding area is occupied. In this model, the value of −1 indicates that the corresponding area has not yet been perceived and the cell value is unknown.

In the modelled search and rescue mission, the Pioneer 3-AT robot applies the iterative frontier selection strategy presented in Figure 1. At the start of the first iteration, no initial information about the environment and the locations of task-related OOIs is available to the robot. Therefore, for its first move, the robot is programmed to turn around by 360° and scan the surrounding area. Then, the environmental data obtained by the laser range scanner sensor are mapped onto a two-dimensional grid by applying the previously described grid map building methodology. When the map is updated, the robot estimates its position on the constructed grid by applying the ROS-provided self-localisation algorithm [41] and detects all groups of free cells that border unknown grid map cells. For each of these cell groups (frontiers), the centre point coordinate is calculated and added to the list of available candidate frontiers. Then, the robot activates the decision-making module, and each candidate frontier is evaluated by applying the criteria set presented in Table 1 and the proposed interval-valued neutrosophic MCDM framework. The highest-ranked candidate frontier is then selected by the decision-making module. Finally, the robot moves to the selected frontier and updates the partial grid map with the newly obtained data. This data acquisition, evaluation, and frontier selection process is repeated by the decision-making module until one of two conditions is met: the given time limit has passed, or there are no candidate frontiers left.

**Figure 1.** Iterative environment exploration strategy applied by the autonomous robot.
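The frontier detection step described above (free cells that border unknown cells, grouped into frontiers and reduced to centre points) can be sketched in pure Python. This is an illustrative sketch, not the authors' implementation: the function name, the grid encoding (−1 unknown, 0–100 occupancy), and the free-cell threshold are assumptions.

```python
UNKNOWN = -1
FREE_MAX = 25  # assumption: cells at or below this occupancy value are treated as free

def find_frontiers(grid, resolution=0.1):
    """Return the centre point (x, y), in metres, of every group of adjacent
    free cells that borders at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    neighbours = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def is_free(r, c):
        return grid[r][c] != UNKNOWN and grid[r][c] <= FREE_MAX

    def borders_unknown(r, c):
        return any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc] == UNKNOWN for dr, dc in neighbours)

    frontier = {(r, c) for r in range(rows) for c in range(cols)
                if is_free(r, c) and borders_unknown(r, c)}

    centres, seen = [], set()
    for cell in sorted(frontier):          # group adjacent frontier cells (DFS)
        if cell in seen:
            continue
        stack, group = [cell], []
        seen.add(cell)
        while stack:
            r, c = stack.pop()
            group.append((r, c))
            for dr, dc in neighbours:
                nb = (r + dr, c + dc)
                if nb in frontier and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        cx = sum(c for _, c in group) / len(group)
        cy = sum(r for r, _ in group) / len(group)
        centres.append((cx * resolution, cy * resolution))
    return centres
```

For a map whose left half is free and right half unknown, the single detected frontier is the column of free cells touching the unknown region, and its centre point is returned in map coordinates scaled by the 0.1 m cell resolution.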



Since the frontier selection strategy is extended by applying MCDM methodology, the next subsections are dedicated to the formulation of this MCDM problem, the criteria evaluation and weight selection methodology, and a detailed presentation of the proposed interval-valued neutrosophic WASPAS framework.

### *3.2. Problem Formulation*

In general, each individual candidate frontier, with a centre point *ai*(*x*, *y*) in an iteratively-obtained set of *m* candidate frontiers *A* = (*a*1, ... , *am*) can be evaluated by applying a set of *n* criteria, denoted by *C* = (*c*1, ... , *cn*). Each criterion in *C*, has an assigned weight value *wj*, indicating its relative importance to other criteria. By assigning *ai*(*x*, *y*) a vector of weighted criteria *C*(*ai*) = (*c*1(*ai*), *c*2(*ai*), ... , *cn*(*ai*)), and applying MCDM framework, global utility value *Q*(*ai*) of a candidate frontier can be measured and compared to other candidate values. Considering the proposed environment exploration strategy, the robot is directed to the candidate frontier with the highest *Q*(*ai*).

### *3.3. Criteria for Frontier Evaluation in Autonomous Search and Rescue Mission*

Criteria selection for an autonomous search and rescue mission is essential to design a balanced strategy that addresses the different requirements set by the stakeholders. According to the analysis of autonomous environment exploration strategies, robot decision-making modules are commonly driven by spatial-information-based criteria [36], such as distances between the robot and other objects. However, for some search and rescue mission requirements, namely the economic and social criteria, there is currently no established, globally recognised evaluation methodology suitable for frontier selection in search and rescue missions. Therefore, in the context of this research, we propose to estimate such criteria values by analysing the spatial information.

A total of six criteria are proposed to design an effective decision-making strategy and to address technical, social, environmental, and economic requirements of autonomous search and rescue missions. The subset of criteria, namely *c*1, the distance to the control station, *c*2, the estimated amount of new information that would be gained after reaching the candidate location, *c*5, the estimated energy needed to reach the candidate location, and *c*6, the distance from the robot to candidate frontier location, are traditionally applied in modelling SAR missions [23,35,36]. In the context of this research, we also introduce two criteria, denoted as *c*3, the estimated danger to victims, and *c*4, the estimated danger to the robot, to address social and economic aspects of search and rescue missions. The complete criteria set is presented in Table 1.

The distance to the control station is a technical criterion that essentially defines whether a robot can transmit the obtained information after reaching the candidate frontier [35,36]. If the control station has a constant position *ps*(*x*, *y*), the distance to the candidate frontier *ai*(*x*, *y*) in *A* can be measured as a Euclidean distance. By minimising this criterion, the robot can be directed to the nearest candidate frontiers to conduct an exhaustive exploration of the nearby vicinity. Conversely, maximising this criterion adjusts the robot's behaviour to prioritise frontiers located further away, in order to conduct fast-paced environment coverage. Considering the operational parameters set for the robot path planning algorithm, the estimated measurement error for this criterion is set to ±1 m.

Similarly, the Euclidean distance from the current robot position *pcur*(*x*, *y*) to the candidate frontier *ai*(*x*, *y*) can be measured and minimised to avoid backtracking behaviour. This technical criterion should ensure that most of the frontiers around the robot would be visited before returning to the frontiers that are closer to the control station.

The last technical criterion in the proposed criteria set, namely, the estimated amount of new information that is expected to be gained by visiting the candidate frontier *ai*(*x*, *y*) is considered to be equal to the length of the frontier. In the context of this research, the autonomous robot applies an ROS-provided Gmapping module [41] to map the unknown environment. In this case, a grid representation is applied, where each cell contains occupancy information. By detecting free cell chains that are neighbouring unknown cells, robot decision-making module can measure candidate frontier length and direct the robot to frontiers that are estimated to provide more information. Considering the resolution of the reconstructed grid map, the estimated measurement error for this criterion is set to ±0.1 m.

The estimated energy that would be consumed by reaching the candidate frontier represents the movement cost and addresses the environmental factor of the search and rescue mission. The decision-making module estimates the criterion value by measuring the time *tai* needed to reach the candidate frontier *ai*(*x*, *y*). To estimate this criterion value, the autonomous robot computes a set of paths *R* = (*r*1, *r*2, ... , *rk*) for each candidate frontier *ai*(*x*, *y*) in set *A*, and each path *r* = (*wp*1, *wp*2, ... , *wpm*) is constructed from a set of *m* waypoints *wp*. In *r*, starting from the current robot position *pcur* = *wp*1 to the candidate frontier *ai*(*x*, *y*) = *wpm*, two connecting waypoints *wpi* and *wpi*+1 create a path segment. Therefore, the distance between two waypoints can be denoted as *d*(*wpi*, *wpi*+1), and the turning angle between two consecutive segments can be expressed as α(*wpi*, *wpi*+1, *wpi*+2). The criterion value can be measured by:

$$t\_{a\_i} = \frac{\sum\_{i=1}^{m-1} d(wp\_i, wp\_{i+1})}{v\_m} + \frac{\sum\_{i=1}^{m-2} \alpha(wp\_i, wp\_{i+1}, wp\_{i+2})}{v\_r} \tag{1}$$

where *vm* = 0.1 m/s is the minimum robot movement velocity, and *vr* = 0.1°/s is the minimum robot rotation velocity. Naturally, the decision-making module should minimise this criterion to prolong the robot's operation time. Considering the operational parameters set for the robot path planning algorithm, the estimated measurement error for this criterion is set to ±10 s.
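The travel-time estimate of Equation (1) can be sketched as follows, under the assumption (not stated explicitly in the text) that α is the absolute heading change, in degrees, at each intermediate waypoint; the function and constant names are illustrative.

```python
import math

V_M = 0.1  # minimum movement velocity, m/s (from the text)
V_R = 0.1  # minimum rotation velocity, deg/s (from the text)

def travel_time(waypoints):
    """Estimated traversal time of a path: total segment length divided by v_m
    plus total turning angle at intermediate waypoints divided by v_r."""
    # sum of Euclidean segment lengths d(wp_i, wp_{i+1})
    dist = sum(math.dist(waypoints[i], waypoints[i + 1])
               for i in range(len(waypoints) - 1))
    # sum of absolute heading changes alpha(wp_i, wp_{i+1}, wp_{i+2})
    turn = 0.0
    for i in range(len(waypoints) - 2):
        a, b, c = waypoints[i], waypoints[i + 1], waypoints[i + 2]
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        d = abs(math.degrees(h2 - h1)) % 360
        turn += min(d, 360 - d)  # smallest angle between the two headings
    return dist / V_M + turn / V_R
```

For an L-shaped path of two 1 m segments with one 90° turn, this yields 2/0.1 + 90/0.1 = 920 s, which illustrates why slow minimum velocities make this criterion dominate long, winding paths.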

The estimated damage incurred by following the planned path is an important economic criterion, which addresses robot safety in search and rescue missions. During these missions, several events may occur that can directly affect the autonomous environment exploration process (e.g., some parts of the building can collapse, blocking a previously traversable path or damaging the robot). High radiation, open fire sources, and other dangerous obstacles can also affect the robot, making it unable to continue the mission [9]. In the context of this research, such objects are treated as dangerous objects of interest (OOIs) that are randomly distributed throughout the search and rescue environment. The estimated danger to the robot is calculated by introducing a penalty point system. The decision-making module calculates the Euclidean distances *dd* from each waypoint *wp* in a path *r* to all known dangerous OOIs in *Od* = (*OOId*1, *OOId*2, ... , *OOIdn*). The penalty points *pp* increase linearly from 0 to 3 depending on the distance between the dangerous OOI and each *wp* in *r*. For example, if the distance between *wpi* and the OOI is greater than 3 m, the robot is safe and receives no penalty points; if the distance is 2 m, the robot receives one penalty point; and if the distance is 0.5 m, the robot receives 2.5 penalty points. Considering the resolution of the reconstructed grid map, the estimated measurement errors for point-based criteria are set to ±0.2 p, and the criterion value *ppi* is estimated as the sum of all penalty points for the path, by the following equation:

$$\begin{aligned} pp\_i &= \sum\_{j=1}^{n} \sum\_{i=1}^{m} d\_d(wp\_i, OOI\_{d\_j}) \\ d\_d(wp\_i, OOI\_{d\_j}) &= \begin{cases} 3 - d(wp\_i, OOI\_{d\_j}), & \text{if } d(wp\_i, OOI\_{d\_j}) < 3 \\ 0, & \text{otherwise} \end{cases} \end{aligned} \tag{2}$$
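A minimal sketch of the penalty point system in Equation (2): every waypoint–OOI pair closer than the 3 m safe radius contributes (3 − distance) penalty points. The function and variable names are illustrative assumptions.

```python
import math

SAFE_RADIUS = 3.0  # metres; dangerous OOIs further away incur no penalty

def path_penalty(waypoints, dangerous_oois):
    """Sum of penalty points over all waypoints and all known dangerous OOIs,
    each pair contributing max(0, 3 - distance) points."""
    return sum(max(0.0, SAFE_RADIUS - math.dist(wp, ooi))
               for ooi in dangerous_oois
               for wp in waypoints)
```

With a single waypoint at the origin, an OOI 2 m away yields 1 penalty point and an OOI 0.5 m away yields 2.5, matching the examples in the text.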

Finally, the estimated danger to the victim is a social criterion, designed to direct the robot closer to an injured person in order to collect more data about them and their environment. First, the decision-making module evaluates the Euclidean distance *dv* from each *wp* in the planned path *r* to all visible victims in *Ov* = (*OOIv*1, *OOIv*2, ... , *OOIvn*). If *dv* < 6 m, the danger posed to the victim by nearby dangerous OOIs in *Od* = (*OOId*1, *OOId*2, ... , *OOIdn*) can be estimated by applying the previously described linear point-based methodology. However, in this case, the area-of-effect zones of the dangerous OOIs are increased to 6 m, and the sum of danger points, denoted by *dpi*, can be estimated by the following equation:

$$\begin{aligned} dp\_i &= \sum\_{j=1}^{n} \sum\_{i=1}^{m} d\_v(OOI\_{v\_i}, OOI\_{d\_j}) \\ d\_v(OOI\_{v\_i}, OOI\_{d\_j}) &= \begin{cases} 6 - d(OOI\_{v\_i}, OOI\_{d\_j}), & \text{if } d(OOI\_{v\_i}, OOI\_{d\_j}) < 6 \text{ and } d(wp\_i, OOI\_{v\_i}) < 6 \\ 0, & \text{otherwise} \end{cases} \end{aligned} \tag{3}$$

By utilising technical, environmental, economic, and social criteria, the robot decision-making module can make more precise decisions in search and rescue missions. However, criteria weights need to be adjusted to ensure the balanced and efficient environment exploration process.

### *3.4. Weight Selection*

To ensure the efficiency of the proposed decision-making strategy, an expert group was formed to evaluate the applicability of the proposed criteria set and to determine the weight of each criterion. A total of seven experts working in the field of autonomous robots and decision-making systems participated in the criteria ranking and weighting process. To convert the varying expert opinions into a well-formed weight set, the stepwise weight assessment ratio analysis (SWARA) [42] is applied. The general weight calculation process of the SWARA method can be described as follows:


By analysing the autonomous unknown environment exploration problem, the experts have agreed on the criteria importance order, presented in Table 1. The criteria weight calculation process and results obtained by the SWARA method are provided in Tables 2 and 3.


**Table 2.** Pairwise comparison of criteria relative importance.

**Table 3.** Criteria weight determination for autonomous search and rescue mission.
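Since the expert-elicited comparative importance values of Tables 2 and 3 are not reproduced here, the SWARA weight calculation can only be sketched under the standard formulation of the method: criteria are sorted from most to least important, each lower-ranked criterion receives a comparative importance s_j, the coefficient is k_j = s_j + 1, the recalculated weight is q_j = q_{j−1}/k_j with q_1 = 1, and the final weights normalise q. The s-values used in the example below are hypothetical.

```python
def swara_weights(s_values):
    """SWARA weight calculation sketch. s_values[j] is the comparative
    importance s of the (j+2)-th ranked criterion relative to the criterion
    ranked directly above it; criteria are assumed pre-sorted by importance."""
    q = [1.0]                          # q_1 = 1 for the top-ranked criterion
    for s in s_values:
        q.append(q[-1] / (1.0 + s))    # k_j = s_j + 1;  q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]    # w_j = q_j / sum(q)
```

For instance, `swara_weights([0.1, 0.05, 0.2, 0.15, 0.1])` returns six weights that sum to one and decrease with rank, mirroring the structure of the six-criterion weight set derived by the experts.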


### *3.5. WASPAS-IVNS Framework*

The original weighted aggregated sum product assessment method was first introduced in [43]. Since then, the framework has been extended several times [44] to better address uncertainties in initial data. As a product of such development, a recent WASPAS extension by single-valued neutrosophic sets [45], namely WASPAS-SVNS, was introduced in [25]. This method essentially enables the robot designer to model decision-related information by truth, falsity, and indeterminacy functions and has already been applied in an autonomous environment exploration task [23]. The WASPAS framework is constructed from two objectives which provide additional reliability in the decision-making process. Also, the WASPAS method requires very few computational resources, which is especially relevant for real-time applications. However, to address the problem of the small errors produced by imprecise robot movements and imperfect sensor readings, we propose a new formulation of WASPAS method, modelled under interval-valued neutrosophic set environment, namely WASPAS-IVNS. The general properties of the interval-valued neutrosophic set (IVNS) [46], and the proposed WASPAS-IVNS framework are presented as follows.

If a set of criteria modelled under the interval-valued neutrosophic environment is considered as a domain of problem-related objects *X*, and *x* ∈ *X* is a value of the single criterion, an interval-valued neutrosophic set *N* ⊂ *X* can be denoted by a general form of:

$$N = \{ \langle \mathbf{x}, T\_N(\mathbf{x}), I\_N(\mathbf{x}), F\_N(\mathbf{x}) \rangle : \mathbf{x} \in X \}\tag{4}$$

where *TN*(*x*) : *X* → [0, 1], *IN*(*x*) : *X* → [0, 1], *FN*(*x*) : *X* → [0, 1], and 0 ≤ *TN*(*x*) + *IN*(*x*) + *FN*(*x*) ≤ 3 for all *x* ∈ *X*. Three membership degree functions define *N*: the truth-membership degree function *TN*(*x*), the indeterminacy-membership degree function *IN*(*x*), and the falsity-membership degree function *FN*(*x*). These functions are described by the subintervals *TN*(*x*) = [inf *TN*(*x*), sup *TN*(*x*)] ⊆ [0, 1], *IN*(*x*) = [inf *IN*(*x*), sup *IN*(*x*)] ⊆ [0, 1], and *FN*(*x*) = [inf *FN*(*x*), sup *FN*(*x*)] ⊆ [0, 1], with the sum condition 0 ≤ sup *TN*(*x*) + sup *IN*(*x*) + sup *FN*(*x*) ≤ 3.
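The definition above can be captured by a small data type that enforces the interval and sum conditions. This is an illustrative sketch; the midpoint-based `score` method is a common ranking convenience for interval-valued neutrosophic numbers, not the paper's deneutrosophication rule.

```python
from dataclasses import dataclass

@dataclass
class IVNN:
    """Interval-valued neutrosophic number: truth, indeterminacy, and falsity
    membership degrees, each given as an (inf, sup) subinterval of [0, 1]."""
    t: tuple  # (inf T, sup T)
    i: tuple  # (inf I, sup I)
    f: tuple  # (inf F, sup F)

    def __post_init__(self):
        for lo, hi in (self.t, self.i, self.f):
            assert 0.0 <= lo <= hi <= 1.0, "each degree must be a subinterval of [0, 1]"
        assert self.t[1] + self.i[1] + self.f[1] <= 3.0, "sup T + sup I + sup F <= 3"

    def score(self):
        """Illustrative midpoint score: high truth, low indeterminacy and
        falsity give values near 1."""
        mid = lambda iv: (iv[0] + iv[1]) / 2
        return (2 + mid(self.t) - mid(self.i) - mid(self.f)) / 3
```

Constructing `IVNN((0.6, 0.8), (0.1, 0.2), (0.1, 0.3))` is valid, whereas an interval with inf greater than sup is rejected by the validity checks.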

Like all multi-criteria decision-making methods, the WASPAS-IVNS approach begins with the construction of the decision matrix *Y*. Each matrix element *yij* ∈ *Y*, *i* = 1, ... , *n*, *j* = 1, ... , *m*, is modelled under the interval-valued neutrosophic environment and corresponds to the *i*-th criterion of the *j*-th alternative (in this research, a candidate frontier). This decision matrix can be expressed as:

$$Y = \begin{bmatrix} y\_{11} & \cdots & y\_{1m} \\ \vdots & \ddots & \vdots \\ y\_{n1} & \cdots & y\_{nm} \end{bmatrix} \tag{5}$$

Next, decision matrix elements are normalised by applying the following normalisation function:

$$\inf y_{ij}^{*} = \frac{\inf y_{ij}}{\max_{i} y_{ij}\sqrt{m}}, \qquad \sup y_{ij}^{*} = \frac{\sup y_{ij}}{\max_{i} y_{ij}\sqrt{m}} \tag{6}$$
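The normalisation of Equation (6) can be sketched as follows, assuming a crisp initial decision matrix (rows: criteria, columns: alternatives), so the inf and sup components coincide, and assuming the maximum is taken over each criterion's values. The function name and sample matrix are illustrative:

```python
import math

# Sketch of the normalisation of Equation (6) for a crisp decision matrix Y.
# Each criterion row is divided by its maximum value times sqrt(m), where
# m is the number of alternatives.
def normalise(Y):
    m = len(Y[0])                       # number of alternatives
    Y_norm = []
    for row in Y:
        denom = max(row) * math.sqrt(m)
        Y_norm.append([v / denom for v in row])
    return Y_norm

Y = [[3.0, 1.5, 2.0],
     [0.5, 1.0, 0.8]]
Y_star = normalise(Y)
```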

After element normalisation, the neutrosophication step is performed, and the initially crisp values of the decision matrix are transformed into interval-valued neutrosophic numbers. For this conversion, modification rates presented in [25] are applied.

After this step, the first objective of WASPAS-IVNS framework, which is based on the sum of the total relative importance of the alternative *j*, is calculated by the following equation:

$$Q_{j}^{(1)} = \sum_{i=1}^{L_{\max}} (y_{n}^{*})_{ij} \cdot w_{i} + \left(\sum_{i=1}^{L_{\min}} (y_{n}^{*})_{ij} \cdot w_{i}\right)^{c} \tag{7}$$

where *j* is the alternative, and (*y*∗*n*)*ij* are interval-valued neutrosophic members with weights *wi*. *Lmax* corresponds to the criteria set members that are maximised, and *Lmin* corresponds to the criteria set members that are minimised.

The second objective of the WASPAS-IVNS framework, which is based on the product of the total relative importance of alternative *j*, can be calculated by the following equation:

$$Q_{j}^{(2)} = \prod_{i=1}^{L_{\max}} \left((y_{n}^{*})_{ij}\right)^{w_{i}} \cdot \left(\prod_{i=1}^{L_{\min}} \left((y_{n}^{*})_{ij}\right)^{w_{i}}\right)^{c} \tag{8}$$

Members of this function share their definitions with those provided for Equation (7). Finally, the value of the joint generalised criteria is determined by:

$$Q_{j} = 0.5\, Q_{j}^{(1)} + 0.5\, Q_{j}^{(2)} \tag{9}$$

To complete the WASPAS-IVNS objectives, the following interval neutrosophic algebra operations are applied. The multiplication of an interval-valued neutrosophic number (IVNN) *y*∗*n* = ⟨[*inf tn*, *sup tn*], [*inf in*, *sup in*], [*inf fn*, *sup fn*]⟩ by a positive real number λ is defined by Equation (10), and the complement of an IVNN is defined by Equation (11). The sum of two IVNNs *y*∗*n*1 = ⟨[*inf tn*1, *sup tn*1], [*inf in*1, *sup in*1], [*inf fn*1, *sup fn*1]⟩ and *y*∗*n*2 = ⟨[*inf tn*2, *sup tn*2], [*inf in*2, *sup in*2], [*inf fn*2, *sup fn*2]⟩ can be calculated by applying Equation (12). The power function of an IVNN and a positive real number λ, required by the second WASPAS-IVNS objective, is defined by Equation (13). Finally, the product of two IVNNs can be calculated by applying Equation (14):

*Symmetry* **2020**, *12*, 162

$$\lambda\left(y_{n}^{*}\right) = \left\langle \left[1-\left(1-\inf t_{n}\right)^{\lambda},\, 1-\left(1-\sup t_{n}\right)^{\lambda}\right], \left[\left(\inf i_{n}\right)^{\lambda}, \left(\sup i_{n}\right)^{\lambda}\right], \left[\left(\inf f_{n}\right)^{\lambda}, \left(\sup f_{n}\right)^{\lambda}\right] \right\rangle \tag{10}$$

$$\left(y_{n}^{*}\right)^{c} = \left\langle \left[\inf f_{n}, \sup f_{n}\right], \left[1-\sup i_{n}, 1-\inf i_{n}\right], \left[\inf t_{n}, \sup t_{n}\right] \right\rangle \tag{11}$$

$$\left(y_{n1}^{*}\right)+\left(y_{n2}^{*}\right) = \left\langle \begin{bmatrix} \inf t_{n1}+\inf t_{n2}-\inf t_{n1}\cdot\inf t_{n2},\\ \sup t_{n1}+\sup t_{n2}-\sup t_{n1}\cdot\sup t_{n2} \end{bmatrix}, \left[\inf i_{n1}\cdot\inf i_{n2},\, \sup i_{n1}\cdot\sup i_{n2}\right], \left[\inf f_{n1}\cdot\inf f_{n2},\, \sup f_{n1}\cdot\sup f_{n2}\right] \right\rangle \tag{12}$$

$$\left(y_{n}^{*}\right)^{\lambda} = \left\langle \left[\left(\inf t_{n}\right)^{\lambda}, \left(\sup t_{n}\right)^{\lambda}\right], \left[1-\left(1-\inf i_{n}\right)^{\lambda},\, 1-\left(1-\sup i_{n}\right)^{\lambda}\right], \left[1-\left(1-\inf f_{n}\right)^{\lambda},\, 1-\left(1-\sup f_{n}\right)^{\lambda}\right] \right\rangle \tag{13}$$

$$\left(y_{n1}^{*}\right)\cdot\left(y_{n2}^{*}\right) = \left\langle \left[\inf t_{n1}\cdot\inf t_{n2},\, \sup t_{n1}\cdot\sup t_{n2}\right], \begin{bmatrix} \inf i_{n1}+\inf i_{n2}-\inf i_{n1}\cdot\inf i_{n2},\\ \sup i_{n1}+\sup i_{n2}-\sup i_{n1}\cdot\sup i_{n2} \end{bmatrix}, \begin{bmatrix} \inf f_{n1}+\inf f_{n2}-\inf f_{n1}\cdot\inf f_{n2},\\ \sup f_{n1}+\sup f_{n2}-\sup f_{n1}\cdot\sup f_{n2} \end{bmatrix} \right\rangle \tag{14}$$
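The algebra of Equations (10)–(14) can be sketched as plain functions. An IVNN is represented as a tuple of three `(inf, sup)` intervals for truth, indeterminacy, and falsity; the function names are illustrative, not from the paper:

```python
# Sketch of the interval-valued neutrosophic operations of Equations
# (10)-(14). An IVNN is a tuple of three (inf, sup) intervals: (T, I, F).

def scalar_mul(lam, y):                          # Equation (10)
    (t1, t2), (i1, i2), (f1, f2) = y
    return ((1 - (1 - t1) ** lam, 1 - (1 - t2) ** lam),
            (i1 ** lam, i2 ** lam),
            (f1 ** lam, f2 ** lam))

def complement(y):                               # Equation (11)
    (t1, t2), (i1, i2), (f1, f2) = y
    return ((f1, f2), (1 - i2, 1 - i1), (t1, t2))

def add(a, b):                                   # Equation (12)
    (at1, at2), (ai1, ai2), (af1, af2) = a
    (bt1, bt2), (bi1, bi2), (bf1, bf2) = b
    return ((at1 + bt1 - at1 * bt1, at2 + bt2 - at2 * bt2),
            (ai1 * bi1, ai2 * bi2),
            (af1 * bf1, af2 * bf2))

def power(y, lam):                               # Equation (13)
    (t1, t2), (i1, i2), (f1, f2) = y
    return ((t1 ** lam, t2 ** lam),
            (1 - (1 - i1) ** lam, 1 - (1 - i2) ** lam),
            (1 - (1 - f1) ** lam, 1 - (1 - f2) ** lam))

def mul(a, b):                                   # Equation (14)
    (at1, at2), (ai1, ai2), (af1, af2) = a
    (bt1, bt2), (bi1, bi2), (bf1, bf2) = b
    return ((at1 * bt1, at2 * bt2),
            (ai1 + bi1 - ai1 * bi1, ai2 + bi2 - ai2 * bi2),
            (af1 + bf1 - af1 * bf1, af2 + bf2 - af2 * bf2))

y = ((0.6, 0.8), (0.1, 0.2), (0.1, 0.3))         # sample IVNN
```

With these operations, the first objective of Equation (7) can be accumulated by repeatedly applying `add` to `scalar_mul(w_i, y_ij)` over the maximised criteria, with the minimised part complemented via `complement`.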

To determine and select the highest-ranked candidate frontier, the obtained values can be compared by applying the interval-valued neutrosophic number comparison functions, namely the score function *s*(*Q*1), the accuracy function *a*(*Q*1), and the certainty function *c*(*Q*1). These functions can be expressed by the following equations:

$$s(Q_{1}) = \left[\inf t_{n1} + 1 - \sup i_{n1} + 1 - \sup f_{n1},\ \sup t_{n1} + 1 - \inf i_{n1} + 1 - \inf f_{n1}\right] \tag{15}$$

$$a(Q_{1}) = \left[\min\{\inf t_{n1}-\inf f_{n1},\, \sup t_{n1}-\sup f_{n1}\},\ \max\{\inf t_{n1}-\inf f_{n1},\, \sup t_{n1}-\sup f_{n1}\}\right] \tag{16}$$

$$c(Q_{1}) = \left[\inf t_{n1},\ \sup t_{n1}\right] \tag{17}$$
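A minimal sketch of the comparison functions of Equations (15)–(17), under the same tuple representation of an IVNN (`Q` is three `(inf, sup)` intervals; names are ours):

```python
# Sketch of the comparison functions of Equations (15)-(17). Each
# function maps an IVNN to an interval [lower, upper].

def score(Q):                                    # Equation (15)
    (t1, t2), (i1, i2), (f1, f2) = Q
    return (t1 + 1 - i2 + 1 - f2, t2 + 1 - i1 + 1 - f1)

def accuracy(Q):                                 # Equation (16)
    (t1, t2), _, (f1, f2) = Q
    return (min(t1 - f1, t2 - f2), max(t1 - f1, t2 - f2))

def certainty(Q):                                # Equation (17)
    (t1, t2), _, _ = Q
    return (t1, t2)

Q = ((0.6, 0.8), (0.1, 0.2), (0.1, 0.3))         # sample IVNN
```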

The comparison of two IVNNs by the score, accuracy, and certainty functions is completed sequentially: the score function is applied first, and ties are resolved by the accuracy function and then by the certainty function.
The degree of the possibility of the score function is determined by the following equation:

$$p(s(Q_{1}) \geq s(Q_{2})) = \max\left\{1 - \max\left(\frac{\sup(s(Q_{2})) - \inf(s(Q_{1}))}{\left(\sup(s(Q_{1})) - \inf(s(Q_{1}))\right) + \left(\sup(s(Q_{2})) - \inf(s(Q_{2}))\right)},\ 0\right),\ 0\right\} \tag{18}$$
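The possibility degree of Equation (18) compares two score intervals; a sketch follows (the degenerate zero-width case is handled with a crisp comparison, an assumption not spelled out in the text):

```python
# Sketch of the possibility degree of Equation (18) for two score
# intervals s1, s2; accuracy and certainty are compared analogously.

def possibility(s1, s2):
    lo1, hi1 = s1
    lo2, hi2 = s2
    width = (hi1 - lo1) + (hi2 - lo2)
    if width == 0.0:                 # degenerate intervals: crisp comparison
        return 1.0 if lo1 >= lo2 else 0.0
    return max(1 - max((hi2 - lo1) / width, 0), 0)
```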

The degrees of possibility for the accuracy and certainty functions are calculated in the same manner. Next, we provide a practical application example of the proposed WASPAS-IVNS framework in search and rescue missions.

### **4. Experimental Evaluation of WASPAS-IVNS Framework**

In this research, search and rescue missions with time restrictions are considered. Robot operation time is bounded by a 20-min time interval, within which the autonomous robot must map the exploration environment and mark the detected OOIs on it. In this experiment, the OOIs are limited to human victims and dangerous objects.

### *4.1. Search and Rescue Environment*

To assess the proposed WASPAS-IVNS framework in search and rescue missions, two indoor environments representing commercial-type and public-type buildings were considered. In commercial-type building environments (e.g., retail shops), rooms and staff-only areas are relatively small and are commonly connected to each other by a central hall. This type of structure requires the robot to constantly backtrack to previous locations in order to visit new ones. In contrast, public-type buildings, such as hospitals, are distinguished by wide open spaces and looping corridors, enabling the robot to observe more of the environment and detect dangerous OOIs in advance. Both environments are presented in Figure 2.

**Figure 2.** (**a**) The test environment representing commercial-type buildings; (**b**) The test environment representing public-type buildings.

Dangerous objects and victims are placed at random locations in both environments, as presented in Figure 2. In this research, red dots represent dangerous OOIs and yellow dots represent victims. The public-type building contains five victims and five dangerous objects that the robot should detect within the set time interval, and the commercial-type environment contains three victims and two dangerous objects. Next, an example of candidate frontier evaluation by the proposed WASPAS-IVNS framework is presented.

### *4.2. Frontier Evaluation by WASPAS-IVNS Framework*

To highlight the practical application of the proposed WASPAS-IVNS framework, an example solution to one of the autonomous robot decision-making iterations is provided. The public-type building environment information, as mapped by the robot at the considered candidate frontier selection iteration, is provided in Figure 3. One victim and one dangerous object have already been found by the robot and marked by yellow and red dots, respectively. The robot is located at the position marked by a black dot, and the black line represents its previous movement trajectory. The available frontier regions are coloured in blue, and the green dots represent the candidate frontier centre points *ai*(*x*, *y*) that the robot decision-making module must evaluate. At this iteration, the robot has a total of seven candidate frontiers to choose from.

**Figure 3.** The public-type building environment mapped by the robot at the considered candidate frontier selection iteration.

First, criteria values are estimated for each candidate frontier location by applying the proposed methodology. In this iteration, no new OOIs were detected around the candidate frontiers; hence, the *c*<sup>3</sup> and *c*<sup>4</sup> criteria values are null. Although these criteria do not influence the decision-making process in the considered iteration, it is highly recommended to change null values obtained in such situations to a small positive number to stabilise the numerical computation procedure. The constructed decision matrix for the sample iteration is presented in Table 4.


**Table 4.** The initial decision matrix.

Next, the neutrosophication step is performed. Results obtained by decision matrix element conversion to interval-valued neutrosophic numbers by WASPAS-IVNS are presented in Table 5.


**Table 5.** The decision matrix after the neutrosophication step by WASPAS-IVNS framework.

The numerical results obtained by the first and second objectives of the WASPAS-IVNS framework and the joint generalised criteria values are provided in Table 6. The candidate frontier ranks are obtained by applying the score function and are provided in Table 7. Considering the proposed environment exploration strategy, in this example iteration, the frontier denoted by *a*<sup>2</sup> has the highest rank amongst the candidate frontiers and is therefore chosen as the next robot destination. Candidate frontier *a*<sup>4</sup> has similar initial criteria values and is ranked second. The main factor determining the next robot move in this case is the *c*<sup>6</sup> criterion, which forces the robot decision-making module to prioritise the closer location.

**Table 6.** Numerical results obtained by WASPAS-IVNS framework.



**Table 7.** Numerical results and candidate frontier rank obtained by applying score function.

Compared to the WASPAS-SVNS method, modelling the candidate frontier evaluation problem under interval-valued neutrosophic numbers enables the robot decision-making module to take into consideration the inaccuracies produced by robot sensors and software component estimations. Unlike the WASPAS-SVNS method, the proposed WASPAS-IVNS framework provides additional tools for evaluating similar criteria values, and therefore enables the robot decision-making module to better estimate the final score values of the candidate frontiers. This framework difference is illustrated in Table 8, which presents the candidate frontier score and rank results for the initial decision matrix (Table 4), obtained by applying the WASPAS-SVNS methodology described in [23]. In this example, the *a*<sup>2</sup> and *a*<sup>4</sup> frontier scores remain similar. However, due to the less expressive approach, the robot is directed towards the *a*<sup>4</sup> frontier.

**Table 8.** The candidate frontier score results and rank obtained by applying WASPAS-SVNS methodology, presented in [23].


### *4.3. Search and Rescue Mission Simulation Results*

Next, we discuss the autonomous search and rescue mission results obtained over ten test runs in the commercial-type and public-type building environments. The same set of already introduced rules was applied in each test. The obtained environment information in square meters, the received penalty points, and the number of detected dangerous objects and victims are presented in Table 9.


**Table 9.** Assessment results obtained by the WASPAS-IVNS framework in commercial and public-type buildings.


By comparing robot behaviour and the results obtained between the commercial-type and public-type building environments, several observations can be made. The results of the autonomous robot exploration obtained in the small commercial-type building environments are presented in Figure 4.

**Figure 4.** (**a**) The average of environment information obtained during ten 20-min-long autonomous search and rescue missions in commercial-type building environment; (**b**) The average penalty points obtained by the robot during the same test runs.

In this environment, the autonomous robot was able to map the whole exploration space, totalling an average of 491.3 square meters, and detect all OOIs before the given time limit. However, the robot also received heavy penalties, averaging 5.43 penalty points. This can be explained by the proposed environment exploration strategy, namely the distribution of criteria weights. The proposed strategy prioritises exploration over robot safety and, therefore, ensures that nearby locations are visited first. As can be seen in Figure 4 and from the robot movement trajectory in one of the test runs, presented in Figure 5, the robot decision-making module first directs the robot to visit the room on the left. However, the dangerous obstacle is not visible from the robot starting location, and this criterion does not participate in the decision-making process. Therefore, the robot drives into the room and receives a penalty at an early exploration stage.

However, a different behaviour can be observed near the end of the search and rescue mission. When the robot decision-making module has to choose between the room at the top-right and the room at the bottom-left corner of the map, the robot is first directed to visit the room at the top. In this situation, the robot can see the victim in a nearby room and a dangerous object located at the bottom of the map. The proposed decision-making strategy prioritises minimising the expected damage to the robot and making contact with the victim. Hence, the last room is visited only when there are no more options left. Here, by evading the dangerous OOI blocking its way, the robot detects the hidden victim.

**Figure 5.** Autonomous robot movement trajectory in commercial building environment.

The results of autonomous robot exploration in a public-type building environment are presented in Figure 6.

**Figure 6.** (**a**) The average of environment information obtained during ten 20-min-long autonomous search and rescue missions in public-type building environment; (**b**) The average penalty points obtained by the robot during the same test runs.

Unlike in the commercial-type building environments, in the public-type building environment the robot was not able to complete an exhaustive environment exploration due to the set time restrictions. These results are to be expected when applying the proposed environment exploration strategy, especially when considering the *c*<sup>1</sup> criterion, denoting the distance from the candidate frontier location to the robot control centre. This criterion is minimised to ensure that the robot will be able to transfer the obtained information after reaching the candidate frontier. Therefore, at the start of the search and rescue mission, the robot spends a lot of time analysing nearby locations, as presented by the robot movement trajectory in one of the test runs, depicted in Figure 7.

**Figure 7.** Autonomous robot movement trajectory in public building environment.

On average, the robot managed to map 812.9 square meters of environment information and detect 2.3 dangerous objects and 2.2 victims. However, the robot also received fewer penalty points, averaging 1.04. The small number of received penalty points can be explained not only by the integration of the proposed criteria, but also by the physical structure of the test environment. In this case, wide-open spaces enable the robot to detect dangerous objects in advance and make decisions accordingly.

### **5. Conclusions**

Autonomous robot applications in search and rescue missions require the robot decision-making module to be capable of making effective decisions in unknown environments by taking into consideration a set of mission-related criteria. However, the quality of robot-made decisions can be affected by small errors in the initial data (e.g., imprecise environment map representation or sensor readings). Hence, in this research, a new frontier evaluation strategy is proposed for modelling search and rescue missions by introducing a new set of criteria that addresses the technical, environmental, social, and economic factors of SAR missions. The robot-applied candidate frontier evaluation process is explicitly controlled by the proposed interval-valued neutrosophic WASPAS framework, namely WASPAS-IVNS. This method is introduced to improve the decision-making process by addressing small measurement errors caused by imperfect sensor readings and imprecise robot movements.

The experimental evaluation results show that the proposed method is effective in search and rescue scenarios and can be applied to solve complex real-time tasks. By addressing the estimated measurement errors, the proposed decision-making framework provides additional reliability when comparing candidate frontiers with similar initial criteria values, as presented in Section 4.2. In relatively small commercial-type buildings, the robot can conduct an exhaustive search and create a complete map of the environment before reaching the given time limit of 20 min. However, bigger public-type building environments are more problematic in the sense that the robot has to cover more area under the same time restrictions. Therefore, the proposed criteria list should be adjusted accordingly. For example, the *c*<sup>1</sup> criterion should be maximised to direct the robot to more distant frontiers.

The conducted experiments also show that the biggest threats to the robot in such environments are unknown damage sources. The proposed criteria set is only effective when the robot can detect and evaluate all task-related information. If the robot has no information about the objects located around corners, the decision-making module cannot estimate the possible danger or the benefit of finding a victim, and makes the decision based only on the *c*1, *c*2, *c*5, and *c*<sup>6</sup> criteria.

For possible future work, the authors will consider addressing this problem by introducing additional robot movement rules that would stop the robot and force it to re-evaluate the decision as soon as it drives through a door or obtains new environment information. The authors also consider expanding the proposed criteria list by conducting more in-depth interviews with stakeholders to identify and address other common search and rescue requirements.

**Author Contributions:** Conceptualization, R.S. and R.B.; methodology, R.S. and R.B.; software, R.S.; validation, R.S. and R.B.; formal analysis, R.S. and R.B.; investigation, R.S.; resources, R.S.; data curation, R.S.; writing—original draft preparation, R.S.; writing—review and editing, R.S. and R.B.; visualization, R.S.; supervision, R.B.; project administration, R.S. and R.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **The Impact of Manufacturing Flexibility and Multi-Criteria Optimization on the Sustainability of Manufacturing Systems**

### **Robert Ojstersek \* and Borut Buchmeister**

Faculty of Mechanical Engineering, University of Maribor, 2000 Maribor, Slovenia; borut.buchmeister@um.si **\*** Correspondence: robert.ojstersek@um.si; Tel.: +386-2220-7585

Received: 12 December 2019; Accepted: 10 January 2020; Published: 12 January 2020

**Abstract:** This manuscript deals with the impact of manufacturing flexibility on cost-time investment as a function of sustainable production, addressing the company's sustainable social and environmental impact. The impact of manufacturing flexibility on cost-time investment has not been described in the research literature, despite its known key role in high-mix low-volume production. Recently, researchers have been intensively addressing the impacts of various parameters on the sustainability aspect and its dependence on manufacturing flexibility. The complexity of the influencing parameters is reflected in the multi-criteria nature of optimization problems, which can be solved with appropriate use of evolutionary computation methods. The manuscript presents a new method of manufacturing flexibility modelling with respect to a four-level architectural model, whose influence is reflected as a symmetry phenomenon in the cost-time profile diagram. The solution to the resulting complex optimization problem is derived using the proposed improved heuristic Kalman algorithm method. A new method is presented for evaluating optimization parameters with respect to the impact of manufacturing flexibility on cost-time investment. The large impact of appropriate multi-criteria optimization on a sustainably justified production system is demonstrated with experimental work on benchmark datasets and an application case. The new method allows a comprehensive optimization approach and validation of the optimization results, by which we can provide more sustainable products and manufacturing processes and increase the company's total, social, and environmental benefits.

**Keywords:** manufacturing flexibility; multi-criteria optimization; sustainability; evolutionary computation; symmetry; cost-time profile

### **1. Introduction**

In the era of Industry 4.0, the high complexity of manufacturing systems is reflected in multi-criteria optimization problems that must be solved while improving the productivity and sustainability of the production system. Personalized products in Industry 4.0 manufacturing systems are represented by the high-mix low-volume production type [1]. The usual evaluation of the cost-time diagram for the high-mix low-volume production type cannot fully describe the effects of multi-criteria optimization on this production type [2]. The impact of manufacturing flexibility on this production type is a key optimization parameter that needs to be well known and described in order to ensure sustainable manufacturing processes [3]. The research problem concerns the impact of manufacturing flexibility and the suitability of the multi-criteria optimization methods used for providing sustainable production [4]. The impact of flexibility on manufacturing systems and their environmental and financial justification is not well described. An appropriately optimized flexible manufacturing system with short flow time and uniformly high machine utilization is a multi-criteria optimization problem that can be solved with advanced evolutionary computation

methods. When introducing an evolutionary computation method, the complexity factors in transferring the mathematical methods to the real-world environment should be evaluated. Real-world flexible manufacturing systems are typically found in small and mid-sized enterprises with specific production characteristics, which demand a high ability to adapt to market demand. The high customization level of customers' needs, however, results in unevenly occupied, financially unjustified, and less sustainable manufacturing systems. Therefore, ensuring highly efficient and sustainable flexible manufacturing systems is very important.

Researchers have recently been paying a lot of attention to ensuring sustainably justified manufacturing systems, defined as follows [5]: sustainable manufacturing is the creation of manufactured products through economically and time-efficient processes that minimize negative environmental impacts while conserving energy and natural resources [6]. In short: conserve energy (machine and worker utilization, short idle times, optimized transport systems, etc.) and natural resources (material handling, just-in-time systems, optimized manufacturing processes and techniques, low material and product scrap). Ensuring sustainable manufacturing systems increases growth and global competitiveness, with sustainably optimized manufacturing processes that minimize negative environmental impacts [7]. Optimized production methods and operations ensure continuously improving production system performance, cost and time efficiency, product quality, a safer working environment, and highly flexible manufacturing systems [8].

Sustainable production can be ensured through appropriate optimization approaches that comprehensively solve multi-criteria optimization problems [9]. The evaluation of optimization methods is performed using a cost-time profile diagram [10], which defines the impact of accumulated costs over the time period of an order. Activities, waiting times, and resources define the accumulated costs that describe economically time-efficient production systems and, thus, minimize negative environmental impacts [11]. Evaluating the sustainability of manufacturing systems using a cost-time profile diagram identifies single-criteria optimization problems well [2], but is unable to identify and describe the influence of manufacturing flexibility on the manufacturing system [12]. The importance of the manufacturing flexibility impact is defined as a four-level architecture model within the manufacturing system [13]. The individual resource level defines the flexibility of the manufacturing system's resources: labor, machinery, and material handling. The shop floor level relates to the flexibility of the production shop floor: routing and operation flexibility. It is the operation and routing flexibility that defines a flexible job shop scheduling problem as a multi-criteria optimization problem [14]. The third and fourth levels define plant- and functional-level flexibilities, which describe volume, mix, product, modification, and new-product flexibilities. These two levels thus represent a typical manufacturing system in Industry 4.0, defined by a high-mix, low-volume production process [15]. The flexibility defined in this way is a research problem that consists of two parts: multi-criteria optimization of flexible manufacturing systems, and evaluation of the cost-time profile diagram depending on the manufacturing flexibility [16].
An appropriately valued and optimized cost-time and manufacturing flexibility ratio ensures a sustainably justified production system that minimizes negative environmental impacts, enhances quality, and ensures the company's global competitive advantage [13].
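As an illustration of the cost-time profile idea described above, the sketch below computes the cost-time investment as the area under the accumulated-cost curve of a sequence of activity and waiting segments. The function name and segment data are illustrative, not from the manuscript:

```python
# Sketch: cost-time investment (CTI) as the area under the accumulated-cost
# curve. Each segment is (duration, cost_rate); waiting time is a segment
# with zero cost rate (cost stays flat, but already-invested cost keeps
# accruing area under the curve).
def cost_time_investment(segments):
    cti = 0.0    # accumulated area (cost x time)
    cost = 0.0   # accumulated cost at the start of the current segment
    for duration, rate in segments:
        added = rate * duration
        # carried cost over the segment plus the triangle of newly added cost
        cti += cost * duration + 0.5 * added * duration
        cost += added
    return cti

# two activities separated by a waiting period
profile = [(2.0, 10.0), (3.0, 0.0), (1.0, 20.0)]
```

In this framing, waiting times enlarge the area (and thus the CTI) without adding cost, which is why an optimized schedule with short flow time lowers the cost-time investment.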

The main contributions of the manuscript are: a new method of manufacturing flexibility modelling with a four-level architectural model that describes the high-mix low-volume production type in correlation with the Flexible Job Shop Scheduling Problem (FJSSP). Depending on the production type, we have defined three machine groups mathematically, according to the parameters of costs, processing times, setup times, energy costs, tool costs, etc. Based on the mathematical modelling, a new factor between operational and idle costs is presented. Using a cost-time profile that describes the production characteristics of activities, resources, times, and costs, a simulation model is developed using an evolutionary computation method to determine the impact of manufacturing flexibility on

the economic and sustainable manufacturing efficiency. The numerical results, obtained using the simulation scenario method and two test datasets (Kacem and Brandimarte) that define manufacturing flexibility, yield a new cost-time-flexibility profile diagram, which represents the cost-time investment value as a function of manufacturing flexibility. The encouraging multi-criteria optimization results on the test datasets are supported by an example implementation of the proposed optimization method on a real-world production system. The proposed optimization approach is evaluated by comparing the performance of the self-designed improved heuristic Kalman algorithm (IHKA) evolutionary computation method with the comparative algorithms bare bones multi-objective particle swarm optimization (BBMOPSO) and multi-objective particle swarm optimization (MOPSO), using C-metric measures. The optimization results demonstrate the high ability of the IHKA optimization algorithm to schedule orders optimally in FJSSP production. The proposed algorithm is considered the most successful, as confirmed by the numerical and graphical results. The numerical multi-criteria optimization results were transferred using an interactive method to a simulation environment, where the dependence of the cost-time diagram as a function of manufacturing flexibility is shown. The presented optimization results of a real-world manufacturing system prove the successful transfer of the theoretical mathematical methods through simulation environments to a real-world manufacturing system. The presented research work has shown a high degree of interdependence between the cost-time and the adaptive flexibility component of manufacturing systems.

The research work is organized as follows: The second section of the manuscript defines and presents the manufacturing flexibility using a four-level architectural model. The individual levels and their characteristics are defined, together with the impact on the production process. The third section presents a mathematical description of a multi-criteria optimization approach that allows solving complex optimization problems of flexible manufacturing systems with the aim of ensuring sustainable production. The fourth section presents a new approach to defining manufacturing flexibility with respect to the optimization parameters of machine groups, costs, positions and times. The influence is presented of the cost-time profile diagram on the manufacturing flexibility. The results of the manufacturing flexibility modelling are presented in the fifth section, where a new method is described for cost-time investment as a function of manufacturing flexibility evaluation. In the sixth section, the newly proposed theoretical methods are transferred to the applied real-world example, where the input data of a real-world manufacturing system shows the multi-criteria optimization approach with the improved heuristic Kalman algorithm (IHKA) evolutionary computation method. The numerical and graphical results of the proposed method are presented, and the advantages and limitations of the proposed approach are evaluated. Section Seven concludes the paper, with an answer to the initial research question of manufacturing flexibility impact on the provision of sustainable manufacturing systems, identifies the advantages and limitations of the proposed method and approach, and outlines directions and options for further research.

### **2. Manufacturing Flexibility**

Manufacturing flexibility is a multi-dimensional manufacturing objective with no generally accepted definition. This is because every manufacturing enterprise views manufacturing flexibility in its own way. Manufacturing enterprises can define manufacturing flexibility in either an adaptive or a proactive manner. The adaptive approach represents the defensive/reactive use of flexibility to accommodate unknown uncertainty in a manufacturing system, and it addresses both the internal and the external uncertainty faced by manufacturing enterprises. An adaptive approach defines manufacturing flexibility as a manufacturer's ability to adapt or change. On the other hand, a proactive approach uses flexibility to help the company gain global competitiveness by raising customer expectations and increasing the uncertainty faced by its rivals. With a proactive approach, we can define manufacturing flexibility as a system's ability to cope with a wide range of possible dynamic environmental changes. From a sustainable enterprise viewpoint, manufacturing flexibility should be customer-driven, and refers to the availability of personalized products that meet customer needs when there is demand. From the literature, we can say that it is the ability of a manufacturing system to respond cost-effectively and rapidly to changing product needs and requirements [3].

This definition reveals clearly that manufacturing flexibility is the ability of a manufacturing system to respond effectively and efficiently to environmental uncertainties (in the manufacturing system and in global demand). Manufacturing effectiveness related to manufacturing flexibility represents the ability of the system to meet requirements for product variety and quantity at the right time, whereas efficiency means that all system resources must be planned and scheduled optimally. A general classification of manufacturing flexibility levels is presented in Table 1, where manufacturing flexibility is divided into four levels. In our research work, we focus on optimizing all four levels of manufacturing flexibility with a new, effective cost-time evaluation method.


**Table 1.** Manufacturing flexibility classification.

The complexity of manufacturing flexibility can be described as environmental uncertainty, referring to the occurrence of unexpected changes, both within the manufacturing system and as external dynamic changes. Dynamic variability of products within the manufacturing process refers to the flexibility of an advanced personalized variety of products and the carrying out of different adaptive manufacturing techniques. The dynamic variability of manufactured products can be characterized in two different ways. The first refers to the range of parts produced at the current time, the high-mix production type. The second refers to the variation of product output over time, described as the low-volume production type. Within the high-mix low-volume production type, we can distinguish between two types of changes: planned and unplanned. In sustainable manufacturing systems, we want to have as many planned changes as possible, which happen because of well-planned managing actions. On the other hand, we must eliminate unplanned changes, which occur independently within the manufacturing system, with unplanned response times. Planned and unplanned changes in manufacturing system flexibility lead to the following dimensions: machine, operation, routing, volume, expansion, product and process flexibility. In our research work, we refer mostly to manufacturing process flexibility, described as the ability to produce a given set of part types, each possibly using different materials, in several different sets of part types that the system can produce without major set-ups [17], and as the number and variety of products which can be produced without incurring high transition penalties or large changes in performance outcomes [18].

### **3. Multi-Criteria Optimization**

Multi-objective optimization is an area that deals with multi-objective decision-making in mathematical and combinatorial optimization problems [19]. Multi-criteria optimization problems involve more than one objective function, where several variables of the optimization problem need to be optimized. A main characteristic of multi-objective optimization is that there is not a single solution optimizing all objective functions; instead, there may be infinitely many Pareto optimal solutions [20]. Pareto optimal solutions are also called non-dominated or Pareto efficient solutions. All Pareto optimal solutions in the Pareto space are considered equally appropriate. The field of multi-objective optimization is increasingly present in everyday life, due to the complexity of optimization problems. Multi-objective optimization can be found in all fields of science, economics, logistics, etc., wherever it is necessary to make optimal decisions in the presence of trade-offs between two or more conflicting goals [21].

In a mathematical sense, a multi-objective problem is formulated as presented with Equation (1):

$$\min(f\_1(\mathbf{x}), f\_2(\mathbf{x}), \dots, f\_k(\mathbf{x})); \mathbf{x} \in X,\tag{1}$$

where the integer *k* ≥ 2 represents the number of optimization parameters, and *X* represents the feasible set of decision vectors. A set of decision vectors is usually represented by constraint functions. A vector-valued objective function is defined as shown in Equation (2):

$$f: X \to \mathbb{R}^k, \ f(\mathbf{x}) \ = \ (f\_1(\mathbf{x}), \dots, f\_k(\mathbf{x}))^T. \tag{2}$$

A maximization problem can be converted into minimization by negating the objective function. The element *x*\* ∈ *X* represents a feasible solution or a feasible decision. The vector *z*\* = *f*(*x*\*) ∈ R<sup>*k*</sup> is called the objective vector of the feasible solution *x*\*. A limitation of multi-objective optimization is that, in general, no feasible solution optimizes all of the objective functions at the same time: to improve one objective of a Pareto optimal solution, a compromise in at least one of the remaining objectives must be made.

Using the mathematical notation of Equations (3) and (4), we say that a feasible solution *x*<sup>1</sup> ∈ *X* Pareto dominates another solution *x*<sup>2</sup> ∈ *X* in the case where:

$$f\_i(\mathbf{x}^1) \le f\_i(\mathbf{x}^2) \text{ for all } i \in \{1, 2, \dots, k\} \tag{3}$$

$$f\_j(\mathbf{x}^1) < f\_j(\mathbf{x}^2) \text{ for at least one } j \in \{1, \ 2, \dots, k\} \tag{4}$$

A feasible solution *x*\* ∈ *X* and the associated objective vector *f*(*x*\*) are Pareto optimal if there is no other solution that dominates it. The set of Pareto optimal solutions is called the Pareto front. The Pareto front of a multi-objective optimization problem is bounded by two vectors:

• The nadir vector is defined mathematically by Equation (5):

$$z\_i^{\text{nd}} = \sup\_{\mathbf{x} \in \mathcal{X}} f\_i(\mathbf{x}) \text{ for all } i = 1, \dots, k \tag{5}$$

• The ideal vector is defined mathematically by Equation (6):

$$z\_i^{\text{ideal}} = \inf\_{\mathbf{x} \in \mathcal{X}} f\_i(\mathbf{x}) \text{ for all } i = 1, \dots, k \tag{6}$$

The upper and lower bounds for the objective functions of Pareto optimal solutions are defined by the nadir and ideal vector components, respectively. An example of Pareto optimal solutions is shown in Figure 1, with two objective functions, *f*1 and *f*2. The points in the coordinate system represent possible Pareto solutions; the point *Z* is not a Pareto optimal solution, because it is dominated by the points *X* and *Y*. The points *X* and *Y* are not dominated by each other, so both are Pareto optimal solutions.
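The dominance relations of Equations (3) and (4) and the bounding vectors of Equations (5) and (6) can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the point coordinates mirror the *X*, *Y*, *Z* example of Figure 1):

```python
from typing import List, Sequence, Tuple

def dominates(f1: Sequence[float], f2: Sequence[float]) -> bool:
    # Eqs. (3)-(4): f1 is no worse in every objective and strictly
    # better in at least one (all objectives are minimized).
    return all(a <= b for a, b in zip(f1, f2)) and \
           any(a < b for a, b in zip(f1, f2))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    # Keep only the non-dominated points: the Pareto front.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def ideal_and_nadir(front: List[Tuple[float, ...]]):
    # Eq. (6): ideal vector = componentwise infimum; Eq. (5): nadir
    # vector = componentwise supremum (here estimated over the front).
    k = len(front[0])
    ideal = tuple(min(p[i] for p in front) for i in range(k))
    nadir = tuple(max(p[i] for p in front) for i in range(k))
    return ideal, nadir

# Points in the spirit of Figure 1: Z is dominated by both X and Y,
# while X and Y do not dominate each other.
X, Y, Z = (1.0, 4.0), (3.0, 2.0), (3.5, 4.5)
front = pareto_front([X, Y, Z])
```

The ideal and nadir vectors then bracket the front componentwise, as stated in the text.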

**Figure 1.** Pareto multi-objective solutions.

### **4. Manufacturing Flexibility Modelling**

Modelling of manufacturing flexibility was performed on the flexible job shop scheduling type of manufacturing system. The multi-criteria nature of the flexible job shop scheduling manufacturing type is described as follows: we have *n* jobs which can be performed on *m* machines from a set of machines (*j* = 1, ... , *m*) suitable for carrying out the jobs. The choice of machine is made according to the machine occupancy and the suitability of the individual machines for performing the operation. The number of jobs *n* and the number of machines *m* are given. Each job *i* has a specific sequence and number of operations *Oi*. The processing time *pijk* of an operation may vary, depending on the machine on which it is performed. For the multi-objective flexible job shop scheduling problem, some limitations must be made:


The multi-criteria flexible job shop scheduling optimization problem involves optimizing three criteria, described by Equations (7)–(9).

• Makespan (time required to complete all jobs):

$$f\_1 = \max \left\{ C\_j \mid j = 1, \dots, n \right\} \tag{7}$$

• Maximum workload (workload of the most loaded machine):

$$f\_2 = \max\_{k} \sum\_{i=1}^{n} \sum\_{j=1}^{n\_i} p\_{ijk} x\_{ijk}, \ k = 1, 2, \dots, m \tag{8}$$

• Total workload of all machines:

$$f\_3 = \sum\_{i=1}^{n} \sum\_{j=1}^{n\_i} \sum\_{k=1}^{m} p\_{ijk} \mathbf{x}\_{ijk}, k = 1, 2, \dots, m \tag{9}$$

where *Cj* is the completion time of job *Jj*, and *xijk* is a binary decision variable indicating the machine on which an individual operation will be processed.
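A minimal sketch of evaluating Equations (7)–(9) for a given schedule, assuming the schedule is represented as (job, machine, start time, processing time) tuples (our own representation for illustration, not the paper's data format):

```python
from collections import defaultdict

def fjssp_objectives(schedule):
    # schedule: iterable of tuples (job, machine, start, processing_time).
    completion = defaultdict(float)  # C_j, completion time per job
    workload = defaultdict(float)    # summed p_ijk * x_ijk per machine
    for job, machine, start, p in schedule:
        completion[job] = max(completion[job], start + p)
        workload[machine] += p
    f1 = max(completion.values())  # makespan, Eq. (7)
    f2 = max(workload.values())    # workload of busiest machine, Eq. (8)
    f3 = sum(workload.values())    # total workload, Eq. (9)
    return f1, f2, f3

# Illustrative instance: two jobs, two machines, two operations each.
schedule = [("J1", "M1", 0, 3), ("J1", "M2", 3, 2),
            ("J2", "M2", 0, 3), ("J2", "M1", 3, 4)]
objectives = fjssp_objectives(schedule)  # makespan 7, max workload 7, total 12
```

An optimizer such as IHKA would search over assignments and start times; this sketch only shows how a candidate schedule maps onto the three criteria.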

Considering the definition of manufacturing flexibility in Table 1, we can see that a flexible job shop scheduling problem is defined as manufacturing flexibility at the shop floor level (routing and operation flexibility associated with a shop floor). For more detailed modelling of manufacturing flexibility, we must still define manufacturing flexibility with respect to the other three levels: the individual resource level (labor, machine and material handling flexibility associated with a resource), the plant level (volume, mix, expansion and product flexibility associated with a plant) and the functional level (manufacturing flexibility). To address manufacturing flexibility comprehensively, we present below multi-criteria optimization modelling and the impact of manufacturing flexibility on sustainable production systems in relation to the cost-time-flexibility dependency. The benchmark datasets were expanded with additional data of the production system related to costs, manufacturing flexibility, dimensions and setup times. Additional data were generated mathematically, except for the locations of the machines, which were constant. In order to ensure adequate data interdependence and the veracity of the results, we decided to divide the machines into three groups, as shown in Table 2. With the help of additional data from the real-world production system, we upgraded the simulation model. Such a model offers a comprehensive analysis, comparison, upgrade and evaluation of real-world manufacturing systems. Table 2 shows three machine groups, divided according to the machines' operating costs (EUR/h). The three groups of machines are divided into small (*G*1), medium (*G*2) and large machines (*G*3). The range of operating-hour prices is from 30 to 40 EUR/h for small machines, from 40 to 50 EUR/h for medium machines and from 50 to 60 EUR/h for large machines. We assumed real values of fixed costs and recalculated the idle costs of the machines using the literature [22].
A detailed recalculation of machine prices is shown below. The recommendations given in the literature [22] define fixed costs as 40% in the case of a small machine, 50% in the case of a medium-sized machine, and 60% in the case of a large machine. The right column of Table 2 shows the factor between the fixed and recalculated idle cost values, used by the computer program as a constant value in the mathematical calculation assignment.
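The machine-group cost logic above can be sketched as follows. Treating the idle cost as the fixed-cost share multiplied by the operating cost is our reading of the recalculation in [22], and the uniform draw within each interval is an assumption for illustration:

```python
import random

# Machine groups: operating-cost interval (EUR/h) and fixed-cost share
# used to recalculate the idle cost (our reading of [22]).
GROUPS = {"G1": (30.0, 40.0, 0.40),   # small machines
          "G2": (40.0, 50.0, 0.50),   # medium machines
          "G3": (50.0, 60.0, 0.60)}   # large machines

def machine_costs(group, rng):
    lo, hi, share = GROUPS[group]
    operating = rng.uniform(lo, hi)   # EUR/h while processing
    idle = share * operating          # EUR/h while waiting
    return {"group": group,
            "operating": round(operating, 2),
            "idle": round(idle, 2)}

rng = random.Random(42)               # fixed seed for reproducibility
fleet = [machine_costs(g, rng) for g in ("G1", "G1", "G2", "G3")]
```

Seeding the generator keeps the numerically generated data reproducible across simulation runs, in the spirit of the correlated-data requirement discussed below.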


**Table 2.** Machine group's classification.

The specified limits of the intervals for the individual variables were used to generate the data numerically, independently for each machine; the data are shown in Table 3. The data are correlated with their correlation factors. The correlation of the generated data ensures the credibility of the simulation and numerical results. The data of the production system can be varied according to changes, and the mathematical and simulation models will adapt to them automatically. The presented approach allows modularity and high flexibility of the entire proposed solution for simulating multi-objective optimization problems. The key advantage is the modular composition and easy adaptation to different types of manufacturing or service enterprises. Table 3 shows additional numerically generated data of the production system which, in addition to operational and idle costs, include the locations of the machines with respect to the base coordinate system with *x* and *y* axes. The last row of the table lists the setup time, which plays a key role in evaluating production flexibility against a cost-time diagram.


**Table 3.** Machinery cost determination.

Table 4 shows the determination of the variable costs of the machines according to the three machine groups. The calculation of variable costs was carried out using the calculation given in the literature [22]. The basic initial properties were assigned to the calculation:



**Table 4.** Variable machinery cost determination.

### *The Impact of Cost-Time Profile on Manufacturing Flexibility*

The main advantage of a flexible production system is its adaptability to customers' demands. Simulation modelling of a flexible job shop scheduling problem is an almost unexplored area, so we wanted to prove the link between cost, time and production flexibility on all four levels of manufacturing flexibility by introducing a functional dependency. The use of advanced optimization algorithms and simulation models improves and balances the interdependence between these three parameters significantly. As a basis for investigating the impact of production adaptability, we chose the well-known cost-time profile (CTP) method [10]. The CTP diagram is based on value stream architecture (VSA) [23] and, for the purpose of visualizing the production process, shows the connection between the three main components (resources, activities and waiting times). Traditional cost accumulation deals only with the cost aspect of production, while VSA emphasizes operating procedures and the use of different resources, notably time, but does not take costs into account. In response to this shortcoming, researchers proposed the introduction of the CTP method, which takes both cost and time considerations into account [2]. The CTP is a graphical representation of the sum of production order costs over time. This model represents the source information from the moment the production process begins to the moment the order, with its activities completed, leaves the production process. The basic components of the CTP are defined as:


Figure 2 shows an example of a time-value diagram with the components defined above. The orange colored components represent the resources, the green colored arrows represent the activity, and the blue colored arrows represent waiting.

**Figure 2.** Cost-time profile diagram.

We have expanded the two-dimensional cost-time functional dependency with an additional feature that describes manufacturing flexibility, and identified the missing required data, shown in the tables below. Table 5 shows the mathematically assigned material cost values for individual orders. The material cost values, allocated mathematically in the interval between 20 and 30 EUR per order, are labelled *Pm*. Material cost values had to be assigned in order to provide a credible CTP. The material needed to execute the order is identified as the source of the CTP diagram.
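A simplified reading of the CTP construction can be sketched as follows. The segment representation and the trapezoidal cost-time investment (area under the cumulative cost curve) are our assumptions for illustration, not the exact procedure of [10]:

```python
def cost_time_profile(segments, material_cost):
    # Build the cumulative cost curve of a single order.  The material
    # cost enters as a lump sum at time zero (the 'source' of the CTP);
    # activities accrue cost at their rate, waits only consume time.
    # segments: list of ("activity" | "wait", duration_h, rate_eur_h).
    t, c = 0.0, material_cost
    profile = [(t, c)]
    for kind, duration, rate in segments:
        t += duration
        c += duration * rate if kind == "activity" else 0.0
        profile.append((t, c))
    return profile

def cost_time_investment(profile):
    # Area under the cumulative cost curve (EUR*h), trapezoid per segment.
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1) in zip(profile, profile[1:]))

# Illustrative order: 25 EUR material at the source, a 1 h activity at
# 40 EUR/h, a 0.5 h wait, then a 2 h activity at 50 EUR/h.
profile = cost_time_profile([("activity", 1.0, 40.0),
                             ("wait", 0.5, 0.0),
                             ("activity", 2.0, 50.0)], 25.0)
cti = cost_time_investment(profile)
```

Waits lengthen the profile without raising cost, which is exactly why they inflate the cost-time investment: the accumulated cost is carried over a longer time span.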


**Table 5.** Input parameters of *Rs*.

Tables 6 and 7 define the number of products per order. The number of products per order ranges from one product in the reference scenario *Rs* to 20 pieces per single order in scenario *S*2. The introduction of the simulation scenario method makes it possible to test the simulation model's response to different changes in the production system. In our case, the three simulation scenarios define manufacturing flexibility as follows: fully customizable production in the *Rs* scenario, where each order represents one piece of product; by increasing the number of products within an order in scenarios *S*1 (1 to 10 pieces) and *S*2 (10 to 20 pieces), the production is defined as less flexible. The number of product pieces in Tables 5–7 is represented by the label *Pq*. The *Pq* values are assigned numerically according to the distribution function and the interdependence of the production parameters. The simulation scenarios designed in this way allow us to analyze the impact of manufacturing flexibility on the cost-time profile diagram.
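The three scenarios can be sketched as a small generator of *Pq* values (a uniform draw within each interval is our assumption; the paper states only the ranges):

```python
import random

# Pieces-per-order intervals for the three simulation scenarios.
SCENARIOS = {"Rs": (1, 1),    # fully customizable: one piece per order
             "S1": (1, 10),   # less flexible
             "S2": (10, 20)}  # least flexible

def order_quantities(scenario, n_orders, seed=0):
    # Draw Pq for each order; a uniform integer draw is assumed here.
    lo, hi = SCENARIOS[scenario]
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(n_orders)]
```

Feeding these quantities into the simulation model is what lets the scenarios sweep production from fully customizable (*Rs*) to less flexible (*S*2).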

**Table 6.** Input parameters of *S*1.


**Table 7.** Input parameters of S2.


### **5. Manufacturing Flexibility Modelling Results**

Table 8 shows the simulation results of modelling the impact of manufacturing flexibility on the CTP diagram. Simulation and mathematical modelling were performed using simulation scenarios with the aim of changing the manufacturing flexibility parameters. Simulation experiments were performed on five benchmark datasets (Kacem 5 × 10, Kacem 10 × 10, Kacem 15 × 10, Mk08, Mk10) [24,25]. Performing the simulation experiments on two different dataset families ensures the verifiability and credibility of the obtained optimization results.


**Table 8.** Manufacturing flexibility simulation modelling results.

The results presented in Table 8 are crucial in evaluating the CTP diagram and the impact of manufacturing flexibility on it. The average machine utilization decreases as production flexibility increases. The above-mentioned flow time in Table 8 affected the machines' utilization and, consequently, the average product throughput [pcs/h], which increases with increasing manufacturing flexibility. Flexibility has the biggest impact on costs, which decrease when several identical pieces are produced in the manufacturing process.

As a result of the proposed simulation modelling approach, with the introduction of all additional characteristics of the production system, Figure 3 shows the final results after simulation and numerical studies of the manufacturing flexibility impact on the CTP diagram.

### *Cost-Time-Flexibility Profile Diagram*

The cost-time profile diagram, depending on manufacturing flexibility, is modelled graphically using the numerical and simulated results shown in Table 8. We named the three-dimensional diagram a cost-time-flexibility profile (CTFP). As with the analysis of the IHKA optimization algorithm, its suitability was tested on low, medium and high dimensions. It was noted that the shape of the three-dimensional diagram is influenced by resources, activities and waiting times, depending on the flexibility of the manufacturing process, as shown in Table 8. The basic cost-time diagram uses linear dependencies and constant values. With the CTFP graphical results in Figure 3, we can see nonlinear dependencies of the three variables (cost, time and manufacturing flexibility). Unlike the surface that describes the cost-time investment in a two-dimensional graph, the three-dimensional graph is a volume that describes the cost-time investment depending on the manufacturing flexibility.

For all five datasets, divided into three groups regarding dimensional difficulties, we see the adequacy of solving the optimization problem and the corresponding dependence between the variables in the CTFP diagram. We define the differences between the results of individual CTFP diagrams as:


shorter processing times are influenced significantly by the flexibility of production, especially when increasing the number of orders.

**Figure 3.** Cost-time profile as a function of manufacturing flexibility.

In general, we find that the CTFP diagram is influenced significantly by the flexibility of production, and its dependence on increasing orders is demonstrated by the CTFP diagram. The CTFP thus proposed illustrates graphically and numerically the situation within the production system. From the presented multi-criteria optimization approach and manufacturing flexibility modelling, it can be summarized that:

• Higher and more even machine utilization allows a reduction in energy consumption during the time when machines are waiting for operations to be performed (shorter idle times).

• Shorter flow times without intermediate waiting times allow quick adaptation to demand and a high level of customer satisfaction through short delivery times (shorter due dates).

• Properly scheduled orders, depending on the execution of individual operations, allow shorter and more efficient transport routes and effective use of the just-in-time method.

• Proper allocation of operations to highly efficient machines with shorter processing times and high efficiency ensures low waste and efficient use of materials and energy resources.

• Effective work assignments with regard to the costs of operation and waiting, defined by the flexibility of production and the division of machines into three groups, make the production system highly economically viable.

### **6. Manufacturing Flexibility Case Study**

With the proposed comprehensive approach and associated methods in the previous sections, we demonstrated a high capability for solving multi-criteria optimization problems of flexible manufacturing systems, so we decided to test the whole approach on the multi-criteria optimization of a real-world manufacturing system.

The ability to solve a multi-criteria optimization problem of a real-world manufacturing system (data set labelled as RW\_PS) is presented in Section 6. The first part of the section presents the real-world input data of the manufacturing system, which enables multi-criteria optimization of flexible job shop production. In the real-world case, only relevant and credible input provides the ability to achieve reliable optimization results. The following is an example of solving the scheduling of fifteen orders and comparing the results of the IHKA algorithm with the optimization solutions of the bare bones multi-objective particle swarm optimization (BBMOPSO) and multi-objective particle swarm optimization (MOPSO) algorithms. The transfer of optimization results to the simulation environment is presented following the previously proposed method of evaluating the sequence of machine operations determined by the IHKA algorithm. A modular and flexible simulation model has been built to provide an automated and easy interface to handle the simulation model [26]. The following is the analysis and evaluation of the simulation model using the CTFP diagram, which is proposed as part of a comprehensive multi-criteria optimization approach of the manufacturing system.

### *6.1. Manufacturing System Input Data*

The selected data were obtained from a European medium-sized company that manufactures custom products (high-mix, low-volume production type). Orders received by the company from its subscribers must be scheduled optimally on the available machines within the production system. The order input data are presented in Table 9. The orders consist of three different product types with different processing times, machine usage costs, machine idle rates, setup times and numbers of operations. The information provided by the company and the updated, recalculated usage and idle cost values of the machines are formulated mathematically, as presented in the previous section. According to the literature [22], the real-world production type is defined as the flexible job shop production type. Compared to the Kacem and Brandimarte [24,25] test datasets, we found that the different product types add additional complexity to the RW\_PS optimization problem. A parallel can be drawn between the Brandimarte datasets and the inputs of the real-world production system (RW\_PS): both datasets allow operations to be performed on only a few specific machines within the production system.

For the real production system data, the machines marked *M*1 to *M*12 represent the following operations:


The main task of the optimization algorithm is to determine the order of operations on the available machines optimally. In doing so, the algorithm must determine which machine will execute which order, while optimizing three key parameters: makespan, maximum machine utilization and the elimination of any bottlenecks in the manufacturing system.


**Table 9.** Input data for product type.

### *6.2. IHKA Multi-Criteria Optimisation*

Using the production system input data shown in Table 9 and the order input data, multi-criteria optimization of custom production scheduling was performed with the IHKA algorithm [26]. In Figure 4, the Gantt chart provides a solution for scheduling orders and individual operations on the available machines. The IHKA optimization algorithm performed the optimization with three key parameters: makespan, total machine utilization and utilization of the busiest machine (bottleneck elimination). The Gantt chart in Figure 4 shows that all orders are executed within 348 min. To determine the performance of the IHKA algorithm in solving real-world optimization problems, the optimization results were analyzed and compared with those of the currently most up-to-date algorithms for flexible manufacturing scheduling, MOPSO and BBMOPSO.

**Figure 4.** Gantt chart of the improved heuristic Kalman algorithm (IHKA) optimization algorithm solutions.

A performance measures analysis was performed of the proposed IHKA optimization algorithm and the comparative algorithms BBMOPSO and MOPSO. Due to the complexity of the optimization problem, the comparison of the algorithms' performance was carried out using the C-metric method [27]. The numerical results of the C-metric performance measure, calculated over thirty iterations, and the meanings of the labels are presented in Table 10:

• The Min value represents the worst Pareto front position obtained, i.e., the run in which the subject algorithm dominates the object algorithm in the lowest percentage of solutions.


**Table 10.** C-metric performance measures of IHKA, multi-objective particle swarm optimization (MOPSO) and bare bones multi-objective particle swarm optimization (BBMOPSO) algorithms.


Table 10 shows the numerical results of the C-metric performance measures analysis over thirty iterations, comparing the obtained results of the individual algorithms:


The numerical results show high performance with respect to the dominance and stability criteria of the proposed IHKA optimization algorithm compared with the comparative algorithms. The proposed algorithm demonstrates a high degree of capability and robustness in solving multi-criteria optimization problems.
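The pairwise comparison can be sketched with the standard set-coverage definition of the C-metric [27]; the two point sets A and B below are illustrative only, not the paper's results:

```python
def dominates(a, b):
    # a is no worse in every objective and strictly better in at least
    # one (all objectives are minimized).
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def c_metric(A, B):
    # C(A, B): fraction of solutions in B dominated by (or equal to)
    # at least one solution in A.  C(A, B) = 1 means A covers all of B;
    # note that C(A, B) and C(B, A) are not complementary in general.
    covered = sum(1 for b in B if any(dominates(a, b) or a == b for a in A))
    return covered / len(B)

# Illustrative objective vectors of two algorithms' final fronts:
A = [(1.0, 2.0), (2.0, 1.0)]
B = [(2.0, 2.0), (3.0, 1.0), (0.0, 5.0)]
```

Here A covers two of B's three points, while no point of B covers any point of A, which is the kind of one-sided dominance pattern reported for IHKA against the comparative algorithms.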

Figure 5 and Table 11 show the graphical and numerical optimization results of the three optimization algorithms: the proposed IHKA algorithm (mark x) and the two comparative algorithms, MOPSO (mark \*) and BBMOPSO (mark +).

In order to prove the robustness of the optimization results, all three algorithms were tested over thirty iterations, and the average values of the optimization results are presented in Table 11. Considering the graphical results shown in Figure 5, we can conclude that the IHKA algorithm proved to be the most appropriate for solving a real-world optimization problem. In its final iteration, it obtained the best solution, which ensures the shortest makespan (MC) of orders and consistently high utilization of all machines (TW), without any bottlenecks in the manufacturing system (MW).

**Figure 5.** Graphical optimization results of the IHKA, MOPSO and BBMOPSO algorithms.



The numerical results of the optimization algorithms, averaged over thirty iterations, show that the IHKA algorithm generated the best results for two of the optimization parameters: the orders' makespan and the total machine utilization. In optimizing the utilization of the most heavily loaded machine, the IHKA algorithm performed worst on average MW, a parameter in which the BBMOPSO algorithm dominated. The IHKA algorithm compensated for the reduced MW performance by achieving the optimum result for the TW parameter, which is controlled by the appropriate scheduling of individual operations on the available machines.

The presented graphical and numerical results confirm the high capabilities and reliability of solving multi-criteria optimization problems of flexible manufacturing systems with the IHKA algorithm. The algorithm has been shown to be capable of solving complex optimization problems from a real-world environment.

Table 12 shows an example (for *J*1 and *J*2) of the IHKA output optimization results generated in the MATLAB software environment. The output results of the optimization algorithm assign each individual operation to an available machine, with a start and finish time, and specify the sequence in which the machine performs its operations. The order of operations on a machine is determined according to the proposed method's own decision logic [26], which bypasses the integrated decision logic of the simulation environment. The automated transfer of the numerical optimization results to the simulation environment allows the user to determine their own units of measurement according to the real-world manufacturing system.


**Table 12.** Optimization results of the real-world production system (RW\_PS) dataset.

### *6.3. Validation of Optimisation Results Using the CTFP Diagram*

In the optimization results' validation stage, an analysis of the optimization results was performed using the CTFP diagram presented above. Using the CTFP diagram, we can analyze the interdependence between three key parameters of flexible manufacturing systems: time, costs and manufacturing flexibility. Manufacturing flexibility is defined using the approach presented in Section 5, where a mathematical distribution function was created to demonstrate the appropriateness of the new CTFP diagram method. For a simulation model of a real-world manufacturing system with fifteen orders and additional data that affect manufacturing flexibility, the numbers of pieces per order product were determined with a random function, based on Tables 6 and 7. With the introduction of the flexibility parameter, a dependency analysis was performed between the number of products, flow time and total order costs. The numerical results in Table 13 and the graphical results in Figure 6 show the correlation of the CTFP analysis results between the Kacem 15 × 10 and Mk10 test datasets and the RW\_PS real production system dataset. All three datasets can be defined as high-dimensional cases, which means that a partial match is foreseen, especially in the graphical results that determine the typical CTFP shape. It was found that the high-dimensional optimization problems of Kacem 15 × 10, Mk10 and the real-world manufacturing system dataset RW\_PS show a significant dependence between the three mentioned parameters in ensuring production eligibility, while reducing production costs and shortening processing times significantly. The steep surface of the graph is particularly pronounced as the number of orders increases. The CTFP diagram in Figure 6 shows that extending the flow time when an order contains a smaller number of individual products affects the overall costs adversely.
A well-optimized production process is crucial to ensuring the economic and process viability of custom production.
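The cost-time-flexibility dependency described above can be illustrated with a small simulation sketch. All coefficients, the cost model and the function names below are hypothetical illustrations, not the article's actual simulation model or datasets; only the scenario shape (fifteen orders with a random number of pieces, costs growing with flow time and shrinking with flexibility) follows the text:

```python
import random

def order_cost(pieces, flow_time, flexibility):
    """Illustrative total-cost model: longer flow times raise holding and
    idle costs, while higher flexibility dampens that penalty.
    The coefficients are assumptions, not taken from the article."""
    process_cost = 4.0 * pieces           # per-piece processing cost
    time_cost = 1.5 * flow_time           # holding/idle cost per time unit
    return process_cost + time_cost / max(flexibility, 0.1)

random.seed(42)
# fifteen orders with a random number of pieces, as in the described scenario
orders = [random.randint(5, 50) for _ in range(15)]

for flexibility in (0.2, 0.5, 0.9):
    total = 0.0
    for pieces in orders:
        # assumption: higher flexibility shortens the flow time of an order
        flow_time = 2.0 * pieces / (1.0 + flexibility)
        total += order_cost(pieces, flow_time, flexibility)
    print(f"flexibility={flexibility:.1f}  total cost={total:,.0f}")
```

Under these assumptions the total cost falls as flexibility rises, mirroring the adverse cost effect of extended flow times at low flexibility that Figure 6 depicts.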

**Table 13.** Applied production flexibility simulation modelling results with a CTFP diagram.


**Figure 6.** Cost-time diagram in relation to manufacturing flexibility of a real-world production system.

The impact of manufacturing flexibility on a sustainable production process is reflected in short flow times, high delivery due-date reliability, low stocks, and a favorable cost-time profile linked to manufacturing flexibility and a justified value stream architecture. These key production goals, which can only be achieved through appropriate multi-criteria optimization with additional objectives, are reflected in cost reduction through the rational and continuous use of workplaces, materials and machines, thereby ensuring a sustainably justified production system. The graphical and numerical analysis of the CTFP diagram reveals areas where the production system justifies the cost-time investment evenly, depending on the manufacturing flexibility. Based on the presented optimization approach, the company can adapt quickly to global customer needs and demand, while providing sustainably eligible production that increases total social and environmental benefits.

### **7. Conclusions**

In the presented research work, we have shown the importance of manufacturing flexibility and a multi-criteria optimization method for sustainably justified manufacturing systems. The initial research question concerned the ability to model manufacturing flexibility and its impact on cost-time investment, correlated with sustainable manufacturing processes, more sustainable products and social and environmental benefits [2]. The main purpose of the research was to demonstrate a new approach to manufacturing flexibility modelling, based on a comprehensive consideration of all parameters with respect to a four-level architectural model [13]. A mathematical method is presented for calculating the characteristics of a flexible job shop production system and determining the interdependencies between cost, time and manufacturing flexibility. A cost-time profile diagram and its impact on the production system are defined for the purpose of validating the optimization results. Advantages and limitations are shown, which relate to the ability to validate only single-criteria optimization problems. To eliminate these shortcomings and extend the method, an experimental model based on the simulation scenario method is presented, which can be used to evaluate the impact of manufacturing flexibility on a sustainably justified production system. The results of multi-criteria optimization of five Kacem and Brandimarte test datasets [24,25], solved using the IHKA evolutionary method [20] and the production adaptability modelling method, are shown. Based on the obtained numerical results, a graphical representation of the influence of the cost-time profile diagram on manufacturing flexibility was produced, called the cost-time-flexibility profile diagram. The numerical and graphical representation and validation of the results are divided into three groups, according to the complexity of the solved problems (low-, medium- and high-dimensional optimization problems). The validation approach presented here has advantages and limitations related to the dependence among the three parameters of cost, time and manufacturing flexibility, which are described and presented numerically and graphically.

In times of complex production systems, the transfer of theoretical methods to the real-world environment is especially important if we want to make our production systems sustainable. Therefore, an application of the proposed method to a medium-sized European high-mix low-volume manufacturing company is presented. With the help of the proposed evolutionary computation method [20], the modelling of manufacturing flexibility, and the validation of the cost-time investment as a function of manufacturing flexibility, the optimization and validation of the production system were performed in order to define the sustainable orientation of the manufacturing company. Based on the numerical and graphical results, the IHKA evolutionary computational method best solved the multi-criteria flexible job shop scheduling problem, represented by the RW\_PS real-world dataset and evaluated by C-metric performance measures. The obtained optimization results make it possible to perform simulation modelling of the impact of manufacturing flexibility on the cost-time investment. Using the simulation scenario method, a numerical analysis of the impact of manufacturing flexibility on the optimization parameters was performed, which affects the sustainability of the production system critically. The presented method helps companies to optimize existing manufacturing systems and to design new manufacturing systems optimally, so that costs, time and manufacturing flexibility are correlated properly. An adequate CTFP ensures sustainable production from the point of view of energy and natural resource consumption, equal workload of workers and machines, improvement of product quality and customer satisfaction. By achieving a balanced CTFP chart, we ensure sustainable business growth and increase a company's total social and environmental benefits.

The presented research work has limitations related to the evaluation and modelling of the presented method solely for the flexible job shop type of manufacturing system. At the same time, using the cost-time profile diagram, nearly every production system can be evaluated. It should be emphasized that the impact of manufacturing flexibility is most significant in the evaluated type of production, and is weaker, or even absent, in other types of production (mass production, etc.). In order to ensure the robustness of the proposed CTFP diagram approach, it would be appropriate to transfer it to, and evaluate it on, other production types, ensuring the wide applicability of the proposed method.

The presented methods and results of mathematical and simulation modelling, in correlation with the methods of defining cost-time investment and its correlation with manufacturing flexibility, together with multi-criteria optimization using the IHKA method, provide sustainably and financially justified production systems. Compared to the results of other researchers evaluating dynamic, flexible manufacturing systems and different decision models, the presented manuscript deals with the meaning of multi-criteria optimization of production system scheduling. The considered research case is formulated mathematically with input parameters covering the majority of production system characteristics (makespan, process time, setup time, operational costs, idle costs, energy cost, order quantity, machine power, etc.). The optimization parameters are divided into three groups according to the machine classification, based on which it is possible to describe the manufacturing system in detail using evolutionary computation methods and to allocate the work orders optimally to the appropriate, available, highly utilized machines. The presented comprehensive optimization method enables both numerical and graphical representation of the optimization results. The modular structure enables interaction between the IHKA decision model, the simulation model and the graphical CTFP representation of the optimization results by means of cost-time investment, depending on manufacturing flexibility. Of course, restricting the optimization to the FJSSP production type, as other researchers also do [28], is a limitation. Other research focuses on demonstrating the impact of flexible line redesign planning problems [28] and on determining the impact of market uncertainty [29] on the adaptability and feasibility of using flexible and dedicated machines, and on manufacturing utilization, using the Monte Carlo simulation method.
The presented research work is original in its economic and sustainable optimization approach. Other research works cite as a difficulty the very wide range of optimization parameters that must be defined properly mathematically and whose interdependence must be described; that limitation is addressed in the described research work. The presented research work is, thus, an original contribution, with a comprehensive multi-criteria optimization approach and associated simulation modelling methods, using the simulation scenario approach and graphical and numerical definitions of the optimization results by means of cost-time investment depending on manufacturing flexibility. The holistic approach is rounded off by an appropriately sustainable and economically justified method for ensuring optimized, highly flexible manufacturing processes, which are increasingly present in the current era of Industry 4.0.

Thus, this research represents basic research work for the further development of the dependence of manufacturing flexibility on cost-time investment and, consequently, of sustainably oriented manufacturing systems. The influence of different parameters and, thus, the solving of multi-criteria optimization problems is crucial in ensuring economically, socially and environmentally sustainable production systems, as defined clearly in this research work. The presented importance of optimizing highly flexible manufacturing systems opens up many possibilities for further research work. The introduction of collaborative workplaces into production systems represents a new research area, where the flexibility of collaborative robots allows high adaptability of manufacturing capacities with respect to the ability to perform different operations at different workplaces. The introduction of highly flexible workplaces also increases the complexity of the multi-criteria optimization problems, which can be solved using the proposed evolutionary computation methods. The feasibility of introducing and implementing collaborative workplaces from the economic and sustainable eligibility point of view has not yet been investigated. Further research and implementation of the presented methods will allow evaluation of the justification for introducing highly flexible workplaces in different types of production systems, thus ensuring optimized, sustainable and economically viable production systems.

**Author Contributions:** Conceptualization, R.O. and B.B.; methodology, R.O.; validation, R.O. and B.B.; resources, R.O. and B.B.; data curation, R.O.; writing—original draft preparation, R.O. and B.B.; writing—review and editing, B.B.; visualization, R.O.; supervision, B.B.; funding acquisition, B.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Slovenian Research Agency (ARRS), Research Core Grant number P2-0190.

**Acknowledgments:** We would like to express our very great appreciation to the Laboratory for Production and Operation Management and Laboratory for Discrete Systems Simulation at the University of Maribor for the possibility of carrying out our research work. Many thanks to the company for the possibility of performing the applied theoretical methods in the real-world environment. We would like to thank all anonymous reviewers and the editor for their comments. With the corrections, suggestions and comments made, the manuscript has gained in its scientific value.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Location Optimization of CR Express International Logistics Centers**

### **Dmitri Muravev 1,2,\*, Hao Hu 1, Hengshuo Zhou <sup>3</sup> and Dragan Pamucar <sup>4</sup>**


Received: 6 December 2019; Accepted: 7 January 2020; Published: 10 January 2020

**Abstract:** Currently, the trade volume between China and the European Union is experiencing rapid growth. However, there are many bottlenecks in the operation of the China Railway Express, such as the imbalance between inbound and outbound containers in the Sino-European direction and a low profit margin. More than fifty-three rail routes in China provide rail transportation to European cities, yet each carries only a small traffic volume. This situation keeps transportation costs high, roughly three times higher than maritime transportation, causing uncertainty in the demand of Chinese customers. This study analyzes the shortcomings of previous research on multicriteria decision-making (MCDM) models applied in the field of logistics and transportation, and proposes a novel approach to determine the optimal locations of the CR Express international logistics centers. The proposed approach applies an MCDM model using the DEMATEL-MAIRCA method. This technique finds the solution closest to the ideal one by identifying the value of the best alternative for each observed criterion and by measuring the distances of the other alternatives from that ideal value. Finally, we show the similarity of the proposed methodology to other MCDM methods, which is one of the key topics of the journal Symmetry, to prove the validity of the applied DEMATEL-MAIRCA method. Preliminary results show that, in view of the increasing container turnover between China and the European Union, the determination of optimal locations for CR Express international logistics centers should be carried out dynamically.

**Keywords:** facility location problem; criteria; multicriteria decision-making; DEMATEL-MAIRCA method; China Railway Express; international logistics centers

### **1. Introduction**

Currently, the trade market between China and European countries within the framework of the "One Belt, One Road" initiative has been developing rapidly. According to experts' forecasts, the number of containers will increase by 800,000 TEUs in 2020, which is five times more than in 2015.

The main actor providing rail transportation between China and European countries is the China Railway Express (CR Express). Despite efforts from governments and enterprises in both China and the countries along the route, the number of block trains and containers has increased dramatically in the last six years. By the end of March 2019, the total number of CR Express operations had exceeded 7600 round trips, and the number of domestic routes in China had reached 61 across 43 cities, which are the current international logistics centers. The CR Express operates in 41 cities across 13 countries in Europe [1].

Undoubtedly, the CR Express has been developing rapidly in recent years. However, the high rail transportation costs between China and the European Union have a negative impact on the demand for this transportation mode. By the end of 2017, the average price had dropped from 9000 \$/TEU to 6000 \$/TEU, which is still much higher than maritime transportation. Furthermore, the volume of goods that European countries deliver to China by railway is comparatively small. In 2017, the number of trains from Europe to China was 67% of the number from China to Europe, meaning that operators cannot make full use of the containers and the profit of the whole transportation process is limited [1]. Therefore, most of the companies that run the CR Express suffer losses and depend on subsidies from local governments, which leads to competition for subsidies between different provinces in China. Finally, to date, almost every Chinese city running the CR Express has chosen to set up its own route. This has caused a lack of holistic planning and route combination, increasing the total expense of CR Express operation.

To solve the problem of high transportation costs, it is imperative to optimize the railway service network, particularly the locations of the international logistics centers. There are several reasons to select the optimal locations of international logistics centers and minimize the number of rail routes by reducing the number of cities. Firstly, freight transportation flows could be aggregated and economies of scale could be achieved. Secondly, it could potentially increase the volume per single trip, which would allow Chinese companies to charge more when collaborating with companies along the routes. Finally, it could reduce transportation costs, avoid inefficient routes, and improve operational efficiency. If the transportation cost can be cut down, fewer government subsidies will be needed for transportation companies, which will decrease the burden on the government and increase the motivation of market players.

To date, several studies have been conducted on the facility location problem (FLP), including case studies on the CR Express, which are mainly focused on the application of multicriteria decision-making methods (MCDM) [2,3] or a combination of MCDM methods and deterministic optimization models [4,5].

However, these studies have several limitations. Firstly, the authors consider a limited number of criteria affecting the selection of optimal logistics center locations, even though these facilities are complex: they consist of many elements whose parameters are affected by different external factors. Consequently, this causes inaccuracy in the results. Secondly, most of the MCDM methods applied to facility location problems are unstable in their alternative rankings, sensitive to inconsistent data, difficult to develop, and reliant on experts' opinions.

The contribution of this paper can be summarized as follows. Firstly, this work reviews the existing and relevant literature on various MCDM methods with regard to transportation and logistics. Secondly, this work presents the hybrid multicriteria decision-making DEMATEL-MAIRCA model, which is used to select the optimal locations of the CR Express international logistics centers (CILC). The decision-making trial and evaluation laboratory (DEMATEL) method is based on collective judgment and is used to identify the cause-effect relationships among selected strategic criteria in order to select precandidate cities for the CILC. The multiattributive ideal-real comparative analysis (MAIRCA) method compares the theoretical and empirical alternative ratings, and is able to estimate the alternatives and select precandidate cities for the CILC. Additionally, a case study on CILC selection is presented. Lastly, the paper provides critical managerial insights for different decision makers, such as logistics managers and local governments.

The remainder of this paper is structured as follows. Section two provides a review of the relevant literature. The third section describes the applied DEMATEL-MAIRCA methodology. The fourth section presents a case study on the selection of precandidate cities for CR Express international logistics centers. The fifth section provides a comparison between the proposed methodology and other modern MCDM techniques. Finally, the paper summarizes future development strategies for designing an optimal railway network for CR Express international logistics centers.

### **2. Literature Review**

The methods employed in existing studies of the facility location problem can be grouped primarily into two categories: quantitative mathematical methods and mathematical programming techniques. With respect to the facility location problem, MCDM methods are often utilized to rank potential locations based on expert opinions. Bridgman published the first paper on an MCDM method based on the weighted product model (WPM). This method relies on comparing alternatives through the multiplication of ratios, one for each criterion [6]. Since then, the technique has attracted the attention of many other researchers, who have improved the method. Fishburn proposed the weighted sum model (WSM) for solving problems in which the studied variables have identical physical dimensions. In other words, this method comprises the application of the "additive utility" assumption [7]. However, it is not able to solve problems involving criteria and variables of distinct types [8]. Saaty presented the analytic hierarchy process (AHP), which relies on investigating priorities or weights of importance among the selected criteria and alternatives. However, the possible compensation between positive and negative scores for some criteria can cause information loss [9]. Furthermore, the implementation of the method is inconvenient due to its complexity. Roy proposed the elimination and choice expressing reality (ELECTRE) method based on partial aggregation. This method attempts to rank alternatives with respect to concordance and discordance indices, which are calculated from the data in a decision table [10]. However, this method involves an additional boundary value, and the rating of an alternative is determined by the size of this boundary value, which does not contain the correct value [11].
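The contrast between the WPM and WSM scoring rules described above can be seen in a few lines. The weights and scores below are illustrative values chosen for the example, not data from any cited study:

```python
# Two alternatives scored on three benefit criteria (illustrative values,
# assumed already normalized to comparable scales), with criterion weights.
weights = [0.5, 0.3, 0.2]
a1 = [0.8, 0.6, 0.9]
a2 = [0.7, 0.9, 0.6]

def wsm(scores):
    """Weighted sum model: Fishburn's 'additive utility' assumption."""
    return sum(w * s for w, s in zip(weights, scores))

def wpm(scores):
    """Weighted product model: multiplies per-criterion ratios raised to
    their weights, so the score is dimensionless."""
    result = 1.0
    for w, s in zip(weights, scores):
        result *= s ** w
    return result

print("WSM:", wsm(a1), wsm(a2))
print("WPM:", wpm(a1), wpm(a2))
```

The additive score is only meaningful when the criteria share comparable units, which is exactly the limitation of the WSM noted above; the multiplicative WPM score sidesteps it by working with dimensionless ratios.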

Further studies have focused on improving MCDM methods. The improved ELECTRE III/IV method was implemented to select the optimal location of a logistics center in Poland [3]. This method uses binary outranking relationships. The dataset applied in this method consists of a final set of alternatives, a family of criteria, and preferences offered by decision makers [12]. However, this method is time consuming, since it requires sophisticated application, and it may be incapable of identifying a preferred solution [13].

The multicriteria optimization and compromise solution (VIKOR) technique was used to select the optimal location of a distribution center for security materials. This method handles inconsistent and incommensurable criteria (attributes with different units). A compromise solution can thus be achieved for conflict resolution, while the decision maker aims to find the solution that is closest to the ideal one; the alternatives are estimated in line with the established criteria [14]. Nevertheless, this method has numerous limitations, such as the need to correlate criteria, the uncertainty of weights obtained using only objective or subjective methods, and the possibility of an alternative being close to the ideal point and the nadir point at the same time [15].

Atanassov interval-valued intuitionistic fuzzy sets (AIVIFS) were applied to select the location of a production plant in Serbia [16]. This methodology is based on the values of its membership and nonmembership functions, which are represented as intervals instead of exact numbers [17]. However, this method uses max–min–max composition losses of information, because the composition neglects most values except for extreme ones [18].

The SWARA-WASPAS method was used to select the optimal location of a shopping mall in Iran [19]. The idea of step-wise weight assessment ratio analysis (SWARA) is to determine relative importance from inputs provided by experts [20]. The weighted aggregated sum product assessment (WASPAS) method is based on the combination of a weighted sum model (WSM) and a weighted product model (WPM) [21]. Nevertheless, the WASPAS method has several drawbacks, such as an unbalanced increase in the value of the objective function caused by the linear WPM function. This is significant for initial decision matrices that contain boundary values of the individual elements [22].

The combination of evaluation based on the distance from the average solution (EDAS) and weighted aggregated sum product assessment with normalization (WASPAS-N) methods was applied to select a teahouse location in China [23]. The EDAS method obtains the best alternative related to the distance from the average solution [24]. The WASPAS-N method aggregates the normalized values of the decision matrix by applying the weighting related to the criteria involving arithmetic and geometric means [25]. Nevertheless, the EDAS method cannot be applied in stochastic MCDM problems involving different distribution laws.

The fuzzy technique for order of preference by similarity to ideal solution (TOPSIS) was used to select the optimal locations of shopping malls in Turkey [26]. This method selects the alternatives that have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution at the same time [27,28]. However, it requires numeric attribute values that monotonically increase or decrease and have comparable units that could potentially complicate the data collection [29].
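The closeness-to-ideal idea behind TOPSIS can be sketched in crisp form as follows; the fuzzy extension used in [26] applies fuzzy arithmetic on top of the same construction. The decision matrix and weights are illustrative, and all criteria are assumed to be benefit-type:

```python
import math

# Decision matrix: rows = alternatives, columns = benefit criteria (illustrative)
X = [[7.0, 9.0, 8.0],
     [8.0, 7.0, 6.0],
     [9.0, 6.0, 7.0]]
w = [0.4, 0.35, 0.25]

# Vector normalization of each column, then weighting
norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(3)]
V = [[w[j] * row[j] / norms[j] for j in range(3)] for row in X]

# Positive and negative ideal solutions (benefit criteria: max is ideal)
v_pos = [max(col) for col in zip(*V)]
v_neg = [min(col) for col in zip(*V)]

def closeness(v):
    """Relative closeness: large when near the ideal, far from the anti-ideal."""
    d_pos = math.dist(v, v_pos)   # distance to the positive ideal solution
    d_neg = math.dist(v, v_neg)   # distance to the negative ideal solution
    return d_neg / (d_pos + d_neg)

scores = [closeness(v) for v in V]
best = max(range(3), key=scores.__getitem__)
print("closeness scores:", scores, "best alternative index:", best)
```

The score lies in [0, 1] by construction, which illustrates the requirement noted above for comparable, monotonic attribute values: the distances are only meaningful after normalization.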

The combination of the TOPSIS technique and multichoice goal programming (MCGP) was applied to select the optimal location of a logistics center for the airline industry [30]. The concept of the proposed combination lies in the implementation of the obtained criteria weights using TOPSIS into each goal of MCGP. The idea of the MCGP methodology is based on the application of multiple aspiration levels for their problems, where these levels are categorized as "more appropriate" or "less appropriate" [31]. However, if each objective's aspiration level is represented by the continuous decision variable, which could range between lower and upper bounds, this approach does not provide the option for decision makers to control the bounds of the interval aspiration level [32].

The fuzzy data envelopment analysis (DEA) method was proposed to predict the results and efficiency of alternative control actions for train dispatchers to prevent potential disturbances on a railway line [33]. This nonparametric method analyzes the relative efficiency of decision-making units (DMUs) based on numerous inputs and outputs [34]. Nevertheless, this method is inadequate in ranking efficient DMUs with fuzzy numbers, and it is based on the self-evaluation of DMUs [35].

To overcome the shortcomings of the reviewed FLP studies and to provide a case study, we propose the application of the DEMATEL-MAIRCA method. Since the diverse factors affecting the locations of logistics facilities make the FLP a multidisciplinary problem requiring a complex selection procedure, an MCDM method is appropriate. This makes it possible to select the potential precandidate cities for the international logistics centers in China.

### **3. The Hybrid DEMATEL-MAIRCA Model and Case Study**

The proposed DEMATEL-MAIRCA model was developed by the Serbian researcher Dr. Dragan S. Pamucar [36]. The applied MCDM model is a hybrid of two techniques. The fuzzy DEMATEL model collects expert knowledge to capture the causal relationships between strategic criteria [37,38]. The model is especially practical and useful for visualizing the structure of complicated causal relationships with matrices or digraphs [39,40]. In other words, this method identifies the causal relationships among the selected criteria. The multiattributive ideal-real comparative analysis (MAIRCA) compares the theoretical and empirical alternative ratings [41], evaluates the alternatives, and selects precandidate cities for allocating the CILC. The phases of the DEMATEL-MAIRCA method are presented in Figure 1.

**Figure 1.** The phases of the hybrid DEMATEL-MAIRCA model.

Pamucar proved that the proposed method provides lower instability in alternative rankings compared with traditional methods [36]. Moreover, the application of the MAIRCA model has several advantages. Firstly, this method has greater stability compared with other methods, such as TOPSIS or ELECTRE [36]. Secondly, this technique provides a different criteria normalization method; it has been proven that MCDM techniques applying a linear model of input data normalization are more stable and rank consistently in the sensitivity analysis. Thirdly, the MAIRCA method is simpler from a mathematical perspective. It also provides solution stability and can be hybridized with other MCDM techniques [2]. In order to determine the causal relationships among the selected criteria, we propose the application of the fuzzy DEMATEL method.

Essentially, we aim to select precandidate Chinese cities, which are evaluated and compared with each other based on the selected criteria. The alternatives are represented as vectors, where each element gives the value of the *i*-th alternative by the *j*-th criterion. Since the values of the criteria affect the final ranking of the alternatives, each criterion is assigned a weight coefficient that reflects its relative importance in estimating the alternatives. Since we need to establish the relationships among the criteria, the fuzzy DEMATEL method is applied. The algorithm of the fuzzy DEMATEL method is presented in Figure 2.

$$\widetilde{d}_{ij} = \frac{\widetilde{z}_{ij}}{\widetilde{R}} = \left( \frac{z_{ij}^{(l)}}{r^{(l)}}, \frac{z_{ij}^{(s)}}{r^{(s)}}, \frac{z_{ij}^{(r)}}{r^{(r)}} \right), \quad \text{where} \quad \widetilde{R} = \max_{i}\left(\sum_{j=1}^{n} \widetilde{z}_{ij}\right) = \left(r^{(l)}, r^{(s)}, r^{(r)}\right)$$

$$\widetilde{R}_j = \sum_{i=1}^{n} \widetilde{t}_{ij}, \quad j = 1, 2, \ldots, n$$

$$\widetilde{W}_i = \left[ \left( \widetilde{D}_i + \widetilde{R}_i \right)^2 + \left( \widetilde{D}_i - \widetilde{R}_i \right)^2 \right]^{1/2}, \qquad W_i = \left[ \left( W_i^{(r)} - W_i^{(l)} \right) + \left( W_i^{(s)} - W_i^{(l)} \right) \right] \cdot \frac{1}{3} + W_i^{(l)}$$

$$\omega_i = \frac{W_i}{\sum_{i=1}^{n} W_i}$$

**Figure 2.** The algorithm of the fuzzy DEMATEL method.

The first step was the collection of the expert scores and the calculation of the average matrix *Z*. This step involved the evaluation of the impact level between the selected criteria by the participating experts. Each expert's opinion is represented as a non-negative matrix, and the impacts of the criteria on each other are expressed by linguistic expressions. To list the pairwise comparisons, these linguistic expressions are mapped to triangular fuzzy numbers. Finally, all experts' opinions are aggregated into the average matrix *Z*.

Having calculated the elements of the matrix *Z*, we could determine the elements of the normalized initial direct-relation matrix *D*, obtained by dividing each element of *Z* by the maximum row sum of *Z*. The next step is the calculation of the total relation matrix *T* and the summation of its rows and columns. The separate summation of the rows and columns of the sub-matrices *T*1, *T*2 and *T*<sup>3</sup> yields the fuzzy numbers *Di* and *Ri*. Having obtained the *Di* and *Ri* values, the criterion weights were calculated from the fuzzy values of the weight coefficients. To facilitate the normalization of the weight coefficients, their defuzzified values were applied prior to normalization.
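The DEMATEL chain described above (normalize *Z*, compute the total relation matrix, sum rows and columns, derive weights) can be sketched in crisp (defuzzified) form; the fuzzy variant applies the same operations component-wise to the (l, s, r) components of the triangular numbers. The matrix below is an illustrative expert-score example, not data from the paper:

```python
import numpy as np

# Illustrative defuzzified average matrix Z of pairwise criterion influences
Z = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

# Normalized direct-relation matrix: divide by the maximum row sum of Z
D = Z / Z.sum(axis=1).max()

# Total relation matrix T = D (I - D)^(-1), summing all direct and
# indirect influence paths
T = D @ np.linalg.inv(np.eye(3) - D)

Di = T.sum(axis=1)   # row sums: influence given by criterion i
Ri = T.sum(axis=0)   # column sums: influence received by criterion i
# Prominence-based weight, matching W_i = [(D_i + R_i)^2 + (D_i - R_i)^2]^(1/2)
W = np.sqrt((Di + Ri) ** 2 + (Di - Ri) ** 2)
w = W / W.sum()      # normalized criterion weights
print("criterion weights:", np.round(w, 3))
```

The resulting `w` vector is what the subsequent MAIRCA stage consumes as its criterion weights.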

Having obtained the weights of the selected criteria, we applied the MAIRCA model. The purpose of the MAIRCA model is to evaluate the gap between the ideal and empirical ratings. To identify the total gap for each alternative, the gaps over all criteria are summed. Finally, the ranking of the alternatives is obtained, where the best-ranked alternative has the smallest total gap value; the alternatives with the smallest total gap values are closest to the ideal ratings. To solve a decision problem with the MAIRCA method, after determining the alternatives and the related criteria, the following steps are performed. The algorithm of the MAIRCA method is presented in Figure 3.

$$P\_{A\_i} = \frac{1}{m}; \quad \sum\_{i=1}^{m} P\_{A\_i} = 1, \quad i = 1, 2, \dots, m$$

$$T\_p = \begin{bmatrix} t\_{p1} & t\_{p2} & \dots & t\_{pn} \end{bmatrix} = \begin{bmatrix} P\_{A\_i} \cdot w\_1 & P\_{A\_i} \cdot w\_2 & \dots & P\_{A\_i} \cdot w\_n \end{bmatrix}$$

$$A\_{D,1-j} = \left| \frac{Q\_j - Q\_1}{Q\_m} \right|, \quad j = 2, 3, \dots, m$$

$$R\_{\text{final},j} = \begin{cases} A\_{D,1-j} \ge I\_D \Rightarrow R\_{\text{final},j} = R\_{\text{initial},j}, \\ A\_{D,1-j} < I\_D \Rightarrow R\_{\text{final},j} = R\_{\text{initial},1} \end{cases}$$

**Figure 3.** The algorithm of the MAIRCA method.

The first step presents the formulation of the initial decision-making matrix *X*, which contains the criteria values for each alternative. The criteria of matrix *X* can be quantitative or qualitative. The quantitative values of the criteria in the matrix *X* are determined by quantifying the real indicators for the selected criteria, while the qualitative values are obtained from the choices made by the decision makers. If the study involves a large number of experts, we propose aggregating their opinions.

The second step estimates the preferences according to which the alternatives are selected. This selection is based on the assumption of indifferent decision makers, meaning that there is no preference for any particular alternative: the decision maker is indifferent both to selecting any specific alternative and to the process of selection.

The third step is the computation of the theoretical ratings matrix *Tp*, whose dimensions equal the total number of alternatives by the total number of criteria. The theoretical ratings matrix is calculated by multiplying the preferences for selecting the alternatives by the criterion weights. These decision-maker preferences are the same for all alternatives, since the decision maker is indifferent in the initial selection of the alternatives.

The calculation of the real ratings matrix *Tr* involves multiplying the elements of the theoretical ratings matrix *Tp* and the elements of the initial decision-making matrix *X*.

The fifth step involves calculating the total gap matrix *G*. The *G* matrix is calculated as the difference between the theoretical and real ratings. In other words, it is the gap between the theoretical ratings matrix *Tp* and the real ratings matrix *Tr*.

The sixth and seventh steps obtain the final values of the criteria functions *Qi* for the alternatives and the final ranking of the alternatives. To obtain the values of the criteria functions, we sum the gaps for each alternative; in other words, we sum the elements of the *G* matrix by column.
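The seven steps above can be condensed into a short crisp sketch; the decision matrix, weights, and criterion types below are hypothetical placeholders:

```python
import numpy as np

def mairca(X, w, types):
    """MAIRCA sketch: returns the total gap Q per alternative (lower = better).

    X     : m x n decision matrix (alternatives in rows, criteria in columns)
    w     : n criterion weights summing to 1
    types : '+' for benefit criteria, '-' for cost criteria
    Assumes no criterion column is constant (nonzero range for normalization).
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = 1.0 / m                             # indifferent preference per alternative
    Tp = P * np.asarray(w, dtype=float)     # theoretical ratings (same for each row)
    lo, hi = X.min(axis=0), X.max(axis=0)
    N = (X - lo) / (hi - lo)                # benefit-type linear normalization
    for j, t in enumerate(types):
        if t == '-':
            N[:, j] = 1.0 - N[:, j]         # invert cost-type criteria
    Tr = Tp * N                             # real ratings matrix
    G = Tp - Tr                             # gap matrix
    return G.sum(axis=1)                    # total gap per alternative

# Hypothetical 3 alternatives x 2 criteria (benefit, cost)
Q = mairca([[5151.59, 7.0], [9860.00, 3.0], [4209.28, 9.0]], [0.6, 0.4], ['+', '-'])
print(Q.argsort())  # alternatives ordered from smallest to largest total gap
```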

### **4. Case Study**

To identify the criteria affecting the CILC location [3,42], presented in Table 1, we conducted interviews with international logistics companies and local governments, and surveyed scholars in the fields of logistics, supply chain management, and transportation.


**Table 1.** Criteria used to select precandidate cities in order to allocate China Railway Express (CR Express) international logistics centers.



The preliminary selection of the potential precandidate cities is based on the research presented in [4], covering cities in which the CR Express already operates, cities that have been selected for further development by Chinese national strategic policy, and capital cities of provinces. The selected potential precandidate cities are Guangzhou, Changchun, Changsha, Chengdu, Chongqing, Guiyang, Harbin, Hefei, Hohhot, Kunming, Lanzhou, Liuzhou, Nanchang, Ningbo, Qingdao, Shanghai, Shenyang, Shijiazhuang, Suzhou, Taiyuan, Tianjin, Urumqi, Wuhan, Xiamen, Xi'an, Yinchuan, and Zhengzhou.

The initial step of the DEMATEL method involves applying the triangular fuzzy scale presented in Table 2 to evaluate the impacts between criteria.

**Table 2.** Triangular fuzzy scale used to estimate the impacts between criteria.


The compiled surveys produced eight expert matrices, and the preferences of the experts were aggregated into the average matrix *Z*, shown in Table A1. The elements of the initial direct-relation matrix *D* are normalized by dividing each element of the matrix *Z* by the maximum row sum of the aggregated experts' opinions.

In order to derive Table 3, the elements of matrix *T* illustrated in Table 2 are summed by rows and columns.


**Table 3.** The ranking of alternatives using the MAIRCA method.

Table A2 demonstrates the aggregated values of matrix T by rows (*Di*), columns (*Ri*), and the obtained weights of each criterion (*wi*). Then, we estimated the alternatives in Table A4 and selected them by applying the MAIRCA method. In order to estimate the alternatives by using the qualitative criteria, we applied the linguistic fuzzy scale.

Table A4 presents the criteria, which are categorized in the following way: Max relates to profit-type criteria, where the highest values are favored, while Min relates to cost-type criteria, where the lowest values are favored. Having formulated the initial decision-making matrix in Table A5, the preferences of experts for the selected alternatives *PAi* are obtained by the formula *PAi* = 1/m = 1/27 = 0.037, where m is the total number of potential precandidate cities. Then, the elements of the theoretical ratings matrix (*Tp*) are calculated:

$$t\_{p42} = P\_{A4} \cdot w\_2 = 0.037 \cdot 0.060 = 0.00198$$

As the theoretical ratings matrix (*Tp*) has been derived, we can calculate the real ratings matrix (*Tr*). The elements of the real ratings matrix presented in Table A6 are obtained by multiplying the elements of the theoretical ratings matrix (*Tp*) by the normalized elements of the initial decision-making matrix (*X*). For example, the element of the real ratings matrix at position *t*r42 is obtained with the following formula:

$$t\_{r42} = t\_{p42} \cdot \left(\frac{x\_{ij} - x\_j^-}{x\_j^+ - x\_j^-}\right) = 0.00198 \cdot \left(\frac{5151.59 - 4209.28}{9860.00 - 4209.28}\right) = 0.00033$$

The elements of the total gap matrix (*G*) presented in Table A7 are calculated as the difference (gap) between theoretical ratings (*tpij*) and real ratings (*trij*). The element of the total gap matrix at position *g*<sup>42</sup> is determined with the following formula:

$$g\_{42} = t\_{p42} - t\_{r42} = 0.00198 - 0.00033 = 0.00165$$

The gap for alternative *A*4 with respect to criterion *U*2 is *g*42 = 0.00165. Regarding criterion *U*2, an alternative is ideal when *t*pi2 = *t*ri2 (i.e., *g*i2 = 0.00), while the anti-ideal alternative satisfies *t*ri2 = 0 (i.e., *g*i2 = *t*pi2). Consequently, with respect to criterion *U*2, alternative *A*4 is not the ideal alternative (*A*i +). Furthermore, alternative *A*4 is closer to the ideal alternative than to the anti-ideal alternative, since its distance from the ideal alternative is *g*42 = 0.0084.
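The worked chain for element (4, 2) can be checked numerically, starting from the theoretical rating *t*p42 = 0.00198 reported in the text:

```python
# Reproducing the worked example for element (4, 2), using the
# theoretical rating t_p42 = 0.00198 reported in the text.
t_p42 = 0.00198
x, x_min, x_max = 5151.59, 4209.28, 9860.00

t_r42 = t_p42 * (x - x_min) / (x_max - x_min)   # real rating
g_42 = t_p42 - t_r42                            # gap

print(round(t_r42, 5), round(g_42, 5))  # 0.00033 0.00165
```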

To obtain the values of the criteria functions (*Qi*) for the alternatives presented in Table 3, we summed the gaps (*gij*) for each alternative; in other words, we summed the elements of matrix (*G*) by columns. The preferred alternative has the lowest possible value of the total gap (in this case, alternative no. 27).

The presented methodology allows us to select the precandidate cities for the CR Express international logistics centers. One of the key features of the applied method is the ability to scale the model, since a large number of criteria with different units were applied. Moreover, this method is very useful for different stakeholders, such as logistics managers and local governments, since it is capable of handling large-scale problems with a practically unlimited number of alternatives. Finally, the calculation is simple and does not require complex computer programs.

The preliminary results presented in Table 3 illustrate that both the number of cities and their order changed. Figure 4 demonstrates that three areas have fewer precandidate cities after the applied methodology. Next, we investigate the reduced number of cities from different perspectives.

**Figure 4.** The location of precandidate cities for the CR Express international logistics centers.

Firstly, from the infrastructure point of view, the reduced number of cities could potentially improve the throughput of the specific CR Express rail routes and access roads among the CILC. Moreover, as the optimized number of cities assumes increased traffic volumes, industrial enterprises should be located closer to CILC. In other words, we assume the rapid development of the industry.

Secondly, from the social and economic perspectives, the optimal number of cities could reduce the amount of the subsidies provided by the local government for the development of the logistics infrastructure. This means that local authorities could redirect this cash flow to develop other sectors, such as healthcare and education. Furthermore, since the reduced number of cities implies increased traffic volumes, this could potentially increase the workforce and the salaries of CILC workers.

Finally, in view of increasing environmental requirements in China, the optimized number of cities minimizes the volume of solid waste produced by the CILC, the CO2 emissions produced by trucks, and the servicing requirements for the CILC, and it reduces noise pollution by reducing the amount of technological equipment in operation at the CILC. In addition, the number of lost containers could be reduced, since having a minimized number of cities reduces container movements.

### **5. Comparison and Discussion**

In the previous section, we showed the applicability of the DEMATEL-MAIRCA method for solving the facility location problem. However, in order to prove the validity of the proposed methodology, we will compare this method with other modern methodologies, such as multiobjective optimization on the basis of ratio analysis (MOORA), complex proportional assessment (COPRAS), and multiattribute border approximation area comparison (MABAC) methods.

The MABAC method was first formulated by Pamucar and Cirovic in 2015 [43]. This method computes the distance between each alternative and the border approximation area (BAA) [44,45].

The COPRAS method was developed by Zavadskas, Kaklauskas, and Sarka in 1994. This method compares the alternatives and determines their priorities under the conflicting criteria by taking into account the criteria weights [46].

The MOORA method was firstly introduced by Brauers in 2004 [47]. This method involves multicriteria or multiattribute optimization and allows simultaneous optimization of two or more conflicting attributes (objectives) subject to certain constraints.

Based on computational experiments, we obtained the alternative rankings under each method. In order to compare the results of the four different approaches (Table 4), we used Kendall's tau correlation coefficient (KTC) of ranks, a valuable and significant indicator for investigating the relation between the results obtained from different approaches [24]. The KTC is appropriate when a study contains ordinal or ranked variables, as is the case here, and it has been shown to be more stable and more efficient than Spearman's rank correlation coefficient (SCC) [48]. The KTC was applied in the present study to determine the statistical difference of importance among the obtained ranks. The correlation between the DEMATEL-MAIRCA method and the other methodologies is presented in Table 4.
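For rankings without ties, Kendall's tau can be computed directly from the numbers of concordant and discordant pairs; the two rankings below are hypothetical and only illustrate the calculation, not the paper's Table 4 data:

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall's tau-a for two equal-length rankings without ties."""
    pairs = list(combinations(range(len(r1)), 2))
    concordant = sum(1 for i, j in pairs
                     if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0)
    discordant = len(pairs) - concordant
    return (concordant - discordant) / len(pairs)

# Hypothetical ranks of five alternatives under two MCDM methods
rank_a = [1, 2, 3, 4, 5]
rank_b = [1, 2, 4, 3, 5]
print(kendall_tau(rank_a, rank_b))  # 0.8
```

One swapped pair out of ten yields tau = (9 − 1)/10 = 0.8; identical rankings give 1.0 and fully reversed rankings give −1.0.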

**Table 4.** The correlation between the DEMATEL-MAIRCA method and other methodologies. Note: KTC = Kendall's tau correlation coefficient; MOORA = multiobjective optimization on the basis of ratio analysis; MABAC = multiattribute border approximation area comparison; COPRAS = complex proportional assessment.




From the total calculated statistical coefficient of correlation (0.987), we conclude that the obtained ranks are highly correlated with each other. All the KTC values in Table 4 are considerably greater than 0.90, with an average value of 0.984, which according to Ziemba [49] indicates a high correlation between the proposed DEMATEL-MAIRCA method and the other MCDM techniques. It can therefore be concluded that the obtained ranking is acceptable and reliable; in other words, the provided comparison proves that the proposed methodology is adequate and valid.

### **6. Conclusions**

This study presents the application of the hybrid DEMATEL-MAIRCA model to select the optimal locations of CILC. The DEMATEL method was applied to identify the weights of the criteria. The MAIRCA method was applied to evaluate the alternatives and select the locations of precandidate cities for CILC.

The main advantage of the applied methodology is its universality, since it can be applied to other decision-making problems involving MCDM methods. Moreover, the applied technique can handle a large number of alternatives and criteria. Furthermore, it clearly ranks the numerical values, allowing an easier understanding of the results, and can be applied to both qualitative and quantitative criteria. Finally, the proposed method is robust: the DEMATEL-MAIRCA method is more stable under risk and uncertainty than other techniques, meaning that it is less sensitive to changes in the weights of the selected criteria and the related changes in alternative ranking.

The applied methodology is a practical tool for different stakeholders, such as logistics managers and local government. On the one hand, the ranking of the cities provides the possibility of finding optimal CILC locations and reduces rail transportation costs. On the other hand, it also affects the environmental aspects [50], since the number of potential precandidate cities is minimized. The case study shows the difference between the current CR Express railway network in China and the obtained optimal network. The obtained optimal CR Express railway network includes precandidate cities in areas where the freight transportation flows can be aggregated and economies of scale can be achieved. The provided comparison between the proposed methodology and other MCDM techniques shows a strong correlation, which demonstrates the validity of the DEMATEL-MAIRCA method.

Several limitations should be considered. Firstly, since the present study aggregates experts' opinions, data triangulation should be provided to investigate the impacts of different criteria. Secondly, as there has been an imbalance in cargo flow between China and Europe, the criterion related to Chinese customers' demand for the CR Express service should be considered. Finally, in order to provide a comprehensive study, the same proposed approach should be applied to select the optimal location of CILC in Europe.

Further development of the study should include the application of the selected precandidate cities for the design of an optimal railway network between China and Europe. In order to design the optimal railway network, we propose the development of a mixed-integer mathematical model that would be applied in the AnyLogistix software. This software is based on supply chain optimization. By using the proposed software and scenario approaches, and including stochastic variables such as demand and transportation time, an optimal CR Express railway network can be designed.

**Author Contributions:** Conceptualization, D.M., H.H. and D.P.; methodology, D.M. and D.P.; validation, D.M., H.Z. and D.P.; investigation, D.M. and H.Z.; data curation, D.M. and H.Z.; writing—original draft preparation, D.M., H.H., H.Z. and D.P.; writing—review and editing, D.M., H.H., H.Z. and D.P.; visualization, D.M., H.Z. and D.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.


**Appendix A**

**Table A1.** Average matrix (Z).



### *Symmetry* **2020**, *12*, 143













### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article*

## **Unified Fuzzy Divergence Measures with Multi-Criteria Decision Making Problems for Sustainable Planning of an E-Waste Recycling Job Selection**

**Pratibha Rani 1, Kannan Govindan 2,\*, Arunodaya Raj Mishra 3, Abbas Mardani 4,5, Melfi Alrasheedi <sup>6</sup> and D. S. Hooda <sup>7</sup>**


Received: 30 October 2019; Accepted: 27 December 2019; Published: 2 January 2020

**Abstract:** In the literature of information theory and fuzzy set doctrine, there exist various prominent measures of divergence; each possesses its own merits, demerits, and disciplines of application. A divergence measure is a tool to compute the discrimination between two objects. In particular, the idea of a divergence measure for fuzzy sets is significant, since it has applications in several areas, viz., process control, decision making, image segmentation, and pattern recognition. In this paper, some new fuzzy divergence measures, which are generalizations of probabilistic divergence measures, are introduced. Next, we review two different generalizations of the following measures: firstly, the directed divergence (Kullback–Leibler or Jeffrey invariant) and, secondly, the Jensen difference divergence. Based on these measures, we develop a class of unified divergence measures for fuzzy sets (FSs). Then, a method based on divergence measures for FSs is proposed to evaluate multi-criteria decision-making (MCDM) problems under a fuzzy atmosphere. Lastly, an illustrative example of the recycling job selection problem in sustainable planning of e-waste is presented to demonstrate the reasonableness and usefulness of the developed method.

**Keywords:** divergence measure; entropy; fuzzy set; multi-criteria decision making; recycling job selection; e-waste

### **1. Introduction**

The doctrine of fuzzy sets (FSs) and fuzzy logic pioneered by Zadeh [1] has been employed to model the uncertainty, lack of information, and ambiguity arising in decision making, logical programming, image processing, process control, pattern recognition, medical diagnosis, etc. Zadeh [2] defined the concept of fuzzy entropy as an essential tool for quantifying fuzzy information. Corresponding to Shannon's entropy, De Luca and Termini [3] established a measure of entropy and originated the essential axioms that a fuzzy entropy should fulfill. Afterward, Pal and Pal [4] introduced the exponential fuzzy entropy. Moreover, the fuzzy divergence measure, as a prominent tool to evaluate the degree of discrimination between FSs, has received much attention in recent decades; however, constructing a divergence measure is not easy work. First, Bhandari and Pal [5] defined the measure of directed divergence in terms of axioms for FSs, based on the directed divergence of [6]. Shang and Jiang [7] provided an altered form of the Bhandari and Pal [5] measure based on [8]. Next, Montes et al. [9] improved the axiomatic definition of a divergence measure for FSs with various properties. They mentioned that well-known functions described in the literature to compute the discrimination between FSs are, indeed, divergences. Conversely, a divergence is also a measure of dissimilarity, and it satisfies a set of desirable properties that are constructive for evaluating discrimination between FSs.

In the literature, various information measures have been proposed, each satisfying certain axiomatic or heuristic postulates that lead to extensive applications in different disciplines. A conventional categorization distinguishes these measures as parametric, non-parametric, and entropy-type measures of information [10]. Parametric measures determine the amount of information delivered by an object regarding an unknown parameter α and are functions of α; the renowned measure of this type is Fisher's [11] measure of information. Non-parametric measures quantify the amount of information delivered by an object for discriminating an object *P* against an object *Q*, or for determining the distance or similarity between *P* and *Q*; the Kullback–Leibler (K–L) [6], Bhandari and Pal [5], and Fan and Xie [12] measures are prominent non-parametric measures. Entropy measures assess the amount of information enclosed in a distribution, that is, the degree of fuzziness related to the objectives; renowned examples are the De Luca and Termini [3] and Pal and Pal [4] measures, among others [13–16].

In recent years, several previously published papers have highlighted the importance of decision-making methods in different application areas [17–20]. However, in general, the criteria involved in multi-criteria decision-making (MCDM) conflict with each other, and therefore it is difficult to find a solution satisfying all criteria at the same time. A typical illustration is the relationship between development prospects and environmental protection. An effective solution needs to be capable of maximizing both objectives, although in most circumstances such an option is not feasible. The Pareto efficient solution was the first to capture such circumstances, holding that the enhancement of one criterion causes the worsening of at least one other criterion [21]. Consistent with compromise programming [22], a large number of approaches have been developed in the literature for handling MCDM-related problems [18], for instance, methods such as TOPSIS, ELECTRE, PROMETHEE, VIKOR, etc.

### *Motivation and Novelty*

The problem of e-waste needs to be solved effectively and immediately based on sustainability principles, with the aim of achieving circular economy objectives, as mentioned earlier [23]. The existing literature has been comprehensively reviewed, and numerous experts in the field have been interviewed, in order to find out how e-waste is currently managed across the world [24–28]. In general, e-waste management can be classified as improper or proper [13]. Improper e-waste management refers to the utilization of several recycling technologies that lead to social and environmental degradation, hence bringing about negative sustainability implications. On the other hand, proper e-waste management is often implemented only in developed countries, since they have access to the necessary infrastructure. The aim of the paper is to understand why a number of firms and organizations have not adopted the policy measures pertaining to e-waste management, especially taking into account the fact that the electronics industry plays one of the most significant roles in the economy, and that many public health problems accompany the inappropriate disposal of e-waste.

Sustainable planning of e-waste issues has received much attention in waste management, but there have been very few studies for the practice of recycling partner job selection [29,30]. Due to multiple criteria, the recycling job selection is considered as an MCDM problem concerning both qualitative and quantitative uncertain information. In order to handle the recycling partner job selection problem in e-waste management, we present a new MCDM approach under fuzzy environment. The objectives of the present study are listed in the following points:


The structure of this paper is organized as follows. Section 2 provides the fundamental concepts of FSs and fuzzy information measures used in the proposed method. Section 3 proposes a novel method based on a new divergence measure for FSs. Section 4 presents the analysis of the proposed method for e-waste recycling job selection. Section 5 presents the results of the proposed method and its comparison with other existing methods. Section 6 discusses the conclusion, limitations, and recommendations for further work.

### **2. Preliminaries**

This section first recalls various entropy and divergence measures for probability distributions. We also discuss the concept of FSs and fuzzy information measures.

For any probability distribution *S* = (*s*1,*s*2, ... ,*sr*) ∈ Δ*r*, Shannon [31] pioneered the entropy as follows:

$$H(S) = -\sum\_{i=1}^{r} s\_i \ln s\_i. \tag{1}$$

The Rényi [32] entropy is given by

$$H\_{\text{Renyi}}(S) = \frac{1}{1 - \alpha} \ln \left( \sum\_{i=1}^{r} s\_i^{\alpha} \right) \tag{2}$$

where α > 0, α ≠ 1.

Pal and Pal [4] pioneered entropy on exponential function as

$$H\_{\text{Pal}}(S) = \sum\_{i=1}^{r} s\_i e^{(1-s\_i)} - 1. \tag{3}$$
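A minimal sketch of the three entropies, Equations (1)–(3); the probability vector is illustrative, and the Rényi entropy is written in its standard form with the 1/(1 − α) factor:

```python
import math

def shannon(S):
    """Shannon entropy, Eq. (1)."""
    return -sum(s * math.log(s) for s in S)

def renyi(S, alpha):
    """Renyi entropy (standard 1/(1 - alpha) form); alpha > 0, alpha != 1."""
    return math.log(sum(s ** alpha for s in S)) / (1.0 - alpha)

def pal(S):
    """Pal-Pal exponential entropy, Eq. (3)."""
    return sum(s * math.exp(1.0 - s) for s in S) - 1.0

S = [0.5, 0.3, 0.2]  # illustrative probability distribution
print(shannon(S), renyi(S, 2.0), pal(S))
```

For the uniform two-point distribution, both the Shannon and the Rényi (α = 2) entropies equal ln 2, a convenient sanity check.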

Next, Kullback and Leibler [6] proposed the divergence measure from a probability distribution *S* to a probability distribution *T*, which measures the degree of discrimination and is defined as

$$C\_{KL}(S\|T) = \sum\_{i=1}^{r} s\_i \ln \frac{s\_i}{t\_i}.\tag{4}$$

Here, ln denotes the natural logarithm, used throughout this paper unless otherwise stated. It is well known that *CKL*(*S*||*T*) is nonnegative and additive but not symmetric [33]. To obtain a symmetric measure, one can define the symmetric version, i.e., Jeffrey's invariant [34]:

$$D\_m(S\|T) = C\_{KL}(S\|T) + C\_{KL}(T\|S). \tag{5}$$

Clearly, the divergences in Equations (4) and (5) share most of their properties.
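A short sketch of the directed divergence of Equation (4) and the symmetric Jeffrey form of Equation (5), with illustrative distributions assumed to have strictly positive components:

```python
import math

def kl_divergence(S, T):
    """Directed (Kullback-Leibler) divergence, Eq. (4)."""
    return sum(s * math.log(s / t) for s, t in zip(S, T))

def jeffrey(S, T):
    """Symmetric Jeffrey invariant, Eq. (5): KL(S||T) + KL(T||S)."""
    return kl_divergence(S, T) + kl_divergence(T, S)

S = [0.5, 0.3, 0.2]  # illustrative distributions with positive components
T = [0.4, 0.4, 0.2]
print(jeffrey(S, T))  # symmetric: equals jeffrey(T, S)
```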

The Rényi divergence is associated with the Rényi [32] entropy just as the Kullback–Leibler divergence is associated with Shannon's entropy, and it comes up in many settings:

$$C\_{R}(S\|T) = \frac{1}{\alpha - 1} \ln \sum\_{i=1}^{r} s\_i^{\alpha} t\_i^{1-\alpha}, \tag{6}$$

where α > 0, α ≠ 1.

Lin [8] initiated the Jensen–Shannon divergence for the distributions *S* and *T*, given by

$$D\_{JS}(S\|T) = H\left(\frac{S+T}{2}\right) - \frac{H(S)+H(T)}{2},\tag{7}$$

where *H*(.) is the Shannon entropy shown in (1).

For simplicity, we write

$$R(S\|T) = \frac{1}{2} \left[ C\_{KL}\left(S \Big\| \frac{S+T}{2}\right) + C\_{KL}\left(T \Big\| \frac{S+T}{2}\right) \right].\tag{8}$$

**Definition 1** (Zadeh [1])**.** *Let X* = {*x*1, *x*2, ... , *xn*} *be the finite discourse set. An FS K defined on X is given as*

$$K = \{ (\mathbf{x}\_i, \mu\_K(\mathbf{x}\_i)) : \mu\_K(\mathbf{x}\_i) \in [0, 1]; \ \forall \mathbf{x}\_i \in X \},\tag{9}$$

*where the function* μ*K*(*xi*)(0 ≤ μ*K*(*xi*) ≤ 1) *is the membership degree of xi to K in X*.

Throughout this paper, R = [0, ∞); let *FS*(*X*) be the set of all FSs on *X* and P(*X*) be the set of all crisp sets on the discourse set *X*. μ*K*(*xi*) is the membership function of *K* ∈ *FS*(*X*), and [*a*] is the FS of *X* for which μ[*a*](*xi*) = *a*, ∀ *xi* ∈ *X* (*a* ∈ [0, 1]). For an FS *K*, we use *K<sup>c</sup>* to denote the complement of *K*, i.e., μ*K<sup>c</sup>*(*xi*) = 1 − μ*K*(*xi*), ∀ *xi* ∈ *X*. For FSs *K* and *L*, *K* ∪ *L* is given by μ*K*∪*L*(*xi*) = max{μ*K*(*xi*), μ*L*(*xi*)}, *K* ∩ *L* is defined by μ*K*∩*L*(*xi*) = min{μ*K*(*xi*), μ*L*(*xi*)}, and *K* ⊆ *L* iff μ*K*(*xi*) ≤ μ*L*(*xi*), ∀ *xi* ∈ *X*.

**Definition 2** (Montes, Couso, Gil and Bertoluzza [9])**.** *Let K* = {(*xi*, μ*K*(*xi*)) : *xi* ∈ *X*} *and L* = {(*xi*, μ*L*(*xi*)) : *xi* ∈ *X*} *be two FSs in the finite discourse set X*. *Then, the function Dm* : *FS*(*X*) × *FS*(*X*) → R *is called a divergence measure for FSs if it holds the following axioms:*

(P1). *Dm*(*K*||*L*) = *Dm*(*L*||*K*), (P2). *Dm*(*K*||*L*) = 0 if *K* = *L*, (P3). *Dm*(*K* ∩ *T*||*L* ∩ *T*) ≤ *Dm*(*K*||*L*) for every *T* ∈ *FS*(*X*), (P4). *Dm*(*K* ∪ *T*||*L* ∪ *T*) ≤ *Dm*(*K*||*L*) for every *T* ∈ *FS*(*X*).

Firstly, Bhandari and Pal [5] pioneered a divergence measure for FSs based on the KL divergence measure as follows:

$$CE\_B(K\|L) = \sum\_{i=1}^{r} \left[ \mu\_K(x\_i) \ln \frac{\mu\_K(x\_i)}{\mu\_L(x\_i)} + (1 - \mu\_K(x\_i)) \ln \frac{1 - \mu\_K(x\_i)}{1 - \mu\_L(x\_i)} \right] \tag{10}$$

and symmetric form is given by

$$D\_{mB}(K\|L) = CE\_B(K\|L) + CE\_B(L\|K).\tag{11}$$

Fan and Xie [12] developed exponential divergence as follows:

$$CE\_F(K\|L) = \sum\_{i=1}^{r} \left(1 - (1 - \mu\_K(\mathbf{x}\_i))e^{\left(\mu\_K(\mathbf{x}\_i) - \mu\_L(\mathbf{x}\_i)\right)} - \mu\_K(\mathbf{x}\_i)\, e^{\left(\mu\_L(\mathbf{x}\_i) - \mu\_K(\mathbf{x}\_i)\right)}\right). \tag{12}$$
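The Bhandari–Pal and Fan–Xie measures can be sketched directly from Equations (10) and (12); the membership values are illustrative and assumed to lie strictly in (0, 1) so that the logarithms are defined, and the second logarithm of the Bhandari–Pal term takes 1 − μ*L* in the denominator, following the KL form:

```python
import math

def ce_bhandari(mu_K, mu_L):
    """Bhandari-Pal fuzzy directed divergence in the style of Eq. (10)."""
    return sum(k * math.log(k / l) + (1.0 - k) * math.log((1.0 - k) / (1.0 - l))
               for k, l in zip(mu_K, mu_L))

def ce_fan_xie(mu_K, mu_L):
    """Fan-Xie exponential fuzzy divergence in the style of Eq. (12)."""
    return sum(1.0 - (1.0 - k) * math.exp(k - l) - k * math.exp(l - k)
               for k, l in zip(mu_K, mu_L))

K = [0.2, 0.7, 0.5]  # illustrative membership degrees in (0, 1)
L = [0.3, 0.6, 0.5]
print(ce_bhandari(K, L), ce_fan_xie(K, L))
```

Both measures vanish when *K* = *L*, and in practice the directed forms are symmetrized as in Equation (11).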

Bajaj and Hooda [35] proposed a divergence measure based on Rényi [32] divergence measure as follows:

$$CE\_H(K\|L) = \frac{1}{\alpha - 1} \ln \left\{ \sum\_{i=1}^{r} \left[ \mu\_K^{\alpha}(x\_i) \mu\_L^{1-\alpha}(x\_i) + (1 - \mu\_K(x\_i))^{\alpha} (1 - \mu\_L(x\_i))^{1-\alpha} \right] \right\}, \tag{13}$$

where α > 0, α ≠ 1.

The aim of this review is to give two different parametric generalizations of measures (4), (5), and (7) for FSs and to study their properties and applications. These generalizations are put in the form of a unified expression for FSs. We also develop some new extensions of divergence measures for FSs and apply these measures in information theory, image processing, statistics, and engineering.

### **3. Proposed Method**

From the available literature, it was observed that the existing measures do not incorporate the decision expert (DE) preferences into the measure. Moreover, the above-mentioned measures are of linear order and therefore do not capture the precise nature of the options. In order to exploit the flexibility and efficiency of fuzzy-set criteria, new generalized parametric divergence measures are presented to quantify the degree of fuzziness of a set. For this, novel divergence measures for FSs have been developed, which make the DEs' assessments more consistent and flexible for diverse values of the parameters. These measures are obtained by taking convex linear combinations of the degrees of membership of two FSs. Based on the above-mentioned works, some desirable properties of the developed measures have been studied. Here, the purpose was to work with parametric and non-parametric extensions of symmetric and non-symmetric divergences; a similar variety of divergence measures with parametric generalizations for probability distributions can be found in [36]. It is worth mentioning that developing a generalized divergence by introducing a real parameter makes it possible to unite various existing divergence measures considered separately and yields several new divergences. It offers a vast horizon of divergence measures from which authors can select whichever best fits their research disciplines. Next, we developed a divergence-measure-based method to construct the criterion weights; a criterion with a smaller entropy and a larger cross-entropy needs to be carefully taken into consideration. To reinforce the weight-evaluating approaches and the overall performance values of alternatives, some new divergence measures were initiated, which extend the existing ones.

### *3.1. New Divergence for FSs*

Corresponding to the Kumar and Chhina [10] divergence measure, we propose the following divergence measure for FSs:

$$\begin{split} D\_{m1}(\mathbf{K} \| \mathbf{L}) &= \sum\_{i=1}^{r} \left[ \frac{(\mu\_{\mathbf{K}}(\mathbf{x}\_{i}) + \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))(\mu\_{\mathbf{K}}(\mathbf{x}\_{i}) - \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))^{2}}{\mu\_{\mathbf{K}}(\mathbf{x}\_{i})\mu\_{\mathbf{L}}(\mathbf{x}\_{i})} \right] \ln \left( \frac{(\mu\_{\mathbf{K}}(\mathbf{x}\_{i}) + \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))}{2\sqrt{\mu\_{\mathbf{K}}(\mathbf{x}\_{i})\mu\_{\mathbf{L}}(\mathbf{x}\_{i})}} \right) \\ &+ \frac{(2 - \mu\_{\mathbf{K}}(\mathbf{x}\_{i}) - \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))(\mu\_{\mathbf{K}}(\mathbf{x}\_{i}) - \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))^{2}}{(1 - \mu\_{\mathbf{K}}(\mathbf{x}\_{i}))(1 - \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))} \ln \left( \frac{(2 - \mu\_{\mathbf{K}}(\mathbf{x}\_{i}) - \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))}{2\sqrt{(1 - \mu\_{\mathbf{K}}(\mathbf{x}\_{i}))(1 - \mu\_{\mathbf{L}}(\mathbf{x}\_{i}))}} \right) \right]. \end{split} \tag{14}$$

Measure (14) is described as a symmetric Chi-square, arithmetic, and geometric mean divergence measure for FSs. Consider the function

$$f(x) = \frac{(x+1)(x-1)^{2}}{x} \ln\left(\frac{x+1}{2\sqrt{x}}\right),\tag{15}$$

where *x* ∈ [0, 1]. It may be noted that *f*(*x*) satisfies *f*(*x*) ≥ 0, ∀ *x* ∈ [0, 1], and *f*(1) = 0. Thus, *Dm*1(*K*||*L*) = 0 if *K* = *L*. The convexity of *f*(*x*) ensures that *Dm*1(*K*||*L*) is non-negative, and by construction *Dm*1(*K*||*L*) = *Dm*1(*L*||*K*).

*Symmetry* **2020**, *12*, 90

Corresponding to the Triangular divergence measure [37] for probability distributions, we define the following divergence measure for FSs:

$$D\_{m2}(K\|L) = \sum\_{i=1}^{r} \left[ \frac{\left(\mu\_K(\mathbf{x}\_i) - \mu\_L(\mathbf{x}\_i)\right)^2}{\mu\_K(\mathbf{x}\_i) + \mu\_L(\mathbf{x}\_i)} + \frac{\left(\mu\_L(\mathbf{x}\_i) - \mu\_K(\mathbf{x}\_i)\right)^2}{2 - \mu\_K(\mathbf{x}\_i) - \mu\_L(\mathbf{x}\_i)} \right]. \tag{16}$$

Next, we obtain a divergence inequality presenting the bounds for *Dm*1(*K*||*L*) in terms of *Dm*2(*K*||*L*).

**Theorem 1.** *The measures Dm*1(*K*||*L*) *and Dm*2(*K*||*L*), *defined in (14) and (16), satisfy the inequality*

$$D\_{m1}(K\|L) \le 4\sum\_{i=1}^{r} \left(\frac{\left(\mu\_K(x\_i) - \mu\_L(x\_i)\right)^2}{\sqrt{\mu\_K(x\_i)\mu\_L(x\_i)}} + \frac{\left(\mu\_L(x\_i) - \mu\_K(x\_i)\right)^2}{\sqrt{(1 - \mu\_K(x\_i))(1 - \mu\_L(x\_i))}}\right) - 2\, D\_{m2}(K\|L). \tag{17}$$

**Proof.** Let α, β ∈ [0, 1]. Consider the arithmetic mean (AM), geometric mean (GM), and harmonic mean (HM) of α and β; they satisfy the inequality HM ≤ GM ≤ AM. Now, HM ≤ AM.

Or,

$$\frac{2\alpha\beta}{\alpha+\beta} \le \frac{\alpha+\beta}{2}.$$

Or,

$$
\ln\left(\frac{\alpha+\beta}{2\sqrt{\alpha\beta}}\right) \ge \ln\left(\frac{2\sqrt{\alpha\beta}}{\alpha+\beta}\right).\tag{18}
$$

Multiplying both sides by (α + β)(α − β)²/αβ, we obtain

$$\frac{\left(\alpha+\beta\right)\left(\alpha-\beta\right)^{2}}{\alpha\beta}\ln\left(\frac{\alpha+\beta}{2\sqrt{\alpha\beta}}\right) \ge \frac{\left(\alpha+\beta\right)\left(\alpha-\beta\right)^{2}}{\alpha\beta}\ln\left(\frac{2\sqrt{\alpha\beta}}{\alpha+\beta}\right).\tag{19}$$

From HM ≤ GM, we have 2√(αβ)/(α + β) ≤ 1, and thus

$$\ln\left(\frac{2\sqrt{\alpha\beta}}{\alpha+\beta}\right) = \ln\left(1 + \left(\frac{2\sqrt{\alpha\beta}}{\alpha+\beta} - 1\right)\right) \approx \frac{4\sqrt{\alpha\beta}}{\alpha+\beta} - \frac{2\alpha\beta}{\left(\alpha+\beta\right)^2} - \frac{3}{2}.\tag{20}$$

Now, from (18) and (19), we obtain

$$\frac{\left(\alpha+\beta\right)\left(\alpha-\beta\right)^{2}}{\alpha\beta}\ln\left(\frac{\alpha+\beta}{2\sqrt{\alpha\beta}}\right) \le \frac{4\left(\alpha-\beta\right)^{2}}{\sqrt{\alpha\beta}} - \frac{2\left(\alpha-\beta\right)^{2}}{\left(\alpha+\beta\right)}.$$

Therefore,

$$\begin{split} &\sum\_{i=1}^{r}\left[\frac{(\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i}))(\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i}))^{2}}{\mu\_{K}(x\_{i})\mu\_{L}(x\_{i})}\ln\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2\sqrt{\mu\_{K}(x\_{i})\mu\_{L}(x\_{i})}}\right)\right.\\ &\qquad\left.+\frac{(2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i}))(\mu\_{L}(x\_{i})-\mu\_{K}(x\_{i}))^{2}}{(1-\mu\_{K}(x\_{i}))(1-\mu\_{L}(x\_{i}))}\ln\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2\sqrt{(1-\mu\_{K}(x\_{i}))(1-\mu\_{L}(x\_{i}))}}\right)\right]\\ &\le 4\sum\_{i=1}^{r}\left(\frac{(\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i}))^{2}}{\sqrt{\mu\_{K}(x\_{i})\mu\_{L}(x\_{i})}}+\frac{(\mu\_{L}(x\_{i})-\mu\_{K}(x\_{i}))^{2}}{\sqrt{(1-\mu\_{K}(x\_{i}))(1-\mu\_{L}(x\_{i}))}}\right)\\ &\qquad-2\sum\_{i=1}^{r}\left(\frac{(\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i}))^{2}}{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}+\frac{(\mu\_{L}(x\_{i})-\mu\_{K}(x\_{i}))^{2}}{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}\right). \end{split}\tag{21}$$

Hence

$$D\_{m1}(K\|L) \le 4\sum\_{i=1}^{r} \left(\frac{\left(\mu\_{K}(x\_{i}) - \mu\_{L}(x\_{i})\right)^{2}}{\sqrt{\mu\_{K}(x\_{i})\mu\_{L}(x\_{i})}} + \frac{\left(\mu\_{L}(x\_{i}) - \mu\_{K}(x\_{i})\right)^{2}}{\sqrt{(1 - \mu\_{K}(x\_{i}))(1 - \mu\_{L}(x\_{i}))}}\right) - 2\, D\_{m2}(K\|L). \tag{22}$$
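To make the behavior of these measures concrete, the following minimal Python sketch (an illustration, not part of the original derivation; membership grades are assumed to lie strictly inside (0, 1) so that no denominator in (14) vanishes) evaluates *Dm*1 and *Dm*2 and checks the bound (17) numerically:

```python
import math

def d_m1(mu_K, mu_L):
    """Symmetric Chi-square/arithmetic-geometric mean divergence, Eq. (14)."""
    total = 0.0
    for a, b in zip(mu_K, mu_L):
        total += ((a + b) * (a - b) ** 2 / (a * b)) * \
            math.log((a + b) / (2 * math.sqrt(a * b)))
        total += ((2 - a - b) * (a - b) ** 2 / ((1 - a) * (1 - b))) * \
            math.log((2 - a - b) / (2 * math.sqrt((1 - a) * (1 - b))))
    return total

def d_m2(mu_K, mu_L):
    """Triangular divergence, Eq. (16)."""
    return sum((a - b) ** 2 / (a + b) + (a - b) ** 2 / (2 - a - b)
               for a, b in zip(mu_K, mu_L))

def bound_17(mu_K, mu_L):
    """Right-hand side of the inequality in Theorem 1, Eq. (17)."""
    s = sum((a - b) ** 2 / math.sqrt(a * b)
            + (a - b) ** 2 / math.sqrt((1 - a) * (1 - b))
            for a, b in zip(mu_K, mu_L))
    return 4 * s - 2 * d_m2(mu_K, mu_L)
```

For example, with the (hypothetical) membership vectors K = (0.3, 0.6, 0.8) and L = (0.4, 0.5, 0.7) on a three-element universe, `d_m1` is symmetric, vanishes for K = L, and stays below `bound_17`.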

Based on the Parkash [38] divergence measure, we introduce a divergence measure for FSs as follows:

$$D\_{m3}^{\prime}(K\|L) = \frac{1}{\left(\alpha - \frac{1}{2}\right)} \sum\_{i=1}^{r} \left[ \mu\_{K}(x\_{i}) \left(\alpha + \frac{1}{2}\right)^{\ln\left(\frac{\mu\_{K}(x\_{i})}{\mu\_{L}(x\_{i})}\right)} + (1 - \mu\_{K}(x\_{i})) \left(\alpha + \frac{1}{2}\right)^{\ln\left(\frac{1 - \mu\_{K}(x\_{i})}{1 - \mu\_{L}(x\_{i})}\right)} - 1\right];\ \alpha > 0,\ \alpha \neq \frac{1}{2}.\tag{23}$$

However, it has been pointed out that (23) has a drawback: when μ*L*(*xi*) approaches 0 or 1, its value tends toward infinity. Therefore, the modified version is

$$D\_{m3}^{\*}(K\|L) = \frac{1}{\left(\alpha - \frac{1}{2}\right)} \sum\_{i=1}^{r} \left[ \mu\_{K}(x\_{i}) \left(\alpha + \frac{1}{2}\right)^{\ln\left(\frac{\mu\_{K}(x\_{i})}{(1/2)\left(\mu\_{K}(x\_{i}) + \mu\_{L}(x\_{i})\right)}\right)} + \left(1 - \mu\_{K}(x\_{i})\right) \left(\alpha + \frac{1}{2}\right)^{\ln\left(\frac{1 - \mu\_{K}(x\_{i})}{1 - (1/2)\left(\mu\_{K}(x\_{i}) + \mu\_{L}(x\_{i})\right)}\right)} - 1\right];\ \alpha > 0,\ \alpha \neq \frac{1}{2}.\tag{24}$$

Measures (23) and (24) are not symmetric. Therefore, the symmetric version is given as follows:

$$D\_{m3}(K\|L) = D\_{m3}^{\*}(K\|L) + D\_{m3}^{\*}(L\|K). \tag{25}$$


**Remark 1.** *It is noted that if* α → 1/2, *then (23) and (24) reduce to the Bhandari and Pal [5] and Shang and Jiang [7] divergence measures for FSs*.
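The limiting behavior stated in Remark 1 can be checked numerically. The sketch below (illustrative only; membership grades are assumed to lie strictly inside (0, 1)) compares (23) near α = 1/2 with the Bhandari and Pal measure:

```python
import math

def d_23(mu_K, mu_L, alpha):
    """The measure of Eq. (23); requires alpha > 0 and alpha != 1/2."""
    total = 0.0
    for a, b in zip(mu_K, mu_L):
        # The base (alpha + 1/2) tends to 1 as alpha -> 1/2, so the bracket
        # vanishes and the 1/(alpha - 1/2) prefactor yields a 0/0 limit.
        total += (a * (alpha + 0.5) ** math.log(a / b)
                  + (1 - a) * (alpha + 0.5) ** math.log((1 - a) / (1 - b)) - 1)
    return total / (alpha - 0.5)

def bhandari_pal(mu_K, mu_L):
    """Fuzzy Kullback-Leibler divergence of Bhandari and Pal [5]."""
    return sum(a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))
               for a, b in zip(mu_K, mu_L))
```

Evaluating `d_23` at α = 0.5001 reproduces `bhandari_pal` to within a few parts in 10⁴, and the measure vanishes for identical sets.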

Inspired by the information radius measure of [39], the divergence measure for FSs is given as

$$D\_{m4}(K\|L) = \begin{cases} \frac{1}{\alpha-1}\sum\_{i=1}^{r}\left[\left(\frac{\mu\_{K}^{\alpha}(x\_{i})+\mu\_{L}^{\alpha}(x\_{i})}{2}\right)\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right.\\ \qquad\left.+\left(\frac{(1-\mu\_{K}(x\_{i}))^{\alpha}+(1-\mu\_{L}(x\_{i}))^{\alpha}}{2}\right)\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}-1\right],\ \alpha\,(>0)\neq 1,\\ \sum\_{i=1}^{r}\left[\left(\frac{\mu\_{K}(x\_{i})\ln\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})\ln\mu\_{L}(x\_{i})}{2}\right)-\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)\ln\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)\right.\\ \qquad+\left(\frac{(1-\mu\_{K}(x\_{i}))\ln(1-\mu\_{K}(x\_{i}))+(1-\mu\_{L}(x\_{i}))\ln(1-\mu\_{L}(x\_{i}))}{2}\right)\\ \qquad\left.-\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)\ln\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)\right],\ \alpha = 1. \end{cases}\tag{26}$$
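The two branches of (26) fit together continuously at α = 1, which can be verified numerically. The following sketch (illustrative only; membership grades are assumed to lie strictly inside (0, 1)) implements both branches:

```python
import math

def d_m4(mu_K, mu_L, alpha):
    """Information-radius-type divergence for FSs, Eq. (26)."""
    total = 0.0
    for a, b in zip(mu_K, mu_L):
        m1, m2 = (a + b) / 2, (2 - a - b) / 2
        if alpha != 1:
            # Power-mean branch of Eq. (26)
            total += ((a ** alpha + b ** alpha) / 2 * m1 ** (1 - alpha)
                      + ((1 - a) ** alpha + (1 - b) ** alpha) / 2 * m2 ** (1 - alpha)
                      - 1) / (alpha - 1)
        else:
            # Jensen-Shannon-type limiting branch at alpha = 1
            total += ((a * math.log(a) + b * math.log(b)) / 2 - m1 * math.log(m1)
                      + ((1 - a) * math.log(1 - a) + (1 - b) * math.log(1 - b)) / 2
                      - m2 * math.log(m2))
    return total
```

Evaluating at α = 1 + 10⁻⁴ agrees with the α = 1 branch to within the expected O(10⁻⁴) truncation.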

**Theorem 2.** *Let K*, *L*, *T* ∈ *FSs*(*X*). *Then the proposed measures Dm*γ(*K*||*L*) (γ = 1, 2, 3, 4) *satisfy the following properties:*


Bajaj and Hooda [35] defined the following divergence for FSs based on Sharma and Mittal [40]:

$$\mathbb{C}\_{\alpha}^{\beta}(K\|L) = \frac{1}{\beta-1} \left[ \left( \sum\_{i=1}^{r} \left\{ \mu\_{K}^{\alpha}(x\_{i}) \mu\_{L}^{1-\alpha}(x\_{i}) + (1-\mu\_{K}(x\_{i}))^{\alpha} (1-\mu\_{L}(x\_{i}))^{1-\alpha} \right\} \right)^{\frac{\beta-1}{\alpha-1}} - 1 \right]. \tag{27}$$

In particular, when α = β, we obtain

$$C\_{\beta}^{\beta}(K||L) = \frac{1}{\beta - 1} \left[ \sum\_{i=1}^{r} \left\{ \mu\_K^{\beta}(\mathbf{x}\_i) \mu\_L^{1-\beta}(\mathbf{x}\_i) + (1 - \mu\_K(\mathbf{x}\_i))^{\beta} (1 - \mu\_L(\mathbf{x}\_i))^{1-\beta} \right\} - 1 \right]. \tag{28}$$

The measure *C*<sup>β</sup><sub>β</sub>(*K*||*L*) has also been studied extensively in various ways. For a brief review, the following limiting cases hold:

$$\begin{aligned} \lim\_{\alpha \to 1} \mathcal{C}\_{\alpha}^{\beta}(K \| L) &= \mathcal{C}\_{1}^{\beta}(K \| L); \lim\_{\beta \to 1} \mathcal{C}\_{\alpha}^{\beta}(K \| L) = \mathcal{C}\_{\alpha}^{1}(K \| L);\\ \lim\_{\alpha \to 1} \mathcal{C}\_{\alpha}^{1}(K \| L) &= \lim\_{\beta \to 1} \mathcal{C}\_{1}^{\beta}(K \| L) = \lim\_{\beta \to 1} \mathcal{C}\_{\beta}^{\beta}(K \| L) = \mathcal{C}(K \| L); \end{aligned}$$

where

$$C\_1^{\beta}(K\|L) = \frac{1}{\beta - 1} \left[ \exp\left\{ (\beta - 1) \sum\_{i=1}^{r} \left( \mu\_K(\mathbf{x}\_i) \ln \frac{\mu\_K(\mathbf{x}\_i)}{\mu\_L(\mathbf{x}\_i)} \right. \right. \\ \left. + (1 - \mu\_K(\mathbf{x}\_i)) \ln \frac{(1 - \mu\_K(\mathbf{x}\_i))}{(1 - \mu\_L(\mathbf{x}\_i))} \right) \right\} - 1\right],\tag{29}$$

is an exponential-type divergence measure for FSs.

Instead of studying these measures separately, we can study them jointly for FSs based on [36] for probability distributions. The unification is given as follows:

$$\mathbb{S}\_{\alpha}^{\beta}(K\|L) = \begin{cases} \mathbb{C}\_{\alpha}^{\beta}(K\|L), & \alpha \neq 1, \ \beta \neq 1, \\\mathbb{C}\_{1}^{\beta}(K\|L), & \alpha = 1, \ \beta \neq 1, \\\mathbb{C}\_{\alpha}^{1}(K\|L), & \alpha \neq 1, \ \beta = 1, \\\mathbb{C}(K\|L), & \alpha = 1, \ \beta = 1, \end{cases} \tag{30}$$

For all *K*, *L* ∈ *FSs*, α ∈ [0, ∞], and β ∈ [−∞, ∞]. Although the measure *C*<sup>β</sup><sub>β</sub>(*K*||*L*) does not appear explicitly in the unified Expression (30), it is a particular case of *C*<sup>β</sup><sub>α</sub>(*K*||*L*) and hence is already contained in it. The unified expression S<sup>β</sup><sub>α</sub>(*K*||*L*) is called the unified (α, β)−directed divergence.
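As a numerical illustration of the unified (α, β)−directed divergence, the sketch below (illustrative only; membership grades are assumed to lie strictly inside (0, 1), and a single-element universe is used so that the inner sum of (27) tends to 1 as α → 1, letting the exponential limit (29) be observed directly) implements (27) and (29):

```python
import math

def c_ab(mu_K, mu_L, alpha, beta):
    """Unified (alpha, beta)-directed divergence of Eq. (27); alpha, beta != 1."""
    s = sum(a ** alpha * b ** (1 - alpha)
            + (1 - a) ** alpha * (1 - b) ** (1 - alpha)
            for a, b in zip(mu_K, mu_L))
    return (s ** ((beta - 1) / (alpha - 1)) - 1) / (beta - 1)

def c_1b(mu_K, mu_L, beta):
    """Exponential-type divergence of Eq. (29); beta != 1."""
    kl = sum(a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))
             for a, b in zip(mu_K, mu_L))
    return (math.exp((beta - 1) * kl) - 1) / (beta - 1)
```

With α = β the exponent collapses to 1, recovering (28); pushing α toward 1 on a single-element universe reproduces `c_1b`.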

### 3.2.1. First Generalization of the Unified Expression

Next, the D and R-divergences are given by (5) and (8), respectively, depending on the divergence measure *C*(*K*||*L*). Based on the unified expression S<sup>β</sup><sub>α</sub>(*K*||*L*) and Equations (5) and (8), we extend the D and R-divergences. Here, an alternative way to generalize the D and R-divergences is discussed:

$${}^{1}V\_{\alpha}^{\beta}(K\|L) = \frac{1}{2}\left[\mathbb{S}\_{\alpha}^{\beta}\left(K\Big\|\frac{K+L}{2}\right) + \mathbb{S}\_{\alpha}^{\beta}\left(L\Big\|\frac{K+L}{2}\right)\right],\tag{31}$$

and

$${}^{1}\mathcal{W}\_{\alpha}^{\beta}(K\|L) = \mathbb{S}\_{\alpha}^{\beta}(K\|L) + \mathbb{S}\_{\alpha}^{\beta}(L\|K),\tag{32}$$

For all *K*, *L* ∈ *FSs*, α ∈ [0, ∞] and β ∈ [−∞, ∞].

The generalized Jensen difference divergence measures according to (31) are given by the following unified expression:

$${}^{1}V\_{\alpha}^{\beta}(K\|L) = \begin{cases} {}^{1}R\_{\alpha}^{\beta}(K\|L), & \alpha \neq 1, \ \beta \neq 1, \\\ {}^{1}R\_{1}^{\beta}(K\|L), & \alpha = 1, \ \beta \neq 1, \\\ {}^{1}R\_{\alpha}^{1}(K\|L), & \alpha \neq 1, \ \beta = 1, \\\ R(K\|L), & \alpha = 1, \ \beta = 1, \end{cases} \tag{33}$$

where

$$\begin{aligned} {}^{1}R\_{\alpha}^{\beta}(K\|L) &= \frac{1}{2(\beta-1)}\left[\left\{\sum\_{i=1}^{r}\left(\mu\_{K}^{\alpha}(x\_{i})\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}+(1-\mu\_{K}(x\_{i}))^{\alpha}\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right)\right\}^{\frac{\beta-1}{\alpha-1}}\right.\\ &\qquad\left.+\left\{\sum\_{i=1}^{r}\left(\mu\_{L}^{\alpha}(x\_{i})\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}+(1-\mu\_{L}(x\_{i}))^{\alpha}\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right)\right\}^{\frac{\beta-1}{\alpha-1}}-2\right],\ \alpha\neq 1,\ \alpha\neq\beta,\\ {}^{1}R\_{1}^{\beta}(K\|L) &= \frac{1}{2(\beta-1)}\left[\exp\left\{(\beta-1)\sum\_{i=1}^{r}\left(\mu\_{K}(x\_{i})\ln\frac{2\mu\_{K}(x\_{i})}{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}+(1-\mu\_{K}(x\_{i}))\ln\frac{2(1-\mu\_{K}(x\_{i}))}{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}\right)\right\}\right.\\ &\qquad\left.+\exp\left\{(\beta-1)\sum\_{i=1}^{r}\left(\mu\_{L}(x\_{i})\ln\frac{2\mu\_{L}(x\_{i})}{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}+(1-\mu\_{L}(x\_{i}))\ln\frac{2(1-\mu\_{L}(x\_{i}))}{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}\right)\right\}-2\right],\ \beta\neq 1,\ \text{and}\\ {}^{1}R\_{\alpha}^{1}(K\|L) &= \frac{1}{2(\alpha-1)}\ln\left[\left\{\sum\_{i=1}^{r}\left(\mu\_{K}^{\alpha}(x\_{i})\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}+(1-\mu\_{K}(x\_{i}))^{\alpha}\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right)\right\}\right.\\ &\qquad\left.\times\left\{\sum\_{i=1}^{r}\left(\mu\_{L}^{\alpha}(x\_{i})\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}+(1-\mu\_{L}(x\_{i}))^{\alpha}\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right)\right\}\right],\ \alpha\neq 1,\end{aligned}\tag{34}$$

for all *K*, *L* ∈ *FSs*, α ∈ [0, ∞], and β ∈ [−∞, ∞].

The generalized D-divergence measures according to (32) are given by the following expression:

$${}^{1}\mathcal{W}\_{\alpha}^{\beta}(K\|L) = \begin{cases} {}^{1}Dm\_{\alpha}^{\beta}(K\|L), & \alpha \neq 1, \ \beta \neq 1, \\ {}^{1}Dm\_{1}^{\beta}(K\|L), & \alpha = 1, \ \beta \neq 1, \\ {}^{1}Dm\_{\alpha}^{1}(K\|L), & \alpha \neq 1, \ \beta = 1, \\ {}^{1}D\_{m}(K\|L), & \alpha = 1, \ \beta = 1, \end{cases} \tag{35}$$


where

$$\begin{aligned} {}^{1}Dm\_{\alpha}^{\beta}(K\|L) &= \frac{1}{(\beta-1)}\left[\left\{\sum\_{i=1}^{r}\left(\mu\_{K}^{\alpha}(x\_{i})\mu\_{L}^{1-\alpha}(x\_{i})+(1-\mu\_{K}(x\_{i}))^{\alpha}(1-\mu\_{L}(x\_{i}))^{1-\alpha}\right)\right\}^{\frac{\beta-1}{\alpha-1}}\right.\\ &\qquad\left.+\left\{\sum\_{i=1}^{r}\left(\mu\_{L}^{\alpha}(x\_{i})\mu\_{K}^{1-\alpha}(x\_{i})+(1-\mu\_{L}(x\_{i}))^{\alpha}(1-\mu\_{K}(x\_{i}))^{1-\alpha}\right)\right\}^{\frac{\beta-1}{\alpha-1}}-2\right],\ \alpha\neq 1,\ \alpha\neq\beta,\\ {}^{1}Dm\_{1}^{\beta}(K\|L) &= \frac{1}{(\beta-1)}\left[\exp\left\{(\beta-1)\sum\_{i=1}^{r}\left(\mu\_{K}(x\_{i})\ln\frac{\mu\_{K}(x\_{i})}{\mu\_{L}(x\_{i})}+(1-\mu\_{K}(x\_{i}))\ln\frac{1-\mu\_{K}(x\_{i})}{1-\mu\_{L}(x\_{i})}\right)\right\}\right.\\ &\qquad\left.+\exp\left\{(\beta-1)\sum\_{i=1}^{r}\left(\mu\_{L}(x\_{i})\ln\frac{\mu\_{L}(x\_{i})}{\mu\_{K}(x\_{i})}+(1-\mu\_{L}(x\_{i}))\ln\frac{1-\mu\_{L}(x\_{i})}{1-\mu\_{K}(x\_{i})}\right)\right\}-2\right],\ \beta\neq 1,\ \text{and}\\ {}^{1}Dm\_{\alpha}^{1}(K\|L) &= \frac{1}{\alpha-1}\ln\left[\left\{\sum\_{i=1}^{r}\left(\mu\_{K}^{\alpha}(x\_{i})\mu\_{L}^{1-\alpha}(x\_{i})+(1-\mu\_{K}(x\_{i}))^{\alpha}(1-\mu\_{L}(x\_{i}))^{1-\alpha}\right)\right\}\right.\\ &\qquad\left.\times\left\{\sum\_{i=1}^{r}\left(\mu\_{L}^{\alpha}(x\_{i})\mu\_{K}^{1-\alpha}(x\_{i})+(1-\mu\_{L}(x\_{i}))^{\alpha}(1-\mu\_{K}(x\_{i}))^{1-\alpha}\right)\right\}\right],\ \alpha\neq 1,\end{aligned}\tag{36}$$

For all *K*, *L* ∈ *FSs*, α ∈ [0, ∞] and β ∈ [−∞, ∞]. In particular, when α = β, we obtain

$${}^{1}R\_{\beta}^{\beta}(K\|L) = R\_{\beta}^{\beta}(K\|L) = \frac{1}{(\beta-1)}\left[\sum\_{i=1}^{r}\left\{\left(\frac{\mu\_{K}^{\beta}(x\_{i})+\mu\_{L}^{\beta}(x\_{i})}{2}\right)\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\beta}+\left(\frac{(1-\mu\_{K}(x\_{i}))^{\beta}+(1-\mu\_{L}(x\_{i}))^{\beta}}{2}\right)\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\beta}\right\}-1\right],\ \beta\neq 1,\ \beta>0,\tag{37}$$

and

$${}^{1}Dm\_{\beta}^{\beta}(K\|L) = Dm\_{\beta}^{\beta}(K\|L) = \frac{2}{(\beta-1)}\left[\sum\_{i=1}^{r}\frac{1}{2}\left\{\mu\_{K}^{\beta}(x\_{i})\mu\_{L}^{1-\beta}(x\_{i})+(1-\mu\_{K}(x\_{i}))^{\beta}(1-\mu\_{L}(x\_{i}))^{1-\beta}+\mu\_{L}^{\beta}(x\_{i})\mu\_{K}^{1-\beta}(x\_{i})+(1-\mu\_{L}(x\_{i}))^{\beta}(1-\mu\_{K}(x\_{i}))^{1-\beta}\right\}-1\right],\ \beta\neq 1,\ \beta>0.\tag{38}$$

### 3.2.2. Second Generalization of the Unified Expression

The expressions emerging in (37) and (38) are employed to generate an alternative method for generalizing the R and D-divergence, respectively.

The generalized Jensen difference divergence measures based on expression (37) are given by

$${}^{2}V\_{\alpha}^{\beta}(K\|L) = \begin{cases} {}^{2}R\_{\alpha}^{\beta}(K\|L), & \alpha \neq 1, \ \beta \neq 1, \\ {}^{2}R\_{1}^{\beta}(K\|L), & \alpha = 1, \ \beta \neq 1, \\ {}^{2}R\_{\alpha}^{1}(K\|L), & \alpha \neq 1, \ \beta = 1, \\ R(K\|L), & \alpha = 1, \ \beta = 1, \end{cases} \tag{39}$$


where

$$\begin{aligned} {}^{2}R\_{\alpha}^{\beta}(K\|L) &= \frac{1}{(\beta-1)}\left[\left\{\sum\_{i=1}^{r}\left(\frac{\mu\_{K}^{\alpha}(x\_{i})+\mu\_{L}^{\alpha}(x\_{i})}{2}\right)\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}+\left(\frac{(1-\mu\_{K}(x\_{i}))^{\alpha}+(1-\mu\_{L}(x\_{i}))^{\alpha}}{2}\right)\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right\}^{\frac{\beta-1}{\alpha-1}}-1\right],\ \alpha\neq 1,\ \alpha\neq\beta,\\ {}^{2}R\_{1}^{\beta}(K\|L) &= \frac{1}{(\beta-1)}\left[\exp\left\{(\beta-1)R(K\|L)\right\}-1\right],\ \beta\neq 1,\ \text{and}\\ {}^{2}R\_{\alpha}^{1}(K\|L) &= \frac{1}{\alpha-1}\ln\left[\sum\_{i=1}^{r}\left\{\left(\frac{\mu\_{K}^{\alpha}(x\_{i})+\mu\_{L}^{\alpha}(x\_{i})}{2}\right)\left(\frac{\mu\_{K}(x\_{i})+\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}+\left(\frac{(1-\mu\_{K}(x\_{i}))^{\alpha}+(1-\mu\_{L}(x\_{i}))^{\alpha}}{2}\right)\left(\frac{2-\mu\_{K}(x\_{i})-\mu\_{L}(x\_{i})}{2}\right)^{1-\alpha}\right\}\right],\ \alpha\neq 1,\end{aligned}\tag{40}$$

for all *K*, *L* ∈ *FSs*, α ∈ [0, ∞], and β ∈ [−∞, ∞].

The second generalization of the D-divergence is based on the expression emerging in (38), as follows:

$${}^{2}\mathcal{W}\_{\alpha}^{\beta}(K\|L) = \begin{cases} {}^{2}Dm\_{\alpha}^{\beta}(K\|L), & \alpha \neq 1, \ \beta \neq 1, \\ {}^{2}Dm\_{1}^{\beta}(K\|L), & \alpha = 1, \ \beta \neq 1, \\ {}^{2}Dm\_{\alpha}^{1}(K\|L), & \alpha \neq 1, \ \beta = 1, \\ {}^{2}D\_{m}(K\|L), & \alpha = 1, \ \beta = 1, \end{cases} \tag{41}$$

where

$$\begin{aligned} {}^{2}Dm\_{\alpha}^{\beta}(K\|L) &= \frac{2}{(\beta-1)}\left[\left\{\sum\_{i=1}^{r}\frac{1}{2}\left(\mu\_{K}^{\alpha}(x\_{i})\mu\_{L}^{1-\alpha}(x\_{i})+(1-\mu\_{K}(x\_{i}))^{\alpha}(1-\mu\_{L}(x\_{i}))^{1-\alpha}+\mu\_{L}^{\alpha}(x\_{i})\mu\_{K}^{1-\alpha}(x\_{i})+(1-\mu\_{L}(x\_{i}))^{\alpha}(1-\mu\_{K}(x\_{i}))^{1-\alpha}\right)\right\}^{\frac{\beta-1}{\alpha-1}}-1\right],\ \alpha\neq 1,\ \alpha\neq\beta,\\ {}^{2}Dm\_{1}^{\beta}(K\|L) &= \frac{2}{(\beta-1)}\left[\exp\left\{\frac{\beta-1}{2}D\_{m}(K\|L)\right\}-1\right],\ \beta\neq 1,\ \text{and}\\ {}^{2}Dm\_{\alpha}^{1}(K\|L) &= \frac{2}{\alpha-1}\ln\left[\sum\_{i=1}^{r}\frac{1}{2}\left(\mu\_{K}^{\alpha}(x\_{i})\mu\_{L}^{1-\alpha}(x\_{i})+(1-\mu\_{K}(x\_{i}))^{\alpha}(1-\mu\_{L}(x\_{i}))^{1-\alpha}+\mu\_{L}^{\alpha}(x\_{i})\mu\_{K}^{1-\alpha}(x\_{i})+(1-\mu\_{L}(x\_{i}))^{\alpha}(1-\mu\_{K}(x\_{i}))^{1-\alpha}\right)\right],\ \alpha\neq 1,\end{aligned}\tag{42}$$

for all *K*, *L* ∈ *FSs*, α ∈ [0, ∞] and β ∈ [−∞, ∞].

In particular, when α = β, we obtain

$${}^{1}\boldsymbol{V}\_{\alpha}^{\beta}(\mathsf{K}\|\boldsymbol{L}) = {}^{2}\boldsymbol{V}\_{\alpha}^{\beta}(\mathsf{K}\|\boldsymbol{L}) \text{ and } {}^{1}\boldsymbol{W}\_{\alpha}^{\beta}(\mathsf{K}\|\boldsymbol{L}) = {}^{2}\boldsymbol{W}\_{\alpha}^{\beta}(\mathsf{K}\|\boldsymbol{L}).$$

The measures <sup>γ</sup>*V*<sup>β</sup><sub>α</sub>(*K*||*L*) (γ = 1, 2) are called the unified (α, β)−Jensen difference divergence measures, and the measures <sup>γ</sup>*W*<sup>β</sup><sub>α</sub>(*K*||*L*) (γ = 1, 2) are called the unified (α, β)−D divergence (Jeffreys invariant) measures.

### **4. Fuzzy MCDM Method for E-Waste Recycling Job Selection**

The fuzzy MCDM method developed here builds on the concept of the degree of optimality of an option when multiple criteria characterize the desirable option. This concept has been applied extensively by the MCDM approach known as the technique for order preference by similarity to ideal solution. According to this notion, the most desirable option should not only have the shortest distance from the ideal option but also the longest distance from the anti-ideal option.

Based on this concept, the overall preference value of an option is computed from its divergences to the ideal solution and the anti-ideal solution. These divergences are interrelated with the criteria weights, which should therefore be incorporated into the divergence measure. To handle this issue, the developed fuzzy MCDM method uses the optimal criteria weights and the optimal dimension weights, as shown in Figure 1 and discussed in Section 4, to weight the divergence between an option and the ideal/anti-ideal option. The proposed method was implemented to evaluate the recycling job selection problem of sustainable planning of e-waste as follows:

**Definition 3.** *A triangular fuzzy number (TFN)* ζ *is given by a triplet* (*f*, *g*, *h*). *Its membership function* μζ(*x*) *is defined as follows:*

$$\mu\_{\zeta}(x) = \begin{cases} 0, & x < f, \\ \frac{x-f}{g-f}, & f \le x \le g, \\ \frac{x-h}{g-h}, & g \le x \le h, \\ 0, & x > h. \end{cases} \tag{43}$$

A linguistic variable is one whose values are expressed in the form of linguistic ratings. The philosophy of linguistic variables is highly constructive in handling circumstances that are too complex or too imprecise to be reasonably expressed in traditional quantitative terms. Such linguistic values are characterized by fuzzy numbers (FNs). Table 1 demonstrates the linguistic values for weights and ratings.

**Table 1.** Linguistic values for evaluating sustainability assessment of e-waste products.


Now, to develop a fuzzy MCDM approach, the canonical representation of operations on TFNs is implemented, which is associated with the graded mean integration representation model [41].

**Definition 4** (Chou [41])**.** *For a TFN* ζ*ij* = (*fij*, *gij*, *hij*), *the graded mean integration representation of* ζ*ij is defined by*

$$P(\zeta\_{ij}) = \frac{f\_{ij} + 4g\_{ij} + h\_{ij}}{6}.\tag{44}$$
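A small Python sketch (an illustration, not from the paper) of the TFN membership function (43) and the graded mean integration representation (44):

```python
def tfn_membership(x, f, g, h):
    """Membership degree of x in the TFN (f, g, h), Eq. (43)."""
    if x <= f or x >= h:
        return 0.0
    if x <= g:
        return (x - f) / (g - f)    # rising edge
    return (h - x) / (h - g)        # falling edge

def graded_mean(f, g, h):
    """Graded mean integration representation P of a TFN, Eq. (44)."""
    return (f + 4 * g + h) / 6
```

For the TFN (1, 2, 3), the membership peaks at x = 2 and the graded mean is (1 + 8 + 3)/6 = 2.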

Next, linear normalization is applied to transform the different criteria scales into a common scale, since its calculations are simpler than those of vector normalization. As a result, the normalized triangular fuzzy matrix, represented by [ζ*ij*]*m*×*n*, is constructed, where

$$\zeta\_{ij} = \left( \frac{f\_{ij}}{h\_j^{\diamond}}, \frac{g\_{ij}}{h\_j^{\diamond}}, \frac{h\_{ij}}{h\_j^{\diamond}} \right); \ j \in \upsilon\_{b}, \tag{45}$$

and

$$\zeta\_{ij} = \left( \frac{f\_j^\*}{h\_{ij}}, \frac{f\_j^\*}{g\_{ij}}, \frac{f\_j^\*}{f\_{ij}} \right); \ j \in \upsilon\_{n};\tag{46}$$

such that

$$h\_j^{\diamond} = \max\_{i} h\_{ij} \text{ if } j \in \upsilon\_{b} \text{ and } f\_j^{\*} = \min\_{i} f\_{ij} \text{ if } j \in \upsilon\_{n}, \tag{47}$$

where *vb* and *vn* stand for the sets of beneficial and non-beneficial criteria, respectively.
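The normalization (45)-(47) can be sketched as follows (an illustration, not the paper's code; for non-beneficial criteria, the reciprocal components are ordered so that the normalized triple remains increasing, as in (46)):

```python
def normalize_tfn_matrix(matrix, benefit):
    """Linear normalization of a TFN decision matrix, Eqs. (45)-(47).

    matrix[i][j] is the TFN (f, g, h) of alternative i under criterion j;
    benefit[j] is True for beneficial criteria and False otherwise.
    """
    n_cols = len(benefit)
    out = [[None] * n_cols for _ in matrix]
    for j in range(n_cols):
        if benefit[j]:
            h_star = max(row[j][2] for row in matrix)  # Eq. (47)
            for i, row in enumerate(matrix):
                f, g, h = row[j]
                out[i][j] = (f / h_star, g / h_star, h / h_star)  # Eq. (45)
        else:
            f_star = min(row[j][0] for row in matrix)  # Eq. (47)
            for i, row in enumerate(matrix):
                f, g, h = row[j]
                out[i][j] = (f_star / h, f_star / g, f_star / f)  # Eq. (46)
    return out
```

Every normalized entry then lies in (0, 1], with beneficial columns scaled by the column-wise maximum *h* and non-beneficial columns by the column-wise minimum *f*.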

Generally, an MCDM problem can be schematically presented as

$$F = \begin{array}{c} \begin{matrix} Z\_1 & Z\_2 & \cdots & Z\_s \end{matrix}\\ \begin{matrix} Y\_1\\ Y\_2\\ \vdots\\ Y\_r \end{matrix}\begin{bmatrix} \zeta\_{11} & \zeta\_{12} & \cdots & \zeta\_{1s}\\ \zeta\_{21} & \zeta\_{22} & \cdots & \zeta\_{2s}\\ \vdots & \vdots & \ddots & \vdots\\ \zeta\_{r1} & \zeta\_{r2} & \cdots & \zeta\_{rs} \end{bmatrix} \end{array} \tag{48}$$

where *Y* = {*Y*1, *Y*2, ... , *Yr*} and *Z* = {*Z*1, *Z*2, ... , *Zs*} are the sets of alternatives and criteria, respectively, and ζ*ij* = (*fij*, *gij*, *hij*); *i* = 1(1)*r*, *j* = 1(1)*s*, represent the fuzzy numbers.

Let the MCDM problem consist of *r* alternatives *Yi* (*i* = 1(1)*r*) evaluated by means of *s* criteria *Zj* (*j* = 1(1)*s*). The ratings ζ*ij*, constructed for alternative *Yi* (*i* = 1(1)*r*) with respect to criterion *Zj* (*j* = 1(1)*s*), are fuzzy values (FVs). Let ω*j* be the weight of criterion *Zj*, with the condition that ω*j* ≥ 0 and Σ<sup>*s*</sup><sub>*j*=1</sub> ω*j* = 1. Here, ω = (ω1, ω2, ... , ω*s*)<sup>*T*</sup> symbolizes the set of known information concerning the criterion weights, which is generated by decision experts in the form of linear constraints. It is worth mentioning that the proposed method is appropriate for circumstances where the number of decision experts is small, such that they assess the criteria based on their experience and knowledge; the alternatives can be of any type, and their assessments are then constructed in the form of FVs.

The developed approach is implemented to solve MCDM problems with partially or completely unknown criteria weight information. This method consists of the following steps (see Figure 1):

Step 1: Construct the fuzzy decision matrix F = [ζ*ij*]*r*×*s*.

The decision experts furnish all the feasible assessments regarding the alternative *Yi* concerning criterion *Zj*, denoted by fuzzy numbers (FNs) ζ*ij* = (*fij*, *gij*, *hij*); *i* = 1(1)*r*, *j* = 1(1)*s*, which are obtained based on Table 1 and Equations (43) and (44) and demonstrated in Equation (48).

Step 2: Compute ideal solution (IS) and anti-ideal solution (A-IS).

The optimal values (or IS) for the diverse criteria are defined as

$$\varepsilon^{+} = \begin{cases} \max\_{i=1(1)r} \zeta\_{ij}, & \text{for benefit criterion } Z\_j, \\ \min\_{i=1(1)r} \zeta\_{ij}, & \text{for cost criterion } Z\_j, \end{cases} \text{ for } j = 1(1)s. \tag{49}$$

Similarly, the worst values (or A-IS) for the diverse criteria are given by

$$\varepsilon^{-} = \begin{cases} \min\_{i=1(1)r} \zeta\_{ij}, & \text{for benefit criterion } Z\_j, \\ \max\_{i=1(1)r} \zeta\_{ij}, & \text{for cost criterion } Z\_j, \end{cases} \text{ for } j = 1(1)s. \tag{50}$$

Step 3: Compute the criteria weights.

In case the information about the criterion weights ω*j* is only partially known, the criterion weights can be evaluated in advance. Based on the divergence measure analysis, we developed a nonlinear programming model for selecting the criterion weight vector ω*j* that maximizes all of the deviation values for the alternatives.

**Figure 1.** Procedure of the proposed method for multi-criteria decision-making (MCDM) problems.

According to (14), we evaluated *Dm*<sup>+</sup><sub>*ij*</sub>(ζ*ij*, ε<sup>+</sup>) and *Dm*<sup>−</sup><sub>*ij*</sub>(ζ*ij*, ε<sup>−</sup>) as follows:

$$Dm\_{ij}^{+}\left(\zeta\_{ij},\varepsilon^{+}\right) = \sum\_{i=1}^{n}\left[\frac{\left(\mu\_{\zeta\_{ij}}(x\_{i}) + \mu\_{\varepsilon^{+}}(x\_{i})\right)\left(\mu\_{\zeta\_{ij}}(x\_{i}) - \mu\_{\varepsilon^{+}}(x\_{i})\right)^{2}}{\mu\_{\zeta\_{ij}}(x\_{i})\,\mu\_{\varepsilon^{+}}(x\_{i})}\ln\left(\frac{\mu\_{\zeta\_{ij}}(x\_{i}) + \mu\_{\varepsilon^{+}}(x\_{i})}{2\sqrt{\mu\_{\zeta\_{ij}}(x\_{i})\,\mu\_{\varepsilon^{+}}(x\_{i})}}\right) + \frac{\left(2 - \mu\_{\zeta\_{ij}}(x\_{i}) - \mu\_{\varepsilon^{+}}(x\_{i})\right)\left(\mu\_{\varepsilon^{+}}(x\_{i}) - \mu\_{\zeta\_{ij}}(x\_{i})\right)^{2}}{\left(1 - \mu\_{\zeta\_{ij}}(x\_{i})\right)\left(1 - \mu\_{\varepsilon^{+}}(x\_{i})\right)}\ln\left(\frac{2 - \mu\_{\zeta\_{ij}}(x\_{i}) - \mu\_{\varepsilon^{+}}(x\_{i})}{2\sqrt{\left(1 - \mu\_{\zeta\_{ij}}(x\_{i})\right)\left(1 - \mu\_{\varepsilon^{+}}(x\_{i})\right)}}\right)\right],\tag{51}$$

$$Dm\_{ij}^{-}\left(\zeta\_{ij},\varepsilon^{-}\right) = \sum\_{i=1}^{n}\left[\frac{\left(\mu\_{\zeta\_{ij}}(x\_{i}) + \mu\_{\varepsilon^{-}}(x\_{i})\right)\left(\mu\_{\zeta\_{ij}}(x\_{i}) - \mu\_{\varepsilon^{-}}(x\_{i})\right)^{2}}{\mu\_{\zeta\_{ij}}(x\_{i})\,\mu\_{\varepsilon^{-}}(x\_{i})}\ln\left(\frac{\mu\_{\zeta\_{ij}}(x\_{i}) + \mu\_{\varepsilon^{-}}(x\_{i})}{2\sqrt{\mu\_{\zeta\_{ij}}(x\_{i})\,\mu\_{\varepsilon^{-}}(x\_{i})}}\right) + \frac{\left(2 - \mu\_{\zeta\_{ij}}(x\_{i}) - \mu\_{\varepsilon^{-}}(x\_{i})\right)\left(\mu\_{\varepsilon^{-}}(x\_{i}) - \mu\_{\zeta\_{ij}}(x\_{i})\right)^{2}}{\left(1 - \mu\_{\zeta\_{ij}}(x\_{i})\right)\left(1 - \mu\_{\varepsilon^{-}}(x\_{i})\right)}\ln\left(\frac{2 - \mu\_{\zeta\_{ij}}(x\_{i}) - \mu\_{\varepsilon^{-}}(x\_{i})}{2\sqrt{\left(1 - \mu\_{\zeta\_{ij}}(x\_{i})\right)\left(1 - \mu\_{\varepsilon^{-}}(x\_{i})\right)}}\right)\right].\tag{52}$$

Next, the overall performance of the alternative *Yi* is computed by the given formula

$$J(Y\_{i}) = \sum\_{j=1}^{s} \omega\_{j} D\_{mij}, \quad \text{where} \quad D\_{mij} = \frac{Dm\_{ij}^{-}}{Dm\_{ij}^{-} + Dm\_{ij}^{+}}.\tag{53}$$

Apparently, a larger value of *J*(*Yi*) indicates a superior option. Thus, all the alternatives are measured as a whole to construct a combined weight vector, and the LP model is demonstrated as below:

$$\begin{aligned} \max f &= \sum\_{i=1}^{r} J(Y\_{i}) = \sum\_{i=1}^{r}\sum\_{j=1}^{s} \omega\_{j} D\_{mij}\\ \text{s.t.}\quad & \omega \in \mathcal{W},\ \omega\_{j} \ge 0,\ \sum\_{j=1}^{s} \omega\_{j} = 1,\ j = 1(1)s. \end{aligned}\tag{54}$$

Step 4: Compute the closeness degree of the alternative(s).

Based on (53), the closeness degree *J*(*Yi*) of each alternative *Yi* (*i* = 1(1)*r*) with respect to the ideal solution is evaluated.
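Model (54) is linear in ω, so when the known information set W consists of simple interval bounds on each weight, it can be solved greedily: start every weight at its lower bound and distribute the remaining mass to the criteria with the largest total divergence coefficients first. The following is a sketch under that interval-bound assumption, not an implementation of the general nonlinear case:

```python
def solve_weight_model(coeffs, lower, upper):
    """Maximize sum_j coeffs[j] * w[j] subject to sum(w) = 1 and
    lower[j] <= w[j] <= upper[j], as in model (54) with interval bounds.
    """
    w = list(lower)
    remaining = 1.0 - sum(lower)
    # Give the spare mass to the largest objective coefficients first.
    for j in sorted(range(len(coeffs)), key=lambda j: -coeffs[j]):
        delta = min(upper[j] - w[j], remaining)
        w[j] += delta
        remaining -= delta
    return w
```

For instance, with (hypothetical) coefficients (0.3, 0.5, 0.2) and each weight constrained to [0.1, 0.6], the second criterion is saturated at 0.6 before any mass reaches the others.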

Step 5: Rank the alternatives.

Choose the largest value, signified by *J*(*Yk*), among the values *J*(*Yi*), *i* = 1(1)*r*. Then, *Yk* is the best option.
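Putting Steps 2-5 together, the following simplified sketch (illustrative only: it scores crisp membership grades with the single-grade triangular divergence of (16) instead of the full fuzzy measures (51)-(52), and it treats every criterion as beneficial) ranks alternatives by the closeness degree of (53):

```python
def tri_div(a, b):
    """Single-grade triangular divergence term from Eq. (16)."""
    return (a - b) ** 2 / (a + b) + (a - b) ** 2 / (2 - a - b)

def closeness_scores(ratings, weights):
    """Closeness degrees J(Y_i) of Eq. (53) for benefit-type grades in (0, 1)."""
    s = len(weights)
    ideal = [max(row[j] for row in ratings) for j in range(s)]  # Eq. (49)
    anti = [min(row[j] for row in ratings) for j in range(s)]   # Eq. (50)
    scores = []
    for row in ratings:
        score = 0.0
        for j, w in enumerate(weights):
            d_plus = tri_div(row[j], ideal[j])   # divergence from the IS
            d_minus = tri_div(row[j], anti[j])   # divergence from the A-IS
            denom = d_plus + d_minus
            score += w * (d_minus / denom if denom else 1.0)
        scores.append(score)
    return scores
```

The alternative with the largest score is then chosen as the best option (Step 5): an alternative that attains the column-wise maximum under every criterion scores 1, and one that attains every minimum scores 0.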

### **5. Investigating the Sustainable Planning of an E-Waste Recycling Job Selection**

In the era of global climate change and global warming, the three entities of society, economy, and environment are inseparably connected with each other [42,43]. This interconnection has made human well-being closely dependent on the health condition of the environment [44,45]. In consequence, we can see the aftermath of these conditions in the form of complicated challenges that have already emerged as sustainability challenges [46]. Clearly, natural resources are being exhausted while, simultaneously, the demands of society are rising, which has placed a damaging pressure upon the environment, economy, and society [33]. One of the typical instances of a sustainability challenge is electronic waste (e-waste) [47]; this is a problem of a highly complex nature and socio-ecological scale that does not seem to be fully solvable. E-waste emerges from discarding electronic products such as cellular phones, computers, and other electronic appliances that we use daily. As can easily be understood, the last few decades have witnessed a vast evolution of the electrical and electronics industry [48]. There has been an extraordinary rise in the consumption of electronic equipment, especially computers and mobile phones. This tremendous increase in consumption has led to the accumulation of waste electrical and electronic equipment (WEEE) [49–51], which is normally discussed under the title of e-waste [52]. Across the globe, the use of electronic equipment has become an indispensable part of daily life. Currently, there is considerable pressure from academic communities, interest groups, environmental watchdogs, etc., on electronics producers and local industries to put effective management mechanisms into action so as to respond efficiently to perceived and potential e-waste problems.

To deal with the e-waste recycling planning issues noted above, we proposed a novel sustainable planning method for meeting the best sustainability interests of an e-recycling company. The method utilizes a fuzzy MCDM approach and a series of optimal weighting approaches to find and choose the optimal recycling activities for the e-waste recycling jobs of an e-recycling company. It makes an innovative contribution to the procedural development of weighting the three dimensions of corporate sustainability for planning decisions.

In this section, a case study of recycling partner selection in the sustainable planning of e-waste is presented to show the viability of the proposed approach. The proposed method was utilized to rank the given recycling associations in India. Let *Y*1, *Y*2, *Y*3, and *Y*<sup>4</sup> be four selected associations that conduct recycling procedures for end-of-life vehicles, scrapped electronics, scrapped metals, and scrapped paper, as well as dismantling operations. These four associations were evaluated against the given inter-independent criterion set {*Z*1, *Z*2, *Z*3, *Z*4, *Z*5}, of which the first, second, and fourth were benefit criteria, while the third and fifth were cost criteria. To choose an appropriate sustainable recycling partner, the proposed approach was applied as follows: after preliminary screening, four potential alternatives of this company were considered, denoted as *Yi* (*i* = 1, 2, ..., 4), with the most favorable performance assessments of the e-waste options on the qualitative sustainability criteria (given in Table 2). An expert group consisting of four decision-makers (D1, D2, D3, and D4) was established to rate the performance of each e-waste option. Despite their different levels of technical knowledge and expertise, the decision-makers' weights were assumed to be equal (0.25 each). The next step was to estimate the best e-waste recycling partner through the proposed method; for this estimation, the decision experts (DEs) assumed that each criterion is beneficial. Table 3 depicts the estimation values in terms of linguistic values constructed by the e-waste recycling partner decision experts.

Here, by evaluating the mean values of the fuzzy scores of the estimation outcomes allocated by the DEs, we obtained the estimation matrix. Afterward, Equations (45)–(47) were implemented to construct a triangular fuzzy normalized estimation matrix (see Table 4). Later on, the ratings were transformed into crisp values on the basis of Definition 4. After that, the normalized F-DM was created according to Equation (44) and is presented in Table 5.





Here, a group of experts makes the decisions on choosing the recycling partner. The decision experts furnish all the feasible evaluations of alternative *Yi* with respect to criterion *Zj* and construct the aggregated decision matrix given in Table 5, obtained from Table 1 together with Equations (43) and (47). According to their knowledge and experience regarding the criterion set, partial information on the weights is given by

$$W = \left\{ \left(\omega_j\right)^{T} \;\middle|\; \begin{array}{l} 0.2 \le \omega_{1} \le 0.35, \ 0.1 \le \omega_{2} \le 0.27, \ 0.15 \le \omega_{3} \le 0.25, \ \omega_{1} \le 0.2\,\omega_{4}, \\ 0.08 \le \omega_{4} \le 0.15, \ 0.2 \le \omega_{5} \le 0.4, \ \omega_{2} - \omega_{5} \le \omega_{3} \end{array} \right\}, \text{ such that } \sum_{j=1}^{s} \omega_{j} = 1.$$


**Table 3.** Evaluation of e-waste recycling job alternatives in linguistic values.

**Table 4.** Triangular fuzzy evaluation matrix for e-waste recycling job selection problem.



**Table 5.** Aggregated fuzzy decision matrix for e-waste recycling job selection problem.

Step 1: The fuzzy IS and A-IS are calculated using (49) and (50) as follows:

$$
\varepsilon^{+} = (0.757, 0.76, 0.231, 0.317, 0.244), \tag{55}
$$

$$
\varepsilon^{-} = (0.24, 0.25, 0.767, 0.229, 0.77). \tag{56}
$$

Step 2: Corresponding to (51) and (52), the divergence measures of ζ*ij* from ε<sup>+</sup> and of ζ*ij* from ε<sup>−</sup> are evaluated as follows:

$$\begin{array}{l}
D^{+}_{m_{11}} = 0.3021, \; D^{+}_{m_{12}} = 0.0000, \; D^{+}_{m_{13}} = 0.0000, \; D^{+}_{m_{14}} = 0.0236, \; D^{+}_{m_{15}} = 0.0000, \\
D^{+}_{m_{21}} = 0.2211, \; D^{+}_{m_{22}} = 0.3913, \; D^{+}_{m_{23}} = 0.5253, \; D^{+}_{m_{24}} = 0.0000, \; D^{+}_{m_{25}} = 0.4674, \\
D^{+}_{m_{31}} = 0.0000, \; D^{+}_{m_{32}} = 0.0000, \; D^{+}_{m_{33}} = 0.000566, \; D^{+}_{m_{34}} = 0.1138, \; D^{+}_{m_{35}} = 0.0000315, \\
D^{+}_{m_{41}} = 0.4910, \; D^{+}_{m_{42}} = 0.058, \; D^{+}_{m_{43}} = 0.0089, \; D^{+}_{m_{44}} = 0.0015, \; D^{+}_{m_{45}} = 0.0053.
\end{array}$$

And

$$\begin{array}{l}
D^{-}_{m_{11}} = 0.0000525, \; D^{-}_{m_{12}} = 0.3913, \; D^{-}_{m_{13}} = 0.5253, \; D^{-}_{m_{14}} = 0.008647, \; D^{-}_{m_{15}} = 0.4674, \\
D^{-}_{m_{21}} = 0.0004378, \; D^{-}_{m_{22}} = 0.0000, \; D^{-}_{m_{23}} = 0.0000, \; D^{-}_{m_{24}} = 0.1138, \; D^{-}_{m_{25}} = 0.0000, \\
D^{-}_{m_{31}} = 0.4583, \; D^{-}_{m_{32}} = 0.3913, \; D^{-}_{m_{33}} = 0.2478, \; D^{-}_{m_{34}} = 0.0000, \; D^{-}_{m_{35}} = 0.3250, \\
D^{-}_{m_{41}} = 0.0000, \; D^{-}_{m_{42}} = 0.0101, \; D^{-}_{m_{43}} = 0.1009, \; D^{-}_{m_{44}} = 0.0172, \; D^{-}_{m_{45}} = 0.1081.
\end{array}$$

Next, the overall performances of the alternatives are calculated using (53) as follows:

$$\begin{array}{l}
D_{m_{11}} = 0.0001737, \; D_{m_{12}} = 1.0000, \; D_{m_{13}} = 1.0000, \; D_{m_{14}} = 0.2681, \; D_{m_{15}} = 1.0000, \\
D_{m_{21}} = 0.0020, \; D_{m_{22}} = 0.0000, \; D_{m_{23}} = 0.0000, \; D_{m_{24}} = 1.0000, \; D_{m_{25}} = 0.0000, \\
D_{m_{31}} = 1.0000, \; D_{m_{32}} = 1.0000, \; D_{m_{33}} = 0.9977, \; D_{m_{34}} = 0.0000, \; D_{m_{35}} = 0.9999, \\
D_{m_{41}} = 0.0000, \; D_{m_{42}} = 0.1483, \; D_{m_{43}} = 0.9189, \; D_{m_{44}} = 0.9198, \; D_{m_{45}} = 0.9533.
\end{array}$$

Step 3: To compute the weight vector, construct the model

$$\begin{array}{l}
\max J = 1.0022\,\omega_1 + 2.1483\,\omega_2 + 2.9166\,\omega_3 + 2.1879\,\omega_4 + 2.9532\,\omega_5 \\
\text{s.t.} \begin{cases}
0.25 \le \omega_1 \le 0.4, \; 0.16 \le \omega_2 \le 0.27, \; 0.15 \le \omega_3 \le 0.25, \\
0.1 \le \omega_4 \le 0.18, \; 0.2 \le \omega_5 \le 0.35, \\
\omega_1 \ge 0.2\,\omega_4, \; \omega_5 - \omega_2 \le \omega_3, \\
\omega = \left(\omega_1, \omega_2, \ldots, \omega_s\right)^T, \; \omega_j \ge 0, \; \sum_{j=1}^{s} \omega_j = 1.
\end{cases}
\end{array} \tag{57}$$

Model (57) is solved using MATHEMATICA, and the criteria weight vector is obtained as

$$\left(\omega_j\right)^{T} = \left(0.25, 0.16, 0.165, 0.1, 0.325\right)^{T}.$$
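As a quick sanity check (a plain-Python sketch, not part of the original computation), the reported weight vector can be verified against every constraint of model (57), and the corresponding objective value J can be recomputed:

```python
# Verify that the reported weight vector satisfies every constraint of
# model (57) and recompute the objective value J.

w = [0.25, 0.16, 0.165, 0.1, 0.325]           # (w1, ..., w5) as reported
c = [1.0022, 2.1483, 2.9166, 2.1879, 2.9532]  # objective coefficients of J

lower = [0.25, 0.16, 0.15, 0.1, 0.2]          # lower bounds in (57)
upper = [0.40, 0.27, 0.25, 0.18, 0.35]        # upper bounds in (57)
assert all(lo <= wj <= up for lo, wj, up in zip(lower, w, upper))
assert w[0] >= 0.2 * w[3]                     # w1 >= 0.2 * w4
assert w[4] - w[1] <= w[2] + 1e-12            # w5 - w2 <= w3
assert abs(sum(w) - 1.0) < 1e-9               # weights sum to 1

J = sum(cj * wj for cj, wj in zip(c, w))
print(round(J, 4))                            # 2.2541
```

Note that the constraint ω5 − ω2 ≤ ω3 is binding at this solution (0.325 − 0.16 = 0.165 = ω3), as is typical of a linear program solved at a vertex of the feasible region.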

Step 4: The calculated closeness degrees of the alternatives are given as

$$f(Y\_1) = 0.6769, \ f(Y\_2) = 0.1005, \ f(Y\_3) = 0.8996, \ f(Y\_4) = 0.5771.$$

Step 5: Based on the calculated closeness degrees of the alternatives, the ranking of the associations is *Y*<sup>3</sup> ≻ *Y*<sup>1</sup> ≻ *Y*<sup>4</sup> ≻ *Y*2.

Hence, a suitable e-waste recycling job is *Y*3.
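Steps 2–5 can be reproduced numerically. The sketch below (an illustration, not the authors' code) takes the divergence values listed above, forms the relative measure D−/(D+ + D−), which reproduces the overall performance values reported for Equation (53), and aggregates them with the weight vector obtained from model (57):

```python
# Reproduce Steps 2-5 of the e-waste example, assuming the overall
# performance of Eq. (53) is the relative divergence D- / (D+ + D-),
# which matches the values listed above.

# Divergences of each rating from the fuzzy ideal (D+) and anti-ideal (D-)
# solutions; rows are alternatives Y1..Y4, columns are criteria Z1..Z5.
D_plus = [
    [0.3021, 0.0000, 0.0000, 0.0236, 0.0000],
    [0.2211, 0.3913, 0.5253, 0.0000, 0.4674],
    [0.0000, 0.0000, 0.000566, 0.1138, 0.0000315],
    [0.4910, 0.058, 0.0089, 0.0015, 0.0053],
]
D_minus = [
    [0.0000525, 0.3913, 0.5253, 0.008647, 0.4674],
    [0.0004378, 0.0000, 0.0000, 0.1138, 0.0000],
    [0.4583, 0.3913, 0.2478, 0.0000, 0.3250],
    [0.0000, 0.0101, 0.1009, 0.0172, 0.1081],
]

# Overall performance of each rating (Eq. 53): relative divergence.
D = [[dm / (dp + dm) if dp + dm > 0 else 0.0
      for dp, dm in zip(row_p, row_m)]
     for row_p, row_m in zip(D_plus, D_minus)]

# Criteria weights obtained from model (57).
w = [0.25, 0.16, 0.165, 0.1, 0.325]

# Step 4: closeness degree of each alternative, f(Y_i) = sum_j w_j * D_ij.
f = [sum(wj * dij for wj, dij in zip(w, row)) for row in D]

# Step 5: rank the alternatives by decreasing closeness degree.
ranking = sorted(range(4), key=lambda i: -f[i])
print([round(v, 4) for v in f])        # [0.6769, 0.1005, 0.8996, 0.5771]
print([f"Y{i+1}" for i in ranking])    # ['Y3', 'Y1', 'Y4', 'Y2']
```

Running the sketch reproduces both the closeness degrees of Step 4 and the ranking *Y*3 ≻ *Y*1 ≻ *Y*4 ≻ *Y*2 of Step 5.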

*Comparison and Discussion for the Sustainable Planning of an E-Waste Recycling Job Selection*

The ranking of the given associations was also obtained by the TOPSIS, F-TOPSIS, intuitionistic fuzzy TOPSIS (IF-TOPSIS), and proposed methods, and is presented in Table 6.


**Table 6.** Comparison of grading order of alternatives from various methods.

We observed that there was no discrepancy in the ranking order of the e-waste recycling job options produced by the TOPSIS, F-TOPSIS, IF-TOPSIS, and proposed methods. Hence, all the methods identified the same optimal alternative, *Y*3, i.e., the desirable e-waste recycling job. In general, the advantages of the extended approach over the existing methods are presented below.


From the analyses presented above, the proposed method based on divergence measures of FSs has the following advantages.

First, FSs used in this paper can express the evaluation information more flexibly. They can embed several values in membership degrees and can retain the completeness of original data or the inherent thoughts of decision-makers, which is the prerequisite of guaranteeing the accuracy of final outcomes.

Second, the proposed fuzzy divergence measures differ from the existing divergence measures, which always involve extensions whose impact on the final solution may be considerable; the proposed divergence measures offer the advantages of parametric generalization and overcome these shortcomings. This avoids losing or distorting the preference information provided, which makes the final results correspond better with real decision-making problems.

Finally, the proposed method provides a useful and flexible way to efficiently facilitate the decision-making process within the fuzzy environment. Moreover, it can handle cases in which the weight information is not fully available and only partial knowledge of the criteria weights can be obtained as a group of linear constraints.

### **6. Conclusions**

In the present study, we introduced some new divergence measures for FSs, which are generalizations of probabilistic divergence measures, and discussed some elegant properties that show the strength of the proposed measures. Later on, we defined a family of unified divergence measures for FSs based on various types of entropy functions. Next, an approach based on the fuzzy divergence measure to determine the weights of criteria was developed for MCDM problems within the fuzzy environment. Criteria with large cross-entropy and small entropy need to be well taken into account. Finally, we implemented the proposed method on an example that demonstrated its applicability and effectiveness in comparison with the methods already proposed in the literature.

The advantages of the proposed method are that it can be easily and conveniently evaluated and that it can efficiently reduce the loss of estimation information. The method proposed in this study was shown to be both feasible and valid through the illustrative example of recycling partner selection for sustainable practices and the comparison with existing methods. Thus, the proposed method has vast application potential for solving MCDM problems over FSs, in which alternatives are assessed against the criterion set in terms of FVs and the criterion weights are partially known. In the future, we will extend our research to IF-divergence measures and interval-valued intuitionistic fuzzy divergence measures and implement various real-life applications.

**Author Contributions:** Conceptualization, P.R. and Y.Y.; methodology, P.R.; software, A.R.M.; validation, A.M., M.A. and A.M.; formal analysis, A.R.M.; resources, A.M.; writing—original draft preparation, M.A.; writing—review and editing, D.S.H.; supervision, K.G.; project administration, K.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by Ton Duc Thang University.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Foresight Based on MADM-Based Scenarios' Approach: A Case about Comprehensive Sustainable Health Financing Models**

### **Sarfaraz Hashemkhani Zolfani 1,\*, Reza Dehnavieh 2, Atousa Poursheikhali 3, Olegas Prentkovskis <sup>4</sup> and Payam Khazaelpour <sup>5</sup>**


Received: 26 October 2019; Accepted: 16 December 2019; Published: 27 December 2019

**Abstract:** As indicated by a worldwide common perspective on health and sustainable health systems, the health structure, as a part of public health, is a key theme in many societies. The future is shaped by probable future scenarios, which are difficult to deal with in practice. This study focuses on future scenarios for a comprehensive sustainable health financing model to support a superior structure for a decision- and policy-making pilot for society. This aim is pursued with multiple attribute decision making (MADM)-based scenarios using two MADM methods, step-wise weight assessment ratio analysis (SWARA) and weighted aggregated sum product assessment (WASPAS), as a hybrid model, in the first real case study of this approach. Four main probable future scenarios are identified and selected based on experts' viewpoints on sustainable health financing models: membership in the World Trade Organization (WTO), dynamic basic insurance, international cooperation, and effective resources management. The evaluation process based on this approach works as a wider picture that includes all criteria and alternatives together. Sustainable medical services, empowering the private sector in both production and technology, and employing international managers emerged as the top priorities among the most applicable alternatives for the future. This structure was designed and developed in Iran's context, and the Institute for Futures Studies in Health is the pilot of the research.

**Keywords:** future scenarios; sustainable health insurance model; multiple attribute decision making (MADM)-based scenarios; step-wise weight assessment ratio analysis (SWARA); weighted aggregated sum product assessment (WASPAS)

### **1. Introduction**

The health system is supposed to function properly in four determined aspects to accomplish three fundamental objectives: health promotion, reasonable financial contribution, and responsiveness. These functions are stewardship, resource generation, financing, and services provision (WHO report, 2000) [2]. In addition, concerning the growing challenges of health systems, such as resource limitations and increased demand, neglecting variable circumstances and requirements would have detrimental consequences and lead to damaging failures (Coiera and Hovenga, 2007) [1]. Financing is considered one of the most essential functions of health systems.

Health system financing is the procedure by which revenues are collected from primary and secondary sources, accumulated in fund pools, and allocated to the specific activities of providers (1). Out-of-pocket payments by consumers, the government, social insurance, and private health insurance constitute the four main sources of health system financing (Markel, 2014) [3]. In this respect, the objectives of the financing function of the health system are adequate funding; equitable financial access to quality health services; protection from financial costs; and efficiency in resource mobilization, allocation, and usage. To obtain these objectives, three main functions are identified: purchasing services, pooling funds, and collecting funds (WHO report, 2000) [2]. The health-financing system is a highly contextual subject that needs to be addressed exclusively in each country (Moon, 2014; Wang et al., 2016) [4,5].

The MADM-based scenario approach has been demonstrated in other areas of science. It was implemented to select bicycle chain materials for strategic success in an Indian manufacturing scenario (Singh & Kumar, 2012) [6], where MADM-based scenarios were applied to select the material alternatives in bicycle manufacturing. Another example takes advantage of maintaining a high average of subscribers' service quality, namely the Quality of Service (QoS), where different radio access technologies with different handover criteria are implemented to preserve the continuity of the service (Zineb, Ayadi, & Tabbane, 2017) [7]. Consequently, the MADM-based scenario approach has been worked out in different fields and found to be reliable for prioritizing alternatives.

Increasing demand for health services and the increasing costs of health service supplies comprise the two main sources of expenditure growth in the health sector, affecting its drivers and dynamics. The growing burden of disease, which stems from the aging population and unhealthy lifestyles, has been asserted as one of the causes of increasing demand for health services. Advanced technologies and innovative strategies have led to a rising unit cost of care. Additionally, vested interests and incentive systems have led to a suboptimal allocation of resources (2). Unit cost increments and substandard resource allocation are both sources of expenditure growth in supplying health services. These specified drivers could potentially be the unsustainability factors in health system financing (Industry agenda, 2012; Ooms et al., 2014) [8,9].

There are vital indicators for monitoring and analyzing health financing, listed in Table 1, whose future status can be determined in different scenarios of health financing (WHO report, 2004) [10]:

**Table 1.** Important indicators in health financing monitoring. GDP: gross domestic product.

**Health Financing Indicators**

- Government health expenditure as a percentage of total health expenditure
- Government health expenditure as a percentage of total government expenditures
- Percentage of the population who become poor due to illness

The World Business Council for Sustainable Development (WBCSD) acknowledged that the current method of managing health systems is unsustainable, i.e., the costs to be afforded are higher while fewer deliveries are achieved than expected (NHS report, 2009) [11]. Sustainable systems are those that persist despite changes in their surroundings and continue to function toward their main objectives (Coiera and Hovenga, 2007) [1]. Health is a vital issue in many nations and is directly related to a sustainable community.

Scenarios can alter the capability of a system, such as a health system, to perceive the nature and effect of the future. Therefore, scenarios can help sense the future and the prerequisites that need to be dealt with (Gnatzy and Moser, 2012; Gille and Houy, 2014; Rhisiart et al., 2015) [12–14]. In the general structure of countries, health structures are remarkably diverse, despite having some relatively comparable aspects. Iran is no exception, setting important strategic plans for upgrading the general health structure of the country.

Speaking of strategic plans, there are two general approaches: the Beveridge approach, which uses public taxation, and the Bismarck approach, i.e., social health insurance; the latter is followed in Iranian policy making. The public budget, social health insurance, families' out-of-pocket payments, and private insurance are the financial structures for revenue gathering in Iran (Keshavarzian, 2014) [15].

Concerning Iran's context, strategic planning, futures studies, decision making, and policy making are required to create a holistic picture for managing the future in superior ways. Each of the specialized plans needs time to be developed and progressed while thoroughly considering the health structure.

In the current situation, the general position of the country is very dynamic and seems to have experienced some improvements compared with the previous decade. This implies an appropriate situation for the country to develop in different respects, especially in public health with a sustainable structure. Future scenarios as a framework can be conducive to this procedure and can generally be considered the backbone supporting this idea in the best way. In general, scenarios are qualitative, and there are different ways to evaluate them. It is believed that, where quantitative evaluation is possible, it is more accurate than qualitative structures.

**"Multiple attribute decision making (MADM)-based scenarios"** is the latest perspective for considering scenarios in a completely quantitative way (Hashemkhani Zolfani et al., 2016a) [16]. In comparison with other methods and perspectives, this structure is independent and takes future scenarios into account in its structure and methodology; it enjoys an expert-based structure similar to that of other decision-making methods. Facing the future of policies and strategic decisions is highly complicated, so this approach has the advantage of leading policy makers toward a better view of critical points and topics. In addition, creating a wider picture is uncommon among approaches and methods, especially toward future scenarios and events. The health systems of countries are therefore a critical and serious topic on which governments all around the world are spending time. Iran's health structure is on an evolutionary path along which the development of foresight models seems necessary for the country. Finally, given the expert-based structure of this methodology, an institution (the Institute for Futures Studies in Health) was selected as the pilot for this study.

In this study, probable related future scenarios of health financing will be directed in an **"MADM-based scenarios"** structure for analyzing the general environment of the future of the health structure of the country in a sustainable way. Two MADM methods, SWARA and WASPAS, will be applied as a suitable hybrid MADM model and a detailed discussion will be presented finally.

### **2. Literature Review**

In this section, the sustainable health structure and current state of the health system in Iran are discussed as two influential issues.

### *2.1. Sustainable Health*

Sustainable development has been defined as "meeting the needs of people here and at the present time, without prejudicing the needs of others elsewhere and in the future" (Pencheon, 2015) [17].

The primary aspects of sustainable development are economic, social, and environmental. However, sustainable development has been expanded into other dimensions according to the contexts of the studied situation. In the related context of the present study, the governance dimension was introduced as the new dimension. The fundamental issue is the continuous and harmonious interactions of all these dimensions to create the desired balance and construct sustainable systems. Health is critical to achieving these four pillars (Sustainable Development Solutions Network, 2014) [18].

As claimed by Liaropoulos and Goranitis [19], economic crises have brought unprecedented attention to the healthcare establishment regarding sustainability. For this reason, sustainable healthcare becomes fundamentally tied to sustainable financing. Wages decrease due to economic crises and unemployment, and accordingly healthcare insurance coverage rises along with medical costs. To tackle the problem of increasing costs, an MADM-based scenario is applied to prioritize sustainable healthcare criteria and their future impacts, shedding light on future sustainable healthcare financing under sanction circumstances.

To achieve sustainability in a health system, all the functions of the health system should collaborate in a balanced and sustainable form in fulfilling their tasks consistent with sustainability requirements (Smith et al., 2008) [20]. A sustainable health model can be achieved by provisioning high quality and improved health services, and it can guarantee the perceived safety, improve life quality, enhance well-being, and promote public health. A sustainable health system not only deals with diseases but also with dimensions and determinants of health (Timmers, 2014) [21].

Health systems are central to the new agenda, in which 13 health targets are proposed to cover most national health concerns. Universal health coverage (UHC), achieving access to quality health care, is stated in the declaration. UHC provides several opportunities, such as inclusion, equity, financial protection, livelihood generation, a common global vision, and a unified global rallying point, and includes health protection, promotion, prevention, treatment, rehabilitation, and palliation. Health is positioned as a major contributor to the other sustainable development goals (SDGs); by overlooking health, many of the other goals would remain inconclusive (Agenda for Sustainable Development, 2015) [22].

As one of the widespread problems of the Millennium Development Goals, in the new agenda, after the segmentation of countries, health systems are required to provide integrated and fair services considering costs and resources.

The sustainable development goals are intertwined and indivisible so that progress in one area relies on progress in many other areas as the synergies occur among health, education, nutrition, social protection, and conflict.

Most importantly, the sustainability of a health system is highly dependent on its sustainable financing, as it can ensure a method for prepaid financial contributions for health care. It aims to share risk across the population, avoid catastrophic health-care expenditure, and improve individuals' health status (WHA, 2005) [23].

### *2.2. Current State of Health System Financing in Iran*

Health is currently generally perceived as a fundamental right, and the urgency of some global health issues has pushed global health policy to the top of the international agenda (Gottret and Schieber, 2000) [24]. Health financing is a standout amongst the most imperative components in health systems to promote health. Health financing refers to the "function of a health system concerned with the mobilization, accumulation and allocation of money to cover the health needs of the people, individually and collectively, in the health system the purpose of health financing is to make funding available, as well as to set the right financial incentives to providers, to ensure that all individuals have access to effective public health and personal health care" (The World Health Report, 2000) [25].

Health financing is principal to the capacity of health systems to maintain and improve human welfare. At the extreme, without the necessary funds, no health workers would be employed, no medicines would be accessible, and no health promotion or prevention would occur.

The Iranian health structure has an egalitarian perspective, although there are many challenges in between. After the revolution, around 40 years ago, health insurance started covering people, and the government has kept trying to support people in terms of health insurance (Rashidian et al., 2018) [26]. The Iranian insurance system is a combination of all sectors, both public and private, though it is mostly public. Primary care payments are mostly based on subsidy and capitation, while hospital payments are set according to the general costs and the services of physicians (Almaspoor Khangah et al., 2017) [27]. In Iran, primary health care is financed and delivered predominantly by the government. Secondary and tertiary care are delivered both publicly and privately (Kavosi et al., 2012) [28].

Since May 2014, when President Rouhani's government took office, its new policies regarding the Ministry of Health and Medical Education (MoHME) have followed new strategies and plans in the health structure of the country under the "Health System Transformation Plan" (HSTP), aimed at increasing equity across the different regions of Iran so that people of different economic levels are treated equally (Olyaeemanesh et al., 2018) [29].

The main public health insurers are as follows.


### **3. Methodology**

To date, various methods have been developed and introduced in MADM; however, the general concept of this research is based on the new future-oriented MADM and MADM-based scenarios, which were presented to make future scenarios quantitative and multi-attribute-based simultaneously (Hashemkhani Zolfani et al., 2016a) [16]. Each scenario can be considered separately, and each MADM-based scenario is utilized for making decisions about forthcoming situations. This new methodology was presented to empower new future-based trends in the MADM field, such as prospective MADM (Hashemkhani Zolfani et al., 2016b; Hashemkhani Zolfani et al., 2018b; Hashemkhani Zolfani and Masaeli, 2019) [31–33]. "MADM-based scenarios" contrast with scenario-based MADM: the concept of MADM-based scenarios is not intended to assess the scenarios, as MADM is the core concept, while in the earlier perspective, scenarios are considered the core area under study. In this concept, scenarios support presenting a superior model with MADM methods in practice, and eventually a final evaluation can be performed and an ultimate decision can be made.

A foresight comprehensive plan can be completely different across future scenarios. In foresight strategic policy, pre-planning is considered for all possible scenarios. It can be inferred that future scenarios are similar in most components, although some differences exist among the criteria, alternatives, and their relative importance. A final analysis is structured respecting all scenarios and the MADM models based on them. It can be deduced that MADM-based scenarios form a multi-disciplinary perspective comprising two multi-disciplinary fields: futures studies and multiple criteria decision making (MCDM), a sub-field of decision science. The process of the MADM-based scenarios' concept is illustrated in Figure 1.

**Figure 1.** The general process of multiple attribute decision making (MADM)-based scenarios (Hashemkhani Zolfani et al., 2016a) [16].

The classic part of "MADM-based scenarios" is structured with classic MADM methods. SWARA-WASPAS as a hybrid model is selected for this study.

In the present study, a policy-based structure is applied to decision making related to selecting criteria, as it is more pertinent to the SWARA method than to other MADM methods for criteria weighting and evaluation. SWARA-WASPAS is a fast-growing hybrid model that has been presented in recent years as a powerful tool for solving multi-criteria decision problems (Hashemkhani Zolfani et al., 2013) [34]. WASPAS itself combines the advantages of its two classic constituent methods. Considering the importance of this new hybrid MADM model, a review article has recently been published that shows dimensions of the framework from different points of view (Mardani et al., 2017) [35]. Published studies were identified using the SWARA-WASPAS research strategy listed below:


### *3.1. Step-Wise Weight Assessment Ratio Analysis (SWARA)*

SWARA is one of the MADM methods that applies a policy-based perspective to criteria weighting and evaluation (Hashemkhani Zolfani and Saparauskas, 2013) [44]. SWARA is comparable with other MADM methods such as AHP (Saaty, 1980) [45], ANP (Saaty and Vargas, 2001) [46], FARE (Ginevicius, 2011) [47], BWM (Rezaei, 2015) [48] (Haghnazar Kouchaksaraei et al., 2015 [49]; Hashemkhani Zolfani et al., 2015) [50], rough strength relational DEMATEL (Roy et al., 2018) [51], and extended SWARA (Hashemkhani Zolfani et al., 2018a) [52]. The first step in the SWARA method is ranking the criteria by priority, which is the policy-based part of this method and can be advantageous for decision making at the top level of policy making [53]. SWARA is characterized as a friendly method that works in a straightforward manner. The procedure of SWARA is based on the following steps (Stanujkic et al., 2015) [54]:

*Step* **1** Rank all criteria according to the experts' viewpoints.

*Step* **2** Starting from the second criterion, determine the comparative importance of the average value *sj*, i.e., the relative importance of criterion *j* in relation to the previous criterion (*j* − 1).

*Step* **3** Determine the coefficient *kj*

$$k\_j = \begin{cases} 1 & j = 1 \\ s\_j + 1 & j > 1 \end{cases} \tag{1}$$

*Step* **4** Determine the recalculated weight *qj*

$$q\_j = \begin{cases} 1 & j = 1 \\ \frac{q\_{j-1}}{k\_j} & j > 1 \end{cases} \tag{2}$$

*Step* **5** Compute the final criteria weights:

$$w\_j = \frac{q\_j}{\sum\_{k=1}^n q\_k} \tag{3}$$

where *wj* denotes the relative weight of criterion *j*.
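The SWARA procedure above can be sketched in a few lines of Python. This is a minimal illustration of Equations (1)–(3), following the standard recursion *qj* = *qj−1*/*kj* (Stanujkic et al., 2015); the *sj* values in the usage example are hypothetical, not taken from the study.

```python
def swara_weights(s):
    """Compute SWARA criteria weights.

    s: comparative importance values s_j for criteria j = 2..n,
       given for criteria already ranked by priority (Steps 1-2).
    Returns the relative weights w_j for all n criteria.
    """
    k = [1.0] + [sj + 1.0 for sj in s]   # Eq. (1): k_1 = 1, k_j = s_j + 1
    q = [1.0]                            # Eq. (2): q_1 = 1
    for kj in k[1:]:
        q.append(q[-1] / kj)             # Eq. (2): q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]      # Eq. (3): w_j = q_j / sum of q

# Hypothetical s_j values for four ranked criteria:
w = swara_weights([0.30, 0.20, 0.15])
print([round(x, 3) for x in w])
```

Because each *kj* ≥ 1, the weights are non-increasing in the ranked order, which reflects the policy-based priority established in Step 1.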

### *3.2. Weighted Aggregated Sum Product Assessment (WASPAS)*

WASPAS has been introduced as one of the state-of-the-art MADM methods for evaluating alternatives based on a combination of the weighted sum model (WSM) and the weighted product model (WPM) (Zavadskas et al., 2012) [55]. The procedure of WASPAS is presented below (Stojic et al., 2018) [56]:

Step 1. Establish a normalized decision-making matrix:

If the optimal value of criterion *j* is a maximum:

$$\overline{\mathbf{x}\_{ij}} = \frac{\mathbf{x}\_{ij}}{\operatorname\*{opt}\_{i}\mathbf{x}\_{ij}}, \text{ where } i = \overline{1, m}; \ j = \overline{1, n} \tag{4}$$

If the optimal value is a minimum:

$$\overline{\mathbf{x}\_{ij}} = \frac{\operatorname\*{opt}\_{i}\mathbf{x}\_{ij}}{\mathbf{x}\_{ij}}, \text{ where } i = \overline{1, m}; \ j = \overline{1, n} \tag{5}$$

Step 2. Calculate the WASPAS weighted and normalized decision-making matrix for the summarizing part:

$$\overline{\overline{\mathbf{x}\_{ij}}}\_{\mathrm{sum}} = \overline{\mathbf{x}\_{ij}}\, q\_j, \text{ where } i = \overline{1, m}; \; j = \overline{1, n} \tag{6}$$

Step 3. Calculate the WASPAS weighted and normalized decision-making matrix for the multiplication part:

$$\overline{\overline{\mathbf{x}\_{ij}}}\_{\mathrm{mult}} = \overline{\mathbf{x}}\_{ij}^{\,q\_j}, \text{ where } i = \overline{1, m}; \ j = \overline{1, n} \tag{7}$$

Step 4. Final calculating for evaluating and prioritizing alternatives:

$$\text{WPS}\_{i} = 0.5 \sum\_{j=1}^{n} \overline{\overline{\mathbf{x}\_{ij}}}\_{\mathrm{sum}} + 0.5 \prod\_{j=1}^{n} \overline{\overline{\mathbf{x}\_{ij}}}\_{\mathrm{mult}}, \text{ where } i = \overline{1, m} \tag{8}$$
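Steps 1–4 of WASPAS can be condensed into a short NumPy sketch. The decision matrix, weights, and benefit/cost flags in the usage example are hypothetical and serve only to show the mechanics of Equations (4)–(8).

```python
import numpy as np

def waspas(X, w, benefit, lam=0.5):
    """Score alternatives with WASPAS.

    X: m x n matrix of ratings (rows = alternatives, columns = criteria)
    w: criteria weights (e.g., from SWARA), summing to 1
    benefit: per-criterion flags (True = max/benefit, False = min/cost)
    lam: joint coefficient (0.5 in Eq. 8)
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    # Eqs. (4)-(5): normalize each column against its optimum
    norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    wsm = (norm * w).sum(axis=1)        # Eq. (6), summed: weighted sum part
    wpm = (norm ** w).prod(axis=1)      # Eq. (7), multiplied: weighted product part
    return lam * wsm + (1 - lam) * wpm  # Eq. (8): joint WASPAS score

# Hypothetical example: 3 alternatives, 3 criteria (the last is a cost criterion)
scores = waspas([[4, 7, 3], [5, 6, 4], [3, 8, 2]],
                w=[0.5, 0.3, 0.2], benefit=[True, True, False])
print(scores.argsort()[::-1] + 1)  # alternative numbers, ranked best-first
```

The alternative with the largest WPS value is preferred; averaging the WSM and WPM parts makes the ranking more robust than either model alone.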

### **4. Case Study**

The Institute for Futures Studies in Health is affiliated with Kerman Medical University and consists of five research centers: Modeling in Health, Social Determinants of Health, Health Services Management, Medical Informatics, and the Regional Knowledge Hub and WHO Collaborating Center for HIV Surveillance. The institute delivers consulting services on contemporary topics and issues for future-oriented projects, documents, and plans, assisting the research, training, and development deputies of the Ministry of Health and Medical Education, and it also cooperates with the Academy of Medical Sciences.

The expertise areas of the institute are Medical informatics, Health services management, Social determinants of health, Modeling in health, HIV surveillance, Data capacity, Data mining, Systematic review, Meta-analysis and meta-synthesis, System dynamics, Futures studies methods, Statistics, and Epidemiology and health philosophy.

### **5. Future Scenarios**

Four possible and probable scenarios are established regarding the experts' standpoints to demonstrate the future in practice. The general concept has been structured with a foresight perspective as a national plan for the country. Each scenario can be a possible future for the country that can directly affect the health structure.

At the general conference meeting, each scenario was discussed along with its criteria, and final criteria were selected. It is noteworthy that criteria were developed and selected based on each scenario in the future i.e., for each probable future, diverse criteria would be suitable for evaluating the intended situation.

In the procedure of establishing scenarios, finding key criteria and probable important alternatives (strategies), experts cooperated with the director of the research. As mentioned earlier, "The Institute for Futures Studies in Health" is responsible for futures studies regarding the healthcare system of the country. An expert team was selected for this study, and their information is presented in Table 2.


**Table 2.** Information of expert team involved in the research.

The final future scenarios based on the experts' opinions are presented below:


*Futures Scenarios: Definitions, Criteria, and Alternatives*

1. Membership in World Trade Organization (WTO)

The WTO had more than 162 members by the end of 2015 and is involved with the Millennium Development Goals and the Sustainable Development Goals. The international economic structure worldwide depends in part on the regulations and structure of the WTO, and relations, frameworks, and countries' overseas issues can be partly influenced by it. One of the probable scenarios that Iran may be confronted with is the economic structure after joining the WTO. The economic aspect of each sector can evolve differently, so the health structure of the country has the opportunity to make a strategic plan for developing the health framework along diverse paths.

Iran is the biggest country among the observer countries, and turning Iran's status from observer to permanent membership is not far from reality. Access to medicine in all aspects, including technologies, would become easier, and medicine would be cheaper and of higher quality. International cooperation in the treatment of patients in hospitals, among other aspects, could develop into multi-national and fully international forms. New opportunities would arise in research and technological innovation for the treatment of diseases and the exploration of new medicines for the country. In the process of setting up a futures exercise, the probable situations and scenarios would be turned into realistic ones through strategic plans. The final criteria of this scenario are presented in Table 3.


**Table 3.** Final criteria of "Membership in World Trade Organization (WTO)" scenario.

In this section, the analysis of alternatives and the selection of the most important ones by experts in a conference meeting are presented. Key alternatives related to the "Membership in World Trade Organization (WTO)" scenario are shown in Table 4.

**Table 4.** Final alternatives related to "Membership in World Trade Organization (WTO)" scenario. R&D: research and development.


### 2. Dynamic Basic Insurance

In this scenario, insurance companies have the role of a smart shopper supported by an appropriate organizational structure. There is a proper relationship between insurance companies and other organizations involved in the health sector. In dynamic insurance, premiums are collected and calculated sustainably, and both resource and revenue pooling are proper. The structure of health insurance acts properly in terms of quality and quantity. Basic insurance has good coverage and determines and provides its service package by defined, logical, and smart standards. The level of out-of-pocket payment and catastrophic healthcare expenditure is decreased. Insurance systems monitor the quality of providers' services and, besides the defined and logical mechanisms of their contracts with providers, they also monitor the efficient implementation of the contracts. The position of insurance in the health system is accepted in all its functions. Insurance companies pay providers properly, while the insurers' choices are limited and the competition among insurers is restricted. The final criteria of the scenario are presented in Table 5.


**Table 5.** Final criteria of "Dynamic basic insurance" scenario.

For the next scenario, experts made a decision on probable key alternatives for the "Dynamic basic insurance" scenario. The process of this evaluation is structured to just select those alternatives with 100% effectiveness. All alternatives linked to the "Dynamic basic insurance" scenario are shown in Table 6.

**Table 6.** Final alternatives related to the "Dynamic basic insurance" scenario.


### 3. International Joint Cooperation

This scenario consists of joint contracts covering the medical education area, educational and research contracts, health technologies, and cooperation with international organizations such as the World Health Organization. It also includes the design and implementation of joint projects for health sector reforms, joint graduate students, and joint training workshops defined to meet shared training needs. Foreign investment in technological infrastructure increases, for example through hospitals established by joint Iranian and foreign forces with no border restrictions, and the benefits of health services free zones are defined, such as health settlements or a health island without any visa or passport limitations. Further elements are health tourism; increasing the quality and diversity of health services; Memoranda of Understanding (MoU) and strategy definitions for crisis management; countering and controlling the epidemics and prevalent diseases defined for MoU members; providing opportunities to draw on the expertise of Iranians living abroad by inviting them to participate in the MoU's implementation; and, finally, defining joint projects through which know-how and know-why knowledge are transferred. As a result, the foreign aid of the health system increases. The final criteria of International Joint Cooperation are shown in Table 7.


**Table 7.** Final criteria of "International joint cooperation" scenario.

Another four alternatives are selected based on experts' ideas for the "International joint cooperation" scenario, in practice. Alternatives are illustrated in Table 8.


**Table 8.** Final alternatives related to the "International joint cooperation" scenario.

### 4. Effective Resources Management

In this scenario, a significant portion of gross domestic product (GDP) is spent on health care. The input resources of the health sector are relatively sustainable. The share of out-of-pocket payment in total health spending is not substantial. Financial data and statistical information on the health system are accessible and integrated. Moreover, an evidence-based philosophy is used by policy makers. Resource consumption in different parts of the health system is carried out in the most ideal way possible, and controlling resource consumption is accomplished appropriately. Various groups of stakeholders benefit from the cost-effectiveness of the health sector's advanced technologies, as the most suitable technologies are chosen for use in health sectors. The relationship between insurance companies, the Ministry of Health, and other health stakeholders is properly established. As a consequence, the community's choice of various services increases, which also affects providers' choices for selecting and delivering services. It should be kept in mind that during a financial crisis in the country, if the state is in charge of financing the health sector, the health sector will confront complications. The final criteria of this scenario are shown in Table 9.

**Table 9.** Final criteria of "Effective resources management" scenario.


According to the methodology of MADM-based scenarios, there is no limitation for the numbers of alternatives, criteria, and probable differences. Based on experts' viewpoints, five key probable alternatives were generated for the "Effective resources management" scenario. "Effective resources management" scenario's alternatives also are shown in Table 10.


**Table 10.** Final alternatives related to "Effective resources management" scenario.

### **6. Results**

In view of all that has been mentioned so far in the methodology and previous sections, four scenarios were created to show the probable futures. For each scenario, calculations were carried out separately based on a hybrid MADM model (SWARA-WASPAS). The obtained results are explained in full in Sections 6.1 and 6.2.

### *6.1. SWARA Results*

According to the four scenarios, calculations were performed separately, and each scenario along with its criteria was evaluated based on the SWARA method to prioritize and weigh the criteria. As noted earlier, SWARA is mostly appropriate for the cases that are policy-based rather than for general decision-making structures. Concerning the foresight perspective of the study, a policy and SWARA-based framework was applied for this research. Based on the SWARA methodology and policy-based perspective method, the criteria of each scenario were prioritized based on the experts' judgments and their importance for the country. The results for each scenario are shown in Tables 11–14.


**Table 11.** Criteria's evaluation of "Membership in World Trade Organization (WTO)" scenario.

Health services' quality was selected as the most influential and effective criterion in the first scenario. This criterion also exists in all other scenarios, and it is one of the most effective criteria, undoubtedly.


**Table 12.** Criteria's evaluation of "Dynamic basic insurance" scenario.

Justice in health, Level of accountability, and Quality of financial resources management were three new criteria for this scenario, which were not identified as important criteria for the previous scenario. As can be seen, justice in health was indicated as the vital criterion in this scenario, even though the significance of the quality of health services is considered undeniable, since different scenarios have distinct ends.


**Table 13.** Criteria's evaluation of "International joint cooperation" scenario.

Quality of human resource management, Cost management (medical equipment), Reinforced social aspects, and Flexibility in management and accountability were justified criteria for the third scenario. Similarly to the previous scenario, the quality of human resource management, which was the key criterion in this scenario, had top priority.

**Table 14.** Criteria's evaluation of "Effective resources management" scenario.


The cost of "Cooperation" and "Level of accessibility (Geographical position)" were other new criteria that were revealed as important criteria only in the fourth scenario. It can be inferred that new criteria in different scenarios are the key criteria of that specific scenario since they are defined respective to their critical situation in the given scenario to increase the accuracy of decision making.

### *6.2. WASPAS Results*

Similarly to the SWARA part, each scenario was evaluated separately. The related alternatives of each scenario were analyzed in this section, and their priorities are shown in Tables 15–30. All these evaluations were executed based on the WASPAS methodology. The tables of the final ranking are illustrated in the main body of the article, while the calculation parts are presented in Appendix A (Table A1).

### 6.2.1. "Membership in World Trade Organization (WTO)" Scenario

According to Table 4, four probable alternatives were selected by the experts involved in this research. In this research, future strategies in a special situation were considered as a scenario for the future and as the probable alternatives. The strategies/alternatives of this scenario are: (1) Pharmaceutical security, (2) Focus of government on research and development (R&D), (3) Empowerment of the private sector in both production and technology, and (4) Applying both FDI and equipped new technologies in one structure. The process is shown in Tables 15–18.


**Table 15.** Decision-making matrix.

**Table 16.** Normalized weighted decision-making matrix for the summarizing part.

**Table 17.** Normalized weighted decision-making matrix for the summarizing part and multiplication part.


**Table 18.** The results of weighted aggregated sum product assessment (WASPAS).


### 6.2.2. "Dynamic Basic Insurance" Scenario

As illustrated in Table 6, four different probable alternatives were selected by the experts of this study. These strategies, which are also considered as alternatives in this research, are as follows: (1) Hiring international managers, (2) Benchmarking, (3) Supportive governmental policies, and (4) Good governance. The calculations are depicted in Tables 19–22.


**Table 19.** Decision-making matrix.

**Table 20.** Normalized weighted decision-making matrix for the summarizing part.


**Table 21.** Normalized weighted decision-making matrix for the summarizing part and the multiplication part.


**Table 22.** The results of WASPAS.


### 6.2.3. "International Joint Cooperation" Scenario

In this section, similarly to the previous scenarios, the selected strategies were evaluated based on the WASPAS methodology. The strategies for the international joint cooperation scenario include the following: (1) Pharmaceutical security; (2) Supportive governmental policies; (3) Sustainable medical services; and (4) Sustainable human resource management. The results can be seen in Tables 23–26.


**Table 23.** Decision-making matrix.

**Table 24.** Normalized weighted decision-making matrix for the summarizing part.


**Table 25.** Normalized weighted decision-making matrix for the summarizing part and multiplication part.


**Table 26.** The results of WASPAS.


### 6.2.4. "Effective Resources Management" Scenario

In this scenario, five strategies were selected as alternatives referring to experts' opinions. The strategies for the effective resources management scenario are: (1) Benchmarking, (2) Supportive governmental policies, (3) Good governance, (4) Sustainable human resource management, and (5) Re-engineering the process. The final priorities are illustrated in Tables 27–30.


**Table 27.** Decision-making matrix.

**Table 28.** Normalized weighted decision-making matrix for the summarizing part.


**Table 29.** Normalized weighted decision-making matrix for the summarizing part and the multiplication part.


**Table 30.** The results of WASPAS.


### **7. Discussion**

This discussion is prepared in three sections. In the first section, the most effective criterion is addressed, and in the second section, the discovery of the most applicable strategy as an alternative is discussed. The third and last section concentrates on the explanation of the macro monitoring system for the health structure as a comprehensive sustainable health system.

### *7.1. First Round Analysis: Finding the Most Effective Criterion*

According to the methodology, the first round of the discussion pertains to the analysis of effective criteria, especially the most effective ones. As shown in Table 31, all 15 criteria and their relative importance (weights) in the different scenarios are presented. Table 31 provides a general perspective on the criteria and their situations in the scenarios.


**Table 31.** Weights of criteria based on the scenarios.

Considering the critical criteria, the impact of the scenarios, their average weights, and their averages based on their priorities are addressed in the second round. As can be seen in Table 32, these items provide the final analysis of the criteria. A final ranking based on the criteria's weights and their effectiveness is presented in Table 32, and the most effective criterion (C1: Health services' quality) was identified in accordance with expectations.


**Table 32.** Investigation on effectiveness of criteria based on scenarios.

### *7.2. Second Round Analysis: Finding the Most Applicable Alternative*

Similar to the previous section, 11 strategies as alternatives and their priorities in these scenarios are presented in Table 33, and based on this table, a final evaluation was carried out.


**Table 33.** Alternatives and their priorities in the scenarios.

The alternatives were analyzed and evaluated using different items. Other evaluation methods, beyond this research's original methodology, were utilized for examination: Average in state of max criteria (ASMAC) and Average in state of min criteria (ASMIC) were added to be checked and examined in this process. A final evaluation of the alternatives is presented in Table 34. It is noteworthy that these two methods are completely different, and no relation exists between them. The final discussion of these two methods is addressed in Section 8.

The original part of the MADM-based scenarios methodology is the priority based on applicability. Analyzing priorities based on ASMAC and ASMIC was tested within this framework for the first time. This second part was identified as ineffective for two reasons: firstly, it was not directly related to the original part; secondly, the maximum and minimum criteria distorted the ranking. To elaborate, alternatives with no minimum criterion could be ranked higher than other alternatives that are more influential in general.

### *7.3. Macro Monitoring System*

The Institute for Futures Studies in Health of Kerman Medical University has been collaborating as a partner in this research. This institute plays a key role in monitoring weak signals and also preparing scenarios for managing probable crisis in the future. The preparation of Wildcard scenarios has been undertaken by this institute as a concurrent part of the system for directing the general health structure of the country according to macro needs.

### *7.4. Final Analysis*

According to the results, the quality of medical services was identified as the most effective criterion, and sustainable services delivery was recognized as the most applicable alternative. The obtained findings imply that despite the uncertainty of the probable futures, financial sustainability in the health system can be achieved by presenting sustainable and high-quality services delivery.

Sustainable services delivery is affected by three other functions of the health system i.e., stewardship, resource creation, and financing. Thus, to realize the sustainability goals in sustainable service delivery, all these functions need to operate in a sustained manner.

Sustainable medical services delivery was the most applicable alternative for achieving sustainable health financing. It can be deduced that separating a part (a function) of a health system and attempting to make it sustainable does not seem to be pragmatic.

It can be speculated that the economic environment of the health system in three financing layers of international, regional/national, and sectoral (health sector) is interwoven; however, the amount of the effect that can be transformed from the international and regional/national layer to the health financing layer depends on the robustness of the financing mechanisms of the health financing function. For instance, it should be determined how the revenue collection task of the financial system can be performed despite the environmental changes.

Health expenditures are provided from two main channels consisting of the general and private sectors. General expenditures incorporate governmental budget and social security insurance, and private expenditures incorporate out-of-pocket, private insurances and other resources such as donations.




In comparison with other middle- and upper-income countries, the governmental share in providing health expenditure is lower in Iran. The two main channels of providing health expenditure are out-of-pocket and governmental payment, which are both affected by other layers of the economic environment. If the health system becomes capable of developing mechanisms of wealth creation, such as entrepreneurship based on a national innovation system, moving toward third-generation medical universities, health tourism, etc., this fulfillment will bring about two main results. One is risk reduction, as a consequence of reduced concentration on limited sources and decreased dependence on particular financing mechanisms. The other is the lessening of environmental impacts, which boosts the sustainability of the system.

Improving the quality of health services in Iran requires attention to several key factors such as organizing, regulation, financing, payment, and behavioral systems.

Some important factors in organizing are listed as follows: a proper structure for implementing health care programs, determining appropriate structures of quality assessment, providing appropriate definitions of different levels involved in quality promotion, considering the suitable structures for quality-related knowledge share among stakeholders, and defining related appropriate educational and research roles to improve the quality of care.

Defining an appropriate regulatory structure related to quality improvement, the revision of the rules and guidelines associated with different service levels according to the new needs of society (such as chronic diseases), the development of appropriate laws and regulations in the country to produce various types of evidence-based guidelines, and creating a proper understanding about the quality of health legislation are all factors related to the country law and regulation quality.

In regard to the financing factors, the following issues should be considered: defining the health care quality monitoring system of the country and the development of appropriate indicators, considering infrastructural resources of service quality in allocating them in the health sector, creating informational infrastructure related to service quality, providing evidence-based guidelines at different levels of service with the participation of all stakeholders, medical education involvement in training about service qualities concepts and quantitative and qualitative training, a smart selection of models to improve the quality of concepts of stakeholders, motivating and empowering healthcare personnel to provide high-quality services, and ultimately monitoring the quality of inputs such as medicines and medical equipment.

The payment system factors such as creating mechanisms of using quality-based payment methods—and more effectively, the role of insurance in promotion of service quality by strategic purchasing—would enhance the quality of healthcare services.

For the behavioral function, the critical factors that need to be considered are giving feedback related to services quality to providers and society, defining national and local events to appreciate the efforts related to quality improvement, promoting self-assessment processes in organizations providing health services, and institutionalizing moral, social, and cultural values in services delivery to meet the needs of the non-clinical health care community.

### **8. Methodological Discussion and Contributions**

To date, there have been few studies using MADM-based scenarios; hence, this research attempted to add substantially to our understanding of this methodology, to demonstrate how it can be applied, and to draw attention to its constructive attributes in practice. Additionally, this methodology provides a clear vision for decision makers and policy makers to make more accurate and comprehensive decisions. In this study, two extra evaluation parts were added to the applied methodology. The first is "Average based on priorities (Min)", which is influential in evaluating momentous criteria, and the second is "Estate of alternatives based on ASMAC and ASMIC", which is utilized in the evaluation process of applicable alternatives.

Both extra evaluation parts were presented to be examined in real case studies. The results revealed reliable evidence to confirm the validity of "Average based on priorities (Min)", while no evidence was attained to support the effective usage of the "Estate of alternatives based on ASMAC and ASMIC" in this specific study. It is surmised that ASMAC and ASMIC were not conducive to finding applicable alternatives due to certain limitations. In the current research, the ranking of alternatives became practically impossible, since alternative 9 did not have a minimum estate. Taken together, these findings suggest that "Average based on priorities (Min)" can be considered the salient addition, identified as indispensable for finding effective criteria.

### **9. Conclusions**

The current findings add to a growing body of literature on sustainable health financing. In this study, health services' quality was recognized as the most effective criterion in assessing the different future alternatives. In fact, the promotion of health services' quality affects different aspects of health system functioning and its enhancement. From the viewpoint of patients, the delivery of quality services is the most paramount aspect of what they receive in a health system. Through the lens of providers, delivering high-quality health services assures their thorough functioning in a health system. From the policy makers' perspective, delivering health services at a satisfactory quality level implies that the health services are delivered in compliance with standards and regulations; furthermore, it also ensures that the limited health resources are utilized productively. From the health insurance agencies' outlook, high-quality service delivery can be translated into strategic purchasing for the entire insured population. At first glance, it may seem that delivering high-quality services will heighten expenditures. However, on closer consideration, delivering high-quality services not only reduces many expenditures incurred by the further referral of patients and the side effects of low-quality health services delivery but also makes the system well known as a high-quality health care system that is highly committed to its social responsibility besides its other administrative responsibilities, which meets the social pillar of the sustainability approach. Therefore, high-quality services delivery can be the joint criterion incorporating the distinctive functions of the health system such as financing, service delivery, and so on.

In the other part of the study, sustainable medical service was determined as the most pertinent and applicable alternative in various health financing scenarios, which signifies that economic, social, and environmental aspects of sustainability are fostered in superior services delivery to individuals. Indeed, sustainable medical services can assure the quality of the services, which was determined as the most effective criterion of health financing for future. In regard to sustainability standards and principles, financing the health system should be possible in every economic, social, and environmental fluctuation influencing the health system such as changes to the oil prices, other probable sanctions of Iran, population aging, and so forth.

Sustainable medical services can guarantee the quality of medical services to establish sustainable financing in a health system regardless of the different probable future scenarios of the health system in Iran. Future research directions in sustainable health and medical services based on MADM-based scenarios will lead researchers to the proper delivery and quality monitoring of medical products under sanction circumstances. For this reason, sustainable development that brings about new rules, regulations, and changes of medical delivery/products with a high quality of service is needed under sanctions. For instance, the wide accessibility of alternative medical products is of interest under sanctioned circumstances based on medical priorities. Moreover, the proper delivery of medical products is guaranteed when alternative medical products are used, and they become economically controlled when sanctions are applied. Future analyses need to investigate how potential environmentally friendly medical sources of sanctioned countries can replace outsourced counterparts to maintain sustainability in a myriad of aspects.

**Author Contributions:** Main idea, conceptualization, methodology, identifying criteria, calculations, discussion and editing and writing–original draft by S.H.Z.; Main idea, interviews, identifying criteria, appendix by R.D. & A.P.; Editing, conceptualization by O.P.; Editing, revision and discussion by P.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Appendix A**

**Table A1.** Explanations of all the criteria.


### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Simulation-Based Positioning of Circular Economy Manager's Skills in Construction Projects**

### **Jarosław Górecki**

UTP University of Science and Technology, Faculty of Civil and Environmental Engineering and Architecture, prof. S. Kaliskiego 7, 85-796 Bydgoszcz, Poland; gorecki@utp.edu.pl

Received: 5 December 2019; Accepted: 24 December 2019; Published: 26 December 2019

**Abstract:** Circular economy (CE) is an emerging economic model based on the endless circular flow of resources creating additional value. In temporary organizations such as construction projects, all administrative decisions are crucial for final success. One idea is to enroll a circular economy manager (CEMR) and place this role in the organizational structure. Implementation of the CE concept should be the effort of the entire project team. However, actions specific to the innovative nature of CE-related procedures in construction projects require additional support. It can be provided by professionals who can adapt a wide spectrum of knowledge to promote CE in the execution of construction processes. CEMRs can act as patrons of CE issues because they support project managers in saving material resources in construction projects. The symmetry between the visible outcomes of the CE idea and the employment of an extra manager has contributed to the development of the CEMR selection criteria model. However, effective recruitment for such a post may be complicated for decision-makers, especially while CE remains enigmatic and its procedures largely unexplored. All in all, this multi-criteria decision-making problem forces one to prepare a list of selection criteria and to rank them according to their status in the hierarchy. This article presents prioritized criteria for selecting the CEMR, based on an advanced literature review refined after several expert-based reviews and calculated with Monte Carlo simulations. The main purpose of this article is to help decision-makers in construction projects to perform a reliable recruitment process.

**Keywords:** circular economy; selection process; construction project; management; Monte Carlo

### **1. Introduction**

Progressive urbanization of natural areas fosters biodiversity loss and climate change. The rapid development of urban areas all over the world causes many serious environmental concerns, such as emissions, floods, etc. Fortunately, growing knowledge about the economic growth paradigm [1] and ecological consciousness [2,3] in societies (among consumers, decision-makers, etc.) makes it possible to realize that limited resources are not just about the limited capital that governments and institutions have but also—and perhaps mainly—about the scarcity of resources itself [4]. Therefore, the concept of the ecological perspective is becoming increasingly popular, and, for many, balanced development is the only way to address this problem. Ecological practices play a significant role in the sustainable growth of many nations.

Sustainable production [5,6] can be described as an environmental protection strategy based on continuous, coordinated, preventive action with regard to processes aimed at increasing the efficiency of production and services as well as reducing the risk to people and the natural environment [7]. The sustainable production framework links the production process with the concept of reducing the use of resources and the environmental impact of the product. Therefore, it applies to all stages of the product life cycle—from cradle to grave and from design to disposal [8].

More and more businesses are developing sustainable manufacturing systems [9,10] and recruiting experts to their research and development (R & D) departments in search of new or more effective technologies. There are numerous issues waiting to be addressed, however.

A holistic approach is necessary for effective decision-making connected with sustainable development [11], which aims to improve the quality of life while maintaining social equality, biodiversity, and the variety of natural resources [12]. This means that considerations of economic issues should take into account their impact on social aspects, policies and the natural environment. The circular economy (CE) concept [13–17] serves this purpose.

The European Union's action plan for the circular economy [18] announced in 2015 by the European Commission is a response to the previously mentioned challenges. Legislative initiatives aim to reconcile environmental and business interests. The package is treated as a clear signal for all market players that full implementation of new ecological and raw materials policy is a priority for European decision-makers. The circular economy, according to the report, will boost the competitiveness of the EU by protecting businesses from resource scarcity and volatile prices, helping to create new business opportunities as well as innovative and more efficient ways to produce and consume.

According to the European Commission, certain sectors are facing particular challenges in the context of the circular economy due to the characteristics of their products or value chains, their environmental footprint and potential dependence on resources [19,20] from outside Europe. As a result, the action plan enumerates five priority areas that require a specific approach:


The above conclusions indicate that the construction industry is problematic: it is responsible for a growing amount of waste that does not return to the value chain, and it is also a very energy-intensive industry. In the United Kingdom, for example, the annual emissions associated with the embodied energy required to produce all the materials needed amount to over 10% of the UK's total emissions [21]. Strategies to reduce the energy demand of buildings and infrastructure are rapidly evolving; they go beyond the maintenance phase and also include the energy required to produce the materials. According to Barrett et al. [21], there are many ways to minimize energy demand by evolving activities in the construction sector; decreased use of materials through better design and manufacturing procedures, as well as increased re-use and recycling rates, are just the most evident examples. Cooper et al. [22] describe circular economy approaches and estimate their potential in construction as large.

A preliminary observation of project-based organizational structures, focused on the roles of personnel responsible for managing the environmental aspects of production, reveals presumed neglect in this area. Circular economy manager posts have low visibility in general (105 results for "*circular economy manager*" in the Google search engine and one record [23] in the ScienceDirect database; both retrieved 17 December 2019) and are practically absent in the construction sector (17 results for "*circular economy manager*" combined with "*construction project*" in Google and zero records in ScienceDirect; both retrieved 17 December 2019). These search results indicate, however, that the idea of establishing a circular economy manager (CEMR) is not completely unfounded, because the first symptoms of this phenomenon are appearing in the economic sphere. This presumption, confronted with the inevitable transition of the construction industry to the circular economy, opens a new, unexplored scientific gap: compiling a list of the most important competencies of a circular economy manager.

The study was motivated by the question of whether or not project-based organizations like many construction companies note the necessity of hiring specialists on separate posts responsible for managing issues connected with this new business model and the philosophy of construction project execution based on closed-loop principles.

In the literature, some signs of such an attitude are present; however, the majority of examples only gently touch this problem, recalling the post: sustainability manager [24–27], environmental manager [28–31], or site waste manager [32–35].

However, attention should also be paid to the existence of a symmetrical relationship, which has a dual character in this case. On the one hand, the CE idea functions better when a CEMR is employed in an organization. On the other hand, the more thoroughly the idea of CE is implemented in the organization, the greater the effectiveness of the CEMR.

Therefore, an important decision to be made by a construction company is the selection of the circular economy manager. This facilitates fitting the project organizational structure to the new approach and choosing the right person who meets the requirements of the post. The recruitment phase in construction companies is difficult and time-consuming and does not guarantee final success. Having a selection procedure pattern for the circular economy manager in construction projects may allow companies to avoid unnecessary costs and wasted time. The most important challenge is to identify the selection criteria, which, in the case of such a niche, may be very difficult due to the lack of experience and case studies. Moreover, CE needs more complex problem solving skills, resource management capabilities, process skills and technical skills than the rest of the economy [36].

A review of the literature in the field of construction project management reveals a great interest in material resource issues, categorized in symmetry with current sustainability problems. The author's proposal is a reply to these needs: it consists of employing a CEMR and placing the post in the project structure, similarly to the risk manager (RMR) or the quality manager (QMR).

The CEMR in the construction project should have an independent position and be subordinated only to the top management of the project (PMR).

The aim of the article is to launch a discussion on this topic, because the idea of employing a CEMR is innovative and has not been discussed widely in the literature yet. Indication of the problem along with its structuring in the article is based on the author's concept of the CEMR selection criteria model prepared on the basis of opinions of stakeholders of construction projects and gathered in the course of the study.

In general, there is a lack of knowledge about how to conduct a recruitment process to select the best candidates on a circular economy manager post in a construction project. A crucial question is what factors are influencing a successful decision. This paper aims to prioritize them in a clear, replicable way. It allows stakeholders of the construction projects to make more reliable and therefore less expensive decisions. The following research questions should, therefore, be formulated:


To resolve the questions mentioned above, a survey-based study was performed and followed by a Monte Carlo simulation. Apart from the Introduction, this paper is organized into five main sections. The second section describes theoretical considerations on the topic of the research; the third one presents the methods that can be used in the research. The criteria and selection model of the CEMR are presented in the fourth section, which is followed by the research results in the fifth. The last part of the article discusses the findings and provides ideas for future research.

### **2. Theoretical Framework**

This part describes a theoretical context of the research on the circular economy manager selection problem. The literature review provides a better explanation of the circular economy idea and its connections with construction management processes.

### *2.1. Circular Economy (CE) in the Construction Sector*

Circular economy (CE) appears as an interesting concept for reducing the negative externalities of human economic activity by finding new concepts for the flow of matter in manufacturing processes, assuming its closed-loop circulation. As a result, CE-based systems make it possible to limit waste generation. They are created to save resources when the product life cycle comes to an end, allowing products to be reused and thus to create further value. The CE concept attracts decision-makers from various industries. Its application is particularly important in the construction sector, which is characterized by close relations (direct interference) with the natural environment. The industry is one of the main waste generators and also one of the least environmentally friendly branches of the economy [37]. Limited resources force us to seek new waste-recovery technologies that can convert the outputs of one production system into the inputs of another (new) production system. The construction sector contributes significantly to CO2 emissions, many of which are generated in the manufacturing of construction materials; in the Netherlands, for example, concrete is responsible for 5% of global CO2 emissions [38]. CE can be considered at various levels, both in the objective perspective (in relation to buildings, structures and their parts) and in the subjective perspective (in relation to construction companies, workers, etc.).

During the construction process, huge amounts of resources are used, which results in the generation of significant amounts of waste, energy consumption and the production of harmful emissions into the environment. Also, it has a significant impact on many other sectors. There are many difficulties associated with finding the best practices in strategic and operational decision-making [15].

It is necessary to notice the life cycle of construction projects. It begins at the "cradle"—a moment of creation of an idea—and finishes at the grave after the demolition of a structure. In order to protect the environment and natural resources, the construction materials used for the construction project must be fully or partially recycled and reused (they must be "reborn").

Working in the construction industry requires both theoretical knowledge as well as practical experience and good abilities of the identification of global trends as well as local conditions. Therefore, a great deal is required of managers working in this sector.

### *2.2. Implementation of CE in Construction Project Management (CPM)*

Several changes must precede the introduction of CE in the construction industry. Firstly, the industry needs to be prepared technically for more efficient business models, including knowledge of more sustainable production systems, new organizational structures, etc. In addition, to benefit from the transition, the industry must be equipped with new technologies, services, management models, and digital platforms, and consumers must be informed about the financial and environmental benefits of CE. It is not just the construction phase itself that needs to evolve; this also includes a significant change in the ways of maintaining a house, a road or other types of infrastructure. Secondly, the industry must generate and promote reliable measures, ensuring that the condition of a company is a factor in its propensity to implement CE. Such a measurement was proposed in the literature by Nuñez-Cacho et al. [15,39]. This article, in turn, is intended to fill another research gap: the creation of fundamentals for CE-based construction project management, relying on stakeholders' awareness of the role of the circular economy manager and the competences valued in the recruitment process.

Implementation of circular economy in construction projects may result in a higher value of their products, including increased profitability or reduction of costs in the project life cycle [40] by reduced consumption of resources as well as external factors such as higher built environment quality, lower emissions, etc.

The main goal of construction companies implementing strategies based on CE should be to achieve high project management maturity, taking into account the closed-loop production rules. This approach can be called circular economy maturity (CEM) [41]. Such an attitude in CPM is reflected in the competences of project team members, including the circular economy manager (CEMR). It is also necessary to create an environmental risk management plan in the project and risk capital to cover potential losses and unexpected increases in costs related to environmentally friendly attitudes [42], including the implementation of CE.

The project management maturity can be observed in a company when its successes are treated as concurrent with successes of implemented projects. The project manager is responsible for achieving the project goals, whereas the circular economy manager should support him by enhancing competencies specifically related to closed-loop production.

The ability of the effective CPM is often determined by a reliable risk management plan in the project and risk capital to cover possible losses related to the project implementation. The project management maturity understood in this way manifests itself mainly in the selection of competent project team members. Therefore, it seems obvious that a reliable recruitment process resulting in right personnel selection helps in achieving the project management maturity by construction companies.

The next part of the article focuses on methods useful in the selection process of circular economy managers in construction projects.

### **3. Methods**

This part describes the circular economy manager recruitment problem from the point of view of methods that can be used to create a reliable selection framework.

### *3.1. Overview of Methods*

Decision theory has been a field of research for many years. It deals with situations in which decision-makers make a choice among given alternatives. Many studies have been published about this comprehensive issue. In the case of this research, the choice is among a group of candidates who are being considered for the CEMR post in construction projects. A brief overview of recent decision-making methods is useful, considering CEMR selection is ultimately about the decision of who will be responsible for circular economy issues in a construction project and if such a strategy will eventually be successful (and for whom) or not.

Choosing the right method for circular economy manager selection determines the desired stakeholder's engagement, causes a general acceptance of such an activity, and leads to the final success of the CE implementation process. Therefore, in this article, it is useful to present a variety of possible methods (Table 1) retrieved from the literature.


**Table 1.** Multi-criteria decision-making methods in project manager selection (PMS).

AHP, Analytical Hierarchy Process; ANP, Analytical Network Process; DEMATEL, DEcision MAking Trial and Evaluation Laboratory; MCDM, Multiple-Criteria Decision-Making; SAW, Simple Additive Weighting; TOPSIS, Technique for Order of Preference by Similarity to Ideal Solution; VIKOR, Vise Kriterijumska Optimizacija I Kompromisno Resenje; COPRAS-G, Grey COmplex PRoportional ASsessment; ARAS, Additive Ratio ASsessment.

Apart from different algorithms useful in the selection and ranking of alternatives, there are a variety of criteria weighting methods for personnel selection problems. Among the most significant tools, one can consider analytical hierarchy process (AHP) [45,70,77], analytical network process (ANP) [70], stepwise weight assessment ratio analysis (SWARA) [46,78–81], factor relationship (FARE) [77], linguistic weighted average (LWA) [49,67], average of normalized columns (ANC) [59], weighted aggregated sum product assessment (WASPAS) [82], KEmeny median indicator ranks accordance (KEMIRA) [83], or fuzzy weighted averaging (FWA) [72]. Also, the superiority and inferiority ranking (SIR) method [84] or the best–worst method (BWM) [85] have potentially wide application in weighting criteria for MCDM.

As one can observe, there are many interesting and comprehensive MCDM methods used for personnel selection, including hybrid methods. However, for the problem highlighted in the article for the CEMR selection process, it is proposed to use a simple but efficient way of reliability-based recruitment that is useful in construction projects and is described below.

### *3.2. Description of the Selected Method*

The research was conducted from October 2018 to January 2019. A graphical summary of the selected method is presented in Figure 1. At the early stage of the research covering a conceptual phase, the literature study [database: ScienceDirect; keywords (multiple combination): circular economy, manager, construction, construction project, construction management, multi-criteria decision-making methods, uncertainty, risk; years: 2010–2019] was performed to provide a basis for a definition of the problem, an initial expert brainstorming and a specification of the research methods. The expert sessions included professionals (35–68 years old) directly related to the companies operating in the construction sector, i.e., members of the Polish Chamber of Civil Engineers. When selecting these experts, their professional skills in the field of construction management were taken into account (at least 10 years of experience). All of them were directly involved in a minimum of three construction projects.

**Figure 1.** Graphical summary of the selected method.

Next, a second stage of the literature review [database: ScienceDirect; keywords (multiple combinations): green jobs, circular economy, manager, competence, leadership, selection criteria, skills, competencies, hierarchy, construction, construction project, construction management; years: 2010–2019] was done. This time, the most important purpose was to identify selection criteria for managers in construction projects especially suitable for the circular economy area. A further step of the research covered the quantitative process, which consisted of:


One purpose of the research was to present opinions about the selection criteria for a circular economy manager in construction projects, a proposal by the survey's author for better management, by indicating and assessing a hierarchy of the competencies required from candidates for this position (deterministic approach). The second goal was to verify the reliability level of the decision-making process of candidate selection based on the given criteria and to check the differences between deterministic and probabilistic attitudes towards the ranking of skills.

The survey was carried out in January 2019. The questionnaire, divided into 18 questions on eight pages, was launched on the Internet platform, and its link address was sent to civil engineers, scientists and experts of building research and related fields, including project management, real estate and construction, united at the Cooperative Network for Building Researchers (CNBR) Yahoo Group.

Respondents were asked to respond to two initial questions about their experience in the area of research, ten further questions concerning importance and selection reliability of competencies expected from circular economy managers in construction projects divided into six groups (basic skills, complex problem solving skills, resource management skills, social skills, systems skills and technical skills), and they had an opportunity to leave feedback about a matter of the study.

Later, the responses received from the survey were incorporated into a recruitment model. The model is based on all 68 competencies—factors responsible for successful decision-making aimed at selecting the best candidates for CEMR posts in construction projects. As each such decision is prone to risk, a multi-criteria decision-making procedure based on simulations, enabling risk factors to be taken into account, was considered the most suitable in this phase of the research.

Another purpose of the survey was to determine the hierarchy of the requirements expected from potential CEMRs as well as to estimate the parameters of the probability distributions that reflect the accuracy of meeting expectations with respect to the candidates' reliability (R, probability of success) of selection. In the literature, one can find many methods connected with different criteria weighting techniques [52,62,69]. The level of possible deviations from predictions, that is, the risk level (Ř, probability of failure), was determined simultaneously because, according to reliability theory, the following equation holds:

$$
\check{R} = 1 - R \tag{1}
$$

The term *risk* refers to variability in the outcome of decisions. Thus, if personnel selection based on given criteria is assumed to be a decision made by someone responsible for a recruitment process, then the risk may be understood as the difference between his/her expectations of a candidate and the candidate's real performance, together with the associated probability of occurrence. Most risk quantifications involve subjective judgement; different experts in the area might assess the same issue quite differently. On the basis of the survey, the probabilities of occurrence of the particular factors needed for the risk analysis in the @Risk application were collected. The analysis is a quantitative method that determines the probability distributions of the outcomes resulting from decisions. This technique can be described in four steps:


There are many simulation methods, but Monte Carlo (MC) simulations are of special interest here. They comprise a powerful method for estimating model results under risk factors. For this purpose, values are drawn randomly from an input probability distribution, repeatedly, until the total number of iterations is reached; this process is called sampling. In this research, the Latin hypercube sampling method was used. It is designed to avoid clustering, so all values in the input distribution have a better chance of being sampled.
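The stratified draw behind Latin hypercube sampling can be sketched in a few lines of Python (a minimal one-dimensional illustration, not the @Risk implementation): the unit interval is split into *n* equal strata, one uniform value is drawn from each stratum, and the order is then shuffled.

```python
import random

def latin_hypercube_1d(n, rng=random):
    """Draw n samples from U(0,1), exactly one from each of the n
    equal-width strata [i/n, (i+1)/n), then shuffle their order."""
    samples = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(samples)
    return samples

# Unlike plain random sampling, every stratum contributes exactly one
# value, so the input distribution is covered evenly (no clustering).
pts = latin_hypercube_1d(10)
```

Because each stratum is hit exactly once, sorting the samples places the *k*-th smallest value inside the *k*-th stratum, which is what prevents the clustering mentioned above.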

The Monte Carlo simulation of random variables is currently used in all branches of empirical science. Its application in data analysis generally relies on the fact that operations on real sets of measurement data (prone to error) are instead carried out on computer-generated random sets that imitate them.

As a result of the simulation proposed in this paper, the probability distributions of the individual criteria for assessing candidates are obtained. Although the Monte Carlo simulation method has been used before to analyze employee selection factors, its current applications in this area do not refer to the circular economy manager in construction projects and, moreover, do not go deeper into the structure of requirements for candidates, which this work identifies as necessary. Therefore, the application of Monte Carlo simulation to the risk analysis of CEMR selection, supporting decision-making, as proposed in this article is without doubt an innovative solution.

### **4. Criteria and Selection Model of CEMR**

The role of the CEMR is to mobilize team members and maintain good relations with all project stakeholders, especially in matters related to closed-loop production processes. The CEMR should have the competencies that basically characterize managers, i.e., communication (building relationships, leadership), strength of authority and negotiation (seeking compromises, treating conflicts and crises as opportunities rather than threats), as well as commitment and motivation (faith in the project). However, the CEMR is a special type of manager for whom an ordinary level of competence is not enough. It is therefore necessary to establish a unique competency framework that can be useful for construction companies and other entities involved in construction projects executed according to CE principles when recruiting for senior management posts responsible for CE issues.

Based on the literature, a list of criteria for CEMR in construction projects was specified. All criteria were divided into basic, universal criteria (UC) and those related to construction industry-specific criteria (SC). All of them are present in six groups adapted after Burger et al. [36], which are shown in Table 2:



**Table 2.** List of CEMR criteria for construction projects according to the literature.


### **Table 2.** *Cont.*

UC, universal criteria; SC, specific criteria.

All above 68 criteria (XBS,1, ... , XTS,14) were analyzed through an expert assessment conducted online. Additionally, a model of personnel selection with a reference level of assessment of a candidate for CEMR in construction projects was created.

The @Risk application was used to perform Monte Carlo simulations of the proposed model. In each group *i* = 1, ..., 6, a deterministic mean was calculated. Then, the simulation inputs were set, that is, the definitions of the distributions and the number of iterations.

For all *j* cases in each *i* group, a Bernoulli distribution was defined by the *RiskBernoulli(p)* parameter, which models an event that either occurs with probability *p* (value 1) or does not occur with probability 1 − *p* (value 0). Next, the simulation outputs were added. These were cut-off scores (COS*i*) calculated for each group in the following way:

$$\text{COS}\_{i} = \frac{\sum\_{j=1}^{n} \left( \text{Weighted average}\_{j} \times \text{RiskBernoulli} \left( \text{Reliability}\_{j} \right) \right)}{n} \tag{2}$$
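Equation (2) can be sketched directly in Python (a hypothetical stand-in for the @Risk model; the importances and reliabilities below are illustrative values, not survey results):

```python
import random

def risk_bernoulli(p, rng=random):
    # Returns 1 with probability p ("skill observed"), 0 otherwise --
    # a stand-in for @Risk's RiskBernoulli(p) input distribution.
    return 1 if rng.random() < p else 0

def cut_off_score(weighted_avgs, reliabilities, rng=random):
    """One Monte Carlo draw of COS_i per Equation (2): each criterion's
    weighted-average importance is kept or dropped by a Bernoulli trial
    with that criterion's reliability, and the results are averaged."""
    n = len(weighted_avgs)
    total = sum(w * risk_bernoulli(r, rng)
                for w, r in zip(weighted_avgs, reliabilities))
    return total / n

# Illustrative values for one skill group (hypothetical):
score = cut_off_score([4.2, 3.9, 4.5], [0.77, 0.81, 0.69])
```

With all reliabilities equal to 1, the score reduces to the deterministic group mean, which is the link between the deterministic and probabilistic rankings compared later.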

Some crucial results of the conducted survey and the simulated model are presented in the further part of the manuscript.

### **5. Results**

The online survey included two basic parts. The first one contained two questions about the respondent's role played in professional life and her/his experience.

The largest number of respondents (31.71%) were scientists involved in research on engineering economics, construction companies and related matters, followed by scientists working on sustainability, circular economy and related areas (21.95%), experts on the circular economy and on sustainable development and the construction industry (17.07% each), and others (five people related to the construction industry or CE but not considered experts), who formed a minority of the sample (12.20%), as shown in Figure 2.

A structure of the respondents regarding their experience and seniority in the profession is presented in Figure 3.

**Figure 2.** Leading roles of the sample.

Analyzing the professional practice of the respondents (Figure 3), the largest group had over 15 years of experience (36.59%), followed by those with 5–10 years (31.71%), while the smallest group had less than five years of professional experience.

**Figure 3.** Declared experience of the sample.

In the second part of the survey, 68 factors important in the decision-making process of recruitment for the circular economy manager post in construction projects were proposed. They were divided into six groups: basic skills, complex problem solving skills, resource management skills, social skills, systems skills and technical skills.

The respondents assessed the importance of each factor, expressing their opinion numerically on a five-point scale based on Likert's approach to scaling responses in survey research: 1—strongly not important; 2—almost not important; 3—medium importance; 4—important; 5—very important. The collected dataset then underwent a prioritization process based on calculated weighted averages.
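This prioritization step can be sketched as follows. The factor names echo the text, but the response tallies are hypothetical illustrations, not the survey data:

```python
# Illustrative sketch (not the authors' dataset): prioritizing survey
# factors by the weighted average of five-point Likert responses.
from collections import Counter

def weighted_average(responses):
    """Mean of Likert scores 1-5, i.e. sum(score * count) / total."""
    counts = Counter(responses)
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

# Hypothetical response tallies for three of the 68 factors
survey = {
    "Vision and imagination": [5, 5, 4, 5, 4, 4, 5],
    "Equipment maintenance":  [3, 4, 3, 2, 4, 3, 3],
    "Repairing":              [3, 3, 2, 4, 3, 3, 2],
}

# Rank factors from most to least important
ranking = sorted(survey, key=lambda f: weighted_average(survey[f]), reverse=True)
for factor in ranking:
    print(f"{factor}: {weighted_average(survey[factor]):.2f}")
```

Sorting all 68 weighted averages in descending order yields the importance hierarchy reported in Table 3.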

Table 3 presents the most important findings of the conducted research—the importance hierarchy of the competencies demanded from CEMR and their average accuracy of meeting expectations with respect to potential candidates.


**Table 3.** List of prioritised criteria for CEMR selection in construction projects according to the survey.



The proposed 68 factors (X*i*,*j*) may be useful for a recruitment process. In the next step, the criteria, divided into six groups, were taken into further consideration.

Monte Carlo simulations of the proposed model were performed in the @Risk application. For each *i*-group, a deterministic mean was calculated, and for all *j*-cases in each *i*-group, a Bernoulli distribution was defined by the *RiskBernoulli(p)* parameter in order to obtain probabilistic values. For example, the "Operation monitoring" skill was coded with the formula *RiskBernoulli(0.7737)*, which returned a Bernoulli distribution with parameter 0.7737. This had a 77.37% chance of returning 1 (understood as a success, "skill observed") and a 22.63% chance of returning 0 (understood as a lack of success, "skill not observed"). The number of iterations was set to 1000, which yielded sufficiently smooth graphical results. Additionally, the cut-off scores (COS*i*) were calculated for each group on the basis of Equation (2).
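The simulation logic of Equation (2) can be sketched outside @Risk as well. In this illustration, `random.random()` stands in for *RiskBernoulli(p)*, and the weight/reliability pairs, apart from the 0.7737 reliability quoted above, are hypothetical:

```python
# Sketch of the cut-off score simulation (Equation (2)); the weights and
# most reliabilities below are hypothetical stand-ins, and a uniform draw
# against p replaces @Risk's RiskBernoulli(p).
import random
from statistics import mean

def cos_draw(skills, rng):
    """One iteration: average of weight * Bernoulli(reliability) over a group."""
    return mean(w * (1 if rng.random() < p else 0) for w, p in skills)

# (weighted average, reliability) pairs for one skill group; the first
# reliability, 0.7737, is the "Operation monitoring" value from the text
group = [(4.10, 0.7737), (3.90, 0.6829), (4.45, 0.8049)]

rng = random.Random(42)       # fixed seed for reproducibility
iterations = 1000             # as in the study
scores = [cos_draw(group, rng) for _ in range(iterations)]
print(f"simulated cut-off score: {mean(scores):.3f}")
```

The mean of the 1000 iterations approximates the probabilistic cut-off score, while the spread of `scores` reflects the uncertainty that the deterministic mean hides.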

The aggregated results of the simulation are presented in Table 4. More details are given in Appendix A.


**Table 4.** Monte Carlo simulation results.

The simulation approach to assessing the hierarchy of CEMR selection criteria produces significant changes in the order of the cut-off score groups used in recruiting the right candidate for this position. From a deterministic perspective, the order of these groups is as follows:


According to the simulation results, after taking into account the reliability of the candidate's compliance with the relevant requirements, the rearrangement in the order of the score groups can be observed:


The simulation data can be treated as a reference level for assessing a candidate for CEMR in construction projects. In the next section, particular findings are discussed.

### **6. Discussion and Conclusions**

For a simulation-based positioning of the circular economy manager's skills in construction projects, a specific approach is necessary. It requires detailed recognition of the most important skills needed for a given post. Moreover, CEMR selection is a multi-criteria problem similar to the project manager selection process [62]. On the basis of this similarity, a model of selection criteria for the circular economy manager had to be built.

The highest requirements were observed for "Vision and imagination" (4.48), "Management of material resources" (4.45), and "Resource management" (4.45), whereas some technical skills, such as "Equipment maintenance" (3.47) or the ability of "Repairing" (3.37), might be neglected. In fact, these abilities suit lower-level managers or workers rather than CEMRs. After aggregation, the most important factors were grouped in "Resource management skills" (median 3.667), and the least important were "Technical skills" (2.703), which can be developed in contemporary employees through simple workshops or short training.

Circular economy manager selection appears as an important decision-making process in construction companies interested in CE implementation [37]. The high level of complexity of issues related to CE makes the employment of a CEMR in construction projects seem inevitable. Such a transition requires adapting the project organizational structure to this new approach and selecting the right person to meet the detailed requirements of the post. A recruitment phase is usually time-consuming and therefore quite expensive for construction companies; what is more, it does not guarantee final success. The cut-offs resulting from the study are the points separating successful and unsuccessful performers according to a standard established by the employer. With a coherent model treated as a selection procedure pattern for the circular economy manager, construction companies may avoid unnecessary costs and wasted time. Such a model was presented in this article.

However, as always, different scenarios may occur. One situation is observed when, as a result of an assessment based on the proposed model, *true positives* are obtained. These are candidates who pass the selection test (beat the cut-off scores in each group of factors), succeed on the job, and perform satisfactorily afterwards.

The opposite situation is observed when candidates are correctly rejected as a result of the measurement, and there is a presumption—almost a certainty—that they would not be successful employees anyway, thus they are called *true negatives*. The third scenario is observed when people are rejected but would have performed well on the job if they had been hired. They are called *false negatives*. Finally, *false positives* describe individuals who are selected for the post but do not turn out to be a good choice. The research reveals that decision-makers can make more reliable selections in the process of the CEMR recruitment provided that they take the approach proposed in the article.
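The four selection outcomes described above can be expressed as a small labeling function. The cut-off value and candidate records below are hypothetical illustrations:

```python
# Tiny illustration of the four selection outcomes described above.
# The cut-off value and candidate scores are hypothetical.
def outcome(score, cut_off, performed_well):
    """Classify one candidate given their test score and later job performance."""
    selected = score >= cut_off
    if selected and performed_well:
        return "true positive"    # passed the test and succeeded on the job
    if not selected and not performed_well:
        return "true negative"    # correctly rejected
    if not selected and performed_well:
        return "false negative"   # rejected, but would have performed well
    return "false positive"       # selected, but not a good choice

print(outcome(3.8, 3.5, True))
print(outcome(3.1, 3.5, False))
```

A reliable selection procedure is one that maximizes the first two outcomes and minimizes the last two.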

Due to the lack of experience and case studies, it was quite difficult to identify selection criteria for the CEMR. Fortunately, the literature review provided several ideas that were then confronted with a series of expert assessments. The results of the research revealed that complex problem solving skills, resource management skills, and systems skills are crucial in the CEMR selection process.

The main contribution of this article is the simulation-based positioning of skills of the circular economy manager. It was based on quantified expert opinions. The first stage provided a list of criteria that result from a deterministic approach. However, the results of the Monte Carlo simulation were quite unexpected. Despite small numerical discrepancies, the hierarchy changed significantly. Only the first ("Resource management skills") and the last ("Technical skills") groups of requirements stayed in the same position. The other aggregate lists swapped the order in neighboring pairs. This was, of course, due to the fact that the weights of individual skills are probabilistic, and the changing values of the simulated variables resulted in a different average level. Regardless, the experts' indications and their sensitivity when assessing individual skills are important as well. From this point of view, the selection of experts in the scientific process (see: conceptual phase, Figure 1) whose knowledge was used to construct the model was extremely important.

All in all, this evident gap in knowledge of how to conduct a recruitment process to select the best candidates on a circular economy manager post in construction projects was filled.

The symmetry, whose dual character was emphasized, should inspire decision-makers to take appropriate actions. On the one hand, it should be remembered that the CE idea brings greater effects when the CEMR works in an organization. On the other hand, the effectiveness of the CEMR is greater if the idea of CE is thoroughly implemented and well known in the organization. The above dependence determines how to search for solutions that guarantee the success of both the construction project and the company that executes it.

Besides, it is reasonable to emphasize the meaning of the simulation model, which is especially advisable for implementing new ideas. In this case, creating a new position in the structure of construction projects can be treated as an innovative solution. The collected expert opinions brought a number of ideas related to the specification of CEMR's skills that need to be prioritized.

### **7. Limitations and Future Research Lines**

As explained previously, the topic of this manuscript is worthwhile for scientific analysis even though knowledge development in this area is currently at an early stage. The aim of the article was to open a discussion on the subject, because the idea of employing a CEMR is very innovative.

However, there are still several problems that should be solved in future research:


The survey should be continued to obtain more expert opinions and increase the accuracy of its results. Some case studies may be helpful in confronting the theoretical approach of the model; however, there are as yet no common examples of hiring a CEMR in construction projects, so this must await future research.

This study used a Monte Carlo simulation approach. The main contribution of this paper is the identification of the prioritized criteria for selecting CEMR candidates to construction projects. The proposed model gives six expert-based cut-off scores described by probability distributions. The outcomes of the research can help in making more reliable decisions connected with the CEMR selection process in construction projects.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

### **Appendix A**

The detailed reports of the simulations are presented in Figures A1–A6. Each summary consists of two elements: graphical results (histograms, distribution charts, tornado charts) on the left side as well as numerical results (mainly statistics connected with the simulation) on the right side.




**Figure A1.** Screenshot of the simulation results for the "basic skills" cut-off score group.

Figure A1 relates to the "Basic skills" cut-off score group, whereas Figures A2–A6 present the results for "complex problem solving skills", "resource management skills", "social skills", "systems skills", and "technical skills", respectively.




**Figure A2.** Screenshot of the simulation results for the "complex problem solving skills" cut-off score group.

The probabilistic approach changes the hierarchy of selection criteria for the circular economy manager in construction projects in comparison with the deterministic approach. From this point of view, a closer look at the details of the simulation helps in understanding the essence of this study.




**Figure A3.** Screenshot of the simulation results for the "resource management skills" cut-off score group.




**Figure A4.** Screenshot of the simulation results for the "social skills" cut-off score group.




**Figure A5.** Screenshot of the simulation results for the "systems skills" cut-off score group.



**Figure A6.** Screenshot of the simulation results for the "technical skills" cut-off score group.

### **References**


© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A New Model for Stock Management in Order to Rationalize Costs: ABC-FUCOM-Interval Rough CoCoSo Model**

### **Živko Erceg 1, Vitomir Starčević 2, Dragan Pamučar 3,\*, Goran Mitrović 4, Željko Stević <sup>1</sup> and Srđan Žikić <sup>5</sup>**


Received: 3 December 2019; Accepted: 13 December 2019; Published: 17 December 2019

**Abstract:** Cost rationalization has become imperative in every economic system in order to create adequate foundations for its efficient and sustainable management. Competitiveness in the global market is extremely high, and it is challenging to manage business and logistics systems, especially with regard to financial parameters. It is necessary to rationalize costs in all activities and processes. The presence of inventories is inevitable in every logistics system, and the aim is to create adequate and symmetrical policies for their efficient and sustainable management. In order to do this, it is necessary to determine which products represent the largest percentage share of the procurement value, and which are the most represented quantitatively. For this purpose, ABC analysis, which classifies products into three categories, is applied, taking into account different constraints. The aim of this paper is to form a new model that involves the integration of ABC analysis, the Full Consistency Method (FUCOM), and a novel Interval Rough Combined Compromise Solution (CoCoSo) for stock management in a storage system. A new IRN Dombi weighted geometric averaging (IRNDWGA) operator is developed to aggregate the initial decision matrix. After grouping the products into the three categories A, B, and C, it is necessary to identify appropriate suppliers for each category in order to rationalize procurement costs. Financial, logistical, and quality parameters are taken into account. The FUCOM method has been used to determine the significance of these parameters. A new Interval CoCoSo approach is developed to determine the optimal suppliers for each product group. The results obtained have been verified through a multi-phase sensitivity analysis.

**Keywords:** management; costs; FUCOM; ABC analysis; Interval Rough CoCoSo; finances; sustainability

### **1. Introduction**

Managing all activities and processes, whether in engineering or any other area, requires proactive action and a focus on achieving sustainability. This refers primarily to the economic aspect of sustainability, taking into account the specific characteristics of the area in which the research is conducted. Competition in the market is intense, and every company has to strive to reduce costs in its internal processes and activities, since this is practically the only way to increase its competitiveness. This is also shown by Stojčić et al. [1], who state that in order to achieve a competitive market position, it is necessary to rationalize logistics activities and processes. The warehouse appears as one of the subsystems in which rationalization is possible and which, as a special logistics subsystem alongside transportation, represents the biggest source of logistics costs; thus, there is a constant search for potential savings in these subsystems. One of the items that is certainly a problem for a large number of companies, whether manufacturing or distributing finished products, is inventory. The goal is to find the optimal amount of inventory in order to control the warehouse in the best way possible and to rationalize the costs this logistics subsystem causes. This is mandatory if we want to achieve balance (symmetry) between production and consumption. For all the reasons given above, this paper examines the storage system from the aspect of stock management. Attention has to be directed towards performing all activities in the storage system in an efficient manner, which, in essence, is warehouse management as defined in [2]. The significance of this system and its management can be seen from the statement in the above reference that warehouse systems and material handling are basic elements in the flows of goods and link manufacturing and consumption points.
It is for these reasons that it is important to establish an efficient synchronization of all activities and processes in the storage system, which is primarily achieved through adequate stock management.

This paper has several goals. The first relates to the development of a new approach involving the Interval Rough Combined Compromise Solution (CoCoSo) model, which is a contribution to the literature addressing multi-criteria models in which uncertainty and imprecision exist. The integration of interval rough numbers and the CoCoSo method enables decision-makers to achieve more precise results based on their preferences. The second goal of the paper is to rationalize costs in the storage system through adequate stock management. It is primarily a matter of identifying different groups of inventory and adopting adequate procurement policies. For this reason, in the subsequent phase, the selection of adequate suppliers for each group is performed. The third goal of the paper is to integrate the two aforementioned goals, which involves the creation of a new stock management model combining different approaches: ABC analysis, the Full Consistency Method (FUCOM), and the Interval Rough CoCoSo model, highlighting its benefits. It is important to note that no such model for stock management, integrating all the previous approaches, has been found in the literature, and from that aspect the significance of this study can be perceived. With the formation of this model, the goal is to address efficiently one of the typical warehouse planning issues, which is, according to Van den Berg and Zijm [3], warehouse management. In addition, it is necessary to achieve intelligent stock management that can reduce storage costs.

The rest of the paper is structured as follows. The second section presents a review of the literature on the application of ABC analysis in inventory and stock management and on the application of multi-criteria decision-making (MCDM) methods to storage systems. The third section presents a three-part methodology. The first part describes ABC analysis with certain constraints that have to be taken into account when executing it, while the second part is a brief overview of the FUCOM method. The third part presents the development of the new Interval Rough CoCoSo model, with its algorithm given in detail. The fourth section is a case study presenting the problem in detail and the method used to solve it. The fifth section includes an extensive sensitivity analysis using different models, as well as a discussion of the results obtained. Finally, concluding considerations are given together with suggestions for further research in several directions. The algorithm of the developed IRN Dombi weighted geometric averaging (IRNDWGA) operator is provided in Appendix A.

### **2. Literature Review**

ABC analysis is a frequently used technique to identify the state of stock management in a warehouse. Due to its simplicity on the one hand, and its great usefulness on the other, it has been noticed that it is widely used in various fields. The research conducted by Flores and Whybark [4] can be considered the first study that indicates the importance of applying multi-criteria optimization in the traditional ABC analysis. Flores and Whybark [5] supported the view through the advancement of their previous study by applying multi-criteria optimization in ABC analysis. Flores et al. [6] showed that the Analytic Hierarchy Process (AHP) method is one of the most appropriate multi-criteria techniques for stock classification in a warehouse. In their studies, Guvenir and Erel [7] and Malmborg et al. [8] went a step further and applied ABC analysis in combination with multi-criteria techniques and heuristic models (genetic algorithms). After these studies, Ramanathan [9] came up with the idea of optimizing storage systems using linear modeling (DEA model), combined with ABC analysis. That idea led to the comprehensive application of DEA and ABC analysis in evaluating the efficiency of storage systems.

ABC analysis was applied in [10] to control stock in a pharmaceutical company and create policies for managing it. Cycle counting calculations were performed and an inventory control application was based on the ABC-VED (vital, essential, desirable) model. Ishizaka et al. [11] applied the methodology to classify inventory into three groups based on three criteria: Annual Usage Value (AUV), Frequency Of Issue per year (FOI), and Current Stock Value (CSV). They formed the DEASort model that they applied with the Analytic Hierarchy Process (AHP) method. Compared to the single-criterion function, the study has concluded that the proposed model generates more savings per inventory classification groups. The study [12] also introduced a new hybrid model combining the ABC multi-criteria classification using the evolutionary algorithm with the MCDM method, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The objective function of the model, as the authors point out, is to minimize inventory management costs, as well as to exploit the robustness and usefulness of partial approaches in the model, reduce the inventory costs, ensure acceptable performance, and meet the constraints of inventory management. It is concluded that the approach allows for more efficient inventory management. ABC analysis can certainly serve to create adequate policies to rationalize costs, as confirmed by the research [13] that created a new inventory management policy concerning the pre-existing situation that refers to spare parts. In order to increase the efficiency of inventory management and to solve the problem of single-criterion function, a hybrid approach, which involves ABC, AHP, and TOPSIS, is proposed in the research [14] for inventory management in an electronics company. 
Based on ABC analysis, in the study [15], the extraction of two spare parts requiring special treatment was performed, and the conditions for applying the Economic Order Quantity (EOQ) were obtained. Using the results of ABC analysis, a modification of the warehouse layout was made. The constant tendency to integrate different approaches with ABC analysis in order to create an adequate management model is also evident in the study [16], which developed a new classification algorithm, called the FNS (functional, normal, and small) algorithm, combining classical ABC classification with a new grouping strategy. In the algorithm, the handling frequency, lead time, contract manufacturing process, and specialty are used as input criteria, and the outputs are new classes for the inventories. The model has led to the result that inventories can be classified in more detail and that useful management strategies can be created. For the purpose of inventory management at a Chinese manufacturer, the ELECTRE III method was combined with ABC analysis in the study [17]. It represents an innovative model for the optimal classification of inventory. Ng [18] points out the shortcoming of applying ABC analysis alone because, as already noted, it is based on only a single criterion. Therefore, in his paper, in which he also emphasizes the importance of other criteria, he has developed a simple model for multiple criteria inventory classification. The model converts all criteria measures of an inventory item into a scalar score, and classification based on the calculated scores using the ABC principle is applied. Various approaches have been developed to create adequate foundations for inventory optimization in storage systems, as has already been emphasized. It is important to note that the study [19] combined rough set theory with ABC analysis by considering additional criteria.

In addition to crisp multi-criteria decision-making (MCDM) techniques, the authors applied a fuzzy technique to include uncertainties when evaluating storage systems. Thus, fuzzy AHP was applied to evaluate the hydrogen storage system in the automotive industry [20]. To improve supply chain performance, companies need to select an adequate warehouse location that will meet multiple needs and requirements. The combination of fuzzy AHP and fuzzy TOPSIS was applied in [21] for the optimal selection of five potential warehouse locations. The importance and impact of warehousing on the complete efficiency of supply chains has been confirmed in a number of studies, such as [22–25]. Ashrafzadeh et al. [22] point out that the selection of a warehouse location is of strategic importance for many companies. In the study, the fuzzy TOPSIS method was used to evaluate potential warehouse locations. The study [25] is noticeable in the area of the supply chain management of hazardous substances. The study shows the influence of selecting an appropriate warehouse location on reducing the risk of negative effects. In the study, the authors use the fuzzy Multi-Objective Optimization Ratio Analysis (MULTIMOORA) technique for multi-criteria optimization of warehouse locations. The combination of MCDM methods in integration with fuzzy logic was also applied in [23] to determine a warehouse location. In the study, the authors used fuzzy TOPSIS, fuzzy Simple Additive Weighting (SAW), and fuzzy Multi-Objective Optimization Ratio Analysis (MOORA) methods, while Emeç and Akkaya [26] applied stochastic AHP and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) for the same purpose.

Fuzzy multi-criteria decision-making is also used for other tasks related to storage systems. Saputro and Daneshvar Rouyendegh [27] evaluated and selected material handling equipment in a warehouse using a hybrid multi-criteria model that involves the use of fuzzy AHP and fuzzy TOPSIS techniques. In their study, Erkan and Can [28] demonstrated the importance of using barcode technology, i.e., Radio Frequency Identification (RFID) technology to optimize logistics processes. For that purpose, they applied the fuzzy AHP method.

### **3. Methods**

This paper introduces a new methodology for stock management in the storage system with the aim of rationalizing costs and creating an adequate stock management model (Figure 1). The proposed methodology is implemented throughout three phases. In the first phase, on the basis of procurement costs, the classification of inventory in the storage system is performed using ABC analysis. After grouping the products into Groups A, B, and C, product ordering policies and the time interval for controlling the products are defined. After defining the groups, potential suppliers (different number of suppliers) for each of these groups are considered, i.e., those who are specialized in specific types of products. In this way, the second phase is approached defining the criteria for selecting suppliers and potential alternatives for each group individually. In this phase, the application of FUCOM defines the weight coefficients of the criteria within each of Groups A, B, and C. In the third phase, expert evaluation of alternatives and aggregation of expert decisions into a single decision matrix are conducted. The uncertainties and inaccuracies in expressing expert preferences are represented by the use of interval rough numbers (IRN). The evaluation of alternatives is conducted using the Interval Rough (IR) CoCoSo model. As a result of the IR CoCoSo model, a ranking of suppliers is obtained within each Group, A, B, and C. In the next step, validation of the obtained results and making the final decision are carried out within a sensitivity analysis of the results.

**Figure 1.** Proposed methodology for stock management.

In addition to the model being described in detail, it is important to highlight again the completely new elements of the model: The Interval Rough (IR) Combined Compromise Solution (CoCoSo) model and the IRN Dombi weighted geometric averaging (IRNDWGA) operator.

### *3.1. ABC Analysis*

One of the most frequently used inventory classification techniques is ABC analysis [9,14]. During the last decades, many companies have taken seriously the task of managing inventory efficiently because of surplus stock and the need to generate more profit for their financial and logistical well-being. For this purpose, ABC classification is one of the most frequently used analyses in the production and inventory management domains; it classifies a set of items into the three predefined classes A, B, and C, where each class follows specific management and control policies [12]. Efficient inventory classification is a vital activity for companies that handle large quantities of inventory.

Considering all of the above, it can be concluded that ABC analysis is, in one way, indispensable in the creation of inventory management models, but not sufficient. This is confirmed by the study [14], which states that ABC analysis is one of the most frequently used inventory classification techniques, but that this technique considers only a single criterion, namely the annual sales volume of each item. Therefore, different approaches are developed depending on the case study, in which different techniques are integrated, as is the case in this paper.

ABC analysis is a very simple method, and that is the reason why it is widely used in the field of material and commodity business. ABC analysis aims to maximize cost-effectiveness and productivity and to increase business success and economy. It is used in companies that have a wide range of products. The purpose of using this analysis is to establish a functional control and management system within the procurement and warehousing business, and thus the possibility of achieving greater cost-effectiveness for the company. ABC analysis focuses on the most important products, those of greatest benefit, i.e., those that bring in the most revenue. The process of conducting ABC analysis can be described in three phases. The first phase is collecting data on annual requirements or material consumption by type over a certain period, usually one year. In the second phase, the values of requirements/consumption are calculated by multiplying the quantities of individual materials by their planned or average purchase prices; the materials are then sorted in descending order by the values of annual requirements/consumption, the percentage share of the value of each material in the total value of annual requirements/consumption is calculated, and the percentage shares are cumulated. In the third phase, a comparison of the cumulative percentages of annual requirements/consumption and the percentages of the number of types is performed to determine Groups A, B, and C, and, for each product, the group it belongs to.

The cost share of the total procurement value should comply with the constraint represented by Equation (1).

$$A = 40\text{--}80\%, \quad B = 15\text{--}40\%, \quad C = 5\text{--}20\% \tag{1}$$

The share in the total number (quantity) of different types of products should comply with the constraint represented by Equation (2).

$$A = 5\text{--}25\%, \quad B = 20\text{--}40\%, \quad C = 40\text{--}75\% \tag{2}$$

The third constraint implies that group *C* contains the most product types, followed by *B*, with group *A* containing the fewest, as shown in Equation (3).

$$A < B < C \tag{3}$$
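The three-phase procedure and the constraints above can be sketched as follows. The items, values, and the 70%/90% cumulative break points are hypothetical illustrations chosen to satisfy Equations (1)–(3), not values from the case study:

```python
# Sketch of ABC classification: sort items by annual procurement value,
# cumulate percentage shares, and assign classes at (hypothetical)
# 70% / 90% cumulative-value break points consistent with Equations (1)-(3).
def abc_classify(items, breaks=(70.0, 90.0)):
    """items: {name: annual_value}; returns {name: 'A' | 'B' | 'C'}."""
    total = sum(items.values())
    ranked = sorted(items, key=items.get, reverse=True)  # descending by value
    classes, cumulative = {}, 0.0
    for name in ranked:
        cumulative += 100.0 * items[name] / total        # cumulated share (%)
        if cumulative <= breaks[0]:
            classes[name] = "A"
        elif cumulative <= breaks[1]:
            classes[name] = "B"
        else:
            classes[name] = "C"
    return classes

# Hypothetical annual procurement values for eight product types
items = {"P1": 5000, "P2": 2600, "P3": 1000, "P4": 700,
         "P5": 300, "P6": 200, "P7": 120, "P8": 80}
print(abc_classify(items))
```

With these numbers, group A holds 12.5% of the types and 50% of the value, B holds 25% and 36%, and C holds 62.5% and 14%, so all three constraints are satisfied.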

### *3.2. Full Consistency Method FUCOM*

One of the new methods based on the principles of pairwise comparison and validation of results through deviation from maximum consistency is the full consistency method (FUCOM) [29]. The benefits that are decisive for the application of FUCOM are the small number of pairwise comparisons of criteria (only *n* − 1 comparisons), the ability to validate the results by defining the deviation from maximum consistency (DMC) of the comparisons, and the appreciation of transitivity in pairwise comparisons of criteria. The FUCOM model does involve a subjective influence of the decision-maker on the final values of the criteria weights. This particularly refers to the first and second steps of FUCOM, in which decision-makers rank the criteria according to their personal preferences and perform pairwise comparisons of the ranked criteria. However, unlike other subjective models, FUCOM has shown minor deviations of the obtained criteria weights from the optimal values [29–33]. Additionally, the methodological procedure of FUCOM eliminates the problem of redundancy of pairwise comparisons of criteria, which exists in some subjective models for determining criteria weights.

Assume that there are *n* evaluation criteria in a multi-criteria model that are designated as *wj*, *j* = 1, 2,..., *n*, and that their weight coefficients need to be determined. Subjective models for determining weights based on pairwise comparison of criteria require a decision-maker to determine the degree of impact of the criterion *i* on the criterion *j*. In accordance with the defined settings, Figure 2 presents the FUCOM algorithm [34].

**Figure 2.** Steps of the Full Consistency Method (FUCOM) method.
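Since Figure 2 only summarizes the algorithm, a minimal sketch of the weight calculation is given below for the special case of full consistency (DMC = 0), in which the *n* − 1 comparative priorities φ(k/(k+1)) = w(k)/w(k+1) of the ranked criteria determine the weights directly; the function name and interface are illustrative.

```python
def fucom_weights(priorities):
    """Weights of n ranked criteria from n-1 comparative priorities
    phi(k/(k+1)) = w_k / w_(k+1), assuming full consistency (DMC = 0).
    priorities[k] compares the criterion at rank k+1 with rank k+2."""
    w = [1.0]                      # unnormalized weight of the top-ranked criterion
    for phi in priorities:
        w.append(w[-1] / phi)      # w_(k+1) = w_k / phi(k/(k+1))
    s = sum(w)
    return [x / s for x in w]      # normalize so the weights sum to 1
```

In the general case, FUCOM solves a small optimization problem that minimizes the deviation from maximum consistency; the closed form above is its exact solution when the comparisons are fully consistent.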

### *3.3. A New MCDM Model—Interval Rough CoCoSo Approach*

The process of group decision-making is accompanied by a great amount of uncertainty and subjectivity, so decision-makers often have dilemmas when assigning certain values to decision attributes [34]. Suppose that one decision attribute should be assigned a value presented by a qualitative scale whose values range from 1 to 7. One decision-maker (DM) may consider that the decision attribute should have a value between 5 and 6, another DM may consider that a value between 4 and 5 should be assigned, while the third DM has no dilemma about the value of the decision attribute and assigns a value of 5. The dilemmas presented are extremely common in a group decision-making process. In such situations, one of the solutions is to geometrically average two values between which individual decision-makers are in doubt. However, in such situations, the uncertainty (ambiguity) that prevailed in a decision-making process would be lost and further calculation would be reduced to crisp values. On the other hand, the use of fuzzy [35,36] or grey techniques would entail predicting the existence of uncertainty and subjectively defining the interval by which uncertainty is exploited. Subjectively defined intervals in further data processing can significantly influence the final decision, which should definitely be avoided if we aim at impartial decision-making. On the contrary, the approach based on interval rough numbers includes exploiting the uncertainty that exists in the data obtained [34].

In this paper, a new approach in the theory of rough sets, a CoCoSo approach based on Interval Rough Numbers (IRN), is proposed to process the uncertainty contained in data in group decision-making. The crisp CoCoSo method was developed in 2018 by Yazdani et al. [37].

The Interval Rough CoCoSo approach consists of seven steps, which are explained below.

*Step 1*. Forming the initial decision-making matrix (*X*).

*Symmetry* **2019**, *11*, 1527

In the first step, *l* alternatives are evaluated with respect to *n* criteria. The procedure for obtaining the basic decision-making matrix (*X*) is the same as in other MCDM approaches. In this approach, interval rough numbers are applied as input parameters. The advantages of applying Interval Rough Numbers (IRN) are presented in [34].

Based on Equations (1)–(13) from [34], we determine the vectors $A_i = \left(IR(x_{i1}), IR(x_{i2}), \dots, IR(x_{in})\right)$, where $IR(x_{ij}) = \left(RN(x_{ij}^{L}), RN(x_{ij}^{U})\right) = \left(\left[x_{ij}^{L}, x_{ij}^{U}\right], \left[x_{ij}^{\prime L}, x_{ij}^{\prime U}\right]\right)$ represents the value of alternative *i* with respect to criterion *j* (*i* = 1, 2, ... , *l*; *j* = 1, 2, ... , *n*).

$$X = \begin{bmatrix} IR(x_{11}) & IR(x_{12}) & \dots & IR(x_{1n}) \\ IR(x_{21}) & IR(x_{22}) & \dots & IR(x_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ IR(x_{l1}) & IR(x_{l2}) & \dots & IR(x_{ln}) \end{bmatrix}_{l \times n} \tag{4}$$

where *l* represents the number of alternatives and *n* represents the number of criteria. The initial (aggregated) decision matrix can be obtained using one of the IRN aggregation operators.

*Step 2*. Normalization of the initial interval rough group matrix using Equations (5)–(9)

$$N = \begin{bmatrix} IR(n_{11}) & IR(n_{12}) & \dots & IR(n_{1n}) \\ IR(n_{21}) & IR(n_{22}) & \dots & IR(n_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ IR(n_{l1}) & IR(n_{l2}) & \dots & IR(n_{ln}) \end{bmatrix}_{l \times n} \tag{5}$$

where *IR*(*nij*) represents the elements of the interval rough normalized matrix (*N*).

(a) For "*benefit* type" criteria (maximum value of criteria is preferable)

$$IR(n_{ij}) = \left(\left[n_{ij}^{L}, n_{ij}^{U}\right], \left[n_{ij}^{\prime L}, n_{ij}^{\prime U}\right]\right) = \left(\left[\frac{x_{ij}^{L} - x_{i}^{-}}{x_{i}^{+} - x_{i}^{-}}, \frac{x_{ij}^{U} - x_{i}^{-}}{x_{i}^{+} - x_{i}^{-}}\right], \left[\frac{x_{ij}^{\prime L} - x_{i}^{-}}{x_{i}^{+} - x_{i}^{-}}, \frac{x_{ij}^{\prime U} - x_{i}^{-}}{x_{i}^{+} - x_{i}^{-}}\right]\right) \tag{6}$$

(b) For "*cost* type" criteria (minimum value of criteria is preferable)

$$IR(n_{ij}) = \left(\left[n_{ij}^{L}, n_{ij}^{U}\right], \left[n_{ij}^{\prime L}, n_{ij}^{\prime U}\right]\right) = \left(\left[\frac{x_{ij}^{\prime U} - x_{i}^{+}}{x_{i}^{-} - x_{i}^{+}}, \frac{x_{ij}^{\prime L} - x_{i}^{+}}{x_{i}^{-} - x_{i}^{+}}\right], \left[\frac{x_{ij}^{U} - x_{i}^{+}}{x_{i}^{-} - x_{i}^{+}}, \frac{x_{ij}^{L} - x_{i}^{+}}{x_{i}^{-} - x_{i}^{+}}\right]\right) \tag{7}$$

where $x_{i}^{-}$ and $x_{i}^{+}$ represent the minimum and maximum values of the rough boundary interval of the observed criterion, respectively:

$$x_{i}^{-} = \min_{i}\left\{x_{ij}^{L}, x_{ij}^{\prime L}\right\} \tag{8}$$

$$x_{i}^{+} = \max_{i}\left\{x_{ij}^{U}, x_{ij}^{\prime U}\right\} \tag{9}$$
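Equations (6)–(9) can be sketched as follows, representing an interval rough number as the 4-tuple (x^L, x^U, x′^L, x′^U); the function names are illustrative, and the cost-type assertion below reproduces the normalization of *IR*(*n*13) worked out in Section 4.

```python
def normalize_benefit(irn, x_minus, x_plus):
    """Normalize one interval rough number (xL, xU, x'L, x'U) for a
    benefit-type criterion, Eq. (6); x_minus/x_plus per Eqs. (8)-(9)."""
    span = x_plus - x_minus
    return tuple((x - x_minus) / span for x in irn)

def normalize_cost(irn, x_minus, x_plus):
    """Normalize for a cost-type criterion, Eq. (7); the component order
    is reversed so the result is again a valid interval rough number."""
    xl, xu, xpl, xpu = irn
    span = x_minus - x_plus
    return tuple((x - x_plus) / span for x in (xpu, xpl, xu, xl))
```

Both functions map every component into [0, 1], with 1 assigned to the most preferable bound of the criterion.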

*Step 3*: Weighting the previous normalized interval rough matrix using Equation (10):

$$IR(v_{ij}) = \left(\left[v_{ij}^{L}, v_{ij}^{U}, v_{ij}^{\prime L}, v_{ij}^{\prime U}\right]\right)_{l \times n} = \left(\left[n_{ij}^{L} \times w_{j}^{L}, n_{ij}^{U} \times w_{j}^{U}, n_{ij}^{\prime L} \times w_{j}^{\prime L}, n_{ij}^{\prime U} \times w_{j}^{\prime U}\right]\right) \tag{10}$$

*IR*(*wj*) represents the weight coefficients of criteria.

*Step 4*: Summing all the values of the alternatives obtained (summing by rows) using Equation (11):

$$IR(S_{i}) = \left(\left[s_{i}^{L}, s_{i}^{U}, s_{i}^{\prime L}, s_{i}^{\prime U}\right]\right)_{1 \times l} = \sum_{j=1}^{n}\left[v_{ij}^{L}, v_{ij}^{U}, v_{ij}^{\prime L}, v_{ij}^{\prime U}\right] \tag{11}$$

*Step 5*. Determination of the weighted sum model using Equations (12) and (13):

$$IR(SW_{i}) = \left(\left[sw_{i}^{L}, sw_{i}^{U}, sw_{i}^{\prime L}, sw_{i}^{\prime U}\right]\right)_{1 \times l} = \sum_{j=1}^{n}\left[\left(n_{ij}^{L}, n_{ij}^{U}, n_{ij}^{\prime L}, n_{ij}^{\prime U}\right)^{w_{j}}\right] \tag{12}$$

$$IR(SW_{i}) = \sum_{j=1}^{n}\left[\left(n_{ij}^{L}\right)^{w_{j}^{\prime U}}, \left(n_{ij}^{U}\right)^{w_{j}^{\prime L}}, \left(n_{ij}^{\prime L}\right)^{w_{j}^{U}}, \left(n_{ij}^{\prime U}\right)^{w_{j}^{L}}\right] \tag{13}$$
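In the crisp special case, where all four components of each interval rough number coincide, Equations (11)–(13) reduce to a weighted sum and a weighted power sum per alternative. A minimal sketch under that assumption (the function name is ours):

```python
def cocoso_scores(norm_row, weights):
    """Crisp special case of Eqs. (11)-(13): S_i is the weighted sum of the
    normalized values of one alternative, SW_i the sum of the normalized
    values raised to the corresponding criterion weights."""
    s = sum(w * n for n, w in zip(norm_row, weights))
    sw = sum(n ** w for n, w in zip(norm_row, weights))
    return s, sw
```

The interval rough version applies the same two formulas component-wise to the four bounds of each *IR*(*n<sub>ij</sub>*).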

*Step 6.* Determination of aggregated strategies.

In this step, three aggregated appraisal scores are used to generate relative performance scores of the alternatives, using Equations (14)–(17):

First, it is required to calculate the sum of matrices *IR*(*Si*) and *IR*(*SWi*). In this way, the matrix *IR*(*Ti*) is obtained by applying Equation (14)

$$IR(T_{i}) = \left[t_{i}^{L}, t_{i}^{U}, t_{i}^{\prime L}, t_{i}^{\prime U}\right] = \left[s_{i}^{L} + sw_{i}^{L},\ s_{i}^{U} + sw_{i}^{U},\ s_{i}^{\prime L} + sw_{i}^{\prime L},\ s_{i}^{\prime U} + sw_{i}^{\prime U}\right] \tag{14}$$

Subsequently, all values are summed over the alternatives to obtain $\sum IR(T_i)$.

$$k_{ia} = \frac{IR(T_{i})}{\sum_{i} IR(T_{i})} \tag{15}$$

$$k_{ib} = \frac{S_{i}}{\min_{i} S_{i}} + \frac{SW_{i}}{\min_{i} SW_{i}} \tag{16}$$

$$k_{ic} = \frac{\lambda(S_{i}) + (1 - \lambda)(SW_{i})}{\lambda \max_{i} S_{i} + (1 - \lambda)\max_{i} SW_{i}}; \quad 0 \le \lambda \le 1 \tag{17}$$

Equation (15) represents the arithmetic mean of sums of *IR*(*Si*) and *IR*(*SWi*) scores, while Equation (16) signifies the sum of relative scores of *IR*(*Si*) and *IR*(*SWi*). Equation (17) computes a balanced compromise score of *IR*(*Si*) and *IR*(*SWi*) models. In Equation (17), the value of λ ranges from 0 to 1 and can be chosen by the decision-maker.

*Step 7*: The final ranking of the alternatives is determined based on *ki* values:

Higher *ki* values indicate a better position of the alternative in the final ranking order.

$$k_{i} = \left(k_{ia} k_{ib} k_{ic}\right)^{\frac{1}{3}} + \frac{1}{3}\left(k_{ia} + k_{ib} + k_{ic}\right) \tag{18}$$

The ranking of alternatives is performed by transformation of the interval rough numbers $IRN(K_i) = \left(\left[K_i^{L}, K_i^{U}\right], \left[K_i^{\prime L}, K_i^{\prime U}\right]\right)$ into real numbers $K_i$ ($i = 1, 2, \dots, l$), applying Equations (19) and (20) from [38].

$$\mu_{i} = \frac{RB(K_{ui})}{RB(K_{ui}) + RB(K_{li})}, \ 0 \le \mu_{i} \le 1; \quad RB(K_{ui}) = K_{i}^{\prime U} - K_{i}^{\prime L}; \quad RB(K_{li}) = K_{i}^{U} - K_{i}^{L} \tag{19}$$

$$K_{i} = \left(\mu_{i} \cdot K_{i}^{L}\right) + \left((1 - \mu_{i}) \cdot K_{i}^{\prime U}\right) \tag{20}$$

where *RB*(*Kui*) and *RB*(*Kli*) represent the upper and lower rough boundary intervals of *IRN*(*Ki*), respectively.
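The aggregation in Equations (14)–(18) can be sketched for the same crisp special case, in which each *S<sub>i</sub>* and *SW<sub>i</sub>* is a single number rather than an interval rough number; the function name is illustrative.

```python
def cocoso_rank(S, SW, lam=0.5):
    """Crisp sketch of Eqs. (14)-(18): three appraisal scores per
    alternative and the final compound score k_i (higher is better)."""
    T = [s + sw for s, sw in zip(S, SW)]                        # Eq. (14)
    ka = [t / sum(T) for t in T]                                # Eq. (15)
    kb = [s / min(S) + sw / min(SW) for s, sw in zip(S, SW)]    # Eq. (16)
    denom = lam * max(S) + (1 - lam) * max(SW)
    kc = [(lam * s + (1 - lam) * sw) / denom                    # Eq. (17)
          for s, sw in zip(S, SW)]
    return [(a * b * c) ** (1 / 3) + (a + b + c) / 3            # Eq. (18)
            for a, b, c in zip(ka, kb, kc)]
```

An alternative that dominates on both *S* and *SW* necessarily receives the higher compound score *k*.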

### **4. Case Study**

The proposed model, which is shown and explained in detail in Figure 1, was implemented in a company engaged in trading activities and its own production of building materials. It is located in the territory of Bosnia and Herzegovina. It is important to note that in the recent past, the company has also introduced the production of eco-pellets, which indicates that it is trying to take care of sustainability aspects as well.

### *4.1. Application of ABC Analysis for Product Classification*

The input parameters for ABC analysis are the parameters obtained from the procurement report at the retail facility of the company where the research was conducted. The data collected cover one calendar year, i.e., the complete period of the previous year. Product characteristics are systematized by: Product code, product name, quantity of products purchased, purchase value per unit of product, and total value of procurement. Based on the financial parameters, ABC analysis has been carried out, part of which is shown in Table 1. The product assortment for the observed annual period is a total of 83 products.



After ABC analysis, complying with the constraints presented by Equations (1)–(3), the results given in Figure 3 are obtained.

**Figure 3.** Results of ABC analysis.

Figure 3 shows the number of products per group, the percentage share of the number of products, and the percentage share of the costs. Based on the results obtained, the following can be observed. Group A consists of eight products, i.e., 9.6% of the total number of products, which is the smallest of the three groups but the one with the highest value: these products account for 75.3% of procurement costs, since they are products with higher value or higher demand. There are 26 products classified in Group B, i.e., 31.3% of the total number of products; in terms of financial structure, they account for 19.5% of the costs. Group C products account for 5.12% of the cost structure but represent the largest part of the assortment: 49 products, i.e., 59%. Group A consists of products that are of the highest priority for the company. Demand for these products is high, and they have a high value, so they require special attention, which is reflected in more frequent and rigorous control, thorough preparation of procurement activities, and the creation of partnerships with potential suppliers. The products belonging to Group B require almost as much attention as the previous group, with some minor changes; this group needs to be controlled less frequently, two to three times a year. Group C includes the largest number of products, but they are of very low value. For this group, the procurement processes should be simplified as much as possible and a higher level of safety stock should be kept. Most commonly, the orders cover the annual requirements, so control mostly takes place once a year.

### *4.2. Calculation of the Criterion Weights Applying the FUCOM Method*

The criteria used to evaluate the suppliers within each group are: C1—Payment method (max), C2—Financial stability (max), C3—Product price (min), C4—Delivery time (min), C5—Reliability (max), C6—Flexibility (max), C7—Product quality (max), C8—Warranty period (max) and C9—Reputation (max). The nine criteria fall into three groups of three: financial criteria, logistics parameters, and quality indicators. After performing the calculation and applying all the steps of the FUCOM method, the vectors of weight coefficients within each of the groups are defined, as shown in Figure 4.

**Figure 4.** Results of applying the FUCOM method—the weight values of the criteria.

Figure 4 shows the values of all the criteria for suppliers to be evaluated within the three groups. Considering the results of ABC analysis and the types of products belonging to different groups, the results are as follows:


### *4.3. Evaluation of Suppliers Applying the Interval Rough CoCoSo Model*

The final values of the weight coefficients obtained by the FUCOM model are further used to evaluate and select the optimal alternative (supplier) in the IR CoCoSo multi-criteria model. The evaluation of suppliers is performed within the three groups defined in ABC analysis. Eight suppliers are evaluated in Group A, six suppliers are considered in Group B, while nine suppliers are analyzed in Group C. The suppliers are evaluated within each group on the basis of the nine criteria previously presented. Applying the IR CoCoSo model within each group, a rank of suppliers is defined. The alternatives are evaluated using the linguistic scale: Very good (VG), 9; Good (G), 7; Medium (M), 5; Fair (F), 3; and Poor (P), 1.

In the research, three experts evaluated the suppliers using the predefined scale. Thus, three corresponding expert matrices were obtained within each group of suppliers, one per expert (Table 2).

*Step 1*. The initial (aggregated) decision matrix (Table 3) is obtained using the IRN Dombi weighted geometric averaging (IRNDWGA) operator, Equation (A2). IRNDWGA is derived from the rough Dombi weighted geometric averaging operator presented in [39] and given in Appendix A.

Applying Equation (A2), the experts' individual IRN matrices are transformed into an aggregated IRN initial decision matrix. Thus, e.g., at position A1-C1 (Supplier Group A), the following values are obtained in the corresponding expert matrices:

$IRN(x_{11}^{E1}) = ([1, 2.33]; [3, 3.67])$, $IRN(x_{11}^{E2}) = ([1, 2.33]; [3, 3.67])$ and $IRN(x_{11}^{E3}) = ([2.33, 5]; [3.67, 5])$. As mentioned in the previous part of the paper, the three experts participating in the study were assigned the weight coefficients $w_E = (0.299, 0.328, 0.373)^T$. Based on Equation (A2) and assuming that ρ = 1, the values at position A1-C1 are aggregated as follows:

$$IRNDWGA(x_{11}) = \left\{
\begin{aligned}
x_{11}^{L} &= \frac{\sum_{j=1}^{3}\phi_{j}^{L}}{1+\left\{\sum_{j=1}^{3} w_{j}\left(\frac{1-f(\phi_{j}^{L})}{f(\phi_{j}^{L})}\right)^{\rho}\right\}^{1/\rho}} = \frac{4.33}{1+\left(0.299\times\frac{1-0.23}{0.23}+0.328\times\frac{1-0.23}{0.23}+0.373\times\frac{1-0.54}{0.54}\right)} = 1.27 \\
x_{11}^{U} &= \frac{9.66}{1+\left(0.299\times\frac{1-0.24}{0.24}+0.328\times\frac{1-0.24}{0.24}+0.373\times\frac{1-0.52}{0.52}\right)} = 2.91 \\
x_{11}^{\prime L} &= \frac{9.67}{1+\left(0.299\times\frac{1-0.31}{0.31}+0.328\times\frac{1-0.31}{0.31}+0.373\times\frac{1-0.38}{0.38}\right)} = 3.22 \\
x_{11}^{\prime U} &= \frac{12.34}{1+\left(0.299\times\frac{1-0.297}{0.297}+0.328\times\frac{1-0.297}{0.297}+0.373\times\frac{1-0.405}{0.405}\right)} = 4.07
\end{aligned}\right\} = ([1.27, 2.91], [3.22, 4.07])$$

Thus, at position A1-C1, the rough aggregated value $IRN(x_{11}) = ([1.27, 2.91], [3.22, 4.07])$ is obtained by applying the IRNDWGA operator (Table 3). The remaining values in Table 3 are aggregated (using Equation (A2)) in the same way.
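The component calculation above can be sketched as follows for ρ = 1, with $f(\phi_j) = \phi_j / \sum_j \phi_j$ as in Equation (A2); the expert values (1, 1, 2.33) and (2.33, 2.33, 5) and the weights (0.299, 0.328, 0.373) are those of position A1-C1, and the function name is ours.

```python
def dombi_wg(values, weights, rho=1.0):
    """One component of the IRN Dombi weighted geometric average:
    x = sum(phi) / (1 + (sum_j w_j * ((1 - f_j) / f_j) ** rho) ** (1 / rho)),
    where f_j = phi_j / sum(phi)."""
    total = sum(values)
    f = [v / total for v in values]
    acc = sum(w * ((1 - fj) / fj) ** rho for w, fj in zip(weights, f))
    return total / (1 + acc ** (1 / rho))
```

Applying the function to each of the four bounds of the three experts' IRNs reproduces the aggregated value ([1.27, 2.91], [3.22, 4.07]).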




**Table 3.** The initial (aggregated) interval rough decision matrix.


*Step 2:* In this step, using Equations (6) and (7), the initial decision matrix is modified, and a normalized matrix is formed (Table 4).


**Table 4.** Normalized interval rough matrix.

An example of normalization for benefit criteria is explained for *IR*(*n*11) for Group A:

$$IR(n\_{11}) = ([0.00, 0.22], [0.26, 0.38]) = \left( \left[ \frac{1.27 - 1.27}{8.74 - 1.27}, \frac{2.91 - 1.27}{8.74 - 1.27} \right], \left[ \frac{3.22 - 1.27}{8.74 - 1.27}, \frac{4.07 - 1.27}{8.74 - 1.27} \right] \right)$$

An example of normalization for cost criteria is explained for *IR*(*n*13) for Group A:

$$IR(n\_{13}) = ([0.00, 0.19], [0.16, 0.57]) = \left(\left[\frac{8.77 - 8.77}{3.76 - 8.77}, \frac{7.84 - 8.77}{3.76 - 8.77}\right], \left[\frac{7.96 - 8.77}{3.76 - 8.77}, \frac{5.93 - 8.77}{3.76 - 8.77}\right]\right)$$

*Step 3:* In this step, Equation (10) is applied in order to perform the weighting of the previously obtained matrix with the criterion values identified by using the FUCOM method. An example of the calculation of this matrix shown in Table 5 is as follows:

*IR*(*V*11) = ([0.00, 0.04, 0.05, 0.07]) = ([0.00 × 0.183, 0.22 × 0.183, 0.26 × 0.183, 0.38 × 0.183])


**Table 5.** Weighted normalized interval rough matrix.

*Step 4 and 5.* The following section presents the weighted IR sequences, *IR*(*Si*) and *IR*(*SWi*), which are further used to compare the alternatives. The sequences *IR*(*Si*) and *IR*(*SWi*) are obtained using Equations (11)–(13) and are shown in Table 6.

The sequence *IR*(*Si*) for the first alternative for Group A is obtained as follows:

$$IR(S\_1) = ([0.28, 0.50, 0.46, 0.72]) = \begin{bmatrix} 0.00 + 0.07 + 0.00 + 0.05 + 0.05 + 0.04 + 0.05 + 0.00 + 0.02 \\ 0.04 + 0.10 + 0.02 + 0.07 + 0.07 + 0.06 + 0.06 + 0.04 + 0.05 \\ 0.05 + 0.08 + 0.02 + 0.05 + 0.08 + 0.05 + 0.06 + 0.03 + 0.04 \\ 0.07 + 0.11 + 0.05 + 0.09 + 0.10 + 0.07 + 0.09 + 0.07 + 0.06 \end{bmatrix}$$

The sequence *IR*(*SWi*) for the first alternative for Group A is obtained as follows:

$$IR(SW\_1) = [5.51, 8.26, 8.18, 8.64] = \begin{bmatrix} (0.00)^{0.183} + (0.59)^{0.114} + (0.00)^{0.096} + (0.39)^{0.134} + (0.41)^{0.122} \\ + (0.56)^{0.071} + (0.49)^{0.099} + (0.00)^{0.099} + (0.28)^{0.082} \\ (0.22)^{0.183} + (0.87)^{0.114} + (0.19)^{0.096} + (0.51)^{0.134} + (0.60)^{0.122} \\ + (0.79)^{0.071} + (0.61)^{0.099} + (0.44)^{0.099} + (0.58)^{0.082} \\ (0.26)^{0.183} + (0.66)^{0.114} + (0.16)^{0.096} + (0.41)^{0.134} + (0.68)^{0.122} \\ + (0.74)^{0.071} + (0.88)^{0.099} + (0.31)^{0.099} + (0.51)^{0.082} \\ (0.38)^{0.183} + (0.95)^{0.114} + (0.57)^{0.096} + (0.70)^{0.134} + (0.85)^{0.122} \\ + (0.99)^{0.071} + (0.87)^{0.099} + (0.73)^{0.099} + (0.74)^{0.082} \end{bmatrix}$$


**Table 6.** The *IR*(*Si*) and *IR*(*SWi*) sequences of the IR CoCoSo model.

*Step 6*: Applying Equations (14)–(17), the relative significance of the alternatives is obtained within the aggregation strategies. When calculating the relative significance of alternatives within the third aggregation strategy, the value of the coefficient λ is taken as 0.5. The effect of changing the coefficient λ (0 ≤ λ ≤ 1) on the relative significance of the alternatives is considered in the discussion of the results. The relative significance of the alternatives within the aggregation strategies, as well as the final ranking of alternatives by Groups A, B, and C, is shown in Table 7.

First, it is required to calculate the sum of matrices *IR*(*Si*) and *IR*(*SWi*). In this way, the matrix *IR*(*Ti*) is obtained by applying Equation (14).


$$IR(T\_i) = \begin{bmatrix} 5.79, 8.76, 8.64, 9.36 \\ 7.39, 8.96, 8.93, 9.66 \\ 7.63, 9.24, 9.25, 9.85 \\ 6.46, 8.84, 8.85, 9.36 \\ 7.01, 8.92, 8.70, 9.50 \\ 7.69, 8.96, 8.67, 9.40 \\ 6.99, 8.76, 8.86, 9.48 \\ 5.71, 8.89, 8.00, 9.40 \end{bmatrix}$$


**Table 7.** The relative significance of the alternatives and the final ranking of the alternatives.

An example of the calculation is as follows:

$$IR(T\_1) = [5.79, 8.76, 8.64, 9.36] = [5.51 + 0.28, 8.26 + 0.50, 8.18 + 0.46, 8.64 + 0.72]$$

Subsequently, all values by columns are summed and a matrix is obtained:

$$\sum \text{IR}(T\_i) = [54.66, 71.33, 69.89, 76.00] = \begin{bmatrix} 5.79 + 7.39 + 7.63 + 6.46 + 7.01 + 7.69 + 6.99 + 5.71 \\ 8.76 + 8.96 + 9.24 + 8.84 + 8.92 + 8.96 + 8.76 + 8.89 \\ 8.64 + 8.93 + 9.25 + 8.85 + 8.70 + 8.67 + 8.86 + 8.00 \\ 9.36 + 9.66 + 9.85 + 9.36 + 9.50 + 9.40 + 9.48 + 9.40 \end{bmatrix}$$

Applying Equation (15), the first aggregation strategy is obtained:

$$k\_{\rm ia} = \frac{IR(T\_i)}{\sum IR(T\_i)} = [0.08, 0.13, 0.12, 0.17] = \left[\frac{5.79}{76.00}, \frac{8.76}{69.89}, \frac{8.64}{71.33}, \frac{9.36}{54.66}\right]$$

Applying Equation (16), the second aggregation strategy is obtained:

$$k\_{1b} = \frac{S\_1}{\min\_i S\_i} + \frac{SW\_1}{\min\_i SW\_i} = [1.03, 2.20, 1.91, 4.86] = \left[\frac{0.28}{0.72} + \frac{5.51}{8.63},\ \frac{0.50}{0.46} + \frac{8.26}{7.49},\ \frac{0.46}{0.50} + \frac{8.18}{8.26},\ \frac{0.72}{0.22} + \frac{8.64}{5.37}\right]$$

Applying Equation (17), the third aggregation strategy is obtained:

First, the following matrix is calculated:

$$\lambda(S\_i) + (1 - \lambda)(SW\_i) = \begin{bmatrix} 2.89, 4.38, 4.32, 4.68\\ 3.69, 4.48, 4.46, 4.83\\ 3.81, 4.62, 4.62, 4.92\\ 3.23, 4.42, 4.42, 4.68\\ 3.51, 4.46, 4.35, 4.75\\ 3.85, 4.48, 4.34, 4.70\\ 3.49, 4.38, 4.43, 4.74\\ 2.85, 4.45, 4.00, 4.70 \end{bmatrix}$$

whose elements are obtained as follows:

$$
\lambda(S\_1) + (1 - \lambda)(SW\_1) = [2.89, 4.38, 4.32, 4.68] \\
= \begin{bmatrix}
0.50 \times 0.28 + (1 - 0.50) \times 5.51 \\
0.50 \times 0.50 + (1 - 0.50) \times 8.26 \\
0.50 \times 0.46 + (1 - 0.50) \times 8.18 \\
0.50 \times 0.72 + (1 - 0.50) \times 8.64
\end{bmatrix}
$$

After that, the following matrix is calculated:

$$(\lambda \max\_{i} S\_{i} + (1 - \lambda) \max\_{i} SW\_{i}) = [3.91, 4.62, 4.62, 4.92]$$

and *kic* is obtained as follows:

$$k\_{1c} = [0.59, 0.95, 0.94, 1.20] = \left[\frac{2.89}{4.92}, \frac{4.38}{4.62}, \frac{4.32}{4.62}, \frac{4.68}{3.91}\right]$$

*Step 7*. The final ranking of the alternatives is obtained by applying Equation (18).

$$k\_1 = [0.92, 1.73, 1.59, 3.08] = \begin{bmatrix} (0.08 \times 1.03 \times 0.59)^{\frac{1}{3}} + \frac{1}{3}(0.08 + 1.03 + 0.59) \\ (0.13 \times 2.20 \times 0.95)^{\frac{1}{3}} + \frac{1}{3}(0.13 + 2.20 + 0.95) \\ (0.12 \times 1.91 \times 0.94)^{\frac{1}{3}} + \frac{1}{3}(0.12 + 1.91 + 0.94) \\ (0.17 \times 4.86 \times 1.20)^{\frac{1}{3}} + \frac{1}{3}(0.17 + 4.86 + 1.20) \end{bmatrix}$$

The alternatives are ranked based on the value of *k*, whereby a higher value of *k* is preferable. Based on the values obtained, it can be concluded that the initial ranking of suppliers using the FUCOM-IR CoCoSo model is: Group A: A3 > A2 > A5 > A6 > A4 > A8 > A7 > A1; Group B: A2 > A4 > A5 > A3 > A6 > A1; and Group C: A7 > A2 > A1 > A3 > A4 > A9 > A8 > A5 > A6.

### **5. Validation of the Results through Sensitivity Analysis**

In the following part, the results presented in the previous section are validated through a sensitivity analysis conducted in three phases. In the first phase, the impact of changing the parameters λ and ρ on the ranking results was analyzed. In the second phase, the impact of changing the most significant criterion on the ranking results was analyzed. In the third phase, the results of the FUCOM-IRN CoCoSo model were compared with the results of other multi-criteria techniques.

Earlier research [39] has shown that changes in the parameter ρ lead to the transformation of the Dombi functions, and thus to changes in their values. Since the impact of the parameter ρ on the values of these functions can be significant, checking its impact on the final ranking is a logical step when validating the results. The change in the value of the parameter ρ was analyzed through a total of 100 scenarios covering the interval ρ ∈ [1, 100], in each of the considered groups of suppliers (Groups A, B, and C). The results of the impact of the parameter ρ on changes in the values of the criterion functions of the alternatives are shown in Figure 5.

**Figure 5.** The impact of changing the parameter ρ on changes in the value of the criterion functions of alternatives.

Generally, as the value of the parameter ρ increases, the calculation of the IRN Dombi functions becomes more complex. Decision-makers most often choose the value of this parameter in accordance with their preferences; when making decisions in real systems and in real time, it is recommended to take ρ = 1, which simplifies the decision-making process. Figure 5 shows that for different values of the parameter ρ, the ranking positions of the considered alternatives in Group B remain the same. In Group A, for certain values of the parameter ρ, the sixth-ranked and seventh-ranked alternatives (A8 and A7) swap places, while for the remaining values of the parameter ρ the initial ranking from Table 7 is confirmed; the rankings of the remaining alternatives are unchanged. The situation is similar in Group C: for parameter values ρ > 1, the fourth-ranked and fifth-ranked alternatives (A1 and A4) swap places, while the rankings of the remaining alternatives are unchanged. This means that there is a satisfactory advantage between the most influential alternatives and that the alternatives A3 and A2 (Group A), A2 and A4 (Group B), and A7 and A2 (Group C) stand out as dominant in the considered set. Thus, changes in the value of the parameter ρ affect the values of the criterion functions of the alternatives, but these changes are insufficient to cause major changes in the ranks.

In the following part, the impact of changing the parameter λ on the rankings of the alternatives is discussed. The impact of the value of the coefficient λ on the criterion functions and the rankings of the alternatives was analyzed through 100 scenarios. In the initial scenario (S1), the value λ = 0.01 was considered, and in each subsequent scenario, λ was increased by 0.01, giving a total of 100 scenarios. The results show that the parameter λ affects neither the values of the criterion functions nor the supplier ranks. It should be emphasized that this applies only to the example discussed in this paper; for other values of the alternatives in the initial decision matrix, the parameter λ may influence the result. Therefore, such an analysis should be an indispensable step in checking the stability of the proposed ranks.

In the second phase of result validation, i.e., sensitivity analysis, an analysis of the impact of changing the most significant criterion (C1) on the ranking results by supplier groups has been performed. Based on the recommendations given in [40–42], a total of 20 scenarios are formed applying Equation (21), [41].

$$W_{n\beta} = (1 - W_{n\alpha})\frac{W_{\beta}}{1 - W_{n}} \tag{21}$$

where $W_{n\beta}$ represents the corrected value of criteria C2–C9, $W_{n\alpha}$ represents the reduced value of the C1 criterion, $W_{\beta}$ represents the original value of the considered criterion, and $W_{n}$ represents the original value of the C1 criterion.

In the first scenario, the value of the C1 criterion was reduced by 2%, while the values of the remaining criteria were proportionally corrected using Equation (21). In each subsequent scenario, the value of the C1 criterion was reduced by a further 5%, while the values of the remaining criteria were adjusted to satisfy the condition $\sum_{j=1}^{n} w_j = 1$. After the formation of the 20 new vectors of criterion weight coefficients for each supplier group, new supplier ranks were obtained (Figure 6).
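The scenario construction of Equation (21) can be sketched as follows; the function name and interface are illustrative. By construction, the corrected weights always sum to one.

```python
def adjust_weights(weights, most_sig=0, reduction=0.05):
    """Eq. (21): reduce the most significant criterion's weight by a given
    share and proportionally correct the rest so the weights still sum to 1."""
    w_n = weights[most_sig]
    w_na = w_n * (1 - reduction)          # reduced value of the C1 weight
    return [(1 - w_na) * w / (1 - w_n) if j != most_sig else w_na
            for j, w in enumerate(weights)]
```

Calling the function repeatedly with an increasing `reduction` generates the successive scenarios of the sensitivity analysis.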

**Figure 6.** The impact of changing the C1 criterion on the rank of suppliers.

Analyzing the data from sensitivity analysis (Figure 6), we can conclude the following:


alternatives. As with Group A, there is a high rank correlation here as well, which is confirmed by the mean of correlation coefficient in Group B (Group B = 0.97) which is extremely high.

(3) Group C: Changes in the values of the C1 criterion in 20 scenarios lead to a change in the ranks of alternatives A2, A1, A3, and A4. At the same time, the remaining alternatives A7, A9, A8, A5, and A6 have maintained their rankings across all 20 scenarios. The best-ranked alternative A7 has remained the best-ranked alternative in all 20 scenarios, and we can conclude that the A7 alternative has a sufficient advantage over the remaining alternatives. The Spearman's correlation coefficient (Group C = 0.91) shows that there is a high correlation between the obtained ranks and the initial rank from Table 7, which leads us to the conclusion that the rank obtained is confirmed and credible.

In the third phase, the rankings of the IRN CoCoSo models were compared with the results of other multi-criteria techniques: IRN WASPAS [43,44], IRN MABAC [43], and IRN MAIRCA [41]. The results of the ranking when applying the above multi-criteria techniques are shown in Figure 7.

**Figure 7.** Comparison of different multi-criteria models.

The ranking results show that all multi-criteria models for Group B and Group C confirm the rankings of the first-ranked alternatives: A2 and A4 (Group B) and A7 and A2 (Group C). For Group A, all multi-criteria techniques identify A3 and A2 as the two best-ranked alternatives. A3 is best-ranked in all models, while A2 is second-ranked according to the IRN CoCoSo, IRN MABAC, and IRN MAIRCA models; only according to the IRN WASPAS model is alternative A2 third-ranked. Based on these comparisons for Group A, we can identify the set of alternatives {A3, A2} as dominant, with a significant advantage over the remaining alternatives. In addition, since alternative A3 is first-ranked in all the models, we can conclude that A3 dominates A2 and the remaining alternatives in Group A. Based on the results presented, it can be concluded that the rank suggested by the IRN CoCoSo model has been confirmed.

### **6. Conclusions**

In this paper, a new multi-parameter model has been formed for stock management in the storage system and for rationalizing the costs of the activities and processes that take place in it. Multiple techniques have been integrated in order to obtain a unique model that provides good results and helps to run an efficient and sustainable business. First, on the basis of data collected on an annual basis, the products were classified using ABC analysis, taking into account the cost values of procurement, which was a prerequisite for the subsequent steps of the model. The purpose of this analysis is to establish a functional control and management system within the procurement and warehousing business, thus creating the possibility of achieving greater economy for the company.
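The ABC step described above can be sketched as a cumulative-cost classification. The 80%/95% cut-offs and the product costs below are illustrative assumptions; the paper's actual thresholds and data are not restated here.

```python
def abc_classify(costs, a_cut=0.80, b_cut=0.95):
    """costs: {product: annual procurement cost}. Returns {product: 'A'|'B'|'C'}."""
    total = sum(costs.values())
    ranked = sorted(costs, key=costs.get, reverse=True)  # highest cost first
    labels, cum = {}, 0.0
    for p in ranked:
        cum += costs[p] / total                          # cumulative cost share
        labels[p] = 'A' if cum <= a_cut else ('B' if cum <= b_cut else 'C')
    return labels

demo = {'P1': 50000, 'P2': 30000, 'P3': 10000, 'P4': 6000, 'P5': 4000}
print(abc_classify(demo))
```

Group A then receives the tightest control and supplier-evaluation effort, with B and C handled progressively more loosely.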

Subsequently, each group of products was examined individually, a list of potential suppliers was formed for each group separately, and the criteria to be used for the evaluation of alternatives were determined. The FUCOM method was applied to determine the weights of nine criteria. The criteria were evaluated separately for each group, and therefore their values differed. Since this was a group decision-making process, the initial matrix was obtained by applying the IRN Dombi weighted geometric averaging (IRNDWGA) operator. Then, the IRN CoCoSo approach was used for the evaluation of suppliers for all product groups A, B, and C.

To determine the validity of the results obtained, an extensive sensitivity analysis was performed across several phases. The first phase involved changing the parameter ρ in the IRNDWGA operator, from which it was concluded that there was a satisfactory advantage among the most influential alternatives. This means that changes in the value of the parameter ρ affected the values of the criterion functions of the alternatives, but the changes were not sufficient to cause major changes in ranks. Subsequently, a total of 100 scenarios were formed involving changes in the value of the parameter λ, which in no scenario caused a change of ranks. Then, 20 scenarios were formed to analyze the impact of changing the most significant criterion, which influenced changes of ranks, but not to a great extent. Additionally, it is important to note that the first-ranked alternatives for all product categories remained in their positions. In addition to all the above, a comparison was made with other interval rough number MCDM approaches: IRN MAIRCA (Multi-Attributive Ideal-Real Comparative Analysis), IRN MABAC (Multi-Attributive Border Approximation area Comparison), and IRN WASPAS (Weighted Aggregated Sum Product ASsessment), which also confirmed the validity of the methodology proposed in this paper. Spearman's correlation coefficient was also calculated, showing a very high correlation of ranks.

To conclude, the contributions of this research are highlighted as follows. A unique multi-parameter model has been created to assist in adequate and efficient management of the storage system, while respecting financial parameters that are crucial to achieving sustainability in any business today. A new IRNDWGA operator has been developed to average the initial matrix, which in future research can be applied to any group decision-making problem based on interval rough numbers. A new IRN CoCoSo approach, which can also be applied to such and similar models in the future, has been developed. Finally, stock management policies have been defined in accordance with the individual product categories A, B, and C.

In addition to the suggestions for future research outlined through the contribution highlights, the following directions can also be emphasized. Since ABC analysis needs to be performed at regular intervals, it can be implemented in combination with XYZ analysis [45], so that their cross-analysis would produce new results. The integration of such approaches with MCDM methods and theories of uncertainty is surely a possible continuation of this research.

**Author Contributions:** Conceptualization, Ž.S.; methodology, Ž.E., D.P. and Ž.S.; validation, D.P. and V.S.; formal analysis, S.Ž.; investigation, G.M. and S.Ž.; data curation, G.M.; writing—original draft preparation, Ž.E.; writing—review and editing, V.S.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Appendix A**

Based on arithmetic operations with interval rough numbers [46] and rough Dombi T-norm and T-conorm [39], the IRN Dombi weighted geometric averaging (IRNDWGA) operator is derived.

**Definition A1.** *Let* $IRN(\varphi_j) = \left[RN(\varphi_{lj}), RN(\varphi_{uj})\right] = \left(\left[\underline{\varphi}_{lj}, \overline{\varphi}_{lj}\right], \left[\underline{\varphi}_{uj}, \overline{\varphi}_{uj}\right]\right)$, $(j = 1, 2, \dots, n)$, *be a set of interval rough numbers (IRNs) in R, and let* $w_j \in [0, 1]$ *represent the weight coefficient of* $IRN(\varphi_j)$, $(j = 1, 2, \dots, n)$, *which fulfills the requirement that* $\sum_{j=1}^{n} w_j = 1$. *Then the IRNDWGA operator can be defined as follows:*

$$\mathrm{IRNDWGA}\{IRN(\varphi_1), IRN(\varphi_2), \dots, IRN(\varphi_n)\} = \prod_{j=1}^{n} \left(IRN(\varphi_j)\right)^{w_j} \tag{A1}$$

**Theorem A1.** *Let* $IRN(\varphi_j) = \left(\left[\underline{\varphi}_{lj}, \overline{\varphi}_{lj}\right], \left[\underline{\varphi}_{uj}, \overline{\varphi}_{uj}\right]\right)$, $(j = 1, 2, \dots, n)$, *be a set of IRNs in R. Then the aggregated values of the interval rough numbers from the set R can be determined by using Equation (A1), and the aggregated IRN value is obtained by using Equation (A2):*

$$\mathrm{IRNDWGA}\{IRN(\varphi_1), IRN(\varphi_2), \dots, IRN(\varphi_n)\} = \left(\left[\frac{\sum_{j=1}^{n}\underline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{n} w_j \left(\frac{1-f(\underline{\varphi}_{lj})}{f(\underline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{n}\overline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{n} w_j \left(\frac{1-f(\overline{\varphi}_{lj})}{f(\overline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}}\right],\ \left[\frac{\sum_{j=1}^{n}\underline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{n} w_j \left(\frac{1-f(\underline{\varphi}_{uj})}{f(\underline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{n}\overline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{n} w_j \left(\frac{1-f(\overline{\varphi}_{uj})}{f(\overline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}}\right]\right) \tag{A2}$$

where $w_j \in [0, 1]$ represent the weight coefficients of $IRN(\varphi_j)$, $j = 1, 2, \dots, n$, which fulfill the requirement that $\sum_{j=1}^{n} w_j = 1$, and the IRN function $f$ is defined by

$$f(\underline{\varphi}_{lj}) = \frac{\underline{\varphi}_{lj}}{\sum_{j=1}^{n} \underline{\varphi}_{lj}};\quad f(\overline{\varphi}_{lj}) = \frac{\overline{\varphi}_{lj}}{\sum_{j=1}^{n} \overline{\varphi}_{lj}};\quad f(\underline{\varphi}_{uj}) = \frac{\underline{\varphi}_{uj}}{\sum_{j=1}^{n} \underline{\varphi}_{uj}};\quad f(\overline{\varphi}_{uj}) = \frac{\overline{\varphi}_{uj}}{\sum_{j=1}^{n} \overline{\varphi}_{uj}}.$$
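As a numerical illustration of Equation (A2), the sketch below aggregates a few toy IRNs with the Dombi parameter ρ. The IRN values and weights are invented for demonstration only.

```python
def irndwga(irns, w, rho=1.0):
    """IRN Dombi weighted geometric averaging per Equation (A2).
    irns: list of IRNs, each ([lower RN bounds], [upper RN bounds]); sum(w) == 1."""
    out = []
    for k in (0, 1):          # lower rough number, upper rough number
        for b in (0, 1):      # lower bound, upper bound
            col = [x[k][b] for x in irns]
            s = sum(col)
            f = [c / s for c in col]                     # the IRN function f
            dom = sum(wj * ((1 - fj) / fj) ** rho for wj, fj in zip(w, f))
            out.append(s / (1 + dom ** (1 / rho)))       # numerator / Dombi term
    return ([out[0], out[1]], [out[2], out[3]])

irns = [([2, 3], [4, 5]), ([3, 4], [5, 6]), ([1, 2], [3, 4])]
w = [0.5, 0.3, 0.2]
agg = irndwga(irns, w, rho=1.0)   # one aggregated IRN
```

Varying `rho` reproduces the first phase of the sensitivity analysis described in the body of the paper.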

### **Proof.** If *n* = 2, based on Dombi operations with IRNs, we obtain the following equation:

$$\mathrm{IRNDWGA}\{IRN(\varphi_1), IRN(\varphi_2)\} = \left(\left[\frac{\sum_{j=1}^{2}\underline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{2} w_j \left(\frac{1-f(\underline{\varphi}_{lj})}{f(\underline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{2}\overline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{2} w_j \left(\frac{1-f(\overline{\varphi}_{lj})}{f(\overline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}}\right],\ \left[\frac{\sum_{j=1}^{2}\underline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{2} w_j \left(\frac{1-f(\underline{\varphi}_{uj})}{f(\underline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{2}\overline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{2} w_j \left(\frac{1-f(\overline{\varphi}_{uj})}{f(\overline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}}\right]\right)$$

so Equation (A2) holds for *n* = 2.

Assume that Equation (A2) holds for *n* = *r*; then we have:

$$\mathrm{IRNDWGA}\{IRN(\varphi_1), IRN(\varphi_2), \dots, IRN(\varphi_r)\} = \left(\left[\frac{\sum_{j=1}^{r}\underline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{r} w_j \left(\frac{1-f(\underline{\varphi}_{lj})}{f(\underline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{r}\overline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{r} w_j \left(\frac{1-f(\overline{\varphi}_{lj})}{f(\overline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}}\right],\ \left[\frac{\sum_{j=1}^{r}\underline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{r} w_j \left(\frac{1-f(\underline{\varphi}_{uj})}{f(\underline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{r}\overline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{r} w_j \left(\frac{1-f(\overline{\varphi}_{uj})}{f(\overline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}}\right]\right)$$

If *n* = *r* + 1, aggregating this result with $\left(IRN(\varphi_{r+1})\right)^{w_{r+1}}$ through the same Dombi operations, we obtain:

$$\mathrm{IRNDWGA}\{IRN(\varphi_1), \dots, IRN(\varphi_{r+1})\} = \left(\left[\frac{\sum_{j=1}^{r+1}\underline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{r+1} w_j \left(\frac{1-f(\underline{\varphi}_{lj})}{f(\underline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{r+1}\overline{\varphi}_{lj}}{1+\left\{\sum_{j=1}^{r+1} w_j \left(\frac{1-f(\overline{\varphi}_{lj})}{f(\overline{\varphi}_{lj})}\right)^{\rho}\right\}^{1/\rho}}\right],\ \left[\frac{\sum_{j=1}^{r+1}\underline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{r+1} w_j \left(\frac{1-f(\underline{\varphi}_{uj})}{f(\underline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}},\ \frac{\sum_{j=1}^{r+1}\overline{\varphi}_{uj}}{1+\left\{\sum_{j=1}^{r+1} w_j \left(\frac{1-f(\overline{\varphi}_{uj})}{f(\overline{\varphi}_{uj})}\right)^{\rho}\right\}^{1/\rho}}\right]\right)$$

Since Equation (A2) holds for *n* = 2, and its validity for *n* = *r* implies its validity for *n* = *r* + 1, we conclude by induction that Theorem A1 is true and Equation (A2) is valid for all *n*. □

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Evaluating and Prioritizing the Green Supply Chain Management Practices in Pakistan: Based on Delphi and Fuzzy AHP Approach**

### **Yuanyuan Zhou 1,\*, Li Xu 2,3 and Ghulam Muhammad Shaikh <sup>4</sup>**


Received: 12 October 2019; Accepted: 25 October 2019; Published: 1 November 2019

**Abstract:** Nowadays, green supply chain management (SCM) practices are increasingly adopted by firms to reduce the negative effects of supply chain operations on the environment. Firms in sectors such as manufacturing, mining, and agriculture have to improve their capacity in green SCM practices because environmental regulations force them to consider these issues. However, green practices are relatively new and require comprehensive study. This study takes the case of three garment manufacturing firms for the evaluation of green SCM practices in the context of Pakistan. Green SCM requires multi-dimensional techniques; therefore, fuzzy-based multi-criteria decision analysis approaches should be adopted when assessing the green SCM practices of firms, because fuzzy-based methods provide a significant solution for complex, vague, and uncertain multi-attribute problems in a fuzzy environment. Therefore, in this study, a hybrid decision model comprising the Delphi and Fuzzy Analytical Hierarchy Process (AHP) methodologies is proposed for assessing the green SCM practices of firms in terms of green design, green purchasing, green production, green warehousing, green logistics, and reverse logistics. The Fuzzy AHP results reveal that "green purchasing," "green design," and "green production" are ranked the most important green indicators. Further, the results reveal the ranking of the manufacturing firms (alternatives) in the context of green SCM practices. This study should help industries to focus on green SCM practices and adopt green manufacturing processes.

**Keywords:** green supply chain management practices; green indicators; manufacturing firms; Delphi; Fuzzy AHP; Pakistan

### **1. Introduction**

Nowadays, green SCM has become a very popular practice because of increasing awareness of environmental protection and sustainability. Industries are obliged to take green practices into account to reinforce their green image and for the betterment of the environment [1]. In this context, various firms in the service, manufacturing, agriculture, and mining industries carry out green SCM practices in many countries of the world [2]. Firms are responsible for preventing hazardous environmental activities such as overflowing waste and raw material extraction from sites; thus, firms should apply stricter environmental standards and regulations to their activities. The main objectives of green SCM are to achieve sustainable development goals by minimizing or eliminating

the environmental damages created by supply chain practices. Therefore, green SCM activities make it important and possible for firms to restructure their design, purchase, production, warehousing, and logistics operations. Further, reverse logistics is a key attribute of green SCM in capturing value from used products and materials or to appropriately recycle them [3].

Moreover, despite the popularity and implementation of green SCM practices in many countries, there is still room for further practical and research contributions [4]. Several motivations for firms to implement green SCM practices are reported in previous studies [5,6]. From these studies, it has been identified that some firms implement green SCM based on customer satisfaction and expectations, while other firms adopt green practices to fulfill environmental regulations. Current globalization has forced policymakers to address sustainability in its environmental, social, and economic dimensions [7]. Therefore, it is important to involve industrial value creation in the sustainable manufacturing process. Various studies have pointed out the importance of the fourth industrial revolution (so-called Industry 4.0) based on sustainability [8–12]. Nowadays, with the adoption of Industry 4.0, firms are liable to incorporate triple bottom line sustainability issues for sustainable industrial value creation; therefore, in a previous study, the authors recommended several important dimensions, along with the triple bottom line dimensions, for sustainable industrial value creation for the industrial Internet of Things [13]. In another study, the economic, ecological, and social potential for industrial value creation in Industry 4.0 was qualitatively assessed from macro and micro perspectives, and the study indicates that industrial value creation makes a positive contribution to sustainable development [14].

Firms and managers have to assess their green SCM practices and performances. However, in the evaluation of green SCM practices, selecting the optimal indicators for firms to develop and implement greener SCM operations has always been considered a major problem. Because the decision problem is often very complex and includes vagueness, this study aims to propose an assessment approach for green SCM practices in the context of manufacturing firms in Pakistan. Multi-Criteria Decision Making (MCDM) methods can be considered very significant in addressing such problems under a fuzzy environment [15–18]. This study contributes, first, by obtaining the most important green indicators for the evaluation of green SCM practices. Secondly, a decision model is proposed, which uses the Delphi and Fuzzy Analytical Hierarchy Process (AHP) methodologies. In fact, the Delphi and AHP methods have been used together in various studies under a fuzzy environment [19–21]. However, in the present study, the decision model is applied from the perspective of green SCM practices in the context of Pakistan. The analysis of the decision model would help managers and governments to evaluate this decision problem.

The main objective of this study is to develop a comprehensive set of green indicators to assess the SCM practices of firms using the Delphi and Fuzzy AHP methods. This study investigates green SCM indicators with respect to the manufacturing firms of Pakistan and, moreover, is the first work to evaluate green SCM practices using the Fuzzy AHP approach. The Delphi method is employed to determine and refine the most important green indicators based on experts' feedback. Then, the Fuzzy AHP method is used to evaluate and rank the significant green indicators (criteria), sub-indicators (sub-criteria), and manufacturing firms (alternatives) from the perspective of green supply chain practices. These green indicators can also be employed as benchmarking tools for analyzing firms' green SCM activities.

The remaining sections of the study are organized as follows: Section 2 presents the related studies and proposes the green indicators for supply chain operations. Section 3 presents the proposed decision framework for the study. Section 4 clarifies the results and discussion. Finally, Section 5 provides the conclusion and managerial implications.

### **2. Related Studies**

Green SCM is a growing topic of interest in professional and academic circles, focused on green process enhancement, reverse logistics and waste reduction, increasing product life cycle quality, and decreasing harmful environmental activities [22]. Green SCM emphasizes green activities, which are thoroughly related to sustainable environmental practices [23]. Meanwhile, traditional SCM practices such as raw material production, distribution, and material waste are hazardous to the environment, creating a negative environmental impact and acting as a source of pollution. Therefore, to protect the environment, it is essential to take into account green practices such as green manufacturing, green packaging, and reverse logistics in overall SCM operations [24]. Various countries have planned to implement environmental standards and regulations for industries to protect the environment from unwanted activities. These standards require industries to adopt green and environmentally friendly strategies across their entire SCM activities for sustainable environmental, economic, and social development.

In previous studies, various green-based SCM activities have been examined with different aims and objectives. MCDM-based approaches are often used to determine feasible options for the implementation of green SCM activities in firms. MCDM is a branch of operations research methods that supports breaking a multi-faceted decision-making problem into tractable sub-problems [25]. Mirko et al. [26] conducted a literature survey on sustainability engineering issues using MCDM applications from 2008–2018; the findings of the study show that MCDM methods are very suitable for solving sustainability decision problems. MCDM greatly helps in structuring and prioritizing the decision problem; it supports decision-makers in analyzing, selecting, and ranking alternatives based on the evaluation of the various criteria of the decision problem [27]. This study evaluates green supply chain management (SCM) practices from the perspective of Pakistani manufacturing firms.

### *2.1. Application of MCDM Approaches in Green Supply Chain Management Practices*

MCDM approaches such as the Analytical Hierarchy Process (AHP), Decision Making Trial and Evaluation Laboratory (DEMATEL), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Analytic Network Process (ANP), linear programming, and fuzzy programming are widely suitable and significant for evaluating green SCM decision problems. These approaches provide direction in shifting toward sustainable and green SCM operations, which helps in making supply chains environmentally sustainable. Table 1 summarizes the MCDM methods used in green SCM practices.

It is identified that there are various studies relating to the adoption of green SCM using MCDM methods. This study contributes further by developing a hybrid decision methodology comprised of a Delphi and Fuzzy AHP to evaluate and rank the green SCM practices in the context of Pakistani manufacturing firms.

### *2.2. Proposed Green Indicators for Supply Chain Practices*

This study identifies several important green indicators for implementing green SCM practices. These green indicators are considered a supporting tool for supply chain activities. In this study, a comprehensive literature review has been carried out to identify the important indicators from the perspective of green supply chain practices. Thus, six green indicators and twenty-two sub-indicators were identified after a thorough literature review. These green indicators are: green design (G1), green purchasing (G2), green production (G3), green warehousing (G4), green logistics (G5), and reverse logistics (G6). Table 2 summarizes the green SCM indicators and sub-indicators.




**Table 2.** Green supply chain management indicators and sub-indicators.

It is identified that MCDM approaches are widely used in the implementation of green SCM practices. These approaches are considered very significant for solving complex decision problems. To the best of the authors' knowledge, this is the first attempt to investigate green supply activities from the perspective of Pakistan. In the present study, the Delphi and Fuzzy AHP methods have been used to evaluate the green SCM practices of manufacturing firms in Pakistan.

### **3. Research Methodology**

This study uses the Delphi and Fuzzy AHP methods to determine the important indicators and sub-indicators for evaluating green SCM practices in Pakistan. Firstly, this study reviews the literature comprehensively to find the important green indicators and sub-indicators. Then, the Delphi method is employed to finalize the green indicators in the context of Pakistan. Finally, the Fuzzy AHP method is used to determine the weights and ranking of green indicators (main criteria), sub-indicators (sub-criteria), and the green activities of three garment manufacturing firms (alternatives) for the successful adoption of green SCM practices. Ten experts participated in the Delphi and Fuzzy AHP processes; the consulted experts were knowledgeable about green practices in SCM. These experts included three research fellows, three university professors, two stakeholders, and two government institute analysts. Figure 1 presents the research methodology of the present study.

**Figure 1.** Decision methodology of the study.

### *3.1. Delphi Method*

In this study, the Delphi approach is applied to finalize the green SCM indicators and sub-indicators for the adoption of green initiatives in the manufacturing firms of Pakistan. The main purpose of the Delphi method is to collect experts' opinions about a decision problem through semi-structured interviews, group discussions, and questionnaires [53]. In this method, experts with professional and relevant field experience share their opinions, ideas, knowledge, and expertise to reach a mutual understanding of the problem [54,55]. The Delphi process comprises several steps: the selection of experts, a first round of questionnaire survey, and second and third rounds of the survey; this process is continued or repeated until the experts reach mutual consensus [56]. The method does not restrict the number of experts providing feedback on a decision problem; different studies have proposed different numbers of experts and rules for the validation of the data, although a panel of between 9 and 18 experts is recommended [57]. Therefore, in this study, ten experts were consulted and participated in providing meaningful feedback about the decision problem. The questionnaire survey was sent to these experts through a webmail service, and each expert was asked to assign weights in the survey instrument using a Likert scale between 1 and 5 (from not at all important to extremely important). The survey instrument is attached in Appendix A.
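A hedged sketch of how such a Delphi screening round might aggregate the 1–5 Likert scores: each indicator is kept if its mean rating clears a threshold. The 3.5 cut-off and the ratings below are illustrative assumptions, not values from the study.

```python
def delphi_screen(scores, threshold=3.5):
    """scores: {indicator: [expert ratings on a 1-5 Likert scale]}.
    Returns the indicators whose mean rating meets the threshold, with their means."""
    return {ind: sum(r) / len(r) for ind, r in scores.items()
            if sum(r) / len(r) >= threshold}

ratings = {
    'green purchasing':  [5, 4, 5, 4, 5, 4, 4, 5, 4, 5],  # ten hypothetical experts
    'green warehousing': [3, 3, 2, 4, 3, 3, 2, 3, 3, 4],
}
print(delphi_screen(ratings))   # indicators surviving this round
```

Indicators that fall below the threshold would be dropped or re-discussed in the next Delphi round until consensus is reached.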

### *3.2. Fuzzy Analytical Hierarchy Process Method*

AHP is a very significant MCDM method [57]. It is a four-step hierarchical approach used to rank criteria, sub-criteria, and alternatives according to the particular goal of the decision problem [58]. In this study, the fuzzy-based AHP is employed to decompose the decision problem into smaller sub-problems, because fuzzy set theory helps in handling incomplete information and immeasurable factors in a fuzzy environment [59]. The pairwise comparison is then operated in a matrix using TFNs [60] to evaluate and prioritize the green SCM practices in Pakistan. Table 3 presents the TFN rating scale employed in the present study.


**Table 3.** Linguistic variable and Triangular Fuzzy Numbers (TFNs) [61].

Gogus and Boucher [62] proposed an approach to calculate the inconsistency ratio of fuzzy pairwise comparison matrices. The steps of Fuzzy AHP are given below:

**Step 1**: Transform a fuzzy triangular matrix into two independent matrices. At this step, a fuzzy triangular matrix is divided into two matrices, assuming that the triangular fuzzy number is presented as follows.

$$X_i = (l_i, m_i, u_i) \tag{1}$$

Then, the first matrix can be created by middle numbers of the fuzzy triangular matrix, that is:

$$X_m = \left[ x_{ijm} \right] \tag{2}$$

The second matrix can be created by the geometric mean of the upper and lower bounds of the fuzzy triangular matrix, that is:

$$X_g = \left[ \sqrt{x_{iju}\, x_{ijl}} \right] \tag{3}$$

**Step 2**: Calculate the weight vector based on the Saaty method and calculate lambda max (λmax) for each matrix. **Step 3**: Calculate the consistency index (CI) for each matrix; the CI can be computed using the following equations:

$$CI_m = \frac{\lambda_{\max}^{m} - n}{n - 1} \tag{4}$$

$$CI_g = \frac{\lambda_{\max}^{g} - n}{n - 1} \tag{5}$$

**Step 4**: Calculate the consistency ratio (CR) of the matrices. For CR, the consistency index (CI) of each matrix is divided by its random index (RI).

$$CR_m = \frac{CI_m}{RI_m} \tag{6}$$

$$CR_g = \frac{CI_g}{RI_g} \tag{7}$$

The matrices are considered consistent if the values of *CRm* and *CRg* are less than 0.1; if either value exceeds 0.1, the judgments are considered inconsistent and do not provide significant results. Table 4 shows the values of the RI for each matrix of the Gogus and Boucher method.

The accomplishment of the above Fuzzy AHP steps would provide the results of green indicators, sub-indicators, and firms' activities (alternatives) for implementing green SCM in Pakistan.


**Table 4.** Random index (RI) scale for each matrix.
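Steps 1–4 above can be sketched end-to-end as follows. The 3×3 TFN matrix is illustrative, λmax is estimated with Saaty's normalized-column method, and the RI values passed in are assumptions standing in for Table 4.

```python
import math

def lambda_max(M):
    """Estimate the principal eigenvalue of a pairwise matrix (Saaty method)."""
    n = len(M)
    col = [sum(M[i][j] for i in range(n)) for j in range(n)]            # column sums
    w = [sum(M[i][j] / col[j] for j in range(n)) / n for i in range(n)] # weight vector
    return sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n

def consistency_ratios(tfn, ri_m, ri_g):
    """Split a TFN matrix into X_m and X_g (Eqs. 2-3), return CR_m, CR_g (Eqs. 4-7)."""
    n = len(tfn)
    Xm = [[tfn[i][j][1] for j in range(n)] for i in range(n)]            # middle values
    Xg = [[math.sqrt(tfn[i][j][2] * tfn[i][j][0]) for j in range(n)]     # geometric means
          for i in range(n)]
    ci = lambda M: (lambda_max(M) - n) / (n - 1)
    return ci(Xm) / ri_m, ci(Xg) / ri_g

tfn = [[(1, 1, 1),       (1, 2, 3),     (2, 3, 4)],
       [(1/3, 1/2, 1),   (1, 1, 1),     (1, 2, 3)],
       [(1/4, 1/3, 1/2), (1/3, 1/2, 1), (1, 1, 1)]]
cr_m, cr_g = consistency_ratios(tfn, ri_m=0.4890, ri_g=0.1796)  # n=3 RI values assumed
print(cr_m < 0.1 and cr_g < 0.1)   # both below 0.1 => matrix accepted
```

In practice, the RI denominators should be taken from Table 4 for the matrix size at hand, and any matrix failing the 0.1 test would be returned to the experts for revision.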

### **4. Results and Discussion**

In this study, a hybrid decision framework (i.e., Delphi and Fuzzy AHP) has been used to analyze actual garment manufacturing firms. In the case study, the firms are required to improve and transform their manufacturing processes into green activities to comply with environmental management regulations and SCM. Therefore, to deal with the problem of supplier selection, a firm should adopt green SCM indicators or criteria that follow environmental regulations. This decision methodology outlines a systematic and feasible approach for managers to evaluate and rank six important green indicators for supply chain operations. The results of the Fuzzy AHP method have been used to evaluate six green indicators (criteria), twenty-two sub-indicators (sub-criteria), and three firms (alternatives) for the adoption of green SCM practices.

### *4.1. Case Information*

Due to the increasing prosperity of the garment product and network market, manufacturing firms are producing at large scale to meet consumer demand [63]. Currently, several firms produce the largest share of professional garment manufacturing products in Pakistan, and these firms are also the largest exporters in the country. The firms are continuing to develop next-generation technology (i.e., Industry 4.0) to improve their competitiveness and satisfy customer demand [64,65]. Furthermore, garment manufacturing is rapidly transforming toward green SCM activities such as product design, production, and purchasing. The firms continue to develop new green technologies and green products to comply with the environmental regulations of the government. Thus, for a firm to survive in a competitive market, a proper green SCM system is very important.

It is essential to understand the role of green SCM practices in the firms' sustainable manufacturing processes. Therefore, this study develops green SCM indicators and sub-indicators to transform the firms' activities into green activities. Three garment manufacturing firms were analyzed. The experts identified a systematic procedure for assessing the green SCM practices of the firms. To select the most significant firm, this study implemented the decision methodology, and the firms were evaluated with respect to the proposed green indicators for SCM operations. The analysis obtained in this study provides suggestions to the firms and would also be very useful for an effective and efficient green SCM adoption process.

### *4.2. Results of Fuzzy AHP Method*

The Fuzzy AHP approach was carried out to determine the weights of the various green indicators and sub-indicators with respect to the decision methodology of the study. The opinions of ten experts were collected to produce the results. Therefore, a group decision-making approach was used to obtain the final weights of green SCM indicators and sub-indicators in the context of Pakistan. The detailed fuzzy pairwise comparison matrices of the indicators and sub-indicators with respect to the goal are presented in Appendix B. In the following sections, the results of the Fuzzy AHP method are analyzed.
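As a minimal sketch of how crisp weights can be obtained from such a fuzzy pairwise comparison matrix, one common variant is Buckley's geometric-mean method over triangular fuzzy numbers (the paper's exact Fuzzy AHP variant may differ, and the 3 × 3 matrix below is illustrative, not the aggregated expert data):

```python
import numpy as np

# Triangular fuzzy numbers (l, m, u); a hypothetical 3-criterion matrix,
# M[i][j] = fuzzy judgement of criterion i over criterion j.
M = np.array([
    [[1, 1, 1],       [1, 2, 3],     [2, 3, 4]],
    [[1/3, 1/2, 1],   [1, 1, 1],     [1, 2, 3]],
    [[1/4, 1/3, 1/2], [1/3, 1/2, 1], [1, 1, 1]],
])

# Buckley's method: fuzzy geometric mean of each row ...
r = np.prod(M, axis=1) ** (1 / M.shape[0])   # shape (n, 3): (l, m, u) per row
# ... normalised so lower bounds divide the sum of uppers and vice versa.
total = r.sum(axis=0)                        # (sum_l, sum_m, sum_u)
w_fuzzy = r / total[::-1]                    # (l/sum_u, m/sum_m, u/sum_l)
# Centroid defuzzification, then renormalise to crisp weights.
w = w_fuzzy.mean(axis=1)
w /= w.sum()
print(np.round(w, 3))
```

Each judgement (l, m, u) encodes the pessimistic, most likely, and optimistic ratio; centroid defuzzification collapses the fuzzy weights into crisp values of the kind reported in Table 5.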

### 4.2.1. Ranking of Green Indicators

Table 5 presents the weights and ranking of the indicators for green SCM practices in Pakistan. The results show that green purchasing (G2), with a weight of 0.253, is the most important indicator for the sustainable development of green SCM practices in the country. The second most important indicator was green design (G1), with a weight of 0.228, while the others ranked in the following order: green production (G3) with a weight of 0.192, green logistics (G5) with a weight of 0.152, reverse logistics (G6) with a weight of 0.106, and green warehousing (G4) with a weight of 0.069. This ranking establishes the importance of each indicator for the sustainable implementation of green SCM practices in Pakistan.


**Table 5.** The indicators (criteria) results with respect to the goal.

### 4.2.2. Ranking of Sub-Indicators (Green Design)

Figure 2 depicts the weights and ranking of sub-indicators with respect to green design (G1). Among these sub-indicators, eco-design products (G11), with a weight of 0.373, is the most significant for the implementation of green design in green SCM practices. Reuse and recycle the product (G14) and reduce the consumption of materials (G13) are recognized as the second and third most important sub-indicators, with weights of 0.288 and 0.228, respectively. Meanwhile, reducing the use of hazardous products (G12) is identified as the least important sub-indicator with respect to green design. Hence, for the sustainable development of green SCM practices in Pakistan, it is necessary to consider the green design sub-indicators.

**Figure 2.** The sub-indicators results with respect to the green design (G1).

### 4.2.3. Ranking of Sub-Indicators (Green Purchasing)

Figure 3 depicts the weights and ranking of sub-indicators with respect to green purchasing (G2). Selecting an eco-friendly supplier (G21), with a weight of 0.429, ranks first among the sub-criteria for the evaluation of green SCM practices in the country. Meanwhile, pushing suppliers to take eco-friendly actions (G23) and purchasing eco-friendly raw materials (G22), with weights of 0.356 and 0.215, are the second and third most important sub-indicators from the perspective of green SCM in Pakistan.

**Figure 3.** The sub-indicators results with respect to green purchasing (G2).

4.2.4. Ranking of Sub-Indicators (Green Production)

Figure 4 presents the weights and ranking of sub-indicators with respect to green production (G3). In the ranking of the sub-indicators, cleaner production (G31) is identified as the most significant sub-indicator, with a weight of 0.411, for the evaluation of green SCM practices in the country. Further, reducing the environmental impact of operations (G33) and lean production (G32) follow, with weights of 0.328 and 0.183, respectively. Reducing the amount of scrap (G34) is determined to be the least significant sub-indicator. All these sub-indicators are crucial for the development of green SCM practices in Pakistan.

**Figure 4.** The sub-indicators results with respect to green production (G3).

### 4.2.5. Ranking of Sub-Indicators (Green Warehousing)

Figure 5 presents the weights and ranking of sub-indicators with respect to green warehousing (G4). In the G4 category, eco-packaging (G41) was prioritized as the top-ranked sub-indicator with a weight of 0.373. The remaining sub-indicators were prioritized as follows: reducing inventory levels (G43), sale of excess inventories (G42), and sale of scrap materials (G44), respectively. Therefore, key steps must be taken by managers and governments to analyze these indicators for implementing green SCM practices.

**Figure 5.** The sub-indicators results with respect to green warehousing (G4).

4.2.6. Ranking of Sub-Indicators (Green Logistics)

Figure 6 illustrates the weights and ranking of sub-indicators with respect to green logistics (G5). The results indicate that using eco-friendly transportation (G52) is the most important sub-indicator, with a weight of 0.417, for carrying out green SCM practices in Pakistan. Meanwhile, eco-friendly distribution (G53), with a weight of 0.347, is recognized as the second most important sub-indicator from the perspective of green logistics, and reducing fuel consumption (G51) is considered the least significant sub-indicator for the development of sustainable green activities in SCM.

**Figure 6.** The sub-indicators results with respect to green logistics (G5).

### 4.2.7. Ranking of Sub-Indicators (Reverse Logistics)

Figure 7 indicates the weights and ranking of sub-indicators with respect to reverse logistics (G6). Developing an environmental management system (G61) was prioritized as the most influential sub-indicator, with a weight of 0.382. The use of alternative energy sources (G62) is ranked as the second most important sub-indicator, while recycling end-of-life products (G63) and use of the waste of other industries (G64) follow as third and fourth, respectively.

**Figure 7.** The sub-indicators results with respect to the reverse logistics (G6).

### *4.3. Ranking of Overall Sub-Indicators*

Table 6 presents the overall ranking of green sub-indicators with respect to the goal. Overall, twenty-two sub-indicators were analyzed for the evaluation of green SCM practices. The results indicate that "select the eco-friendly supplier (G21)", "pushing supplier to take eco-friendly actions (G23)", and "eco-design products (G11)" are the most significant sub-indicators (sub-criteria), while "sale of excess inventories (G42)", "use of waste of other industries (G64)", and "sale of scrap materials (G44)" are considered the least significant sub-indicators for the development and adoption of green SCM practices in Pakistan.
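The overall ranking follows from multiplying each local sub-indicator weight by its parent indicator's weight. A small sketch using only the local weights quoted earlier for G1 and G2 (G12 is omitted because its local weight is not quoted in the text) already reproduces the reported top three:

```python
# Indicator weights from Table 5 and the local sub-indicator weights
# reported for green design (G1) and green purchasing (G2).
indicator_w = {"G1": 0.228, "G2": 0.253}
local_w = {
    "G11": ("G1", 0.373), "G13": ("G1", 0.228), "G14": ("G1", 0.288),
    "G21": ("G2", 0.429), "G22": ("G2", 0.215), "G23": ("G2", 0.356),
}

# Global weight = parent indicator weight x local sub-indicator weight.
global_w = {s: indicator_w[p] * w for s, (p, w) in local_w.items()}
ranking = sorted(global_w, key=global_w.get, reverse=True)
print(ranking[:3])  # ['G21', 'G23', 'G11'] — matches the reported top three
```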

### *4.4. Ranking of Manufacturing Firms (Alternatives)*

The above section provided the results for the six green indicators and twenty-two sub-indicators; in this section, the results for the three manufacturing firms (alternatives) are analyzed using the Fuzzy AHP method. Figure 8 presents the final ranking order of alternatives with respect to the goal. The names of the firms are not disclosed for reasons of privacy concerning the implementation of green SCM activities. The experts' analysis assisted in obtaining consistent and reliable results for the Fuzzy AHP approach. The results indicate that garment manufacturing firm F1 is the most significant option for the development of green SCM practices, while F2 is the second and F3 the third most suitable firm for the adoption of green supply practices in Pakistan.


**Table 6.** The weights and ranking of overall sub-indicators.

### *4.5. Discussion*

In this study, three garment manufacturing firms of Pakistan were selected as a case study. Each of the firms was evaluated against the proposed green indicators and sub-indicators for implementing sustainable SCM practices. The proposed decision methodology was successfully applied to this complex decision problem. The results indicated that manufacturing firm F1 is the most suitable for performing and implementing green activities. Additionally, the Fuzzy AHP results for the green indicators revealed that green purchasing (G2), with a weight of 0.253, was placed in first priority; green design (G1), with 0.228, in second place; green production (G3), with 0.192, in third place; green logistics (G5), with 0.152, in fourth place; reverse logistics (G6), with 0.106, in fifth place; and green warehousing (G4), with 0.069, acquired the lowest importance. Further, the five most important green sub-indicators are selecting the eco-friendly supplier, pushing suppliers to take eco-friendly actions, eco-design products, cleaner production, and reuse and recycle the product.

This is the first study that identifies and assesses green indicators for supply chain activities in Pakistan. However, there are various studies pertaining to the evaluation of green SCM with different aims and objectives. The findings of Wu et al. [30] show that environmental policy objectives and top-level management support are very important factors for shifting any firm to sustainable SCM operations; they also indicate that green purchasing is the most important factor in supply management operations. The study by Tseng et al. [32] found that supplier involvement and process control are two important aspects in determining the closed-loop hierarchical structure in green SCM; these findings are very similar to those of this study, in which selecting an eco-friendly supplier is recognized as the most important green sub-indicator for SCM practices. The results of the case study are similar to Deng et al. [35], who found that green purchasing and green design are the most important green criteria for evaluating firm activities toward sustainable SCM. The results of this study are also in line with Korrakot et al. [36], who indicated that green procurement is the most important green driver for implementing green SCM practices. Moreover, the results are partially similar to the study of Chen et al. [33], who investigated business strategy for green SCM using the ANP method and found that green design, green purchasing, and green manufacturing are the significant green strategies for SCM activities. In a broader sense, the decision methodology can be utilized as an analytical approach to propose and select a strategic environmental development plan for the green SCM practices of firms. To obtain significant results, managers should understand the firms' green SCM assessment indicators. To the best of our knowledge, none of the previous studies used a combined Delphi and Fuzzy AHP model to evaluate green indicators and sub-indicators for assessing firms' green SCM practices in the context of Pakistan.

The developed decision methodology was validated through the case study of Pakistan, and the manufacturing companies (alternatives) were assessed in a fuzzy environment considering the uncertainty and vagueness of real-life cases. Various studies investigate the green performance of firms, but in this study the Fuzzy AHP methodology is used for assessing and prioritizing the overall green SCM practices of the firms. The assessment procedure enables firms to benchmark their green practices against other firms and to obtain useful insight into areas of improvement. This study would therefore greatly help managers and governments to initiate green SCM practices in the country.

### **5. Conclusions**

In this study, green SCM practices have been evaluated from the perspective of Pakistan. The main aim was to develop important indicators for analyzing green SCM practices using the Delphi and Fuzzy AHP methodologies. This is a first attempt to evaluate green SCM practices for minimizing the negative impact of supply chain operations on the environment in this context. Industries have to improve their ability to adopt green activities based on environmental regulations and practices. This study contributes to green SCM practice by developing significant indicators based on the literature review. Various important and validated indicators were thus determined for the successful implementation of green SCM in the country. To achieve sustainable development goals, it is necessary for a country to focus on green practices. Various studies have comprehensively discussed the basics of green SCM implementation; however, this remains a critical topic, and more research is needed to explore and analyze the development of green SCM practices in Pakistan.

Decision-making and evaluation processes in real life are often difficult because numerous uncertainties and much fuzziness exist. This research has developed a systematic decision framework comprising the Delphi and Fuzzy AHP methods to evaluate the green indicators while handling vague and inconsistent data appropriately. Three garment manufacturing firms were selected for the analysis. The Fuzzy AHP results identified green purchasing (G2), green design (G1), and green production (G3) as the most important indicators for the adoption of green SCM practices in Pakistan, while green logistics (G5), reverse logistics (G6), and green warehousing (G4) are considered the least important indicators for the development of green SCM. Moreover, the leading sub-indicators in green SCM practices were G21, G23, and G11. The results for the alternatives reveal that F1 is the most suitable manufacturing firm for the sustainable implementation of green SCM practices in Pakistan, with F2 and F3 ranked second and third in the adoption of green practices. This decision-making process would assist managers in analyzing and selecting the optimal firm for green practices in Pakistan.

### *5.1. Managerial Implications*

A hybrid decision methodology has been developed to evaluate green practices with respect to green SCM. Firms can derive advantage from the decision methodology developed in this study, which can be employed as a roadmap toward a consensus understanding when assessing firms' activities in green SCM. Based on the findings of this study and the green evaluation tool developed, firms can now determine various ways to enhance green practices and reduce environmental impacts. Thus, managers can develop resilient relationships with their partners, build on their strengths, and take the necessary actions to overcome weaknesses. The digitalization of the firms can be enabled by adopting Industry 4.0 approaches and by systematically analyzing sustainability-related issues within the decision problem. Moreover, the results of the study can also assist managers in selecting the best green SCM partner for future cooperation and collaboration. Therefore, the results are very significant for applying the developed decision framework to green production practices.

### *5.2. Limitations and Future Research Work*

This research has several key limitations. For instance, the managers of the firms did not participate in the study due to difficulties in approaching and accessing them. In future research, top-level managers could be involved in the decision-making process, which should improve the consistency and robustness of the proposed research framework. Another limitation is that this study used the Fuzzy AHP method; it would also be valuable to apply other MCDM techniques, such as ANP, DEA, VIKOR, DEMATEL, and ELECTRE, to compare the results of the proposed framework and to analyze the decision problem in further case studies on supply chain practices. This is an outline for future research. This study provides a basis for proposing green indicators and implementing green SCM activities in the context of Pakistani firms. Additionally, future research should consider the horizontal integration of supply chain management practices with Industry 4.0 and environmental sustainability-related issues to analyze the decision problem more comprehensively.

**Author Contributions:** Conceptualization, L.X.; methodology, Y.Z.; validation, G.M.S.; formal analysis, Y.Z.; investigation, L.X. and G.M.S.; writing—original draft preparation, Y.Z.; writing—review and editing, L.X. and G.M.S.

**Funding:** This paper received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.


**Table A2.** Please rate your opinion on a 5-point Likert scale on the following indicators in terms of their importance in evaluating and prioritizing green SCM practices in Pakistan.


**Table 3.** Additional green SCM indicators relating to the problem mentioned above, if any.

Green design (G1) Green purchasing (G2) Green production (G3) Green warehousing (G4) Green logistics (G5) Reverse logistics (G6)

### **B. Pairwise Comparison Matrix of Fuzzy AHP Method**

**Table 4.** Fuzzy pairwise comparison matrix with respect to the goal.


**Table 5.** Fuzzy pairwise comparison matrix with respect to the green design (G1).


CRm = 0.0208 and CRg = 0.0506.

**Table 6.** Fuzzy pairwise comparison matrix with respect to the green purchasing (G2).


CRm = 0.007 and CRg = 0.0152.

**Table 7.** Fuzzy pairwise comparison matrix with respect to the green production (G3).


CRm = 0.0198 and CRg = 0.0456.

**Table 8.** Fuzzy pairwise comparison matrix with respect to the green warehousing (G4).


CRm = 0.023 and CRg = 0.0516.


**Table 9.** Fuzzy pairwise comparison matrix with respect to the green logistics (G5).

**Table 10.** Fuzzy pairwise comparison matrix with respect to the reverse logistics (G6).


CRm = 0.0174 and CRg = 0.0464.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **Cloud-Based Multi-Robot Path Planning in Complex and Crowded Environment with Multi-Criteria Decision Making Using Full Consistency Method**

### **Novak Zagradjanin 1, Dragan Pamucar 2,\* and Kosta Jovanovic 1**


Received: 14 July 2019; Accepted: 24 September 2019; Published: 4 October 2019

**Abstract:** Progress in research across robotics, artificial intelligence, and related scientific disciplines has enabled the application of multi-robot systems in different complex environments and situations. Above all, it is necessary to carefully elaborate strategies for path planning and path coordination in order to efficiently execute a global mission in a common environment. This paper considers a cloud-based multi-robot system with a high level of autonomy, intended for the execution of tasks in a complex and crowded environment. The cloud approach shifts the computational load from the agents to the cloud and provides powerful processing capabilities to the multi-robot system. The proposed concept uses a multi-robot path planning algorithm that can operate in an environment that is unknown in advance. With the aim of improving the efficiency of path planning, the implementation of multi-criteria decision making (MCDM) using the Full Consistency Method (FUCOM) is proposed. FUCOM guarantees consistent determination of the weights of the factors affecting the robots' motion, whether symmetric or asymmetric, with respect to mission specifics that require managing multiple risks arising from different sources, thereby optimizing the global cost map.

**Keywords:** multi-robot systems; path planning; D\* Lite; FUCOM; cloud technology

### **1. Introduction**

Path planning is of fundamental importance in mobile robotics. Algorithms for path planning are intended to generate a collision-free path between the start and the goal point within the configuration space of the robot while satisfying certain optimization criteria. The configuration space is a concept that completely specifies the robot's location in its workspace, including the specification of all degrees of freedom [1]. Path planning for multi-robot systems is much harder than for a single robot, since the size of the joint configuration space grows exponentially with the number of robots [2]. Consequently, algorithms for single-robot path planning cannot be directly applied to multi-robot systems.

The existing methods for multi-robot path planning can, from the aspect of the implemented algorithms, be divided into two general approaches [3]. The coupled approach regards the group of robots as a single entity, such that all paths are planned simultaneously in a joint or composite configuration space; it can therefore guarantee completeness, but these solutions do not scale well to large robot teams and usually cannot run in real time. The decoupled approach first computes separate paths for the individual robots and then employs different strategies to resolve possible conflicts. These solutions are usually fast enough for real-time applications, but they cannot guarantee completeness, and the robots can easily get stuck in common deadlock situations.

Planning the motion of a team of robots that perform tasks in a real environment involves a large number of challenges. Firstly, the real environment is usually a highly dynamic place, so it is difficult to provide a precise planning map (the current map becomes outdated within a short period of time). Secondly, the time available to the robots for deliberation is usually very limited: they must make decisions quickly and act on them.

This paper considers a cloud-based multi-robot system with a high level of autonomy, intended for the execution of tasks in a complex and crowded environment. A common approach to path planning in robotics is to integrate an approximate but fast global planner with a precise local planner [4]. This means that, in a crowded environment, the global planner usually computes paths that ignore the crowds. Subsequently, the local planner takes the crowds into account, as well as the kinematic and dynamic constraints of the robot, and creates feasible local trajectories. However, if the global planner totally ignores the costs of navigating through a crowd, such a plan might prove inefficient [5]. Moreover, complex environments may have, besides crowds and static obstacles, other characteristics that influence the motion of the robots and are desirable to consider during path planning at the global level.

With the aim of taking the crowds into consideration in the initial phase of planning at the global level (to form a global cost map), as well as other conditions of the environment, the implementation of multi-criteria decision making (MCDM) is presented in this paper. Particular attention is paid to the problem of determining the criteria weights, and for that purpose FUCOM [6] is proposed. FUCOM belongs to the newer methods for determining the weight coefficients of criteria in MCDM. The application of this method is growing due to its advantages, which are described in detail in this paper. Some implementations of FUCOM can be found in [7–10]. To the best of our knowledge, a FUCOM-based approach has not previously been used for path planning in robotics.
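To illustrate the core idea of FUCOM with a sketch: in the fully consistent case (deviation χ = 0), the weight coefficients follow directly from the expert's comparative priorities, since w_k / w_{k+1} = φ_{k/(k+1)}. In the general case a small optimization problem minimizing χ is solved instead; the criteria names and priority values below are hypothetical.

```python
def fucom_weights(criteria, phi):
    """FUCOM weights for the fully consistent case (chi = 0), where the
    ratio w_k / w_{k+1} equals the comparative priority phi[k] and the
    criteria are pre-ranked, most important first."""
    raw = [1.0]
    for p in reversed(phi):        # build weights from worst to best
        raw.append(raw[-1] * p)
    raw.reverse()                  # now ordered best -> worst
    s = sum(raw)
    return {c: w / s for c, w in zip(criteria, raw)}

# Hypothetical example: three motion-cost criteria ranked by an expert,
# with comparative priorities phi(1/2) = 1.5 and phi(2/3) = 2.0.
w = fucom_weights(["crowd density", "terrain", "distance"], [1.5, 2.0])
print({k: round(v, 3) for k, v in w.items()})
```

Here the weight ratios reproduce the stated priorities exactly (0.5 / 0.333 = 1.5 and 0.333 / 0.167 = 2.0), which is what "full consistency" means.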

The environmental information necessary for the MCDM in this paper is provided by external datasets combined with data collected by the robots' onboard sensors. An alternative approach is based solely on the robots' online learning of the environment, but this technique will be the topic of our next research. For global path planning, the graph-based D\* Lite algorithm is used [11], adjusted for application to multi-robot path planning. A decoupled approach is implemented with a path coordination strategy.

The goal of the proposed approach is to provide a global cost map for computing the initial paths that matches the real environment as closely as possible. In this way, we try to reduce the total cost of the paths, i.e., to manage the risks in path planning in a crowded environment, keeping the robots at a certain safe distance from the crowd while simultaneously taking other conditions of the environment into consideration. The application of the FUCOM method provides an efficient strategy for solving this problem.
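The global cost map can be sketched as a weighted sum of normalized per-criterion cost layers, with the criteria weights supplied by an MCDM method such as FUCOM (the layer names, grid size, and weights below are hypothetical):

```python
import numpy as np

def global_cost_map(layers, weights):
    """Combine per-criterion cost layers into one global cost map.
    Each layer is normalised to [0, 1] before weighting so that no
    single criterion dominates merely through its scale."""
    cost = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, layer in layers.items():
        span = layer.max() - layer.min()
        norm = (layer - layer.min()) / span if span > 0 else layer * 0.0
        cost += weights[name] * norm
    return cost

# Two hypothetical 5x5 criterion layers (e.g., crowd density, terrain)
# combined with illustrative weights that sum to one.
rng = np.random.default_rng(0)
layers = {"crowd": rng.random((5, 5)), "terrain": rng.random((5, 5))}
cmap = global_cost_map(layers, {"crowd": 0.7, "terrain": 0.3})
```

The planner then searches this single scalar map, so cells that score badly on highly weighted criteria (e.g., crowd density) become expensive to traverse.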

This paper is organized as follows. Section 2 presents an overview of selected papers that deal with graph-based search in path planning, crowd-sensitive path planning, and the use of MCDM in forming a cost map. Section 3 describes the system architecture and its main "components": the D\* Lite algorithm and the FUCOM method. Sections 4–6 introduce the model and procedure for forming a cost map based on MCDM using FUCOM, describe the path planning in a simulated environment, and discuss the results. Finally, Section 7 presents the conclusions.

### **2. Related Work**

Similar to the path planning problem for a single robot, graph search methods based on the A\* algorithm are frequently used in path planning for multi-robot systems. In [12], the M\* algorithm is proposed, which employs so-called subdimensional expansion to plan in the joint configuration space, using A\* as the underlying path planning algorithm. It starts by planning in a low-dimensional subspace representing the configuration spaces of individual robots and increases the dimensionality when the need for coordination is detected. A method for optimizing the priorities for a decoupled and prioritized A\*-based path planning algorithm for multi-robot systems is presented in [13]. This method is a randomized approach that repeatedly reorders the agents to find a sequence for which a solution can be computed and to minimize the overall path lengths. A lattice-based method for multi-robot path planning for non-holonomic vehicles with an implemented A\* algorithm is presented in [14]. This method generates kinematically feasible motions for a multi-robot system. In [15], the A\* algorithm, in combination with a potential field approach, is used for path planning of a given set of mobile robots that move and avoid obstacles in a chained fashion. Reference [16] focuses on path planning and control of a group of autonomous agents that must accomplish multiple tasks in dynamic environments; it represents a line of research that emphasizes the influence of the implemented multi-robot task allocation approach on the efficiency of path planning.

The D\* algorithm, an extension of A\*, is a well-known informed incremental graph search algorithm for partially known environments. D\* Lite is an alternative to D\* that is at least as efficient, but algorithmically different and simpler [17]. It is one of the most popular path planning algorithms and is extensively used for mobile robot navigation in complex environments [18]. The D\*-based global path planner has been successfully demonstrated in many practical applications [19–21]. Several robots have used the D\* algorithm in combination with the Morphin local planner (a version of which provides local navigational autonomy for the NASA Mars Exploration Rovers [22]) to drive autonomously on rough terrain over long distances [23–25]. The D\* algorithm has also been successfully used as a planner for multi-robot systems [26], as has D\* Lite [27,28].
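D\* Lite adds incremental replanning machinery on top of an A\*-style heuristic search. The underlying best-first search over a grid cost map can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
import heapq

def astar(grid_cost, start, goal):
    """A*-style best-first search on a 4-connected grid; grid_cost[r][c]
    is the cost of entering cell (r, c). Manhattan distance is an
    admissible heuristic when every cell cost is >= 1."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0.0, start, [start])]   # (f, g, cell, path)
    best_g = {start: 0.0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return g, path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + grid_cost[nxt[0]][nxt[1]]
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set,
                                   (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# A crowded middle cell (cost 9) pushes the path around it.
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
cost, path = astar(grid, (0, 0), (2, 2))  # cost 4, path avoids (1, 1)
```

With a crowd-aware cost map plugged in as `grid_cost`, the same search naturally detours around high-density cells, which is the behavior the crowd-sensitive planners discussed next aim for.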

Crowd-sensitive path planning for single robots or multi-robot systems has been a very popular topic in the robotics community in recent years. In [5] and [29], four A\*-based crowd-sensitive path planners (CSA\*, Flow-A\*, Risk-A\*, and CUSUM-A\*) are presented. All of them are intended for global navigation in a crowded environment, under the assumption that the robot is limited to onboard two-dimensional (2D) range sensors. This approach formulates the problem as Bayesian online learning. Each cell in the crowd density map is modeled as a Poisson distribution with rate λ. A Poisson distribution is commonly used to model natural discrete events, including crowd counting [30]. Initially, when the robot has no information regarding the crowd, each of these algorithms generates plans that are identical to those of A\*. However, as the robot travels and observes the crowd, it updates its cost map. Once enough information is gathered through map updates, the cost maps force the planner to efficiently avoid crowded areas. CSA\* simply avoids crowded areas. On the other hand, Flow-A\* learns and incorporates the direction of crowd flow in the environment to allow the robot to follow social norms (avoid going against the flow of the crowd). However, people usually change their behavior in the presence of robots, and in such scenarios the learned model should account for these interactions; Risk-A\* uses that approach and improves safety. To detect and adjust for temporal changes in crowd patterns, a statistical change detection technique can be used. These methods allow CUSUM-A\* to forget old models and learn new ones that reflect the current state of the environment. CSA\*, Flow-A\*, Risk-A\*, and CUSUM-A\* are intended primarily for indoor crowded environments, but a similar approach can be applied in outdoor crowded environments. However, they take into account only one factor that influences the motion of the robots at the global level. In reality, there are usually more such factors.
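Because the Gamma distribution is conjugate to the Poisson, the online update of a cell's crowd rate λ reduces to simple counting (a sketch of the Bayesian update idea, not the cited implementations):

```python
# Poisson counts with a conjugate Gamma(alpha, beta) prior give the
# posterior Gamma(alpha + sum(counts), beta + number_of_observations).
def update_rate(alpha, beta, counts):
    alpha += sum(counts)
    beta += len(counts)
    return alpha, beta, alpha / beta   # posterior mean of lambda

# Vague prior, then three passes observing 4, 6, and 5 people in a cell.
a, b, lam = update_rate(1.0, 1.0, [4, 6, 5])
print(round(lam, 2))  # posterior mean rate: 4.0
```

Each map update of this kind sharpens the cell's rate estimate, which is how the cost map gradually comes to reflect the true crowd distribution.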

A dynamic decision support system (DSS) for mission planning and control of autonomous underwater vehicles (AUVs) in complex environments with real-world operational constraints is proposed in [31]. The component of the DSS intended to reduce the AUV path solution space in complex environments is based on MCDM using the analytic network process (ANP) and fuzzy logic. The ANP is used to define the importance weights of the path planning decision factors (ten factors). The ANP is chosen because it is a more general form of the analytic hierarchy process (AHP): ANP does not require independence, and it allows decision factors to 'influence' or 'be influenced by' other factors in the model. Although this method is applied to the planning and control of underwater gliders, it was concluded that the presented approach has much broader applications to unmanned vehicles in general.

### **3. System Overview**

Typical applications of multi-robot systems include the following: transport of goods and materials in industrial centers and stores, cleaning of closed and open areas, participation in search and rescue missions, reconnaissance, elimination of the consequences of emergencies, execution of military missions, etc. Above all, it is necessary to carefully elaborate strategies regarding the allocation of tasks among the robots, path planning, and path coordination in order to efficiently execute a global mission in any kind of common environment.

For wider application of multi-robot systems, the agents should be as simple as possible, energy efficient, and low-cost, yet still smart enough to reliably perform an assigned task with a high level of autonomy. These goals are contradictory, and cloud computing is a key enabler for resolving them. The idea is to design and create an architecture in which the multi-robot system uses the benefits of a converged infrastructure and the resources of the cloud (information, memory, communication, and others) [32–36]. In practice, this usually means that information gathering and processing are performed at the cloud level, while the robots, as cloud service users, obtain only the data and commands necessary for the direct execution of the allocated tasks. An overview of the cloud architecture and the whole system considered in this paper is presented in Figure 1.

**Figure 1.** Block diagram of the system architecture.

The "data base" module provides a map of the environment, as well as information regarding terrain configuration and crowd density distribution.

The "Robots" module simulates the motion of the robots in accordance with the given commands and data, as well as their sensor activities (environment perception). The robots have an emergency-stop mode for unpredicted situations. The multi-robot system proposed in this project consists of three robots (a homogeneous system).

The "system manager" module is the cloud level, i.e., it executes the most demanding tasks necessary for the functioning of the multi-robot system, such as: creating a cost map based on MCDM using FUCOM, robot task planning, path planning/replanning for each robot, path coordination, resolving conflict situations, and map updating based on data collected by the robots' sensors.

Figure 2 shows the allocation of computation time between the modules in one iteration of a randomly chosen scenario.

**Figure 2.** Gantt chart of computation time allocation between the modules.

From Figure 2, it can be seen that this approach significantly unloads the robots at the expense of using the cloud resources. The "system manager" executes the most demanding operations related to creating a global cost map based on MCDM using FUCOM, robot task planning, path planning and replanning, path coordination, etc. Only the data and commands necessary for the direct execution of the given tasks are delivered to the robots as users of the cloud services. As a part of the sequences related to the robots' activity, the cumulative time of their motion and perception with their own sensors is shown.

As the described architecture is modular, it can be expanded in many ways and adjusted for other scenarios. Depending on actual needs, the architecture can simply be adjusted for a multi-robot system with a larger number of robots of homogeneous or heterogeneous structure.

### *3.1. D\* Lite Algorithm*

To plan the robot motion, the environment may be represented as a 2D traversability grid with uniform resolution, in which each cell is assigned a real-valued traversal cost greater than zero, reflecting the difficulty of navigation in the corresponding area. This traversability grid is usually approximated as a discrete graph, after which a graph-based search technique can be applied for path planning. One way to do this is to allocate a node to each cell center, with edges connecting the node to each adjacent cell center (node). The cost of each edge is a combination of the traversal costs of the two cells it crosses and the length of the edge [4].
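As a concrete illustration, the following is a minimal Python sketch of this grid-to-graph conversion. It assumes an 8-connected neighborhood and takes the edge cost as the mean of the two cell costs multiplied by the edge length; this is one reasonable reading of the "combination" described above, not the authors' exact formula.

```python
import math

def grid_to_graph(cost, connectivity=8):
    """Convert a 2D traversability grid into an adjacency-map graph.

    cost[r][c] is the (positive) traversal cost of the cell at row r,
    column c. The edge cost between two adjacent cells is taken here as
    the mean of the two cell costs multiplied by the edge length
    (1 for axis-aligned moves, sqrt(2) for diagonal moves).
    """
    rows, cols = len(cost), len(cost[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        moves += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    graph = {}
    for r in range(rows):
        for c in range(cols):
            edges = {}
            for dr, dc in moves:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    length = math.hypot(dr, dc)
                    edges[(rr, cc)] = length * (cost[r][c] + cost[rr][cc]) / 2
            graph[(r, c)] = edges
    return graph
```

For a 2 × 2 grid with costs [[1, 1], [1, 3]], for example, the 4-connected edge from cell (0, 1) into cell (1, 1) costs (1 + 3)/2 = 2.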

With this in mind, suppose that *S* is the set of nodes in one such graph. Each node *s* ∈ *S* represents one cell of the map and is associated with the nodes representing neighboring cells. The set of successors of node *s* is denoted by *Succ*(*s*), while the set of predecessors of node *s* is denoted by *Pred*(*s*). For any pair of nodes *s*, *s*′ ∈ *S*, where *s*′ ∈ *Succ*(*s*), we define the cost of transition from *s* to *s*′, denoted by *c*(*s*, *s*′), to be positive: 0 < *c*(*s*, *s*′) ≤ ∞. In the general case, *c*(*s*, *s*′) ≠ *c*(*s*′, *s*). The cost of a path from node *s*<sub>0</sub> to node *s*<sub>k</sub>, where *s*<sub>0</sub>, *s*<sub>k</sub> ∈ *S*, is defined as the sum of the sequential costs of transitions between neighboring nodes (edge costs) in the sequence of nodes {*s*<sub>0</sub>, *s*<sub>1</sub>, ... , *s*<sub>k−1</sub>, *s*<sub>k</sub>}, i.e., as (*c*(*s*<sub>0</sub>, *s*<sub>1</sub>) + ... + *c*(*s*<sub>i−1</sub>, *s*<sub>i</sub>) + ... + *c*(*s*<sub>k−1</sub>, *s*<sub>k</sub>)), where *c*(*s*<sub>i−1</sub>, *s*<sub>i</sub>) represents the cost of moving from *s*<sub>i−1</sub> to *s*<sub>i</sub> and *s*<sub>i</sub> ∈ *Succ*(*s*<sub>i−1</sub>), 1 ≤ *i* ≤ *k*. The cost of the least-cost path from *s*<sub>0</sub> to *s*<sub>k</sub> is denoted by *c*\*(*s*<sub>0</sub>, *s*<sub>k</sub>). For *s*<sub>0</sub> = *s*<sub>k</sub>, we define *c*\*(*s*<sub>0</sub>, *s*<sub>k</sub>) = 0.

The goal of least-cost path search algorithms, such as A\*, is to find a path from *s*<sub>start</sub> to *s*<sub>goal</sub> whose cost is minimal, i.e., equal to *c*\*(*s*<sub>start</sub>, *s*<sub>goal</sub>). The D\* Lite algorithm, like A\*, forms and maintains (updates) during operation the values of four functions that describe a cell *s*:


Moreover, in addition to the above functions, D\* Lite, like A\*, forms and maintains (updates) a priority queue or list, *OPEN*. The *OPEN* list contains all inconsistent cells detected so far during the search, which are candidates for further processing in terms of propagating the inconsistency. A cell *s* becomes inconsistent if, during the search, its *g*(*s*) is reduced. At each step, the algorithm adds to the search tree the cell from the *OPEN* list that currently has the smallest *key* value (for the A\* algorithm, *key*(*s*) = *f*(*s*)), until it reaches the goal cell. By processing the cells from the *OPEN* list in order of minimum *key* value, the algorithm extends the lowest-total-cost path from the start toward the goal cell. The cells taken from the *OPEN* list and processed in this way are said to be expanded; the process itself is called cell expansion.

The A\* algorithm, unlike D\* Lite, assumes that the search for a solution is always performed over the same graph and with unchanged transition costs between the cells. In the real world, however, there are usually situations in which, at the moment the motion starts, the environment into which the robot is sent is only partially known. In addition, dynamic changes of the environment might occur during the robot's motion. In such situations, the path planning process usually starts with the assumption that all areas unknown at the initial moment are free to pass, while the robot explores the terrain with its sensors during motion and collects information for map updating. Changes in the map involve changes in the graph that the algorithm uses to generate a path. In such circumstances, the solution reached by the A\* algorithm may no longer be optimal or even valid. The A\* algorithm in this case calculates the path from scratch (with regard to the current position of the robot), without using the results from previous iterations. In these scenarios, the class of algorithms known as incremental algorithms is efficient: when new information appears, they reuse the search results from previous iterations to the maximum possible extent in order to correct the current solution or find a new valid one. D\* Lite belongs to this class of algorithms. It can be said that D\* Lite is an extension of A\* that copes much more efficiently with changes to the graph used for planning.

To be able to do this, in addition to the *g* value, D\* Lite forms and maintains (updates) for each cell a one-step lookahead cost *rhs*, which represents the path cost estimate derived from the *g* values of its neighbors: *rhs*(*s*) = min<sub>*s*′∈*Succ*(*s*)</sub>(*c*(*s*, *s*′) + *g*(*s*′)), or zero if *s* is the goal cell. In implementation, each cell maintains a pointer to the cell from which it derives its *rhs* value, so the robot should follow the pointers from its current cell to pursue an optimal path to the goal.

A cell is *consistent* if its *g* and *rhs* values are equal, otherwise it is *inconsistent* (it is called *overconsistent* if *g* > *rhs* and *underconsistent* if *g* < *rhs*). Overconsistent cells propagate path cost reductions, while underconsistent cells propagate path cost enlargements through the environment.

Like A\*, the D\* Lite algorithm uses a heuristic to focus and speed up the search. D\* Lite also maintains a priority queue, or list, of inconsistent cells (*OPEN*) to be expanded in the current search iteration. The prioritization of cells in the *OPEN* list during expansion is likewise based on the assigned *key* value. However, unlike the A\* algorithm, in D\* Lite the *key* value is a 1 × 2 row vector: *key*(*s*) = [*k*<sub>1</sub>(*s*), *k*<sub>2</sub>(*s*)] = [min(*g*(*s*), *rhs*(*s*)) + *h*(*s*<sub>start</sub>, *s*); min(*g*(*s*), *rhs*(*s*))]. The *key* value of cell *s* is less than or equal to the *key* value of cell *s*′, denoted *key*(*s*) ≤ *key*(*s*′) (meaning that *s* is a cell with higher or equal priority), if *k*<sub>1</sub>(*s*) < *k*<sub>1</sub>(*s*′), or if *k*<sub>1</sub>(*s*) = *k*<sub>1</sub>(*s*′) and *k*<sub>2</sub>(*s*) ≤ *k*<sub>2</sub>(*s*′).
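Because Python tuples compare lexicographically, this two-component priority rule maps directly onto ordinary tuple comparison. A small illustrative sketch (the function name and the cell values are hypothetical):

```python
def dstar_key(g, rhs, h_start_to_s):
    """D* Lite priority: key(s) = [min(g, rhs) + h(s_start, s); min(g, rhs)]."""
    m = min(g, rhs)
    return (m + h_start_to_s, m)

# Lexicographic tuple comparison implements the rule key(s) <= key(s'):
# k1 < k1', or k1 == k1' and k2 <= k2'.
a = dstar_key(g=5.0, rhs=3.0, h_start_to_s=2.0)   # (5.0, 3.0)
b = dstar_key(g=4.0, rhs=4.0, h_start_to_s=2.0)   # (6.0, 4.0)
assert a < b   # the first cell has the higher priority
```

This is why, in practice, the *OPEN* list can be a standard binary heap keyed on such tuples.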

The measure of search efficiency for graph-based algorithms is the number of expanded cells [37,38]. A solution is reached much more quickly when fewer cells are expanded.

D\* Lite generates an initial solution in a similar manner to the backward version of the A\* algorithm (i.e., the search is performed from *s*<sub>goal</sub> to *s*<sub>start</sub>). If the robot detects changes in the environment during motion (i.e., the cost of some edge is altered), D\* Lite first updates the *rhs* values of all cells directly affected by the changed edge cost. After that, the priority queue *OPEN* is updated, i.e., the algorithm places new inconsistent cells onto the queue. Subsequently, cells are expanded from the updated *OPEN* list according to the prioritization based on the assigned *key* value. This ensures the propagation of inconsistency. In this way, the D\* Lite algorithm checks the validity of the current path and corrects it if necessary. D\* Lite is efficient because it processes only those cells directly affected by the changes. In other words, by using the previously obtained results to calculate the corrected path, D\* Lite does not replan from scratch over the entire graph as A\* does. As a result, it can be up to two orders of magnitude more efficient than A\*.

Figure 3 presents the pseudocode of the basic version of D\* Lite [39].

**Figure 3.** D\* Lite algorithm (basic version).

The principle of D\* Lite that is presented in Figure 3 can be summarized as follows:

D\* Lite performs searches by assigning the current cells of the robot and the target to the start and goal cells of each search, respectively. The initialization process sets both the initial *g* and *rhs* values of all cells except the goal cell to infinity (lines 15–16). The goal cell is inserted into the priority queue (*OPEN*) because it is initially inconsistent (line 17). D\* Lite then finds a cost-minimal path from the start cell to the goal cell (line 19). In a real implementation, line 20 means that the computed path is being traversed by the robot (the robot makes a step-by-step transition cell by cell along the path, i.e., *s*<sub>start</sub> is changing). As the robot travels, it simultaneously observes the environment. If changes in edge costs are detected in some robot step, D\* Lite updates the *rhs* values of each cell immediately affected by the changed edge costs and places those cells that have become inconsistent onto the *OPEN* queue (lines 21–23). D\* Lite then propagates the effects of these *rhs* value changes to the rest of the cell space and checks/replans the path by calling the ComputePath() function again (line 19) until it terminates. Line 18 in a real implementation means that the whole process ends when *s*<sub>start</sub> = *s*<sub>goal</sub>.
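To make the mechanics above concrete, the following is a minimal, self-contained Python sketch of the *initial* D\* Lite search only: a backward search from goal to start over a 4-connected grid. It is not the authors' implementation. Replanning after edge-cost changes, the key-offset bookkeeping of full D\* Lite, and path extraction via *rhs* pointers are omitted; in the initial search only the overconsistent case can occur, so the underconsistent branch is left out. The edge-cost rule (mean of the two cell costs) and the assumption that cell costs are at least 1 (so the Manhattan heuristic stays admissible) are illustrative choices.

```python
import heapq

INF = float("inf")

def dstar_lite_initial(cell_cost, start, goal):
    """Initial (first) D* Lite search on a 4-connected grid: backward search
    from goal to start using g/rhs values and the two-component key.
    Returns the cost of the least-cost path from start to goal."""
    rows, cols = len(cell_cost), len(cell_cost[0])

    def nbrs(s):
        r, c = s
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and cell_cost[rr][cc] < INF:
                yield (rr, cc)

    def edge(a, b):  # transition cost: mean of the two cell costs
        return (cell_cost[a[0]][a[1]] + cell_cost[b[0]][b[1]]) / 2.0

    def h(s):  # Manhattan heuristic toward the start (backward search)
        return abs(s[0] - start[0]) + abs(s[1] - start[1])

    g, rhs = {}, {goal: 0.0}
    gv = lambda s: g.get(s, INF)
    rv = lambda s: rhs.get(s, INF)

    def key(s):
        m = min(gv(s), rv(s))
        return (m + h(s), m)

    open_list = [(key(goal), goal)]
    while open_list:
        k, s = heapq.heappop(open_list)
        if k > key(s):
            continue                      # stale queue entry
        if gv(start) == rv(start) and not (k < key(start)):
            break                         # start is consistent and settled
        g[s] = rv(s)                      # expand the overconsistent cell
        for n in nbrs(s):                 # refresh one-step lookahead values
            if n == goal:
                continue
            new_rhs = min(edge(n, p) + gv(p) for p in nbrs(n))
            if new_rhs != rv(n):
                rhs[n] = new_rhs
                if gv(n) != rv(n):        # n is inconsistent: (re)queue it
                    heapq.heappush(open_list, (key(n), n))
    return gv(start)
```

On a 3 × 3 grid of unit-cost cells, the least-cost path between opposite corners consists of four unit moves, so the function returns 4.0; blocking the center cell with an infinite cost leaves this value unchanged, since a border path of the same length exists.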

As noted earlier, in this paper the D\* Lite algorithm is adapted for application to multi-robot path planning. The decoupled approach is implemented with a path coordination strategy. The robots share knowledge about the environment through the cloud in order to perform a mission cooperatively.

### *3.2. Full Consistency Method (FUCOM)*

The FUCOM method is used to define the importance weights of the decision factors. FUCOM is one of the newer models that, like the Analytic Hierarchy Process (AHP) and the Best Worst Method (BWM), is based on the principles of pairwise comparison of criteria and the validation of results through a deviation from maximum consistency [6]. FUCOM is a model that, to some extent, eliminates the drawbacks of the BWM and AHP models. Benefits that are decisive for the application of FUCOM include a small number of pairwise comparisons of criteria (only *n* − 1 comparisons, where *n* is the number of criteria), the ability to validate the results by defining the deviation from full consistency (DFC) of the comparison, and accounting for transitivity during pairwise comparison. As with other subjective models for determining criteria weights (AHP, BWM, etc.), in the FUCOM model there is a subjective influence of the decision-maker on the final values of the criteria weights. This particularly refers to the first and second steps of FUCOM, in which decision-makers rank the criteria according to their personal preferences and perform a pairwise comparison of the ranked criteria. However, unlike other subjective models, FUCOM showed only minor deviations of the obtained criteria weights from the optimal values [6]. Additionally, the methodological procedure of FUCOM eliminates the problem of redundancy of pairwise comparisons, which is present in some subjective models for determining criteria weights.

Assume that there are *n* evaluation criteria in a multi-criteria model that are denoted as *wj*, *j* = 1, 2, ... , *n* and that their weight coefficients need to be determined. Subjective models for determining weights based on pairwise comparison of criteria require the decision-maker to determine the degree of influence between the criteria. In accordance with the defined settings, the next section presents the FUCOM algorithm, Figure 4 [6].


**Figure 4.** FUCOM algorithm.

### **4. Multi-Criteria Decision Making Model and Procedure**

We use a grid-based approach for map representation in order to navigate through the environment. This means that the map of the area of interest is converted into a 2D uniform grid of squares, referred to as 'cells'. Each cell is assigned a cost that depends on four criteria.

Criteria (factors):


The team of experts in charge of mission planning is responsible for decision-making. The assumption is that the experts have years of experience in the field of robot path planning, and that they have access to information regarding the conditions of the environment in which the robots move. Defining *wij* as the weight of factor *j* defined by expert *i* (*i* = 1, 2, ... , *I*; *j* = 1, 2, ... , *J*) and *Cmij* as the score of the *j*th factor for cell *m* provided by the *i*th expert (*i* = 1, 2, ... , *I*; *j* = 1, 2, ... , *J*; *m* = 1, 2, ... , *M*), we find *Cm*, the 'path planning index' of the *m*th cell. Equation (1) is used to aggregate the score of the *j*th factor given by the *I* experts for cell *m*, as well as its total score over all *J* factors:

$$\begin{aligned} \mathbf{C}\_{j}^{m} &= \left( \mathbf{C}\_{1j}^{m} w\_{1j} + \dots + \mathbf{C}\_{ij}^{m} w\_{ij} + \dots + \mathbf{C}\_{Ij}^{m} w\_{Ij} \right) / I \\ \mathbf{C}^{m} &= \left( \mathbf{C}\_{1}^{m} + \dots + \mathbf{C}\_{j}^{m} + \dots + \mathbf{C}\_{J}^{m} \right) \end{aligned} \tag{1}$$

The final path planning indices are then converted into a colour-coded system that reflects the risk level in a cell. In this way, the map is composed of green, yellow, orange, and red risk categories. Green is the lowest risk level and represents cells with 0 ≤ *C<sup>m</sup>* < 2.5. Yellow marks a moderate risk level and represents cells with 2.5 ≤ *C<sup>m</sup>* < 5.0. Orange indicates high risk and represents cells with 5.0 ≤ *C<sup>m</sup>* < 7.5, and red symbolises severe risk and represents cells with 7.5 ≤ *C<sup>m</sup>* ≤ 10. Each risk level implies its own cost of transition through the cell. This approach provides risk-sensitive planning.
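A small sketch of this scoring pipeline, i.e., Equation (1) followed by the colour thresholds. The expert scores in the usage example are invented for illustration; only the thresholds come from the text.

```python
def path_planning_index(scores, weights):
    """Equation (1): scores[i][j] is expert i's score of factor j for one
    cell, weights[i][j] the corresponding factor weight; returns C^m."""
    I, J = len(scores), len(scores[0])
    c_j = [sum(scores[i][j] * weights[i][j] for i in range(I)) / I
           for j in range(J)]
    return sum(c_j)

def risk_colour(c_m):
    """Map a path planning index in [0, 10] to the four risk categories."""
    if c_m < 2.5:
        return "green"
    if c_m < 5.0:
        return "yellow"
    if c_m < 7.5:
        return "orange"
    return "red"

# Illustrative only: two hypothetical experts, four factors, and the
# weights later obtained for the MCDM1 model.
idx = path_planning_index([[5, 2, 8, 1], [7, 2, 6, 1]],
                          [[0.313, 0.313, 0.313, 0.063]] * 2)
level = risk_colour(idx)   # 'yellow' for these example values
```

In the real system, each risk colour would additionally be mapped to a traversal cost for the D\* Lite grid; that mapping is mission-specific and is not fixed by the text.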

Data processing according to the previously mentioned steps can take a considerable amount of computing time. Hence, it is proposed to form the datasets of traversal costs at the cloud level, using its resources, in the preprocessing phase.

### **5. Simulation and Results**

Crowds may behave differently in the presence of robots. Some people make way for the robots; others hinder their motion. Areas that are attractive to children can be riskier than other crowded areas. In any case, a robot approaching a person (or vice versa) unacceptably closely represents a risky action to a greater or lesser extent.

The global cost map here is represented as a discrete grid with 100 × 100 cells. A complex environment was simulated to evaluate the performance of the proposed concept. In addition to static obstacles, the presence of pedestrians in initially free cells in the environment was simulated, with the probability of occurrence in proportion to the scores of appropriate criteria.

In the scenarios generated in this way, we evaluated the path planning performance of D\* Lite without MCDM and D\* Lite with MCDM by total travel distance and total number of risky actions (those that brought the robot within a distance of two cells of a pedestrian). The robot should execute its task quickly, travel relatively short distances, avoid collisions, and take relatively few risks. One such scenario is presented in Figure 5, with specifications as follows.

Start and goal positions of robots:


The first MCDM model—D\* Lite with MCDM1

Determining criteria weights:

*Step 1*. The decision-makers performed the ranking of the criteria: *C*<sub>3</sub> ≥ *C*<sub>2</sub> ≥ *C*<sub>1</sub> > *C*<sub>4</sub>.

*Step 2*. The decision-makers performed the pairwise comparison of the criteria ranked in Step 1. The comparison was made with respect to the first-ranked criterion *C*<sub>3</sub>, based on the scale [1, 9]. Thus, the priorities (*C*<sub>*j*(*k*)</sub>) of all the criteria ranked in Step 1 were obtained (Table 1).



Based on the obtained priorities of the criteria, the comparative priorities of the criteria are calculated: ϕ<sub>*C*3/*C*2</sub> = ϕ<sub>*C*2/*C*1</sub> = 1/1 = 1 and ϕ<sub>*C*1/*C*4</sub> = 5/1 = 5.

*Step 3*. The final values of the weight coefficients should meet the following two conditions: (1) the first condition: *w*<sub>3</sub>/*w*<sub>2</sub> = *w*<sub>2</sub>/*w*<sub>1</sub> = 1 and *w*<sub>1</sub>/*w*<sub>4</sub> = 5; (2) the second condition: the final values of the weight coefficients should meet the condition of mathematical transitivity, i.e., *w*<sub>3</sub>/*w*<sub>1</sub> = 1 · 1 = 1 and *w*<sub>2</sub>/*w*<sub>4</sub> = 1 · 5 = 5. The final model for determining the weight coefficients can be defined as:

$$\min \chi \quad \text{s.t.} \begin{cases} \left| \frac{w\_{3}}{w\_{2}} - 1 \right| \leq \chi, \; \left| \frac{w\_{2}}{w\_{1}} - 1 \right| \leq \chi, \; \left| \frac{w\_{1}}{w\_{4}} - 5 \right| \leq \chi, \\ \left| \frac{w\_{3}}{w\_{1}} - 1 \right| \leq \chi, \; \left| \frac{w\_{2}}{w\_{4}} - 5 \right| \leq \chi, \\ \sum\_{j=1}^{4} w\_{j} = 1, \; w\_{j} \geq 0, \; \forall j \end{cases}$$

By solving this problem with Lingo 17.0 software (Chicago, IL, USA), the final values of the weight coefficients (0.313, 0.313, 0.313, 0.063)<sup>T</sup> and a deviation from full consistency (DFC) of χ = 0.00 are obtained.

The second MCDM model—D\* Lite with MCDM2. Determining criteria weights:

*Step 1*. Ranking of the criteria: *C*<sub>3</sub> > *C*<sub>2</sub> > *C*<sub>1</sub> ≥ *C*<sub>4</sub>.

*Step 2*. In the second step, the decision-maker performed the pairwise comparison of the criteria ranked in Step 1. The comparison was made with respect to the first-ranked criterion *C*<sub>3</sub>, based on the scale [1, 9]. Thus, the priorities (*C*<sub>*j*(*k*)</sub>) of all the criteria ranked in Step 1 were obtained (Table 2).


Based on the obtained priorities of the criteria, the comparative priorities of the criteria are calculated: ϕ<sub>*C*3/*C*2</sub> = 4/1 = 4, ϕ<sub>*C*2/*C*1</sub> = 7/4 = 1.75, and ϕ<sub>*C*1/*C*4</sub> = 7/7 = 1.

*Step 3*. Nonlinear model constraints: (1) the first constraint: *w*<sub>3</sub>/*w*<sub>2</sub> = 4, *w*<sub>2</sub>/*w*<sub>1</sub> = 1.75, and *w*<sub>1</sub>/*w*<sub>4</sub> = 1; (2) the second constraint: the final values of the weight coefficients should meet the condition of mathematical transitivity, i.e., *w*<sub>3</sub>/*w*<sub>1</sub> = 4 · 1.75 = 7 and *w*<sub>2</sub>/*w*<sub>4</sub> = 1.75 · 1 = 1.75. The final model for determining the weight coefficients can be defined as:

$$\min \chi \quad \text{s.t.} \begin{cases} \left| \frac{w\_{3}}{w\_{2}} - 4 \right| \leq \chi, \; \left| \frac{w\_{2}}{w\_{1}} - 1.75 \right| \leq \chi, \; \left| \frac{w\_{1}}{w\_{4}} - 1 \right| \leq \chi, \\ \left| \frac{w\_{3}}{w\_{1}} - 7 \right| \leq \chi, \; \left| \frac{w\_{2}}{w\_{4}} - 1.75 \right| \leq \chi, \\ \sum\_{j=1}^{4} w\_{j} = 1, \; w\_{j} \geq 0, \; \forall j \end{cases}$$

By solving this problem with Lingo 17.0 software, the final values of the weight coefficients (0.651, 0.163, 0.093, 0.093)<sup>T</sup> and a DFC of χ = 0.00 are obtained.
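Since both models attain χ = 0, the weights can also be recovered without a nonlinear solver: with zero deviation the ratio constraints hold exactly, so each weight is inversely proportional to the priority of its criterion, normalized to sum to one. A small Python sketch of this special case (it replaces Lingo only when DFC = 0; for χ > 0 the optimization model above is still needed):

```python
def fucom_weights_consistent(priorities):
    """FUCOM weights in the fully consistent case (DFC = 0).

    `priorities` are the criterion priorities C_j(k) of the ranked
    criteria, with the first-ranked criterion having priority 1.  With
    chi = 0 the ratio constraints hold exactly, so w_j is proportional
    to 1 / priority_j, normalized so the weights sum to 1.
    """
    inverse = [1.0 / p for p in priorities]
    total = sum(inverse)
    return [v / total for v in inverse]

# Model 1: priorities of (C3, C2, C1, C4) read off from the comparative
# priorities above are (1, 1, 1, 5).
w1 = fucom_weights_consistent([1, 1, 1, 5])   # ≈ (0.313, 0.313, 0.313, 0.063)
# Model 2: priorities of (C3, C2, C1, C4) are (1, 4, 7, 7).
w2 = fucom_weights_consistent([1, 4, 7, 7])   # ≈ (0.651, 0.163, 0.093, 0.093)
```

Both results reproduce the weight vectors reported for MCDM1 and MCDM2 up to the rounding used in the text.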

Table 3 presents the results of the experiment, averaged over 15 trials.

**Table 3.** D\* Lite with MCDM improves navigation performance.


**Figure 5.** Scenario cost map (left) and path planning results (right): (**a**) D\* Lite without multi-criteria decision making (MCDM); (**b**) D\* Lite with MCDM1; and, (**c**) D\* Lite with MCDM2.

### **6. Discussion**

Initially, when there is no information about the crowds, plans identical to those of the basic version of D\* Lite are generated, and they can move robots directly through the crowd, as shown in Figure 5a. Crowds can slow, divert, or halt the robot, thereby increasing its travel time and risky actions. On the other hand, forming global cost maps based on the gathered information about the environment, expert knowledge, and MCDM using FUCOM forces the planner to tend to avoid crowded areas, while at the same time taking into consideration the other conditions of the environment—Figure 5b,c.

The experts, according to their personal preferences, determine the values of the criteria for the cells in the grid map (by analyzing the available information about the environment), as well as the weights of the criteria (with respect to the particular situation and mission). Naturally, for maps with a large number of cells, the criteria values will be assigned at the level of regions, where a region includes a group of cells with the same or similar value of the considered criterion according to the expert's preference.

The application of FUCOM provides an efficient determination of the weights of the factors that decisively affect the robots' motion, which may be symmetric or asymmetric with respect to the particular environment, bearing in mind the mission's specificity and objectives, thereby optimizing the global cost map. This refers to the fact that, in most situations, the robots' motion in a crowded environment is subordinated to the pedestrians, while certain missions require an emergency response from the robots, in which case they are in some way assigned higher priority in movement. This must be taken into account when defining the global cost map for path planning.

Based on Table 3, it can be concluded that the distance traveled did not differ statistically significantly among the tested approaches. The most significant differences are in risky actions: D\* Lite with MCDM2 took 18.9% fewer risky actions than D\* Lite without MCDM, while D\* Lite with MCDM1 took 10.1% fewer risky actions than D\* Lite without MCDM. This is mainly because, in the case of D\* Lite with MCDM2, the FUCOM model puts greater weight on the criterion related to the crowd than in the case of D\* Lite with MCDM1. In this way, the possibility of managing the overall risk is provided.

As D\* Lite is a global planner, the choice of a local collision-avoidance planner directly impacts the number of risky actions and the performance of the proposed approach.

### **7. Conclusions**

This paper considers a multi-robot system with a high level of autonomy, based on cloud technology and intended for the execution of tasks in a complex and crowded environment.

The cloud approach shifts the computation load from the agents to the cloud and provides powerful processing capabilities to the multi-robot system. This allows the onboard systems of the robots to be greatly reduced, keeping only the sensor, communication, actuation, and manipulation modules. The implemented multi-robot path planning algorithm uses a common database and can operate in an environment that is unknown in advance.

Mission control is based on human expert knowledge of the robots' capabilities, as well as on available information regarding environmental conditions. The application of MCDM using FUCOM provides an adaptive approach to path planning, in terms of optimizing the global cost map while taking into account all of the factors affecting the robots' motion in the environment and bearing in mind mission specificities that require the management of risks arising from different sources.

We tested the presented approach in simulation on complex scenarios and demonstrated an improvement in global path planning through statistically significant reductions in the number of risky actions. A limitation of this approach is that it needs external information regarding the environment, beyond that gathered by the robots' sensors.

**Author Contributions:** Conceptualization, N.Z., D.P. and K.J.; Methodology, N.Z., D.P. and K.J.; Validation, N.Z., D.P. and K.J.; Investigation, N.Z. and D.P.; Data Curation, N.Z., D.P. and K.J.; Writing—Original Draft Preparation, N.Z., D.P. and K.J.; Writing—Review & Editing, N.Z., D.P. and K.J.; Visualization, N.Z. and D.P.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **A New Model for Defining the Criteria of Service Quality in Rail Transport: The Full Consistency Method Based on a Rough Power Heronian Aggregator**

### **Dragan Đorđević <sup>1</sup>, Gordan Stojić <sup>2</sup>, Željko Stević <sup>3,</sup>\*, Dragan Pamučar <sup>4</sup>, Ana Vulević <sup>5</sup> and Vesna Mišić <sup>3</sup>**


Received: 27 June 2019; Accepted: 23 July 2019; Published: 2 August 2019

**Abstract:** The European standard on transport logistics and services in public passenger transport, EN 13816, is based on the relationship between the perception of users and transport carriers across the groups of criteria taken as the basis for observation in this paper. The constant development and improvement of services in order to achieve the sustainability of passenger transport is an imperative on the one hand and a challenge on the other. This is especially evident for persons with disabilities, who face many physical and social barriers related to access to rail transport. In this paper, a new model for the selection of criteria for the quality of passenger service in rail transport, from the perspective of persons with disabilities as the main category of passengers, has been created. The survey covered 168 criteria classified into several groups and the entire territory of Serbia. In order to select the most important criteria, a new model has been developed that integrates the Full Consistency Method and a Rough Power Heronian aggregator. The development of a new aggregator enables more accurate decision-making in the process of group decision-making. The results obtained in this paper show that the most important criteria, in order of importance, are Accessibility, Availability, Security, Time, Customer care, Information, Comfort, and Environmental impact. Based on the criteria obtained for the service quality of rail transport for persons with disabilities, railway carriers will be able to change and improve the existing services, content, characteristics, and equipment of railway stations and vehicles.

**Keywords:** persons with disabilities; passenger service; rough power heronian aggregator

### **1. Introduction**

Insufficient activity in recognizing persons with disabilities as passengers in most modes of transport, including rail transport, a group that could contribute to an increase in revenues, points to the necessity of carrying out research that puts the focus on this category of passengers. Although there has been much progress in improving the position of people with disabilities, it can still be said that these people are exposed to discrimination in their everyday lives.

The main obstacles they face include mobility to jobs, visits to the doctor, shopping, and performing other social and recreational activities, all of which are directly conditioned by inaccessible transport, which is at the same time the first barrier when leaving the house [1–4]. Therefore, in understanding the needs of persons with disabilities, it is necessary to determine their required, desired and unfulfilled activities.

In order to observe the mobility of this population, individual travel behaviour, personality characteristics, lifestyle and previous experience, as well as socioeconomic and demographic characteristics that may have an impact on individual requirements, should be taken into account.

There are a significant number of studies and strategies analysing the position of people with disabilities, which tells us that this issue is becoming increasingly important and that more attention is being paid to it. In addition to the Convention on the Rights of Persons with Disabilities of the United Nations (UN) accepted and adopted in Serbia in 2006, other laws and documents for the improvement of the rights of persons with disabilities are being regulated in parallel. The strategy for improving the position of persons with disabilities in Serbia for the period 2007–2015 [5] and until 2020 [6] continues to make improvements in general objectives defining solutions for providing access to the built environment, affordable transport, information, communications and services for the public. At a local level of municipalities, the activities [7] that are the result of the long-standing efforts of the associations of persons with disabilities, NGOs and support of local authorities have been undertaken.

In the territory of Serbia, people with disabilities mainly use road transport services for their daily or periodic needs. They use transport services of rail transport far less, primarily due to the inadequate quality of services provided, although rail transport has well-known comparative advantages. In regular annual reports on the quality of services provided in rail passenger transport, persons with reduced mobility, that is, disabilities, as well as accessibility to facilities and vehicles, are not recognized [8]. According to the records of the only passenger transport operator on the railway network of the Republic of Serbia "Srbija voz" JSC, in 2016, the participation of passengers with disabilities in rail transport at all stations was less than 1%, of which 75% of passengers started from the Belgrade station (the capital), while this number was less than 10% for other stations. This picture is even worse on the rest of the railway network of the Republic of Serbia. Although some works have already been carried out and are still being carried out on the railway network in Serbia, the number of passengers with disabilities in rail transport does not increase. There is one important reason for this—most of the reconstructed stations and sections are not adapted for the reception and dispatch of passengers with disabilities. Other reasons for not using rail transport are not known and recognizable because they are not being examined.

Experiences from other research reveal similar elements arising from the practical problems faced by people with disabilities in rail transport. The lack of universal and uniform analyses, conducted in different countries and on different bases, supports the need for this research.

Studies conducted in developed countries, such as the UK, have found that additional information needs to be collected for better use by hard-to-reach groups [9]. In this paper, certain elements of the service quality of public transport have been recognized and considered through the open answers of respondents. Testing public transport through the assessment of a number of criteria describing the quality of service provided [10] yields useful and important information for different stakeholders and decision-makers [11–13].

By comparing planned and provided, that is, expected and verified, services, it is possible to measure the efficiency of service quality. For the purpose of standardizing and promoting the service quality of public transport, user expectations are placed first. The European standard EN 13816:2002 (CEN/TC 320) on transport logistics and services in passenger transport is based on the perception of the criteria of service quality provided by transport carriers, as is Reference [14].

The aim of the paper is to point out the necessity of creating a model that provides insight into defining the necessary criteria of the service provided for persons with disabilities in rail passenger transport. Based on the defined criteria of the level of service quality in rail transport for persons with disabilities, railway carriers will be able to change and improve the existing services for this category of passengers. In addition to suggestions in the normative sense, the introduction of the necessary standardization of services in passenger transport, which is closely related to the network of railway lines and to the content, characteristics and equipment of railway stations, vehicles and connections with other modes of transport, is expected. This approach will enable greater integration with similar transport systems in Europe.

To evaluate the quality of the service provided, several mathematical models, such as Servqual [15], regression trees [16,17] and structural equation modelling [18,19], have been used. These different approaches to evaluation show that it is possible to obtain good quality information in a decision-making process.

Making decisions in real systems requires a rational understanding of the relationships between attributes and the elimination of the impact of data representing extreme values. The Heronian mean (HM) operator [20] enables the representation of interconnections between elements and their fusion into a unique utility function, while the Power aggregation (PA) operator [21] eliminates the influence of unreasonable arguments by taking into account the degree of support between input arguments. In order to unify the advantages of both, in this paper we propose a new rough Power-Heronian aggregator, created by combining the Heronian and Power aggregators. So far, there is no research on the use of a Power-Heronian aggregator for rough number (RN) aggregation. Therefore, the logical aim and motivation of this study is to demonstrate the application of a hybrid Power-Heronian aggregator in a rough environment. In addition, since the use of RNs makes it easier to describe inaccurate information, combining the Heronian and Power aggregators to solve the MCDM problem is a natural choice.

In addition to the aforementioned motivation for carrying out this research, this paper fills the gap that exists in the literature related to the provision of good quality service for persons with disabilities in rail transport using an integrated model.

According to our findings, this is the first model to consider this issue, and we believe it will significantly help to identify the key parameters for evaluating the provision of good quality services for people with disabilities in rail transport.

In addition to the introductory considerations that define the importance and goals of the research, the paper is structured into several sections. The second section gives a literature review. The third section presents the steps of the applied methodology. The fourth section presents a case study with the structure and criteria selected for evaluation and the basic data of the respondents. The fifth section provides the results of the research.

### **2. Literature Review**

In Reference [22], the authors emphasize that improved transport enables people with disabilities to live independently. The study describes their perceptions of transport, the discrimination they face and the obstacles they meet. It also discusses possible ways of making transport more accessible to this population. Taking into account the concept of social inclusion of persons with disabilities, it can be seen that this concept is strongly connected with the use of transport. According to it, a better environment provides more opportunities and a basis for participation in a larger number of activities for all persons with disabilities. Transport is recognized as the basis for ensuring a better quality of life in urban conditions [23].

In different studies, the needs of both the entire population of persons with disabilities and certain groups with specific needs were examined: the needs of persons with disabilities [3]; assessment of the basic indicators that determine the problems faced by persons with disabilities in transport [9]; monitoring an accessible and inclusive transport system for evaluation and improvements in public transport [24]; research on mobility factors for the employment of persons with disabilities [25]; understanding the barriers faced by persons with disabilities and the way people with disabilities plan travel in peak and off-peak hours [26]; the reasons for selecting certain modes of transport [27]; understanding the general picture, providing information with the possibility of time tracking, and involving users and decision-makers in order to improve the accessibility of transport [28]; more complex behavioural analysis, prioritization, prevention and identification of necessary measures for the implementation and improvement of transport for people with disabilities [29]; observation of global parameters in a wider area in order to increase mobility for all [30]; analyses of the realization of health, social, cultural and spatial aspects in interaction with transport and increased mobility [31]; and a general approach and strategy setting in the analysis of problems in order to overcome the problems of travel for tourist purposes by persons with disabilities [32].

Improvements to increase social inclusion, the quality of engagement, and effective regulations and strategies were identified on the basis of a study of best practice examples in Europe and presented in Reference [33]. Some of the tools identified in this research are new technologies for assessing regulations, grouped in three areas: accessibility of public transport, the social impact of transport and the transport sector. The conclusion is that an inclusive environment provides much better opportunities and a basis for participation in all activities. Transport is also recognized as a means of ensuring a better quality of life in urban areas [34]. In this sense, improving accessibility for people with disabilities and those with reduced mobility is recognized as a life opportunity. Recognizing this as a way to contribute to the better inclusion of urban transport, a European methodology for measuring the accessibility of transport has been created [35].

In transport planning, accessibility is not always given first priority [23]. Therefore, the transport services offered are unable to satisfy all users due to the existence of certain barriers. Transport can become adapted to persons with disabilities only if the planning and construction process is understood, which can lead to better mobility. The barriers themselves are various [36], and an increasing number of them today concern the provision of transport-related information. By following the good solutions applied in practice, an example of contributing to a better society can be given. The basic concept is to give answers that will improve the conditions of rail travel and to recognize the criteria that should be met so that travelling by rail is as easy as possible for everyone, regardless of age and type of disability. This research focuses on identifying and understanding what opinion can be formed through the presented criteria to persuade people with disabilities that rail travel offers many benefits. It is hard to single out criteria specific only to railway transport, since there are general transport requirements to be fulfilled. Taking this into consideration, objective criteria that can be related to railway transport services and people with disabilities are presented in Table 1. A simple observation of the problems can start from the recognition that some passengers may need additional support at a station or when getting on/off the train, or simply have no experience with this mode of transport.


**Table 1.** Basic description of the process for defining criteria by groups according to research of people with disabilities in several studies.

The approach to improving customer service through the interaction of decision-makers and service providers is increasingly being regulated by the Public Service Obligation model. In the paper [37], a new model is described in which the main approach is a sustainable public transport system at both the local and regional level.

Different analyses of quality depend on how the criteria are observed. Thus, there are studies related to the choice of transport modes; the general description of the system; the description of the desired quality of the system; the description of the desired service quality; the comparison of certain groups of users; and the comparison of service quality and user satisfaction according to different requirements in different areas, seeking information on how and through which criteria it is possible to exert influence.

In the paper [38], the criteria based on which users choose between public bus transport and mini-bus taxi vehicles are compared and ranked using the Servqual model. The most significant criteria for improving the service provided (improvement of the communication system, accuracy, comfort and reduction of travel time) are presented as the basis for informing decision-makers that it is possible to reduce the number of private car users. To monitor and control the quality of service provided in rail transport using the Servqual model, three dimensions (service products, social responsibility and service provided) are considered in order to assess the most important factors for providing better service and passenger satisfaction [15]. Also in this paper, observing three dimensions of service quality (comfort, connection and convenience), the "zones of tolerance" model is used in the analysis of rail transport to assess service quality and to identify the most significant attributes. A service quality analysis (service provided, access, availability, time and environment) and its interconnection with user satisfaction in public bus transport is performed using the Servqual model [39].

For the consideration of potential and existing users of new high-speed line services, user satisfaction methods applying factor analyses are used [40]. A factor analysis in Reference [41] identifies the components of the rail system specifically related only to the service provided on platforms, which is very important for local conditions.

The paper uses the EN 13816 standard [14], based on various aspects of the assessment of the quality of service provided, which is analysed using composite indicators and a cluster analysis of users' assessments.

The analysis of how differences between users reflect on the assessment of the most important attributes in rail transport, regarding the perception of service quality, is carried out in Reference [16] using a methodology based on a classification and regression tree (CART) approach. An analysis of service quality perceptions is conducted with the non-parametric CART method [17] for the Granada transport system, together with an analysis of the socioeconomic characteristics of public transport users. The test is performed in order to determine the most important characteristics and the homogeneity of responses. The study in Reference [42] is carried out to reveal the weaknesses and advantages of changing different modes of transport through customer satisfaction with the quality of services, using the CART model.

The problem of environmental impact, along with the attractiveness of the public transport of the city of Thessaloniki, is measured using the basic components of service quality by the Exploratory Factor Analysis [43]. The paper highlights the issues of improving the service provided, the frequency and the use of transport.

The assessment of the quality provided in rail transport and its rating is carried out using linguistic variables with the improved PROMETHEE-II method [44]. In Reference [45], a statistical analysis, fuzzy trapezoidal numbers and the TOPSIS method are applied and, in Reference [46], a fuzzy analytic hierarchy process, trapezoidal fuzzy sets and the Choquet integral method are used for evaluating rail transport and service quality in the city of Istanbul.

The AHP method and the Fuzzy Sets Theory are used to identify the structure of the service quality provided in the city of Palermo [47]. The research identifies limitations in assessing an existing approach in order to propose to regulatory authorities and decision-makers how the interaction between the market and users can be organized in a better way.

The evaluation of the behaviour and loyalty intentions of public transport users in the city of Kaohsiung is carried out through the quality of service provided and users' satisfaction, using structural equation modelling [48]. To understand the presence of rail transport and the possibility of replacing passenger cars with it, customer satisfaction with service quality is analysed using structural equation modelling [18]. The evaluation of critical factors for the use of high-speed rail in Taiwan and Korea is carried out through service quality related to user satisfaction and possible loyalty, using structural equation modelling [19]. Such an analysis can provide a very good rating among different groups and very good answers to all decision-makers for creating marketing strategies and setting up continuous improvement.

Most of the models used in service quality analysis can be applied to different needs. Some representative models, observed only for rail traffic at the level of general criteria, are shown in Table 2. These models do not explicitly mention disabled people as a group of users but, with some adaptation of the criteria, they can be used for that purpose. The experience from the observed research has been taken into account in the final selection of criteria for the purposes of this work.


**Table 2.** Overview of service quality models in railway transport.

### **3. Methodology**

### *3.1. Full Consistency Method*

One of the newer models, based on the principles of pairwise comparison and the validation of results through deviation from maximum consistency, is the Full consistency method (FUCOM) [49]. FUCOM is a model that to some extent eliminates the stated deficiencies of the BWM and AHP models. Benefits that are decisive for the application of FUCOM are the small number of pairwise comparisons of criteria (only *n* − 1 comparisons), the ability to validate the results by defining the deviation from maximum consistency (DMC) of comparison and the appreciation of transitivity in pairwise comparisons of criteria. As with other subjective models for determining the weights of criteria (AHP, BWM, etc.), the FUCOM model is also subject to the subjective influence of a decision-maker on the final values of the weights of criteria. This particularly refers to the first and second steps of FUCOM, in which decision-makers rank the criteria according to their personal preferences and perform pairwise comparisons of the ranked criteria. However, unlike other subjective models, FUCOM has shown minor deviations of the obtained weights of criteria from the optimal values [49]. Additionally, the methodological procedure of FUCOM eliminates the problem of redundancy of pairwise comparisons of criteria, which exists in some subjective models for determining the weights of criteria.

Assume that there are *n* evaluation criteria in a multi-criteria model that are designated as *wj*, *j* = 1, 2, ... , *n* and that their weight coefficients need to be determined. Subjective models for determining weights based on pairwise comparison of criteria require a decision-maker to determine the degree of impact of the criterion *i* on the criterion *j*. In accordance with the defined settings, the next section (Algorithm 1) presents the FUCOM algorithm [49].

**Algorithm 1** FUCOM

**Input**: Expert pairwise comparison of criteria

**Output**: Optimal values of the weight coefficients of criteria/sub-criteria

*Step 1*: Expert ranking of criteria/sub-criteria.

*Step 2*: Determining the vectors of the comparative significance of evaluation criteria.

*Step 3*: Defining the restrictions of a non-linear optimization model.


*Step 4*: Defining a model for determining the final values of the weight coefficients of evaluation criteria:

$$\begin{aligned}
&\min\chi\\
&s.t.\ \left|\frac{w_{j(k)}}{w_{j(k+1)}}-\varphi_{k/(k+1)}\right|\le\chi,\ \forall j\\
&\left|\frac{w_{j(k)}}{w_{j(k+2)}}-\varphi_{k/(k+1)}\otimes\varphi_{(k+1)/(k+2)}\right|\le\chi,\ \forall j\\
&\sum_{j=1}^{n}w_{j}=1,\quad w_{j}\ge0,\ \forall j
\end{aligned}$$

*Step 5*: Calculating the final values of the weight coefficients of evaluation criteria/sub-criteria $(w_{1}, w_{2}, \dots, w_{n})^{T}$.
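As an illustration, the steps above can be sketched in Python for the special case of full consistency (χ = 0), where the model in Step 4 has a closed-form solution: each ratio of consecutive weights equals the comparative priority exactly. The function name and interface below are illustrative, not from the source.

```python
def fucom_weights(phi):
    """Weights of n ranked criteria from the n-1 comparative priorities
    phi[k] = w_{j(k+1)} / w_{j(k+2)}, assuming full consistency (chi = 0)."""
    # Relative weights: w_1 = 1, and each next weight is w_k / phi_k.
    rel = [1.0]
    for p in phi:
        rel.append(rel[-1] / p)
    # Normalize so the weights satisfy the sum-to-one constraint.
    total = sum(rel)
    return [r / total for r in rel]

# Three ranked criteria: the first is twice as significant as the second,
# and the second is 1.5 times as significant as the third.
weights = fucom_weights([2.0, 1.5])
```

When the expert's comparisons are not perfectly transitive, χ > 0 and the weights must instead be obtained by solving the non-linear program of Step 4.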

### *3.2. Some Power Heronian Aggregation Operators with Rough Numbers*

In group decision-making problems, priorities are defined based on the aggregated subjective evaluations of multiple experts. RNs consist of a lower approximation, an upper approximation and a boundary interval. It has been pointed out that the logic of rough set theory is based entirely on the original data, without the requirement of any additional information. According to Reference [50], a RN can be defined as follows:

Let Ω be a universe containing all objects and X be a random object from Ω. It is assumed that there exists a set built with *k* classes representing the DM's preferences, *R* = (*J*<sub>1</sub>, *J*<sub>2</sub>, ... , *J<sub>k</sub>*), with the condition *J*<sub>1</sub> < *J*<sub>2</sub> < ... < *J<sub>k</sub>*. Then, if ∀*X* ∈ Ω, *J<sub>q</sub>* ∈ *R*, 1 ≤ *q* ≤ *k*, the lower approximation, upper approximation and boundary interval of *J<sub>q</sub>* are determined as:

$$\underline{Apr}(J_{q})=\left\{X\in\Omega/R(X)\le J_{q}\right\},\quad \overline{Apr}(J_{q})=\left\{X\in\Omega/R(X)\ge J_{q}\right\}$$

$$Bnd(J_{q})=\left\{X\in\Omega/R(X)\ne J_{q}\right\}=\left\{X\in\Omega/R(X)>J_{q}\right\}\cup\left\{X\in\Omega/R(X)<J_{q}\right\}$$

An object can then be presented with a RN defined by the lower limit $\underline{Lim}(J_{q})$ and the upper limit $\overline{Lim}(J_{q})$ as follows:

$$\begin{aligned}
\underline{Lim}(J_{q})&=\frac{1}{M_{L}}\sum R(X)\ \Big|\ X\in\underline{Apr}(J_{q})\\
\overline{Lim}(J_{q})&=\frac{1}{M_{U}}\sum R(X)\ \Big|\ X\in\overline{Apr}(J_{q})
\end{aligned}$$

where *M<sub>L</sub>* and *M<sub>U</sub>* represent the numbers of objects contained in the lower and upper object approximations of *J<sub>q</sub>*, respectively. For object *J<sub>q</sub>*, the rough boundary interval $RBnd(J_{q})$ presents the interval between the upper and lower limits: $RBnd(J_{q})=\overline{Lim}(J_{q})-\underline{Lim}(J_{q})$. The value of the rough boundary interval presents a measure of uncertainty. A higher $RBnd(J_{q})$ value shows that variations in the experts' preferences exist, while a lower value denotes that the experts have harmonized opinions without major deviations in their preferences. Finally, $RN(J_{q})$ can be presented using the lower and upper limits as:

$$RN(J_{q})=\left[\underline{Lim}(J_{q}),\overline{Lim}(J_{q})\right]$$
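For concreteness, the lower and upper limits above can be computed directly from a list of expert ratings. The helper below is a hypothetical illustration of the definitions, assuming the class of each object is given by its rating:

```python
def rough_number(ratings, q):
    """Rough number RN(q) = (lower limit, upper limit) of the class q,
    given all expert ratings (the classes J_1 <= ... <= J_k)."""
    lower_apr = [x for x in ratings if x <= q]  # lower approximation of J_q
    upper_apr = [x for x in ratings if x >= q]  # upper approximation of J_q
    lim_low = sum(lower_apr) / len(lower_apr)   # mean over the M_L objects
    lim_up = sum(upper_apr) / len(upper_apr)    # mean over the M_U objects
    return lim_low, lim_up

# Three experts rate a criterion 3, 4 and 5; the rating 4 becomes RN(4) = [3.5, 4.5].
rn = rough_number([3, 4, 5], 4)
```

Here the rough boundary interval 4.5 − 3.5 = 1.0 quantifies the experts' disagreement; identical ratings would collapse the RN to a single point.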

The power aggregation (*PA*) operator, proposed in Reference [21], is a very significant aggregation operator that eliminates the influence of unreasonable arguments by taking into account the degree of support between input arguments. The traditional PA operator is defined in the following section.

**Definition 1** ([21])**.** *Let* $(\xi_{1},\xi_{2},\dots,\xi_{n})$ *be a set of non-negative numbers. If*

$$PA(\xi_{1},\xi_{2},\dots,\xi_{n})=\frac{\sum_{i=1}^{n}\left(1+T(\xi_{i})\right)\xi_{i}}{\sum_{i=1}^{n}\left(1+T(\xi_{i})\right)}\tag{1}$$

*then PA is called the power average operator, where* $T(\xi_{i})=\sum_{j=1,j\ne i}^{n}Sup(\xi_{i},\xi_{j})$*. With* $Sup(\xi_{i},\xi_{j})$ *we indicate the degree of support that* $\xi_{i}$ *obtains from* $\xi_{j}$*, where* $Sup(\xi_{i},\xi_{j})$ *satisfies the following axioms:*

$$Sup(\xi_{i},\xi_{j})=Sup(\xi_{j},\xi_{i})$$

$$Sup(\xi_{i},\xi_{j})\in[0,1]$$

$$Sup(\xi_{i},\xi_{j})>Sup(\xi_{i},\xi_{k}),\ \text{if}\ \left|\xi_{i}-\xi_{j}\right|<\left|\xi_{i}-\xi_{k}\right|$$
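A minimal sketch of the traditional PA operator, Equation (1). The support function is supplied by the caller; the range-normalized distance-based choice used in the example is an assumption, since the definition only fixes the three axioms:

```python
def power_average(values, sup):
    """Power average of non-negative numbers; sup(a, b) is the
    degree of support in [0, 1] satisfying the three axioms."""
    n = len(values)
    # T(xi_i) = total support xi_i receives from all other arguments.
    T = [sum(sup(values[i], values[j]) for j in range(n) if j != i)
         for i in range(n)]
    num = sum((1 + T[i]) * values[i] for i in range(n))
    den = sum(1 + t for t in T)
    return num / den

# Support decreasing with distance (normalized by the range 8), per the axioms.
sup = lambda a, b: 1.0 - abs(a - b) / 8.0
pa = power_average([1.0, 2.0, 9.0], sup)
```

With these inputs the PA value is 3.2, below the plain mean of 4.0: the mutual support between 1 and 2 dampens the influence of the outlying argument 9.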

*The Heronian mean (HM) operator, which enables the display and processing of the interrelationships of input arguments [52], was first proposed in Reference [51]. The HM operator is defined in the following section.*

**Definition 2** ([20])**.** *Let p,q* ≥ *0, (*ξ1, ξ2, ... , ξ*n) be a set of non-negative numbers. If*

$$HM^{p,q}(\xi\_1, \xi\_2, \dots, \xi\_n) = \left(\frac{2}{n(n+1)} \sum\_{i=1}^n \sum\_{j=i}^n \xi\_i^p \xi\_j^q\right)^{\frac{1}{p+q}} \tag{2}$$

*then HMp,q is called the Heronian mean (HM) operator.*
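Equation (2) translates directly into a short sketch (the helper name is illustrative):

```python
def heronian_mean(values, p=1.0, q=1.0):
    """Heronian mean HM^{p,q}: couples every pair i <= j of inputs,
    capturing the interrelationship between the arguments."""
    n = len(values)
    s = sum(values[i] ** p * values[j] ** q
            for i in range(n) for j in range(i, n))
    return (2.0 / (n * (n + 1)) * s) ** (1.0 / (p + q))

hm = heronian_mean([4.0, 4.0, 4.0])  # idempotent: returns 4.0
```

Idempotency follows because the double sum then contains n(n + 1)/2 identical terms, so the normalizing factor 2/(n(n + 1)) cancels it exactly.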

*Based on the settings defined and the traditional PA and HM operators, Equations (1) and (2), a hybrid rough power Heronian aggregation (RPHA) operator is developed in the following section.*

**Definition 3.** *Set* $\xi_{i}=[\underline{Lim}(\xi_{i}),\overline{Lim}(\xi_{i})]$ *(i* = *1, 2,* ... *, n) as a collection of RNs in* Ψ*; then RPHA can be defined as follows:*

$$RPHA^{p,q}(\xi_{1},\xi_{2},\dots,\xi_{n})=\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(\frac{n(1+T(\xi_{i}))}{\sum_{t=1}^{n}(1+T(\xi_{t}))}\xi_{i}\right)^{p}\left(\frac{n(1+T(\xi_{j}))}{\sum_{t=1}^{n}(1+T(\xi_{t}))}\xi_{j}\right)^{q}\right)^{\frac{1}{p+q}}\tag{3}$$

*where* $T(\xi_{i})=\sum_{j=1,j\ne i}^{n}Sup(\xi_{i},\xi_{j})$*. With* $Sup(\xi_{i},\xi_{j})$ *we indicate the degree of support that* $\xi_{i}$ *obtains from* $\xi_{j}$*, where* $Sup(\xi_{i},\xi_{j})$ *satisfies the following three axioms:*

$$Sup(\xi_{i},\xi_{j})=Sup(\xi_{j},\xi_{i})$$

$$Sup(\xi_{i},\xi_{j})\in[0,1]$$

$$Sup(\xi_{i},\xi_{j})>Sup(\xi_{i},\xi_{k}),\ \text{if}\ d(\xi_{i},\xi_{j})<d(\xi_{i},\xi_{k})$$

*where d*(ξ*i*, ξ*j*) *represents the distance between the rough numbers* ξ*<sup>i</sup> and* ξ*j.*

Then *RPHA<sup>p,q</sup>* represents a rough power Heronian aggregation (*RPHA*) operator. *RPHA* combines the benefits of the *PA* and *HM* operators and is a powerful tool with the following features: (1) it eliminates the impact of unreasonable arguments; (2) it takes into account the degree of support between the input arguments; and (3) it takes into account the interrelationship of the input arguments.

**Theorem 1.** *Set* $\xi_{i}=[\underline{Lim}(\xi_{i}),\overline{Lim}(\xi_{i})]$ *as a collection of RNs in* Ψ*; then, according to Equation (3), the aggregation results are obtained for RNs and the following aggregation formula can be developed:*

$$RPHA^{p,q}(\xi_{1},\xi_{2},\dots,\xi_{n})=\left[\begin{array}{l}\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(\frac{n(1+\underline{Lim}(T(\xi_{i})))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_{t})))}\underline{Lim}(\xi_{i})\right)^{p}\left(\frac{n(1+\underline{Lim}(T(\xi_{j})))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_{t})))}\underline{Lim}(\xi_{j})\right)^{q}\right)^{\frac{1}{p+q}},\\[2ex]\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(\frac{n(1+\overline{Lim}(T(\xi_{i})))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_{t})))}\overline{Lim}(\xi_{i})\right)^{p}\left(\frac{n(1+\overline{Lim}(T(\xi_{j})))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_{t})))}\overline{Lim}(\xi_{j})\right)^{q}\right)^{\frac{1}{p+q}}\end{array}\right]\tag{4}$$

**Proof.** By the operational rules of RNs defined in Reference [50], we have:

(a)

$$\left(\frac{n(1+T(\xi_{i}))}{\sum_{t=1}^{n}(1+T(\xi_{t}))}\xi_{i}\right)^{p}=\left[\left(\frac{n(1+\underline{Lim}(T(\xi_{i})))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_{t})))}\underline{Lim}(\xi_{i})\right)^{p},\left(\frac{n(1+\overline{Lim}(T(\xi_{i})))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_{t})))}\overline{Lim}(\xi_{i})\right)^{p}\right]$$

(b)

$$\left(\frac{n(1+T(\xi_{j}))}{\sum_{t=1}^{n}(1+T(\xi_{t}))}\xi_{j}\right)^{q}=\left[\left(\frac{n(1+\underline{Lim}(T(\xi_{j})))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_{t})))}\underline{Lim}(\xi_{j})\right)^{q},\left(\frac{n(1+\overline{Lim}(T(\xi_{j})))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_{t})))}\overline{Lim}(\xi_{j})\right)^{q}\right]$$

(c)

$$\left(\frac{n(1+T(\xi_{i}))}{\sum_{t=1}^{n}(1+T(\xi_{t}))}\xi_{i}\right)^{p}\left(\frac{n(1+T(\xi_{j}))}{\sum_{t=1}^{n}(1+T(\xi_{t}))}\xi_{j}\right)^{q}=\left[\begin{array}{l}\left(\frac{n(1+\underline{Lim}(T(\xi_{i})))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_{t})))}\underline{Lim}(\xi_{i})\right)^{p}\left(\frac{n(1+\underline{Lim}(T(\xi_{j})))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_{t})))}\underline{Lim}(\xi_{j})\right)^{q},\\[2ex]\left(\frac{n(1+\overline{Lim}(T(\xi_{i})))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_{t})))}\overline{Lim}(\xi_{i})\right)^{p}\left(\frac{n(1+\overline{Lim}(T(\xi_{j})))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_{t})))}\overline{Lim}(\xi_{j})\right)^{q}\end{array}\right]$$

(d) Summing the products from (c) over all *i* ≤ *j*, multiplying by $\frac{2}{n(n+1)}$ and raising the result to the power $\frac{1}{p+q}$ yields Equation (4).

So, Theorem 1 is true. □

**Theorem 2.** *(Idempotency): Set* $\xi_{i}=[\underline{Lim}(\xi_{i}),\overline{Lim}(\xi_{i})]$ *as a collection of RNs in* Ψ*. If* $\xi_{i}=\xi$ *for all i, then* $RPHA^{p,q}(\xi_{1},\xi_{2},\dots,\xi_{n})=\xi$*.*

**Proof.** Since $\xi_{i}=\xi$, that is, $\underline{Lim}(\xi_{i})=\underline{Lim}(\xi)$ and $\overline{Lim}(\xi_{i})=\overline{Lim}(\xi)$ for all *i*, all the degrees of support, and hence all the values $T(\xi_{i})$, are equal, so every power weight reduces to $n(1+T(\xi_{i}))/\sum_{t=1}^{n}(1+T(\xi_{t}))=1$. Equation (4) then becomes

$$RPHA^{p,q}(\xi,\xi,\dots,\xi)=\left[\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(\underline{Lim}(\xi)\right)^{p+q}\right)^{\frac{1}{p+q}},\ \left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(\overline{Lim}(\xi)\right)^{p+q}\right)^{\frac{1}{p+q}}\right]=\left[\underline{Lim}(\xi),\overline{Lim}(\xi)\right]=\xi$$

because the double sum contains exactly $n(n+1)/2$ identical terms.

The proof of Theorem 2 is completed. □

**Theorem 3.** *(Boundedness): Set* $\xi_{i}=[\underline{Lim}(\xi_{i}),\overline{Lim}(\xi_{i})]$ *as a collection of RNs in* Ψ*, let* $\xi^{-}=[\min_{i}\underline{Lim}(\xi_{i}),\min_{i}\overline{Lim}(\xi_{i})]$ *and* $\xi^{+}=[\max_{i}\underline{Lim}(\xi_{i}),\max_{i}\overline{Lim}(\xi_{i})]$*; then*

$$\xi^{-}\le RPHA^{p,q}(\xi_{1},\xi_{2},\dots,\xi_{n})\le\xi^{+}$$

**Proof.** Let ξ<sup>−</sup> = min(ξ1, ξ2, ... , ξ*n*)=[min*Lim*(ξ*i*), min*Lim*(ξ*i*)] and ξ<sup>+</sup> = max(ξ1, ξ2, ... , ξ*n*) = [max*Lim*(ξ*i*), max*Lim*(ξ*i*)]. Then, it can be stated that *Lim*(ξ−) = min*<sup>i</sup>* (*Lim*(ξ*i*)), *Lim*(ξ−) = min*<sup>i</sup>* (*Lim*(ξ*i*)), *Lim*(ξ+) = max*<sup>i</sup>* (*Lim*(ξ*i*)) and *Lim*(ξ+) = max*<sup>i</sup>* (*Lim*(ξ*i*)). Based on that, the following inequalities can be formulated:

$$\begin{array}{l} \xi^{-} \leq \underline{\xi\_{i}} \leq \underline{\xi^{+}};\\ \min\_{i} \left( \underline{\dim}(\underline{\xi\_{i}}) \leq \underline{\dim}(\underline{\xi\_{i}}) \leq \max\_{i} (\underline{\underline{\zeta}\_{i}}) \right);\\ \min\_{i} \left( \overline{\underline{\lim}}(\underline{\xi\_{i}}) \right) \leq \overline{\underline{\lim}}(\underline{\xi\_{i}}) \leq \max\_{i} (\overline{\underline{\lim}}(\underline{\xi\_{i}})). \end{array}$$

According to the inequalities shown above, it can be concluded that $\xi^{-} \le RPHA^{p,q}(\xi_1, \xi_2, \dots, \xi_n) \le \xi^{+}$ holds. □
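The idempotency and boundedness properties can also be checked numerically. The sketch below implements one bound of the RPHA operator in Python; the support $Sup(a,b)$ is assumed here to be the absolute distance between normalized values (an assumption that reproduces the support values in Example 1 below), and the name `rpha_bound` is illustrative, not taken from the paper.

```python
def rpha_bound(x, p=1, q=1):
    """Rough Power Heronian aggregation of one bound (lower or upper limits)."""
    n = len(x)
    total = sum(x)
    f = [v / total for v in x]                          # normalized values
    # total support T(x_i) = sum of |f_i - f_j| over j != i (assumed Sup)
    T = [sum(abs(f[i] - f[j]) for j in range(n) if j != i) for i in range(n)]
    denom = sum(1 + t for t in T)
    w = [n * (1 + T[i]) / denom for i in range(n)]      # power-average weights
    a = [w[i] * x[i] for i in range(n)]                 # weighted arguments
    s = sum((a[i] ** p) * (a[j] ** q) for i in range(n) for j in range(i, n))
    return (2 / (n * (n + 1)) * s) ** (1 / (p + q))

print(rpha_bound([4.0, 4.0, 4.0]))                      # idempotency: ~4.0
v = rpha_bound([3.0, 4.0, 3.0])
print(min(3.0, 4.0) <= v <= max(3.0, 4.0))              # boundedness: True
```

With identical arguments all supports vanish, every weight becomes 1, and the Heronian mean returns the argument itself, mirroring the proof of Theorem 2.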

**Theorem 4.** *(Commutativity): Let* $(\xi'_1, \xi'_2, \dots, \xi'_n)$ *be any permutation of* $(\xi_1, \xi_2, \dots, \xi_n)$*. Then* $RPHA^{p,q}(\xi_1, \xi_2, \dots, \xi_n) = RPHA^{p,q}(\xi'_1, \xi'_2, \dots, \xi'_n)$*.*

**Proof.** The property follows directly, since the aggregation is symmetric in its arguments. □

**Example 1.** Let $\xi_1 \in [3, 5]$, $\xi_2 \in [4, 7]$ and $\xi_3 \in [3, 4]$ be three rough numbers and let $p = q = 1$; then, applying the *RPHA* operator, we obtain an aggregated rough number $\xi = [\underline{Lim}(\xi), \overline{Lim}(\xi)]$ using the following calculations:

Step 1: For the upper and lower limit of a rough number, the normalized functions of the lower and upper limits of rough numbers are calculated:

$$
f(\underline{Lim}(\xi_1)) = \frac{3}{3+4+3} = 0.300,\quad f(\underline{Lim}(\xi_2)) = \frac{4}{3+4+3} = 0.400,\quad f(\underline{Lim}(\xi_3)) = \frac{3}{3+4+3} = 0.300;
$$
$$
f(\overline{Lim}(\xi_1)) = \frac{5}{5+7+4} = 0.313,\quad f(\overline{Lim}(\xi_2)) = \frac{7}{5+7+4} = 0.438,\quad f(\overline{Lim}(\xi_3)) = \frac{4}{5+7+4} = 0.250.
$$

Step 2: Calculating the degree of support between the lower and upper limits of the rough numbers: $Sup(\underline{Lim}(\xi_1), \underline{Lim}(\xi_2)) = 0.1$, $Sup(\underline{Lim}(\xi_1), \underline{Lim}(\xi_3)) = 0.0$, $Sup(\underline{Lim}(\xi_2), \underline{Lim}(\xi_3)) = 0.1$, $Sup(\overline{Lim}(\xi_1), \overline{Lim}(\xi_2)) = 0.125$, $Sup(\overline{Lim}(\xi_1), \overline{Lim}(\xi_3)) = 0.063$ and $Sup(\overline{Lim}(\xi_2), \overline{Lim}(\xi_3)) = 0.188$.

Step 3: By applying Expression (4), $RPHA^{1,1}$ is calculated:

$$
\begin{aligned}
&RPHA^{1,1}([3,5],\,[4,7],\,[3,4]) = \\
&=\left[
\left(\frac{2}{3(3+1)}\sum_{i=1}^{3}\sum_{j=i}^{3}
\left(\frac{3(1+\underline{Lim}(T(\xi_i)))}{3+0.1+0.2+0.1}\,\underline{Lim}(\xi_i)\right)
\left(\frac{3(1+\underline{Lim}(T(\xi_j)))}{3+0.1+0.2+0.1}\,\underline{Lim}(\xi_j)\right)
\right)^{\frac{1}{1+1}},\right.\\
&\qquad\left.
\left(\frac{2}{3(3+1)}\sum_{i=1}^{3}\sum_{j=i}^{3}
\left(\frac{3(1+\overline{Lim}(T(\xi_i)))}{3+0.188+0.313+0.25}\,\overline{Lim}(\xi_i)\right)
\left(\frac{3(1+\overline{Lim}(T(\xi_j)))}{3+0.188+0.313+0.25}\,\overline{Lim}(\xi_j)\right)
\right)^{\frac{1}{1+1}}
\right]\\
&=\left[
\left(\frac{2}{12}\left(2.912^2 + 2.912\cdot 4.235 + 2.912^2 + 4.235^2 + 4.235\cdot 2.912 + 2.912^2\right)\right)^{\frac{1}{2}},\right.\\
&\qquad\left.
\left(\frac{2}{12}\left(4.750^2 + 4.750\cdot 7.350 + 4.750\cdot 4.000 + 7.350^2 + 7.350\cdot 4.000 + 4.000^2\right)\right)^{\frac{1}{2}}
\right] = [3.367,\ 5.414]
\end{aligned}
$$
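The step-by-step calculation of Example 1 can be reproduced programmatically. The following Python sketch assumes, consistent with Step 2, that the support is the absolute distance between the normalized limits; the function names `rpha_bound` and `rpha` are illustrative, not code from the paper.

```python
def rpha_bound(x, p=1, q=1):
    """Aggregate one bound (all lower or all upper limits) with RPHA."""
    n = len(x)
    total = sum(x)
    f = [v / total for v in x]                          # Step 1: normalize
    # Step 2: total support of each argument (assumed Sup = |f_i - f_j|)
    T = [sum(abs(f[i] - f[j]) for j in range(n) if j != i) for i in range(n)]
    denom = sum(1 + t for t in T)
    w = [n * (1 + T[i]) / denom for i in range(n)]      # power weights
    a = [w[i] * x[i] for i in range(n)]                 # weighted arguments
    # Step 3: Heronian double sum and final root
    s = sum((a[i] ** p) * (a[j] ** q) for i in range(n) for j in range(i, n))
    return (2 / (n * (n + 1)) * s) ** (1 / (p + q))

def rpha(rough_numbers, p=1, q=1):
    """Aggregate rough numbers given as (lower, upper) pairs."""
    lowers = [l for l, u in rough_numbers]
    uppers = [u for l, u in rough_numbers]
    return rpha_bound(lowers, p, q), rpha_bound(uppers, p, q)

lo, hi = rpha([(3, 5), (4, 7), (3, 4)], p=1, q=1)
print(round(lo, 3), round(hi, 3))  # 3.367 5.414, as in Example 1
```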

In the following section, special cases of the *RPHAp*,*q* operator are shown.

(a) If *p* = *q* = 1, then the *RPHAp*,*q* operator (Expression (4)) transforms into a rough number power line Heronian operator as follows:

$$
\begin{aligned}
&RPHA^{1,1}(\xi_1, \xi_2, \dots, \xi_n) = \\
&=\left[
\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}
\left(\frac{n(1+\underline{Lim}(T(\xi_i)))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_t)))}\underline{Lim}(\xi_i)\right)
\left(\frac{n(1+\underline{Lim}(T(\xi_j)))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_t)))}\underline{Lim}(\xi_j)\right)
\right)^{\frac{1}{2}},\right.\\
&\qquad\left.
\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}
\left(\frac{n(1+\overline{Lim}(T(\xi_i)))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_t)))}\overline{Lim}(\xi_i)\right)
\left(\frac{n(1+\overline{Lim}(T(\xi_j)))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_t)))}\overline{Lim}(\xi_j)\right)
\right)^{\frac{1}{2}}
\right]
\end{aligned}
$$

(b) If *p* = *q* = 1/2, then the *RPHAp*,*q* operator (Expression (4)) transforms into a rough number power basic Heronian operator as follows:

$$
\begin{aligned}
&RPHA^{\frac{1}{2},\frac{1}{2}}(\xi_1, \xi_2, \dots, \xi_n) = \\
&=\left[
\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}
\left(\frac{n(1+\underline{Lim}(T(\xi_i)))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_t)))}\underline{Lim}(\xi_i)\right)^{\frac{1}{2}}
\left(\frac{n(1+\underline{Lim}(T(\xi_j)))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_t)))}\underline{Lim}(\xi_j)\right)^{\frac{1}{2}},\right.\\
&\qquad\left.
\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}
\left(\frac{n(1+\overline{Lim}(T(\xi_i)))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_t)))}\overline{Lim}(\xi_i)\right)^{\frac{1}{2}}
\left(\frac{n(1+\overline{Lim}(T(\xi_j)))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_t)))}\overline{Lim}(\xi_j)\right)^{\frac{1}{2}}
\right]
\end{aligned}
$$

(c) If *p* = 0, then the *RPHAp*,*q* operator (Expression (4)) transforms into a rough number power generalized linear ascending weight operator as follows:

$$
RPHA^{0,q}(\xi_1, \xi_2, \dots, \xi_n) =
\left[
\left(\frac{2}{n(n+1)}\sum_{i=1,\,j=i}^{n}
\left(\frac{n(1+\underline{Lim}(T(\xi_j)))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_t)))}\underline{Lim}(\xi_j)\right)^{q}\right)^{\frac{1}{q}},
\left(\frac{2}{n(n+1)}\sum_{i=1,\,j=i}^{n}
\left(\frac{n(1+\overline{Lim}(T(\xi_j)))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_t)))}\overline{Lim}(\xi_j)\right)^{q}\right)^{\frac{1}{q}}
\right]
$$

(d) If *q* = 0, then the *RPHAp*,*q* operator (Expression (4)) transforms into a rough number power generalized linear descending weight operator as follows:

$$
RPHA^{p,0}(\xi_1, \xi_2, \dots, \xi_n) =
\left[
\left(\frac{2}{n(n+1)}\sum_{i=1,\,j=i}^{n}
\left(\frac{n(1+\underline{Lim}(T(\xi_i)))}{\sum_{t=1}^{n}(1+\underline{Lim}(T(\xi_t)))}\underline{Lim}(\xi_i)\right)^{p}\right)^{\frac{1}{p}},
\left(\frac{2}{n(n+1)}\sum_{i=1,\,j=i}^{n}
\left(\frac{n(1+\overline{Lim}(T(\xi_i)))}{\sum_{t=1}^{n}(1+\overline{Lim}(T(\xi_t)))}\overline{Lim}(\xi_i)\right)^{p}\right)^{\frac{1}{p}}
\right]
$$

### **4. Case Study**

### *4.1. Basic Structure of Survey*

For the purposes of this research, a survey covering 99 respondents from the most representative associations of persons with disabilities across the entire territory of the Republic of Serbia was conducted. Out of this total, 31 relevant experts, themselves persons with disabilities, were selected. Collecting data physically from a large number of respondents in one place is almost unfeasible, so all questionnaires were submitted to the associations and then distributed by e-mail, or the respondents were contacted in person. This form of communication was required primarily to respect the rights concerning the provision of personal data prescribed by the legal provisions on the protection of citizens. The research involved persons with the following disabilities: multiple sclerosis, muscular dystrophy, cerebral palsy and polio, paraplegia, hearing-impaired and deaf persons, visually impaired and blind persons, and persons with intellectual disabilities. The surveyed population covered urban and rural areas on the territory of the Republic of Serbia. The study was conducted in the period from September 2017 to April 2019. Although the data-collection period is relatively long, the number of completed questionnaires is very satisfactory, since this population is, due to its circumstances, not easily reachable. Data collection was carried out in two phases. In the first phase, the wording of the questions was refined with a smaller number of respondents, and the questionnaire adapted in this way was then forwarded to the remaining respondents. The questionnaire was divided into two sections: the first covered the criteria describing the quality of service and the second the socioeconomic characteristics of the respondents.

### *4.2. Service Quality Criteria*

Since no universal criteria exist for the service quality of public transport, especially for persons with disabilities in the territory of the Republic of Serbia, the standard EN 13816 [14] is used as the basis. The criteria of this standard are supplemented through a review of the literature on similar service-quality research, classified into different groups, and of research on the specific needs of persons with disabilities (Table 3). Some data have also been taken from research or studies where user satisfaction was assessed only through the observed concepts, or through the equipment needed to improve the use of rail transport, and these data have then been converted into criteria. For the purposes of this paper, the criteria are divided into eight main groups covering different aspects of the service provided: (1) Availability, (2) Information, (3) Accessibility, (4) Time, (5) Custom care, (6) Comfort, (7) Security and (8) Environmental impact. The main groups are subdivided into subgroups and criteria; some main groups have no subgroups. Based on additional literature, the main criteria are supplemented in Table 1, and the subgroups and all criteria are defined in the tables in Appendix A.

As this analysis considers persons with disabilities and their opinions, special attention is paid to establishing general criteria that describe their needs, whether these concern problems of accessibility, the way information is displayed, specific needs related to passenger care, or the necessary comfort. Taking their needs into account, the descriptions of the remaining main criteria, which define their basic needs, are adapted accordingly.


### **Table 3.** Basic description of the criteria by groups.

### *4.3. Sample Characteristics*

The general characteristics are shown in Table 4. The sample contains more male respondents (61.3%). By age, 32.3% of the respondents are up to 30 years old, 16.1% from 30 to 40, 22.6% from 40 to 50 and 29% over 50. On the territory of the Republic of Serbia, this population is financially very dependent on others: according to the data, only 22.6% are employed, while 77.4% rely on financial assistance for care or a pension. Regarding mobility, the majority of respondents are wheelchair users (42%), followed by persons with walking difficulties or needing assistance (32.4%) and self-moving persons (25.8%). Concerning sensory impairments, 67.7% of respondents have none, while 16.1% have a visual impairment and 16.1% a combined impairment. In addition, 29.1% of the respondents have speech problems.


### **Table 4.** Sample characteristics.

### **5. Results**

### *5.1. Main Principles for the Analysis of Criteria*

The analysis of the criteria is based on selecting the most important criterion of each group/subgroup. This approach was established in order to obtain a general picture of the way of thinking and the needs of the general population of persons with disabilities. In total, there are eight main groups with their subgroups. The planned selection of one criterion per subgroup provides a final list of criteria based on the perception of the decision-makers, indicating what needs to be done when organizing the infrastructure, vehicles and services provided. The number of criteria considered within the subgroups is 147; together with the main groups, a total of 155 criteria are observed for assessment.

The main group *Availability* has three subgroups, with a total of 22 criteria, the number of which by subgroups is as follows: *Frequency and punctuality* has 7 criteria, *Mode of transport*, *network and infrastructure* has 8 criteria and *Passenger facilities and working hours* has 7 criteria.

The main group *Accessibility* has four subgroups with a total of 29 criteria, the number of which by subgroups is as follows: *Access for disabled public transport* has 8 criteria, *Communication*, *Colour*, *Contrast* has 7 criteria, *Corridors*, *free routes and paths* has 7 criteria and *Ticket o*ffi*ce and machines* has 7 criteria.

The main group *Information* has three subgroups, with a total of 22 criteria, the number of which by subgroups is as follows: *Ticket office and machines* has 8 criteria, *Facilities* has 7 criteria and *Understandable* has 7 criteria.

The main group *Time* does not have subgroups and contains 7 criteria.

The main group *Custom care* has five subgroups, with a total of 37 criteria, the number of which by subgroups is as follows: *Assistance* has 7 criteria, *Service* has 7 criteria, *Sta*ff has 7 criteria, *Ticket* has 7 criteria and *Cleanliness and maintenance* has 9 criteria.

The main group *Comfort* has two subgroups, with a total of 14 criteria, the number of which by subgroups is as follows: *Ambient* has 7 criteria and *Comfortable* has 7 criteria.

The main group *Security* does not have subgroups and contains 9 criteria.

The main group *Environmental impact* does not have subgroups and contains 7 criteria.

In order to establish the basic form of assessment, the idea of this paper is to evaluate the main groups by importance and to select one criterion in each subgroup (Figure 1). The aim is therefore to obtain a total of 20 approximately equal criteria presented in the subgroups. With this approach, a rating of the service quality provided for all persons with disabilities can be achieved, since these mutual responses make it possible to treat the criteria with equal importance.

The calculation proceeds from the principle of selecting the most important criterion, shown in Figure 1.


**Figure 1.** The main principle of selecting the most important criterion.

### *5.2. Calculation of the Coe*ffi*cient Weight Values with FUCOM Method*

The respondents evaluated the criteria in two steps, and the calculation of the criteria weights was carried out in three steps.

The ratings of the basic criteria are presented as an example of the calculation. Respondent No. 31 did not evaluate the main criteria.

A detailed overview of determining weight coefficients of the first-level criteria is provided in the following section.

Step 1. In the first step, the decision makers ranked the criteria:

Basic Criteria (BC1)-first respondent: C3 > C7 > C2 > C4 > C5 > C6 > C1 > C8;

Step 2. In the second step, the decision-makers compared the criteria ranked in Step 1 in pairs. The comparison is made with respect to the first-ranked criterion, based on the scale [1,8]. In this way, the importance (*Cj*(*k*) ) is obtained for all the criteria ranked in Step 1 (Table 5).


**Table 5.** Importance of main criteria—An example of the first decision-maker's responses.

Based on the obtained importance of the criteria, the comparative importance values of the criteria are calculated for every respondent:

$$
DM_1:\ \varphi_{C_3/C_7} = \frac{1}{1} = 1,\ \varphi_{C_7/C_2} = \frac{1}{1.5} = 0.7,\ \varphi_{C_2/C_4} = \frac{1.5}{1.5} = 1,\ \varphi_{C_4/C_5} = \frac{1.5}{1.5} = 1,\ \varphi_{C_5/C_6} = \frac{1.5}{1.5} = 1,\ \varphi_{C_6/C_1} = \frac{1.5}{2} = 0.7,\ \varphi_{C_1/C_8} = \frac{2}{2} = 1
$$

Step 3. The final values of the weight coefficients should satisfy two conditions:

(1) The final values of the weight coefficients should satisfy the comparative importance relations defined in Step 2:

$$
DM_1:\ w_3/w_7 = 1,\ w_7/w_2 = 0.7,\ w_2/w_4 = 1,\ w_4/w_5 = 1,\ w_5/w_6 = 1,\ w_6/w_1 = 0.7,\ w_1/w_8 = 1
$$

(2) In addition to the relations defined above, the final values of the weight coefficients should also satisfy the condition of mathematical transitivity:

$$
w_3/w_2 = 1 \cdot 0.7 = 0.7,\quad w_7/w_4 = 0.7 \cdot 1 = 0.7,\quad w_2/w_5 = 1 \cdot 1 = 1,\quad w_4/w_6 = 1 \cdot 1 = 1,\quad w_5/w_1 = 1 \cdot 0.7 = 0.7,\quad w_6/w_8 = 0.7 \cdot 1 = 0.7.
$$

By applying the expressions from the fourth and fifth steps of the FUCOM method, the model for determining the weight coefficients of the first-level criteria can be defined for every decision-maker:

$$
\begin{aligned}
&\min \chi\\
&\text{s.t.}\ \left|\tfrac{w_3}{w_7}-1\right|\le\chi,\ \left|\tfrac{w_7}{w_2}-0.7\right|\le\chi,\ \left|\tfrac{w_2}{w_4}-1\right|\le\chi,\ \left|\tfrac{w_4}{w_5}-1\right|\le\chi,\ \left|\tfrac{w_5}{w_6}-1\right|\le\chi,\ \left|\tfrac{w_6}{w_1}-0.7\right|\le\chi,\ \left|\tfrac{w_1}{w_8}-1\right|\le\chi,\\
&\phantom{\text{s.t.}\ }\left|\tfrac{w_3}{w_2}-0.7\right|\le\chi,\ \left|\tfrac{w_7}{w_4}-0.7\right|\le\chi,\ \left|\tfrac{w_2}{w_5}-1\right|\le\chi,\ \left|\tfrac{w_4}{w_6}-1\right|\le\chi,\ \left|\tfrac{w_5}{w_1}-0.7\right|\le\chi,\ \left|\tfrac{w_6}{w_8}-0.7\right|\le\chi,\\
&\phantom{\text{s.t.}\ }\sum_{j=1}^{8} w_j = 1,\ w_j \ge 0,\ \forall j
\end{aligned}
$$

By solving the presented model, the weight coefficients of the first-level criteria are obtained for every decision-maker (DM), as shown in Table 6. For the first decision-maker, for whom the example of calculation is presented above, the following values of the criteria are obtained: C1 = 0.088, C2 = 0.118, C3 = 0.176, C4 = 0.118, C5 = 0.118, C6 = 0.118, C7 = 0.176, C8 = 0.088. To confirm the reliability of the obtained weights, the DFC value (deviation from maximum consistency) is used. The DFC represents the deviation of the obtained weight coefficients from maximum consistency. The optimal values of the weight coefficients are obtained when the maximum consistency condition is satisfied, that is, when DFC is zero (χ = 0.00).
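When the comparisons are fully consistent (χ = 0), the FUCOM weights admit a closed-form shortcut, without running the optimization model: each weight is proportional to the reciprocal of the criterion's importance, so every ratio constraint holds exactly. The minimal sketch below reproduces the first decision-maker's weights under this assumption; the importance vector is inferred from the comparative values quoted above and is therefore an assumption, not a verbatim copy of Table 5.

```python
# Ranking of the first respondent and the assumed importances from Step 2.
ranked = ["C3", "C7", "C2", "C4", "C5", "C6", "C1", "C8"]
importance = [1, 1, 1.5, 1.5, 1.5, 1.5, 2, 2]

# Closed-form FUCOM solution for the fully consistent case (chi = 0):
# w_k is proportional to 1 / importance_k, normalized to sum to 1.
raw = [1 / s for s in importance]
total = sum(raw)
weights = {c: r / total for c, r in zip(ranked, raw)}

for c in sorted(weights):
    print(c, round(weights[c], 3))
# C1 0.088, C2 0.118, C3 0.176, C4 0.118, C5 0.118, C6 0.118, C7 0.176, C8 0.088
```

The printed values match the first decision-maker's weights reported above, confirming that χ = 0 for this respondent.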


**Table 6.** The assessment results of weight criteria for the main group using the FUCOM method.

*5.3. Evaluation Criteria Using Power Heronian Aggregation Operators with Rough Numbers*

First, the transformation of the individual matrices into a group rough matrix is completed (Table 7), as follows:

$$
\underline{Lim}(0.088) = \frac{1}{3}(0.088 + 0.077 + 0.048) = 0.070
$$

$$
\begin{aligned}
\overline{Lim}(0.088) = \frac{1}{28}(&0.088 + 0.150 + 0.232 + 0.216 + 0.122 + 0.128 + 0.182 + 0.172\\
&+ 0.172 + 0.182 + 0.184 + 0.182 + 0.194 + 0.136 + 0.273 + 0.125\\
&+ 0.158 + 0.164 + 0.225 + 0.206 + 0.158 + 0.151 + 0.149 + 0.214\\
&+ 0.193 + 0.095 + 0.206 + 0.184) = 0.170
\end{aligned}
$$

$$RN(c\_1^1) = [0.070, 0.170];$$
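The limits above follow the standard rough-number construction: the lower limit of a value is the mean of all class values not exceeding it, and the upper limit is the mean of all values not below it. A minimal sketch of this rule is shown below; the five-element list is hypothetical and for illustration only, whereas the text uses all 31 respondents' weights to obtain $RN(c_1^1) = [0.070, 0.170]$.

```python
def rough_number(value, values):
    """Rough-number limits of `value` within a class of observed values:
    lower limit = mean of values <= value, upper limit = mean of values >= value."""
    lower_class = [v for v in values if v <= value]
    upper_class = [v for v in values if v >= value]
    return (sum(lower_class) / len(lower_class),
            sum(upper_class) / len(upper_class))

sample = [0.088, 0.077, 0.048, 0.150, 0.232]   # hypothetical truncated class
lo, hi = rough_number(0.088, sample)
print(round(lo, 3), round(hi, 3))  # 0.071 0.157
```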



### *5.4. Final Results and Discussion*

The final evaluation of the criteria is presented in two tables. Table 8 shows the estimates of the main groups by importance, while Table 9 presents the estimates of the 21 most important criteria by subgroup. For the purpose of analysis and comparability, the Rough Power Heronian values of the most important criteria are converted into crisp values. Although the aim was to identify the 20 main criteria, two equally rated most important criteria are presented in the main group *Custom Care* and the subgroup *Cleanliness and Maintenance*.


**Table 8.** Final assessment of the General criteria.


### **Table 9.** Final assessment of the criteria.

According to the decision-makers, *Accessibility* is the most important criterion for people with disabilities. Knowing that the main problem of people with disabilities is access to infrastructure facilities and vehicles, the selection of this main criterion (0.164) reflects the current situation. *Availability* (0.159) is ranked second, and *Security* (0.145) third, which reflects the poor sense of security people with disabilities have in transport. The next-ranked criterion is *Time*, while *Custom care* and *Information* are ranked as equally important. The lowest-ranked criteria are *Comfort* and *Environmental impact*.

The results in Tables 8 and 9 indicate the weakest points that must be addressed first so that rail passenger traffic becomes more attractive for people with disabilities. The accessibility problem highlighted in Table 8 shows that people with disabilities are not able to use the railways equally with other passengers. Table 9 presents a value description of each selected most significant criterion in the groups and subgroups.

The sub-criteria values from Table 9 are described below, with an overview of the most important criteria and a short comment on the current services provided for people with disabilities.

### 5.4.1. Accessibility

In the subgroup *Access for disabled public transport*, the criterion *Ramps are adopted with adequate slope* (0.177) was selected. The other criteria were evaluated as less important. The selection of this criterion indicates that railway stations are largely inaccessible.

In the subgroup *Communication*, *Colour*, *Contrast*, the criterion *Obstacle free accessibility to all media* (0.183) was selected. As with the *Access* criterion, it can be seen that information is in most cases not accessible to people with disabilities. In the subgroup *Corridors*, *free routes and paths*, the criterion *Accessibility input*/*output equipment in stations or stops* (0.172) was selected, with *Barrier free path from parking places* (0.166) of approximately similar importance. This selection shows that the area around railway stations is not adapted or accessible to people with disabilities. In the subgroup *Ticket office and machines*, the criterion *Accessibility of tickets sales points* (0.194) was selected. At railway stations, it is noticeable that the ticket sales points are not adapted, since they are not at an adequate height or lack adequate communication equipment. In Figure 2, all other criteria are shown by subgroups according to their importance.

**Figure 2.** Final assessment of the Accessibility criteria.

### 5.4.2. Availability

In the *Frequency and punctuality* subgroup, *Punctuality of public transport to intercity bus and railway stations* (0.165) was selected as the most important criterion. Of approximately similar importance are the criteria *Frequency of public transport to bus or train station* (0.164) and *Accuracy of intercity buses or trains* (0.163). The first-ranked criterion indicates that reaching intercity stations is often not achievable for people with disabilities, unlike for other users, because this transport is not performed on a regular basis.

In the *Mode of transport*, *network and infrastructure* subgroup, *Availability of seats on train*/*bus* (0.148) was selected as the most important criterion. Of approximately similar importance, *Availability of other modes of transport* (0.144) and *Ease of access to the interchange* (0.142) were selected. The lack of a sufficient number of seats for people with disabilities shows that there are still insufficient financial resources, as well as insufficient general awareness, to solve this problem. Observing the other most important criteria, it can be seen that they are also not adequately addressed in all aspects.

In the *Passenger facilities and working hours* subgroup, the criterion *Availability of schedule information by phone*/*mail* (0.172) was selected. Of approximately similar value is the criterion *Availability of telephone signals and Wi-Fi* (0.168). Within this subgroup, it is noticeable that there are not enough modern ways of delivering information. People with disabilities are not limited by their disabilities in adopting new technologies, but evidently such technologies are not applied sufficiently in transport in Serbia. In Figure 3, all other criteria are shown by subgroups according to their importance.

**Figure 3.** Final assessment of the Availability criteria.

### 5.4.3. Security

In the *Emergency and safety* subgroup, the criterion *Stability of moving vehicles* (0.138) was selected. If a vehicle is not equipped with a separate space, a seat or equipment adapted for people with disabilities, the estimate of this criterion is understandable and indicates that a significant number of vehicles are unsuitable. Figure 4 shows all other criteria in terms of their significance.


**Figure 4.** Final assessment of the Security criteria.

### 5.4.4. Time

In the *Time* group, only the main group was observed. The most important criterion was *Travel time information in abnormal conditions (disruption, delay, eviction,* ... *)* (0.164), followed by the approximately similar criterion *Length of the actual travel time in the vehicle* (0.161). The selection of the most important criterion indicates that people with disabilities face the greatest problems in emergencies, since organizing new, unplanned transport can be very difficult given their specific requirements. Figure 5 shows all other criteria in terms of their importance.

**Figure 5.** Final assessment of the Time criteria.

### 5.4.5. Custom Care

In the *Assistance* subgroup, the criterion *Assistance to*/*from connecting services (arrival, departure, buying tickets, moving)* (0.172) was selected. This tells us that the existing way of serving people with disabilities at railway stations is insufficient or does not meet basic requirements.

In the *Service* subgroup, the criterion *Customer Service (o*ffi*ce, website*, *contact telephone, complaint handling, etc.)* (0.175) was selected. In addition to the previous criterion, currently available services are not able to respond to all needs of people with disabilities.

In the *Sta*ff subgroup, the criterion *Assistance provision for disabled persons and persons with reduced mobility* (0.213) was selected. The observation for this criterion is the same as for the first subgroup.

In the *Ticket* subgroup, the criterion *Better prices and benefits* (0.177) was selected. This selection tells us that the population of people with disabilities, which is known to have much less income or financial assistance, is not able to achieve a sufficient number of rides with the discounts currently available.

In the *Cleanliness and Maintenance* subgroup, two criteria with the same significance, *Maintenance and Vehicle Safety* and *Cleanliness of the toilet in the station* (0.137), were selected. It can be said that these two criteria would not differ from those of people without disabilities, since they reflect the existing situation in the transport system. In Figure 6, all other criteria are shown by subgroups according to their significance.

**Figure 6.** Final assessment of the Custom care criteria.

### 5.4.6. Information

In the *Ticket office and machines* subgroup, the criterion *Universal guidelines for movement over stations* (0.148) was selected. The criterion of approximately similar importance is *Availability of information on accessibility of stations on the Internet* (0.140). This shows that there is insufficient awareness of the necessary guidelines for the area of railway stations and the facilities provided there. The second criterion indicates that the use of the Internet is very useful for people with disabilities.

In the *Facilities* subgroup, the criterion *Updated, precise and reliable information on vehicles (operating hours, stops, service interruptions, etc.)* (0.175) was selected. Of approximately similar importance is *Information available through other communication technologies (internet, phone, mobile applications, etc.)* (0.172). Insufficiently updated information is a major problem for people with disabilities: they must plan their movement extensively and, given the poor transport, any cancellation or unforeseen situation presents a major difficulty. The new technologies already mentioned are therefore among the advantages that can provide sufficient information for travel planning.

In the *Understandable* subgroup, the criterion *Ease of understanding information in the booking confirmation* (0.173) was selected. Providing information in the present way is not useful enough for people with disabilities, which in most cases can be confusing to them. In Figure 7, all other criteria according to their importance are shown by subgroups.


**Figure 7.** Final assessment of the Information criteria.

### 5.4.7. Comfort

In the *Ambient* subgroup, the criterion *Drinking water and sanitation* (0.185) was selected. *Air-conditioning in the vehicle (0.170)* was selected as the second most important criterion.

In the *Comfortable* subgroup, the criterion *Comfort of intercity vehicles* (0.174) was selected.

Observing this criterion, it can be concluded that existing vehicles do not meet the basic criteria related to persons with disabilities. Figure 8 shows all other criteria according to their significance.

**Figure 8.** Final assessment of the Comfort criteria.

### 5.4.8. Environmental Impact

In the *Environmental impact* group, only the main group was observed. The most important criterion is *Unpleasant smell* (0.176). This criterion describes the existing situation, indicating that there is no adequate care for hygienic conditions at certain railway stations. Figure 9 shows all other criteria according to their importance.

**Figure 9.** Final assessment of the Environmental impact criteria.

### **6. Conclusions**

The use of public transport by persons with disabilities is a basic precondition for increasing their mobility and for better including this population in all regular activities, increasing their possibilities of communication and meeting all social needs. The lack of appropriate statistical data, the inadequate number of relevant studies on the position of people with disabilities in traffic and, despite many adopted regulations, the inconsistency of their practical application all point to the slow improvement of accessible transport infrastructure and means of transport. The insufficient consideration of the needs of persons with disabilities means that transport in Serbia is still inaccessible to a large part of this population.

The contribution of this paper is that, for the first time in Serbia, a comprehensive survey of persons with disabilities and their perceptions regarding the assessment of the service provided in rail transport has been carried out. The developed Full Consistency Method and Rough Power Heronian aggregator are used to evaluate criteria in the area of passenger transport by using the EN 13816 standard. The applied criteria are adapted for people with disabilities. An additional contribution of this model is that it avoids relying on the socioeconomic characteristics of decision-makers, especially since persons with disabilities are highly dependent on finances and their incomes, pensions or assistance are very low.

The three most important main criteria are *Accessibility*, *Availability* and *Security*. This research has confirmed the practical situation that the accessibility of the rail passenger system is the most important criterion in the assessment of the main groups. Therefore, this is the first place where it is necessary to intervene before an increase in the number of persons with disabilities in public rail transport can be expected. In the *Accessibility* subgroup, the most important criteria describe the need to solve environmental problems, through the criteria *Ramps must be adopted with adequate slopes*, *Free accessibility to all media*, *Accessibility input*/*output equipment in stations or stops* and *Accessibility of tickets sales points*. In the second main group, the need for accuracy in the departures/arrivals of public transport at railway stations, a sufficient number of seats in railway vehicles and available information is highlighted. In the *Security* group, the required stability of vehicles is indicated.

In the other main groups, the following criteria should be emphasized: for *Time* (most importantly, *Travel time information in abnormal conditions: disruption, delay, evacuation*), for *Customer care* (most significantly, *Assistance to*/*from connecting services: arrival, departure, buying tickets, moving*, *Customer service: office, website, contact telephone, complaint handling and so forth*, *Assistance provision to disabled people, better prices and benefits, maintenance and vehicle stability*, *Cleanliness of the toilet in the station*) and for *Information* (emphasizing the absence of *Universal guidelines for movement over stations*, *Update precise and reliable information on vehicles: operating hours, stops, service interruptions and so forth* and *Easy to understand information in the booking confirmation*). In the main groups *Comfort* and *Environmental Impact*, responses are approximately similar to those of people without disabilities. The *Ticket* subgroup also indicates that the existing number of rail transport facilities is insufficient and that it is necessary to introduce new facilities for both disabled persons and their companions.

This way of tracking data can help decision-makers at both local and regional levels to monitor the service provided to persons with disabilities. On the other hand, in addition to a unique proposal of criteria that must be considered, the selection of criteria can be expanded and monitored by associations of people with disabilities. Taking into account the research results presented above, we can summarize the following advantages of the multi-criteria model: (1) the MCDM approach based on RNs uses exclusively the internal knowledge contained in crisp numbers; (2) the proposed model eliminates the defects of the traditional fuzzy approach, which implies a subjective definition of the limits of interval numbers; (3) the interval limits of RNs do not depend on subjective assessment but are defined on the basis of data imprecision; (4) the hybrid RN multi-criteria model provides flexible decision-making and takes into account the interaction between decision attributes; (5) the model examines the interrelationship between attributes and eliminates the impact of extreme values that can undermine the rationality of traditional models.

The planned research will be presented to associations of people with disabilities with the aim of implementation and use in the regular annual survey of passengers in rail passenger traffic.

Future research in this area and the provision of more detailed information may refer to the ranking of criteria by processing data in different environments (regional, urban or rural) and individually for certain types of disabilities. Such research can bring new more detailed conclusions and point out the needs for certain types of equipment, service or the characteristics of the rail passenger system. In addition, with certain adjustments, the new model can also be applied to other modes of transport.

**Author Contributions:** Conceptualization, G.S.; Data curation, A.V.; Investigation, D.Đ.; Methodology, Ž.S., D.P. and V.M.; Project administration, A.V.; Validation, D.P.; Writing—original draft, D.Đ.; Writing—review & editing, G.S. and Ž.S.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors would like to thank all associations of people with disabilities in Serbia.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Appendix A**

**Table A1.** Main group Availability.

**Table A2.** Main group Accessibility.

**Table A3.** Main group Information.

**Table A4.** Main group Time.

**Table A5.** Main group Customer care.

**Table A6.** Main group Comfort.

**Table A7.** Main group Security.

**Table A8.** Main group Environmental impact.


### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Ranking of Heritage Building Conversion Alternatives by Applying BIM and MCDM: A Case of Sapieha Palace in Vilnius**

### **Miroslavas Pavlovskis, Darius Migilinskas, Jurgita Antucheviciene \* and Vladislavas Kutut**

Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania

**\*** Correspondence: jurgita.antucheviciene@vgtu.lt; Tel.: +370-685-37036

Received: 3 July 2019; Accepted: 17 July 2019; Published: 1 August 2019

**Abstract:** A balance (symmetry) between socio-cultural and socio-economic benefits, as part of the economic, social, and cultural development policy of each city and country, should be assured when converting built heritage. To anticipate building conversion priorities and opportunities, modern technologies can be employed. However, the reconstruction of heritage buildings is currently the part of the construction domain in which modern digital technologies are least applied. Therefore, photogrammetry and the 3D modeling of existing heritage buildings were suggested. A case study of Sapieha Palace, built in the Baroque style in 1689–1691 in Vilnius, Lithuania, was explored in this research. The applied technologies and software (Agisoft Photoscan, Autodesk ReCap and Autodesk Revit) allowed for the creation of a high-quality and accurate model involving both the textured exterior of the building and the interior layout. In addition, the valuable features of the building were identified and marked in a three-dimensional digital model. Based on the model, the authors formulated possible conversion alternatives for the building and identified the associated decision-making criteria, as well as determined their relative significance by an expert survey method. The authors suggested the application of a multiple criteria decision making (MCDM) method under uncertainty, namely the rough weighted aggregated sum product assessment (WASPAS), for ranking alternatives according to multiple criteria. Therefore, the suggested integration of modern digital technologies and decision-making models helps to assure rational conversion decisions for built cultural heritage based on high-accuracy data, as well as contributing to the sustainable development of engineering processes.

**Keywords:** heritage building; photogrammetry; 3D modelling; MCDM; Rough WASPAS; expert survey

### **1. Introduction**

In the modern conditions of commercialization, the problems of reconstruction and restoration of historic buildings have become especially important. Parts of historic monuments have been disappearing because real estate developers eagerly demolish existing buildings and build new ones, since this is usually faster and cheaper than reconstructing the old ones. Moreover, this is not only due to the greed of new owners, but also because of the complexity of conversion processes. However, lately a so-called global real estate revolution has been transforming the urban landscape everywhere. Development and redevelopment projects have become intermixed in real estate construction [1]. While in previous years the practical challenges of promoting built heritage protection and sustainable urban development were conceived of as discrete, contemporary conceptualizations and policies view the two as interrelated and mutually reinforcing [2].

Cultural heritage is an important cultural asset passed on to the state and its inhabitants over several generations. Heritage diversity and peculiarity are linked to the state's history. Heritage means what is inherited, both materially and spiritually. Material cultural heritage comprises ancient objects and their surroundings surviving from earlier times. Such artifacts or places have historical, archaeological, ethnological, mythological, memorial, religious, architectural, urban, artistic, and scientific value. Material cultural heritage can be broadly divided into movable and immovable heritage [3].

The built cultural heritage is also understood as real estate, i.e., the immovable material property that was created during the construction process, together with everything related to construction activity, and is now used by the public. Heritage objects can be classified into several groups: buildings (residential houses, industrial buildings, special purpose buildings, etc.); objects of urban heritage (historical parts of the city, old towns, their districts, small towns); sites of historical events, as well as buildings and related objects associated with famous state persons, writers, artists, scientists, or the history of science and technology; and works of monumental, applied and decorative fine art [3,4].

The built cultural heritage has a unique educational and socio-economic benefit. Heritage may be used for promoting urban sustainability [5], thus managing and using heritage buildings creates new jobs and resources for the economy. In addition, cultural heritage forms an exceptional benefit associated with the socio-cultural and socio-economic good, as part of the economic, social, and cultural development policy of each country [6].

The use and maintenance of cultural heritage objects must be beneficial not only socially, but also economically. Ideally, cultural heritage objects should be continually used for their primary purpose, especially by small-scale enterprises working with traditional methods. In this case, the historic buildings would become the company's brand. Naturally, facilities should be upgraded in heritage sites. Engineering communications need investment in the energy efficiency improvement of a building, and sometimes site cleaning from soil contamination is needed. Nevertheless, the valuable properties of cultural heritage would be preserved [4].

To anticipate building development/redevelopment priorities and opportunities, modern technologies can be employed. However, the restoration and reconstruction of historic buildings is currently the area of construction where modern digital technologies are least applied. Nowadays, building information modelling (BIM) technology is usually applied to new buildings. Due to the specific nature of the tasks to be solved and the lack of spatial data for the use of BIM, there are not many examples of its application to existing buildings, but this is a rapidly expanding area in the context of research and professional practice. The documentation of 3D historical buildings (HB) is a necessity for the preservation and proper management of built heritage [7–9].

The use of BIM technologies in the renovation and restoration processes of existing historic buildings is called historical building information modeling (HBIM) [10] or heritage building information modelling (HBIM) [11]. In general, the use of BIM technologies for historical, cultural heritage, and previously reconstructed old buildings is similar, differing only in the level of available information and potential intervention. The same information technology and the same equipment are used: advanced computing equipment, the latest data processing software packages, high-precision laser scanning devices, high-definition and photogrammetric capture, reliable positioning and precise orthophotogrammetry/aerial photography techniques (airplanes and drones). However, it is not enough to have the most advanced and accurate equipment; sufficient competence, skills and qualifications are also necessary.

When initial documentation (3DHB) is prepared, a question arises how to select the most efficient sustainable development strategy of a building or urban territory. Making research or performing construction or building maintenance works, the question often arises as to how to make effective and rational decisions to solve technological and management tasks by simultaneously evaluating alternative solutions according to several contradictory criteria. In search of the best alternative to possible solutions, multiple criteria decision making (MCDM) methods are increasingly being used to solve theoretical and practical problems, as the large number of indicators and alternatives that are evaluated make the formal decision-making process insufficient. Multiple criteria decision-making methods are a way of simultaneously evaluating several alternatives according to several often-conflicting performance indicators. A key feature of MCDM methods is that their solution cannot be the best for all criteria. A result is appropriate for all criteria but not ideal for each criterion individually. There are many methods for solving multi-criteria problems that differ by solved problems and their complexity, by the type of data used, or by the number of individuals involved in the decision-making process. Each method has its own advantages and disadvantages, an algorithm, and highlights a different aspect of the analyzed object or situation. A comprehensive overview of MCDM methods and their application for engineering and sustainability issues can be found in several review publications [12–16].

The authors presented one of the first attempts to apply BIM and MCDM to building redevelopment possibilities in 2016 [17]. The weighted aggregated sum product assessment method with grey attribute scores (WASPAS-G) was applied for ranking redevelopment alternatives of a currently inactive factory. In the current research, the authors focus on the preparation of a precise 3D model of an existing historical building by applying aerial photography techniques. Next, they suggest the application of the novel Rough WASPAS method, developed in 2018, for ranking alternatives under uncertain conditions.

The paper is structured in five sections. In the first section, introductory considerations about the concept of built heritage, the importance of its preservation and the need to apply HBIM as well as MCDM for effective HB development are provided. In the second section, a literature review is carried out, presenting the preparation peculiarities of 3DHB, the application of MCDM for HB, and demonstrating the application of the Rough WASPAS method in different areas. The third section presents the research methodology, consisting of the principal scheme of the research, the 3DHB preparation steps and the Rough WASPAS approach. The fourth section presents a case study of Sapieha Palace: 3D modelling results, a description of possible redevelopment alternatives, the development of the criteria system and alternative ranking results. The fifth section contains concluding remarks.

### **2. Literature Review**

A short literature review is carried out presenting 3DHB preparation peculiarities, the application of MCDM for HB, and the application of the WASPAS method under an uncertain environment, including Rough WASPAS.

### *2.1. Application of 3D Modelling and BIM Technologies to Built Heritage*

The study of built heritage requires a lot of analytical work with archival and design documentation and old photographs. The result of digitization of this material is the creation of databases of parametric objects, which include volumetric models of architectural monuments and sets of their elements, as well as general information about the object: description, photos, cartographic information, etc. Archives of such databases allow an increase in search speed and accuracy in order to obtain reliable information about the object.

Ilter and Ergen [18] divide research of existing buildings into the following stages:


Geometric and topographical information has to be collected to create a BIM model for a building under reconstruction. If a reliable data acquisition technique can provide a fully informative BIM model, information about the existing building can be used for document management or other tools [19].

In practice, this process begins with data collection and processing, e.g., by laser scanning or photogrammetry; using structural details and construction materials from historical and architectural books, databases of object elements are created. The correlation of parametric objects with laser scanning data allows for the creation of the final virtual 3D model of a historic building. This detailed 3D model can provide most of the information about the object and its elements, such as the building materials used, operating cost reduction schedules, energy consumption, visualizations, drawings, section planes, etc. [20].

According to Dore and Murphy [10], HBIM can simply be described as a system designed for modeling built heritage objects from remotely obtained data (laser scanning or photogrammetry) using BIM software. Meanwhile, Reinoso-Gordo et al. [11] observe that the HBIM development process requires additional features that are lacking in standard BIM software packages, such as tools to model highly complex and irregular shapes, sloped walls or variable geometry elements. HBIM is a new approach to historical building modeling, which consists of a library of parametric objects based on historical architectural data.

Each historic building has its own architectural identity and valuable qualities, original or emergent due to historical changes over time. It is a feature or an element of a building that is valuable from an ethnic, historical, aesthetic or scientific point of view. By modeling a building, valuable properties can be divided into the following groups:


HBIM can accordingly be divided into three major sub-models: structural, architectural and engineering [11]. Assigning the valuable properties of a building to the appropriate sub-model helps to properly maintain and manage the building.

With regard to the use of built cultural heritage, the value of HBIM for historical and architectural monuments must be appreciated. In addition to all the above benefits, BIM provides new opportunities for the participants in the construction project to monitor and research [8,21–23]:


The main reason that determines the limited information modeling of historical buildings is the cost of the procedures. Even when large construction companies use BIM in their projects, small businesses do not have enough resources to invest in HBIM when working with historic buildings [23,24].

### *2.2. MCDM for Built Heritage*

Construction heritage is an important part of social, economic, historical, architectural and cultural uniqueness in many countries. Historical buildings differ from other buildings in two main ways, which can affect their conservation or use:

(1) Physical properties: these buildings may have complex and unusual geometry, may be built of unconventional materials, can be diverse in composition, often have no insulation, and use passive and natural ventilation.

(2) Principles of preservation: the reconstruction of built heritage is regulated by established conservation principles and procedures, which require protection of the historical value and characteristics of a building [25].

Because of these differences, revitalization technologies used in present-day buildings can be useless for and may cause damage to traditionally constructed buildings, leading to loss of cultural heritage. As the refurbishment of heritage buildings is described by multiple criteria, multi-criteria decision making (MCDM) methods can be applied to find rational solutions for construction heritage revitalization or conservation [26].

MCDM methods suggested for built heritage decisions are becoming applicable to a growing range of problems in civil engineering, construction and building technology. After analyzing Clarivate Analytics Web of Science publications, it can be stated that the topic of heritage buildings is widely analyzed in the engineering civil, construction building technology, materials science multidisciplinary, architecture, computer science interdisciplinary applications and computer science categories [26].

MCDM methods for heritage buildings are most often used for: the evaluation of historic building revitalization [27–29], planning the reconstruction of a historical building [30], and cultural heritage preservation, renovation and adaptation to the social needs of the population [31].

The most commonly used criteria systems for the evaluation of asset refurbishment solutions are [14,27–29,31–36]:


The three most often used MCDM methods in the area are the analytic hierarchy process (AHP), the analytic network process (ANP) and fuzzy Delphi. Experts' knowledge is also used for assessment of cultural heritage value and selection of optimal alternative for its conservation or refurbishment [26].

Therefore, the directions of built heritage research applying MCDM were established as follows: evaluating historic building reconstruction alternatives, examining the reuse adaptability of each case, and researching the possibility of using a particular solution in the future for different objects of built heritage. It was also established that MCDM is practically never used contemporaneously with BIM or HBIM technology.

### *2.3. Application of WASPAS Method under Uncertain Environment*

The crisp WASPAS method was proposed by Zavadskas et al. in 2012 [37]. Since then, its applications have been comprehensively reviewed in a number of publications, including the research of Stojić et al. [38].

In addition, several extensions of the method under an uncertain environment have been proposed recently. Two basic extensions, WASPAS with grey numbers (WASPAS-G) and WASPAS under a fuzzy environment (WASPAS-F), were developed in 2015 [39,40]. The WASPAS method with interval type-2 fuzzy sets was developed in 2016 [41]. Integration of WASPAS and single-valued neutrosophic sets is suggested as well as applied in [42,43]. Zavadskas et al. [44] developed a combination of the WASPAS method with interval-valued intuitionistic fuzzy numbers (WASPAS-IVIF). A paper by Nie et al. proposing the WASPAS method with interval neutrosophic sets was published in 2017 [45].

Many research papers apply crisp or uncertain WASPAS, or the method in combination with other MCDM methods in the domain of civil engineering. Several groups of problems can be identified: location or site selection [39,43,44,46]; energy supply, renewable energy sources, optimal indoor environment, nearly zero-energy buildings [47–51]; logistic problems and supplier selection [41,52,53]; contractors or personnel selection [40,54].

A single paper is related to assessing building redevelopment possibilities by applying WASPAS-G [14].

As regards the assessment of relative criteria weights, WASPAS is most often applied in combination with AHP [14,39,55–58] and SWARA [40,59,60].

The newest development of the method, with rough numbers (Rough WASPAS), was suggested by Stojić et al. in 2018 [38]. It does not yet have many applications [61,62]; therefore, it is worth further investigation.

### **3. Research Methodology**

The research methodology is presented, consisting of the principal scheme of the research, the 3DHB preparation steps and a detailed Rough WASPAS approach.

### *3.1. Principal Block-diagram of the Research*

It is important to choose the most appropriate solution for a specific task when performing construction heritage maintenance or planning particular management work. Multiple criteria decision-making (MCDM) methods help to solve such a problem, but they require accurate and reliable information. The information can be obtained from documentation or from the BIM model, which also requires relevant data. Part of the information is obtained from sensor readings, other measurements or test results. Therefore, the goal of all participants implementing the project is to collect, process and analyze data properly.

The suggested model of the operation and management of heritage buildings is shown in Figure 1. Model elements are grouped into four blocks according to their attributes. Blocks with a green outline represent historical data collection tools. Blocks bounded by a blue line reflect the output data obtained using the appropriate tool. Red blocks name the methods and technologies proposed in the research for the operation and management of heritage buildings. Purple blocks indicate the result or product expected to be obtained.

The model can be used to determine which methods, tools and technologies are best suited to a particular phase of the research to solve the tasks of extending a lifetime of a heritage building, ensuring integrated data collection and comprehensive analysis of objects. The model shows a flow of information, and a data exchange cycle is constantly taking place to select an optimal solution. The model also clearly demonstrates that optimal results can only be achieved by using all of the methods and tools together.

**Figure 1.** Model of operation and management of heritage buildings applying BIM and MCDM.



### *3.2. Preparation of 3DHB Model*

Summarizing global practice, one can distinguish the following HBIM data collection and processing steps that are needed to successfully use BIM technologies for historic and cultural heritage buildings:


The proposed steps of the Rough WASPAS method are presented according to Stojić et al. [38]:

Step 1: Description of a problem consisting of *m* alternatives and *n* criteria.

Step 2: Selection of *k* experts. Experts need to evaluate the alternatives according to all the criteria using the linguistic scale as presented in Table 1 [38,63].


**Table 1.** A scale for evaluation of alternatives in terms of criteria [38].

Step 3: Conducting the expert survey and filling in the expert evaluation matrices.

Step 4: Converting the individual matrices of experts *k*1, *k*2, . . . , *kn* into a rough group matrix (RGM):

$$RGM = \begin{bmatrix} \left[x_{11}^{L}, x_{11}^{U}\right] & \left[x_{12}^{L}, x_{12}^{U}\right] & \cdots & \left[x_{1n}^{L}, x_{1n}^{U}\right] \\ \left[x_{21}^{L}, x_{21}^{U}\right] & \left[x_{22}^{L}, x_{22}^{U}\right] & \cdots & \left[x_{2n}^{L}, x_{2n}^{U}\right] \\ \vdots & \vdots & \ddots & \vdots \\ \left[x_{m1}^{L}, x_{m1}^{U}\right] & \left[x_{m2}^{L}, x_{m2}^{U}\right] & \cdots & \left[x_{mn}^{L}, x_{mn}^{U}\right] \end{bmatrix} \tag{1}$$
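The excerpt does not spell out how the individual crisp expert scores are turned into the rough intervals of Equation (1). A minimal sketch of the standard rough-number construction used in this literature (the lower limit of a judgment is the mean of all judgments not exceeding it, the upper limit the mean of all judgments not below it; the function name `rough_interval` and the sample scores are our illustrative assumptions, not values from the paper) could look as follows:

```python
import numpy as np

def rough_interval(judgments, x):
    """Rough boundary interval of judgment x within a set of expert
    judgments: lower limit = mean of all judgments <= x,
    upper limit = mean of all judgments >= x."""
    j = np.asarray(judgments, dtype=float)
    return j[j <= x].mean(), j[j >= x].mean()

# Four experts score one alternative on one criterion as 3, 4, 4, 5.
scores = [3, 4, 4, 5]
intervals = [rough_interval(scores, s) for s in scores]

# The group entry [x^L, x^U] of Eq. (1) is then commonly obtained by
# averaging the individual rough intervals.
xL = sum(lo for lo, _ in intervals) / len(intervals)
xU = sum(hi for _, hi in intervals) / len(intervals)
print(intervals, (xL, xU))
```

Note how divergent judgments widen the interval automatically: the interval limits come from the dispersion of the data itself, with no subjectively chosen membership bounds, which is exactly the advantage of rough numbers claimed in the text.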

Step 5: Normalization of the matrix:

$$n_{ij} = \frac{\left[x_{ij}^{L}, x_{ij}^{U}\right]}{\max\left[x_{ij}^{+L}, x_{ij}^{+U}\right]} \text{ for } C_1, C_2, \dots, C_n \in B,\tag{2}$$

$$n_{ij} = \frac{\min\left[x_{ij}^{-L}, x_{ij}^{-U}\right]}{\left[x_{ij}^{L}, x_{ij}^{U}\right]} \text{ for } C_1, C_2, \dots, C_n \in C,\tag{3}$$

where *B* represents a set of benefit criteria and *C* represents a set of cost criteria.

Step 6: Obtaining a weighted normalized matrix by multiplying the normalized matrix (Equation (4)) by the weight of each criterion *wj*:

$$NM = \begin{bmatrix} \left[n_{11}^{L}, n_{11}^{U}\right] & \left[n_{12}^{L}, n_{12}^{U}\right] & \cdots & \left[n_{1n}^{L}, n_{1n}^{U}\right] \\ \left[n_{21}^{L}, n_{21}^{U}\right] & \left[n_{22}^{L}, n_{22}^{U}\right] & \cdots & \left[n_{2n}^{L}, n_{2n}^{U}\right] \\ \vdots & \vdots & \ddots & \vdots \\ \left[n_{m1}^{L}, n_{m1}^{U}\right] & \left[n_{m2}^{L}, n_{m2}^{U}\right] & \cdots & \left[n_{mn}^{L}, n_{mn}^{U}\right] \end{bmatrix} \tag{4}$$

$$V_n = \left[v_{ij}^{L}, v_{ij}^{U}\right]_{m \times n},\tag{5}$$

$$v_{ij}^{L} = w_{j} \times n_{ij}^{L}, \quad v_{ij}^{U} = w_{j} \times n_{ij}^{U}, \quad i = 1, 2, \dots, m; \; j = 1, 2, \dots, n.$$

Step 7: Calculating weighted sum result for each alternative:

$$Q_i = \left[q_{i}^{L}, q_{i}^{U}\right]_{1 \times m},\tag{6}$$

$$q_{i}^{L} = \sum_{j=1}^{n} v_{ij}^{L}; \quad q_{i}^{U} = \sum_{j=1}^{n} v_{ij}^{U}.$$

Step 8: Calculating a weighted product result:

$$P_i = \left[p_{i}^{L}, p_{i}^{U}\right]_{1 \times m},\tag{7}$$

$$p_{i}^{L} = \prod_{j=1}^{n} \left(v_{ij}^{L}\right)^{w_j}; \quad p_{i}^{U} = \prod_{j=1}^{n} \left(v_{ij}^{U}\right)^{w_j}.$$

Step 9: Calculating values of each alternative *Ai*:

$$A_i = \left[a_{i}^{L}, a_{i}^{U}\right]_{1 \times m},\tag{8}$$

$$A_i = \lambda \times Q_i + (1 - \lambda) \times P_i,$$

where coefficient λ is calculated:

$$\lambda = \frac{\sum P_i}{\sum Q_i + \sum P_i} = \frac{\sum\left[p_{i}^{L}, p_{i}^{U}\right]}{\sum\left[q_{i}^{L}, q_{i}^{U}\right] + \sum\left[p_{i}^{L}, p_{i}^{U}\right]}.\tag{9}$$

Step 10: Ranking of the alternatives according to the values *Ai*.
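Steps 5–10 above can be sketched numerically. The interval arithmetic below follows Equations (2)–(9); the example rough group matrix, the criteria weights, and the midpoint scalarization used for the final ranking are illustrative assumptions of ours, not data from the paper:

```python
import numpy as np

# Rough group matrix for m = 2 alternatives and n = 2 criteria; each entry
# is an interval [L, U], stored as two arrays (values are illustrative).
xL = np.array([[3.0, 3.5], [4.0, 2.5]])   # lower limits
xU = np.array([[4.0, 4.5], [5.0, 3.5]])   # upper limits
w = np.array([0.6, 0.4])                  # criteria weights, summing to 1
benefit = np.array([True, True])          # benefit vs. cost criteria

# Step 5: normalization (Eqs. (2)-(3)): for benefit criteria divide by the
# largest upper limit in the column; for cost criteria divide the smallest
# lower limit by the entry (interval division flips the bounds).
nL, nU = np.empty_like(xL), np.empty_like(xU)
for j in range(xL.shape[1]):
    if benefit[j]:
        nL[:, j] = xL[:, j] / xU[:, j].max()
        nU[:, j] = xU[:, j] / xU[:, j].max()
    else:
        nL[:, j] = xL[:, j].min() / xU[:, j]
        nU[:, j] = xL[:, j].min() / xL[:, j]

# Step 6: weighted normalized matrix (Eq. (5)).
vL, vU = w * nL, w * nU

# Steps 7-8: weighted sum Q_i and weighted product P_i (Eqs. (6)-(7)).
qL, qU = vL.sum(axis=1), vU.sum(axis=1)
pL, pU = np.prod(vL ** w, axis=1), np.prod(vU ** w, axis=1)

# Step 9: aggregation coefficient lambda (Eq. (9)); the interval sums are
# scalarized here by adding both bounds, an implementation choice.
lam = (pL + pU).sum() / ((qL + qU).sum() + (pL + pU).sum())

# Step 10: final interval values A_i (Eq. (8)), scalarized by their
# midpoint, then ranked best-first.
aL, aU = lam * qL + (1 - lam) * pL, lam * qU + (1 - lam) * pU
final = (aL + aU) / 2
ranking = np.argsort(-final)  # indices of alternatives, best first
print(ranking, final)
```

With these sample numbers the second alternative dominates the first in every interval bound, so it ranks first regardless of how the final intervals are scalarized.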

### **4. Case Study and Research Results**

In this section, a historical description of the case study, Sapieha Palace in Vilnius, is provided. Its valuable features are distinguished, and conversion alternatives preserving these valuable features are suggested. A criteria system for the evaluation of potential conversion alternatives is proposed. Calculations applying the Rough WASPAS multiple criteria decision-making method are made and the alternatives are ranked.

### *4.1. Description of the Case Study: Sapieha Palace*

Sapieha Palace and its surrounding park form the only Baroque palace and park ensemble remaining in Lithuania. It is situated in the Antakalnis district of Vilnius, the capital of Lithuania. Not much is known about the history and architecture of the Sapieha Palace, but it is well established that a wooden palace already stood on this site in the early 17th century. As Kirkoras [64] wrote, "according to a legend, the Sapieha Palace was built from a Pagan temple or from ruins of Pantheon of all Gods of Lithuania. . . ".

The surviving palace and park complex was built in 1689–1692 to the design of architect Giovanni Battista Frediani and was funded by the Polish prince and Grand Hetman of Lithuania Jan Kazimierz Sapieha the Younger (1642–1720). The palace was decorated by Giovanni Pietro Perti, with frescoes by Michelangelo Palloni. It was subsequently reconstructed several times [64,65].

The construction of the Antakalnis Palace is related to the dynastic plans of Kazimieras Jonas Sapieha and to the visual representation and demonstration of his social position. The marriage of Sapieha's son Alexander to Maria Catherina de Bethune, the daughter of King Louis XIV's envoy to Poland, may have been held in the palace in 1691. It is also mentioned that Kazimieras Jonas Sapieha, who had been excommunicated by the Bishop of Vilnius, Konstantin Kazimierz Bzostovsky, held a feast in this palace for his supporters, with music and cannon salvos, in 1694 [64,65].

The oldest plan of the complex comes from the Vilnius city plan of the beginning of the 18th century, known as G. Fiurstenhof's plan. It dates back to 1737 and gives a rather detailed picture of that time (Figure 2). An important part of the complex were the springs outside the palace. Two ponds were installed there, from which water flowed into the palace and the park, feeding the fountains [66].

**Figure 2.** Extract from G. Fiurstenhof's Vilnius city plan [67].

In 1794, during the uprising and its suppression, a Russian army was quartered at the Sapieha residence. The palace, the other buildings, and the park were severely damaged. The palace belonged to the Sapieha family until the end of the 18th century: in 1797, Franciskus Sapieha sold the palace, together with a land plot and forest, to the Kossakowski family, ending the Sapieha links with this estate. In 1806, the entire jurisdiction of Antakalnis was bought by state adviser Vaitiekus Puslovskis, who sold its main part, with the palace and the park, to the city of Vilnius in 1808 [65,68].

From the beginning of the 19th century, the former residence began to decay. Great losses were caused by the war of 1812, when the French army occupied not only the Sapieha residence but also a monastery and a church. The French army then used the premises of the palace as a hospital.

At the beginning of the 19th century, the Sapieha residence and its whole environment changed. When the residence was established, Antakalnis was a picturesque countryside covered with forest; by the beginning of the 19th century, the Sapieha estate was already surrounded by residential plots. By order of the Governor-General, a military hospital was established in the palace and the surrounding area in 1809. According to Kirkoras [64], a hospital, a hospital office and a church were opened on 15 February 1829, starting a new stage in the life of the palace and its surroundings. The project was completed in 1843, and during the 1843–1848 reconstruction the palace gained its present appearance. Figure 3 presents an 1830 drawing overlaid with the authentic architectural elements and the demolished elements that are intended to be restored according to the palace restoration project; it is presented according to architectural research from 2009 [64,65].

**Figure 3.** Sapieha Palace main façade: technical restoration project [65].

The situation in the area did not change substantially until World War II. From 1919 to 1927, the Stephen Báthory University hospital operated there. No major changes were implemented during that period; only some internal restructuring was carried out, adapting the premises to the needs of the time. In 1927–1928, the complex was adapted to the needs of the University's Ophthalmology Institute, which operated here until World War II. During World War II, the military hospital served the needs of the German army.

In 1945, after the war, the whole territory passed to the Soviet army. The area was fenced and became completely enclosed [66].

After the restoration of Lithuania's independence, in 1992 the Sapieha Palace was handed over to the Martynas Mažvydas National Library of Lithuania and, in 2005, to the State Property Fund. In 2010, the building was handed over to the Department of Cultural Heritage, and the ensemble's restoration plans and a palace restoration project were presented in 2011. So far, one project has been implemented at Sapieha Palace, entitled "Reconstruction and restoration of Sapieha Palace in Vilnius, stage I: Restoration of authentic Baroque volumes, masonry restoration, other exterior management works, covered galleries, roofs". In 2018, the Sapieha Palace was transferred to the Contemporary Art Center [69]. Nevertheless, the question of its complete reconstruction and restoration, as well as its effective contemporary use, has still not been resolved.

The conversion of a heritage building should preserve the valuable properties of the Sapieha residence. According to the Law on the Protection of the Real Cultural Heritage of the Republic of Lithuania, a valuable feature is a cultural heritage object, its location, part, or element that is valuable from an ethnic, historical, aesthetic, or scientific point of view.

According to the data of the Register of Cultural Property, the Sapieha residence is considered a cultural monument of national significance. Its valuable properties are architectural, historical and artistic.

The most valuable properties of the Sapieha Palace are grouped by attributes presented in Table 2. Table 2 is composed based on the data of the Department of Cultural Heritage [69].


### **Table 2.** The most valuable properties of the Sapieha Palace [69].



### *4.2. 3D modelling Results*

As BIM modeling is founded on the development of a 3D parametric model that is gradually updated during different stages using various methods [16,19] and technologies [24,70], the process of creating the 3D model is described below.

The digitalization of heritage buildings is influenced by several properties, such as [70,71]:


There is no definitive solution for how to model objects of built heritage, and the issue of 3D modelling cannot always be resolved using a single technique. The choice of the correct method depends on the kind of building and the level of detail of the BIM model to be created.

A particular heritage building modeling algorithm was created after researching various methods, technologies, and software. The phases of the presented Sapieha Palace model creation are as follows:

The first phase consisted of a survey of the existing building and the collection of archival "as built" data. We obtained and analyzed the building design, drawings, section planes, materials used, and documentation related to the repair works of the building for different periods from 1718 to 2012, when the last restoration of the palace took place.

During the second phase, the photographic survey, we used the DJI Mavic Pro drone with a DJI FC220 digital camera featuring a three-axis stabilization system and a navigation system for positioning. To limit hidden areas, the building was photographed in a circle at different levels.

The third phase was devoted to creating a dense point cloud. For this, all 339 photos taken with the drone were used. For photo processing, Agisoft PhotoScan software was applied.

The point cloud was then converted and imported into Autodesk Revit software (Figure 4).

**Figure 4.** Point cloud converted with Autodesk ReCap software.

During the fourth phase, the point cloud was combined with the drawings (Figure 5).

**Figure 5.** Drawings and point cloud combined in Autodesk Revit software.

After that, the floors, walls and roof of the palace were modeled. All elements were parameterized, and information about materials, functions and valuable properties was entered (Figure 6).

After the modeling phase, a high-quality and accurate 3D model involving both the textured exterior of the building and the interior layout was obtained. It is further used to determine possible conversion alternatives and for building life cycle simulations.

**Figure 6.** Floors modelling in Autodesk Revit software.

### *4.3. Description of Possible Redevelopment Alternatives*

An improving economic situation and increasing quality of life create preconditions for the development of culture and art, and promote the adaptation of abandoned buildings of historic or architectural value, such as the Sapieha Palace, to the needs of the local population, municipality, science or business.

Alternative conversion variants should be prepared considering the authenticity of the building. Cultural heritage is a witness to history, and its authenticity is particularly important. Authenticity is a very broad concept that encompasses the historical features of an object, its functions, artistic forms and composition, materials and constructions, as well as the environmental aspect. Preserving authenticity is one of the most important considerations in determining the conversion value of a built heritage object.

The authors suggest three possible conversion alternatives for Sapieha Palace.

The first alternative (*A*1) involves the establishment of a Tourist Information Center with a permanent museum. The Sapieha Palace would become a center of cultural knowledge, the purpose of which is to acquaint the public with the museum's values and collections, to organize permanent and temporary exhibitions, to prepare museum-promoting publications and catalogs of museum values, and to work on education. The history of the Sapieha Palace allows different periods to be covered and the historical dynamics of cultural and artistic development to be revealed. The Baroque ensemble of the Sapieha Palace and the frescoes remaining in the palace are a great starting point for creating educational stories about the values of culture and society in various contexts and periods.

The second alternative (*A*2) constitutes an option for the conversion of the Sapieha Palace into a Research Institution, the main tasks of which would be to explore the history of heritage, other areas of architecture, and contemporary trends in science and education. Currently, increasing attention is being paid to the links between different research disciplines. Therefore, this new institution should reflect this trend and contribute to modern education.

The third alternative (*A*3) entails the adaptation of the Sapieha Palace for a hotel with a conference center. So-called business tourism is popular worldwide. Countries with an advanced conference center system attract many international events, meetings, and conferences, which hundreds of international business representatives usually attend. However, large conference centers are not abundant in Lithuania. Therefore, this alternative has real demand, especially because the Sapieha Palace is well suited to the purpose of a hotel.

The building is raised on a high base; the main portal leads to a two-sided staircase that creates a solemn mood. Almost all the rooms on the second floor of the palace communicate with each other. The large hall, decorated with stucco mouldings and painted ceilings, stands out in particular. Paintings, mouldings, ornamental stoves and fireplaces are also found in other rooms. Such rich decoration on the outside of a palace was not a frequent phenomenon; it expresses the representative nature of the building and demonstrates the taste and financial capacity of the palace owner. Up to the present day, the majority of the second-floor window decoration has been preserved. The motifs, elements and details of the park gate decorations attest to the high artistic level of the palace's furnishings. Overall, the beauty of the Sapieha Palace and the area, equipped with the right infrastructure, provide an opportunity for a very high standard hotel with a conference center.

### *4.4. Criteria System*

The most commonly used criteria systems for the evaluation of built heritage preservation and conversion solutions were discussed in Section 2.2. Seven main groups of criteria were distinguished, each consisting of 4–5 criteria [27–29,31–36].

As our research is aimed at enhancing sustainable development, the three main sub-systems of sustainability are evaluated: the economic value of changes, the impact on the natural environment and the influence on the social environment. As our case study is built heritage and priority is given to its proper preservation, the historical–cultural value criteria group is included. Finally, since building reconstruction works are analyzed, the technological–architectural criteria group certainly needs to be involved.

The suggested criteria system, consisting of five groups of criteria *G*1–*G*5 and a number of criteria *Xj*, *j* = 1, ... , *m*, is presented in Figure 7.


**Figure 7.** Criteria system for assessing built heritage conversion solutions.

Part of the criteria are benefit (maximizing) criteria, i.e., a larger value is better, and part are cost (minimizing) criteria, i.e., a lower value is better.

The first group of criteria, economic benefit/expenses of changes (*G*1), consists of four criteria. Three of them are cost criteria: investment in investigation and research, design, and reconstruction work. The peculiarity of our heritage building conversion problem requires including investment in investigation and research alongside the usual investments in design and construction. The last criterion in the group is a benefit criterion and describes the possibility of generating income for the municipality/city after conversion.

The next group of criteria is entitled influence on the social environment (*G*2). This group also consists of four criteria, all of which are benefit (maximizing) criteria in the current case. They evaluate the benefits for the city/country and for private business after implementing a particular conversion alternative. Additionally, a job creation criterion is included, contributing to reducing unemployment among municipal/city residents. The peculiarity of the considered problem is reflected by a specific criterion, namely the benefits for heritage preservation when implementing a particular conversion alternative.

The third group corresponds to the third component of sustainable development: impact on the natural environment (*G*3). The group involves two maximizing and two minimizing criteria. The maximizing criteria are related to the entire complex of the palace and its surroundings, including the old park, and express the possibilities of preserving the surrounding landscape and of using the park for public needs and recreation. Conversely, the minimizing criteria express a negative attitude toward the pollution expected during construction works as well as during the operation of the new facility.

The next group of criteria is closely related to our specific task of converting built heritage, namely historical–cultural value preservation (*G*4). It consists of five benefit criteria: preserving the building's authenticity after reconstruction; preserving the architectural–compositional value of the object; new activities that help propagate history, culture and architecture; free public access to the values of heritage and history; and the technical–economic value of the object, comprising such heritage categories as building construction technique and quality, as well as the periods of the building's evolution.

The last group of criteria is related to the intended construction works and is entitled technological–architectural possibilities (*G*5). The group has one criterion that is minimizing in terms of time and cost: the volume of reconstruction works. The next two criteria are maximizing, and they reflect the suitability of the internal layout of the building and the adaptation possibilities of its infrastructure for the intended purpose of conversion. The last criterion is the lifetime of the building after reconstruction; it is a maximizing criterion and is determined according to the purpose of the converted building.

All criteria are measured on the linguistic scale presented in Table 1. An expert survey is applied to evaluate the performance of each alternative in terms of every criterion.

### *4.5. Expert Survey: Relative Weights and Values of Criteria*

The research applies an expert survey to determine the relative weights of the criteria. In addition, because of the selected methodology for the evaluation of alternatives (Rough WASPAS), the expert survey is also applied to determine the particular values of the criteria for the analyzed conversion alternatives.
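For readers unfamiliar with rough numbers, the standard construction used in the Rough WASPAS literature converts a set of crisp expert scores into a rough interval. The sketch below (the function name and example data are ours, not the paper's) shows this for the scores that several experts assign to one criterion.

```python
def rough_interval(scores):
    """Standard rough-number aggregation of crisp expert scores.
    For each score x, the lower limit is the mean of all scores <= x and
    the upper limit is the mean of all scores >= x; the group interval
    [L, U] averages these limits over all experts."""
    lowers = [sum(s for s in scores if s <= x) / sum(1 for s in scores if s <= x)
              for x in scores]
    uppers = [sum(s for s in scores if s >= x) / sum(1 for s in scores if s >= x)
              for x in scores]
    return (sum(lowers) / len(lowers), sum(uppers) / len(uppers))
```

For example, four hypothetical scores [3, 4, 4, 5] aggregate to the rough interval [43/12; 53/12], i.e., roughly [3.58; 4.42]; the more the experts disagree, the wider the interval.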

Ten experts were selected. All of them are professionals in built heritage, employed in state science and research institutions, holding a relevant scientific degree, and having work experience of 5–20 years. The distribution of experts is presented in Table 3.


**Table 3.** Distribution of experts.

First, drawing on their knowledge and experience, the experts were asked to fill in a questionnaire and rank the groups of criteria, as well as the criteria in each group, as presented in Figure 7. They had to assign scores on a 1–5 scale, where 5 means that the criterion has the highest importance and 1 the lowest importance for the problem solution.

The relative importance (weight) of the groups of criteria, of each criterion in a group, and the overall importance of every criterion were calculated using the above-presented methodology (Equations (1)–(9)). The results of the calculation are provided in Tables 4 and 5.

The most important group of criteria according to the experts' opinion is *G*4, historical and cultural value preservation, with a weight *wj* = 1.03. It is followed by economic benefit/expenses of changes and influence on the social environment, with equal weights of 0.66. Technological–architectural possibilities gained almost the same result (0.65). The least important group according to the experts is the impact on the natural environment, but it is not far behind (0.60) (Table 4).

The most important criterion among the 21 investigated criteria, according to the experts, is preserving the building's authenticity, followed by new activities that help propagate history and culture, and free public access to heritage and history (Table 5).


**Table 4.** Weights of the groups of criteria.


**Table 5.** Weights of criteria in the groups and the overall weights.

Next, the experts were asked to evaluate the three potential conversion alternatives *A*1, *A*2 and *A*3 (see Section 4.3) according to all the criteria *X*1–*X*21 (see Figure 7), using the evaluation scale presented in Table 1. The evaluations are presented in Table 6. These evaluations are further processed using Equations (1)–(9), and the results are presented in the next section.


**Table 6.** Evaluation of alternatives according to criteria.

### *4.6. Calculation Results Applying Rough WASPAS*

The evaluation of alternatives is made using the expert scores and applying the Rough WASPAS methodology according to Equations (1)–(9), based on source [38].

The group rough matrix is presented in Table 7. The normalized matrix is presented in Table 8. The weighted normalized matrix, obtained using the weights calculated in Section 4.5, is presented in Table 9.


**Table 7.** Group rough matrix.

**Table 8.** Normalized matrix.


**Table 9.** Weighted normalized matrix.


The next two steps are calculating the WSM result (summing all the obtained values of the alternatives by rows) and calculating the WPM result (Tables 10 and 11).


**Table 10.** Summing all the values of the alternatives obtained (summing by rows).

**Table 11.** Determination of the weighted product model.


Next, coefficient λ is calculated: λ = [0.597; 0.518].

The final determination of the relative values of the alternatives and their ranking is provided in Table 12.

**Table 12.** Determining the relative values of the alternatives and their ranking.


The calculation results show that the first alternative, *A*1 (the establishment of a tourism information center with a permanent museum), gained the first rank. The second-ranked alternative, *A*2 (the conversion of the Sapieha Palace into a research institution), is behind the first alternative by only 1.4 percent. Accordingly, we can state that both alternatives are evaluated almost equally. Meanwhile, the third alternative, *A*3 (the adaptation of the Sapieha Palace for a hotel with a conference center), is 9% behind the best-ranked alternative; therefore, its implementation would not be rational.

For the sensitivity analysis, the relative values of the alternatives depending on the value of the coefficient λ are calculated (Table 13, Figure 8).

**Table 13.** Relative values of the alternatives depending on the value of the coefficient λ.


After the sensitivity analysis, we can state that the ranking order of the alternatives is stable: *A*1 ≻ *A*2 ≻ *A*3. However, the difference between the alternatives varies depending on the value of the coefficient λ. The difference between the first and the second alternatives varies from 1.7% to 1.1% as λ varies from 0 to 1, and the difference between the first and the third alternatives varies from 10.1% to 7.7% over the same range. We can state that the larger λ is, the lower the difference between the alternatives.
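The sensitivity check can be reproduced with a simple sweep of λ from 0 to 1. The Q and P scores below are hypothetical crisp (midpoint) values chosen only to illustrate the procedure; they are not the sums computed in the paper.

```python
# Hypothetical WSM (Q) and WPM (P) scores per alternative, for illustration.
Q = {"A1": 0.88, "A2": 0.87, "A3": 0.80}
P = {"A1": 0.85, "A2": 0.84, "A3": 0.77}

rankings = []
for step in range(0, 11):
    lam = step / 10                                   # lambda in {0.0, 0.1, ..., 1.0}
    A = {k: lam * Q[k] + (1 - lam) * P[k] for k in Q}  # Eq. (8), crisp form
    rankings.append(sorted(A, key=A.get, reverse=True))
# With these numbers the order stays A1 > A2 > A3 for every lambda,
# mirroring the stability reported in the sensitivity analysis.
```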

**Figure 8.** Results of the sensitivity analysis depending on the coefficient λ.

### **5. Conclusions**

The authors suggested the integration of modern digital technologies and multi-criteria decision-making (MCDM) models to achieve the rational and sustainable conversion decision of built heritage based on high accuracy data.

A case study of the Sapieha Palace, built in the Baroque style in 1689–1692 in Vilnius, Lithuania, was explored. The most valuable properties of the Sapieha Palace were grouped by attributes, and their state ranged between "satisfactory" for structural elements and "unsatisfactory" for decorative elements. To restore the building, a deep analysis needs to be made, and advanced tools can be used to ensure the high efficiency of detailed reconstruction.

The authors suggested applying photogrammetry and 3D modeling of the existing heritage building. The applied technologies and software (Agisoft PhotoScan, Autodesk ReCap and Autodesk Revit) allowed for the creation of a high-quality and accurate model involving both the textured exterior of the building and the interior layout. In addition, the valuable features of the building were identified and marked in the three-dimensional digital model.

Based on the model, the authors formulated three possible conversion alternatives: the establishment of a tourism information center with a permanent museum, a center of cultural cognition; a research institution, whose main tasks would be to explore the history of heritage, other areas of architecture, and contemporary trends in science and education; and, finally, the adaptation of the Sapieha Palace into a hotel with a conference center.

The associated criteria of the rationality of alternatives were identified, and their weights were determined by the expert survey method. An international team of highly experienced experts from several European countries was gathered; their expertise lies in engineering, architecture, heritage preservation and construction, and they represent research institutions and industry.

Preserving the building's authenticity was determined to be the most important criterion among the 21 investigated criteria according to the experts. It is followed by new activities that help propagate history and culture, whose weight is 5.4% lower than the weight of the top-ranked criterion. The lowest weight was assigned to the benefits for private business criterion; it amounts to only 37% of the highest weight.

The authors suggested the application of a novel MCDM method under uncertainty, namely the rough weighted aggregated sum product assessment (WASPAS), for ranking the alternatives according to multiple criteria. The aforementioned experts were involved in evaluating the alternatives in terms of the criteria.

The calculation results showed that the first alternative, *A*1 (the establishment of a tourism information center with a permanent museum), gained the first rank. The second-ranked alternative, *A*2 (the conversion of the Sapieha Palace into a research institution), is behind the first alternative by only 1.4%. Accordingly, we can state that both alternatives are evaluated almost equally. Meanwhile, the third alternative, *A*3 (the adaptation of the Sapieha Palace for a hotel with a conference center), is 9 percent behind the best-ranked alternative; therefore, its implementation would not be rational.

As the sensitivity analysis shows that the ranking order of the alternatives is stable and that the two leading alternatives gain almost the same importance, with the difference in degree of utility varying from 1.1% to 1.7%, the authors suggest the conversion of the Sapieha Palace into a combined-purpose building: a research institution, whose main tasks are to explore the history of heritage and other areas of architecture, with a museum and a possible tourism information center.

The suggested research methodology can be employed in other case studies for ranking heritage building conversion alternatives. The suggested criteria system and established weights of criteria can be used for evaluating potential redevelopment alternatives of the analyzed building. Appropriate experts can be involved to evaluate alternatives in terms of criteria, and the rough WASPAS method can be applied for the ranking of alternative scenarios.

**Author Contributions:** Conceptualization, M.P., D.M., J.A. and V.K.; methodology, M.P., D.M. and J.A.; formal analysis, M.P.; investigation, M.P., D.M., J.A., and V.K.; resources, M.P. and V.K.; data curation, M.P., D.M. and V.K.; writing—original draft preparation, M.P., D.M. and J.A.; writing—review and editing, M.P., D.M. and J.A.; visualization, M.P.; supervision, J.A.

**Funding:** This research received no external funding.

**Acknowledgments:** Parts of the research were conducted in Germany at the Otto-Friedrich University of Bamberg during an internship under the DBU "Fellowships for university graduates from Central and Eastern Europe (CEE)" programme.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Personalized Product Evaluation Based on GRA-TOPSIS and Kansei Engineering**

**Huafeng Quan 1, Shaobo Li 1,2,\*, Hongjing Wei <sup>2</sup> and Jianjun Hu 1,3,\***


Received: 27 April 2019; Accepted: 21 June 2019; Published: 3 July 2019

**Abstract:** With the improvement of human living standards, users' requirements have changed from function to emotion. Helping users pick out the most suitable product based on their subjective requirements is of great importance for enterprises. This paper proposes a Kansei engineering-based grey relational analysis and technique for order preference by similarity to ideal solution (KE-GRA-TOPSIS) method to make a subjective, user-personalized ranking of alternative products. The KE-GRA-TOPSIS method integrates five methods: Kansei engineering (KE), the analytic hierarchy process (AHP), entropy, game theory, and grey relational analysis-TOPSIS (GRA-TOPSIS). First, an evaluation system is established by KE and AHP. Second, we define a matrix variate, the Kansei decision matrix (KDM), to describe the satisfaction of user requirements. Third, the AHP is used to obtain subjective weights. Next, the entropy method is employed to obtain objective weights by taking the KDM as input. Then the two types of weights are optimized using game theory to obtain comprehensive weights. Finally, the GRA-TOPSIS method takes the comprehensive weights and the KDM as inputs to rank the alternatives. A comparison of KE-GRA-TOPSIS, KE-TOPSIS, KE-GRA, GRA-TOPSIS, and TOPSIS is conducted to illustrate the unique merits of the KE-GRA-TOPSIS method in Kansei evaluation. Taking the electric drill as an example, we describe the process of the proposed method in detail, which achieves a symmetry between the objectivity of products and the subjectivity of users.

**Keywords:** KE-GRA-TOPSIS; KE; AHP; entropy; game theory; GRA-TOPSIS; personalized product evaluation

### **1. Introduction**

Products are the material basis for the survival and development of enterprises [1,2]. With increasing market competition, only by launching products that meet users' requirements can enterprises increase user satisfaction, stimulate purchase desire, and boost sales [3,4]. In this case, developing an appropriate method to rank products in a way that reflects user satisfaction is critical. As living standards improve, users' requirements are shifting from function to emotion, a shift that has been studied using the Kansei engineering (KE) approach. "Kansei" is a Japanese word that encompasses human sensibilities, impressions, and emotions [5,6]. Different users may prefer different products. Therefore, this paper aims to make a subjective, user-personalized ranking of products and to pick out the most suitable one for each user. The product selection issue can be seen as a multi-criteria decision-making (MCDM) problem.

MCDM involves a complex external environment and many different attributes. Many methods have been proposed to solve MCDM problems. The technique for order preference by similarity to ideal solution (TOPSIS), developed by Hwang and Yoon [7], is one of the most effective and powerful among them. Its concept is to find a positive ideal solution (PIS) and a negative ideal solution (NIS) as comparison standards for each alternative. By comparing the degree of differentiation between the ideal solutions and the alternatives, the disparity of the alternatives can be acquired. The most suitable alternative should be nearest to the PIS and farthest from the NIS. Lei et al. [8] applied the TOPSIS method to assess engines. The closeness, which is used to rank alternatives, is obtained by calculating the Euclidean distance between each alternative and the ideal solutions. They selected circulating water temperature difference, engine oil temperature, turbocharger boost temperature, intercooler temperature decrease, fuel consumption, and maximum torque as evaluation criteria. The six criteria are positive indicators, meaning that the higher the value, the better. Therefore, maximum values are taken to construct the PIS and minimum values to build the NIS.
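A minimal sketch of the classical TOPSIS procedure for benefit criteria may make the above concrete. The decision matrix and weights here are illustrative, not the engine data of [8].

```python
import math

def topsis(X, w):
    """Classical TOPSIS for benefit criteria only.
    X: decision matrix (rows = alternatives, columns = criteria).
    w: criteria weights. Returns the closeness of each alternative."""
    m, n = len(X), len(X[0])
    # Vector normalization, then weighting.
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # PIS = column maxima, NIS = column minima (benefit criteria).
    pis = [max(V[i][j] for i in range(m)) for j in range(n)]
    nis = [min(V[i][j] for i in range(m)) for j in range(n)]
    C = []
    for row in V:
        d_pos = math.sqrt(sum((v - p) ** 2 for v, p in zip(row, pis)))
        d_neg = math.sqrt(sum((v - q) ** 2 for v, q in zip(row, nis)))
        C.append(d_neg / (d_pos + d_neg))  # closeness: higher is better
    return C
```

For cost criteria, the PIS would instead take column minima and the NIS column maxima, as in the fuel-blend example of [15].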

In TOPSIS, measuring the separation of each alternative from the PIS and NIS is a critical part. There are other distance metrics besides the Euclidean distance, such as the Manhattan [9], Chebyshev [10], Hamming [11], and Minkowski [12] distances. In MCDM, since it is impossible to build a unique mathematical model to compare the performance of these distance metrics, the selection always depends on the decision-maker's (DM's) assessment [13]. The Euclidean distance is the most popular distance metric [14].

With the deepening understanding of the TOPSIS method, some extended methods have emerged. Sakthivel et al. [15] combined grey relational analysis (GRA) with TOPSIS to propose the GRA-TOPSIS method. Its closeness is a combination of the grey relational degree and the Euclidean distance. They selected brake thermal efficiency, exhaust gas temperature, oxides of nitrogen, smoke, hydrocarbon, carbon monoxide, and carbon dioxide as criteria to evaluate fuel blends. The seven criteria are cost criteria, meaning that the lower the value, the better. Therefore, minimum values are taken to construct the PIS and maximum values to build the NIS. Şengül et al. [16] adopted fuzzy TOPSIS to rank power stations, using fuzzy ideal solutions instead of crisp ideal solutions to calculate the closeness. They selected nine criteria, including both benefit and cost criteria. The fuzzy PIS is constructed with the maximum values of the benefit criteria (CO2 emission, job criterion, efficiency, installed capacity, and the amount of energy produced) and the minimum values of the cost criteria (investment cost, operation cost, land use, and payback period). The fuzzy NIS is constructed with the benefit criteria's minima and the cost criteria's maxima.
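The grey relational degree that GRA-TOPSIS blends with the Euclidean distance can be sketched as follows. This is a simplified illustration with invented data: the extrema Δmin and Δmax are computed per alternative here, whereas the full method takes them over the entire comparison matrix.

```python
def grey_relational_degree(alt, ref, rho=0.5):
    """Grey relational degree of one alternative against a reference
    (ideal) sequence, with distinguishing coefficient rho = 0.5."""
    deltas = [abs(a - r) for a, r in zip(alt, ref)]
    d_min, d_max = min(deltas), max(deltas)
    # Grey relational coefficient per criterion, then averaged into a grade.
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)  # grade in (0, 1]; higher = closer to ref
```

For instance, a normalized alternative [0.9, 0.8] compared against the ideal sequence [1.0, 1.0] yields a grade of 5/6; GRA-TOPSIS then combines such grades with Euclidean distances to the PIS and NIS to form its closeness measure.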

As mentioned above, the criteria in TOPSIS and its extension methods are functional property values (such as charging efficiency in the power station ranking problem): either the higher the better, or the lower the better. In the human perception of products, however, people only care about whether the criteria satisfy their requirements, not about the exact values of the criteria: the more user requirements (which are usually subjective) are satisfied, the better. Given the large gap between the criteria users apply in perceptual evaluation and the objective criteria measured from products, TOPSIS and its extension methods cannot be used directly to produce a subjective, user-personalized ranking of products. Since KE is a feasible method of processing criteria for user-based evaluation, it can make the processed criteria suitable for applying TOPSIS and its extension methods to user-specific subjective product evaluation. Moreover, since perception is uncertain and GRA-TOPSIS can measure the uncertainty between things [13,17], this research combines GRA-TOPSIS and KE to rank product alternatives.

Determining the weight of each criterion is an essential part of TOPSIS and its extension methods. Once the weights are determined, all alternatives can be compared based on their aggregate performance over all criteria. Criteria weighting methods are categorized as subjective, objective, and combinative. Subjective weighting methods are based on the subjective preferences of the DM or an expert, and include the Delphi method [18], the AHP method [19], stepwise weight assessment ratio analysis (SWARA) [20], the factor relationship (FARE) method [21], the best–worst method (BWM) [22], and KEmeny Median Indicator Ranks Accordance (KEMIRA) [23]. As the number of criteria increases, the MCDM problem can become intricate, and the DM/expert may be unable to assign a precise weight to each criterion. Objective weighting methods extract statistical weights through dispersion analyses of the data,

including entropy [24], data envelopment analysis (DEA) [25], and the criteria importance through inter-criteria correlation (CRITIC) method [26]. The combinative weighting method is a compromise between the subjective and objective approaches [27]. It can not only express the preference of the DM/expert but also consider the intrinsic information of the criteria. Concretely, the subjective and objective weights are merged by a combination principle to obtain comprehensive weights. Commonly used combination principles are multiplication, addition, game theory [28], and evidence theory [29,30]. AHP and entropy are among the most useful and practical methods. We believe that reasonable weights should take into account both subjective preferences and objective information; therefore, this paper uses AHP and entropy to obtain the two types of weights and integrates them based on game theory.

Table 1 summarizes some MCDM methods in the literature. Although various extended TOPSIS methods have been successful in ranking alternatives, few approaches take the Kansei point of view. Due to the complexity and uncertainty of perception, product evaluation becomes a very complicated task. Therefore, this paper integrates five methods (KE, AHP, entropy, game theory, and GRA-TOPSIS) to construct a hybrid KE-GRA-TOPSIS method, which ranks alternatives according to both the criteria and users' requirements. The main contributions of this paper are summarized as follows.


The rest of this paper is organized as follows. In Section 2, we first present the general framework, and then describe the KE, AHP, entropy, game theory, and GRA-TOPSIS methods in detail. In Section 3, the feasibility and effectiveness of the proposed method are verified through an example, and the related experimental results are presented. Finally, the conclusions of this study are provided in Section 4.




**Table 1.** *Cont*.

### **2. Methods**

### *2.1. Research Framework*

To produce a subjective, user-personalized ranking of alternative products, this research combines KE, AHP, entropy, game theory, and GRA-TOPSIS to form the KE-GRA-TOPSIS method. As shown in Figure 1, the KE-GRA-TOPSIS method contains three parts. In part 1, the KE and AHP methods are used to construct a hierarchical evaluation structure for products. In part 2, the AHP method is used to calculate the subjective weights, which must pass a consistency check. Moreover, the semantic differential (SD) method is used to construct the KDM based on the initial decision matrix and user requirements. Then, the entropy method is used to calculate the objective weights. Finally, game theory is used to obtain the optimal weights from the subjective and objective weights; the optimal weights are later used in the GRA-TOPSIS method. In part 3, the weighted matrix is formed based on the KDM and the comprehensive weights from part 2. Next, the ideal solutions are determined. Then, the Euclidean distance and grey relational degree between each alternative and the ideal solutions are calculated. After that, we can obtain the integrated results and the closeness. Finally, all the alternatives are ranked in descending order of closeness. Alternatives with a higher rank meet user requirements better.

**Figure 1.** The Kansei engineering-based grey relational analysis and techniques for order preference by similarity to ideal solution (KE-GRA-TOPSIS) framework. In part 1, we construct an evaluation structure by Kansei engineering (KE) and analytic hierarchy process (AHP). In part 2, the comprehensive weights are obtained based on AHP, KE, entropy, and game theory. In part 3, the KE-GRA-TOPSIS method is used to rank the alternatives.

### *2.2. KE Method*

Kansei refers to the feelings that people experience when the outside world stimulates them. The stimulation includes many aspects, such as sight, hearing, touch, and smell. Kansei is a comprehensive human evaluation, which plays a vital role in product design. Sometimes users cannot even express their requirements in clear words, so research on Kansei is an important means of meeting user needs. KE is a combination of Kansei and engineering, and it is one of the main areas of ergonomics [34]. In ergonomics and psychology, adjectives are often used to describe a person's feelings about products. Since there may be correlations, redundancies, and similarities between adjectives, a pair of adjectives with opposite meanings can better reflect human psychology. In KE, such word pairs are called Kansei words [6].

The SD method is widely used to quantify human perception [5,35,36]. The SD scale is the key to the SD method; it consists of a bipolar scale and an *N*-point rating scale. Typically, the bipolar scale is a pair of Kansei words, and *N* is five, seven, or nine. An example of a seven-point scale is shown in Figure 2.

**Figure 2.** Example of a seven-point scale. "1" indicates that the product looks extremely female, "2" indicates quite female, "3" indicates slightly female, "4" indicates neither female nor masculine, "5" indicates slightly masculine, "6" indicates quite masculine, and "7" indicates extremely masculine.

Users evaluate the perception ("Criteria" axis) of the product ("Alternatives" axis) based on the SD scale ("Scales" axis) to obtain a matrix (Figure 3). The matrix represents users' Kansei evaluation of the product.

**Figure 3.** The Kansei evaluation matrix.

The Kansei evaluation matrix (also called the initial matrix) is formed from *m* alternatives (*A* = {*Ai*, *i* = 1, 2, ... , *m*}) and *n* criteria (*C* = {*Cj*, *j* = 1, 2, ... , *n*}). The matrix *H* is described in Equation (1), where *hij* is the evaluation value of the *j* th criterion of the *i* th alternative.

$$H = \begin{Bmatrix} h\_{ij} \end{Bmatrix} = \begin{bmatrix} h\_{11} & h\_{12} & \dots & h\_{1n} \\ h\_{21} & h\_{22} & \dots & h\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ h\_{m1} & h\_{m2} & \dots & h\_{mn} \end{bmatrix} (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{1}$$

User requirements *U* = {*Uj*, *j* = 1, 2, ... , *n*} for products vary from person to person. Therefore, we define a KDM *B* to describe the degree of satisfaction. In matrix *B*, the user requirements constitute the PIS, and the values farthest from the requirements constitute the NIS. Matrix *B* (Equation (2)) is constructed by Equation (3).

$$B = \begin{Bmatrix} b\_{\bar{i}\bar{j}} \end{Bmatrix} = \begin{bmatrix} b\_{11} & b\_{12} & \dots & b\_{1n} \\ b\_{21} & b\_{22} & \dots & b\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b\_{m1} & b\_{m2} & \dots & b\_{mn} \end{bmatrix} (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{2}$$

$$b\_{ij} = N - \left| U\_j - h\_{ij} \right| \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{3}$$

where *bij* is the element of matrix *B* corresponding to the *j* th criterion of the *i* th alternative, and *N* is the grade of the SD scale.
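As an illustration, Equation (3) amounts to a single broadcast operation; the sketch below uses invented toy values, not data from the paper's questionnaire:

```python
import numpy as np

def kansei_decision_matrix(H, U, N=7):
    """Build the KDM B of Eq. (3): b_ij = N - |U_j - h_ij|.

    H : (m, n) array of SD-scale evaluations h_ij
    U : length-n vector of user requirements U_j
    N : grade of the SD scale (7 for a seven-point scale)
    """
    H = np.asarray(H, dtype=float)
    U = np.asarray(U, dtype=float)
    return N - np.abs(U - H)  # U broadcasts across the m alternatives

# Toy example: two alternatives, three criteria (illustrative values only).
H = np.array([[6.0, 5.0, 2.0],
              [3.0, 4.0, 6.0]])
U = np.array([7.0, 5.0, 2.0])   # user requirements on a 7-point scale
B = kansei_decision_matrix(H, U, N=7)
```

An alternative that matches a requirement exactly scores *N* on that criterion, so larger entries of *B* mean better satisfaction.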

### *2.3. AHP Method*

The AHP, proposed by Saaty [37,38], combines qualitative and quantitative analysis. Its core concept is to decompose a complex problem into a hierarchic structure and assess the relative importance of the criteria by pairwise comparison. The hierarchy is constructed so that the goal is at the top, criteria and indexes are in the middle, and alternatives are at the bottom, as shown in Figure 4. The criteria link the alternatives to the goal. In this research, we take Kansei words as indexes.

**Figure 4.** The hierarchy structure.

To obtain weights with the AHP, a pairwise comparison matrix is necessary. It is constructed by comparing the importance of pairs of factors using the Saaty scale. The detailed assignment of the Saaty scale is shown in Table 2. For *n* factors, the total number of comparisons is *C*<sup>2</sup> *<sup>n</sup>* = *n*(*n* − 1)/2.



Let *O* represent an *n* × *n* pairwise comparison matrix, as described in Equation (4). *oij* is the importance of the *i* th factor relative to the *j* th factor. For matrix *O*, the diagonal elements are self-comparisons; thus, *oij* = 1 where *i* = *j*. *oij* and *oji* are symmetric about the diagonal; thus, *oji* = 1/*oij*, where *oij* > 0.

$$O = \begin{Bmatrix} o\_{ij} \end{Bmatrix} = \begin{bmatrix} o\_{11} & o\_{12} & \dots & o\_{1n} \\ o\_{21} & o\_{22} & \dots & o\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ o\_{n1} & o\_{n2} & \dots & o\_{nn} \end{bmatrix} (i, j = 1, 2, \dots, n), \tag{4}$$

The subjective weight vector *W*<sup>1</sup> = (*w*1, *w*2, ... , *wn*) is obtained by Equation (5). The subjective weight is later used in game theory.

$$w\_i = \frac{\sqrt[n]{\prod\_{j=1}^n o\_{ij}}}{\sum\_{i=1}^n \sqrt[n]{\prod\_{j=1}^n o\_{ij}}} \ (i, j = 1, 2, \dots, n), \tag{5}$$

The maximum eigenvalue λmax of *O* is obtained by Equation (6).

$$\lambda\_{\text{max}} = \sum\_{i=1}^{n} \frac{(Ow)\_i}{nw\_i} \ (i, j = 1, 2, \dots, n), \tag{6}$$

A consistency check is necessary to ensure the rationality of the pairwise comparison matrix. The Consistency Ratio (*CR*) is an indicator of consistency; it is calculated by Equation (7).

$$CR = \frac{CI}{RI} = \frac{\lambda\_{\text{max}} - n}{(n-1)RI}, \tag{7}$$

where *n* is the number of criteria. Consistency Index (*CI*) is estimated as (λmax − *n*)/(*n* − 1). Random index (*RI*) is defined in Table 3. If *CR* ≤ 0.1, the comparison matrix is reasonable; otherwise, it needs to be modified.

**Table 3.** Random index (*RI*) values computed by Saaty [37].

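The weight computation and consistency check of Equations (5)–(7) can be sketched as follows; this is a minimal illustration in which the *RI* table is truncated at *n* = 6 and the 3 × 3 comparison matrix is an invented, perfectly consistent example (not the matrix of Table 4):

```python
import numpy as np

# Saaty's random index RI (values from Table 3, truncated here at n = 6).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_weights(O):
    """Subjective weights via row geometric means (Eq. 5), the maximum
    eigenvalue lambda_max (Eq. 6), and the consistency ratio CR (Eq. 7)."""
    O = np.asarray(O, dtype=float)
    n = O.shape[0]
    gm = np.prod(O, axis=1) ** (1.0 / n)   # n-th root of each row product
    w = gm / gm.sum()                      # Eq. (5)
    lam_max = np.mean((O @ w) / w)         # Eq. (6)
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI[n]                        # Eq. (7); CR <= 0.1 passes the check
    return w, lam_max, CR

# A perfectly consistent 3 x 3 example (invented for illustration):
O = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
w, lam_max, CR = ahp_weights(O)   # w = (4/7, 2/7, 1/7), lam_max = 3, CR = 0
```

For a perfectly consistent matrix, λmax equals *n*, so *CI* and *CR* are zero and the check passes trivially.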

### *2.4. Entropy Method*

The information entropy theory was first introduced to information systems from thermodynamics by Shannon [39]. According to the information entropy theory, the entropy can reflect the degree of diversity within a criterion dataset [26,27]. The greater the degree of diversity, the higher the weight of this criterion, and vice versa.

In this research, the entropy method begins with the Kansei decision matrix *B*, which is described in Equation (2). To determine objective weights by the entropy method, matrix *B* needs to be normalized. The normalized matrix *P* is represented in Equation (8), where *pij* is the normalized value calculated by Equation (9).

$$P = \begin{Bmatrix} p\_{ij} \end{Bmatrix} = \begin{bmatrix} p\_{11} & p\_{12} & \dots & p\_{1n} \\ p\_{21} & p\_{22} & \dots & p\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p\_{m1} & p\_{m2} & \dots & p\_{mn} \end{bmatrix} (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{8}$$

$$p\_{ij} = \frac{b\_{ij}}{\sum\_{i=1}^{m} b\_{ij}} \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{9}$$

The entropy of the *j* th criterion (*Ej*) can be calculated by Equation (10).

$$E\_j = -K \sum\_{i=1}^{m} \left( p\_{ij} \ln p\_{ij} \right) (j = 1, 2, \dots, n), \tag{10}$$

where *K* = 1/ln(*m*) is a constant.

The degree of divergence (*dj*) can be calculated by Equation (11).

$$d\_j = 1 - E\_j \ (j = 1, 2, \dots, n),\tag{11}$$

*dj* is the inherent contrast intensity of *Cj*. The more divergent the performance rating *pij* is, the more critical the criterion *Cj* is for the problem.

The objective weight vector (*W*<sup>2</sup> = {*wj*, *j* = 1, 2, ... , *n*}) over the criteria *Cj* (*j* = 1, 2, ... , *n*) is calculated by Equation (12). The objective weight is later used in game theory.

$$w\_j = \frac{d\_j}{\sum\_{j=1}^n d\_j} \ (j = 1, 2, \dots, n), \tag{12}$$
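Equations (8)–(12) amount to a short column-wise computation. A possible NumPy sketch follows; the 0 · ln 0 = 0 convention in the entropy sum is an assumption the text leaves implicit:

```python
import numpy as np

def entropy_weights(B):
    """Objective weights from the entropy method (Eqs. 8-12).

    B : (m, n) Kansei decision matrix with non-negative entries.
    Returns the weight vector W2 (length n, summing to 1)."""
    B = np.asarray(B, dtype=float)
    m, n = B.shape
    P = B / B.sum(axis=0)                  # Eq. (9): column-wise normalization
    K = 1.0 / np.log(m)                    # the constant of Eq. (10)
    # Convention: p * ln p = 0 when p = 0 (suppress the divide-by-zero warning)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -K * plogp.sum(axis=0)             # Eq. (10): entropy per criterion
    d = 1.0 - E                            # Eq. (11): degree of divergence
    return d / d.sum()                     # Eq. (12)
```

A criterion on which all alternatives score identically has maximum entropy (*E* = 1), so its divergence and hence its objective weight are zero, matching the intuition that it cannot discriminate between alternatives.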

### *2.5. Game Theory*

As mentioned previously, both objective and subjective weighting methods have drawbacks. The objective weight neglects the DM's preference and the actual situation; conversely, the subjective weight neglects the intrinsic information of the criteria. Therefore, a comprehensive weight, combining the subjective and objective weights through a combination principle, is more reasonable.

Game theory is a method originating in modern mathematics, employed to obtain the optimum equilibrium solution among two or more participants [28,30,32]. In game theory, each participant wants to maximize its payoff, which requires the participants to reach a collective decision that gives every participant the best payoff; the decision involves consensus and compromise. In this research, to make the comprehensive weight reflect both the subjective preference and the objective information, we regard the problem as a "weight" game: the subjective and objective weights are the participants, and the comprehensive weight is the collective decision.

A basic weight vector set *W* = {*W*1, *W*2, ... , *WL*} is constructed from *L* kinds of weights. A possible weight set is then built from arbitrary linear combinations of the *L* vectors, as described in Equation (13).

$$\mathcal{W} = \sum\_{k=1}^{L} \alpha\_k w\_k^T \ (\alpha\_k > 0),\tag{13}$$

where α = (α*1*, α*2*, ... , α*L*) is the weight coefficient vector and *w* is a possible weight vector in set *W*.

According to game theory, obtaining the optimum equilibrium weight vector *w*\* can be regarded as optimizing the coefficients α*<sup>k</sup>* of the linear combination. The optimization aims to minimize the deviation between *w* and each *wk*, which can be expressed as Equation (14).

$$\min \left\| \sum\_{k=1}^{L} \alpha\_{k} w\_{k}^{T} - w\_{i}^{T} \right\|\_{2} \ (i = 1, 2, \dots, L), \tag{14}$$

The optimal first-order derivative condition of Equation (14) is shown in Equation (15), based on the differentiation property of the matrix.

$$\sum\_{k=1}^{L} \alpha\_k w\_i w\_k^T = w\_i w\_i^T \ (i = 1, 2, \dots, L),\tag{15}$$

Equation (15) can be converted into a system of linear equations as shown in Equation (16).

$$
\begin{bmatrix}
w\_1 w\_1^\mathrm{T} & w\_1 w\_2^\mathrm{T} & \dots & w\_1 w\_L^\mathrm{T} \\
w\_2 w\_1^\mathrm{T} & w\_2 w\_2^\mathrm{T} & \dots & w\_2 w\_L^\mathrm{T} \\
\vdots & \vdots & \ddots & \vdots \\
w\_L w\_1^\mathrm{T} & w\_L w\_2^\mathrm{T} & \dots & w\_L w\_L^\mathrm{T} \\
\end{bmatrix}
\begin{bmatrix}
\alpha\_1 \\
\alpha\_2 \\
\vdots \\
\alpha\_L \\
\end{bmatrix} = \begin{bmatrix}
w\_1 w\_1^\mathrm{T} \\
w\_2 w\_2^\mathrm{T} \\
\vdots \\
w\_L w\_L^\mathrm{T} \\
\end{bmatrix} \tag{16}
$$

α can be calculated by Equation (16) and normalized by Equation (17).

$$\alpha\_k^\* = \frac{\alpha\_k}{\sum\_{k=1}^L \alpha\_k} \ (k = 1, 2, \dots, L),\tag{17}$$

Lastly, the comprehensive weight *w*\* is calculated by Equation (18). The comprehensive weight is later used in the GRA-TOPSIS method.

$$w^\* = \sum\_{k=1}^{L} \alpha\_k^\* w\_k^T, \tag{18}$$
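The linear system of Equation (16) and the normalization of Equations (17)–(18) can be sketched in a few lines; note that the Gram matrix must be non-singular, i.e., the input weight vectors must not be proportional to one another (the two-vector example below is invented for illustration):

```python
import numpy as np

def game_theory_weights(W_list):
    """Combine L weight vectors via the game-theoretic scheme (Eqs. 13-18).

    W_list : list of L weight vectors (each of length n).
    Solves the linear system of Eq. (16) for alpha, normalizes it (Eq. 17),
    and returns the comprehensive weight w* (Eq. 18) and alpha*."""
    W = np.asarray(W_list, dtype=float)   # L x n, one weight vector per row
    G = W @ W.T                           # G[k, l] = w_k . w_l (Gram matrix)
    rhs = np.diag(G)                      # right-hand side: w_i . w_i
    alpha = np.linalg.solve(G, rhs)       # Eq. (16)
    alpha = alpha / alpha.sum()           # Eq. (17): normalization
    w_star = alpha @ W                    # Eq. (18): comprehensive weight
    return w_star, alpha

# Illustrative subjective and objective weight vectors (invented values):
W1 = [0.6, 0.4]   # e.g., AHP (subjective)
W2 = [0.2, 0.8]   # e.g., entropy (objective)
w_star, alpha = game_theory_weights([W1, W2])
```

Since both inputs sum to one and the normalized coefficients sum to one, the comprehensive weight again sums to one and lies between the two input vectors componentwise.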

### *2.6. GRA-TOPSIS Method*

In 1994, Tzeng et al. [40] illustrated the similarities of the grey relational model and TOPSIS in their inputs and processes. In a subsequent study [17], they proposed the GRA-TOPSIS method to evaluate alternatives. The idea of GRA-TOPSIS is as follows: first, construct a PIS and an NIS as in the TOPSIS method; second, adopt GRA to calculate the grey relational degree; third, calculate the Euclidean distance as in TOPSIS; finally, aggregate the grey relational degree and the Euclidean distance to obtain the closeness [33]. The alternatives are then ranked according to the closeness. The specific steps are as follows.

Step 1: Constructing the decision matrix.

In this research, we replace the decision matrix with the KDM *B*, which is described in Equation (2).

Step 2: Calculating the normalized decision matrix.

The normalized decision matrix *R* is described in Equation (19). *rij* is the normalized value, which can be calculated by Equation (20).

$$R = \begin{Bmatrix} r\_{i\bar{j}} \end{Bmatrix} = \begin{bmatrix} r\_{11} & r\_{12} & \dots & r\_{1n} \\ r\_{21} & r\_{22} & \dots & r\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r\_{m1} & r\_{m2} & \dots & r\_{mn} \end{bmatrix} (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{19}$$

$$r\_{ij} = \frac{b\_{ij}}{\sqrt{\sum\_{i=1}^{m} b\_{ij}^2}} \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{20}$$

Step 3: Calculating the weighted decision matrix.

The matrix *Z* is based on the normalized matrix *R* and the comprehensive weight *w*\* = (*w*1, *w*2, ... , *wn*). It is described in Equation (21). *zij* is the weighted value, which can be calculated by Equation (22).

$$Z = \begin{Bmatrix} z\_{ij} \end{Bmatrix} = \begin{bmatrix} z\_{11} & z\_{12} & \dots & z\_{1n} \\ z\_{21} & z\_{22} & \dots & z\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ z\_{m1} & z\_{m2} & \dots & z\_{mn} \end{bmatrix} (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{21}$$

$$z\_{i\bar{j}} = r\_{i\bar{j}} w\_{\bar{j}}\ (i = 1, 2, \dots, m, j = 1, 2, \dots, n),\tag{22}$$

Step 4: Determining the ideal solutions.

The ideal solutions include the PIS *A*<sup>+</sup> = (*z*<sub>1</sub><sup>+</sup>, *z*<sub>2</sub><sup>+</sup>, ... , *z*<sub>*n*</sub><sup>+</sup>) and the NIS *A*<sup>−</sup> = (*z*<sub>1</sub><sup>−</sup>, *z*<sub>2</sub><sup>−</sup>, ... , *z*<sub>*n*</sub><sup>−</sup>). They are determined by Equations (23) and (24), respectively.

$$z\_j^+ = \max z\_{ij} \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{23}$$

$$z\_j^- = \min z\_{ij} \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{24}$$

Step 5: Calculating the separation of each alternative from the PIS and NIS.

We use Euclidean distance to measure the separation of each alternative from the PIS and NIS. The separations are defined in Equations (25) and (26).

$$D\_i^+ = \|z\_i - A^+\|\_2 = \sqrt{\sum\_{j=1}^{n} \left(z\_{ij} - z\_j^+\right)^2} \ (i = 1, 2, \dots, m), \tag{25}$$

$$D\_i^- = \|z\_i - A^-\|\_2 = \sqrt{\sum\_{j=1}^n \left(z\_{ij} - z\_j^-\right)^2} \ (i = 1, 2, \dots, m), \tag{26}$$

where *D*<sub>*i*</sub><sup>+</sup> represents the distance between alternative *A<sub>i</sub>* and *A*<sup>+</sup>, and *D*<sub>*i*</sub><sup>−</sup> represents the distance between alternative *A<sub>i</sub>* and *A*<sup>−</sup>.

Step 6: Calculating the grey relational coefficients.

The grey relational coefficients can be calculated by Equations (27) and (28), respectively.

$$v\_{ij}^{+} = \frac{\min\_{i} \min\_{j} \left| z\_{j}^{+} - z\_{ij} \right| + \rho \max\_{i} \max\_{j} \left| z\_{j}^{+} - z\_{ij} \right|}{\left| z\_{j}^{+} - z\_{ij} \right| + \rho \max\_{i} \max\_{j} \left| z\_{j}^{+} - z\_{ij} \right|} \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{27}$$

$$v\_{ij}^{-} = \frac{\min\_{i} \min\_{j} \left| z\_{j}^{-} - z\_{ij} \right| + \rho \max\_{i} \max\_{j} \left| z\_{j}^{-} - z\_{ij} \right|}{\left| z\_{j}^{-} - z\_{ij} \right| + \rho \max\_{i} \max\_{j} \left| z\_{j}^{-} - z\_{ij} \right|} \ (i = 1, 2, \dots, m, j = 1, 2, \dots, n), \tag{28}$$

where ρ is the distinguishing coefficient, ρ ∈ [0, 1]; ρ = 0.5 is usually applied following the rule of least information [41].

Step 7: Calculating the grey relational degree and integrated results.

The grey relational degrees are calculated by Equations (29) and (30), respectively.

$$v\_i^+ = \frac{1}{n} \sum\_{j=1}^n v\_{ij}^+ \ (i = 1, 2, \dots, m), \tag{29}$$

$$v\_i^- = \frac{1}{n} \sum\_{j=1}^n v\_{ij}^- \left( i = 1, 2, \dots, m \right),\tag{30}$$

Dimensionless processing is performed on *D*<sub>*i*</sub><sup>+</sup>, *D*<sub>*i*</sub><sup>−</sup>, *v*<sub>*i*</sub><sup>+</sup>, and *v*<sub>*i*</sub><sup>−</sup>, and the integrated results are obtained by Equations (31) and (32).

$$s\_i^+ = \beta \frac{D\_i^+}{\max(D\_i^+)} + \gamma \frac{v\_i^+}{\max(v\_i^+)} \ (i = 1, 2, \dots, m),\tag{31}$$

$$s\_i^- = \beta \frac{D\_i^-}{\max(D\_i^-)} + \gamma \frac{v\_i^-}{\max(v\_i^-)} \ (i = 1, 2, \dots, m),\tag{32}$$

where β is the influence coefficient of the distance between an alternative and the ideal solution on the closeness, and γ is the influence coefficient of the grey relational degree between an alternative and the ideal solution on the closeness. β, γ ∈ [0, 1], and β + γ = 1.

Step 8: Calculating the closeness and ranking the alternatives.

The closeness *Ci* is defined to determine the ranking order of all alternatives. It is calculated by Equation (33).

$$\mathbf{C}\_{i} = \frac{\mathbf{s}\_{i}^{+}}{\mathbf{s}\_{i}^{+} + \mathbf{s}\_{i}^{-}} \ (i = 1, 2, \dots, m), \tag{33}$$

If the alternative *Ai* is closer to *A*<sup>+</sup> and farther from *A*−, *Ci* is more approximate to 1. Therefore, we can pick out the best-fit one among all alternatives.
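Steps 2–8 can be sketched end-to-end as follows. One caveat: so that higher closeness means a better alternative (the statement after Equation (33)), the sketch pairs *D*<sup>−</sup> with *v*<sup>+</sup> in *s*<sup>+</sup> and *D*<sup>+</sup> with *v*<sup>−</sup> in *s*<sup>−</sup>; this pairing is an interpretive assumption, and the toy data below are invented:

```python
import numpy as np

def gra_topsis_rank(B, w, rho=0.5, beta=0.5, gamma=0.5):
    """Rank alternatives with GRA-TOPSIS (Steps 2-8, Eqs. 19-33).

    B : (m, n) Kansei decision matrix; w : comprehensive weight vector.
    Returns the closeness C_i (higher is better) and the ranking order."""
    B = np.asarray(B, dtype=float)
    R = B / np.sqrt((B ** 2).sum(axis=0))             # Eq. (20)
    Z = R * w                                         # Eq. (22)
    z_pos, z_neg = Z.max(axis=0), Z.min(axis=0)       # Eqs. (23)-(24)
    D_pos = np.sqrt(((Z - z_pos) ** 2).sum(axis=1))   # Eq. (25)
    D_neg = np.sqrt(((Z - z_neg) ** 2).sum(axis=1))   # Eq. (26)

    def grey_coeff(ref):
        # Eqs. (27)-(28): min/max taken over all i and j
        diff = np.abs(ref - Z)
        return (diff.min() + rho * diff.max()) / (diff + rho * diff.max())

    v_pos = grey_coeff(z_pos).mean(axis=1)            # Eq. (29)
    v_neg = grey_coeff(z_neg).mean(axis=1)            # Eq. (30)
    # Eqs. (31)-(32), dimensionless: larger s_pos = closer to PIS / farther
    # from NIS (assumed pairing, see the lead-in above).
    s_pos = beta * D_neg / D_neg.max() + gamma * v_pos / v_pos.max()
    s_neg = beta * D_pos / D_pos.max() + gamma * v_neg / v_neg.max()
    C = s_pos / (s_pos + s_neg)                       # Eq. (33)
    return C, np.argsort(-C)                          # descending closeness

# Toy example: alternative 0 dominates on both criteria (invented data).
B = np.array([[7.0, 7.0], [4.0, 4.0], [1.0, 1.0]])
C, ranking = gra_topsis_rank(B, np.array([0.5, 0.5]))
```

With the dominant first alternative, the closeness decreases monotonically down the rows, so the ranking is simply 0, 1, 2.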

### **3. Empirical Study**

To illustrate the possibilities for applying the proposed method, we conducted a case study of electric drill selection. It has the following steps: (1) use KE and AHP to construct an evaluation structure, (2) adopt AHP to obtain the subjective weights, (3) adopt entropy to obtain the objective weights, (4) employ game theory to get the comprehensive weights, (5) adopt the SD method to build the KDM, and (6) use GRA-TOPSIS to rank the alternatives.

### *3.1. Evaluation System and Alternatives*

To evaluate the perception of electric drills, we use AHP and KE to establish the hierarchy shown in Figure 5. The target layer has only one element, product selection. We identified six criteria as the dimensions for Kansei evaluation: "Gender", "Acceptance", "Structure", "Popularity", "Weight sense", and "Technical sense". Each criterion comprises a pair of Kansei words: "Gender" comprises "Female" and "Masculine"; "Acceptance" comprises "Unique" and "Ordinary"; "Structure" comprises "Simple" and "Refined"; "Popularity" comprises "Modern" and "Traditional"; "Weight sense" comprises "Light" and "Steady"; and "Technical sense" comprises "Technical" and "Artificial". Differences in the color, trigger switch, air vent, chuck, model, name label, etc. of the electric drills lead to different evaluation results. We selected 14 electric drills as the alternatives, shown in Figure 6.

**Figure 5.** Product evaluation system.

**Figure 6.** The alternatives. (**a**) Alternative *A*1. (**b**) Alternative *A*2. (**c**) Alternative *A*3. (**d**) Alternative *A*4. (**e**) Alternative *A*5. (**f**) Alternative *A*6. (**g**) Alternative *A*7. (**h**) Alternative *A*8. (**i**) Alternative *A*9. (**j**) Alternative *A*10. (**k**) Alternative *A*11. (**l**) Alternative *A*12. (**m**) Alternative *A*13. (**n**) Alternative *A*14.

### *3.2. Criteria Weighting*

In this research, we take the DM (also called the user) requirements as "extremely masculine", "slightly ordinary", "quite simple", "slightly modern", "quite light", and "slightly technical". Among the given 14 alternatives, we need to find the electric drill that matches the DM's requirements most closely. Based on the 7-point SD scale, the DM's requirements can be expressed as *U* = [7,5,2,3,2,3]. The selection of the electric drill proceeds as follows.

First, the DM is invited to construct a pairwise comparison matrix as in Equation (4). Then, according to Equations (5) and (6), the subjective weights and the maximum eigenvalue are obtained. Finally, we performed the consistency check based on Equation (7). The results are shown in Table 4.


**Table 4.** The pairwise comparison matrix and weight.

<sup>1</sup> λmax = 6.008, *CI* = 0.0176, *RI* = 1.24, *CR* = 0.0142 < 0.1, the consistency check is passed.

We constructed the questionnaire (Figure 7) and invited 30 people (10 designers and 20 consumers) to evaluate the 14 electric drills in the six dimensions; the averages of the results (Table 5) constitute the initial decision matrix *H*.

This is a questionnaire for Kansei engineering. Please refer to the criteria in the table for scoring.


**Figure 7.** One of the questionnaires.


**Table 5.** The evaluation results from questionnaires.

According to Equations (3) and (9), the KDM *B* and the normalized matrix *P* are obtained as


Then the entropy *E* and objective weight *w* of each criterion are calculated by using Equations (10)–(12). The specific calculation results are shown in Table 6.

**Table 6.** The entropy and objective weight.


Having acquired the subjective and objective weights, we now optimize them based on game theory. Using Equations (16) and (17), we obtain the weight coefficients α = (0.4331, 0.5669). According to Equation (18), the comprehensive weight is *w*\* = (0.2541, 0.1583, 0.213, 0.1316, 0.0844, 0.1584).

### *3.3. Alternative Ranking*

According to Equations (20) and (22), the normalized matrix *R* and the weighted decision matrix *Z* are obtained as


According to Equations (23) and (24), the positive ideal solution *A*<sup>+</sup> and the negative ideal solution *A*<sup>−</sup> are determined, that is, *A*<sup>+</sup> = [0.0907 0.0543 0.0804 0.0447 0.0321 0.0487] and *A*<sup>−</sup> = [0.0209 0.0315 0.0291 0.024 0.016 0.0283]. Then, the distances and the grey relational coefficients are obtained by Equations (25)–(30), as shown in Table 7.


**Table 7.** The distance and the grey relational coefficient.

To illustrate the unique merits of KE-GRA-TOPSIS in Kansei evaluation, a comparison of KE-GRA-TOPSIS, KE-TOPSIS, and KE-GRA is conducted in this study. In Equations (31) and (32), β and γ denote the proportions of TOPSIS and GRA in GRA-TOPSIS, respectively: β = 1, γ = 0 means only the TOPSIS method is used, and β = 0, γ = 1 means only the GRA method is used. According to Equations (31) and (32), the comparative integrated results are obtained in Table 8.

According to Equation (33), the closeness and ranking of KE-GRA-TOPSIS, KE-TOPSIS, and KE-GRA are obtained. The comparison results are shown in Table 9.


**Table 8.** The comparison integrated results.

<sup>1</sup> We take β = γ = 0.5 in KE-GRA-TOPSIS.

**Table 9.** The comparison results.


Bold indicates inconsistency with the DM's ranking results.

As shown in Table 8, both KE-GRA-TOPSIS and KE-TOPSIS recommend *A*<sup>6</sup> as the best-fit product for the user requirements ("extremely masculine", "slightly ordinary", "quite simple", "slightly modern", "quite light", and "slightly technical"). This predicted result is the same as the DM's choice. KE-GRA recommends *A*<sup>3</sup> as the best-fit product. In this experiment, the DM is the user, so we take the DM's ranking as the comparison standard for the other three methods. The symbol '≻' means "better than", and the DM's ranking can be expressed as *A*<sup>6</sup> ≻ *A*<sup>9</sup> ≻ *A*<sup>5</sup> ≻ *A*<sup>10</sup> ≻ *A*<sup>1</sup> ≻ *A*<sup>7</sup> ≻ *A*<sup>11</sup> ≻ *A*<sup>8</sup> ≻ *A*<sup>12</sup> ≻ *A*<sup>3</sup> ≻ *A*<sup>4</sup> ≻ *A*<sup>2</sup> ≻ *A*<sup>14</sup> ≻ *A*<sup>13</sup>. The ranking of KE-GRA-TOPSIS is *A*<sup>6</sup> ≻ *A*<sup>9</sup> ≻ *A*<sup>5</sup> ≻ *A*<sup>10</sup> ≻ *A*<sup>1</sup> ≻ *A*<sup>7</sup> ≻ *A*<sup>11</sup> ≻ *A*<sup>8</sup> ≻ *A*<sup>12</sup> ≻ *A*<sup>3</sup> ≻ *A*<sup>13</sup> ≻ *A*<sup>2</sup> ≻ *A*<sup>4</sup> ≻ *A*<sup>14</sup>. Compared to the standard, the order of *A*<sup>4</sup>, *A*<sup>13</sup>, and *A*<sup>14</sup> is confused: three out of fourteen are wrong. The ranking of KE-TOPSIS is *A*<sup>6</sup> ≻ *A*<sup>5</sup> ≻ *A*<sup>9</sup> ≻ *A*<sup>1</sup> ≻ *A*<sup>10</sup> ≻ *A*<sup>7</sup> ≻ *A*<sup>11</sup> ≻ *A*<sup>8</sup> ≻ *A*<sup>12</sup> ≻ *A*<sup>4</sup> ≻ *A*<sup>3</sup> ≻ *A*<sup>2</sup> ≻ *A*<sup>14</sup> ≻ *A*<sup>13</sup>. The order of *A*<sup>1</sup> and *A*<sup>10</sup> is reversed, as are *A*<sup>3</sup> and *A*<sup>4</sup>, and *A*<sup>5</sup> and *A*<sup>9</sup>: six out of fourteen are wrong. The ranking of KE-GRA is *A*<sup>3</sup> ≻ *A*<sup>13</sup> ≻ *A*<sup>10</sup> ≻ *A*<sup>6</sup> ≻ *A*<sup>9</sup> ≻ *A*<sup>2</sup> ≻ *A*<sup>1</sup> ≻ *A*<sup>4</sup> ≻ *A*<sup>12</sup> ≻ *A*<sup>14</sup> ≻ *A*<sup>7</sup> ≻ *A*<sup>5</sup> ≻ *A*<sup>8</sup> ≻ *A*<sup>11</sup>. Only *A*<sup>12</sup> is ranked correctly. These results imply that the KE-GRA-TOPSIS method has the highest accuracy, followed by the KE-TOPSIS method, and that the KE-GRA method has the lowest accuracy. This experiment verifies the feasibility of the KE-GRA-TOPSIS method.

Figure 8 is drawn according to the closeness results in Table 9. As shown in Figure 8, there is a big gap in the closeness of alternatives in KE-TOPSIS, because it only considers the distance of alternatives, amplifying the evaluation results. The gap between alternatives in KE-GRA is relatively small, as this method focuses on the connection between criteria but ignores the distance between alternatives. The KE-GRA-TOPSIS takes into account both the connections between the criteria and the distance between alternatives, so its closeness is more in line with the actual situation.

**Figure 8.** The closeness comparison results.

We also compared the choice of the DM with the results of KE-GRA-TOPSIS, GRA-TOPSIS, KE-TOPSIS, and TOPSIS. The comparison results are shown in Table 10. Since the accuracy of KE-GRA is too low to be useful, we excluded it from the comparison. As shown in Table 10, KE-GRA-TOPSIS has the highest accuracy rate of 78.6%, followed by KE-TOPSIS with 57.2%; the accuracies of GRA-TOPSIS and TOPSIS are 7.2% and 0, respectively. Figure 9 is drawn according to Table 10. In Figure 9, we can easily see that the results of KE-GRA-TOPSIS and KE-TOPSIS are similar, as are those of GRA-TOPSIS and TOPSIS. Furthermore, the results of KE-GRA-TOPSIS and KE-TOPSIS are roughly consistent with the DM's choice. This experiment verifies that TOPSIS and its extension methods cannot be used directly to produce a subjective, user-personalized ranking of products.


**Table 10.** The ranking comparison results.

Bold indicates inconsistency with the DM's ranking results.

To illustrate the effectiveness of the proposed method, we invited another 10 participants to repeat the experiment. Table 11 shows the requirements and ranking results given by the participants. It is worth noting that the subjective weights in AHP are adjustable. In this experiment, all the participants agreed to use the weights in Table 4. The results are shown in Figure 10.

**Figure 9.** The ranking comparison results.

**Table 11.** The requirements and ranking results of participants.


In Figure 10, the bar chart shows the number of correctly ranked products, and the line chart shows the accuracy. In KE-GRA-TOPSIS, the accuracies for U1–U10 are 85.7%, 100%, 85.7%, 100%, 100%, 100%, 85.7%, 100%, 100%, and 100%, with an average of 95.7%. In KE-TOPSIS, the accuracies for U1–U10 are 85.7%, 85.7%, 85.7%, 85.7%, 85.7%, 85.7%, 71.4%, 85.7%, 100%, and 85.7%, with an average of 85.7%. These accuracies imply that KE-GRA-TOPSIS and KE-TOPSIS both perform well, and that KE-GRA-TOPSIS is more accurate than KE-TOPSIS.

The above results show that the Kansei evaluation matrix is feasible and that the KE-GRA-TOPSIS method can accurately rank products according to user requirements. An accurate prediction leads to an accurate recommendation. Accurate recommendations, in turn, can increase user satisfaction, stimulate the desire to purchase, and expand the sales of manufacturing enterprises. Furthermore, they can help enterprises increase their market share in a highly competitive marketplace.

### **4. Conclusions**

We propose the KE-GRA-TOPSIS method to evaluate product design alternatives according to both the criteria and user requirements. First, we use KE and AHP to establish an evaluation system. Second, we use AHP to obtain subjective weights. Third, to obtain objective weights based on the entropy method, we introduce a KDM, which combines the initial decision matrix and user requirements. In constructing the KDM, we adopt an SD method and a formula to obtain the corresponding values. Fourth, after obtaining the two types of weights, we use game theory to derive the optimal weights. Finally, we construct a weighted matrix based on the optimal weights and the KDM and use the GRA-TOPSIS method to rank the alternatives. Taking the electric drill as an example, we demonstrate the effectiveness and feasibility of KE-GRA-TOPSIS. Moreover, through a comparison experiment, we illustrate the unique merits of KE-GRA-TOPSIS in Kansei evaluation. Our method realizes a symmetry between the objectivity of products and the subjectivity of users. In the future, we will devote ourselves to developing a software system based on the proposed method, providing convenient operation and interaction for users.

**Author Contributions:** H.Q. designed the study, performed the experiments, analyzed the results, and wrote the manuscript. S.L., H.W., and J.H. provided good advice on experiments design, methodology and data analyzed. They also revised and refined the manuscript. All authors have read and approved the final manuscript.

**Funding:** This work was supported by the National Natural Science Foundation of China under Grant No. 91746116 and Science and Technology Foundation of Guizhou Province under Grant Nos. [2015]4011, [2016]5013, [2015]02, and [2019]3003.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Optimizing the Paths of Trains Formed at the Loading Area in a Multi-loop Rail Network**

### **Xingkui Li, Boliang Lin \* and Yinan Zhao**

School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China **\*** Correspondence: bllin@bjtu.edu.cn; Tel.: +86-10-5168-2598

Received: 29 May 2019; Accepted: 26 June 2019; Published: 1 July 2019

**Abstract:** Each loop in a multi-loop rail network consists of two segments, both of which have roughly the same conditions and mileage and are approximately symmetrical. This paper is devoted to optimizing the paths of trains formed at the loading area in a multi-loop rail network. To attain this goal, three different situations are analyzed, and two models are proposed for networks with adequate and inadequate capacities. Computational experiments are also carried out using the commercial software Lingo, with the branch and bound algorithm. The results show that the models can achieve the same solution with different solution times. To solve the problem of path selection for large-scale train flows, a genetic algorithm is also designed and proves to perform well in a set of computational experiments.

**Keywords:** multi-loop rail network; path optimization; trains formed at loading area; genetic algorithm

### **1. Introduction**

In traditional railroad operations, each train may carry a single block or multiple blocks, where each block consists of a set of railcars that may have disparate origins and destinations. In railroad freight transportation, a freight flow, which consists of many railcars/wagons with the same origin and destination (OD), may pass through several classification yards and undergo yard operations on its journey. Unfortunately, these yard operations, including freight railcar classification activities, consume roughly 2/3 of railcar time, making them a major source of delay and unreliable service. Therefore, given sufficient freight, the ideal mode is for a train to carry a single group of railcars with the same origin and destination and move them directly from the loading area to the unloading area without passing through a marshalling station; this is known as direct train service.

The length of China's railway network will be more than 175 thousand kilometers by 2025, including about 38 thousand kilometers of high-speed railways, according to the Medium- and Long-Term Planning for the China Railway Network issued in 2016. With the construction of new lines and the capacity expansion of existing railways, convenient inter-regional channels (featuring multiple rail lines and large capacities, including 12 railway freight corridors, such as the Beijing–Tianjin–Northeast Corridor and the Yangtze River Delta–Northwest Corridor) and international rail freight corridors for the "Belt and Road" will be formed, which is conducive to the development of direct train service that can speed up freight transportation and improve overall service efficiency.

However, with the continuous construction and development of rail lines, China's rail network features a multi-loop structure with higher density and accessibility. Because of this, the number of potential paths between the loading area and unloading area for each train has increased rapidly, which has resulted in a problem of path selection for trains.

To reduce the freight transportation cost and improve transportation efficiency, we have to choose a reasonable path for each train in the multi-loop rail network. However, there are a large number of potential paths for each train in such a huge rail network. Therefore, it is of great significance to study optimizing the paths of trains in a multi-loop rail network.

### **2. Literature Review**

As for the problem of optimizing the paths of trains, many methods and models have been proposed, which are worthy of review.

To set the context, we begin with an overview of studies that focus solely on optimizing train flow paths. Lin et al. [1] developed a linear 0–1 integer programming model for the car routing problem and proposed a method for generating alternative path sets. Jiang et al. [2] discussed mathematical models for the capacitated and uncapacitated traffic allocation problems, respectively. Wang et al. [3] proposed a stochastic dependent-chance multi-objective programming model, which aims to maximize the reliability of the car flow routing plan and minimize the expected total cost. Nong et al. [4] introduced a distributed computing method to address the fact that the computational complexity of car flow routing optimization increases exponentially with the number of nodes and the size of the car flow in the rail network. Based on a tree-shaped path, Cao et al. [5] presented a collaborative optimization model for the loaded and empty car flow routing problem and the empty car distribution problem, considering multiple car types. Sadykov et al. [6] formulated the freight railcar flow routing problem as a multi-commodity flow problem and proposed several approaches to solve it. Borndörfer et al. [7,8] studied the routing problem from a strategic perspective and sought routes in the rail network of Deutsche Bahn AG. Considering the storage cost, unit transportation cost, and demand in each stage, Zhao et al. [9] investigated the allocation problem of empty freight cars in rail networks with dynamic demands and formulated a stage-based optimization model for allocating empty freight cars. Based on the tree-shaped path, a 0–1 mixed integer programming model for the railway car flow routing problem was proposed by Wen et al. [10]. Fu and Dessouky [11] focused on the Single Train Routing Problem and tried to route one train through an empty rail network as fast as possible. Peter et al. 
[12] formulated an integer multi-commodity network flow model with a nonlinear objective function to find a route for railway carriages. Fügenschuh et al. [13] presented a mixed-integer linear programming model for the car-routing problem on the Deutsche Bahn, and then they added nonlinear constraints into the model because of the turnover waiting time. Some linearization techniques, as well as a tree-based reformulation and heuristic cuts, were proposed to speed up the numerical solution process.

Some scholars have combined the freight train formation plan with the flow path to establish integrated optimization models or to design solution methods. Assad [14] presented a routing/makeup model from the viewpoint of network flows and combinatorial optimization. Haghani [15] tried to solve the routing/makeup/empty car distribution problem and proposed a model with a nonlinear objective function and linear constraints. Lin et al. [16] proposed a model for the train routing and makeup plan problem (TRMP) and developed a simulated annealing algorithm to solve large-scale TRMP. Considering the fluctuation of wagon flow, Yan et al. [17] established a model of the train formation plan and wagon-flow path and designed an improved branch and bound method to solve that model.

The methods mentioned above aim at solving the routing problem on a long-term or medium-term timescale. The following studies seek solutions for real-time train scheduling. To model passenger assignment in a rail network, a Wardrop equilibrium model was analyzed by Cominetti and Correa [18]; this study includes the effects of congestion on passengers' choices. Fu et al. [19] proposed a train stop scheduling approach that incorporates the passenger assignment procedure and defined four criteria to ensure that the travel path used by a traveler is feasible. Using an automatic fare collection (AFC) system, Zhou et al. [20] studied the estimation of the proportions in which passengers select paths. Xu et al. [21] proposed a mathematical model for the train routing and timetabling problem with switchable scheduling rules. Based on a connection network, Wang et al. [22] proposed a general train unit routing model and then proposed a strategy to reduce the scale of the connection network. Samà et al. [23] proposed an integer linear programming model and then designed an algorithm inspired by ant colony behavior to solve the real-time train routing selection problem. Since the real-time train scheduling and routing problem is NP-hard, Samà et al. [24] proposed lower and upper bound algorithms to shorten computing time.

Because the train flow needs to be considered in this paper, some studies on the well-known "maximal flow problem" in transport are also reviewed. Two algorithms were put forward to solve the problem of maximum flow assignment/distribution in [25] and [26]. András et al. [27] used maximal flow and shortest route algorithms to choose edges in a transportation network. Gao [28] introduced a minimum cost and maximum flow method to solve the flow distribution problem of hazardous materials transportation. V. K. Singh et al. [29] presented some modifications of Ford–Fulkerson's labeling method for solving the maximal network flow problem and assignment problems. Di et al. [30] formulated two deterministic bi-level programming models, in which the lower level assigned all the flows to the super network.

To conclude our discussion on train routing, Table 1 summarizes some classic contributions. In particular, the details (i.e., the Model Structure, Decision Variables, and Constraints) of these models are listed in Table 1. Notice that there are three different constraint types in the Constraints column, namely capacity constraint (e.g., linkage constraints between engine and car flows in No. 1, and yard capacity and track limitations in a station in No. 5), operation principle (e.g., a single train flow cannot be split in No. 2, flow conservation in No. 4 and No. 5, combinations of train routing assignments in No. 6), and time constraints (e.g., running time restrictions in No. 3).


**Table 1.** A literature review of some classic studies.

MINLP indicates Mixed-Integer Nonlinear Programming, ILP indicates Integer Linear Programming and LP indicates Linear Programming.

It is worth mentioning that this paper studies a long-term (1 year) plan for train flow path selection. While most existing studies consider the train routing problem from the perspective of either huge networks (e.g., Lin et al., 1997; Borndörfer et al., 2016) or a single corridor (e.g., Xu et al., 2017; Samà et al., 2016), moving goods by direct train service in a multi-loop rail network has rarely been investigated. Therefore, this study intends to provide the following contributions to direct train service routing in a multi-loop network.


In particular, the proposed approach does not need to determine the path set in advance; it only needs to select an arc in each loop and then form a complete path by connecting the selected arcs.


The rest of this paper is organized as follows. Section 3 describes the problem setting, problem statement, and some toy examples. Section 4 presents mathematical formulations of the train routing problem under two different capacity situations. Section 5 provides a set of numerical examples to evaluate the performance of the proposed mathematical models and shows the use of a genetic algorithm to solve large-scale train flow path selection problems. Finally, concluding remarks and future research directions are given in Section 6.

### **3. The Path Problem of Trains Formed at Loading Area in a Multi-Loop Rail Network**

In the railway freight transportation system, a loading area refers to an area with one or multiple loading sites. This area often generates a large amount of freight flow. A train formed at the loading area runs directly to the unloading area without any reclassification. In practice, a rail network often exhibits a multi-loop structure. As shown in Figure 1, there are one or more loops from the loading area *s* to the unloading area *t*. If there is only one train from *s* to *t*, then there are two path selection schemes for one loop, four for two loops, and 2<sup>n</sup> for *n* loops. Obviously, the easiest situation includes only one train, but with dozens or even hundreds of trains the problem becomes much more complicated.

**Figure 1.** Situations from one loop to *n* loops.
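The 2<sup>n</sup> growth of candidate paths can be checked by direct enumeration. The sketch below uses three loops with invented arc lengths; choosing one arc per loop yields 2<sup>3</sup> = 8 candidate paths:

```python
from itertools import product

# Hypothetical (upper, lower) arc lengths in km for a 3-loop network.
loops = [(120, 135), (149, 158), (95, 88)]

# Every s-t path picks exactly one arc per loop: 2**n candidate paths.
paths = list(product(*loops))
shortest = min(paths, key=sum)
print(len(paths), shortest, sum(shortest))  # → 8 (120, 149, 88) 357
```

For a single train, the shortest of the 2<sup>n</sup> paths is trivial to find this way; the difficulty discussed in the rest of the section arises when many trains compete for limited arc capacities.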

To facilitate the description of the problem, a simplified rail network with two loops is designed, as shown in Figure 2.

**Figure 2.** A toy network with two loops.

In this figure, let the loading area be *s*, the unloading area be *t*, the first loop be *K*<sub>1</sub>, and the second loop be *K*<sub>2</sub>. Let *K*<sub>1</sub><sup>Up</sup> and *K*<sub>1</sub><sup>Down</sup> represent the upper and lower arcs of the first loop, and let *K*<sub>2</sub><sup>Up</sup> and *K*<sub>2</sub><sup>Down</sup> represent the upper and lower arcs of the second loop. The capacities of these four arcs are denoted by *C*<sub>1</sub><sup>Up</sup>, *C*<sub>1</sub><sup>Down</sup>, *C*<sub>2</sub><sup>Up</sup>, and *C*<sub>2</sub><sup>Down</sup>, and their lengths by *l*<sub>1</sub><sup>Up</sup>, *l*<sub>1</sub><sup>Down</sup>, *l*<sub>2</sub><sup>Up</sup>, and *l*<sub>2</sub><sup>Down</sup> (*l*<sub>1</sub><sup>Up</sup> < *l*<sub>1</sub><sup>Down</sup>, *l*<sub>2</sub><sup>Up</sup> > *l*<sub>2</sub><sup>Down</sup>), respectively.

We now give three examples to illustrate the ideas and questions we are interested in. Assume that there are three train flows originating from the loading area, denoted as *f*<sup>1</sup> (the orange line), *f*<sup>2</sup> (the cyan line), and *f*<sup>3</sup> (the purple line); three situations should be taken into consideration.

Situation 1: The capacity (the 'capacity' in this article refers to the residual capacity after deducting other trains that are not formed at the loading area *s* on the rail network) of both the upper arc and lower arc of each loop can meet the needs of all train flows. In this case, these three flows (i.e., train flows) will be distributed to the rail network according to their shortest paths, as shown in Figure 3.

**Figure 3.** Distribution of train flows for Situation 1.

Situation 2: It may not be possible to satisfy the needs of all train flows through only the upper or lower arc of each loop, but the total capacity of the upper and lower arcs of each loop is large enough. Under this circumstance, some train flows are preferentially distributed to the arcs on the shortest path, and the remaining flows are distributed to the other arc. As shown in Figure 4, when *f*<sup>1</sup> and *f*<sup>2</sup> are distributed to the shortest path, the remaining capacity of *K*<sub>1</sub><sup>Up</sup> and *K*<sub>2</sub><sup>Down</sup> cannot accommodate *f*<sup>3</sup>, so *f*<sup>3</sup> can only be distributed to *K*<sub>1</sub><sup>Down</sup> and *K*<sub>2</sub><sup>Up</sup>.

**Figure 4.** Distribution of train flows for Situation 2.

Situation 3: This situation occurs when a certain loop becomes the bottleneck of the rail network. In this case, the loop cannot accommodate all the train flows, even if the capacities of the upper and lower arcs are summed. Flows whose demands are not satisfied are called infeasible flows. As shown in Figure 5, two more train flows, i.e., *f*<sup>4</sup> (the pink line) and *f*<sup>5</sup> (the gray line), are added to the network based on Figure 4. In the best case, both flows (*f*<sup>4</sup> and *f*<sup>5</sup>) would be shipped to their destinations. Unfortunately, the capacity of the rail network is insufficient to accommodate both *f*<sup>4</sup> and *f*<sup>5</sup>. Figure 5 shows that only *f*<sup>4</sup> is transported to its destination, so *f*<sup>5</sup> becomes an infeasible flow.

**Figure 5.** Distribution of train flows for Situation 3.

The above three cases correspond to different conditions for shipping trains in the multi-loop rail network, and each needs to be discussed separately. In the next section, different mathematical models are established to solve the train path selection problem.

### **4. Mathematical Models**

### *4.1. Variables and Parameters*

First, we define a rail network *T* = (*K*, *K*<sup>Up</sup>, *K*<sup>Down</sup>), where *K* = {*K*<sub>1</sub>, *K*<sub>2</sub>, ..., *K*<sub>*n*</sub>} represents the set of *n* loops in the network, *K*<sup>Up</sup> = {*K*<sub>1</sub><sup>Up</sup>, *K*<sub>2</sub><sup>Up</sup>, ..., *K*<sub>*n*</sub><sup>Up</sup>} ⊆ *K* represents the set of upper arcs, and *K*<sup>Down</sup> = {*K*<sub>1</sub><sup>Down</sup>, *K*<sub>2</sub><sup>Down</sup>, ..., *K*<sub>*n*</sub><sup>Down</sup>} ⊆ *K* represents the set of lower arcs. *s* is defined as the loading area, *t* is defined as the unloading area, and the set *Q* = {*Q*<sub>1</sub>, *Q*<sub>2</sub>, ..., *Q*<sub>*m*</sub>} represents the *m* trains from *s* to *t*. The generalized operation cost includes salary costs, vehicle taxes, insurance, maintenance, and kilometer taxes. Since this paper is concerned with long-term planning, a parameter *u* is introduced to represent the generalized unit operation cost and make the objective function clearer.

The subscripts are defined as follows:

- *k*: Index of loops, *k* = 1, 2, ..., *n*
- *q*: Index of train flows, *q* = 1, 2, ..., *m*

Parameters:

- *W*: Generalized operation cost
- *R*: Income from transporting goods
- *P*: Profit from transporting goods
- *f*<sup>*q*</sup>: Freight volume of the *q*-th train flow
- α<sup>1</sup><sub>*q*</sub>: Freight rate No. 1 of the *q*-th train flow (basic rate, ¥/ton)
- α<sup>2</sup><sub>*q*</sub>: Freight rate No. 2 of the *q*-th train flow (additional rate, ¥/ton-km)
- *u*: Generalized unit operation cost
- *L*<sub>*k*</sub><sup>Up</sup>: Length of the upper arc of the *k*-th loop
- *L*<sub>*k*</sub><sup>Down</sup>: Length of the lower arc of the *k*-th loop
- *C*<sub>*k*</sub><sup>Up</sup>: Capacity of the upper arc of the *k*-th loop
- *C*<sub>*k*</sub><sup>Down</sup>: Capacity of the lower arc of the *k*-th loop

Variables:

$$x^{q,k} = \begin{cases} 1 & \text{when train service } q \text{ selects the upper arc of loop } k \\ 0 & \text{otherwise.} \end{cases}$$

It is worth noting that freight rates may vary from one good to another. According to the Rules Relating to Railway Goods Tariff, the freight rate of goods transported by rail consists of two parts, freight rate No. 1 and freight rate No. 2. The freight rate is used to calculate the freight train service fees charged by railway companies to shippers, and it depends on the goods being shipped (e.g., for grain and coal, freight rate No. 1 is 9.6 ¥/ton and freight rate No. 2 is 0.0484 ¥/ton-km; for steel, freight rate No. 1 is 10.4 ¥/ton and freight rate No. 2 is 0.0549 ¥/ton-km). The calculation formula of freight train service fees is as follows:

$$F = \alpha^1 q + \alpha^2 q l \tag{1}$$

where α<sup>1</sup> is freight rate No. 1 and α<sup>2</sup> is freight rate No. 2. *q* represents the volume of freight, *l* indicates the travel distance of the freight carried by train, and *F* is the fee charged by railway companies to shippers.
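As a worked example of Equation (1), the following snippet computes the fee for a hypothetical grain shipment using the rates quoted above (freight rate No. 1 = 9.6 ¥/ton, No. 2 = 0.0484 ¥/ton-km); the 1000-ton volume and 1000-km distance are invented for illustration:

```python
def freight_fee(rate1, rate2, volume_tons, distance_km):
    """Freight fee per Equation (1): F = alpha1*q + alpha2*q*l."""
    return rate1 * volume_tons + rate2 * volume_tons * distance_km

# Grain/coal rates from the text; shipment size and distance are assumptions.
fee = freight_fee(9.6, 0.0484, volume_tons=1000, distance_km=1000)
print(round(fee, 2))  # 9600 + 48400 = 58000.0 yuan
```

Note that only the second term depends on the route length *l*, which is why path selection affects both the railway's income and its operation cost.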

According to the analysis in Section 3, two models can be constructed: one covering Situation 1 (the capacity of the upper or lower arc of each loop can meet the needs of all train flows) and Situation 2 (the total capacity of the upper and lower arcs can meet the needs of all train flows), and one covering Situation 3 (the bottleneck of the rail network cannot satisfy the needs of all train flows).

### *4.2. Mathematical Models under Situation 1 and Situation 2*

Situation 1 and Situation 2 can be summarized as one situation where the capacity of a rail network can satisfy all the requirements, so they can be solved by one mathematical model.

Assumptions:


Model I is constructed with the goal of maximizing the total profit from delivering all the train flows under the capacity constraint of each arc. 0–1 variables are introduced to indicate whether the upper arc or lower arc is selected.

### **Model I:**

Generalized transportation cost:

$$W = \sum_{q} \sum_{k} u f^{q}\left(x^{q,k} L_{k}^{\text{Up}} + (1 - x^{q,k}) L_{k}^{\text{Down}}\right). \tag{2}$$

Income for transporting goods:

$$R = \sum_{k} \sum_{q} \alpha_{q}^{2} f^{q}\left(L_{k}^{\text{Up}} x^{q,k} + L_{k}^{\text{Down}}(1 - x^{q,k})\right) + \sum_{q} \alpha_{q}^{1} f^{q}. \tag{3}$$

Profits from transporting goods:

$$P = R - W. \tag{4}$$

The mathematical model under Situation 1 and Situation 2 can be stated as follows:

$$\max \quad P = R - W \tag{5}$$

such that

$$\sum_{q} f^{q} x^{q,k} \le C_{k}^{\text{Up}}, \forall k \tag{6}$$

$$\sum_{q} f^{q}(1 - x^{q,k}) \le C_{k}^{\text{Down}}, \forall k \tag{7}$$

$$x^{q,k} \in \{0, 1\}, \forall q, k. \tag{8}$$

Constraints (6) and (7), respectively, indicate that the total freight volume of train flows in the upper (or lower) arc should not exceed the capacity of the upper (or lower) arc. Constraint (8) indicates that the *q*-th train flow can only select the upper or lower arc of the *k*-th loop, which demonstrates the principle that a single train flow cannot be split.
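A brute-force sketch of Model I on a tiny invented instance (two loops, two flows) may help make the formulation concrete; the paper solves real instances with Lingo's branch and bound. With the chosen rates, α²<sub>q</sub> < *u* for both flows, so each extra kilometer costs more than it earns and the optimum favors short arcs subject to the capacity constraints:

```python
from itertools import product

# Toy instance of Model I (all numbers illustrative): 2 loops, 2 train flows.
L_up, L_down = [100, 110], [105, 95]    # arc lengths (km)
C_up, C_down = [60, 60], [60, 60]       # arc capacities (10^4 tons/year)
f = [40, 30]                            # flow volumes (10^4 tons/year)
a1, a2 = [5.7, 6.4], [0.0336, 0.0378]   # freight rates No. 1 and No. 2
u = 0.04                                # generalized unit operation cost

n, m = len(L_up), len(f)
best, best_profit = None, float("-inf")
# x[q][k] = 1 if flow q takes the upper arc of loop k (constraint (8)).
for flat in product([0, 1], repeat=m * n):
    x = [flat[q * n:(q + 1) * n] for q in range(m)]
    # Capacity constraints (6) and (7).
    feasible = all(
        sum(f[q] * x[q][k] for q in range(m)) <= C_up[k]
        and sum(f[q] * (1 - x[q][k]) for q in range(m)) <= C_down[k]
        for k in range(n))
    if not feasible:
        continue
    # Objective (5): profit P = R - W, per Equations (2)-(4).
    length = [sum(x[q][k] * L_up[k] + (1 - x[q][k]) * L_down[k]
                  for k in range(n)) for q in range(m)]
    W = sum(u * f[q] * length[q] for q in range(m))
    R = sum(a2[q] * f[q] * length[q] + a1[q] * f[q] for q in range(m))
    if R - W > best_profit:
        best, best_profit = x, R - W

print(best, round(best_profit, 2))  # optimal arc choices and profit
```

Here both flows prefer the shorter arcs, but the shared arcs cannot carry their combined volume (70 > 60), so the search assigns the flow with the larger per-kilometer loss to the shorter arc on each loop, exactly the conflict that makes the problem hard at scale.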

### *4.3. Mathematical Mode under Situation 3*

For Situation 3, a new model should be established because of infeasible train flows. When the model is built under the condition that the capacity of a rail network cannot meet all the requirements, the decision variables should not only indicate whether the train flow selects the upper arc or the lower arc in the loop, but also whether the train flow is infeasible or not. Thus, Model I is no longer applicable.

In this case, we can introduce *x*<sub>*q*,*k*</sub><sup>Up</sup> as a decision variable to indicate whether the train flow *f*<sup>*q*</sup> selects the upper arc, and introduce *x*<sub>*q*,*k*</sub><sup>Down</sup> as another decision variable to indicate whether the train flow *f*<sup>*q*</sup> selects the lower arc. Then, we add the logical constraint *x*<sub>*q*,*k*</sub><sup>Up</sup> + *x*<sub>*q*,*k*</sub><sup>Down</sup> ≤ 1, so that each train flow selects at most one arc in a loop and an infeasible flow selects neither. Model II under Situation 3 is as follows:

### **Model II:**

Generalized transportation cost:

$$W = \sum_{q} \sum_{k} u f^{q}\left(x_{q,k}^{\text{Up}} L_{k}^{\text{Up}} + x_{q,k}^{\text{Down}} L_{k}^{\text{Down}}\right). \tag{9}$$

Income for transporting goods:

$$R = \sum_{k} \sum_{q} \alpha_{q}^{2} f^{q}\left(L_{k}^{\text{Up}} x_{q,k}^{\text{Up}} + L_{k}^{\text{Down}} x_{q,k}^{\text{Down}}\right) + \sum_{q} \alpha_{q}^{1} f^{q}\left(x_{q,1}^{\text{Up}} + x_{q,1}^{\text{Down}}\right). \tag{10}$$

The mathematical model under Situation 3 can be stated as follows:

$$\max \quad P = R - W \tag{11}$$

such that

$$\sum_{q} f^{q} x_{q,k}^{\text{Up}} \le C_{k}^{\text{Up}}, \forall k \tag{12}$$

$$\sum_{q} f^{q} x_{q,k}^{\text{Down}} \le C_{k}^{\text{Down}}, \forall k \tag{13}$$

$$x_{q,k}^{\text{Up}} + x_{q,k}^{\text{Down}} \le 1, \forall q, k \tag{14}$$

$$x_{q,k}^{\text{Up}} + x_{q,k}^{\text{Down}} = x_{q,k+1}^{\text{Up}} + x_{q,k+1}^{\text{Down}}, \forall q, 1 \le k \le n-1 \tag{15}$$

$$x_{q,k}^{\text{Up}},\; x_{q,k}^{\text{Down}} \in \{0, 1\}, \forall q, k. \tag{16}$$

Note that:

$$x_{q,k}^{\text{Up}} = \begin{cases} 1 & \text{when flow } q \text{ selects the upper arc of loop } k \\ 0 & \text{otherwise} \end{cases} \qquad x_{q,k}^{\text{Down}} = \begin{cases} 1 & \text{when flow } q \text{ selects the lower arc of loop } k \\ 0 & \text{otherwise.} \end{cases}$$

Constraints (12) and (13), respectively, indicate that the total freight volume of train flows through the upper/lower arc should not exceed the capacity of the upper/lower arc. Constraint (14) indicates that a train flow can only select the upper or lower arc if it is feasible, which demonstrates the principle that a single train flow cannot be split. Constraint (15) indicates that infeasible train flows should not pass through any arc in the network, while feasible train flows should be shipped from the loading area to the unloading area. Constraint (16) indicates that the decision variables are 0–1 variables.
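The drop-or-route logic of Model II can likewise be sketched by enumeration on an invented single-loop instance (constraint (15) is vacuous with one loop). The lower arc's capacity is deliberately too small for the second flow, so the optimum leaves one flow infeasible:

```python
from itertools import product

# Toy Situation-3 instance (numbers invented): one loop, tight capacities.
L_up, L_down = 100, 105        # arc lengths (km)
C_up, C_down = 45, 25          # arc capacities (10^4 tons/year)
f = [40, 30]                   # flow volumes (10^4 tons/year)
a1 = [5.7, 6.4]                # freight rate No. 1 (yuan/ton)
a2 = [0.0336, 0.0378]          # freight rate No. 2 (yuan/ton-km)
u = 0.04                       # generalized unit operation cost

# Options allowed by constraints (14) and (16) for each flow on the loop:
# upper arc, lower arc, or no arc at all (the flow is infeasible).
OPTIONS = [(1, 0), (0, 1), (0, 0)]   # (x_up, x_down)

best, best_profit = None, float("-inf")
for choice in product(OPTIONS, repeat=len(f)):
    up_load = sum(f[q] for q, (xu, _) in enumerate(choice) if xu)
    down_load = sum(f[q] for q, (_, xd) in enumerate(choice) if xd)
    if up_load > C_up or down_load > C_down:   # constraints (12) and (13)
        continue
    # Objective (11): income (10) minus cost (9); dropped flows earn nothing.
    P = sum(a2[q] * f[q] * (xu * L_up + xd * L_down)
            + a1[q] * f[q] * (xu + xd)
            - u * f[q] * (xu * L_up + xd * L_down)
            for q, (xu, xd) in enumerate(choice))
    if P > best_profit:
        best, best_profit = choice, P

print(best, round(best_profit, 2))  # flow 2 is dropped: ((1, 0), (0, 0))
```

The first flow fills the upper arc (40 of 45), the second flow fits on neither arc, and dropping it is more profitable than dropping the first, mirroring how Model II turns capacity shortages into infeasible flows rather than infeasible models.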

### **5. Computational Experiments**

We assume that a freight multi-loop rail network (as shown in Figure 6) exists with eight loops, and the distance from loading area *s* to unloading area *t* is around 1000 km (for practical purposes).

**Figure 6.** A rail network for computational experiments.

In parentheses, the first number before the comma is the length of arcs, in km, and the other number after the comma is the capacity of the corresponding arcs, in 10<sup>4</sup> Tons/Year.

Computational experiments are conducted to verify the feasibility of the models in Section 4. The parameters of the rail network are shown in Table 2:



**Table 2.** Parameters of the rail network.

Of the upper arcs, the longest is 149 km (the third loop) and the shortest is 72 km (the seventh loop); the maximum capacity is 66.82 million tons per year (the fourth loop) and the minimum is 45.63 million tons per year (the sixth loop). The average distance of the eight upper arcs is around 119 km, and their average capacity is about 54.44 million tons per year. Similarly, of the lower arcs, the longest is 158 km (the third loop) and the shortest is 78 km (the fourth loop); the maximum capacity is 62.19 million tons per year (the second loop) and the minimum is 40.41 million tons per year (the seventh loop). The average distance of the eight lower arcs is around 119 km, and their average capacity is approximately 49.56 million tons per year.

Thirty train flows are also generated; their parameters are shown in Table 3:


**Table 3.** Parameters of the train flows.

Among the 30 flows, the largest volume flow is *f* <sup>28</sup> (4970 thousand tons per year), while the smallest is *f* <sup>3</sup> (1110 thousand tons per year), and the average volume of these 30 flows is approximately 3056 thousand tons per year. There are six different freight rates for the 30 flows, i.e., (5.7, 0.0336), (6.4, 0.0378), (7.6, 0.0435), (9.6, 0.0484), (10.4, 0.0549), and (14.8, 0.0765).

### *5.1. The Results of the Two Models under Situation 1 and Situation 2*

In theory, both Situation 1 and Situation 2 can be solved by Model I and Model II. We assume that the generalized unit operation cost is *u* = 0.04 ¥/ton-km. First, Model I and Model II are tested in Lingo using the branch and bound algorithm, under the assumption that the capacity of the rail network can satisfy all requirements. The optimal solution is described in Table 4.


**Table 4.** The results for Model I and Model II.

For Model I and Model II, we find that the solution results, which include the value of the objective function (¥147,846) and the path selection of each freight flow (see Table 4), are exactly the same, showing that the two models achieve the same solution.

### *5.2. Comparison of the Solution Times of Model I and Model II under Situation 1 and Situation 2*

Since both Model I and Model II can solve the train flow path selection problem under Situation 1 and Situation 2 and achieve a consistent result, we next focus on which model is faster and more efficient.

In this section, we use the commercial software Lingo to test the influence of the numbers of train flows and network loops on the performance of the aforementioned models. In this example, the total number of train flows varies from 10 to 70, and the number of loops varies from 4 to 16. It is worth noting that this test is carried out under the conditions of Situation 1 and Situation 2, as Model I is not applicable under Situation 3. The maximum solution time is set to 24 hours, and the solution times are shown in Table 5. In this table, the numbers in parentheses in the second column are the number of train flows and the number of loops, respectively (e.g., (30,8) represents 30 train flows and 8 loops).


**Table 5.** The solution times comparing Model I and Model II.

It is clear that the solution time of Model I is much shorter than that of Model II. For instance, when the number of train flows is 50 and the number of loops is 8, the solution time is 1 second for Model I and 28 seconds for Model II. As shown in Table 5, the solution times of Model II increase with the number of train flows and loops, because as either grows there may be more conflicts between different train flows. For Model I, the numbers of train flows and loops have little effect on the solving speed, which remains very fast across all tested instances. Since Model II has more decision variables and constraints than Model I under the same circumstances, it takes more time to find an optimal solution with Model II, but the final results of both models are identical. Generally, the solution time of Model II is longer than that of Model I. However, since the studied problem is intended for planning over a long time horizon, Model II is still practical.

### *5.3. Analysis of the Solution Efficiency of Model II under Situation 3*

Since only Model II can solve the path selection problem under Situation 3, its solution efficiency is the focus of our next study. Changing the capacity of the rail network creates a train path selection problem under Situation 3. The details are as follows (see Table 6):


**Table 6.** Parameters of the rail network based on Table 2.

Compared with the capacity in Table 2, the capacity of the third loop in Table 6 has changed from 51.63 million tons per year to 41.63 million tons per year. The third loop of the rail network becomes a bottleneck for the train flows, which remain unchanged.

We keep the generalized unit transportation cost *u* unchanged and then test Model II, again using the branch and bound algorithm. The objective function value is ¥146,257, and the optimal solution is described in Table 7.


**Table 7.** The results for Model II.

It can be found from Table 7 that when the capacity of the third loop is reduced by 10 million tons per year, three flows with a total volume of 7.05 million tons become infeasible flows, namely, *f* 1, *f* 15, and *f* 25. In practice, these infeasible flows should be distributed to another route beyond the corridor.

We plot the results of the above examples in Figure 7. The horizontal axis indicates the serial number of the train flow, while the vertical axis indicates the serial number of the loop in the rail network, and each broken line represents a flow. When a broken line turns to the right, the flow selects the upper arc when passing through that loop; conversely, when it turns to the left, the flow selects the lower arc. Lines with the same color indicate flows with the same path. In particular, the gray straight lines represent infeasible flows.

**Figure 7.** Comparison of the results of Model I and Model II.

It is clearly illustrated in Figure 7 that although the conditions of the rail network have changed, the paths of the 15 flows remain unchanged. The 15 train flows are as follows:

+ $f_2$, $f_5$, $f_6$, $f_7$, $f_8$, $f_{10}$, $f_{12}$, $f_{16}$, $f_{17}$, $f_{19}$, $f_{21}$, $f_{24}$, $f_{26}$, $f_{28}$, $f_{30}$.

Note that 10 of the above 15 train flows share the same path: $s \to K_1^{\mathrm{Down}} \to K_2^{\mathrm{Down}} \to K_3^{\mathrm{Up}} \to K_4^{\mathrm{Down}} \to K_5^{\mathrm{Up}} \to K_6^{\mathrm{Down}} \to K_7^{\mathrm{Up}} \to K_8^{\mathrm{Up}} \to t$. Coincidentally, the shortest path from $s$ to $t$ in the rail network is this same path, which shows that under these two models, train flows always prefer the shortest path, followed by the other paths. In practice, the most cost-effective (shortest) path always has priority when distributing flows, and other paths are selected afterwards, which explains why so many flows choose the shortest path.

We then enumerate the solution times of Model II under different numbers of train flows and loops. As in Section 5.2, we set the maximum solution time to 24 hours; the results are shown in Table 8.


**Table 8.** The solution times of Model II under Situation 3.

As shown in Table 8, the solution time of Model II under Situation 3 increases with the number of train flows and loops. However, the model could not be solved within 24 hours once the number of train flows reached 40. This exposes a shortcoming of Model II: when the number of train flows increases, Lingo cannot solve the routing problem in a short time. Thus, the next task is to design a heuristic algorithm for large-scale train flow path selection.

### *5.4. A Genetic Algorithm for Solving this Problem*

The path selection of a train flow (8 loops) can be expressed as follows (for Model II):

$$
\begin{Bmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 & 0
\end{Bmatrix}
$$

where the top and bottom rows represent the upper and lower arcs, respectively. An arc is selected when the corresponding entry is 1; otherwise, it is 0. For example, the path encoded above is $s \to K_1^{\mathrm{Up}} \to K_2^{\mathrm{Down}} \to K_3^{\mathrm{Down}} \to K_4^{\mathrm{Up}} \to K_5^{\mathrm{Up}} \to K_6^{\mathrm{Down}} \to K_7^{\mathrm{Down}} \to K_8^{\mathrm{Up}} \to t$. Since the structure of a solution of this problem matches the coding strategy of a genetic algorithm (GA), we use a genetic algorithm to solve the large-scale train flow path selection problem.
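Under this encoding, a chromosome can be decoded into a readable route. The short sketch below illustrates the decoding step; the function and label names (`decode_path`, `Up`/`Down`) are our own and not from the paper:

```python
# Decode a 2x8 binary path matrix into a route string.
# Row 0 marks the upper arc, row 1 the lower arc, for each of the 8 loops.

def decode_path(matrix):
    upper, lower = matrix
    parts = ["s"]
    for k, (up, low) in enumerate(zip(upper, lower), start=1):
        assert up + low == 1, "exactly one arc must be chosen per loop"
        parts.append(f"K{k}^{'Up' if up else 'Down'}")
    parts.append("t")
    return "->".join(parts)

encoding = [
    [1, 0, 0, 1, 1, 0, 0, 1],  # upper arcs
    [0, 1, 1, 0, 0, 1, 1, 0],  # lower arcs
]
print(decode_path(encoding))
# s->K1^Up->K2^Down->K3^Down->K4^Up->K5^Up->K6^Down->K7^Down->K8^Up->t
```

Note that the lower-arc row is always the complement of the upper-arc row, so in practice a chromosome only needs to store one of the two rows.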

Genetic algorithms belong to evolutionary computing and were inspired by Darwin's theory of evolution. The principles and process of GAs are well known. A GA starts with a set of solutions, called a population. Solutions from the parent population are selected and used to form a child population. Solutions are selected according to their fitness: the more suitable they are, the more chances they have to reproduce. Through repeated selection, crossover, and mutation operations, the offspring eventually meet the requirements. The flowchart of the GA workflow for this problem is shown in Figure 8.

**Figure 8.** Flowchart of the genetic algorithm (GA) workflow.

Here are some key steps to note:

We set the population size to 100 and chose the best individuals from a set of 500 initial solutions to generate the first population. The fitness criterion is defined as maximizing the profit from transporting goods. Roulette wheel selection is used to select parents. A fixed mutation rate of 1% is used to prevent premature convergence. An elite retention strategy, whereby the worst individuals in the current generation are replaced by the elite individuals from the previous generation, is adopted.
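The selection and elitism steps above can be sketched as follows. This is a minimal illustration with toy fitness values; the function names and data are assumptions, not the paper's implementation:

```python
import random

# Roulette-wheel selection proportional to fitness, plus elite retention:
# the worst individual of the new generation is replaced by the best of
# the old one.

def roulette_select(population, fitness, k):
    total = sum(fitness)
    weights = [f / total for f in fitness]  # selection probabilities
    return random.choices(population, weights=weights, k=k)

def apply_elitism(old_pop, old_fit, new_pop, new_fit):
    best_old = max(range(len(old_pop)), key=lambda i: old_fit[i])
    worst_new = min(range(len(new_pop)), key=lambda i: new_fit[i])
    new_pop[worst_new] = old_pop[best_old]
    new_fit[worst_new] = old_fit[best_old]
    return new_pop, new_fit

random.seed(0)
pop = ["sol_a", "sol_b", "sol_c", "sol_d"]
fit = [10.0, 30.0, 45.0, 15.0]       # higher = more transport profit
parents = roulette_select(pop, fit, k=2)
```

Individuals with higher fitness are proportionally more likely to be drawn as parents, while elitism guarantees the best solution found so far is never lost.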

Some details about crossover need to be explained. Crossover selects genes from parent chromosomes and creates new offspring. For each train flow randomly selected from an individual, the crossover procedure randomly chooses a crossover point; every gene before this point is copied from one parent train flow, and every gene after it is copied from the other parent train flow. Crossover can be illustrated as follows (see Figure 9), where | marks the crossover point:


**Figure 9.** A toy example of crossover.
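The single-point crossover described above can be sketched as follows, applied to the gene row of one train flow; the names and the fixed crossover point are illustrative:

```python
import random

# Single-point crossover: genes before the crossover point come from the
# first parent, the rest from the second parent (and vice versa).

def crossover(parent1, parent2, point=None):
    if point is None:
        point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# Toy example on 8-loop upper-arc rows, crossover point after position 3:
p1 = [1, 0, 0, 1, 1, 0, 0, 1]
p2 = [0, 1, 1, 0, 0, 1, 1, 0]
c1, c2 = crossover(p1, p2, point=3)
print(c1)  # [1, 0, 0, 0, 0, 1, 1, 0]
print(c2)  # [0, 1, 1, 1, 1, 0, 0, 1]
```

In the path encoding, the lower-arc row can then be recovered as the complement of the upper-arc row, so the offspring remain feasible path encodings.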

The GA above is tested for feasibility on the same data as in Section 5.3. We ran 20 experiments with this algorithm, and the computational results are shown in Figure 10.

**Figure 10.** Twenty computational experiments for Model II under Situation 3.

In Figure 10, most of the experiments converge to around ¥142,500, and only a few show poor results, which indicates the stability of the algorithm. Note that the objective function value obtained by the GA is ¥143,223, which is 2.07% lower than the exact value of ¥146,257 in Section 5.3. Therefore, although the GA cannot guarantee an exact solution, it approximates the exact solution very well.

Next, we will analyze the solution time of the GA. We enumerate the solution times of the GA under different train flows and loops. The results of the solution times are shown in Table 9.


**Table 9.** The solution times of the GA.

In Table 9, the solution time of the GA increases with the number of train flows and loops. With the number of loops fixed at 8, the solution time is 41.5 seconds for 10 train flows but 339.4 seconds for 70 train flows; each additional 10 flows increases the solution time by about 50 seconds on average. The number of flows was then kept constant at 30 to test the influence of the number of loops on the solution time. The solution time is 129 seconds with 4 loops and 182.1 seconds with 16 loops; each additional 2 loops increases the solution time by about 8 seconds on average.

Compared with the solution times using Lingo (Table 8), the GA's solution time grows much more gradually. Moreover, the GA can solve the large-scale path selection problem (e.g., 40 or more train flows) in a short time.

To summarize, both Model I and Model II can solve the train flow path selection problem under Situation 1 and Situation 2 and reach a consistent result, although the solution time of Model I is much shorter than that of Model II. Since Lingo cannot solve Model II under Situation 3 in a short time when the scale of the train flows becomes large, a genetic algorithm was designed, and it performs well.

### **6. Conclusions and Future Work**

In this paper, we investigate the problem of optimizing the paths of trains formed at the loading area in a multi-loop rail network. Essentially, this is a combinatorial optimization problem, whose complexity increases exponentially with the number of loops. Three different situations are analyzed in detail. Then, two mathematical models are established: Model I (based on Situation 1 and Situation 2, where the capacity of the rail network is sufficient) and Model II (based on Situation 3, where the capacity of the rail network is insufficient). Finally, a set of computational experiments is conducted to verify the feasibility of the models. In the experiment with 8 loops and 30 train flows, we find that Model I and Model II achieve the same solution under Situation 1 and Situation 2. The solution times of Model I and Model II under Situation 1 and Situation 2 are then compared, which shows that Model I is much faster than Model II. Since Model II cannot be solved by Lingo in a short time under Situation 3 when the problem becomes large scale, a genetic algorithm is proposed, which performs well in a set of numerical experiments. In conclusion, Lingo is recommended for Situation 1 and Situation 2 or for small-scale instances of Situation 3, while the GA is more suitable for large-scale train flow path selection under Situation 3.

Certainly, much work remains to be done before our model can be used efficiently in the real world. Since we only consider the paths of trains formed at the loading area, all flows traverse the entire network and none originate from intermediate locations. In future work, train flows generated at intermediate stations can be taken into account. Further analysis of some terms in the objective function and their constraints is also required, as well as a more accurate and efficient solution algorithm.

Nevertheless, we have developed two adaptable models for optimizing the paths of trains formed at the loading area in a multi-loop rail network, along with an efficient genetic algorithm. The results of this study should provide a valuable approach for the strategic management of rail freight transportation systems.

**Author Contributions:** The authors contributed equally to this work.

**Acknowledgments:** This work was supported by the National Key R&D Program of China (2018YFB1201402).

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Intuitionistic Type-2 Fuzzy Set and Its Properties**

### **Surajit Dan 1, Mohuya B. Kar 2, Saibal Majumder 3, Bikashkoli Roy 4, Samarjit Kar 4,\* and Dragan Pamucar 5**


Received: 16 May 2019; Accepted: 14 June 2019; Published: 18 June 2019

**Abstract:** Decision making under uncertainty describes situations with a profound lack of knowledge, where the functional form is completely unknown and, often, the relevant input and output variables are unknown as well. Data, being the vital input of decision making, contain differing levels of imprecision that necessitate different approaches for making a proper and legitimate decision. In this article, we propose the concept of the intuitionistic type-2 fuzzy set (IT2FS). Several operations on IT2FSs, such as union, intersection, complement, and containment, are defined, and the related algebraic properties of IT2FS are also studied. Subsequently, we define two new operators, namely the necessity operator and the possibility operator, to convert an IT2FS into an ordinary T2FS, and then discuss some of their basic properties. Moreover, in this study, two distance measures, the Hamming distance and the Euclidean distance of IT2FS, are proposed, and their applications are illustrated with an example.

**Keywords:** type-2 fuzzy set; intuitionistic type-2 fuzzy set; possibility and necessity operators; distance measure

### **1. Introduction**

Uncertainty is an intrinsic feature of information. In many scientific and industrial applications, we make decisions in an environment with different kinds of uncertainty. Currently, most decision-making processes involve retrieving and analyzing information that is incomplete, noisy, fragmentary, or sometimes contradictory. Therefore, models characterizing the real world need to be accompanied by proper uncertainty representations. With the advent of soft computing (SC) methods, several powerful tools in the field of computational intelligence were introduced, including type-1 fuzzy logic, neural networks, evolutionary algorithms, and hybrid intelligent systems [1–3].

The application of fuzzy sets in decision making and optimization problems has been widely studied since their inception [4]. Moreover, many studies in the literature have shown growing interest in decision-making problems using intuitionistic fuzzy sets/numbers [5,6]. The intuitionistic fuzzy set (IFS) is an extension of the fuzzy set introduced by Atanassov [7,8]. The IFS is a more generalized version of the fuzzy set, defined in terms of the degrees of membership and non-membership (the residual term is referred to as the degree of indeterminacy). Presently, IFSs are being studied and used in different fields of science and technology for decision-making problems. For instance, Marasini et al. [9] implemented an IFS approach for the problem of students' satisfaction with university teaching, which can account for one source of uncertainty related to items and another related to subjects. Subsequently, Marasini et al. [10] presented a study where the intuitionistic fuzzy set was used for questionnaire analysis, focusing on the construction of membership, non-membership, and uncertainty functions. In addition, several researchers have addressed various decision-making problems using intuitionistic fuzzy sets [11–16].

A type-2 fuzzy set (T2FS) is an extension of the ordinary fuzzy set, i.e., the type-1 fuzzy set (T1FS). The fundamental superiority of the type-2 fuzzy set over the type-1 fuzzy set is its ability to capture the membership of relevant membership values, whereby uncertainty is handled more accurately. The membership value of a type-1 fuzzy set is a real number in [0, 1], whereas the membership value of a T2FS is itself a type-1 fuzzy set. The concept of T2FS was introduced by Zadeh [17–19], and an overview of type-2 fuzzy sets was given by Mendel [20]. Since ordinary fuzzy sets and interval-valued fuzzy sets are special cases of type-2 fuzzy sets, Takac [21] argued that type-2 fuzzy sets are very practical in circumstances with more uncertainty. Kundu et al. [22] proposed a fixed charge transportation problem with type-2 fuzzy parameters from the viewpoint of type reduction and the centroid. Mizumoto and Tanaka [23,24] and Dubois and Prade [25] investigated the logical operations of T2FS. Later, many researchers investigated the theoretical aspects [26–29], as well as various application domains [30–34], of T2FS.

Considering both intuitionistic fuzzy and type-2 fuzzy environments, Singh and Garg [35] proposed the symmetric triangular interval type-2 (TIT2) fuzzy set to develop some new interval type-2 intuitionistic fuzzy aggregation operators, which can consider the multiple interactions between the input arguments. Subsequently, Garg and Singh [36] proposed triangular interval type-2 intuitionistic fuzzy sets and three aggregation operators for them: TIT2 intuitionistic fuzzy weighted averaging, TIT2 intuitionistic fuzzy ordered weighted averaging, and TIT2 intuitionistic fuzzy hybrid averaging, based on Frank norm operation laws. However, despite the existing works on interval type-2 intuitionistic fuzzy sets, to the best of our knowledge, there is no study in the literature on the generalized intuitionistic type-2 fuzzy set. Therefore, to fill this gap, in this study, we introduce the concept of the generalized intuitionistic type-2 fuzzy set (IT2FS), whose type-1 membership is the ordinary fuzzy membership and whose type-2 level consists of both membership and non-membership, as in the intuitionistic fuzzy set. We introduce the notions of basic set operations and focus on the algebraic properties of these sets with several illustrative examples. Further, we define two operators on the set whose basic function is to convert an IT2FS into an ordinary T2FS, and we also describe some of their properties. Finally, we define two distance measures, the Hamming distance and the Euclidean distance, of IT2FS, which are illustrated with a numerical example. The main contributions of this article are highlighted as follows.


The rest of the paper is organized as follows. The preliminary concepts of our study are presented in Section 2. In Section 3, we propose the intuitionistic type-2 fuzzy set (IT2FS) and give examples. The geometrical interpretation of IT2FS is shown in Section 4. Subsequently, in Section 5, some set-theoretic operations of IT2FS, including union, intersection, and complement, are defined. In Section 6, some properties of IT2FS, such as *idempotency*, *commutativity*, *associativity*, *distributivity*, *involution*, and *De Morgan's laws*, are verified. In Section 7, the necessity and possibility operators of IT2FS are defined. Successively, in Section 8, two distance measures of IT2FS are defined, and an example illustrating the application of these distance measures in a real-life medical diagnosis system is presented in Section 9. Finally, the conclusion of the study is drawn in Section 10.

### **2. Preliminaries**

Before introducing the new concept of IT2FS, we first present some essential concepts and notations of T2FS and IFS.

### *2.1. Type-2 Fuzzy Set*

A type-2 fuzzy set (T2FS) is a fuzzy set whose membership degree includes uncertainty, i.e., the membership degree is a fuzzy set and not a crisp value. A T2FS $\widetilde{A}$ is defined as (Mendel and John [37]):

$$\widetilde{A} = \left\{ \left( (x, u),\ \mu_{\widetilde{A}}(x,u) \right) : \forall x \in X,\ \forall u \in J_x \subseteq [0, 1] \right\},$$

where $0 \le \mu_{\widetilde{A}}(x, u) \le 1$ is the secondary membership function and $J_x \subseteq [0,1]$, the primary membership of $x \in X$, is the domain of $\mu_{\widetilde{A}}(x, u)$. Alternatively, $\widetilde{A}$ can be expressed as:

$$\widetilde{A} = \int_{x \in X} \left( \int_{u \in J_x} \mu_{\widetilde{A}}(x, u) / u \right) / x, \quad J_x \subseteq [0, 1],$$

where $\int \int$ denotes the union over all admissible $x$ and $u$. For a discrete universe of discourse, $\int$ is replaced by $\sum$. For each value of $x \in X$, the secondary membership function $\mu_{\widetilde{A}}(x, u)$ is defined as:

$$\mu_{\widetilde{A}}(x) = \int_{u \in J_x} \mu_{\widetilde{A}}(x, u) / u,$$

where for a particular $u = u' \in J_x$, $\mu_{\widetilde{A}}(x, u')$ is called the secondary membership grade of $(x, u')$.

**Example 1.** *Let the set "young" be represented by a T2FS $\widetilde{A}$. The "youthness" is the primary membership function of $\widetilde{A}$, and the "degree of youthness" is the secondary membership function. Let $X = \{8, 10, 14\}$ be an age set with the primary memberships of the points of $X$ being $J_8 = \{0.8, 0.9, 1.0\}$, $J_{10} = \{0.6, 0.7, 0.8\}$, and $J_{14} = \{0.4, 0.5, 0.6\}$, respectively. The secondary membership function of the point 8 is:*

$$
\mu_{\widetilde{A}}(8, u) = (0.9/0.8) + (0.7/0.9) + (0.6/1.0),
$$

*i.e., $\mu_{\widetilde{A}}(8, 0.8) = 0.9$ is the secondary membership grade of 8 with the primary membership grade 0.8. Similarly,*

$$
\mu_{\widetilde{A}}(10, u) = (0.8/0.6) + (0.7/0.7) + (0.6/0.8) \text{ and } \mu_{\widetilde{A}}(14, u) = (0.9/0.4) + (0.8/0.5) + (0.5/0.6).
$$

*Accordingly, the discrete T2FS $\widetilde{A}$ can be represented as:*

$$\widetilde{A} = (0.9/0.8 + 0.7/0.9 + 0.6/1.0)/8 + (0.8/0.6 + 0.7/0.7 + 0.6/0.8)/10 + (0.9/0.4 + 0.8/0.5 + 0.5/0.6)/14.$$
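The discrete T2FS of Example 1 can be stored and queried with a simple nested mapping, as in the short sketch below; the representation and the helper name `secondary_grade` are our own illustrative choices:

```python
# Discrete T2FS from Example 1 as nested dicts:
# outer key x, inner key u (primary membership), value = secondary grade.

A = {
    8:  {0.8: 0.9, 0.9: 0.7, 1.0: 0.6},
    10: {0.6: 0.8, 0.7: 0.7, 0.8: 0.6},
    14: {0.4: 0.9, 0.5: 0.8, 0.6: 0.5},
}

def secondary_grade(t2fs, x, u):
    """Secondary membership grade mu(x, u); 0 if u lies outside J_x."""
    return t2fs.get(x, {}).get(u, 0.0)

print(secondary_grade(A, 8, 0.8))   # 0.9
```

For each $x$, the inner dictionary is exactly the primary membership domain $J_x$ paired with its secondary grades.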

### *2.2. Intuitionistic Fuzzy Set*

The classical fuzzy set is a set with a membership function, but an intuitionistic fuzzy set (IFS) is a set that has a membership function, as well as a non-membership function. According to Atanassov [7,8], an intuitionistic fuzzy set *A* in *X* is defined as an object of the following form.

$$A = \{ \langle x,\ \mu_A(x),\ \nu_A(x) \rangle : x \in X,\ \mu_A(x) \in [0, 1],\ \nu_A(x) \in [0, 1] \},$$

where the functions,

$$
\mu\_A: X \to [0,1]
$$

and:

$$\nu\_A: \mathcal{X} \to [0,1]$$

define the degree of membership and the degree of non-membership respectively such that:

$$0 \le \mu\_A(\mathfrak{x}) + \nu\_A(\mathfrak{x}) \le 1 \text{ for all } \mathfrak{x} \in X.$$

For the discrete universe of discourse, an IFS can be represented as:

$$\sum\_{\mathbf{x}\in\mathcal{X}} \left( \mu\_A(\mathbf{x}), \nu\_A(\mathbf{x}) \right) / \mathbf{x}.$$

According to this concept, every discrete ordinary fuzzy set may be written as:

$$\sum_{x \in X} \left( \mu_A(x),\ 1 - \mu_A(x) \right) / x,$$

where $\mu_A$ is the membership function of the fuzzy set $A$. For a continuous IFS, $\sum$ is replaced by $\int$.

**Example 2.** *Let the set young be represented by the intuitionistic fuzzy set A. The degree of youthness and the degree of adultness are the membership and non-membership function, respectively. Let X* = {12, 15, 17} *and the membership grade of the point 12 be* μ*A*(12) = {0.7, 0.8, 0.9}*, and the non-membership grade of the point 12 is* ν*A*(12) = {0.1, 0.2, 0.0}.

*Similarly, let* μ*A*(15) = {0.6, 0.7, 0.8}*,* ν*A*(15) = {0.4, 0.2, 0.1}*,* μ*A*(17) = {0.4, 0.5, 0.6}*,* ν*A*(17) = {0.5, 0.3, 0.2}*.*

*Therefore, the discrete intuitionistic fuzzy variable A is represented as:*

$$A = ((0.7, 0.1) + (0.8, 0.2) + (0.9, 0.0))/12 + ((0.6, 0.4) + (0.7, 0.2) + (0.8, 0.1))/15 + ((0.4, 0.5) + (0.5, 0.3) + (0.6, 0.2))/17.$$
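The defining constraint of an IFS, $0 \le \mu_A(x) + \nu_A(x) \le 1$, can be checked mechanically on the discrete data of Example 2. A minimal sketch, with an illustrative list-of-pairs representation of our own:

```python
# Discrete intuitionistic fuzzy data of Example 2 as lists of
# (membership, non-membership) pairs per point of X.

A = {
    12: [(0.7, 0.1), (0.8, 0.2), (0.9, 0.0)],
    15: [(0.6, 0.4), (0.7, 0.2), (0.8, 0.1)],
    17: [(0.4, 0.5), (0.5, 0.3), (0.6, 0.2)],
}

def is_valid_ifs(data):
    """Check 0 <= mu + nu <= 1 for every (mu, nu) pair."""
    return all(0.0 <= mu + nu <= 1.0
               for pairs in data.values()
               for mu, nu in pairs)

print(is_valid_ifs(A))  # True
```

A pair such as (0.8, 0.3) would violate the constraint, since 0.8 + 0.3 > 1.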

### **3. Intuitionistic Type-2 Fuzzy Set**

In this section, we introduce the concept of the intuitionistic type-2 fuzzy set, whose type-1 membership is the ordinary fuzzy membership and whose secondary level has both membership and non-membership functions. An IT2FS $\widetilde{A}$ on $X$ is defined as an object of the following form:

$$\widetilde{A} = \left\{ \langle x, u, \mu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u) \rangle : x \in X,\ u \in J_x \subseteq [0, 1] \right\},$$

where the functions:

$$\mu_{\widetilde{A}} : X \times J_x \to [0, 1]$$

and:

$$\nu_{\widetilde{A}} : X \times J_x \to [0, 1]$$

are defined as the degree of membership and the degree of non-membership of the element $u \in J_x$, respectively, and:

$$0 \le \mu_{\widetilde{A}}(x, u) + \nu_{\widetilde{A}}(x, u) \le 1 \text{ for every } x \in X,\ u \in J_x.$$

For a discrete universe of discourse, an IT2FS can be represented as:

$$\widetilde{A} = \sum_{x \in X} \left( \sum_{u \in J_x} \left( \mu_{\widetilde{A}}(x, u),\ \nu_{\widetilde{A}}(x, u) \right) / u \right) / x, \quad J_x \subseteq [0, 1];$$

whereas, for the continuous case, $\sum$ is replaced by $\int$, i.e., for the continuous universe, the representation is:

$$\widetilde{A} = \int_{x \in X} \left( \int_{u \in J_x} \left( \mu_{\widetilde{A}}(x, u),\ \nu_{\widetilde{A}}(x, u) \right) / u \right) / x, \quad J_x \subseteq [0, 1].$$

Subsequently, we explain the concept of IT2FS by an example.

**Example 3.** *Let the set young be represented by an IT2FS $\widetilde{A}$. The youthness is the primary membership function of $\widetilde{A}$. Then, the degree of youthness and the degree of adultness are the secondary membership and non-membership functions, respectively. Let $X = \{8, 10, 14\}$ be the set, and let the primary memberships of the points of $X$ be $J_8 = \{0.8, 0.9, 1.0\}$, $J_{10} = \{0.6, 0.7, 0.8\}$, and $J_{14} = \{0.4, 0.5, 0.6\}$, respectively. Then, the discrete IT2FS $\widetilde{A}$ is given by:*

$$\widetilde{A} = ((0.9, 0.0)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8 + ((0.8, 0.1)/0.6 + (0.7, 0.2)/0.7 + (0.6, 0.4)/0.8)/10 + ((0.9, 0.1)/0.4 + (0.8, 0.2)/0.5 + (0.5, 0.4)/0.6)/14.$$
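The discrete IT2FS of Example 3 attaches a $(\mu, \nu)$ pair to every primary membership value, and the condition $\mu_{\widetilde{A}}(x,u) + \nu_{\widetilde{A}}(x,u) \le 1$ must hold at every $(x, u)$. A minimal sketch with an illustrative nested-dict representation:

```python
# Discrete IT2FS of Example 3: for each x, a map from the primary
# membership u to the pair (mu(x, u), nu(x, u)).

A = {
    8:  {0.8: (0.9, 0.0), 0.9: (0.7, 0.1), 1.0: (0.6, 0.3)},
    10: {0.6: (0.8, 0.1), 0.7: (0.7, 0.2), 0.8: (0.6, 0.4)},
    14: {0.4: (0.9, 0.1), 0.5: (0.8, 0.2), 0.6: (0.5, 0.4)},
}

def is_valid_it2fs(it2fs):
    """Check 0 <= mu + nu <= 1 at every (x, u)."""
    return all(0.0 <= mu + nu <= 1.0
               for secondary in it2fs.values()
               for mu, nu in secondary.values())

print(is_valid_it2fs(A))  # True
```

The residual $1 - \mu - \nu$ at each $(x, u)$ plays the role of the degree of indeterminacy, exactly as in an ordinary IFS.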

### **4. Geometrical Interpretation of the Intuitionistic Type-2 Fuzzy Set**

Let $X$ be the given universe and $\widetilde{A}$ be an IT2FS defined on $X$. Let us construct a mapping $f_{\widetilde{A}}$ from $X$ to $F$ (the tetrahedron shown in Figure 1 with vertices O(0,0,0), A(1,0,0), B(0,1,0), and C(0,0,1)) such that for every $x \in X$, there exists:

$$y = f\_{\widetilde{A}}(\mathbf{x}) \in F$$

with coordinates $(u, \mu_{\widetilde{A}}, \nu_{\widetilde{A}})$, for which $u \in J_x \subseteq [0, 1]$ and $0 \le \mu_{\widetilde{A}} + \nu_{\widetilde{A}} \le 1$.

**Figure 1.** Geometrical representation of IT2FS.

### **5. Operations on IT2FS**

In this section, similar to several existing set-theoretic operations on fuzzy sets, we also present basic operations such as union, intersection, and complement on the proposed IT2FS.

In this context, let us consider two IT2FSs $\widetilde{A}$ and $\widetilde{B}$ on $X$ as defined below:

$$\widetilde{A} = \int_{x \in X} \left( \int_{u \in J_x^u} \left( \mu_{\widetilde{A}}(x, u),\ \nu_{\widetilde{A}}(x, u) \right) / u \right) / x \quad \text{and} \quad \widetilde{B} = \int_{x \in X} \left( \int_{v \in J_x^v} \left( \mu_{\widetilde{B}}(x, v),\ \nu_{\widetilde{B}}(x, v) \right) / v \right) / x,$$

where $J_x^u$ and $J_x^v$ ($\subseteq [0, 1]$) are the domains of the secondary membership functions of $\widetilde{A}$ and $\widetilde{B}$, respectively. Then, the union of $\widetilde{A}$ and $\widetilde{B}$ is defined as:

$$\widetilde{A} \cup \widetilde{B} = \int_{x \in X} \left( \int_{w \in J_x^w} \left( \mu_{\widetilde{A} \cup \widetilde{B}}(x, w),\ \nu_{\widetilde{A} \cup \widetilde{B}}(x, w) \right) / w \right) / x, \quad J_x^u \cup J_x^v = J_x^w \subseteq [0, 1],$$

where:

$$\mu_{\widetilde{A} \cup \widetilde{B}}(x) = \Phi\left( \int_{u \in J_x^u} \mu_{\widetilde{A}}(x, u) / u,\ \int_{v \in J_x^v} \mu_{\widetilde{B}}(x, v) / v \right).$$

By using the extension principle, we obtain,

$$\mu_{\widetilde{A} \cup \widetilde{B}}(x, w) = \int_{u \in J_x^u} \int_{v \in J_x^v} \left( \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v) \right) / \Phi(u, v),$$

where Φ(*u*, *v*) is the *t*-conorm of *u* and *v*, i.e.,

$$\mu_{\widetilde{A} \cup \widetilde{B}}(x, w) = \int_{u \in J_x^u} \int_{v \in J_x^v} \left( \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v) \right) / (u \vee v).$$

Similarly,

$$\nu_{\widetilde{A} \cup \widetilde{B}}(x, w) = \int_{u \in J_x^u} \int_{v \in J_x^v} \left( \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v) \right) / (u \vee v).$$

The intersection of *A*5and 5*B* is defined as:

$$\widetilde{A} \cap \widetilde{B} = \int_{x \in X} \left( \int_{w \in J_x^w} \left( \mu_{\widetilde{A} \cap \widetilde{B}}(x, w),\ \nu_{\widetilde{A} \cap \widetilde{B}}(x, w) \right) / w \right) / x, \quad J_x^u \cup J_x^v = J_x^w \subseteq [0, 1],$$

where:

$$\mu_{\widetilde{A} \cap \widetilde{B}}(x, w) = \int_{u \in J_x^u} \int_{v \in J_x^v} \left( \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v) \right) / (u \wedge v)$$

and:

$$\nu_{\widetilde{A} \cap \widetilde{B}}(x, w) = \int_{u \in J_x^u} \int_{v \in J_x^v} \left( \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v) \right) / (u \wedge v).$$
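For discrete IT2FSs, the union and intersection at a fixed $x$ can be computed directly from the definitions above: each pair $(u, v)$ contributes the secondary pair $(\mu_{\widetilde{A}} \wedge \mu_{\widetilde{B}},\ \nu_{\widetilde{A}} \vee \nu_{\widetilde{B}})$ at $w = u \vee v$ for the union and at $w = u \wedge v$ for the intersection. The sketch below uses toy data; aggregating several contributions at the same $w$ by taking the maximum of $\mu$ and the minimum of $\nu$, as well as the dictionary representation, are our own assumptions:

```python
# Discrete extension-principle union/intersection at a fixed x.

def combine(sec_a, sec_b, primary_op):
    result = {}
    for u, (mu_a, nu_a) in sec_a.items():
        for v, (mu_b, nu_b) in sec_b.items():
            w = primary_op(u, v)                 # max for union, min for intersection
            mu, nu = min(mu_a, mu_b), max(nu_a, nu_b)
            if w in result:                      # aggregate duplicates at the same w
                mu = max(mu, result[w][0])
                nu = min(nu, result[w][1])
            result[w] = (mu, nu)
    return result

# Toy secondary (mu, nu) pairs of two IT2FSs at one point x:
a8 = {0.8: (0.9, 0.0), 0.9: (0.7, 0.1), 1.0: (0.6, 0.3)}
b8 = {0.7: (0.8, 0.1), 0.8: (0.6, 0.3), 0.9: (0.5, 0.5)}
union8 = combine(a8, b8, max)   # t-conorm = max
inter8 = combine(a8, b8, min)   # t-norm = min
```

Repeating this for every $x \in X$ yields the full discrete $\widetilde{A} \cup \widetilde{B}$ and $\widetilde{A} \cap \widetilde{B}$.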

The complement of *A*5is defined as:

$$\widetilde{A}^{c} = \int_{x \in X} \left( \int_{u \in J_x^u} \left( \mu_{\widetilde{A}}(x, u),\ \nu_{\widetilde{A}}(x, u) \right) / (1 - u) \right) / x,$$

and:

$$\overline{\widetilde{A}} = \int_{x \in X} \left( \int_{u \in J_x^u} \left( \nu_{\widetilde{A}}(x, u),\ \mu_{\widetilde{A}}(x, u) \right) / u \right) / x.$$

In addition, there are some more operations on IT2FSs that are defined below.

$$\widetilde{A} \subset \widetilde{B} \text{ iff } (\forall x \in X)\left( u \le v,\ \mu_{\widetilde{A}}(x, u) \le \mu_{\widetilde{B}}(x, v) \text{ and } \nu_{\widetilde{A}}(x, u) \ge \nu_{\widetilde{B}}(x, v) \right)$$

and:

$$\widetilde{A} = \widetilde{B} \text{ iff } (\forall x \in X)\left(u = v, \; \mu_{\widetilde{A}}(x, u) = \mu_{\widetilde{B}}(x, v) \text{ and } \nu_{\widetilde{A}}(x, u) = \nu_{\widetilde{B}}(x, v)\right).$$

In the discrete case, the complement takes the form:

$$\overline{\widetilde{A}} = \sum_{x \in X} \left( \sum_{u \in J_x} \left( \nu_{\widetilde{A}}(x, u), \; \mu_{\widetilde{A}}(x, u) \right) / u \right) / x.$$

For the continuous case, $\sum$ is replaced by $\int$.

We present the following example to illustrate the aforementioned properties of IT2FSs.

**Example 4.** *Let $\widetilde{A}$ and $\widetilde{B}$ be two IT2FSs representing the set "young". Youthness is the primary membership function of $\widetilde{A}$ and $\widetilde{B}$. Let the degree of youthness and the degree of adultness be the secondary membership and non-membership functions of $\widetilde{A}$ and $\widetilde{B}$, respectively. We consider both $\widetilde{A}$ and $\widetilde{B}$ to be defined on $X = \{8, 10, 14\}$, which are eventually represented as:*

$$\begin{aligned} \widetilde{A} &= ((0.9, 0.0)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8 + ((0.8, 0.1)/0.6 + (0.7, 0.2)/0.7 \\ &\quad + (0.6, 0.4)/0.8)/10 + ((0.9, 0.1)/0.4 + (0.8, 0.2)/0.5 + (0.5, 0.4)/0.6)/14 \end{aligned}$$

*and:*

$$\begin{aligned} \widetilde{B} &= ((0.8, 0.1)/0.7 + (0.6, 0.3)/0.8 + (0.5, 0.5)/0.9)/8 + ((0.9, 0.1)/0.4 + (0.7, 0.2)/0.5 \\ &\quad + (0.5, 0.3)/0.6)/10 + ((0.8, 0.2)/0.5 + (0.7, 0.2)/0.6 + (0.6, 0.3)/0.7)/14. \end{aligned}$$

*Now, for the particular element* $8$, *the secondary membership and non-membership functions of $\widetilde{A}$ and $\widetilde{B}$ are:*

$$((0.9, 0.0)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8$$

*and:*

$$((0.8, 0.1) / 0.7 + (0.6, 0.3) / 0.8 + (0.5, 0.5) / 0.9) / 8$$

*respectively.*

*Then, for* $x = 8$, *the union of $\widetilde{A}$ and $\widetilde{B}$ is given by* $\left(\mu_{\widetilde{A} \cup \widetilde{B}}(8), \nu_{\widetilde{A} \cup \widetilde{B}}(8)\right)$:

$$\begin{aligned}
&= ((0.9, 0.0)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8 \;\cup\; ((0.8, 0.1)/0.7 + (0.6, 0.3)/0.8 + (0.5, 0.5)/0.9)/8 \\
&= (((0.9 \wedge 0.8), (0.0 \vee 0.1))/(0.8 \vee 0.7) + ((0.7 \wedge 0.8), (0.1 \vee 0.1))/(0.9 \vee 0.7) + ((0.6 \wedge 0.8), (0.3 \vee 0.1))/(1.0 \vee 0.7) \\
&\quad + ((0.9 \wedge 0.6), (0.0 \vee 0.3))/(0.8 \vee 0.8) + ((0.7 \wedge 0.6), (0.1 \vee 0.3))/(0.9 \vee 0.8) + ((0.6 \wedge 0.6), (0.3 \vee 0.3))/(1.0 \vee 0.8) \\
&\quad + ((0.9 \wedge 0.5), (0.0 \vee 0.5))/(0.8 \vee 0.9) + ((0.7 \wedge 0.5), (0.1 \vee 0.5))/(0.9 \vee 0.9) + ((0.6 \wedge 0.5), (0.3 \vee 0.5))/(1.0 \vee 0.9))/8 \\
&= ((0.8, 0.1)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0 + (0.6, 0.3)/0.8 + (0.6, 0.3)/0.9 + (0.6, 0.3)/1.0 \\
&\quad + (0.5, 0.5)/0.9 + (0.5, 0.5)/0.9 + (0.5, 0.5)/1.0)/8 \\
&= ((\max\{0.8, 0.6\}, \min\{0.1, 0.3\})/0.8 + (\max\{0.7, 0.6, 0.5, 0.5\}, \min\{0.1, 0.3, 0.5, 0.5\})/0.9 \\
&\quad + (\max\{0.6, 0.6, 0.5\}, \min\{0.3, 0.3, 0.5\})/1.0)/8 \\
&= ((0.8, 0.1)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8.
\end{aligned}$$

*Similarly, the intersection* $\left(\mu_{\widetilde{A} \cap \widetilde{B}}(8), \nu_{\widetilde{A} \cap \widetilde{B}}(8)\right)$ *is given by:*

$$\begin{aligned}
&= ((0.9, 0.0)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8 \;\cap\; ((0.8, 0.1)/0.7 + (0.6, 0.3)/0.8 + (0.5, 0.5)/0.9)/8 \\
&= (((0.9 \wedge 0.8), (0.0 \vee 0.1))/(0.8 \wedge 0.7) + ((0.7 \wedge 0.8), (0.1 \vee 0.1))/(0.9 \wedge 0.7) + ((0.6 \wedge 0.8), (0.3 \vee 0.1))/(1.0 \wedge 0.7) \\
&\quad + ((0.9 \wedge 0.6), (0.0 \vee 0.3))/(0.8 \wedge 0.8) + ((0.7 \wedge 0.6), (0.1 \vee 0.3))/(0.9 \wedge 0.8) + ((0.6 \wedge 0.6), (0.3 \vee 0.3))/(1.0 \wedge 0.8) \\
&\quad + ((0.9 \wedge 0.5), (0.0 \vee 0.5))/(0.8 \wedge 0.9) + ((0.7 \wedge 0.5), (0.1 \vee 0.5))/(0.9 \wedge 0.9) + ((0.6 \wedge 0.5), (0.3 \vee 0.5))/(1.0 \wedge 0.9))/8 \\
&= ((0.8, 0.1)/0.7 + (0.7, 0.1)/0.7 + (0.6, 0.3)/0.7 + (0.6, 0.3)/0.8 + (0.6, 0.3)/0.8 + (0.6, 0.3)/0.8 \\
&\quad + (0.5, 0.5)/0.8 + (0.5, 0.5)/0.9 + (0.5, 0.5)/0.9)/8 \\
&= ((\max\{0.8, 0.7, 0.6\}, \min\{0.1, 0.1, 0.3\})/0.7 + (\max\{0.6, 0.6, 0.6, 0.5\}, \min\{0.3, 0.3, 0.3, 0.5\})/0.8 \\
&\quad + (\max\{0.5, 0.5\}, \min\{0.5, 0.5\})/0.9)/8 \\
&= ((0.8, 0.1)/0.7 + (0.6, 0.3)/0.8 + (0.5, 0.5)/0.9)/8.
\end{aligned}$$
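The join and meet computations above can be sketched numerically; in the following illustrative Python snippet (the dict-based representation, helper names, and duplicate-merging rule are our own choices, not code from the literature), the secondary grades at a fixed $x$ are stored as a mapping from each primary grade $u$ to a pair $(\mu, \nu)$:

```python
def combine(a, b, primary_op):
    """Join/meet of two secondary (mu, nu) functions given as {u: (mu, nu)}."""
    out = {}
    for u1, (m1, n1) in a.items():
        for u2, (m2, n2) in b.items():
            w = primary_op(u1, u2)          # u1 | u2 for union, u1 & u2 for intersection
            mu, nu = min(m1, m2), max(n1, n2)
            if w in out:                    # merge duplicate primary grades: max mu, min nu
                mu, nu = max(out[w][0], mu), min(out[w][1], nu)
            out[w] = (mu, nu)
    return out

def union(a, b):
    return combine(a, b, max)   # primary grades joined by max

def intersection(a, b):
    return combine(a, b, min)   # primary grades met by min

# Secondary functions of A and B at x = 8 from Example 4:
A8 = {0.8: (0.9, 0.0), 0.9: (0.7, 0.1), 1.0: (0.6, 0.3)}
B8 = {0.7: (0.8, 0.1), 0.8: (0.6, 0.3), 0.9: (0.5, 0.5)}

print(union(A8, B8))         # {0.8: (0.8, 0.1), 0.9: (0.7, 0.1), 1.0: (0.6, 0.3)}
print(intersection(A8, B8))  # {0.7: (0.8, 0.1), 0.8: (0.6, 0.3), 0.9: (0.5, 0.5)}
```

The two printed results reproduce the union and intersection obtained above for $x = 8$.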

$$\begin{aligned} \widetilde{A}^{c} &= ((0.9, 0.0)/0.2 + (0.7, 0.1)/0.1 + (0.6, 0.3)/0.0)/8 + ((0.8, 0.1)/0.4 + (0.7, 0.2)/0.3 \\ &\quad + (0.6, 0.4)/0.2)/10 + ((0.9, 0.1)/0.6 + (0.8, 0.2)/0.5 + (0.5, 0.4)/0.4)/14 \end{aligned}$$

*and:*

$$\begin{aligned} \overline{\widetilde{A}} &= ((0.0, 0.9)/0.8 + (0.1, 0.7)/0.9 + (0.3, 0.6)/1.0)/8 + ((0.1, 0.8)/0.6 + (0.2, 0.7)/0.7 \\ &\quad + (0.4, 0.6)/0.8)/10 + ((0.1, 0.9)/0.4 + (0.2, 0.8)/0.5 + (0.4, 0.5)/0.6)/14. \end{aligned}$$

### **6. Properties of IT2FS**

Various set-theoretic properties, such as *idempotency*, *commutativity*, *associativity*, the *distributive laws*, *involution*, and *De Morgan's laws*, hold for fuzzy sets and intuitionistic fuzzy sets. In this section, we analogously define similar properties for IT2FSs, along with necessary and relevant examples. In what follows, we consider three IT2FSs $\widetilde{A}$, $\widetilde{B}$, and $\widetilde{C}$, and we state these properties as presented below.


(i) $\widetilde{A} \cup \widetilde{A} = \widetilde{A}$, $\widetilde{A} \cap \widetilde{A} = \widetilde{A}$ (*Idempotency*)

(ii) $\widetilde{A} \cup \widetilde{B} = \widetilde{B} \cup \widetilde{A}$, $\widetilde{A} \cap \widetilde{B} = \widetilde{B} \cap \widetilde{A}$ (*Commutativity*)

(iii) $(\widetilde{A} \cup \widetilde{B}) \cup \widetilde{C} = \widetilde{A} \cup (\widetilde{B} \cup \widetilde{C})$, $(\widetilde{A} \cap \widetilde{B}) \cap \widetilde{C} = \widetilde{A} \cap (\widetilde{B} \cap \widetilde{C})$ (*Associativity*)

(iv) $\widetilde{A} \cup (\widetilde{B} \cap \widetilde{C}) = (\widetilde{A} \cup \widetilde{B}) \cap (\widetilde{A} \cup \widetilde{C})$, $\widetilde{A} \cap (\widetilde{B} \cup \widetilde{C}) = (\widetilde{A} \cap \widetilde{B}) \cup (\widetilde{A} \cap \widetilde{C})$ (*Distributive laws*)

(v) $\left(\widetilde{A}^{c}\right)^{c} = \widetilde{A}$ (*Involution*)

(vi) $(\widetilde{A} \cup \widetilde{B})^{c} = \widetilde{A}^{c} \cap \widetilde{B}^{c}$, $(\widetilde{A} \cap \widetilde{B})^{c} = \widetilde{A}^{c} \cup \widetilde{B}^{c}$ (*De Morgan's laws*)

The proofs of (i)–(iii) and (v) are obvious. We illustrate these results later by examples.

**Proof of (vi).**

$$\begin{aligned}
(\widetilde{A} \cup \widetilde{B})^{c} &= \{\langle x, (u \vee v), \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\}^{c} \\
&= \{\langle x, 1 - (u \vee v), \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, (1 - u) \wedge (1 - v), \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, 1 - u, \mu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \cap \{\langle x, 1 - v, \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ v \in J_x \subseteq [0, 1]\} \\
&= \widetilde{A}^{c} \cap \widetilde{B}^{c}.
\end{aligned}$$

Furthermore:

$$\begin{aligned}
(\widetilde{A} \cap \widetilde{B})^{c} &= \{\langle x, (u \wedge v), \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\}^{c} \\
&= \{\langle x, 1 - (u \wedge v), \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, (1 - u) \vee (1 - v), \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, 1 - u, \mu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \cup \{\langle x, 1 - v, \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ v \in J_x \subseteq [0, 1]\} \\
&= \widetilde{A}^{c} \cup \widetilde{B}^{c}.
\end{aligned}$$


**Example 5.** *Let $\widetilde{A}$, $\widetilde{B}$, and $\widetilde{C}$ be three IT2FSs on $X = \{6, 5, 4\}$ defined as:*

$$\widetilde{A} = (1.0, 0.0)/0.9/6 + (0.1, 0.7)/0.3/5 + (0.4, 0.5)/0.6/4,$$

$$\widetilde{B} = (0.8, 0.1)/0.4/6 + (0.1, 0.8)/0.5/5 + (0.5, 0.5)/0.9/4,$$

and:

$$\widetilde{C} = (0.1, 0.8)/0.0/6 + (1.0, 0.0)/0.9/5 + (0.4, 0.4)/0.7/4.$$

*For simplicity, we have taken $J_x$ as a singleton set for each $x \in X$.*

*Then:*

$$\widetilde{A} \cup \widetilde{A} = (1.0, 0.0)/0.9/6 + (0.1, 0.7)/0.3/5 + (0.4, 0.5)/0.6/4 = \widetilde{A}.$$

*Therefore, the idempotent property holds.*

$$\widetilde{A} \cup \widetilde{B} = (0.8, 0.1)/0.9/6 + (0.1, 0.8)/0.5/5 + (0.4, 0.5)/0.9/4$$

*and:*

$$\widetilde{B} \cup \widetilde{A} = (0.8, 0.1)/0.9/6 + (0.1, 0.8)/0.5/5 + (0.4, 0.5)/0.9/4.$$

*Correspondingly,*

$$\widetilde{A} \cap \widetilde{B} = (0.8, 0.1)/0.4/6 + (0.1, 0.8)/0.3/5 + (0.4, 0.5)/0.6/4$$

*and:*

$$\widetilde{B} \cap \widetilde{A} = (0.8, 0.1)/0.4/6 + (0.1, 0.8)/0.3/5 + (0.4, 0.5)/0.6/4.$$

*Therefore, the commutative property holds.*

$$(\widetilde{A} \cup \widetilde{B}) \cup \widetilde{C} = (0.1, 0.8)/0.9/6 + (0.1, 0.8)/0.9/5 + (0.4, 0.5)/0.9/4$$


*and:*

$$\widetilde{A} \cup (\widetilde{B} \cup \widetilde{C}) = (0.1, 0.8)/0.9/6 + (0.1, 0.8)/0.9/5 + (0.4, 0.5)/0.9/4.$$

*Likewise,*

$$(\widetilde{A} \cap \widetilde{B}) \cap \widetilde{C} = (0.1, 0.8)/0.0/6 + (0.1, 0.8)/0.3/5 + (0.4, 0.5)/0.6/4$$

*and:*

$$\widetilde{A} \cap (\widetilde{B} \cap \widetilde{C}) = (0.1, 0.8)/0.0/6 + (0.1, 0.8)/0.3/5 + (0.4, 0.5)/0.6/4.$$

*Therefore, the associative property holds.*

$$\widetilde{A} \cup (\widetilde{B} \cap \widetilde{C}) = (0.1, 0.8)/0.9/6 + (0.1, 0.8)/0.5/5 + (0.4, 0.5)/0.7/4$$

*and:*

$$(\widetilde{A}\cup\widetilde{B})\cap(\widetilde{A}\cup\widetilde{C}) = (0.1, 0.8)/0.9/6 + (0.1, 0.8)/0.5/5 + (0.4, 0.5)/0.7/4.$$

*Moreover,*

$$\widetilde{A} \cap (\widetilde{B} \cup \widetilde{C}) = (0.1, 0.8)/0.4/6 + (0.1, 0.8)/0.3/5 + (0.4, 0.5)/0.6/4$$

*and:*

$$(\widetilde{A}\cap\widetilde{B})\cup(\widetilde{A}\cap\widetilde{C}) = (0.1, 0.8)/0.4/6 + (0.1, 0.8)/0.3/5 + (0.4, 0.5)/0.6/4.$$

*Therefore, the distributive property holds.*

$$\left(\widetilde{A}^{c}\right)^{c} = (1.0, 0.0)/0.9/6 + (0.1, 0.7)/0.3/5 + (0.4, 0.5)/0.6/4 = \widetilde{A}.$$

*Therefore, the involution property holds.*

$$\left(\widetilde{A}\cap\widetilde{B}\right)^{c} = (0.8, 0.1)/0.6/6 + (0.1, 0.8)/0.7/5 + (0.4, 0.5)/0.4/4$$

*and:*

$$\widetilde{A}^{c} \cup \widetilde{B}^{c} = (0.8, 0.1)/0.6/6 + (0.1, 0.8)/0.7/5 + (0.4, 0.5)/0.4/4.$$

*Furthermore,*

$$\left(\widetilde{A}\cup\widetilde{B}\right)^{c} = (0.8, 0.1)/0.1/6 + (0.1, 0.8)/0.5/5 + (0.4, 0.5)/0.1/4$$

*and:*

$$\widetilde{A}^{c} \cap \widetilde{B}^{c} = (0.8, 0.1)/0.1/6 + (0.1, 0.8)/0.5/5 + (0.4, 0.5)/0.1/4.$$

*Therefore, De Morgan's law holds.*
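De Morgan's laws can also be checked mechanically. The following sketch (the dict-based representation and helper names are our own illustrative choices) encodes an IT2FS with singleton $J_x$, as in Example 5, as a mapping $x \mapsto (\mu, \nu, u)$, and verifies both laws on the sets $\widetilde{A}$ and $\widetilde{C}$:

```python
def union(a, b):
    # secondary grades: min of mu, max of nu; primary grades joined by max
    return {x: (min(a[x][0], b[x][0]), max(a[x][1], b[x][1]),
                max(a[x][2], b[x][2])) for x in a}

def intersection(a, b):
    # same secondary combination; primary grades met by min
    return {x: (min(a[x][0], b[x][0]), max(a[x][1], b[x][1]),
                min(a[x][2], b[x][2])) for x in a}

def complement(a):
    # the complement flips the primary grade u to 1 - u
    return {x: (mu, nu, round(1 - u, 10)) for x, (mu, nu, u) in a.items()}

# Sets A and C of Example 5:
A = {6: (1.0, 0.0, 0.9), 5: (0.1, 0.7, 0.3), 4: (0.4, 0.5, 0.6)}
C = {6: (0.1, 0.8, 0.0), 5: (1.0, 0.0, 0.9), 4: (0.4, 0.4, 0.7)}

# De Morgan's laws (property (vi)) hold elementwise:
assert complement(intersection(A, C)) == union(complement(A), complement(C))
assert complement(union(A, C)) == intersection(complement(A), complement(C))
```

The `round(..., 10)` guards only against floating-point noise in `1 - u`; the identities themselves follow from $1 - (u \wedge v) = (1-u) \vee (1-v)$.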

### **7. Necessity and Possibility Operators on IT2FS**

Keeping in view the existence of several measure functions for the fuzzy set and its variants, in this section we propose the necessity and possibility operators for IT2FS. There are some cases in which we need a gross result: if we want to reduce an IT2FS to an ordinary T2FS, we need two operators that can transform an IT2FS into an ordinary T2FS. Let $\widetilde{A}$ be an IT2FS over $X$ with primary membership function $u$, secondary membership function $\mu_{\widetilde{A}}(x, u)$, and secondary non-membership function $\nu_{\widetilde{A}}(x, u)$. Then, the two operators are defined as follows.

(i) Necessity operator:

$$\Delta \widetilde{A} = \left\{ \langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u) \rangle : x \in X, \; u \in J_x \subseteq [0, 1] \right\}.$$

(ii) Possibility operator:

$$\nabla \widetilde{A} = \left\{ \langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u) \rangle : x \in X, \; u \in J_x \subseteq [0, 1] \right\}.$$

The operators can also be represented as:

$$\Delta \widetilde{A} = \int_{x \in X} \left( \int_{u \in J_x} \left( \mu_{\widetilde{A}}(x, u), \; 1 - \mu_{\widetilde{A}}(x, u) \right) / u \right) / x$$

and:

$$\nabla \widetilde{A} = \int_{x \in X} \left( \int_{u \in J_x} \left( 1 - \nu_{\widetilde{A}}(x, u), \; \nu_{\widetilde{A}}(x, u) \right) / u \right) / x.$$

For the discrete case, $\int$ is replaced by $\sum$. Obviously, if $\widetilde{A}$ is an ordinary T2FS, then $\Delta \widetilde{A} = \widetilde{A} = \nabla \widetilde{A}$. Let us now explain this idea by an example.

**Example 6.** *Consider Example 3 in Section 3.*

$$\begin{aligned} \widetilde{A} &= ((0.9, 0.0)/0.8 + (0.7, 0.1)/0.9 + (0.6, 0.3)/1.0)/8 + ((0.8, 0.1)/0.6 + (0.7, 0.2)/0.7 \\ &\quad + (0.6, 0.4)/0.8)/10 + ((0.9, 0.1)/0.4 + (0.8, 0.2)/0.5 + (0.5, 0.4)/0.6)/14. \end{aligned}$$

*Now, for this set $\widetilde{A}$:*

$$\begin{aligned} \Delta \widetilde{A} &= ((0.9, 0.1)/0.8 + (0.7, 0.3)/0.9 + (0.6, 0.4)/1.0)/8 + ((0.8, 0.2)/0.6 + (0.7, 0.3)/0.7 \\ &\quad + (0.6, 0.4)/0.8)/10 + ((0.9, 0.1)/0.4 + (0.8, 0.2)/0.5 + (0.5, 0.5)/0.6)/14 \end{aligned}$$

*and:*

$$\begin{aligned} \nabla \widetilde{A} &= ((1.0, 0.0)/0.8 + (0.9, 0.1)/0.9 + (0.7, 0.3)/1.0)/8 + ((0.9, 0.1)/0.6 + (0.8, 0.2)/0.7 \\ &\quad + (0.6, 0.4)/0.8)/10 + ((0.9, 0.1)/0.4 + (0.8, 0.2)/0.5 + (0.6, 0.4)/0.6)/14. \end{aligned}$$
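The two operators are simple pointwise transformations of the secondary grades, which makes them easy to sketch in code. In the following illustrative snippet (representation and function names are our own), the secondary grades at a fixed $x$ are stored as a dict $u \mapsto (\mu, \nu)$, and the values are those of $\widetilde{A}$ at $x = 8$:

```python
def necessity(sec):
    # Delta: the non-membership grade nu is replaced by 1 - mu
    return {u: (mu, round(1 - mu, 10)) for u, (mu, nu) in sec.items()}

def possibility(sec):
    # Nabla: the membership grade mu is replaced by 1 - nu
    return {u: (round(1 - nu, 10), nu) for u, (mu, nu) in sec.items()}

# Secondary grades of A at x = 8:
A8 = {0.8: (0.9, 0.0), 0.9: (0.7, 0.1), 1.0: (0.6, 0.3)}

print(necessity(A8))    # {0.8: (0.9, 0.1), 0.9: (0.7, 0.3), 1.0: (0.6, 0.4)}
print(possibility(A8))  # {0.8: (1.0, 0.0), 0.9: (0.9, 0.1), 1.0: (0.7, 0.3)}
```

The printed values agree with the $x = 8$ terms of $\Delta \widetilde{A}$ and $\nabla \widetilde{A}$ in Example 6.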

### *Proposition*

Here, we compile some relevant properties of the necessity and possibility operators on IT2FSs. For every IT2FS $\widetilde{A}$, we have:

(i) $\overline{\Delta \overline{\widetilde{A}}} = \nabla \widetilde{A}$

(ii) $\overline{\nabla \overline{\widetilde{A}}} = \Delta \widetilde{A}$

(iii) $\Delta \Delta \widetilde{A} = \Delta \widetilde{A}$

(iv) $\Delta \nabla \widetilde{A} = \nabla \widetilde{A}$

(v) $\nabla \Delta \widetilde{A} = \Delta \widetilde{A}$

(vi) $\nabla \nabla \widetilde{A} = \nabla \widetilde{A}$

(vii) $\Delta \widetilde{A} \subset \widetilde{A} \subset \nabla \widetilde{A}$
### **Proof.**

(i)
$$\begin{aligned}
\overline{\Delta \overline{\widetilde{A}}} &= \overline{\Delta\{\langle x, u, \nu_{\widetilde{A}}(x, u), \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\}} \\
&= \overline{\{\langle x, u, \nu_{\widetilde{A}}(x, u), 1 - \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\}} \\
&= \{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \nabla \widetilde{A}.
\end{aligned}$$

(ii)
$$\begin{aligned}
\overline{\nabla \overline{\widetilde{A}}} &= \overline{\nabla\{\langle x, u, \nu_{\widetilde{A}}(x, u), \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\}} \\
&= \overline{\{\langle x, u, 1 - \mu_{\widetilde{A}}(x, u), \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\}} \\
&= \{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \Delta \widetilde{A}.
\end{aligned}$$

(iii)
$$\Delta \Delta \widetilde{A} = \Delta\{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \Delta \widetilde{A}.$$

(iv)
$$\begin{aligned}
\Delta \nabla \widetilde{A} &= \Delta\{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), 1 - (1 - \nu_{\widetilde{A}}(x, u))\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \nabla \widetilde{A}.
\end{aligned}$$

(v)
$$\begin{aligned}
\nabla \Delta \widetilde{A} &= \nabla\{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, 1 - (1 - \mu_{\widetilde{A}}(x, u)), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \Delta \widetilde{A}.
\end{aligned}$$

(vi)
$$\nabla \nabla \widetilde{A} = \nabla\{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} = \nabla \widetilde{A}.$$

(vii)
$$\Delta \widetilde{A} = \{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\}.$$

Now, $1 - \mu_{\widetilde{A}}(x, u) \ge \nu_{\widetilde{A}}(x, u)$, since $0 \le \mu_{\widetilde{A}}(x, u) + \nu_{\widetilde{A}}(x, u) \le 1$. Therefore,

$$\{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \subset \{\langle x, u, \mu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\},$$

i.e., $\Delta \widetilde{A} \subset \widetilde{A}$. A parallel argument using $1 - \nu_{\widetilde{A}}(x, u) \ge \mu_{\widetilde{A}}(x, u)$ gives $\widetilde{A} \subset \nabla \widetilde{A}$. Consequently, $\Delta \widetilde{A} \subset \widetilde{A} \subset \nabla \widetilde{A}$. $\square$
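Property (vii) can be spot-checked numerically. The following sketch (the representation and the pointwise subset test are our own illustrative choices) uses the secondary grades of $\widetilde{A}$ at $x = 8$ from Example 4:

```python
def subset(a, b):
    """a is a subset of b pointwise: mu_a <= mu_b and nu_a >= nu_b at each u."""
    return all(a[u][0] <= b[u][0] and a[u][1] >= b[u][1] for u in a)

# Secondary grades of A at x = 8, as {u: (mu, nu)}:
A8 = {0.8: (0.9, 0.0), 0.9: (0.7, 0.1), 1.0: (0.6, 0.3)}

# Necessity replaces nu by 1 - mu; possibility replaces mu by 1 - nu:
nec8 = {u: (mu, round(1 - mu, 10)) for u, (mu, nu) in A8.items()}
pos8 = {u: (round(1 - nu, 10), nu) for u, (mu, nu) in A8.items()}

# Delta A ⊂ A ⊂ Nabla A:
assert subset(nec8, A8) and subset(A8, pos8)
```

The assertion holds because $1 - \mu \ge \nu$ and $1 - \nu \ge \mu$ whenever $\mu + \nu \le 1$.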


**Theorem 1.** *For any two IT2FSs $\widetilde{A}$ and $\widetilde{B}$, we have:*

$$\begin{array}{ll} \text{(i)} & \Delta(\widetilde{A} \cap \widetilde{B}) = \Delta\widetilde{A} \cap \Delta\widetilde{B} \\ \text{(ii)} & \Delta(\widetilde{A} \cup \widetilde{B}) = \Delta\widetilde{A} \cup \Delta\widetilde{B} \\ \text{(iii)} & \nabla(\widetilde{A} \cap \widetilde{B}) = \nabla\widetilde{A} \cap \nabla\widetilde{B} \\ \text{(iv)} & \nabla(\widetilde{A} \cup \widetilde{B}) = \nabla\widetilde{A} \cup \nabla\widetilde{B} \end{array}$$

### **Proof.**

(i)
$$\begin{aligned}
\Delta(\widetilde{A} \cap \widetilde{B}) &= \Delta\{\langle x, u \wedge v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \wedge v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), 1 - (\mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v))\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \wedge v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), (1 - \mu_{\widetilde{A}}(x, u)) \vee (1 - \mu_{\widetilde{B}}(x, v))\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \cap \{\langle x, v, \mu_{\widetilde{B}}(x, v), 1 - \mu_{\widetilde{B}}(x, v)\rangle : x \in X,\ v \in J_x \subseteq [0, 1]\} \\
&= \Delta\widetilde{A} \cap \Delta\widetilde{B}.
\end{aligned}$$

(ii)
$$\begin{aligned}
\Delta(\widetilde{A} \cup \widetilde{B}) &= \Delta\{\langle x, u \vee v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \vee v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), 1 - (\mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v))\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \vee v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), (1 - \mu_{\widetilde{A}}(x, u)) \vee (1 - \mu_{\widetilde{B}}(x, v))\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, \mu_{\widetilde{A}}(x, u), 1 - \mu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \cup \{\langle x, v, \mu_{\widetilde{B}}(x, v), 1 - \mu_{\widetilde{B}}(x, v)\rangle : x \in X,\ v \in J_x \subseteq [0, 1]\} \\
&= \Delta\widetilde{A} \cup \Delta\widetilde{B}.
\end{aligned}$$

(iii)
$$\begin{aligned}
\nabla(\widetilde{A} \cap \widetilde{B}) &= \nabla\{\langle x, u \wedge v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \wedge v, 1 - (\nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \wedge v, (1 - \nu_{\widetilde{A}}(x, u)) \wedge (1 - \nu_{\widetilde{B}}(x, v)), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \cap \{\langle x, v, 1 - \nu_{\widetilde{B}}(x, v), \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ v \in J_x \subseteq [0, 1]\} \\
&= \nabla\widetilde{A} \cap \nabla\widetilde{B}.
\end{aligned}$$

(iv)
$$\begin{aligned}
\nabla(\widetilde{A} \cup \widetilde{B}) &= \nabla\{\langle x, u \vee v, \mu_{\widetilde{A}}(x, u) \wedge \mu_{\widetilde{B}}(x, v), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \vee v, 1 - (\nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u \vee v, (1 - \nu_{\widetilde{A}}(x, u)) \wedge (1 - \nu_{\widetilde{B}}(x, v)), \nu_{\widetilde{A}}(x, u) \vee \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ u, v \in J_x \subseteq [0, 1]\} \\
&= \{\langle x, u, 1 - \nu_{\widetilde{A}}(x, u), \nu_{\widetilde{A}}(x, u)\rangle : x \in X,\ u \in J_x \subseteq [0, 1]\} \cup \{\langle x, v, 1 - \nu_{\widetilde{B}}(x, v), \nu_{\widetilde{B}}(x, v)\rangle : x \in X,\ v \in J_x \subseteq [0, 1]\} \\
&= \nabla\widetilde{A} \cup \nabla\widetilde{B}.
\end{aligned}$$
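Theorem 1 can also be spot-checked numerically. The sketch below (our own illustration, using the singleton-$J_x$ representation $x \mapsto (\mu, \nu, u)$ and the sets $\widetilde{A}$ and $\widetilde{C}$ of Example 5) verifies that the necessity and possibility operators distribute over union and intersection:

```python
def union(a, b):
    return {x: (min(a[x][0], b[x][0]), max(a[x][1], b[x][1]),
                max(a[x][2], b[x][2])) for x in a}

def intersection(a, b):
    return {x: (min(a[x][0], b[x][0]), max(a[x][1], b[x][1]),
                min(a[x][2], b[x][2])) for x in a}

def nec(a):   # necessity: nu becomes 1 - mu
    return {x: (mu, round(1 - mu, 10), u) for x, (mu, nu, u) in a.items()}

def poss(a):  # possibility: mu becomes 1 - nu
    return {x: (round(1 - nu, 10), nu, u) for x, (mu, nu, u) in a.items()}

A = {6: (1.0, 0.0, 0.9), 5: (0.1, 0.7, 0.3), 4: (0.4, 0.5, 0.6)}
C = {6: (0.1, 0.8, 0.0), 5: (1.0, 0.0, 0.9), 4: (0.4, 0.4, 0.7)}

assert nec(intersection(A, C)) == intersection(nec(A), nec(C))    # (i)
assert nec(union(A, C)) == union(nec(A), nec(C))                  # (ii)
assert poss(intersection(A, C)) == intersection(poss(A), poss(C)) # (iii)
assert poss(union(A, C)) == union(poss(A), poss(C))               # (iv)
```

The identities reduce to $1 - (a \wedge b) = (1-a) \vee (1-b)$ and $1 - (a \vee b) = (1-a) \wedge (1-b)$, exactly as in the proof.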

By virtue of the above results, we can define the following relations.

**Definition 1.** *For any two IT2FSs $\widetilde{A}$ and $\widetilde{B}$, we define the following relations.*

$$\widetilde{A} \subset_{\Delta} \widetilde{B} \text{ iff } (\forall x \in X)(u \le v),\ \mu_{\widetilde{A}}(x, u) \le \mu_{\widetilde{B}}(x, v);$$

$$\widetilde{A} \subset_{\nabla} \widetilde{B} \text{ iff } (\forall x \in X)(u \le v),\ \nu_{\widetilde{A}}(x, u) \ge \nu_{\widetilde{B}}(x, v).$$

*Further, a relationship between two operators is established in the following theorem.*

**Theorem 2.** *For any two IT2FSs $\widetilde{A}$ and $\widetilde{B}$, we have:*

(i) $\widetilde{A} \subset_{\Delta} \widetilde{B}$ *iff* $\Delta\widetilde{A} \subset \Delta\widetilde{B}$

(ii) $\widetilde{A} \subset_{\nabla} \widetilde{B}$ *iff* $\nabla\widetilde{A} \subset \nabla\widetilde{B}$

(iii) $\widetilde{A} \subset_{\Delta} \widetilde{B}$ *and* $\widetilde{A} \subset_{\nabla} \widetilde{B}$ *iff* $\widetilde{A} \subset \widetilde{B}$

### **Proof.**

(i) If $\widetilde{A} \subset_{\Delta} \widetilde{B}$, then $(\forall x \in X)\left(u \le v,\ \mu_{\widetilde{A}}(x, u) \le \mu_{\widetilde{B}}(x, v)\right)$. Therefore,

$$1 - \mu_{\widetilde{A}}(x, u) \ge 1 - \mu_{\widetilde{B}}(x, v).$$

Hence,

$$(\forall x \in X)\left(u \le v,\ \mu_{\widetilde{A}}(x, u) \le \mu_{\widetilde{B}}(x, v),\ 1 - \mu_{\widetilde{A}}(x, u) \ge 1 - \mu_{\widetilde{B}}(x, v)\right).$$

Therefore,

$$\Delta \widetilde{A} \subset \Delta \widetilde{B}.$$

On the other hand, if $\Delta\widetilde{A} \subset \Delta\widetilde{B}$, then:

$$(\forall x \in X)\left(u \le v,\ \mu_{\widetilde{A}}(x, u) \le \mu_{\widetilde{B}}(x, v),\ 1 - \mu_{\widetilde{A}}(x, u) \ge 1 - \mu_{\widetilde{B}}(x, v)\right).$$

This implies $\widetilde{A} \subset_{\Delta} \widetilde{B}$.

(ii) If $\widetilde{A} \subset_{\nabla} \widetilde{B}$, then $(\forall x \in X)\left(u \le v,\ \nu_{\widetilde{A}}(x, u) \ge \nu_{\widetilde{B}}(x, v)\right)$. Therefore,

$$1 - \nu_{\widetilde{A}}(x, u) \le 1 - \nu_{\widetilde{B}}(x, v).$$

Hence,

$$(\forall x \in X)\left(u \le v,\ 1 - \nu_{\widetilde{A}}(x, u) \le 1 - \nu_{\widetilde{B}}(x, v),\ \nu_{\widetilde{A}}(x, u) \ge \nu_{\widetilde{B}}(x, v)\right).$$

Therefore,

$$\nabla \widetilde{A} \subset \nabla \widetilde{B}.$$

On the other hand, if:

$$\nabla\widetilde{A} \subset \nabla\widetilde{B}.$$

then:

$$\left[(\forall x \in X)\big(u \le v,\ 1 - \nu_{\widetilde{A}}(x,u) \le 1 - \nu_{\widetilde{B}}(x,v),\ \nu_{\widetilde{A}}(x,u) \ge \nu_{\widetilde{B}}(x,v)\big)\right].$$

This gives $\widetilde{A} \subset_{\nabla} \widetilde{B}$.

(iii) If $\widetilde{A} \subset_{\Delta} \widetilde{B}$ and $\widetilde{A} \subset_{\nabla} \widetilde{B}$, then:

$$(\forall x \in X)\big(u \le v,\ \mu_{\widetilde{A}}(x,u) \le \mu_{\widetilde{B}}(x,v)\big),$$

and:

$$(\forall x \in X)\big(u \le v,\ \nu_{\widetilde{A}}(x,u) \ge \nu_{\widetilde{B}}(x,v)\big).$$

Hence:

$$\left[(\forall x \in X)\big(u \le v,\ \mu_{\widetilde{A}}(x,u) \le \mu_{\widetilde{B}}(x,v),\ \nu_{\widetilde{A}}(x,u) \ge \nu_{\widetilde{B}}(x,v)\big)\right].$$

Therefore,

$$\widetilde{A} \subset \widetilde{B}.$$

On the other hand, if:

$$\widetilde{A} \subset \widetilde{B}$$

then:

$$\left[(\forall x \in X)\big(u \le v,\ \mu_{\widetilde{A}}(x,u) \le \mu_{\widetilde{B}}(x,v),\ \nu_{\widetilde{A}}(x,u) \ge \nu_{\widetilde{B}}(x,v)\big)\right].$$

i.e.,

$$(\forall x \in X)\big(u \le v,\ \mu_{\widetilde{A}}(x,u) \le \mu_{\widetilde{B}}(x,v)\big),$$

and:

$$(\forall x \in X)\big(u \le v,\ \nu_{\widetilde{A}}(x,u) \ge \nu_{\widetilde{B}}(x,v)\big).$$

Hence, $\widetilde{A} \subset_{\Delta} \widetilde{B}$ and $\widetilde{A} \subset_{\nabla} \widetilde{B}$. □

### **8. Distance Measures of IT2FS**

Similar to the distance measures for various variants of fuzzy sets, including T2FS and IFS, in this section we present two distance measures for the proposed IT2FS. Let $\widetilde{A}$ and $\widetilde{B}$ be two IT2FSs. Then, we can define the following distances.

(i) Hamming distance:

$$d_H(\widetilde{A},\widetilde{B}) = \sum_{x \in X}\ \sum_{u,u' \in I_x}\ \sum_{j=1}^{L(x,u,u';\,\widetilde{A},\widetilde{B})}\left(\left|\mu^j_{\widetilde{A}}(x,u) - \mu^j_{\widetilde{B}}(x,u')\right| + \left|\nu^j_{\widetilde{A}}(x,u) - \nu^j_{\widetilde{B}}(x,u')\right|\right).$$

(ii) Euclidean distance:

$$d_E(\widetilde{A},\widetilde{B}) = \left[\sum_{x \in X}\ \sum_{u,u' \in I_x}\ \sum_{j=1}^{L(x,u,u';\,\widetilde{A},\widetilde{B})}\left(\left|\mu^j_{\widetilde{A}}(x,u) - \mu^j_{\widetilde{B}}(x,u')\right|^2 + \left|\nu^j_{\widetilde{A}}(x,u) - \nu^j_{\widetilde{B}}(x,u')\right|^2\right)\right]^{\frac{1}{2}}.$$

where $u$ and $u'$ are the primary membership functions of $\widetilde{A}$ and $\widetilde{B}$, respectively; $\mu_{\widetilde{A}}$ and $\nu_{\widetilde{A}}$ are the corresponding secondary membership and non-membership functions of $\widetilde{A}$, whereas $\mu_{\widetilde{B}}$ and $\nu_{\widetilde{B}}$ are the corresponding secondary membership and non-membership functions of $\widetilde{B}$. $L(x, u, u'; \widetilde{A}, \widetilde{B})$ is the length of the sequences of the secondary membership and non-membership functions of:

$$\widetilde{A} = \left(x, u, \left(\mu^1_{\widetilde{A}}(x,u), \mu^2_{\widetilde{A}}(x,u), \ldots, \mu^p_{\widetilde{A}}(x,u)\right), \left(\nu^1_{\widetilde{A}}(x,u), \nu^2_{\widetilde{A}}(x,u), \ldots, \nu^p_{\widetilde{A}}(x,u)\right)\right)$$

and:

$$\widetilde{B} = \left(x, u', \left(\mu^1_{\widetilde{B}}(x,u'), \mu^2_{\widetilde{B}}(x,u'), \ldots, \mu^p_{\widetilde{B}}(x,u')\right), \left(\nu^1_{\widetilde{B}}(x,u'), \nu^2_{\widetilde{B}}(x,u'), \ldots, \nu^p_{\widetilde{B}}(x,u')\right)\right)$$

respectively. For the sake of simplicity, in this study, the lengths of the sequences of the secondary membership and non-membership functions of $\widetilde{A}$ and $\widetilde{B}$ are considered equal. Let us now explain this idea with an example.

**Example 7.** *Let X be a non-empty universe, and let $\widetilde{A}$ and $\widetilde{B}$ be two IT2FSs over X, given as:*

$$\widetilde{A} = (0.7, 0.2)/0.8/x + (0.6, 0.1)/0.6/y + (0.5, 0.3)/0.9/z$$
$$\widetilde{B} = (0.6, 0.4)/0.6/x + (0.5, 0.3)/0.3/y + (0.4, 0.2)/0.4/z$$

Then:

$$d_H(\widetilde{A}, \widetilde{B}) = |0.7 - 0.6| + |0.2 - 0.4| + |0.6 - 0.5| + |0.1 - 0.3| + |0.5 - 0.4| + |0.3 - 0.2| = 0.80$$

and:

$$d_E(\widetilde{A}, \widetilde{B}) = \left[|0.7 - 0.6|^2 + |0.2 - 0.4|^2 + |0.6 - 0.5|^2 + |0.1 - 0.3|^2 + |0.5 - 0.4|^2 + |0.3 - 0.2|^2\right]^{\frac{1}{2}} = 0.35.$$
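The two distance computations of Example 7 can be reproduced with a short script. As in the example, each element is assumed to carry a single (secondary membership, secondary non-membership) pair, so the sequence length is one.

```python
# Hamming and Euclidean distances between two IT2FSs, following Example 7.
# Each element maps to a (secondary membership, secondary non-membership) pair.
A = {"x": (0.7, 0.2), "y": (0.6, 0.1), "z": (0.5, 0.3)}
B = {"x": (0.6, 0.4), "y": (0.5, 0.3), "z": (0.4, 0.2)}

def hamming(a, b):
    # Sum of absolute differences of the membership and non-membership grades.
    return sum(abs(ma - mb) + abs(na - nb)
               for (ma, na), (mb, nb) in zip(a.values(), b.values()))

def euclidean(a, b):
    # Square root of the sum of squared differences.
    return sum((ma - mb) ** 2 + (na - nb) ** 2
               for (ma, na), (mb, nb) in zip(a.values(), b.values())) ** 0.5

print(round(hamming(A, B), 2))    # 0.8
print(round(euclidean(A, B), 2))  # 0.35
```

Both values agree with the hand computation above: $d_H = 0.80$ and $d_E \approx 0.35$.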

### **9. An Example**

Most human reasoning involves the use of variables that are characteristically imprecise and usually represented under the paradigm of fuzzy set theory. The fuzzy set theory becomes the basis of the concept related to a linguistic variable, whose values are words instead of numbers. However, in some scenarios like decision making problems (e.g., medical diagnosis, marketing, sales, etc.), representation of a linguistic variable with respect to a membership function only is not satisfactory, where there is always a chance of the existence of a non-null complement. In this context, IT2FS can be more rational for representing both the secondary membership and non-membership grades of a primary membership grade of an element to a set. Here, we present an example of a medical diagnosis system.

Let *P* = {*P*1, *P*2, *P*3, *P*4} be a set of patients, let *D* = {*Viral Fever*, *Dengue*, *Typhoid*, *Throat disease*} be a set of diseases, and let *S* = {*Temperature*, *Cough*, *Throat pain*, *Headache*, *Chest pain*} be a set of symptoms. Let the disease symptoms and their corresponding intensity be represented as the primary and secondary membership functions, respectively. Accordingly, in Table 1, we represent each symptom of a disease with its primary membership function and its intensity with the secondary membership and non-membership functions. Similarly, Table 2 represents the symptoms of the patients. These symptoms and the diseases of the patients are linguistic terms that involve uncertainty. As an example, the *Temperature* of a patient can be mild, moderate, or high at various time intervals throughout a day. Based on these recorded values of the *Temperature*, the assessments of different doctors might differ as well. For instance, if the *Temperature* of a patient is recorded as 105.5 Fahrenheit (F), 103.2 F, and 100.7 F in the morning, afternoon, and night, respectively, then one doctor might conclude that the patient has *Viral Fever*, while another doctor might suggest that the patient has *Dengue*, or may prefer to record the *Temperature* of the patient for some subsequent days before reaching a conclusion. In this context, it becomes quite evident that a particular

symptom of a patient can fluctuate in a day, and, essentially for that patient, the opinion (analysis) of the experts (doctors) also varies. Consequently, if we consider *Temperature* as an IT2FS, then the *feverishness* of the patient can be the primary membership of *Temperature*. The *degree of feverishness* and *degree of healthiness* are considered as the secondary membership function and non-membership function, respectively. Therefore, to incorporate the uncertainties of the symptoms and the diseases rationally, the parameters, symptom, and disease of a patient are represented as IT2FSs. Tables 3 and 4 report the Hamming distance and the Euclidean distance for each patient from a corresponding disease. In these tables, the smallest distances are highlighted in bold.


**Table 1.** Symptoms vs. diseases.



**Table 3.** Hamming distance between patients and diseases.


**Table 4.** Euclidean distance between patients and diseases.


According to the principle of the minimum distance point, the lowest distance measures in Tables 3 and 4 infer the proper diagnosis of the disease of a patient. From Table 3, we observe that patients *P*1, *P*2, *P*3, and *P*<sup>4</sup> suffer from *Throat disease*, *Viral Fever*, *Dengue*, and *Typhoid*, respectively. However, from Table 4, it can be inferred that *P*<sup>1</sup> and *P*<sup>3</sup> are diagnosed with *Typhoid*, whereas, *P*<sup>2</sup> is suffering from *Viral Fever*, and *P*<sup>4</sup> is determined to have *Dengue*.
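The minimum-distance diagnosis rule described above reduces to an argmin over the rows of the distance table. The distance values below are hypothetical placeholders, not the entries of Tables 3 and 4.

```python
# Minimum-distance diagnosis: each patient is assigned the disease with the
# smallest distance. The distances here are illustrative placeholders.
distances = {
    "P1": {"Viral Fever": 0.9, "Dengue": 0.8, "Typhoid": 0.7, "Throat disease": 0.5},
    "P2": {"Viral Fever": 0.4, "Dengue": 0.6, "Typhoid": 0.9, "Throat disease": 0.8},
}

# Pick, for each patient, the disease with the minimum distance.
diagnosis = {p: min(d, key=d.get) for p, d in distances.items()}
print(diagnosis)  # {'P1': 'Throat disease', 'P2': 'Viral Fever'}
```

Because the Hamming and Euclidean distances can rank diseases differently, as Tables 3 and 4 show, running this rule on each table may produce different diagnoses.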

### **10. Conclusions**

In this paper, we proposed a new concept of the generalized intuitionistic type-2 fuzzy set, which hybridizes the concept of the intuitionistic fuzzy set and generalized type-2 fuzzy set. Subsequently, we defined various set-theoretic operations and operational laws of IT2FS. Besides, we proposed the necessity and possibility operators of IT2FS. Furthermore, two distance measures, namely the

Hamming distance and the Euclidean distance, for IT2FS were also investigated, and their applications were demonstrated with an example related to a medical diagnosis system. In the future, the concept of generalized IT2FS can be used to develop a decision support system in various domains, including medical diagnosis systems and intelligent transportation systems. Moreover, various aggregation operators can also be developed for generalized IT2FS. In addition, a detailed study of the topological properties of generalized IT2FS and its extension to the intuitionistic type-2 multi-fuzzy set and intuitionistic type-2 fuzzy soft set, respectively, can be considered as future research interests.

**Author Contributions:** The individual contribution and responsibilities of the authors were as follows: S.D., M.B.K., S.M., and B.R. performed the research study, collected, pre-processed, and analyzed the data and obtained the results, and worked on the development of the paper. S.K. and D.P. provided good advice throughout the research by giving suggestions on the methodology, the modelling uncertainty of the patient data, and refinement of the manuscript. All the authors have read and approved the final manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Hybrid Group MCDM Model to Select the Most Effective Alternative of the Second Runway of the Airport**

### **Zenonas Turskis <sup>1</sup>, Jurgita Antuchevičienė <sup>2,\*</sup>, Violeta Keršulienė <sup>3</sup> and Gintaras Gaidukas <sup>2</sup>**


Received: 2 May 2019; Accepted: 11 June 2019; Published: 13 June 2019

**Abstract:** Sustainable and efficient development is one of the most critical challenges facing modern society if it wants to save the world for future generations. Airports are an integral part of human activity. They need to be adapted to meet current and future sustainable needs and provide useful services to the public, taking into account prospects and requirements. Many performance criteria need to be assessed to address issues that often conflict with each other and have different units of measurement. The importance of the criteria to evaluate the effectiveness of alternatives varies. Besides, the implementation of such decisions has different—not precisely described in advance—effects on the interests of different groups in society. Some criteria are defined using different scales. Stakeholders could only evaluate the implemented project alternatives for efficiency throughout the project life cycle. It is essential to find alternative assessment models and adapt them to the challenges. The use of hybrid group multi-criteria decision-making models is one of the most appropriate ways to model such problems. This article presents a real application of the original model to choose the best second runway alternative of the airport.

**Keywords:** multiple criteria decision making; MCDM; transport; sustainable; airport; runway; hybrid; MULTIMOORA.

### **1. Introduction**

In recent decades, scientists have devoted much attention to researching transport and logistics issues. Research reflects the growing awareness of sustainability among stakeholders and helps develop appropriate administrative ideas and guidelines for sustainable development [1,2]. Quality infrastructure and an industrial base are two of the essential factors in the efficient operation of transport systems. A fuzzy set is a methodological concept of knowledge, used worldwide, that allows decision makers to explore medium-sized samples of discrete alternatives with a tool that works well in practice [3,4]. This method allows researchers to work with many sets of discrete options. Private investors rarely take into account several objective criteria and usually make decisions based on subjective factors. Such projects are local and meet the priorities of a particular country's transport policy. Developing cities face enormous pressure on transport infrastructure, which has a significant impact on economic activity in these cities [5,6]. Ližbetin [5] focuses on research into the terminal network.

Standard cost–benefit analysis does not indicate significant economic consequences. It does not answer the question of who will benefit and who will lose [7]. Wang et al. [8] presented the reasons for and a complex model of land use for transport purposes to assess the broader economic impacts of transport infrastructure projects. Semanjski and Gautama [9] stressed that cities rely firmly on efficient urban logistics to make them attractive for quality living and economic development. They proposed an extended multi-criteria decision-making method with the ability to integrate the views of different stakeholders and assess the costs associated with sustainability and the spatial context of the route. Moretti et al. [10] stated that transport infrastructure is a means of survival. There are three significant impacts of climate change on transport systems: infrastructure, transport operations and transport demand. A strategy developed or implemented to prevent the adverse effects of climate change on transport infrastructure has the following important objectives: avoiding losses, protecting structures, and controlling and reporting to consumers. This is useful for intelligent transport systems such as Automated Traffic Management, Passenger Information Systems, Early Warning Systems and Security Alerts.

The transport sector is a significant source of environmental noise and continues to make a considerable contribution (almost a quarter) to Europe's greenhouse gas emissions. The development of sustainable vehicles and infrastructure systems can help to move towards sustainable mobility, reduce oil consumption and reduce greenhouse gas emissions from transport. Bigerna et al. [11] defined several variable conditions using fuzzy, qualitative contrast analysis of data. The integration of economic, ecological and social factors at the same time in one business success assessment model is very rare [12,13]. Bajec and Tuljak-Suban [14] proposed a unified Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) model based on the assumption of unwanted performance criteria assessed on the scale of logistics service providers. Balbaa et al. [15] argued that increasing environmental pollution encourages researchers to find other clean, renewable energy [16] sources or to manage available sources optimally. Carlan et al. [17] showed that using different vehicle types allows the combination of transport tasks with worldwide travel and reduces operating costs by 25–35% and carbon emissions by 34–38%. Innovation contributes to the development of sustainable transport. It is essential for stakeholders to undertake modernization and innovation processes [18], since these will enable them to incorporate additional value into their market offer and increase their international competitiveness [19].

Nosal Hui et al. [20] demonstrated the possibility of using the Simple Additive Weighting (SAW) approach [21] to assess European and national policies in the field of transport, taking into account their impact on the use of innovation in the market. Lopez et al. [22] focused on exploring how technological innovations adopted by public transport companies can increase urban sustainability. They used a hierarchical analysis of importance (IPA), an AHP, to report the impact on ecology and social sustainability.
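As a sketch of how a SAW assessment works, the following minimal example normalizes benefit criteria by their column maxima and aggregates the weighted ratios. The alternatives, scores and weights are illustrative assumptions, not data from the studies cited above.

```python
# Simple Additive Weighting (SAW) sketch: normalize, weight, and sum.
# All criteria are assumed to be benefit criteria (higher is better).
weights = [0.5, 0.3, 0.2]                    # criterion weights, summing to 1
scores = {"A1": [7, 4, 8], "A2": [5, 9, 6]}  # illustrative criterion values

# Normalize each criterion by its column maximum, then aggregate additively.
maxima = [max(v[j] for v in scores.values()) for j in range(len(weights))]
saw = {a: sum(w * x / m for w, x, m in zip(weights, v, maxima))
       for a, v in scores.items()}

best = max(saw, key=saw.get)
print(best)  # A1
```

Here A1 wins because it dominates on the two most heavily weighted criteria, even though A2 scores far better on the second criterion.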

Cargo and passenger transport are essential criteria in economic development, encouraging the mobility of persons. Paddeu et al. [23] highlighted that traffic flows in urban areas do not only bring benefits but also have negative externalities in environmental, social and transport activities. Moreover, passenger and freight transport safety is one of the critical criteria [24]. This criterion is more important than travel time or cost.

Airports are one of the critical elements of regional development and have a substantial impact on it [25]. The effects of local development include features such as human capital and the use of high-tech industrial products. The higher the number of airport terminals, the faster the economic and prosperity growth in the region, contributing to population growth and welfare [25]. Airports are also one of the essential elements of the international transport system, which has become an integral part of many people's journeys throughout the world [26].

Vilnius Airport has reached the maximum design capacity of passenger flows, which affects not only traffic safety but also the further economic and social development of the country. Airport capacity depends on many criteria, including the location of runways, space availability, the ability to manage traffic flows at peak traffic volumes, and access to meteorological stations [27]. The best airport development alternatives must be identified to ensure the safest movement of all traffic participants within the airport's internal (person) area while developing the airport in harmony with the region's needs.

The main parameters of the runway are the wind direction and speed but also include other criteria: meteorological conditions, traffic demand, adjacent airport flows, and adverse weather conditions, instruments of the flight rules, departure restrictions, environmental standards, overhead runways, distance between tracks, airspace constraints, procedural constraints (noise reduction), route layout, traffic in terminal access, and other criteria [27].

Scientific literature shows that the assessment and implementation of airport development and traffic safety measures are multi-criteria, discrete optimization tasks, all of which are very inaccurate. Many modern economic phenomena are uncertain. Meanwhile, decision makers usually treat them as precisely defined. Fuzzy logic is the right tool for modelling inaccurate, ambiguous and vague events [28,29].

### **2. Materials and Methods**

Zadeh [30] offered a fuzzy set theory to define such problem-solving models. Many researchers use different Multi-Criteria Decision-Making (MCDM) methods and differently control target alternatives [31–33]. This fact justifies the use of several ways to determine the values of options and integrate them for the alternative's multi-attribute utility function. Besides, the weights of the criteria are usually defined using different methods selected from the set of methods available to assess the importance of rules. Hybrid approaches are best suited to solve similar problems [34,35].

Decision making in groups is finding the best choice among many possible alternatives. The main problem is how to combine multiple input data into the separate representative product [36,37].

If fuzzy data are available for the input and output variables of a system, then several rule blocks can be used to build the model [38]. A fuzzy relation generalizes the concept of classical relations, which makes it possible to partially associate elements of the universe of discourse [39]. Triangular fuzzy sets are the essential models of membership classes, as only three parameters fully define them. The semantics are apparent because the fuzzy sets are expressed based on knowledge of the concept spreads and their typical values. A linear change in membership level is the simplest membership model. The derivative of the triangular membership function can be used as a measure of sensitivity; in that case, the sensitivity is constant for each linear segment of the fuzzy set [40]. Changing fuzzy sets and adjusting their membership functions can change the semantics of fuzzy sets. Therefore, by allowing various intensities of association, we obtain a much more relevant mathematical model of the problem. Since Zadeh [30] established fuzzy sets, they have developed rapidly. However, the inadequacy of fuzzy sets stems from the fact that a fuzzy set only has a membership degree, and it cannot cope with some complex fuzzy information. Atanassov [41] later introduced the intuitionistic fuzzy set.

Nonetheless, in practical problems, the intuitionistic fuzzy set also has limitations; it cannot handle information that is between truth and falsity. Atanassov and Gargov extended the membership degree and non-membership degree to interval numbers and offered the interval-valued intuitionistic fuzzy set to explain information that is within the limits of truth and falsity [42]. Turksen [43] provided interval-valued fuzzy sets, which also used the membership degree and the non-membership degree to describe determinacy and indeterminacy. However, in certain circumstances, the membership degree and non-membership degree cannot express fuzzy information clearly. Therefore, Smarandache [44] introduced neutrosophic sets by increasing a hesitation degree to describe the difference between the membership degree and non-membership degree [45].

The objectives of sustainable development models are the involvement of innovative research tools, the analysis of urban development, the dynamics, ecology, and optimization of urban systems, carrying capacity and social needs [46–48]. Urban sustainability MCDM models could be a useful forecasting tool for evaluating trends in a developing city and for assisting decision makers to focus on finding, assessing and selecting best environmental development strategies [49,50].

Concerning problems, decision making aims at:


The use of land for urban development is a specific MCDM problem with fuzzy and changing conditions, factors, and objectives of sustainability [51–54]. The solution model reflects a set of criteria that are important for local-specific state policy priorities, situations and values, for measuring and guiding future development trends.

Irving Fisher (1867–1947) and other early economists developed the concept of the Highest and Best Use. Property land use developers should use a three-step analysis to determine maximum land use potential, including asset analysis, analysis of property rights and limitations, as well as market analysis (including future developments). The exact definition of the highest and best use varies but, usually, the use should be:


The primary purpose of all multi-criteria methods is to formalize, find, and capture trade-offs between the criteria of interest. Multi-criteria optimization aims at determining the best feasible solution according to requirements representing different effects. Multi-criteria decision processes can be distinguished along several aspects.

Franklin [55], de Condorcet [56], Edgeworth [57], and Pareto [58,59] were pioneers in multi-criteria decision making. Pareto (1848–1923), an Italian engineer, economist, and philosopher, presented the concept of Pareto efficiency and helped develop microeconomic science. Pareto efficiency, or Pareto optimization, is a state in which selected resources cannot be redistributed to favour a particular person or preference criterion without compromising at least one other person or preference criterion. Pareto efficiency is a minimal concept of optimality that does not necessarily lead to a socially desirable distribution of resources; it says nothing about equality or the welfare of society as a whole. In a multi-objective optimization (Pareto optimization) task, it states that it is not possible to improve the value of a single objective without altering the others. Because there is no essential information about permissible compromises among decision makers, these methods only provide a compromise curve (also known as the set of Pareto optimal solutions). The decision maker can then choose the desired point on the compromise curve, reflecting his or her approach to the trade-offs.
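The Pareto-efficiency idea described above can be illustrated with a minimal filter that keeps only non-dominated alternatives. The alternatives and scores are hypothetical, and all criteria are assumed to be maximized.

```python
# Pareto-optimal filter: keep alternatives not dominated by any other.
# Scores are illustrative (criterion1, criterion2) pairs, both maximized.
alternatives = {"A1": (3, 5), "A2": (4, 4), "A3": (2, 6), "A4": (3, 4)}

def dominates(p, q):
    """p dominates q if p is at least as good everywhere and better somewhere."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

pareto = [name for name, score in alternatives.items()
          if not any(dominates(other, score)
                     for o, other in alternatives.items() if o != name)]
print(pareto)  # ['A1', 'A2', 'A3']
```

A4 drops out because A1 matches it on the first criterion and beats it on the second; the remaining three alternatives form the compromise (Pareto) set, among which only a decision maker's preferences can choose.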

The first decision-making axioms were formulated by Ramsey in 1931 [60]. Later, von Neumann and Morgenstern introduced the Theory of Games and Economic Behavior [61]. Keeney, Raiffa [62], Dyer, and Sarin [63] developed the multi-attribute preference function theory. Algorithmic thinking and model building in MCDM provide a contemporary and one of the most attractive cross-disciplinary research approaches in science and Operations Research for explaining certain kinds of human behavior and decision making [64–66]. Zionts [67] focused on the applications of MCDM and started popularizing the acronym "MCDM". MCDM methods comprise two classes, namely continuous and discrete methods, based on the nature of the possible choices. Continuous Multi-Objective (Multi-Targeted) Decision-Making (MODM) approaches are methods to determine the best (optimal) value of a multi-objective utility function that can take infinitely many values in the decision space of the problem concerned. Discrete MCDM methods, or Multi-Attribute Decision-Making (MADM) methods, are decision-aiding (decision-support) processes that determine the best choice among a finite number of predetermined alternatives, given preferences and trade-offs among attributes (goals), or that estimate possible options and sort them by preference (Figure 1). A constructed multi-attribute preference function [68] is based on the elicited information. Discrete approaches are subdivided into weighting methods and ranking methods. Empirical MCDM techniques continue to be used, and their application to different problems has expanded in recent decades.

**Figure 1.** Multi-criteria decision-making wheel.

The AHP (Analytic Hierarchy Process) encourages the adoption of sustainability concepts in urban development plans, which are very different and, therefore, jeopardize the idea of local balance. The AHP facilitates stakeholder groups in determining the relevance of the factors to be assessed and in integrating their decisions. To this end, it uses well-grounded procedures based on the geometric mean and a relative weighting 9-point scale (one means equal importance and nine means extreme importance). Other types of scale related to the relative importance of the criteria, called nominal, ordinal or interval scales, may also be used. The results are similar to those obtained with Saaty's 9-point scale. The systematic measurement and comparison of the importance of pairs of criteria is the basis for methods such as AHP [69] or SWARA (Step-wise Weight Assessment Ratio Analysis) [70] for determining the relative importance of criteria. There are many different subjective approaches for this reason: AHP [69], ANP [71], the expert judgement method [72], SWARA [70,73], FARE (FActor RElationship) [74], etc. In 1965, Eckenrode [75] compared the efficiency of six methods (Ranking [76,77], Rating [75], Partial Paired Comparisons I [78], Partial Paired Comparisons II, Complete Paired Comparisons [79], and Successive Comparisons [80]) in collecting judgment data and found no significant differences among the techniques. The values calculated by all of the methods correlate.
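The geometric-mean weighting procedure mentioned above can be sketched as follows. The 3×3 pairwise comparison matrix on Saaty's 1–9 scale is an illustrative assumption, not data from the cited studies.

```python
# AHP weight derivation via row-wise geometric means of a pairwise
# comparison matrix (Saaty's 1-9 scale); the matrix below is illustrative.
import math

matrix = [
    [1,   3,   5],    # criterion 1 vs. criteria 1, 2, 3
    [1/3, 1,   3],    # criterion 2
    [1/5, 1/3, 1],    # criterion 3 (reciprocals mirror the upper triangle)
]

# Geometric mean of each row, then normalize so the weights sum to 1.
geo_means = [math.prod(row) ** (1 / len(row)) for row in matrix]
weights = [g / sum(geo_means) for g in geo_means]

print([round(w, 3) for w in weights])  # [0.637, 0.258, 0.105]
```

For group decisions, the individual judgments for each matrix cell are typically aggregated with a geometric mean as well before the weights are derived.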

### *Symmetry* **2019**, *11*, 792

Decision makers developed hybrid procedures that combine the strategies of the three classes of methods described. The following information could be the basis for different classification schemes:


Strategic Decisions and Strategic Decision Making

Decision makers need to assess many uncertain factors when making strategic decisions. Implemented strategic projects have long-term consequences. They require a lot of money and natural resources to implement. Strategic solutions need alignment with the facts that stakeholders will face in a future reality.

Technical Complexity

The two most significant and most demanding challenges in solving strategic decisions are the specific levels of high uncertainty and complexity of solutions. It is difficult to decide whether real life needs a new project. It may be difficult to assess whether a project is successful or not because the project has not been implemented before. This uncertainty means that there is a lack of detailed knowledge of the impact of the project's external environment on the functioning of potential strategies. Another source of risk arises when doubts arise as to what strategic goals or policy values should lead to a decision or choice of action.

The strategic implementation of the decisions will require many resources in various fields, such as marketing, finance, operations, research and development. The study of the interrelationship between these choices is a necessary part of such possessions.

Social Complexity

Strategic decision making is a social discourse and a complicated process through which leaders understand their strategic concerns and can act on them. A group of stakeholders discuss solutions in various written reports, speeches, letters to shareholders or informal conversations. Strategic seminars usually involve a group of executives representing critical organizational stakeholder groups that form the structure for the solution problem. Different interpretations of questions are the basis for understanding the problem and cognitive conflicts. Negotiations help to resolve disputes and create a collective mental foundation of the decision.

### **3. Problem-Solving Model**

Many of the multi-criteria methods of decision aiding can be used to solve the problem. The choice of decision makers among these models depends on the objectives of the task, the type of initial data describing these goals, the type of decision makers, the groups of people interested in the decision, the timeliness of collecting data, time-honored decision makers, skills, and qualifications. As competition augments and technological differentiation becomes a design that is more difficult, precisely what is referred to as industrial design offers an efficient way to solve free market problems [81]. Usually, many interrelated economic, technological, ecological and social factors influence the various areas of project selection.

Some of the MADM techniques suffer from significant shortcomings [82], which motivates investigators to invent and adopt new algorithms [83,84]. Zavadskas et al. [85] merged two different MCDM methods and introduced the original Weighted Aggregated Sum-Product Assessment (WASPAS) method, while an innovative combination of three different MCDM methods was introduced as the Multiple Objective Optimization on the basis of Ratio Analysis plus Full Multiplicative Form (MULTIMOORA) method [86,87]. Construction is slow to innovate, and choosing effective technological systems in a building is a complex, multi-criteria task. Zavadskas et al. [87] integrated six MCDM methods (ELECTRE III, ELECTRE IV, TOPSIS, VIKOR, SWARA, and MULTIMOORA) to assess the feasible options of construction technologies. Later, Turskis and Juodagalvienė [88] presented a novel approach to solving complicated construction engineering problems based on ten MCDM methods: Game Theory, AHP, SAW, MEW (Multiplicative Exponential Weighting), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), EDAS (Evaluation based on Distance from Average Solution), ARAS (Additive Ratio Assessment), the Full Multiplicative Form, the Laplace Rule, and the Bayes Rule. Hashemi et al. [33] presented a group decision model based on grey-intuitionistic fuzzy ELECTRE (ELimination Et Choix Traduisant la REalité; ELimination and Choice Expressing REality) and VIKOR (VlseKriterijumska Optimizacija I KOmpromisno Resenje).

In this study, the authors chose the fuzzy extension of the MULTIMOORA method (Figure 2). Brauers and Zavadskas [89] introduced the Multi-Objective Optimization by Ratio Analysis (MOORA) technique. Its structure is not complicated, which allows a final solution to be reached quickly. The use of MOORA is broad and covers various disciplines and industries [90,91]. In 2010, Brauers and Zavadskas extended the MOORA method with the purely multiplicative utility function [85], producing the robust MULTIMOORA method (MOORA plus the full multiplicative form). In 2011, Brauers et al. [92] extended the MULTIMOORA method with fuzzy numbers.

**Figure 2.** Graphical structure of approach to solve the challenge.

### *3.1. Research Object*

Vilnius Airport (IATA: VNO, ICAO: EYVI) is the international airport of Vilnius, the capital of Lithuania, located 7 km south of the city centre. It is the largest airport in Lithuania by passenger traffic (Table 1). The airport began operations in 1932 as Wilno–Porubanek. Today, Vilnius Airport has one runway and serves approximately 3.8 million passengers a year (Table 2). Lithuanian Airports, a state-owned enterprise under the Ministry of Transport and Communications, manages the airport.




The characteristics of the aircraft that regularly use the airport or are expected to in the future, together with the type of runway end, are the basis for selecting the appropriate standards for airport infrastructure development.

The main stages of the runway design are as follows:


The first alternative is an intersection of a runway with a planned land plot in the general plan of Vilnius (Figure 3). The second runway position is within the existing territory of the airport. The third alternative represents the location of a new runway, which requires a minimal additional area that is already provided for in the general plan of the city of Vilnius. The fourth choice offers the possibility of constructing a parallel runway, the implementation of which requires the most significant land plot.

**Figure 3.** Four investigated alternatives (1–4) for the second runway.

### *3.2. Setting the Criteria Values*

The weighting of the criteria is determined by the verbal substantiation principle, which replaces the verbal (qualitative) estimates with fuzzy numbers according to the values in Table 3. The assessment is based on the verbal ratings "bad", "satisfactory" and "good". A triangular membership function is used to describe the criteria. Each fuzzy element $\tilde{f}$ consists of three numeric values ($f_1$, $f_2$, $f_3$), as represented graphically in Figure 4.



**Figure 4.** Fuzzy element $\tilde{f}$, which has the membership function $\mu_{\tilde{f}}(x)$.

Its membership function $\mu_{\tilde{f}}(x)$ is as follows:

$$\mu_{\tilde{f}}(x) = \begin{cases} \dfrac{x - f_1}{f_2 - f_1}, & f_1 \le x \le f_2, \\[4pt] \dfrac{f_3 - x}{f_3 - f_2}, & f_2 \le x \le f_3. \end{cases} \tag{1}$$
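Equation (1) translates directly into code; the following Python sketch (the function name is ours) evaluates the membership degree of a value *x* in a triangular fuzzy number ($f_1$, $f_2$, $f_3$):

```python
def triangular_membership(x, f1, f2, f3):
    """Membership degree of x in the triangular fuzzy number (f1, f2, f3),
    following Equation (1); zero outside the support [f1, f3]."""
    if x < f1 or x > f3:
        return 0.0
    if x <= f2:
        # rising branch: (x - f1) / (f2 - f1)
        return (x - f1) / (f2 - f1)
    # falling branch: (f3 - x) / (f3 - f2)
    return (f3 - x) / (f3 - f2)
```

The membership degree peaks at 1 when *x* equals the modal value $f_2$ and falls linearly to 0 at the support endpoints.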

Table 3 gives the verbal ratings expressed as numbers.

The experts selected and evaluated the survey criteria according to the guidelines of the Vilnius International Airport Long-term Development Plan [93] (Table 4).


**Table 4.** Assessment of alternatives to the second runway.

### *3.3. Determination of Criteria Weights By Ranking Method*

Criteria weights indicate how many times the usefulness (significance) of one criterion is higher or lower than that of another criterion. In this work, the ranks that experts assigned to the criteria (the expert survey method) are the basis for criteria weighting. The experts participating in the survey (according to Kendall [94], there must be at least *r* ≥ 7 of them) rated the criteria according to their importance: one means least important and nine means extremely important (Table 5). The wider the rating scale, the more finely the criteria can be differentiated, but this reduces the coherence of the expert opinions. The precision of the multi-criteria task depends on the applied range of the selected points [95].

**Table 5.** Expert judgement and weighting. 


An expert survey yields only estimates. The average rank rating $\bar{t}_i$ is determined by the formula:

$$\bar{t}_i = \frac{\sum_{k=1}^{r} t_{ik}}{r}, \tag{2}$$

where $t_{ik}$ is the rating that expert *k* attributed to criterion *i*, and *r* is the number of experts.

The significance $q_i$ of criterion *i* is expressed by the following formula:

$$q_i = \frac{\bar{t}_i}{\sum_{i=1}^{n} \bar{t}_i}. \tag{3}$$
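Equations (2) and (3) amount to averaging the experts' ratings per criterion and normalizing the averages so they sum to one; a minimal Python sketch (the function name is ours):

```python
def criteria_weights(ratings):
    """Criteria significances per Equations (2)-(3).
    ratings[k][i] is the importance score expert k assigned to criterion i."""
    r = len(ratings)              # number of experts
    n = len(ratings[0])           # number of criteria
    # Equation (2): mean rating of each criterion over all experts
    means = [sum(ratings[k][i] for k in range(r)) / r for i in range(n)]
    # Equation (3): normalize so the significances sum to one
    total = sum(means)
    return [t / total for t in means]
```

For example, two experts rating two criteria as (9, 3) and (7, 1) yield the weights (0.8, 0.2).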

The coherence of the expert judgments is determined using Kendall's concordance coefficient [94], calculated according to the following formulas:

$$W = \frac{12S}{r^2(n^3 - n) - r\sum_{k=1}^{r} T_k}, \tag{4}$$

$$S = \sum_{i=1}^{n} \left( \sum_{k=1}^{r} t_{ik} - \frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{r} t_{ik} \right)^2, \tag{5}$$

$$T_k = \sum_{l=1}^{H_k} \left( h_l^3 - h_l \right), \tag{6}$$

where *S* is the sum of squared deviations of the criteria rank sums from their mean; $T_k$ is the tied-rank correction for expert *k*; $H_k$ is the number of groups of tied ranks given by expert *k*; $h_l$ is the number of ranks in the *l*-th group of tied ranks; *r* is the number of experts; and *n* is the number of evaluated criteria.

*Symmetry* **2019**, *11*, 792

The study states that there are no tied ranks, so the concordance coefficient is determined by the following formula:

$$W = \frac{12S}{r^2(n^3 - n)}. \tag{7}$$

The set of values of the coefficient *W* is [0; 1], i.e., 0 ≤ *W* ≤ 1. The closer *W* is to 0, the lower the compatibility of the expert opinions; the closer *W* is to 1, the more harmonious they are.

The significance of the concordance coefficient is determined by the formula:

$$\chi^2 = \frac{12S}{r \, n \, (n+1) - \frac{1}{n-1}\sum_{k=1}^{r} T_k}. \tag{8}$$

If the value of $\chi^2$ calculated by the last formula is greater than the tabulated value $\chi^2_{tabl}$, the compatibility of the expert opinions is acceptable. If $\chi^2 < \chi^2_{tabl}$, the expert opinions are not harmonized.
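For the untied case, Equations (5) and (7) can be checked with a short Python sketch; the statistic below uses the equivalent form $\chi^2 = r(n-1)W$, which for untied ranks coincides with Equation (8):

```python
def kendall_w(ranks):
    """Kendall's concordance coefficient W for untied ranks, Equations (5) and (7).
    ranks[k][i] is the rank expert k assigned to criterion i."""
    r = len(ranks)                # number of experts
    n = len(ranks[0])             # number of criteria
    rank_sums = [sum(ranks[k][i] for k in range(r)) for i in range(n)]
    mean = sum(rank_sums) / n
    S = sum((s - mean) ** 2 for s in rank_sums)        # Equation (5)
    W = 12 * S / (r ** 2 * (n ** 3 - n))               # Equation (7)
    chi2 = r * (n - 1) * W                             # significance statistic
    return W, chi2
```

Two experts ranking three criteria identically give *W* = 1 (perfect agreement); opposite rankings give *W* = 0.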

### *3.4. Task-Solving by the MULTIMOORA-F Method*

Brauers et al. [92] suggested the extension of MULTIMOORA with fuzzy criteria values. The method consists of three parts: (a) the Ratio System of MOORA, which normalizes the data by comparing each alternative's value of an objective to all values of that objective; (b) the Reference Point approach of MOORA, which builds on the ratio system; and (c) the Full Multiplicative Form, based on a purely multiplicative utility function. The third part of the MULTIMOORA method helps to avoid subjectivity, as it is not necessary to determine the significance coefficients (weights) of the criteria there. Compared with other multi-objective methods, it is the most sensitive to the values of the descriptive measures.

The first step of the MULTIMOORA-F method is to construct the decision matrix *X* with elements $\tilde{x}_{ij}$, where *j* is the index of the criterion and *i* is the index of the alternative (*i* = 1, 2, ..., *m* and *j* = 1, 2, ..., *n*); *m* is the number of considered alternatives and *n* is the number of judgement criteria.

### 3.4.1. Calculation of Relative Sizes, the Fuzzy Ratio System of the MOORA Method

The output data $\tilde{x}_{ij}$, which have different and uncertain units of measure, are normalized to non-dimensional values. Normalization is based on comparing the values of the fuzzy numbers.

$$\widetilde{x}^*_{ij} = \left( x^*_{ij1};\, x^*_{ij2};\, x^*_{ij3} \right) = \left( \frac{x_{ij1}}{\sqrt{\sum_{i=1}^{m} x_{ij3}^2}};\; \frac{x_{ij2}}{\sqrt{\sum_{i=1}^{m} x_{ij2}^2}};\; \frac{x_{ij3}}{\sqrt{\sum_{i=1}^{m} x_{ij1}^2}} \right). \tag{9}$$

After normalization, the sum of coefficients $\widetilde{y}_i$ is calculated for each *i*-th alternative by adding or subtracting the normalized fuzzy values.

$$\widetilde{y}_i = \sum_{j=1}^{g} \widetilde{x}^*_{ij} w_j \ominus \sum_{j=g+1}^{n} \widetilde{x}^*_{ij} w_j, \tag{10}$$

where *g* is the number of maximized criteria (*j* = 1, 2, ..., *g*), whose values are added; the values of the remaining minimized criteria (*j* = *g* + 1, *g* + 2, ..., *n*) are subtracted. Then every fuzzy number $\widetilde{y}^*_i$ is converted to the best non-fuzzy performance value $y^*_i = \frac{1}{3}\left( y^*_{i1} + y^*_{i2} + y^*_{i3} \right)$.
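The ratio system of Equations (9)–(10) can be sketched as follows in Python. The sketch assumes each matrix entry is a triangular fuzzy triple and that the first *g* criteria are the maximized ones; function names are ours:

```python
import math

def fuzzy_normalize(X):
    """Vector normalization of a fuzzy decision matrix per Equation (9).
    X[i][j] = (x1, x2, x3) is the triangular fuzzy value of alternative i
    on criterion j."""
    m = len(X)
    result = []
    for i in range(m):
        row = []
        for j in range(len(X[0])):
            d_low = math.sqrt(sum(X[k][j][2] ** 2 for k in range(m)))
            d_mid = math.sqrt(sum(X[k][j][1] ** 2 for k in range(m)))
            d_up = math.sqrt(sum(X[k][j][0] ** 2 for k in range(m)))
            x1, x2, x3 = X[i][j]
            row.append((x1 / d_low, x2 / d_mid, x3 / d_up))
        result.append(row)
    return result

def ratio_system_scores(Xn, w, g):
    """Fuzzy ratio system, Equation (10): criteria 0..g-1 are maximized,
    the rest minimized; the score is defuzzified as the mean of the triple."""
    scores = []
    for row in Xn:
        y = [0.0, 0.0, 0.0]
        for j, (x1, x2, x3) in enumerate(row):
            if j < g:
                y = [y[0] + w[j] * x1, y[1] + w[j] * x2, y[2] + w[j] * x3]
            else:
                # fuzzy subtraction reverses the bounds: a - b = (a1-b3, a2-b2, a3-b1)
                y = [y[0] - w[j] * x3, y[1] - w[j] * x2, y[2] - w[j] * x1]
        scores.append(sum(y) / 3)
    return scores
```

With two alternatives valued 3 and 4 on a single maximized criterion, normalization divides by $\sqrt{3^2+4^2}=5$, giving scores 0.6 and 0.8, so the second alternative ranks first.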

### 3.4.2. The Fuzzy Reference Point Part of the MOORA Method

The ratio system is the basis for the reference point approach. The shortest distance to the reference point is found using the coefficients calculated by Equation (9). In practice, the approach is very similar to the TOPSIS method in Manhattan (city-block) space [53,82] and has a weak similarity to the EDAS method [4]. The *j*-th coordinate of the reference point is the minimum $\widetilde{x}^-_j$ or maximum $\widetilde{x}^+_j$ of the normalized values $\widetilde{x}^*_{ij}$ for criterion *j*, where

$$\begin{aligned} \widetilde{x}^+_j &= \left( \max_i x^*_{ij1},\ \max_i x^*_{ij2},\ \max_i x^*_{ij3} \right), \quad j \le g; \\ \widetilde{x}^-_j &= \left( \min_i x^*_{ij1},\ \min_i x^*_{ij2},\ \min_i x^*_{ij3} \right), \quad j > g. \end{aligned} \tag{11}$$

Then each coefficient of the normalized matrix is recalculated; the deviation from the reference point gives the final rank, and the rank of an alternative is determined based on the Tchebycheff metric and the Min–Max method:

$$\min_i \left( \max_j d\left( \widetilde{x}_j,\, \widetilde{x}^*_{ij} \right) \right). \tag{12}$$
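The reference point part can be sketched under the same data layout as above (normalized fuzzy triples, the first *g* criteria maximized); the function name is ours:

```python
def reference_point_deviations(Xn, g):
    """Fuzzy reference point approach, Equations (11)-(12).
    Returns the Min-Max deviation of each alternative from the reference
    point; smaller deviations mean better-ranked alternatives."""
    m, n = len(Xn), len(Xn[0])
    ref = []
    for j in range(n):
        pick = max if j < g else min           # Equation (11)
        ref.append(tuple(pick(Xn[i][j][k] for i in range(m)) for k in range(3)))
    deviations = []
    for i in range(m):
        d = max(
            abs(ref[j][k] - Xn[i][j][k])
            for j in range(n) for k in range(3)
        )                                      # Tchebycheff metric, Equation (12)
        deviations.append(d)
    return deviations
```

For the two normalized alternatives 0.6 and 0.8 on one maximized criterion, the reference point is 0.8, so the deviations are 0.2 and 0.0 and the second alternative again ranks first.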

### 3.4.3. The Full Multiplicative Form Part of the MOORA Method

The utility function of each *i*-th alternative may include both minimized and maximized criteria and is expressed by the formula:

$$\widetilde{U}'_i = \widetilde{A}_i \oslash \widetilde{B}_i, \tag{13}$$

where $\widetilde{A}_i$ and $\widetilde{B}_i$ are, respectively, the products over the maximized and minimized criteria:

$$\widetilde{A}_i = \left( A_{i1}, A_{i2}, A_{i3} \right) = \prod_{j=1}^{g} \widetilde{x}^*_{ij} w_j, \tag{14}$$

$$\widetilde{B}_i = \left( B_{i1}, B_{i2}, B_{i3} \right) = \prod_{j=g+1}^{n} \widetilde{x}^*_{ij} w_j, \tag{15}$$

where *j* = 1, 2, ..., *g* indexes the maximized criteria, *j* = *g* + 1, *g* + 2, ..., *n* indexes the minimized criteria, *g* is the number of maximized criteria, and *n* is the total number of criteria.
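A simplified sketch of the full multiplicative form: following Equations (13)–(15), the utility is the product over the maximized criteria divided by the product over the minimized ones. For brevity, the fuzzy triples here are first defuzzified by their mean, so crisp multiplication and division replace the fuzzy operations (a simplification we introduce; the function name is ours):

```python
def full_multiplicative_scores(Xn, w, g):
    """Full multiplicative form, Equations (13)-(15), on mean-defuzzified
    values: U_i = A_i / B_i, with A_i the product over maximized criteria
    (j < g) and B_i the product over minimized criteria (j >= g)."""
    scores = []
    for row in Xn:
        crisp = [sum(t) / 3 for t in row]     # mean defuzzification
        A, B = 1.0, 1.0
        for j, x in enumerate(crisp):
            if j < g:
                A *= w[j] * x
            else:
                B *= w[j] * x
        scores.append(A / B)                  # higher is better
    return scores
```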

### **4. Practical Problem Solving Using the MULTIMOORA-F Method**

A team of experts was formed to solve the problem. The experts chosen were individuals with experience in solving similar problems and a master's degree in construction engineering.

In the first step of the task, a standard five-step Delphi methodology was applied to identify the most critical evaluation criteria; the efficiency criteria presented in Table 4 were selected. In the second stage, the same experts ranked the criteria according to their importance, which yielded the criteria rankings and weights. The experts identified the values of the efficiency criteria describing the alternatives presented in Table 5 (the initial task solution matrix) using the Delphi methodology. Then, the decision makers carried out the task according to the methodology given above (Sections 3.4.1–3.4.3). The initial matrix (Table 6) was normalized (Table 7), and then the alternatives were ranked according to the full multiplicative form (Table 8), the ratio system with the Min–Max method (Table 9), and the deviation from the reference point (Table 10). The theory of dominance determines the final ranking of the alternatives (Table 11).


**Table 6.** Initial decision matrix.

**Table 7.** Normalized decision matrix.


**Table 8.** Results of the task solution using the complete form of the product.






RP (reference point) values for the criteria *x*1–*x*9: (0.10; 0.10; 0.10), (0.06; 0.06; 0.06), (0.16; 0.16; 0.16), (0.01; 0.01; 0.01), (0.10; 0.10; 0.10), (0.12; 0.12; 0.12), (0.01; 0.01; 0.01), (0.00; 0.00; 0.00), (0.20; 0.20; 0.20).

Decision makers consider all criteria of the MADM task as independent of each other, and the people making the decisions (experts) are essential in determining the set of criteria, the values of qualitative measures and the importance of the specific goals of the stakeholders. The development of composite indicators for integrated performance in societies typically relies on a priori assumptions rather than model-free, data-driven evidence [96]. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, considering only individual correlations instead. The analysis of sensitivity and uncertainty is one of the complex problems in the application of MADM models [97]. The majority of discrete optimization (MADM) parameters of a cumulative distribution function are unknown and, in most cases, the decision maker cannot define them. With the advent of advanced estimation techniques, mutual information has become a viable means of characterizing input–output interactions in complex problems. Lüdtke et al. [98] recommend entropy-based sensitivity analysis; their work lays the theoretical foundations for an information-theoretic sensitivity analysis that assigns credit or influence to input variables in terms of their overall contribution to a system's output entropy. The sensitivity analysis in this paper is based on reviewing the difference between a change of the input data and the results of the multi-attribute utility function [99].

The fuzzy entropy-based sensitivity analysis shows that criterion *x*2 has the most significant impact on the final solution in this particular matrix (approximately 14%). Criteria *x*4 and *x*7 rank second and third in their influence on the final decision (approximately 12.5% each, i.e., approximately equally influential). The difference between the most important criterion *x*2 and the least essential criterion *x*5 is approximately 4%. Moreover, the relative impact of the most important criterion is approximately thirty-five per cent higher than that of the least influential criterion.

### **5. Conclusions**

The sustainability of urban development is a decision-making process of planning and implementing decisions under a variety of influencing factors in uncertain and dynamically changing conditions. Multi-criteria methods define sustainable development goals, which affect the interests of various groups of society in different ways. Knowledge-based agents represent specific urban development situations and determine the relative importance of sustainability issues to the community. Exact numbers cannot describe these purposes; such tasks require fuzzy or intermediate (grey) problem-solving models. Priorities set during the negotiation process cover many sustainability criteria and help define sources of conflict and build a mutually acceptable compromise. Many city sustainability models include qualified judgment. The involvement of a community or stakeholder group is a crucial procedural function of the decision-making process, which is considered desirable in models of smart growth or New Urbanism. Interested parties define critical criteria and possible alternatives based on integrated estimates of density, accessibility, availability of natural resources of the place, and land-use mix. Public attitudes towards regularly defined public policy priorities, together with the weights of the sustainability criteria, reflect the importance of achieving stakeholders' goals in a specific situation and form the full basis of decision making.

Scientists concluded that the ranks of the alternatives change when using various MCDM methods to determine them.

Airports are an integral part of human activity, and their sustainable development is a critical feature of the modern world. A criteria system has been developed to solve the problem by selecting the following efficiency criteria: *x*1—Use with dominant winds, significance *w*1 = 0.12; *x*2—Airspace compatibility, *w*2 = 0.08; *x*3—Increase in flight field capacity, *w*3 = 0.17; *x*4—Investment need and new infrastructure, *w*4 = 0.12; *x*5—Effects on the environment, *w*5 = 0.10; *x*6—Noise reduction, *w*6 = 0.12; *x*7—Land demand, *w*7 = 0.18; *x*8—Interruptions to construction works, *w*8 = 0.03; and *x*9—Cost-effectiveness, *w*9 = 0.08.

The MULTIMOORA method is one of the most versatile and most widely applied multi-criteria decision-making methods, and its application has proven successful in many decision-making tasks. The MULTIMOORA method integrates three different ways to rank the alternatives and is therefore more reliable than many other approaches.

The results of the calculation indicate a non-uniform distribution of the priorities of the alternatives. According to the ratio system, the options rank as *a*4 ≻ *a*1 ≻ *a*2 ≻ *a*3; based on the reference point theory, as *a*1 ≻ *a*4 ≻ *a*3 ≻ *a*2; and according to the full multiplicative form, as *a*1 ≻ *a*4 ≻ *a*3 ≻ *a*2. The theory of dominance summarizes these results and gives the final priority order of the alternatives, which is as follows (from the most effective to the least effective): *a*4 ≻ *a*1 ≻ *a*2 ≻ *a*3.

Stakeholders selected and successfully implemented the fourth alternative.

The given methodology can be used by selected and competent experts to perform various individual MCDM optimization tasks to find essential objectives, critical criteria, and best-predicted alternatives.

**Author Contributions:** Z.T. analyzed the literature, developed the research hypothesis, designed the research framework, calculated and wrote the paper. J.A. and V.K. reviewed and edited the paper. G.G. collected the data, analyzed the literature and the data.

**Funding:** This research received no external funding.

**Acknowledgments:** We appreciate the valuable comments of reviewers and editors.

**Conflicts of Interest:** The authors declare no conflicts of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Development of a Novel Freight Railcar Load Planning and Monitoring System**

**Snežana Mladenović 1, Stefan Zdravković 1, Slavko Vesković 1,\*, Slađana Janković 1, Života Đorđević <sup>2</sup> and Nataša Đalić <sup>3</sup>**


Received: 17 April 2019; Accepted: 24 May 2019; Published: 4 June 2019

**Abstract:** Rail transport has unmistakable sustainable (environmental and economic) advantages in goods transportation on a massive scale. Goods loading constitutes an important segment of goods transportation by rail. Incorrect loading can be a serious threat to traffic safety as well as a generator of unforeseen expenses related to goods, railway infrastructure and vehicles. At the beginning, the paper identifies the presence of incorrect loading into freight railcars. The analysis of the available loading software has led to the conclusion that no software offers adequate support to the planning and monitoring of the loading of goods into a covered railcar using a forklift truck. For this reason, the main aim of the research is to formulate a mathematical model that includes real-world constraints, as well as the design and implementation of an original user-friendly load planning and monitoring software system. Experimental evaluations of the implemented software have been made based on single and multiple railcar pallet loading problems, considering the following three optimization criteria: maximization of wagon load weight, maximization of wagon volume utilization and maximization of weighted profit. By testing the optimization and visualization features of the software and analyzing the results, it has been concluded that it can offer full support to real load planning and monitoring problems.

**Keywords:** multi-criteria modeling in railway transport; sustainable railway transport; Cutting and Packing problem; container loading; load planning and monitoring software

### **1. Introduction**

With the raising of environmental awareness, rail transport is becoming increasingly important, trying to establish itself as the sustainable transportation system of the future. The global rail sector is making a great effort to keep its environmental advantage by improving energy efficiency and reducing carbon dioxide emissions. For example, 28 European member states of the International Union of Railways (French: Union Internationale des Chemins de fer, UIC) have collectively assumed the obligation to lower their carbon dioxide emissions per passenger kilometer and gross ton-kilometer by 50% by 2030 [1]. Due to its numerous and unmistakable environmental advantages compared to other modes of transport and in particular road transport, it is the most important transportation system in the mass flow of goods and passengers. Rail transport is the most cost-effective system of public transportation of passengers in densely-populated regions and zones of big cities, as well as goods when it comes to long-haul transportation and intermodal transportation systems. Taking into account the evident advantages of rail transportation in terms of sustainability, the European Commission has dubbed it as the backbone of the EU transport system [2]. This, naturally, makes it necessary to ensure

its permanent "intelligentization" in all spheres [3], as well as to reform traditional railway companies and establish optimal models for their organization and functioning [4].

Goods transportation is an important railway service. Nevertheless, in many economies around the world, road transport remains the predominant mode of freight transport [5]. Furthermore, EU-28 inland freight transported by road (76.4%) was more than four times as high as the share transported by rail (17.4%) in 2016 [6]. The global motivation of all researchers in the field of freight rail transport, such as ours, is to change this unfavorable trend. Each train operating company aims to ensure high-quality goods transportation, making the highest possible profit from it and maintaining the highest possible level of safety. Each business entity (e.g., carrier, payer of transportation services, goods/railcar owner, infrastructure manager) involved in goods transportation by rail has certain duties, responsibilities and rights, defined by law. In the Republic of Serbia, goods transportation by rail is defined by the Law on Contracts in Rail Transportation [7].

Goods loading constitutes an important segment of goods transportation by rail. Incorrect goods loading and irregular securing can be a serious threat to traffic safety. A freight railcar's carrying capacity must be taken into account, as well as the category and carrying capacity of the railway line, the permissible axle load, the permissible load per metre, etc. To maintain the initially achieved balance, cargo stability must also be taken into account, i.e., there can be no significant movement of cargo in a freight railcar during the journey. In addition to jeopardizing safety, incorrect loading may result in cargo loss, damage or delivery delays, or in damage to railway infrastructure and vehicles. The assessment of damage and the establishment of responsibility for it can sometimes end up in court. Proper loading of railway wagons is one of the ways to avoid such unwanted and extraordinary events, so that users gain confidence in the efficiency and reliability of freight rail transport, which in the long run can increase its share in the total volume of freight traffic. In the past, Serbia faced a problem with the large number of freight wagons excluded from service due to incorrect loading and securing of goods, as well as the significant costs incurred on this basis [8]. Therefore, the authors' clear motive is to observe the present situation and, where necessary, suggest innovative solutions. Of course, the main goal is for the carrier to achieve as much profit as possible, and for all other business entities involved in transportation to minimize their costs.
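A few of the loading rules listed above (carrying capacity, load per metre, axle load) can be expressed as simple feasibility checks. The sketch below uses hypothetical default limit values for illustration only, not actual UIC or Serbian regulations:

```python
def loading_checks(total_weight_t, loading_length_m,
                   carrying_capacity_t, axle_loads_t,
                   max_axle_load_t=22.5, max_load_per_m_t=8.0):
    """Illustrative feasibility checks for a loaded freight railcar.
    Weights in tonnes, lengths in metres; the default limits are
    example values only."""
    problems = []
    if total_weight_t > carrying_capacity_t:
        problems.append("carrying capacity exceeded")
    if total_weight_t / loading_length_m > max_load_per_m_t:
        problems.append("permissible load per metre exceeded")
    if any(a > max_axle_load_t for a in axle_loads_t):
        problems.append("permissible axle load exceeded")
    return problems                      # empty list means the load passes
```

In a real system such checks would be parameterized per railcar series and per railway line category, since the permissible limits depend on both.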

In the review [9], the authors conclude that cargo planning and loading represent the most essential factors in goods transportation by railways, airlines, trucks and buses. Arranging cargo properly in the available space is a very difficult task, and it is therefore necessary to design optimization algorithms. In the optimization context, the everyday real transportation problem of loading goods into freight railcars can be treated as a "Cutting and Packing" (C&P) problem of operational research. Practice shows that many operational research algorithms often have limited real-life applications: some methods have never been implemented, whereas some implemented systems are used rarely or not at all. Consequently, the main contribution of the research described in this paper is the development of an original user-friendly software system for freight railcar load planning and monitoring, called 'RailLoad'. It was adapted to the specific needs of load planning and monitoring in Serbia, with real-world constraints; however, the availability of the mathematical model and source code gives it greater significance, because the objectives, constraints, graphical user interface, language for communication with the user, etc. can easily be changed in accordance with new requirements.

The rest of the paper is arranged as follows: Section 2 describes the basic structure and types of C&P problems, with special reference to the three-dimensional container loading problem (3D CLP). Section 3 gives a brief overview of selected papers addressing the CLP with something in common with our research, either in terms of the approach or in terms of goods transportation, which is a field of application of interest to us. Section 4 is trying to find answers to the following questions: Is Serbia's rail transportation faced with the problem of incorrect loading into freight railcars? Are there any software systems which support loading? Are there any problems in using such software systems? Section 5 describes loading rules for freight railcars, formalizing them by means of a mathematical

model. Section 6 presents a selected number of test examples, solved using the originally developed software for the optimization and visualization of the loading plan and its user-friendly monitoring.

### **2. Cargo Loading as the C&P Problem**

People solve packing problems in different situations on a daily basis. Relying on intuition and a sense of space, a person can solve packing problems when putting groceries in the fridge or placing luggage in a car trunk. However, in transportation, logistics or industrial settings, in which a multitude of different packing problems can arise, with a large number of constraints to which there are often numerous exceptions, and with multiple objectives, even a highly experienced person can prove to be an insufficiently efficient "solver". Consequently, ever since it was formulated in the 1950s [10], the C&P problem has not declined in importance [11], either in the academic community or in real environments, because it is a powerful instrument for boosting profitability.

### *2.1. C&P Problem Fundamentals*

The basic structure of the C&P problem is simple: there is a set of large items and a set of small items, defined in one, two, three or more dimensions [12]. The task is to select some or all small items, to group them into one or more subsets and then to attach each subset to a large item in such a way as to meet all the set constraints and optimize the selected objective function. The problem solution may involve using all or only some small items, i.e., all or only some large items. The literature uses different terms for the C&P problem such as the container/vehicle/cargo/pallet loading problem, the knapsack/rucksack problem, the nesting problem, etc.

Dyckhoff was the first to systematize the diverse terminology, subtypes and potential forms of application in his comprehensive survey [13] nearly 30 years ago. In his C&P problem typology, the packing or loading of vehicles/railcars/pallets/containers/bins etc., the very subject of our paper, appears as a separate subclass.

The types of C&P problems are given in an easy-to-survey manner in [14] as follows:


As we can see, some C&P problem types are closely related and often overlap when it comes to real planning.

### *2.2. Container Loading Problem*

Two basic feasibility conditions must be met in the CLP: all small objects (boxes, items) must be completely inside a container and there can be no overlapping of small objects. The review paper [15] differentiates between the following CLP types:


The review paper [16], along with the six aforementioned CLP types introduces another type:

• Multiple heterogeneous knapsack problem (MHKP)—containers are weakly or strongly heterogeneous, and items are strongly heterogeneous.
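Across all these CLP variants, the two basic feasibility conditions stated at the start of this section (containment and non-overlap of axis-aligned boxes) reduce to interval tests per axis; a small Python sketch (representation and names are ours):

```python
def fits_inside(box, container_dims):
    """box = (x, y, z, w, d, h): placement corner plus dimensions.
    container_dims = (W, D, H). True if the box lies fully inside."""
    x, y, z, w, d, h = box
    W, D, H = container_dims
    return x >= 0 and y >= 0 and z >= 0 and x + w <= W and y + d <= D and z + h <= H

def boxes_overlap(a, b):
    """Two axis-aligned boxes overlap iff their projections intersect
    on every one of the three axes; touching faces do not count."""
    return all(a[k] < b[k] + b[k + 3] and b[k] < a[k] + a[k + 3] for k in range(3))
```

A placement is feasible when every box passes `fits_inside` and every pair of boxes fails `boxes_overlap`.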

Both surveys [15,16] recognize similar classes of constraints:

Container-related constraints:


Item-related constraints:


Load-related constraints:

• Stability constraints—unstable cargo can cause cargo/container damage, injure cargo-handling personnel and, if the container is a vehicle, jeopardize traffic safety. Vertical stability prevents items from falling down on the floor of the container or on other items, whereas horizontal stability ensures that there is no significant movement of items while the container is on the move;

• Complexity constraints—special constraints taking into account whether the loading of a container is manual or automated, as well as the automated loading method.

Cargo-related constraints:


The paper [16] is a complementary review to [15] and it focuses on the design and implementation of solution methodologies for solving CLPs. The authors also provide an experimental comparison of different algorithms on benchmark data sets to identify the state of the art in solution methods in the area. The extensive review [16], which includes 113 papers, discusses specific aspects of the solution approaches, such as placement heuristics (how a layout is constructed) and improvement heuristics (how to search for better solutions).

Naturally, real CLPs usually include some special constraints as well.

### **3. Literature Review**

There are a huge number of papers studying different CLP types, with different constraints and different objectives. According to the review paper [15], 163 papers that, in the broadest sense, research CLPs were published in the 1980–2011 period. Naturally, the last few years have also seen numerous papers trying to find solutions in the field by using different techniques. In the dissertation [17], relying on data obtained from Google Scholar, Scopus and Web of Science, one can see an upward trend in the number of these papers in the 2011–2016 period. Generally, the papers can be grouped in different ways, e.g., according to the CLP type they address (IIPP, SLOPP, MILOPP, MHLOPP, SKP or MIKP), as outlined in Section 2.2. Another classification could be based on whether exact, approximate, metaheuristic, heuristic or combined approaches are used to find a solution. At this point, we shall just give a brief overview of papers that have something in common with our research, either in terms of the approach or in terms of the field of application (goods transportation).

A number of papers have developed 0–1 linear programming (LP) models for CLPs. In the paper [18], the authors present a 0–1 LP model which includes orientation constraints, stability constraints (vertical and horizontal) and stacking constraints (load-bearing). The problems considered are of the IIPP, SLOPP and SKP types. Numerical experiments have been performed with a standard problem solver (GAMS/CPLEX) and the proposed models validated. The authors conclude that the proposed models can be useful in motivating future research exploring decomposition methods, relaxation methods and heuristics, among others, in order to solve more realistic container loading problems. The paper [19] discusses a single CLP, which aims to pack a given set of unequal-size rectangular boxes into a single container in such a way as to minimize the length of the occupied space in the container. At first, a 0–1 mixed integer LP model is formulated; then a simple but effective loading placement method is proposed for solving large-size instances. According to the authors, the proposed procedure has the potential for dealing with a nonlinear objective function. In the paper [20], the loading of heterogeneous pallets into a single railcar is dealt with as a C&P problem. A mathematical model in 0–1 LP terms was formulated and implemented for its solution. The model and the originally developed software were tested on a number of examples. The paper also suggests directions for further research, such as multiple railcar loading and the relaxation of the constraint that a railcar has to be filled to its volume capacity. In the follow-up research, described in the paper [21], the mathematical model includes specific allocation and loading priority constraints in addition to the mandatory weight, balance and stability constraints, while the problem studied concerns the simultaneous loading of items into two identical railcars.

Mixed integer programming (MIP) is a common way of solving a combined container loading and vehicle routing problem, which is crucial if goods are transported in road vehicles. The paper [22] proposes a MIP model for the capacitated vehicle routing problem (CVRP) with sequence-based pallet loading and axle weight constraints. All small items are homogeneous pallets and may be placed in two horizontal rows in the vehicles. The model takes into account weight restrictions on the axles of the tractor and trailer of the vehicle at all times (i.e., at the depot as well as after each delivery). The authors compare the model to the CVRP with sequence-based pallet loading without axle weight restrictions and conclude that not including axle weight restrictions may induce major violations of axle weight limits. Kang et al. [23] define the problem as follows: heterogeneous vehicles are available to ship the materials, and each vehicle has a limited loading capacity and a limited travelling distance. Different types of vehicles have different loading capacities and different travelling distance limits. The purpose of this research is to study a multiple vehicle routing problem with a soft time window and heterogeneous vehicles. Two models, using MIP and a genetic algorithm, are developed to solve the problem. The authors claim that, based on the outcomes of the models, managers can determine the optimal or near optimal methods for assigning the routings of multiple vehicles in each period, and for allocating loading sizes for each vehicle in each period, while aiming to minimize the total transportation cost. Moura and Oliveira [24] develop a MIP model combining the vehicle routing problem with time windows and the container loading problem. The authors note that the capacity constraints of the vehicles in the vehicle routing problem are often improperly used when real-world applications are considered. 
The capacity constraint is not only related to admissible weight but also to the vehicle's volume dimensions. The routes designed for a given vehicle capacity, in terms of weight limits, can lose their admissibility due to incompatibility of cargo dimensions, and vice versa. To address loading issues in more detail in routing problems, one needs a richer model. Loading constraints may seriously affect the nature of the problem. In dynamic vehicle routing problems transport requests arrive according to a stochastic pattern, and the task is to route the vehicles in an orderly fashion to satisfy the demand. For this reason, the mathematical programming approach, with "hard" constraints and an objective function, is hardly useful. Therefore, the authors of the paper [25] have suggested an adaptive neuro-fuzzy system, capable of selecting a vehicle's route in uncertainty conditions. Railcar loading is also often performed in conditions of insufficient relevant information, with no uniformly selected optimization criterion, i.e., it takes place in multi-criteria circumstances.

It was observed a long time ago that C&P problems are non-deterministic polynomial hard (NP-hard), i.e., that they cannot be solved in polynomial time [26] in their general form. Consequently, popular metaheuristics such as genetic algorithms, tabu search and simulated annealing, as well as heuristics based on familiarity with the real problem, are often used to solve cargo loading problems. Numerous papers in the past used genetic algorithms (GA) for the CLP, and many papers still do so today [23,27,28]. For example, the paper [28] presents an adaptive GA, based on a general loading mathematical model aiming to maximize space utilization. Based on the dynamic space division method, the authors develop a dedicated genetic algorithm that uses a two-stage real-number encoding method. The encoding method, consisting of the sequence of cargoes and their rotation, lifts a single rule for cargo loading order restrictions. The algorithm searches for the best combination of cargo sequence and rotation through the GA, which provides a larger search space for the algorithm to find a better solution. Simulated annealing is another frequently used metaheuristic. In a more recent paper [29], a linear MIP model and a simulated annealing algorithm are developed for the problem of packing rectangular boxes inside a container in such a way as to maximize the total value of the packed boxes, while some realistic constraints, such as vertical stability constraints, are included. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Some boxes are preplaced in the container, and these preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The authors claim that the simulated annealing approach is successful and can handle a large number of packing instances. Loading a partially loaded bin can be viewed as loading into a bin with obstacles. In this context, we can consider that the paper [30] solves a problem similar to the previous one. The authors define an extreme-points rule for identifying possible positions to place items in a given, partially loaded bin. The rule is also used to derive new constructive heuristics for the 3D bin packing problem. Computational results show the effectiveness of the new heuristics compared to state-of-the-art results. In [31], the problem is defined as follows: given a finite set of 3D boxes of different sizes and an unlimited set of containers of the same size, the cargo loading problem is to determine the minimum number of containers that can contain all the boxes. As the problem is NP-hard, the authors propose tabu search optimization with a tree-based heuristic cargo loading algorithm as its inner heuristic to solve it. In [32], which examines 3D multi-container packing problems with a homogeneous set of containers, minimizing the number of containers is also set as a goal. It is considered that the cost of the containers is not part of the decision process (e.g., the company owns its own containers). The authors introduce a greedy adaptive search procedure, a new framework for multi-dimensional, multi-container packing problems. The procedure combines the simplicity of greedy algorithms with learning mechanisms aimed at guiding the overall method towards good solutions. Experiments were carried out on standard benchmark instances for this type of problem, and the authors claim that the results indicate that the proposed procedure attains near-optimal solutions in very short computational times.

### **4. Motivation for our Research**

Reading the previous section, one gets the impression that the cargo loading problem has largely been solved. The vast majority of papers, the number of which has been steadily increasing for years, present different models and techniques for solving this problem. However, practice shows that the bulk of theoretical research deals with combinatorial optimization that has limited use in real-world applications. Some methods have never been implemented, while some implemented systems are only used briefly or not at all. In the subsections below, we look at the following three questions: (1) Is rail transportation in Serbia faced with the problem of incorrect loading of freight railcars? (2) Are there any software systems that solve the loading problem, and what are their characteristics? (3) Are there any problems in using the available software systems, specifically regarding the field of application we are interested in, namely the loading of freight railcars?

### *4.1. Incorrect Loading—The Main Reason Why Freight Railcars Are Withdrawn from Service in Serbia*

We have tried to find the answer to the question of whether rail transportation in Serbia is faced with the problems of incorrect loading and securing of goods by monitoring the number of railcars withdrawn from service and the related costs at one of Serbia's railway nodes, namely the Niš node, over a period of one year. Considering that the withdrawn railcars' loading points were not monitored and that they by all means differ, we have no information on whether any loading software was used during the loading or not.

Table 1 shows the number of withdrawn freight railcars at the Niš node and the related costs in Euros [Eur]. Since the data for 2018 was gathered in the first few days of 2019, damage assessment was not yet over for a number of freight railcars. The symbol *i\*j* used in the table means that the number of railcars withdrawn at the observed site was *i*, but the incurred costs were not yet known for *j* railcars. Consequently, the amounts listed under "costs" refer to *i − j* railcars.


**Table 1.** Number of vehicles withdrawn at the Niš node and the related costs registered in [Eur].

1. Marshalling yard. \* The symbol i\*j used in the table means that the number of railcars withdrawn at the observed site was i, but the incurred costs were not yet known for j railcars.

We can see that incorrect loading is the dominant reason why freight railcars were withdrawn from service (361 railcars out of the total of 432 railcars, i.e., nearly 84%). The costs related to the 257 railcars withdrawn to repair the cargo amount to 21,195.5 [Eur], i.e., 82.5 [Eur] per railcar on average. The expected costs for all 361 railcars withdrawn in order to repair the cargo are estimated at over 29,661 [Eur]. The costs for the 57 railcars withdrawn due to technical faults amount to 7484.4 [Eur], i.e., 131.3 [Eur] per railcar on average. The type of technical faults occurring in the withdrawn railcars was not analyzed. Practice shows that exceeding the axle weight, i.e., the weight per wheel, often causes damage to the suspension system as well as to the contact area on the wheel. It is therefore highly likely that some technical faults occurring in the withdrawn railcars are the result of their earlier overloading (e.g., the so-called "soft springs", which have lost their defined flexing capacity and intended purpose).

Moreover, the analysis we conducted only takes into account the railcars withdrawn from service either in order to repair the cargo (reload it) or due to technical faults; it does not take into account potential damage to infrastructure or the costs related to the damage sustained by the cargo and delays in its delivery.

### *4.2. Available Cargo Loading Software*

There is a very long list of commercial or free software for the optimization and planning of cargo loading, with graphic, usually 3D visualization of the found solution. The paper [33] gives a brief overview of the available software tools and their characteristics. This subsection provides a brief overview of the selected tools.

The Cube-IQ Load Planning System [34] is logistics software, designed to ensure optimal utilization of the volume/weight of the cargo space it calls a container. Cube-IQ optimizes the loading of items into one or more containers, optionally of different sizes. The default loading units are boxes, but the loading of items of different shape (cylindrical, L-shape) can also be requested. The software creates 3D loading plans. It supports SQL, Excel, XML and other data formats for import/export. There is a licensed version of the software, as well as its fully-functional 45-day trial version.

3D Load Packer (3DLP) [35] is a space optimizer designed to find the best arrangement of the given different-size 3D rectangular objects ("boxes") within one or more rectangular enclosures ("containers"). The allowed packing orientation can be specified for each box. In addition to weight and volume constraints, if a truck, trailer or railcar is the container in reality, the axle load must be taken into account, which the program enables. The software also allows the selection of the objective function and calculates costs for each box/container item as well as total costs. The available versions of the software include its licensed version and its 30-day trial or free version.

PackVol [36] is 3D optimization software for load planning, designed to help utilize the cargo space of a vehicle/container in the best way possible in order to minimize transportation costs. The program is available in three editions: PackVol LITE (a 14-day trial version is available for evaluation), PackVol STANDARD (a 30-day trial version is available) and PackVol DYNLOAD (a 45-day trial version is available). The latest edition is very interesting as it implements loading in "**multiple steps**" (e.g., boxes on pallets, pallets in containers). The purchase of a license during the evaluation period makes it possible to go on working, without losing the settings, the code or the data from the trial version. The program is written in C++, which ensures small executable code, with great advantages in terms of speed. Import/export is possible via Excel files, as well as communication with databases via the appropriate ODBC (Open Database Connectivity) protocol.

PalletStacking [37] allows users to find the best arrangement of boxes on loading pallets for warehousing or transportation. This software also calculates optimal box dimensions, which is a significant feature of the package. Typically, it enables a 3D representation of the found solution and its export to an Excel file. Additionally, the user may opt to generate an optimization report in an HTML or PDF file. There is a licensed version of the software, as well as a free demo version.

EasyCargo [38] is online container loading software. Consequently, the software is not installed on a local computer but is used via a web browser. The load plan result is displayed in interactive 3D (Figure 1), just like in a game. The user can rotate or zoom to explore details of the load plan. The built-in manual load plan editor adjusts the rotation or position of each box and works on a "Drag&Drop" basis. In addition to the licensed version, there is a free 10-day trial of the full version.

**Figure 1.** 3D cargo loading graphic generated by EasyCargo loading software.

### *4.3. Problems Encountered in Using Commercial Cargo Loading Software*

The previous subsection has shown us that there are numerous software tools that solve loading problems (some of which are free), with somewhat different functionalities, while Section 4.1 uses a railway node to illustrate the fact that incorrect loading is the main reason why freight railcars are withdrawn from service. Load planners appear to rely more on their own experience than on the available software tools.

The reasons for this are numerous. Namely, the application of ready-to-use optimization software tools has proved to be quite an effort in practice. Pinedo [39] analyzes generic systems versus application-specific systems in subsection 17.5 (page 476), drawing the following conclusion: "Dozens of software houses have developed systems that they claim can be implemented in many different industrial settings after only some minor customization. It often turns out that the effort involved in customizing such systems is quite substantial. The code developed in the customization process may end up being more than half the total code." The commonest reason for this is that the setting in which optimization is to be performed has some restrictions or constraints that are hard to "integrate" into a ready-to-use software system. General constraints can include numerous "special cases" whose coding can be so elaborate that it is better to build a system "from scratch". Also, commercial optimization software may not have the interface needed to connect to the information system already present in a real-world setting. For example, a software system may be designed to accept input data from a SQL (Structured Query Language) server database (which is quite common because ready-to-use systems are usually designed as an upgrade of database management systems), while the data is already stored in Excel spreadsheets or in an Oracle database. Another reason for developing one's own system is that the user insists on the original code in order to be able to maintain it independently. Moreover, the representation of the found solution (textual or graphic) must resemble that to which users have become accustomed in their years of work. Language barriers in input/output communication and the representation of the found solution in a non-standard form can discourage the user from using a ready-to-use software package at all. The user acting as a planner often likes to compare different solutions and make what-if analyses. Pinedo ends section 17.5 (page 479) with the following words: "An important advantage of an application-specific system is that manipulating a solution is usually considerably easier than with a generic system."

As part of this research, we have tested the extent to which the software tools sketched in Section 4.2 can solve our problem of loading items into railcars. We shall now outline the experiment with the EasyCargo loading software. Six different types of same-size boxes had to be loaded into a container the size of a Habis railcar. The first setback observed was the limit in defining the container's maximum carrying capacity (50,000 kg), which is less than the real carrying capacity of a Habis railcar (53,500 kg). Also, if a standard trailer and/or a truck is selected, the predefined carrying capacity/volume/axle load limits etc. cannot be changed. Still, the software has some very good functionalities (e.g., rotation of boxes, no packing on level two), but it does not make it possible to define loading priority constraints, which is often necessary in real transportation. If we look at the 3D representation of the loading solution shown in Figure 1, it is clear that it is not possible to see the boxes that are in the middle row of level one, i.e., other methods of solution representation (e.g., a table) must be used to get the relevant information, which is usually an unfavorable option for a load planner. Finally, such a representation is not very helpful in monitoring the sequential loading of items (box by box, in a fixed sequence) with a forklift truck.

Having neither ambition nor opportunity to test all commercial loading software, we can conclude that in many cases it is simply inadequate, this being the very rationale for our research.

### **5. Loading Box Pallets into Covered Railcars**

Palletized freight is a type of freight often transported by railway. A pallet is a loading unit used to pack multiple pieces of the same- or different-type freight in a specific sequence until it reaches a certain weight and height, in order to protect the freight and ensure its easy and fast manipulation by a forklift (lifting, lowering, transport, packing). Palletized goods can be transported in closed or open railcars, but closed railcars are used far more often given that goods transported on pallets are usually not weather resistant. Moreover, different types of pallets that vary in size, weight or useful load capacity are used in railway transport.

Our problem, the loading of box pallets into closed railcars, is a CLP-type C&P problem, where the container is a covered railcar, while the small items are pallets. The loading into a single railcar or the simultaneous loading into several railcars, which can either be identical or different, can be considered. The box pallets can be considered a weakly heterogeneous set of cargo. Namely, exchange box pallets known as EPAL (European Pallet Association, EPAL) box pallets, which have a fixed length, width, height, loading capacity and superimposed load, are most commonly used in Europe as well as in Serbia (Figure 2). The safe working load and weight of an empty pallet depend on the year of its manufacture [40].


**Figure 2.** European Pallet Association (EPAL) box pallet and its characteristics.

The loaded pallets are of the same size and volume, but their gross weight may differ and, consequently, they can be considered a weakly heterogeneous set of cargo. According to the classification given in Section 2.2, our CLP can be classified as an IIPP, SLOPP or MILOPP.

In the Republic of Serbia, the loading of freight railcars is defined by the regulations laid down in the documents [41,42]. The UIC and railway administrations have adopted standards on the loading of railcars used in international transport, the so-called RIV cars (Italian: Regolamento Internazionale dei Veicoli, RIV) and these regulations have been incorporated into the mathematical model and then implemented in the software described in Section 6.

### *5.1. Preliminaries for Mathematical Model Definition*

Loading the box pallets into covered railcars has to fulfill the following general constraints:

Orientation constraint

1. All boxes must be strictly placed as per the given orientation. The pallets are stacked in such a way that their lateral side is parallel with the lateral side of the wagon, while their longitudinal side is parallel with the longitudinal side of the wagon. The number of pallet units to be placed lengthways, $N_l$, is determined by the length of the wagon and the length of the pallet and is calculated as the integer part of their quotient. The number of pallet units to be placed widthways, $N_w$, is calculated in the same way, as is the number of available loading levels, $N_h$. This means that the maximum number of pallet units to be loaded into the wagon is $n = N_l \cdot N_w \cdot N_h$.
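As a minimal illustration of this integer-quotient computation (the wagon and pallet dimensions below are hypothetical placeholders, not values from the paper):

```python
# Number of pallet positions in a wagon: integer parts of the quotients
# of the usable wagon dimensions and the pallet dimensions.

def pallet_positions(wagon_l, wagon_w, wagon_h, pal_l, pal_w, pal_h):
    n_l = int(wagon_l // pal_l)   # positions lengthways
    n_w = int(wagon_w // pal_w)   # positions widthways
    n_h = int(wagon_h // pal_h)   # loading levels
    return n_l, n_w, n_h, n_l * n_w * n_h

# Hypothetical usable dimensions [m] chosen so the result matches the
# Habis loading scheme discussed later (14 x 3 x 2 = 84 positions):
n_l, n_w, n_h, n = pallet_positions(17.0, 2.6, 2.0, 1.2, 0.8, 1.0)
print(n_l, n_w, n_h, n)  # -> 14 3 2 84
```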

Complexity constraint

2. These constraints depend on the number of railcar doors used for loading/unloading and the loading/unloading method. For example, in closed Habis, Hbis and Hbbikks railcars, doors can slide open up to 2/3 of the railcar's side, enabling a forklift truck to board the railcar and manipulate the pallets. In such railcars, the forklift loads the pallets into the railcar starting from the front toward the middle and from the bottom to the top ("vertically").

Positioning constraints



**Figure 3.** Loading scheme for a Habis railcar ($N_h = 2$, $N_l = 14$ and $N_w = 3$): RailLoad application start window.

Weight constraint

5. The maximum wagon carrying capacity must not be exceeded. Please note that, in the trading of goods, weight is taken to mean the same as mass.

Balance constraints


wagon is fully or partially loaded as well as the railway line category. In this model, the axle construction weight ratio can be 1:1.25 at the most.


Stability constraint

10. Taking into account the fact that pallet units are stacked vertically on top of each other and that this is usually done by interlocking the layers, it is clear that an upper-level position can only be filled once a lower-level position has been filled.

### *5.2. Basic Mathematical Model for a Single Railcar*

All box pallets have the same volume, while their gross weight depends on the weight of the goods. Let us assume that each loaded box pallet of type $i$ ($PT_i$), $i = \{1, \dots, m\}$, has the same gross weight $gw_i$ and that the profit made by its transport is $p_i$. $v_i$ is the loading priority coefficient of $PT_i$. This priority coefficient gives a multi-criteria aspect to the objective function. Namely, in some situations specific goods must be given priority in transportation, regardless of the profit to be made. A pallet can be assigned to one of the positions $j$, $j = \{1, \dots, n\}$, in the covered railcar. In view of the fact that profit maximization is the most frequent optimization criterion when transporting goods, we define the objective function as follows:

$$\max f(\mathbf{x}) = \sum_{j=1}^{n} \sum_{i=1}^{m} v_i p_i x_{ij}, \tag{1}$$

subject to:

$$x_{ij} = \begin{cases} 1, & \text{if a } PT_i \text{ is assigned to position } j \\ 0, & \text{otherwise,} \end{cases} \tag{2}$$

where *xij* are 0–1 decision variables.

$$\sum_{j=1}^{n} \sum_{i=1}^{m} gw_i x_{ij} \le T, \tag{3}$$

where *T* is the carrying capacity of the railcar to be loaded.

$$\sum_{i=1}^{m} x_{ij} \le 1, \; j \in \{1, \dots, n\}, \tag{4}$$

at most one pallet is assigned to each position.

$$\sum_{j=1}^{n} \sum_{i=1}^{m} x_{ij} \le n, \tag{5}$$

the total number of loaded pallets cannot exceed the number of available positions.

$$\frac{1}{3}a \le b \le 3a,\tag{6}$$

where: *<sup>a</sup>* <sup>=</sup> *<sup>n</sup> j*=1 *m i*=1 *ej <sup>d</sup> xijgwi*<sup>+</sup> *<sup>W</sup>* <sup>2</sup> and *<sup>b</sup>* <sup>=</sup> *<sup>n</sup> j*=1 *m i*=1 *d*−*ej <sup>d</sup> xijgwi*<sup>+</sup> *<sup>W</sup>* <sup>2</sup> are the bogie weight of bogies a and B respectively, while *W* is the wagon weight. *d* (distance between bogies a and B) and *ej* (distance between the center of gravity of the loading unit in position *j* and bogie A) are the spacing illustrated by Figure 4.

$$\frac{a}{2} \le \gamma, \quad \frac{b}{2} \le \gamma, \tag{7}$$

where $\gamma$ is the maximum permitted axle weight.

$$0.8L \le R \le 1.25L, \tag{8}$$

where $L = \sum_{j=1}^{n} \sum_{i=1}^{m} \frac{s-r_j}{s} x_{ij} gw_i + \frac{W}{2}$ and $R = \sum_{j=1}^{n} \sum_{i=1}^{m} \frac{r_j}{s} x_{ij} gw_i + \frac{W}{2}$ are the axle construction weights on the left and right side, respectively. $s$ (axle construction distance between the wheels) and $r_j$ (distance between the center of gravity of the loading unit in position $j$ and the railcar's longitudinal axis) are the spacings illustrated in Figure 5.

$$\sum_{j=1}^{n} \sum_{i=1}^{m} x_{ij}\, gw_i\, g \le w_m, \tag{9}$$

where $g = 1/l$ is the weight coefficient per linear meter, $l$ is the wagon length, and $w_m$ is the maximum permitted weight per linear meter of the wagon.
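Constraint (9) reduces to a one-line check. The sketch below uses the category C limit of 6.4 t/m mentioned later in the paper, together with a hypothetical cargo weight and wagon length:

```python
# Weight-per-linear-meter check for constraint (9):
# (total cargo weight) * g <= w_m, with g = 1 / wagon length.

def within_meter_limit(total_weight_t, length_m, wm_t_per_m):
    g = 1.0 / length_m
    return total_weight_t * g <= wm_t_per_m

# Hypothetical: 50 t of cargo on a 17 m wagon, limit 6.4 t/m.
print(within_meter_limit(50.0, 17.0, 6.4))  # -> True
```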

$$\sum_{i=1}^{m} x_{ij} = 0 \Rightarrow \sum_{i=1}^{m} x_{i,j+3} = 0, \; j \in \{1, \dots, n\}, \tag{10}$$

if the lower-level position is not filled, the upper-level position must remain empty as well.

**Figure 4.** Bogie weight ratio calculation method.

**Figure 5.** Axle construction weight ratio calculation method.

These are mandatory constraints, while different loading model variants can include and/or relax some specific constraints, which will be discussed in the next section of the paper.

In addition to the objective function (1), other criteria for the evaluation of the solution can often be found, such as weight maximization, maximization of volume utilization, maximization of the number of priority pallets etc. These criteria are often conflicting, i.e., the CLP is a multi-criteria problem by nature [43].
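To make the model concrete, the sketch below solves a deliberately tiny instance of (1)–(5) by exhaustive search; all data are hypothetical, and the balance, axle, linear-meter and stacking constraints (6)–(10) are omitted for brevity. Each position is assigned either no pallet (0) or exactly one pallet type, so constraints (4) and (5) hold by construction:

```python
from itertools import product

# Tiny instance of model (1)-(5): maximize sum of v_i * p_i * x_ij
# subject to the carrying-capacity constraint (3). Hypothetical data.
gw = [1000.0, 600.0]   # gross weights of pallet types PT1, PT2 [kg]
p  = [50.0, 30.0]      # profit per transported pallet
v  = [2.0, 1.0]        # loading priority coefficients
n  = 4                 # number of positions in the railcar
T  = 2600.0            # carrying capacity of the railcar [kg]

best_value, best_plan = -1.0, None
# One choice per position: 0 = empty, k = pallet type k.
for plan in product(range(len(gw) + 1), repeat=n):
    weight = sum(gw[k - 1] for k in plan if k > 0)
    if weight > T:                       # constraint (3)
        continue
    value = sum(v[k - 1] * p[k - 1] for k in plan if k > 0)
    if value > best_value:
        best_value, best_plan = value, plan

print(best_value)   # -> 230.0
print(best_plan)    # one optimal assignment of types to positions
```

For this instance the optimum loads two PT1 pallets and one PT2 pallet (2600 kg in total), leaving one position empty. A real solver such as CPLEX, used later in the paper, replaces this enumeration, which grows exponentially with $n$.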

### **6. Results**

The paper presented here can be considered as an extension of our previous research, where the focus was on the mathematical model, while software was developed at the first prototype level only to test the model and did not include an appropriate graphical user interface. The main goal of our current efforts was to upgrade the software to a complete user-friendly load planning and monitoring system. Namely, the analysis made in Sections 4.2 and 4.3 has revealed that it is difficult to use commercial cargo loading optimization and planning software to monitor the sequential loading of pallets with a forklift truck. Consequently, a load planning and monitoring software system named RailLoad was designed and implemented for the purpose.

The mathematical model was coded using the OPL (Optimization Programming Language) and then solved using CPLEX Optimizer, a well-known mathematical programming solver. The generated solution, i.e., the achieved loading plan, has been stored in an Access database. The Windows application RailLoad, implemented in Visual Basic, communicates with the Access database, visualizing the loading plan and enabling its user-friendly monitoring. All the software systems used are available through the Microsoft Developer Network Academic Alliance.

In Serbia, closed H (Habis, Hbis, Hbbikks, Hbfkks, Hfkks) railcars are usually used to transport palletized goods. These railcars differ in their technical and, more importantly, operational characteristics, i.e., their carrying capacity and volume, usable floor length and width, mass, number of doors used for loading/unloading, maximum doorway width etc. It is therefore clear that a pallet loading scheme has to be different for each railcar type. The characteristics of different railcars and loading schemes for each of them should be stored in a separate database. In this research phase, this was done for Habis railcars. Figure 3 shows the loading scheme of a Habis railcar, while the railcar's characteristics are shown in Figure 6. One can see that the railcar's carrying capacity depends on the railway line category (A, B or C).


**Figure 6.** Habis wagon and its characteristics.

The implemented RailLoad software was tested on a large number of examples in which pallets (manufactured after 2011) were loaded into single or multiple Habis railcars. The selected examples are described in Sections 6.1 and 6.2. The initial assumption was that any type of cargo could be placed in any position. Depending on the example, additional requirements that have to be met will be defined. We also consider box pallets to be homogeneous bodies, i.e., their center of gravity coincides with their volume center. The goods are to be transported on a category C railway line, where the maximum allowed axle load is 20 t and the maximum allowed weight per meter is 6.4 t/m, with a standard track gauge of 1.435 m. Since pallets can be stacked on top of each other, they can be loaded into the Habis wagon in two levels, with 14 pallets placed lengthways and three pallets widthways. The wagon's total volume capacity, expressed in the number of pallets, is 84. All examples were tested in a running environment with an Intel Core i5-6400 CPU at 3.30 GHz and 8 GB RAM. Freight railcar load planning is not time critical, and most often it can be done even days before the loading itself starts. Our goal was not to find the optimal solution for a single criterion, but a "good enough" solution across several optimization criteria, and to eliminate incorrect loading and securing of goods. With this in mind, the CPLEX Optimizer CPU execution time was limited to 600 seconds for all examples, although some simpler examples were solved to optimality in a much shorter time.
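As a quick illustration of the geometry above, a minimal sketch (our own illustrative code, not part of RailLoad) enumerating the loading positions of a Habis wagon:

```python
# Hypothetical enumeration of Habis loading positions: 14 rows lengthways,
# 3 positions widthways, 2 stacking levels, i.e., 14 * 3 * 2 = 84 pallets.
ROWS, COLS, LEVELS = 14, 3, 2

def positions():
    """Yield (position_id, row, col, level) for every slot in the wagon,
    level I first, since level II can only be filled on top of level I."""
    pid = 0
    for level in range(LEVELS):
        for row in range(ROWS):
            for col in range(COLS):
                pid += 1
                yield pid, row, col, level

slots = list(positions())
print(len(slots))  # 84
```

The numbering scheme (level-major order) is an assumption for illustration; the actual RailLoad position labels may differ.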

The examples below solve the task of optimally packing at most 10 different pallet types (PTs) into railcars. Table 2 shows the weight of each PT to be loaded, the profit made from its transportation and the loading priority coefficient for each PT. In each example, the following three optimization criteria were observed: maximization of wagon load weight, maximization of wagon volume utilization and maximization of weighted profit, including loading priority coefficients.

**Table 2.** A loading task involving 10 box PTs.


*6.1. Single Loading Examples*

**Example 1.** *Fifty pallets of each PTi, i = {1, ... , 10}, are available for loading. Only one type of goods should be loaded into wagon A.*

The following cargo-related constraint, which limits the number of available items of each PTi, i = {1, ... , 10}, is now included in the model:

$$\sum\_{j=1}^{n} x\_{ij} \le 50, \text{ for } \forall i \in \{1, \dots, 10\}. \tag{11}$$

The following separation constraint allows the loading of only one PT into a railcar:

$$\sum\_{i=1}^{m} \sum\_{j=1}^{n} x\_{ij} = \sum\_{j=1}^{n} x\_{kj}, \text{ for some } k \in \{1, \dots, 10\}. \tag{12}$$

This is an example of the CLP classified as the IIPP. In the RailLoad application start window, the user selects the option Single railcar and the railcar type Habis, and then activates the loading plan optimization software. The visualization of the achieved pallet arrangement generated by the RailLoad software is shown in Figure 7. In view of the objective function and the mathematical model described in Section 5.2, the loading of 50 PT7 pallets has been selected. We can see that, if a position on level I is empty, the corresponding position on level II has to be empty, too (e.g., 27 and 30, 63 and 66), while this, of course, is not the rule in the opposite case (e.g., positions 25 and 28, 26 and 29). The last position to be loaded is position 60. Thanks to the railcar loading monitoring panel, it is clear to the load organizer that there is a series of empty positions behind position 26.


**Figure 7.** Pallet loading plan and monitoring generated by the RailLoad software in Example 1.
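Constraints (11) and (12) are easy to verify on any candidate loading plan. The sketch below is our own illustrative helper (the plan representation is an assumption, not RailLoad's data model):

```python
from collections import Counter

def check_example1(plan, limit=50):
    """plan maps a loading position to a pallet type, e.g., {1: 'PT7', ...}.
    Returns True iff no pallet type exceeds `limit` items (constraint (11))
    and at most one pallet type is present in the railcar (constraint (12))."""
    counts = Counter(plan.values())
    within_limit = all(n <= limit for n in counts.values())
    single_type = len(counts) <= 1
    return within_limit and single_type

# The Example 1 solution loads 50 PT7 pallets into 50 positions.
print(check_example1({pos: "PT7" for pos in range(1, 51)}))  # True
print(check_example1({1: "PT7", 2: "PT3"}))                  # False: mixes types
```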

**Example 2.** *Fifty pallets of each PTi, i = {1, ... , 10}, are available for loading. They are to be loaded into wagon A.*

The constraint (11) remains in force in this example as well. This problem is classified as a SLOPP: different PTs can be loaded into one wagon, so the user selects the same options in the RailLoad application window as in the previous example. The software generated a window with the problem solution, which is left out here to save space. The solution details are outlined in Tables 3 and 4. The wagon can be filled to its volume capacity with an adequate number of four different PTs (PT2, PT7, PT8 and PT9), with a carrying capacity utilization rate of 99.85%.



**Table 4.** Wagon capacity utilization rate in Example 2.


*6.2. Multiple Loading Examples*

**Example 3.** *Twenty pallets of each PTi, i = {1, 2, 3}, must be loaded, and they all have to be loaded into the same wagon. Also, 30 pallets of each PTi, i = {4, ... , 10}, are due to be loaded. There are two wagons, A and B, into which they can be loaded.*

We are faced here, and in the examples below, with a problem classified as the MILOPP: different PTs have to be loaded into identical wagons. It is evident that the basic mathematical model described in Section 5.2 has to be modified taking into account the fact that the loading of items into two wagons is considered simultaneously. In this context, decision variables are as follows:

$$x\_{ijR} = \begin{cases} 1, & \text{if a PT}i \text{ is assigned to position } j \text{ in railcar } R \\ 0, & \text{otherwise} \end{cases}, \tag{13}$$

where *xijR* are 0–1 decision variables, *i* = {1, ... , *m*}, *j* = {1, ... , *n*}, *R*∈{*A, B*}.

Adapting other constraints to the loading of items into two wagons is not hard and will be left out here. Also, we have to introduce two additional constraints. Since items are to be loaded into two identical wagons (A and B), we can assume that the wagon into which all PT*i*, *i* = {1, 2, 3} will be loaded is wagon A. We therefore introduce the following connectivity constraint:

$$\sum\_{j=1}^{n} x\_{ijA} = 20,\text{ for } \forall i \in \{1, 2, 3\}. \tag{14}$$

Also, the following cargo-related constraint depends on the number of pallets due to be loaded PT*i, i* = {4, ... , 10}:

$$\sum\_{R \in \{A, B\}} \sum\_{j=1}^{n} x\_{ijR} \le 30, \text{ for } \forall i \in \{4, \dots, 10\}. \tag{15}$$
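A candidate two-wagon plan can be checked against the connectivity constraint (14) and the cargo limit (15) directly; a minimal sketch with our own assumed plan representation (not RailLoad code):

```python
from collections import Counter

def check_example3(plan_A, plan_B):
    """plan_A and plan_B map positions to pallet types in wagons A and B.
    Constraint (14): exactly 20 pallets of each of PT1..PT3, all in wagon A.
    Constraint (15): at most 30 pallets of each of PT4..PT10 in total."""
    in_A, in_B = Counter(plan_A.values()), Counter(plan_B.values())
    total = in_A + in_B
    mandatory = all(in_A[f"PT{i}"] == 20 and in_B[f"PT{i}"] == 0
                    for i in (1, 2, 3))
    limited = all(total[f"PT{i}"] <= 30 for i in range(4, 11))
    return mandatory and limited

# 20 pallets each of PT1..PT3 plus 24 PT4 fill wagon A's 84 positions.
plan_A = {p: "PT1" for p in range(1, 21)}
plan_A.update({p: "PT2" for p in range(21, 41)})
plan_A.update({p: "PT3" for p in range(41, 61)})
plan_A.update({p: "PT4" for p in range(61, 85)})
plan_B = {p: "PT5" for p in range(1, 31)}
print(check_example3(plan_A, plan_B))  # True
```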

The RailLoad software has generated the solution, the details of which are outlined in Table 5, visualizing it in a user-friendly interface. Each wagon can be filled to its volume capacity with 84 pallets, with the wagon A carrying capacity utilization rate amounting to 99.91% and the wagon B carrying capacity utilization rate amounting to 99.89%. Twenty pallets of each PT including PT1, PT2 and PT3 are in wagon A, as specified by the constraint (14). Also, 9 PT7, 2 PT8 and 13 PT9 pallets are in the same wagon. The loading included seven different PTs, five of which (PT1, PT2, PT3, PT7 and PT8) have been fully loaded.

**Table 5.** The number of pallets by PT in the solution of Example 3.


**Example 4.** *Forty pallets of PT1 have to be loaded into wagon A. Thirty pallets of each PTi, i = {2, ... , 10}, are due to be loaded. They can be loaded into two wagons, A and B. In total, 50 PT5 and PT8 pallets have to be loaded.*

The constraint that all 40 PT1 pallets must be placed in wagon A is the following positioning constraint:

$$\sum\_{j=1}^{n} x\_{1jA} = 40. \tag{16}$$

The constraint (15) is valid in this example for ∀*i*∈{2, ... , 10}. Also, we introduce a special constraint defining the total number of PT5 and PT8 pallets to be loaded:

$$\sum\_{R \in \{A, B\}} \left( \sum\_{j=1}^{n} x\_{5jR} + \sum\_{j=1}^{n} x\_{8jR} \right) = 50. \tag{17}$$

The RailLoad software has generated the solution, the details of which are outlined in Table 6, visualizing it in a user-friendly interface. Each wagon can be filled to its volume capacity with 84 pallets, with the wagon A carrying capacity utilization rate amounting to 99.89% and the wagon B carrying capacity utilization rate amounting to 99.91%. All 40 PT1 pallets are in wagon A, as specified by the constraint (16). The requirement (17) that precisely 50 PT5 and PT8 pallets have to be loaded has been met as well (three PT5 pallets are in wagon A, while 24 PT5 and 23 PT8 pallets are in wagon B). Of the available 10 PTs, the loading included seven different PTs, three of which (PT1, PT2 and PT7) have been fully loaded.

**Table 6.** The number of pallets by PT in the solution of Example 4.


**Example 5.** *Thirty pallets of each PTi, i* = *{1,* ... *, 10}, are available for loading. They are to be loaded into three railcars: A, B, and C. The PT6 and PT7 items must not be placed in the same wagon.*

Considering that three railcars are to be loaded, the mathematical model described in Section 5.2 is modified and the decision variables are now as follows:

$$x\_{ijR} = \begin{cases} 1, & \text{if a PT}i \text{ is assigned to position } j \text{ in railcar } R \\ 0, & \text{otherwise} \end{cases}, \tag{18}$$

where *xijR* are 0–1 decision variables, *i* = {1, ... , *m*}, *j*={1, ... , *n*}, *R*∈{*A, B, C*}.

The cargo-related constraint refers to the total number of pallets of each PT*i* available for loading:

$$\sum\_{R \in \{A, B, C\}} \sum\_{j=1}^{n} x\_{ijR} \le 30, \text{ for } \forall i \in \{1, \dots, 10\}. \tag{19}$$

The separation constraint specifies that, if there is at least one PT6 pallet in railcar A, B or C, no PT7 pallet can be placed in that railcar, and vice versa:

$$\left( \sum\_{j=1}^{n} x\_{6jR} \ge 1 \Rightarrow \sum\_{j=1}^{n} x\_{7jR} = 0 \right) \land \left( \sum\_{j=1}^{n} x\_{7jR} \ge 1 \Rightarrow \sum\_{j=1}^{n} x\_{6jR} = 0 \right), \text{ for } \forall R \in \{A, B, C\} \tag{20}$$
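In a 0–1 model such an implication is typically expressed either with logical constraints, which OPL supports, or with a standard indicator-variable linearization; the sketch below (our own helper, not RailLoad code) merely verifies the separation rule on a finished plan:

```python
def separation_ok(plans, a="PT6", b="PT7"):
    """plans maps a railcar label to its position -> pallet type dict.
    Returns True iff no railcar contains both pallet types `a` and `b`,
    which is exactly what constraint (20) demands for PT6 and PT7."""
    for plan in plans.values():
        types = set(plan.values())
        if a in types and b in types:
            return False
    return True

print(separation_ok({"A": {1: "PT7"}, "B": {1: "PT6"}, "C": {}}))   # True
print(separation_ok({"A": {1: "PT6", 2: "PT7"}, "B": {}, "C": {}})) # False
```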

Once the user has selected the loading into multiple railcars (option: Multiple railcars) in the RailLoad application start window, and Railcar type for each (Habis) railcar, the loading plan optimization software is activated. It first displays the loading plan for the railcar with label A, but the user can also select the tab showing the loading plan for a railcar with a different label, e.g., label B as shown in Figure 8.

**Figure 8.** Pallet loading plan and monitoring generated by the RailLoad software in Example 5.

Analyzing the loading plan for railcar B, we concluded that the railcar can be filled to its volume capacity with 27 PT5 pallets, 29 PT6 pallets, 19 PT9 pallets and 9 PT10 pallets. Also, the load organizer can clearly see the order in which a forklift truck should load the cargo: PT6, PT6, PT5, ... , PT9, PT5, PT9. The railcar loading monitoring panel enables efficient monitoring of pallet loading. For example, if a forklift truck has just placed a PT5 pallet in position 3, it can be clearly seen that the next pallet to be placed in position 4 is also a PT5 pallet. The loading plan details for all three railcars are outlined in Tables 7 and 8. PT6 pallets are in railcars B (29) and C (1), while all available PT7 pallets (30) are in railcar A, so the separation constraint (20) has been fulfilled. Of the available 10 PTs, the loading included nine different PTs, seven of which (PT3, PT5, PT6, PT7, PT8, PT9 and PT10), i.e., all 30 available pallets of each, have been fully loaded.


**Table 7.** Capacity utilization rate per railcar in Example 5.

**Table 8.** The number of pallets by PT in the solution of Example 5.


Although a large number of papers deal with the CLP, we did not find academic publications that dealt specifically with loading into covered railcars, including real-world constraints tested on realistic benchmark data sets. Unfortunately, we could not validate our solution in this way; however, we strongly believe that creating realistic benchmark data sets for this problem represents a challenge for future research. The software was validated by three freight railcar loading experts according to the following criteria:


The average rating across all criteria (**on a scale of 1 to 5, the higher the better**) is 4.6.

### **7. Discussion and Conclusions**

The brief analysis made in this paper has shown that incorrect loading is the dominant reason why freight railcars are withdrawn from service in the Republic of Serbia. In addition to jeopardizing safety, incorrect loading can result in cargo loss, damage or delivery delays, as well as damage to railway infrastructure and vehicles. It is clear that incorrect loading often generates unforeseen expenses (e.g., carriers' expenses associated with payers of transportation services and vice versa, goods owners' expenses associated with railcar owners and vice versa). The available loading software is insufficiently used for various reasons, including the following: there are numerous exceptions to general loading rules which load planning users cannot include without the original code, loading plans are displayed in a non-standard form, or there are barriers in input/output communication between the loading software and the data store or between the loading software and the user. Commercial software usually does not make it possible to monitor loading, especially the sequential loading of goods with a forklift truck. A change in the loading order of just two pallets, if there is a significant difference in their weight, can disrupt some important constraints affecting safety, e.g., the balance constraint.

Assuming that covered railcars are used as containers, this paper models and solves the loading of box pallets into railcars as a CLP-type C&P problem. As the loaded box pallets are of the same size and volume, although they can differ in gross weight, we can consider them a weakly heterogeneous set of cargo. The loading into a single railcar (CLP classified as the IIPP and SLOPP) as well as into multiple identical railcars (CLP classified as the MILOPP) was observed. At the start, the basic mathematical model including standard orientation, complexity, positioning, weight, balance and stability constraints was formulated. Additional constraints were included in the models when necessary. The mathematical model was coded using OPL and then solved with the help of CPLEX Optimizer, a mathematical programming solver. The solver stores the generated solution of the mathematical model, i.e., the achieved loading plan, in an Access database. The Windows application RailLoad, implemented in Visual Basic, communicates with the Access database and visualizes the loading plan, enabling its user-friendly monitoring. The software was tested in line with all standard criteria, while special attention was paid to its optimization and visualization features. The implemented software was tested on a large number of examples and it achieved "good enough" solutions for real-world usage with regard to all observed optimization criteria: maximization of wagon load weight, maximization of wagon volume utilization and maximization of weighted profit. It can be concluded that the RailLoad software can offer full support for real planning problems and, in particular, for the clear monitoring of goods loading with a forklift truck.

The avenues for further research are numerous and make it necessary to extend both the model and the software. Consequently, studying the loading of items into multiple heterogeneous railcars (a CLP classified as the MHLOPP) is one direction that may be followed. Another direction could be analyzing how to load/unload at intermediate stations so as not to violate mandatory constraints at any point in time. Future research must move towards the development of models that include all real-world constraints concerning the loading of freight railcars [44]. This will increase the probability of their application in practice, but also make it possible to evaluate and compare different solutions using realistic benchmark data sets. Zhao et al. [16] emphasize that "realistic and challenging data sets with clear real-world constraints would help move this research area forward."

**Author Contributions:** Each author has participated and contributed sufficiently to take public responsibility for appropriate portions of the content.

**Funding:** This research was funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia, grant number TR 36012.

**Acknowledgments:** The authors would like to express their gratitude to Makso Đukić and his company SIGNALING DOO BELGRADE for providing support and donations.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Dynamic Multi-Reduction Algorithm for Brain Functional Connection Pathways Analysis**

### **Guangyao Dai <sup>1,\*</sup>, Chao Yang <sup>2</sup>, Yingjie Liu <sup>1</sup>, Tongbang Jiang <sup>1</sup> and Gervas Batister Mgaya <sup>1</sup>**


### Received: 22 April 2019; Accepted: 21 May 2019; Published: 22 May 2019

**Abstract:** Revealing brain functional connection pathways is of great significance in understanding the cognitive mechanism of the brain. In this paper, we present a novel rough set based dynamic multi-reduction algorithm (DMRA) to analyze brain functional connection pathways. First, a binary discernibility matrix is introduced to obtain a reduction, and a reduction equivalence theorem is proposed and proved to verify the feasibility of reduction algorithm. Based on this idea, we propose a dynamic single-reduction algorithm (DSRA) to obtain a seed reduction, in which two dynamical acceleration mechanisms are presented to reduce the size of the binary discernibility matrix dynamically. Then, the dynamic multi-reduction algorithm is proposed, and multi-reductions can be obtained by replacing the non-core attributes in seed reduction. Comparative performance experiments were carried out on the UCI datasets to illustrate the superiority of DMRA in execution time and classification accuracy. A memory cognitive experiment was designed and three brain functional connection pathways were successfully obtained from brain functional Magnetic Resonance Imaging (fMRI) by employing the proposed DMRA. The theoretical and empirical results both illustrate the potentials of DMRA for brain functional connection pathways analysis.

**Keywords:** brain functional connection pathways; rough set; multi-reduction; functional magnetic resonance imaging

### **1. Introduction**

The brain is an important organ that serves as the center of the human nervous system [1]. The cerebral cortex plays a key role in brain cognitive functions [2]. A healthy person has corresponding cognitive activities while being stimulated by the external environment. Multiple brain regions in the cerebral cortex cooperate to complete these activities through different brain functional connection pathways [3]. The brain functional connection pathways make an important contribution to understanding brain functional network models [4,5]. Revealing brain functional connection pathways will provide a scientific basis and reference for diagnosis and treatment [6,7].

Analyzing brain functional connection pathways is one of the basic problems in the study of brain function. Component analysis algorithms were common methods in the early stage. Friston et al. proposed a PCA-based brain functional connection pathway analysis method in 1993 [8], and then studied the functional connection between different brain regions when people processed color and emotion tasks in 1999 [9]. Londei et al. used ICA to detect the activated brain areas and analyzed the functional correlation between the relevant areas by the Granger causality test [10]. Although these methods can obtain a functional connection model of the whole brain, they waste a lot of time and space. Clustering algorithms are another kind of common method used to divide brain networks and study brain functional pathways, such as center-based algorithms [11] and heuristic algorithms [12]. However, the results of these algorithms depend heavily on the number of clustering centers and other parameters that do not describe the nature of the brain functional pathways themselves. In recent years, ROI-based algorithms have become new hotspots for analyzing brain functional connection pathways, as the brain data can be effectively simplified [13,14]. The ROIs, namely regions of interest, have to be selected empirically as prior knowledge [15] when performing brain functional analysis by these methods. The division of brain structure or the coordinates published in the latest research are usually used as the basis for selecting ROIs [16]. Liu et al. successfully obtained a brain functional connection pathway by an attribute reduction algorithm of rough set and interpreted this knowledge from ROIs [17].

The analysis of brain functional connection pathways has been widely used to study the neural mechanisms associated with different types of mental disorders. It provides great help in the diagnosis, monitoring and treatment of mental disorders. Desseilles et al. found that the functional connection pathway between the frontoparietal network and the visual cortex is abnormal in patients with severe depression [18]. Salomon et al. compared whole brain function between schizophrenia patients and a normal control group in both the resting state and the working state. The results show that the whole brain functional connection pathways of schizophrenia patients are weaker than those of the normal controls in both states, and the abnormality was more prominent in the resting state [19]. Admon et al. compared the brain functional connectivity patterns of 33 servicemen in Iraq before and after their service. They found that the subjects with decreased hippocampus gray matter density exhibit more post-traumatic stress disorder (PTSD) symptoms than those with increased hippocampus gray matter density. Meanwhile, the former group's functional connection between the hippocampus and the prefrontal cortex is significantly reduced [20].

However, only a single brain functional connection pathway is obtained by using the previous methods. In fact, there are multi-pathways in the brain, and these multi-pathways are very important for us to understand the relationship between structure and function of different brain regions. The research on multi-pathways will certainly provide more ideas for the study of brain function. Thus, it is necessary to study a multi-reduction method and use it to the analysis of multiple brain functional connection pathways.

A set of multi-reduction contains different attribute combinations which have the same decision capabilities [21]. Multi-reductions provide more insights from different perspectives than a single reduction and can form a multi-knowledge system [22]. Unfortunately, it is a major challenge to obtain a multi-reduction [23]. Wu et al. [24] proposed a method to obtain multi-reduction based on the positive region by replacing the non-core attributes. Firstly, the core attributes are collected to get a reduction. Then, the multi-reduction set is obtained through the replacement of non-core attributes one by one. However, their algorithm costs much more time because it computes equivalence classes more than once. Thus, it is worth exploring new reduction approaches to obtain multi-pathways from functional magnetic resonance imaging analysis.

In this paper, we propose a novel multi-reduction algorithm for analyzing the brain functional connection pathways from functional Magnetic Resonance Imaging (fMRI) data. After proposing and proving a reduction equivalence theorem, a binary discernibility matrix is introduced for obtaining a single reduction dynamically. Since the size of the binary discernibility matrix is dynamically decreased during attribute reduction, the computational time is significantly reduced. Then, the multi-reduction can be obtained by a strategy of non-core attributes replacement. After testing on benchmark data, we employ the proposed algorithm to obtain multiple pathways from brain cognitive functional imaging successfully. The multi-reduction obtained by our algorithm provides a novel comprehensive view for brain functional connection pathways.

### **2. Multi-Reduction and Binary Discernibility Matrix Methodology**

In this section, the relevant concepts of multi-reduction and binary discernibility matrix are defined and the reduction theorems are proved theoretically.

### *2.1. Multi-Reduction*

In rough set theory, an information system [25] is defined as a 4-tuple *S* = (*U*, *A*, *V*, *f*), where *U* is the universe of discourse, a non-empty finite set of *N* objects {*x*1, *x*2, ··· , *xN*}. *A* is also a non-empty finite set that contains all attributes. For every *a* ∈ *A*, *a* : *U* → *Va* and *Va* is the value set of the attribute *a*.

If *A* = {*C* ∪ *D*}, *C* ∩ *D* = ∅, the information system is denoted as a decision table by *T* = (*U*, *C*, *D*, *V*, *f*). *C* and *D* are, respectively, called the condition attribute and the decision attribute sets. For two subsets of attributes in decision table, the input features form the set *C* while the class indices are *D*. Let *I* be a subset of *A*, the equivalence relation [26] *IND*(*I*) is denoted as follows.

$$IND(I) = \{(\mathbf{x}, y) \in \mathcal{U} \times \mathcal{U} | \forall a \in I, f(\mathbf{x}, a) = f(y, a)\}\tag{1}$$

All equivalence classes of the relation *IND*(*I*) are denoted by *U*/*IND*(*I*). For simplicity of notation, *U*/*I* replaces *U*/*IND*(*I*). The condition and decision classes are, respectively, noted *U*/*C* and *U*/*D*. For an attributes subset *B* ⊆ *C*, *U*/*B* = {*B*1, *B*2, ··· , *Bi*, ···} denotes a partition of the universe, where *Bi* is an equivalence class of *B*. The positive region on equivalence classes *U*/*B* for *D* is defined as follows:

$$POS\_B(D) = \bigcup\_{X \in U/B \,\wedge\, |X/D| = 1} X \tag{2}$$

where |*X*/*D*| represents the cardinality of the set *X*/*D*.
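Equations (1) and (2) can be computed directly on a small decision table; a minimal Python sketch with our own toy data:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group object indices into equivalence classes of IND(attrs), Eq. (1)."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].add(i)
    return list(classes.values())

def positive_region(rows, decisions, attrs):
    """POS_attrs(D), Eq. (2): union of classes consistent on the decision."""
    pos = set()
    for cls in partition(rows, attrs):
        if len({decisions[i] for i in cls}) == 1:
            pos |= cls
    return pos

# Toy decision table: two condition attributes c0, c1 and a decision d.
rows = [{"c0": 0, "c1": 0}, {"c0": 0, "c1": 1}, {"c0": 1, "c1": 1}]
decisions = [0, 1, 1]
print(positive_region(rows, decisions, ["c0"]))        # {2}: the class {0, 1} is inconsistent
print(positive_region(rows, decisions, ["c0", "c1"]))  # {0, 1, 2}
```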

According to the positive region on the equivalence relation of decision attribute *D*, the reduction can be defined as shown in Definition 1.

**Definition 1.** *[Reduction] For a given decision table T* = (*U*, *C*, *D*, *V*, *f*)*, the attributes subset B is called a reduction of C if POSB*(*D*) = *POSC*(*D*) *and* ∀*b* ∈ *B, POS*(*B*−{*b*})(*D*) ≠ *POSB*(*D*)*.*

According to Definition 1, a reduction is an attributes subset of the condition attributes which retains the same capacity to partition the universe as the whole set of condition attributes. In fact, the reduction is usually not unique.

**Definition 2.** *[Multi-reduction] Let RED represent the multi-reduction set, it is a set including multiple reductions of C defined by Equation* (3)*.*

$$RED = \{ B \mid POS\_B(D) = POS\_C(D),\ \forall b \in B,\ POS\_{(B-\{b\})}(D) \neq POS\_B(D) \}. \tag{3}$$
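To make Definition 2 concrete, the multi-reduction set RED of Equation (3) can be enumerated exhaustively on toy data (our own example); the exponential search below is exactly what DMRA is designed to avoid:

```python
from itertools import combinations
from collections import defaultdict

def positive_region(rows, decisions, attrs):
    """POS_attrs(D): objects whose IND(attrs)-class is decision-consistent."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].add(i)
    return {i for cls in classes.values()
            if len({decisions[j] for j in cls}) == 1 for i in cls}

def multi_reduction(rows, decisions, attrs):
    """Brute-force enumeration of RED, Eq. (3): subsets preserving the full
    positive region with no redundant attribute. Exponential in |attrs|."""
    full = positive_region(rows, decisions, attrs)
    red = set()
    for k in range(1, len(attrs) + 1):
        for sub in combinations(attrs, k):
            if positive_region(rows, decisions, sub) == full and all(
                positive_region(rows, decisions,
                                [a for a in sub if a != b]) != full
                for b in sub):
                red.add(sub)
    return red

# Toy table: c2 alone, or c0 and c1 together, decide d.
rows = [{"c0": 0, "c1": 0, "c2": 0},
        {"c0": 1, "c1": 0, "c2": 1},
        {"c0": 0, "c1": 1, "c2": 1}]
decisions = [0, 1, 1]
print(sorted(multi_reduction(rows, decisions, ["c0", "c1", "c2"])))
# [('c0', 'c1'), ('c2',)]
```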

### *2.2. Binary Discernibility Matrix*

To obtain reduction from an information system, positive region based reduction algorithms [27] have been widely used in the past. However, many symbolic logic operations have to be conducted by using these algorithms. A binary discernibility matrix [28] can transform the equivalence relation between different attributes into a matrix containing only 0 and 1. Thus, the binary discernibility matrix based reduction algorithms are simpler, more intuitive and easier to understand.

**Definition 3.** *[Binary discernibility matrix] The binary discernibility matrix of the reduced decision table T* = (*U*, *C*, *D*, *V*, *f*) *is denoted by M* = (*m*(*xi*, *xj*, *ck*)) *where ck* ∈ *C* (*k* = 1, 2, ··· , |*C*|)*, xi*, *xj* ∈ *U.* (*xi*, *xj*) *is an unordered object pair and m*(*xi*, *xj*, *ck*) *can be defined as follows:*

$$\begin{aligned} M &= m(\mathbf{x}\_i, \mathbf{x}\_j, \mathbf{c}\_k) = \\ &\begin{cases} 1 & \text{if } f(\mathbf{x}\_i, \mathbf{c}\_k) \neq f(\mathbf{x}\_j, \mathbf{c}\_k), D(\mathbf{x}\_i) \neq D(\mathbf{x}\_j), \mathbf{x}\_i, \mathbf{x}\_j \in \mathcal{U}\_{\text{pos}} \\ 1 & \text{if } f(\mathbf{x}\_i, \mathbf{c}\_k) \neq f(\mathbf{x}\_j, \mathbf{c}\_k), \mathbf{x}\_i \in \mathcal{U}\_{\text{pos}}, \mathbf{x}\_j \in \mathcal{U}\_{\text{neg}} \\ 0 & \text{otherwise}. \end{cases} \end{aligned} \tag{4}$$

According to Definition 3, the two objects in positive regions but with different decision values and the two objects in positive and non-positive regions, respectively, are distinguished by "1" in the matrix. Otherwise, the two objects are equivalent, and the corresponding value in the discernibility matrix is "0".
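For a consistent toy table, where every object lies in the positive region, Equation (4) reduces to its first case; a minimal sketch with our own data:

```python
from itertools import combinations

def binary_discernibility(rows, decisions, attrs):
    """Eq. (4) for a consistent table (all objects in the positive region):
    an entry is 1 iff the attribute separates an object pair with different
    decisions. Returns a dict (i, j) -> {attr: 0/1}; pairs with equal
    decisions are equivalent, i.e., all-zero rows, and are omitted."""
    matrix = {}
    for i, j in combinations(range(len(rows)), 2):
        if decisions[i] != decisions[j]:
            matrix[(i, j)] = {a: int(rows[i][a] != rows[j][a]) for a in attrs}
    return matrix

rows = [{"c0": 0, "c1": 0}, {"c0": 0, "c1": 1}, {"c0": 1, "c1": 1}]
decisions = [0, 1, 1]
M = binary_discernibility(rows, decisions, ["c0", "c1"])
print(M[(0, 1)])  # {'c0': 0, 'c1': 1}: only c1 separates objects 0 and 1
```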

A reduction equivalence theorem between positive region based reduction algorithms and binary discernibility matrix based algorithms is proposed and proved as follows.

**Theorem 1** (Reduction Equivalence Theorem)**.** *For a given decision table T* = (*U*, *C*, *D*, *V*, *f*) *, if a reduction is obtained through the binary discernibility matrix, it must be equivalent to a reduction through its positive region.*

**Proof of Theorem 1.** Let *B* be a reduction through the binary discernibility matrix *M*, and let *Upos* be the objects set of the positive region in *T*. For ∀[*xi*]*B*, we have ([*xi*]*B* ∈ *Upos*) ∧ (|[*xi*]*B*/*D*| = 1) according to Equation (4). The objects set of the positive region in *M* is denoted as $U\_B^M$. Thus,

$$U\_B^M = \bigcup\_{[x\_i]\_B \in U\_{pos}} [x\_i]\_B = U\_{pos} = POS\_B(D).$$

Therefore, *POSB*(*D*) = *POSC*(*D*). *B* is a reduction which satisfies Definition 1, and the theorem follows.

As Theorem 1 illustrated, the binary discernibility matrix provides an efficient approach to obtain the reduction. We just need to partition the objects in different object pairs through the binary discernibility matrix. According to the generated objects pairs by Equation (4), an unordered object pair (*xi*, *xj*) can be discerned only by considering the attributes *ck* with *m*(*xi*, *xj*, *ck*) = 1 instead of the whole attributes. All attributes would be added into a candidate reduction set according to the attribute importance that is defined as follows.

$$\zeta(c\_k) = \sum\_{i=1}^{n-1} \sum\_{j=i+1}^{n} m'(x\_i, x\_j, c\_k) \tag{5}$$

The larger the *ζ*(*ck*) is, the more important the *ck* is, and the stronger the ability of *ck* to partition the object pair (*xi*, *xj*) is. Especially, *ζ*(*ck*) = 0 means *ck* is a redundant attribute and cannot help us to discern any object pair. An attribute is called a core attribute when it can only be used to partition one object pair. The set of core attributes can be collected as follows.

$$I(C) = \{ c\_k \mid \exists (x\_i, x\_j),\ i \neq j,\ m'(x\_i, x\_j, c\_k) = 1 \wedge \sum\_{l=1}^{|C|} m'(x\_i, x\_j, c\_l) = 1 \} \tag{6}$$

If *I*(*C*) = ∅, any attribute in condition attributes is not a core attribute.
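Equations (5) and (6) can be computed directly from the matrix entries; a minimal sketch on a hand-made toy matrix (our own data):

```python
def importance(matrix, attrs):
    """Attribute importance zeta(c_k), Eq. (5): the number of object pairs
    that attribute c_k can discern in the binary discernibility matrix."""
    return {a: sum(entry[a] for entry in matrix.values()) for a in attrs}

def core(matrix, attrs):
    """Core attributes, Eq. (6): attributes that are the only one able to
    discern some object pair (the row sum over attributes equals 1)."""
    return {a for entry in matrix.values()
            if sum(entry.values()) == 1
            for a in attrs if entry[a] == 1}

# Toy matrix: pair (0, 1) is discerned only by c1, so c1 is a core attribute.
matrix = {(0, 1): {"c0": 0, "c1": 1}, (0, 2): {"c0": 1, "c1": 1}}
print(importance(matrix, ["c0", "c1"]))  # {'c0': 1, 'c1': 2}
print(core(matrix, ["c0", "c1"]))        # {'c1'}
```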

### **3. Dynamic Multi-Reduction Algorithm**

In this section, the dynamic single reduction and multi-reduction algorithm based on a binary discernibility matrix are proposed successively. We analyze these two algorithms in detail.

To obtain a multi-reduction, the decision table is transformed into a binary discernibility matrix by using Equation (4). A core attribute set can be obtained by using Equation (6), and then a reduction called a seed reduction is obtained from the core attribute set, as shown in Algorithm 1. In Algorithm 1, two acceleration mechanisms are used in Steps 3 and 4 to dynamically improve the algorithmic efficiency. Rows and columns that do not affect the next calculation are deleted in Step 3 to reduce the binary discernibility matrix. Then, the attributes with a value of "1" for the first object pair in the current matrix are chosen in Step 4. Since only the importance of these attributes has to be calculated by Equation (5), the computational time of the algorithm is greatly reduced. In Steps 5 and 6, attributes are added to the seed reduction one by one according to their importance until the matrix becomes empty.

**Algorithm 1** Dynamic Single Reduction Algorithm (DSRA).

### **Input:**

*M* and the attributes set *R*,

### **Output:**

seed-reduction *R*


In Algorithm 1, the loop of Steps 2–7 is performed at most |*C* − *R*| times. The time complexities of Steps 3–6 are *O*(|*R*|), *O*(|*C*ˆ||*U*|<sup>2</sup>), *O*(|*C*ˆ|) and *O*(|*C* − *R*|), respectively. Thus, the time complexity of the loop is *max*(*O*(|*C* − *R*||*R*|), *O*(|*C* − *R*||*C*ˆ||*U*|<sup>2</sup>), *O*(|*C* − *R*||*C*ˆ|), *O*(|*C* − *R*|<sup>2</sup>)), and the overall time complexity of Algorithm 1 is no more than *O*(|*C*|<sup>2</sup>|*U*|<sup>2</sup>). The output *R* is a seed reduction from which further reductions are acquired in Algorithm 2.
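The greedy loop of Algorithm 1 can be condensed into the following sketch. This is a simplification under stated assumptions: the row/column-deletion accelerations of Steps 3–4 are collapsed into dictionary filtering, and the matrix representation (object pair → tuple of 0/1 flags) is hypothetical.

```python
def dsra(matrix, n_attrs, seed):
    """Greedy single-reduction sketch: starting from the seed (core) set,
    repeatedly add the most important remaining attribute until every
    object pair is discerned, i.e., the working matrix becomes empty."""
    red = set(seed)
    # Step-3 analogue: drop pairs already discerned by the seed attributes.
    pending = {pair: flags for pair, flags in matrix.items()
               if not any(flags[k] for k in red)}
    while pending:
        # Eq. (5) restricted to the still-undiscerned pairs.
        zeta = [sum(flags[k] for flags in pending.values())
                for k in range(n_attrs)]
        best = max(range(n_attrs), key=lambda k: zeta[k])
        red.add(best)
        # Remove every pair the newly chosen attribute discerns.
        pending = {p: f for p, f in pending.items() if not f[best]}
    return red

M = {(0, 1): (1, 0, 0), (0, 2): (1, 1, 0), (1, 2): (0, 1, 0)}
print(sorted(dsra(M, 3, seed={0})))  # [0, 1]
```

With seed {*c*0}, only pair (1, 2) remains undiscerned, so the loop adds *c*1 and terminates, mirroring Steps 5–6.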

Different reductions can be obtained by replacing the non-core attributes. Given the reduction *R* output by Algorithm 1, any non-core attribute *r* in *R* − *I*(*C*) can be replaced by attributes in *C* − *R*. By re-invoking Algorithm 1 after each such replacement, a new reduction is obtained. The dynamic multi-reduction algorithm (DMRA) is summarized in Algorithm 2.

In Algorithm 2, after initializing the attribute and multi-reduction sets in Step 1, a binary discernibility matrix is built using Equation (4) in Step 2. The core attribute set *I*(*C*) is then obtained using Equation (6) in Step 3. Algorithm 1 is called to find a seed reduction in Step 5, and more reductions are obtained by the non-core attribute replacement strategy in Steps 7–14. Finally, in Steps 15 and 16, we remove redundant reductions from the final reduction set *RED* and output it.

In Algorithm 2, the matrix *M* for *T* is generated in Step 2 with time complexity *O*(|*U*|<sup>2</sup>). The time complexity of Step 3 for obtaining the core attribute set is *O*(|*C*||*U*|<sup>2</sup>). The time complexity of calling Algorithm 1 in Steps 5 and 9 is *O*(|*C*|<sup>2</sup>|*U*|<sup>2</sup>). The loop from Step 7 to Step 14 runs |*R* − *I*(*C*)| times, and the procedure from Step 10 to Step 12 costs *O*(|*R* − *I*(*C*)|) time; thus, the time complexity of Steps 7–14 is *max*(*O*(|*C*|<sup>2</sup>|*U*|<sup>2</sup>|*R* − *I*(*C*)|), *O*(|*R* − *I*(*C*)|<sup>2</sup>)). The time complexity of removing redundant reductions in Step 15 is *O*(|*R* − *I*(*C*)|<sup>2</sup>). Thus, the total time complexity of Algorithm 2 is no more than *O*(|*C*|<sup>3</sup>|*U*|<sup>2</sup>). It can be seen that the number of attributes has the greater influence on the time complexity of Algorithm 2.


### **4. Experimental Results**

We carried out multi-reduction experiments on 10 datasets from the UCI Machine Learning Repository to show the superiority of DMRA over PSORA in execution time and classification accuracy. DMRA was then used to obtain multiple brain functional connection pathways from brain functional magnetic resonance imaging (fMRI) data.

### *4.1. Test and Comparative Experiments*

To illustrate the effectiveness of the proposed algorithm, we carried out multi-reduction experiments on 10 well-known benchmark datasets from the UCI Machine Learning Repository, listed in Table 1. Datasets such as Glass, Heart and Iris are frequently used to test rough set methods; some newer datasets (e.g., Breast Tissue and SPECT Heart) were also considered in our experiments. The numbers of core attributes and reductions are also listed in Table 1. We then compared the run time for obtaining the first reduction of DMRA with that of the particle swarm optimization based reduction algorithm (PSORA) [17]; the results are shown in Figure 1. We also compared the classification accuracy of DMRA and PSORA (Table 2).


**Table 1.** Results of DMRA on 10 benchmark datasets.

**Figure 1.** The comparison of execution time between DMRA and PSORA.

As shown in Figure 1, the run time for obtaining the first reduction by DMRA was always shorter than that of PSORA. The more attributes a dataset contained, the more obvious the speed advantage of DMRA was. For datasets with fewer attributes, namely Datasets 2, 6 and 7, there was little difference in the running times of the two algorithms. However, for datasets with more attributes, namely Datasets 5, 9 and 10, DMRA had an obvious time advantage. Brain data usually contain many attributes; thus, the proposed DMRA can obtain reduction results more quickly than PSORA on brain data.

In Table 2, we list the highest, lowest and average accuracy rates of the different multi-reductions obtained by DMRA, and compare these accuracy rates with those of the raw data and PSORA. Whether DMRA or PSORA was used, the classification accuracy rate was improved, and the best classification accuracy was always obtained by a reduction found by DMRA. The average accuracy rate of the multi-reductions obtained by DMRA was higher than that of PSORA on all datasets. Thus, the proposed DMRA was superior to PSORA in both execution time and classification accuracy.


**Table 2.** The comparison of classification accuracy.

Next, we applied the DMRA algorithm to the analysis of multiple brain functional connection pathways.

### *4.2. Experimental Design*

Brain functional magnetic resonance images were acquired using a 3.0 T Siemens Magnetom Vision scanner on 21 young subjects (12 men and 9 women, aged from 17 to 20 years). All subjects were recruited from undergraduate students, and informed consent was obtained before their participation. The cognitive task was a memory experiment, and the stimuli were presented in a block design [29].

There were two kinds of stimuli in the experiment: images and words. The subjects were asked to remember the stimuli shown in Condition 1 and to determine whether each stimulus shown in Condition 2 had appeared in Condition 1. The block design is shown in Figure 2. Conditions 1 and 2 lasted 12 s and 10 s, respectively, and the rest condition lasted 10 s so that the subjects could relax.

**Figure 2.** Block design of memory cognitive tasks.

The Brodmann Area (BA) system [30], originally defined and numbered by the German anatomist Korbinian Brodmann based on the cytoarchitectural organization of neurons, was used in this study. In the BA system, each hemisphere of the brain is divided into 52 areas. According to our cognitive tasks, we selected 16 areas (BA4, BA6, BA17, BA18, BA19, BA22, BA27, BA37, BA38, BA39, BA40, BA41, BA42, BA44, BA45, and BA46) as the regions of interest (ROIs), as shown in Figure 3. For simplicity in the rough set analysis, the brain areas are denoted by the corresponding attribute labels in Table 3.


**Figure 3.** The coronal view of 16 BA areas.

We used statistical parametric mapping (SPM) [31] to obtain the activated brain areas from the fMRI images and counted the number of activated voxels in every BA. In SPM, the data preprocessing covered the following steps: slice-time correction, head motion correction, standardization and Gaussian smoothing. Then, the general linear model (GLM) was employed to extract the activated voxels. After ascertaining the positions of the activated voxels through Student's *t*-test, the number of activated voxels could be determined within every BA. The cognitive decision table *T* could then be built, in which the condition attributes were the 16 BAs and the value of each object on an attribute was the number of activated voxels.

We used the cognitive decision table *T* as the input of Algorithm 2. The attribute set *R* = ∅ and the multi-reduction set *RED* = ∅ were initialized first, and the binary discernibility matrix *M* was obtained by Equation (4). We then obtained the core attribute set *R* = *I*(*C*) = {*BA*17, *BA*19, *BA*27, *BA*38, *BA*39, *BA*41, *BA*42, *BA*45} by Equation (6). Algorithm 1 was used to obtain the first reduction *R* = {*BA*41, *BA*40, *BA*19, *BA*38, *BA*17, *BA*27, *BA*37, *BA*39, *BA*45, *BA*42}. The set of multi-reductions became *RED* = *RED* ∪ {*R*}. Then, the non-core attributes {*BA*37, *BA*40} in *R* were replaced according to Steps 7–14 in Algorithm 2. At the end of the algorithm cycle, *RED*, which contained three reduction results, was output. *RED* provides the attribute (namely BA) combinations and their correlations, which describe the multiple brain cognitive functional connection pathways.

### *4.3. Discussion of Experimental Results*

Three reductions in *RED* were obtained by the DMRA algorithm from the cognitive decision table, as shown in Table 4. The values in the table represent the order of the attributes in each reduction, and "-" denotes a removed attribute. The connection pattern of a brain functional pathway depends on the order of the attributes in its reduction.


**Table 4.** Multi-reductions in *RED*.

Considering the three reductions obtained by our algorithm (Table 4), BA17, BA19, BA27, BA38, BA39, BA41, BA42 and BA45 were common to all of them. We regard these eight brain areas as the core attributes, which are closely related to the memory behavior of the brain. BA27, BA38, BA41 and BA42 are in the temporal lobe, which is related to memory. BA17 and BA19 are in the occipital lobe, which is related to vision. There are also areas related to language, namely BA39 and BA45. Reductions 1 and 2 contain BA40, but in Reduction 3, BA46 in the frontal cortex replaces BA40 at the same order position. This suggests that BA40 and BA46 may have similar effects in the brain and can sometimes substitute for each other. The coronal position figures of the three reductions for brain functional connection pathways are shown in Figures 4–6, respectively. In these figures, the common brain areas are highlighted in blue, BA40 or BA46 is highlighted in red, and the last area in each reduction is highlighted in yellow.

According to the three multi-reductions, three brain functional connection pathways about memory can be formed by the order of attributes in Table 4 as follows.


As an example, the functional connection pathway derived from the first reduction in Table 4 is shown in Figure 7. The different sizes of the circles represent the activation intensities of the brain areas.

**Figure 4.** The coronal position figures of Reduction 1.

**Figure 5.** The coronal position figures of Reduction 2.

**Figure 6.** The coronal position figures of Reduction 3.

**Figure 7.** The functional connection pathway derived from Reduction 1: (**a**) left hemisphere; (**b**) right hemisphere.

### **5. Conclusions**

In this paper, we propose a dynamic multi-reduction approach in which a binary discernibility matrix is formulated from the decision table in rough set theory. Only the attributes that can discern object pairs are considered, and attribute importance is measured from the discernibility matrix. The superiority of DMRA in execution time and classification accuracy is shown by testing on benchmark datasets and comparing with PSORA. The experiments show that the proposed DMRA can effectively deal with attribute reduction of numerical data: it not only obtains accurate multi-reductions, but also reduces the computational time. The more attributes the dataset contains, the more obvious the advantage of our algorithm over the traditional algorithm. DMRA was then applied to analyze brain functional imaging. The regions of interest and their activation features under cognitive tasks were transformed into a decision table. Finally, eight BAs closely related to the memory behavior of the brain and three brain functional connection pathways related to memory were obtained, whereas only one brain functional connection pathway can be obtained by previous approaches. The multi-reduction theory thus provides a comprehensive analysis approach for knowledge discovery in brain functional imaging and could have a significant influence on brain functional connection analysis.

**Author Contributions:** Conceptualization, G.D. and C.Y.; Formal analysis, T.J.; Investigation, G.D.; Software, Y.L.; and writing—review and editing, G.D. and G.B.M.

**Funding:** This work was partly supported by the National Natural Science Foundation of China (Grant Nos. 61472058, 61602086 and 61772102).

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## **A Scientific Decision Framework for Cloud Vendor Prioritization under Probabilistic Linguistic Term Set Context with Unknown**/**Partial Weight Information**

### **R. Sivagami 1, K. S. Ravichandran 1, R. Krishankumar 1, V. Sangeetha 1, Samarjit Kar 2,\*, Xiao-Zhi Gao <sup>3</sup> and Dragan Pamucar <sup>4</sup>**


Received: 22 April 2019; Accepted: 15 May 2019; Published: 17 May 2019

**Abstract:** With the tremendous growth in the number of cloud vendors (CVs), CV prioritization has become a complex decision-making problem. Previous studies on CV selection use functional and non-functional attributes but lack an apt structure for managing uncertainty in preferences. Motivated by this challenge, in this paper, a scientific framework for the prioritization of CVs is proposed, which will help organizations make decisions on service usage. The probabilistic linguistic term set (PLTS) is adopted as the structure for preference information, as it manages uncertainty better by allowing partial ignorance of information. The decision makers' (DMs') relative importance is calculated using a programming model that properly exploits the partially known weight information, and the attribute weights are calculated using an extended statistical variance (SV) method. Further, the DMs' preferences are aggregated using a hybrid operator, and the CVs are prioritized using an extended COPRAS method under the PLTS context. Finally, a case study on CV prioritization is provided to validate the framework, and the results are compared with other methods to understand the strengths and weaknesses of the proposal.

**Keywords:** cloud vendors; COPRAS method; Muirhead mean; programming model; statistical variance

### **1. Introduction**

*Cloud computing* is a powerful internet-based paradigm that provides services to customers on demand. It is a self-contained, independent entity that provides hardware as well as software resources on demand [1]. The three prominent categories of services offered in the cloud are infrastructure as a service (IaaS), software as a service (SaaS), and platform as a service (PaaS), which in general are called X as a service (XaaS) [2]. A survey on cloud technologies [3] forecasted that, by 2020, almost 50% of government sectors would migrate to cloud paradigms for their daily activities. Further, IDC (www.idc.com) predicted that almost 70% of software revenue would come from cloud code, and ENISA (www.enisa.europa.eu) found that 68% of organizations consider the cloud a feasible alternative to traditional IT support.

Although these surveys present the attractive side of the cloud, Mondal et al. [4] presented a counter-analysis and argued that more than 45% of organizations are still hesitant to use cloud paradigms. Buyya et al. [2] claimed that the cloud can be viewed as a basic amenity, like water or gas, which can be rented or purchased. As organizations have decided to migrate to cloud technology,

choosing a suitable CV is a crucial task. Generally, CVs are selected based on the quality of service (QoS) attributes that satisfy the needs of customers. However, a scientific framework for the prioritization of CVs is still an unresolved problem and, owing to the numerous CVs and trade-offs between QoS attributes in the market, the challenge becomes substantial, creating the need for a systematic decision framework [5]. As no two CVs are the same, the decision-making process is complicated.

Many researchers have attempted to develop a systematic approach for selecting suitable CVs that satisfy the needs of an organization. From the literature, it can be observed that most of the established scientific frameworks fall under three categories, namely *crisp data with multi-criteria decision-making (MCDM)*, *fuzzy data with MCDM*, and *other optimization and similarity-based methods*.

### • *Crisp data with MCDM methods*

Kumar et al. [6,7] designed a hybrid method for CV selection using the analytic hierarchy process (AHP) and the technique for order of preference by similarity to ideal solution (TOPSIS). AHP was used to calculate the weights of the attributes, and the TOPSIS method was adopted for prioritization. Rădulescu et al. [8] proposed a systematic approach for ranking vendors based on simple additive weighting (SAW) and a modified TOPSIS. The SAW method was used to calculate the weights of the attributes and TOPSIS for prioritization.

Under the *crisp data with MCDM* category, the popular methods were AHP and TOPSIS. Generally, crisp data are difficult to obtain, and the uncertainty and vagueness in the preference elicitation process are not properly captured. Although the AHP method calculates the weights of the attributes, it is complex because of the pairwise comparisons and yields unreasonable weight values without capturing the hesitation of the decision makers. Further, the TOPSIS method determines the ranking of CVs by considering a rank index measure, which produces irrational rankings because it ignores the relative distance measure. Moreover, the TOPSIS method suffers from the rank reversal issue [9].

### • *Fuzzy data with MCDM methods*

Kumar et al. and Patiniotakis et al. [10,11] proposed fuzzy AHP methods for CV selection under uncertainty, considering subjective and objective attributes in the analysis of CVs. A scientific decision framework was proposed for CV selection by integrating different MCDM methods under subjective and objective attribute preferences [12]. Wagle et al. and Krishankumar et al. [13–15] proposed decision frameworks for CV selection under the intuitionistic fuzzy set (IFS) context. Wagle et al. [13] adopted a new ranking algorithm that considers cloud users, auditors, and service delivery measurements. Krishankumar et al. [14,15] proposed hybrid methods for prioritization that aggregate preferences, calculate the attributes' weights, and rank the CVs.

Although these methods handle uncertainty to some extent, the originality of the preference information is not completely retained. Further, the weight estimation methods in this category do not capture hesitation properly, and the aggregation of preferences ignores the calculation of the decision makers' weights and the interrelationships among attributes.

### • *Other methods*

In this category, literature related to optimization and similarity-based methods is discussed. Somu et al. [16] devised the hypergraph-based binary fruit fly optimization algorithm (HBFFOA) to estimate trustworthiness and rank the cloud alternatives by considering both subjective and objective assessments of the CVs. Ding et al., Zeng et al., and Pan et al. [17–19] proposed frameworks based on similarity index measures. Ding et al. [17] presented a two-step ranking system based on the Kendall rank correlation coefficient (KRCC) and the significance of neighbor similarity, while Zeng et al. [18] designed a recommender system that adopts a collaborative filtering technique using the Spearman coefficient to predict QoS ratings and rankings. Pan et al. [19] proposed an approach that measures trust degrees and estimates similarity based on the Jaccard similarity and

Pearson correlation coefficients. Ghosh et al. [20] developed a new algorithm for selecting CVs using two QoS parameters, namely trust and competency.

The methods in this category address the problem of CV selection from different perspectives, but they suffer from the problem of optimal parameter setting, which complicates the decision-making process. Moreover, uncertainty and vagueness are not properly handled by these methods.

Table 1 presents a summary of recent studies on CV selection from the multi-attribute group decision-making (MAGDM) perspective. Motivated by the critical observations made above, the following key challenges are identified:


Motivated by these challenges and to address the same, some contributions are made:


The remainder of the paper is organized as follows. Section 2 presents the basic concepts of the linguistic term set (LTS), HFLTS, and PLTS. Section 3 provides the proposed decision framework, the core research focus, which consists of methods for attribute and DM weight calculation, preference aggregation, and prioritization. Section 4 contains a numerical example of CV selection, and Section 5 conducts a comparative analysis of the proposed and state-of-the-art methods. Finally, Section 6 provides concluding remarks and future research directions.



### **2. Basic Concepts of LTS, HFLTS, and PLTS**

**Definition 1.** *Let S* = {*sv*|*v* = 0, 1, ... , *t*} *be an LTS, where t is a positive integer and s*<sub>0</sub> *and st are the initial and final terms. The following properties hold:*

*If r* > *u, then sr* > *su; the negation of sr is su with r* + *u* = *t. Zadeh [24] introduced the initial idea of a linguistic variable, and Herrera et al. [25–27] made apt use of it in group decision-making.*

**Definition 2.** *Consider S as before and HFLTS is given by,*

$$H\_S = \left\{ \left\langle x, h\_{H\_S}(x) \right\rangle \,\middle|\, x \in X \right\} \tag{1}$$

*where h<sub>HS</sub>*(*x*) = *h*(*x*) *denotes a set of linguistic terms from S*.

**Definition 3.** *Consider S as before and PLTS is given by,*

$$L(p) = \left\{ L^k(p^k) \middle| L^k \in \mathcal{S}, k = 1, 2, \dots, \#L(p), 0 \le p^k \le 1, \sum\_k p^k \le 1 \right\} \tag{2}$$

*For convenience, L<sub>i</sub>*(*p*) = *L<sub>i</sub><sup>k</sup>*(*p<sub>i</sub><sup>k</sup>*) ∀*i* > 0 *is called a probabilistic linguistic element (PLE), and a collection of such PLEs forms the PLTS L*(*p*)*.*

**Definition 4.** *Consider two PLEs h*<sup>1</sup> *and h*<sup>2</sup> *as defined before; then, the operational laws are given by,*

$$L\_1(p) \bigoplus \mathcal{L}\_2(p) = \mathcal{g}^{-1}(\mathcal{g}(h\_1) + \mathcal{g}(h\_2)) \tag{3}$$

$$L\_1(p)\bigotimes \mathcal{L}\_2(p) = \mathcal{g}^{-1}(\mathcal{g}(h\_1) \times \mathcal{g}(h\_2))\tag{4}$$

*where g and g*−<sup>1</sup> *are adopted from [28].*

### **3. Proposed Decision Framework for CV Selection**

Before presenting the core methods of the decision framework, it is worth discussing the specifics of the problem being addressed. There are *l* DMs, each forming a decision matrix of order *m* × *n*, where *m* is the number of CVs and *n* is the number of attributes. PLTS information is used for rating the CVs on each attribute. Initially, the *l* decision matrices of order *m* × *n* are aggregated into a single decision matrix of order *m* × *n*. During aggregation, the interrelationships among attributes are considered, and the weight of each DM is calculated systematically. Then, the attributes' weights are calculated from a matrix of order *l* × *n*, yielding a weight vector of order 1 × *n*. Using this vector and the aggregated matrix, a vector of order 1 × *m* is formed (by the ranking method) and used for the prioritization of CVs.

### *3.1. Proposed Attributes' Weight Calculation Method*

This section puts forward a new method for calculating the attribute weights under the PLTS context. The idea is to extend the SV method to PLTSs. Previous weight calculation methods, viz., the analytic hierarchy process (AHP), optimization models, entropy measures, etc., suffer from the following weaknesses: (i) complex implementation procedures, (ii) unreasonable weight values, and (iii) ineffective capturing of the DMs' hesitation. To address these weaknesses, Liu et al. used the SV method for weight calculation, which enjoys the following advantages: (i) it is simple and easy to implement, (ii) it produces reasonable weight values by considering all data points before determining the distribution, and (iii) it captures hesitation effectively by assigning high weights to those attributes that cause confusion to DMs during preference elicitation.

Motivated by the strength of the SV method, in this paper, we extend the SV method to PLTS context. The procedure for calculating weights when the information about attributes is completely unknown is given below:

**Step 1:** Form a weight calculation matrix with PLTS information of order *l* × *n* where *l* denotes the number of DMs and *n* denotes the number of attributes.

**Step 2:** Transform the PLTS information into a single value matrix by using Equation (5).

$$\text{sval}\_{ij} = \sum\_{k=1}^{\#L(p)} v^k p^k \tag{5}$$

where *v<sup>k</sup>* is the subscript of the *k*th linguistic term, #*L*(*p*) is the total number of instances, and *p<sup>k</sup>* is the occurring probability associated with the *k*th linguistic term.

**Step 3:** Determine the SV for each attribute by using Equation (6). The SV is a vector of order 1 × *n*.

$$\sigma\_j^2 = \frac{\sum\_{l=1}^{\#DM} \{ \text{sval}\_{lj} - \overline{\text{sval}\_j} \}^2}{\#DM - 1} \tag{6}$$

where *svalj* is the mean value of the *j th* attribute, σ<sup>2</sup> *<sup>j</sup>* is the SV of the *j th* attribute, and #*DM* is the total number of DMs.

**Step 4:** Normalize the SV from step 3 to obtain weights of attributes. Use Equation (7) to obtain the weight vector of order 1 × *n*.

$$w\_{\dot{j}} = \frac{\sigma\_{\dot{j}}^2}{\sum\_{\dot{j}} \sigma\_{\dot{j}}^2} \tag{7}$$

where *wj* is the weight of the *j th* attribute.
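Steps 1–4 (Equations (5)–(7)) can be sketched as follows. The representation of each PLE as a list of (term subscript, probability) pairs is an assumption made for illustration.

```python
def sval(ple):
    """Step 2, Eq. (5): expected subscript value of one PLE."""
    return sum(v * p for v, p in ple)

def sv_weights(matrix):
    """Steps 3-4, Eqs. (6)-(7): per-attribute sample variance, normalized
    into attribute weights."""
    n_dm, n_attr = len(matrix), len(matrix[0])
    scores = [[sval(matrix[l][j]) for j in range(n_attr)] for l in range(n_dm)]
    var = []
    for j in range(n_attr):
        col = [scores[l][j] for l in range(n_dm)]
        mean = sum(col) / n_dm
        var.append(sum((x - mean) ** 2 for x in col) / (n_dm - 1))
    total = sum(var)
    return [v / total for v in var]

# Hypothetical example: three DMs rating two attributes with PLTS preferences.
M = [
    [[(2, 0.6), (3, 0.4)], [(1, 1.0)]],
    [[(2, 1.0)],           [(3, 0.5), (4, 0.5)]],
    [[(3, 0.8), (2, 0.2)], [(2, 1.0)]],
]
w = sv_weights(M)
print([round(x, 3) for x in w])  # [0.092, 0.908]
```

The second attribute causes more disagreement (hesitation) among the DMs, so the SV method assigns it the larger weight, as the advantages above suggest.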

### *3.2. Proposed DMs' Weight Calculation Method*

This section proposes a new method for DMs' weight calculation under the PLTS context. Generally, DMs' weight values are directly assigned, which causes inaccuracies in the decision-making process and is prone to imprecision due to external factors like time, cost, and environmental pressure [29]. Motivated by this issue, researchers have proposed methods for DMs' weight calculation. Koksalmis and Kabak [30] conducted a detailed review of different methods for DMs' weight calculation and argued that the weights of DMs must be systematically calculated to reduce inaccuracies and imprecision in the decision-making process.

Motivated by this claim, in this paper, a new programming model is proposed for DMs' weight calculation under PLTS context. To the best of our knowledge, this is the first study that calculates DMs' weights with PLTS information. Moreover, in this paper, we make use of the partially known information about each DM to calculate the weights. Some advantages of the proposed method are: (i) it provides rational weight values, which reduce inaccuracies in the decision-making process and (ii) utilizes the partial information about each DM effectively to calculate weights. Attracted by these advantages, the procedure is presented below for DMs' weight calculation.

**Step 1:** Transform the decision matrix from each DM into weighted decision matrices by using Equation (8).

$$L(p) = \left\{ \upsilon^k \left( 1 - \left( 1 - p^k \right)^{w\_{\dot{\gamma}}} \right) \right\} \tag{8}$$

where *v<sup>k</sup>* is the subscript of the *kth* linguistic term, *pk* is the probability associated with the *kth* linguistic term, and *wj* is the weight of the *j th* attribute.

*Symmetry* **2019**, *11*, 682

Equation (8) is applied to all the elements of the decision matrix from each DM. Now all matrices are transformed into weighted decision matrices.

**Step 2:** Calculate positive ideal solution (PIS) and negative ideal solution (NIS) from the decision matrices obtained from step 1. The PIS and NIS values are calculated for each attribute, and it is given by Equations (9) and (10).

$$h\_j^{+} = \max\_{i,\; j \in \text{benefit}} \left( \sum\_{k=1}^{\#L(p)} v^k p^k \right) \text{ or } \min\_{i,\; j \in \text{cost}} \left( \sum\_{k=1}^{\#L(p)} v^k p^k \right) \tag{9}$$

$$h\_j^{-} = \max\_{i,\; j \in \text{cost}} \left( \sum\_{k=1}^{\#L(p)} v^k p^k \right) \text{ or } \min\_{i,\; j \in \text{benefit}} \left( \sum\_{k=1}^{\#L(p)} v^k p^k \right) \tag{10}$$

where *h*<sup>+</sup> is the PIS and *h*<sup>−</sup> is the NIS.

Equations (9) and (10) calculate PIS and NIS for each attribute and the PLTS information corresponding to the respective obtained value is considered for further process.
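Equations (9) and (10) can be sketched as a per-attribute selection by expected subscript score. The attribute type ("benefit" or "cost") and the PLE representation as (subscript, probability) pairs are assumptions for illustration.

```python
def sval(ple):
    """Expected subscript score of a PLE: sum of v^k * p^k."""
    return sum(v * p for v, p in ple)

def pis_nis(column, kind):
    """Return (h_plus, h_minus) for one attribute column of PLEs.
    Benefit attributes: max score is PIS, min score is NIS; cost
    attributes swap the two (Eqs. 9-10)."""
    best = max(column, key=sval)
    worst = min(column, key=sval)
    return (best, worst) if kind == "benefit" else (worst, best)

col = [[(2, 0.6), (3, 0.4)], [(4, 1.0)], [(1, 0.5), (2, 0.5)]]
print(pis_nis(col, "benefit"))  # ([(4, 1.0)], [(1, 0.5), (2, 0.5)])
```

The PLE attaining the extreme score (not just the score itself) is kept, matching the remark that the corresponding PLTS information is carried into the next step.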

**Step 3:** A programming model is proposed for determining weights of DMs. This model is solved using MATLAB® optimization toolbox for calculating the weights of DMs.

**Model 1.**

$$\min Z = \sum\_{l=1}^{\#DM} dw\_l \sum\_{i=1}^{m} \sum\_{j=1}^{n} \left( d\left( L\_{ij}(p), h\_j^{+} \right) - d\left( L\_{ij}(p), h\_j^{-} \right) \right)^2$$

*Subject to*

$$0 \le dw\_l \le 1$$

$$\sum\_l dw\_l = 1$$

*Here d*(*a*, *b*) *is calculated using Equation (11) with a and b being any two PLEs.*

$$d(a,b) = \sqrt{\sum\_{k=1}^{\#L(p)} \left( \left( \upsilon\_a^k p\_a^k \right) - \left( \upsilon\_b^k p\_b^k \right) \right)^2} \tag{11}$$

*Model 1 is solved by properly making use of the partial information about each DM which provides the weight for each DM.*
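The distance measure of Equation (11), which drives the objective of Model 1, can be sketched as below. It assumes both PLEs are already aligned as equal-length lists of (subscript, probability) pairs; the padding or normalization of unequal PLEs is omitted.

```python
import math

def d(a, b):
    """Eq. (11): Euclidean-style distance between two aligned PLEs,
    comparing the products v^k * p^k term by term."""
    return math.sqrt(sum((va * pa - vb * pb) ** 2
                         for (va, pa), (vb, pb) in zip(a, b)))

print(d([(3, 0.5), (4, 0.5)], [(2, 1.0), (4, 0.0)]))  # ≈ 2.0616
```

In Model 1, this distance is evaluated against *h*<sup>+</sup> and *h*<sup>−</sup> for every matrix entry, and the resulting quadratic objective is minimized over the DM weights *dw<sub>l</sub>*.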

### *3.3. Proposed Hybrid Aggregation Operator under PLTS Context*

This section presents a hybrid aggregation operator for aggregating PLEs. The operator has two stages. In the first stage, the linguistic terms are aggregated, and in the second stage, the occurring probability values associated with each linguistic term are aggregated. A new procedure is proposed for aggregating the linguistic term. The MM operator is extended under the PLTS context for aggregating the occurring probability value associated with each linguistic term.

Previous aggregation operators under PLTS context do not capture the interrelationship among attributes and produce virtual sets. Motivated by this challenge and to address the same, in this paper, a hybrid operator is proposed, which captures the inter-relationship between attributes properly and avoids the formation of virtual sets.

**Definition 5.** *The aggregation of PLEs by using the proposed hybrid operator is a mapping D<sup>n</sup>* → *D, and it is given by,*

$$Hybrid\left(v\_1^k, v\_2^k, \dots, v\_{\#DM}^k\right) = \begin{cases} \text{condition 1,} & \text{if the frequency of linguistic term occurrence is unique} \\ \text{condition 2,} & \text{if the frequency of linguistic term occurrence is not unique} \end{cases} \tag{12}$$

*where condition 1 calculates the mean value of the linguistic terms and then applies round-off to avoid virtual sets; condition 2 calculates the frequency of each linguistic term, and the term with the maximum frequency is chosen as the aggregated value.*

$$Hybrid\left(p\_1^k, p\_2^k, \dots, p\_{\#DM}^k\right) = \left(\prod\_{l=1}^{\#DM} \left(\prod\_{q=1}^{\#DM} \left(p\_l^k\right)^{\lambda\_q}\right)^{\text{dv}\_l}\right)^{\frac{1}{\sum\_{q} \lambda\_q}}\tag{13}$$

*where dwl is the weight of the l th DM obtained from* Section 3.2*,* λ1, λ2, ... , λ#*DM are the risk appetite values associated with each DM which can have possible values from the set* {1, 2, ... , #*DM*}*.*

*Here Equation (12) is used to aggregate the linguistic term, and Equation (13) is used to aggregate the associated occurring probability values.*
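One possible reading of Equations (12)–(13) is sketched below, under stated assumptions: terms are aggregated by majority vote, falling back to a rounded mean when every term is distinct (which avoids virtual terms), and probabilities are aggregated by a weighted geometric mean with the risk-appetite exponents λ*<sub>q</sub>*.

```python
from collections import Counter

def hybrid_term(terms):
    """Eq. (12) sketch: majority term if one exists, else rounded mean."""
    top, freq = Counter(terms).most_common(1)[0]
    if freq > 1:                # some term occurs most often: keep it
        return top
    return round(sum(terms) / len(terms))  # all unique: round-off the mean

def hybrid_prob(probs, dm_weights, lambdas):
    """Eq. (13) sketch: nested products over DM weights dw_l and
    risk-appetite values lambda_q, normalized by sum of lambdas."""
    lam_sum = sum(lambdas)
    prod = 1.0
    for p_l, dw_l in zip(probs, dm_weights):
        inner = 1.0
        for lam in lambdas:
            inner *= p_l ** lam
        prod *= inner ** dw_l
    return prod ** (1.0 / lam_sum)

print(hybrid_term([2, 3, 3]))  # 3
print(round(hybrid_prob([0.6, 0.8, 0.7], [0.3, 0.4, 0.3], [1, 2, 3]), 4))
```

Note that the aggregated probability stays within [0, 1] for valid inputs, consistent with the Bounded property used in the proof of Theorem 1.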

### **Property 1: Commutativity**

If *L*<sup>∗</sup><sub>*l*</sub>(*p*) ∀*l* = 1, 2, ... , #*DM* is any permutation of the PLEs, then *Hybrid*(*L*<sup>∗</sup><sub>1</sub>(*p*), *L*<sup>∗</sup><sub>2</sub>(*p*), ... , *L*<sup>∗</sup><sub>#*DM*</sub>(*p*)) = *Hybrid*(*L*<sub>1</sub>(*p*), *L*<sub>2</sub>(*p*), ... , *L*<sub>#*DM*</sub>(*p*)).

### **Property 2: Bounded**

If $L\_l(p)$, $\forall l = 1, 2, \dots, \#DM$, is a collection of PLEs, then $L^-(p) \le Hybrid\left(L\_1(p), L\_2(p), \dots, L\_{\#DM}(p)\right) \le L^+(p)$. Here, $L^-(p) = \min\_i \left(\sum\_{k=1}^{\#L\_i(p)} v\_i^k p\_i^k\right)$ and $L^+(p) = \max\_i \left(\sum\_{k=1}^{\#L\_i(p)} v\_i^k p\_i^k\right)$.

### **Property 3: Idempotent**

If $L\_1(p) = L\_2(p) = \dots = L\_{\#DM}(p) = L(p)$, then $Hybrid\left(L\_1(p), L\_2(p), \dots, L\_{\#DM}(p)\right) = L(p)$.

### **Property 4: Monotonicity**

Consider a set of PLEs $L\_l^{\ast}(p)$, $\forall l = 1, 2, \dots, \#DM$, such that $L\_l^{\ast}(p) \ge L\_l(p)$ for all $l$; then $Hybrid\left(L\_1^{\ast}(p), L\_2^{\ast}(p), \dots, L\_{\#DM}^{\ast}(p)\right) \ge Hybrid\left(L\_1(p), L\_2(p), \dots, L\_{\#DM}(p)\right)$.

**Theorem 1.** *The aggregation of PLEs by using the proposed hybrid operator produces a PLE*.

**Proof.** The proposed hybrid operator aggregates PLEs in two stages. In the first stage, linguistic terms are aggregated, and in the second stage, the associated occurring probability values are aggregated. Clearly, from Equation (12), no virtual element is obtained, and hence the linguistic information is rationally aggregated. It remains to prove that the aggregation of the associated occurring probability values yields a probability value. For this, we make use of the Bounded property, which shows that the aggregated value lies within the lower and upper limits. By extending the property,

$$\text{we get } 0 \le \left(\prod\_{l=1}^{\#DM} \left(\prod\_{q=1}^{\#DM} \left(p\_l^k\right)^{\lambda\_q}\right)^{dw\_l}\right)^{\frac{1}{\sum\_{q} \lambda\_q}} \le 1.$$

Thus, $0 \le Hybrid\left(p\_1^k, p\_2^k, \dots, p\_{\#DM}^k\right) \le 1$ holds true. Combining the two stages, we can infer that the aggregation produces a PLE. □

Some advantages of the proposed hybrid operator are:


### *3.4. Extended COPRAS Method Under PLTS Context*

This section puts forward a new extension of the COPRAS method to the PLTS context. The initial idea for the COPRAS method was obtained from [31]; Zavadskas et al. [32] analyzed different MAGDM methods and described the importance of the COPRAS method in solving decision-making problems. COPRAS is a simple and straightforward method for ranking alternatives. It captures the direct and proportional relationship between alternatives and attributes through significance and utility degrees. Further, the COPRAS method can provide rankings from different angles, which promotes rational decision-making [31].

Motivated by these strengths, researchers used the COPRAS method for various decision-making problems. Zavadskas et al. [33,34] used the COPRAS method, with grey numbers for ranking project managers and contractors. Vahdani et al. [35] and Gorabe et al. [36] used the COPRAS method for robot selection in industries. Chatterjee et al. [37,38] and Nasab et al. [39] extended the COPRAS method for material selection. Chatterjee and Kar [40] extended COPRAS for Z-numbers and applied the same for renewable energy source selection. Yazdani et al. [41] presented a hybrid method for the green supplier selection by combining quality function deployment (QFD) and the COPRAS method.

From the analysis made above, it can be inferred that the COPRAS method is powerful for the prioritization of alternatives, yet its extension to the PLTS context has not been developed so far. Motivated by the advantages of the COPRAS method, in this paper it is extended to the PLTS context. The systematic procedure is given below:


**Step 1:** Calculate the index of the benefit-type attributes ($j = 1, 2, \dots, z$) for each alternative by using Equation (14).

$$P\_i = \bigoplus\_{j=1}^{z} w\_j v\_{ij}^k \times \bigoplus\_{j=1}^{z} \left(1 - \left(1 - p\_{ij}^k\right)^{w\_j}\right) \tag{14}$$

**Step 2:** Calculate the corresponding index of the cost-type attributes ($j = z + 1, \dots, n$) for each alternative by using Equation (15).

$$R\_i = \bigoplus\_{j=z+1}^{n} w\_j v\_{ij}^k \times \bigoplus\_{j=z+1}^{n} \left(1 - \left(1 - p\_{ij}^k\right)^{w\_j}\right) \tag{15}$$

**Step 3:** Determine the prioritization order by using Equation (16). The relative significance value $Q\_i$ is calculated for each alternative.

$$Q\_i = \varphi P\_i + (1 - \varphi) \frac{\sum\_{i} R\_i}{R\_i \sum\_{i} \left(\frac{1}{R\_i}\right)} \tag{16}$$

where ϕ is the strategy value in the range 0 to 1.
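The COPRAS arithmetic of Equations (14)–(16) can be sketched numerically. The sketch below assumes the PLTS entries have already been collapsed to crisp scores (the paper operates on PLTS directly); the matrix, weights, and benefit mask are hypothetical illustration data.

```python
import numpy as np

def copras_rank(X, w, benefit, phi=0.5):
    """COPRAS backbone of Equations (14)-(16) on crisp scores (sketch).

    X: m x n score matrix, w: attribute weights, benefit: boolean mask of
    benefit-type attributes, phi: strategy value in [0, 1]."""
    W = X * w                          # weighted decision matrix
    P = W[:, benefit].sum(axis=1)      # Eq. (14): benefit-attribute index
    R = W[:, ~benefit].sum(axis=1)     # Eq. (15): cost-attribute index
    # Eq. (16): relative significance with strategy value phi
    Q = phi * P + (1 - phi) * R.sum() / (R * (1.0 / R).sum())
    return Q, np.argsort(-Q)           # higher Q ranks first

# hypothetical crisp scores for 3 alternatives over 3 attributes
X = np.array([[3.0, 2.0, 4.0], [4.0, 3.0, 2.0], [2.0, 4.0, 3.0]])
w = np.array([0.4, 0.3, 0.3])
benefit = np.array([True, True, False])   # last attribute is cost type
Q, order = copras_rank(X, w, benefit)
```

Note the direct and proportional structure: a larger benefit index $P\_i$ raises $Q\_i$ linearly, while a larger cost index $R\_i$ lowers it through the reciprocal term.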

Before demonstrating the numerical example of CV selection, it is better to present the diagrammatic representation of the decision framework to clearly understand the working of the proposed decision framework (refer Figure 1 for clarity).

Figure 1 depicts the overall workflow of the proposed decision framework for CV selection. The framework is used to initially determine the weights of the attributes. Then, the DMs' weights are determined, and the preferences are aggregated, using these weight vectors. Further, the CVs are prioritized by using a ranking method under PLTS context. The aggregated matrix and the attributes' weight values are taken as input for prioritization. Finally, the proposed framework is validated by using a numerical example which is presented below.

**Figure 1.** Proposed decision framework for cloud vendor selection.

### **4. Numerical Example of Cloud Vendor Selection**

This section demonstrates the practical use of the proposed framework by prioritizing CVs. An organization in Chennai wants to attain global standards, and so it considers migrating its infrastructural needs to the cloud. This frees enough time and resources for planning the core developmental activities. To achieve the desired objective, the board decides to constitute a panel of three DMs, namely, a senior technical officer *d*<sub>1</sub>, an audit and finance officer *d*<sub>2</sub>, and a senior computer engineer *d*<sub>3</sub>, who provide their preferences over each CV for a specific attribute.

The panel analyzed different CVs that offer IaaS and, by adopting the Delphi method, picked 13 CVs for analysis. Through repeated discussion and brainstorming, the panel finalized five CVs (A1, A2, A3, A4, A5) that actively offer IaaS. Then, the DMs made a literature analysis, and by the method of voting, six attributes were chosen for analysis. These six attributes (C1, C2, C3, C4, C5, C6) were chosen after a detailed discussion and analysis of various benchmark standards

namely, information communication technology service quality (ICTSQ), application performance index (APDEX), service measurement index (SMI), and ISO/IEC 9126 [42]. Along with these benchmarks, some literature reviews were also analyzed to obtain proper attributes for evaluation. All attributes were listed, and based on the scoring method, six were finalized: accountability, assurance, agility, performance, cost, and risk of the CV.

The systematic procedure for the proposed decision framework is given below:

**Step 1:** Start.

**Step 2:** Form three decision matrices of order 5 × 6 where five CVs are rated by using six attributes. The DMs use PLTS information for preference elicitation.

Table 2 depicts the PLTS information provided by each DM for rating CVs over a specific attribute. Each DM uses two instances to rate the CVs and associates an occurring probability value with each linguistic term. The LTS used for analysis is a 5-point Likert rating scale given by $S = \{s\_0 = \textit{very low}, s\_1 = \textit{low}, s\_2 = \textit{medium}, s\_3 = \textit{high}, s\_4 = \textit{very high}\}$.
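The PLTS preferences in Table 2 can be represented with a simple data structure. The sketch below is an illustrative encoding of one PLE on the 5-point scale; the encoding and the example rating are our own, not taken from the paper.

```python
# The 5-point LTS used in the example
S = ["very low", "low", "medium", "high", "very high"]

# One PLE as (term index, occurring probability) pairs: e.g. a DM rating a
# CV as {s3(0.6), s4(0.3)}, i.e. "high" with probability 0.6 and
# "very high" with probability 0.3 (two instances per rating)
ple = [(3, 0.6), (4, 0.3)]

# In a valid PLE the probabilities sum to at most 1
total_prob = sum(p for _, p in ple)
```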


**Table 2.** PLTS information provided by *d*<sup>1</sup>

**Step 3:** Form an attribute weight calculation matrix of order 3 × 6 where three DMs provide their preferences on each of the six attributes.

Table 3 presents the evaluation matrix for calculating the attributes' weights. Each DM provides his/her preference over each attribute, and using these preferences, the attributes' weights are calculated by the procedure given in Section 3.1. The mean value for each attribute is given by (2.23, 3.08, 1.82, 1.82, 2.03, 1.16), and the variance value for each attribute is given by (2.96, 0.34, 0.65, 0.86, 0.88, 1.16). By normalizing the variance values, we get the weights (0.17, 0.24, 0.14, 0.14, 0.16, 0.15).
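The mean–variance–normalize pipeline described above can be sketched as follows (Section 3.1 itself is not reproduced in this excerpt, so the use of plain statistical variance, and the sample ratings, are our assumptions).

```python
import statistics

def variance_weights(ratings_per_attribute):
    """Normalize the population variance of each attribute's DM ratings
    into a weight vector: more disagreement across DMs -> larger weight."""
    variances = [statistics.pvariance(col) for col in ratings_per_attribute]
    total = sum(variances)
    return [v / total for v in variances]

# hypothetical ratings by three DMs for each of three attributes
weights = variance_weights([[2, 4, 6], [3, 3, 3.5], [1, 5, 9]])
```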


**Table 3.** Attributes weight calculation matrix.

**Step 4:** Aggregate the three decision matrices by using the proposed aggregation operator (refer Section 3.3). The operator uses DMs' weights calculated from Section 3.2 for aggregation.

Table 4 depicts the weighted preference information of each DM which is obtained by using Equation (8). These values are used for calculating the PIS and NIS values for each attribute, and they are shown in Table 5. The attributes C1 to C4 are of benefit type, and the remaining are cost type attributes. By using Equations (9) and (10), the PIS and NIS values are calculated for each attribute, and they are used to determine the weights of the DMs.


**Table 4.** Weighted PLTS information of decision makers.


**Table 5.** Ideal solution for each attribute from decision makers' matrix.

By applying Model 1, we obtain the objective function, which is solved by using the optimization toolbox in MATLAB® to obtain the weight values. The objective function is given by 5.24*dw*<sub>1</sub> + 1.63*dw*<sub>2</sub> + 5.61*dw*<sub>3</sub>, and the constraints are given by *dw*<sub>1</sub> ≤ 0.35, *dw*<sub>2</sub> ≤ 0.35, and *dw*<sub>3</sub> ≤ 0.40. The weights of the DMs are given by *dw*<sub>1</sub> = 0.35, *dw*<sub>2</sub> = 0.35, and *dw*<sub>3</sub> = 0.30.
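Because Model 1 reduces here to a linear objective with box constraints plus, by implication of the reported weights, a unit-sum constraint (an assumption of ours, since the model itself is not restated in this excerpt), the reported solution can be reproduced with a simple greedy fill instead of the MATLAB® toolbox.

```python
def solve_model1(coeffs, upper_bounds):
    """Greedy sketch: minimize sum(c_l * dw_l) subject to 0 <= dw_l <= ub_l
    and sum(dw_l) = 1. With only box constraints and a unit-sum constraint,
    filling the cheapest coefficients first up to their bounds is optimal."""
    order = sorted(range(len(coeffs)), key=lambda l: coeffs[l])
    remaining, dw = 1.0, [0.0] * len(coeffs)
    for l in order:
        dw[l] = min(upper_bounds[l], remaining)
        remaining -= dw[l]
    return dw

# objective 5.24*dw1 + 1.63*dw2 + 5.61*dw3 with the stated upper bounds
dw = solve_model1([5.24, 1.63, 5.61], [0.35, 0.35, 0.40])
```

This reproduces the reported DM weights (0.35, 0.35, 0.30), which also confirms that Model 1 is a minimization: mass goes first to the smallest coefficient, 1.63.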

By using the DMs' weights calculated above, the hybrid aggregation operator aggregates the preferences, and it is shown in Table 6. The risk appetite values are taken as 2, 2, and 1.


**Table 6.** Aggregated PLTS preferences by using proposed hybrid operator.

**Step 5:** Results are compared with other methods, and the strengths and weaknesses are discussed in Section 5.

Table 7 depicts the parameter values of the COPRAS ranking method. The values are calculated for each CV, and at ϕ = 0.5, the CVs are prioritized by using the *Q* values for both equal and unequal weights. For unequal weights, the ranking order is given by A2 > A4 > A3 > A5 > A1 and for equal weights, the ranking order is given by A2 > A4 > A3 > A5 > A1.


**Table 7.** COPRAS parameters values for equal and unequal attributes' weights.

From Figure 2, it can be observed that the prioritization order of highly preferred CV does not change, even after adequate changes are made to the strategy values. For both equal and unequal attributes' weights, the prioritization order changes after 0.7, but the order of the CV that is ranked first remains unchanged, ensuring the stability of the proposed method.

**Figure 2.** Analysis of strategy values of decision-makers: Equal and unequal weights.

**Step 6:** End.

### **5. Comparative Investigation of Proposed Framework vs. Others**

This section deals with the comparative analysis of the proposed framework with other state-of-the-art methods. To ascertain homogeneity in the comparison, we consider other methods pertaining to MADM contexts. The methods considered for comparison are those of Garg et al. [1], Kumar et al. [6], Kumar et al. [7], and Liu et al. [12]. The factors for analysis are gathered from the literature and from intuition. Table 8 depicts the comparative analysis of the different methods.

From Table 8, some of the advantages of the proposed framework can be realized:



**Table 8.** Investigation of different factors: Proposed versus others.

To further realize the strength of the proposed framework, a simulation study is carried out to determine the processing time of different methods for different numbers of CVs. Initially, we formed random matrices of order *m* × *n* with PLTS information, where *m* is the number of CVs and *n* is the number of attributes, and we varied *m* step-wise by considering 300, 500, 3000, 5000, 30,000, and 50,000 CVs for analysis. These CVs are rated with respect to six attributes, whose weights are already determined (refer to the previous section). We calculated the processing time of the proposed ranking method and of three other methods, namely PLTS-based AHP [43], PLTS-based VIKOR [44], and PLTS-based TOPSIS [21]; the results are presented in Figure 3.

**Figure 3.** Simulation study for analyzing processing time: Proposed versus Others.

From Figure 3, it is evident that the proposed ranking method takes less time to execute than the other state-of-the-art methods. Also, as *m* grows, the state-of-the-art methods consume an enormous amount of time, whereas the proposed ranking method still executes in a reasonable amount of time for large *m*. The values presented in Figure 3 are mean values over 100 iterations.
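The simulation protocol described above can be sketched as a small timing harness; the matrix encoding and the stub ranking method below are our assumptions, standing in for the actual PLTS ranking methods.

```python
import random
import time

def time_ranking(method, m, n=6, iterations=5):
    """Build random m x n matrices of (term index, probability) pairs and
    average the wall-clock time the ranking method needs over several runs."""
    total = 0.0
    for _ in range(iterations):
        matrix = [[(random.randint(0, 4), random.random()) for _ in range(n)]
                  for _ in range(m)]
        start = time.perf_counter()
        method(matrix)
        total += time.perf_counter() - start
    return total / iterations

# hypothetical ranking stub (just sums probabilities) timed for m = 300 CVs
avg_seconds = time_ranking(lambda M: sum(p for row in M for _, p in row), 300)
```

In the paper's study, *m* is swept over the listed sizes and each point is averaged over 100 iterations; the harness above uses fewer iterations only to keep the sketch fast.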

### **6. Conclusions**

This paper proposes a new decision framework under the PLTS context for the rational prioritization of CVs. The framework provides a systematic method for calculating the attributes' weights and the DMs' weights. Also, the preferences from each DM are aggregated effectively by capturing the interrelationship among attributes. CVs are prioritized by extending the COPRAS method under the PLTS context, and from the sensitivity analysis of weights and strategy values, it can be inferred that the proposed framework is stable.

The proposed framework is a 'ready-made' tool for CV selection. Based on the preference information (PLTS) provided by the DMs, a suitable CV is selected in a systematic manner. The framework helps vendors plan their strategies on various attributes to compete in the global market and helps customers make rational decisions regarding their purchase and use of services. DMs need some training with the data structure to use the framework effectively for rational decision-making; these are the implications derived from the study.

For future research, plans have been made to use the concepts proposed in this paper to properly recommend CVs to a group of customers. Also, plans have been made to propose new decision frameworks for CV selection under other fuzzy variants, including picture fuzzy sets [45], m-polar fuzzy sets [46], and neutrosophic fuzzy sets [47].

**Author Contributions:** The contributions and responsibilities of the authors were as follows. R.S., R.K., and V.S. prepared the groundwork, designed the research model, developed the prototype, and conducted the experiments. K.S.R., S.K., X.-Z.G., and D.P. provided valuable insights and suggestions throughout the research, validated the results, and helped fully in the preparation of the manuscript. All authors have read the manuscript and agreed on its submission to the journal.

**Funding:** The authors thank the following funding agencies for their financial aid: Council of Scientific and Industrial Research (CSIR), India; University Grants Commission (UGC), India; Department of Science and Technology (DST), India; and National Natural Science Foundation of China (NSFC), China, under grant nos. 09/1095(0033)18-EMR-I, F./2015-17/RGNF-2015-17-TAM-83, 09/1095(0026)18-EMR-I, SR/FST/ETI-349/2013, and 51875113.

**Acknowledgments:** The authors thank the editor and the anonymous reviewers for their valuable comments, which improved the quality of our research.

**Conflicts of Interest:** The authors declare that there is no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
