Advances in Fuzzy Decision Theory and Applications

Edited by Jun Ye and Yanhui Guo

mdpi.com/journal/mathematics

## **Advances in Fuzzy Decision Theory and Applications**

Editors

**Jun Ye** and **Yanhui Guo**

Basel • Beijing • Wuhan • Barcelona • Belgrade • Novi Sad • Cluj • Manchester

*Editors*

Jun Ye
School of Civil and Environmental Engineering
Ningbo University
Ningbo, China

Yanhui Guo
Department of Computer Science
University of Illinois Springfield
Springfield, IL, USA

*Editorial Office*

MDPI
St. Alban-Anlage 66
4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: https://www.mdpi.com/journal/mathematics/special_issues/advances_fuzzy_decision_theory_applications).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

Lastname, A.A.; Lastname, B.B. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-9720-1 (Hbk)**
**ISBN 978-3-0365-9721-8 (PDF)**
**doi.org/10.3390/books978-3-0365-9721-8**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license. The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND) license.

## **About the Editors**

#### **Jun Ye**

Jun Ye is a Professor in the School of Civil and Environmental Engineering and Deputy Director of the Institute of Rock Mechanics at Ningbo University. Since 2019, he has served as the Vice President of the China Branch of the Neutrosophic Science International Association (NSIA). He is currently Editor-in-Chief of the international journals *Current Computer Science* and *Decision Making and Analysis*. He is Academic Editor of the *Journal of Classification*, *Journal of Mathematics*, *Mathematical Problems in Engineering*, and *Neutrosophic Sets and Systems*. He is also an Editorial Board Member for other journals. He has hosted or participated in three projects with the National Natural Science Foundation and two projects with the Zhejiang Natural Science Foundation. He has published more than 300 papers in domestic and foreign journals. From 2019 to 2022, he was selected as an "Elsevier China Highly Cited Scholar". He was also selected as one of the World's Top Computer Scientists by Research.com in 2022 and 2023.

#### **Yanhui Guo**

Yanhui Guo received his Ph.D. from the Department of Computer Science, Utah State University, USA. He was a Research Fellow in the Department of Radiology at the University of Michigan and an Assistant Professor at St. Thomas University. Dr. Guo is currently an Associate Professor in the Department of Computer Science at the University of Illinois Springfield. Dr. Guo's research areas include computer vision, machine learning, data analytics, neutrosophic sets, computer-aided detection/diagnosis, and computer-assisted surgery. He has published three books, more than 110 journal papers, and 40 conference papers; completed more than 10 grant-funded research projects; holds two patents; and has worked as an Associate Editor for several international journals and as a reviewer for top journals and conferences. Dr. Guo successfully applied neutrosophic sets to image processing in 2008 and has published extensively in this area. Dr. Guo was a Co-Founder and Chief Scientist of MedSights Tech Inc., a high-technology company focusing on computer-assisted surgery systems. Dr. Guo was named a University Scholar in 2019, the university system's highest faculty honor, recognizing outstanding teaching and scholarship.

## **Preface**

In the realm of complex decision making, characterized by inherent incompleteness and uncertainty, the foundational work of Lotfi A. Zadeh on fuzzy set theory has been instrumental. The efficacy of classical fuzzy sets in addressing vagueness has prompted an exploration of various extensions, each catering to the intricacies of real-world decision-making problems. This book delves into an array of advanced fuzzy theories, including type-2 fuzzy sets, hesitant fuzzy sets, multivalued fuzzy sets, cubic sets, intuitionistic fuzzy sets, Pythagorean fuzzy sets, spherical fuzzy sets, neutrosophic sets, and more. The richness of these extensions reflects the dynamism of fuzzy theories in diverse decision-making applications.

Comprising ten research papers, this book presents a synthesis of the latest progress and achievements in fuzzy decision theories. The contributions span theoretical developments and practical applications, demonstrating the versatility of advanced fuzzy theories across domains such as finance, healthcare, engineering, and beyond. The collective knowledge presented here serves as a testament to the growing significance of fuzzy decision theories in addressing the complexities of contemporary decision science.

In the paper entitled "A Hybrid MCDM Approach Based on Fuzzy MEREC-G and Fuzzy RATMI" by Anas A. Makki and Reda M. S. Abdulaal, the authors address the critical realm of multi-criteria decision making (MCDM), which plays a pivotal role in navigating complex problems where diverse alternatives must be evaluated against conflicting criteria. Traditional MCDM methods have been integral in this regard, yet the increasing prevalence of uncertain and ambiguous decision-maker inputs in real-world scenarios necessitates the application of fuzzy logic. The authors introduce a novel hybrid fuzzy MCDM approach that combines the strengths of two recent methodologies: fuzzy MEREC-G, designed to handle linguistic input terms from multiple decision makers and generate consistent fuzzy weights, and fuzzy RATMI, which ranks alternatives based on their fuzzy performance scores on each criterion. Notably, this paper presents the first fuzzy extension of both MEREC-G and RATMI methods. The study provides detailed algorithms for fuzzy MEREC-G and fuzzy RATMI and demonstrates their application in solving real-world problems. Through correlation and scenario analyses, the authors validate the new approach's accuracy, consistency, and sensitivity, highlighting its potential to deliver robust and reliable decision-making outcomes in the face of uncertain and dynamic decision contexts.

The paper titled "MemConFuzz: Memory Consumption Guided Fuzzing with Data Flow Analysis", by Chunlai Du, Zhijian Cui, Yanhui Guo, Guizhi Xu, and Zhongru Wang, tackles the critical issue of uncontrolled heap memory consumption—a software vulnerability exploited by attackers to consume significant amounts of heap memory, leading to system crashes. Existing efforts in vulnerability fuzzing of heap consumption, such as MemLock and PerfFuzz, often fall short in considering the impact of data flow. In response, the authors present MemConFuzz, a novel heap memory consumption-guided fuzzing model. MemConFuzz leverages static data flow analysis to extract the locations of heap operations and data-dependent functions. Notably, the paper introduces a seed selection algorithm based on data dependency, allocating more energy to samples with higher priority scores during the fuzzing process. Experimental results demonstrate that MemConFuzz outperforms existing approaches like AFL, MemLock, and PerfFuzz, showcasing superior efficiency in both quantity and time consumption when exploiting vulnerabilities related to heap memory consumption. This innovative contribution enhances our understanding and approach to addressing critical software vulnerabilities associated with uncontrolled heap memory consumption.

In the scholarly work titled "Some Operations and Properties of the Cubic Intuitionistic Set with Application in Multi-Criteria Decision-Making" authored by Shahzad Faizi, Heorhii Svitenko, Tabasam Rashid, Sohail Zafar, and Wojciech Sałabun, the authors present a comprehensive exploration of operations and properties associated with the cubic intuitionistic set, offering valuable insights with potential applications in multi-criteria decision making (MCDM). The introduced concepts include the internal cubic intuitionistic set (ICIS), external cubic intuitionistic set (ECIS), P-order, R-order (P-(R-) order), P-union, R-union (P-(R-) union), and P-intersection, R-intersection (P-(R-) intersection). The paper delves into the investigation of various properties related to the P-(R-) union and P-(R-) intersection of ICISs and ECISs, accompanied by illustrative examples to elucidate these theoretical constructs. Additionally, the authors put forth significant theorems pertaining to ICISs and ECISs, supported by rigorous proofs. The practical applicability of these operations is demonstrated through a real-world scenario, applying the proposed concepts to solve a multi-criteria decision-making problem. This work not only contributes to the theoretical foundation of cubic intuitionistic sets but also showcases their relevance in addressing complex decision-making challenges, thereby enriching the toolkit available for researchers and practitioners in the field.

In the paper titled "Study on Chaotic Multi-Attribute Group Decision Making Based on Weighted Neutrosophic Fuzzy Soft Rough Sets" authored by Fu Zhang and Weimin Ma, the authors delve into the realm of multi-attribute group decision making (MAGDM) with a distinctive dimension termed Chaotic MAGDM. This novel scenario incorporates considerations not only for the weights of decision makers (DMs) and decision attributes but also for the familiarity of DMs with these attributes. The authors leverage the weighted neutrosophic fuzzy soft rough set theory to address the complexities inherent in Chaotic MAGDM, presenting a new algorithm tailored for MAGDM applications. A notable contribution lies in the integration of familiarity into MAGDM within the framework of neutrosophic fuzzy soft rough sets. Furthermore, the paper introduces a novel MAGDM model grounded in neutrosophic fuzzy soft rough sets and develops a sorting/ranking algorithm based on the same set of theories. To illustrate the practical utility of the proposed algorithm, a case study is provided, showcasing the application of the devised model. This work not only advances the theoretical foundations of decision making under chaotic conditions but also provides a valuable methodology for handling real-world MAGDM challenges through the fusion of weighted neutrosophic fuzzy soft rough sets and chaos theory.

The paper titled "A Novel Driving-Strategy Generating Method of Collision Avoidance for Unmanned Ships Based on Extensive-Form Game Model with Fuzzy Credibility Numbers," authored by Haotian Cui, Fangwei Zhang, Mingjie Li, Yang Cui, and Rui Wang, addresses the crucial issue of intelligent collision avoidance for unmanned ships at sea. The study introduces an innovative approach by proposing a novel driving strategy generation method rooted in an extensive-form game model employing fuzzy credibility numbers. The key contribution lies in formulating an extensive-form game model that accounts for the two-sided clamping situation of unmanned ships, validated through a fuzzy credibility assessment. The research quantitatively divides the head-on situations of ships at sea, facilitating targeted collision avoidance decisions for unmanned ships. The utilization of an extensive-form game model, particularly in scenarios involving two-sided clamping, is a notable aspect of the study. The integration of fuzzy credibility degrees into the game model allows for the assessment of whether the collision avoidance decisions made by unmanned ships achieve optimal results. Through case analysis and simulation, the effectiveness of the introduced game model is confirmed, demonstrating its practical utility in real-time collision avoidance decision making for unmanned ships in scenarios involving two-sided clamping. The proposed mathematical model, as illustrated in an example, stands as a promising tool for enhancing the ability of unmanned ships to navigate safely and make informed decisions in complex maritime environments.

The paper titled "Medical Diagnosis and Pattern Recognition Based on Generalized Dice Similarity Measures for Managing Intuitionistic Hesitant Fuzzy Information" by Majed Albaity and Tahir Mahmood addresses the intersection of medical diagnosis, pattern recognition, and the representation of intuitionistic hesitant fuzzy (IHF) information. Pattern recognition, a fundamental aspect of computer science, has wide-ranging applications, including machine learning, information compression, signal processing, and bioinformatics. The authors introduce the theory of generalized dice similarity (GDS) measures to establish relationships between pieces of IHF information, making valuable contributions to real-life problem solving. The GDS measures offer versatility by allowing the derivation of various measures through parameter variations, known as DGS measures. The paper leverages this theory to extend the well-established dice similarity measures (DSMs) to the context of IHF sets, which encompass both membership and non-membership grades within the finite subset [0, 1]. Pioneering the theory of generalized DSMs (GDSMs) computed based on IHF sets, the authors introduce the IHF dice similarity measure, IHF weighted dice similarity measure, IHF GDS measure, and IHF weighted GDS measure. The application of these measures is demonstrated through medical diagnosis and pattern recognition problems, showcasing their proficiency and capability. The authors conduct a comparative analysis with existing measures to enhance the practical value of the proposed measures, thereby contributing to the advancement of computational methodologies in the management of IHF information for medical diagnosis and pattern recognition.
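For orientation, the classical (unweighted) Dice similarity measure that these IHF extensions generalize can be sketched in a few lines of Python; the function name and the plain-vector encoding are illustrative and are not the authors' notation:

```python
def dice_similarity(a, b):
    """Classical Dice similarity between two real-valued vectors:
    D(A, B) = 2 * sum(a_i * b_i) / (sum(a_i^2) + sum(b_i^2)), in [0, 1]."""
    num = 2.0 * sum(x * y for x, y in zip(a, b))
    den = sum(x * x for x in a) + sum(y * y for y in b)
    # Convention: two all-zero vectors are treated as identical
    return num / den if den else 1.0
```

The IHF measures in the paper extend this idea from crisp vectors to sets of membership and non-membership grades, with optional weights and a generalization parameter.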

In the paper titled "A Ship Fire Escape Speed Correction Method Considering the Influence of Crowd Interaction" authored by Jingyuan Li, Weile Liu, Fangwei Zhang, Taiyang Li, and Rui Wang, the authors delve into the critical issue of passenger ship fire evacuation and investigate the impact of various personnel attributes and interactions on evacuation efficiency. The study introduces a novel speed correction method designed to account for human attributes and interactions among different populations during evacuation scenarios. Initially, hesitant fuzzy sets and hesitant fuzzy average operators are employed to quantify four distinct personnel attributes. Subsequently, the study extracts a formula for acceleration that considers the interactive influence of different groups of people. Leveraging the first-order linear relationship between velocity and acceleration, the authors propose an interactive velocity correction method for ship personnel evacuation. To validate the effectiveness of the method, the study employs personnel evacuation simulation software, Pathfinder, conducting experiments with both corrected and uncorrected speeds introduced into the evacuation simulation process. The results demonstrate that the simulation outcomes of the revised speed plan align more closely with real-world scenarios, emphasizing the practical significance of considering personnel attributes and interactions in refining ship fire escape speed strategies for enhanced evacuation efficiency and safety.

The paper titled "A Hybrid Intuitionistic Fuzzy Group Decision Framework and Its Application in Urban Rail Transit System Selection" by Bing Yan, Yuan Rong, Liying Yu, and Yuting Huang addresses the complex decision-making process of selecting an urban rail transit system, focusing on green and low-carbon perspectives to promote sustainable urban development. Acknowledging the uncertainty arising from conflicting criteria and the inherent fuzziness in decision-makers' cognition, the authors present a hybrid intuitionistic fuzzy multi-criteria group decision making (MCGDM) framework. The proposed methodology addresses various aspects of the decision process. Firstly, the weights of experts are determined using an improved similarity method. Subsequently, the subjective and objective weights of criteria are calculated using DEMATEL and CRITIC methods, and a comprehensive weight is obtained through linear integration. Considering the experts' regret degree and risk preference, the COPRAS method based on regret theory is introduced to determine the prioritization of the urban rail transit system ranking. The practicality and effectiveness of the developed method are demonstrated through a case study of urban rail transit system selection for City N. The results reveal that a metro system (P1) is the most suitable option for City N's urban rail transit system construction, followed by a municipal railway system (P7). A sensitivity analysis, a comparative analysis, and a thorough case study validate the robustness, stability, and practicality of the proposed decision-making framework, showcasing its efficacy in supporting informed decisions in the context of urban rail transit system selection.

The paper titled "Group Decision-Making Problems Based on Mixed Aggregation Operations of Interval-Valued Fuzzy and Entropy Elements in Single- and Interval-Valued Fuzzy Environments" by Weiming Li and Jun Ye addresses the intricate challenges of operational problems in group decision making (GDM) scenarios involving single- and interval-valued fuzzy multivalued hybrid information expressions. Fuzzy sets and interval-valued fuzzy sets serve as crucial tools for representing uncertain and vague information in real-world contexts. To tackle the complexity of these mixed multivalued information expressions and operational challenges, the study introduces the concept of single- and interval-valued fuzzy multivalued set/element (SIVFMS/SIVFME) with identical and/or different fuzzy values. The conversion of SIVFMS/SIVFME into the interval-valued fuzzy and entropy set/element (IVFES/IVFEE) is presented, relying on the mean and information entropy of SIVFME to address operational problems with varying lengths. The study defines the operational relationships of IVFEEs, introduces expected value functions and sorting rules, and proposes IVFEE-weighted averaging and geometric operators, along with their mixed-weighted-averaging operation. Leveraging these operations and functions, a GDM method is developed for multicriteria GDM problems within the SIVFMS environment. The proposed method is applied to a supplier selection problem in a supply chain as a practical example, demonstrating the rationality and efficiency of SIVFMSs. A comparative analysis with other decision-making methods highlights the superiority of the developed GDM method, providing a more reasonable and flexible approach that compensates for existing methodological deficiencies in the GDM process.

The paper titled "An Intelligent Expert Combination Weighting Scheme for Group Decision Making in Railway Reconstruction" by Lihua Zeng, Haiping Ren, Tonghua Yang, and Neal Xiong introduces an intelligent approach to expert combination weighting for group decision making in the context of railway reconstruction. The study addresses the limitations of existing intuitionistic fuzzy entropies by proposing an improved version based on the cotangent function. This enhanced entropy not only considers the deviation between membership and non-membership but also incorporates the hesitancy degree of decision makers, providing a more comprehensive measure of uncertainty for intuitionistic fuzzy sets. Furthermore, the paper introduces a novel intuitionistic fuzzy (IF) similarity measure, whose values are IF numbers. The improved entropy and similarity measures are then applied to the determination of expert weights in group decision making. The study presents an intelligent expert combination weighting scheme, leveraging the new intuitionistic fuzzy similarity to transform the decision matrix into a similarity matrix. Through the analysis of threshold change rates and the design of risk parameters, the scheme achieves reasonable expert clustering results. In this scheme, each category is weighted, and experts within each category are weighted using entropy weight theory. The total weight of experts is then determined by synthesizing these two weights. This comprehensive approach provides a new method for objectively and reasonably determining expert weights in group decision-making scenarios. The proposed scheme is applied to the evaluation of a railway reconstruction scheme, demonstrating its feasibility and showcasing its potential in real-world applications.

As the editors responsible for curating this Special Issue, we extend our gratitude to the authors whose dedicated research has enriched this collection. Their insights and expertise have contributed to the depth and breadth of our exploration of fuzzy decision theories. We also express appreciation to the reviewers whose meticulous assessments have ensured the scholarly rigor and quality of each contribution.

We would like to acknowledge the support and collaboration of the editorial team and the publisher in bringing this book to fruition. Their commitment to academic excellence has been integral to the success of this endeavor.

This book is not just a static representation of the current state of fuzzy decision theory; it is an invitation to researchers, practitioners, and students to engage with the evolving landscape of decision science. We hope that the insights shared within these pages inspire further inquiry, spark innovative ideas, and pave the way for continued advancements in the fascinating and ever-expanding field of fuzzy decision theory.

> **Jun Ye and Yanhui Guo** *Editors*

## **An Intelligent Expert Combination Weighting Scheme for Group Decision Making in Railway Reconstruction**

**Lihua Zeng <sup>1</sup>, Haiping Ren <sup>1,</sup>\*, Tonghua Yang <sup>2</sup> and Neal Xiong <sup>3</sup>**


**Abstract:** Intuitionistic fuzzy entropy has been widely used to measure the uncertainty of intuitionistic fuzzy sets. In view of some counterintuitive phenomena of the existing intuitionistic fuzzy entropies, this article proposes an improved intuitionistic fuzzy entropy based on the cotangent function, which not only considers the deviation between membership and non-membership but also expresses the hesitancy degree of decision makers. Analyses and comparisons of the data show that the improved entropy is reasonable. Then, a new IF similarity measure whose value is an IF number is proposed. The intuitionistic fuzzy entropy and similarity measure are applied to the study of expert weights in group decision making. Based on research into existing expert clustering and weighting methods, we summarize an intelligent expert combination weighting scheme. Through the new intuitionistic fuzzy similarity, the decision matrix is transformed into a similarity matrix, and through the analysis of the threshold change rate and the design of risk parameters, reasonable expert clustering results are obtained. On this basis, each category is weighted, the experts within each category are weighted by entropy weight theory, and the total weight of each expert is determined by synthesizing the two weights. This scheme provides a new method for determining the weights of experts objectively and reasonably. Finally, the method is applied to the evaluation of a railway reconstruction scheme, and an example shows the feasibility of the method.

**Keywords:** intuitionistic fuzzy entropy; hesitant degree information; intuitionistic fuzzy group decision making; clustering; intuitionistic fuzzy similarity

#### **1. Introduction**

With the characteristics of high speed, large volume, low energy consumption, low pollution, safety, and reliability, railway transportation has become the main transportation mode in China's modern transportation system (see Figures 1 and 2) [1–3] and plays an important role in the development of the national economy.

**Figure 1.** Business mileage of China's railways.

**Citation:** Zeng, L.; Ren, H.; Yang, T.; Xiong, N. An Intelligent Expert Combination Weighting Scheme for Group Decision Making in Railway Reconstruction. *Mathematics* **2022**, *10*, 549. https://doi.org/10.3390/math10040549

Academic Editor: Vassilis C. Gerogiannis

Received: 10 December 2021
Accepted: 8 February 2022
Published: 10 February 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Figure 2.** Total railway freight volume in China.

As an important piece of national infrastructure and a popular means of transportation, the railway is the backbone of China's comprehensive transportation system. With the continuous acceleration of China's urbanization and urban expansion, railway construction has entered a period of rapid development, and the railway plays an increasingly important role in people's choice of travel mode (see Figure 3) [1,4].

With regard to railway reconstruction, the huge investment and complex factors involved [5–7] make it necessary to compare various construction schemes in order to select the one that is most reasonable in both technical and economic terms. Therefore, the use of scientific evaluation methods is very important. At present, expert scoring and evaluation with the help of fuzzy theory has become common, but expert scoring is inevitably subjective to some degree. This paper proposes an intelligent expert combination weighting method to optimize scheme selection.

The rest of this paper is structured as follows. Section 2 introduces the related work of this study. Section 3 introduces the preliminary knowledge. Section 4 puts forward the intelligent expert combination weighting scheme. Section 5 introduces the risk factors of the railway reconstruction project and uses the method proposed in Section 4 to optimize the railway reconstruction scheme. Finally, Section 6 summarizes the whole paper.

#### **2. The Related Work**

Fuzziness, as developed in [8], is a kind of uncertainty that often appears in human decision-making problems. Fuzzy set theory successfully deals with uncertainties arising in daily life, and membership degrees can be effectively determined by a fuzzy set. However, in real-life situations, non-membership degrees should also be considered in many cases. Thus, Atanassov [9] introduced the concept of an intuitionistic fuzzy (IF) set that considers both membership and non-membership degrees. The IF set has been applied in numerous areas due to its ability to handle uncertain information more effectively [10–24]. Tao et al. [10] provided an insight into dynamic group MCDM with an alternative queuing method and intuitionistic fuzzy sets, ranking the alternatives based on preference relations. Intuitionistic fuzzy sets based on the weighted average were adopted for aggregating individual suggestions of decision makers by Singh et al. [11]. Chaira [12] suggested a novel clustering approach for segmenting lesions/tumors in mammogram images using Atanassov's intuitionistic fuzzy set theory. Jiang et al. [13] studied a novel three-way group investment decision model under an intuitionistic fuzzy multi-attribute group decision-making environment. Wang et al. [14] put forward a novel three-way multi-attribute decision-making model in light of a probabilistic dominance relation with intuitionistic fuzzy sets. Wan and Dong [15] developed a new intuitionistic fuzzy best-worst method for multi-criteria decision making. Kumar et al. [16] formulated an intuitionistic fuzzy set theory-based, bias-corrected intuitionistic fuzzy c-means method with spatial neighborhood information for MRI image segmentation. In addition, intuitionistic fuzzy sets have been extended to various forms and applied to practical problems. Senapati and Yager [17–19] proposed Fermatean fuzzy sets, introduced four new weighted aggregation operators, and defined basic operations over Fermatean fuzzy sets. Ashraf et al. [20] introduced a new version of the picture fuzzy set, the so-called spherical fuzzy set (SFS), and discussed some operational rules. Khan et al. [21] introduced a method to solve decision-making problems using an adjustable weighted soft discernibility matrix in a generalized picture fuzzy soft set. Riaz and Hashmi [22] introduced the novel concept of the linear Diophantine fuzzy set (LDFS) with the addition of reference parameters.

Shannon used probability theory as a mathematical tool to measure information. He defined information as something that eliminates uncertainty, thus connecting information with uncertainty. Taking entropy as a measure of the uncertainty of an information state, Shannon put forward the concept of information entropy. De Luca and Termini [25] studied the measurement of the fuzziness of fuzzy sets, extended probabilistic information entropy to non-probabilistic information entropy, and proposed axioms that fuzzy information entropy must satisfy. Szmidt and Kacprzyk [26] extended the axioms of De Luca and Termini and generalized fuzzy information entropy to IF information entropy. Some scholars have conducted in-depth research in this area, constructing IF entropy formulae from different angles and applying them to multi-attribute decision making and pattern recognition [27–34]. Whether these entropy formulae can reasonably measure the uncertainty of IF sets directly determines the rationality of their applications. In this paper, some entropy formulae in the existing literature are classified, and their advantages and disadvantages are analyzed with data. On this basis, a new IF entropy is constructed that not only considers the deviation between membership and non-membership but also includes the hesitancy degree in the entropy measure. The rationality of the new entropy is demonstrated through data analysis and comparison.
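The De Luca–Termini fuzzy entropy referenced above [25] has the standard normalized form E(A) = −(1/(n ln 2)) Σ [μᵢ ln μᵢ + (1 − μᵢ) ln(1 − μᵢ)]; a minimal Python sketch of this classical measure (not the paper's new cotangent-based IF entropy):

```python
import math

def de_luca_termini_entropy(memberships):
    """De Luca-Termini fuzzy entropy, normalized to [0, 1]:
    E(A) = -(1/(n ln 2)) * sum(mu*ln(mu) + (1-mu)*ln(1-mu))."""
    def term(mu):
        # x*ln(x) -> 0 as x -> 0, so the endpoints contribute nothing
        if mu in (0.0, 1.0):
            return 0.0
        return mu * math.log(mu) + (1 - mu) * math.log(1 - mu)
    n = len(memberships)
    return -sum(term(mu) for mu in memberships) / (n * math.log(2))
```

The entropy is 0 for crisp sets (all memberships 0 or 1) and reaches its maximum of 1 when every membership equals 0.5, the most "fuzzy" state.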

In recent years, decision-making problems with IF information have attracted many scholars' attention [35]. Due to the complexity and uncertainty of practical problems, the expert group decision-making method is commonly used. Expert group decision making can fully gather the experience and knowledge of various experts, making the decision results more scientific and reasonable. However, in actual evaluations, experts in group decision making are influenced by numerous factors, such as knowledge structure, understanding of the schemes, and interest correlation, and they often hold different views and attitudes. How to determine the weights of experts and effectively aggregate the decision information of experts with different preferences has therefore become a focus for scholars [36–41].

In traditional group decision making, the expert weighting method usually uses the consistency ratio of the judgment matrix to construct the weight coefficients, which pays little attention to the overall consistency of the group decision-making objectives. To overcome this shortcoming, a cluster analysis method is often used to realize expert weighting in group decision making. The basic principle of expert cluster analysis is to measure the similarity of expert evaluation opinions according to certain standards and to cluster the experts based on this similarity. He and Lei [36] extended fuzzy C-means clustering to IF C-means clustering and proposed a clustering algorithm based on IF sets. Zhang et al. [37] and He et al. [38] proposed the concept of IF similarity, whose value is an IF number; they also constructed the IF similarity matrix, the IF equivalent matrix, and its *λ*-cut matrix and gave a clustering method based on the IF similarity matrix. Wang et al. [39] proposed a new method based on an IF similarity matrix, avoiding the tedious process of calculating an IF equivalent matrix, and used the membership degrees of elements in the IF similarity matrix to cluster. Zhou et al. [40] conducted cluster analysis on experts according to the principle of entropy, using information similarity coefficients to measure the similarity of expert opinions and then classifying the experts.
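The λ-cut clustering idea underlying [37–39] can be illustrated with a minimal pure-Python sketch (the function names and the simple row-grouping are our own; the paper's scheme additionally works with IF-valued similarities and risk parameters):

```python
def max_min_closure(R):
    """Transitive closure of a reflexive fuzzy similarity matrix via repeated
    max-min composition: (R o R)[i][j] = max_k min(R[i][k], R[k][j])."""
    n = len(R)
    R = [row[:] for row in R]
    while True:
        R2 = [[max(min(R[i][k], R[k][j]) for k in range(n))
               for j in range(n)] for i in range(n)]
        if R2 == R:           # fixed point reached: R is now an equivalence matrix
            return R2
        R = R2

def lambda_cut_clusters(R, lam):
    """Cluster indices whose closure similarity is at least lam; the rows of
    the transitively closed matrix induce a partition at each lambda level."""
    T = max_min_closure(R)
    n, clusters, seen = len(T), [], set()
    for i in range(n):
        if i in seen:
            continue
        group = [j for j in range(n) if T[i][j] >= lam]
        seen.update(group)
        clusters.append(group)
    return clusters
```

Raising λ splits clusters apart and lowering it merges them, which is exactly why the rate of change of the threshold is informative when choosing a clustering level.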

The above clustering methods have the following problems when clustering IF information.


Considering the above situation, this paper proposes a method of clustering and weighting experts based on IF entropy. From the IF evaluations given by the experts, a new IF similarity measure, whose value is an IF number, is constructed, and the decision matrix is transformed into a similarity matrix. By analyzing the change rate of the threshold and designing risk parameters, the decision maker can choose an appropriate clustering threshold and risk parameter so as to obtain reasonable expert clusters, on which the between-category weights are based: a category containing more experts receives a greater weight, reflecting the principle of the minority obeying the majority in group decision making. Using the new IF entropy proposed in this paper, experts in the same category whose evaluations are logically clear and accurate receive larger weights. The total weight of each expert is determined by synthesizing the between-category and within-category weights. Finally, the IF weighted aggregation operator is used to aggregate the experts' weighted IF information, and the alternatives are ranked.

#### **3. Preliminaries**

In the following part, we introduce some basic concepts, which will be used in the next sections.

**Definition 1** ([9]). *Let X be a given universal set. An IF set is an object having the form A* = {< *xi*, *μA*(*xi*), *νA*(*xi*) >|*xi* ∈ *X*}*, where the function μA* : *X* → [0, 1] *defines the degree of membership and νA* : *X* → [0, 1] *the degree of non-membership of the element xi* ∈ *X, and for every xi* ∈ *X, it holds that* 0 ≤ *μA*(*xi*) + *νA*(*xi*) ≤ 1*. Furthermore, for any IF set A and xi* ∈ *X, πA*(*xi*) = 1 − *μA*(*xi*) − *νA*(*xi*) *is called the hesitancy degree of xi. All IF sets on X are denoted by IFSs*(*X*).

*For simplicity, Xu and Chen [41] denoted α* = (*μα*, *να*) *as an IF number (IFN), where μα and να are the degree of membership and the degree of non-membership of the element α* ∈ *X to A, respectively.*

The basic operational laws of IF set defined by Atanassov [9] are introduced as follows:

**Definition 2** ([9]). *Let A* = {< *xi*, *μA*(*xi*), *νA*(*xi*) >|*xi* ∈ *X*} *and B* = {< *xi*, *μB*(*xi*), *νB*(*xi*) >|*xi* ∈ *X*} *be two IF sets; then,*


*if μB*(*xi*) ≥ *νB*(*xi*)*, then μA*(*xi*) ≥ *μB*(*xi*)*,νA*(*xi*) ≤ *νB*(*xi*)*.*

**Definition 3** ([9]). *Let A* = {< *xi*, *μA*(*xi*), *νA*(*xi*) >|*xi* ∈ *X*} *and B* = {< *xi*, *μB*(*xi*), *νB*(*xi*) >|*xi* ∈ *X*} *be two IF sets, and let ω* = (*ω*1, *ω*2, ··· , *ωn*)*<sup>T</sup> be the weight vector of the elements xi* (*i* = 1, 2, ··· , *n*)*, where ωi* ≥ 0 *and* ∑<sub>*i*=1</sub><sup>*n*</sup> *ωi* = 1*. The weighted Hamming distance between A and B is defined as follows:*

$$d(A,B) = \frac{1}{2} \sum\_{i=1}^{n} \omega\_i (|\mu\_A(\mathbf{x}\_i) - \mu\_B(\mathbf{x}\_i)| + |\nu\_A(\mathbf{x}\_i) - \nu\_B(\mathbf{x}\_i)| + |\pi\_A(\mathbf{x}\_i) - \pi\_B(\mathbf{x}\_i)|) \,.$$
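The weighted Hamming distance above can be sketched in Python; the function names and the two illustrative IF sets below are hypothetical, and each IF set is assumed to be given as a list of (membership, non-membership) pairs.

```python
# A minimal sketch of the weighted Hamming distance of Definition 3.
def hesitancy(mu, nu):
    return 1.0 - mu - nu

def weighted_hamming(A, B, w):
    """Weighted Hamming distance between two IF sets A and B."""
    return 0.5 * sum(
        wi * (abs(ma - mb) + abs(na - nb)
              + abs(hesitancy(ma, na) - hesitancy(mb, nb)))
        for wi, (ma, na), (mb, nb) in zip(w, A, B)
    )

# Hypothetical two-element IF sets with equal scheme weights.
A = [(0.3, 0.4), (0.5, 0.2)]
B = [(0.2, 0.3), (0.5, 0.2)]
w = [0.5, 0.5]
print(weighted_hamming(A, B, w))  # → 0.1 (up to floating-point rounding)
```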

**Definition 4** ([26])*. A map E* : *IFSs*(*X*) → [0, 1] *is called the IF entropy if it satisfies the following properties:*


**Definition 5** ([37]). *Let zij* (*i* = 1, 2, ··· , *m*; *j* = 1, 2, ··· , *n*) *be a collection of IFNs; then the matrix Z* = (*zij*)*m*×*n is called an IF matrix.*

**Definition 6** ([37]). *Let ψ* : *IFSs*(*X*) × *IFSs*(*X*) → *IFNs, and let C*1, *C*2, *C*3 *be three IF sets. ψ*(*C*1, *C*2) *is called an IF similarity measure of C*1 *and C*2 *if it satisfies the following properties:*


**Definition 7** ([42]). *Write the membership degree μi*(*xj*) *as μij and the non-membership degree νi*(*xj*) *as νij. If an IF matrix Z* = (*aij*)*m*×*n, where aij* =< *μij*, *νij* >*, satisfies the following conditions:*


*then Z is called an IF similarity matrix.*

In order to compare the magnitudes of two IF sets, Xu and Yager [43] introduced the score and accuracy functions for IF sets and gave a simple comparison law as follows:

**Definition 8** ([43]). *Let A* =< *μ*, *ν* > *be an IFN; the score function M*(*A*) *and accuracy function* Δ(*A*) *of A can be defined, respectively, as follows:*

$$\begin{cases} \ M(A) = \mu - \nu \\ \Delta(A) = \mu + \nu \end{cases} \tag{1}$$


*Obviously, M*(*A*) ∈ [−1, 1], Δ(*A*) ∈ [0, 1]*.*

*Based on the score and accuracy functions, a comparison law for IFNs is introduced below: let Aj and Ak be two IFNs, M*(*Aj*) *and M*(*Ak*) *be the scores of Aj and Ak, respectively, and* Δ(*Aj*) *and* Δ(*Ak*) *be the accuracy degrees of Aj and Ak, respectively; then,*

$$\begin{aligned} (1) \quad &\text{If } M(A\_j) > M(A\_k), \text{ then } A\_j > A\_k. \\ (2) \quad &\text{If } M(A\_j) = M(A\_k), \text{ then } \begin{cases} \Delta(A\_j) = \Delta(A\_k) \Rightarrow A\_j = A\_k \\ \Delta(A\_j) < \Delta(A\_k) \Rightarrow A\_j < A\_k \\ \Delta(A\_j) > \Delta(A\_k) \Rightarrow A\_j > A\_k \end{cases} \end{aligned}$$
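The comparison law can be sketched as follows; `compare_ifn` is a hypothetical helper name, and the sample values are chosen as exact binary fractions so that the score tie is not disturbed by floating-point rounding.

```python
# Sketch of the Xu-Yager comparison law (Definition 8): compare IFNs by the
# score M = mu - nu, breaking ties with the accuracy Delta = mu + nu.
def compare_ifn(a, b):
    """Return 1, 0 or -1 as the IFN a ranks above, equal to or below b."""
    (mu_a, nu_a), (mu_b, nu_b) = a, b
    Ma, Mb = mu_a - nu_a, mu_b - nu_b          # scores
    if Ma != Mb:
        return 1 if Ma > Mb else -1
    Da, Db = mu_a + nu_a, mu_b + nu_b          # accuracies
    if Da == Db:
        return 0
    return 1 if Da > Db else -1

# Equal scores (0.25); the larger accuracy 0.75 wins the tie-break.
print(compare_ifn((0.5, 0.25), (0.375, 0.125)))  # → 1
```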

The weighted aggregation operator for an IF set developed by Xu and Yager [43] is presented as follows:

**Definition 9** ([43]). *Let Aj* =< *μj*, *νj* > (*j* = 1, 2, ··· , *n*) *be a collection of IFNs, and let ω* = (*ω*1, *ω*2, ··· , *ωn*)*<sup>T</sup> be the weight vector of Aj* (*j* = 1, 2, ··· , *n*)*, where ωj indicates the importance degree of Aj, satisfying ωj* ≥ 0 (*j* = 1, 2, ··· , *n*) *and* ∑<sub>*j*=1</sub><sup>*n*</sup> *ωj* = 1*, and let f<sup>A</sup><sub>ω</sub>* : *F<sup>n</sup>* → *F. If*

$$f^{A}\_{\omega}(A\_1, A\_2, \cdots, A\_n) = \sum\_{j=1}^{n} \omega\_j A\_j = <1 - \prod\_{j=1}^{n} (1 - \mu\_j)^{\omega\_j}, \ \prod\_{j=1}^{n} \nu\_j^{\omega\_j}>\tag{2}$$

*then the function f<sup>A</sup><sub>ω</sub> is called the IF weighted aggregation operator.*
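Equation (2) can be sketched as follows; `ifwa` is a hypothetical helper name, and the inputs are assumed to be lists of IFN pairs with a matching weight vector.

```python
import math

# Sketch of the IF weighted aggregation operator of Equation (2): the
# aggregated membership is 1 - prod (1 - mu_j)^{w_j} and the aggregated
# non-membership is prod nu_j^{w_j}.
def ifwa(ifns, w):
    """Aggregate a list of IFNs (mu, nu) with weights w into one IFN."""
    mu = 1.0 - math.prod((1.0 - m) ** wj for (m, _), wj in zip(ifns, w))
    nu = math.prod(n ** wj for (_, n), wj in zip(ifns, w))
    return mu, nu

# Hypothetical aggregation of two equally weighted evaluations.
print(ifwa([(0.6, 0.2), (0.4, 0.3)], [0.5, 0.5]))
```

Aggregating identical IFNs with weights summing to one returns the same IFN, which is a quick sanity check on the operator.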

#### **4. Our Proposed Intelligent Expert Combination Weighting Scheme**

#### *4.1. A New IF Entropy*

The uncertainty of an IF set is embodied in its fuzziness and its intuitionism. Fuzziness is determined by the difference between membership and non-membership; intuitionism is determined by hesitancy. Therefore, when entropy is used as a tool to describe the uncertainty of IF sets, the difference between membership and non-membership and the hesitancy should be considered at the same time; only then can the degree of uncertainty be reflected fully. Next, we classify the existing entropy formulas according to whether they describe the fuzziness and the intuitionism of IF sets. In addition, the motivation behind fuzzy and non-standard fuzzy models is their closeness to human thinking; therefore, if an entropy measure does not meet some cognitive aspect, we call it a counterintuitive case.

In this section, suppose that *A* = {< *xi*, *μA*(*xi*), *νA*(*xi*) >|*xi* ∈ *X*, *i* = 1, 2, ··· , *n*} is an IF set.

(1) The entropy measure only describes the fuzziness of IF sets. For example, the IF entropy measure of Ye [27] is

$$E\_Y(A) = \frac{1}{n} \sum\_{i=1}^n \left[ (\sqrt{2} \cos \frac{\mu\_A(\mathbf{x}\_i) - \nu\_A(\mathbf{x}\_i)}{4} \pi - 1) \times \frac{1}{\sqrt{2} - 1} \right]$$

The IF entropy measure of Zeng and Li [28] is

$$E\_Z(A) = 1 - \frac{1}{n} \sum\_{i=1}^n \left| \mu\_A(\mathfrak{x}\_i) - \nu\_A(\mathfrak{x}\_i) \right|.$$

The IF entropy measure of Zhang and Jiang [29] is

$$E\_{ZJ}(A) = -\frac{1}{n} \sum\_{i=1}^{n} \left[ \frac{\mu\_A(\mathbf{x}\_i) + 1 - \nu\_A(\mathbf{x}\_i)}{2} \log\_2 \frac{\mu\_A(\mathbf{x}\_i) + 1 - \nu\_A(\mathbf{x}\_i)}{2} + \frac{\nu\_A(\mathbf{x}\_i) + 1 - \mu\_A(\mathbf{x}\_i)}{2} \log\_2 \frac{\nu\_A(\mathbf{x}\_i) + 1 - \mu\_A(\mathbf{x}\_i)}{2} \right].$$

The exponential IF entropy measure of Verma and Sharma [30] is

$$E\_{VS}(A) = \frac{1}{n(\sqrt{e}-1)} \sum\_{i=1}^{n} \left[ \frac{\mu\_{A}(\mathbf{x}\_{i}) + 1 - \nu\_{A}(\mathbf{x}\_{i})}{2} e^{1 - \frac{\mu\_{A}(\mathbf{x}\_{i}) + 1 - \nu\_{A}(\mathbf{x}\_{i})}{2}} + \frac{\nu\_{A}(\mathbf{x}\_{i}) + 1 - \mu\_{A}(\mathbf{x}\_{i})}{2} e^{1 - \frac{\nu\_{A}(\mathbf{x}\_{i}) + 1 - \mu\_{A}(\mathbf{x}\_{i})}{2}} - 1 \right].$$

**Example 1.** *Let A*1 = {< *x*, 0.3, 0.4 >|*x* ∈ *X*} *and A*2 = {< *x*, 0.2, 0.3 >|*x* ∈ *X*} *be two IF sets. Calculate the entropies of A*1 *and A*2 *with the entropy formulae EY, EZ, EZJ and EVS*.

*According to the above formulae, the results are as follows:*

*EY*(*A*1) = *EY*(*A*2) = 0.9895, *EZ*(*A*1) = *EZ*(*A*2) = 0.9, *EZJ*(*A*1) = *EZJ*(*A*2) = 0.9928, *EVS*(*A*1) = *EVS*(*A*2) = 0.9905.

*For x belonging to the IF sets A*1 *and A*2*, the absolute deviation between membership and non-membership is equal, while the hesitancy degree increases from A*1 *to A*2*, so the uncertainty of A*1 *is smaller than that of A*2*. However, the formulae EY, EZ, EZJ and EVS assign the two IF sets equal entropy. In fact, for any IF sets A* = {< *xi*, *μA*(*xi*), *νA*(*xi*) >|*xi* ∈ *X*} *and B* = {< *xi*, *μB*(*xi*), *νB*(*xi*) >|*xi* ∈ *X*}*, if μA*(*xi*) − *νA*(*xi*) = *μB*(*xi*) − *νB*(*xi*) *for all xi* ∈ *X, then whichever entropy formula E above is adopted, E*(*A*) = *E*(*B*)*. These are counterintuitive situations.*
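The counterintuitive equality in Example 1 is easy to verify numerically; the sketch below uses only Zeng and Li's measure *EZ*, the simplest of the four, with a hypothetical helper name.

```python
# Quick check of the counterintuitive case in Example 1: Zeng and Li's
# entropy E_Z depends only on |mu - nu|, so A_1 and A_2 are not distinguished.
def entropy_Z(A):
    """E_Z(A) = 1 - (1/n) * sum |mu - nu| over the elements of A."""
    return 1.0 - sum(abs(mu - nu) for mu, nu in A) / len(A)

A1 = [(0.3, 0.4)]
A2 = [(0.2, 0.3)]
print(entropy_Z(A1), entropy_Z(A2))  # both ≈ 0.9 despite different hesitancy
```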

(2) The entropy measure only describes the intuitionism of IF sets.

For example, we show the IF entropy measure of Burillo and Bustince [31]:

$$E\_{B\_1}(A) = \sum\_{i=1}^{n} \left[ 1 - \left( \mu\_A(\mathbf{x}\_i) + \nu\_A(\mathbf{x}\_i) \right) \right] = \sum\_{i=1}^{n} \pi\_A(\mathbf{x}\_i);$$

$$E\_{B\_2}(A) = \sum\_{i=1}^{n} \left[ 1 - \left( \mu\_A(\mathbf{x}\_i) + \nu\_A(\mathbf{x}\_i) \right)^\lambda \right], \ \lambda = 2, 3, \cdots, \infty;$$

$$E\_{B\_3}(A) = \sum\_{i=1}^{n} \left[ 1 - \left( \mu\_A(\mathbf{x}\_i) + \nu\_A(\mathbf{x}\_i) \right) \right] e^{1 - \left( \mu\_A(\mathbf{x}\_i) + \nu\_A(\mathbf{x}\_i) \right)};$$

$$E\_{B\_4}(A) = \sum\_{i=1}^{n} \left[ 1 - \left( \mu\_A(\mathbf{x}\_i) + \nu\_A(\mathbf{x}\_i) \right) \right] \sin\left(\frac{\pi}{2}(\mu\_A(\mathbf{x}\_i) + \nu\_A(\mathbf{x}\_i))\right).$$

**Example 2.** *Let A*<sup>3</sup> = {< *x*, 0.09, 0.41 >|*x* ∈ *X*} *and A*<sup>4</sup> = {< *x*, 0.18, 0.32 >|*x* ∈ *X*} *be two IF sets. Calculate the entropy of A*<sup>3</sup> *and A*<sup>4</sup> *with the entropy formula EB*<sup>1</sup> *.*

*From formula EB*1*, we get EB*1(*A*3) = *EB*1(*A*4) = 0.5*. For the IF sets A*3 *and A*4*, the hesitancy degree of the element x is equal, but the absolute deviation between membership and non-membership is greater for A*3 *than for A*4*, so the uncertainty of A*3 *is obviously smaller than that of A*4*. However, the formulae EB*1*, EB*2*, EB*3 *and EB*4 *assign the two IF sets equal entropy, which is inconsistent with intuition. In fact, for any IF sets A* = {< *xi*, *μA*(*xi*), *νA*(*xi*) >|*xi* ∈ *X*} *and B* = {< *xi*, *μB*(*xi*), *νB*(*xi*) >|*xi* ∈ *X*}*, if μA*(*xi*) + *νA*(*xi*) = *μB*(*xi*) + *νB*(*xi*) *for all xi* ∈ *X, then whichever entropy formula E above is adopted, E*(*A*) = *E*(*B*)*.*

(3) The entropy measure includes both the fuzziness and intuitionism of IF sets. However, some situations cannot be well distinguished.

For example, we show the IF entropy measure of Wang and Wang [32]:

$$E\_W(A) = \frac{1}{n} \sum\_{i=1}^n \cot(\frac{\pi}{4} + \frac{|\mu\_A(\mathbf{x}\_i) - \nu\_A(\mathbf{x}\_i)|}{4(1 + \pi\_A(\mathbf{x}\_i))} \pi)$$

The IF entropy measure of Wei et al. [33] is the following:

$$E\_{WG}(A) = \frac{1}{n} \sum\_{i=1}^{n} \cos\left(\frac{\mu\_{A}(\mathbf{x}\_{i}) - \nu\_{A}(\mathbf{x}\_{i})}{2(1 + \pi\_{A}(\mathbf{x}\_{i}))} \pi\right).$$

**Example 3.** *Let A*<sup>5</sup> = {< *x*, 0.2, 0.5 >|*x* ∈ *X*} *and A*<sup>6</sup> = {< *x*, 0.4, 0.04 >|*x* ∈ *X*} *be two IF sets. Obviously, the fuzziness of A*<sup>5</sup> *is greater than that of A*6*. Calculate the entropies of A*<sup>5</sup> *and A*<sup>6</sup> *with the entropy formulae EW and EWG.*

*We can get the following results:*

$$E\_W(A\_5) = E\_W(A\_6) = 0.6903,\ E\_{WG}(A\_5) = E\_{WG}(A\_6) = 0.9350,$$

*which are counterintuitive.*

For example, the IF entropy measure of Liu and Ren [34] is

$$E\_{LR}(A) = \frac{1}{n} \sum\_{i=1}^{n} \cos \frac{\mu\_A^2(\mathbf{x}\_i) - \nu\_A^2(\mathbf{x}\_i)}{2} \pi.$$

**Example 4.** *Let A*<sup>7</sup> = {< *x*, 0.2, 0.4 >|*x* ∈ *X*} *and A*<sup>8</sup> = {< *x*, 0.4272, 0.25 >|*x* ∈ *X*} *be two IF sets. Obviously, the fuzzinesses of A*<sup>7</sup> *and A*<sup>8</sup> *are not equal. However, calculating the entropy of A*<sup>7</sup> *and A*<sup>8</sup> *with the entropy formula ELR*, *we have ELR*(*A*7) = *ELR*(*A*8) = 0.9823*.*

Motivation: some existing cosine- and cotangent-function-based entropy measures cannot discriminate certain IF sets, leading to counterintuitive phenomena such as the cases of Examples 1 to 4. In this paper, we are therefore also devoted to the development of IF entropy measures. We propose a new IF entropy based on a cotangent function, which is an improvement of Wang's entropy [32], as follows:

$$E\_{RZ}(A) = \frac{1}{n} \sum\_{i=1}^{n} \cot(\frac{\pi}{4} + \frac{|\mu\_A(\mathbf{x}\_i) - \nu\_A(\mathbf{x}\_i)|}{4 + \pi\_A(\mathbf{x}\_i)} \pi) \tag{3}$$

which not only considers the deviation between membership and non-membership degrees *μA*(*xi*) − *νA*(*xi*), but also considers the hesitancy degree *πA*(*xi*) of the IF set.
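Equation (3) can be sketched directly; `entropy_RZ` is a hypothetical helper name, and Python's `math` module supplies the cotangent via `1/tan`.

```python
import math

# Sketch of the proposed cotangent-based IF entropy of Equation (3).
def entropy_RZ(A):
    """E_RZ(A) = (1/n) * sum cot(pi/4 + |mu - nu| / (4 + pi_A) * pi)."""
    total = 0.0
    for mu, nu in A:
        pi_A = 1.0 - mu - nu  # hesitancy degree
        total += 1.0 / math.tan(
            math.pi / 4 + abs(mu - nu) / (4 + pi_A) * math.pi
        )
    return total / len(A)

# The sets of Example 1 are now distinguished.
print(entropy_RZ([(0.3, 0.4)]), entropy_RZ([(0.2, 0.3)]))
```

Running the same function on the sets of Examples 2 to 4 reproduces the strict orderings listed after Theorem 1.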

#### **Theorem 1.** *The measure given by Equation (3) is an IF entropy.*

**Proof.** To prove the measure *ERZ*(*A*) given by Equation (3) is an IF entropy, we only need to prove it satisfies the properties in Definition 4. Obviously, for every *xi*, we have:

$$0 \le \frac{|\mu\_A(\mathbf{x}\_i) - \nu\_A(\mathbf{x}\_i)|}{4 + \pi\_A(\mathbf{x}\_i)} \pi \le \frac{\pi}{4}.$$

then

$$0 \le \cot(\frac{\pi}{4} + \frac{|\mu\_A(\mathbf{x}\_i) - \nu\_A(\mathbf{x}\_i)|}{4 + \pi\_A(\mathbf{x}\_i)}\pi) \le 1$$

Thus, we have 0 ≤ *ERZ*(*A*) ≤ 1.

(i) Let *A* be a crisp set, i.e., for ∀*xi* ∈ *X*, we have *μA*(*xi*) = 1, *νA*(*xi*) = 0 or *μA*(*xi*) = 0, *νA*(*xi*) = 1. It is obvious that *ERZ*(*A*) = 0.

Conversely, if *ERZ*(*A*) = 0, then every term in the sum vanishes, i.e., cot(*π*/4 + |*μA*(*xi*) − *νA*(*xi*)|/(4 + *πA*(*xi*)) *π*) = 0 for all *xi* ∈ *X*. Thus |*μA*(*xi*) − *νA*(*xi*)|/(4 + *πA*(*xi*)) = 1/4, and then we have *μA*(*xi*) = 1, *νA*(*xi*) = 0 or *μA*(*xi*) = 0, *νA*(*xi*) = 1. Therefore, *A* is a crisp set.

(ii) Let *μA*(*xi*) = *νA*(*xi*) for all *xi* ∈ *X*; according to Equation (3), we have

$$E\_{RZ}(A) = \frac{1}{n}\sum\_{i=1}^{n}\cot(\frac{\pi}{4}) = 1.$$

Now assume that *ERZ*(*A*) = 1; then for all *xi* ∈ *X* we have cot(*π*/4 + |*μA*(*xi*) − *νA*(*xi*)|/(4 + *πA*(*xi*)) *π*) = 1, hence |*μA*(*xi*) − *νA*(*xi*)| = 0, and we obtain *μA*(*xi*) = *νA*(*xi*) for all *xi* ∈ *X*.

(iii) By *A<sup>C</sup>* = {< *xi*, *νA*(*xi*), *μA*(*xi*) >|*xi* ∈ *X*} and Equation (3), we have:

$$E\_{RZ}(A^C) = \frac{1}{n} \sum\_{i=1}^n \cot(\frac{\pi}{4} + \frac{|\upsilon\_A(\mathbf{x}\_i) - \mu\_A(\mathbf{x}\_i)|}{4 + \pi\_A(\mathbf{x}\_i)} \pi) = E\_{RZ}(A).$$

(iv) Noting that 4 + *πA*(*xi*) = 5 − (*μA*(*xi*) + *νA*(*xi*)), construct the function:

$$f(x, y) = \cot(\frac{\pi}{4} + \frac{|x - y|}{5 - (x + y)} \pi), \text{ where } x, y \in [0, 1].$$

Now, when *x* ≤ *y*, we have *f*(*x*, *y*) = cot(*π*/4 + (*y* − *x*)/(5 − (*x* + *y*)) *π*); we need to prove that *f*(*x*, *y*) is increasing in *x* and decreasing in *y*.

We can easily derive the partial derivatives of *f*(*x*, *y*) to *x* and to *y*, respectively:

$$\frac{\partial f}{\partial x} = -\csc^2(\frac{\pi}{4} + \frac{y-x}{5-(x+y)}\pi) \cdot \frac{(2y-5)\pi}{\left[5-(x+y)\right]^2},$$

$$\frac{\partial f}{\partial y} = -\csc^2(\frac{\pi}{4} + \frac{y-x}{5-(x+y)}\pi) \cdot \frac{(5-2x)\pi}{\left[5-(x+y)\right]^2}.$$

When *x* ≤ *y*, we have ∂*f*/∂*x* ≥ 0 and ∂*f*/∂*y* ≤ 0; then *f*(*x*, *y*) is increasing in *x* and decreasing in *y*. Thus, when *μB*(*xi*) ≤ *νB*(*xi*) and *μA*(*xi*) ≤ *μB*(*xi*), *νA*(*xi*) ≥ *νB*(*xi*) are satisfied, we have *f*(*μA*(*xi*), *νA*(*xi*)) ≤ *f*(*μB*(*xi*), *νB*(*xi*)).

So cot(*π*/4 + |*μA*(*xi*) − *νA*(*xi*)|/(4 + *πA*(*xi*)) *π*) ≤ cot(*π*/4 + |*μB*(*xi*) − *νB*(*xi*)|/(4 + *πB*(*xi*)) *π*), that is, *ERZ*(*A*) ≤ *ERZ*(*B*) holds.

Similarly, we can prove that when *x* ≥ *y*, ∂*f*/∂*x* ≤ 0 and ∂*f*/∂*y* ≥ 0, so *f*(*x*, *y*) is decreasing in *x* and increasing in *y*; thus, when *μB*(*xi*) ≥ *νB*(*xi*) and *μA*(*xi*) ≥ *μB*(*xi*), *νA*(*xi*) ≤ *νB*(*xi*) are satisfied, we have *f*(*μA*(*xi*), *νA*(*xi*)) ≤ *f*(*μB*(*xi*), *νB*(*xi*)).

Therefore, if *A* ≺ *B*, we have

$$\frac{1}{n}\sum\_{i=1}^{n} f(\mu\_A(\mathbf{x}\_i), \nu\_A(\mathbf{x}\_i)) \le \frac{1}{n}\sum\_{i=1}^{n} f(\mu\_B(\mathbf{x}\_i), \nu\_B(\mathbf{x}\_i)),$$

i.e., *ERZ*(*A*) ≤ *ERZ*(*B*). □

From Equation (3), the entropies of *A*1, *A*2, *A*3, *A*4, *A*5, *A*6, *A*<sup>7</sup> and *A*<sup>8</sup> in Examples 1 to 4 can be obtained as follows:

$$\begin{array}{l} E\_{RZ}(A\_1) = 0.8634, E\_{RZ}(A\_2) = 0.8694, E\_{RZ}(A\_1) \prec E\_{RZ}(A\_2). \\ E\_{RZ}(A\_3) = 0.6298, E\_{RZ}(A\_4) = 0.8215, E\_{RZ}(A\_3) \prec E\_{RZ}(A\_4). \\ E\_{RZ}(A\_5) = 0.6356, E\_{RZ}(A\_6) = 0.5959, E\_{RZ}(A\_5) \succ E\_{RZ}(A\_6). \\ E\_{RZ}(A\_7) = 0.7486, E\_{RZ}(A\_8) = 0.7707, E\_{RZ}(A\_7) \prec E\_{RZ}(A\_8). \end{array}$$

The calculation results are in agreement with our intuition.

According to the above examples, the proposed entropy measure performs better than the measures *EY*, *EZ*, *EZJ*, *EVS*, *EB*1, *EW*, *EWG* and *ELR*. Furthermore, the new measure considers both aspects of an IF set, namely the uncertainty depicted by the deviation between membership and non-membership and the uncertainty reflected by the hesitancy degree, and is thus a good entropy formula for IF sets.

#### *4.2. Clustering Method of Group Decision Experts*

For group decision-making problems, suppose that *X* = {*x*1, *x*2, ··· , *xm*} is a set of *m* schemes and *O* = {*O*1, *O*2, ··· , *On*} is a set of *n* decision makers. The evaluation given by decision maker *Oj* ∈ *O* to scheme *xk* ∈ *X* is expressed by the IF number < *μj*(*xk*), *νj*(*xk*) >, where *μj*(*xk*) and *νj*(*xk*) are the membership (satisfaction) and non-membership (dissatisfaction) degrees of the decision maker *Oj* ∈ *O* toward the scheme *xk* ∈ *X* with respect to the fuzzy concept, satisfying 0 ≤ *μj*(*xk*) ≤ 1, 0 ≤ *νj*(*xk*) ≤ 1 and 0 ≤ *μj*(*xk*) + *νj*(*xk*) ≤ 1 (*j* = 1, 2, ··· , *n*; *k* = 1, 2, ··· , *m*).

Thus, a group decision-making problem can be expressed by the decision matrix *O* = [< *μkj*, *νkj* >]*m*×*n* as follows:

$$O = [<\mu\_{kj}, \nu\_{kj}>]\_{m \times n} = \begin{bmatrix} <\mu\_{11}, \nu\_{11}> & <\mu\_{12}, \nu\_{12}> & \cdots & <\mu\_{1n}, \nu\_{1n}> \\ <\mu\_{21}, \nu\_{21}> & <\mu\_{22}, \nu\_{22}> & \cdots & <\mu\_{2n}, \nu\_{2n}> \\ \vdots & \vdots & & \vdots \\ <\mu\_{m1}, \nu\_{m1}> & <\mu\_{m2}, \nu\_{m2}> & \cdots & <\mu\_{mn}, \nu\_{mn}> \end{bmatrix},$$

where the rows correspond to the schemes *x*1, ··· , *xm* and the columns to the decision makers *O*1, ··· , *On*.

#### 4.2.1. A New IF Similarity Measure

Measuring the similarity among any form of data is an important topic [44,45]. A measure used to find the resemblance between data is called a similarity measure. It has applications in classification, medical diagnosis, pattern recognition, data mining, clustering [46], decision making and image processing. Khan et al. [47] proposed a new similarity measure for q-rung orthopair fuzzy sets based on cosine and cotangent functions. Chen and Chang [48] proposed a new similarity measure between Atanassov's intuitionistic fuzzy sets (AIFSs) based on transformation techniques and applied it to pattern recognition problems. Beliakov et al. [49] presented a new approach for defining similarity measures for AIFSs and applied it to image segmentation. Lohani et al. [50] presented a novel probabilistic similarity measure (PSM) for AIFSs and developed a probabilistic λ-cutting algorithm for clustering. Liu et al. [51] proposed a new intuitionistic fuzzy similarity measure, introduced it into intuitionistic fuzzy decision systems and proposed an intuitionistic fuzzy three-way decision method based on it. Mei [52] constructed a similarity model between intuitionistic fuzzy sets and applied it to dynamic intuitionistic fuzzy multi-attribute decision making.

At present, most of the existing similarity measures are expressed in real numbers, which is not in line with the characteristics of intuitionistic fuzzy sets. In this section, we define a new IF similarity measure whose value is an IF number.

For any two experts *Oj* and *Ok*, let

$$X\_p(O\_j, O\_k) = \sqrt[p]{\sum\_{i=1}^m w\_i (\nu\_{ij} - \nu\_{ik})^p} \quad \text{and} \quad M\_p(O\_j, O\_k) = \sqrt[p]{\sum\_{i=1}^m w\_i (\mu\_{ij} - \mu\_{ik})^p},$$

where *wi* is the weight of scheme *xi* for all *i* ∈ {1, 2, ··· , *m*} with ∑<sub>*i*=1</sub><sup>*m*</sup> *wi* = 1, and *p* ≥ 1 is a parameter.

Let

$$\overline{\mu}\_{jk} = 1 - \max \left\{ X\_p(O\_j, O\_k), M\_p(O\_j, O\_k) \right\},$$

$$\overline{\nu}\_{jk} = \min \left\{ X\_p(O\_j, O\_k), M\_p(O\_j, O\_k) \right\}.$$

**Theorem 2.** *Let Oj and Ok be two IF sets; then,*

$$\psi(O\_j, O\_k) = <\overline{\mu}\_{jk}, \overline{\nu}\_{jk}> \tag{4}$$

*is the IF similarity measure of Oj and Ok*.

**Proof.** To prove the measure given by Equation (4) is an IF similarity measure of *Oj* and *Ok*, we only need to prove that it satisfies the properties in Definition 6.

First, we prove that *ψ*(*Oj*,*Ok*) is the form of an IFN.

Because

$$0 \le X\_p(O\_j, O\_k) = \sqrt[p]{\sum\_{i=1}^{m} w\_i (\nu\_{ij} - \nu\_{ik})^p} \le 1 \quad \text{and} \quad 0 \le M\_p(O\_j, O\_k) = \sqrt[p]{\sum\_{i=1}^{m} w\_i (\mu\_{ij} - \mu\_{ik})^p} \le 1,$$

we have 0 ≤ 1 − max{*Xp*(*Oj*,*Ok*), *Mp*(*Oj*,*Ok*)} ≤ 1, 0 ≤ min{*Xp*(*Oj*,*Ok*), *Mp*(*Oj*,*Ok*)} ≤ 1 and *μjk* + *νjk* ≤ 1. This proves that *ψ*(*Oj*,*Ok*) has the form of an IFN.

Let *ψ*(*Oj*,*Ok*) =< *μjk*, *νjk* >=< 1, 0 >; we have

$$\overline{\mu}\_{jk} = 1 - \max\{X\_p(O\_j, O\_k), M\_p(O\_j, O\_k)\} = 1$$

and *νjk* = min{*Xp*(*Oj*,*Ok*), *Mp*(*Oj*,*Ok*)} = 0, so *Xp*(*Oj*,*Ok*) = *Mp*(*Oj*,*Ok*) = 0. Because of the arbitrariness of *wi*, we get *μij* = *μik* and *νij* = *νik* for all *i* ∈ {1, 2, ··· , *m*}, that is, *Oj* = *Ok*.

Now assume that *Oj* = *Ok*; then for all *i* ∈ {1, 2, ··· , *m*} we have *μij* = *μik* and *νij* = *νik*, so *Xp*(*Oj*, *Ok*) = *Mp*(*Oj*, *Ok*) = 0, *μjk* = 1 − max{*Xp*(*Oj*,*Ok*), *Mp*(*Oj*,*Ok*)} = 1 and *νjk* = min{*Xp*(*Oj*,*Ok*), *Mp*(*Oj*,*Ok*)} = 0, that is, *ψ*(*Oj*,*Ok*) =< 1, 0 >.

Property 3 clearly holds.

If *O*1 ⊆ *O*2 ⊆ *O*3, i.e., *μi*1 ≤ *μi*2 ≤ *μi*3 and *νi*1 ≥ *νi*2 ≥ *νi*3 for all *i* ∈ {1, 2, ··· , *m*}, then (*μi*1 − *μi*2)*<sup>p</sup>* ≤ (*μi*1 − *μi*3)*<sup>p</sup>* and (*νi*1 − *νi*2)*<sup>p</sup>* ≤ (*νi*1 − *νi*3)*<sup>p</sup>* for all *i* ∈ {1, 2, ··· , *m*}.

We have *Xp*(*O*1,*O*2) ≤ *Xp*(*O*1,*O*3) and *Mp*(*O*1,*O*2) ≤ *Mp*(*O*1,*O*3); therefore, *μ*12 ≥ *μ*13 and *ν*12 ≤ *ν*13, that is, *ψ*(*O*1,*O*3) ⊆ *ψ*(*O*1,*O*2). Similarly, it can be proved that *ψ*(*O*1,*O*3) ⊆ *ψ*(*O*2,*O*3).

This completes the proof. □

For the IF similarity measure of Equation (4), since the schemes are equally important, this paper takes *p* = 2 and *wi* = 1/*m* for all *i* ∈ {1, 2, ··· , *m*}. Using this formula, the IF decision matrix *O* = [< *μkj*, *νkj* >]*m*×*n* can be transformed into the IF similarity matrix *Z* = (*zjk*)*n*×*n*, where *zjk* = *ψ*(*Oj*,*Ok*) =< *μjk*, *νjk* > is an IFN.


Attitudes toward risk vary from person to person. Let *β* ∈ [0, 1] be the risk factor; then the IF similarity matrix *Z* = (*zjk*)*n*×*n* can be transformed into a real matrix *R* = (*rjk*)*n*×*n*, where *rjk* = *μjk* + *β*(1 − *μjk* − *νjk*):

$$R = \left(r\_{jk}\right)\_{n \times n} = \begin{bmatrix} r\_{11} & r\_{12} & \cdots & r\_{1n} \\ r\_{21} & r\_{22} & \cdots & r\_{2n} \\ & \cdots & \cdots & \cdots \\ r\_{n1} & r\_{n2} & \cdots & r\_{nn} \end{bmatrix}$$
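The similarity measure with *p* = 2, equal scheme weights *wi* = 1/*m*, and the risk transformation can be sketched together; the function names and the two expert evaluation vectors are hypothetical, and absolute differences are used inside the *p*-th powers so the root is well-defined for any *p* ≥ 1 (an assumption consistent with *p* = 2).

```python
# Sketch of the proposed IF similarity measure (Equation (4)) and the risk
# transformation r_jk = mu_jk + beta * (1 - mu_jk - nu_jk).
def similarity(Oj, Ok, p=2):
    """Return <mu_jk, nu_jk> with equal scheme weights w_i = 1/m."""
    m = len(Oj)
    w = 1.0 / m
    X = sum(w * abs(nj - nk) ** p for (_, nj), (_, nk) in zip(Oj, Ok)) ** (1.0 / p)
    M = sum(w * abs(mj - mk) ** p for (mj, _), (mk, _) in zip(Oj, Ok)) ** (1.0 / p)
    return 1.0 - max(X, M), min(X, M)

def to_real(sim, beta):
    """Map an IF similarity <mu, nu> to a real number via the risk factor beta."""
    mu, nu = sim
    return mu + beta * (1.0 - mu - nu)

# Hypothetical evaluations of two schemes by two experts.
Oj = [(0.6, 0.2), (0.4, 0.3)]
Ok = [(0.5, 0.3), (0.4, 0.4)]
print(similarity(Oj, Ok), to_real(similarity(Oj, Ok), beta=0.5))
```

Identical evaluation vectors yield the maximal similarity < 1, 0 >, matching property (2) of Definition 6.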

#### 4.2.2. Threshold Change Rate Analysis Method

The method of Zhou et al. [40] is adopted in this section.

Let the clustering threshold *θ* = *θt*, where *θ<sup>t</sup>* ∈ [0, 1]. If

$$r\_{jk} \ge \theta\_t, \quad j \ne k, \tag{5}$$

then elements *Ok* and *Oj* are considered to have the same properties. The closer the threshold is to 1, the finer the classification is.

In Zhou et al. [40], the selection of the optimal clustering threshold *θ<sup>i</sup>* can be determined by analyzing the change rate *Ci* of *θi*. The rate of change *Ci* is given as follows:

$$C\_{i} = \frac{\theta\_{i-1} - \theta\_{i}}{n\_{i} - n\_{i-1}} \tag{6}$$

where *i* indexes the clusterings as *θ* decreases from large to small, *ni* and *ni*−1 are the numbers of objects in the *i*-th and (*i* − 1)-th clusterings, respectively, and *θi* and *θi*−1 are the thresholds for the *i*-th and (*i* − 1)-th clusterings, respectively. If

$$C\_{i} = \max\_{j} \{ C\_{j} \} \tag{7}$$

then the threshold of the *i*-th clustering is optimal.

It can be seen from Equation (6) that the greater the change rate *Ci* of the clustering threshold *θ* is, the greater the difference between the two corresponding clusterings and the more obvious the boundary between classes. When *Ci* attains its maximum, the corresponding *θ* is the optimal clustering threshold, which makes the difference between the clusters obtained by the *i*-th clustering the largest, thus realizing the purpose and significance of the classification.
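The threshold selection rule of Equations (6) and (7) can be sketched as follows. The helper name and the sample thresholds and class counts are hypothetical; we read *ni* as the number of classes obtained at the *i*-th threshold and compare the magnitudes of the rates, an assumption made because the class count shrinks as *θ* falls.

```python
# Sketch of the threshold change-rate rule: pick the clustering step whose
# rate C_i (threshold drop per unit change in class count) is largest, and
# return the threshold theta_i of that step.
def best_threshold(thetas, ns):
    """thetas: thresholds in decreasing order; ns: class counts at each."""
    rates = [
        abs(thetas[i - 1] - thetas[i]) / abs(ns[i] - ns[i - 1])
        for i in range(1, len(thetas))
    ]
    i = max(range(len(rates)), key=lambda j: rates[j])
    return thetas[i + 1]  # rates[i] corresponds to the (i + 1)-th clustering

thetas = [1.0, 0.95, 0.80, 0.60]   # hypothetical clustering thresholds
ns = [6, 5, 3, 2]                  # hypothetical class counts at each threshold
print(best_threshold(thetas, ns))  # → 0.6
```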

#### *4.3. Analysis of Group Decision Making Expert Group Weighting*

In group decision-making problems, because each expert has a different specialty, experience and preference, their evaluation information should be treated differently. In order to reflect the status and importance of each expert in decision making, it is of great significance to determine the expert weight reasonably.

Two aspects need to be considered in expert weighting, namely, the weight between categories and the weight within categories. The between-category weight mainly considers the number of experts in each category. For a category with large capacity, the evaluation results represent the opinions of most experts, so the category should be given a larger weight, reflecting the principle that the minority is subordinate to the majority; a category with smaller capacity should be given a smaller weight.

Suppose that *n* experts are divided into *t* categories, and the number of experts in the *i*-th category is *ϕi* (*ϕi* ≤ *n*); the between-category weight *λi* is as follows:

$$\lambda\_i = \frac{\varphi\_i^2}{\sum\_{k=1}^t \varphi\_k^2}, \quad i = 1, 2, \cdots, t. \tag{8}$$

The within-category weight of an expert can be measured by the information contained in the IF evaluations given by the expert. Entropy is a measure of the uncertainty and quantity of information. If the entropy of an expert's evaluation information is smaller, its uncertainty is smaller, which means that the expert's logic is clearer, the amount of information provided is greater, and the expert's role in the comprehensive evaluation is larger, so the expert should be given more weight. Therefore, the within-category weight can be measured by IF entropy.

The evaluation vector of expert *k* is *Ok* = (< *μk*(*x*1), *νk*(*x*1) >, ··· , < *μk*(*x*5), *νk*(*x*5) >) (here *m* = 5 schemes). The IF entropy corresponding to Equation (3) is expressed as follows:

$$E(k) = \frac{1}{5} \sum\_{i=1}^{5} \cot(\frac{\pi}{4} + \frac{|\mu\_k(\mathbf{x}\_i) - \nu\_k(\mathbf{x}\_i)|}{4 + \pi\_k(\mathbf{x}\_i)} \pi) \tag{9}$$

The within-category weight *aik* of the *k*-th expert in category *i* is as follows:

$$a\_{ik} = \frac{1 - E(k)}{\sum\_{k=1}^{\varphi\_i} [1 - E(k)]} \tag{10}$$

By linearly combining *λi* and *aik*, the total weight *ωk* of expert *k* is obtained:

$$
\omega\_k = \lambda\_i \cdot a\_{ik}, \space k = 1, 2, \space \cdot \cdot \text{ , } n. \tag{11}
$$
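Equations (8), (10) and (11) can be sketched in a few lines of Python. The clusters and entropy vector below are taken from the worked example in Section 5, so the resulting weights can be compared with the values reported there; the function names are our own:

```python
def between_category_weights(sizes):
    """Equation (8): lambda_i = phi_i^2 / sum_k phi_k^2."""
    total = sum(phi ** 2 for phi in sizes)
    return [phi ** 2 / total for phi in sizes]

def within_category_weights(entropies):
    """Equation (10): a_ik = (1 - E(k)) / sum over the category of (1 - E(j))."""
    total = sum(1.0 - e for e in entropies)
    return [(1.0 - e) / total for e in entropies]

def total_weights(clusters, entropy):
    """Equation (11): omega_k = lambda_i * a_ik (experts labelled from 1)."""
    lam = between_category_weights([len(c) for c in clusters])
    omega = {}
    for lam_i, cluster in zip(lam, clusters):
        a = within_category_weights([entropy[k - 1] for k in cluster])
        for a_ik, k in zip(a, cluster):
            omega[k] = lam_i * a_ik
    return [omega[k] for k in sorted(omega)]

# Clusters and IF entropy vector from the Section 5 example:
clusters = [(1, 4, 8), (3, 5, 7), (2, 9), (6,)]
entropy = [0.6868, 0.7405, 0.5538, 0.7364, 0.4995,
           0.5935, 0.5507, 0.7159, 0.7339]
omega = total_weights(clusters, entropy)
print([round(w, 4) for w in omega])
```

Note that the between-category weights sum to 1 and the within-category weights sum to 1 inside each category, so the total weights *ω<sup>k</sup>* automatically sum to 1.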

#### *4.4. Intelligent Expert Combination Weighting Algorithm*

A cluster analysis method is often used to realize the expert weighting in group decision making. The basic principle of expert cluster analysis is to measure the similarity degree of expert evaluation opinions according to certain standards and cluster experts based on the similarity degree. In short, Figure 4 shows the general scheme of the expert clustering method.

**Figure 4.** The general scheme of the expert clustering method.

To sum up, this paper proposes an expert combination weighting scheme for group decision making, and obtains the following algorithm, which we call the intelligent expert combination weighting algorithm (see Algorithm 1).

**Algorithm 1.** Intelligent expert combination weighting algorithm

**Input:** the IF decision matrix *O* = [< *μij*, *νij* >]*n*×*m* given by the experts, where *I* = {1, 2, ··· , *n*} and *J* = {1, 2, ··· , *m*}.

1: **For** *j* ∈ *I* **do**

2: **For** *k* ∈ *I* **do**

3: **For** *i* ∈ *J* **do**

4: Calculate the IF similarity measure between experts *ψ*(*Oj*, *Ok*) = < *μjk*, *νjk* > according to Formula (4).

5: **End for**

6: Let *zjk* = *ψ*(*Oj*, *Ok*) = < *μjk*, *νjk* >.

7: **End for**

8: **End for**

9: The IF decision matrix *O* = [< *μij*, *νij* >]*n*×*m* is thus transformed into the IF similarity matrix *Z* = (*zjk*)*n*×*n*.

10: By selecting the risk factor *β*, the IF similarity matrix *Z* = (*zjk*)*n*×*n* is transformed into the real matrix *R* = (*rjk*)*n*×*n*.

11: According to the real matrix *R* = (*rjk*)*n*×*n*, the dynamic clustering graph is drawn, and the optimal clustering threshold is determined by Formulae (6) and (7). According to this threshold, the experts are classified into *L* categories.

12: **For** *l* ∈ *L* **do**

13: Using Formula (8), the weight between categories *λl* is determined.

14: **For** *k* ∈ *I* **do**

15: Using Formula (10), the weight within the category *alk* is determined.

16: Using Formula (11), the total weight of experts *ωk* is calculated.

17: **End for**

18: **End for**

19: **For** *i* ∈ *J* **do**

20: **For** *k* ∈ *I* **do**

21: The weighted operator (2) of IF sets is used to aggregate the expert IF group decision-making information.

22: **End for**

23: According to Definition 8, the scores and accuracy values of each scheme *xi* are obtained.

24: **End for**

25: **return** The ranking results of the schemes *xi*.
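Step 11 of Algorithm 1 cuts the real similarity matrix at the chosen threshold. A minimal sketch of that classification step, assuming experts fall into the same category whenever they are connected by a chain of similarities reaching the threshold (the *β*-transformation of Step 10 is not implemented here, and the toy matrix values are hypothetical, not the paper's data):

```python
def classify(R, theta):
    """Group experts whose pairwise similarity reaches the threshold theta.

    R is a symmetric matrix of real similarity degrees r_jk; experts joined
    by a chain of similarities >= theta end up in the same category
    (connected components, found with a small union-find).
    """
    n = len(R)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for j in range(n):
        for k in range(j + 1, n):
            if R[j][k] >= theta:
                parent[find(j)] = find(k)

    groups = {}
    for j in range(n):
        groups.setdefault(find(j), []).append(j + 1)  # 1-based expert labels
    return sorted(groups.values())

# Toy 4-expert matrix (hypothetical values, not the paper's data):
R = [[1.00, 0.95, 0.60, 0.55],
     [0.95, 1.00, 0.58, 0.52],
     [0.60, 0.58, 1.00, 0.93],
     [0.55, 0.52, 0.93, 1.00]]
print(classify(R, 0.9))  # -> [[1, 2], [3, 4]]
```

Lowering the threshold merges categories; raising it splits them, which is exactly the structure read off the dynamic clustering graph.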

#### **5. Performance Analysis**

The railway is an important piece of national infrastructure and a livelihood project, and it is a resource-saving and environment-friendly mode of transportation. In recent years, China's railway development has made remarkable achievements; however, compared with the needs of economic and social development, with other modes of transportation and with advanced foreign railway techniques, the railway is still a weak part of China's whole transportation system [53,54]. In order to further accelerate railway construction, expand the scale of the railway network and improve its layout, structure and quality, the state promulgated the Medium- and Long-Term Railway Network Plan, which puts forward a series of railway plans, including the plan for railway reconstruction.

A railway reconstruction project is carried out through a series of communication, coordination and cooperation efforts, and its complex work is arranged in a limited work area, so it encounters many unexpected challenges; carelessness or inadequate planning may lead to accidents and cause significant damage to life, assets, the environment and society. According to the literature [55], we can conclude that there are about seven types of risks in railway reconstruction projects: financial and economic risks, contract and legal risks, subcontractor-related risks, operation and safety risks, political and social risks, design risks and force majeure risks.

It is assumed that nine experts *Oi* (*i* = 1, 2, ··· , 9) form a decision-making group to rank five alternatives *xj* (*j* = 1, 2, 3, 4, 5) according to the seven evaluation attributes above. Evaluation alternatives always contain ambiguity and diversity of meaning. In addition, in terms of qualitative attributes, human assessment is subjective and therefore inaccurate. In this case, an IF set is very advantageous, since it can describe the decision process more accurately, so IF sets are used in this study. After expert investigation and statistical analysis, we can obtain the satisfaction degree *μij* and the dissatisfaction degree *νij* given by each expert *Oi* (*i* = 1, 2, ··· , 9) for each scheme *xj* (*j* = 1, 2, 3, 4, 5). The specific data are given in Table 1.


**Table 1.** Expert evaluation information on the program.

The calculation steps of the proposed method are given as follows:

Step 1. According to Equation (4), the IF similarity matrix *Z* is obtained. Since *Z* is symmetric, only its upper triangular part is listed:

$$
Z = \begin{bmatrix}
\langle 1,0\rangle & \langle 0.805,0.152\rangle & \langle 0.675,0.304\rangle & \langle 0.953,0.033\rangle & \langle 0.672,0.327\rangle & \langle 0.746,0.253\rangle & \langle 0.668,0.311\rangle & \langle 0.945,0.023\rangle & \langle 0.752,0.235\rangle \\
 & \langle 1,0\rangle & \langle 0.685,0.261\rangle & \langle 0.821,0.125\rangle & \langle 0.685,0.274\rangle & \langle 0.758,0.211\rangle & \langle 0.706,0.246\rangle & \langle 0.816,0.133\rangle & \langle 0.876,0.075\rangle \\
 & & \langle 1,0\rangle & \langle 0.694,0.271\rangle & \langle 0.946,0.051\rangle & \langle 0.751,0.245\rangle & \langle 0.938,0.060\rangle & \langle 0.691,0.276\rangle & \langle 0.689,0.255\rangle \\
 & & & \langle 1,0\rangle & \langle 0.689,0.294\rangle & \langle 0.762,0.229\rangle & \langle 0.688,0.276\rangle & \langle 0.966,0.022\rangle & \langle 0.768,0.210\rangle \\
 & & & & \langle 1,0\rangle & \langle 0.722,0.277\rangle & \langle 0.954,0.038\rangle & \langle 0.686,0.298\rangle & \langle 0.698,0.245\rangle \\
 & & & & & \langle 1,0\rangle & \langle 0.745,0.250\rangle & \langle 0.755,0.216\rangle & \langle 0.705,0.286\rangle \\
 & & & & & & \langle 1,0\rangle & \langle 0.683,0.279\rangle & \langle 0.720,0.220\rangle \\
 & & & & & & & \langle 1,0\rangle & \langle 0.763,0.220\rangle \\
 & & & & & & & & \langle 1,0\rangle
\end{bmatrix}
$$

Step 2. By selecting the risk factor *β* = 0.5, i.e., moderate risk, the real matrix *R* is obtained.


Step 3. According to Equation (5), let *i* take all the values in turn to get a series of classifications, and then draw a dynamic clustering graph according to Equations (5) and (6), as shown in Figure 5.

**Figure 5.** Dynamic clustering graph.

According to Equation (6), we have

$$C\_1 = \frac{1 - 0.972}{2 - 0} = 0.014, \quad C\_2 = \frac{0.972 - 0.961}{3 - 2} = 0.011, \quad C\_3 = \frac{0.961 - 0.958}{5 - 3} = 0.0015,$$
$$C\_4 = \frac{0.958 - 0.948}{6 - 5} = 0.01, \quad C\_5 = \frac{0.948 - 0.901}{8 - 6} = 0.0235, \quad C\_6 = \frac{0.901 - 0.770}{9 - 8} = 0.131.$$

Since it is meaningless for each expert to form a separate category or for all experts to be grouped into a single category, we do not consider *C*6; then, we have *C*<sup>5</sup> = max{*C*1, *C*2, *C*3, *C*4, *C*5}.
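The threshold selection can be reproduced from the dynamic clustering data. Equation (6) itself is not reproduced in this excerpt; the form *C<sub>i</sub>* = (*θ*<sub>*i*−1</sub> − *θ<sub>i</sub>*)/(*n<sub>i</sub>* − *n*<sub>*i*−1</sub>) used below, where *θ<sub>i</sub>* is the *i*th clustering threshold and *n<sub>i</sub>* the corresponding number of categories, is inferred from the printed values:

```python
# Thresholds theta_i and category counts n_i read off the dynamic clustering
# graph (Figure 5), prefixed with theta_0 = 1 and n_0 = 0.
theta = [1.0, 0.972, 0.961, 0.958, 0.948, 0.901, 0.770]
n = [0, 2, 3, 5, 6, 8, 9]

# C_i = (theta_{i-1} - theta_i) / (n_i - n_{i-1}), i = 1, ..., 6
C = [(theta[i - 1] - theta[i]) / (n[i] - n[i - 1]) for i in range(1, len(theta))]
print([round(c, 4) for c in C])  # [0.014, 0.011, 0.0015, 0.01, 0.0235, 0.131]

# C_6 (every expert its own category) is excluded, so C_5 is the maximum
# and the optimal cut lies between 0.948 and 0.901.
best = max(range(len(C) - 1), key=lambda i: C[i]) + 1
print(best)  # -> 5
```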

Therefore, taking *θ* = 0.891 as the optimal clustering threshold, the clustering result is the most reasonable and consistent with the actual situation, and the clustering results are shown in Figure 6. We can see that the corresponding clustering results are as follows:

{(1 4 8), (3 5 7), (2 9), (6)}

**Figure 6.** Clustering results.

Step 4. According to Equation (8), the weight of experts between categories is as follows:

*λ*<sup>1</sup> = 0.3913, *λ*<sup>2</sup> = 0.3913, *λ*<sup>3</sup> = 0.1739, *λ*<sup>4</sup> = 0.0435.

Step 5. According to Equation (9), the entropy vector of the expert group is obtained as follows:

(0.6868, 0.7405, 0.5538, 0.7364, 0.4995, 0.5935, 0.5507, 0.7159, 0.7339)

According to Equation (10), the weight of experts within the category is shown in Table 2.

**Table 2.** The weight of experts within the category.


Step 6. We weight *λ<sup>i</sup>* and *aik* linearly to get the total weight vector *ω<sup>k</sup>* of experts as follows:

(0.1424, 0.0859, 0.1251, 0.1198, 0.1403, 0.0435, 0.1260, 0.1291, 0.0748).

Step 7. According to the total weight of nine experts, the weighted aggregation operator given by Equation (2) is used to aggregate the expert information, and the comprehensive evaluation vector is obtained as follows:

(0.3616, 0.4504), (0.5226, 0.3878), (0.5932, 0.3218), (0.4749, 0.3853), (0.4972, 0.3718).

According to Equation (1), the scores and accuracy values of the comprehensive evaluation vector are calculated as follows:

$$M(x\_1) = -0.089, \; M(x\_2) = 0.1348, \; M(x\_3) = 0.2714, \; M(x\_4) = 0.0896, \; M(x\_5) = 0.1254;$$
$$\Delta(x\_1) = 0.812, \; \Delta(x\_2) = 0.9104, \; \Delta(x\_3) = 0.915, \; \Delta(x\_4) = 0.8602, \; \Delta(x\_5) = 0.869.$$

Therefore, the priority of the five alternatives is *x*<sup>3</sup> ≻ *x*<sup>2</sup> ≻ *x*<sup>5</sup> ≻ *x*<sup>4</sup> ≻ *x*1, and the optimal one is *x*3.
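Definition 8 is not reproduced in this excerpt, but the printed values are consistent with the standard IF score *M*(*x*) = *μ* − *ν* and accuracy Δ(*x*) = *μ* + *ν*; under that assumption, the final ranking can be checked as follows:

```python
# Comprehensive evaluation vector from Step 7: (mu, nu) pairs for x1..x5.
agg = [(0.3616, 0.4504), (0.5226, 0.3878), (0.5932, 0.3218),
       (0.4749, 0.3853), (0.4972, 0.3718)]

score = [round(mu - nu, 4) for mu, nu in agg]      # assumed M(x) = mu - nu
accuracy = [round(mu + nu, 4) for mu, nu in agg]   # assumed Delta(x) = mu + nu

# Rank alternatives by decreasing score, breaking ties by accuracy.
ranking = sorted(range(1, 6), key=lambda i: (score[i - 1], accuracy[i - 1]),
                 reverse=True)
print(score)    # [-0.0888, 0.1348, 0.2714, 0.0896, 0.1254]
print(ranking)  # [3, 2, 5, 4, 1]
```

The computed scores agree with the printed values to four decimals (the paper rounds *M*(*x*1) to −0.089), and the ranking matches *x*3 ≻ *x*2 ≻ *x*5 ≻ *x*4 ≻ *x*1.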

#### **6. Conclusions and Future Work**

This article listed some counterintuitive phenomena of several existing intuitionistic fuzzy entropies. We defined an improved intuitionistic fuzzy entropy based on a cotangent function and a new IF similarity measure whose value is an IF number, applied them to the expert weight problem of group decision making, and put forward the expert combination weighting scheme. Finally, this method was applied to a railway reconstruction case to illustrate its effectiveness.

In the future, we will apply the expert weight combination weighting scheme proposed in this paper to situations in real life. We will also formulate this kind of entropy measure and similarity measure for interval-valued IF sets [56], Fermatean fuzzy sets, spherical fuzzy sets, t-spherical fuzzy sets, picture fuzzy sets, single-valued neutrosophic sets [55,57], Plithogenic sets [58] and linear fuzzy sets.

While studying the theoretical method, this paper used numerical examples rather than actual production data, which is a limitation of this paper. In future research, we will apply the proposed expert weight combination weighting scheme to practical production problems.

**Author Contributions:** L.Z. and H.R. designed the method and wrote the paper; T.Y. and N.X. analyzed the data. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was mainly supported by the National Natural Science Foundation of China (No. 71661012) and scientific research project of the Jiangxi Provincial Department of Education (No. GJJ210827).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support the findings of this study are included within the article.

**Conflicts of Interest:** The authors declare that they have no conflict of interest regarding this work.

#### **References**


## *Article* **Group Decision-Making Problems Based on Mixed Aggregation Operations of Interval-Valued Fuzzy and Entropy Elements in Single- and Interval-Valued Fuzzy Environments**

**Weiming Li <sup>1</sup> and Jun Ye 2,\***


**Abstract:** Fuzzy sets and interval-valued fuzzy sets are two kinds of fuzzy information expression forms in real uncertain and vague environments. Their mixed multivalued information expression and operational problems are very challenging and indispensable issues in group decision-making (GDM) problems. To solve single- and interval-valued fuzzy multivalued hybrid information expression, operations, and GDM issues, this study first presents the notion of a single- and interval-valued fuzzy multivalued set/element (SIVFMS/SIVFME) with identical and/or different fuzzy values. To effectively solve operational problems for various SIVFME lengths, SIVFMS/SIVFME is converted into the interval-valued fuzzy and entropy set/element (IVFES/IVFEE) based on the mean and information entropy of SIVFME. Then, the operational relationships of IVFEEs and the expected value function and sorting rules of IVFEEs are defined. Next, the IVFEE weighted averaging and geometric operators and their mixed-weighted-averaging operation are proposed. In terms of the mixed-weighted-averaging operation and expected value function of IVFEEs, a GDM method is developed to solve multicriteria GDM problems in the environment of SIVFMSs. Finally, the proposed GDM method was utilized for a supplier selection problem in a supply chain as an actual sample to show the rationality and efficiency of SIVFMSs. Through the comparative analysis of relative decision-making methods, we found the superiority of this study in that the developed GDM method not only compensates for the defects of existing GDM methods, but also makes the GDM process more reasonable and flexible.

**Keywords:** single- and interval-valued fuzzy multivalued set; interval-valued fuzzy and entropy set; interval-valued fuzzy and entropy element weighted averaging operator; interval-valued fuzzy and entropy element weighted geometric operator; mixed-weighted-averaging operation; group decision making

**MSC:** 03E72; 91B06

#### **1. Introduction**

Fuzzy sets (FS) [1] and interval-valued fuzzy sets (IVFSs) [2] are two important tools of fuzzy information expression in real uncertain and vague environments. A bag/fuzzy multiset [3,4] or an interval-valued fuzzy multiset (IVFM) [5] was proposed as the extension of FS or IVFS, where each element in a universe set can occur more times with different and/or identical fuzzy values or interval-valued fuzzy values. Therefore, they have been used in various areas [6–10]. In a hesitant situation, a hesitant fuzzy set (HFS) [11] can represent a set of a few different fuzzy values of each element in the set. To express the hybrid information of HFS and IVFS, some researchers presented cubic HFSs and applied them to medical assessments of prostatic patients [12] and multicriteria decision-making

**Citation:** Li, W.; Ye, J. Group Decision-Making Problems Based on Mixed Aggregation Operations of Interval-Valued Fuzzy and Entropy Elements in Single- and Interval-Valued Fuzzy Environments. *Mathematics* **2022**, *10*, 1077. https:// doi.org/10.3390/math10071077

Academic Editor: Pasi Luukka

Received: 26 February 2022 Accepted: 25 March 2022 Published: 27 March 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

problems [13]; then, other researchers introduced hesitant cubic fuzzy sets (HCFSs) and applied them to multicriteria (group) decision-making problems [14,15]. However, their hesitant information does not contain the same fuzzy values corresponding to the hesitant characteristics/concept [11], which is different from the fuzzy multiset concept.

Regarding the probability of an element belonging to a set, hesitant probabilistic fuzzy sets (HPFSs) [16,17] were introduced and applied to hesitant probabilistic fuzzy decision-making problems. However, an HPFS only contains the probabilistic values of a few identical fuzzy values, resulting in probabilistic distortion. Since the probabilistic method requires a lot of fuzzy data (more sample data) to maintain reasonable probabilistic values, the probabilistic values of small data samples lead to irrationality/distortion. Therefore, it is difficult to apply the probabilistic method in actual group decision-making (GDM) applications because the evaluation values of a lot of decision makers are required to ensure the rationality of the probabilistic values. Hence, it is obvious that the use of HPFSs may have some flaws from the perspective of probability.

Recently, Turkarslan et al. [18] introduced a consistency fuzzy set/element (CFS/CFE) based on the mean of a fuzzy sequence and the complement of the standard deviation of a fuzzy sequence in a fuzzy multiset to reasonably simplify the information expression and operation of different fuzzy sequence lengths, and then proposed a cosine similarity measure of CFSs for medical diagnosis in the case of fuzzy multisets. Furthermore, Du and Ye [19] presented cubic fuzzy multivalued sets (CFMSs) and converted them into cubic fuzzy consistency sets with the help of the mean of a fuzzy sequence and the complement of the standard deviation of a fuzzy sequence. Then, they developed a hybrid weighted arithmetic and geometric aggregation operator for GDM with CFMSs. In general, the concept of standard deviation is only applicable to the calculation of fuzzy sequences containing normal distributions, which exposes its limitations.

In real GDM problems, single- and interval-valued fuzzy hybrid multivalued information expression and operation problems are very challenging issues due to the uncertainty and incompleteness of each decision-maker's judgement/cognition of the evaluated object. However, the existing fuzzy multiset/HFS/HPFS/IVFM/CFMS cannot represent the single- and interval-valued fuzzy hybrid multivalued information with identical and/or different fuzzy values that is given by a group of decision makers in the GDM process. In a GDM problem, each expert/decision maker can assign his/her single-valued or interval-valued fuzzy evaluation value in terms of his/her cognition of the evaluated object in the assessment process. For example, five experts evaluate a car's "comfort" with a group of fuzzy values (0.5, 0.5, 0.6, [0.6, 0.7], [0.7, 0.8]). The fuzzy values 0.5, 0.5, and 0.6 are given by three of the five experts, and the interval-valued/uncertain fuzzy values [0.6, 0.7] and [0.7, 0.8] are given by two of the five experts. In this issue, the existing fuzzy multiset/HFS/HPFS/IVFM/CFMS can only represent a fuzzy sequence or an interval-valued fuzzy sequence, but they cannot express such a group of single- and interval-valued fuzzy hybrid values (the hybrid set of two different fuzzy sequences) simultaneously. Meanwhile, there is no research on a single- and interval-valued fuzzy multivalued framework in the existing literature. Therefore, it is necessary to propose a new expression form to effectively express single- and interval-valued fuzzy hybrid multivalued information and to overcome the defects of the various existing fuzzy expressions. Motivated by this new idea, this paper first puts forward the concept of a single- and interval-valued fuzzy multivalued set/element (SIVFMS/SIVFME).
Then, a new information entropy measure of SIVFME is proposed to transform SIVFMS/SIVFME into an interval-valued fuzzy and entropy set/element (IVFES/IVFEE) based on the mean and information entropy of SIVFME, and then some operations of IVFEEs and the expected value function and sorting rules of IVFEEs are defined. Next, the IVFEE weighted averaging (IVFEEWA) and IVFEE weighted geometric (IVFEEWG) operators and their mixed-weighted-averaging operation are proposed to overcome the flaws of the IVFEEWA operator, which mainly attends to group arguments, and the IVFEEWG operator, which mainly attends to individual arguments [19], in the IVFEE aggregation process. According to the proposed mixed-weighted-averaging operation and the expected value function, a GDM method is developed to solve multicriteria GDM problems with SIVFMSs. Finally, the proposed GDM method is utilized for an actual supplier selection problem in a supply chain to show the rationality and effectiveness in the setting of SIVFMSs. The results indicate that the proposed GDM method makes the GDM process more reasonable and flexible.

This original study demonstrates the following main contributions and highlights:


The remainder of this article is organized as follows. In Section 2, we present the concepts of SIVFMS, SIVFME, information entropy, and IVFEE. Then, we define the operational laws of IVFEEs and the expected value function and sorting rules of IVFEEs. The IVFEEWA and IVFEEWG operators and their mixed-weighted-averaging operation are presented in Section 3. In Section 4, a GDM method is given by using the mixed-weighted-averaging operation and the expected value function. In Section 5, the proposed GDM method is applied to an actual supplier selection problem in a supply chain to show its rationality and effectiveness when dealing with SIVFMSs, and then the superiorities of the proposed method are indicated by comparative analysis. Section 6 depicts conclusions and future research.

#### **2. SIVFMS and IVFES**

**Definition 1.** *Let U = {u1, u2,* ... *, us} be a finite universe set U. Then, a single- and interval-valued fuzzy multivalued set H in U is defined as follows:*

$$H = \{ \langle u\_k, F\_H(u\_k) \rangle | u\_k \in \mathcal{U} \}\tag{1}$$

*where FH*(*uk*) *for uk* ∈ *U (k = 1, 2,* ... *, s) is a single- and interval-valued fuzzy sequence of the element uk in the set H, denoted as an increasing fuzzy sequence*

$$F\_H(u\_k) = \left(\lambda\_H^1(u\_k), \lambda\_H^2(u\_k), \ldots, \lambda\_H^{a\_k}(u\_k), [\lambda\_H^{L1}(u\_k), \lambda\_H^{U1}(u\_k)], [\lambda\_H^{L2}(u\_k), \lambda\_H^{U2}(u\_k)], \ldots, [\lambda\_H^{Lb\_k}(u\_k), \lambda\_H^{Ub\_k}(u\_k)]\right)$$

*with identical and/or different fuzzy values, such that* $0 \leq \lambda\_H^1(u\_k) \leq \lambda\_H^2(u\_k) \leq \ldots \leq \lambda\_H^{a\_k}(u\_k) \leq 1$ *with ak single-valued fuzzy values and* $[\lambda\_H^{L1}(u\_k), \lambda\_H^{U1}(u\_k)] \subseteq [\lambda\_H^{L2}(u\_k), \lambda\_H^{U2}(u\_k)] \subseteq \ldots \subseteq [\lambda\_H^{Lb\_k}(u\_k), \lambda\_H^{Ub\_k}(u\_k)] \subseteq [0, 1]$ *with bk interval-valued fuzzy values.*

Especially when all *bk* = 0 or *ak* = 0 for *k* = 1, 2, ... , *s*, SIVFMS degenerates to a fuzzy multiset or an IVFM.

For simplicity, the *k*th element *FH*(*uk*) in *H* is denoted as the *k*th SIVFME:

$$F\_{Hk} = \left(\lambda\_{Hk}^1, \lambda\_{Hk}^2, \ldots, \lambda\_{Hk}^{a\_k}, [\lambda\_{Hk}^{L1}, \lambda\_{Hk}^{U1}], [\lambda\_{Hk}^{L2}, \lambda\_{Hk}^{U2}], \ldots, [\lambda\_{Hk}^{Lb\_k}, \lambda\_{Hk}^{Ub\_k}]\right).$$

To solve the difficult conversions between different single- and interval-valued fuzzy sequence lengths, it is necessary to convert SIVFMS into IVFES in terms of the mean and information entropy of SIVFME.

First, the concept of the Shannon/probability entropy [20] is introduced below.

Set *R* = {*r*1, *r*2, ... , *rs*} as a probability distribution on a set of random variables. Thus, the Shannon entropy of the probability distribution *R* is denoted as [20]

$$E(R) = -\sum\_{i=1}^{s} r\_i \ln(r\_i) \tag{2}$$

where $r\_i \in [0, 1]$ and $\sum\_{i=1}^{s} r\_i = 1$.

If all probability values of *ri* (*i* = 1, 2, . . . , *s*) in *R* are the same, the probability entropy can reach the maximum value of *E*(*R*), which reflects the perfect consistency (the same probabilities) of all *ri*. Generally, the larger the probability entropy measure value, the better the consistency level of all probability values.
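The consistency interpretation above can be illustrated directly; a minimal sketch of Equation (2):

```python
import math

def shannon_entropy(r):
    """Equation (2): E(R) = -sum_i r_i ln(r_i) for a probability distribution."""
    assert abs(sum(r) - 1.0) < 1e-9, "r must be a probability distribution"
    return -sum(p * math.log(p) for p in r if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(uniform))  # ln(4) ~= 1.3863, the maximum for s = 4
print(shannon_entropy(skewed))   # ~= 0.9404, a lower consistency level
```

The uniform distribution attains the maximum ln(*s*), matching the statement that equal probabilities give perfect consistency.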

According to the probability entropy notion, the interval-valued entropy concept of SIVFME (an information entropy measure of SIVFME) is proposed, and SIVFMS is converted into IVFES based on the mean and information entropy of SIVFME, which is given by the following definition.

**Definition 2.** *An IVFES Z of a SIVFMS H in a finite universe set U = {u1, u2,* ... *, us} is defined as*

$$Z = \{ (u\_k, m\_Z(u\_k), e\_Z(u\_k)) \mid u\_k \in U \},$$

*where mZ (uk)* ⊆ *[0, 1] and eZ(uk)* ⊆ *[0, 1] (k = 1, 2,* ... *, s) are the interval-valued mean and interval-valued entropy of SIVFME, which are obtained by using the following formulae:*

$$m\_Z(u\_k) = [m\_Z^L(u\_k), m\_Z^U(u\_k)] = \left[\frac{1}{a\_k + b\_k}\left(\sum\_{i=1}^{a\_k} \lambda\_H^i(u\_k) + \sum\_{i=1}^{b\_k} \lambda\_H^{Li}(u\_k)\right), \; \frac{1}{a\_k + b\_k}\left(\sum\_{i=1}^{a\_k} \lambda\_H^i(u\_k) + \sum\_{i=1}^{b\_k} \lambda\_H^{Ui}(u\_k)\right)\right], \; m\_Z(u\_k) \subseteq [0, 1], \tag{3}$$

$$e\_Z(u\_k) = [e\_Z^L(u\_k), e\_Z^U(u\_k)] = \left[\min\left(E\_L(u\_k), E\_U(u\_k)\right), \; \max\left(E\_L(u\_k), E\_U(u\_k)\right)\right], \; e\_Z(u\_k) \subseteq [0, 1], \tag{4}$$

where, writing $S\_L = \sum\_{i=1}^{a\_k} \lambda\_H^i(u\_k) + \sum\_{i=1}^{b\_k} \lambda\_H^{Li}(u\_k)$ and $S\_U = \sum\_{i=1}^{a\_k} \lambda\_H^i(u\_k) + \sum\_{i=1}^{b\_k} \lambda\_H^{Ui}(u\_k)$,

$$E\_L(u\_k) = -\frac{1}{\ln(a\_k + b\_k)} \left( \sum\_{i=1}^{a\_k} \frac{\lambda\_H^i(u\_k)}{S\_L} \ln \frac{\lambda\_H^i(u\_k)}{S\_L} + \sum\_{i=1}^{b\_k} \frac{\lambda\_H^{Li}(u\_k)}{S\_L} \ln \frac{\lambda\_H^{Li}(u\_k)}{S\_L} \right),$$

$$E\_U(u\_k) = -\frac{1}{\ln(a\_k + b\_k)} \left( \sum\_{i=1}^{a\_k} \frac{\lambda\_H^i(u\_k)}{S\_U} \ln \frac{\lambda\_H^i(u\_k)}{S\_U} + \sum\_{i=1}^{b\_k} \frac{\lambda\_H^{Ui}(u\_k)}{S\_U} \ln \frac{\lambda\_H^{Ui}(u\_k)}{S\_U} \right).$$

It is obvious that the IVFES *Z* consists of interval-valued fuzzy average values and entropy values to reasonably solve the expression and operation problems of different sequence lengths in SIVFMEs.

#### **Remark 1.**


**Example 1.** *Let us consider a GDM problem. When a group of four decision makers/experts is asked to assess product quality (u1) and service quality (u2) in U = {u1, u2} regarding a supplier A, they can give two groups of fuzzy assessment values, (u1, 0.7, 0.8, [0.6, 0.8], [0.7, 0.9]) and (u2, [0.6, 0.7], [0.6, 0.7], [0.6, 0.7], [0.6, 0.7], [0.6, 0.7]). Therefore, using Equations (3) and (4), their interval-valued fuzzy average values and entropy values are [0.7, 0.8] and [0.9963, 0.9972] for u1 and [0.6, 0.7] and [1, 1] for u2, respectively, which are expressed as the IVFES Z = {(u1, [0.7, 0.8], [0.9963, 0.9972]), (u2, [0.6, 0.7], [1, 1])} in the GDM example.*

In this example, it can be seen that the average values and entropy values can reflect the magnitude and consistency/consensus degree of the group evaluation values. The larger the entropy value, the better the consistency/consensus of the group evaluation values.
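The conversion in Definition 2 can be checked against Example 1. The sketch below implements Equations (3) and (4) directly (`to_ivfee` is our own helper name, not the paper's notation):

```python
import math

def to_ivfee(singles, intervals):
    """Convert one SIVFME into (interval mean, interval entropy), i.e. an IVFEE,
    following Equations (3) and (4)."""
    lows = list(singles) + [lo for lo, hi in intervals]
    ups = list(singles) + [hi for lo, hi in intervals]
    n = len(lows)

    mean = (sum(lows) / n, sum(ups) / n)  # Equation (3)

    def norm_entropy(vals):
        # Shannon entropy of the normalized values, scaled by 1/ln(n).
        s = sum(vals)
        return -sum((v / s) * math.log(v / s) for v in vals) / math.log(n)

    e_lo, e_up = norm_entropy(lows), norm_entropy(ups)
    entropy = (min(e_lo, e_up), max(e_lo, e_up))  # Equation (4)
    return mean, entropy

# Example 1, u1: fuzzy values (0.7, 0.8, [0.6, 0.8], [0.7, 0.9])
mean1, ent1 = to_ivfee([0.7, 0.8], [(0.6, 0.8), (0.7, 0.9)])
print([round(m, 4) for m in mean1], [round(e, 4) for e in ent1])
# -> [0.7, 0.8] [0.9963, 0.9972]
```

For *u*2, where all five evaluation values are the identical interval [0.6, 0.7], the normalized values are equal and the entropy reaches its maximum [1, 1], exactly as in the example.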

Then, the simplified expression form of a basic element $z(u\_k) = \langle u\_k, m\_Z(u\_k), e\_Z(u\_k) \rangle$ for $[m\_Z^L(u\_k), m\_Z^U(u\_k)] \subseteq [0, 1]$ and $[e\_Z^L(u\_k), e\_Z^U(u\_k)] \subseteq [0, 1]$ in the IVFES *Z* can be denoted as $z\_k = ([m\_{Zk}^L, m\_{Zk}^U], [e\_{Zk}^L, e\_{Zk}^U])$ for $[m\_{Zk}^L, m\_{Zk}^U] \subseteq [0, 1]$ and $[e\_{Zk}^L, e\_{Zk}^U] \subseteq [0, 1]$, which is named an IVFEE.

**Definition 3.** *Set two IVFEEs as* $z\_1 = ([m\_{Z1}^L, m\_{Z1}^U], [e\_{Z1}^L, e\_{Z1}^U])$ *and* $z\_2 = ([m\_{Z2}^L, m\_{Z2}^U], [e\_{Z2}^L, e\_{Z2}^U])$. *Thus, their operational relationships are defined as follows:*


**Definition 4.** *Set two IVFEEs as* $z\_1 = ([m\_{Z1}^L, m\_{Z1}^U], [e\_{Z1}^L, e\_{Z1}^U])$ *and* $z\_2 = ([m\_{Z2}^L, m\_{Z2}^U], [e\_{Z2}^L, e\_{Z2}^U])$. *Thus, their operational laws are defined as follows:*

$$(1) \; z\_1 \oplus z\_2 = \left(\left[m\_{Z1}^L + m\_{Z2}^L - m\_{Z1}^L m\_{Z2}^L, \; m\_{Z1}^U + m\_{Z2}^U - m\_{Z1}^U m\_{Z2}^U\right], \left[e\_{Z1}^L + e\_{Z2}^L - e\_{Z1}^L e\_{Z2}^L, \; e\_{Z1}^U + e\_{Z2}^U - e\_{Z1}^U e\_{Z2}^U\right]\right);$$

$$(2) \; z\_1 \otimes z\_2 = \left(\left[m\_{Z1}^L m\_{Z2}^L, \; m\_{Z1}^U m\_{Z2}^U\right], \left[e\_{Z1}^L e\_{Z2}^L, \; e\_{Z1}^U e\_{Z2}^U\right]\right);$$

$$(3) \; z\_1^{\lambda} = \left(\left[(m\_{Z1}^L)^{\lambda}, (m\_{Z1}^U)^{\lambda}\right], \left[(e\_{Z1}^L)^{\lambda}, (e\_{Z1}^U)^{\lambda}\right]\right) \text{ for } \lambda > 0;$$

$$(4) \; \lambda z\_1 = \left(\left[1 - (1 - m\_{Z1}^L)^{\lambda}, 1 - (1 - m\_{Z1}^U)^{\lambda}\right], \left[1 - (1 - e\_{Z1}^L)^{\lambda}, 1 - (1 - e\_{Z1}^U)^{\lambda}\right]\right) \text{ for } \lambda > 0.$$

It is obvious that the above operational results are still IVFEEs.

To compare two IVFEEs $z_k = ([m^L_{Zk}, m^U_{Zk}], [e^L_{Zk}, e^U_{Zk}])$ for $k = 1, 2$, the expected value function is defined as

$$Q(z_k) = \left(m^L_{Zk} e^L_{Zk} + m^U_{Zk} e^U_{Zk}\right)/2 \text{ for } Q(z_k) \in [0, 1] \tag{5}$$

Then, the sorting rules of the two IVFEEs are given as follows:


**Example 2.** *Assume that two IVFEEs are $z_1$ = ([0.7, 0.8], [0.8, 0.9]) and $z_2$ = ([0.6, 0.7], [0.7, 0.8]). Then, their sorting is yielded below:*

*Using Equation (5), there are $Q(z_1)$ = (0.7 × 0.8 + 0.8 × 0.9)/2 = 0.64 and $Q(z_2)$ = (0.6 × 0.7 + 0.7 × 0.8)/2 = 0.49. Since $Q(z_1) > Q(z_2)$, their sorting is $z_1 > z_2$.*
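The expected value computation of Equation (5) and the sorting in Example 2 can be sketched in Python. The tuple-of-lists encoding of an IVFEE and the function name are illustrative assumptions, not notation from the paper:

```python
def expected_value(z):
    """Q(z) from Equation (5) for an IVFEE z = ([mL, mU], [eL, eU])."""
    (mL, mU), (eL, eU) = z
    return (mL * eL + mU * eU) / 2

# Example 2: Q(z1) = (0.56 + 0.72)/2 = 0.64 and Q(z2) = (0.42 + 0.56)/2 = 0.49,
# so the sorting is z1 > z2.
z1 = ([0.7, 0.8], [0.8, 0.9])
z2 = ([0.6, 0.7], [0.7, 0.8])
print(expected_value(z1), expected_value(z2))
```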

#### **3. Two Weighted Aggregation Operators of IVFEEs and Their Mixed-Weighted-Averaging Operation**

In this section, we propose the IVFEEWA and IVFEEWG operators according to the operational laws in Definition 4, and then define their mixed-weighted-averaging operation to compensate for their flaws in aggregating IVFEEs; that is, the weighted averaging aggregation operator mainly emphasizes group arguments, while the weighted geometric aggregation operator mainly emphasizes individual arguments.

#### *3.1. Weighted Averaging Aggregation Operator of IVFEEs*

Based on the operational laws in Definition 4, the IVFEEWA operator is defined to aggregate IVFEE information.

**Definition 5.** *Let $z_k = ([m^L_{Zk}, m^U_{Zk}], [e^L_{Zk}, e^U_{Zk}])$ (k = 1, 2,* ... *, s) be a group of IVFEEs and IVFEEWA: $\Omega^s \to \Omega$. Then, the IVFEEWA operator is defined as*

$$IVFEEWA(z_1, z_2, \dots, z_s) = \mathop{\oplus}_{k=1}^{s} \lambda_k z_k \tag{6}$$

*where $\lambda_k$ is the weight of $z_k$ with $0 \le \lambda_k \le 1$ and $\sum_{k=1}^{s} \lambda_k = 1$.*

**Theorem 1.** *Let $z_k = ([m^L_{Zk}, m^U_{Zk}], [e^L_{Zk}, e^U_{Zk}])$ (k = 1, 2,* ... *, s) be a group of IVFEEs with the weight vector $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_s)$ for $0 \le \lambda_k \le 1$ and $\sum_{k=1}^{s} \lambda_k = 1$. Then, the aggregated result of the IVFEEWA operator is still an IVFEE, which is obtained by the equation:*

$$IVFEEWA(z_1, z_2, \dots, z_s) = \mathop{\oplus}_{k=1}^{s} \lambda_k z_k = \left(\left[1 - \prod_{k=1}^{s}(1 - m^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{s}(1 - m^U_{Zk})^{\lambda_k}\right], \left[1 - \prod_{k=1}^{s}(1 - e^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{s}(1 - e^U_{Zk})^{\lambda_k}\right]\right) \tag{7}$$

**Proof.** Equation (7) can be proved by mathematical induction.

(1) When *s* = 2, by the operational laws in Definition 4, the aggregation result is yielded as follows:

$$\begin{split} IVFEEWA(z_1, z_2) &= \lambda_1 z_1 \oplus \lambda_2 z_2 \\ &= \left(\left[1 - (1 - m^L_{Z1})^{\lambda_1}(1 - m^L_{Z2})^{\lambda_2},\ 1 - (1 - m^U_{Z1})^{\lambda_1}(1 - m^U_{Z2})^{\lambda_2}\right],\right. \\ &\qquad \left.\left[1 - (1 - e^L_{Z1})^{\lambda_1}(1 - e^L_{Z2})^{\lambda_2},\ 1 - (1 - e^U_{Z1})^{\lambda_1}(1 - e^U_{Z2})^{\lambda_2}\right]\right) \\ &= \left(\left[1 - \prod_{k=1}^{2}(1 - m^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{2}(1 - m^U_{Zk})^{\lambda_k}\right], \left[1 - \prod_{k=1}^{2}(1 - e^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{2}(1 - e^U_{Zk})^{\lambda_k}\right]\right). \end{split} \tag{8}$$

(2) When *s* = *n*, suppose that Equation (7) holds:

$$IVFEEWA(z_1, z_2, \dots, z_n) = \mathop{\oplus}_{k=1}^{n} \lambda_k z_k = \left(\left[1 - \prod_{k=1}^{n}(1 - m^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{n}(1 - m^U_{Zk})^{\lambda_k}\right], \left[1 - \prod_{k=1}^{n}(1 - e^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{n}(1 - e^U_{Zk})^{\lambda_k}\right]\right) \tag{9}$$

(3) When *s* = *n* + 1, by the operational laws in Definition 4 and Equations (8) and (9), the aggregated result is given as follows:

$$\begin{split} IVFEEWA(z_1, z_2, \dots, z_n, z_{n+1}) &= \mathop{\oplus}_{k=1}^{n} \lambda_k z_k \oplus \lambda_{n+1} z_{n+1} \\ &= \left(\left[1 - \prod_{k=1}^{n}(1 - m^L_{Zk})^{\lambda_k}(1 - m^L_{Z(n+1)})^{\lambda_{n+1}},\ 1 - \prod_{k=1}^{n}(1 - m^U_{Zk})^{\lambda_k}(1 - m^U_{Z(n+1)})^{\lambda_{n+1}}\right],\right. \\ &\qquad \left.\left[1 - \prod_{k=1}^{n}(1 - e^L_{Zk})^{\lambda_k}(1 - e^L_{Z(n+1)})^{\lambda_{n+1}},\ 1 - \prod_{k=1}^{n}(1 - e^U_{Zk})^{\lambda_k}(1 - e^U_{Z(n+1)})^{\lambda_{n+1}}\right]\right) \\ &= \left(\left[1 - \prod_{k=1}^{n+1}(1 - m^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{n+1}(1 - m^U_{Zk})^{\lambda_k}\right], \left[1 - \prod_{k=1}^{n+1}(1 - e^L_{Zk})^{\lambda_k},\ 1 - \prod_{k=1}^{n+1}(1 - e^U_{Zk})^{\lambda_k}\right]\right). \end{split} \tag{10}$$

Obviously, Equation (7) holds for any *s*. □
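A direct implementation of the IVFEEWA operator of Equation (7) can be sketched as follows; the list-based IVFEE encoding and the function name are illustrative assumptions:

```python
from math import prod

def ivfeewa(zs, weights):
    """IVFEEWA operator of Equation (7): aggregate IVFEEs
    zs = [([mL, mU], [eL, eU]), ...] with weights summing to 1."""
    def agg(values):
        # 1 - prod_k (1 - v_k)^{lambda_k} for one interval bound
        return 1 - prod((1 - v) ** w for v, w in zip(values, weights))
    return ([agg([z[0][0] for z in zs]), agg([z[0][1] for z in zs])],
            [agg([z[1][0] for z in zs]), agg([z[1][1] for z in zs])])
```

Aggregating copies of the same IVFEE returns that IVFEE, which is the idempotency property proved for Theorem 2 below.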

**Theorem 2.** *The IVFEEWA operator implies these properties:*

*(1) Idempotency: If $z_k = z$ (k = 1, 2,* ... *, s), then $IVFEEWA(z_1, z_2, \dots, z_s) = z$.*

*(2) Boundedness: Let $z_{\min}$ and $z_{\max}$ be the minimum and maximum IVFEEs of $z_k$ (k = 1, 2,* ... *, s). Then, $z_{\min} \le IVFEEWA(z_1, z_2, \dots, z_s) \le z_{\max}$ exists.*

*(3) Monotonicity: If $z_k \le z_k^*$ (k = 1, 2,* ... *, s), then $IVFEEWA(z_1, z_2, \dots, z_s) \le IVFEEWA(z_1^*, z_2^*, \dots, z_s^*)$ exists.*

**Proof.** (1) For $z_k = z = ([m^L_Z, m^U_Z], [e^L_Z, e^U_Z])$ (*k* = 1, 2, ... , *s*), by Equation (7) the result is yielded below:

$$\begin{split} IVFEEWA(z_1, z_2, \dots, z_s) &= \mathop{\oplus}_{k=1}^{s} \lambda_k z_k = \left(\left[1 - \prod_{k=1}^{s}(1 - m^L_Z)^{\lambda_k},\ 1 - \prod_{k=1}^{s}(1 - m^U_Z)^{\lambda_k}\right], \left[1 - \prod_{k=1}^{s}(1 - e^L_Z)^{\lambda_k},\ 1 - \prod_{k=1}^{s}(1 - e^U_Z)^{\lambda_k}\right]\right) \\ &= \left(\left[1 - (1 - m^L_Z)^{\sum_{k=1}^{s}\lambda_k},\ 1 - (1 - m^U_Z)^{\sum_{k=1}^{s}\lambda_k}\right], \left[1 - (1 - e^L_Z)^{\sum_{k=1}^{s}\lambda_k},\ 1 - (1 - e^U_Z)^{\sum_{k=1}^{s}\lambda_k}\right]\right) \\ &= \left(\left[m^L_Z, m^U_Z\right], \left[e^L_Z, e^U_Z\right]\right) = z. \end{split} \tag{11}$$

(2) There exists the inequality $z_{\min} \le z_k \le z_{\max}$ when $z_{\min}$ and $z_{\max}$ are the minimum and maximum IVFEEs. Thus, there also exists $\mathop{\oplus}_{k=1}^{s} \lambda_k z_{\min} \le \mathop{\oplus}_{k=1}^{s} \lambda_k z_k \le \mathop{\oplus}_{k=1}^{s} \lambda_k z_{\max}$. Then, the inequality $z_{\min} \le \mathop{\oplus}_{k=1}^{s} \lambda_k z_k \le z_{\max}$ can be kept regarding the above property (1); i.e., $z_{\min} \le IVFEEWA(z_1, z_2, \dots, z_s) \le z_{\max}$.

(3) For $z_k \le z_k^*$, there is the inequality $\mathop{\oplus}_{k=1}^{s} \lambda_k z_k \le \mathop{\oplus}_{k=1}^{s} \lambda_k z_k^*$; i.e., $IVFEEWA(z_1, z_2, \dots, z_s) \le IVFEEWA(z_1^*, z_2^*, \dots, z_s^*)$ exists.

Therefore, all the above properties are true. □

#### *3.2. Weighted Geometric Aggregation Operator of IVFEEs*

**Definition 6.** *Let $z_k = ([m^L_{Zk}, m^U_{Zk}], [e^L_{Zk}, e^U_{Zk}])$ (k = 1, 2,* ... *, s) be a group of IVFEEs and IVFEEWG: $\Omega^s \to \Omega$. Then, the IVFEEWG operator is defined as*

$$IVFEEWG(z_1, z_2, \dots, z_s) = \mathop{\otimes}_{k=1}^{s} z_k^{\lambda_k} \tag{12}$$

*where $\lambda_k$ is the weight of $z_k$ with $0 \le \lambda_k \le 1$ and $\sum_{k=1}^{s} \lambda_k = 1$.*

**Theorem 3.** *Let $z_k = ([m^L_{Zk}, m^U_{Zk}], [e^L_{Zk}, e^U_{Zk}])$ (k = 1, 2,* ... *, s) be a group of IVFEEs along with the weight vector $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_s)$ for $0 \le \lambda_k \le 1$ and $\sum_{k=1}^{s} \lambda_k = 1$. Then, the aggregated result of the IVFEEWG operator is still an IVFEE, which is yielded by the equation:*

$$IVFEEWG(z_1, z_2, \dots, z_s) = \mathop{\otimes}_{k=1}^{s} z_k^{\lambda_k} = \left(\left[\prod_{k=1}^{s}(m^L_{Zk})^{\lambda_k},\ \prod_{k=1}^{s}(m^U_{Zk})^{\lambda_k}\right], \left[\prod_{k=1}^{s}(e^L_{Zk})^{\lambda_k},\ \prod_{k=1}^{s}(e^U_{Zk})^{\lambda_k}\right]\right) \tag{13}$$

Theorem 3 can easily be proved in a similar way to Theorem 1; the proof is omitted here.

**Theorem 4.** *The IVFEEWG operator implies these properties:*

*(1) Idempotency: If $z_k = z$ (k = 1, 2,* ... *, s), then $IVFEEWG(z_1, z_2, \dots, z_s) = z$.*

*(2) Boundedness: Let $z_{\min} = \left(\left[\min_k(m^L_{Zk}), \min_k(m^U_{Zk})\right], \left[\min_k(e^L_{Zk}), \min_k(e^U_{Zk})\right]\right)$ and $z_{\max} = \left(\left[\max_k(m^L_{Zk}), \max_k(m^U_{Zk})\right], \left[\max_k(e^L_{Zk}), \max_k(e^U_{Zk})\right]\right)$ be the minimum and maximum IVFEEs. Then, $z_{\min} \le IVFEEWG(z_1, z_2, \dots, z_s) \le z_{\max}$ exists.*

*(3) Monotonicity: Let $z_k = ([m^L_{Zk}, m^U_{Zk}], [e^L_{Zk}, e^U_{Zk}])$ and $z_k^* = ([m^{L*}_{Zk}, m^{U*}_{Zk}], [e^{L*}_{Zk}, e^{U*}_{Zk}])$ (k = 1, 2,* ... *, s) be two groups of IVFEEs. Then, there exists $IVFEEWG(z_1, z_2, \dots, z_s) \le IVFEEWG(z_1^*, z_2^*, \dots, z_s^*)$ for $z_k \le z_k^*$.*

Theorem 4 can be proved similarly to Theorem 2 (omitted).
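The IVFEEWG operator of Equation (13) can be sketched in the same illustrative encoding as before; the function name is an assumption:

```python
from math import prod

def ivfeewg(zs, weights):
    """IVFEEWG operator of Equation (13): weighted geometric aggregation of
    IVFEEs zs = [([mL, mU], [eL, eU]), ...] with weights summing to 1."""
    def agg(values):
        # prod_k v_k^{lambda_k} for one interval bound
        return prod(v ** w for v, w in zip(values, weights))
    return ([agg([z[0][0] for z in zs]), agg([z[0][1] for z in zs])],
            [agg([z[1][0] for z in zs]), agg([z[1][1] for z in zs])])
```

By the weighted arithmetic-geometric mean inequality, each component produced here is no larger than the corresponding IVFEEWA component, which reflects the different tendencies of the two operators.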

#### *3.3. Mixed-Weighted-Averaging Operation for the IVFEEWA and IVFEEWG Operators*

Since the IVFEEWA operator and the IVFEEWG operator mainly emphasize group arguments and individual arguments, respectively, here we propose a mixed-weighted-averaging operation for the IVFEEWA and IVFEEWG operators.

**Definition 7.** *Set η* ∈ *[0, 1] as a weight parameter. Then, a mixed-weighted-averaging operation of the IVFEEWA and IVFEEWG operators with a weight parameter η is defined below:*

$$z(\eta) = \eta \times IVFEEWA(z_1, z_2, \dots, z_s) \oplus (1 - \eta) \times IVFEEWG(z_1, z_2, \dots, z_s) \tag{14}$$

**Theorem 5.** *Let η* ∈ *[0, 1] be a weight parameter. Then, the operational result of Equation (14) with a weight parameter η is still an IVFEE, which is obtained by the following equation:*

$$\begin{split} z(\eta) &= \eta \times IVFEEWA(z_1, z_2, \dots, z_s) \oplus (1 - \eta) \times IVFEEWG(z_1, z_2, \dots, z_s) \\ &= \left(\left[1 - \left(\prod_{k=1}^{s}(1 - m^L_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(m^L_{Zk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - m^U_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(m^U_{Zk})^{\lambda_k}\right)^{1-\eta}\right],\right. \\ &\qquad \left.\left[1 - \left(\prod_{k=1}^{s}(1 - e^L_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(e^L_{Zk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - e^U_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(e^U_{Zk})^{\lambda_k}\right)^{1-\eta}\right]\right) \end{split} \tag{15}$$

**Proof.** Based on Equations (7), (13), and (14), along with the operational laws in Definition 4, the following result is obtained:

$$\begin{split} z(\eta) &= \eta \times IVFEEWA(z_1, z_2, \dots, z_s) \oplus (1 - \eta) \times IVFEEWG(z_1, z_2, \dots, z_s) \\ &= \left(\left[1 - \left(\prod_{k=1}^{s}(1 - m^L_{Zk})^{\lambda_k}\right)^{\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - m^U_{Zk})^{\lambda_k}\right)^{\eta}\right], \left[1 - \left(\prod_{k=1}^{s}(1 - e^L_{Zk})^{\lambda_k}\right)^{\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - e^U_{Zk})^{\lambda_k}\right)^{\eta}\right]\right) \\ &\quad \oplus \left(\left[1 - \left(1 - \prod_{k=1}^{s}(m^L_{Zk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(1 - \prod_{k=1}^{s}(m^U_{Zk})^{\lambda_k}\right)^{1-\eta}\right], \left[1 - \left(1 - \prod_{k=1}^{s}(e^L_{Zk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(1 - \prod_{k=1}^{s}(e^U_{Zk})^{\lambda_k}\right)^{1-\eta}\right]\right) \\ &= \left(\left[1 - \left(\prod_{k=1}^{s}(1 - m^L_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(m^L_{Zk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - m^U_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(m^U_{Zk})^{\lambda_k}\right)^{1-\eta}\right],\right. \\ &\qquad \left.\left[1 - \left(\prod_{k=1}^{s}(1 - e^L_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(e^L_{Zk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - e^U_{Zk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(e^U_{Zk})^{\lambda_k}\right)^{1-\eta}\right]\right). \end{split} \tag{16}$$

When *η* = 1 or *η* = 0, *z*(*η*) degenerates into the IVFEEWA operator of Equation (7) or the IVFEEWG operator of Equation (13), respectively. □
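The mixed-weighted-averaging operation of Equation (15) can be sketched directly from the closed form; the encoding and the function name are illustrative assumptions:

```python
from math import prod

def mixed_aggregate(zs, weights, eta):
    """Mixed-weighted-averaging operation z(eta) of Equation (15) over
    IVFEEs zs = [([mL, mU], [eL, eU]), ...]."""
    def comp(values):
        wa = prod((1 - v) ** w for v, w in zip(values, weights))  # prod (1-v)^lambda_k
        wg = prod(v ** w for v, w in zip(values, weights))        # prod v^lambda_k
        return 1 - wa ** eta * (1 - wg) ** (1 - eta)
    return ([comp([z[0][0] for z in zs]), comp([z[0][1] for z in zs])],
            [comp([z[1][0] for z in zs]), comp([z[1][1] for z in zs])])
```

Setting `eta = 1` gives each component as `1 - wa` (the IVFEEWA form), and `eta = 0` gives `wg` (the IVFEEWG form), matching the degeneracy stated above.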

#### **4. GDM Method Using the Mixed-Weighted-Averaging Operation and Expected Value Function**

Here we propose a multicriteria GDM method using the mixed-weighted-averaging operation and expected value function for SIVFMSs.

A multicriteria GDM problem usually contains a set of alternatives *Y* = {*Y*1, *Y*2,..., *Ym*}, which is assessed by a set of criteria *U* = {*u*1, *u*2, ... , *us*}. To consider the importance of different criteria *uk* (*k* = 1, 2, ... , *s*) in *U*, decision makers specify a weight vector *λ* = (*λ*1, *λ*2, ... , *λs*) for the set of criteria. Regarding the uncertainty and certainty of decision makers' cognitions/judgments in the suitability assessment of alternatives over the criteria, the single- and interval-valued fuzzy values of the alternatives *Yj* (*j* = 1, 2, ... , *m*) over the criteria *uk* (*k* = 1, 2, ... , *s*) will be specified by various decision makers. Thus, the multicriteria GDM method is depicted by the following decision steps.

**Step 1.** A group of decision makers/experts is invited to give their single- and interval-valued fuzzy values of the alternatives *Yj* (*j* = 1, 2, ... , *m*) over the criteria *uk* (*k* = 1, 2, ... , *s*) and to set up the SIVFME decision matrix $D = (F_{Hjk})_{m \times s}$, where each $F_{Hjk} = (\lambda^1_{Hjk}, \lambda^2_{Hjk}, \dots, \lambda^{a_{jk}}_{Hjk}, [\lambda^{L1}_{Hjk}, \lambda^{U1}_{Hjk}], [\lambda^{L2}_{Hjk}, \lambda^{U2}_{Hjk}], \dots, [\lambda^{Lb_{jk}}_{Hjk}, \lambda^{Ub_{jk}}_{Hjk}])$, composed of $a_{jk}$ single-valued fuzzy values and $b_{jk}$ interval-valued fuzzy values (*j* = 1, 2, ... , *m*; *k* = 1, 2, ... , *s*), is a SIVFME such that $0 \le \lambda^1_{Hjk} \le \lambda^2_{Hjk} \le \dots \le \lambda^{a_{jk}}_{Hjk} \le 1$ and $[\lambda^{L1}_{Hjk}, \lambda^{U1}_{Hjk}] \subseteq [\lambda^{L2}_{Hjk}, \lambda^{U2}_{Hjk}] \subseteq \dots \subseteq [\lambda^{Lb_{jk}}_{Hjk}, \lambda^{Ub_{jk}}_{Hjk}] \subseteq [0, 1]$ with identical and/or different fuzzy values.

**Step 2.** Using Equations (3) and (4) for the decision matrix $D = (F_{Hjk})_{m \times s}$, the interval-valued fuzzy average values $m_{Zjk}$ and entropy values $e_{Zjk}$ are obtained, and IVFEEs are assembled by $z_{jk} = (m_{Zjk}, e_{Zjk})$ for $m_{Zjk} = [m^L_{Zjk}, m^U_{Zjk}] \subseteq [0, 1]$ and $e_{Zjk} = [e^L_{Zjk}, e^U_{Zjk}] \subseteq [0, 1]$ (*k* = 1, 2, ... , *s*; *j* = 1, 2, ... , *m*), which construct the IVFEE decision matrix $M = (z_{jk})_{m \times s}$.

**Step 3.** Using Equation (15) with some values of *η*, the operational values of *zj*(*η*) for *Yj* (*j* = 1, 2, . . . , *m*) are obtained by the following equation:

$$\begin{split} z_j(\eta) &= \eta \times IVFEEWA(z_{j1}, z_{j2}, \dots, z_{js}) \oplus (1 - \eta) \times IVFEEWG(z_{j1}, z_{j2}, \dots, z_{js}) \\ &= \left(\left[1 - \left(\prod_{k=1}^{s}(1 - m^L_{Zjk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(m^L_{Zjk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - m^U_{Zjk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(m^U_{Zjk})^{\lambda_k}\right)^{1-\eta}\right],\right. \\ &\qquad \left.\left[1 - \left(\prod_{k=1}^{s}(1 - e^L_{Zjk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(e^L_{Zjk})^{\lambda_k}\right)^{1-\eta},\ 1 - \left(\prod_{k=1}^{s}(1 - e^U_{Zjk})^{\lambda_k}\right)^{\eta}\left(1 - \prod_{k=1}^{s}(e^U_{Zjk})^{\lambda_k}\right)^{1-\eta}\right]\right) \end{split} \tag{17}$$

**Step 4.** The expected values of *Q*(*zj*(*η*)) (*j* = 1, 2, . . . , *m*) are given by Equation (5).

**Step 5.** Alternatives are sorted in descending order of the expected values, and the optimal one is selected depending on some specified value of *η*.

**Step 6.** End.
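Steps 2–5 above can be sketched end-to-end in Python. The IVFEE matrix, weights, and η value below are hypothetical illustrative inputs (the conversion of Equations (3) and (4) is assumed to have been performed already), not the data of the example in Section 5:

```python
from math import prod

def mixed_aggregate(zs, weights, eta):
    """z_j(eta) per Equation (17), applied to one row of the IVFEE matrix."""
    def comp(values):
        wa = prod((1 - v) ** w for v, w in zip(values, weights))
        wg = prod(v ** w for v, w in zip(values, weights))
        return 1 - wa ** eta * (1 - wg) ** (1 - eta)
    return ([comp([z[0][0] for z in zs]), comp([z[0][1] for z in zs])],
            [comp([z[1][0] for z in zs]), comp([z[1][1] for z in zs])])

def expected_value(z):
    """Q(z) per Equation (5)."""
    (mL, mU), (eL, eU) = z
    return (mL * eL + mU * eU) / 2

# Hypothetical IVFEE decision matrix M (rows: alternatives Yj, cols: criteria uk)
M = [
    [([0.6, 0.7], [0.7, 0.8]), ([0.7, 0.8], [0.8, 0.9]), ([0.5, 0.6], [0.6, 0.7])],
    [([0.7, 0.9], [0.6, 0.8]), ([0.6, 0.7], [0.7, 0.9]), ([0.8, 0.9], [0.7, 0.8])],
    [([0.5, 0.6], [0.8, 0.9]), ([0.7, 0.9], [0.6, 0.7]), ([0.6, 0.8], [0.7, 0.9])],
]
weights = [0.3, 0.33, 0.37]  # criteria weight vector
eta = 0.5                    # weight parameter of the mixed operation

scores = [expected_value(mixed_aggregate(row, weights, eta)) for row in M]
ranking = sorted(range(len(M)), key=scores.__getitem__, reverse=True)
print("expected values:", [round(s, 4) for s in scores])
print("sorting (best first):", [f"Y{j + 1}" for j in ranking])
```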

#### **5. GDM Example of a Supplier Selection Problem and Comparative Analysis**

#### *5.1. Actual GDM Example*

This section reports the application of the proposed GDM method to an actual example of a supplier selection problem in a supply chain to show the rationality and effectiveness of SIVFMSs.

Any enterprise tries to reduce supply chain risks and uncertainty to improve customer service, inventory levels, and cycle times, which will increase its competitiveness and profitability. Assume that a group of five suppliers is provided as a set of preliminary alternatives *Y* = {*Y*1, *Y*2, *Y*3, *Y*4, *Y*5}. Then, a group of decision makers is invited to evaluate the five suppliers with three criteria: performance (e.g., quality, delivery, and price) (*u*1), technology (e.g., design capability, manufacturing capability, and ability to deal with technology changes) (*u*2), and organizational culture and strategy (e.g., external and internal integration of suppliers, feeling of trust, compatibility across levels, and functions of the supplier and buyer) (*u*3). The weight vector of the three criteria is specified as *λ* = (0.3, 0.33, 0.37). Thus, the proposed GDM method can be applied to this GDM problem, which is depicted below.

**Step 1**. Suppose that three decision makers are invited to evaluate a set of five suppliers *Y* = {*Y*1, *Y*2, *Y*3, *Y*4, *Y*5} with a set of three criteria *U* = {*u*1, *u*2, *u*3}. For instance, the three decision makers can declare the degree that an alternative *Y*<sup>1</sup> should satisfy a criterion *u*1, and these values could be a group of three single- and interval-valued fuzzy values (0.7, 0.8, [0.7, 0.9]). In this manner, all their evaluation values of SIVFMEs are indicated in Table 1.


**Table 1.** Evaluation values of SIVFMEs provided by the three decision makers.

**Step 2.** Using Equations (3) and (4) on Table 1, IVFEEs can be obtained based on the average values and entropy values of the various SIVFMEs, and the IVFEE decision matrix *M* = (*zjk*)5×<sup>3</sup> is established as follows:


**Step 3.** Using Equation (17) with *η* = 0, 0.3, 0.5, 0.7, and 1, the operational values of *zj*(*η*) for *Yj* (*j* = 1, 2, 3, 4, 5) and the decision results are indicated in Table 2.




**Table 2.** *Cont.*

**Step 4.** By Equation (5), the expected values of *Q*(*zj*(*η*)) (*j* = 1, 2, 3, 4, 5) are given in Table 2.

**Step 5.** The sorting orders of the alternatives are *Y*<sup>5</sup> > *Y*<sup>3</sup> > *Y*<sup>4</sup> > *Y*<sup>2</sup> > *Y*1, *Y*<sup>3</sup> > *Y*<sup>5</sup> > *Y*<sup>4</sup> > *Y*<sup>2</sup> > *Y*1, and *Y*<sup>3</sup> > *Y*<sup>5</sup> > *Y*<sup>4</sup> > *Y*<sup>1</sup> = *Y*2. The optimal one is *Y*<sup>5</sup> or *Y*3, depending on some specified value of *η*.

Regarding the decision results in Table 2, the sorting orders differ between the IVFEEWA operator and the IVFEEWG operator when *η* = 0, 1 (two special cases), since the IVFEEWA operator tends toward group arguments and the IVFEEWG operator tends toward individual arguments. The mixed-weighted-averaging operation of the IVFEEWA and IVFEEWG operators can compensate for the different tendencies of these two special cases.

#### *5.2. Comparative Analysis*

To verify the efficiency of the proposed GDM method, it is compared with the existing consistency fuzzy decision-making method and various other fuzzy decision-making methods.

First, the proposed GDM method is compared with the existing consistency fuzzy decision-making method [19]. For a convenient comparison with the existing consistency fuzzy decision-making method [19], assume that all interval-valued fuzzy values and entropy values in the IVFEE decision matrix *M* are fuzzy average values and consistency degrees as a special case of the actual example mentioned above. Thus, the IVFEE decision matrix *M* is reduced to the decision matrix of CFEs:


Thus, the existing decision-making method [19] can be applied to the special case of the above actual example by the following CFE weighted averaging (CFEWA) and CFE weighted geometric (CFEWG) operators and score function [19]:

$$z'_j = CFEWA(z'_{j1}, z'_{j2}, \dots, z'_{js}) = \mathop{\oplus}_{k=1}^{s} \lambda_k z'_{jk} = \left(1 - \prod_{k=1}^{s}(1 - m'_{Zjk})^{\lambda_k},\ 1 - \prod_{k=1}^{s}(1 - e'_{Zjk})^{\lambda_k}\right) \tag{18}$$

$$z'_j = CFEWG(z'_{j1}, z'_{j2}, \dots, z'_{js}) = \mathop{\otimes}_{k=1}^{s} (z'_{jk})^{\lambda_k} = \left(\prod_{k=1}^{s}(m'_{Zjk})^{\lambda_k},\ \prod_{k=1}^{s}(e'_{Zjk})^{\lambda_k}\right) \tag{19}$$

$$F(z'\_j) = (m'\_{Zj}e'\_{Zj} + (m'\_{Zj} + e'\_{Zj})/2)/2 \text{ for } F(z'\_j) \in [0, 1] \tag{20}$$

Using Equations (18)–(20), the aggregated values of the CFEWA and CFEWG operators, the score values of $F(z'_j)$ for *Yj* (*j* = 1, 2, 3, 4, 5), and the decision results were obtained. They are shown in Table 3.
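The reduced CFE operators of Equations (18)–(20) can be sketched as follows; encoding a CFE as a plain (m, e) pair is an illustrative assumption:

```python
from math import prod

def cfewa(cs, weights):
    """CFEWA operator of Equation (18) on CFEs c = (m', e')."""
    return (1 - prod((1 - m) ** w for (m, _), w in zip(cs, weights)),
            1 - prod((1 - e) ** w for (_, e), w in zip(cs, weights)))

def cfewg(cs, weights):
    """CFEWG operator of Equation (19)."""
    return (prod(m ** w for (m, _), w in zip(cs, weights)),
            prod(e ** w for (_, e), w in zip(cs, weights)))

def score(c):
    """Score function F of Equation (20)."""
    m, e = c
    return (m * e + (m + e) / 2) / 2
```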

**Table 3.** Decision results of the existing decision-making method in the case of CFEs [19].


In the decision results in Table 3, the sorting orders differ because of the different tendencies of the CFEWA and CFEWG operators. The optimal alternatives are *Y*<sup>3</sup> and *Y*<sup>5</sup> according to the existing decision-making method with CFE information. Although the optimal ones, *Y*<sup>3</sup> and *Y*5, are the same for the proposed GDM method and the existing decision-making method [19] in this example, the superiorities of the proposed GDM method over the existing decision-making method [19] are as follows:


In comparison with the PFDM methods [16,17], those methods need a large amount of fuzzy data to maintain the rationality (no distortion) of probabilistic fuzzy values from the probabilistic viewpoint; otherwise, the probabilistic fuzzy values are infeasible and irrational, since a large amount of fuzzy data is difficult for several decision makers to produce and obviously unrealistic in GDM applications. Hence, the PFDM methods cannot represent this decision example involving three decision makers and also cannot express single- and interval-valued fuzzy data. In the case of SIVFMSs, the proposed GDM method with the mean and information entropy only needs a few decision makers to perform GDM problems with several single- and interval-valued fuzzy data, which are easily handled in actual applications. In this case, the proposed GDM method showed its rationality and efficiency and is superior to the existing PFDM methods regarding SIVFMSs.

Furthermore, with respect to the above GDM example in the SIVFMS setting, the existing fuzzy multiset/IVFM/HFS/CHFS models [9–15,18] cannot express SIVFMSs, and thus they cannot be applied to this GDM problem with SIVFMS information.

In contrast, our method not only solves the expression and operation problems of SIVFMEs but also enhances the flexibility and rationality of GDM, which highlights its advantages in the setting of SIVFMSs.

#### **6. Conclusions**

In this study, the presented SIVFMSs could effectively express single- and interval-valued fuzzy sequences in hybrid fuzzy multivalued situations to solve the difficult expression problems of various existing fuzzy sets. The proposed information entropy of SIVFMEs provides a reasonable mathematical tool for converting SIVFMEs into IVFEEs when dealing with SIVFMSs. The IVFEEs converted by the mean and information entropy of SIVFMEs can reasonably reflect the average and consistency level of group evaluation values and effectively solve the operational problems of different fuzzy sequence lengths in SIVFMSs. In addition, the proposed mixed-weighted-averaging operation of the IVFEEWA and IVFEEWG operators can reasonably and flexibly aggregate IVFEE information with a changeable weight parameter and compensate for the flaws of the two operators. Next, the multicriteria GDM method developed based on the proposed mixed-weighted-averaging operation solved flexible decision-making problems involving SIVFMSs. Furthermore, the proposed GDM method was applied to an actual example of a supplier selection problem to illustrate its applicability. Through the comparative analysis with existing related decision-making methods, the proposed GDM method demonstrated its rationality and effectiveness. Overall, this study not only effectively solved the expression and operation problems of the mixed information of single- and interval-valued fuzzy sequences with identical and/or different fuzzy values, but also strengthened the rationality and flexibility of GDM with the help of the presented information entropy and the proposed mixed-weighted-averaging operation, which highlights its merits when dealing with SIVFMSs.

This original study contributed a new mixed fuzzy information expression, a transformation method based on the mean and information entropy of SIVFMEs, and mixed aggregation operations of IVFEEs together with their GDM method in the environment of SIVFMSs. However, the new techniques proposed in this paper can only handle GDM problems with SIVFMSs; they cannot solve GDM problems with the fuzzy information of truth and falsity membership degrees. In future research, this study will be further extended to image processing, pattern recognition, clustering analysis, and their applications in the setting of SIVFMSs. Then, the Aczel–Alsina operations and aggregation operators [21,22], and their applications, will be further developed in the intuitionistic and interval-valued intuitionistic fuzzy multivalued context.

**Author Contributions:** Conceptualization, W.L. and J.Y.; methodology, W.L. and J.Y.; software, J.Y.; validation, W.L. and J.Y.; formal analysis, W.L.; investigation, W.L.; resources, W.L.; data curation, W.L.; writing—original draft preparation, W.L.; writing—review and editing, W.L.; visualization, J.Y.; supervision, J.Y.; project administration, J.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** We did not use any data for this research work.

**Conflicts of Interest:** The authors declare no conflict of interest.


**Bing Yan <sup>1</sup>, Yuan Rong <sup>2</sup>, Liying Yu <sup>2</sup> and Yuting Huang <sup>2,</sup>\***


**Abstract:** Selecting an urban rail transit system from a green and low-carbon perspective not only promotes the construction of the urban rail transit system itself but also has a positive impact on urban green development. Considering the uncertainty caused by conflicting criteria and the fuzziness of decision experts' cognition in the selection process, this paper proposes a hybrid intuitionistic fuzzy MCGDM framework to determine the priority of rail transit systems. First, the weights of experts are determined based on an improved similarity method. Second, the subjective and objective weights of the criteria are calculated with the DEMATEL and CRITIC methods, respectively, and the comprehensive weights are obtained by the linear integration method. Third, considering the regret degree and risk preference of experts, the COPRAS method based on regret theory is propounded to determine the prioritization of urban rail transit systems. Finally, the selection of an urban rail transit system for City N is taken as a case study to illustrate the feasibility and effectiveness of the developed method. The results show that a metro system (P1) is the most suitable urban rail transit system for the construction of City N, followed by a municipal railway system (P7). A sensitivity analysis illustrates the stability and robustness of the designed decision framework, and a comparative analysis validates the efficacy, feasibility and practicability of the propounded methodology.

**Keywords:** urban rail transit; intuitionistic fuzzy set; regret theory; DEMATEL; CRITIC; COPRAS

**MSC:** 90B50; 94D05

#### **1. Introduction**

At present, environmental problems such as acid rain, air pollution and global warming are prominent. One important cause of this series of environmental problems is the large volume of greenhouse gases emitted by urban traffic. Severe environmental problems affect the ecological balance and human health [1]. The large-scale increase in the number of cars stems from deepening urbanization: the process of urbanization is accelerating, urban infrastructure is gradually improving and, with social progress and economic development, many cities have entered the automotive era. However, although the popularity of cars has greatly facilitated people's lives, problems such as vehicle exhaust pollution and traffic congestion demand attention. Urban environmental problems caused by automobile operation restrict the green development of cities. For cities, as centers of population, economy and transportation, realizing sustainable development is therefore particularly important.

Green travel can save energy, alleviate traffic congestion, reduce environmental pollution and promote sustainable urban development. An urban public transport system plays an important role in promoting urban sustainable development [2]. As one of the

**Citation:** Yan, B.; Rong, Y.; Yu, L.; Huang, Y. A Hybrid Intuitionistic Fuzzy Group Decision Framework and Its Application in Urban Rail Transit System Selection. *Mathematics* **2022**, *10*, 2133. https://doi.org/ 10.3390/math10122133

Academic Editors: Jun Ye and Yanhui Guo

Received: 22 May 2022 Accepted: 16 June 2022 Published: 19 June 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

most effective green and low-carbon transportation modes, urban public transport is an important part of green travel and is mainly composed of buses and urban rail transit. Buses can meet the daily travel needs of the public in small cities, but they fall far short in medium and large cities with high population density, wide ranges of activity and large passenger flows. Therefore, in order to alleviate traffic pressure in urban areas, the construction of urban rail transit systems has become a focus of attention. As the backbone of urban public transport, urban rail transit is fast, convenient, efficient, safe and comfortable [3]. With economic development and scientific and technological progress, urban rail transit has developed rapidly; however, its green standards were formulated relatively late and a complete development system has not yet formed, so the existing urban rail transit does not match green development in terms of environment, resources and equipment allocation. Hence, it is particularly important to select an urban rail system from the perspective of green and low-carbon transportation.

Since urban rail transit system selection involves multiple criteria and different types of systems, it requires the joint deliberation of experts in various fields and can therefore be regarded as an MCGDM problem. In addition, constrained by the complexity of the decision-making environment and the inherent uncertainty of practical problems, traditional deterministic decision methods can hardly solve such complex and uncertain decision problems. As an effective tool for describing uncertainty, the intuitionistic fuzzy set (IFS) [4] extends fuzzy set theory by using membership, non-membership and hesitation degrees to express uncertain information more comprehensively. In terms of information measures, Das et al. [5] studied the relationships among intuitionistic fuzzy information measures, including similarity, distance and knowledge measures, within the intuitionistic fuzzy framework. Mishra et al. [6] proposed a series of similarity and entropy measures based on the cosine and logarithmic functions under an intuitionistic fuzzy environment. In terms of decision methods, Ecer and Pamucar [7] proposed a method to rank insurance companies with MARCOS under an intuitionistic fuzzy environment. Schitea et al. [8] proposed an IFS-based MCDM method to select the best sites for hydrogen mobility roll-up in Romania. Mishra et al. [9] developed a fuzzy decision method for ranking and evaluating low-carbon sustainable suppliers by combining IFS and distance-based combined evaluation. As for intuitionistic fuzzy preference relations, Zhang et al. [10] studied distance-based consistency measures in group decision-making with intuitionistic multiplicative preference relations and proposed some new distance measures between intuitionistic multiplicative sets. Meng et al. [11] studied group decision-making with heterogeneous intuitionistic fuzzy preference relations, including intuitionistic fuzzy preference relations, multiplicative intuitionistic fuzzy preference relations, etc.

Considering that different criteria have different dimensions and that there are differences, conflicts and mutual influences between criteria, the DEMATEL [12] method, developed by the Battelle Geneva Research Centre, can represent the causal logical relationships between criteria and visualize the structure of complex causal relationships with the help of a matrix or graph. In the DEMATEL method, the cause degree and centrality of each criterion are calculated from the relative importance of the criteria provided by experts, that is, from the degree to which each criterion influences and is influenced by the other criteria; the subjective weight of each criterion can then be determined according to the cause degree and centrality. This structured approach helps to analyze the interdependencies between criteria. The DEMATEL method is widely used, and many researchers apply it for criterion evaluation or factor analysis. For example, Topgul et al. [13] used the IF-DEMATEL method to evaluate the green degree of the four supply chain stages of inbound logistics, in-plant logistics, outbound logistics and reverse logistics. Roostaie et al. [14] used the DEMATEL method to analyze the factors affecting the sustainability of buildings. Tseng et al. [15] and Liu et al. [16], respectively, analyzed the

obstacles to the adoption of renewable energy and China's sustainable food consumption and production by using the DEMATEL method under the triangular fuzzy environment. In addition, DEMATEL can also be used to determine the subjective weights of criteria in MCDM problems; the alternatives are then evaluated in combination with different evaluation methods and, finally, the optimal alternative is selected. For example, Hosseini et al. [17] and Li et al. [18], respectively, used the DEMATEL and VIKOR methods to evaluate solutions for ecotourism centers during the COVID-19 pandemic and to select a machine tool under the triangular fuzzy environment. Fang et al. [19] used the DEMATEL and TOPSIS methods to evaluate the energy investment risk and safety management system.

Experts have bounded rationality in real decision analysis procedures [20], and their psychological preferences affect decision-making results, so it is necessary to consider the psychological behavior of experts. As an important branch of behavioral decision theory, the regret theory proposed by Loomes and Sugden [21] and Bell [22] describes the regret-avoidance behavior of decision-makers in the decision process through the regret–rejoice function and the risk preference coefficient of decision-makers. As for the application of regret theory, many researchers combine it with decision methods to put forward group decision frameworks [23,24]. In other respects, Zhang et al. [25] developed a case retrieval method based on regret theory. Liu and Cheng [26] combined the likelihood-based MABAC method with regret theory to establish a new MCGDM method. Liang and Wang [20] developed an extended gain-and-loss-of-advantage scoring method based on regret theory and the interval evidence reasoning method. Huang and Zhan [27] proposed a three-way decision-making method based on regret theory. Liu et al. [28] proposed a new method combining regret theory with the evaluation based on distance from average solution (EDAS) method.

In the past few decades, researchers have proposed many methods to deal with real-life MCDM problems, such as TOPSIS, VIKOR, MABAC and COPRAS. COPRAS is an MCDM method proposed by Zavadskas et al. [29] in 1994. It can evaluate alternatives step by step, combining the importance and effectiveness of the evaluation criteria to obtain the best alternative, and it is characterized by a wide application range and good evaluation performance [30]. The COPRAS method is widely used. For example, Büyüközkan and Göçer [31] combined AHP and COPRAS to select the best digital supply chain partner. Balali et al. [32] used ANP and COPRAS to rank the effective risks of human resource threats in natural gas supply projects. Mishra et al. [33] and Alipour et al. [34] combined SWARA and COPRAS for the sustainability evaluation of bioenergy production processes and the selection of fuel cell and hydrogen component suppliers, respectively. Yuan et al. [35] and Narayanamoorthy et al. [36], respectively, used DEMATEL and COPRAS to evaluate and select third-party logistics suppliers and the best alternative fuel, but both only used a subjective weighting method to determine the attribute weights. In addition, although many methodological frameworks take regret theory into account, no studies have combined regret theory with the COPRAS method to provide decision support for the selection of an urban rail transit system.

Based on the above analysis, the motivations of this study are as follows:


considering the subjective and objective influence to obtain more reasonable and credible decision-making results.

(3) Through literature analysis, it is found that the intuitionistic fuzzy group decision methods in the existing research rarely consider the interaction between criteria in the decision-making process, and most decision-making methods determine the optimal alternatives based on the traditional utility theory, ignoring the psychological behavior of experts in the decision process.

According to the above research motivation, the main contributions of this study are outlined as follows:


The rest of this study is organized as follows: Section 2 introduces the preliminaries, including IFS and regret theory. Section 3 first presents the proposed intuitionistic fuzzy distance measure and then details the steps of the hybrid intuitionistic fuzzy group decision framework proposed in this study. Section 4 presents the application to a practical case together with the corresponding sensitivity and comparative analyses, and Section 5 provides the conclusions of this study.

#### **2. Preliminaries**

This section briefly introduces the background knowledge needed in this paper, including IFS theory and regret theory.

#### *2.1. Intuitionistic Fuzzy Sets*

The following introduces the basic concepts and related theories of IFS.

**Definition 1** ([4])**.** *Let X be a non-empty set, and then*

$$\tilde{A} = \left\{ \left( x, \mu_{\tilde{A}}(x), \gamma_{\tilde{A}}(x) \right) \middle| x \in X \right\} \tag{1}$$

*is called an intuitionistic fuzzy set on $X$, where $\mu_{\tilde{A}}(x): X \to [0,1]$ and $\gamma_{\tilde{A}}(x): X \to [0,1]$ represent the membership degree and non-membership degree, respectively, of the element $x \in X$ in $\tilde{A}$, with $0 \le \mu_{\tilde{A}}(x) + \gamma_{\tilde{A}}(x) \le 1$ for all $x \in X$. Then $\pi_{\tilde{A}}(x) = 1 - \mu_{\tilde{A}}(x) - \gamma_{\tilde{A}}(x)$, with $0 \le \pi_{\tilde{A}}(x) \le 1$, represents the hesitation or uncertainty degree of the element $x$ in $\tilde{A}$. The ordered pair $\left( \mu_{\tilde{A}}(x), \gamma_{\tilde{A}}(x) \right)$ of the membership degree $\mu_{\tilde{A}}(x)$ and non-membership degree $\gamma_{\tilde{A}}(x)$ is called an intuitionistic fuzzy number (IFN).*
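To make the constraint in Definition 1 concrete, an IFN can be modeled as a small validated pair. The following Python sketch is purely illustrative (the class name `IFN` and its layout are ours, not from the paper): it enforces $0 \le \mu + \gamma \le 1$ and exposes the hesitation degree.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """An intuitionistic fuzzy number (mu, gamma) with 0 <= mu + gamma <= 1."""
    mu: float
    gamma: float

    def __post_init__(self):
        # Reject pairs that violate the IFN constraints of Definition 1.
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.gamma <= 1.0
                and self.mu + self.gamma <= 1.0):
            raise ValueError(f"invalid IFN: ({self.mu}, {self.gamma})")

    @property
    def pi(self) -> float:
        """Hesitation (uncertainty) degree pi = 1 - mu - gamma."""
        return 1.0 - self.mu - self.gamma

a = IFN(0.6, 0.3)
assert abs(a.pi - 0.1) < 1e-12
```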

**Definition 2** ([4])**.** *Let $\tilde{\alpha} = (\mu_{\tilde{\alpha}}, \gamma_{\tilde{\alpha}})$ and $\tilde{\beta} = (\mu_{\tilde{\beta}}, \gamma_{\tilde{\beta}})$ be two IFNs; the operational laws of IFNs are:*

$$(1)\quad \tilde{\alpha} \oplus \tilde{\beta} = \left( \mu_{\tilde{\alpha}} + \mu_{\tilde{\beta}} - \mu_{\tilde{\alpha}} \mu_{\tilde{\beta}},\ \gamma_{\tilde{\alpha}} \gamma_{\tilde{\beta}} \right);$$


**Definition 3.** *The score function $S$ and accuracy function $H$ of an IFN $\tilde{\alpha} = (\mu_{\tilde{\alpha}}, \gamma_{\tilde{\alpha}})$ are defined as $S(\tilde{\alpha}) = \mu_{\tilde{\alpha}} - \gamma_{\tilde{\alpha}}$ and $H(\tilde{\alpha}) = \mu_{\tilde{\alpha}} + \gamma_{\tilde{\alpha}}$; however, when the membership degree is equal to the non-membership degree, the score function cannot be directly used to compare intuitionistic fuzzy numbers. So, Zeng et al.* [37] *proposed a novel score function as below:*

$$S(\tilde{\alpha}) = \mu_{\tilde{\alpha}} - \gamma_{\tilde{\alpha}} - \pi_{\tilde{\alpha}} \times \frac{\log_2(1 + \pi_{\tilde{\alpha}})}{100}, \quad S(\tilde{\alpha}) \in [-1, 1]. \tag{2}$$
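Equation (2) is straightforward to compute. As a minimal sketch (the function name is ours, for illustration), two IFNs with equal $\mu - \gamma$ but different hesitancy become distinguishable under the novel score:

```python
import math

def novel_score(mu, gamma):
    """Novel score of an IFN (mu, gamma) per Equation (2): the classical
    score mu - gamma is perturbed by a small hesitancy-dependent term."""
    pi = 1.0 - mu - gamma                      # hesitation degree
    return mu - gamma - pi * math.log2(1.0 + pi) / 100.0

# Equal mu - gamma (= 0.2) but different hesitancy give different scores:
a = novel_score(0.5, 0.3)   # pi = 0.2
b = novel_score(0.4, 0.2)   # pi = 0.4
assert a != b and -1.0 <= a <= 1.0 and -1.0 <= b <= 1.0
```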

**Definition 4.** *Let $\tilde{\alpha} = (\mu_{\tilde{\alpha}}, \gamma_{\tilde{\alpha}})$ and $\tilde{\beta} = (\mu_{\tilde{\beta}}, \gamma_{\tilde{\beta}})$ be two IFNs; the order relations between them are defined as follows:*

- (i) *If $S(\tilde{\alpha}) > S(\tilde{\beta})$, then $\tilde{\alpha}$ is better than $\tilde{\beta}$, written as $\tilde{\alpha} \succ \tilde{\beta}$;*
- (ii) *If $S(\tilde{\alpha}) = S(\tilde{\beta})$, then $\tilde{\alpha}$ is equal to $\tilde{\beta}$, written as $\tilde{\alpha} = \tilde{\beta}$.*

**Definition 5** ([38])**.** *Let $\tilde{\alpha}_j = \left( \mu_{\tilde{\alpha}_j}, \gamma_{\tilde{\alpha}_j} \right)$ $(j = 1, 2, \cdots, n)$ be a set of IFNs; the intuitionistic fuzzy weighted averaging (IFWA) operator is defined as:*

$$IFWA\_{\omega}(\vec{a\_1}, \vec{a\_2}, \dots, \vec{a\_n}) = \left(1 - \prod\_{j=1}^{n} \left(1 - \mu\_{\vec{a\_j}}\right)^{\omega\_j}, \prod\_{j=1}^{n} \left(\gamma\_{\vec{a\_j}}\right)^{\omega\_j}\right). \tag{3}$$

*where $\omega_j$ is the weight of $\tilde{\alpha}_j = \left( \mu_{\tilde{\alpha}_j}, \gamma_{\tilde{\alpha}_j} \right)$, $j = 1, 2, \cdots, n$, with $\omega_j \in [0, 1]$ and $\sum_{j=1}^{n} \omega_j = 1$.*
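The IFWA operator of Equation (3) can be sketched directly from its closed form. This is a minimal, illustrative implementation (the function name is ours):

```python
import math

def ifwa(ifns, weights):
    """Intuitionistic fuzzy weighted averaging (Equation (3)): aggregates
    IFNs given as (mu_j, gamma_j) pairs with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    mu = 1.0 - math.prod((1.0 - m) ** w for (m, _), w in zip(ifns, weights))
    gamma = math.prod(g ** w for (_, g), w in zip(ifns, weights))
    return mu, gamma

# The aggregate of valid IFNs is again a valid IFN:
mu, gamma = ifwa([(0.6, 0.3), (0.4, 0.5)], [0.7, 0.3])
assert 0.0 <= mu <= 1.0 and 0.0 <= gamma <= 1.0 and mu + gamma <= 1.0
```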

#### *2.2. Regret Theory*

The main idea of regret theory is to compare the result obtained by the selected alternative with the possible results of the other alternatives, thereby characterizing the degrees of rejoicing and regret of decision experts and selecting the optimal alternative that they will not regret.

**Definition 6** ([39])**.** *Let $y_1$ and $y_2$ be the evaluation values of alternatives $P_1$ and $P_2$; then the perceived utility value of experts for alternative $P_1$ is*

$$u(y\_1, y\_2) = v(y\_1) + R(v(y\_1) - v(y\_2)).\tag{4}$$

*where $v(\cdot)$ is a monotonically increasing concave utility function satisfying $v'(\cdot) > 0$ and $v''(\cdot) < 0$, and $R(\cdot)$ is a monotonically increasing concave regret–rejoice function satisfying $R(0) = 0$, $R'(\cdot) > 0$ and $R''(\cdot) < 0$. $\Delta v = v(y_1) - v(y_2)$ represents the utility increment between alternatives $P_1$ and $P_2$. $R(\Delta v) > 0$ means that the decision-maker is willing to choose alternative $P_1$ and abandon alternative $P_2$; otherwise, he will regret the choice.*
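Definition 6 leaves $v(\cdot)$ and $R(\cdot)$ abstract. The sketch below instantiates them with commonly used forms (a power utility and an exponential regret–rejoice function; both are our illustrative choices, not prescribed by the paper) to show how Equation (4) rewards a good choice and penalizes a regretted one:

```python
import math

def v(x, theta=0.88):
    """Assumed power utility v(x) = x**theta; theta in (0, 1) makes v
    increasing and concave, as Definition 6 requires."""
    return x ** theta

def regret_rejoice(dv, delta=0.3):
    """Assumed regret-rejoice function R(dv) = 1 - exp(-delta * dv):
    R(0) = 0, R' > 0, R'' < 0; delta is a regret-aversion coefficient."""
    return 1.0 - math.exp(-delta * dv)

def perceived_utility(y1, y2):
    """Equation (4): perceived utility of choosing P1 when P2 was available."""
    return v(y1) + regret_rejoice(v(y1) - v(y2))

# Regret lowers the perceived utility of the worse alternative,
# while rejoicing raises that of the better one:
assert perceived_utility(0.4, 0.8) < v(0.4)
assert perceived_utility(0.8, 0.4) > v(0.8)
```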

#### **3. A Hybrid Intuitionistic Fuzzy Group Decision Framework**

This part introduces the proposed hybrid intuitionistic fuzzy group decision framework. Firstly, a new intuitionistic fuzzy distance measure is proposed, then the MCGDM problem studied in this paper is described and, finally, the detailed steps of the decision framework are given.

#### *3.1. A Novel Intuitionistic Fuzzy Distance Measure*

In this paper, IFS are used to deal with the fuzziness and uncertainty of decision information. Since intuitionistic fuzzy distances need to be computed many times in the decision process, a novel intuitionistic fuzzy distance measure is proposed to measure such distances better and reduce information loss.

**Definition 7.** *Let $\tilde{\alpha} = \left( \tilde{\alpha}_j \mid j = 1, 2, \cdots, n \right)$ and $\tilde{\beta} = \left( \tilde{\beta}_j \mid j = 1, 2, \cdots, n \right)$ be two intuitionistic fuzzy number vectors, where $\tilde{\alpha}_j = \left( \mu_{\tilde{\alpha}_j}, \gamma_{\tilde{\alpha}_j} \right)$ and $\tilde{\beta}_j = \left( \mu_{\tilde{\beta}_j}, \gamma_{\tilde{\beta}_j} \right)$. The new generalized intuitionistic fuzzy distance measure is defined as follows:*

$$D^{\sigma}\left(\tilde{\alpha}, \tilde{\beta}\right) = \left( \frac{1}{3n} \sum_{j=1}^{n} \left( \left| \mu_{\tilde{\alpha}_j} - \mu_{\tilde{\beta}_j} \right|^{\sigma} + \left| \gamma_{\tilde{\alpha}_j} - \gamma_{\tilde{\beta}_j} \right|^{\sigma} + \left| \pi_{\tilde{\alpha}_j} - \pi_{\tilde{\beta}_j} \right|^{\sigma} + \left| \frac{1}{2} \left( S\left(\tilde{\alpha}_j\right) - S\left(\tilde{\beta}_j\right) \right) \right|^{\sigma} \right) \right)^{\frac{1}{\sigma}}. \tag{5}$$
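Equation (5) can be sketched as follows; this is a minimal, illustrative implementation (function names ours) with the novel score of Equation (2) plugged in for $S$:

```python
import math

def ifn_score(mu, gamma):
    """Novel score function of Equation (2)."""
    pi = 1.0 - mu - gamma
    return mu - gamma - pi * math.log2(1.0 + pi) / 100.0

def gif_distance(alpha, beta, sigma=2):
    """Generalized intuitionistic fuzzy distance of Equation (5) between two
    equal-length vectors of IFNs, each given as a (mu, gamma) pair."""
    n = len(alpha)
    total = 0.0
    for (ma, ga), (mb, gb) in zip(alpha, beta):
        pa, pb = 1.0 - ma - ga, 1.0 - mb - gb   # hesitation degrees
        total += (abs(ma - mb) ** sigma + abs(ga - gb) ** sigma
                  + abs(pa - pb) ** sigma
                  + abs(0.5 * (ifn_score(ma, ga) - ifn_score(mb, gb))) ** sigma)
    return (total / (3 * n)) ** (1.0 / sigma)

a = [(0.6, 0.2), (0.5, 0.4)]
b = [(0.5, 0.3), (0.7, 0.1)]
assert 0.0 <= gif_distance(a, b) <= 1.0
assert gif_distance(a, a) == 0.0
```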

**Theorem 1.** *Let $\tilde{\alpha} = \left( \tilde{\alpha}_j \mid j = 1, 2, \cdots, n \right)$, $\tilde{\beta} = \left( \tilde{\beta}_j \mid j = 1, 2, \cdots, n \right)$ and $\tilde{\chi} = \left( \tilde{\chi}_j \mid j = 1, 2, \cdots, n \right)$ be three intuitionistic fuzzy number vectors; then $D^{\sigma}\left(\tilde{\alpha}, \tilde{\beta}\right)$ is an intuitionistic fuzzy distance measure, i.e., it satisfies (1) $0 \le D^{\sigma}(\tilde{\alpha}, \tilde{\beta}) \le 1$; (2) $D^{\sigma}(\tilde{\alpha}, \tilde{\beta}) = D^{\sigma}(\tilde{\beta}, \tilde{\alpha})$; (3) $D^{\sigma}(\tilde{\alpha}, \tilde{\beta}) = 0$ if $\tilde{\alpha} = \tilde{\beta}$; and (4) if $\tilde{\alpha} \subseteq \tilde{\beta} \subseteq \tilde{\chi}$, then $D^{\sigma}(\tilde{\alpha}, \tilde{\beta}) \le D^{\sigma}(\tilde{\alpha}, \tilde{\chi})$ and $D^{\sigma}(\tilde{\beta}, \tilde{\chi}) \le D^{\sigma}(\tilde{\alpha}, \tilde{\chi})$.*

**Proof.** Properties (2) and (3) can be proved directly; only (1) and (4) are proved here.

(1) Since $0 \le \mu_{\tilde{\alpha}_j}, \mu_{\tilde{\beta}_j} \le 1$ and $0 \le \gamma_{\tilde{\alpha}_j}, \gamma_{\tilde{\beta}_j} \le 1$, then

$$0 \le \left| \mu_{\tilde{\alpha}_j} - \mu_{\tilde{\beta}_j} \right| \le 1, \quad 0 \le \left| \gamma_{\tilde{\alpha}_j} - \gamma_{\tilde{\beta}_j} \right| \le 1, \quad 0 \le \left| \pi_{\tilde{\alpha}_j} - \pi_{\tilde{\beta}_j} \right| \le 1, \quad 0 \le \left| \frac{1}{2}\left( S\left(\tilde{\alpha}_j\right) - S\left(\tilde{\beta}_j\right) \right) \right| \le 1.$$

Hence, for $\sigma \ge 1$,

$$0 \le \left| \mu_{\tilde{\alpha}_j} - \mu_{\tilde{\beta}_j} \right|^{\sigma} + \left| \gamma_{\tilde{\alpha}_j} - \gamma_{\tilde{\beta}_j} \right|^{\sigma} + \left| \pi_{\tilde{\alpha}_j} - \pi_{\tilde{\beta}_j} \right|^{\sigma} + \left| \frac{1}{2}\left( S\left(\tilde{\alpha}_j\right) - S\left(\tilde{\beta}_j\right) \right) \right|^{\sigma} \le 3,$$

and therefore

$$0 \le \frac{1}{3n} \sum_{j=1}^{n} \left( \left| \mu_{\tilde{\alpha}_j} - \mu_{\tilde{\beta}_j} \right|^{\sigma} + \left| \gamma_{\tilde{\alpha}_j} - \gamma_{\tilde{\beta}_j} \right|^{\sigma} + \left| \pi_{\tilde{\alpha}_j} - \pi_{\tilde{\beta}_j} \right|^{\sigma} + \left| \frac{1}{2}\left( S\left(\tilde{\alpha}_j\right) - S\left(\tilde{\beta}_j\right) \right) \right|^{\sigma} \right) \le 1,$$

so that $0 \le D^{\sigma}\left(\tilde{\alpha}, \tilde{\beta}\right) \le 1$.

(4) Since $\tilde{\alpha} \subseteq \tilde{\beta} \subseteq \tilde{\chi}$, then $\mu_{\tilde{\alpha}_j} \le \mu_{\tilde{\beta}_j} \le \mu_{\tilde{\chi}_j}$, $\gamma_{\tilde{\alpha}_j} \ge \gamma_{\tilde{\beta}_j} \ge \gamma_{\tilde{\chi}_j}$ and $S\left(\tilde{\alpha}_j\right) \le S\left(\tilde{\beta}_j\right) \le S\left(\tilde{\chi}_j\right)$ for all $x_j \in X$. Then, term by term,

$$\left| \mu_{\tilde{\alpha}_j} - \mu_{\tilde{\beta}_j} \right|^{\sigma} \le \left| \mu_{\tilde{\alpha}_j} - \mu_{\tilde{\chi}_j} \right|^{\sigma}, \quad \left| \gamma_{\tilde{\alpha}_j} - \gamma_{\tilde{\beta}_j} \right|^{\sigma} \le \left| \gamma_{\tilde{\alpha}_j} - \gamma_{\tilde{\chi}_j} \right|^{\sigma}, \quad \left| \frac{1}{2}\left( S\left(\tilde{\alpha}_j\right) - S\left(\tilde{\beta}_j\right) \right) \right|^{\sigma} \le \left| \frac{1}{2}\left( S\left(\tilde{\alpha}_j\right) - S\left(\tilde{\chi}_j\right) \right) \right|^{\sigma},$$

and likewise with $\tilde{\beta}$ and $\tilde{\chi}$ in place of $\tilde{\alpha}$ and $\tilde{\beta}$, the $\pi$ terms behaving analogously. Summing the four terms over $j$, dividing by $3n$ and taking the $\sigma$th root yields

$$D^{\sigma}\left(\tilde{\alpha}, \tilde{\beta}\right) \le D^{\sigma}\left(\tilde{\alpha}, \tilde{\chi}\right), \qquad D^{\sigma}\left(\tilde{\beta}, \tilde{\chi}\right) \le D^{\sigma}\left(\tilde{\alpha}, \tilde{\chi}\right).$$

Accordingly, $D^{\sigma}\left(\tilde{\alpha}, \tilde{\beta}\right) \le D^{\sigma}\left(\tilde{\alpha}, \tilde{\chi}\right)$ and $D^{\sigma}\left(\tilde{\beta}, \tilde{\chi}\right) \le D^{\sigma}\left(\tilde{\alpha}, \tilde{\chi}\right)$. $\square$

**Definition 8.** *Let $\tilde{A} = \left( \tilde{\alpha}_{ij} \right)_{m \times n}$ and $\tilde{B} = \left( \tilde{\beta}_{ij} \right)_{m \times n}$ be two intuitionistic fuzzy matrices, where $\tilde{\alpha}_{ij} = \left( \mu_{\tilde{\alpha}_{ij}}, \gamma_{\tilde{\alpha}_{ij}} \right)$ and $\tilde{\beta}_{ij} = \left( \mu_{\tilde{\beta}_{ij}}, \gamma_{\tilde{\beta}_{ij}} \right)$ are IFNs. Then, the distance between the intuitionistic fuzzy matrices $\tilde{A}$ and $\tilde{B}$ is defined as follows:*

$$\hat{D}^{\sigma}\left(\tilde{A}, \tilde{B}\right) = \left( \frac{1}{3mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left| \mu_{ij}^{\tilde{A}} - \mu_{ij}^{\tilde{B}} \right|^{\sigma} + \left| \gamma_{ij}^{\tilde{A}} - \gamma_{ij}^{\tilde{B}} \right|^{\sigma} + \left| \pi_{ij}^{\tilde{A}} - \pi_{ij}^{\tilde{B}} \right|^{\sigma} + \left| \frac{1}{2} \left( S\left(\tilde{A}_{ij}\right) - S\left(\tilde{B}_{ij}\right) \right) \right|^{\sigma} \right) \right)^{\frac{1}{\sigma}}. \tag{6}$$

*When $\sigma = 1$, $\sigma = 2$ and $\sigma = +\infty$, $\hat{D}^{\sigma}\left(\tilde{A}, \tilde{B}\right)$ degenerates to the corresponding intuitionistic fuzzy Hamming distance $\hat{D}^{1}\left(\tilde{A}, \tilde{B}\right)$, Euclidean distance $\hat{D}^{2}\left(\tilde{A}, \tilde{B}\right)$ and Chebyshev distance $\hat{D}^{+\infty}\left(\tilde{A}, \tilde{B}\right)$:*

$$\hat{D}^{1}\left(\tilde{A}, \tilde{B}\right) = \frac{1}{3mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left| \mu_{ij}^{\tilde{A}} - \mu_{ij}^{\tilde{B}} \right| + \left| \gamma_{ij}^{\tilde{A}} - \gamma_{ij}^{\tilde{B}} \right| + \left| \pi_{ij}^{\tilde{A}} - \pi_{ij}^{\tilde{B}} \right| + \left| \frac{1}{2} \left( S\left(\tilde{A}_{ij}\right) - S\left(\tilde{B}_{ij}\right) \right) \right| \right). \tag{7}$$

$$\hat{D}^{2}\left(\tilde{A}, \tilde{B}\right) = \sqrt{ \frac{1}{3mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left| \mu_{ij}^{\tilde{A}} - \mu_{ij}^{\tilde{B}} \right|^{2} + \left| \gamma_{ij}^{\tilde{A}} - \gamma_{ij}^{\tilde{B}} \right|^{2} + \left| \pi_{ij}^{\tilde{A}} - \pi_{ij}^{\tilde{B}} \right|^{2} + \left| \frac{1}{2} \left( S\left(\tilde{A}_{ij}\right) - S\left(\tilde{B}_{ij}\right) \right) \right|^{2} \right) }. \tag{8}$$

$$\hat{D}^{+\infty}\left(\tilde{A}, \tilde{B}\right) = \max_{\substack{1 \le i \le m \\ 1 \le j \le n}} \max \left( \left| \mu_{ij}^{\tilde{A}} - \mu_{ij}^{\tilde{B}} \right|, \left| \gamma_{ij}^{\tilde{A}} - \gamma_{ij}^{\tilde{B}} \right|, \left| \pi_{ij}^{\tilde{A}} - \pi_{ij}^{\tilde{B}} \right|, \left| \frac{1}{2} \left( S\left(\tilde{A}_{ij}\right) - S\left(\tilde{B}_{ij}\right) \right) \right| \right). \tag{9}$$
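The matrix distance of Equation (6) and its limiting Chebyshev case of Equation (9) can be sketched together. This is a minimal, illustrative implementation (function names ours); the $\sigma = +\infty$ branch takes the maximum component over all cells:

```python
import math

def ifn_score(mu, gamma):
    """Novel score function of Equation (2)."""
    pi = 1.0 - mu - gamma
    return mu - gamma - pi * math.log2(1.0 + pi) / 100.0

def matrix_distance(A, B, sigma=1):
    """Equation (6) for m x n matrices of (mu, gamma) pairs; sigma = math.inf
    gives the Chebyshev-type distance of Equation (9)."""
    m, n = len(A), len(A[0])
    terms = []
    for i in range(m):
        for j in range(n):
            (ma, ga), (mb, gb) = A[i][j], B[i][j]
            pa, pb = 1.0 - ma - ga, 1.0 - mb - gb
            terms.append((abs(ma - mb), abs(ga - gb), abs(pa - pb),
                          abs(0.5 * (ifn_score(ma, ga) - ifn_score(mb, gb)))))
    if sigma == math.inf:
        return max(max(t) for t in terms)        # Equation (9)
    s = sum(x ** sigma for t in terms for x in t)
    return (s / (3 * m * n)) ** (1.0 / sigma)

A = [[(0.6, 0.2)], [(0.5, 0.4)]]
B = [[(0.5, 0.3)], [(0.7, 0.1)]]
assert matrix_distance(A, B, 1) <= matrix_distance(A, B, math.inf)
```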

#### *3.2. Problem Statement*

For the MCGDM problem under the intuitionistic fuzzy environment, let $P_i$ $(i = 1, 2, \cdots, m)$ be the set of urban rail transit system types and $Q_j$ $(j = 1, 2, \cdots, n)$ be the set of criteria. $\omega_j$ $(j = 1, 2, \cdots, n)$ is the weight of criterion $Q_j$, satisfying $0 \le \omega_j \le 1$ and $\sum_{j=1}^{n} \omega_j = 1$. $D_k$ $(k = 1, 2, \cdots, K)$ is the set of experts, and the corresponding weight of each expert is $\lambda_k$ $(k = 1, 2, \cdots, K)$, satisfying $0 \le \lambda_k \le 1$ and $\sum_{k=1}^{K} \lambda_k = 1$. $\tilde{E}^k = \left( \tilde{e}_{ij}^{k} \right)_{m \times n}$ represents the evaluation of the urban rail transit system $P_i$ under criterion $Q_j$ given by the $k$th expert.

#### *3.3. Detailed Steps of the Hybrid Intuitionistic Fuzzy Group Decision Framework*

This paper develops a hybrid group decision framework that considers the psychological behavior of experts under the intuitionistic fuzzy environment. First, experts express their qualitative evaluations through linguistic variables, from which the intuitionistic fuzzy decision matrices of the experts are obtained. Second, the weights of the experts are determined by a similarity method based on the proposed intuitionistic fuzzy distance measure, and the aggregated decision matrix is obtained. Third, the subjective and objective weights of the criteria are obtained by the DEMATEL and CRITIC methods, respectively, and the comprehensive weights of the criteria are obtained by the linear integration method. The DEMATEL method can fully account for the relationships between criteria, making the final subjective weights more accurate; the CRITIC method comprehensively measures the objective weights based on the contrast intensity of each criterion and the conflict between criteria, so the objective properties of the data themselves are fully used for a scientific evaluation. In the ranking stage, the COPRAS method based on regret theory is used to calculate the comprehensive evaluation value of each alternative and finally determine the ranking of the urban rail transit systems. The COPRAS method is simple to operate, requires no standardization process and can reduce the loss of evaluation information. The detailed steps are as follows and the method framework is shown in Figure 1.

(1) Stage 1 Collect the evaluation information

Step 1.1: Obtain the linguistic decision matrix.

The evaluation value of *P*i(*i* = 1, 2, ··· , *m*) in criterion *Qj*(*j* = 1, 2, ··· , *n*) is given by expert *Dk*(*k* = 1, 2, ··· , *K*) in the form of linguistic variables.

**Figure 1.** The framework of the proposed method.

Step 1.2: Convert to the fuzzy decision matrix.

The linguistic evaluation values are transformed into intuitionistic fuzzy numbers to obtain the intuitionistic fuzzy evaluation matrix. Table 1 lists the linguistic variables, which reflect the transformation relationship between the linguistic variables of the decision matrix and IFNs.

$$
E^{k} = \begin{pmatrix}
e^{k}_{11} & e^{k}_{12} & \cdots & e^{k}_{1n} \\
e^{k}_{21} & e^{k}_{22} & \cdots & e^{k}_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
e^{k}_{m1} & e^{k}_{m2} & \cdots & e^{k}_{mn}
\end{pmatrix}, \quad e^{k}_{ij} = (\mu^{k}_{ij}, \gamma^{k}_{ij}).
$$


**Table 1.** The transformation relationship of decision-making matrix linguistic variables [40].

(2) Stage 2 Determine the comprehensive evaluation matrix

Step 2.1: Determine the weights of experts with a similarity-based approach.

The determination of expert weights is a key issue in MCGDM problems. In this study, the weights of experts are determined by a similarity method. Generally speaking, the closer an expert's evaluation is to the evaluation of the whole expert group, the greater that expert's weight.

Step 2.1.1 Obtain the average evaluation matrix of the expert group from Equation (10)

$$\overline{e}_{ij} = \left( \overline{\mu}_{ij}, \overline{\gamma}_{ij} \right) = IFWA\left( e^{1}_{ij}, e^{2}_{ij}, \cdots, e^{K}_{ij} \right) = \left( 1 - \prod_{k=1}^{K} \left( 1 - \mu^{k}_{ij} \right)^{\frac{1}{K}}, \prod_{k=1}^{K} \left( \gamma^{k}_{ij} \right)^{\frac{1}{K}} \right). \tag{10}$$

where *IFWA* is the intuitionistic fuzzy weighted average operator.
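As an illustrative sketch (not code from the paper), the IFWA aggregation of Equation (10) can be written in a few lines of Python; the sample IFNs below are hypothetical:

```python
import math

def ifwa(ifns, weights=None):
    """Aggregate intuitionistic fuzzy numbers (mu, gamma) with the IFWA operator."""
    K = len(ifns)
    if weights is None:
        weights = [1.0 / K] * K  # equal expert weights, as in Equation (10)
    mu = 1.0 - math.prod((1.0 - m) ** w for (m, _), w in zip(ifns, weights))
    gamma = math.prod(g ** w for (_, g), w in zip(ifns, weights))
    return (mu, gamma)

# Three hypothetical expert opinions on one cell of the decision matrix:
print(ifwa([(0.6, 0.3), (0.7, 0.2), (0.5, 0.4)]))
```

The same function with explicit `weights` also covers the weighted aggregations of Equations (14), (15) and (22).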

Step 2.1.2 According to Definition 8, the distance between the $k$th expert's evaluation matrix $E^k = (e^k_{ij})_{m \times n}$ and the average evaluation matrix $\overline{E} = (\overline{e}_{ij})_{m \times n}$ of the expert group is expressed as:

$$\begin{aligned} D^{1}\left(E^{k},\overline{E}\right) &= \frac{1}{3m} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left| \mu_{ij}^{k} - \overline{\mu}_{ij} \right| + \left| \gamma_{ij}^{k} - \overline{\gamma}_{ij} \right| + \left| \pi_{ij}^{k} - \overline{\pi}_{ij} \right| + \left| \frac{1}{2} \left( S\left(e_{ij}^{k}\right) - S\left(\overline{e}_{ij}\right) \right) \right| \right) \\ D^{+\infty}\left(E^{k},\overline{E}\right) &= \max_{\substack{1 \leq i \leq m \\ 1 \leq j \leq n}} \left\{ \left| \mu_{ij}^{k} - \overline{\mu}_{ij} \right|, \left| \gamma_{ij}^{k} - \overline{\gamma}_{ij} \right|, \left| \pi_{ij}^{k} - \overline{\pi}_{ij} \right|, \left| \frac{1}{2} \left( S\left(e_{ij}^{k}\right) - S\left(\overline{e}_{ij}\right) \right) \right| \right\} \end{aligned} \tag{11}$$

Step 2.1.3 Through the control parameters, the comprehensive distance calculated from Equation (12) is:

$$D^{*}\left(E^{k}, \overline{E}\right) = \theta D^{1}\left(E^{k}, \overline{E}\right) + (1 - \theta) D^{+\infty}\left(E^{k}, \overline{E}\right). \tag{12}$$

where $D^{*}(E^{k}, \overline{E})$ represents the comprehensive distance and $\theta$ represents the balance coefficient, $0 \leq \theta \leq 1$.

Step 2.1.4 The smaller the distance $D^{*}(E^{k}, \overline{E})$, the greater the weight of the expert. The corresponding weight $\lambda_k$ is obtained from Equation (13):

$$\lambda_{k} = \frac{1 - D^{*}\left(E^{k}, \overline{E}\right)}{\sum_{k=1}^{K} \left(1 - D^{*}\left(E^{k}, \overline{E}\right)\right)}, \quad k = 1, 2, \cdots, K. \tag{13}$$
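Steps 2.1.2–2.1.4 can be sketched as follows; the input matrices and the score function $S(e) = \mu - \gamma$ are illustrative assumptions, not the paper's data:

```python
import numpy as np

def expert_weights(mats, avg, theta=0.5):
    """mats: list of K arrays of shape (m, n, 2) holding (mu, gamma);
    avg: group-average matrix of the same shape. Returns lambda_k per Equation (13)."""
    score = lambda e: e[..., 0] - e[..., 1]  # assumed score function S(e) = mu - gamma
    d_star = []
    for E in mats:
        pi_E = 1 - E[..., 0] - E[..., 1]     # hesitancy degrees pi
        pi_A = 1 - avg[..., 0] - avg[..., 1]
        terms = np.stack([np.abs(E[..., 0] - avg[..., 0]),
                          np.abs(E[..., 1] - avg[..., 1]),
                          np.abs(pi_E - pi_A),
                          0.5 * np.abs(score(E) - score(avg))])
        d1 = terms.sum() / (3 * E.shape[0])  # Hamming-type distance, Equation (11)
        d_inf = terms.max()                  # Chebyshev-type distance, Equation (11)
        d_star.append(theta * d1 + (1 - theta) * d_inf)  # Equation (12)
    d_star = np.asarray(d_star)
    return (1 - d_star) / (1 - d_star).sum()  # Equation (13)
```

An expert whose matrix coincides with the group average receives the largest share of the weight, as the text requires.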

Step 2.2: Aggregate the fuzzy decision-making matrix.

Using Equation (14), expert decision matrices are aggregated to obtain the comprehensive evaluation decision matrix:

$$e_{ij} = IFWA\left( e^{1}_{ij}, e^{2}_{ij}, \cdots, e^{K}_{ij} \right) = \left( 1 - \prod_{k=1}^{K} \left( 1 - \mu^{k}_{ij} \right)^{\lambda_{k}}, \prod_{k=1}^{K} \left( \gamma^{k}_{ij} \right)^{\lambda_{k}} \right). \tag{14}$$

(3) Stage 3 Obtain the comprehensive weight of criteria

Firstly, the subjective weights of criteria are calculated by DEMATEL method, and then the objective weights of criteria are calculated by CRITIC. Finally, the comprehensive weights of criteria are obtained by combining the weight preference coefficient with the subjective and objective weight.

Step 3.1: Determine the subjective weights of criteria with DEMATEL method.

Step 3.1.1 Construct the fuzzy direct-influence matrix

The direct influence of criterion $Q_j$ on $Q_l$ is given by expert $D_k (k = 1, 2, \cdots, K)$ in the form of linguistic variables and then transformed into intuitionistic fuzzy numbers to obtain the intuitionistic fuzzy direct-influence matrix $T^k = (t^k_{jl})_{n \times n}$.

Step 3.1.2 Aggregate the direct-influence matrices with Equation (15) to determine the group direct-influence matrix $T = (t_{jl})_{n \times n}$:

$$t_{jl} = IFWA\left( t^{1}_{jl}, t^{2}_{jl}, \cdots, t^{K}_{jl} \right) = \left( 1 - \prod_{k=1}^{K} \left( 1 - \mu^{k}_{jl} \right)^{\lambda_{k}}, \prod_{k=1}^{K} \left( \gamma^{k}_{jl} \right)^{\lambda_{k}} \right). \tag{15}$$

where $\mu_{jl} = 1 - \prod_{k=1}^{K} \left( 1 - \mu^{k}_{jl} \right)^{\lambda_{k}}$, $\gamma_{jl} = \prod_{k=1}^{K} \left( \gamma^{k}_{jl} \right)^{\lambda_{k}}$, and $\lambda_k$ is the weight of the $k$th expert, $\lambda_k = \frac{1}{K}$.

Step 3.1.3 Use Equation (16) to standardize the direct-influence matrix to obtain the standardized direct-influence matrix $T' = (t'_{jl})_{n \times n}$:

$$t'_{jl} = \frac{t_{jl}}{\max\limits_{1 \leq j \leq n} \left(\sum_{l=1}^{n} t_{jl}\right)}, \quad j, l = 1, 2, \cdots, n. \tag{16}$$

where $t_{jl} = \mu_{jl} - \gamma_{jl} - \pi_{jl} \times \frac{\log_2(1 + \pi_{jl})}{100}$ and $\pi_{jl} = 1 - \mu_{jl} - \gamma_{jl}$.

Step 3.1.4 Utilize Equation (17) to calculate the total impact matrix $T^{*} = (t^{*}_{jl})_{n \times n}$:

$$T^{*} = T' \times \left(I - T'\right)^{-1}. \tag{17}$$

where *I* is the identity matrix.

Step 3.1.5 Employ Equations (18) and (19) to calculate importance *ξ* and influence *ζ*:

$$\xi_{j} = R_{j} + C_{j}, \quad j = 1, 2, \cdots, n. \tag{18}$$

$$\zeta_{j} = R_{j} - C_{j}, \quad j = 1, 2, \cdots, n. \tag{19}$$

where $R_j = \sum_{l=1}^{n} t^{*}_{jl}$ and $C_j = \sum_{l=1}^{n} t^{*}_{lj}$.

Step 3.1.6 Use Equation (20) to obtain the subjective weight $\omega^{s}_{j}$ of criterion $Q_j$:

$$
\omega_{j}^{s} = \frac{\sqrt{\xi_{j}^{2} + \zeta_{j}^{2}}}{\sum_{j=1}^{n} \sqrt{\xi_{j}^{2} + \zeta_{j}^{2}}}. \tag{20}
$$
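Steps 3.1.4–3.1.6 reduce to a short matrix computation. A minimal sketch with an illustrative $3 \times 3$ standardized matrix (hypothetical values, not the paper's data):

```python
import numpy as np

# Illustrative standardized direct-influence matrix T' (hypothetical values):
T_norm = np.array([[0.0, 0.2, 0.1],
                   [0.3, 0.0, 0.2],
                   [0.1, 0.1, 0.0]])
T_star = T_norm @ np.linalg.inv(np.eye(3) - T_norm)  # total impact matrix, Equation (17)
R = T_star.sum(axis=1)        # row sums R_j (influence exerted)
C = T_star.sum(axis=0)        # column sums C_j (influence received)
xi = R + C                    # importance, Equation (18)
zeta = R - C                  # net influence, Equation (19)
w_s = np.sqrt(xi**2 + zeta**2)
w_s /= w_s.sum()              # subjective weights, Equation (20)
print(w_s)                    # weights sum to 1
```

Note that $T^{*} = T'(I - T')^{-1} = \sum_{k \geq 1} (T')^{k}$, i.e., the matrix inverse accumulates all indirect influence paths at once.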

Step 3.2: Determine the objective weights of criteria with the CRITIC method.

Step 3.2.1 Use Equation (21) to normalize the fuzzy decision-making matrix $E^{k} = (e^{k}_{ij})_{m \times n}$:

$$\begin{aligned} e^{k}_{ij} &= \left(\mu^{k}_{ij}, \gamma^{k}_{ij}\right) = \left(\overline{\mu}^{k}_{ij}, \overline{\gamma}^{k}_{ij}\right), \text{ for benefit criteria} \\ e^{k}_{ij} &= \left(\mu^{k}_{ij}, \gamma^{k}_{ij}\right) = \left(\overline{\gamma}^{k}_{ij}, \overline{\mu}^{k}_{ij}\right), \text{ for cost criteria} \end{aligned} \tag{21}$$

Step 3.2.2 Use Equation (22) to aggregate the fuzzy decision-making matrix:

$$
e^{*}_{ij} = IFWA\left( e^{1}_{ij}, e^{2}_{ij}, \cdots, e^{K}_{ij} \right) = \left( 1 - \prod_{k=1}^{K} \left( 1 - \mu^{k}_{ij} \right)^{\lambda_{k}}, \prod_{k=1}^{K} \left( \gamma^{k}_{ij} \right)^{\lambda_{k}} \right). \tag{22}
$$

Step 3.2.3 Use Equation (23) to obtain the standard deviation *τ<sup>j</sup>* of the criterion:

$$\tau_{j} = \sqrt{\frac{1}{m-1} \sum_{i=1}^{m} \left( D^{\sigma} \left( e^{*}_{ij}, \overline{e}_{j} \right) \right)^{2}}, \quad j = 1, 2, \cdots, n. \tag{23}$$

where $\overline{e}_{j} = \frac{1}{m} \sum_{i=1}^{m} e^{*}_{ij} = IFWA\left( e^{*}_{1j}, e^{*}_{2j}, \cdots, e^{*}_{mj} \right) = \left( 1 - \prod_{i=1}^{m} \left( 1 - \mu^{*}_{ij} \right)^{\frac{1}{m}}, \prod_{i=1}^{m} \left( \gamma^{*}_{ij} \right)^{\frac{1}{m}} \right)$.

Use Equation (24) to evaluate correlation coefficient *ρjl* between criteria:

$$\rho_{jl} = \frac{\sum_{i=1}^{m} \left[ D^{\sigma} \left( e^{*}_{ij}, \overline{e}_{j} \right) \cdot D^{\sigma} \left( e^{*}_{il}, \overline{e}_{l} \right) \right]}{\sqrt{\sum_{i=1}^{m} \left( D^{\sigma} \left( e^{*}_{ij}, \overline{e}_{j} \right) \right)^{2}} \sqrt{\sum_{i=1}^{m} \left( D^{\sigma} \left( e^{*}_{il}, \overline{e}_{l} \right) \right)^{2}}}, \quad j, l = 1, 2, \cdots, n. \tag{24}$$

where $\overline{e}_{j} = \frac{1}{m} \sum_{i=1}^{m} e^{*}_{ij}$, $\overline{e}_{l} = \frac{1}{m} \sum_{i=1}^{m} e^{*}_{il}$, $j, l = 1, 2, \cdots, n$.

Step 3.2.4 Use Equation (25) to obtain the objective weight $\omega^{o}_{j}$ of criterion $Q_j$:

$$\omega_{j}^{o} = \frac{\tau_{j} \sum_{l=1}^{n} \left(1 - \rho_{jl}\right)}{\sum_{j=1}^{n} \left[\tau_{j} \sum_{l=1}^{n} \left(1 - \rho_{jl}\right)\right]}, \quad j = 1, 2, \cdots, n. \tag{25}$$
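With the distances $D^{\sigma}(e^{*}_{ij}, \overline{e}_{j})$ arranged in an $m \times n$ array, Steps 3.2.3–3.2.4 reduce to standard statistics. A sketch with randomly generated placeholder data (not the case-study values):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((7, 8))          # placeholder distances, 7 alternatives x 8 criteria
tau = D.std(axis=0, ddof=1)     # standard deviation per criterion, Equation (23)
rho = np.corrcoef(D, rowvar=False)   # correlations between criteria, Equation (24)
info = tau * (1 - rho).sum(axis=1)   # contrast intensity x conflict per criterion
w_o = info / info.sum()              # objective weights, Equation (25)
print(w_o)
```

Criteria with high dispersion and low correlation to the others carry more information and thus receive larger objective weights.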

Step 3.3: Obtain the comprehensive weights *ω<sup>j</sup>* of criteria.

$$
\omega_{j} = \phi \omega_{j}^{s} + (1 - \phi)\omega_{j}^{o}. \tag{26}
$$

where $\phi$ ($0 \leq \phi \leq 1$) indicates the relative importance of the subjective weight and the objective weight, respectively. Here, it is assumed that the subjective and objective weights are of equal importance, so $\phi = 0.5$.

#### (4) Stage 4 Determine the ranking of urban rail transit systems

In this paper, the power function $u(x) = x^{\varepsilon}$ is used as the utility function of the attribute value, where $\varepsilon$ ($0 \leq \varepsilon \leq 1$) is the risk aversion coefficient describing the risk attitude of experts in decision-making; the smaller it is, the higher the risk aversion degree of the experts. $R(x) = 1 - \exp(-\vartheta \cdot x)$ is used as the regret-rejoice function, where $\vartheta$ ($\vartheta \in [0, +\infty)$) is the regret avoidance coefficient of the experts; the greater $\vartheta$ is, the higher the experts' regret avoidance degree [41].

Let the evaluation value of $P_i (i = 1, 2, \cdots, m)$ be $y_i$; then the perceived utility value of the experts on $P_i$ is $u_i = v(y_i) + R(v(y_i) - v(y^{*}))$, where $y^{*} = \max_{1 \leq i \leq m}\{y_i\}$ is the utility value of the ideal urban rail transit system type. $R(v(y_i) - v(y^{*})) \leq 0$ indicates the regret value when the decision-maker chooses $P_i$ and abandons the ideal urban rail transit system type. Therefore, the perceived utility value of the experts on an urban rail transit system type includes the utility value of $P_i$ and the regret value of $P_i$ compared with the ideal type.
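The perceived utility described above can be sketched directly; $\varepsilon = 0.88$ and $\vartheta = 5$ follow the values used later in the paper, while the sample utilities are hypothetical:

```python
import math

def utility(x, eps=0.88):
    """Power utility u(x) = x**eps; a smaller eps means stronger risk aversion."""
    return x ** eps

def regret(delta, vartheta=5.0):
    """Regret-rejoice function R(x) = 1 - exp(-vartheta * x)."""
    return 1.0 - math.exp(-vartheta * delta)

# Perceived utility of an alternative relative to the ideal one:
v_i, v_star = utility(0.4), utility(0.9)
perceived = v_i + regret(v_i - v_star)  # the regret term is <= 0 here
```

Because $v_i - v^{*} \leq 0$, the regret term penalizes any alternative that falls short of the ideal, and the penalty grows with $\vartheta$.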

Step 4.1: Determine the comprehensive evaluation value of the urban rail transit systems based on the COPRAS method considering regret theory.

Step 4.1.1 Determine the weighted decision matrix:

$$
\widehat{U} = \begin{pmatrix}
\widehat{u}_{11} & \widehat{u}_{12} & \cdots & \widehat{u}_{1n} \\
\widehat{u}_{21} & \widehat{u}_{22} & \cdots & \widehat{u}_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\widehat{u}_{m1} & \widehat{u}_{m2} & \cdots & \widehat{u}_{mn}
\end{pmatrix}.
$$

where $\widehat{u}_{ij} = \omega_{j} \cdot u_{ij}$.

Step 4.1.2 Use Equation (27) to calculate the utility value of the *Pi* under the criterion:

$$\kappa_{ij} = \left( D^{\sigma} \left( e_{ij}, e^{*}_{j} \right) \right)^{\varepsilon}. \tag{27}$$

where $\varepsilon$ is the risk aversion coefficient of the decision-making experts; based on previous studies [39,42], $\varepsilon = 0.88$. $e^{*}_{j}$ is the ideal point: for benefit criteria, $e^{*}_{j} = \left( \max_{1 \leq i \leq m} \mu_{ij}, \min_{1 \leq i \leq m} \gamma_{ij} \right)$; for cost criteria, $e^{*}_{j} = \left( \min_{1 \leq i \leq m} \mu_{ij}, \max_{1 \leq i \leq m} \gamma_{ij} \right)$.

Step 4.1.3 Use Equation (28) to calculate the regret value of *Pi*:

$$\mathcal{J}_{ij} = 1 - \exp\left(-\vartheta \cdot \Delta u\right). \tag{28}$$

where $\Delta u = \kappa^{*}_{j} - \kappa_{ij}$ and $\kappa^{*}_{j} = \min_{1 \leq i \leq m} \kappa_{ij}$ is the utility value of the ideal point. $\vartheta$ is the regret avoidance coefficient of the experts.

Step 4.1.4 Utilize Equation (29) to calculate the perceived utility value of *Pi*:

$$
\mu_{ij} = \kappa_{ij} + \mathcal{J}_{ij}. \tag{29}
$$

Step 4.1.5 Obtain the benefit value and cost value of *Pi*:

For benefit criteria, use Equation (30) to calculate comprehensive benefit value *G*<sup>+</sup> *i* of *Pi*:

$$G^{+}_{i} = \sum_{j=1}^{r} \mu^{+}_{ij}, \quad i = 1, 2, \cdots, m. \tag{30}$$

For cost criteria, use Equation (31) to calculate the comprehensive cost value $G^{-}_{i}$ of $P_i$:

$$G^{-}_{i} = \sum_{j=r+1}^{n} \mu^{-}_{ij}, \quad i = 1, 2, \cdots, m. \tag{31}$$

where "+" and "-" represent "benefit" and "cost", respectively, and *r* is the number of benefit criteria.

Step 4.1.6 Use Equation (32) to determine the comprehensive evaluation value of *Pi*:

$$H_{i} = G^{+}_{i} + \frac{\min_{i} G^{-}_{i} \sum_{i=1}^{m} G^{-}_{i}}{G^{-}_{i} \sum_{i=1}^{m} \frac{\min_{i} G^{-}_{i}}{G^{-}_{i}}} = G^{+}_{i} + \frac{\sum_{i=1}^{m} G^{-}_{i}}{G^{-}_{i} \sum_{i=1}^{m} \frac{1}{G^{-}_{i}}}, \quad i = 1, 2, \cdots, m; \quad \min_{i} G^{-}_{i} = \min_{1 \leq i \leq m} \left\{ G^{-}_{i} \right\}. \tag{32}$$
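The second form of Equation (32) shows that $\min_i G^{-}_{i}$ cancels, so the score needs only the benefit and cost sums. A sketch with hypothetical values:

```python
import numpy as np

G_plus = np.array([0.62, 0.55, 0.48])    # hypothetical benefit values G_i^+
G_minus = np.array([0.30, 0.25, 0.40])   # hypothetical cost values G_i^-
H = G_plus + G_minus.sum() / (G_minus * (1.0 / G_minus).sum())  # Equation (32)
ranking = np.argsort(-H)                 # larger H_i means a better alternative
print(H, ranking)
```

The cost term rewards alternatives with low $G^{-}_{i}$, so $H_i$ trades off accumulated benefit against accumulated cost in a single score.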

Step 4.2: Select the optimal urban rail transit system.

During the process of urban rail transit system selection, the optimal urban rail transit system is determined according to the comprehensive utility value *Hi* calculated by Equation (32). That is, sort the alternatives by *Hi* in descending order: the larger *Hi* is, the better the scheme.

#### **4. Case Study**

In this part, firstly, seven types of urban rail transit systems and eight criteria are listed. Secondly, the proposed hybrid decision model is used for the selection of the urban rail transit system of City N, and the optimal urban rail transit system is selected to prove the applicability and effectiveness of the proposed method. Finally, the stability and robustness of the model are verified through sensitivity analysis and comparative analysis.

Therefore, the types and related evaluation criteria of urban rail transit are systematically studied. Based on the existing research and discussions with four experts (see Table 2 for the experts' backgrounds), seven types of urban rail transit and eight criteria were determined to evaluate the types of urban rail transit (Table 3). After the preliminary analysis, an expert group composed of the four experts was responsible for the evaluation of urban rail transit types. These decision-makers come from rail transit enterprises, universities and government agencies. Next, the steps of the developed method in evaluating the type selection of urban rail transit will be introduced.

**Table 2.** The background of experts.




#### *4.1. The Types of Urban Rail Transit System*

As the backbone of urban public transport, urban rail transit has the characteristics of being fast, convenient, efficient, safe and comfortable. Under the current green and sustainable development policy, this type of system caters to the needs of the new era. According to the research on the classification of various forms of urban rail transit systems, this paper divides the urban rail transit system into seven forms: metro system, light rail system, monorail system, modern tram system, mid–low-speed maglev system, automatic guided track system and municipal railway system.


#### *4.2. Relevant Criteria*

The criteria for urban rail transit system selection are obtained from literature research and a summary of expert consultations. The evaluation criteria proposed in this study are drawn from the perspectives of characteristics, technology, economy and environment, with a total of eight criteria, including five benefit criteria and three cost criteria. The detailed descriptions of the criteria are shown in Table 3.

#### *4.3. Method Implementation*

In this subsection, based on the seven urban rail transit system types and eight evaluation criteria listed above, City N is selected as an example to implement the hybrid group decision framework in order to select the most suitable urban rail transit system type for City N. By the end of 2020, the total resident population of City N was 9.404 million, and the population density was 622.52 people per square kilometer. Throughout the year, the whole society completed 75.1333 million passenger trips, including 24.264 million road passenger trips and 40.516 million railway passenger trips. In terms of public transport, at the end of the year there were 10,035 standard public transport vehicles in the city, operating 1272 lines, an increase of 8.3%. Rail transit completed 158 million passenger trips in the whole year. At the end of the year, there were 42,000 public bicycles in the city, with a total of 22.597 million rentals in the whole year, and 6281 taxis in the city.

The decision group is still composed of the above four experts, who provide the linguistic evaluation decision matrix and the linguistic attribute direct influence matrix, respectively, as shown in Tables 4 and 5.


**Table 4.** The linguistic decision-making matrix.

**Table 5.** The fuzzy direct-influence matrix.



**Table 5.** *Cont.*

Then, the linguistic assessment matrix is transformed into a fuzzy evaluation matrix and a fuzzy direct influence matrix represented by intuitionistic fuzzy numbers by the intuitionistic fuzzy scale (adapted from Refs. [33,40]) listed in Tables 1 and 6, as shown in Tables 7 and 8. Then, the expert weight (Table 9) is calculated from Equations (10)–(13), the subjective weight is calculated from Equations (15)–(20), the objective weight is calculated from Equations (21)–(25) and the final comprehensive weight (Table 10) is calculated from Equation (26), and then the ranking of the urban rail transit system most suitable for City N is calculated according to Equations (27)–(32), as shown in Table 11.

**Table 6.** The transformation relationship of directly affected matrix linguistic variables.


**Table 7.** The fuzzy decision-making matrix.



**Table 7.** *Cont.*

**Table 8.** The intuitionistic fuzzy direct-influence matrix.



**Table 8.** *Cont.*

**Table 9.** The weight of DEs.



**Table 10.** The weight of criteria and ranking.



It can be seen from Table 10 that the rankings of the subjective and objective weights of the criteria are quite different. The weight determination method combining subjective and objective weights makes the evaluation results more objective. The top three final criteria are technology maturity Q3, transportation speed Q2 and construction difficulty Q5. The ranking of criteria may differ between cities. For City N, the first considerations are the three attributes of technology maturity, transportation speed and construction difficulty.

It can be seen from Table 11 that the ranking of the urban rail transit system in City N can be obtained through the comprehensive evaluation value. Here, the comprehensive evaluation value is negative because the regret theory is considered. P1 ranks first; that is, the type of urban rail transit most suitable for City N is metro system. City N is the third largest city in Z Province, with a large population and high requirements for transportation capacity. In addition, the metro system has high technical maturity and fast transportation speed. The natural geographical environment of city N also makes the construction of the metro system relatively difficult. Therefore, the metro system is the most suitable urban rail transit for city N. The municipal railway system (P7) and light rail system (P2) rank second and third, respectively. These two types are two other options that can be considered for construction in city N in addition to the metro system. They also have the characteristics of high technical maturity and fast transportation speed. Other criteria can be comprehensively considered for selection. The final results of the ranking of the urban rail transit system type can prove the applicability and effectiveness of the evaluation index and evaluation framework proposed in this study.

#### *4.4. Sensitivity Analysis*

In this subsection, the stability and robustness of the proposed hybrid intuitionistic fuzzy group decision framework will be explored through sensitivity analysis. The sensitivity analysis of this study is divided into two parts. The first part is the sensitivity analysis of the relative importance coefficient of subjective and objective weights. The second part is the sensitivity analysis of the regret avoidance coefficient of experts.

#### 4.4.1. The Impact Analysis of Parameter *ϕ* on Decision Results

The relative importance coefficient *ϕ* of subjective and objective weights can express the preference of decision-making experts for weights. In the previous example analysis, the value of *ϕ* is 0.5. Next, by changing the value of *ϕ*, different criteria weight values are obtained, and then the adjusted criteria ranking results are observed. In this paper, *ϕ* ∈ [0, 1], first, let *ϕ* = 0, increasing by 0.1; the final ranking results and ranking changes are shown in Table 12 and Figure 2.

**Table 12.** The ranking of urban rail transit types under different *ϕ* values.


**Figure 2.** The ranking change of decision results under different parameter *ϕ* values.

As can be seen from Figure 2, the final ranking remains relatively stable when the proportion of subjective and objective weights is changed, and the top three are always *P*1, *P*7, *P*2. When *ϕ* = 0 and *ϕ* = 1, only the objective weight and only the subjective weight are considered, respectively. When only the subjective weight is considered, the fourth- and fifth-ranked types exchange places and the other rankings do not change. Therefore, comprehensive consideration of the subjective and objective weights makes the decision-making results more stable.

#### 4.4.2. The Impact Analysis of Parameter *ϑ* on Decision Results

The second part considers the influence of the experts' regret avoidance coefficient on the final decision outcome. The larger *ϑ* is, the higher the experts' degree of regret avoidance. The initial value of *ϑ* is 5. In the analysis, *ϑ* takes values from 1 to 10, increasing by 1. The rankings and changes of the decision results are shown in Table 13 and Figure 3.


**Table 13.** The ranking of urban rail transit types under different *ϑ* values.

**Figure 3.** The ranking change in decision results under different parameter *ϑ* values.

As can be seen from Figure 3, changing the value of the regret avoidance coefficient has little impact on the final ranking result, which remains relatively stable, and the top three are still *P*1, *P*7, *P*2. Only when *ϑ* = 1 or 2 do the medium–low-speed maglev system (P5) and the monorail system (P3) rank fourth and fifth, respectively. When *ϑ* is greater than or equal to 3, the rankings of the two types are exchanged: the monorail system (P3) ranks fourth, while the medium–low-speed maglev system (P5) ranks fifth. The sensitivity analyses of these two parts show that the model proposed in this paper has strong stability.

#### *4.5. Comparative Analysis*

Since this study uses IFSs to deal with the uncertainty and inaccuracy in decisions, the weight determination method based on IFSs remains unchanged in the comparative analysis. Three MCDM methods are selected for comparison with the results of this study. The first is the traditional COPRAS method, which does not consider regret theory. The other two methods are TOPSIS and ARAS. The comparison results are shown in Table 14 and Figure 4.


**Table 14.** The ranking under different evaluation methods.

**Figure 4.** The ranking results based on different evaluation methods.

Figure 4 shows the comparison of the ranking results under the four methods. It can be seen that the ranking results under different evaluation methods are different, but the overall trend is the same. The top three are mainly P1, P2 and P7. The best scheme changes between P1 and P7, and the last three are concentrated among P4, P5 and P6. By calculating the Spearman correlation coefficient of the ranking results of the original method and other methods, it can be seen that all the correlation coefficients are greater than 0.78, which shows that the evaluation model proposed in this study is relatively stable. The detailed comparison analyses with other intuitionistic fuzzy decision approaches are illustrated below.

Compared with the results of the traditional IF-COPRAS method, it is found that the results obtained by the two methods are different, and the Spearman correlation coefficient is 0.786. The reason for this difference is that the evaluation model proposed in this study considers the regret theory; that is, the expert risk aversion coefficient and regret aversion coefficient are considered at the same time. The result is the optimization of the traditional IF-COPRAS method.

Compared with the results of the IF-TOPSIS method, the ranking results obtained by the two methods are more consistent, and the rankings in positions 1, 2 and 7 are the same. This is also supported by the Spearman correlation coefficient of 0.821. The TOPSIS method is a classical MCDM method with wide applicability. The consistency between the results of the two methods shows that the method proposed in this study is stable and robust.

Compared with the results of the IF-ARAS method, the results of the ARAS method are the same as those of the traditional COPRAS method. Therefore, the Spearman correlation coefficient is also 0.786. This indicates that regret theory will affect the results.

Based on the above discussion and comparative analysis, the proposed hybrid intuitionistic fuzzy group decision framework for the urban rail transit system selection of this paper has the following advantages:


#### **5. Conclusions**

In view of the shortcomings of the existing research, the main goal of this study is to develop a hybrid MCGDM evaluation model for the selection of an urban rail transit system. In order to overcome the uncertainty and inaccuracy in the process of expert evaluation and make the evaluation information more reliable, this study put forward a hybrid intuitionistic fuzzy group decision framework to select the satisfactory urban rail transit system. The DEMATEL and CRITIC methods were selected to determine the subjective and objective weight of the criteria, and the COPRAS method based on regret theory was used to rank the types of urban rail transit systems and select the optimal urban rail transit system. The sensitivity analysis and comparative analysis prove the stability and robustness of the evaluation model. The results show that, no matter how the coefficient changes, the top three schemes have not changed. Furthermore, the ranking results still have high consistency by a detailed comparison analysis with other prior methodologies. Therefore, the hybrid decision-making framework model proposed in this study has strong practicability. It not only considers the subjective randomness of experts in the decision-making process but also considers the risk preference and regret degree of experts. It is more comprehensive and has more advantages in evaluating the selection of urban rail transit.

The method proposed in this study also has some limitations. For example, when calculating the weights of experts, only the relative distance between expert evaluations is considered, and information such as the experts' own experience is ignored. In the process of expert information fusion, the relationship between decision information under different criteria is not considered. In the future, this work can be extended in the following directions. Firstly, the research model can be applied to other related MCGDM problems [43,44]. Secondly, this research is based on the intuitionistic fuzzy environment; in the future, different fuzzy linguistic environments and MCDM methods can be applied to this research model. Thirdly, when facing decision-making experts from different fields, it is difficult to reach a consensus on the preference information provided by different experts, and small-group decision-making cannot fully ensure the credibility of the final decision results; therefore, establishing a large-scale group consensus decision-making model [45–47] in an intuitionistic fuzzy environment and solving actual group decision-making problems in combination with big data and artificial intelligence technology is a promising research topic.

**Author Contributions:** Conceptualization, B.Y., Y.R. and Y.H.; Formal analysis, B.Y., Y.R. and Y.H.; Funding acquisition, L.Y.; Investigation, Y.R., L.Y. and Y.H.; Methodology, B.Y., Y.R. and Y.H.; Project administration, L.Y.; Visualization, L.Y.; Writing—original draft, B.Y.; Writing—review & editing, B.Y., Y.R. and L.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research was funded by the General Program of National Natural Science Foundation of China (No: 12071280).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are available in the article.

**Conflicts of Interest:** The authors declare that they have no conflict of interest.

#### **Abbreviations**


#### **References**


**Jingyuan Li 1, Weile Liu 1, Fangwei Zhang 1,2,\*, Taiyang Li <sup>1</sup> and Rui Wang <sup>1</sup>**

<sup>1</sup> School of Navigation and Shipping, Shandong Jiaotong University, Weihai 264209, China;


**\*** Correspondence: fangweirzhang@163.com

**Abstract:** The aim of this study is to explore the effect of different personnel attributes and the relationship between different people on evacuation efficiency in the case of a passenger ship fire. As such, this study proposes a speed correction method that considers human attributes and interactions between different populations. Firstly, a hesitant fuzzy set and hesitant fuzzy average operator are adopted to quantify four kinds of personnel attributes. Secondly, considering the influence of different people, this study extracts the formula for acceleration under the interactive influence of different groups of people. At the same time, based on the first-order linear relationship between velocity and acceleration, an interactive velocity correction method is presented in the evacuation of ship personnel. Finally, this study uses the personnel evacuation simulation software Pathfinder to conduct experiments, and introduces the corrected speed and the uncorrected speed into the evacuation simulation process, respectively. The results show that the simulation results of the revised speed plan are more consistent with reality.

**Keywords:** ship fire; emergency evacuation; hesitant fuzzy sets

**MSC:** 41-02

#### **1. Introduction**

In recent years, the density of maritime traffic has been increasing, and the number of ships, the main means of water transportation, grows year by year. During navigation, fire is one of the main threats to safety: in shipwreck incidents over the years, accidents caused by fire account for about 11% [1]. Fires on ships are particularly dangerous because rescue is difficult, fire spreads quickly, and the internal structure of ships is complex, which makes both evacuation and firefighting hard. Once a fire occurs, it causes heavy economic losses and seriously threatens people's lives [2]. The severity of such accidents is closely related to evacuation behavior and evacuation time [3]. Studying the behavioral mechanisms and rules of different groups during evacuation is therefore beneficial for formulating scientific and efficient emergency evacuation measures, and is of great significance for ensuring the personal safety of evacuees in case of fire on ships.

Relevant studies have found that in the event of a fire, passengers are affected not only by personal attributes such as emergency ability, cognitive ability, psychological endurance, and value orientation, but also by the other groups of people around them [4]. Therefore, in the ship scenario, the interactive hesitant fuzzy integration operator is used to integrate the information of different groups, and the velocity and acceleration formulas under the influence of crowd interaction are extracted. The cognitive ability and emergency response ability of the population at the fire scene are then quantitatively analyzed. This study revises and supplements the effects of emergency ability, value orientation, psychological bearing capacity and the group effect on evacuation efficiency. Models of escape acceleration and escape speed are also established, which provide a theoretical basis and decision support for the evacuation of personnel in ship fires.

**Citation:** Li, J.; Liu, W.; Zhang, F.; Li, T.; Wang, R. A Ship Fire Escape Speed Correction Method Considering the Influence of Crowd Interaction. *Mathematics* **2022**, *10*, 2749. https://doi.org/10.3390/math10152749

Academic Editor: Yanhui Guo

Received: 27 June 2022 Accepted: 20 July 2022 Published: 3 August 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### *1.1. Literature Review*

Ensuring the safety of people in a fire is the fundamental goal of evacuating people. To achieve this goal, it is necessary to study the behavioral laws of evacuated people. The study of evacuation behavior is one of the nine key research directions of fire science. At present, the research on pedestrian evacuation mainly focuses on three aspects: evacuation model construction, evacuation decision-making and personnel evacuation efficiency.

Concerning the construction of the evacuation model, Treuille et al. [5] proposed a real-time crowd model taking congested public places in multiple cities as the research object. Helbing et al. [6,7] established a social force model to describe the walking behavior of crowds in evacuation according to Newton's second law. Wang et al. [8] integrated human factors into emergency evacuation and analyzed the influence of various factors on evacuation behavior at different stages by building an evacuation model. Wang et al. [9] constructed an evacuation model that considers Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism (OCEAN) to analyze the impact of passengers' personality traits on evacuation behavior. Hu et al. [10] considered the interaction between the fire environment and evacuees from a systems perspective and established a manual evacuation procedure.

Concerning the evacuation decision part, Feng et al. [11] proposed an evacuation decision-making model consisting of three parts: pedestrian distribution prediction model, pedestrian flow calculation model and path situation and feedback correction model. Lovreglio et al. [12] introduced an evacuation decision model predicting pre-evacuation behavior, and the model simulates the probability of evacuees' behavioral state. Peng et al. [13] established a two-level decision-making model for emergency evacuation paths of high-rise buildings based on BIM, which realized the optimal planning of emergency evacuation paths. Sun et al. [14] used game-based theory in a small-world network context and built an evolutionary game model of evacuation decision diffusion between evacuees in the context of a complex network. Tian et al. [15] have designed a mobile-based system to collect medical and temporal data produced during an emergency response to mass casualty incidents.

Concerning evacuation efficiency, Koo et al. [16] studied the psychological panic effect coefficient of evacuation speed by combining theoretical derivation with three-dimensional simulation technology. Chen et al. [17] adopted an improved social force model to study the influences of the total number of pedestrians, desired speed, and specific location of obstacles on the evacuation efficiency of multi-exit configurations. Jeon [18] studied the impact of escape routes and emergency exits on evacuation speed under different environmental conditions. Yu et al. [19] conducted experimental and numerical simulation studies on the evacuation time and average evacuation speed of personnel in railway tunnels under train fire conditions.

The above research provides a theoretical basis for this study. At present, however, there are few studies on fire evacuation in ship scenarios, and research on personnel evacuation behavior is not refined enough. Therefore, this study takes the ship as the research scene and adopts a questionnaire survey to collect data; the factors affecting the evacuation efficiency of different groups are then quantified. The relationship between the behavioral and psychological characteristics of people in a fire and the evacuation speed is difficult to quantify directly as an exact mathematical relationship; therefore, this paper uses fuzzy logic to quantify their influence and selects the classical hesitant fuzzy weighted average operator for information integration.

#### *1.2. Objective Contribution*

The purpose of this study is to explore the influence of the interaction between different attribute groups on the evacuation efficiency in the ship fire scenario. The main research contributions are as follows.

Firstly, based on fuzzy mathematics theory, this study uses the classical hesitant fuzzy average operator to integrate the four attributes that affect crew escape on board. Quantifying these four objective influencing factors makes the evacuation research more realistic.

Secondly, a speed correction model considering the interaction of different populations is developed. The reduction in evacuation speed caused by different crowd interaction is quantified. It provides a reference for realizing evacuation research under the influence of multiple factors.

Finally, this study collects data through questionnaires and calculates the model. The simulation software is used to compare the modified speed plan with the unmodified speed plan, and more realistic simulation results are obtained.

The contents of this study are arranged as follows. In Section 2, the research ideas are described and the factors affecting the escape of personnel on board are analyzed. In Section 3, a fuzzy set containing four kinds of people is established by using fuzzy mathematics theory. Then, a speed correction method considering personnel attributes and the interaction between different groups is developed. Section 4 verifies the validity of the revised velocity model through simulation.

#### **2. Research Foundation**

#### *2.1. Research Idea*

The ship fire evacuation efficiency is affected by many factors. This study mainly considers the influence of the interaction between different groups of people on the evacuation efficiency. Then, a more realistic fire evacuation velocity model is extracted. The specific research ideas are as follows.

Based on the above research goals, this study carries out the following work. Firstly, it establishes the hesitant fuzzy sets of four kinds of people, and quantitatively analyzes the influence of the attributes of emergency response ability, cognitive ability, psychological bearing ability and value orientation. Secondly, it introduces the interaction between the target population and other groups, combined with the classical universal gravitation formula, and extracts the acceleration formula under the joint influence of different attributes and different groups. Finally, it collects data through questionnaires, and uses simulation software to compare the revised speed plan with the uncorrected speed plan to verify the validity of the model (see Figure 1 for the research process).

**Figure 1.** Research process.

#### *2.2. Analysis of Key Factors for the Escape of Personnel on Board*

In the process of fire evacuation, emergency ability, cognitive ability, psychological endurance and value orientation are the key factors that affect the survival of people [20]. The sudden stimuli of fire make the crowd react instantaneously, and the instantaneous response is closely related to the above four abilities of different people. The specific explanations of emergency ability, cognitive ability [21], psychological bearing ability and value orientation are as follows.

(i) Emergency ability: When people encounter an emergency, the brain immediately deals with it based on past experience and the ability to think for itself. Self-thinking is a subconscious response. Because children and the elderly have far less physical function than adults, once a fire breaks out, children and the elderly will become vulnerable groups. Their evacuation speed is also significantly lower than that of adults.

(ii) Cognitive ability: People with higher education levels have weaker fear, faster reactions and stronger ability to escape. On the other hand, people with lower education levels have slower reactions and weaker escape ability in the face of fire.

(iii) Psychological endurance: When a fire occurs on a ship, people will have a fear of fire due to a lack of understanding of fire. In this state, people are prone to irrational behavior. Adults have a strong psychological bearing capacity, while the elderly and children have a weak psychological bearing capacity. There is a certain gap in the psychological response of different groups of people in terms of psychological bearing capacity.

(iv) Value orientation: When a ship fire occurs, the value orientation of the elderly is conservative, which greatly affects the escape ability of the elderly. Therefore, value orientation is also one of the key factors affecting the escape speed of the crew on board.

#### *2.3. Research Tools*

When a fire occurs on a ship, panic and chaotic behavior are bound to occur in the crowd. In a ship with concentrated personnel, the personnel's emergency ability, cognitive ability, psychological bearing ability and value orientation vary greatly. In addition, under fire conditions, the four abilities of different groups are complex and abstract, and the relationship with evacuation speed cannot be directly quantified. Therefore, this study proposes a fire escape velocity correction model based on hesitant fuzzy sets.

Due to the complexity and uncertainty of objective information and the ambiguity of human thinking, Zadeh introduced the concept of fuzzy sets [22]. Hesitant fuzzy sets extend fuzzy sets to handle hesitant situations that previous tools did not handle well [23]. Operator theory is an important part of fuzzy theory. Based on the arithmetic ensemble method, this study uses the classical hesitant fuzzy weighted average (HFWA) operator and the classical hesitant fuzzy average (HFA) operator. The cognitive ability, emergency response ability, value orientation, psychological bearing ability and group effect of people at the fire scene are then integrated, and the objective information of different groups is quantified [24]. The relevant definitions of the hesitant fuzzy weighted average operator and the acceleration and velocity formulas used in this study are as follows.

**Definition 1.** *Let X be a given finite set; then E* = {⟨*x*, *hE*(*x*)⟩ | *x* ∈ *X*} *is called a hesitant fuzzy set, where hE*(*x*) *represents the set of possible membership degrees of x belonging to X and is a subset of the interval* [0, 1]*. Let h*1 *and h*2 *be two hesitant fuzzy elements; then their basic operations are as follows*

$$\begin{array}{l} h\_1 \cap h\_2 = \cup\_{\gamma\_1 \in h\_1, \gamma\_2 \in h\_2} \{ \min(\gamma\_1, \gamma\_2) \}; \\ h\_1 \cup h\_2 = \cup\_{\gamma\_1 \in h\_1, \gamma\_2 \in h\_2} \{ \max(\gamma\_1, \gamma\_2) \}; \\ h\_1 \oplus h\_2 = \cup\_{\gamma\_1 \in h\_1, \gamma\_2 \in h\_2} \{ \gamma\_1 + \gamma\_2 - \gamma\_1 \gamma\_2 \}; \\ h\_1 \otimes h\_2 = \cup\_{\gamma\_1 \in h\_1, \gamma\_2 \in h\_2} \{ \gamma\_1 \gamma\_2 \}. \end{array}$$

In addition, let *hj* (*j* = 1, 2, ··· , *n*) be a set of hesitant fuzzy elements. The hesitant fuzzy weighted average operator is a mapping *H<sup>n</sup>* → *H*, defined as follows [25]

$$\text{HFWA}(h\_1, h\_2, \dots, h\_n) = \bigoplus\_{j=1}^n (w\_j h\_j) = \cup\_{\gamma\_1 \in h\_1, \dots, \gamma\_n \in h\_n} \left\{ 1 - \prod\_{j=1}^n (1 - \gamma\_j)^{w\_j} \right\};\tag{1}$$

where $w = (w\_1, w\_2, \cdots, w\_n)^T$ is the weight vector of $h\_j$, with $w\_j > 0$ and $\sum\_{j=1}^n w\_j = 1$.
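
As an illustration, the HFWA aggregation in Equation (1) can be sketched in a few lines of Python (a minimal sketch; the function name `hfwa` and the example membership values are our own, not taken from the paper):

```python
from itertools import product

def hfwa(elements, weights):
    """Hesitant fuzzy weighted average, Eq. (1):
    over every combination (g1, ..., gn) with gj in hj,
    collect 1 - prod_j (1 - gj)**wj."""
    result = set()
    for combo in product(*elements):
        prod = 1.0
        for g, w in zip(combo, weights):
            prod *= (1.0 - g) ** w
        result.add(round(1.0 - prod, 6))
    return sorted(result)

# Two hesitant fuzzy elements aggregated with equal weights.
print(hfwa([[0.2, 0.4], [0.6]], [0.5, 0.5]))
```

Note that the result is itself a hesitant fuzzy element: one aggregated membership degree per combination of memberships drawn from the inputs.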

**Definition 2.** *To calculate the escape speed of people on board, this study introduces a related acceleration formula. For convenience, the probability that the target group perceives another group is denoted as θ, the magnitude of the influence of the other group on the target group is denoted as μ, and the direction of that influence is denoted as* sgn(*vj* − *vi*)*: when vj < vi its value is* −1*, and when vj > vi its value is* +1*. tact is the instantaneous reaction time of personnel escape. The acceleration equation is obtained as*

$$a = \frac{\Delta v}{\Delta t} = \frac{\theta \cdot \mu \cdot \operatorname{sgn}(v\_j - v\_i)}{t\_{act}}.\tag{2}$$

**Definition 3.** *Considering the influence of the hesitant fuzzy average operator and based on the first-order linear relationship between velocity and acceleration, this study gives the velocity correction formula of different people under different influences when a fire occurs. The influence of all attributes on a single population is denoted as Qi, and vi is the expected speed of group i. The corrected speed v*′ *can be obtained as*

$$v' = (1 - Q\_i) \cdot v\_i + a \cdot t\_{act} \,. \tag{3}$$
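
Equations (2) and (3) can be combined into a small numerical sketch (illustrative only; the parameter values θ = 0.6, μ = 0.3, the speeds, and Q = 0.25 are hypothetical, chosen just to exercise the formulas):

```python
def sgn(x):
    """Influence direction sgn(v_j - v_i): +1, -1, or 0."""
    return (x > 0) - (x < 0)

def acceleration(theta, mu, v_j, v_i, t_act):
    """Eq. (2): a = theta * mu * sgn(v_j - v_i) / t_act."""
    return theta * mu * sgn(v_j - v_i) / t_act

def corrected_speed(q_i, v_i, a, t_act):
    """Eq. (3): v' = (1 - Q_i) * v_i + a * t_act."""
    return (1 - q_i) * v_i + a * t_act

# A slower group (1.3 m/s) pulled along by a faster one (1.5 m/s).
a = acceleration(0.6, 0.3, 1.5, 1.3, 2.0)
v_prime = corrected_speed(0.25, 1.3, a, 2.0)
print(v_prime)  # (1 - 0.25)*1.3 + 0.09*2.0
```

A faster neighbouring group yields a positive sign and hence a positive correction; a slower one would reduce the speed, consistent with the sign convention in Definition 2.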

#### **3. Model Formulation**

Based on the above analysis of human behavior characteristics in a passenger ship fire, this study constructs its model as follows. Firstly, fuzzy mathematical theory is applied to acquire fuzzy sets including different groups of people. Secondly, an ensemble operator with different attributes is obtained by using the classical hesitant fuzzy average operator. Thirdly, the escape speed of each crowd is obtained by combining the universal gravitation formula and the relationship between velocity and acceleration. Finally, considering various special cases, the properties and inferences are acquired. The model building process is shown in Figure 2.

#### *3.1. Consider Different Attributes and the Speed Correction Model of the Crowd*

The steps of model construction are as follows. Step 1 is to construct the hesitant fuzzy sets including an adult male, adult female, children and the elderly, and comprehensively consider the factors affecting the escape speed of people on the ship. The classical hesitancy fuzzy integration operator is used to consider and quantify various factors in Step 2. Step 3 is inspired by the classical universal gravitation formula, and extracts the formula of escape acceleration of people on board under the interaction of two factors, i.e., different attributes and different people [26]. Step 4, combined with the relationship between acceleration and velocity, further extracts the escape speed of people on board under this interactive influence [27].

**Step 1.** This study establishes hesitant fuzzy sets of four groups of people. For convenience, {*Hi*1, *Hi*2, *Hi*3, *Hi*4} represents the hesitant fuzzy sets under the four attributes of group *i*, *Nijk* denotes the hesitant fuzzy elements under the four attributes of the four groups of people, and *ρijk* is the percentage of crowd *i*, attribute *j*, and ability judgment option *k*. Here, *i* ∈ {1, 2, 3, 4} indexes the four groups of people, *j* ∈ {1, 2, 3, 4} indexes the four attributes, and *k* ∈ {1, 2, 3, 4} indexes the evaluation options of each attribute. The hesitant fuzzy sets under the four attributes of the four groups of people are expressed as follows:

**Figure 2.** Model building process.

**Step 2.** Based on fuzzy mathematics theory, this study transforms the qualitative problem into a quantitative one. According to the above hesitant fuzzy sets, the objective factors affecting crowd evacuation efficiency are integrated. The influence of a single attribute on a single population is denoted as *Qij*, and the influence of all attributes on a single population is denoted as *Qi*, obtained by integrating *Qij*. Based on experience, the literature, and current research, this study assumes equal weights. According to the classical hesitant fuzzy average operator [24], *Qij* and *Qi* are obtained as

$$Q\_{ij} = 1 - \prod\_{k=1}^{4} \left(1 - \rho\_{ijk}\right)^{\frac{1}{4}},\tag{4}$$

$$Q\_i = \frac{1}{4}\sum\_{j=1}^{4} Q\_{ij},\tag{5}$$

respectively, where *j* = 1, 2, 3, 4; *i* = 1, 2, 3, 4.
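
Equations (4) and (5) amount to a hesitant fuzzy average over the four answer options of each attribute, followed by an arithmetic mean over attributes. A minimal Python sketch (the percentage matrix `rho` below is a hypothetical example, not questionnaire data from the paper):

```python
def q_attribute(rho_row):
    """Eq. (4): Q_ij = 1 - prod_k (1 - rho_ijk)^(1/4), equal weights 1/4."""
    prod = 1.0
    for rho in rho_row:
        prod *= (1.0 - rho) ** 0.25
    return 1.0 - prod

def q_population(rho_matrix):
    """Eq. (5): Q_i = mean of the four attribute-level influences Q_ij."""
    return sum(q_attribute(row) for row in rho_matrix) / 4.0

# Hypothetical option shares for one group (4 attributes x 4 options each).
rho = [[0.1, 0.3, 0.4, 0.2]] * 4
print(q_population(rho))
```

The output lies in (0, 1), as required for an influence factor that discounts the expected speed in Equation (3).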

**Step 3.** When there is only a single target population, its escape speed is affected only by the four attributes and its expected speed. In the case of multiple groups, however, the escape speed of the target group is affected not only by cognitive ability, emergency response ability, value orientation and psychological endurance, but also by the other groups. To simplify the analysis, this study divides the influences from other groups into three parts: the probability that the target group perceives the other group, denoted *θ*; the magnitude of the influence of the other group on the target group, denoted *μ*; and the direction of that influence, denoted sgn(*vj* − *vi*). Since this study only considers the influence between the four groups of people, other influencing factors are aggregated into a constant *λ*<sup>∗</sup>, which defaults to 0.375 [28]. *Mi* denotes the number of people in group *i*, *vi* the expected escape velocity of group *i*, and *v*′*i* the final velocity of group *i* after the reaction time. *λij* denotes the mutual influence between two groups, *tiact* the instantaneous reaction time for escape of group *i*, *ω*<sup>1</sup> the weight of the probability that the target group perceives other groups, and *ω*<sup>2</sup> the weight of the influence of other groups on the target population. Based on experience, the literature, and current research [29], the instantaneous reaction time of adult men and women is taken as *t*1*act* = *t*2*act* = 2 s, and that of the elderly and children as *t*3*act* = *t*4*act* = 3 s. The escape acceleration *ai* of a single crowd is then obtained as

$$a\_i = \sum\_{j=1}^{4} \lambda\_{ij} = \sum\_{j=1}^{4} \frac{\frac{4}{\pi} \cdot \arctan\left(\left(\frac{M\_j}{M\_i + M\_j}\right)^{\omega\_1} \cdot \left(\frac{|v\_i - v\_j|}{\max\{v\_i, v\_j\}}\right)^{\omega\_2}\right) \cdot \operatorname{sgn}\left(v\_j - v\_i\right) \cdot \left|v\_j - v\_i\right|}{t\_{iact}}.\tag{6}$$

Among them, sgn(*vj* − *vi*) represents the influence direction of other groups on the target group, where *i*, *j* ∈ {1, 2, 3, 4} and 1, 2, 3, and 4 represent adult males, adult females, the elderly, and children, respectively. When *vj* > *vi*, sgn(*vj* − *vi*) = +1, indicating that the other group has a positive influence on the target group. When *vj* < *vi*, sgn(*vj* − *vi*) = −1, indicating a negative influence. When *vj* = *vi*, sgn(*vj* − *vi*) = 0, and the other group has no influence on the speed of the target group.

**Step 4.** Integrating the influence of the four attributes and of the other populations on the target population and substituting into Equation (3), *v*′*i* is obtained as

$$v'\_i = (1 - Q\_i) \cdot v\_i + \sum\_{j=1}^{4} \lambda\_{ij} \cdot \lambda^\* \cdot t\_{iact},\tag{7}$$

where *i* = 1, 2, 3, 4.

Then, substituting Equation (6) into Equation (7), it is obtained that

$$v'\_i = (1 - Q\_i) \cdot v\_i + \sum\_{j=1}^{4} \frac{\frac{4}{\pi} \cdot \arctan\left(\left(\frac{M\_j}{M\_i + M\_j}\right)^{\omega\_1} \cdot \left(\frac{|v\_i - v\_j|}{\max\{v\_i, v\_j\}}\right)^{\omega\_2}\right) \cdot \operatorname{sgn}\left(v\_j - v\_i\right) \cdot \left|v\_j - v\_i\right|}{t\_{iact}} \cdot \lambda^\* \cdot t\_{iact},\tag{8}$$

where, *i* = 1, 2, 3, 4.
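
To make Equation (8) concrete, the following Python sketch evaluates the corrected speed for one target group. The exponents *ω*1 and *ω*2 are not given numerically in this excerpt, so `w1 = w2 = 0.5` below is purely an assumption for illustration; the group sizes and expected speeds follow Section 4.

```python
import math

def interaction_term(M_i, M_j, v_i, v_j, w1, w2, t_act):
    """One lambda_ij term of Eq. (6); zero when the speeds coincide."""
    if v_i == v_j:
        return 0.0
    strength = (4.0 / math.pi) * math.atan(
        (M_j / (M_i + M_j)) ** w1
        * (abs(v_i - v_j) / max(v_i, v_j)) ** w2
    )
    direction = 1.0 if v_j > v_i else -1.0
    return strength * direction * abs(v_j - v_i) / t_act

def corrected_speed(i, Q, v, M, w1, w2, lam_star, t_act):
    """Eq. (8): v'_i = (1 - Q_i) v_i + sum_j lambda_ij * lam* * t_act."""
    a_i = sum(interaction_term(M[i], M[j], v[i], v[j], w1, w2, t_act)
              for j in range(len(v)) if j != i)
    return (1 - Q[i]) * v[i] + a_i * lam_star * t_act

v = [1.5, 1.3, 1.1, 0.9]          # expected speeds per group (m/s)
M = [40, 40, 15, 10]              # group sizes (Section 4)
Q = [0.246, 0.266, 0.275, 0.277]  # attribute influences (Step 1)
print(corrected_speed(0, Q, v, M, 0.5, 0.5, 0.375, 2.0))
```

Since group 1 is the fastest, every sgn term is negative and the correction slows it below (1 − *Q*1)·*v*1, matching the intuition that faster groups are held back by slower ones.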

#### *3.2. Supplement and Description*

The corollary and properties of Equation (8) are as follows.

**Corollary 1.** *When Mi is much larger than Mj, Mi is regarded as the maximum value and Mj as the minimum value; thus Mj*/(*Mi* + *Mj*) ≈ 0*, which is substituted into Equation (8) to obtain v*′*i* = (1 − *Qi*) · *vi*.

**Theorem 1.** *When the number of the target population is much larger than that of other groups, the speed of the group is not affected by other groups but is only related to its expected speed and its cognitive ability, emergency response ability, value orientation and psychological bearing capacity.*

**Corollary 2.** *When Mj is much larger than Mi, Mj is regarded as the maximum value and Mi as the minimum value; thus Mj*/(*Mi* + *Mj*) ≈ 1*, which is substituted into Equation (8) to obtain the simplified Equation (9).*

$$v'\_i = (1 - Q\_i) \cdot v\_i + \sum\_{j=1}^{4} \frac{\frac{4}{\pi} \cdot \arctan\left(\left(\frac{|v\_i - v\_j|}{\max\{v\_i, v\_j\}}\right)^{\omega\_2}\right) \cdot \operatorname{sgn}\left(v\_j - v\_i\right) \cdot \left|v\_j - v\_i\right|}{t\_{iact}} \cdot \lambda^\* \cdot t\_{iact},\tag{9}$$

*where, i* = 1, 2, 3, 4.

**Theorem 2.** *When the number of other groups is much larger than the number of target groups, the impact of the number of groups can be ignored.*

**Corollary 3.** *When vi and vj differ greatly, there are the following two situations.*

*(i) When vj is regarded as the maximum value and vi as the minimum value, so that* |*vi* − *vj*|/max{*vi*, *vj*} ≈ 1*, substitution into Equation (8) yields the simplified Equation (10).*

$$v'\_i = (1 - Q\_i) \cdot v\_i + \sum\_{j=1}^{4} \frac{\frac{4}{\pi} \cdot \arctan\left(\left(\frac{M\_j}{M\_i + M\_j}\right)^{\omega\_1}\right) \cdot \operatorname{sgn}\left(v\_j - v\_i\right) \cdot v\_i}{t\_{iact}} \cdot \lambda^\* \cdot t\_{iact},\tag{10}$$

*where, i* = 1, 2, 3, 4.

*(ii) Similarly, when vi is regarded as the maximum value and vj as the minimum value, so that* |*vi* − *vj*|/max{*vi*, *vj*} ≈ 1*, substitution into Equation (8) again yields the simplified Equation (10).*

**Theorem 3.** *When the speed of the target crowd differs considerably from that of another crowd, the evacuation speed of the target crowd is not affected by the speed of others.*

**Corollary 4.** *When vi* = *vj, then* |*vi* − *vj*|/max{*vi*, *vj*} = 0*; substituting this into Equation (8) yields v*′*i* = (1 − *Qi*) · *vi.*

**Theorem 4.** *When the speed of the target group is the same as that of other groups, the speed of the group is not affected by other groups, but is only related to its own expected speed and the impact of cognitive ability, emergency response ability, psychological bearing capacity and value orientation.*

#### **4. Simulation Example**

In order to verify the effectiveness of the interactive speed correction method, a comparative simulation experiment is carried out in this study. Firstly, a questionnaire survey was conducted. Based on the results of the questionnaire survey, the expected speed of four different groups of people is revised by using the interactive speed modification method. Secondly, the single deck of a ro-ro passenger ship is selected as a simulation example. The deck is then modeled by Pathfinder evacuation software. Finally, this study sets up two evacuation plans, namely, ordinary evacuation and evacuation under the speed correction of interactive influence. Through the comparison of simulation results, it is concluded that the speed correction method of interactive influence proposed in this study is in line with reality.

#### *4.1. Personnel Evacuation Speed Correction*

The age, gender, cultural background, and other characteristics of the people on board affect not only their judgment of the degree of fire risk, but also their evacuation speed. Therefore, this study combines previous research results to design a questionnaire on ship fire evacuation behavior [30]. The questionnaire structure for personnel evacuation behavior in a ship fire situation is shown in Table 1. The questionnaire items mainly investigate the cognitive ability, emergency response ability, value orientation, and psychological endurance of different groups of people. Questionnaires were randomly distributed on an online survey platform, and the respondents were divided into different age groups. A total of 129 questionnaires were distributed, of which 105 were effectively recovered. Finally, the reliability of the questionnaire was tested: the Cronbach reliability coefficient α was 0.67, indicating that the questionnaire data are acceptably reliable and meet the requirements of usability. Among the respondents, the proportion of adult men, adult women, the elderly, and children surveyed is 8:8:3:2. The information summary is shown in Table 2.


**Table 1.** Questionnaire structure of personnel evacuation behavior in a ship fire situation.

**Table 2.** Summary of questionnaire information.



**Table 2.** *Cont*.

Based on the questionnaire data, this study brings the above data into the speed correction model and obtains the expected correction speed of four groups. The specific model calculation steps are as follows.

**Step 1.** Based on the questionnaire data in Table 2, *ρijk* is substituted into the fuzzy sets. The impact of a single attribute on a single population is recorded as *Qij*; integrating *Qij* then gives the impact of all attributes on a single population, recorded as *Qi*. Here, *M*1 = 40, *M*2 = 40, *M*3 = 15, *M*4 = 10. The effects of all attributes on adult men, adult women, the elderly, and children are

$$Q\_1 = 0.246, \ Q\_2 = 0.266, \ Q\_3 = 0.275, \ Q\_4 = 0.277$$

**Step 2.** In this study, the expected speeds of adult men, adult women, children, and the elderly are set to 1.5 m/s, 1.3 m/s, 1.1 m/s, and 0.9 m/s, respectively. Combined with the questionnaire data in Table 2 and substituted into Formula (6), the escape accelerations *ai* of the four groups are

$$a\_1 = 0.205 \text{ m/s}^2, \ a\_2 = 0.056 \text{ m/s}^2, \ a\_3 = -0.076 \text{ m/s}^2, \ a\_4 = -0.236 \text{ m/s}^2.$$

**Step 3.** Substituting *vi*, *Qi*, *ai* and *tiact* into Formula (7), the corrected speeds of adult males, adult females, the elderly, and children are obtained as

$$v'\_1 = 1.28 \text{ m/s}, \ v'\_2 = 1.00 \text{ m/s}, \ v'\_3 = 0.71 \text{ m/s}, \ v'\_4 = 0.38 \text{ m/s}.$$

The above results are used as examples to simulate the modified expected speed of adult men, adult women, children, and the elderly.
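
As a cross-check, the substitution in Step 3 can be reproduced directly from Equation (7) with the values reported above, taking λ* = 0.375 and the reaction times *t*1*act* = *t*2*act* = 2 s, *t*3*act* = *t*4*act* = 3 s from Section 3 (a sketch; the group speed ordering 1.5, 1.3, 1.1, 0.9 m/s is taken from Step 2, under which the reported corrected speeds are matched to within 0.01 m/s):

```python
# Step 3 of the example: v'_i = (1 - Q_i) * v_i + a_i * lam_star * t_iact.
lam_star = 0.375                     # default constant from Section 3.1
v = [1.5, 1.3, 1.1, 0.9]             # expected speeds (Step 2 ordering)
Q = [0.246, 0.266, 0.275, 0.277]     # attribute influences (Step 1)
a = [0.205, 0.056, -0.076, -0.236]   # escape accelerations (Step 2)
t = [2, 2, 3, 3]                     # instantaneous reaction times (s)

v_corr = [(1 - Q[i]) * v[i] + a[i] * lam_star * t[i] for i in range(4)]
print(v_corr)
```

The computed values are approximately 1.285, 0.996, 0.712, and 0.385 m/s, agreeing with the reported 1.28, 1.00, 0.71, and 0.38 m/s to within 0.01 m/s.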

#### *4.2. Simulation Model Construction*

This study takes a ro-ro passenger ship as the simulation object. The ship has a length of 196.27 m, a width of 28.60 m, a seating capacity of 1588 people, a passenger quota of 1500 people, and 10 decks. The seventh and eighth decks of the ship belong to the passenger activity area, and both decks have independent evacuation assembly areas, a vertical single-channel marine evacuation system, and fully enclosed lifeboats. In this study, the eighth deck of the ship is selected as the simulation model for the evacuation of ship fire personnel.

In this study, Pathfinder software is used to model the above simulation example. The deck model is 196 m long, 28.6 m wide and 3 m high. A total of 620 people need to be evacuated from the deck, with adult men, adult women, children, and the elderly in the proportion 8:8:3:2, randomly distributed on the deck. The evacuation routes and exits are set according to the internal structure of the eighth deck. Since the vertical single-channel marine evacuation system and the fully enclosed lifeboats are difficult to represent in Pathfinder, this paper treats leaving the deck as a successful evacuation. The 3D model of personnel evacuation on the ship's deck is shown in Figure 3. The green columns represent adult men, the blue columns adult women, the yellow columns children, and the black columns the elderly. The green line indicates the evacuation exit; crossing the green line indicates a successful evacuation.

**Figure 3.** Ship Deck Evacuation 3D model.

#### *4.3. Comparative Analysis of Evacuation Results*

In order to verify the effectiveness of the correction method, this study sets up two evacuation plans, namely ordinary evacuation and speed correction method of interactive influence. At the same time, the velocity correction methods of ordinary evacuation and interactive influence are compared and analyzed experimentally. In addition, the specific details of the plan are as follows.

**Plan 1:** ordinary evacuation. The range of passenger evacuation speed is set to be 0.51~1.50 m/s, and remains unchanged. The evacuation path is uniform evacuation at the exits on both sides of the deck.

**Plan 2:** speed correction method of interactive influence. Based on the calculation results in Section 4.1, the expected evacuation speeds of adult men, adult women, the elderly, and children are set to 1.28 m/s, 1.00 m/s, 0.71 m/s, and 0.38 m/s, respectively. The evacuation speed obeys a normal distribution. The expected speed of each group is compared with the corrected speed (see Figure 4 for details). The experimental results of the two evacuation plans are compared in Figure 5.

**Figure 4.** Speed comparison chart.

**Figure 5.** Experimental results of two plans.

According to Figure 5 and Table 3, the total evacuation time under ordinary evacuation is 265.28 s, with high overall evacuation efficiency. Under the interactive-influence speed correction method, the evacuation time is 349.03 s and the overall efficiency is lower: the evacuation efficiency (the slope of the curve) is significantly below that of ordinary evacuation, decreases markedly over time, and approaches 0 after 325 s. This is because the method accounts for the negative psychology of the occupants and the influence of fire smoke: as time goes on, smoke concentration and temperature gradually increase, harming the human body and reducing evacuation efficiency, until at 325 s they are high enough to threaten people's lives. However, because the model includes the mutual assistance behavior of the group, nearby people actively help those who are slow or unconscious, so that they can keep the same pace and continue to move.


**Table 3.** Data comparison between two plans.

To sum up, ordinary evacuation oversimplifies the evacuation behavior of people: it assumes that people evacuate evenly along the prescribed paths and ignores the impact of fire smoke and group behavior on people, which does not match reality and introduces large errors. The speed correction method of interactive influence studies evacuation from the human point of view: by analyzing the effects of complex psychology and the group effect on evacuation behavior, it modifies and supplements the evacuation path and evacuation speed, which is more realistic.

#### **5. Conclusions**

This study adopts the HFA to research evacuation from the perspective of the people on board and analyzes the influence of cognitive ability, emergency response ability, value orientation, psychological tolerance, and the group effect on the evacuation behavior of people at the fire scene. Then, the escape acceleration and escape speed are corrected and supplemented to make them more realistic. The innovation of this study is mainly reflected in the following three points.

First, this study considers the interaction among fire escape personnel. The main influencing factors among the escaping groups are summarized as emergency response ability, cognitive ability, psychological tolerance, and value orientation. The interactive effects of these four attributes across the four groups of people are introduced into the evacuation model, which makes the evacuation research more realistic.

Secondly, this study uses the hesitant fuzzy integration operator to integrate the four attributes of the four groups of people and thereby quantifies the interaction between the groups. Then, the acceleration formula and the velocity correction modulus formula, which account for the interaction effects of different groups of people, are derived, and the influence from other types of people is introduced into the evacuation research.

Finally, this study collects data through questionnaires and calculates the revised speeds for the different populations. Then, simulation software is used to compare the revised speed plan with the uncorrected speed plan; the comparison shows that the revised speed plan is more realistic, providing a reference for subsequent evacuation research.

The shortcoming of this study is that it fails to take into account the emotional contagion of panic among pedestrians, the influence of fire smoke toxicity, and the fire temperature, among other factors. Further studies will combine these influencing factors with simulation examples.

**Author Contributions:** Conceptualization, F.Z.; Data curation, W.L.; Formal analysis, T.L.; Investigation, R.W.; Writing—original draft, J.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** Fangwei Zhang's work is partially supported by the Shanghai Pujiang Program (No. 2019PJC062), the Natural Science Foundation of Shandong Province (No. ZR2021MG003), and the Research Project on Undergraduate Teaching Reform of Higher Education in Shandong Province (No. Z2021046).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.



## *Article* **Medical Diagnosis and Pattern Recognition Based on Generalized Dice Similarity Measures for Managing Intuitionistic Hesitant Fuzzy Information**

**Majed Albaity 1,\* and Tahir Mahmood 2,\***


**Abstract:** Pattern recognition is the computerized identification of shapes, designs, and regularities in information. It has applications in information compression, machine learning, statistical information analysis, signal processing, image analysis, information retrieval, bioinformatics, and computer graphics. Similarly, a medical diagnosis is a procedure to illustrate or identify diseases or disorders that would account for a person's symptoms and signs. Moreover, to illustrate the relationship between any two pieces of intuitionistic hesitant fuzzy (IHF) information, the theory of generalized dice similarity (GDS) measures plays an important and valuable role in the field of genuine life dilemmas. The main advantage of GDS measures is that we can easily obtain a lot of measures by using different values of the parameter, which is the main part of every GDS measure. The major contribution of this study is to utilize the well-known and valuable theory of dice similarity measures (DSMs) (four different types of DSMs) under the assumption of the IHF set (IHFS), because the IHFS covers the membership grade (MG) and non-membership grade (NMG) in the form of finite subsets of [0, 1], with the rule that the sum of the suprema of the duplet is limited to [0, 1]. Furthermore, we pioneered the main theory of generalized DSMs (GDSMs) computed based on IHFSs, called the IHF dice similarity measure, IHF weighted dice similarity measure, IHF GDS measure, and IHF weighted GDS measure, and computed their special cases with the help of parameters. Additionally, to evaluate the proficiency and capability of the pioneered measures, we analyzed two different types of applications based on the constructed measures, called medical diagnosis and pattern recognition problems, to determine the supremacy and consistency of the presented approaches. Finally, based on practical application, we enhanced the worth of the evaluated measures with the help of a comparative analysis of the proposed and existing measures.

**Keywords:** intuitionistic hesitant fuzzy sets; generalized dice similarity measures; medical diagnosis; pattern recognition; artificial intelligence

**MSC:** 03B52; 68T27; 68T37; 94D05; 03E72

#### **1. Introduction**

**Citation:** Albaity, M.; Mahmood, T. Medical Diagnosis and Pattern Recognition Based on Generalized Dice Similarity Measures for Managing Intuitionistic Hesitant Fuzzy Information. *Mathematics* **2022**, *10*, 2815. https://doi.org/10.3390/math10152815

Academic Editor: Pasi Luukka

Received: 21 June 2022 Accepted: 1 August 2022 Published: 8 August 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The decision-making procedure covers four main stages: intelligence, design, choice, and implementation. The decision-making technique begins with the intelligence stage, in which the intellectual determines reality and identifies and explains the problems. However, before 1965, no one had studied decision-making problems in the environment of fuzzy set (FS) theory. For this, the well-known idea of the FS was initiated by Zadeh [1] by modifying the technique of the crisp set into the FS, which covers the MG belonging to [0, 1]. The FS has received considerable attention from various scholars, and certain applications have been carried out. For example, Aydin [2] proposed a fuzzy multicriteria decision-making technique by using Fermatean fuzzy sets, John [3] discussed certain applications of type-2 FSs, Mendel and John [4] explored type-2 fuzzy sets made simple, and Mahmood [5] initiated the idea of a bipolar soft set, discussed its operational laws, and applied it to decision-making problems.

However, if an expert faces information in the shape of {0.8, 0.9, 0.7}, then the principle of the FS is no longer applicable. For this, the well-known idea of the hesitant FS (HFS) was initiated by Torra [6] by modifying the technique of the FS into the HFS, which covers MGs whose supremum belongs to [0, 1]. The HFS is a modified version of the FS and has received attention from various scholars; certain applications have been performed. For example, Meng and Chen [7] developed correlation measures for HFSs, Li et al. [8] investigated distance and similarity measures for HFSs, Su et al. [9] proposed certain measures based on dual HFSs, and Wei et al. [10] investigated entropy and certain types of measures based on HFSs.

If an expert faces information in the shape of "yes" or "no", then the principle of the FS is again insufficient. For this, the well-known idea of the intuitionistic FS (IFS) was initiated by Atanassov [11] by modifying the technique of the FS into the IFS, which covers the MG and NMG, whose sum belongs to [0, 1]. The IFS is a modified version of the FS and has received attention from various scholars; certain applications have been carried out. For example, Ye [12] initiated certain cosine measures by using IFSs, Rani and Garg [13] developed distance measures by using complex IFSs, Liang and Shi [14] also explored certain measures based on IFSs, Xu and Chen [15] examined distance and similarity measures for IFSs, Xu [16] proposed intuitionistic fuzzy similarity measures, Garg and Rani [17] presented the correlation among any number of complex IFSs, Zeshui [18] utilized certain measures for interval-valued IFSs, Wei et al. [19] investigated entropy and similarity measures for interval-valued IFSs, and Wang and Xin [20] proposed distance measures for IFSs.

It was demonstrated that the prevailing information computed based on FSs, HFSs, and IFSs has a variety of applications in many different fields, for instance, computer science, economics and finance, engineering sciences, and road signals. However, it is also clear that they have many limitations and restrictions. For instance, the IFS manages only two-dimensional information in which each dimension can express a single value; but what if someone provided two-dimensional information in which each dimension could represent more than one value? In such a situation, experts noticed that the theory of the IFS is not able to process the above information accurately. For this, the well-known idea of the intuitionistic hesitant fuzzy set (IHFS) was initiated by Beg and Rashid [21] by modifying the technique of the IFS into the IHFS, which covers the MG and NMG in the form of finite subsets of [0, 1], where the sum of the suprema of the duplet belongs to [0, 1]. The IHFS is a modified version of the IFS and HFS for coping with complicated and unreliable information in genuine life troubles, and it has attracted massive attention from various scholars. Certain applications have been carried out; for example, Peng et al. [22] initiated cross-entropy measures by using IHFSs, and Zhai et al. [23] examined probabilistic interval-valued IHFSs.

In statistics and related fields, a similarity function (similarity metric or similarity measure) is a real-valued function that quantifies the similarity between two items. Although no single notion of similarity exists, such measures are generally, in a particular sense, the inverse of distance metrics. Cosine, tangent, Hamming, Euclidean, dice, and generalized dice similarity measures are the commonly employed types of similarity measures for real-valued vectors, used in information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel mappings such as the radial basis function kernel can be viewed as similarity measures. Among all these measures, we notice that GDS measures are especially valuable and effective, as they are more general than the previously studied measures. Furthermore, GDS measures are a very significant part of the decision-making technique for determining the closeness between any number of attributes, and certain applications have been performed by different scholars. By using different values of the parameter, we can easily obtain the prevailing measures of cosine, tangent, Hamming, Euclidean, and dice similarity. However, the principles of dice and GDS measures have not been implemented in the environment of IHFSs. The main goal of this study is to utilize the principle of GDS measures in the environment of IHFSs to improve the quality of the research. We propose this theory for the following reasons:


To handle the above questions, we aim to illustrate the following investigations, which are briefly explained in the form of certain points below:


This study is organized as follows: In Section 2, we briefly recall the ideas of IFSs, HFSs, and IHFSs, and review the dice similarity measure (DSM). In Section 3, we propose certain types of DSMs based on IHFSs. In Section 4, we explore the IHF GDS measure and the IHF weighted GDS measure and evaluate certain special cases of the investigated measures. In Section 5, we apply the pioneered measures to two different types of applications, called medical diagnosis and pattern recognition, and discuss their comparative analysis. The conclusions of this study are given in Section 6.

#### **2. Preliminaries**

This section recalls the theories of IFSs, HFSs, IHFSs, and DSMs. Throughout, the mathematical term $X$ represents a universal set, with MG $\mathcal{M}_I$ and NMG $\mathcal{N}_I$.

**Definition 1 ([11]).** *An IFS I is investigated by:*

$$I = \{(x, \mathcal{M}_I(x), \mathcal{N}_I(x)) : x \in X\},$$

*with the rule:* $0 \le \mathcal{M}_I(x) + \mathcal{N}_I(x) \le 1$. *Moreover, the hesitancy degree is given by:* $d_I(x) = 1 - (\mathcal{M}_I(x) + \mathcal{N}_I(x))$. *Throughout this study, an intuitionistic fuzzy number (IFN) is denoted by* $I = (\mathcal{M}, \mathcal{N})$.

**Definition 2 ([6]).** *A HFS I is investigated by:*

$$I = \{(x, \mathcal{M}_I(x)) : x \in X\}$$

*where* $\mathcal{M}_I = \{\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_n\}$ *with the rule:* $0 \le \sup(\mathcal{M}_I) \le 1$.

**Definition 3 ([21]).** *An IHFS Ξ is investigated by:*

$$\Xi = \{(x, \mathcal{M}_\Xi(x), \mathcal{N}_\Xi(x)) : x \in X\}$$

*where* $\mathcal{M}_\Xi(x)$ *and* $\mathcal{N}_\Xi(x)$ *are hesitant fuzzy numbers (HFNs), with the rule:* $0 \le \max(\mathcal{M}_\Xi(x)) + \max(\mathcal{N}_\Xi(x)) \le 1$. *Moreover, the refusal grade is given by:* $\pi_\Xi(x) = 1 - (\max(\mathcal{M}_\Xi(x)) + \max(\mathcal{N}_\Xi(x)))$. *An intuitionistic hesitant fuzzy number (IHFN) is denoted by* $\Xi = \left(\mathcal{M}^j_\Xi, \mathcal{N}^j_\Xi\right)$.
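A quick numerical check of Definition 3's constraint and refusal grade can be written in a few lines of Python; this is a sketch, and the function name is ours:

```python
def refusal_grade(M, N):
    """Refusal grade of an IHFN (M, N) per Definition 3:
    pi = 1 - (max(M) + max(N)), valid only when max(M) + max(N) <= 1."""
    s = max(M) + max(N)
    if s > 1:
        raise ValueError("not a valid IHFN: max(M) + max(N) exceeds 1")
    return 1 - s

pi = refusal_grade([0.5, 0.6], [0.2, 0.3])  # 1 - (0.6 + 0.3) = 0.1
```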

**Definition 4 ([24]).** *For any two-positive vector X and Y, the DSM is initiated by:*

$$D(X, Y) = \frac{2\,X \cdot Y}{\|X\|_2^2 + \|Y\|_2^2} = \frac{2\sum_{j=1}^{l} x_j y_j}{\sum_{j=1}^{l} x_j^2 + \sum_{j=1}^{l} y_j^2}$$

*where* $X \cdot Y = \sum_{j=1}^{l} x_j y_j$ *is the inner product and* $\|X\|_2 = \sqrt{\sum_{j=1}^{l} x_j^2}$ *and* $\|Y\|_2 = \sqrt{\sum_{j=1}^{l} y_j^2}$ *are the Euclidean ($L_2$) norms of $X$ and $Y$.*
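Definition 4 translates directly into code. The following is a minimal Python version (the names are ours):

```python
def dice_similarity(x, y):
    """Classical dice similarity of two positive vectors (Definition 4):
    D(X, Y) = 2 (X . Y) / (||X||^2 + ||Y||^2)."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    inner = sum(a * b for a, b in zip(x, y))               # X . Y
    norms = sum(a * a for a in x) + sum(b * b for b in y)  # ||X||^2 + ||Y||^2
    return 2 * inner / norms

# Identical vectors give 1; orthogonal vectors give 0; the measure is symmetric.
assert dice_similarity([1.0, 0.0], [0.0, 1.0]) == 0.0
```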

#### **3. DSM for IHFSs**

To illustrate the relationship between any two pieces of IHF information, the theory of DSMs plays an important and valuable role in the field of genuine life dilemmas. The main advantage of GDS measures is that many measures can easily be obtained by using different values of the parameter, which is the main part of every GDS measure. In this study, we chose one of the most flexible and genuine principles, called the IHFS, which covers the MG and NMG in the form of finite subsets of [0, 1], with the rule that the sum of the suprema of the duplet is limited to [0, 1], and used it to develop four sorts of IHF dice similarity measures and IHF weighted dice similarity measures. Based on the investigated measures, certain special cases are also evaluated.

**Definition 5.** *For any two IHFNs $\Xi$ and $\Xi'$, a DSM $D^1_{IHF}(\Xi, \Xi')$ is defined by:*

$$D^1_{IHF}(\Xi, \Xi') = \frac{1}{M}\sum_{i=1}^{M}\frac{2\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)\right)}{\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2}$$

*where* $L_{\mathcal{M}_{\Xi}}$, $L_{\mathcal{N}_{\Xi}}$, $L_{\mathcal{M}_{\Xi'}}$, *and* $L_{\mathcal{N}_{\Xi'}}$ *denote the numbers of elements of the corresponding hesitant sets,*

*which holds the necessary rules:*

*1.* $0 \le D^1_{IHF}(\Xi, \Xi') \le 1$;

*2.* $D^1_{IHF}(\Xi, \Xi') = D^1_{IHF}(\Xi', \Xi)$;

*3.* $D^1_{IHF}(\Xi, \Xi') = 1 \Leftrightarrow \Xi = \Xi'$.

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $D^1_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones.
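As a concrete illustration of Definition 5, the following is a minimal Python sketch of $D^1$, assuming (as in the formula) that corresponding hesitant sets have equal lengths; the function and variable names are illustrative, not taken from the paper.

```python
def d1_ihf(A, B):
    """Sketch of the D^1 dice similarity measure for IHF sets.
    A, B: lists of IHFNs; each IHFN is a pair (M, N) of lists of
    membership / non-membership grades in [0, 1]."""
    total = 0.0
    for (Ma, Na), (Mb, Nb) in zip(A, B):
        # numerator: 2 * (length-normalized inner products of M- and N-grades)
        num = 2 * (sum(x * y for x, y in zip(Ma, Mb)) / len(Ma)
                   + sum(x * y for x, y in zip(Na, Nb)) / len(Na))
        # denominator: length-normalized squared norms of all four grade sets
        den = (sum(x * x for x in Ma) / len(Ma)
               + sum(x * x for x in Na) / len(Na)
               + sum(x * x for x in Mb) / len(Mb)
               + sum(x * x for x in Nb) / len(Nb))
        total += num / den
    return total / len(A)

A = [([0.5, 0.6], [0.2, 0.3]), ([0.4], [0.5])]
assert abs(d1_ihf(A, A) - 1.0) < 1e-12  # rule 3: identical arguments give 1
```

Rules 1 and 2 of Definition 5 (boundedness and symmetry) can be checked numerically in the same way.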

**Definition 6.** *For any two IHFNs $\Xi$ and $\Xi'$, a WDSM $WD^1_{IHF}(\Xi, \Xi')$ is defined by:*

$$WD^1_{IHF}(\Xi, \Xi') = \sum_{i=1}^{M}w_i\,\frac{2\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)\right)}{\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2}$$

*which holds the necessary rules of Definition 5.*

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $WD^1_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones. For $w = \left(\frac{1}{M}, \frac{1}{M}, \ldots, \frac{1}{M}\right)^T$, the WDSM reduces to the DSM based on IHFSs, that is, $WD^1_{IHF}(\Xi, \Xi') = D^1_{IHF}(\Xi, \Xi')$.

**Definition 7.** *For any two IHFNs $\Xi$ and $\Xi'$, a DSM $D^2_{IHF}(\Xi, \Xi')$ is defined by:*

$$D^2_{IHF}(\Xi, \Xi') = \frac{1}{M}\sum_{i=1}^{M}\frac{2\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi^j_{\Xi}(x_i)\pi^j_{\Xi'}(x_i)\right)}{\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\pi_{\Xi}}}\sum_{j=1}^{l}\left(\pi^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\pi_{\Xi'}}}\sum_{j=1}^{l}\left(\pi^j_{\Xi'}(x_i)\right)^2}$$

*which holds the necessary rules of Definition 5.*

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $D^2_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones.
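Definition 7 differs from Definition 5 only in that the refusal grades also enter the inner products and norms. A compact sketch treats an IHFN as a tuple of grade components, so the same loop handles two components (Definition 5) or three (Definition 7); here each element of A and B is (M, N, P), with P the list of refusal grades supplied by the caller, an assumption of this sketch.

```python
def d2_ihf(A, B):
    """Sketch of the D^2 measure: dice similarity over (M, N, pi) components.
    A, B: lists of tuples of grade lists, e.g. (M, N, P)."""
    total = 0.0
    for a, b in zip(A, B):
        # 2 * sum of length-normalized inner products over all grade components
        num = 2 * sum(sum(x * y for x, y in zip(ga, gb)) / len(ga)
                      for ga, gb in zip(a, b))
        # sum of length-normalized squared norms for both arguments
        den = (sum(sum(x * x for x in g) / len(g) for g in a)
               + sum(sum(x * x for x in g) / len(g) for g in b))
        total += num / den
    return total / len(A)

A = [([0.5], [0.3], [0.2]), ([0.4, 0.5], [0.2, 0.3], [0.1, 0.2])]
assert abs(d2_ihf(A, A) - 1.0) < 1e-12
```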

**Definition 8.** *For any two IHFNs $\Xi$ and $\Xi'$, a WDSM $WD^2_{IHF}(\Xi, \Xi')$ is defined by:*

$$WD^2_{IHF}(\Xi, \Xi') = \sum_{i=1}^{M}w_i\,\frac{2\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi^j_{\Xi}(x_i)\pi^j_{\Xi'}(x_i)\right)}{\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\pi_{\Xi}}}\sum_{j=1}^{l}\left(\pi^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\pi_{\Xi'}}}\sum_{j=1}^{l}\left(\pi^j_{\Xi'}(x_i)\right)^2}$$

*which holds the necessary rules of Definition 5.*

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $WD^2_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones. For $w = \left(\frac{1}{M}, \frac{1}{M}, \ldots, \frac{1}{M}\right)^T$, $WD^2_{IHF}(\Xi, \Xi') = D^2_{IHF}(\Xi, \Xi')$.

**Definition 9.** *For any two IHFNs $\Xi$ and $\Xi'$, a DSM $D^3_{IHF}(\Xi, \Xi')$ is defined by:*

$$D^3_{IHF}(\Xi, \Xi') = \frac{\sum_{i=1}^{M}2\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)\right)}{\sum_{i=1}^{M}\left(\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2\right)+\sum_{i=1}^{M}\left(\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2\right)}$$

*which holds the necessary rules of Definition 5.*

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $D^3_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones.

**Definition 10.** *For any two IHFNs $\Xi$ and $\Xi'$, a WDSM $WD^3_{IHF}(\Xi, \Xi')$ is defined by:*

$$WD^3_{IHF}(\Xi, \Xi') = \frac{\sum_{i=1}^{M}2w_i^2\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)\right)}{\sum_{i=1}^{M}w_i^2\left(\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2\right)+\sum_{i=1}^{M}w_i^2\left(\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2\right)}$$

*which holds the necessary rules of Definition 5.*

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $WD^3_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones. For $w = \left(\frac{1}{M}, \frac{1}{M}, \ldots, \frac{1}{M}\right)^T$, $WD^3_{IHF}(\Xi, \Xi') = D^3_{IHF}(\Xi, \Xi')$.

**Definition 11.** *For any two IHFNs $\Xi$ and $\Xi'$, a DSM $D^4_{IHF}(\Xi, \Xi')$ is defined by:*

$$D^4_{IHF}(\Xi, \Xi') = \frac{2\sum_{i=1}^{M}\left(\frac{1}{L_{\mathcal{M}}}\sum_{j=1}^{l}\mathcal{M}^j_{\Xi}(x_i)\mathcal{M}^j_{\Xi'}(x_i)+\frac{1}{L_{\mathcal{N}}}\sum_{j=1}^{l}\mathcal{N}^j_{\Xi}(x_i)\mathcal{N}^j_{\Xi'}(x_i)+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi^j_{\Xi}(x_i)\pi^j_{\Xi'}(x_i)\right)}{\sum_{i=1}^{M}\left(\frac{1}{L_{\mathcal{M}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi}(x_i)\right)^2+\frac{1}{L_{\pi_{\Xi}}}\sum_{j=1}^{l}\left(\pi^j_{\Xi}(x_i)\right)^2\right)+\sum_{i=1}^{M}\left(\frac{1}{L_{\mathcal{M}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{M}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\mathcal{N}_{\Xi'}}}\sum_{j=1}^{l}\left(\mathcal{N}^j_{\Xi'}(x_i)\right)^2+\frac{1}{L_{\pi_{\Xi'}}}\sum_{j=1}^{l}\left(\pi^j_{\Xi'}(x_i)\right)^2\right)}$$

*which holds the necessary rules of Definition 5.*

Using some conditions, we can easily obtain further particular cases from the above theory. For instance, putting $\mathcal{N}^j_{\Xi}(x_i) = \mathcal{N}^j_{\Xi'}(x_i) = 0$ in $D^4_{IHF}(\Xi, \Xi')$ reduces it to a measure for HFSs. Furthermore, taking $\mathcal{M}^j_{\Xi}(x_i), \mathcal{M}^j_{\Xi'}(x_i)$ and $\mathcal{N}^j_{\Xi}(x_i), \mathcal{N}^j_{\Xi'}(x_i)$ as singleton sets reduces it to a measure for IFSs, which shows that the theory developed in this study is more general than the existing ones.

**Definition 12.** *For any two IHFSs $\Xi$ and $\Xi'$, a WDSM $WD^{4}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$WD^{4}{}_{P\Xi F}(\Xi,\Xi')=\frac{2\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi_{\Xi}^{j}(\mathfrak{X}_{i})\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)}{\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $WD^{4}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this manuscript generalizes its HFS and IFS counterparts. For $w=\left(\frac{1}{M},\frac{1}{M},\ldots,\frac{1}{M}\right)^{T}$, $WD^{4}{}_{P\Xi F}(\Xi,\Xi')=D^{4}{}_{P\Xi F}(\Xi,\Xi')$.

#### **4. GDSM for IHFSs**

To quantify the relationship between any two pieces of IHF information, GDS measures play an important and valuable role in real-life decision problems. Their main advantage is that a large family of measures can be obtained from a single definition by varying one parameter. In this study, we work with the IHFS, one of the most flexible models, which represents the MG and NMG as finite subsets of [0, 1] under the rule that the sum of the suprema of the two sets lies in [0, 1]. On this basis, we develop four sorts of IHF GDS measures and their weighted versions, and we evaluate certain special cases of the investigated measures for the parameter 0 ≤ *γ* ≤ 1.

**Definition 13.** *For any two IHFSs $\Xi$ and $\Xi'$, a GDSM $GD^{1}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$GD^{1}{}_{P\Xi F}(\Xi,\Xi')=\frac{1}{M}\sum_{i=1}^{M}\frac{\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})}{\gamma\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $GD^{1}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts.

**Definition 14.** *For any two IHFSs $\Xi$ and $\Xi'$, a WGDSM $WGD^{1}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$WGD^{1}{}_{P\Xi F}(\Xi,\Xi')=\sum_{i=1}^{M}w_{i}\frac{\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})}{\gamma\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $WGD^{1}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts. For $w=\left(\frac{1}{M},\frac{1}{M},\ldots,\frac{1}{M}\right)^{T}$, $WGD^{1}{}_{P\Xi F}(\Xi,\Xi')=GD^{1}{}_{P\Xi F}(\Xi,\Xi')$.
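The role of the parameter *γ* in the generalized measure can be illustrated with a small sketch. The function name `gdsm1` and the pair layout (membership grades, non-membership grades) are our own illustrative choices; only membership and non-membership grades enter this variant.

```python
# Sketch of the generalized Dice measure GD^1 for IHFSs (notation ours).
# gamma = 0.5 recovers the symmetric Dice form, while gamma = 0 and
# gamma = 1 give the two asymmetric (projection-like) special cases.

def gdsm1(A, B, gamma=0.5):
    total = 0.0
    for (mA, nA), (mB, nB) in zip(A, B):
        num = (sum(x * y for x, y in zip(mA, mB)) / len(mA)
               + sum(x * y for x, y in zip(nA, nB)) / len(nA))
        sq_a = sum(x * x for x in mA) / len(mA) + sum(x * x for x in nA) / len(nA)
        sq_b = sum(x * x for x in mB) / len(mB) + sum(x * x for x in nB) / len(nB)
        total += num / (gamma * sq_a + (1 - gamma) * sq_b)
    return total / len(A)

A = [([0.1, 0.2], [0.2, 0.3]), ([0.3, 0.4], [0.1, 0.2])]
B = [([0.2, 0.3], [0.1, 0.2]), ([0.4, 0.5], [0.1, 0.1])]
# gamma = 0.5 weights both arguments equally, so the measure is
# symmetric in A and B; the end cases gamma = 0, 1 generally are not.
print(abs(gdsm1(A, B, 0.5) - gdsm1(B, A, 0.5)) < 1e-12)  # True
```

Note that for identical arguments the numerator coincides with both squared-norm terms, so the measure returns 1 for every choice of *γ*.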

**Definition 15.** *For any two IHFSs $\Xi$ and $\Xi'$, a GDSM $GD^{2}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$GD^{2}{}_{P\Xi F}(\Xi,\Xi')=\frac{1}{M}\sum_{i=1}^{M}\frac{\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi_{\Xi}^{j}(\mathfrak{X}_{i})\pi_{\Xi'}^{j}(\mathfrak{X}_{i})}{\gamma\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $GD^{2}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts.

**Definition 16.** *For any two IHFSs $\Xi$ and $\Xi'$, a WGDSM $WGD^{2}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$WGD^{2}{}_{P\Xi F}(\Xi,\Xi')=\sum_{i=1}^{M}w_{i}\frac{\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi_{\Xi}^{j}(\mathfrak{X}_{i})\pi_{\Xi'}^{j}(\mathfrak{X}_{i})}{\gamma\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $WGD^{2}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts. For $w=\left(\frac{1}{M},\frac{1}{M},\ldots,\frac{1}{M}\right)^{T}$, $WGD^{2}{}_{P\Xi F}(\Xi,\Xi')=GD^{2}{}_{P\Xi F}(\Xi,\Xi')$.

**Definition 17.** *For any two IHFSs $\Xi$ and $\Xi'$, a GDSM $GD^{3}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$GD^{3}{}_{P\Xi F}(\Xi,\Xi')=\frac{\sum_{i=1}^{M}\left(\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)}{\gamma\sum_{i=1}^{M}\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\sum_{i=1}^{M}\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $GD^{3}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts.

**Definition 18.** *For any two IHFSs $\Xi$ and $\Xi'$, a WGDSM $WGD^{3}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$WGD^{3}{}_{P\Xi F}(\Xi,\Xi')=\frac{\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)}{\gamma\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $WGD^{3}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts. For $w=\left(\frac{1}{M},\frac{1}{M},\ldots,\frac{1}{M}\right)^{T}$, $WGD^{3}{}_{P\Xi F}(\Xi,\Xi')=GD^{3}{}_{P\Xi F}(\Xi,\Xi')$.

**Definition 19.** *For any two IHFSs $\Xi$ and $\Xi'$, a GDSM $GD^{4}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$GD^{4}{}_{P\Xi F}(\Xi,\Xi')=\frac{\sum_{i=1}^{M}\left(\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi_{\Xi}^{j}(\mathfrak{X}_{i})\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)}{\gamma\sum_{i=1}^{M}\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\sum_{i=1}^{M}\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $GD^{4}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts.

**Definition 20.** *For any two IHFSs $\Xi$ and $\Xi'$, a WGDSM $WGD^{4}{}_{P\Xi F}(\Xi,\Xi')$ is defined by:*

$$WGD^{4}{}_{P\Xi F}(\Xi,\Xi')=\frac{\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\pi}}\sum_{j=1}^{l}\pi_{\Xi}^{j}(\mathfrak{X}_{i})\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)}{\gamma\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}\right)+(1-\gamma)\sum_{i=1}^{M}w_{i}^{2}\left(\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\pi_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\pi_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}\right)}$$

*which satisfies the conditions of Definition 5.*

Further particular cases follow directly from the above definition. Setting $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})=\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})=0$ in $WGD^{4}{}_{P\Xi F}(\Xi,\Xi')$ reduces it to a measure for HFSs. Furthermore, restricting $\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})$ and $\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i}),\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})$ to singleton sets reduces it to a measure for IFSs, so the measure developed in this study generalizes its HFS and IFS counterparts. For $w=\left(\frac{1}{M},\frac{1}{M},\ldots,\frac{1}{M}\right)^{T}$, $WGD^{4}{}_{P\Xi F}(\Xi,\Xi')=GD^{4}{}_{P\Xi F}(\Xi,\Xi')$.
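The weighted aggregate form and its equal-weight reduction can be checked numerically. The sketch below is ours (names `wgdsm4`, `_ndot`, `_pi` and the pairwise hesitancy rule are illustrative assumptions): because $w_i^2$ multiplies both the numerator and the denominator, the common factor $1/M^2$ cancels under equal weights, so the weighted measure coincides with the unweighted one.

```python
# Sketch of the weighted aggregate measure WGD^4 for IHFSs (notation ours).
# With equal weights w_i = 1/M, the factor 1/M^2 cancels between
# numerator and denominator, recovering the unweighted GD^4.

def _ndot(u, v):
    """(1/L) * sum_j u_j * v_j with L = len(u); 0.0 for empty grade lists."""
    return sum(x * y for x, y in zip(u, v)) / len(u) if u else 0.0

def _pi(m, n):
    """Pairwise hesitancy grades pi = 1 - m - n (an assumption)."""
    return [1.0 - a - b for a, b in zip(m, n)]

def wgdsm4(A, B, w, gamma=0.5):
    num = den_a = den_b = 0.0
    for wi, (mA, nA), (mB, nB) in zip(w, A, B):
        pA, pB = _pi(mA, nA), _pi(mB, nB)
        num += wi**2 * (_ndot(mA, mB) + _ndot(nA, nB) + _ndot(pA, pB))
        den_a += wi**2 * (_ndot(mA, mA) + _ndot(nA, nA) + _ndot(pA, pA))
        den_b += wi**2 * (_ndot(mB, mB) + _ndot(nB, nB) + _ndot(pB, pB))
    return num / (gamma * den_a + (1 - gamma) * den_b)

A = [([0.1, 0.2], [0.2, 0.3]), ([0.3, 0.4], [0.1, 0.2])]
B = [([0.2, 0.3], [0.1, 0.2]), ([0.4, 0.5], [0.1, 0.1])]
M = len(A)
# Any uniform weight vector gives the same value: the scale cancels.
print(abs(wgdsm4(A, B, [1.0 / M] * M) - wgdsm4(A, B, [1.0] * M)) < 1e-12)
```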

Using the investigated measures, we now discuss certain special cases of the DSM, WDSM, GDSM, and WGDSM.

For *γ* = 0, in $GD^{1}{}_{P\Xi F}(\Xi,\Xi')$, we obtain

$$GD^{1}{}_{P\Xi F}(\Xi,\Xi')=\frac{1}{M}\sum_{i=1}^{M}\frac{\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})}{\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}}$$

Similarly, for *γ* = 0.5,

$$GD^{1}{}_{P\Xi F}(\Xi,\Xi')=\frac{1}{M}\sum_{i=1}^{M}\frac{2\left(\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)}{\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{M}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi'}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})\right)^{2}}=D^{1}{}_{P\Xi F}(\Xi,\Xi')$$

For *γ* = 1, in $GD^{1}{}_{P\Xi F}(\Xi,\Xi')$, we obtain

$$GD^{1}{}_{P\Xi F}(\Xi,\Xi')=\frac{1}{M}\sum_{i=1}^{M}\frac{\frac{1}{L_{\mathfrak{M}}}\sum_{j=1}^{l}\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{M}_{\Xi'}^{j}(\mathfrak{X}_{i})+\frac{1}{L_{\mathfrak{N}}}\sum_{j=1}^{l}\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\mathfrak{N}_{\Xi'}^{j}(\mathfrak{X}_{i})}{\frac{1}{L_{\mathfrak{M}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{M}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}+\frac{1}{L_{\mathfrak{N}_{\Xi}(\mathfrak{X})}}\sum_{j=1}^{l}\left(\mathfrak{N}_{\Xi}^{j}(\mathfrak{X}_{i})\right)^{2}}$$

Analogous special cases of $GD^{2}{}_{P\Xi F}(\Xi,\Xi')$, $GD^{3}{}_{P\Xi F}(\Xi,\Xi')$, and $GD^{4}{}_{P\Xi F}(\Xi,\Xi')$ are obtained for *γ* = 0, 0.5, and 1 in the same way.

#### **5. Decision-Making Processes**

Pattern recognition is the computerized identification of shapes, patterns, and regularities in data. It has applications in data compression, machine learning, statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, and computer graphics. Similarly, medical diagnosis is the procedure of identifying the disease or disorder that accounts for a person's symptoms and signs. The decision-making procedure covers four main stages: intelligence, design, choice, and implementation. It begins with the intelligence stage, in which the decision-maker examines reality and identifies and defines the problem. The main contribution of this section is to apply the proposed measures to medical diagnosis and pattern recognition under IHF information. The importance of each application and a brief explanation are given below; these applications are taken from Ref. [17].

#### *5.1. Medical Diagnosis*

Different diseases exhibit distinct symptoms, and the medical diagnosis procedure matches a patient's observed symptoms against the symptom profiles of the candidate diseases. The diseases are expressed using the symbols *Ξ*1, *Ξ*2, ... , *Ξn*, and their symptoms are expressed by the elements of the universal set. A numerical example using the proposed measures is discussed below.
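The diagnosis step itself reduces to scoring every candidate disease against the known (ideal) pattern with the chosen measure and ranking by the result. A minimal sketch of this generic step (the function names, the `toy_measure` stand-in, and the sample data are ours, not the paper's):

```python
# Generic ranking step for the diagnosis/pattern-recognition applications:
# score every candidate against the known pattern and pick the maximum.
# `measure` can be any of the (W)DSM/(W)GDSM functions; here a stand-in.

def rank_candidates(candidates, known, measure):
    scores = {name: measure(ihfs, known) for name, ihfs in candidates.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    return scores, ranking

# Stand-in measure: mean absolute agreement of the first membership grades.
def toy_measure(a, b):
    return 1.0 - sum(abs(x[0][0] - y[0][0]) for x, y in zip(a, b)) / len(a)

known = [([1.0, 1.0], [0.0, 0.0, 0.0])] * 2
candidates = {
    "Xi1": [([0.1, 0.2], [0.2, 0.3, 0.4])] * 2,
    "Xi2": [([0.2, 0.3], [0.1, 0.3, 0.2])] * 2,
}
scores, ranking = rank_candidates(candidates, known, toy_measure)
print(ranking[0])  # prints "Xi2", the candidate closest to the known pattern
```

Only the scoring function changes between Examples 1 and 2; the ranking logic is identical.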

**Example 1.** *Consider a set of diseases Ξ = {Ξ1 (Typhoid), Ξ2 (Flu), Ξ3 (Heart problems), Ξ4 (Pneumonia), Ξ5 (Coronavirus)} and their symptoms X = {Fever, Cough, Heart pain, Loss of appetite, Shortness of breath}. The symptom profiles of the candidate diseases are given below:*

$$
\Xi_{1}=\left\{\begin{array}{l}
(\{0.1,0.2\},\{0.2,0.3,0.4\}),\\
(\{0.11,0.21\},\{0.21,0.31,0.41\}),\\
(\{0.12,0.22\},\{0.22,0.32,0.42\}),\\
(\{0.13,0.23\},\{0.23,0.33,0.43\}),\\
(\{0.14,0.24\},\{0.24,0.34,0.44\})
\end{array}\right\},\quad
\Xi_{2}=\left\{\begin{array}{l}
(\{0.2,0.3\},\{0.1,0.3,0.2\}),\\
(\{0.21,0.31\},\{0.11,0.31,0.21\}),\\
(\{0.22,0.32\},\{0.12,0.32,0.22\}),\\
(\{0.23,0.33\},\{0.13,0.33,0.23\}),\\
(\{0.24,0.34\},\{0.14,0.34,0.24\})
\end{array}\right\},
$$

$$
\Xi_{3}=\left\{\begin{array}{l}
(\{0.3,0.1\},\{0.5,0.2,0.1\}),\\
(\{0.31,0.11\},\{0.51,0.21,0.11\}),\\
(\{0.32,0.12\},\{0.52,0.22,0.12\}),\\
(\{0.33,0.13\},\{0.53,0.23,0.13\}),\\
(\{0.34,0.14\},\{0.54,0.24,0.14\})
\end{array}\right\},\quad
\Xi_{4}=\left\{\begin{array}{l}
(\{0.1,0.1\},\{0.2,0.2,0.4\}),\\
(\{0.11,0.11\},\{0.21,0.21,0.41\}),\\
(\{0.12,0.12\},\{0.22,0.22,0.42\}),\\
(\{0.13,0.13\},\{0.23,0.23,0.43\}),\\
(\{0.14,0.14\},\{0.24,0.24,0.44\})
\end{array}\right\},
$$

$$
\Xi_{5}=\left\{\begin{array}{l}
(\{0.3,0.5\},\{0.1,0.2,0.3\}),\\
(\{0.31,0.51\},\{0.11,0.21,0.31\}),\\
(\{0.32,0.52\},\{0.12,0.22,0.32\}),\\
(\{0.33,0.53\},\{0.13,0.23,0.33\}),\\
(\{0.34,0.54\},\{0.14,0.24,0.34\})
\end{array}\right\}.
$$

*For this, we choose the known disease*

$$\Xi'=\left\{(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\})\right\}.$$

*Then, using $GD^{1}{}_{P\Xi F}(\Xi,\Xi')$, $WGD^{1}{}_{P\Xi F}(\Xi,\Xi')$, $GD^{2}{}_{P\Xi F}(\Xi,\Xi')$, and $WGD^{2}{}_{P\Xi F}(\Xi,\Xi')$ with the weight vector (0.2, 0.3, 0.2, 0.2, 0.1) and γ* = 1*, the measured values are given in Table 1.*

**Table 1.** Expressions of the measured values by using different measures.


Further, the ranking information in Table 2 is constructed from the values in Table 1.


**Table 2.** Contained ranking analysis of the information in Table 1.

From Table 2, all sorts of measures provide the same ranking results, and the best alternative is *Ξ*2. Additionally, using distinct types of measures based on IFSs and IHFSs, a comparative analysis of the elaborated measures against certain prevailing measures is given in Table 3. The prevailing measures are as follows: Ye [12] initiated certain cosine measures based on IFSs, Beg and Rashid [21] proposed certain measures based on IHFSs, and Peng et al. [22] proposed cross-entropy measures based on IHFSs. Using the information in Section 5.1, the comparative analysis is presented in Table 3.

**Table 3.** Contained comparative information.


From Table 3, all sorts of measures provide the same ranking results; the best alternative is *Ξ*2.

#### *5.2. Pattern Recognition*

Using the elaborated measures, we next consider a practical application, pattern recognition, and evaluate it with the proposed information.

**Example 2.** *The construction of any building is a complicated task. To avoid complications, a decision-maker collects information about different candidate building materials and resolves it using the elaborated measures, so that a safe decision can be made. The different types of building material under consideration are described below.*

$$
\Xi_{1}=\left\{\begin{array}{l}
(\{0.1,0.2\},\{0.1,0.2,0.3\}),\\
(\{0.11,0.21\},\{0.11,0.21,0.31\}),\\
(\{0.12,0.22\},\{0.12,0.22,0.32\}),\\
(\{0.13,0.23\},\{0.13,0.23,0.33\}),\\
(\{0.14,0.24\},\{0.14,0.24,0.34\})
\end{array}\right\},\quad
\Xi_{2}=\left\{\begin{array}{l}
(\{0.2,0.3\},\{0.2,0.3,0.4\}),\\
(\{0.21,0.31\},\{0.21,0.31,0.41\}),\\
(\{0.22,0.32\},\{0.22,0.32,0.42\}),\\
(\{0.23,0.33\},\{0.23,0.33,0.43\}),\\
(\{0.24,0.34\},\{0.24,0.34,0.44\})
\end{array}\right\},
$$

$$
\Xi_{3}=\left\{\begin{array}{l}
(\{0.1,0.3\},\{0.2,0.1,0.1\}),\\
(\{0.11,0.31\},\{0.21,0.11,0.11\}),\\
(\{0.12,0.32\},\{0.22,0.12,0.12\}),\\
(\{0.13,0.33\},\{0.23,0.13,0.13\}),\\
(\{0.14,0.34\},\{0.24,0.14,0.14\})
\end{array}\right\},\quad
\Xi_{4}=\left\{\begin{array}{l}
(\{0.1,0.2\},\{0.3,0.2,0.4\}),\\
(\{0.11,0.21\},\{0.31,0.21,0.41\}),\\
(\{0.12,0.22\},\{0.32,0.22,0.42\}),\\
(\{0.13,0.23\},\{0.33,0.23,0.43\}),\\
(\{0.14,0.24\},\{0.34,0.24,0.44\})
\end{array}\right\},
$$

$$
\Xi_{5}=\left\{\begin{array}{l}
(\{0.4,0.5\},\{0.1,0.1,0.1\}),\\
(\{0.41,0.51\},\{0.11,0.11,0.11\}),\\
(\{0.42,0.52\},\{0.12,0.12,0.12\}),\\
(\{0.43,0.53\},\{0.13,0.13,0.13\}),\\
(\{0.44,0.54\},\{0.14,0.14,0.14\})
\end{array}\right\}
$$

For this, we choose the known (ideal) pattern, which is expressed below:

$$\Xi'=\left\{(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\}),(\{1,1\},\{0.0,0.0,0.0\})\right\}$$

Then, using $GD^{1}{}_{P\Xi F}(\Xi,\Xi')$, $WGD^{1}{}_{P\Xi F}(\Xi,\Xi')$, $GD^{2}{}_{P\Xi F}(\Xi,\Xi')$, and $WGD^{2}{}_{P\Xi F}(\Xi,\Xi')$ with the weight vector (0.2, 0.3, 0.2, 0.2, 0.1) and *γ* = 1, the measured values are given in Table 4.

**Table 4.** Expressions of the measured values using different measures.


Further, the ranking information in Table 5 is constructed from the values in Table 4.


**Table 5.** Contained ranking analysis.

From Table 5, the measures provide different ranking results; the best alternatives are *Ξ*<sup>5</sup> and *Ξ*3. Additionally, using distinct types of measures based on IFSs and IHFSs, a comparative analysis of the elaborated measures against certain prevailing measures is given in Table 6. The prevailing measures are as follows: Ye [12] initiated certain cosine measures based on IFSs, Beg and Rashid [21] proposed certain measures based on IHFSs, and Peng et al. [22] proposed cross-entropy measures based on IHFSs. Using the information from Example 2, the comparative analysis is presented in Table 6.

**Table 6.** Contained comparative analysis.


From Table 6, the measures provide different ranking results; the best alternatives are *Ξ*<sup>5</sup> and *Ξ*3. In the future, we will utilize different types of operators, methods, and measures in the environment of picture hesitant fuzzy sets and neutrosophic hesitant fuzzy sets [24–31] to improve the quality of the proposed work. The elaborated measures based on IHFSs are therefore more powerful and more flexible than the prevailing ideas [23–31].

#### **6. Conclusions**

The main features of this analysis are described below:


Our recent work focused on the prevailing information computed based on complex q-rung orthopair FSs [32], spherical FSs (SFSs) [33], Aczel-Alsina operational laws [34], different types of measures [35,36], Aczel-Alsina aggregation operators [37], Maclaurin operators [38], complex SFSs [39,40], linguistic group decision-making techniques [41], and unbalanced linguistic information [42]; we aim to employ it in the fields of computer science, road signals, software engineering, and decision-making.

**Author Contributions:** Conceptualization, M.A.; Formal analysis, M.A.; Funding acquisition, M.A.; Investigation, M.A. and T.M.; Methodology, T.M.; Project administration, M.A. and T.M.; Supervision, T.M.; Validation, T.M. All authors contributed equally. All authors have read and agreed to the published version of the manuscript.

**Funding:** The Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia has funded this Project under grant No. (G: 445-130-1443).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data utilized in this study are hypothetical and artificial, and they may be used without prior permission provided this paper is cited.

**Acknowledgments:** This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia under grant No. (G:445-130-1443). The authors, therefore, acknowledge with thanks DSR for technical and financial support.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Ethics Declaration Statement:** The authors state that this is their original work, and it is neither submitted nor under consideration in any other journal simultaneously.

#### **References**


## *Article* **A Novel Driving-Strategy Generating Method of Collision Avoidance for Unmanned Ships Based on Extensive-Form Game Model with Fuzzy Credibility Numbers**

**Haotian Cui 1, Fangwei Zhang 1,2,\*, Mingjie Li 1, Yang Cui <sup>1</sup> and Rui Wang <sup>1</sup>**


**Abstract:** This study aims to solve the problem of the intelligent collision avoidance of unmanned ships at sea, and it proposes a novel driving-strategy generating method for collision avoidance based on an extensive-form game model with fuzzy credibility numbers. The innovation of this study is to propose an extensive-form game model of unmanned ships under the two-sides clamping situation and to verify its validity by fuzzy credibility. Firstly, this study quantitatively divides the head-on situation of ships at sea to help the unmanned ship take targeted measures when making collision avoidance decisions. Secondly, this study adopts an extensive-form game model to model the collision avoidance problem of an unmanned ship clamped on two sides. Thirdly, the extensive-form game model is organically combined with the fuzzy credibility degree to judge whether the collision avoidance game of the unmanned ship achieves the optimal collision avoidance result. The effectiveness of the introduced game model is verified by case analysis and simulation. Finally, an illustrative example shows that the proposed mathematical model can effectively help unmanned ships make real-time game decisions at sea in the two-sides clamping scenario.

**Keywords:** collision avoidance; encounter situation; fuzzy credibility numbers; intelligent unmanned ships; extensive-form game model

**MSC:** 90C70

#### **1. Introduction**

In actual maritime navigation, the entire collision avoidance operation of unmanned ships revolves around the three stages of "observation, judgment and decision making" [1]. At the same time, the specific water environment and different encounter states will also affect the collision avoidance decision-making process of the unmanned ship. Against this background, to help unmanned ships take targeted collision avoidance measures, this study analyzes the encounter situation of ships under the two-sides clamping condition.

The two-sides clamping scenario is a condition in which a ship sails between two other ships at sea. Investigation illustrates that it is dangerous for a ship to be trapped in this two-sides clamping situation. Considering that the collision avoidance operations of unmanned ships constitute a game process, this study proposes an anti-collision decision model for unmanned ships based on the extensive-form game model [2].

#### *1.1. Literature Review*

This study focuses on the issue of unmanned ship collision avoidance in the two-sides clamping scenario. At present, scholars' research on ship collision avoidance mostly focuses on three aspects: strategies for avoiding ship collisions, the application of game theory to ships, and the practical application of fuzzy credibility numbers.

**Citation:** Cui, H.; Zhang, F.; Li, M.; Cui, Y.; Wang, R. A Novel Driving-Strategy Generating Method of Collision Avoidance for Unmanned Ships Based on Extensive-Form Game Model with Fuzzy Credibility Numbers. *Mathematics* **2022**, *10*, 3316. https://doi.org/10.3390/ math10183316

Academic Editor: Mariano Luque

Received: 2 August 2022 Accepted: 5 September 2022 Published: 13 September 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In the past five years, collision avoidance problems have mainly been studied from the viewpoints of risk assessment, variable distribution, safety domain, etc. Scholars have performed research on strategies for ships avoiding collisions. In the study by Li et al. [3], by balancing the safety and economy of ship collision avoidance, the avoidance angle and the time to the action point are used as the variables encoded by the algorithm, and the fuzzy ship domain is used to calculate the collision avoidance risk. Thereafter, Lee et al. [4] proposed a heuristic search technology for collision avoidance operations of autonomous ships. Addressing the multi-vessel collision avoidance problem, Wang et al. [5] researched obstacle avoidance decision-making based on deep reinforcement learning to solve the problem of intelligent collision avoidance for unmanned ships in unknown environments. Based on the mathematical model group's ship motion model, Xing et al. [6] proposed an open-sea ship collision prevention approach to enhance the prediction of ship collision risk as well as the real-time performance and dependability of collision avoidance methods.

At present, the application fields of the extensive-form game model include transportation. Lisowski [7] introduced the application of game control processes in marine navigation: the control goal was defined first, and then approximated models of the multi-stage positional game and the multi-step matrix game of safe ship steering in a collision situation were presented. Subsequently, Lisowski et al. [8] described six methods based on optimal control, game theory, and artificial neural networks for the synthesis of safe control in collision situations at sea; the optimal control algorithm and game control algorithm were used to determine the safe track. Afterwards, Zou et al. [9] identified a safety evaluation indicator system and evaluation standards and established an after-collision safety evaluation model of maritime ships based on extension cloud theory. Considering the defects of the classic extensive game method in ship collision avoidance decision-making, Tu et al. [10] proposed an improved extensive game method based on the velocity obstacle method.

Up to now, fuzzy credibility numbers have mainly been used to solve decision-making problems, project scheduling problems, multi-objective fuzzy-interval credibility-constrained non-linear programming, etc. Aiming at the inaccurate evaluation results caused by experts' personal preferences or expectations in simulation credibility evaluation based on traditional fuzzy comprehensive evaluation, as well as the unreasonable selection of fuzzy synthetic calculations, Ran et al. [11] proposed a simulation credibility evaluation method based on improved fuzzy comprehensive evaluation. Moreover, Ye et al. [12] proposed the concept of a fuzzy credibility number as a new extension of the fuzzy concept. Thereafter, Vercher et al. [13] presented a new forecasting scheme based on the credibility distribution of fuzzy events. In the same year, Zhou et al. [14] proposed a decision support model for USVs to improve the accuracy of collision avoidance decision-making.

Based on the aforementioned analysis, the collision avoidance of unmanned ships is studied. The main innovation of this study is combining the extensive-form game model with fuzzy credibility numbers (FCNs). Specifically, by using the extensive-form game model, the collision avoidance strategy of unmanned ships is studied for the special two-sides clamping situation, and by using FCNs, the danger of collision is quantified.

#### *1.2. Goals and Contributions*

The purpose of this study is to explore the decision-making problem of collision avoidance for unmanned ships at sea under the two-sides clamp scenario. In response to the aforementioned problems, this study establishes an extensive-form game model based on the two-sides clamping scenario and applies it to solve the specific collision avoidance problem.

The contributions of this study are as follows. Firstly, based on the extensive-form game model, this study establishes a description of the ship collision avoidance structure under the two-sides clamping situation. Secondly, this study chooses driving strategies following a priority principle of ship collision avoidance and introduces a utility function to describe it. By combining this utility function with the extensive-form game model, the utilities of the own ship and the target ship are collected and compared to find the optimal collision avoidance decision. Thirdly, this study establishes a ship collision risk fuzzy credibility operator to judge whether the ship has escaped from collision danger.

The rest of this study is organized as follows. Section 2 clarifies the research basis. Section 3 proposes the driving-strategy generating method for collision avoidance. Section 4 carries out simulation verification for the proposed method. Section 5 summarizes and points out possible future work. The structure of this study is shown in Figure 1.

**Figure 1.** Research process.

#### **2. Research Basis**

This part mainly introduces the conflict identification of ships' encounter situations at sea, quantitatively analyzes the ship's encounter situation, and introduces relevant knowledge of the extensive-form game tree and sub-game refined Nash equilibrium.

#### *2.1. Route Conflict Situation Identification*

The identification of the conflict situation on the route and the division of ship responsibilities are based on the 1972 International Regulations for Preventing Collisions at Sea, namely COLREGS. In actual navigation, the collision avoidance measures taken by unmanned ships are based on the collision avoidance rules listed in COLREGS combined with various ship identification devices for automatic collision avoidance [15]. Uncoordinated collision avoidance measures may disrupt the entire collision avoidance process so that the best avoidance opportunity is missed [16]. According to the different encounter angles of ships, the encounter situation can be divided into three types: head-on situation, overtaking situation, and crossing situation. The head-on situation is the situation that ships often encounter at sea, and it is also the main situation that causes a ship to be in imminent danger or to collide. Therefore, this study researches the collision avoidance strategy of ships in this confrontation situation.

#### *2.2. Judgment of Head-On Ship Situation*

The "International Regulations for Preventing Collisions at Sea" gives the following four points for judging the head-on situation of ships [17]. Firstly, both ships must be motorized ships. Secondly, the sailing directions of the two ships are opposite or almost opposite on the route. Thirdly, one motorized ship is sailing directly in front of or nearly in front of the other. Finally, the two ships can see each other and constitute a collision hazard. Therein, Δ*C* denotes the heading difference of the two ships in the confrontation situation and *B* denotes the relative azimuth. Headings that are opposite or close to opposite mean that the heading difference between the two ships lies within the range 174◦ ≤ Δ*C* ≤ 186◦. From the point of view of the relative orientation of the two encountering ships, headings close to opposite mean that one ship is located within 6◦ to the left or right in front of the other ship. Therefore, the relative azimuth of the confrontation situation should satisfy *B* ≤ 005◦ or *B* ≥ 351◦; the specific details are shown in Figure 2.

**Figure 2.** Schematic diagram of Head-on situation.

#### *2.3. Extensive-Form Game Model Tree*

The extensive-form game is dynamic. The difference between it and the static game is that the extensive-form dynamic game needs to determine the order of actions [18]. Each node on the "game tree" represents a player's decision point, and this point is said to belong to the player acting at that point [19]. The branches represent the possible actions of the players, and each branch connects two nodes with a direction from one node to the other. Each branch of the game tree may or may not be expanded. Meanwhile, each branch in the game tree can be regarded as a new game tree, called a sub-tree, as shown in Figure 3. Part A is a sub-game of B, and A is also a sub-game of the whole game. The nodes are expanded outward, as shown in Figure 4.

**Figure 3.** Extensive-form game model tree.

**Figure 4.** The sub-tree of the game tree.

#### *2.4. Sub-Game Refinement Nash Equilibrium*

The Nash equilibrium strategy is one in which all players in the game adopt the best strategy for themselves [20]. Throughout the game, the players are rational and intelligent. The combination of actions taken in each game is the optimal strategy, and the sub-game developed from the game tree is also solved optimally. The combination of action strategies taken in the game process conforms to the Nash equilibrium strategy. Sub-game refined Nash equilibrium is the most effective tool for analyzing perfect-information dynamic games in game theory [21].

#### **3. Unmanned Ship Collision Avoidance Model in Two-Sides Clamp Scenario**

This section adopts the fuzzy mathematics method, which organically combines the extensive-form game with collision risk fuzzy credibility numbers. This study particularly analyzes the collision avoidance game problem of route conflict in the situation where the unmanned ship is under two-sides clamping. In this model, the fuzzy credibility degree of collision risk is used to calculate whether the ship escapes from the collision risk after the collision avoidance game, so as to judge whether the ship has adopted the optimal collision avoidance strategy.

#### *3.1. A Novel Ship Collision Avoidance Model*

In this subsection, a ship collision avoidance model is proposed for the two-sides clamp scenario. The specific steps are as follows.

**Step 1: Determination of priority**. When ships encounter emergency and dangerous situations in the course of navigation, they need to recognize each other's game information through the various identification devices on unmanned ships and then play a sequential dynamic game. To determine the action sequence of the players in the game process, this study proposes a ship priority function. Based on actual sailing experience, this study makes two assumptions about the gross tonnage and the sailing speed of the ship: the larger the gross tonnage of the ship in the voyage, the higher its priority in the game situation, and the higher the speed of the ship during the voyage, the higher its priority in the ship game. The following formula captures these two assumptions:

$$p\_i = w\_1(G\_i / \sum\_{i=1}^n G\_i) + w\_2(V\_i / \sum\_{i=1}^n V\_i). \tag{1}$$

In Equation (1), *pi* represents the priority index of player *i* in the game, *Gi* represents the gross tonnage of player *i*, and *Vi* represents the speed of player *i*. Among them, *w*<sup>1</sup> and *w*<sup>2</sup> represent the weights of the gross tonnage and the speed of the ship, with *w*<sup>1</sup> + *w*<sup>2</sup> = 1. After *pi* has been determined, the players alternately make action decisions in decreasing order of the index *pi*.
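As a concrete illustration of Equation (1), the priority index can be sketched as follows; the weights *w*<sup>1</sup>, *w*<sup>2</sup> and the ship data are made-up example values, not taken from the study.

```python
# Hypothetical sketch of Equation (1): p_i = w1 * G_i / sum(G) + w2 * V_i / sum(V).
# Weights and ship data below are illustrative assumptions only.

def priority_indices(tonnages, speeds, w1=0.6, w2=0.4):
    """Return the priority index p_i for each ship (w1 + w2 must equal 1)."""
    g_total = sum(tonnages)
    v_total = sum(speeds)
    return [w1 * g / g_total + w2 * v / v_total
            for g, v in zip(tonnages, speeds)]

# Example: three ships with gross tonnages (t) and speeds (kn)
p = priority_indices([50000, 30000, 20000], [18.0, 22.0, 15.0])
order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)  # act in this order
```

Because *w*<sup>1</sup> + *w*<sup>2</sup> = 1 and each share sums to 1, the priority indices always sum to 1, so they can be read as relative weights of the players.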

**Step 2: Action space (Action set)**. After obtaining the corresponding action sequence based on the ship collision avoidance priority in Step 1, it is assumed that ship *i* starts to act. The set of game decisions it can make in the current situation is called the action set of ship *i*. The number of action strategies in this set is related to the complexity of the game situation: the more complex the game model, the more actions can be made, the more combinations of actions there are, and the longer the solution process will take. To simplify the development space of the game and reduce the time required for the game-solving process, this study only adopts steering avoidance as a collision avoidance measure. In sailing practice, a steering angle that is too large will cause inconvenience in resuming the voyage. Therefore, the upper and lower limits of steering are ±30◦ in this study, with each turn of 10◦ as an action strategy; the action set of ship *i* can then be represented as *Ai* = {−30◦, −20◦, −10◦, 0◦, 10◦, 20◦, 30◦}.
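The steering action set above and the joint action combinations of two ships can be enumerated as a quick sketch; pairing two identical action sets is an illustration of the branching factor, not the paper's full game setup.

```python
# Sketch: the steering action set A_i and the joint action combinations of
# two ships, enumerated with itertools.product (illustrative only).
import itertools

ACTIONS = [-30, -20, -10, 0, 10, 20, 30]  # degrees, per the action set A_i

joint_actions = list(itertools.product(ACTIONS, ACTIONS))
# 7 x 7 = 49 combinations drive the branching factor of the game tree
```

This makes concrete why larger action sets lengthen the solution process: the number of joint actions grows as the product of the two sets' sizes.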

**Step 3: Profit function**. After determining the ships' collision avoidance priority and decision-making action sets, this study only considers the ships' offset in the profit, on the premise of ensuring that the ship can sail safely, and establishes a profit function. In collision avoidance, the lower the ship's drift, the lower the ship's cost, and the more the ship benefits throughout the game process. Set the initial position of the ship as (*x*0, *y*0), the speed of the ship as *v*, the heading angle as *ψ*, and the time interval of the ship game as *t*. This study only studies a series of games between the own ship and the other two ships under the special two-sides situation. It is assumed that one of the ships is an environmental variable, that is, the ship does not take any steering measures and maintains its direction and speed. If the ship sails on the planned course at a constant speed, the displacement increments of the abscissa *xl* and the ordinate *yl* of the ship in time *t* are:

$$x_l = \begin{cases} vt\sin(\psi), & 0^\circ \le \psi \le 90^\circ; \\ vt\cos(\psi - 90^\circ), & 90^\circ < \psi \le 180^\circ; \\ -vt\sin(\psi - 180^\circ), & 180^\circ < \psi \le 270^\circ; \\ -vt\cos(\psi - 270^\circ), & 270^\circ < \psi < 360^\circ. \end{cases} \tag{2}$$

$$y_l = \begin{cases} vt\cos(\psi), & 0^\circ \le \psi \le 90^\circ; \\ -vt\sin(\psi - 90^\circ), & 90^\circ < \psi \le 180^\circ; \\ -vt\cos(\psi - 180^\circ), & 180^\circ < \psi \le 270^\circ; \\ vt\sin(\psi - 270^\circ), & 270^\circ < \psi < 360^\circ. \end{cases} \tag{3}$$
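A minimal sketch of the quadrant-wise increments in Equations (2) and (3); the function signature is an assumption made for illustration.

```python
# Sketch of Equations (2)-(3): per-quadrant displacement increments (x_l, y_l)
# of a ship at speed v with heading psi (degrees) over time t.
import math

def increments(v, t, psi):
    """Return (x_l, y_l) for 0 <= psi < 360 degrees."""
    r = math.radians
    if 0 <= psi <= 90:
        return v * t * math.sin(r(psi)), v * t * math.cos(r(psi))
    if psi <= 180:
        return v * t * math.cos(r(psi - 90)), -v * t * math.sin(r(psi - 90))
    if psi <= 270:
        return -v * t * math.sin(r(psi - 180)), -v * t * math.cos(r(psi - 180))
    return -v * t * math.cos(r(psi - 270)), v * t * math.sin(r(psi - 270))
```

Note that the four branches are equivalent to the single global form *xl* = *vt* sin(*ψ*), *yl* = *vt* cos(*ψ*); the piecewise version mirrors the paper's quadrant decomposition.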

After the *i*-th decision is made, the coordinates (*xp*, *yp*) that the ship would reach according to the planned course at constant speed are:

$$x_p = x_0 + ix_l, \quad y_p = y_0 + iy_l. \tag{4}$$

During the ship's actual motion, the ship's expected position (*xi*, *yi*) will be affected by the last decision. If the ship's position after making the previous decision is (*xi*−1, *yi*−1), then:

$$x_i = x_{i-1} + x_m, \quad y_i = y_{i-1} + y_m \tag{5}$$

where *ψ<sup>i</sup>* represents the new heading angle of the ship after the *i*-th decision is executed:

$$\psi\_i = \begin{cases} \psi\_i & 0^\circ \le \psi\_i < 360^\circ \\ \psi\_i - 360^\circ & \psi\_i \ge 360^\circ \\ \psi\_i + 360^\circ & \psi\_i < 0^\circ \end{cases} . \tag{6}$$

However, environmental variables should be taken into account when considering collision avoidance strategies. Therefore, a relevant distance variable is introduced in combination with the collision risk *μ*. The unmanned ship will take avoidance measures when it reaches the nearest distance to the other ship. The influence of the avoidance measure on the profit function is as follows:

$$
\mu = \frac{1}{2} - \frac{1}{2}\sin\left[\frac{\pi}{d\_2 - d\_1}\left(\omega - \frac{d\_1 + d\_2}{2}\right)\right].\tag{7}
$$

Among them, *d*<sup>1</sup> and *d*<sup>2</sup> are the safety field value of the ship and the safe passing distance of the ship, respectively, and *ω* is the distance between the own ship and the environmental-variable ship.
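Equation (7) can be sketched as follows; the clamping of *μ* to 1 inside the safety domain and to 0 beyond the safe passing distance is an assumed boundary handling consistent with the sine term's values at *ω* = *d*<sup>1</sup> and *ω* = *d*<sup>2</sup>.

```python
# Sketch of Equation (7): collision-risk membership mu from the distance omega
# between the own ship and the environment-variable ship. d1 = safety field
# value, d2 = safe passing distance. Boundary clamping is an assumption.
import math

def collision_risk(omega, d1, d2):
    if omega <= d1:
        return 1.0   # inside the safety domain: the sine term reaches 1 here
    if omega >= d2:
        return 0.0   # beyond the safe passing distance: the sine term reaches 0
    return 0.5 - 0.5 * math.sin(math.pi / (d2 - d1) * (omega - (d1 + d2) / 2))
```

At the midpoint *ω* = (*d*<sup>1</sup> + *d*<sup>2</sup>)/2, the sine argument is 0 and *μ* = 0.5, so the function decreases smoothly from 1 to 0 across the transition band.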

To sum up, the ships' offset *S* in the *i*-th decision of the player is shown in Equation (8):

$$S = \begin{cases} \sqrt{\left[x_0 + vt\sin(\psi)(i-1) + vt\sin(\psi_i) - (x_0 + vt\sin(\psi)i)\right]^2 + \left[y_0 + vt\cos(\psi)(i-1) + vt\cos(\psi_i) - (y_0 + vt\cos(\psi)i)\right]^2}, & 0^\circ \le \psi \le 90^\circ; \\ \sqrt{\left[x_0 + vt\cos(\psi - 90^\circ)(i-1) + vt\cos(\psi_i - 90^\circ) - (x_0 + vt\cos(\psi - 90^\circ)i)\right]^2 + \left[y_0 - vt\sin(\psi - 90^\circ)(i-1) - vt\sin(\psi_i - 90^\circ) - (y_0 - vt\sin(\psi - 90^\circ)i)\right]^2}, & 90^\circ < \psi \le 180^\circ; \\ \sqrt{\left[x_0 - vt\sin(\psi - 180^\circ)(i-1) - vt\sin(\psi_i - 180^\circ) - (x_0 - vt\sin(\psi - 180^\circ)i)\right]^2 + \left[y_0 - vt\cos(\psi - 180^\circ)(i-1) - vt\cos(\psi_i - 180^\circ) - (y_0 - vt\cos(\psi - 180^\circ)i)\right]^2}, & 180^\circ < \psi \le 270^\circ; \\ \sqrt{\left[x_0 - vt\cos(\psi - 270^\circ)(i-1) - vt\cos(\psi_i - 270^\circ) - (x_0 - vt\cos(\psi - 270^\circ)i)\right]^2 + \left[y_0 + vt\sin(\psi - 270^\circ)(i-1) + vt\sin(\psi_i - 270^\circ) - (y_0 + vt\sin(\psi - 270^\circ)i)\right]^2}, & 270^\circ < \psi < 360^\circ. \end{cases} \tag{8}$$
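Conceptually, the offset *S* is the planar distance between the position actually reached after the *i*-th decision (*i* − 1 steps on the planned heading *ψ* plus one step on the new heading *ψi*) and the planned position (*i* steps on *ψ*). A compact sketch using the global form *xl* = *vt* sin(*ψ*), *yl* = *vt* cos(*ψ*), which is equivalent to the four quadrant branches, is:

```python
# Sketch of the offset S: distance between the actual position after the i-th
# decision and the planned position, using the global sin/cos form that the
# four quadrant branches reduce to.
import math

def offset(v, t, psi, psi_i, i):
    r = math.radians
    # actual: i-1 steps on the planned heading psi, then one step on psi_i
    x_actual = v * t * math.sin(r(psi)) * (i - 1) + v * t * math.sin(r(psi_i))
    y_actual = v * t * math.cos(r(psi)) * (i - 1) + v * t * math.cos(r(psi_i))
    # planned: i steps on the planned heading psi
    x_plan = v * t * math.sin(r(psi)) * i
    y_plan = v * t * math.cos(r(psi)) * i
    return math.hypot(x_actual - x_plan, y_actual - y_plan)
```

When the ship keeps its planned heading (*ψi* = *ψ*), the offset is zero, matching the intuition that drift, and hence cost, only accrues when the ship deviates.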

**Step 4: Collision avoidance decision**. In the dynamic game with complete information, the reverse solution from the final decision position is the most effective method to solve Nash equilibrium [22]. In order to facilitate understanding, the following complete information dynamic game is taken as an example to analyze.

Suppose there are two ships, No. 1 and No. 2, in which ship No. 1 can choose an action *a*<sup>1</sup> from the action set *A*<sup>1</sup> and ship No. 2 can choose an action *a*<sup>2</sup> from the action set *A*<sup>2</sup>. At the same time, *U*1(*a*1, *a*2) and *U*2(*a*1, *a*2) represent the profit values of ship No. 1 and ship No. 2, respectively. Based on the principle of the inverse solution method, it is assumed that ship No. 1 makes its action decision first, so the analysis starts from ship No. 2. Assuming that ship No. 1 first selects *a*<sup>1</sup> from its action set, ship No. 2 then needs to choose the action from its own action set that is most profitable for itself in the environment affected by the decision of ship No. 1. Therefore, the decision problem faced by ship No. 2 is denoted as max*U*2(*a*1, *a*2), *a*<sup>2</sup> ∈ *A*2, ∀*a*<sup>1</sup> ∈ *A*1; the optimal strategy made by ship No. 2 after ship No. 1 makes its action decision is denoted by *F*2(*a*1), and there is one and only one optimal strategy.

When inferring the decision made by ship No. 1 in the process of reverse solving, ship No. 1 predicts that ship No. 2 will take its next action according to ship No. 1's decision. Therefore, ship No. 1 only needs to find the action in its own action set that maximizes its benefit. The decision-making problem of ship No. 1 is thus written as max*U*1(*a*1, *F*2(*a*1)), *a*<sup>1</sup> ∈ *A*1. At this time, (*a*1, *a*2) represents the action combination that maximizes the profits of ship No. 1 and ship No. 2, which is the best combination of actions.
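The backward-induction step described above can be sketched as follows; the profit functions *U*1, *U*2 and the small action sets are hypothetical stand-ins, not the paper's profit model.

```python
# Sketch of backward induction: ship No. 2 best-responds with F2(a1), then
# ship No. 1 maximizes its own profit anticipating F2. Profits are made up.

A1 = [-10, 0, 10]   # illustrative action sets (degrees)
A2 = [-10, 0, 10]

def U1(a1, a2):  # hypothetical profit: penalize own turn and divergence
    return -abs(a1) - 0.1 * abs(a1 - a2)

def U2(a1, a2):
    return -abs(a2) - 0.1 * abs(a1 - a2)

def F2(a1):
    """Unique best response of ship No. 2 to a1 (ties broken by list order)."""
    return max(A2, key=lambda a2: U2(a1, a2))

best_a1 = max(A1, key=lambda a1: U1(a1, F2(a1)))
equilibrium = (best_a1, F2(best_a1))  # sub-game refined action pair
```

With these toy profits, neither ship gains by turning, so the procedure returns the pair of zero-degree actions; with the paper's offset-based profits and risk constraint, the same two-line maximization yields the collision avoidance decision.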

In summary, the choice of the ship collision avoidance strategy based on the perfect-information game is mainly divided into four steps. Firstly, the surrounding environment of the ship is checked during the voyage. Secondly, the occurrence of collision risk in the encounter situation is judged. Thirdly, the priority action sequence is determined. Finally, the optimal game strategy is calculated according to the action sequence. Specifically, the flow chart of the ship collision avoidance strategy in the perfect-information game is shown in Figure 5.

**Figure 5.** Flow chart of ship collision avoidance strategy in the perfect information game.

#### *3.2. Expansion of Unmanned Ship Collision Avoidance Game Tree*

The unmanned ship collision avoidance game model constructed in the previous section is expanded through the process of game tree expansion. The game tree designed in this study is a breadth-first search tree [23]. The nodes in the state space of the whole game tree can be divided into three categories: UNSEARCH nodes, OPEN node sets, and CLOSE node sets. Taking a game expansion tree with game round 3 as an example, node 1 is the head node, which contains the heading angle, offset, and collision risk of the unmanned ship in the current encounter situation. Node 1 is expanded to generate sub-nodes 2, 3, and 4. These three sub-nodes represent the new ship states formed by the combinations of different actions taken by the ships in the situation. The aforementioned three nodes (including all the information in the new state) are initialized, listed in sequence after the head node, and their parent pointers are set to node 1. After node 1 is expanded, the next node in the queue is expanded in sequence, namely node 2. Then, node 2 becomes the current node, and it is expanded based on its space state. If the collision risk degree in the space state of node 3 is greater than 0.5, there is a possibility of collision if the node in this space state is expanded, so node 3 is skipped and node 4 becomes the current node [24]. This continues by analogy until the end of the game round; the schematic diagram of the game algorithm is shown in Figure 6.

**Figure 6.** Game expansion tree of ship A and ship B.

The process of solving is to find the node with the largest profit in the last layer of nodes, that is, the node with the smallest sum of the offsets of the two ships, for which the collision risk value of the node must be less than 0.5. After finding the node with the greatest profit, its parent pointers can be followed for a reverse solution until the root node of the entire extended game tree is reached, and the final optimal solution is the action combination information contained in the sequence of game strategy combination nodes of the two ships.
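The expansion-and-pruning procedure of Section 3.2 can be sketched as follows; the state representation, risk function, and child generator are hypothetical stand-ins for the paper's full ship state.

```python
# Sketch of the breadth-first game-tree expansion: children whose collision
# risk exceeds 0.5 are pruned (as for node 3), and the chosen leaf is traced
# back through parent pointers. State/risk/children are illustrative only.
from collections import deque

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent

def expand(root_state, children_of, risk_of, rounds):
    frontier, layer = deque([Node(root_state)]), []
    for _ in range(rounds):
        layer = []
        while frontier:                      # expand one full layer (OPEN set)
            node = frontier.popleft()
            for child_state in children_of(node.state):
                if risk_of(child_state) > 0.5:
                    continue                 # prune the risky node
                layer.append(Node(child_state, parent=node))
        frontier = deque(layer)
    return layer                             # last layer of safe leaf nodes

def trace_back(leaf):
    """Follow parent pointers from a leaf back to the root, as in the paper."""
    path = []
    while leaf is not None:
        path.append(leaf.state)
        leaf = leaf.parent
    return list(reversed(path))
```

In use, the last layer returned by `expand` would be scanned for the node with the smallest summed offset, and `trace_back` then recovers the two ships' action combination sequence.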

#### *3.3. Collision Risk Fuzzy Credibility Number*

After the collision avoidance game, the ship collision risk can be determined by using the fuzzy credibility number of the ship collision risk [25]. There are many methods to calculate ship collision risk: the fuzzy mathematical calculation method, the BP neural network method, the hazard mode immune control algorithm, the bacterial foraging algorithm, and so on. The fuzzy mathematical method has high calculation accuracy. The BP neural network method has a strong self-learning ability and small calculation error, but a high failure probability and long calculation time. Therefore, this study uses the fuzzy mathematics method to measure the ship collision risk.

In the introduced encounter situation, the judgment of whether there is a danger of collision between ships mainly depends on the distance to the closest point of approach *DCPA*, the time to the closest point of approach *TCPA*, the ship speed ratio *K* between the ships, the distance *D* between the ships, the azimuth angle *θ* of the target ship relative to the own ship, and other related factors. In this study, the method of fuzzy mathematics is used to calculate the collision risk index (*CRI*) [26]. When *CRI* = 0, it means that there is no danger of collision between the two ships. When *CRI* = 1, it means that the collision cannot be avoided. Let *UDCPA*, *UTCPA*, *Uθ*, *UD*, and *UK* be the risk membership degrees of the *DCPA*, the *TCPA*, the azimuth angle between the two ships, the distance *D* between the two ships, and the shipping speed ratio *K*, respectively, each belonging to [0, 1]. Then:

$$\begin{array}{rl} CRI = & a\left\{\dfrac{1}{2} - \dfrac{1}{2}\sin\left[\dfrac{\pi}{d_2 - d_1}\left(DCPA - \dfrac{d_1 + d_2}{2}\right)\right]\right\} \\[1ex] & + b\left[\left(\dfrac{t_2 - |TCPA|}{t_2 - t_1}\right)^2\right] \\[1ex] & + c\left\{\dfrac{1}{2}\left[\cos(\theta - 19^\circ) + \sqrt{\dfrac{440}{289} + \cos^2(\theta - 19^\circ)}\right] - \dfrac{5}{17}\right\} \\[1ex] & + d\left[\left(\dfrac{H_1 \cdot H_2 \cdot \left[1.7\cos(\theta - 19^\circ) + \sqrt{4.4 + 2.89\cos^2(\theta - 19^\circ)}\right] - D}{H_1 \cdot H_2 \cdot \left[1.7\cos(\theta - 19^\circ) + \sqrt{4.4 + 2.89\cos^2(\theta - 19^\circ)}\right] - H_1 \cdot H_2 \cdot DLA}\right)^2\right] \\[1ex] & + \dfrac{e}{1 + \dfrac{2}{K\sqrt{K^2 + 1 + 2K|\sin(q_1 - q_2)|}}}. \end{array} \tag{9}$$

Among them, *d*<sup>1</sup> and *d*<sup>2</sup> are the safety field value (safety threshold) of the ship and the safe passing distance of the ship, respectively, and *a* + *b* + *c* + *d* + *e* = 1. The ship collision time *t*<sup>1</sup> and the ship attention time *t*<sup>2</sup> are obtained as:

$$t_1 = \begin{cases} \dfrac{\sqrt{D_1^2 - DCPA^2}}{V_r}, & DCPA \le D_1; \\[1ex] \dfrac{D_1 - DCPA}{V_r}, & DCPA > D_1, \end{cases} \tag{10}$$

and:

$$t_2 = \frac{\sqrt{D_2^2 - DCPA^2}}{V_r}. \tag{11}$$

It is noteworthy that, in Equations (10) and (11), *D*<sup>1</sup> represents the closest avoidance distance and *D*<sup>2</sup> represents the distance at which the approaching ship should take avoidance actions. *Vr* is defined as the velocity of the incoming ship relative to the own ship. Meanwhile, the schematic diagram of the latest avoidance distance *D*<sup>1</sup> is shown in Figure 7, where *DLA* is defined as the latest distance to turn the rudder. Here, the value of *DLA* is taken as 12 times the length of the ship for convenience [27]. In particular, under the conditions *DCPA* ≤ *d*1, 0 ≤ |*TCPA*| ≤ *t*<sup>1</sup>, and *D* ≤ *D*1, the values of *UDCPA*, *UD*, *UTCPA*, and *CRI* are all 1; in this situation, the ships collide. Meanwhile, under the conditions *d*<sup>2</sup> < *DCPA* and *D*<sup>2</sup> ≤ *D*, the values of *UDCPA*, *UD*, and *UTCPA* are all 0, which means there is no danger of collision between the two ships.

**Figure 7.** Geometric diagram of the latest avoidance distance.
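The time thresholds in Equations (10) and (11) can be sketched as follows; the function signatures and example values are assumptions for illustration, and *D*<sup>2</sup> ≥ *DCPA* is assumed so that the square root in (11) is real.

```python
# Sketch of Equations (10)-(11): collision time t1 and attention time t2 from
# DCPA, the closest avoidance distance D1, the action distance D2, and the
# relative speed Vr (illustrative values; assumes D2 >= DCPA).
import math

def t1(dcpa, d1, vr):
    if dcpa <= d1:
        return math.sqrt(d1 ** 2 - dcpa ** 2) / vr
    return (d1 - dcpa) / vr   # negative when the ship is already beyond D1

def t2(dcpa, d2, vr):
    return math.sqrt(d2 ** 2 - dcpa ** 2) / vr
```

These thresholds feed the *TCPA* membership term of Equation (9): |*TCPA*| values between *t*<sup>1</sup> and *t*<sup>2</sup> produce an intermediate risk contribution.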

#### **4. Illustrative Example**

To explain and verify the aforementioned extensive-form game model, an illustrative example is given as follows.

#### *4.1. Problem Introduction*

On 26 March 2019, the Xinde Maritime Network reported that, on the 24th local time, a serious ship collision accident occurred in the port of Fujairah, United Arab Emirates: a very large tanker collided with an LNG carrier. The accident is a typical course-conflict scenario between two ships, as shown in Figure 8.

**Figure 8.** Course conflict scenario between two ships.

In such a two-sides clamping situation, the ships can be distinguished by the conflict of their routes during the encounter: the ships on the blue route are the right-give-way vessels, and the purple-route vessels below the right-give-way vessels are the left-give-way vessels. The pink-route ships are treated as environmental parameter variables in the whole game situation. In this collision avoidance game, the action set of the ship on the green route is {10◦, 20◦, 30◦}, the action set of the ship on the purple route is {−30◦, −20◦, −10◦}, and the action combinations of the two ships are {(10◦, −30◦), (10◦, −20◦), . . . , (30◦, −20◦), (30◦, −10◦)}. In the next round, the action combination changes, and with it the action set; otherwise, the ship would deviate too far from its course, which is not beneficial.
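The action combinations above are simply the Cartesian product of the two action sets. A minimal sketch (list names are illustrative, taken from the route colors in the text):

```python
from itertools import product

green_actions = [10, 20, 30]       # green-route ship: starboard turn angles (deg)
purple_actions = [-30, -20, -10]   # purple-route ship: port turn angles (deg)

# All 9 per-round action combinations: (10, -30), (10, -20), ..., (30, -10)
combinations = list(product(green_actions, purple_actions))
```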

Consider the two-sides clamping scenario combined with the head-on situation: the target ships on the two sides of the own ship approach at a relative course of 174◦ ≤ <sup>Δ</sup>*<sup>C</sup>* ≤ <sup>186</sup>◦. At this point, the ship is in the head-on case of the two-sides clamping scenario. A schematic of the head-on scenario is shown in Figure 9.

**Figure 9.** Head-on situation of two-sides clipping scenario schematic diagram.

#### *4.2. Simulation Process and Analysis*

In this section, two ships, the "Own Ship" and the "Target Ship", are considered in the simulation. The "Own Ship" has a length of 105 m, a maximum speed of 18 kn, and a gross tonnage of 6000 tons, whereas the "Target Ship" has a length of 139.8 m, a maximum speed of 13.5 kn, and a gross tonnage of 6000 tons.

For convenience, the number of game rounds is set to 3. The positions of the two ships are initialized according to their parameters. According to the 1972 International Regulations for Preventing Collisions at Sea, "the two ships should each take a right turn to avoid collision" in an encounter situation; ship A is taken as the own ship. The relevant parameter variables are given in Table 1. By Equation (9), the original collision risk between the two ships is 0.5911. Each ship then starts to make a collision avoidance decision at 0 s [28]. In the first round, the own ship takes a 10◦ right turn and the target ship takes a 20◦ right turn. At the 300 s node of the second round, the own ship takes a 10◦ right turn, the target ship takes a 20◦ left turn, and the collision risk is 0.4635. At the 600 s node of the third round, the own ship takes a 10◦ left turn, the target ship takes a 10◦ left turn, and the collision risk is 0.4329. Because both ships have completed the collision avoidance operation by the third round and there is no risk of subsequent collision, the courses are readjusted and the original courses are restored. The simulation results of the confrontation are shown in Table 2. The optimal collision avoidance sequence composed of the obtained sub-game Nash equilibria is {(10◦, 20◦), (0◦, 0◦), (0◦, 0◦)}. All collision avoidance behaviors are consistent with COLREGS.
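The round-by-round selection can be mimicked with a greedy sketch that, in each round, picks the action pair with the lowest risk under an assumed CRI function. The paper itself computes a sub-game Nash equilibrium, so this is only a simplified illustration with a toy risk function:

```python
def pick_round_actions(own_actions, target_actions, cri):
    """Greedy stand-in for one game round: choose the action pair that
    minimizes the supplied collision risk index function cri(own, tgt)."""
    return min(((o, t) for o in own_actions for t in target_actions),
               key=lambda pair: cri(*pair))

# Toy risk function (purely illustrative): risk grows with net course conflict.
toy_cri = lambda o, t: abs(o + t)
best = pick_round_actions([10, 20, 30], [-30, -20, -10], toy_cri)
```

Under the toy risk function, the first minimizing pair found is (10◦, −10◦); the real model would rank pairs by Equation (9) instead.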

**Table 1.** Related parameter variables.

| Parameter | Value | Parameter | Value | Parameter | Value |
|---|---|---|---|---|---|
| *Vr* | 24 kn | *t*<sup>2</sup> | 1800 s | *D* | 6 n mile |
| *U<sup>θ</sup>* | 0.9558 | *ϕ<sup>r</sup>* | 180◦ | *DCPA* | 0 n mile |
| *D*<sup>1</sup> | 0.9057 n mile | *UD* | 0 | *d*<sup>1</sup> | 1.12 n mile |
| *TCPA* | 900 s | *D*<sup>2</sup> | 4.278 n mile | *UK* | 0.4143 |
| *d*<sup>2</sup> | 2.21 n mile | *UDCPA* | 1 | *t*<sup>1</sup> | 135.874 s |
| *CRI* | 0.5912 | *θ* | 0◦ | *UTCPA* | 0.2926 |



#### **5. Conclusions**

This study addresses the decision-making problem of collision avoidance for unmanned ships at sea in the two-sides clamping situation. It introduces the decision process of collision avoidance of unmanned ships at sea based on the extensive-form game model and verifies the effectiveness of collision avoidance by using fuzzy credibility numbers. Specifically, the main innovations of this study are summarized as follows.

Firstly, this study proposes a two-sides clamping intelligent collision avoidance strategy for unmanned ships. This strategy can provide real-time collision avoidance measures for unmanned ships at sea. The example analysis shows that this strategy can effectively improve the efficiency of collision avoidance of unmanned ships.

Secondly, a simulation experiment is carried out with a navigation simulator to realize the ship's extensive-form game collision avoidance decision-making system, and two unmanned ships are simulated in the two-sides clamping situation. For the intelligent collision avoidance problem of unmanned ships in this situation, this study establishes a dynamic collision avoidance game model for ships based on the extensive-form game model, so that an unmanned ship can take the optimal collision avoidance action when clamped on two sides.

Thirdly, a novel collision risk fuzzy credibility number is used to calculate the ship collision risk through a comprehensive fuzzy assessment at the same time instant. The evaluation indicators include *DCPA*, *TCPA*, the distance between the two ships, the relative bearing between the two ships, the speed ratio of the two ships, and other factors.

Moreover, by using fuzzy credibility numbers, the decision-making efficiency of collision avoidance of ships in uncertain environments can be improved. In future work, fuzzy credibility numbers can be considered in more decision-making situations in shipping management. Furthermore, Fermatean fuzzy sets [29] can be applied to the collision avoidance process of unmanned ships: considering the multiple fuzzy factors that affect the collision avoidance of unmanned ships at sea, Fermatean fuzzy sets can be combined with the extensive-form game to support the intelligent collision avoidance of unmanned ships at sea.

**Author Contributions:** Conceptualization, F.Z.; Data curation, M.L.; Formal analysis, R.W.; Investigation, Y.C.; Writing—original draft, H.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work is partially supported by Shanghai Pujiang Program (2019PJC062), the Natural Science Foundation of Shandong Province (ZR2021MG003), the Research Project on Undergraduate Teaching Reform of Higher Education in Shandong Province (No. Z2021046), the National Natural Science Foundation of China (51508319), and the Nature and Science Fund from Zhejiang Province Ministry of Education (Y201327642).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## **Study on Chaotic Multi-Attribute Group Decision Making Based on Weighted Neutrosophic Fuzzy Soft Rough Sets**

**Fu Zhang 1,2 and Weimin Ma 1,\***


**Abstract:** In this article, we propose multi-attribute group decision making (MAGDM) under a new scenario or condition, named Chaotic MAGDM, in which not only the weights of the decision makers (DMs) and the weights of the decision attributes are considered, but also the familiarity of the DMs with the attributes. We then apply weighted neutrosophic fuzzy soft rough set theory to Chaotic MAGDM and propose a new algorithm for MAGDM. Moreover, we provide a case study to demonstrate the application of the algorithm. Our contributions to the literature are as follows: (1) familiarity is incorporated into MAGDM for the first time in the context of neutrosophic fuzzy soft rough sets; (2) a new MAGDM model based on neutrosophic fuzzy soft rough sets is designed; (3) a sorting/ranking algorithm based on neutrosophic fuzzy soft rough sets is constructed.

**Keywords:** multi-attribute group decision making; fuzzy soft rough sets; neutrosophic fuzzy soft rough sets

**MSC:** 90B50

#### **1. Introduction**

Multi-attribute decision making (MADM) is an important branch of modern decision theory and methodology with a wide range of practical contexts, such as human resource performance assessment, economic performance assessment, political election assessment, military performance assessment, etc. However, due to the limitations of human knowledge, the specialization of professions, and the diversity and complexity of real-world decision making, a single decision maker (DM) cannot always identify the optimal option. As a result, in most MADM problems, decision makers (DMs) from diverse sectors, areas of expertise, or knowledge backgrounds are frequently required to collaborate in order to reach more scientifically sound conclusions; this is multi-attribute group decision making (MAGDM). In addition, there is a lot of uncertainty and ambiguity in practical MAGDM, so the study of MAGDM under fuzzy scenarios has become a popular research direction in recent years. Because different DMs have different professional backgrounds, areas of knowledge, and expertise, how to engage DMs to evaluate the attributes of the alternatives in their areas of expertise and familiarity is an issue that must be considered in the decision making. In the existing literature, there are two common approaches to address this concern: one is to assign weights to DMs, the other is to group DMs according to certain rules.

• Assigning weights to DMs.

Liu et al. [1] proposed a variable weighting approach for MAGDM problems under interval-valued intuitionistic fuzzy sets by considering the weights of DMs and attributes together. Yu et al. [2] developed a novel consensus-reaching process for MAGDM based on hesitant fuzzy linguistic term sets (HFLTSs), which not only deals with multi-granular HFLTSs but also considers the weight vectors of DMs and attributes in the proposed consensus model. Liu et al. [1] also presented a hybrid approach based on variable weights for multi-attribute group decision making, and so on [3–12].

**Citation:** Zhang, F.; Ma, W. Study on Chaotic Multi-Attribute Group Decision Making Based on Weighted Neutrosophic Fuzzy Soft Rough Sets. *Mathematics* **2023**, *11*, 1034. https://doi.org/10.3390/math11041034

Academic Editors: Yanhui Guo and Jun Ye

Received: 29 December 2022 Revised: 25 January 2023 Accepted: 14 February 2023 Published: 18 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

• Grouping DMs according to certain rules.

For example, Su et al. [13] proposed a MAGDM approach for the evaluation of and self-confidence in online learning platforms based on probabilistic linguistic term sets. Sun et al. [14] provided diverse fuzzy multi-granulation rough sets based on binary relations for MAGDM. Sun et al. [15] analyzed the diversified MAGDM problem with personal preference parameters, etc. [16–19].

#### **2. Comparison and Motivation**

To date, research on MAGDM in fuzzy scenarios has produced a very large number of theoretical and practical results. All these studies have investigated the relationship between DMs and attributes from different perspectives, either by assigning weights or by grouping; however, the following issues still need to be further explored.


The following situations are often encountered in decision making. For example, in a large-scale fire rescue, an important attribute of the rescue plan is the ability to quickly rescue trapped people. This attribute is related not only to the cause of the fire but also to the structure of the building, the construction materials, etc. Therefore, both fire experts and building experts should be important evaluators of this attribute, and it is obviously unreasonable to have only one expert group (e.g., the fire expert group) evaluate it. In other words, simply grouping experts would discard many valuable evaluations. Furthermore, fires are related to weather conditions, yet a meteorologist, however well known as an expert, will give a less important evaluation of this attribute if he or she is not familiar with the fire scene. On the contrary, the person who built the building, even if he is just a worker, will give a very important evaluation, because he is more familiar with the structure of the building at the fire site.

This suggests that the weight of a DM's evaluation is related to his or her familiarity with the attributes, rather than depending only on the weight assigned to the DM. In other words, in MAGDM, not only the weight of the DM and the weight of the attribute should be considered, but also the familiarity of the DM with the attributes.

There will be a lot of uncertainty and fuzziness in the actual MAGDM problem, and also the familiarity of the DM with the decision attributes will change in different scenarios. Therefore, in the absence of an explicit method to determine the familiarity between DM and decision attributes, using fuzzy theory to describe the familiarity between DM and attributes is a good choice. In order to allow DMs to focus their attention on evaluation scoring without considering the limitations of scoring values, we chose NFN (Neutrosophic Fuzzy Number) for evaluation scoring.

In summary, we propose a MAGDM with a new scenario or condition, in which not only the weights of the DMs and the weights of the decision attributes are considered, but also the familiarity of the DMs with the attributes. This is Chaotic MAGDM (CMAGDM), proposed by Zhang et al. [20]. We then apply neutrosophic fuzzy soft rough set theory to CMAGDM and propose a new approach for CMAGDM. Our contributions are mainly as follows.


The remainder of this paper is structured as follows: Section 3 briefly introduces the basic concepts and framework of MAGDM and provides a brief overview of fuzzy theory and several of its key concepts. In Section 4, combining the neutrosophic fuzzy soft rough set and CMAGDM, we provide a new Chaotic MAGDM model based on neutrosophic fuzzy soft rough sets. A case study and numerical analysis of the proposed model are presented in Section 5. Finally, conclusions are given in Section 6.

#### **3. Theoretical Background**

In this section, first, we will review the basic concepts and framework of MAGDM. Second, we will provide a brief overview of fuzzy theory. Finally, we will review several important concepts in fuzzy theory, as well as their basic rules and properties.

#### *3.1. The Basic Concepts and Framework of MAGDM*

The problem of selecting the best alternative from a list of potential solutions, based on a set of attributes or criteria, can be summarized as a decision problem. In real-world decision making, there will often be a group of DMs, and the corresponding MADM becomes multi-attribute group decision making (MAGDM). A MAGDM problem is represented by the following notation [20]:

(*k* = 1, 2, ··· , *l*; *i* = 1, 2, ··· , *m*; *j* = 1, 2, ··· , *n*);

$$X(e_k) = \begin{pmatrix} x_{11}^k & x_{12}^k & \cdots & x_{1n}^k \\ x_{21}^k & x_{22}^k & \cdots & x_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1}^k & x_{m2}^k & \cdots & x_{mn}^k \end{pmatrix}, \quad k = 1, 2, \cdots, l, \tag{1}$$

where the rows correspond to the alternatives *a*1, ··· , *am* and the columns to the attributes *c*1, ··· , *cn*.

In order to describe MAGDM more clearly and concisely, a MAGDM problem can usually be represented by a sextuple ⟨*A*, *C*, *E*, *w*, *τ*, *X*⟩, that is:

$$MAGDM = \langle A, \mathbb{C}, E, w, \tau, X \rangle. \tag{2}$$
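For concreteness, the family of decision matrices *X*(*ek*) in Equation (1) can be stored as a three-level nested structure indexed by DM, alternative, and attribute. A minimal sketch with illustrative sizes and a placeholder score:

```python
# l DMs, m alternatives, n attributes; X[k][i][j] holds x_ij^k,
# DM e_k's evaluation of alternative a_i on attribute c_j.
l, m, n = 3, 4, 5
X = [[[0.0 for _ in range(n)] for _ in range(m)] for _ in range(l)]

X[0][1][2] = 0.75   # e.g., DM e_1 scores alternative a_2 on attribute c_3
```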

#### *3.2. A Brief Overview of Fuzzy Theory*

In order to better combine uncertainty in MAGDM with fuzzy theory, we will make a brief review of the development of fuzzy set (FS).

Zadeh [21] (1965) first proposed fuzzy theory. It breaks through the limitations of classical set theory by introducing a membership function to represent uncertainty. Atanassov [22] (1983) suggested a generalization of fuzzy sets in which both a degree of membership (*μ*) and a degree of non-membership (*ν*) describe the relation of an element to a set, with the sum of the two degrees at most 1 (*μ* + *ν* ≤ 1); this is the intuitionistic fuzzy set (IFS). However, an IFS fails when the sum of these degrees is more than 1. Yager [23] (2016) therefore developed the concept of q-rung orthopair fuzzy sets (*q*-ROFS) as an efficient way to express the vagueness of MADM problems. In a *q*-ROFS, the sum of the two degrees may exceed 1; it only needs to satisfy the condition *μ<sup>q</sup>* + *ν<sup>q</sup>* ≤ 1 (*q* ≥ 1). Clearly, for *q* = 1 it is an IFS, for *q* = 2 it is a Pythagorean fuzzy set (Yager and Abbasov [24] 2013) (P*y*FS), and for *q* = 3 it is a Fermatean fuzzy set (Senapati and Yager [25] 2020) (FFS); thus, *q*-ROFS generalizes the IFS, P*y*FS, and FFS. Sometimes human opinions involve more types of answers: yes, abstain, no, or refusal. Voting is a good example of such a situation, as voters may be divided into four groups: those who vote for, abstain, vote against, or refuse to vote. To handle this, Cuong and Kreinovich [26,27] in 2013 introduced the notion of the picture fuzzy set (PFS), a direct extension of the FS and the intuitionistic fuzzy set (IFS). In a PFS, three dimensions are considered simultaneously: the degree of positive membership (*μ*), the degree of neutral membership (*η*), and the degree of negative membership (*ν*), which satisfy the condition *μ* + *η* + *ν* ≤ 1. The structure of PFS is of great importance, as it can deal with human opinion efficiently.

It is observed that the constraint on PFS prevents us from assigning values freely; in simple words, the domain of PFS is restricted. The concepts of the spherical fuzzy set (SFS) and T-spherical fuzzy set (T-SFS) were introduced as generalizations of FS, IFS, and PFS by Mahmood et al. [28] in 2019. In a T-SFS, the sum of the three degrees may exceed 1; instead, they need to satisfy the condition *μ<sup>t</sup>* + *η<sup>t</sup>* + *ν<sup>t</sup>* ≤ 1 (*t* ≥ 1). Obviously, when *t* = 1, the T-SFS degenerates to a PFS, and when *t* = 2, the T-SFS is an SFS. The neutrosophic set (NS) was introduced by Smarandache [29]; in an NS, only the condition *μ<sup>A</sup>* (*x*) + *η<sup>A</sup>* (*x*) + *ν<sup>A</sup>* (*x*) ≤ 3 must be satisfied. There are many other fuzzy sets, such as the fuzzy multi-set (FMS) [30], interval-valued fuzzy set (IVFS) [31], hesitant fuzzy set (HFS) [32], hybrid fuzzy sets, and so on. These theories play a very important role in practice and theory; however, due to limited space and the focus of our article, we do not repeat them here.

According to the above review and analysis of FSs, we can clearly draw the relationship between different FSs, as shown in Figure 1. There are two ideas for the promotion of the FS, one is from the dimension of the variable, the other is from the domain of the variable.

**Figure 1.** Dimension-Based Fuzzy Set Classification [21–26,28,29].

Researchers have expanded the fuzzy set from the perspective of the dimension of the variable and of its domain. These works have played a very important role both in the development of fuzzy set theory and in practical applications. However, these generalizations of fuzzy sets only consider the dimension and domain of fuzzy variables and do not consider the ambiguity of the attribute. Here we use the voting example of [26] to illustrate our point. Suppose there are two candidates *p*<sup>1</sup> and *p*<sup>2</sup> participating in the campaign. A voter is very familiar with *p*1, but only knows about *p*<sup>2</sup> from his campaign speech. Now, the voter evaluates the two candidates using the PFS method and gives them the same score. Clearly, it is unreasonable to consider these two evaluations as identical, given the difference in familiarity with the candidates. Phenomena such as this are frequently encountered in MAGDM problems. Fortunately, in addition to fuzzy sets, there are soft sets [33] and rough sets [34] that can describe unclear and fuzzy relations. In particular, combinations of these uncertainty theories can describe more details in MAGDM. These include the fuzzy soft set (FSS) [35], fuzzy soft rough set (FSRS), picture fuzzy soft rough set (PFSRS), spherical fuzzy soft rough set (SFSRS), T-spherical fuzzy soft rough set (T-SFSRS) [36], and so on.

#### *3.3. Concepts of Fuzzy Set*

Classes and sets in the traditional mathematical sense do not include things such as "the class of all real numbers which are significantly bigger than 1," "the class of attractive ladies," or "the class of tall men." However, it is still true that such loosely defined "classes" are crucial to human thought. In essence, the source of imprecision is the absence of well-specified criteria for class membership rather than the presence of random variables. Zadeh [21] explored a concept which may be of use in dealing with "classes" of the type cited above: the fuzzy set (FS), a "class" with a continuum of grades of membership. It is defined as follows:

**Definition 1** (Zadeh [21])**.** *A fuzzy set A on a universe X is an object of the form*

$$A = \{ (\mathbf{x}, \mu_A(\mathbf{x})) \mid \mathbf{x} \in X \}, \tag{3}$$

*where μ<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of membership of x in A". The variable μ<sup>A</sup>* (*x*) *is called a Fuzzy Number (FN).*

Intuitionistic fuzzy set (IFS) was developed by Atanassov [22] and is suitable for situations in which there is uncertainty about the degree of membership of an element in a defined set: each element in an IFS has a membership degree and a nonmembership degree between 0 and 1 [37].

**Definition 2** (Atanassov [22])**.** *An intuitionistic fuzzy set A on a universe X is an object of the form*

$$A = \{ (\mathbf{x}, \mu_A(\mathbf{x}), \nu_A(\mathbf{x})) \mid \mathbf{x} \in X \}, \tag{4}$$

*where μ<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of membership of x in A", ν<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of non-membership of x in A", and where μ<sup>A</sup>* (*x*) *and ν<sup>A</sup>* (*x*) *satisfy the following condition:*

$$\forall \mathbf{x} \in X, \quad \mu_A(\mathbf{x}) + \nu_A(\mathbf{x}) \le 1.$$

*The pair* (*μ<sup>A</sup>* (*x*), *ν<sup>A</sup>* (*x*)) *is called an Intuitionistic Fuzzy Number (IFN).*
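The IFN constraint of Definition 2 is easy to check programmatically; the following is a minimal sketch (the function name is illustrative):

```python
def is_valid_ifn(mu, nu):
    """Definition 2's constraint: both degrees lie in [0, 1]
    and their sum does not exceed 1."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu + nu <= 1.0
```

For instance, the pair (0.6, 0.3) qualifies as an IFN, while (0.7, 0.5) does not, since 0.7 + 0.5 > 1.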

When we face human opinions involving more types of answers, such as yes, abstain, no, and refusal, Cuong and Kreinovich [26] introduced the concept of the picture fuzzy set (PFS), and Mahmood et al. [28] provided the concept of the T-spherical fuzzy set (T-SFS), both of which are direct extensions of the fuzzy set (FS) and the intuitionistic fuzzy set (IFS).

**Definition 3** (Cuong and Kreinovich [26])**.** *A picture fuzzy set A on a universe X is an object of the form*

$$A = \{ (\mathbf{x}, \mu_A(\mathbf{x}), \eta_A(\mathbf{x}), \nu_A(\mathbf{x})) \mid \mathbf{x} \in X \} \tag{5}$$

*where μ<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of positive membership of x in A", η<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of neutral membership of x in A" and ν<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of negative membership of x in A", and where μ<sup>A</sup>* (*x*)*, η<sup>A</sup>* (*x*) *and ν<sup>A</sup>* (*x*) *satisfy the following condition:*

$$\forall \mathbf{x} \in X, \quad \mu_A(\mathbf{x}) + \eta_A(\mathbf{x}) + \nu_A(\mathbf{x}) \le 1.$$

*Then for x* ∈ *X, let πA*(*x*) = 1 − (*μ<sup>A</sup>* (*x*) + *η<sup>A</sup>* (*x*) + *ν<sup>A</sup>* (*x*))*, πA*(*x*) *could be called the "degree of refusal membership of x in A". Let PFS*(*X*) *denote the set of all the picture fuzzy sets on a universe X. A triplet* (*μ<sup>A</sup>* (*x*), *η<sup>A</sup>* (*x*), *ν<sup>A</sup>* (*x*)) *can be referred to as a Picture Fuzzy Number (PFN).*

**Definition 4** (Mahmood et al. [28])**.** *A T-spherical fuzzy set A on a universe X is an object of the form*

$$A = \{ (\mathbf{x}, \mu_A(\mathbf{x}), \eta_A(\mathbf{x}), \nu_A(\mathbf{x})) \mid \mathbf{x} \in X \} \tag{6}$$

*where μ<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of positive membership of x in A", η<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of neutral membership of x in A", and ν<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "degree of negative membership of x in A", and where μ<sup>A</sup>* (*x*)*, η<sup>A</sup>* (*x*) *and ν<sup>A</sup>* (*x*) *satisfy the following condition:*

$$\forall \mathbf{x} \in X, \quad \mu_A^t(\mathbf{x}) + \eta_A^t(\mathbf{x}) + \nu_A^t(\mathbf{x}) \le 1.$$

*Then for x* ∈ *X, let* $\pi_A(x) = 1 - \left(\mu_A^t(x) + \eta_A^t(x) + \nu_A^t(x)\right)$*;* $\pi_A(x)$ *could be called the "degree of refusal membership of x in A". Let T-SFS*(*X*) *denote the set of all the T-spherical fuzzy sets on a universe X. A triplet* $(\mu_A(x), \eta_A(x), \nu_A(x))$ *can be identified as a T-spherical fuzzy number (T-SFN). If t* = 2*, the T-spherical fuzzy set is called a spherical fuzzy set (SFS), and the corresponding SFS*(*X*) *denotes the set of all the spherical fuzzy sets on a universe X; a triplet* $(\mu_A(x), \eta_A(x), \nu_A(x))$ *can then be referred to as a spherical fuzzy number (SFN).*

Smarandache [29] generalized intuitionistic fuzzy sets (IFSs) to neutrosophic sets (NSs). A neutrosophic set (NS) contains three parameters: truth membership function, indeterminacy membership function, and falsity membership function. Unlike the PFS and T-SFS, the NS has a broader definition domain, giving DMs more options for evaluating scores in MAGDM.

**Definition 5** (Smarandache [29])**.** *A neutrosophic set A on a universe X is an object of the form*

$$A = \{ (\mathbf{x}, \mu_A(\mathbf{x}), \eta_A(\mathbf{x}), \nu_A(\mathbf{x})) \mid \mathbf{x} \in X \} \tag{7}$$

*where μ<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "truth membership function of x in A", η<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "indeterminacy membership function of x in A" and ν<sup>A</sup>* (*x*) ∈ [0, 1] *is called the "falsity membership function of x in A", and where μ<sup>A</sup>* (*x*)*, η<sup>A</sup>* (*x*) *and ν<sup>A</sup>* (*x*) *satisfy the following condition:*

$$\forall \mathbf{x} \in X, \quad \mu_A(\mathbf{x}) + \eta_A(\mathbf{x}) + \nu_A(\mathbf{x}) \le 3.$$

*A triplet* (*μ<sup>A</sup>* (*x*), *η<sup>A</sup>* (*x*), *ν<sup>A</sup>* (*x*)) *can be referred to as a Neutrosophic Fuzzy Number (NFN).*

By comparing the above concepts, it is easy to conclude that the NS has a broader field of definition, thus allowing the DM to focus more on scoring the options without having to think too much about the constraints that the evaluation scores need to satisfy. It is with this in mind that the Neutrosophic Fuzzy Number (NFN) is chosen for scoring in this paper.
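The widening domains discussed above can be made concrete with small constraint checks, sketched below (function names are illustrative). For example, the triplet (0.8, 0.7, 0.6) violates the PFS and SFS constraints but is an admissible NFN:

```python
def is_valid_pfn(mu, eta, nu):
    """Picture fuzzy number (Definition 3): mu + eta + nu <= 1."""
    return all(0.0 <= d <= 1.0 for d in (mu, eta, nu)) and mu + eta + nu <= 1.0

def is_valid_tsfn(mu, eta, nu, t=2):
    """T-spherical fuzzy number (Definition 4): sum of t-th powers <= 1.
    With the default t = 2 this is the spherical fuzzy number check."""
    return all(0.0 <= d <= 1.0 for d in (mu, eta, nu)) and mu**t + eta**t + nu**t <= 1.0

def is_valid_nfn(mu, eta, nu):
    """Neutrosophic fuzzy number (Definition 5): sum <= 3,
    i.e., each degree independently in [0, 1]."""
    return all(0.0 <= d <= 1.0 for d in (mu, eta, nu))
```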

#### *3.4. The Concept of Rough Set*

Pawlak [34] introduced the concept of rough sets (RS) in 1982, which can handle uncertainty, imprecision, and ambiguity in sets. It is defined as follows.

**Definition 6** (Pawlak [34])**.** *Let R be an equivalence relation on the universe X (X* ≠ ∅*), and let* (*X*, *R*) *be a Pawlak approximation space. A subset A* ⊆ *X is called definable if* $\underline{R}(A) = \overline{R}(A)$*; in the opposite case, i.e., if* $\overline{R}(A) - \underline{R}(A) \neq \emptyset$*, A is said to be a rough set, where the two operations are defined as:*

$$\underline{R}(A) = \{ \mathbf{x} \in X \mid [\mathbf{x}]_R \subseteq A \} \tag{8}$$

$$\overline{R}(A) = \{ \mathbf{x} \in X \mid [\mathbf{x}]_R \cap A \neq \emptyset \} \tag{9}$$

As an illustration, let us consider the following example (Example 1).

**Example 1.** *Table 1 is an information system of an RS. The universe X* = {*ai*|*i* = 1, 2, ··· , 8} *and A* = {*a*2, *a*3, *a*4, *a*5, *a*7}*. Suppose the equivalence relation R is that the attributes c*<sup>2</sup> *and c*<sup>4</sup> *have the same value; then* [*X*]*<sup>R</sup>* = {{*a*1}, {*a*2, *a*4, *a*6}, {*a*3, *a*7}, {*a*5}, {*a*8}}*. Thus, we obtain* $\underline{R}(A)$ = {*a*3, *a*5, *a*7} *and* $\overline{R}(A)$ = {*a*2, *a*3, *a*4, *a*5, *a*6, *a*7}*. Since* $\overline{R}(A) - \underline{R}(A) \neq \emptyset$*, A is a rough set.*

**Table 1.** An information system of a rough set.
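Example 1's lower and upper approximations can be computed mechanically from the partition. The following is an illustrative sketch of Equations (8) and (9):

```python
def rough_approximations(partition, A):
    """Lower/upper approximation of A under the equivalence classes in
    `partition` (Definition 6, Equations (8) and (9))."""
    lower, upper = set(), set()
    for cls in partition:
        if cls <= A:        # class entirely contained in A -> lower approximation
            lower |= cls
        if cls & A:         # class intersecting A -> upper approximation
            upper |= cls
    return lower, upper

# The partition [X]_R and subset A from Example 1
partition = [{'a1'}, {'a2', 'a4', 'a6'}, {'a3', 'a7'}, {'a5'}, {'a8'}]
A = {'a2', 'a3', 'a4', 'a5', 'a7'}
lower, upper = rough_approximations(partition, A)
# lower == {'a3', 'a5', 'a7'}; upper additionally contains a2, a4, a6,
# so the difference is nonempty and A is rough.
```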


#### *3.5. The Concept of Soft Set*

Molodtsov [33] proposed in 1999 a mathematical approach to dealing with uncertain information with the core idea of emphasizing the study of uncertainty and ambiguity of information from a parametric perspective, which is known as soft set (SS) theory. The concept of SS is as follows.

**Definition 7** (Molodtsov [33])**.** *Let X be the universe. C is a set of parameters (attributes) about objects in X,* (*X*, *<sup>C</sup>*) *is called a soft space, and* C ⊆ *C, <sup>ϕ</sup> is a mapping given by <sup>ϕ</sup>* : C → <sup>2</sup>*X; here* <sup>2</sup>*<sup>X</sup> is the power set of X, then a pair* (*ϕ*, C) *is named a soft set (SS) over the universe X.*

To illustrate the point, let us consider the following example (Example 2).

**Example 2.** *Suppose Table 2 is an information system of a soft set. A* = {*a*1, *a*2, *a*3, *a*4} *is the universe of the soft set, and C* = {*c*1, *c*2, *c*3, *c*4, *c*5} *is the set of parameters.*

**Table 2.** An information system of a soft set.


According to the definition of a soft set, we can easily obtain the following results:

$$\begin{aligned} \varphi(c_1) &= \{a_1, a_3\}; \quad \varphi(c_2) = \{a_2, a_4\}; \quad \varphi(c_3) = \{a_1, a_2, a_3\}; \\ \varphi(c_4) &= \{a_2, a_3, a_4\}; \quad \varphi(c_5) = \{a_2, a_4\}. \end{aligned}$$

Take *ϕ*(*c*3) as an example: it means that the objects with attribute *c*<sup>3</sup> are *a*1, *a*2, and *a*3.
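Example 2's soft set is naturally a mapping from parameters to subsets of objects, as in this sketch (the values are read off the results above):

```python
# Soft set (phi, C) of Example 2 as a parameter -> object-set mapping
phi = {
    'c1': {'a1', 'a3'},
    'c2': {'a2', 'a4'},
    'c3': {'a1', 'a2', 'a3'},
    'c4': {'a2', 'a3', 'a4'},
    'c5': {'a2', 'a4'},
}
# phi['c3'] lists exactly the objects possessing attribute c3
```

Note that `c2` and `c5` map to the same subset: distinct parameters may pick out the same objects.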

#### *3.6. The Concept of Fuzzy Soft Set*

Maji et al. [35] combined the theory of fuzzy sets and soft sets in 2001 and proposed the definition of fuzzy soft sets. A fuzzy soft set can essentially be seen as a parametric fuzzy set for a given universe, which is a representation model that combines parameters and fuzzy information together. It is no longer restricted to the 0 and 1 values of the parameters in the soft set, but is a more flexible form of parameter selection that can be used in a wide range of uncertainty areas [15]. The concept is as follows.

**Definition 8** (Maji et al. [35])**.** *Let X be a universal set, C be a collection of parameters regarding X, and FS*(*X*) *represent the collection of all FSs over the universe X. A pair* (*ϕ*, C) *is said to be a Fuzzy Soft Set (FSS) over X, where* C ⊆ *C and ϕ* : C → *FS*(*X*)*. For every x* ∈ *X, the FSS can be defined as follows:*

$$S = \{ (c, \varphi(c)) \mid c \in \mathcal{C}, \varphi(c) \in FS(X) \}\tag{10}$$

*Particularly, when* |C| = 1*, the fuzzy soft set degenerates to a fuzzy set.*

Specially, when the fuzzy set is PFS, the corresponding concept has the following form:

**Definition 9** (Khan et al. [38])**.** *Let X be a universal set, C be a collection of parameters (attributes) regarding X, and PFS*(*X*) *be the collection of all picture fuzzy sets over the universe X. A pair* (*ϕ*, C) *is said to be a Picture Fuzzy Soft Set (PFSS) over X, where* C ⊆ *C and ϕ* : C → *PFS*(*X*)*. The PFSS can be defined as follows:*

$$S = \{ (c, \varphi(c)) \mid c \in \mathcal{C}, \varphi(c) \in PFS(X) \}\tag{11}$$

*Obviously, when* |C| = 1*, the picture fuzzy soft set degenerates to a picture fuzzy set.*

#### *3.7. The Concept of Fuzzy Soft Rough Set*

Combining fuzzy sets, soft sets, and rough sets can lead to a more flexible method of describing parameters, namely fuzzy soft rough sets. If the fuzzy set is a picture fuzzy set, the corresponding fuzzy soft rough set is called a picture fuzzy soft rough set and is defined as follows.

**Definition 10** (Muhammad and Martino [36])**.** *Let X be the universe, C be a set of parameters (attributes) about objects in X, and PFSS*(*X*) *be the collection of all picture fuzzy soft sets over the universe X. Let* R *be a picture fuzzy soft set relation from the universe X to* C *(that is,* ∀*c* ∈ C ⊆ *C,* R(*c*) ∈ *PFSS*(*X*)*), and let ψ be the mapping given by ψ* : C → *PFSS*(*X*)*. Then* (*ψ*, C, R) *is known as a Picture Fuzzy Soft Rough Approximation Space. For every* F ∈ *PFS*(C)*, the lower and upper approximations of* F *can be defined as follows:*

$$\underline{\mathbb{R}}(\mathcal{F}) = \{ (x, \underline{\mu}(x), \underline{\eta}(x), \underline{\nu}(x)) \mid x \in X \}\tag{12}$$

$$\overline{\mathbb{R}}(\mathcal{F}) = \{ (\mathbf{x}, \overline{\mu}(\mathbf{x}), \overline{\eta}(\mathbf{x}), \overline{\nu}(\mathbf{x})) | \mathbf{x} \in X \}\tag{13}$$

*where*

$$\underline{\mu}(x) = \wedge_{c \in \mathcal{C}} (\mu_{\mathbb{R}}(x, c) \wedge \mu_{\mathcal{F}}(c)),\tag{14}$$

$$\underline{\eta}(x) = \vee_{c \in \mathcal{C}} (\eta_{\mathbb{R}}(x, c) \vee \eta_{\mathcal{F}}(c)),\tag{15}$$

$$\underline{\nu}(x) = \vee_{c \in \mathcal{C}} (\nu_{\mathbb{R}}(x, c) \vee \nu_{\mathcal{F}}(c)).\tag{16}$$

*and*

$$\overline{\mu}(x) = \vee_{c \in \mathcal{C}} (\mu_{\mathbb{R}}(x, c) \vee \mu_{\mathcal{F}}(c)),\tag{17}$$

$$\overline{\eta}(x) = \wedge_{c \in \mathcal{C}} (\eta_{\mathbb{R}}(x, c) \wedge \eta_{\mathcal{F}}(c)),\tag{18}$$

$$\overline{\nu}(x) = \wedge_{c \in \mathcal{C}} (\nu_{\mathbb{R}}(x, c) \wedge \nu_{\mathcal{F}}(c)).\tag{19}$$

$$Here, \ 0 \le \underline{\mu}(\mathbf{x}) + \underline{\eta}(\mathbf{x}) + \underline{\nu}(\mathbf{x}) \le 1, \ 0 \le \overline{\mu}(\mathbf{x}) + \overline{\eta}(\mathbf{x}) + \overline{\nu}(\mathbf{x}) \le 1.$$

*Then*

$$\mathbb{R}(\mathcal{F}) = (\underline{\mathbb{R}}(\mathcal{F}), \overline{\mathbb{R}}(\mathcal{F})) = (x, (\underline{\mu}(x), \overline{\mu}(x)), (\underline{\eta}(x), \overline{\eta}(x)), (\underline{\nu}(x), \overline{\nu}(x))).\tag{20}$$

*The score function can be defined as:*

$$\mathcal{S}(\mathbb{R}(\mathcal{F})) = \underline{\mu}(\mathbf{x}) + \overline{\mu}(\mathbf{x}) - \underline{\eta}(\mathbf{x}) - \overline{\eta}(\mathbf{x}) - \underline{\nu}(\mathbf{x}) - \overline{\nu}(\mathbf{x}).\tag{21}$$
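Equations (14)–(19) and the score function (21) can be sketched directly in code. The data below are hypothetical picture fuzzy triples chosen only to exercise the formulas; the names `R`, `F`, `lower`, `upper`, and `score` are ours, not the paper's.

```python
# Lower/upper picture fuzzy soft rough approximations (Eqs. (14)-(19))
# and the score function (Eq. (21)). R[x][c] and F[c] are picture fuzzy
# triples (mu, eta, nu); the values are illustrative only.
R = {
    "x1": {"c1": (0.6, 0.2, 0.1), "c2": (0.4, 0.3, 0.2)},
    "x2": {"c1": (0.3, 0.4, 0.2), "c2": (0.7, 0.1, 0.1)},
}
F = {"c1": (0.5, 0.3, 0.1), "c2": (0.2, 0.4, 0.3)}

def lower(x):
    cs = F.keys()
    mu  = min(min(R[x][c][0], F[c][0]) for c in cs)  # Eq. (14)
    eta = max(max(R[x][c][1], F[c][1]) for c in cs)  # Eq. (15)
    nu  = max(max(R[x][c][2], F[c][2]) for c in cs)  # Eq. (16)
    return mu, eta, nu

def upper(x):
    cs = F.keys()
    mu  = max(max(R[x][c][0], F[c][0]) for c in cs)  # Eq. (17)
    eta = min(min(R[x][c][1], F[c][1]) for c in cs)  # Eq. (18)
    nu  = min(min(R[x][c][2], F[c][2]) for c in cs)  # Eq. (19)
    return mu, eta, nu

def score(x):  # Eq. (21)
    lm, le, ln = lower(x)
    um, ue, un = upper(x)
    return lm + um - le - ue - ln - un

print(round(score("x1"), 2))  # -0.2
```

Since the operators are only min/max over membership values, the outputs stay in [0, 1], which is consistent with the stated constraints on the approximations.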

**Example 3.** *In a MADM problem, suppose the alternative set is A* = {*a*1, *a*2, *a*3} *and the attribute set is C* = {*c*1, *c*2, *c*3, *c*4}*, as presented in Table 3. Let* F *be a PFS over C as follows.*

$$\mathcal{F} = \{ (c\_1, 0.30, 0.30, 0.20), (c\_2, 0.50, 0.30, 0.10), (c\_3, 0.70, 0.20, 0.10), (c\_4, 0.20, 0.60, 0.10) \}$$

**Table 3.** An information system of a PFS.


Then, we can calculate the corresponding lower and upper approximation of F as follows.

$$\underline{\mathbb{R}}(\mathcal{F}) = \{ (a\_1, 0.10, 0.60, 0.50), (a\_2, 0.20, 0.60, 0.20), (a\_3, 0.20, 0.60, 0.30) \};$$

$$\overline{\mathbb{R}}(\mathcal{F}) = \{ (a\_1, 0.70, 0.20, 0.10), (a\_2, 0.80, 0.10, 0.00), (a\_3, 0.70, 0.00, 0.10) \}.$$

Finally, according to Equation (21) we obtain:

$$\mathcal{S}(a\_1) = -0.6, \mathcal{S}(a\_2) = 0.1, \mathcal{S}(a\_3) = -0.1.$$

Obviously, S(*a*2) > S(*a*3) > S(*a*1). Therefore, it follows that *a*<sup>2</sup> ≻ *a*<sup>3</sup> ≻ *a*1.
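The scores of Example 3 can be reproduced mechanically from the listed approximations by Equation (21); this short check (our sketch, with dictionary names of our choosing) recovers the stated values.

```python
# Scores of Example 3: Eq. (21) applied to the lower and upper
# approximations listed above for a1, a2, a3.
lower = {"a1": (0.10, 0.60, 0.50), "a2": (0.20, 0.60, 0.20), "a3": (0.20, 0.60, 0.30)}
upper = {"a1": (0.70, 0.20, 0.10), "a2": (0.80, 0.10, 0.00), "a3": (0.70, 0.00, 0.10)}

def score(a):
    (lm, le, ln), (um, ue, un) = lower[a], upper[a]
    return lm + um - le - ue - ln - un

scores = {a: round(score(a), 2) for a in lower}
print(scores)   # {'a1': -0.6, 'a2': 0.1, 'a3': -0.1}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['a2', 'a3', 'a1']
```

Sorting by score descending yields the ranking a2 ≻ a3 ≻ a1, matching the text.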

#### **4. A Novel Neutrosophic FSRS-Based Method for Chaotic MAGDM**

*4.1. Chaotic Multi-Attribute Group Decision Making*

Zhang et al. [20] proposed the concept of Chaotic MAGDM, in which not only the weights of DMs and decision attributes are considered, but also the familiarity of DMs with the decision attributes. With the crossover factor of familiarity, Chaotic MAGDM is brought closer to the real decision problem. The relevant concepts are as follows.

**Definition 11** (Zhang et al. [20])**.** *A MAGDM is called Chaotic MAGDM if there exists at least one decision attribute such that at least two DMs have different familiarity with it.*
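Definition 11 is a simple predicate on the familiarity matrix and can be sketched as follows; the familiarity values below are hypothetical, and the function name is ours.

```python
# Definition 11 as a predicate: a MAGDM is "chaotic" if, for at least
# one attribute, at least two DMs report different familiarity values.
familiarity = {                       # familiarity[dm][attribute], illustrative
    "e1": {"c1": 0.9, "c2": 0.9, "c3": 0.5},
    "e2": {"c1": 0.9, "c2": 0.4, "c3": 0.5},
}

def is_chaotic(fam):
    attributes = next(iter(fam.values())).keys()
    # Chaotic iff some attribute column contains more than one distinct value.
    return any(len({fam[dm][c] for dm in fam}) > 1 for c in attributes)

print(is_chaotic(familiarity))  # True: e1 and e2 disagree on c2
```

If every DM reports the same familiarity for every attribute, the predicate is false and the problem reduces to a regular MAGDM.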

For convenience, the symbols of the variables used in Chaotic MAGDM are summarized as follows [20]:


(*k* = 1, 2, ··· , *l*; *i* = 1, 2, ··· , *m*; *j* = 1, 2, ··· , *n*);


Since the relationship between DMs and decision attributes is considered in Chaotic MAGDM, we add the familiarity variable F to the MAGDM; that is, the septuple ⟨*A*, *C*, *E*, *w*, *τ*, *X*, F⟩ is used to represent the Chaotic MAGDM, as shown in Equation (22).

$$\text{Chaotic MAGDM} = \langle A, \mathcal{C}, E, w, \tau, X, \mathcal{F} \rangle. \tag{22}$$

Clearly, the diversified multi-attribute group decision making proposed by Sun et al. [14] is a special case of Chaotic MAGDM. One of the core ideas of diversified MAGDM is to establish a pluralistic binary fuzzy relationship between the set of evaluation attribute indicators and the different decision makers.

In order to describe a Chaotic MAGDM more visually, it can be represented by an information form, as shown in Table 4.


**Table 4.** The Chaotic MAGDM Information Form.

#### *4.2. Weighted Neutrosophic Fuzzy Soft Rough Sets*

In Definition 10 proposed by Muhammad and Martino, if, for example, in MADM, F denotes the familiarity of the DMs with the attributes rather than the values of evaluation, then the defining functions of the upper and lower bounds of R(F) must be changed accordingly. So, we give the new definition as follows:

**Definition 12.** *Let X be the universe, C be a set of parameters (attributes) about objects in X, and NFSS*(*X*) *be the collection of all neutrosophic fuzzy soft sets over the universe X. Let* R *be a neutrosophic fuzzy soft set relation from the universe X to* C *(that is,* ∀*c* ∈ C ⊆ *C,* R(*c*) ∈ *NFSS*(*X*)*), and let ψ be the mapping given by ψ* : C → *NFSS*(*X*)*. Then* (*ψ*, C, R) *is known as a Neutrosophic Fuzzy Soft Rough Approximation Space. For every* F ∈ *NFS*(C)*, the lower and upper approximations of* F *can be defined as follows:*

$$\underline{\mathbb{R}}(\mathcal{F}) = \{ (x, \underline{\mu}(x), \underline{\eta}(x), \underline{\nu}(x)) \mid x \in X \}\tag{23}$$

$$\overline{\mathbb{R}}(\mathcal{F}) = \{ (\mathbf{x}, \overline{\mu}(\mathbf{x}), \overline{\eta}(\mathbf{x}), \overline{\nu}(\mathbf{x})) | \mathbf{x} \in X \}\tag{24}$$

*where*

$$\underline{\mu}(x) = \min_{c \in \mathcal{C}} (\mu_{\mathbb{R}}(x, c) \cdot \min(\mu_{\mathcal{F}}(c), (2 - \eta_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{25}$$

$$\underline{\eta}(x) = \max_{c \in \mathcal{C}} (\eta_{\mathbb{R}}(x, c) \cdot \max(\eta_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{26}$$

$$\underline{\nu}(x) = \max_{c \in \mathcal{C}} (\nu_{\mathbb{R}}(x, c) \cdot \max(\nu_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \eta_{\mathcal{F}}(c)))).\tag{27}$$

*and*

$$\overline{\mu}(x) = \max_{c \in \mathcal{C}} (\mu_{\mathbb{R}}(x, c) \cdot \max(\mu_{\mathcal{F}}(c), (2 - \eta_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{28}$$

$$\overline{\eta}(x) = \min_{c \in \mathcal{C}} (\eta_{\mathbb{R}}(x, c) \cdot \min(\eta_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{29}$$

$$\overline{\nu}(x) = \min_{c \in \mathcal{C}} (\nu_{\mathbb{R}}(x, c) \cdot \min(\nu_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \eta_{\mathcal{F}}(c)))).\tag{30}$$

*where* $0 \le \underline{\mu}(x) + \underline{\eta}(x) + \underline{\nu}(x) \le 3$ *and* $0 \le \overline{\mu}(x) + \overline{\eta}(x) + \overline{\nu}(x) \le 3$.

*Then,*

$$\mathbb{R}(\mathcal{F}) = (\underline{\mathbb{R}}(\mathcal{F}), \overline{\mathbb{R}}(\mathcal{F})) \tag{31}$$

$$= (\mathbf{x}, (\underline{\mu}(\mathbf{x}), \overline{\mu}(\mathbf{x})), (\underline{\eta}(\mathbf{x}), \overline{\eta}(\mathbf{x})), (\underline{\nu}(\mathbf{x}), \overline{\nu}(\mathbf{x}))) \tag{32}$$

*The score function is as follows:*

$$\mathcal{S}(\mathbb{R}(\mathcal{F})) = \underline{\mu}(\mathbf{x}) + \overline{\mu}(\mathbf{x}) - \underline{\eta}(\mathbf{x}) - \overline{\eta}(\mathbf{x}) - \underline{\nu}(\mathbf{x}) - \overline{\nu}(\mathbf{x}).\tag{33}$$
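As a sketch of how Equations (25)–(30) differ from the picture fuzzy case, the product form can be coded as below; the neutrosophic triples in `R` and `F` are hypothetical and chosen only to exercise the formulas.

```python
# Neutrosophic lower/upper approximations of Definition 12
# (Eqs. (25)-(30)); triples are (mu, eta, nu), values illustrative.
R = {"x1": {"c1": (0.8, 0.3, 0.2), "c2": (0.6, 0.5, 0.1)}}
F = {"c1": (0.7, 0.4, 0.2), "c2": (0.5, 0.6, 0.3)}

def lower(x):
    cs = F.keys()
    mu  = min(R[x][c][0] * min(F[c][0], 2 - F[c][1] - F[c][2]) for c in cs)  # Eq. (25)
    eta = max(R[x][c][1] * max(F[c][1], 2 - F[c][0] - F[c][2]) for c in cs)  # Eq. (26)
    nu  = max(R[x][c][2] * max(F[c][2], 2 - F[c][0] - F[c][1]) for c in cs)  # Eq. (27)
    return mu, eta, nu

def upper(x):
    cs = F.keys()
    mu  = max(R[x][c][0] * max(F[c][0], 2 - F[c][1] - F[c][2]) for c in cs)  # Eq. (28)
    eta = min(R[x][c][1] * min(F[c][1], 2 - F[c][0] - F[c][2]) for c in cs)  # Eq. (29)
    nu  = min(R[x][c][2] * min(F[c][2], 2 - F[c][0] - F[c][1]) for c in cs)  # Eq. (30)
    return mu, eta, nu
```

Note that, because of the factor $(2 - \cdot - \cdot)$, individual components can exceed 1 (here the upper $\mu$ of `x1` is 1.12), which is why the neutrosophic bounds are stated as sums at most 3 rather than 1.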

**Example 4.** *Still analyzing the data in Example 3, under the new Definition 12, the corresponding results are:*

$$\begin{aligned} \underline{\mathbb{R}}(\mathcal{F}) &= \{ (a_1, 0.04, 0.85, 0.60), (a_2, 0.04, 1.02, 0.22), (a_3, 0.08, 0.24, 0.36) \}; \\ \overline{\mathbb{R}}(\mathcal{F}) &= \{ (a_1, 0.45, 0.06, 0.01), (a_2, 1.28, 0.03, 0.00), (a_3, 0.09, 0.00, 0.01) \}. \end{aligned}$$

*Furthermore,* S(*a*1) = −1.03, S(*a*2) = 0.05, S(*a*3) = 0.37*; then* S(*a*3) > S(*a*2) > S(*a*1)*. So, the final sorting is a*<sup>3</sup> ≻ *a*<sup>2</sup> ≻ *a*1*.*

If different attributes *c* (*c* ∈ *C*) have different weights, then the neutrosophic fuzzy soft rough set will become a weighted neutrosophic fuzzy soft rough set.

**Definition 13.** *Let X be the universe, C be a set of parameters (attributes) with weights* $w_c$ *about objects in X, and NFSS*(*X*) *be the collection of all neutrosophic fuzzy soft sets over the universe X. Let* R *be a neutrosophic fuzzy soft set relation from the universe X to* C *(that is,* ∀*c* ∈ C ⊆ *C,* R(*c*) ∈ *NFSS*(*X*)*), and let ψ be the mapping given by ψ* : C → *NFSS*(*X*)*. Then* (*ψ*, C, R, $w_c$) *is known as a Weighted Neutrosophic Fuzzy Soft Rough Approximation Space. For every* F ∈ *NFS*(C)*, the lower and upper approximations of* F *can be defined as follows:*

$$\underline{\mathbb{R}}(\mathcal{F}) = \{ (x, \underline{\mu}(x), \underline{\eta}(x), \underline{\nu}(x)) \mid x \in X \}\tag{34}$$

$$\overline{\mathbb{R}}(\mathcal{F}) = \{ (\mathbf{x}, \overline{\mu}(\mathbf{x}), \overline{\eta}(\mathbf{x}), \overline{\nu}(\mathbf{x})) | \mathbf{x} \in X \}\tag{35}$$

*where*

$$\underline{\mu}(x) = \min_{c \in \mathcal{C}} (w_c \cdot \mu_{\mathbb{R}}(x, c) \cdot \min(\mu_{\mathcal{F}}(c), (2 - \eta_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{36}$$

$$\underline{\eta}(x) = \max_{c \in \mathcal{C}} (w_c \cdot \eta_{\mathbb{R}}(x, c) \cdot \max(\eta_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{37}$$

$$\underline{\nu}(x) = \max_{c \in \mathcal{C}} (w_c \cdot \nu_{\mathbb{R}}(x, c) \cdot \max(\nu_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \eta_{\mathcal{F}}(c)))).\tag{38}$$

*and*

$$\overline{\mu}(x) = \max_{c \in \mathcal{C}} (w_c \cdot \mu_{\mathbb{R}}(x, c) \cdot \max(\mu_{\mathcal{F}}(c), (2 - \eta_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{39}$$

$$\overline{\eta}(x) = \min_{c \in \mathcal{C}} (w_c \cdot \eta_{\mathbb{R}}(x, c) \cdot \min(\eta_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \nu_{\mathcal{F}}(c)))),\tag{40}$$

$$\overline{\nu}(x) = \min_{c \in \mathcal{C}} (w_c \cdot \nu_{\mathbb{R}}(x, c) \cdot \min(\nu_{\mathcal{F}}(c), (2 - \mu_{\mathcal{F}}(c) - \eta_{\mathcal{F}}(c)))).\tag{41}$$

*Here,* $0 \le \underline{\mu}(x) + \underline{\eta}(x) + \underline{\nu}(x) \le 3$ *and* $0 \le \overline{\mu}(x) + \overline{\eta}(x) + \overline{\nu}(x) \le 3$.

*Then,*

$$\mathbb{R}(\mathcal{F}) = (\underline{\mathbb{R}}(\mathcal{F}), \overline{\mathbb{R}}(\mathcal{F})) \tag{42}$$

$$= (x, (\underline{\mu}(x), \overline{\mu}(x)), (\underline{\eta}(x), \overline{\eta}(x)), (\underline{\nu}(x), \overline{\nu}(x))).\tag{43}$$

The evaluation function is

$$\mathcal{S}(\mathbb{R}(\mathcal{F})) = \underline{\mu}(\mathbf{x}) + \overline{\mu}(\mathbf{x}) - \underline{\eta}(\mathbf{x}) - \overline{\eta}(\mathbf{x}) - \underline{\nu}(\mathbf{x}) - \overline{\nu}(\mathbf{x}).\tag{44}$$

Considering that, as the parameters $\underline{\eta}$, $\overline{\eta}$, $\underline{\nu}$, and $\overline{\nu}$ increase, the value of the evaluation function becomes very close to zero or even negative, which does not facilitate numerical calculations and comparisons, the evaluation function needs to be improved accordingly. The new evaluation function is as follows:

**Definition 14.** *The score function is as follows:*

$$\mathcal{S}(\mathbb{R}(\mathcal{F})) = \underline{\mu}(\mathbf{x}) + \overline{\mu}(\mathbf{x}) + (2 - \underline{\eta}(\mathbf{x}) - \underline{\nu}(\mathbf{x})) + (2 - \overline{\eta}(\mathbf{x}) - \overline{\nu}(\mathbf{x})) \tag{45}$$

$$=4+\underline{\mu}(\mathbf{x})+\overline{\mu}(\mathbf{x})-\underline{\eta}(\mathbf{x})-\overline{\eta}(\mathbf{x})-\underline{\nu}(\mathbf{x})-\overline{\nu}(\mathbf{x}).\tag{46}$$

**Example 5.** *Still analyzing the data in Example 3 with the weight vector <sup>w</sup>* = {0.50, 0.25, 0.15, 0.10}*T, under the new Definition 13 and the score function of Definition 14, the corresponding upper and lower bounds and score values are as follows.*

$$\begin{aligned} \underline{\mathbb{R}}(\mathcal{F}) &= \{ (a_1, 0.0040, 0.1500, 0.2800), (a_2, 0.0040, 0.1500, 0.0700), (a_3, 0.0080, 0.0750, 0.1400) \}; \\ \overline{\mathbb{R}}(\mathcal{F}) &= \{ (a_1, 0.2250, 0.0120, 0.0010), (a_2, 0.3750, 0.0075, 0.0000), (a_3, 0.4500, 0.0000, 0.0010) \}; \\ \mathcal{S}(a_1) &= 3.7860; \quad \mathcal{S}(a_2) = 4.1515; \quad \mathcal{S}(a_3) = 4.2420. \end{aligned}$$

*Obviously,* S(*a*3) > S(*a*2) > S(*a*1)*, so the final ordering is a*<sup>3</sup> ≻ *a*<sup>2</sup> ≻ *a*1*.*
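The scores in Example 5 follow mechanically from the listed approximations via the shifted score function of Definition 14; this check (our sketch, variable names ours) reproduces the stated values.

```python
# Scores of Example 5: S = 4 + mu_low + mu_up - eta_low - eta_up - nu_low - nu_up
# (Definition 14), applied to the listed lower and upper approximations.
lower = {"a1": (0.0040, 0.1500, 0.2800),
         "a2": (0.0040, 0.1500, 0.0700),
         "a3": (0.0080, 0.0750, 0.1400)}
upper = {"a1": (0.2250, 0.0120, 0.0010),
         "a2": (0.3750, 0.0075, 0.0000),
         "a3": (0.4500, 0.0000, 0.0010)}

def score(a):
    (lm, le, ln), (um, ue, un) = lower[a], upper[a]
    return 4 + lm + um - le - ue - ln - un

for a in ("a1", "a2", "a3"):
    print(a, round(score(a), 4))  # a1 3.786, a2 4.1515, a3 4.242
```

The +4 shift keeps all scores comfortably positive, which is exactly the motivation given for Definition 14.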

In general, in chaotic multi-attribute group decision making (CMAGDM), different DMs have different decision weights, so the total evaluation function of the corresponding CMAGDM is as follows.

**Definition 15** (Zhang et al. [20])**.** *The total evaluation score function for the CMAGDM is* S(*ai*)*:*

$$\mathbb{S}(a_i) = \sum_{k=1}^{l} \tau_k \mathcal{S}_k(a_i) \tag{47}$$

*where* $\mathcal{S}_k(a_i)$ *is the kth DM's score for the ith alternative* ($i = 1, 2, \ldots, m$; $k = 1, 2, \ldots, l$)*;* $\tau_k$ *is the weight of the kth DM* ($\tau_k \ge 0$, $\sum_{k=1}^{l} \tau_k = 1$)*.*

#### *4.3. The Algorithm for CMAGDM*

Through the analysis in the previous subsection, we summarize the algorithm for solving CMAGDM as Algorithm 1.


**Input:**

*A*, *C*, *E*, *w*, *τ*, *X*, F.

**Output:**

Optimal sorting: $a_1^* \succ a_2^* \succ \cdots \succ a_m^*$


$\mathcal{S}(a_1^*) \ge \mathcal{S}(a_2^*) \ge \cdots \ge \mathcal{S}(a_m^*)$.

16: **Return** $a_1^* \succ a_2^* \succ \cdots \succ a_m^*$

#### **5. Numerical Analysis**

In this section, we will analyze a real-life home purchase problem to explain the specific application of our proposed method.

#### *5.1. Problem Statement*

A family with three members (husband, wife, and daughter) are planning to buy one of four houses. Suppose the family considers the following factors in purchasing: price, construction materials, decoration, convenience for shopping (e.g., availability of supermarkets, food markets, shops, etc.), and convenience of transportation (e.g., availability of public parking, bus stops, metro stations, etc.). Assume that the purchase decision weight of the husband is 40%, the wife is 35%, and the daughter is 25%. The weights of the purchasing factors are as follows: 35% for price, 20% for construction materials, 15% for decoration, 10% for convenience of shopping, and 20% for convenience of transportation.

Obviously, this is a MAGDM problem; it can be represented by a sextuple ⟨*A*, *C*, *E*, *w*, *τ*, *X*⟩, that is:

$$MAGDM = \langle A, \mathcal{C}, E, w, \tau, X \rangle. \tag{48}$$

where *A* = {*a*1, *a*2, *a*3, *a*4}, *a*1, *a*2, *a*3, *a*<sup>4</sup> denote the first house, the second house, the third house, and the fourth house, respectively. *C* = {*c*1, *c*2, *c*3, *c*4, *c*5}, *c*1, *c*2, *c*3, *c*4, *c*<sup>5</sup> denote price, construction materials, decoration, convenience for shopping, and convenience of transportation, respectively, and the corresponding weight vector is *w*, where *w* = {0.35, 0.20, 0.15, 0.10, 0.20}. *E* = {*e*1,*e*2,*e*3}, *e*1,*e*2,*e*<sup>3</sup> denote husband, wife, and daughter, respectively, the corresponding weight vector is *τ*, where *τ* = {0.40, 0.35, 0.25}. *X* = {*X*(*ek*)|*ek* ∈ *E*} is the decision matrix set, and *X*(*ek*) is the decision matrix of the *k*th DM.

Usually, however, the wife and daughter are not familiar with the attribute (indicator) construction materials, while the husband is not particularly familiar with the attribute (indicator) decoration. Since the DMs have different levels of familiarity with the attributes, this is not a regular MAGDM problem but a CMAGDM problem. It can be represented by a septuple ⟨*A*, *C*, *E*, *w*, *τ*, *X*, F⟩, that is:

$$
\mathbb{C}MAGDM = \langle A, \mathbb{C}, E, w, \mathfrak{r}, X, \mathcal{F} \rangle. \tag{49}
$$

Assuming that the DMs select NFSS for scoring evaluation, the evaluation form is shown in Table 5.


**Table 5.** CMAGDM information of house purchase.

#### *5.2. Numerical Computations*

According to Algorithm 1, we can obtain the following results:

**Step 1:** By Equation (42), the NFSRS vector R(F) can be found. To illustrate the exact process of calculation, we take the husband's evaluation of alternative *a*<sup>1</sup> as an example.

$$\begin{split} \underline{\mu}(a\_1) &= \min\_{c \in \mathcal{C}} (w\_c \cdot \mu\_\mathbb{R}(a\_1, c) \cdot \min(\mu\_\mathcal{F}(c), (2 - \eta\_\mathcal{F}(c) - \nu\_\mathcal{F}(c)))) \\ &= \min(0.35 \times 0.97 \times \min(0.92, (2 - 0.21 - 0.07)), \\ &0.20 \times 0.91 \times \min(0.87, (2 - 0.10 - 0.16)), \\ &0.15 \times 0.91 \times \min(0.83, (2 - 0.48 - 0.43)), \\ &0.10 \times 0.71 \times \min(0.86, (2 - 0.29 - 0.32)), \\ &0.20 \times 0.76 \times \min(0.77, (2 - 0.48 - 0.35))) \\ &= 0.06106, \end{split}$$

$$\begin{split} \overline{\mu}(a\_{1}) &= \max\_{c \in \mathcal{C}} (w\_{c} \cdot \mu\_{\mathbb{R}}(a\_{1}, c) \cdot \max(\mu\_{\mathcal{F}}(c), (2 - \eta\_{\mathcal{F}}(c) - \nu\_{\mathcal{F}}(c)))) \\ &= \max(0.35 \times 0.97 \times \max(0.92, (2 - 0.21 - 0.07)), \\ &0.20 \times 0.91 \times \max(0.87, (2 - 0.10 - 0.16)), \\ &0.15 \times 0.91 \times \max(0.83, (2 - 0.48 - 0.43)), \\ &0.10 \times 0.71 \times \max(0.86, (2 - 0.29 - 0.32)), \\ &0.20 \times 0.76 \times \max(0.77, (2 - 0.48 - 0.35))) \\ &= 0.58394. \end{split}$$

Similarly, the values of $\underline{\eta}(a_1)$, $\overline{\eta}(a_1)$, $\underline{\nu}(a_1)$, and $\overline{\nu}(a_1)$ can be calculated. The same approach can be used to obtain the husband's evaluations of alternatives *a*2, *a*3, and *a*4, and, in turn, all the evaluations of the wife and daughter. Finally, the NFSRS vector R(F) can be found, as shown in Table 6.
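Step 1's hand calculation for the husband's evaluation of *a*1 can be replayed in a few lines; the lists below copy the weights and membership values from the worked min/max expansion above (variable names are ours).

```python
# Husband's evaluation of a1, Eqs. (36) and (39): weighted product terms,
# then min for the lower bound and max for the upper bound.
w     = [0.35, 0.20, 0.15, 0.10, 0.20]   # attribute weights
mu_R  = [0.97, 0.91, 0.91, 0.71, 0.76]   # mu_R(a1, c)
mu_F  = [0.92, 0.87, 0.83, 0.86, 0.77]   # mu_F(c)
eta_F = [0.21, 0.10, 0.48, 0.29, 0.48]
nu_F  = [0.07, 0.16, 0.43, 0.32, 0.35]

terms_low = [wc * m * min(mf, 2 - ef - nf)
             for wc, m, mf, ef, nf in zip(w, mu_R, mu_F, eta_F, nu_F)]
terms_up  = [wc * m * max(mf, 2 - ef - nf)
             for wc, m, mf, ef, nf in zip(w, mu_R, mu_F, eta_F, nu_F)]

print(round(min(terms_low), 5))  # 0.06106
print(round(max(terms_up), 5))   # 0.58394
```

The minimum over the five terms comes from attribute *c*4 and the maximum from *c*1, matching the values 0.06106 and 0.58394 derived in the text.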

**Table 6.** Information of the NFSRS vector R(F).


**Step 2:** By Definition 14, the score vector S can be obtained. Here is an example of the calculation process using the husband's scoring of alternative *a*1.

$$\begin{split} \mathcal{S}\_1(a\_1) &= 4 + \underline{\mu}(a\_1) + \overline{\mu}(a\_1) - \underline{\eta}(a\_1) - \overline{\eta}(a\_1) - \underline{\nu}(a\_1) - \overline{\nu}(a\_1) \\ &= 4 + 0.06106 + 0.58394 - 0.33110 - 0.14448 - 0.00462 - 0.00125 \\ &= 4.16356. \end{split}$$

Using the same method, all evaluation scores can be derived, then the information of the score vector S can be found, as shown in Table 7.

**Table 7.** Information of the score vector S.


**Step 3:** By Definition 15, we can obtain the corresponding total evaluation scores for the four houses, as shown in Table 8. Using alternative *a*<sup>1</sup> as an example, the process for calculating its overall evaluation score is as follows.

**Table 8.** Final evaluation scores for all the alternatives.


$$\mathbb{S}(a_1) = 4.16356 \times 0.40 + 4.22360 \times 0.35 + 4.20444 \times 0.25 = 4.19479.$$

**Step 4:** Obtain the final ranking. By the calculation in the previous step, we obtain S(*a*4) > S(*a*2) > S(*a*1) > S(*a*3); that is, *a*<sup>4</sup> ≻ *a*<sup>2</sup> ≻ *a*<sup>1</sup> ≻ *a*3. The optimal alternative is *a*<sup>∗</sup> = *a*4, so the 4th house is the optimal choice.
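The weighted aggregation of Definition 15 (Equation (47)) for alternative *a*1 can be verified directly from the three DMs' scores and the DM weights stated in the problem; this is our check, not part of the paper.

```python
# Total score of a1 (Eq. (47)): DM-weighted sum of the three individual
# scores, with tau = (husband 0.40, wife 0.35, daughter 0.25).
tau    = [0.40, 0.35, 0.25]
scores = [4.16356, 4.22360, 4.20444]   # husband, wife, daughter scores for a1

total = sum(t * s for t, s in zip(tau, scores))
print(round(total, 5))  # 4.19479
```

Applying the same sum to the other three alternatives yields the Table 8 totals used for the final ranking.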

#### **6. Conclusions**

Most current studies on multi-attribute group decision problems mainly give the corresponding solutions in different practical applications or when DMs use different fuzzy sets [1,9–11,39–41]. Typically, the study of traditional group decision models and methods consists of two main aspects: consensus building and optimal choice. The former refers to how to make the opinions of all experts as consensual as possible among all candidate alternatives, while the latter focuses on how to select the optimal decision alternative from all candidates based on group preference opinions [14,15]. However, few research results consider the structure of the multi-attribute group decision problem itself, such as the relationship between DMs and attributes. Based on such considerations, we propose a chaotic multi-attribute group decision model that considers the familiarity of DMs with attributes; by introducing familiarity into multi-attribute group decision making, the model avoids the drawbacks arising from grouping or weighting decision makers. At the same time, we combine the neutrosophic set, which has a wider definition domain, with the soft set and the rough set to give the concept of the weighted neutrosophic fuzzy soft rough set, and we apply it to chaotic multi-attribute group decision making to obtain the corresponding algorithm. The validity of the model and the flexibility of the algorithm are well illustrated by practical case studies.

Despite our attempts to solve more realistic problems, there are still many shortcomings in our work. We have only considered the evaluation scoring of decision makers using neutrosophic fuzzy sets, whereas in practical decision making, decision makers can choose different evaluation methods, such as different fuzzy sets, precise numbers, or linguistic variables. This is a drawback of our work and is certainly a direction for future research. In addition, this paper does not give a scheme for determining familiarity. However, how to determine familiarity, just like how to assign weights to decision makers or decision attributes, is still key to multi-attribute group decision making, so this will be another popular direction for future research.

**Author Contributions:** Conceptualization, F.Z. and W.M.; methodology, F.Z. and W.M.; software, F.Z.; writing—original draft preparation, F.Z.; writing—review and editing, F.Z.; funding acquisition, W.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Social Science Foundation of China grant number 20BGL115.

**Acknowledgments:** The authors would like to thank the editors and the anonymous reviewers for their constructive comments and suggestions, which have helped to improve the paper.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Some Operations and Properties of the Cubic Intuitionistic Set with Application in Multi-Criteria Decision-Making**

**Shahzad Faizi 1, Heorhii Svitenko 2,3, Tabasam Rashid 4, Sohail Zafar <sup>4</sup> and Wojciech Sałabun 3,\***

<sup>1</sup> Department of Mathematics, Virtual University of Pakistan, Lahore 54000, Pakistan


**Abstract:** This paper proposes some operations on the cubic intuitionistic set along with useful properties. We propose the internal cubic intuitionistic set (ICIS), the external cubic intuitionistic set (ECIS), the P-order and R-order (P-(R-) order), the P-union and R-union (P-(R-) union), and the P-intersection and R-intersection (P-(R-) intersection). We further investigate several properties of the P-(R-) union and P-(R-) intersection of ICISs and ECISs, and present some examples in this context. Some important theorems related to ICISs and ECISs are also presented with proof. Finally, an application example is given to measure the effectiveness and significance of the proposed operations by solving a multi-criteria decision-making (MCDM) problem.

**Keywords:** fuzzy set; interval-valued fuzzy set; intuitionistic fuzzy set; interval-valued intuitionistic fuzzy set; cubic set; cubic intuitionistic set

**MSC:** 03E72; 94D05

#### **1. Introduction**

Zadeh [1] proposed the idea of fuzzy sets in 1965 and further extended this idea to an interval-valued fuzzy set (IVFS) [2]. Some complex decision-making problems in the economy, engineering, social science, environmental science, etc., exist that cannot be completely modeled by methods of classical mathematics because of the presence of various types of uncertainties. Others, on the other hand, use certain data processed by hybrid approaches, such as the INVAR method [3] or the CODAS-COMET method [4]. However, to handle the vagueness and uncertainty occurring in such decision-making problems, some well-known mathematical theories have been introduced, such as fuzzy set theory [1], intuitionistic fuzzy set (IFS) theory [5], interval-valued intuitionistic fuzzy set (IVIFS) theory [6,7], hesitant fuzzy set theory [8], hesitant fuzzy linguistic set theory [9], soft set theory [10], fuzzy soft set theory [11], etc. An example of this could be the use of triangular fuzzy numbers in a fuzzy extension of a simplified best–worst method [12].

**Citation:** Faizi, S.; Svitenko, H.; Rashid, T.; Zafar, S.; Sałabun, W. Some Operations and Properties of the Cubic Intuitionistic Set with Application in Multi-Criteria Decision-Making. *Mathematics* **2023**, *11*, 1190. https://doi.org/10.3390/math11051190

Academic Editors: Jun Ye and Yanhui Guo

Received: 9 January 2023; Revised: 15 February 2023; Accepted: 21 February 2023; Published: 28 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

At times, uncertainty research uses generalized approaches to better cope with the decision-making process via approaches related to the Dempster–Shafer evidence theory (DSET) [13], or quantum evidence theory (QET) [14]. Other ways are to use methods based on either entropy [15] or distance measures [16]. Most researchers have studied IVFSs [12]. For example, Zhang et al. [17] investigated the entropy of IVFSs based on distance measures. Zeng and Guo [18] discussed the similarity measure, inclusion measure, and entropy of IVFSs, while Grzegorzewski [19] proposed IVFSs based on the Hausdorff metric. Furthermore, IVFSs have been widely used and applied in real-life applications. For example, Sambuc [20] and Kohout [21] used the concept of IVFSs in medical diagnoses in thyroid pathology and medicine in a CLINAID system, respectively. Gorzalczany [22] used the idea of IVFSs in approximate reasoning. Turksen [23,24] further used the same idea of IVFSs in interval-valued logic in preference modeling [25].

Jun et al. [26] proposed the idea of a cubic set and presented its two important types, called the internal cubic set and the external cubic set by using the idea of the fuzzy set and IVFS. They further introduced some operations of union and intersection regarding the cubic sets, such as the P-(R-) union and P-(R-) intersection, and studied important related properties. Jun [27] further extended the idea of the cubic set, introduced the notion of the cubic intuitionistic set, and discussed its useful applications in BCK/BCI-algebras. Recently, studies on the cubic set theory have rapidly grown. For example, Jun et al. [28] proposed the concept of cubic IVIFS and discussed its important applications in BCK/BCI-algebra. With the help of using a cubic set and a neutrosophic set, Ali et al. [29] presented the notion of a neutrosophic cubic set and studied some useful properties. Kang and Kim [30] investigated the images and inverse images of almost-stable cubic sets and discussed the complement, the P-union, and the P-intersection of inverse images of almost-stable cubic sets. Chinnadurai et al. [31] investigated several properties of the P-(R-) union and P-(R-) intersection of cubic sets and studied some properties of cubic ideals of near rings. Jun et al. [32] proposed the ideas of cubic *α*-ideals and cubic *p*-ideals and studied several useful properties.

Cubic sets are widely studied and are important in many areas, as discussed in the literature by various researchers. Motivated by the advantages of cubic sets, this paper proposes the notion of CIS based on IVFSs and intuitionistic fuzzy sets. Although Jun [27] previously introduced the idea of CIS as cubic intuitionistic sets and discussed their applications in BCK/BCI-algebras, this paper presents a completely different research work under the framework of CIS. We first propose two important types of CIS, named ICIS and ECIS. We then investigate the complement of CIS, the P-(R-) cubic intuitionistic subsets, and the P-(R-) union and intersection of CISs. Furthermore, we prove various important theorems and results related to the proposed union and intersection operations. Finally, we present an application example to demonstrate the validity of the proposed operations by solving an MCDM problem.

The remainder of the paper is organized as follows. Some basic concepts related to this work are presented in Section 2. The notions of the CIS, ICIS, and ECIS are introduced in Section 3, where we also investigate the P-(R-) order, P-(R-) union, and P-(R-) intersection together with their important properties and proofs. An MCDM approach using CISs is presented in Section 4, along with an application example. We conclude the paper with some remarks in Section 5.

#### **2. Preliminary**

This section introduces the necessary notions and a few auxiliary results needed in the rest of the paper. Throughout this paper, we let [*I*], *I<sup>X</sup>*, and [*I*]*<sup>X</sup>* stand for the set of all closed subintervals of [0, 1], the collection of all fuzzy sets in a set *X*, and the collection of all IVFSs in *X*, respectively.

**Definition 1.** *Let X be a non-empty set. A fuzzy set in X is defined as a function f* : *X* → [0, 1]*. The relation* ≤*, join* (∨)*, meet* (∧)*, and complement on I<sup>X</sup> are defined, for all x* ∈ *X, as follows:*

$$f_1 \le f_2 \Leftrightarrow f_1(x) \le f_2(x) \text{ for all } x \in X, \quad f_1, f_2 \in I^X,$$

$$(f_1 \vee f_2)(x) = f_1(x) \vee f_2(x) = \max\{f_1(x), f_2(x)\},$$

$$(f_1 \wedge f_2)(x) = f_1(x) \wedge f_2(x) = \min\{f_1(x), f_2(x)\},$$

$$f_1^c(x) = 1 - f_1(x),$$

*where f<sub>1</sub><sup>c</sup> represents the complement of f<sub>1</sub>.*
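
These pointwise operations translate directly into code. The following is a minimal sketch (not code from the paper), with fuzzy sets stored as Python dicts mapping elements of *X* to grades in [0, 1]:

```python
# Pointwise join, meet, and complement of fuzzy sets on a common finite X.
def f_join(f1, f2):
    """(f1 v f2)(x) = max{f1(x), f2(x)}."""
    return {x: max(f1[x], f2[x]) for x in f1}

def f_meet(f1, f2):
    """(f1 ^ f2)(x) = min{f1(x), f2(x)}."""
    return {x: min(f1[x], f2[x]) for x in f1}

def f_complement(f1):
    """f1^c(x) = 1 - f1(x)."""
    return {x: 1 - f1[x] for x in f1}

f1 = {"a": 0.25, "b": 0.75}
f2 = {"a": 0.5, "b": 0.4}
print(f_join(f1, f2))        # {'a': 0.5, 'b': 0.75}
print(f_meet(f1, f2))        # {'a': 0.25, 'b': 0.4}
print(f_complement(f1))      # {'a': 0.75, 'b': 0.25}
```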

**Definition 2.** *By an interval number, we mean a closed subinterval a* = [*a*<sup>−</sup>, *a*<sup>+</sup>] *of I, where* 0 ≤ *a*<sup>−</sup> ≤ *a*<sup>+</sup> ≤ 1*. The complement a<sup>c</sup> of a* ∈ [*I*] *is defined as follows:*

$$a^c = [1 - a^+, 1 - a^-].$$

*The refined minimum and refined maximum (briefly, rmin and rmax) and the relations* ⪰*,* ⪯*, and* = *for elements a<sub>1</sub>* = [*a<sub>1</sub><sup>−</sup>*, *a<sub>1</sub><sup>+</sup>*] *and a<sub>2</sub>* = [*a<sub>2</sub><sup>−</sup>*, *a<sub>2</sub><sup>+</sup>*] *of* [*I*] *are defined as follows:*

$$\mathrm{rmin}\{a_1, a_2\} = [\min\{a_1^-, a_2^-\}, \min\{a_1^+, a_2^+\}],$$

$$\mathrm{rmax}\{a_1, a_2\} = [\max\{a_1^-, a_2^-\}, \max\{a_1^+, a_2^+\}],$$

$$a_1 \succeq a_2 \text{ if and only if } a_1^- \ge a_2^- \text{ and } a_1^+ \ge a_2^+.$$

*Similarly, we can define a<sub>1</sub>* ⪯ *a<sub>2</sub> and a<sub>1</sub>* = *a<sub>2</sub>.*
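
For concreteness, here is a short sketch of these operations (the representation of an interval number as a `(lo, hi)` tuple is our assumption):

```python
# Interval numbers a = [a-, a+] within [0, 1], stored as (lo, hi) tuples.
def i_complement(a):
    """a^c = [1 - a+, 1 - a-]."""
    lo, hi = a
    return (1 - hi, 1 - lo)

def rmin(a1, a2):
    """Refined minimum: componentwise min of endpoints."""
    return (min(a1[0], a2[0]), min(a1[1], a2[1]))

def rmax(a1, a2):
    """Refined maximum: componentwise max of endpoints."""
    return (max(a1[0], a2[0]), max(a1[1], a2[1]))

def succeq(a1, a2):
    """a1 >= a2 (interval order) iff a1- >= a2- and a1+ >= a2+."""
    return a1[0] >= a2[0] and a1[1] >= a2[1]

a1, a2 = (0.25, 0.5), (0.4, 0.75)
print(rmin(a1, a2))      # (0.25, 0.5)
print(rmax(a1, a2))      # (0.4, 0.75)
print(succeq(a2, a1))    # True
print(i_complement(a1))  # (0.5, 0.75)
```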

**Definition 3.** *For a non-empty set X, a function A* : *X* → [*I*] *is called an IVFS in X. For every A* ∈ [*I*]*<sup>X</sup> and x* ∈ *X, the value A*(*x*) = [*A*<sup>−</sup>(*x*), *A*<sup>+</sup>(*x*)] *is called the membership degree of the element x in A. The IVFS is simply denoted by A* = [*A*<sup>−</sup>, *A*<sup>+</sup>]*. The complement A<sup>c</sup> of A is defined as A<sup>c</sup>* = [1 − *A*<sup>+</sup>, 1 − *A*<sup>−</sup>]*.*

For every *A*<sub>1</sub>, *A*<sub>2</sub> ∈ [*I*]*<sup>X</sup>*, the following hold:

> *A*<sub>1</sub> ⊆ *A*<sub>2</sub> if and only if *A*<sub>1</sub> ⪯ *A*<sub>2</sub>; *A*<sub>1</sub> = *A*<sub>2</sub> if and only if *A*<sub>1</sub>(*x*) = *A*<sub>2</sub>(*x*) for all *x* ∈ *X*.

**Definition 4** ([5])**.** *Let E be a crisp set. An IFS Ã in E is defined as*

$$\tilde{A} = \{ \langle x, \mu_{\tilde{A}}(x), \nu_{\tilde{A}}(x) \rangle : x \in E \},$$

*where μ<sub>Ã</sub>* : *E* → [0, 1] *and ν<sub>Ã</sub>* : *E* → [0, 1] *indicate, respectively, the membership and non-membership degrees of x* ∈ *E, with the condition* 0 ≤ *μ<sub>Ã</sub>*(*x*) + *ν<sub>Ã</sub>*(*x*) ≤ 1 *for every x* ∈ *E.*

**Definition 5** ([6])**.** *An expression of the form given by*

$$B = \{ \langle x, M_B(x), N_B(x) \rangle : x \in X \}$$

*is called an IVIFS in X, where M<sub>B</sub>* : *X* → [*I*] *and N<sub>B</sub>* : *X* → [*I*] *are IVFSs with the condition that*

$$0 \le M_B^+(x) + N_B^+(x) \le 1 \text{ for all } x \in X.$$

*The intervals MB and NB denote, respectively, the membership and non-membership degrees of x* ∈ *X.*
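
As a small sketch (the dict-based representation is our assumption, not the paper's notation), the IVIFS condition can be checked programmatically:

```python
# An IVIFS maps each x to a pair of intervals (M_B(x), N_B(x)),
# each stored as a (lo, hi) tuple; validity requires M+ + N+ <= 1.
def is_valid_ivifs(B):
    return all(m[1] + n[1] <= 1 for m, n in B.values())

B_ok  = {"x1": ((0.2, 0.4), (0.3, 0.5)), "x2": ((0.1, 0.6), (0.0, 0.4))}
B_bad = {"x1": ((0.2, 0.7), (0.3, 0.5))}   # 0.7 + 0.5 > 1
print(is_valid_ivifs(B_ok))   # True
print(is_valid_ivifs(B_bad))  # False
```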

**Definition 6** ([26])**.** *A mathematical structure of the form*

$$A = \{ \langle x, A(x), \lambda(x) \rangle : x \in X \}$$

*is called the cubic set in X, where A and λ are, respectively, an IVFS and a fuzzy set in X.* Jun [27] introduced the notion of the cubic intuitionistic set as follows:

**Definition 7** ([27])**.** *A mathematical structure of the form*

$$A = \{ \langle x, A(x), \lambda(x) \rangle : x \in X \}$$

*is called the cubic intuitionistic set in X, where A is an IVIFS in X and λ is an IFS in X.*

#### **3. Some Operations on the Cubic Intuitionistic Set**

This section introduces the concept of the CIS, with some modifications to the structure proposed by Jun in [27], as follows:

**Definition 8.** *By CIS in a non-empty set X, we mean a mathematical structure of the form*

$$\mathbf{A} = \{ \langle x, M_A(x)/\alpha_A(x), N_A(x)/\beta_A(x) \rangle : x \in X \},$$

*where M<sub>A</sub>* : *X* → [*I*] *and N<sub>A</sub>* : *X* → [*I*] *are IVFSs of the form M<sub>A</sub>*(*x*) = [*M<sub>A</sub><sup>−</sup>*(*x*), *M<sub>A</sub><sup>+</sup>*(*x*)] *and N<sub>A</sub>*(*x*) = [*N<sub>A</sub><sup>−</sup>*(*x*), *N<sub>A</sub><sup>+</sup>*(*x*)]*, with the conditions that*

$$0 \le M_A^+(x) + N_A^+(x) \le 1 \text{ and } 0 \le \alpha_A(x) + \beta_A(x) \le 1 \text{ for all } x \in X.$$

*Here, M<sub>A</sub>*(*x*) *and N<sub>A</sub>*(*x*) *denote, respectively, the membership and non-membership degrees of x, and α<sub>A</sub>* : *X* → [0, 1] *and β<sub>A</sub>* : *X* → [0, 1] *are fuzzy sets in X. For simplicity, we denote by CIS*(*X*) *the collection of all CISs* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *in X. In the rest of the paper, we use the same notation and symbols for the CIS as presented in the above definition.*
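
A CIS value at a single point *x* can be modeled as a small record. The sketch below (the name `CISElement` is hypothetical, not from the paper) checks both constraints of Definition 8:

```python
from dataclasses import dataclass

@dataclass
class CISElement:
    M: tuple       # interval-valued membership [M-, M+]
    alpha: float   # fuzzy membership
    N: tuple       # interval-valued non-membership [N-, N+]
    beta: float    # fuzzy non-membership

    def is_valid(self):
        # Definition 8: M+ + N+ <= 1 and alpha + beta <= 1.
        return (0 <= self.M[1] + self.N[1] <= 1
                and 0 <= self.alpha + self.beta <= 1)

ok  = CISElement(M=(0.2, 0.4), alpha=0.3, N=(0.1, 0.5), beta=0.6)
bad = CISElement(M=(0.2, 0.6), alpha=0.5, N=(0.1, 0.5), beta=0.6)
print(ok.is_valid())   # True
print(bad.is_valid())  # False: M+ + N+ = 1.1 > 1 (and alpha + beta = 1.1 > 1)
```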

**Remark 1.** *For any non-empty set X, let* 1(*x*) = 1 *and* 0(*x*) = 0 *for all x* ∈ *X. Then* **A** = ⟨*M<sub>A</sub>*/1, *N<sub>A</sub>*/0⟩*,* **B** = ⟨*M<sub>B</sub>*/0, *N<sub>B</sub>*/1⟩*, and* **C** = ⟨*M<sub>C</sub>*/((*M<sub>C</sub><sup>−</sup>* + *M<sub>C</sub><sup>+</sup>*)/2), *N<sub>C</sub>*/((*N<sub>C</sub><sup>−</sup>* + *N<sub>C</sub><sup>+</sup>*)/2)⟩ *are all CISs in X.*

**Definition 9.** *For* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ ∈ *CIS*(*X*)*, the score value of* **A** *is defined as*

$$Sc(\mathbf{A}) = \frac{1}{3}\left[\left(M_A^- + M_A^+ + \alpha_A\right) - \left(N_A^- + N_A^+ + \beta_A\right)\right],$$

*where Sc*(**A**) ∈ [−1, 1]*.*

**Definition 10.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ ∈ *CIS*(*X*)*; then*

**(i)** **A** = **B** ⇔ *M<sub>A</sub>* = *M<sub>B</sub>*, *α<sub>A</sub>* = *α<sub>B</sub>*; *N<sub>A</sub>* = *N<sub>B</sub>*, *β<sub>A</sub>* = *β<sub>B</sub>* *(equality)*
**(ii)** **A** ⊆*<sup>P</sup>* **B** ⇔ *M<sub>A</sub>* ⊆ *M<sub>B</sub>*, *α<sub>A</sub>* ≤ *α<sub>B</sub>*; *N<sub>A</sub>* ⊇ *N<sub>B</sub>*, *β<sub>A</sub>* ≥ *β<sub>B</sub>* *(P-order)*
**(iii)** **A** ⊆*<sup>R</sup>* **B** ⇔ *M<sub>A</sub>* ⊆ *M<sub>B</sub>*, *α<sub>A</sub>* ≥ *α<sub>B</sub>*; *N<sub>A</sub>* ⊇ *N<sub>B</sub>*, *β<sub>A</sub>* ≤ *β<sub>B</sub>* *(R-order)*

**Definition 11.** *Let* **0** = [0, 0] *and* **1** = [1, 1]*. Then, a CIS* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *in which M<sub>A</sub>* = **0***, α<sub>A</sub>* = 1*, N<sub>A</sub>* = **1***, and β<sub>A</sub>* = 0 *(respectively, M<sub>A</sub>* = **1***, α<sub>A</sub>* = 0*, N<sub>A</sub>* = **0***, and β<sub>A</sub>* = 1*) is denoted by* 0¨ *(respectively,* 1¨*).*

*A CIS* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *in which M<sub>B</sub>* = **0***, α<sub>B</sub>* = 0*, N<sub>B</sub>* = **1***, and β<sub>B</sub>* = 1 *(respectively, M<sub>B</sub>* = **1***, α<sub>B</sub>* = 1*, N<sub>B</sub>* = **0***, and β<sub>B</sub>* = 0*) is denoted by* 0ˆ *(respectively,* 1ˆ*).*

We can see that the score values of 0¨, 1¨, 0ˆ and 1ˆ can be computed, respectively, as *Sc*(0¨) = <sup>−</sup>0.33, *Sc*(1¨) = 0.33, *Sc*(0ˆ) = <sup>−</sup>1 and *Sc*(1ˆ) = 1.
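
These score values are easy to reproduce. A minimal sketch of Definition 9 follows (the function name `score` is ours):

```python
# Sc(A) = ((M- + M+ + alpha) - (N- + N+ + beta)) / 3 for a single CIS value,
# with the interval M (or N) passed as a (lo, hi) tuple.
def score(M, alpha, N, beta):
    return ((M[0] + M[1] + alpha) - (N[0] + N[1] + beta)) / 3

print(round(score((0, 0), 1, (1, 1), 0), 2))  # -0.33  (the CIS 0-double-dot)
print(round(score((1, 1), 0, (0, 0), 1), 2))  #  0.33  (the CIS 1-double-dot)
print(score((0, 0), 0, (1, 1), 1))            # -1.0   (the CIS 0-hat)
print(score((1, 1), 1, (0, 0), 0))            #  1.0   (the CIS 1-hat)
```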

**Definition 12.** *Consider a family of CISs* **A**<sub>*i*</sub> = ⟨*M<sub>i</sub>*/*α<sub>i</sub>*, *N<sub>i</sub>*/*β<sub>i</sub>*⟩*, i* ∈ 𝒰*, in X. We define:*

**(a)** *P-union*

$$\bigcup_{i \in \mathcal{U}}^{P} \mathbf{A}_i = \left\langle \bigcup_{i \in \mathcal{U}} M_i \Big/ \bigvee_{i \in \mathcal{U}} \alpha_i,\ \bigcap_{i \in \mathcal{U}} N_i \Big/ \bigwedge_{i \in \mathcal{U}} \beta_i \right\rangle$$

**(b)** *P-intersection*

$$\bigcap_{i \in \mathcal{U}}^{P} \mathbf{A}_i = \left\langle \bigcap_{i \in \mathcal{U}} M_i \Big/ \bigwedge_{i \in \mathcal{U}} \alpha_i,\ \bigcup_{i \in \mathcal{U}} N_i \Big/ \bigvee_{i \in \mathcal{U}} \beta_i \right\rangle$$

**(c)** *R-union*

$$\bigcup_{i \in \mathcal{U}}^{R} \mathbf{A}_i = \left\langle \bigcup_{i \in \mathcal{U}} M_i \Big/ \bigwedge_{i \in \mathcal{U}} \alpha_i,\ \bigcap_{i \in \mathcal{U}} N_i \Big/ \bigvee_{i \in \mathcal{U}} \beta_i \right\rangle$$

**(d)** *R-intersection*

$$\bigcap_{i \in \mathcal{U}}^{R} \mathbf{A}_i = \left\langle \bigcap_{i \in \mathcal{U}} M_i \Big/ \bigvee_{i \in \mathcal{U}} \alpha_i,\ \bigcup_{i \in \mathcal{U}} N_i \Big/ \bigwedge_{i \in \mathcal{U}} \beta_i \right\rangle$$
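
For two CISs, Definition 12 reduces to simple componentwise max/min operations. Here is a sketch (our assumed representation: a CIS value as a tuple `(M, alpha, N, beta)` with intervals as `(lo, hi)` pairs):

```python
# rmin/rmax act componentwise on interval endpoints.
def rmin(a, b): return (min(a[0], b[0]), min(a[1], b[1]))
def rmax(a, b): return (max(a[0], b[0]), max(a[1], b[1]))

def p_union(A, B):
    M1, a1, N1, b1 = A; M2, a2, N2, b2 = B
    return (rmax(M1, M2), max(a1, a2), rmin(N1, N2), min(b1, b2))

def p_intersection(A, B):
    M1, a1, N1, b1 = A; M2, a2, N2, b2 = B
    return (rmin(M1, M2), min(a1, a2), rmax(N1, N2), max(b1, b2))

def r_union(A, B):
    M1, a1, N1, b1 = A; M2, a2, N2, b2 = B
    return (rmax(M1, M2), min(a1, a2), rmin(N1, N2), max(b1, b2))

def r_intersection(A, B):
    M1, a1, N1, b1 = A; M2, a2, N2, b2 = B
    return (rmin(M1, M2), max(a1, a2), rmax(N1, N2), min(b1, b2))

A = ((0.1, 0.3), 0.5, (0.4, 0.6), 0.2)
B = ((0.4, 0.6), 0.2, (0.1, 0.3), 0.5)
print(p_union(A, B))  # ((0.4, 0.6), 0.5, (0.1, 0.3), 0.2)
print(r_union(A, B))  # ((0.4, 0.6), 0.2, (0.1, 0.3), 0.5)
```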

**Remark 2.** *The complement of* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *is defined as*

$$\mathbf{A}^c = \left\langle M_A^c/(1 - \alpha_A),\ N_A^c/(1 - \beta_A) \right\rangle.$$

*Obviously,* (**A***c*)*<sup>c</sup>* = **A***,* 0¨ *<sup>c</sup>* = 1¨*,* 1¨ *<sup>c</sup>* = 0¨*,* 0ˆ *<sup>c</sup>* = 1ˆ*,* 1ˆ *<sup>c</sup>* = 0ˆ*.*

**Remark 3.** *For a family of CISs* **A**<sub>*i*</sub> = ⟨*M<sub>i</sub>*/*α<sub>i</sub>*, *N<sub>i</sub>*/*β<sub>i</sub>*⟩*, i* ∈ 𝒰*, in X, we have*

$$\Big(\bigcup_{i\in\mathcal{U}}^{P}\mathbf{A}_i\Big)^c = \bigcap_{i\in\mathcal{U}}^{P}(\mathbf{A}_i)^c, \quad \Big(\bigcap_{i\in\mathcal{U}}^{P}\mathbf{A}_i\Big)^c = \bigcup_{i\in\mathcal{U}}^{P}(\mathbf{A}_i)^c, \quad \Big(\bigcup_{i\in\mathcal{U}}^{R}\mathbf{A}_i\Big)^c = \bigcap_{i\in\mathcal{U}}^{R}(\mathbf{A}_i)^c \text{ and } \Big(\bigcap_{i\in\mathcal{U}}^{R}\mathbf{A}_i\Big)^c = \bigcup_{i\in\mathcal{U}}^{R}(\mathbf{A}_i)^c.$$

**Definition 13.** *Let X be a non-empty set.*


**Example 1.** *For a non-empty set X,*


**Remark 4.** *Every CIS in X can be considered a Zadeh fuzzy set, IFS, IVFS, IVIFS, and cubic set according to* (*M* = *N* = **0**, *β* = 0)*,* (*M* = *N* = **0**)*,* (*N* = **0**, *β* = 0)*,* (*β* = *α* = 0) *and* (*N* = **0**, *β* = 0)*, respectively.*

**Theorem 1.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *be a CIS in X that is not an ECIS. Then there exists x* ∈ *X such that α<sub>A</sub>*(*x*) ∈ (*M<sub>A</sub><sup>−</sup>*(*x*), *M<sub>A</sub><sup>+</sup>*(*x*)) *and β<sub>A</sub>*(*x*) ∈ (*N<sub>A</sub><sup>−</sup>*(*x*), *N<sub>A</sub><sup>+</sup>*(*x*)).

**Proof.** Straightforward.

**Theorem 2.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *be a CIS in X. If* **A** *is both an ICIS and an ECIS, then α*(*x*) ∈ *U*(*M*) ∪ *L*(*M*) *and β*(*x*) ∈ *U*(*N*) ∪ *L*(*N*) *for all x* ∈ *X, where U*(*M*) = {*M*<sup>+</sup>(*x*) | *x* ∈ *X*}*, L*(*M*) = {*M*<sup>−</sup>(*x*) | *x* ∈ *X*}*, U*(*N*) = {*N*<sup>+</sup>(*x*) | *x* ∈ *X*}*, and L*(*N*) = {*N*<sup>−</sup>(*x*) | *x* ∈ *X*}.

**Proof.** Assume that **A** is both an ICIS and an ECIS. Then, by Definition 13, we have *M*<sup>−</sup>(*x*) ≤ *α*(*x*) ≤ *M*<sup>+</sup>(*x*) and *N*<sup>−</sup>(*x*) ≤ *β*(*x*) ≤ *N*<sup>+</sup>(*x*), together with *α*(*x*) ∉ (*M*<sup>−</sup>(*x*), *M*<sup>+</sup>(*x*)) and *β*(*x*) ∉ (*N*<sup>−</sup>(*x*), *N*<sup>+</sup>(*x*)), for all *x* ∈ *X*. Thus *α*(*x*) = *M*<sup>−</sup>(*x*) or *α*(*x*) = *M*<sup>+</sup>(*x*), and *β*(*x*) = *N*<sup>−</sup>(*x*) or *β*(*x*) = *N*<sup>+</sup>(*x*). Hence *α*(*x*) ∈ *U*(*M*) ∪ *L*(*M*) and *β*(*x*) ∈ *U*(*N*) ∪ *L*(*N*) for all *x* ∈ *X*.

**Theorem 3.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *be a CIS in X. If* **A** *is an ICIS (respectively, an ECIS), then* **A**<sup>*c*</sup> *is an ICIS (respectively, an ECIS).*

**Proof.** Since **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ is an ICIS in *X*, we have

$$M_A^- \le \alpha_A \le M_A^+ \text{ and } N_A^- \le \beta_A \le N_A^+$$

$$\left(\text{respectively, } \alpha_A \notin (M_A^-, M_A^+) \text{ and } \beta_A \notin (N_A^-, N_A^+)\right).$$

This implies that

$$1 - M_A^+ \le 1 - \alpha_A \le 1 - M_A^- \text{ and } 1 - N_A^+ \le 1 - \beta_A \le 1 - N_A^-$$

$$\left(\text{respectively, } 1 - \alpha_A \notin (1 - M_A^+, 1 - M_A^-) \text{ and } 1 - \beta_A \notin (1 - N_A^+, 1 - N_A^-)\right).$$

Hence **A**<sup>*c*</sup> = ⟨*M<sub>A</sub><sup>c</sup>*/(1 − *α<sub>A</sub>*), *N<sub>A</sub><sup>c</sup>*/(1 − *β<sub>A</sub>*)⟩ is an ICIS (respectively, an ECIS).

We will show (through the following example) that the P-union and P-intersections of ECISs are not necessarily ECISs.

**Example 2.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ECISs in X with M<sub>A</sub>* = [0.1, 0.3]*, α<sub>A</sub>* = 0.5*, N<sub>A</sub>* = [0.4, 0.6]*, β<sub>A</sub>* = 0.2*, M<sub>B</sub>* = [0.4, 0.6]*, α<sub>B</sub>* = 0.2*, N<sub>B</sub>* = [0.1, 0.3]*, and β<sub>B</sub>* = 0.5 *for all x* ∈ *X. Then* **A** ∪*<sup>P</sup>* **B** = ⟨*M<sub>B</sub>*/*α<sub>A</sub>*, *N<sub>B</sub>*/*β<sub>A</sub>*⟩ *and* **A** ∩*<sup>P</sup>* **B** = ⟨*M<sub>A</sub>*/*α<sub>B</sub>*, *N<sub>A</sub>*/*β<sub>B</sub>*⟩*. Hence,* **A** ∪*<sup>P</sup>* **B** *and* **A** ∩*<sup>P</sup>* **B** *are not ECISs.*
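
A quick numeric check of this example (a sketch; `external` is our hypothetical helper for the condition that a fuzzy value lies outside the open membership interval):

```python
def external(M, f):
    """f lies outside the open interval (M-, M+)."""
    return not (M[0] < f < M[1])

MA, aA, NA, bA = (0.1, 0.3), 0.5, (0.4, 0.6), 0.2
MB, aB, NB, bB = (0.4, 0.6), 0.2, (0.1, 0.3), 0.5

# Both A and B are external in each component.
print(external(MA, aA), external(NA, bA))  # True True
print(external(MB, aB), external(NB, bB))  # True True

# P-union: M = rmax = [0.4, 0.6] = M_B, alpha = max = 0.5 = alpha_A.
M_u = (max(MA[0], MB[0]), max(MA[1], MB[1]))
a_u = max(aA, aB)
print(external(M_u, a_u))  # False: 0.5 lies strictly inside (0.4, 0.6)
```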

From the following example, it can easily be seen that the R-union and R-intersection of ICISs need not be ICISs.

**Example 3.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ICISs in X with M<sub>A</sub>* = [0.1, 0.3]*, α<sub>A</sub>* = 0.2*, N<sub>A</sub>* = [0.5, 0.7]*, β<sub>A</sub>* = 0.6*, M<sub>B</sub>* = [0.5, 0.7]*, α<sub>B</sub>* = 0.6*, N<sub>B</sub>* = [0.1, 0.3]*, and β<sub>B</sub>* = 0.2 *for all x* ∈ *X. Then* **A** ∪*<sup>R</sup>* **B** = ⟨*M<sub>B</sub>*/*α<sub>A</sub>*, *N<sub>B</sub>*/*β<sub>A</sub>*⟩ *and* **A** ∩*<sup>R</sup>* **B** = ⟨*M<sub>A</sub>*/*α<sub>B</sub>*, *N<sub>A</sub>*/*β<sub>B</sub>*⟩*. Hence,* **A** ∪*<sup>R</sup>* **B** *and* **A** ∩*<sup>R</sup>* **B** *are not ICISs.*

In the following examples, we will show that the R-union and R-intersection of ECIS may not be ECIS.

#### **Example 4.**


**Theorem 4.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ICISs in X such that* max{*M<sub>A</sub><sup>−</sup>*, *M<sub>B</sub><sup>−</sup>*} ≤ (*α<sub>A</sub>* ∧ *α<sub>B</sub>*) *and* min{*N<sub>A</sub><sup>+</sup>*, *N<sub>B</sub><sup>+</sup>*} ≥ (*β<sub>A</sub>* ∨ *β<sub>B</sub>*)*. Then the R-union and R-intersection of* **A** *and* **B** *are ICISs.*

**Proof. A** and **B** are ICISs; therefore,

$$M_A^- \le \alpha_A \le M_A^+, \quad N_A^- \le \beta_A \le N_A^+,$$

$$M_B^- \le \alpha_B \le M_B^+ \text{ and } N_B^- \le \beta_B \le N_B^+,$$

which implies that

$$(\alpha_A \wedge \alpha_B) \le (M_A \cup M_B)^+ \text{ and } (\beta_A \vee \beta_B) \ge (N_A \cap N_B)^-.$$

It follows that

$$(M_A \cup M_B)^- = \max\{M_A^-, M_B^-\} \le (\alpha_A \wedge \alpha_B) \le (M_A \cup M_B)^+$$

and

$$(N_A \cap N_B)^- \le (\beta_A \vee \beta_B) \le \min\{N_A^+, N_B^+\} = (N_A \cap N_B)^+.$$

Hence, **A** ∪*<sup>R</sup>* **B** is ICIS. Similar arguments work in the case of **A** ∩*<sup>R</sup>* **B**.
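
Theorem 4 can be illustrated numerically. The following sketch uses made-up ICIS values that satisfy the hypothesis and confirms that the R-union stays internal:

```python
def internal(M, f):
    """f lies inside the closed interval [M-, M+]."""
    return M[0] <= f <= M[1]

# Two ICISs (values chosen for illustration only).
MA, aA, NA, bA = (0.2, 0.6), 0.5, (0.1, 0.4), 0.3
MB, aB, NB, bB = (0.3, 0.7), 0.4, (0.2, 0.5), 0.35

# Hypothesis of Theorem 4:
# max{M_A^-, M_B^-} <= alpha_A ^ alpha_B and min{N_A^+, N_B^+} >= beta_A v beta_B.
assert max(MA[0], MB[0]) <= min(aA, aB)
assert min(NA[1], NB[1]) >= max(bA, bB)

# R-union: <rmax(M)/min(alpha), rmin(N)/max(beta)>.
M_u = (max(MA[0], MB[0]), max(MA[1], MB[1]))
N_u = (min(NA[0], NB[0]), min(NA[1], NB[1]))
print(internal(M_u, min(aA, aB)), internal(N_u, max(bA, bB)))  # True True
```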

Given two CISs **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ and **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ in *X*, if we exchange *α<sub>A</sub>* with *α<sub>B</sub>* and *β<sub>A</sub>* with *β<sub>B</sub>*, we denote the resulting CISs by **A**<sup>∗</sup> = ⟨*M<sub>A</sub>*/*α<sub>B</sub>*, *N<sub>A</sub>*/*β<sub>B</sub>*⟩ and **B**<sup>∗</sup> = ⟨*M<sub>B</sub>*/*α<sub>A</sub>*, *N<sub>B</sub>*/*β<sub>A</sub>*⟩, respectively.

The next example shows that, for any two ECISs in *X*, **A**∗ and **B**∗ need not be ICISs in *X*.

#### **Example 5.**


**Table 1.** CISs **A** and **B**.


We will show through the following example that the P-union of two ECISs in *X* may not be an ICIS in *X*.

**Example 6.** *Consider again the two ECISs* **A** *and* **B** *shown in Table 1. In this case,* **A** ∪*<sup>P</sup>* **B** *is not an ICIS in X because* (*α<sub>A</sub>* ∨ *α<sub>B</sub>*)(*a*) = 0.65 ∉ [0.4, 0.6] = *M<sub>A</sub>* ∪ *M<sub>B</sub>* *and* (*β<sub>A</sub>* ∧ *β<sub>B</sub>*)(*a*) = 0.25 ∉ [0.1, 0.2] = *N<sub>A</sub>* ∩ *N<sub>B</sub>*.

In the following result, we will find a condition for the P-union of two ECISs to be an ICIS.

**Theorem 5.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ECISs in X. If* **A**<sup>∗</sup> = ⟨*M<sub>A</sub>*/*α<sub>B</sub>*, *N<sub>A</sub>*/*β<sub>B</sub>*⟩ *and* **B**<sup>∗</sup> = ⟨*M<sub>B</sub>*/*α<sub>A</sub>*, *N<sub>B</sub>*/*β<sub>A</sub>*⟩ *are ICISs in X, then* **A** ∪*<sup>P</sup>* **B** *and* **A** ∩*<sup>P</sup>* **B** *are ICISs in X.*

**Proof.** Since **A** and **B** are ECISs in *X*, we have

$$\alpha_A \notin (M_A^-, M_A^+), \quad \beta_A \notin (N_A^-, N_A^+),$$

$$\alpha_B \notin (M_B^-, M_B^+) \text{ and } \beta_B \notin (N_B^-, N_B^+)$$

for all *x* ∈ *X*. Since **A**<sup>∗</sup> and **B**<sup>∗</sup> are ICISs in *X*, we have

$$M_A^- \le \alpha_B \le M_A^+, \quad N_A^- \le \beta_B \le N_A^+,$$

$$M_B^- \le \alpha_A \le M_B^+ \text{ and } N_B^- \le \beta_A \le N_B^+$$

for all *x* ∈ *X*. Thus, we can consider the following cases for any *x* ∈ *X*.

**Case** 1

$$\alpha_A \le M_A^- \le \alpha_B \le M_A^+, \quad \beta_A \le N_A^- \le \beta_B \le N_A^+,$$

$$\alpha_B \le M_B^- \le \alpha_A \le M_B^+ \text{ and } \beta_B \le N_B^- \le \beta_A \le N_B^+.$$

**Case** 2

$$M_A^- \le \alpha_B \le M_A^+ \le \alpha_A, \quad N_A^- \le \beta_B \le N_A^+ \le \beta_A,$$

$$M_B^- \le \alpha_A \le M_B^+ \le \alpha_B \text{ and } N_B^- \le \beta_A \le N_B^+ \le \beta_B.$$

**Case** 3

$$\alpha_A \le M_A^- \le \alpha_B \le M_A^+, \quad \beta_A \le N_A^- \le \beta_B \le N_A^+,$$

$$M_B^- \le \alpha_A \le M_B^+ \le \alpha_B \text{ and } N_B^- \le \beta_A \le N_B^+ \le \beta_B.$$

**Case** 4

$$M_A^- \le \alpha_B \le M_A^+ \le \alpha_A, \quad N_A^- \le \beta_B \le N_A^+ \le \beta_A,$$

$$\alpha_B \le M_B^- \le \alpha_A \le M_B^+ \text{ and } \beta_B \le N_B^- \le \beta_A \le N_B^+.$$

The arguments in all cases are similar; therefore, we consider the first case. We have *α<sub>A</sub>* = *M<sub>A</sub><sup>−</sup>* = *M<sub>B</sub><sup>−</sup>* = *α<sub>B</sub>* and *β<sub>A</sub>* = *N<sub>A</sub><sup>−</sup>* = *N<sub>B</sub><sup>−</sup>* = *β<sub>B</sub>*. Since **A**<sup>∗</sup> and **B**<sup>∗</sup> are ICISs in *X*, we have

$$\alpha_B \le M_A^+, \quad \alpha_A \le M_B^+, \quad \beta_B \le N_A^+ \text{ and } \beta_A \le N_B^+.$$

It follows that

$$(M_A \cup M_B)^- = \max\{M_A^-, M_B^-\} = (\alpha_A \vee \alpha_B)$$

$$\le \max\{M_A^+, M_B^+\} = (M_A \cup M_B)^+ \text{ and}$$

$$(N_A \cap N_B)^- = \min\{N_A^-, N_B^-\} = (\beta_A \wedge \beta_B)$$

$$\le \min\{N_A^+, N_B^+\} = (N_A \cap N_B)^+.$$

Hence, **A** ∪*<sup>P</sup>* **B** is ICIS. Similar steps can be used for **A** ∩*<sup>P</sup>* **B**.

From Example 2, it can be easily seen that the P-union and P-intersections of ECISs are not necessarily the ECISs in *X*. In the next result, we will show when the P-union and P-intersection of two ECISs are ECISs in *X*.

**Theorem 6.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ECISs in X such that*

$$\min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\} \ge (\alpha_A \wedge \alpha_B) > \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\} \text{ and}$$

$$\min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} > (\beta_A \vee \beta_B) \ge \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\},$$

*then* **A** ∩*<sup>P</sup>* **B** *is ECIS in X.*

**Proof.** Take

$$\alpha_x = \min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\},$$

$$\beta_x = \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\},$$

$$\alpha_x^* = \min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} \text{ and}$$

$$\beta_x^* = \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\};$$

then *α<sub>x</sub>* is one of *M<sub>A</sub><sup>−</sup>*, *M<sub>B</sub><sup>−</sup>*, *M<sub>A</sub><sup>+</sup>*, *M<sub>B</sub><sup>+</sup>*, and *α<sub>x</sub><sup>∗</sup>* is one of *N<sub>A</sub><sup>−</sup>*, *N<sub>B</sub><sup>−</sup>*, *N<sub>A</sub><sup>+</sup>*, *N<sub>B</sub><sup>+</sup>*. We consider the case when *α<sub>x</sub>* = *M<sub>A</sub><sup>−</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>−</sup>*, or *α<sub>x</sub>* = *M<sub>A</sub><sup>+</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>+</sup>*. Similar arguments work for all remaining cases.

If *α<sub>x</sub>* = *M<sub>A</sub><sup>−</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>−</sup>*, then

$$M_B^- \le M_B^+ \le M_A^- \le M_A^+ \text{ and}$$

$$N_B^- \le N_B^+ \le N_A^- \le N_A^+,$$

and so *β<sub>x</sub>* = *M<sub>B</sub><sup>+</sup>* and *β<sub>x</sub><sup>∗</sup>* = *N<sub>B</sub><sup>+</sup>*. Thus, *M<sub>B</sub><sup>−</sup>* = (*M<sub>A</sub>* ∩ *M<sub>B</sub>*)<sup>−</sup> ≤ (*M<sub>A</sub>* ∩ *M<sub>B</sub>*)<sup>+</sup> = *M<sub>B</sub><sup>+</sup>* = *β<sub>x</sub>* < (*α<sub>A</sub>* ∧ *α<sub>B</sub>*) and *N<sub>A</sub><sup>−</sup>* = (*N<sub>A</sub>* ∪ *N<sub>B</sub>*)<sup>−</sup> = *α<sub>x</sub><sup>∗</sup>* > (*β<sub>A</sub>* ∨ *β<sub>B</sub>*), and, hence,

$$(\alpha_A \wedge \alpha_B) \notin ((M_A \cap M_B)^-, (M_A \cap M_B)^+) \text{ and}$$

$$(\beta_A \vee \beta_B) \notin ((N_A \cup N_B)^-, (N_A \cup N_B)^+).$$

If *α<sub>x</sub>* = *M<sub>A</sub><sup>+</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>+</sup>*, then

$$M_B^- \le M_A^+ \le M_B^+ \text{ and } N_B^- \le N_A^+ \le N_B^+,$$

so

$$\beta_x = \max\{M_A^-, M_B^-\} \text{ and } \beta_x^* = \max\{N_A^-, N_B^-\}.$$


Assume that *β<sub>x</sub>* = *M<sub>A</sub><sup>−</sup>* and *β<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>−</sup>*; then

$$M_B^- \le M_A^- < (\alpha_A \wedge \alpha_B) \le M_A^+ \le M_B^+ \text{ and}$$

$$N_B^- \le N_A^- \le (\beta_A \vee \beta_B) < N_A^+ \le N_B^+.$$

From the above inequalities, we have the following two cases.

Case-1

$$M_B^- \le M_A^- < (\alpha_A \wedge \alpha_B) < M_A^+ \le M_B^+ \text{ and}$$

$$N_B^- \le N_A^- < (\beta_A \vee \beta_B) < N_A^+ \le N_B^+.$$

Case-2

$$M_B^- \le M_A^- < (\alpha_A \wedge \alpha_B) = M_A^+ \le M_B^+ \text{ and}$$

$$N_B^- \le N_A^- = (\beta_A \vee \beta_B) \le N_A^+ \le N_B^+.$$

Case-1 contradicts the fact that the CISs **A** and **B** are ECISs. Case-2 implies that

$$(\alpha_A \wedge \alpha_B) \notin ((M_A \cap M_B)^-, (M_A \cap M_B)^+) \text{ and}$$

$$(\beta_A \vee \beta_B) \notin ((N_A \cup N_B)^-, (N_A \cup N_B)^+),$$

since

$$(\alpha_A \wedge \alpha_B) = M_A^+ = (M_A \cap M_B)^+ \text{ and}$$

$$(\beta_A \vee \beta_B) = N_A^- = (N_A \cup N_B)^-.$$

Assume that *β<sub>x</sub>* = *M<sub>B</sub><sup>−</sup>* and *β<sub>x</sub><sup>∗</sup>* = *N<sub>B</sub><sup>−</sup>*; then

$$M_A^- \le M_B^- < (\alpha_A \wedge \alpha_B) \le M_A^+ \le M_B^+ \text{ and}$$

$$N_A^- \le N_B^- \le (\beta_A \vee \beta_B) \le N_A^+ \le N_B^+.$$

We now have two cases.

Case-1

$$M_A^- \le M_B^- < (\alpha_A \wedge \alpha_B) < M_A^+ \le M_B^+ \text{ and}$$

$$N_A^- \le N_B^- < (\beta_A \vee \beta_B) < N_A^+ \le N_B^+.$$

Case-2

$$M_A^- \le M_B^- < (\alpha_A \wedge \alpha_B) = M_A^+ \le M_B^+ \text{ and}$$

$$N_A^- \le N_B^- = (\beta_A \vee \beta_B) < N_A^+ \le N_B^+.$$

Case-1 contradicts the fact that **A** and **B** are ECISs. Case-2 implies that

$$(\alpha_A \wedge \alpha_B) \notin ((M_A \cap M_B)^-, (M_A \cap M_B)^+) \text{ and}$$

$$(\beta_A \vee \beta_B) \notin ((N_A \cup N_B)^-, (N_A \cup N_B)^+),$$

since

$$(\alpha_A \wedge \alpha_B) = M_A^+ = (M_A \cap M_B)^+ \text{ and}$$

$$(\beta_A \vee \beta_B) = N_B^- = (N_A \cup N_B)^-.$$

Similar results can be obtained if we assume

$$\beta_x = M_B^- \text{ and } \beta_x^* = N_A^-, \text{ or } \beta_x = M_A^- \text{ and } \beta_x^* = N_B^-.$$

Hence, the P-intersection of **A** and **B** is ECIS in *X*.

**Theorem 7.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ECISs in X such that*

$$\min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\} > (\alpha_A \vee \alpha_B) \ge \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\} \text{ and}$$

$$\min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} \ge (\beta_A \wedge \beta_B) > \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\},$$

*then* **A** ∪*<sup>P</sup>* **B** *is ECIS in X.*

**Proof.** The proof is similar to that of Theorem 6; therefore, we omit the details.

**Example 7.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ECISs in X* = {*a*, *b*, *c*}*, as shown in Table 2. Then* **A** *and* **B** *always satisfy the following conditions.*

$$\min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\} = (\alpha_A \vee \alpha_B)$$

$$> \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\} \text{ and}$$

$$\min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} > (\beta_A \wedge \beta_B)$$

$$= \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\}.$$

*However, the P-union of* **A** *and* **B** *is not an ECIS because* (*α<sub>A</sub>* ∨ *α<sub>B</sub>*)(*a*) = 0.2 ∈ [0.1, 0.3] = [(*M<sub>A</sub>* ∪ *M<sub>B</sub>*)<sup>−</sup>(*a*), (*M<sub>A</sub>* ∪ *M<sub>B</sub>*)<sup>+</sup>(*a*)] *and* (*β<sub>A</sub>* ∧ *β<sub>B</sub>*)(*a*) = 0.45 ∈ [0.4, 0.5] = [(*N<sub>A</sub>* ∩ *N<sub>B</sub>*)<sup>−</sup>(*a*), (*N<sub>A</sub>* ∩ *N<sub>B</sub>*)<sup>+</sup>(*a*)].

**Table 2.** CISs **A** and **B**.


From Example 4, it can easily be observed that the R-union and R-intersection of ECISs may not be ECISs in *X*. In the next result, we give conditions under which the R-union of two ECISs is an ECIS in *X*.

**Theorem 8.** *Let* **A** = ⟨*M<sub>A</sub>*/*α<sub>A</sub>*, *N<sub>A</sub>*/*β<sub>A</sub>*⟩ *and* **B** = ⟨*M<sub>B</sub>*/*α<sub>B</sub>*, *N<sub>B</sub>*/*β<sub>B</sub>*⟩ *be two ECISs in X such that*

$$\min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\} > (\alpha_A \wedge \alpha_B) \ge \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\} \text{ and}$$

$$\min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} \ge (\beta_A \vee \beta_B) > \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\},$$

*then* **A** ∪*<sup>R</sup>* **B** *is ECIS in X.*

**Proof.** Take

$$\alpha_x = \min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\},$$

$$\beta_x = \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\},$$

$$\alpha_x^* = \min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} \text{ and}$$

$$\beta_x^* = \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\};$$

then *α<sub>x</sub>* is one of *M<sub>A</sub><sup>−</sup>*, *M<sub>B</sub><sup>−</sup>*, *M<sub>A</sub><sup>+</sup>*, *M<sub>B</sub><sup>+</sup>*, and *α<sub>x</sub><sup>∗</sup>* is one of *N<sub>A</sub><sup>−</sup>*, *N<sub>B</sub><sup>−</sup>*, *N<sub>A</sub><sup>+</sup>*, *N<sub>B</sub><sup>+</sup>*. We consider the case when *α<sub>x</sub>* = *M<sub>B</sub><sup>−</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>B</sub><sup>−</sup>*, or *α<sub>x</sub>* = *M<sub>B</sub><sup>+</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>B</sub><sup>+</sup>*. Similar arguments work for all remaining cases.

If *α<sub>x</sub>* = *M<sub>B</sub><sup>−</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>B</sub><sup>−</sup>*, then

$$M_A^- \le M_A^+ \le M_B^- \le M_B^+ \text{ and}$$

$$N_A^- \le N_A^+ \le N_B^- \le N_B^+,$$

so *β<sub>x</sub>* = *M<sub>A</sub><sup>+</sup>* and *β<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>+</sup>*. Thus,

$$M_B^- = (M_A \cup M_B)^- = \alpha_x > (\alpha_A \wedge \alpha_B) \text{ and}$$

$$N_A^+ = (N_A \cap N_B)^+ = \beta_x^* < (\beta_A \vee \beta_B)$$

and, hence,

$$(\alpha_A \wedge \alpha_B) \notin ((M_A \cup M_B)^-, (M_A \cup M_B)^+) \text{ and}$$

$$(\beta_A \vee \beta_B) \notin ((N_A \cap N_B)^-, (N_A \cap N_B)^+).$$

If *α<sub>x</sub>* = *M<sub>B</sub><sup>+</sup>* and *α<sub>x</sub><sup>∗</sup>* = *N<sub>B</sub><sup>+</sup>*, then

$$M_A^- \le M_B^+ \le M_A^+ \text{ and } N_A^- \le N_B^+ \le N_A^+,$$

and so

$$\beta_x = \max\{M_A^-, M_B^-\} \text{ and } \beta_x^* = \max\{N_A^-, N_B^-\}.$$

Assume that *β<sub>x</sub>* = *M<sub>A</sub><sup>−</sup>* and *β<sub>x</sub><sup>∗</sup>* = *N<sub>A</sub><sup>−</sup>*; then

$$M_B^- \le M_A^- < (\alpha_A \wedge \alpha_B) \le M_B^+ \le M_A^+ \text{ and}$$

$$N_B^- \le N_A^- < (\beta_A \vee \beta_B) \le N_B^+ \le N_A^+.$$

We have the following two cases.

Case-1

$$M_B^- \le M_A^- < (\alpha_A \wedge \alpha_B) < M_B^+ \le M_A^+ \text{ and}$$

$$N_B^- \le N_A^- < (\beta_A \vee \beta_B) < N_B^+ \le N_A^+.$$

Case-2

$$M_B^- \le M_A^- = (\alpha_A \wedge \alpha_B) \le M_B^+ \le M_A^+ \text{ and}$$

$$N_B^- \le N_A^- < (\beta_A \vee \beta_B) = N_B^+ \le N_A^+.$$

Case-1 contradicts the fact that CISs **A** and **B** are ECISs. From Case-2, it implies that

$$(\alpha\_A \wedge \alpha\_B) \notin ((M\_A \cup M\_B)^{-}, (M\_A \cup M\_B)^{+}) \text{and}$$

$$(\beta\_A \vee \beta\_B) \notin ((N\_A \cap N\_B)^{-}, (N\_A \cap N\_B)^{+})$$

since

$$(\alpha\_A \wedge \alpha\_B) = M\_A^- = (M\_A \cup M\_B)^+ \, and$$

$$(\beta\_A \vee \beta\_B) = N\_B^+ = (N\_A \cap N\_B)^+.$$

Assume that $\beta_x = M_B^-$ and $\beta_x^* = N_B^-$; then

$$M_A^- \le M_B^- \le (\alpha_A \wedge \alpha_B) \le M_B^+ \le M_A^+ \quad \text{and}$$

$$N_A^- \le N_B^- \le (\beta_A \vee \beta_B) \le N_B^+ \le N_A^+.$$

We have two cases.

Case-1:

$$M_A^- \le M_B^- < (\alpha_A \wedge \alpha_B) < M_B^+ \le M_A^+ \quad \text{and}$$

$$N_A^- \le N_B^- < (\beta_A \vee \beta_B) < N_B^+ \le N_A^+.$$

Case-2:

$$M_A^- \le M_B^- = (\alpha_A \wedge \alpha_B) < M_B^+ \le M_A^+ \quad \text{and}$$

$$N_A^- \le N_B^- < (\beta_A \vee \beta_B) = N_B^+ \le N_A^+.$$

Case-1 contradicts the fact that the CISs **A** and **B** are ECISs. Case-2 implies that

$$(\alpha_A \wedge \alpha_B) \notin \left( (M_A \cup M_B)^-, (M_A \cup M_B)^+ \right) \quad \text{and}$$

$$(\beta_A \vee \beta_B) \notin \left( (N_A \cap N_B)^-, (N_A \cap N_B)^+ \right),$$

since

$$(\alpha_A \wedge \alpha_B) = M_B^- = (M_A \cup M_B)^- \quad \text{and}$$

$$(\beta_A \vee \beta_B) = N_B^+ = (N_A \cap N_B)^+.$$

Similar results can be obtained if we assume

$$\beta_x = M_B^- \ \text{and} \ \beta_x^* = N_A^-, \quad \text{or} \quad \beta_x = M_A^- \ \text{and} \ \beta_x^* = N_B^-.$$

Hence, **A** ∪*<sup>R</sup>* **B** is an ECIS in *X*.

**Example 8.** *Let* **A** = ⟨*MA*/*αA*, *NA*/*βA*⟩ *and* **B** = ⟨*MB*/*αB*, *NB*/*βB*⟩ *be two ECISs in a set X* = {*a*, *b*, *c*}*, as shown in Table 3. Then it is easy to see that* **A** *and* **B** *satisfy the conditions*

$$\min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\} = (\alpha_A \wedge \alpha_B)$$

$$> \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\} \quad \text{and}$$

$$\min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} > (\beta_A \vee \beta_B)$$

$$= \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\}.$$

*However,* **A** ∪*<sup>R</sup>* **B** *is not an ECIS because*

$$(\alpha_A \wedge \alpha_B)(a) = 0.7 \in [0.6, 0.8] = [(M_A \cup M_B)^-(a), (M_A \cup M_B)^+(a)] \quad \text{and}$$

$$(\beta_A \vee \beta_B)(a) = 0.1 \in [0.05, 0.15] = [(N_A \cap N_B)^-(a), (N_A \cap N_B)^+(a)].$$

**Table 3.** CISs **A** and **B**.


The following theorems can be easily verified and proved; therefore, we omit the details.

**Theorem 9.** *Let* **A** = ⟨*MA*/*αA*, *NA*/*βA*⟩ *and* **B** = ⟨*MB*/*αB*, *NB*/*βB*⟩ *be two ECISs in X, such that*

$$\min\{\max\{M_A^+, M_B^-\}, \max\{M_A^-, M_B^+\}\} \ge (\alpha_A \vee \alpha_B)$$

$$> \max\{\min\{M_A^+, M_B^-\}, \min\{M_A^-, M_B^+\}\} \quad \text{and}$$

$$\min\{\max\{N_A^+, N_B^-\}, \max\{N_A^-, N_B^+\}\} > (\beta_A \wedge \beta_B)$$

$$\ge \max\{\min\{N_A^+, N_B^-\}, \min\{N_A^-, N_B^+\}\},$$

*then* **A** ∩*<sup>R</sup>* **B** *is also an ECIS in X.*

**Theorem 10.** *Let* **A** = ⟨*MA*/*αA*, *NA*/*βA*⟩ *and* **B** = ⟨*MB*/*αB*, *NB*/*βB*⟩ *be two ICISs in X. If*

$$(\alpha_A \wedge \alpha_B) \le \max\{M_A^-, M_B^-\} \quad \text{and}$$

$$(\beta_A \vee \beta_B) \ge \min\{N_A^+, N_B^+\},$$

*then* **A** ∪*<sup>R</sup>* **B** *is an ECIS in X.*

**Theorem 11.** *Let* **A** = ⟨*MA*/*αA*, *NA*/*βA*⟩ *and* **B** = ⟨*MB*/*αB*, *NB*/*βB*⟩ *be two ICISs in X. If*

$$(\alpha_A \vee \alpha_B) \ge \min\{M_A^+, M_B^+\} \quad \text{and}$$

$$(\beta_A \wedge \beta_B) \le \max\{N_A^-, N_B^-\},$$

*then* **A** ∩*<sup>R</sup>* **B** *is an ECIS in X.*

#### **4. MCDM Method Based on Cubic Intuitionistic Sets**

In this section, we will apply the proposed operations to deal with the MCDM problems using CISs.

Let *A* = {*A*1, *A*2, ... , *Am*} be a set of alternatives, *C* = {*C*1, *C*2, ... , *Cn*} be a set of criteria, and *E* = {*e*1, *e*2, ... , *eK*} be a set of experts. Suppose each alternative *Ai* (*i* = 1, 2, ... , *m*) is assessed by the expert *ek* (*k* = 1, 2, ... , *K*) with respect to the criteria *Cj* (*j* = 1, 2, ... , *n*) using CISs. The proposed MCDM method is based on the following steps.
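As a rough illustration of Steps 3–5, the C sketch below sums a score matrix row-wise into preference values and ranks the alternatives in non-increasing order. The dimensions and the helper names (`preference`, `rank`) are illustrative assumptions; the actual score function from Definition 9 is not reproduced here.

```c
#include <stdio.h>

#define M 4 /* alternatives */
#define N 3 /* criteria */

/* Step 4: preference value P(A_i) as the sum of the score values of
 * alternative A_i over all criteria (one row of the score matrix). */
double preference(const double scores[N]) {
    double p = 0.0;
    for (int j = 0; j < N; j++)
        p += scores[j];
    return p;
}

/* Step 5: rank the alternative indices in non-increasing order of their
 * preference values (simple selection sort on indices). */
void rank(const double p[M], int order[M]) {
    for (int i = 0; i < M; i++)
        order[i] = i;
    for (int i = 0; i < M; i++)
        for (int j = i + 1; j < M; j++)
            if (p[order[j]] > p[order[i]]) {
                int t = order[i];
                order[i] = order[j];
                order[j] = t;
            }
}
```

With the preference values reported later in this section (0.5111, 0.1222, 0.6500, 0.4000), such a ranking would place the third alternative first.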


#### *An Application Example*

Let us suppose that a technical committee composed of three technicians/experts *E* = {*e*1,*e*2,*e*3} wishes to select the best available washing machine on the market. Suppose there are four types of washing machines *A* = {*A*1, *A*2, *A*3, *A*4} available on the market, and the experts are requested to select the best one amongst the four with respect to the criteria set *C* = {*C*<sup>1</sup> = eco-friendly, *C*<sup>2</sup> = capacity, *C*<sup>3</sup> = price}. Suppose the expert *ek* (*k* = 1, 2, 3) assessed each alternative *Ai* (*i* = 1, 2, ... , 4) under the criteria *Cj* (*j* = 1, 2, 3) by using CISs. We now proceed with the following steps.

**Step 1** According to the experts' opinions, the individual decision matrices *R*1, *R*2, *R*<sup>3</sup> are constructed, as shown in Tables 4–6.


**Table 4.** Decision matrix *R*<sup>1</sup> provided by expert *e*1.

**Table 5.** Decision matrix *R*<sup>2</sup> provided by expert *e*2.


**Table 6.** Decision matrix *R*<sup>3</sup> provided by expert *e*3.


**Step 2** The aggregated decision matrix $R = (r_{ij})_{4 \times 3}$ is calculated with the help of the proposed operation (P-union) as introduced in Definition 12, where $r_{ij} = \bigcup^{P}_{k=1,2,3} r^{k}_{ij}$.

The aggregated decision matrix *R* is shown in Table 7.

**Table 7.** Aggregated decision matrix *R* by applying the P-union operation.


**Step 3** By using Definition 9, we will calculate the score value of each *rij* of the aggregated decision matrix *R*. The matrix of the score values of the elements of *R* is shown in Table 8.

**Table 8.** Score values of the aggregated decision matrix.


**Steps 4 and 5** Finally, the preference value $P(A_i)$, $i = 1, 2, \ldots, 4$, of each alternative is calculated, where $P(A_i) = \sum_{j=1}^{3} r_{ij}$, i.e., the sum of the score values of alternative $A_i$ over all criteria. The preference values of the alternatives obtained by using the P-union operation are given below:

$$P(A\_1) = 0.5111, P(A\_2) = 0.1222, P(A\_3) = 0.6500, P(A\_4) = 0.4000.$$

We can see that the ranking order of the alternatives, according to the non-increasing order of their preference values, is $A_3 \succ A_1 \succ A_4 \succ A_2$. Similarly, the preference value of each alternative by using the R-union operation is calculated and given as follows:

$$P(A\_1) = 0.2778, P(A\_2) = -0.0444, P(A\_3) = 0.4389, P(A\_4) = 0.2333$$

In this case, the ranking order of the alternatives is $A_3 \succ A_1 \succ A_4 \succ A_2$.

We can observe that the ranking order of the alternatives obtained by using the R-union operation is exactly the same as that obtained with the P-union operation, which shows the robustness of the proposed approach. It is also easy to see that using the P-intersection and R-intersection operations discussed in Definition 12 yields the reverse of the ranking orders obtained with the P-union and R-union operations, respectively.

#### **5. Conclusions**

In this research work, we introduced a new modified form of CIS and discussed some of its related properties. We further introduced two types of CISs, i.e., ICIS and ECIS. The P-(R-) order, P-(R-) union, and P-(R-) intersection of CISs, together with some useful properties, were also discussed with the necessary examples. As a supplement, we proved that the P-union and P-intersection of ICISs are also ICISs. Some conditions for the P-(R-) union and P-(R-) intersection of two ECISs to be ICISs were also provided in this paper, as were a few conditions for the P-(R-) union and P-(R-) intersection of two ECISs to be ECISs. To check the effectiveness and validity of the proposed operations, we provided an application example at the end by solving an MCDM problem.

In future work, more research can be conducted regarding the intuitionistic cubic soft set and its application in information science and knowledge systems. We intend to apply the intuitionistic cubic soft sets to algebraic structures.

**Author Contributions:** Conceptualization, S.F., T.R., S.Z., H.S. and W.S.; methodology, S.F., T.R., S.Z., H.S. and W.S.; software, S.F., T.R., S.Z., H.S. and W.S.; validation, S.F., T.R., S.Z., H.S. and W.S.; formal analysis, S.F., T.R., S.Z., H.S. and W.S.; investigation, S.F., T.R., S.Z., H.S. and W.S.; resources, S.F., T.R., S.Z., H.S. and W.S.; data curation, S.F., T.R., S.Z., H.S. and W.S.; writing—original draft preparation, S.F., T.R., S.Z., H.S. and W.S.; writing—review and editing, S.F., T.R., S.Z., H.S. and W.S.; visualization, S.F., T.R., S.Z., H.S. and W.S.; supervision, S.F., T.R., S.Z., H.S. and W.S.; project administration, S.F., T.R., S.Z., H.S. and W.S.; funding acquisition, S.F., T.R., S.Z., H.S. and W.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by the National Science Centre, grants 2021/41/B/HS4/01296 (W.S.) and 2022/01/4/ST6/00028 (G.S.).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to thank the editor and the anonymous reviewers, whose insightful comments and constructive suggestions helped us to significantly improve the quality of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Chunlai Du 1, Zhijian Cui 1, Yanhui Guo 2,\*, Guizhi Xu <sup>1</sup> and Zhongru Wang 1,3**


**Abstract:** Uncontrolled heap memory consumption, a kind of critical software vulnerability, is utilized by attackers to consume a large amount of heap memory and consequently trigger crashes. There have been few works on the vulnerability fuzzing of heap consumption, and most of them, such as MemLock and PerfFuzz, fail to consider the influence of data flow. We propose a heap memory consumption guided fuzzing model named MemConFuzz. It extracts the locations of heap operations and data-dependent functions through static data flow analysis. Based on the data dependency, we propose a seed selection algorithm in fuzzing that assigns more energy to the samples with higher priority scores. The experimental results show that MemConFuzz has advantages over AFL, MemLock, and PerfFuzz, exploiting more heap memory consumption vulnerabilities in less time.

**Keywords:** fuzzing; memory consumption; data flow; taint analysis

**MSC:** 90C70

#### **1. Introduction**

Fuzzing is a kind of random testing technique widely used to discover vulnerabilities in computer programs. Blind sample-mutation fuzzing models and coverage-guided fuzzing models fail to select interesting seeds and waste testing time. Many fuzzing models are currently guided by exploring ways to improve path coverage; it is believed that the more code blocks that can be covered, the more likely a potential vulnerability will be triggered. Many state-of-the-art fuzzing models typically use information from the control flow graph of the program under test (PUT) to determine which samples should be selected as seeds for further mutation. Although there has been a lot of research work on memory overflow vulnerability, most of these methods have mainly exploited memory corruption vulnerabilities, such as stack buffer overflow, use-after-free (UAF), out-of-bounds reads, and out-of-bounds writes. Memory corruption occurs when the contents of memory are overwritten due to malicious instructions or normal instructions operating on unexpected data beyond the program's original intention. For example, a buffer overflow occurs when a program tries to copy data into a variable whose required memory length is larger than that of the target. When the corrupted memory contents are later used, the program triggers a crash or executes shellcode. Most fuzzing models of memory corruption vulnerability depend on the control flow, and seldom on the data semantics.

Memory consumption is a different kind of memory vulnerability in contrast to memory corruption, which is more like a logical vulnerability potentially existing in the action sequence of memory allocation and deallocation. With one goal of making more efficient use of the memory, different code segments in general are stored in different memory areas, among which the stack area and heap area are the two most important types of memory areas. In the process of a program running, the stack area grows up or down

**Citation:** Du, C.; Cui, Z.; Guo, Y.; Xu, G.; Wang, Z. MemConFuzz: Memory Consumption Guided Fuzzing with Data Flow Analysis. *Mathematics* **2023**, *11*, 1222. https://doi.org/ 10.3390/math11051222

Academic Editor: Ivan Lorencin

Received: 30 January 2023 Revised: 23 February 2023 Accepted: 27 February 2023 Published: 2 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

by calling subfunctions. It contains local variables, the saved stack register ebp of the parent function, the return address, and the parameters passed from the parent function. Generally, the heap areas are a series of memory blocks allocated and freed by the programs, which are accessed through pointers to the heap blocks. Memory consumption occurs in the process of heap allocation and release. When a program triggers heap memory allocation instructions enough times without deallocating unused memory in time, it is likely to lead to a crash. Uncontrolled heap memory consumption is therefore a critical issue of software security, and can also become an important vulnerability when attackers control the execution flow to consume large amounts of memory and thus launch denial-of-service attacks.

To solve the problems in vulnerability fuzzing of heap consumption, we propose a heap memory consumption guided fuzzing model named MemConFuzz that incorporates data flow analysis. This paper makes the following contributions:


The rest of the paper is organized as follows. Section 2 introduces related work. Section 3 presents the algorithm for extracting locations of heap operations through taint analysis based on data dependency. In Section 4, the proposed MemConFuzz model is described. In Section 5, the experimental process and the results are discussed. Finally, we conclude the paper in Section 6.

#### **2. Related Work**

Methods of discovering vulnerabilities are divided into static techniques and dynamic techniques. Static methods classify the target program against known CVE (Common Vulnerabilities and Exposures) code based on structural or statistical similarity using artificial intelligence technology. Dynamic methods include generation fuzzing, coverage-guided fuzzing, and symbolic execution.

Generation fuzzing adopts a generator to create the required samples by mapping out all possible fields of the target program's input. The generator then separately mutates each of these fields to potentially cause crashes. In the generating process, these methods may produce a large number of invalid samples that are rejected by the program because they do not follow the correct format. Coverage-guided fuzzing models integrate instrumentation into the target program and then trace the running information. To discover special target areas in a program, directed greybox fuzzing was proposed. Symbolic execution analyzes the target program to determine which inputs cause each part of the program to execute. Through symbolic execution, the samples required to execute the constrained code path and reach the target basic block are solved for by an SMT (Satisfiability Modulo Theories) solver.

#### *2.1. Static Techniques Based on Artificial Intelligence*

In research on vulnerability discovery, the bottlenecks relate to how to generate good samples, how to improve path coverage, and how to provide more knowledge support for dynamic methods. Artificial intelligence has been applied in the field of vulnerability discovery in recent years.

Machine learning, the most important technology of artificial intelligence, attains knowledge about features by analyzing existing vulnerability-related datasets. This knowledge can be used to analyze new objects and thus predict potentially vulnerable locations in a static mode. Machine learning methods can be divided into traditional machine learning, deep learning, and reinforcement learning.

Rajpal [4] used neural networks to learn patterns in past samples in order to highlight useful locations for future mutations, improving on the AFL approach. Samplefuzz [5] combined learning and fuzzing algorithms, using past samples and neural-network-based statistical machine learning to learn a probability distribution over samples and generate grammatically suitable new ones. NEUZZ [6] leveraged neural networks to model the branching behavior of programs, generating interesting seeds by strategically modifying certain bytes of existing samples to trigger edges that had not yet been executed. Angora [7] modeled the target behavior and treated mutation as a search problem: it represented the path from the beginning of the program to a specific branch as a discrete function under path constraints, and then used the gradient descent search algorithm to find a set of inputs that satisfied the constraints and made the program traverse that particular branch. Cheng [8] used RNNs to predict new paths of the program and then fed these paths into a Seq2Seq model, increasing the coverage of samples in PDF, PNG, and TFF formats. SySeVR [9] proposed a systematic framework for using deep learning to discover vulnerabilities; based on Syntax, Semantics, and Vector Representations, it focuses on obtaining program representations that accommodate the syntactic and semantic information pertinent to vulnerabilities. VulDeePecker [10] is a deep learning-based vulnerability detection system that presented some preliminary principles for guiding the practice of applying deep learning to vulnerability detection. μVulDeePecker [11] proposed a deep learning-based system for multiclass vulnerability detection, introducing a concept called code attention to learn local features and pinpoint types of vulnerabilities.

However, most of these works are computationally intensive. The cost is very high because deep learning requires a large amount of data and computing power. The quality and quantity of the training dataset have a direct impact on the accuracy of the trained model, and accurately locating the instructions where a vulnerability occurs remains a key challenge.

#### *2.2. Dynamic Execution Fuzzing Technique*

Fuzzing has gained popularity as a useful and scalable approach for discovering software vulnerabilities. In the process of dynamic execution, that is, the fuzzing loop, the fuzzer generally uses the seed selection algorithm to select favorable seeds based on the feedback information of PUT execution, and then performs seed mutation according to a series of strategies to generate new samples and explore paths of the target program. Fuzzing is widely used to test application software, libraries, kernel codes, protocols, etc. Furthermore, symbolic execution is another important approach that can create a sample corresponding to a specific constraint path by the SMT solver. The following mainly introduces several popular dynamic technologies and methods in fuzzing.

A. Coverage-guided fuzzing

Coverage-guided greybox fuzzing (CGF) is one of the most effective techniques for discovering vulnerabilities. CGF usually uses path coverage information to guide path exploration. To improve the coverage of fuzzers, researchers have focused on optimizing the coverage guide engine, which is the main component of fuzzers.
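To make the coverage guide engine concrete, here is a simplified sketch of AFL-style edge coverage: each basic block receives an ID, and the edge between consecutive blocks is hashed into a shared bitmap. The map size, block IDs, and helper names are illustrative assumptions; real AFL instruments the compiled program and assigns random block IDs at compile time.

```c
#include <stdint.h>
#include <string.h>

#define MAP_SIZE 65536 /* coverage map size (AFL's default) */

static uint8_t trace_bits[MAP_SIZE]; /* per-run edge hit counts */
static uint16_t prev_loc;            /* shifted ID of the previous block */

/* Called at every basic block: hash the edge (prev -> cur) into the map.
 * Shifting prev_loc keeps the edge A->B distinct from B->A. */
void visit_block(uint16_t cur_loc) {
    trace_bits[(uint16_t)(cur_loc ^ prev_loc)]++;
    prev_loc = cur_loc >> 1;
}

/* Seed selection: an input is interesting if it hit any edge that the
 * accumulated global map has never seen before. */
int has_new_edge(const uint8_t *global_map) {
    for (size_t i = 0; i < MAP_SIZE; i++)
        if (trace_bits[i] && !global_map[i])
            return 1;
    return 0;
}
```

Inputs for which `has_new_edge` returns 1 are kept as seeds and mutated further, which is the feedback loop the optimizations below build upon.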

LibFuzzer [12] provided samples to the library through a specific fuzzing entry point, used LLVM's SanitizerCoverage tool to obtain code coverage, and then performed mutations on the samples to maximize coverage. Honggfuzz [13] proposed a genetic algorithm to efficiently mutate seeds. AFL [1] is a coverage-based fuzzing tool that captures basic block transitions through instrumentation and records the path coverage, thereby adjusting the samples to improve the coverage and increase the probability of finding vulnerabilities. OSS-Fuzz [14] is a common platform built by Google that supports fuzzing engines in combination with sanitizers for fuzzing open-source programs. GRIMOIRE [15], Superion [16], and Zest [17] leveraged the knowledge in highly structured files to generate well-formed samples and traced the coverage of the program to reach deeper levels of code, thereby increasing branch coverage. CollAFL [18] proposed a coverage-sensitive fuzzing scheme that reduces path conflicts and thus improves program branch coverage. TensorFuzz [19] used activation values as the coverage indicator and leveraged a fast approximate nearest-neighbor algorithm to check whether coverage increased and adjust the neural network accordingly. Fw-fuzz [20] obtained the code coverage of firmware programs on MIPS, ARM, PPC, and other architectures through dynamic instrumentation of physical devices, implementing a coverage-oriented firmware protocol fuzzing method. T-Fuzz [21] used coverage to guide input generation; when a new path could not be reached, the sanity check was removed to ensure that the fuzzer could continue to discover new paths and vulnerabilities.

Most coverage-based fuzzers treat all codes of a program as equals. However, some vulnerabilities hide in the corners of the code. As a result, the efficiency of CGF suffers and efforts are wasted on bug-free areas of the code.

B. Symbolic execution

Symbolic execution is a technique to systematically explore the paths of a program, which executes programs with symbolic inputs. When used in the field of discovering vulnerabilities, symbolic execution can generate new input samples that have a path reaching target codes from the initial code by solving path constraints with the SMT solver. It can also be said to deduce input from results under constraints.

Driller [22] leveraged fuzzing and selective concolic execution in a complementary manner. It used Angr [23], which is based on the model popularized and refined by S2E [24] and Mayhem [25], as its dynamic symbolic execution engine for concolic execution. Driller uses selective concolic execution to explore only the paths deemed interesting by the fuzzer and to generate inputs for conditions that the fuzzer cannot satisfy. SAGE [26] performs whitebox rather than blackbox fuzzing, using symbolic execution to record path information and constraint solvers to explore different paths. QSYM [27] adopted a symbolic execution engine for a greybox fuzzing approach to reach deeper code levels of the program. SAFL [28] augmented the AFL fuzzing approach by additionally leveraging KLEE as the symbolic execution engine.

However, the disadvantage of symbolic execution is that the added analysis increases the program's running overhead. In addition, as the depth of a path increases, the path conditions become more and more complex, which also poses a great challenge to the constraint solver.

C. Directed greybox fuzzing

Directed Greybox Fuzzing (DGF) is a fuzzing approach based on the target location or the specific program behavior obtained from the characteristics of a vulnerable code. Unlike CGF, which blindly increases path coverage, DGF aims to reach a predetermined set of places in the code (potentially vulnerable parts) and spends most of the time budget getting there, without wasting resources emphasizing irrelevant parts.

AFLgo [29] and Hawkeye [30] used distance metrics to direct their programs toward user-specified target sites. A disadvantage of the distance-based approach is that it only focuses on the shortest distance, so when there are multiple paths to the same goal, longer paths may be ignored, resulting in lower efficiency. MemFuzz [31] focused on code regions related to memory access and further guided the fuzzer with the memory access information collected during the target program's execution. UAFuzz [32] and UAFL [33] focused on UAF vulnerability-related code regions, leveraging target sequences to find use-after-free vulnerabilities, where memory operations must be performed in a specific order (e.g., allocate, free, then store/write). MemLock [2] mainly focused on memory consumption vulnerabilities, taking memory usage as the fitness goal and searching for uncontrolled memory consumption vulnerabilities, but it did not consider the influence of data flow. AFL-HR [34] triggered

hard-to-show buffer overflow and integer overflow vulnerabilities through coevolution. IOTFUZZER [35] used a lightweight mechanism based on IoT mobile device APP, and proposed a black-box fuzzing model without protocol specifications to discover memory corruption vulnerabilities of IoT devices.

However, these works focus more on specific measurement strategies. When looking for the optimal path, it is easy to get stuck in local blocks of the program and ignore other paths that may lead to vulnerabilities, thus making the fuzzing results inaccurate.

D. Data flow guided fuzzing

Data flow analysis increases the knowledge set of the fuzzer and semantic information of the PUT by adding data flow information, and thus essentially makes the code characteristics and program behavior clear. Data flow analysis methods, such as taint analysis, can reflect the impact of the mutation on samples that could help optimize seed mutation strategy, input generation, and the seed selection process.

SemFuzz [36] tracked the kernel function parameters on which key variables depend through reverse data flow analysis. SeededFuzz [37] proposed a dynamic taint analysis (DTA) approach to identify the seed bytes that influence the values of security-sensitive program sites. TIFF [38] proposed a mutation strategy that infers input types through in-memory data structure identification and DTA, which increased the probability of triggering memory corruption vulnerabilities. However, data flow analysis, especially DTA, often increases runtime overhead and slows down the program while obtaining accurate data information about the PUT. FairFuzz [39] and ProFuzzer [40] both adopted lightweight taint analysis to guide mutation and obtain the taint attributes of variables. GREYONE [41] equipped fuzzing with lightweight fuzzing-based taint inference (FTI) to carry out taint calibration for the branch jump variables of the program control flow: during fuzzing, it mutates specific bytes of the samples and observes the changes in tainted variables to obtain the data dependency relationship between seed bytes and tainted variables.

However, it is impossible to understand the semantics of control flow by simply using data flow for vulnerability discovery, and detailed data flow analysis will increase overhead and reduce fuzzing efficiency. Usually, it can only be used as an important supplementary method of vulnerability discovery based on control flow analysis.

In summary, data flow analysis has become a future research trend, as it provides additional information about the PUT for better guidance of fuzzers. The capabilities of a fuzzer can thus be better exploited for different kinds of vulnerabilities.

#### **3. Enhanced Heap Operation Location Based on Data Semantics**

In order to focus on discovering heap vulnerabilities, we first analyze the program in static mode to identify the locations of heap operations. We not only try to obtain the subsequence of heap operations, but also deduce the relations between heap operations based on data semantics. To achieve this goal, we build a CPG (Code Property Graph) including the CFG and the DDG (Data Dependency Graph). The CFG is used to describe the sequence of operations, while the DDG is used to point out the relationships between heap pointers. Based on the data dependency deduced from the CPG, we propose an algorithm to extract the locations of suspected dangerous heap operation code areas.

#### *3.1. Examples of Memory Consumption Vulnerability*

If an attacker can control the allocation of limited software resources and use a large number of system resources, the attacker may consume all available resources and then trigger a denial of service attack, which belongs to the category of resource consumption vulnerability CWE-400. This kind of vulnerability may prevent authorized users from accessing the software and have harmful effects on the surrounding memory environment. For example, a memory exhaustion attack could render software or even the operating system unusable. Therefore, we focus on the heap memory consumption vulnerability of code blocks, which is divided into two types named uncontrolled memory allocation and memory leaks.

**Definition 1.** *Memory consumption is defined as a vulnerability occupying process memory resources by triggering data storage instructions several times, which affects the normal running of the process and leads to a denial-of-service attack.*

**Definition 2.** *Uncontrolled memory allocation is defined as a vulnerability related to heap memory allocation and release, which allocates memory based on untrusted size values, but does not validate or incorrectly validate the size, and allows any amount of memory to be allocated. Its CWE number is CWE-789.*

**Definition 3.** *Memory leak is defined as a vulnerability also related to heap memory allocation and release, in which the program does not adequately track and free the allocated memory after allocation, and thus slowly consumes the remaining memory. Its CWE number is CWE-401.*

Compared with non-memory consuming vulnerabilities, uncontrolled memory allocation vulnerability and memory leak vulnerability are more difficult to discover because their conditions of triggering crashes are stricter.

CVE-2019-6988 is a public CVE whose vulnerability occurs in the *opj\_calloc* function. The vulnerability arises because the program code lacks both a check on the allocated amount and a security mechanism against specially crafted files. Figure 1 shows the code snippet related to this uncontrolled memory allocation vulnerability (CVE-2019-6988) in the executable program OpenJPEG version 2.3.0. In the source code project, the function *opj\_tcd\_init\_tile* in the file tcd.c is called when OpenJPEG decompresses the specially crafted images. The vulnerability allows a remote attacker to request an excessive memory allocation through the function *opj\_calloc* in the file opj\_malloc.c, which calls the system function *calloc* to allocate a large amount of heap memory, ultimately resulting in a denial-of-service attack due to a lack of free heap memory.

**Figure 1.** Code snippet from tcd.c/tgt.c in OpenJPEG v2.3.0.

As shown in Figure 2, a code snippet containing a memory leak vulnerability exists in a test case from the SAMATE Juliet Test Suite. The case is a memory leak caused by allocating heap memory without releasing it. Specifically, the case uses the function *malloc* on line 5 to allocate memory and checks whether the allocation succeeded on line 7. However, at the end of the function, the allocated memory is never released, eventually resulting in a heap memory leak.

**Figure 2.** Code snippet from Samate Juliet Test Suite.

#### *3.2. Location of Heap Operation Code Based on Data Semantic*

To direct fuzzing toward heap memory consumption vulnerabilities, our first essential goal is to obtain the locations of suspected heap operations. Once identified, these locations are used as guiding factors to optimize the guidance strategy of vulnerability fuzzing, which is our second essential goal.

We first construct the CPG using the static analysis tool Joern. Then, a scheme is proposed to deduce the explicit and implicit semantic relations between heap pointers based on data flows from the CPG. Based on these semantic relations, we analyze abnormal sequences of heap memory allocation and release operations, and thus demarcate the heap operation code areas with suspected heap consumption. These locations serve as an important indicator for selecting seeds from input samples during the fuzzing procedure.

#### 3.2.1. Construct CPG

A CPG is a graph that combines code information at multiple levels and relates the levels to one another. It is obtained by combining the AST (Abstract Syntax Tree), CFG, DDG, and CDG (Control Dependency Graph). Compared with other structures, the CPG contains much richer data and relational information, which enables more complex and detailed static analysis of the program source code.

The CPG is composed of nodes and edges. Nodes represent the components of the PUT, including functions, variables, etc. Each node has a type, such as METHOD for a method, PARAM for a parameter, and LOCAL for a local variable. Directed edges represent relationships between nodes, and labels describe those relationships; for example, a DDG edge from node A to node B indicates that B is data-dependent on A.
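The node/edge structure can be sketched with simple C records. This is a simplified illustration of the idea only, not Joern's actual data model; the tiny graph fragment is modeled on the *opj\_calloc* example discussed in this section.

```c
#include <stddef.h>

/* Simplified sketch of the records a CPG might hold:
   typed nodes and labeled directed edges. */
typedef enum { NODE_METHOD, NODE_PARAM, NODE_LOCAL, NODE_CALL } NodeType;
typedef enum { EDGE_AST, EDGE_CFG, EDGE_DDG, EDGE_CDG } EdgeLabel;

typedef struct { int id; NodeType type; const char *code; } Node;
typedef struct { int from; int to; EdgeLabel label; } Edge;

/* Tiny fragment: a call node that data-depends on two parameters. */
static const Node nodes[] = {
    {0, NODE_METHOD, "opj_calloc"},
    {1, NODE_PARAM,  "t_nmemb"},
    {2, NODE_PARAM,  "t_size"},
    {3, NODE_CALL,   "calloc(t_nmemb, t_size)"},
};
static const Edge edges[] = {
    {1, 3, EDGE_DDG},  /* the call depends on t_nmemb */
    {2, 3, EDGE_DDG},  /* ... and on t_size */
    {0, 3, EDGE_AST},  /* the call sits syntactically inside the method */
};

const char *node_code(int id) { return nodes[id].code; }

/* Number of DDG edges pointing at `node` ("B's data dependency on A"). */
int ddg_in_degree(int node) {
    int n = 0;
    for (size_t i = 0; i < sizeof edges / sizeof edges[0]; i++)
        if (edges[i].to == node && edges[i].label == EDGE_DDG) n++;
    return n;
}
```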

The program files can be parsed with the source code analysis tool Joern to obtain the CPG. To show what useful data can be extracted from the CPG for deriving data relationships, we analyze OpenJPEG v2.3.0, which contains CVE-2019-6988 as introduced in Section 3.1. Because the full graph is very large, we show only the partial CPG in Figure 3. Figure 3a is the full CPG of the *opj\_calloc* function, and Figure 3b zooms in on its *calloc* call. From Figure 3b, we find that the *calloc* call depends on the parameters t\_nmemb and t\_size, and that these parameters are in turn related to the return statement. Combined with the CFG, we can derive the potential dependency between the *calloc* call and the return statement.


**Figure 3.** CPG of OpenJPEG v2.3.0 *opj\_calloc* and *calloc* methods.

Therefore, after constructing the CPG of the program, we analyze data dependencies using taint analysis on CPG and determine the location of heap operations.

3.2.2. Location Extraction Based on Data Dependency

Current research faces the challenge of accurately locating code areas related to heap operations. In this section, we introduce how to obtain data dependencies by taint analysis, an effective technique for data flow analysis. In our work, we use a lightweight static taint analysis method to locate potentially vulnerable code areas.

Because the CFG reflects the control transfers of the code and exposes all branches, most state-of-the-art fuzzers use the CFG as their analysis object. Meanwhile, data flow reflects the direct relationships between variables and function parameters, so some fuzzers also analyze data flow. Data flow and data semantic dependencies help explain the real behavior behind the CFG, so we exploit these advantages to better serve seed selection for discovering our target vulnerability types. Using the CPG for program analysis has further advantages: after Joern parses the source code into a CPG, no further compilation is needed; the CPG is loaded into memory, where we can perform traversal queries, evaluate function leakage problems, perform data flow analysis, and so on.

Dynamic taint analysis usually increases program runtime overhead. We therefore use a lightweight static data flow analysis method to obtain the suspected locations in the target program, reducing the impact on runtime overhead. During static taint analysis, we mainly focus on data dependencies in the CPG. We analyze the CPG to obtain the function points that have data dependencies between program input and memory allocation, that is, taint attribute information, and record them. Specifically, we use static taint analysis to obtain the locations of functions involving heap operations.

Algorithm 1 is proposed to extract locations by taint analysis based on data dependency. Static taint analysis tracks the data flow of heap operation functions such as *malloc*, *calloc*, *realloc*, *free*, *new*, *delete*, and their variants. The source set of the algorithm comprises the parameters of all program methods and all called functions of the program, and the sink set comprises the arguments of the heap functions. During data flow analysis, all relevant nodes from source to sink are traversed, and we use Joern's built-in functions, such as *reachableBy* and *reachableByFlows*, to query the paths from all sources to the sink points in the CPG. Finally, matched functions are collected and duplicates are removed.

**Algorithm 1:** Taint analysis approach for locating potentially vulnerable functions

```
Input: CPG of program under test P
Output: Set of dataflow functions Set_funcs
1  Set_funcs = ∅;
2  Target heap functions funcs ← {malloc, calloc, free, ...};
3  for f in funcs do
4      Source = called functions, methods' params of P in CPG;
5      Sink = args of f in CPG;
6      if Sink dataFlowReachable by Source then
7          Nodes_related = ∅;                 // nodes that are data related
8          Nodes_related ← Nodes_related ∪ Traverse(CPG of P);
9          paths = Query(Nodes_related, CPG); // dataflow paths of heap operations
10         for p in paths do
11             (statements, line_num, funcs_p, source_locations) ← regularMatch(p);
12             if funcs_p not ∅ then
13                 Remove duplicate items in funcs_p;
14                 Set_funcs ← Set_funcs ∪ funcs_p;
15 return Set_funcs;
```
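The reachability test at the core of Algorithm 1 (Joern's *reachableBy*) amounts to a graph search along dataflow edges. A minimal C worklist sketch over a hypothetical six-node graph, not Joern's implementation:

```c
#include <stdbool.h>

/* Is a sink argument reachable from a source along dataflow edges?
   The graph below is hypothetical, for illustration. */
#define N 6
static const int adj[N][N] = {
    {0,1,0,0,0,0},  /* 0: program input     -> 1 */
    {0,0,1,0,0,0},  /* 1: local variable    -> 2 */
    {0,0,0,1,0,0},  /* 2: size expression   -> 3 */
    {0,0,0,0,0,0},  /* 3: malloc arg (sink)      */
    {0,0,0,0,0,1},  /* 4: unrelated value   -> 5 */
    {0,0,0,0,0,0},  /* 5: other sink             */
};

bool dataflow_reachable(int source, int sink) {
    bool seen[N] = {false};
    int stack[N], top = 0;      /* each node is pushed at most once */
    stack[top++] = source;
    seen[source] = true;
    while (top > 0) {
        int u = stack[--top];
        if (u == sink) return true;
        for (int v = 0; v < N; v++)
            if (adj[u][v] && !seen[v]) { seen[v] = true; stack[top++] = v; }
    }
    return false;
}
```

Only sinks reachable from a source survive the check on line 6 of Algorithm 1; everything else is discarded before path extraction.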
Figure 4 shows part of the data flow of the heap memory allocation function obtained through static taint analysis in OpenJPEG v2.3.0: a data flow path to the parameter value of the standard library function *malloc*. Each path contains four pieces of information: the tracked column contains the statements in the queried nodes, the lineNumber column contains the line number in the source code file, the method column shows the method names where the statements are located, and the file column shows the locations of the source code files. To construct the source set, we mark the parameters of all methods and all called functions in the CPG as sources. We find all call-sites of methods named *malloc* in the graph and mark their arguments as sinks. After identification, we obtain a data flow path to *malloc* in our query of OpenJPEG's CPG. Eventually, we collect the dataflow-related functions *jpip\_to\_jp2*, *fread\_jpip*, and *opj\_malloc* from the dataflow path found by static taint analysis.


**Figure 4.** OpenJPEG v2.3.0 partial data flow path.

In summary, the proposed Algorithm 1 analyzes the data flow related to heap memory allocation and release in the program, and obtains the locations, variables, and parameters related to heap operations, which guide seed selection in the subsequent fuzzing process.

#### **4. MemConFuzz Model**

After analyzing the CPG in the static analysis stage, we obtain the function locations related to the data flow of the heap memory allocation and release functions and quantitatively record the sizes of the memory block allocated and released by the heap operations. We feed these back to the fuzzer to prioritize the detection of relevant vulnerable code areas in the fuzzing loop.

Prioritizing the discovery of consumption-type vulnerabilities is a novel contribution of this paper. By calibrating suspected heap-memory-consuming instructions, their prioritized discovery is realized. Investigating existing vulnerability discovery models, we found few studies on discovering heap memory consumption vulnerabilities, and the existing methods have notable deficiencies, such as the lack of data flow analysis related to heap operations. We therefore propose our model, which focuses on discovering heap consumption vulnerabilities while still accommodating the discovery of other vulnerabilities.

#### *4.1. Overview*

To address the problems mentioned in the previous sections, we propose a memory consumption-guided fuzzing model, MemConFuzz, as shown in Figure 5. The main components of MemConFuzz are a static analyzer, an executor, fuzz loop feedback, a seed selector, and a seed mutator. In MemConFuzz, the static analyzer marks the data-flow-related edges and records the trigger value for each edge by scanning the source code, and then inserts code fragments to update the values in the running program. The executor executes the instrumented program. Fuzz loop feedback records and updates related information to guide the seed selector after each program execution. The seed selector adopts a priority strategy to select seeds according to the scores in the seed bank. The seed mutator mutates the selected seed to test the program in the fuzzing loop.

**Figure 5.** MemConFuzz model.

MemConFuzz contains two main stages: the static analysis stage and the fuzzing loop stage. Dark colors in Figure 5 indicate our optimizations over the original AFL approach. Static analysis performs taint position identification and memory function identification for instrumentation. We use lightweight instrumentation to capture basic block transitions, heap memory function locations, and data-flow-dependent function locations at compile time, while collecting coverage information, heap memory sizes, and data-flow dependency information at runtime.

In the static analysis stage, we instrument all the captured target locations and then recompile to obtain an instrumented file. In the main fuzzing loop, the seed is selected for mutation and delivered to the instrumented file for execution. The model continuously tracks the state of the target program and records the cases that cause the program to crash. At the same time, the recorded feedback information is continuously submitted to the seed selector for priority selection, which helps to discover more heap memory consumption vulnerabilities.

#### *4.2. Code Instrumentation at Locations of Suspected Heap Operation*

To record execution information during fuzzing, AFL's bitmap records the number of branch executions, and MemLock's *perf\_bits* records the size of heap allocations. MemConFuzz likewise adopts shared memory and additionally introduces *dataflow\_shm* to store the numbers of data-dependent functions triggered.

MemConFuzz is derived from AFL. It adds two shared memory areas to AFL and mainly extends the afl-llvm-rt.o.c and afl-llvm-pass.so.cc files for instrumentation. The instrumented contents include branch coverage information, heap memory allocation functions, and data-dependent functions.

The first shared memory *perf\_bits* records the size of the memory allocation and release during runtime. In the static analysis stage, we use LLVM [42] to obtain the function Call Graph (CG) and CFG of the program. Through traversing CG and CFG, we search the locations of basic blocks related to heap memory allocation and release functions, including *malloc, calloc, realloc, free, new, delete,* and their variant functions, and locate the call-sites of heap functions for instrumentation. During the fuzzing loop, *perf\_bits* records the amount of consumed heap memory.
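The call-site search reduces to checking each callee name encountered during the CG/CFG traversal against the target set. A C sketch of that matching step; the mangled names for *new*/*delete* are our assumption about how C++ variants would appear at the IR level, shown for illustration:

```c
#include <string.h>
#include <stdbool.h>

/* Target heap functions whose call-sites are instrumented. */
static const char *heap_funcs[] = {
    "malloc", "calloc", "realloc", "free",
    "_Znwm",  /* operator new(size_t), Itanium ABI mangling (assumed) */
    "_ZdlPv", /* operator delete(void*) (assumed) */
};

/* True if the callee of a call instruction is one of the target
   heap operation functions. */
bool is_heap_callsite(const char *callee) {
    for (size_t i = 0; i < sizeof heap_funcs / sizeof heap_funcs[0]; i++)
        if (strcmp(callee, heap_funcs[i]) == 0)
            return true;
    return false;
}
```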

As shown in Figure 6 below, four basic blocks A, B, C, and D represent nodes in the program's CFG. The program first goes to B or C according to a branch condition. Once the branch condition for block C is met, the variable *size* is initialized, and the memory allocation is then performed in block D. We traverse the branches of the basic blocks described in the IR language from the beginning of the program. Once we find a match among our target heap functions, we locate the potential block D and instrument it.

**Figure 6.** An example of basic block transition.

Meanwhile, we add the second shared memory *dataflow\_shm* to record the numbers of data-dependent functions. We traverse the basic blocks of the program to find the locations that belong to *Setfuncs* and instrument them. Specifically, at these locations we insert code that increments the corresponding count in *dataflow\_shm*. In the fuzzing loop stage, MemConFuzz increments the count value in *dataflow\_shm* for each data-dependent function triggered when the target program executes an input sample. Thus, we obtain the coverage information of heap operations once the execution of an input sample completes.

In the instrumentation pass file, we declare a pointer variable, *DataflowPtr*, pointing to the shared memory area *dataflow\_shm*. The values of *dataflow\_shm* are then updated based on the number of data-dependent functions triggered. We inject the instrumentation code into the program during compilation. The approximate formulas for instrumentation are shown below. Formula (1) marks the ID of the current block with a random number *cur\_location*. Formula (2) applies an XOR operation to the IDs of the current and previous blocks as the key, and the corresponding value in *dataflow\_shm* is updated by adding *dataflowfunc\_cnt*, where the *shared\_mem[]* array is our *dataflow\_shm*, whose size is 64 KB, and *dataflowfunc\_cnt* is the count of data-dependent functions triggered on this branch. In Formula (3), to distinguish paths in different directions between two blocks, *cur\_location* is shifted right by one bit to form *prev\_location*, completing the marking of the two blocks.

$$\text{cur\_location} = \text{random()} \tag{1}$$

$$\text{shared\_mem}[\text{cur\_location} \oplus \text{prev\_location}] \mathrel{+}= \text{dataflowfunc\_cnt} \tag{2}$$

$$\text{prev\_location} = \text{cur\_location} \gg 1 \tag{3}$$
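Formulas (1)-(3) mirror AFL's edge-hashing scheme with the hit-count increment replaced by the data-dependent function count. A C sketch of the runtime side, simplified and single-threaded; the function name and signature are ours:

```c
#include <stdint.h>

#define MAP_SIZE (1 << 16)  /* the 64 KB dataflow_shm map */
static uint16_t dataflow_shm[MAP_SIZE];
static uint32_t prev_location;  /* starts at 0 */

/* Called at each instrumented block. cur_location is the block's
   compile-time random ID (Formula (1)); dataflowfunc_cnt is the number
   of data-dependent functions on this edge. */
void on_block(uint32_t cur_location, uint16_t dataflowfunc_cnt) {
    /* Formula (2): XOR of current and previous block IDs keys the map. */
    dataflow_shm[(cur_location ^ prev_location) % MAP_SIZE] += dataflowfunc_cnt;
    /* Formula (3): the shift makes A->B and B->A hash to different keys. */
    prev_location = cur_location >> 1;
}
```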

We instrument the program based on static analysis to get the instrumented program. Therefore, we prioritize guidance to the suspected heap operation areas in the fuzzing loop stage to realize our directed fuzzing on the heap consumption vulnerability.

#### *4.3. Strategy of Seed Priority Selection*

This model proposes a fine-grained seed priority strategy for discovering heap memory consumption vulnerabilities. Seeds are mainly scored by the following indicators:


The first two strategies will help the fuzzer trigger more potential heap memory consumption vulnerabilities.

**Principle 1.** *During execution, the more data-dependent functions are recorded, the greater the coefficient increase; consequently, the seed score increases and the energy assigned increases.*

**Principle 2.** *The original scoring strategy of AFL should also be taken into account. The final score of the seed should not be too large, because the execution process may be trapped in local code blocks of the program.*

Following these two design principles, and in order to evaluate the quality of each input sample, we propose a scoring formula in which the more data-dependent functions are recorded, the larger the coefficient and the higher the score. We set the parameters 1.2 and 1/5 according to the design principles, capping the multiplier factor at a maximum of 1.2, so that the evaluation strategy does not have too large an impact and we avoid missing other good samples that are not useful for discovering memory consumption vulnerabilities but can be used for non-memory-consumption vulnerabilities.

$$\text{Priority\_score}(\text{sample}_i) = \begin{cases} P_{\text{afl}}(\text{sample}_i)\cdot\left(1.2-\dfrac{1}{5}e^{-\text{dataflow\_funcs}}\right), & \text{dataflow\_funcs} > 0 \text{ or a new max\_size is reached} \\ P_{\text{afl}}(\text{sample}_i), & \text{otherwise} \end{cases} \tag{4}$$

Equation (4) shows the seed priority strategy adopted by MemConFuzz. Specifically, for each *samplei* in the sample queue, when a data-dependent function is triggered or a new maximum memory size is reached, we multiply the original AFL score *Pafl(samplei)* by our factor, obtaining different seed scores for different numbers of data-dependent functions; *dataflow\_funcs* is the total number of data-dependent functions triggered by the sample during the fuzzing loop. Otherwise, we adopt the original AFL strategy, which scores samples according to their execution speed and length. Finally, we choose samples with high *Priority\_score* values from the sample queue as seeds.

In summary, on every program execution we record code coverage, memory usage, and the data-dependent functions triggered. To account for heap operations, we adopt the two cases of Equation (4). Samples that trigger more data-related functions, allocate larger heap memory sizes, and achieve higher program path coverage are preferentially given more energy. Furthermore, we set a maximum time for the havoc mutation phase to avoid wasting test time.

#### *4.4. Proposed Model*

We implement a directed fuzzing model, MemConFuzz, to discover heap memory consumption vulnerabilities. Unlike AFL, our model first performs static analysis of the program data flow and then uses the data flow information as guidance for discovering heap memory consumption vulnerabilities. Algorithm 2 describes the workflow of MemConFuzz.

Current vulnerability discovery models cannot prioritize the discovery of heap memory consumption vulnerabilities, and their static analysis is inaccurate due to a lack of data flow information. Algorithm 2 gives the pseudocode of our proposed fuzzing model, which is based on AFL with an optimized and improved seed selection.

From the seed queue *Queue*, we select a seed *q* based on our seed priority strategy and then assign energy for mutation. Meanwhile, we record the hashes, memory sizes, and data-dependent functions in each run. If a mutant *q'* causes the program to crash, it is added to the crash set. Otherwise, mutants that trigger a new path, larger heap memory allocation, or more data-dependent functions are marked as interesting samples and added to the seed queue for mutation in the next loop. Finally, we obtain the set of seeds that trigger heap memory consumption vulnerabilities and cause crashes.

```
Algorithm 2: Memory Consumption Fuzzing
Input: Instrumented program P, Initial seed input S
Output: Set of crash outputs Set_crash
1  Set_crash = ∅;
2  Queue ← S;
3  while time and resource budget do not expire do
4      if Queue not ∅ then
5          q = ChooseNext(Queue);  // Our Modifications
6          e = AssignEnergy(q);
7          for i from 1 to e do
8              q' = Mutate(q);
9              (tracebits_i, memory_i, dataflowfuncs_i) ← Run(q', P);
10             hash_i = Hash(tracebits_i);
11             if q' triggers crash then
12                 Set_crash ← Set_crash ∪ q';
13             else
14                 if NewCoverage(q') then
15                     Queue ← Queue ∪ q';
16                 if NewMaxSize(q') then
17                     Queue ← Update(q', memory_i[hash_i]);
18                 if DataflowFuncs(q') then
19                     Queue ← Add and Prioritize(q', dataflowfuncs_i);
20 return Set_crash;
```
#### **5. Experimental Results and Discussions**

We implement MemConFuzz on the AFL-2.52b framework. We write additional code for LLVM mode (based on LLVM v6.0.0) to realize our data-flow-based static analysis of memory consumption, and modify *afl-fuzz.c* to support the interaction module with instrumentation information and the fine-grained seed priority selection strategy.

We chose the popular open source programs OpenJPEG v2.3.0, jasper v2.0.14, and readelf v2.28, which contain heap memory consumption vulnerabilities, as test datasets, and compared MemConFuzz against AFL, MemLock, and PerfFuzz. Our experiments were performed on Ubuntu 18.04 LTS with Linux kernel v4.15.0, an Intel(R) Xeon(R) E7-4820 CPU, and 4 GB RAM. The experimental results show that MemConFuzz outperforms the state-of-the-art fuzzing techniques AFL, MemLock, and PerfFuzz in discovering heap memory consumption vulnerabilities: it discovers heap memory consumption CVEs faster and triggers more heap memory consumption crashes.

#### *5.1. Evaluation Scheme*

Because the fuzzer relies heavily on random mutations, performance may fluctuate between experiments on our machine, yielding different results each run. We took two measures to mitigate the randomness inherent in fuzzing. First, we run a uniform long-term test of each PUT with each fuzzer until the fuzzer reaches a relatively stable state; specifically, our results are taken after a uniform 24-h period for every fuzzing execution. Second, we add the -d option to all fuzzers in the experiment to skip the deterministic mutation stage, so that more mutation strategies can be exercised in the havoc and splicing stages to discover heap memory consumption vulnerabilities.

Due to factors such as differing machine performance and the randomness of mutation, the results of each experiment differ. For the comparison models, such as MemLock, we reproduced the experiments on the same machine to ensure that every model starts from the same initial experimental conditions. We define the "relatively stable state" as follows.

**Definition 4.** *Relatively Stable State is defined as a state in which test data smoothly changes. On the same machine, after a certain period of time, the results of multiple experiments are relatively stable compared to the growth rate in the initial stage, and then the test results reach a "relatively stable state".*

Figure 7 below shows an experimental record of fuzzing readelf; the ordinate is the count and the abscissa is the time. We mainly focus on the change in the number of unique crashes. Growth is fastest in the first 2 h and slows after about 22 h, which meets the definition of a "relatively stable state"; the other tests also reach a relatively stable state around 24 h. Many vulnerability discovery studies likewise use 24 h as the test standard. In addition, MemConFuzz, MemLock, and PerfFuzz are all built on AFL, so if the test runs much longer, once almost no further heap consumption vulnerabilities can be found it gradually degenerates into AFL's general vulnerability discovery, and the efficiency for heap consumption vulnerabilities can no longer be demonstrated. To balance accuracy and efficiency, we uniformly select 24 h as our test standard, which reflects vulnerability discovery ability while reducing unnecessary time overhead.

**Figure 7.** Experimental record of fuzzing readelf.

We enable ASAN [43] compilation of the source program and set the *allocator\_may\_return\_null* option so that the program crashes when a heap memory allocation fails due to an oversized request, which is convenient for observation and analysis. In addition, we used LeakSanitizer to detect memory leak vulnerabilities and conduct subsequent analyses.

#### *5.2. Experimental Results and Discussions*

We perform fuzzing on the selected real-world program datasets and record the experimental data according to the evaluation metrics.

To demonstrate our work, we compare against existing fuzzing techniques, recording the number of heap memory consumption vulnerabilities triggered and the time to trigger real-world CVEs. We select large-scale programs with tens of thousands of lines that are continuously maintained in the open source community and highly popular. These programs come from the comparison models: the names and versions of the test software are those used by MemLock and other fuzzing tools. They contain heap consumption vulnerabilities as well as other vulnerability types that act as interference items, enabling a comprehensive evaluation of the models. Because our analysis method relies on source code and the semantics of heap operation code, other suitable open-source programs are difficult to find, and there is very little fuzzing research on this vulnerability type. To better evaluate the horizontal performance of the model, we chose these programs and ensured that they are publicly available for download; the download links have been added. Additionally, the source code of MemConFuzz will be available on request.

Table 1 shows the crashes related to memory consumption vulnerabilities obtained by fuzzing jasper, readelf, and openjpeg. UA stands for uncontrolled memory allocation vulnerabilities, ML for memory leak vulnerabilities, and SLoC for Source Lines of Code. For each 24-h fuzzing experiment, we use Python to analyze the obtained crashes and reproduce them automatically. We classify the crashes according to the AddressSanitizer function call chain and its summary of vulnerability types, and thereby obtain the memory-consumption-related vulnerability counts, that is, the numbers of UA and ML. Most of the crashes triggered in jasper are ML, while the crashes triggered in the other programs are UA. The results show that MemConFuzz improves the discovery of heap memory consumption vulnerabilities by 43.4%, 13.3%, and 561.2% compared with the advanced fuzzing techniques AFL, MemLock, and PerfFuzz, respectively.

**Table 1.** Number of heap memory consumption vulnerabilities.


The test programs [44–46] selected are all historical versions. After our automated crash analysis, the discovered vulnerabilities are all historically reported vulnerabilities. Our experimental comparison mainly focuses on the number and speed of discovering heap memory consumption vulnerabilities. We may consider discovering and analyzing additional new vulnerabilities in future research.

The AFL framework treats vulnerabilities with the same crash point as the same vulnerability. Vulnerabilities fall into many types; since we target heap consumption vulnerabilities, we only need to confirm whether a discovered vulnerability belongs to that class. We wrote automated crash analysis scripts and compared the crash function stacks reported by ASAN. Through the ASAN reports, the function call relationships, and the locations of the crashing code, we confirmed that the vulnerabilities discussed in this experiment are heap consumption vulnerabilities.

Furthermore, we also recorded the time to trigger real-world CVEs. To facilitate comparison, each test ran for 24 h, and T/O stands for a timeout within the 24-h test. Table 2 shows the times at which real-world CVEs were triggered when fuzzing our dataset. Likewise, we used ASAN to reproduce crashes and detect memory error information; we did not use Valgrind because it slows the program down too much, whereas ASAN imposes only about a 2× slowdown. We use Python to automatically analyze crashes, locate the crash points, and compare the obtained AddressSanitizer function call chain and crash point with the function locations described in the real-world CVE information, thereby obtaining the time of the first matching crash. Our experimental results show that MemConFuzz achieves significant time reductions compared to the state-of-the-art fuzzing techniques AFL, MemLock, and PerfFuzz. Among them, CVE-2017-12982 shows the clearest advantage: our model makes the program allocate large heap memory faster and thus triggers the vulnerability sooner. The reason is that the proposed model focuses on the locations of functions that are data-dependent on memory consumption and tracks the size of allocated memory, making it more targeted at memory consumption vulnerabilities than other fuzzing models.


**Table 2.** Trigger time of real-world vulnerability.

#### **6. Conclusions and Future Work**

In this paper, we propose MemConFuzz, a directed fuzzing model based on data flow analysis of heap operations, to discover heap memory consumption vulnerabilities. MemConFuzz uses coverage information, memory consumption information, and data dependency information to guide the fuzzing process: the coverage information guides the fuzzer to explore different program paths, the memory consumption information guides it toward paths with increasing memory consumption, and the data dependency information guides it toward paths with increasing dependencies on heap memory data flow. Experimental results show that MemConFuzz outperforms the state-of-the-art fuzzing technologies AFL, MemLock, and PerfFuzz in both the number of heap memory vulnerabilities found and the time to discovery.

In the future, we plan to enhance the heap memory consumption vulnerability discovery capabilities and vulnerability coverage of our approach with more efficient and more complete data flow analysis. Furthermore, we will add support for binaries to our proposed vulnerability discovery methodology. We will disassemble the binary code to obtain the instruction code set, complete the analysis of the control flow and the data flow, and discover the heap memory consumption vulnerabilities of the binary program more effectively.

**Author Contributions:** Conceptualization, C.D. and Y.G.; methodology, C.D. and Y.G.; software, Z.C. and G.X.; validation, C.D. and Y.G.; investigation, Z.C. and Z.W.; writing—original draft preparation, Z.C. and Y.G.; writing—review and editing, C.D. and Y.G.; visualization, Z.C.; project administration, C.D.; funding acquisition, Z.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China grant number 62172006 and the National Key Research and Development Plan of China grant number 2019YFA0706404.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The test result data presented in this study are available on request. The datasets can be found on public websites.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**


#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Anas A. Makki 1,\* and Reda M. S. Abdulaal 2**


**Abstract:** Multi-criteria decision-making (MCDM) assists in making judgments on complex problems by evaluating several alternatives based on conflicting criteria. Several MCDM methods have been introduced. However, real-world problems often involve uncertain and ambiguous decision-maker inputs. Therefore, fuzzy MCDM methods have emerged to handle this problem using fuzzy logic. Most recently, the method based on the removal effects of criteria using the geometric mean (MEREC-G) and ranking the alternatives based on the trace to median index (RATMI) were introduced. However, to date, there is no fuzzy extension of the two novel methods. This study introduces a new hybrid fuzzy MCDM approach combining fuzzy MEREC-G and fuzzy RATMI. The fuzzy MEREC-G can accept linguistic input terms from multiple decision-makers and generates consistent fuzzy weights. The fuzzy RATMI can rank alternatives according to their fuzzy performance scores on each criterion. The study provides the algorithms of both fuzzy MEREC-G and fuzzy RATMI and demonstrates their application in adopted real-world problems. Correlation and scenario analyses were performed to check the new approach's validity and sensitivity. The new approach demonstrates high accuracy and consistency and is sufficiently sensitive to changes in the criteria weights, yet not too sensitive to produce inconsistent rankings.

**Keywords:** fuzzy MEREC-G; fuzzy RATMI; fuzzy logic; hybrid; MCDM

**MSC:** 03E72; 90B50

#### **1. Introduction**

Multi-criteria decision-making (MCDM), a major subdiscipline of the operations research domain, assists in making judgments in complex real-world challenges. It allows for formulating problems comprising several alternatives in a structured format to find the best ranking or select the best alternative based on multiple conflicting criteria. The criteria are conflicting in the sense of being benefit criteria and non-benefit criteria to reflect their roles in maximizing or minimizing the alternatives, respectively. Moreover, the criteria are weighted to represent the problem better and make the best decision on the alternatives. Several MCDM methods have emerged, with different characteristics and purposes, with broad applications in many disciplines [1,2]. The two primary components of MCDM are weighing the criteria and ranking the alternatives.

The first component of MCDM, weighting the criteria, entails designating importance or preference values to each criterion. Depending on whether the weights are based on quantified qualitative inputs from the decision-maker's judgments using a predefined scale (i.e., subjective data) [3–5], based on quantitative data (i.e., objective data) [6–10], or a combination of both (i.e., a mix of subjective and objective data) [11–13], there are various MCDM methods for weighting criteria. Methods like the analytic hierarchy process (AHP), analytic network process (ANP), and best-worst method (BWM) are examples of subjective

**Citation:** Makki, A.A.; Abdulaal, R.M.S. A Hybrid MCDM Approach Based on Fuzzy MEREC-G and Fuzzy RATMI. *Mathematics* **2023**, *11*, 3773. https://doi.org/10.3390/ math11173773

Academic Editors: Yanhui Guo and Jun Ye

Received: 27 July 2023 Revised: 30 August 2023 Accepted: 31 August 2023 Published: 2 September 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

methodologies for finding the weights of criteria [4,5]. These pairwise-based methods compare criteria using a scale of preferences to quantify qualitative inputs. Entropy and criteria importance through inter-criteria correlation (CRITIC) are examples of objective methods [14]. These data-based methods use mathematical algorithms to calculate the weights based on the information entropy, the correlation coefficients, or the compromise ranking of the alternatives. However, fuzzy AHP, fuzzy ANP, and fuzzy BWM accept a combination of subjective and objective data for finding the criteria weights. These methods base the calculations of weights in a fuzzy environment to account for uncertainty and ambiguity in decision-makers' inputs [15].

The second component of MCDM, ranking the alternatives, entails the performance scoring of each alternative on each criterion and finding the best ranking or choice accordingly. Various techniques for ranking alternatives based on multiple criteria have been developed. Such methods include outranking algorithms like "élimination et choix traduisant la réalité" (ELECTRE), which translates to elimination and choice translating reality, and the preference ranking organization method for enrichment evaluations (PROMETHEE) [16–18], to mention two. These methods compare alternatives pairwise using measures of concordance and discordance between them on each criterion.

However, fuzzy MCDM alternative ranking methods have been developed and applied to enable them to handle the uncertainty and ambiguity of decision-makers' subjective scoring inputs. Such methods are the fuzzy BWM [19–26], fuzzy additive ratio assessment (ARAS) [27–29], fuzzy measurement alternatives and ranking according to compromise solution (MARCOS) [30–32], fuzzy technique for order preference by similarity to ideal solution (TOPSIS) [24,33,34], fuzzy multi-attributive border approximation area comparison (MABAC) [35–38], fuzzy VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) [39–42], fuzzy multi-attributive ideal–real comparative analysis (MAIRCA) [43–47], and, most recently, the fuzzy multiple criteria ranking by alternative trace (MCRAT) [48]. Several investigators applied the two components of MCDM in different fields [49–63].

Two of the most recent MCDM methods for weighting the criteria and ranking the alternatives are the method based on the removal effects of criteria (MEREC) [64–66] and the ranking the alternatives based on the trace to median index (RATMI) technique [67]. The MEREC was developed as an objective method for weighting the criteria. In 2023, an updated and enhanced version of the MEREC, labeled the method for removal effects of criteria with a geometric mean (MEREC-G), was developed to enable it to process objective and subjective data [65]. A fuzzy extension and modification of the MEREC method was also recently developed, enabling it to process subjective data using linguistic term judgments by decision-makers [68,69]. However, to date, there is no fuzzy extension of the enhanced MEREC-G. Additionally, in 2022, the RATMI was developed as an alternative ranking method. RATMI bases its ranking algorithm on the trace to median index, which combines the ranking alternatives based on median similarity (RAMS) and MCRAT methods using a majority index and the concept of the VIKOR method [67]. Although the RATMI method is a relatively new alternative ranking method, it has already proven its efficacy in real-world applications [70,71]. However, to date, there is no fuzzy extension of the RATMI method.

Therefore, this study aims to first develop a fuzzy MEREC-G as a weighting criteria method and a fuzzy RATMI as an alternative ranking method. Secondly, it proposes a new hybrid MCDM approach based on the developed fuzzy MEREC-G and fuzzy RATMI. The proposed new hybrid MCDM approach will provide advancements in that the fuzzy MEREC-G can accept linguistic input terms from multiple decision-makers, handle their ambiguous judgments on a complex problem, and produce consistent fuzzy weights of the criteria when converted to crisp values. This, in turn, will enable the use of the produced fuzzy weights from the fuzzy MEREC-G in the fuzzy RATMI, which will be able to accept and process fuzzy ranking scores of each alternative for each criterion and rank them accordingly.

The new proposed hybrid MCDM approach is provided in the following section. In the subsequent sections, along with a discussion, a numerical application of the proposed approach is provided to compare its results with other fuzzy MCDM methods to check its validity and sensitivity. Finally, the last section of this paper provides a conclusion to the proposed approach and some future research directions.

#### **2. Preliminaries of Fuzzy Sets**

**Definition 1 ([69]).** *A triangular fuzzy number (TFN) is represented as $\tilde{a} = (k, l, m)$. The membership function $\mu_{\tilde{a}}(z)$ of a TFN $\tilde{a}$ is defined by Equation (1).*

$$\mu_{\tilde{a}}(z) = \begin{cases} 0, & \text{if } z < k, \\ \frac{z-k}{l-k}, & \text{if } k \le z < l, \\ \frac{m-z}{m-l}, & \text{if } l \le z \le m, \\ 0, & \text{if } z > m \end{cases} \tag{1}$$
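To make Equation (1) concrete, here is a minimal Python sketch of the TFN membership function; the function name `tfn_membership` is hypothetical and not part of the paper:

```python
def tfn_membership(z: float, k: float, l: float, m: float) -> float:
    """Membership degree mu(z) of the TFN (k, l, m), per Equation (1)."""
    if z < k or z > m:
        return 0.0                   # outside the support
    if z < l:
        return (z - k) / (l - k)     # rising edge on [k, l)
    if m == l:
        return 1.0                   # degenerate right side
    return (m - z) / (m - l)         # falling edge on [l, m]
```

For the TFN (1, 2, 3), the membership peaks at z = 2 and falls to zero at both endpoints of the support.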

**Definition 2 ([72]).** *Let x*˜ = (*a*1, *b*1, *c*1) *and y*˜ = (*a*2, *b*2, *c*2) *be two non-negative TFNs. According to the extension principle, the arithmetic operations are defined as follows:*
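The operation list itself is not reproduced above; the following sketch assumes the standard extension-principle approximations for non-negative TFNs (component-wise addition and multiplication, and division against the reversed sides). The function names are hypothetical:

```python
def tfn_add(x, y):
    """Sum of two TFNs: (a1 + a2, b1 + b2, c1 + c2)."""
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2])

def tfn_mul(x, y):
    """Approximate product for non-negative TFNs: (a1*a2, b1*b2, c1*c2)."""
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

def tfn_div(x, y):
    """Approximate quotient for non-negative TFNs: (a1/c2, b1/b2, c1/a2)."""
    return (x[0] / y[2], x[1] / y[1], x[2] / y[0])
```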


#### **3. The Proposed Hybrid Fuzzy MEREC-G and Fuzzy RATMI Methods**

Figure 1 illustrates the proposed fuzzy MEREC-G and fuzzy RATMI methods in three main phases. The first phase involves defining the problem under study by specifying the alternatives and the criteria with their objectives. The decision-maker invites the experts who will provide their initial fuzzy decision matrices between the alternatives and criteria. The second phase applies the fuzzy MEREC-G method to assign weights to each criterion based on the information from the first phase. The third phase uses the fuzzy RATMI method to rank the alternatives according to the weighted fuzzy criteria obtained in the second phase. The following sections explain these phases in more detail.

#### *3.1. Phase 1: Formulate the Problem Using the MCDM Model*

Step 1.1: The decision-maker identifies "*m*" possible alternatives, "*n*" relevant criteria, and the nature of each criterion (i.e., whether it is a benefit criterion that should be maximized or a non-benefit criterion that should be minimized) for the problem at hand.

Step 1.2: The decision-maker determines "*k*" experts who have knowledge and experience about the problem to participate in the decision-making process by providing either subjective or objective input data represented by triangular fuzzy numbers (TFNs).

Step 1.3: The experts, $E = \{E_1, E_2, \dots, E_k\}$, will provide a realistic evaluation of each alternative in $A = \{A_1, A_2, \dots, A_m\}$ based on each criterion in $C = \{C_1, C_2, \dots, C_n\}$, represented by the fuzzy number $\tilde{x}_{ij}^u = \left(a_{ij}^u, b_{ij}^u, c_{ij}^u\right)$, $i = 1, \dots, m$; $j = 1, \dots, n$; $u = 1, \dots, k$. The fuzzy decision matrix, $\tilde{X}^u$, for each expert, $u$, can be constructed using Equation (2).

$$\tilde{X}^u = \left[\tilde{x}_{ij}^u\right]_{m \times n} = \begin{bmatrix} A/C & C_1 & C_2 & \dots & C_n \\ A_1 & \tilde{x}_{11}^u & \tilde{x}_{12}^u & \dots & \tilde{x}_{1n}^u \\ A_2 & \tilde{x}_{21}^u & \tilde{x}_{22}^u & \dots & \tilde{x}_{2n}^u \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_m & \tilde{x}_{m1}^u & \tilde{x}_{m2}^u & \dots & \tilde{x}_{mn}^u \end{bmatrix} \tag{2}$$

Step 1.4: Construct the combined fuzzy decision matrix, $\tilde{X}$, using Equation (3).

$$\tilde{X} = \left[\tilde{x}_{ij}\right]_{m \times n} \tag{3}$$

where

**Figure 1.** The framework of the proposed hybrid fuzzy MEREC-G and fuzzy RATMI methods.

*3.2. Phase 2: Fuzzy MEREC-G Method*

Step 2.1: Normalize the combined fuzzy decision matrix to reduce the disparity between the magnitudes of the alternatives and dimensions, with normalized values within [0, 1]. The components of the normalized matrix, $\tilde{r}_{ij}$, will be produced as triangular fuzzy numbers (TFNs) according to [69] using Equation (4) for benefit criteria and Equation (5) for non-benefit criteria.

$$\tilde{r}_{ij} = \left(r_{ij}^l,\ r_{ij}^m,\ r_{ij}^u\right) = \left(\frac{a_{ij}}{c_j^{\max}},\ \frac{b_{ij}}{c_j^{\max}},\ \frac{c_{ij}}{c_j^{\max}}\right) \quad \forall\ i \in [1, \dots, m],\ \forall\ j \in [1, \dots, n] \tag{4}$$

$$\tilde{r}_{ij} = \left(r_{ij}^l,\ r_{ij}^m,\ r_{ij}^u\right) = \left(\frac{a_j^{\min}}{c_{ij}},\ \frac{a_j^{\min}}{b_{ij}},\ \frac{a_j^{\min}}{a_{ij}}\right) \quad \forall\ i \in [1, \dots, m],\ \forall\ j \in [1, \dots, n] \tag{5}$$
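The normalization in Equations (4) and (5) can be sketched as follows, assuming each matrix entry is a TFN tuple `(a, b, c)` and `benefit[j]` flags the criterion type; the names are hypothetical, not the authors' code:

```python
def normalize(matrix, benefit):
    """Fuzzy normalization per Equations (4) (benefit) and (5) (non-benefit)."""
    m, n = len(matrix), len(matrix[0])
    out = [[None] * n for _ in range(m)]
    for j in range(n):
        if benefit[j]:
            # benefit criterion: divide every side by the column max of c
            c_max = max(matrix[i][j][2] for i in range(m))
            for i in range(m):
                a, b, c = matrix[i][j]
                out[i][j] = (a / c_max, b / c_max, c / c_max)
        else:
            # non-benefit criterion: column min of a over the reversed sides
            a_min = min(matrix[i][j][0] for i in range(m))
            for i in range(m):
                a, b, c = matrix[i][j]
                out[i][j] = (a_min / c, a_min / b, a_min / a)
    return out
```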

Step 2.2: Calculate the fuzzy overall performance value, $\tilde{P}_i$, of the alternatives using the geometric mean of the fuzzy normalized matrix, as presented by Equation (6).

$$\tilde{P}_i = \left(\sqrt[n]{\prod_{j=1}^{n} r_{ij}^l},\ \sqrt[n]{\prod_{j=1}^{n} r_{ij}^m},\ \sqrt[n]{\prod_{j=1}^{n} r_{ij}^u}\right) \quad \forall\ i \in [1, \dots, m] \tag{6}$$

Step 2.3: This step constitutes the core of the classical MEREC-G [65], in which the changes in the overall performance value of the alternatives are calculated by removing the effect of each criterion from the overall performance. For the fuzzy MEREC-G, these changes, represented by the fuzzy number $\tilde{t}_{ij}$, are calculated using Equation (7).

$$\tilde{t}_{ij} = \left(\sqrt[n]{\frac{\prod_{j=1}^{n} r_{ij}^l}{r_{ik}^l}},\ \sqrt[n]{\frac{\prod_{j=1}^{n} r_{ij}^m}{r_{ik}^m}},\ \sqrt[n]{\frac{\prod_{j=1}^{n} r_{ij}^u}{r_{ik}^u}}\right) \quad \forall\ i \in [1, \dots, m],\ k \neq j \tag{7}$$

Step 2.4: Find the removal effect, $\tilde{E}_j$, using Equation (8) to obtain the final fuzzy weights, $\tilde{w}_j$, of each criterion using Equations (9) and (10).

$$\tilde{E}_j = \left(\sum_{i=1}^{m} t_{ij}^l,\ \sum_{i=1}^{m} t_{ij}^m,\ \sum_{i=1}^{m} t_{ij}^u\right) \quad \forall\ j \in [1, \dots, n] \tag{8}$$

$$\tilde{w}_j = \left(\frac{\sum_{i=1}^{m} t_{ij}^l}{\sum_{j=1}^{n} E_j^u},\ \frac{\sum_{i=1}^{m} t_{ij}^m}{\sum_{j=1}^{n} E_j^m},\ \frac{\sum_{i=1}^{m} t_{ij}^u}{\sum_{j=1}^{n} E_j^l}\right) \quad \forall\ j \in [1, \dots, n] \tag{9}$$

$$\tilde{w}_j = \left(w_j^l,\ w_j^m,\ w_j^u\right) \quad \forall\ j \in [1, \dots, n] \tag{10}$$

Step 2.5: To obtain the crisp weights, $w_j^*$, of the criteria, the obtained fuzzy weights, $\tilde{w}_j$, are converted using Equation (11). The sum of the crisp weights equals one.

$$w\_j^\* = \frac{w\_j^l + 4w\_j^m + w\_j^u}{6} \tag{11}$$
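Steps 2.2–2.5 can be sketched end to end as follows, assuming a normalized fuzzy matrix `R` of TFN tuples. This is one reading of Equations (7)–(11) with hypothetical names, not the authors' implementation:

```python
import math

def merec_g_weights(R):
    """Fuzzy MEREC-G criteria weighting (Equations (7)-(11), sketched)."""
    m, n = len(R), len(R[0])
    # Eq. (7): geometric mean per side with criterion j's effect removed
    t = [[tuple((math.prod(R[i][q][s] for q in range(n)) / R[i][j][s]) ** (1 / n)
                for s in range(3))
          for j in range(n)] for i in range(m)]
    # Eq. (8): removal effect = column sums of t per TFN side
    E = [tuple(sum(t[i][j][s] for i in range(m)) for s in range(3))
         for j in range(n)]
    tot = [sum(E[j][s] for j in range(n)) for s in range(3)]
    # Eq. (9): fuzzy weight sides divide by the opposite-side totals
    w = [(E[j][0] / tot[2], E[j][1] / tot[1], E[j][2] / tot[0]) for j in range(n)]
    # Eq. (11): graded-mean defuzzification to crisp weights
    return [(wl + 4 * wm + wu) / 6 for (wl, wm, wu) in w]
```

With a symmetric normalized matrix, the crisp weights come out equal and sum to one, matching the statement after Equation (11).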

#### *3.3. Phase 3: Fuzzy RATMI Method*

Step 3.1: The values in the combined fuzzy decision-making matrix are normalized using Equations (4) and (5), the same equations used in the fuzzy MEREC-G technique.

Step 3.2: The fuzzy weights of the criteria are multiplied by the fuzzy normalized values to obtain fuzzy weighted normalized values using Equation (12).

$$\tilde{v}_{ij} = \left(v_{ij}^l,\ v_{ij}^m,\ v_{ij}^u\right) = \tilde{w}_j \times \tilde{r}_{ij} = \left(w_j^l \times r_{ij}^l,\ w_j^m \times r_{ij}^m,\ w_j^u \times r_{ij}^u\right) \tag{12}$$
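Equation (12) scales each normalized TFN side by side with the criterion's fuzzy weight; a minimal sketch with hypothetical names:

```python
def weight_matrix(R, W):
    """Fuzzy weighted normalization, Equation (12).

    R: normalized matrix of TFN tuples; W: list of fuzzy weights (TFN tuples).
    """
    return [[tuple(W[j][s] * R[i][j][s] for s in range(3))
             for j in range(len(W))]
            for i in range(len(R))]
```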

Step 3.3: Determine the fuzzy optimal alternative using Equations (13) and (14). Then, decompose the fuzzy optimal alternative into two components using Equations (15) and (16), followed by decomposing the other alternatives into two components using Equations (17) and (18).

$$\tilde{q}_j = \max_i\left(\tilde{v}_{ij}\right), \quad 1 \le j \le n \tag{13}$$

$$\tilde{Q} = \{\tilde{q}_1,\ \tilde{q}_2,\ \dots,\ \tilde{q}_n\} \tag{14}$$

$$\tilde{Q} = \tilde{Q}^{\max} \cup \tilde{Q}^{\min} \tag{15}$$

$$\tilde{Q} = \{\tilde{q}_1,\ \tilde{q}_2,\ \dots,\ \tilde{q}_k\} \cup \{\tilde{q}_1,\ \tilde{q}_2,\ \dots,\ \tilde{q}_h\}; \quad k + h = n \tag{16}$$

$$\tilde{V} = \tilde{V}^{\max} \cup \tilde{V}^{\min} \tag{17}$$

$$\tilde{V} = \{\tilde{v}_1,\ \tilde{v}_2,\ \dots,\ \tilde{v}_k\} \cup \{\tilde{v}_1,\ \tilde{v}_2,\ \dots,\ \tilde{v}_h\}; \quad k + h = n \tag{18}$$

Step 3.4: Calculate the fuzzy magnitude of optimal alternative components using Equations (19) and (20) and the fuzzy magnitude of other alternative components using Equations (21) and (22).

$$\tilde{Q}_k = \left(q_k^l,\ q_k^m,\ q_k^u\right) = \left(\sqrt{\left(q_1^l\right)^2 + \left(q_2^l\right)^2 + \dots + \left(q_k^l\right)^2},\ \sqrt{\left(q_1^m\right)^2 + \left(q_2^m\right)^2 + \dots + \left(q_k^m\right)^2},\ \sqrt{\left(q_1^u\right)^2 + \left(q_2^u\right)^2 + \dots + \left(q_k^u\right)^2}\right) \tag{19}$$

$$\tilde{Q}_h = \left(q_h^l,\ q_h^m,\ q_h^u\right) = \left(\sqrt{\left(q_1^l\right)^2 + \left(q_2^l\right)^2 + \dots + \left(q_h^l\right)^2},\ \sqrt{\left(q_1^m\right)^2 + \left(q_2^m\right)^2 + \dots + \left(q_h^m\right)^2},\ \sqrt{\left(q_1^u\right)^2 + \left(q_2^u\right)^2 + \dots + \left(q_h^u\right)^2}\right) \tag{20}$$

$$\tilde{V}_k = \left(v_k^l,\ v_k^m,\ v_k^u\right) = \left(\sqrt{\left(v_1^l\right)^2 + \left(v_2^l\right)^2 + \dots + \left(v_k^l\right)^2},\ \sqrt{\left(v_1^m\right)^2 + \left(v_2^m\right)^2 + \dots + \left(v_k^m\right)^2},\ \sqrt{\left(v_1^u\right)^2 + \left(v_2^u\right)^2 + \dots + \left(v_k^u\right)^2}\right) \tag{21}$$

$$\tilde{V}_h = \left(v_h^l,\ v_h^m,\ v_h^u\right) = \left(\sqrt{\left(v_1^l\right)^2 + \left(v_2^l\right)^2 + \dots + \left(v_h^l\right)^2},\ \sqrt{\left(v_1^m\right)^2 + \left(v_2^m\right)^2 + \dots + \left(v_h^m\right)^2},\ \sqrt{\left(v_1^u\right)^2 + \left(v_2^u\right)^2 + \dots + \left(v_h^u\right)^2}\right) \tag{22}$$
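Equations (19)–(22) all take a Euclidean norm separately over each TFN side (l, m, u), so a single helper covers all four; the function name is hypothetical:

```python
import math

def fuzzy_magnitude(tfns):
    """Component magnitude per Equations (19)-(22).

    tfns: the TFNs of one component (max- or min-side); returns a TFN whose
    each side is the Euclidean norm of the corresponding sides.
    """
    return tuple(math.sqrt(sum(t[s] ** 2 for t in tfns)) for s in range(3))
```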

Step 3.5: In this step, the alternatives will be ranked twice. The first uses the fuzzy MCRAT [48], and the second uses fuzzy RAMS as a part of the proposed fuzzy RATMI. Ranking by fuzzy MCRAT uses the following sub-steps:

Step 3.5.1: Create the matrix, $\tilde{Y}$, composed of the optimal alternative components, as shown in Equation (23).

$$
\tilde{Y} = \begin{bmatrix} \tilde{Q}\_k & 0 \\ 0 & \tilde{Q}\_h \end{bmatrix} \tag{23}
$$

Step 3.5.2: Create the matrix, $\tilde{B}_i$, composed of the alternative's components using Equation (24).

$$\tilde{B}_i = \begin{bmatrix} \tilde{V}_{ik} & 0 \\ 0 & \tilde{V}_{ih} \end{bmatrix} \tag{24}$$

Step 3.5.3: Create the matrix, $\tilde{Z}_i$, using Equation (25).

$$\tilde{Z}_i = \tilde{Y} \times \tilde{B}_i = \begin{bmatrix} \tilde{z}_{11;i} & 0 \\ 0 & \tilde{z}_{22;i} \end{bmatrix} \tag{25}$$

Step 3.5.4: Then, the fuzzy trace of the matrix, $\tilde{Z}_i$, can be obtained using Equation (26).

$$tr\left(\tilde{Z}_i\right) = \tilde{z}_{11;i} + \tilde{z}_{22;i} = \left(z_{11;i}^l + z_{22;i}^l,\ z_{11;i}^m + z_{22;i}^m,\ z_{11;i}^u + z_{22;i}^u\right) \tag{26}$$

In Equation (26), $tr\left(\tilde{Z}_i\right) = \left(Z_i^l,\ Z_i^m,\ Z_i^u\right)$ indicates the fuzzy trace of the $\tilde{Z}_i$ matrix, and the value is defuzzified to obtain $tr(Z_i)$ using Equation (27). The alternatives are then ranked in descending order of the $tr(Z_i)$ values.

$$tr(Z_i) = \frac{Z_i^l + 4Z_i^m + Z_i^u}{6} \tag{27}$$
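Because $\tilde{Y}$ and $\tilde{B}_i$ in Equations (23)–(25) are diagonal, the fuzzy trace in Equation (26) reduces to two side-wise products; a sketch with hypothetical names:

```python
def mcrat_trace(Qk, Qh, Vik, Vih):
    """Fuzzy MCRAT score: Equations (23)-(27) collapsed.

    Qk, Qh: component magnitudes of the optimal alternative (TFN tuples);
    Vik, Vih: component magnitudes of alternative i. Returns the
    defuzzified trace tr(Z_i).
    """
    # Eq. (26): trace of the diagonal product, computed per TFN side
    tr = tuple(Qk[s] * Vik[s] + Qh[s] * Vih[s] for s in range(3))
    # Eq. (27): graded-mean defuzzification
    return (tr[0] + 4 * tr[1] + tr[2]) / 6
```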

Ranking by fuzzy RAMS (ranking alternatives based on median similarity) uses the following sub-steps:

Step 3.5.5: Determine the fuzzy median of similarity of the optimal alternative using Equation (28).

$$\tilde{D} = \left(d^l,\ d^m,\ d^u\right) = \left(\sqrt{\tilde{Q}_k^2 + \tilde{Q}_h^2}\right)/2 \tag{28}$$

Step 3.5.6: Determine the fuzzy median of similarity of the alternatives using Equation (29).

$$\tilde{D}_i = \left(d_i^l,\ d_i^m,\ d_i^u\right) = \left(\sqrt{\tilde{V}_{ik}^2 + \tilde{V}_{ih}^2}\right)/2 \tag{29}$$

Step 3.5.7: Calculate the fuzzy median similarity, $ms\left(\tilde{M}_i\right)$, which represents the ratio between the perimeter of each alternative and that of the optimal alternative, using Equation (30).

$$ms\left(\tilde{M}_i\right) = \frac{\tilde{D}_i}{\tilde{D}} = \left(\frac{d_i^l}{d^u},\ \frac{d_i^m}{d^m},\ \frac{d_i^u}{d^l}\right) \tag{30}$$

In Equation (30), $ms\left(\tilde{M}_i\right) = \left(M_i^l,\ M_i^m,\ M_i^u\right)$ indicates the fuzzy median similarity of alternative $i$, and the value is defuzzified to obtain $ms(M_i)$ using Equation (31). The alternatives are then ranked in descending order of the $ms(M_i)$ values.

$$ms(M_i) = \frac{M_i^l + 4M_i^m + M_i^u}{6} \tag{31}$$

Step 3.6: If *v* is the weight of fuzzy MCRAT's strategy, and (1 − *v*) is the weight of RAMS's strategy, then the majority index, *Ei*, between the two strategies can be calculated using Equation (32). Then, find the final rank of the alternatives in descending order of *Ei*.

$$E_i = v\,\frac{tr(Z_i) - tr^*}{tr^- - tr^*} + (1 - v)\,\frac{ms(M_i) - ms^*}{ms^- - ms^*} \tag{32}$$

where

$tr^* = \min\left(tr(Z_i)\right)$, $tr^- = \max\left(tr(Z_i)\right)$, $ms^* = \min\left(ms(M_i)\right)$, and $ms^- = \max\left(ms(M_i)\right)$, $\forall\ i \in [1, 2, \dots, m]$; $v$ is a value from 0 to 1. Here, $v = 0.5$.
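The majority index of Equation (32) blends the two defuzzified score lists; a minimal sketch with hypothetical names, using $v = 0.5$ as in the text:

```python
def majority_index(tr, ms, v=0.5):
    """Majority index E_i per Equation (32).

    tr: defuzzified fuzzy MCRAT scores; ms: defuzzified fuzzy RAMS scores.
    Both scores are min-max scaled, then blended with strategy weight v.
    """
    tr_star, tr_minus = min(tr), max(tr)
    ms_star, ms_minus = min(ms), max(ms)
    return [v * (t - tr_star) / (tr_minus - tr_star)
            + (1 - v) * (s - ms_star) / (ms_minus - ms_star)
            for t, s in zip(tr, ms)]
```

The alternatives are then sorted in descending order of the returned values.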

#### **4. Applications and Results**

This section applies the proposed hybrid fuzzy MEREC-G and fuzzy RATMI methods using the data from Ulutaş et al. [48] on purchasing a forklift for laborers to use in the warehouse. The following is an application of the three phases previously described to rank the alternatives based on the weighted criteria.

#### *4.1. Phase 1: Formulate the Problem Using the MCDM Model*

Following step 1.1, the decision-maker determined eight criteria and six forklifts as alternatives. The criteria for assessment of the forklifts were C1 (purchasing price), C2 (lifting height), C3 (lowering speed), C4 (loading capacity), C5 (lifting speed), C6 (movement area requirement), C7 (image of the manufacturer company), and C8 (supply of spare parts). Only two criteria (C1 and C6) were non-benefit, and the others were benefit criteria. Using steps 1.2, 1.3, and 1.4, the decision-maker determined six experts to evaluate the performance of the forklifts under each criterion using the linguistic phrases shown in Stanković et al. [31]. The experts' assessments were transformed into fuzzy values using those linguistic phrases and aggregated using Equation (3). The combined fuzzy decision matrix, as given by Ulutaş et al. [48], is presented in Table 1.


**Table 1.** The combined fuzzy decision matrix [48].

#### *4.2. Phase 2: Application and Results of the Fuzzy MEREC-G Method*

Equations (4) and (5) of step 2.1 have been used to determine the fuzzy decision matrix with normalization. Table 2 presents the results obtained from this step.

**Table 2.** The normalized fuzzy decision matrix.


Steps 2.2 and 2.3 have been applied with the help of Equations (6) and (7), respectively, to calculate the overall performance of alternatives in the fuzzy decision matrix and then calculate the changes in this overall performance by removing each fuzzy number. Table 3 shows the results of Equation (7) of step 2.3.

**Table 3.** The changes in the overall performance of alternatives.



**Table 3.** *Cont.*

Equations (8)–(10) from step 2.4 have been used to calculate the fuzzy criteria weight of each criterion. Then, Equation (11) from step 2.5 was used to calculate the crisp value of each criterion. Table 4 shows the results of these calculations.

**Table 4.** Resulting effect and weights of the fuzzy MEREC-G.


#### *4.3. Phase 3: Application and Results of the Fuzzy RATMI Method*

The fuzzy MEREC-G method is used to determine the fuzzy criteria weights, which are then combined with the decision matrix to form the decision-making matrix. The fuzzy RATMI method is applied to this matrix to rank the alternatives. From step 3.1, the fuzzy decision-making matrix is normalized using Equations (4) and (5), which are the same as those used in the fuzzy MEREC-G. The fuzzy weighted decision-making matrix is obtained using Equation (12) from step 3.2 and shown in Table 5.

First, the fuzzy optimal alternatives are determined using Equations (13) and (14), and then they are decomposed into their components using Equations (15) and (16). Next, Equations (17) and (18) are used to decompose the alternatives into their components. Finally, the fuzzy magnitude of the components is calculated using Equations (19) and (20). The values of the fuzzy magnitude of components are shown in Table 6.

The same process is performed for the alternatives using Equations (21) and (22). Then, with Equations (23)–(25), the values of $\tilde{z}_{11;i}$ and $\tilde{z}_{22;i}$, which are the elements of $\tilde{Z}_i$, are found. Equation (26) is used to obtain the fuzzy trace, $tr\left(\tilde{Z}_i\right)$, of the matrix $\tilde{Z}_i$. Finally, this fuzzy value is defuzzified using Equation (27). Table 7 shows these values and the results of the fuzzy MCRAT method.


**Table 5.** The fuzzy weighted decision-making matrix.

**Table 6.** The fuzzy magnitude of components' values.


**Table 7.** Results of the fuzzy MCRAT method.


Another ranking will be obtained by the fuzzy RAMS method. In this method, the alternatives are ranked based on the median similarity between the optimal alternatives and other alternatives by applying Equations (28)–(31). This was followed by finding the majority index between the fuzzy MCRAT and fuzzy RAMS methods using Equation (32) with *v* = 0.5. The results of these calculations are shown in Tables 8 and 9, along with the alternative rankings according to the fuzzy RATMI method.


**Table 8.** Results of the fuzzy RAMS technique.

**Table 9.** Alternatives rankings according to the fuzzy RATMI method.


Another application of the proposed fuzzy MCDM approach was conducted using two other problems [61,62] that are demonstrated in Table 10. The computations of these two examples are attached in the Supplementary Materials as Table S1 for Example 1 and Table S2 for Example 2.


**Table 10.** Details of the selected problems and comparisons with the proposed approaches.

#### **5. Discussion**

The numerical application of the proposed hybrid MCDM approach based on fuzzy MEREC-G and fuzzy RATMI methods in this research study showed that it can generate alternative rankings. However, ensuring its validity and checking how those generated alternative rankings compare with rankings of other fuzzy MCDM methods is essential. Moreover, it is also necessary to check the sensitivity of the proposed model. Therefore, the validity and sensitivity analyses are provided in the following subsections.

#### *5.1. Validity Analysis of the Proposed Approach*

The validity of the resulting alternative rankings from the fuzzy MCRAT, fuzzy RAMS, and fuzzy RATMI methods presented in Tables 7–9, respectively, is checked. This was done by comparing the rankings from the proposed methods with those resulting from the multiple fuzzy MCDM methods presented in Table 11: the fuzzy ARAS, fuzzy MARCOS, fuzzy TOPSIS, fuzzy MABAC, fuzzy VIKOR, and fuzzy MAIRCA. It is worth mentioning that the researchers who created these fuzzy MCDM methods applied criteria with established fuzzy weights. In contrast, in this research study, the fuzzy weights were unknown and determined by the proposed fuzzy MEREC-G method. The nonparametric correlation coefficients of ranked data, Spearman's *rho* and Kendall's *tau\_b*, which might be better for smaller samples [73], were found as shown in Tables 12 and 13, respectively. The correlation analyses show high, statistically significant correlations between the resulting alternative rankings from the fuzzy MCRAT, fuzzy RAMS, and fuzzy RATMI methods and those resulting from the other fuzzy MCDM methods. This result indicates high accuracy and consistency between the alternative rankings of the proposed hybrid MCDM approach based on the fuzzy MEREC-G and fuzzy RATMI methods and the other fuzzy MCDM methods. Therefore, the proposed approach is deemed valid.

**Table 11.** Alternative rankings resulting from multiple fuzzy MCDM methods.


\* Alternative ranking adopted from [48]. \*\* Alternative ranking based on Tables 7–9.

**Table 12.** Spearman's *rho* correlation coefficients between alternative rankings resulting from multiple fuzzy MCDM methods.


Note: All Spearman's *rho* correlation coefficients are significant at the *p* ≤ 0.01 level (2-tailed).


**Table 13.** Kendall's *tau\_b* correlation coefficients between alternative rankings resulting from multiple fuzzy MCDM methods.

\* Correlation is significant at the *p* ≤ 0.05 level (2-tailed). \*\* Correlation is significant at the *p* ≤ 0.01 level (2-tailed).

#### *5.2. Sensitivity Analysis of the Proposed Approach*

The sensitivity of the proposed MCDM approach in this study is checked by analyzing the effect of different criteria weights on the resulting rankings of alternatives (A1–A6) from the fuzzy RATMI. The sensitivity analysis was performed by calculating different fuzzy criteria weights for each of the eight criteria (C1–C8) over a range of 10% to 90% in 10% increments, equally distributing the remainder of the 100% over the rest of the criteria in each scenario. This created a total of 72 run scenarios of the fuzzy RATMI algorithm (i.e., nine sets of criteria weights × eight criteria = 72 run scenarios). This procedure enabled comparing the effect of different weights of each criterion on the resulting alternative rankings.
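The 72 scenarios described above can be generated as follows; the names are hypothetical, and crisp weights are used for illustration, whereas the analysis itself varies fuzzy weights:

```python
def scenario_weights(n):
    """Sensitivity-analysis weight sets: for each criterion j, its weight
    takes 0.1, 0.2, ..., 0.9 and the remainder is split equally over the
    other n - 1 criteria, giving 9 * n scenarios."""
    scenarios = []
    for j in range(n):
        for step in range(1, 10):
            w = step / 10
            rest = (1 - w) / (n - 1)
            scenarios.append([w if q == j else rest for q in range(n)])
    return scenarios
```

With `n = 8` criteria this yields the 72 scenarios used in the analysis, each set of weights summing to one.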

Figure 2 shows the resulting alternative rankings from the sensitivity analysis. As shown in Figure 2a, criterion C1 demonstrated its sensitivity in most of the alternative rankings in the 10% and 20% scenarios and provided consistent rankings for the 30% to 90% scenarios. Figure 2b shows that criterion C2 changed the rankings of the alternatives A3 and A4 only in the 10% scenario and showed consistent alternative rankings in the 20% to 90% scenarios. For criterion C3, the analysis shows that it gave consistent alternative rankings for the whole range of scenarios from 10% to 90%, as presented in Figure 2c, indicating that changing its weight does not influence the decision-making problem. Figure 2d shows that criterion C4 changed the rankings of the alternatives in the 10%, 80%, and 90% scenarios and gave consistent alternative rankings in the 20% to 70% scenarios. Figure 2e shows that criterion C5 changed the rankings of the alternatives A2, A3, and A4 only in the 10% scenario and showed consistent alternative rankings in the 20% to 90% scenarios. Figure 2f shows that criterion C6 changed the rankings of the alternatives in the 10% and 20% scenarios while giving consistent alternative rankings in the 30% to 90% scenarios. Figure 2g shows that criterion C7 changed the rankings of the alternatives in the 10%, 20%, and 70% scenarios while giving consistent alternative rankings in the other scenarios. Finally, Figure 2h shows that criterion C8 changed the rankings of the alternatives in the 10%, 20%, and 30% scenarios and gave consistent alternative rankings in the 40% to 90% scenarios. These results indicate that the proposed approach is sensitive enough to changes in the criteria weights and reflects those changes on the alternative rankings, yet not too sensitive and capable of producing consistent rankings based on alternatives' performance scoring.

#### **6. Conclusions**

Decision-making can be challenging when faced with multiple conflicting criteria and uncertain or vague information. Fuzzy logic can model the uncertainty and ambiguity in the decision process and provide a framework for fuzzy MCDM methods. These methods help decision-makers assign weights to the criteria and rank the alternatives systematically. This paper introduces a new hybrid fuzzy MCDM approach that combines two novel methods: fuzzy MEREC-G for criteria weighting and fuzzy RATMI for alternative ranking. The new approach was tested with real-world problem data adopted from Ulutaş et al. [48] and compared with other MCDM methods: fuzzy ARAS, fuzzy MARCOS, fuzzy TOPSIS, fuzzy MABAC, fuzzy VIKOR, fuzzy MAIRCA, fuzzy MCRAT, and fuzzy RAMS. The validity and sensitivity of the proposed hybrid MCDM approach were evaluated. The validity was measured using the nonparametric Spearman's *rho* and Kendall's *tau\_b* correlation coefficients of ranked data. The correlation coefficients were 0.943 and 1.00 using Spearman's *rho*, and 0.867 and 1.00 using Kendall's *tau\_b*. These figures indicate that the proposed approach is valid and can be applied to different real problems with fuzzy data, such as supplier selection [49,52] and selecting pandemic hospital sites [55]. The sensitivity was checked by analyzing how different criteria weights affected the alternative rankings from the fuzzy RATMI, which showed that the approach was sensitive enough to reflect changes in the criteria weights in the alternative rankings, yet not so sensitive that it failed to produce consistent rankings based on the alternatives' performance scores. Therefore, this study's new hybrid fuzzy approach is deemed valid.
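Validity checks of this kind can be reproduced with standard rank-correlation routines. The sketch below uses two hypothetical rankings of six alternatives that differ by one adjacent swap; the rankings are illustrative and are not taken from the paper's comparison tables:

```python
from scipy.stats import spearmanr, kendalltau

# Hypothetical rankings of six alternatives (A1..A6) from two methods,
# differing only in the order of the last two alternatives.
rank_a = [1, 2, 3, 4, 5, 6]
rank_b = [1, 2, 3, 4, 6, 5]

rho, _ = spearmanr(rank_a, rank_b)   # Spearman's rho on the rank vectors
tau, _ = kendalltau(rank_a, rank_b)  # Kendall's tau_b (tau-a here: no ties)
print(round(rho, 3), round(tau, 3))  # 0.943 0.867
```

With six items and no ties, a single adjacent swap yields rho = 1 - 6·2/(6·35) ≈ 0.943 and tau = 13/15 ≈ 0.867, which shows how one pairwise disagreement between two methods maps onto these coefficients.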

There are always opportunities for further studies in any new approach. The following are possible future directions to extend the study on the proposed hybrid fuzzy MEREC-G and fuzzy RATMI approach:


**Supplementary Materials:** The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math11173773/s1, Table S1: Example 1; Table S2: Example 2.

**Author Contributions:** Conceptualization, A.A.M. and R.M.S.A.; data curation, A.A.M. and R.M.S.A.; formal analysis, A.A.M. and R.M.S.A.; investigation, A.A.M. and R.M.S.A.; methodology, A.A.M. and R.M.S.A.; project administration, A.A.M. and R.M.S.A.; resources, A.A.M. and R.M.S.A.; software, A.A.M. and R.M.S.A.; supervision, A.A.M. and R.M.S.A.; validation, A.A.M. and R.M.S.A.; visualization, A.A.M. and R.M.S.A.; writing—original draft, A.A.M. and R.M.S.A.; writing—review and editing, A.A.M. and R.M.S.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


*Mathematics* Editorial Office E-mail: mathematics@mdpi.com www.mdpi.com/journal/mathematics
