1. Introduction
Group decision-making (GDM) refers to the process where a group of evaluators works together to develop a unified solution for problems with multiple alternatives [1]. With the rapid development of society, management and decision-making demands have become more complex, requiring the independent evaluation of options by decision makers from various backgrounds. Meanwhile, technological advancements have also simplified interpersonal communication, making it practical and effective for large groups to participate in complex decision-making processes. Against this backdrop, GDM problems have evolved from involving just a few evaluators to requiring the participation of a large number of evaluators, giving rise to the concept of large-scale group decision making (LSGDM) [2,3]. Policy assessment [4], venture capital management [5,6], site selection [7,8,9], and tourism management [10] are just a few examples of LSGDM application areas. In-depth research into LSGDM has attracted considerable attention from scholars in recent years.
Given the complexity of the decision environment, it is impractical for evaluators to provide precise numerical evaluations of the alternatives. In such scenarios, fuzzy sets, introduced by Zadeh [11], provide an effective means to express uncertain and ambiguous information. This proposal has led to various extensions of fuzzy sets, each designed to better handle uncertainty in decision-making scenarios. The 2-tuple linguistic model [12] was introduced to address linguistic uncertainty by combining symbolic terms with numerical values through symbolic translation. Type-2 fuzzy sets (T2FSs) [13] incorporate a secondary membership function to deal with more complex uncertainty. Pythagorean fuzzy sets (PFSs) [15] extend intuitionistic fuzzy sets by allowing the squared sum of the membership and non-membership degrees to be at most 1, while q-rung orthopair fuzzy sets (qROFSs) [14] further generalize this concept by relaxing the constraint to the sum of the q-th powers of these degrees. Although these extensions enhance the adaptability of fuzzy set theory in uncertain environments, they also have inherent limitations. The 2-tuple linguistic model, while improving linguistic representation, is restricted to a single linguistic term, which may be insufficient for capturing experts' hesitation [16]. T2FSs, though effective in managing high uncertainty, involve significant computational complexity [17], making them less suitable for LSGDM problems. Both qROFSs and PFSs, despite offering greater flexibility in uncertainty representation, still rely on a single value pair to express membership and non-membership degrees.
However, in practice, evaluators may struggle to provide precise assessments due to hesitation, making it difficult to elicit uncertain information effectively. Motivated by this, Torra [18] proposed hesitant fuzzy sets (HFSs) to better express situations where there is indecision between multiple values. The introduction of HFSs also addresses the challenge in GDM where multiple evaluators hold different opinions, making it difficult to select a single value. For instance, when two evaluators assess an alternative, one might express their evaluation as "0.3" while the other might express it as "0.5". This can be represented by an HFS as {0.3, 0.5}. As we can see, using an HFS can accommodate the opinions of multiple evaluators and retain as much information as possible. However, it also presents certain challenges for subsequent computations. For example, when performing calculations between HFSs with different numbers of elements, it is often necessary to adjust the original information, such as adding values to the shorter one based on the evaluators' risk preferences until the numbers of elements are aligned. This adjustment introduces additional information into the preferences, which might provoke unreasonable decision outcomes. To address this limitation, Hao et al. [19] introduced a new information expression tool based on HFSs, namely, normal-type hesitant fuzzy sets (N-HFSs), which maintain the advantages of HFSs in effectively depicting hesitant information while reducing the complexity of computation. In the LSGDM framework, the primary and fundamental steps involve collecting evaluation information from individuals and effectively processing this information. The ways in which HFSs and N-HFSs can be used to facilitate this process warrant further exploration.
Compared to traditional GDM problems, LSGDM problems involve a significantly larger number of participants, resulting in a more complex and time-consuming decision-making process [20]. Since it is nearly impossible to manage a large decision-making group effectively, clustering methods are typically employed to reduce dimensional complexity by dividing the large group of evaluators into several manageable subgroups. After clustering, each subgroup is considered a decision unit, enabling a more efficient consensus to be reached. The consensus reaching process (CRP) is another crucial component of GDM; it is a tool designed to resolve or reduce conflicts among evaluators' viewpoints and facilitate the achievement of consensus within the group. It is a dynamic iterative process that continuously modifies individual opinions, assesses whether the group has reached a predetermined consensus threshold, and decides whether to continue iterating or to stop. Consensus improvement methods can be categorized into two main types: optimization-based consensus models [21,22,23] and feedback consensus models [24,25,26,27,28]. The former typically uses optimization techniques to minimize the total adjustments needed to achieve consensus within a predetermined threshold, while the latter identifies the evaluators, alternatives, and criteria that need modification and then advises evaluators on changing their preferences. Therefore, in terms of functional principles, the optimization-based CRP automatically adjusts evaluators' opinions, generally serving as a support tool for the moderator. In contrast, a CRP with a feedback mechanism can simulate a real CRP and provides control over the modifications made to evaluators' opinions. For this reason, we mainly consider the CRP with a feedback mechanism for LSGDM problems in this paper. In the literature, CRPs with feedback mechanisms have yielded numerous outcomes [25,28,29,30]. However, some issues should be noted. First, consensus measurement represents the degree of agreement among evaluators, and it is defined based on the concept of similarity or dissimilarity between their preferences. How to obtain the consensus degree with hesitant fuzzy information within the LSGDM framework remains an unresolved issue. Although some definitions exist in the current research [31,32,33], they incorporate additional information into the calculations, which inevitably leads to unreasonable decision outcomes [19]. Second, the feedback mechanism plays a crucial role in CRPs, as it provides guidance to evaluators on how to adjust their preferences in order to achieve consensus. Therefore, the ability to offer detailed instructions to evaluators becomes a critical aspect of the feedback mechanism, something that has been insufficiently addressed in existing research on HFS CRPs. Finally, it is crucial to consider the evaluators' willingness to modify their preferences in CRPs, as it reflects the practical applicability of the CRP to some extent. Exploring ways of integrating this aspect into the feedback mechanism is a key issue worth investigating.
Since LSGDM involves a large number of evaluators, and considering the outstanding advantage of HFSs in expressing hesitant information, namely, their ability to accommodate diverse information, some scholars have introduced HFSs as an information representation tool into the LSGDM framework. For example, Rodríguez et al. [33] proposed an adaptive consensus model for LSGDM problems, aimed at reducing both the time required and the supervision cost involved in the CRP. Lu et al. [32] developed a CRP for incomplete hesitant fuzzy preferences, taking distrust behavior into account. To the best of our knowledge, only a few studies [32,33] have utilized HFS information to address LSGDM problems, and several critical issues remain to be addressed and significantly improved. (1) Although there has been some research on HFSs within the LSGDM framework, none has removed the need for a normalization process that adds values in order to carry out the computations, and this process might introduce bias into the results. (2) Measuring the level of consensus is a crucial aspect of a CRP, and it is typically constructed based on distance. However, current consensus measures in an HFS environment often require that the two HFSs have identical numbers of elements, which results in artificially adding decision information during the computation process [31,32,33], and a reasonable consensus measure is still lacking. (3) There is a lack of CRPs with feedback mechanisms that provide detailed suggestions for evaluators who need to change their opinions and consider evaluators' willingness.
Taking into account the aforementioned drawbacks, this proposal aims to develop a novel LSGDM methodology for hesitant fuzzy information, which includes a clustering method based on opinion similarity and a consensus model with a feedback mechanism. The main contributions can be outlined as follows:
A distance measure based on statistical information is developed to quantify the dissimilarity between two N-HFEs (normal-type hesitant fuzzy elements). On this basis, a consensus measure is subsequently defined.
A CRP with a feedback mechanism is constructed, which not only provides detailed and clear guidance to the evaluators who need to revise their opinions but also considers the extent to which evaluators are willing to modify their views.
An LSGDM framework based on hesitant fuzzy information is introduced, which does not require any normalization to deal with the original decision information.
A case study is offered to show the feasibility of the proposed method, and its merits and stability are illustrated through sensitivity and comparative analyses.
The remainder of this paper is organized as follows: Section 2 outlines the basic concepts related to the proposed method. In Section 3, we introduce a hesitant fuzzy CRP with a feedback mechanism. Section 4 presents a numerical example and conducts sensitivity and comparative analyses to validate the effectiveness of the proposed approach.
3. Hesitant Fuzzy Consensus Reaching Process for Large-Scale Group Decision-Making Problems
In this section, a hesitant fuzzy CRP framework for LSGDM is proposed, which mainly includes the following three phases: (i) clustering evaluators into different subgroups based on the c-means algorithm; (ii) developing a consensus measure for hesitant fuzzy information; and (iii) constructing a feedback mechanism to help evaluators modify their opinions.
3.1. Problem Description
Let X = {x_1, x_2, ..., x_m} be a finite set of alternatives, and a group of evaluators, denoted as E = {e_1, e_2, ..., e_P}, needs to assess the alternatives under several criteria C = {c_1, c_2, ..., c_n}. Due to the increasing complexity of the environment, evaluators give their evaluations as fuzzy numbers to better handle the uncertainty that appears in the decision-making process. The decision matrix of each evaluator is expressed as an m × n array of such evaluations, one for each alternative-criterion pair.
Considering the advantages that HFS information has in modeling hesitant and uncertain information, the opinions of the different evaluators in the same position constitute an HFE, which is obtained by Equation (7).
Generally, the moderator needs to predetermine two parameters, the consensus threshold and the maximum number of iteration rounds, as follows:
The consensus threshold, which is established to obtain a solution accepted by most evaluators.
The maximum number of discussion rounds allowed for LSGDM problems, which avoids endless discussions.
3.2. C-Means-Based Clustering Method
As described in Section 2.1, LSGDM problems are more complex than traditional GDM due to the inclusion of a greater number of evaluators. For this reason, a clustering algorithm is required to reduce the dimensionality of evaluators in such problems, thereby simplifying the decision process and enhancing the efficiency of decision making [47]. Clustering is an unsupervised machine learning technique that divides data into a specific number of clusters (groups, subsets), and it has been widely studied in the fields of data mining [48], machine learning [49], and image segmentation [29,50]. After years of research, scholars have proposed various clustering methods that can generally be categorized into two types [51]: (i) hierarchical clustering, which partitions the dataset into successive layers of clusters, with each subsequent layer's clusters based on the results of the previous layer, generally in either agglomerative or divisive mode; and (ii) partitioning clustering, which simultaneously finds partitions of the entire dataset without a hierarchical structure and divides the data into multiple non-overlapping subsets (clusters), with each subset representing a cluster. Among the various partitioning clustering algorithms, the k-means algorithm [52] stands out for its robustness and has become the most commonly and widely used algorithm. As decision-making environments become more complex, an extension of k-means, known as c-means [53], has been developed to be applicable in fuzzy environments. It retains the robustness of k-means in determining clustering results while being well suited to fuzzy environments. Motivated by these advantages, the c-means algorithm is utilized in this paper to cluster the evaluators.
The objective of the c-means algorithm is to find a clustering result that minimizes the objective function value. To achieve this, a set of data points is initially chosen randomly as the cluster centroids. The algorithm then proceeds iteratively through two main steps: (1) Assignment: all remaining data points are assigned to their nearest centroid. (2) Update: each cluster centroid is recalculated from the data points currently assigned to it. These two steps are repeated until the clusters no longer change. Based on this idea, the detailed process of c-means is expressed as follows:
- Step 1.
Each decision matrix is transformed into a vector, also denoted by the same symbol for convenience. The dimension of this vector is m × n, with one component for each alternative-criterion pair.
- Step 2.
Select the initial cluster centroids. In this proposal, the number of cluster centroids is set to the number of alternatives m. Centroids can be either randomly initialized or assigned values from the dataset; to avoid subjectivity, the centroids are initialized randomly here. Each cluster centroid is denoted as follows:
- Step 3.
Calculate the distance between each vector and the cluster centroids, and assign each vector to the closest centroid.
- Step 4.
Recompute the cluster centroids using the current assignments. Suppose the new cluster centroids are denoted by the following:
- Step 5.
When the distance between the cluster centroids of two consecutive iterations is lower than a threshold, the clusters are considered stable and the clustering process stops; that is,
- Step 6.
Output the clusters.
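The assign/update loop of Steps 1-6 can be sketched as follows. This is a minimal hard-assignment (k-means-style) illustration with function and variable names of our own choosing; the paper itself employs the fuzzy c-means extension, which replaces hard assignments with membership degrees.

```python
import random

def cluster(vectors, m, tol=1e-6, max_iter=100, seed=0):
    """Hard-assignment sketch of the clustering loop in Steps 1-6.

    vectors: flattened decision matrices, one per evaluator (Step 1).
    m: number of clusters, here the number of alternatives (Step 2).
    """
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, m)]  # Step 2: random init

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = []
    for _ in range(max_iter):
        # Step 3: assign each evaluator vector to its nearest centroid.
        clusters = [[] for _ in range(m)]
        for v in vectors:
            k = min(range(m), key=lambda j: dist(v, centroids[j]))
            clusters[k].append(v)
        # Step 4: recompute each centroid from its current members.
        new_centroids = []
        for j, members in enumerate(clusters):
            if members:
                new_centroids.append([sum(col) / len(members) for col in zip(*members)])
            else:
                new_centroids.append(centroids[j])  # keep an empty cluster's centroid
        # Step 5: stop when centroids move less than the threshold.
        moved = max(dist(a, b) for a, b in zip(centroids, new_centroids))
        centroids = new_centroids
        if moved < tol:
            break
    return clusters, centroids  # Step 6: output the clusters
```

With well-separated evaluator opinions, the loop converges in a few iterations regardless of the random initialization.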
3.3. Consensus Measurement
In this subsection, a consensus measure is defined to compute the level of agreement between evaluators. First, we define the distance measure between two N-HFEs based on the Euclidean distance as follows:
Definition 6. Let two N-HFEs be given, with their respective normal distribution membership functions. According to the concept of Euclidean distance, the distance between the two N-HFEs can be defined as follows:

Proposition 1. For the distance measure between two N-HFEs defined in Equation (14), the following properties hold:
- (1)
The distance lies between 0 and 1.
- (2)
The distance is 0 if and only if the two N-HFEs are equal.
- (3)
The distance is symmetric.
- (4)
The distance satisfies the triangle inequality.
Proof of Proposition 1. - (1)
Since the means and standard deviations of the two N-HFEs are bounded, the differences between them are bounded as well. However, in a real situation, the distance value is actually no more than 1, since the two difference terms will not be maximized at the same time. There are three extreme situations.
When , , , , the distance is 1.
When , , , , the distance is 1.
In other cases, the difference will be smaller. For instance, when , , , , the distance is 0.25.
In summary, is proved.
- (2)
If , we have and , that is, and , so is obtained.
If , we have and , then, is proved.
- (3)
As this is obvious, the proof is omitted here.
- (4)
We assume
, then, we have
. According to the Minkowski inequality, we have the following:
Thus, is proved.
□
To facilitate a clearer comprehension for the reader, an exemplary case is employed to elucidate the computation of the proposed distance measure.
Example 1. Let the preferences for alternative , provided by subgroup , be and those provided by subgroup be , respectively. Then, expressing them as N-HFSs, we have and . Using Equation (14), the distance between the two N-HFEs is as follows:

In the above example, we can see that if we use an N-HFS as the information representation form, the number of elements does not matter. However, if the traditional HFE is adopted to express the above situation, the two HFEs must first be normalized before the distance between them can be calculated. This involves adding elements to the shorter HFE based on the decision makers' risk preferences until the number of elements in both HFEs is equal. This process not only distorts the original decision-making information but also relies heavily on the decision makers' subjective judgment, potentially leading to unreasonable outcomes.
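The advantage illustrated by Example 1 can be sketched in code. This is a hypothetical illustration: it summarizes each HFE by the mean and standard deviation of its elements (the statistical parameters of an N-HFE), and the distance shown is a simple Euclidean distance on those two parameters, which may differ from the exact form of Equation (14).

```python
from statistics import mean, pstdev

def to_nhfe(hfe):
    """Summarize an HFE (a list of membership values) by the mean and
    population standard deviation of its elements -- the statistical
    parameters of an N-HFE. No length normalization of the HFE is needed."""
    return (mean(hfe), pstdev(hfe))

def nhfe_distance(a, b):
    """Assumed Euclidean-style distance on the (mean, std) pairs."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

h1 = to_nhfe([0.3, 0.5])        # subgroup with two distinct opinions
h2 = to_nhfe([0.4, 0.6, 0.8])   # subgroup with three -- lengths need not match
d = nhfe_distance(h1, h2)
```

Note that the two HFEs have different numbers of elements, yet no values had to be added to either of them before computing the distance.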
Then, the consensus measure based on the distance measure is defined, and this is decomposed into four levels, expressed as follows:
- (1)
The consensus degree of subgroup
about alternative
with respect to criterion
is defined as follows:
- (2)
The consensus degree of subgroup
of alternative
is defined as follows:
- (3)
The consensus degree of subgroup
is calculated by the following:
- (4)
The overall consensus degree can be obtained by the following:
where
represents the weight of the subgroup
. In this paper, we assume the weights of subgroups are equal, that is
.
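The four-level decomposition above can be sketched as a chain of averages. The sketch below assumes that the criterion-level consensus is one minus the subgroup-to-collective distance and that the upper levels are plain (weighted) means with equal subgroup weights; the paper's actual equations may weight these terms differently.

```python
def consensus_levels(d, weights=None):
    """d[p][i][j]: distance between subgroup p's N-HFE and the collective
    N-HFE for alternative i under criterion j (all distances in [0, 1]).

    Returns the consensus values at the four levels: per criterion,
    per alternative, per subgroup, and overall."""
    P = len(d)
    # Level 1: criterion-level consensus, assumed to be 1 - distance.
    cl_crit = [[[1 - dij for dij in di] for di in dp] for dp in d]
    # Level 2: average over criteria for each alternative.
    cl_alt = [[sum(ci) / len(ci) for ci in cp] for cp in cl_crit]
    # Level 3: average over alternatives for each subgroup.
    cl_sub = [sum(ca) / len(ca) for ca in cl_alt]
    # Level 4: weighted average over subgroups (equal weights assumed).
    w = weights or [1 / P] * P
    overall = sum(wp * cp for wp, cp in zip(w, cl_sub))
    return cl_crit, cl_alt, cl_sub, overall
```

For two subgroups, two alternatives, and two criteria, the overall value is simply the mean of the two subgroup consensus degrees.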
After obtaining the overall consensus level , it is necessary to determine whether the group has reached an agreement by comparing with a predefined consensus threshold . If , this means that the whole group has reached a consensus and can move on to the selection process. Otherwise, it is necessary to perform another round of discussion and identify which evaluations provided by evaluators need to be modified. To do so, the following steps are performed.
- (1)
Identification rule for the subgroup with the lowest consensus in round
r. Once the subgroup is identified, all evaluators in this subgroup need to change their evaluations.
- (2)
Identification rule for the alternative with the lowest consensus in the subgroup
in round
r.
- (3)
Identification rule for the criteria with the lowest consensus regarding alternative
in the subgroup
in round
r.
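The three identification rules above can be read as nested argmin operations over the consensus values computed at each level; a sketch with hypothetical names:

```python
def identify(cl_sub, cl_alt, cl_crit):
    """Identification rules as nested argmins (sketch).

    cl_sub[p]: consensus of subgroup p.
    cl_alt[p][i]: consensus of subgroup p about alternative i.
    cl_crit[p][i][j]: consensus of subgroup p about alternative i
    under criterion j.
    Returns the indices of the subgroup, alternative, and criterion
    with the lowest consensus, in that order."""
    p = min(range(len(cl_sub)), key=lambda k: cl_sub[k])          # rule (1)
    i = min(range(len(cl_alt[p])), key=lambda k: cl_alt[p][k])    # rule (2)
    j = min(range(len(cl_crit[p][i])), key=lambda k: cl_crit[p][i][k])  # rule (3)
    return p, i, j
```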
3.4. Feedback Mechanism
After determining the evaluators who need to reconsider their evaluations, a feedback mechanism is constructed to provide detailed advice to evaluators and improve the consensus of the group, as follows:
Let
be the
round evaluation of evaluator
, which belongs to subgroup
. Motivated by the proposal of Liang et al. [
27], the evaluation can be modified based on the willingness of the evaluators as follows:
where
represents the mean of the normal distribution of the collective opinion; and
represents the willingness of evaluator
to modify their original opinion, where
. A higher value of
indicates a greater willingness to adjust the initial preference, while a lower value suggests a stronger inclination to retain the original judgment. Since this coefficient is predefined by each evaluator, it varies based on individual attitudes toward opinion adjustment.
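A common form of such willingness-weighted adjustment is a convex combination of the evaluator's own value and the collective mean; the sketch below assumes this form, which may differ in detail from Equation (22).

```python
def revise(x, mu_c, w):
    """Assumed willingness-weighted adjustment rule (sketch).

    x: the evaluator's current evaluation value.
    mu_c: the mean of the normal distribution of the collective opinion.
    w: the evaluator's willingness coefficient in [0, 1]; w = 0 keeps
    the original opinion, w = 1 adopts the collective mean outright."""
    assert 0 <= w <= 1
    return (1 - w) * x + w * mu_c
```

Under this form, a larger w moves the evaluation further toward the collective opinion in each round, which is consistent with the behavior of the willingness coefficient described above.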
To better explain the process, the flowchart shown in
Figure 2 summarizes the entire CRP model. Additionally, the CRP for handling hesitant fuzzy information is detailed in Algorithm 1.
Algorithm 1 The proposed hesitant fuzzy CRP

Require: Map preferences
Require: List of alternatives
Require: List of criteria
Require: List of evaluators
Require: Coefficient of modification
Require: Consensus threshold
Require: Maximum iteration
 1: m ← length(alternatives)
 2: n ← length(criteria)
 3: P ← length(evaluators)
 4:
 5:
 6: subgroups ← clustering method based on c-means
 7: q ← length(subgroups)
 8: for … to q do
 9:   for … to P do
10:     for … to m do
11:       for … to n do
12:         computeConsensusLevelOfSubgroupOfAlternativeUnderCriterion()
13:         computeConsensusLevelOfSubgroupOfAlternative()
14:         computeConsensusLevelOfSubgroup()
15:         computeConsensusLevel()
16:         if … OR … then
17:           identifyChanges()
18:           feedbackProcess()
19:           updateClusters(subgroups(q))
20:         end if
21:
22:       end for
23:     end for
24:   end for
25: end for
3.5. Selection Process
Once the predefined consensus threshold is reached, the selection process is activated to generate the ranking of alternatives. In this paper, we aggregate the alternatives under each criterion by the NHFCWA operator and obtain the score value of each alternative by Equation (5). The alternatives are ranked according to their score values: the larger the score value, the better the alternative.
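The score-based ranking step amounts to a descending sort; a minimal sketch with hypothetical alternative labels and score values:

```python
def rank_alternatives(scores):
    """Order alternatives by score value, larger is better.

    scores: a dict mapping alternative labels to the score values
    obtained from the aggregated N-HFEs (Equation (5) in the paper)."""
    return sorted(scores, key=scores.get, reverse=True)
```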
4. Case Study
In this section, a numerical example is provided to demonstrate the feasibility and applicability of the proposed method. We have uploaded the data for the three scenarios and the code for this proposal to the GitHub repository: https://github.com/Wei-Liang913/HFS_CRP.git (accessed on 1 February 2025).
4.1. Case Description
Since 2023, China’s tourism industry has fully recovered, with residents’ travel intentions consistently remaining above 90% each quarter, averaging 91.85% for the year. These data not only reflect a sustained enthusiasm for travel but also indicate a strong rebound in the tourism market. In the first quarter of 2024, the domestic tourism market continued its positive trend, with total domestic tourism trips reaching 1.419 billion, a 16.7% increase from the previous year. Domestic tourism revenue amounted to CNY 1.52 trillion, showing 17.0% year-on-year growth. This increase highlights the booming development of the tourism industry, the improvement in residents’ consumption capabilities, and the rising demand for high-quality travel experiences.
However, with the rapid recovery of the tourism industry, the sector faces a series of new challenges. Sudden events such as public health crises and natural disasters pose threats to the stability and safety of the tourism market. For instance, the resurgence of pandemics, frequent natural disasters, and unexpected social events have impacted travel safety and tourism experiences. These issues not only raise higher demands for the operation and management of the tourism industry but also underscore the importance of strengthening emergency management and risk control. To ensure the sustainable and healthy development of the tourism sector, it is essential to actively address these emergencies and safeguard both visitor safety and industry stability.
In emergency management, the role of logistics services, especially emergency logistics services, is crucial. Emergency logistics service providers focus on delivering rapid and efficient transportation and distribution of supplies during crises. Compared to traditional logistics, emergency logistics services demand higher responsiveness and flexibility. These providers can quickly mobilize resources in emergencies, ensuring that critical supplies such as medical equipment, food, and rescue materials reach the affected areas promptly.
To quickly address the impact of sudden events on tourists, a large tourism company in Fujian aims to establish a long-term partnership with an emergency logistics service provider. After initial research and analysis, four well-known logistics companies in the industry are identified, denoted as and . These logistics companies need to be evaluated based on three main criteria , where denotes price, denotes speed, and denotes customer satisfaction.
As García-Zamora et al. [37] note, advancements in technology have enabled LSGDM to extend beyond just 20 evaluators; it is now common for hundreds or even thousands of participants to be involved. To show the feasibility and flexibility of the proposed method, we constructed three scenarios based on the number of evaluators involved: 20 evaluators, 50 evaluators, and 100 evaluators. The moderator set the maximum number of CRP rounds to 5. Considering the large number of decision makers, achieving a very high level of consensus is relatively difficult, as it would require evaluators to be almost in complete agreement with the group opinion. Therefore, the moderator set the consensus threshold at a more moderate value to facilitate reaching consensus.
4.2. Scenario 1
This subsection simulates the LSGDM problem with a lower number of evaluators, 20, to understand the CRP performance. Given the complexity inherent in the human mind and the external decision environment, evaluators assess the alternatives under the different criteria by fuzzy numbers. Due to space constraints, Table 1 only presents a portion of the initial preferences. The proposed method is applied to solve this problem; the specific steps are listed as follows:
- Step 1.
Classify evaluators into several subgroups based on opinion similarity by the c-means algorithm. Evaluators are divided into four subgroups, as shown in Table 2.
- Step 2.
The individual decision matrices are aggregated by subgroups into subgroup decision matrices by Equation (7), denoted by N-HFEs, and the results are shown in Table 3, Table 4, Table 5 and Table 6. The collective opinions are then obtained by the NHFCWA operator with Equation (4); see Table 7.
- Step 3.
Compute the different levels of consensus by Equations (14)–(18); the four subgroups’ consensus levels are as follows: , , and . The overall consensus is , which is less than the consensus threshold . As , the group does not reach the acceptable consensus level, and the feedback mechanism is implemented to improve the consensus level.
- Step 4.
Use the feedback mechanism to modify the evaluators’ opinions.
Round 1:
According to the identification rules, evaluators in the subgroup with the minimal consensus degree need to change their opinions; that is, all evaluators in the subgroup , i.e., , are required to modify the evaluations of the three criteria of the four alternatives by Equation (22). Taking the example of evaluator ’s evaluation of alternative on criterion , . By Equation (22), the evaluation for the next round is . Similarly, we can obtain a new round of evaluations for the other evaluations that need to be modified. After modification, we apply the c-means algorithm again to cluster the 20 evaluators based on their current preferences. The clustering results, updated with the modified preferences, are presented in Table 8. To better visualize the evolution of the evaluators’ preferences during the CRP, we apply Principal Component Analysis (PCA) to project the decision matrices into a 2D space, as shown in Figure 3. PCA is a dimensionality reduction technique that transforms high-dimensional data into a smaller number of uncorrelated components while preserving as much variance as possible. In this visualization, each principal component is a linear combination of the original decision-matrix entries, with Principal Component 1 capturing the most variance and Principal Component 2 the second most. The closer two evaluators appear in this space, the more similar their opinions are.
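The PCA projection used for this visualization can be sketched with a minimal covariance eigendecomposition; the sketch below takes each evaluator's flattened decision matrix as one row and returns its coordinates on the first two principal components.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X (one flattened decision matrix per evaluator)
    onto the first two principal components.

    Minimal sketch: center the data, eigendecompose the covariance
    matrix, and keep the two eigenvectors with largest eigenvalues."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    comps = vecs[:, ::-1][:, :2]             # top-2 principal components
    return Xc @ comps                        # 2D coordinates per evaluator
```

Evaluators whose rows project to nearby points hold similar opinions, which is what the round-by-round plots are meant to show.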
Due to the updated clustering results, the statistical information for subgroups , and has also changed. The updated results and group evaluations are shown in Table 9, Table 10, Table 11 and Table 12. Accordingly, the consensus levels of each subgroup are obtained as follows: , and . The overall consensus improves to . However, it still falls short of the consensus threshold.
Round 2:
Similarly, the evaluators in subgroup are advised to revise their opinions based on the identification rules. The consensus degrees for the subgroups are calculated as follows: , and . This results in an overall consensus degree of , which meets the predefined consensus threshold . Table 13 exhibits the iterations of the CRP. It is evident that with each iteration, both the overall consensus and the subgroup consensus demonstrate steady improvement, ultimately reaching the predefined consensus level. The consensus level is acceptable, and the selection process starts.
- Step 5.
The alternatives are aggregated by means of the NHFCWA operator by Equation (4). Then, based on the score values obtained by Equation (5), the final ranking is obtained as follows: , as shown in Table 14.
4.3. Scenario 2
In this subsection, 50 evaluators are involved in the decision-making process. The assessments provided by evaluators are still in the form of fuzzy numbers. The computational procedure is identical to that described in the previous subsection; due to space constraints, the detailed computational process has been omitted.
Following the clustering process, we identified four subgroups with initial opinions and visualized the clustering process in each round using Principal Component Analysis, as shown in Figure 4. After that, we calculated their consensus degrees as follows: and . The overall consensus is , which did not meet the established threshold, indicating that the evaluators’ opinions required adjustment to improve agreement. Notably, subgroup has the lowest level of consensus, necessitating that all evaluators within this subgroup revise their opinions. After modification, the consensus levels of each subgroup are as follows: and . The overall consensus level increased to , which is still less than the consensus threshold. We repeated the process, and evaluators in subgroup then revised their preferences. The resulting consensus degrees of the subgroups in round two are as follows: and . This yielded an overall consensus level of , reaching the required threshold. With consensus achieved, the CRP ends, and the entire process is detailed in Table 15. We then proceed to the selection phase, where the final ranking of alternatives obtained based on score values is as follows: .
4.4. Scenario 3
In this scenario, we simulated 100 evaluators to deal with such an LSGDM problem. In round 0, the evaluators were classified. Figure 5 shows the evolution of the clustering process. The consensus degrees for each subgroup are calculated as follows: and , resulting in an overall consensus of . This level of overall consensus does not meet the predefined threshold, prompting the activation of the CRP to improve evaluator agreement. The CRP steps remain consistent with the previous scenarios, identifying subgroup as having the lowest consensus level. After one round of modifying evaluators’ opinions, the updated consensus levels for the subgroups are as follows: , , and , with an overall consensus degree of , as shown in Table 16. With consensus at an acceptable level, the selection process commenced, resulting in a final ranking of all alternatives as follows: .
4.5. Sensitivity Analysis
Sensitivity analysis is regarded as one of the classic methods for testing the stability of a given method in the field of decision making. It primarily involves altering the input values within the method and observing the impact of these changes on the final decision results. In this paper, the evaluators’ modification coefficient is a crucial factor that influences the consensus process and the final decision outcome. Consequently, a sensitivity analysis is carried out in this subsection to demonstrate how this parameter influences the outcomes. It should be noted that, given the increased complexity introduced by a larger number of evaluators, Scenario 3, which involves 100 evaluators, is used for the sensitivity analyses. Therefore, in this section, we vary the value from 0 to 1 to study its influence on the proposed method.
Firstly,
Figure 6 illustrates the following two important factors related to the CRP: the number of rounds required to reach consensus and the final consensus degree. It is evident from the figure that as the value of
increases, the number of rounds required tends to decrease. This is because the modification coefficient of the evaluators represents the extent to which they are willing to adjust their evaluations. When
, it signifies that the evaluators are barely willing to modify their initial opinions. Consequently, the influence of group opinions on the evaluators is relatively minimal, leading to consensus being achieved only after four rounds of opinion adjustments, while when
, only one round is needed to reach the predetermined consensus level. On the other hand, as the value of
increases, the final consensus level also improves. From
Figure 6, we can see that when
is set to
and
, consensus is achieved within just one round in both cases. However, the final consensus levels differ, reaching
and
, respectively. This also shows that the stronger the evaluators’ willingness to adjust their opinions, the more easily they are influenced by collective opinion, and they are more likely to adjust their opinions significantly in each round. As a result, fewer rounds are needed to reach a consensus, and a higher level of consensus is ultimately achieved.
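The qualitative behaviour described above can be reproduced with a toy simulation. The linear adjustment rule and the opinion values below are assumptions for illustration, not the paper’s exact feedback equation: each evaluator moves toward the group opinion in proportion to a modification coefficient `theta`.

```python
# Toy simulation: a larger modification coefficient `theta` means evaluators
# move further toward the group opinion each round, so fewer rounds are
# needed and a higher final consensus is reached.

def rounds_to_consensus(opinions, theta, threshold=0.9, max_rounds=50):
    """Return (rounds needed, final consensus) under the assumed rule
    x_i <- (1 - theta) * x_i + theta * x_group."""
    ops = list(opinions)
    for r in range(max_rounds + 1):
        group = sum(ops) / len(ops)
        # Consensus measured as 1 minus the mean absolute deviation
        # from the group opinion (an illustrative choice).
        consensus = 1 - sum(abs(x - group) for x in ops) / len(ops)
        if consensus >= threshold:
            return r, consensus
        ops = [(1 - theta) * x + theta * group for x in ops]
    return max_rounds, consensus

opinions = [0.1, 0.3, 0.5, 0.7, 0.9]  # hypothetical initial evaluations
for theta in (0.1, 0.5, 0.9):
    r, c = rounds_to_consensus(opinions, theta)
    print(f"theta={theta}: {r} round(s), final consensus {c:.3f}")
```

Running the sketch shows the same monotone trend as Figure 6: more rounds and a lower final consensus for small `theta`, fewer rounds and a higher final consensus as `theta` approaches 1.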
Figure 7 displays the experimental results, illustrating how the scores and rankings of alternatives change as the value of
fluctuates between 0 and 1. After examining
Figure 7, it is evident that different
values influence the final scores, while the ranking of alternatives remains consistent. From this observation, the following conclusions can be drawn: (i) The proposed method demonstrates robustness, ensuring the stability of rankings under varying parameter values. Specifically, regardless of changes in the
value, the ranking of alternatives remains unchanged, suggesting that the ranking results are insensitive to variations in
. (ii) The
value has a significant impact on the decision-making process. As shown in the figure, a general trend can be observed as follows: as the
value changes, the score values also fluctuate, and the differences between alternative scores become more pronounced. This phenomenon indicates that the evaluators’ willingness to revise their opinions impacts the process of decision making. The greater the willingness, the more significant the effect.
4.6. Comparative Analyses
In this paper, we introduce a novel CRP for LSGDM that handles uncertain and hesitant information using N-HFSs, without incorporating any normalization during the consensus and decision processes. Solving this kind of decision problem with N-HFSs is novel and was previously unexplored in the literature. To emphasize the advantages and validity of the proposed method, we conduct a qualitative comparative analysis with existing methodologies. Additionally, we implement a traditional HFS-based method for a quantitative comparison with the proposed approach. These comparative analyses highlight the unique contributions and feasibility of the proposed method.
4.6.1. Qualitative Comparison
Previous studies have explored HFS-based CRPs, but certain research limitations warrant further attention. A direct quantitative comparison with these studies is nevertheless challenging for several reasons. First, some existing HFS-CRPs are designed for traditional GDM with only a few evaluators and cannot be directly extended to LSGDM, making direct comparisons impractical [
54]. Furthermore, different CRPs use distinct types of information representations (e.g., preference relations), which require transformation for comparison. This transformation inevitably alters the results, making direct comparisons unreliable. Given these challenges, we provide a qualitative comparison instead. The comparison results are presented in
Table 17, highlighting the distinctiveness of the proposed consensus model.
Firstly, relying on only a few evaluators is increasingly inadequate in today’s complex decision-making environments. The involvement of more decision makers has become an inevitable trend. However, the studies developed by Ding et al. [
31], Zhang et al. [
45], and Zhang et al. [
46] only address a small number of evaluators, such as 4 or 5. This falls short of current decision-making needs and results in decision outcomes that lack the collective intelligence of a larger group. Secondly, as is well known, an HFS can include multiple elements within a membership degree, which is the primary reason it can handle hesitation in the decision process. However, this also introduces difficulties when performing calculations between different HFSs. When computations involve two or more HFSs, the existing consensus models either assume the number of elements is uniform [
32,
45], which is often not the case in reality, or require normalization [
31,
33,
46], leading to the distortion of the original decision information. Finally, an appropriate feedback mechanism is needed to guide the evaluators’ adjustments, helping align their viewpoints towards enhancing group consensus. Among the compared methods, the CRPs proposed by Lu et al. [
32,
45,
46] include suggestions for modifying evaluators’ opinions. However, they do not consider the evaluators’ willingness to adjust their evaluations, assuming instead that evaluators are always willing to accept the moderator’s recommendations, which makes these CRPs difficult to apply in practice.
In summary, the CRP proposed in this paper is not only suitable for LSGDM but also provides a feedback mechanism while considering the evaluators’ personal willingness. The entire CRP does not require normalization, thus avoiding the issue of decision outcomes being skewed by artificially added information.
4.6.2. Quantitative Comparison
Due to the differences in information representation, it is difficult to find comparable methods in the existing literature. To better demonstrate the effectiveness of the proposed method, we have developed two comparison approaches.
HFS-based CRP. The evaluation information is expressed with traditional HFSs instead of N-HFSs, while keeping the overall decision process unchanged.
AHC-based CRP. The clustering method is replaced with an agglomerative hierarchical clustering (AHC), while the rest of the process remains the same.
To ensure a fair comparison, we conduct experiments using a scenario with 100 evaluators across different methods. The detailed discussion of the comparisons is presented below.
- (1)
Comparison with HFS-based CRP.
Due to the change in the form of information, the corresponding methods for aggregating different HFEs and calculating the distance between them also needed to be adjusted accordingly. The specific steps are listed as follows:
- Step 1.
Use the c-means algorithm to divide the evaluators into several subgroups.
- Step 2.
Aggregate the subgroup opinions and the collective opinion using the adjusted hesitant fuzzy averaging (AHFA) operator [
55], as defined in Equation (
23).
- Step 3.
The
normalization method is applied to maintain consistency in the number of elements between the subgroup and collective opinions. Then, the consensus degree for different levels is calculated based on the Hamming hesitant fuzzy distance [
56], as shown in Equation (
24).
- Step 4.
If the overall consensus level reaches the predefined threshold, the process advances to the selection stage. If the threshold is not met, an identification rule is applied to determine which evaluators within each subgroup should adjust their preferences. Given the distinct structures of HFS and N-HFS, evaluations are modified using the following equation. The CRP continues to run until consensus is reached.
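Steps 3 and 4 above hinge on normalization and a hesitant fuzzy distance. The following minimal sketch shows one common realization, assuming optimistic padding (repeating the maximum element of the shorter HFE) as the normalization rule; pessimistic padding would repeat the minimum instead.

```python
# Sketch of normalization plus the Hamming hesitant fuzzy distance used in
# the HFS-based comparison method. The padding choice encodes an assumed
# risk attitude and is illustrative, not the paper's exact procedure.

def normalize(h1, h2, optimistic=True):
    """Pad the shorter hesitant fuzzy element so both have equal length."""
    a, b = sorted(h1), sorted(h2)
    pad = max if optimistic else min
    while len(a) < len(b):
        a.append(pad(a))
    while len(b) < len(a):
        b.append(pad(b))
    return sorted(a), sorted(b)

def hamming_hfe_distance(h1, h2):
    """Mean absolute difference between the sorted, length-matched HFEs."""
    a, b = normalize(h1, h2)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical HFEs of different cardinality:
d = hamming_hfe_distance([0.3, 0.5], [0.2, 0.4, 0.6])
print(f"distance = {d:.3f}")  # a consensus degree could then be 1 - d
```

The padding step is precisely where the subjectivity criticized in the quantitative comparison enters: the optimistic and pessimistic rules generally yield different distances for the same pair of HFEs.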
We conducted numerical experiments using 100 evaluators (Scenario 3) with the traditional HFS-based method. The initial consensus obtained is , which is less than the consensus threshold, triggering a CRP. After two rounds of iteration, the decision group reached full consensus with a final consensus degree of . The ranking result is generated as . Compared with the ranking result obtained by the proposed method (), we can see that the final outcomes of the two methods differ. The causes of this difference are summarized as follows:
It can be seen that we have employed the aggregation operator proposed by Liao et al. [
55] for aggregating collective decision-making information. This operator extracts information through ranking orders, addressing the issue of information overload often encountered with traditional aggregation operators. In other words, using the AHFA aggregator mitigates the issue of the number of elements in the HFE increasing exponentially. However, aggregating through ranking essentially leads to information loss, which affects the reliability of the decision outcomes [
19].
Subgroup opinions are represented using HFS information in the traditional HFS-based method. This implies that the number of HFE elements contained in different groups is different. When calculating the degree of consensus, the -normalization method is used based on the evaluators’ risk preferences to standardize the HFEs, ensuring they have the same cardinality. However, this can distort the decision information and increase the subjectivity of the decision outcomes.
- (2)
AHC-based CRP.
To highlight the importance of evaluator clustering, we have compared our proposed method with the Agglomerative Hierarchical Clustering (AHC) approach. Specifically, we have replaced the clustering step in our framework with AHC, and we have obtained the corresponding clustering and decision results, as shown in
Figure 8 and
Table 18.
As we can see from
Figure 5 and
Figure 8, the clustering results differ due to the distinct approaches used to group evaluators. The AHC method is a bottom–up clustering method that starts with each data point as an individual cluster and iteratively merges the closest clusters until a single cluster remains. In contrast, c-means clustering is a partition-based method that assigns evaluators to a fixed number of clusters by minimizing intra-cluster variance. These fundamental differences in clustering mechanisms lead to variations in the initial cluster structure, which subsequently impact the CRP and final decision outcomes. As seen in
Table 18, although both methods ultimately reached the same final consensus level (0.82) within one iteration, their initial consensus levels vary slightly due to their different methods for evaluator grouping. This variation affects how individual opinions are aggregated, ultimately leading to different ranking orders of alternatives. Specifically, in AHC, the hierarchical merging process tends to form more imbalanced clusters, which might cause certain evaluators’ opinions to have a stronger influence on the final decision. On the other hand, c-means ensures more balanced clusters by iteratively optimizing evaluator assignments, leading to a more stable aggregation of preferences. Given that AHC has higher computational complexity and does not improve the efficiency of reaching consensus, c-means remains a more suitable choice for LSGDM. Moreover, the observed ranking differences reinforce the idea that clustering strategies significantly affect the CRP, emphasizing the importance of selecting an appropriate method for evaluator grouping.
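The contrast between the two grouping mechanisms can be illustrated with deliberately simplified one-dimensional implementations; the opinion values and cluster count below are hypothetical, and both routines omit the refinements of production clustering libraries.

```python
# Toy contrast: single-linkage agglomerative clustering (bottom-up) versus a
# Lloyd-style c-means (partition-based) on one-dimensional evaluator opinions.

def agglomerative(points, k):
    """Merge the two closest clusters (single linkage) until k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def c_means(points, centers, iters=20):
    """Assign each point to its nearest center, then recompute centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[idx].append(p)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return groups

opinions = [0.10, 0.15, 0.20, 0.50, 0.55, 0.90]  # hypothetical evaluators
print(agglomerative(opinions, 2))
print(c_means(opinions, [0.2, 0.8]))
```

On this data, single-linkage merging chains the middle opinions onto the left cluster and leaves the outlier alone, producing imbalanced groups, whereas c-means settles into two equally sized clusters, mirroring the imbalance argument made above.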
5. Conclusions
With the surge of social media and continuous advancements in science and technology, LSGDM has become increasingly common in various real-world scenarios. In light of this trend, we introduced a new CRP framework for LSGDM that integrates a detailed feedback mechanism in this paper. By incorporating N-HFS for LSGDM problems, we address the shortcomings of traditional HFS methods, which often require additional subjective information, thus expanding the applicability of LSGDM. Below, we outline the key features of our proposed method:
The use of N-HFS accommodates the different evaluation preferences provided by multiple evaluators, resolving the computational difficulties that varying numbers of elements cause for traditional HFSs and that hinder their application to LSGDM.
Utilizing N-HFS information to develop a consensus model with a feedback mechanism not only expands the application scope of N-HFS but also enhances the evaluators’ acceptance of recommendations, thereby advancing the field of LSGDM.
A consensus feedback mechanism that provides evaluators with detailed recommendations and considers their acceptance of the suggestions has been introduced. This mechanism helps to raise feedback acceptance, thereby facilitating the consensus process.
This framework can be applied to various real-world decision-making fields where large-scale and hesitant fuzzy decision making is prevalent. For instance, in healthcare decision making, the framework assists in aggregating expert evaluations for diagnosis and treatment planning, especially in cases involving uncertainty and hesitation. In supplier selection within supply chain management, it provides a structured methodology for assessing multiple criteria, ensuring more informed and strategic procurement decisions. These applications demonstrate the adaptability of our approach in handling complex decision-making problems across different domains.
A possible future work could explore the functionality and effectiveness of minimum cost considerations within the CRP framework and incorporate Kolmogorov–Smirnov distance. Another intriguing research direction would be to investigate the impact of uncooperative behavior among evaluators who may refuse to adhere to the recommendations of moderators.