Article

Eliciting Correlated Weights for Multi-Criteria Group Decision Making with Generalized Canonical Correlation Analysis

by Francisco J. dos Santos * and André L. V. Coelho *
Graduate Program in Applied Informatics, University of Fortaleza, Fortaleza 60811-905, CE, Brazil
* Authors to whom correspondence should be addressed.
Symmetry 2020, 12(10), 1612; https://doi.org/10.3390/sym12101612
Submission received: 7 September 2020 / Revised: 22 September 2020 / Accepted: 23 September 2020 / Published: 28 September 2020

Abstract
The proper solution of a multi-criteria group decision making (MCGDM) problem usually involves a series of critical issues that are to be dealt with, among which two are noteworthy, namely how to assign weights to the (possibly distinct) judgment criteria used by the different decision makers (DMs) and how to reach a satisfactory level of agreement between their individual decisions. Here we present a novel methodology to address these issues in an integrated and robust way, referred to as the canonical multi-criteria group decision making (CMCGDM) approach. CMCGDM is based on a generalized version of canonical correlation analysis (GCCA), which is employed for simultaneously computing the criteria weights that are associated with all DMs. Because the elicited weights maximize the linear correlation between all criteria at once, it is expected that the consensus between the DMs emerges in a more natural way, not necessitating the creation and combination of separate rankings for the different groups of criteria. CMCGDM also makes use of an extended version of TOPSIS, a multi-criteria technique that considers the symmetry of the distances to the positive and negative ideal solutions. The practical usefulness of the proposed approach is demonstrated through two revisited examples taken from the literature as well as other simulated cases. The achieved results reveal that CMCGDM is indeed a promising approach, being more robust to the problem of ranking irregularities than the extended version of TOPSIS when applied without GCCA.

1. Introduction

Multi-criteria group decision making (MCGDM) is usually described as the process of selecting the best alternative(s) from a given set of viable options that are based on the opinions that are provided by multiple domain experts, frequently referred to as decision makers (DMs), concerning multiple judgment criteria [1,2]. In recent years, a number of MCGDM techniques have been proposed and widely applied in many distinct fields, such as sustainable development [3,4], personnel evaluation [5], social network analysis [6], software selection [7,8], supplier selection [9,10], and economics [11].
Among others, Herrera-Viedma et al. [12] have argued that solving a group decision making (GDM) problem usually involves the carrying out of two complementary processes: a consensus reaching process (generally guided by a moderator), which refers to how to obtain, via one or more stages of negotiation, the maximum degree of agreement between the experts; and, a selection process, which achieves the final solution via the aggregation of the experts’ individual preferences over the different alternatives available. Various approaches have been developed so far to help undertaking these processes, especially the latter, in different circumstances; for instance, by addressing dynamic sets of alternatives [13] and criteria [14], changes of preferences [15] and opinions [16,17], as well as differences in the knowledge level between the accessible DMs [18,19].
Specifically concerning the modelling of the DMs’ preferences, the assignment of weights to their evaluation criteria turns out to be a crucial task, since the final decision delivered by a given MCGDM method usually depends on such weights to a large extent. However, properly calibrating criteria weights in multi-criteria decision making is a hard task, even in the single-DM setting [20]. This task becomes even more relevant (and harder to accomplish) when different criteria are adopted by different DMs. In this regard, Fan et al. [21] recently pointed out that research concerning this more complex scenario is still relatively scarce in the literature and, thus, developed a method for tackling MCGDM with different evaluation criteria sets.
Based on these considerations, this paper investigates a novel GDM approach that is aimed to address, in an integrated manner, both the elicitation of the DMs’ preferences and the consensus of their individual decisions. The approach, which is referred to as canonical multi-criteria group decision making (CMCGDM, for short), can also deal with MCGDM problems having different criteria sets for different DMs and makes use of a generalized version of canonical correlation analysis (CCA) [22] to automatically compute the values of criteria weights.
In a nutshell, the goal of CCA is to maximize the linear correlation between two sets of variables, so as to yield a novel set of canonical variates, which, in turn, may replace the original ones. This procedure involves computing the weights for both sets of original variables that result in the highest possible correlation between the canonical variates [23]. Although the standard CCA only handles two sets of variables, there are extensions that handle more, such as the generalized CCA (GCCA) version that was proposed by Kettenring [24].
CMCGDM also adopts the extended version of TOPSIS (technique for order preference by similarity to ideal solution) [25] that was proposed by Shih et al. [26], which is specific for MCGDM. The practical usefulness of CMCGDM is demonstrated by revisiting two examples, one on human resource selection [26] and the other on a machine acquisition [21]. The results achieved in these examples and other simulated cases reveal that CMCGDM is indeed a promising approach, being more robust in coping with the ranking irregularity problem [27] than the extended TOPSIS for GDM without GCCA.
In short, the main contributions of this paper are: to show that GCCA is a viable approach for eliciting weights in the MCGDM context; to show that CMCGDM is more robust in dealing with the ranking irregularity problem than the extended TOPSIS without GCCA, and that it achieves this in a straightforward way, without requiring changes to the extended TOPSIS procedure; and to reach the group consensus by reconciling the different canonical weights provided by GCCA.
The rest of this paper is organized as follows. In Section 2 and Section 3, we review the main aspects that are related to standard MCGDM and GCCA. In Section 4, we present the main steps comprising the new CMCGDM methodology and point out some of its relevant properties. In Section 5 and Section 6, we evaluate CMCGDM on the examples considered by Shih et al. [26] and Fan et al. [21], whereas, in Section 7, we compare GCCA with other well-known criteria weighting methods on several simulated cases. Finally, Section 8 concludes the paper and brings some remarks on future work.

2. Multi-Criteria Group Decision Making

Briefly stated, MCGDM refers to the process of making decisions as a group when there are multiple (but finitely many) alternative solutions available to the decision problem at hand. In addition, the group of DMs assesses the pros and cons of these alternatives by taking multiple (usually conflicting) judgment criteria into account [1,2]. Formally, an MCGDM problem with M alternatives, N criteria, and K DMs can be formulated by defining K decision matrices of the form:
$$
D^k = (x_{ij}^k)_{M \times n_k} =
\begin{array}{c}
 \\ A_1 \\ A_2 \\ \vdots \\ A_M
\end{array}
\overset{\begin{array}{cccc} c_1^k & c_2^k & \cdots & c_{n_k}^k \end{array}}
{\begin{pmatrix}
x_{11}^k & x_{12}^k & \cdots & x_{1 n_k}^k \\
x_{21}^k & x_{22}^k & \cdots & x_{2 n_k}^k \\
\vdots & \vdots & \ddots & \vdots \\
x_{M1}^k & x_{M2}^k & \cdots & x_{M n_k}^k
\end{pmatrix}},
\qquad
\mathbf{w}^k = (w_1^k, w_2^k, \ldots, w_{n_k}^k), \quad k = 1, \ldots, K,
\tag{1}
$$
where $A = \{A_1, A_2, \ldots, A_M\}$ denotes the set of feasible alternatives, $C^k = \{c_1^k, c_2^k, \ldots, c_{n_k}^k\}$ represents the $n_k$ evaluation criteria associated with the k-th DM, $x_{ij}^k$ is the performance rating of alternative $A_i$ under criterion $c_j^k$ ($j = 1, \ldots, n_k$), and $w_j^k$ stands for the weight of this criterion. Notice that $0 \le w_j^k \le 1$ and $\sum_{j=1}^{n_k} w_j^k = 1$. On the other hand, the K criteria sets may be equal, partially overlap, or be completely disjoint, in which case $\sum_k n_k = N$. Each criterion can be classified as either a benefit (“the higher, the better”) or a cost (“the lower, the better”) criterion.
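As an illustration of this formulation, the sketch below shows one plausible in-memory representation of the K decision matrices and weight vectors, assuming NumPy; the alternatives, criteria, and values are hypothetical and serve illustration only.

```python
import numpy as np

# Hypothetical MCGDM instance: M = 3 alternatives, K = 2 DMs with
# different criteria sets (n_1 = 3, n_2 = 2); all values are made up.
D = [
    np.array([[7.0, 5.0, 9.0],    # DM 1: scores of A1..A3 on its three criteria
              [6.0, 8.0, 4.0],
              [9.0, 6.0, 7.0]]),
    np.array([[120.0, 0.3],       # DM 2: scores of A1..A3 on two other criteria
              [ 95.0, 0.5],
              [110.0, 0.2]]),
]

# One weight vector per DM, each summing to one, as required by Equation (1).
w = [np.array([0.5, 0.3, 0.2]), np.array([0.7, 0.3])]
assert all(abs(wk.sum() - 1.0) < 1e-9 for wk in w)
```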
Several multi-criteria methods have been proposed or extended in order to cope with different variants of the MCGDM problem. In the sequel, we focus on the extended version of TOPSIS that was proposed by Shih et al. [26], which was adopted in the development of CMCGDM. Afterwards, we overview some of the most well-known methods used to compute criteria weights in TOPSIS, formalizing these methods in the context of GDM.

2.1. Extended TOPSIS for GDM

TOPSIS [25] is a well-known multi-criteria method, which is based on the notion that the chosen alternative should have the closest distance to a positive ideal solution (PIS) and the farthest distance to a negative ideal solution (NIS). A crucial assumption of this compensatory method is that the decision criteria are either monotonically increasing or decreasing [28]. The variant that was conceived by Shih et al. [26] keeps this assumption, but extends the scope of the method in order to acknowledge the existence of several DMs.
The main steps of the extended TOPSIS for GDM are:
Step 1: 
compose the decision matrix $D^k$ for the k-th DM (refer to Equation (1)).
Step 2: 
compute the normalized decision matrix $R^k = [r_{ij}^k]$, $i = 1, \ldots, M$, $j = 1, \ldots, n_k$, for the k-th DM. For this purpose, the vector normalization scheme is usually employed [29]:
$$r_{ij}^k = \frac{x_{ij}^k}{\sqrt{\sum_{i=1}^{M} (x_{ij}^k)^2}}, \quad i = 1, \ldots, M; \; j = 1, \ldots, n_k. \tag{2}$$
Step 3: 
calculate the positive ideal solution $V^{k+}$ and the negative ideal solution $V^{k-}$ for the k-th DM, as follows:
$$V^{k+} = \{v_1^{k+}, \ldots, v_{n_k}^{k+}\} = \left\{\left(\max_i r_{ij}^k \mid j \in J_1\right), \left(\min_i r_{ij}^k \mid j \in J_2\right)\right\}, \tag{3}$$
$$V^{k-} = \{v_1^{k-}, \ldots, v_{n_k}^{k-}\} = \left\{\left(\min_i r_{ij}^k \mid j \in J_1\right), \left(\max_i r_{ij}^k \mid j \in J_2\right)\right\}, \tag{4}$$
where $J_1$ and $J_2$ are the sets of benefit and cost criteria, respectively.
Step 4: 
assign a weight vector $\mathbf{w}^k = (w_1^k, w_2^k, \ldots, w_{n_k}^k)$ to the criteria set of the k-th DM, such that $\sum_{j=1}^{n_k} w_j^k = 1$.
Step 5: 
compute the overall separation of a given alternative from the set of positive and negative ideal solutions. Here, two substeps should be performed. The first considers the PIS and NIS that are associated with each DM separately, while the second aggregates the measurements for the whole group.
Substep 5a: 
compute the distances $S_i^{k+}$ and $S_i^{k-}$ of the i-th alternative, $i = 1, \ldots, M$, to the pair of PIS and NIS associated with the k-th DM, $k = 1, \ldots, K$. Here, we have considered the Euclidean distance:
$$S_i^{k+} = \sqrt{\sum_{j=1}^{n_k} w_j^k \left(r_{ij}^k - v_j^{k+}\right)^2}, \quad i = 1, \ldots, M, \tag{5}$$
$$S_i^{k-} = \sqrt{\sum_{j=1}^{n_k} w_j^k \left(r_{ij}^k - v_j^{k-}\right)^2}, \quad i = 1, \ldots, M, \tag{6}$$
where $r_{ij}^k$, $v_j^{k+}$, and $v_j^{k-}$ are defined in Equations (2), (3), and (4), respectively.
Substep 5b: 
compute the overall separation measures $\overline{S_i^{+}}$ and $\overline{S_i^{-}}$ for each alternative. For this purpose, one should calculate the geometric mean over the K values of $S_i^{k+}$ (5) and $S_i^{k-}$ (6) to yield:
$$\overline{S_i^{+}} = \left(\prod_{k=1}^{K} S_i^{k+}\right)^{1/K}, \quad i = 1, \ldots, M, \tag{7}$$
and
$$\overline{S_i^{-}} = \left(\prod_{k=1}^{K} S_i^{k-}\right)^{1/K}, \quad i = 1, \ldots, M. \tag{8}$$
Step 6: 
compute $\overline{C_i^{*}}$, the overall relative closeness of the i-th alternative $A_i$, $i = 1, \ldots, M$, to the K positive ideal solutions, which can be expressed as:
$$\overline{C_i^{*}} = \frac{\overline{S_i^{-}}}{\overline{S_i^{+}} + \overline{S_i^{-}}}, \quad i = 1, \ldots, M, \tag{9}$$
where $0 \le \overline{C_i^{*}} \le 1$, and $\overline{S_i^{+}}$ and $\overline{S_i^{-}}$ are defined, respectively, in Equations (7) and (8). Hence, the alternatives can be ranked from best (higher closeness values) to worst.
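The procedure above maps directly onto array operations. The sketch below is a minimal NumPy rendering of Steps 2–6, assuming one decision matrix, one normalized weight vector, and one boolean benefit mask per DM; the function and argument names are illustrative, not taken from the paper.

```python
import numpy as np

def extended_topsis(D, w, benefit_masks):
    """Minimal sketch of the extended TOPSIS for GDM described above.

    D[k] is the M x n_k decision matrix of the k-th DM, w[k] its normalized
    weight vector, and benefit_masks[k] a boolean array (True = benefit
    criterion, False = cost criterion).
    """
    K, M = len(D), D[0].shape[0]
    S_plus, S_minus = np.ones(M), np.ones(M)
    for Dk, wk, is_benefit in zip(D, w, benefit_masks):
        # Step 2: vector normalization, Equation (2).
        R = Dk / np.sqrt((Dk ** 2).sum(axis=0))
        # Step 3: positive/negative ideal solutions, Equations (3)-(4).
        v_plus = np.where(is_benefit, R.max(axis=0), R.min(axis=0))
        v_minus = np.where(is_benefit, R.min(axis=0), R.max(axis=0))
        # Substep 5a: weighted Euclidean separations, Equations (5)-(6).
        s_plus = np.sqrt((wk * (R - v_plus) ** 2).sum(axis=1))
        s_minus = np.sqrt((wk * (R - v_minus) ** 2).sum(axis=1))
        # Substep 5b: accumulate the geometric means, Equations (7)-(8).
        S_plus *= s_plus ** (1.0 / K)
        S_minus *= s_minus ** (1.0 / K)
    # Step 6: overall relative closeness, Equation (9); higher is better.
    return S_minus / (S_plus + S_minus)
```

The alternatives would then be ranked by sorting the returned closeness values in descending order, e.g., with np.argsort(-closeness).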

2.2. Objective Methods for Criteria Weighting

In the literature, different methods have been proposed in order to ascertain the relevance of the different decision criteria [30]. In this section, we review the following: Entropy [31], Statistical Variance, Standard Deviation [32], CRITIC [33], DEMATEL [34], and DEMATEL-based ANP [35]. However, other promising methods could be considered, such as CSW-DEA [36,37,38], nonlinear programming methods [39], and swing-weighting [40], just to name a few.
Weighting methods, in particular, attach cardinal or ordinal values directly to the criteria, so as to reflect their relative importance. Wang et al. [41] classify the weighting methods into three categories: subjective, objective, and combination methods. The first category determines the weights based on the subjective preferences of the DMs, whereas the second makes use of mathematical models without any consideration of the DMs’ preferences. Combination methods are hybrids of the former two, including, for instance, multiplicative and additive synthesis. In the following, we review some of the most well-known objective methods, since the use of GCCA for computing the criteria weights in the context of CMCGDM can itself be regarded as an objective method.

2.2.1. Entropy Method

In short, entropy is a measure of uncertainty in information, as formulated in probability theory [31]. Entropy also means that some information cannot be recovered or is lost, like noise in a message. Accordingly, the higher the entropy E of a system, the lower its information content I. In fact, $E = 1 - I$, with $E \in [0, 1]$.
When applied to multi-criteria decision making, this concept can be used to quantitatively measure the capacity of a given criterion to discriminate the quality of different alternatives. High discrimination corresponds to low entropy and, thus, better information content. Conversely, low discrimination corresponds to high entropy and worse information. One particular advantage of using entropy as a criterion weight measure is that it weakens the bad effects from abnormal values (outliers), which makes the result of evaluation more accurate and reasonable.
When this procedure is applied to all criteria at once, it is possible to rank them according to their significance: the higher the information content I of a criterion, the more relevant it is comparatively. It is worth noticing that these weights can be used for the assessment of alternatives, since they are built based on the dispersion (discrimination) of the alternatives’ performance values. Consequently, they are totally objective (unbiased), not depending upon the specific DM doing the analysis.
When considering the GDM context, in order to calculate the criteria weights $w_j^k$ via the entropy method, the decision matrix $D^k$ (1) should first be normalized to yield the probabilities $p_{ij}^k$, as given by
$$p_{ij}^k = \frac{x_{ij}^k}{\sum_{i=1}^{M} x_{ij}^k}, \quad i = 1, \ldots, M, \; j = 1, \ldots, n_k, \tag{10}$$
and then the following equations should be calculated:
$$E_j^k = -\sum_{i=1}^{M} p_{ij}^k \log\left(p_{ij}^k\right) / \log(M), \quad j = 1, \ldots, n_k, \tag{11}$$
$$w_j^k = \frac{1 - E_j^k}{\sum_{j=1}^{n_k} \left(1 - E_j^k\right)}, \quad j = 1, \ldots, n_k, \tag{12}$$
where $\log(\cdot)$ denotes the logarithm function, and $E_j^k$ and $w_j^k$ are, respectively, the entropy and weight values associated with the j-th criterion of the k-th DM. As discussed above, the higher the value of $E_j^k$, the lower the value of $w_j^k$.
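A compact sketch of Equations (10)–(12) follows, assuming strictly positive score values so that the logarithms are well defined; the function name is illustrative.

```python
import numpy as np

def entropy_weights(Dk):
    """Sketch of Equations (10)-(12) for one M x n_k decision matrix,
    assuming strictly positive scores so the logarithms are defined."""
    M = Dk.shape[0]
    P = Dk / Dk.sum(axis=0)                        # Equation (10)
    E = -(P * np.log(P)).sum(axis=0) / np.log(M)   # Equation (11)
    return (1.0 - E) / (1.0 - E).sum()             # Equation (12)
```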

2.2.2. Statistical Variance Method

In statistics, variance is defined as the expectation of the squared deviation of a random variable from its mean. In other words, it measures how far a set of quantitative observations is spread out from its average value. Studying variance allows one to quantify how much variability there is in a probability distribution. If the outcomes of the distribution vary wildly, then it will have a large variance. Otherwise, if the variance is null (it is always non-negative), then the random variable takes a single constant value, which is exactly its expected value.
Based on the above considerations, statistical variance can be used in multi-criteria decision making in order to assess the capability of the judgment criteria to discriminate between the different alternatives available. The larger the variance of a given criterion (viewed as a random variable), the more dispersed are the performance values of the alternatives, which allows the DM to have better judgement on their good/bad characteristics. In this way, the higher the variance, the higher should be the relative weight of a criterion.
In this method, the statistical variance of information is first calculated for the j-th criterion of the k-th DM based on the original score values:
$$V_j^k = \frac{1}{M-1} \sum_{i=1}^{M} \left(x_{ij}^k - \overline{x_j^k}\right)^2, \tag{13}$$
where $V_j^k$ is the statistical variance of the j-th criterion of the k-th DM and $\overline{x_j^k}$ is the average of the original score values $x_{ij}^k$.
Subsequently, the weights are obtained via a simple normalization, so that they lie in $[0, 1]$:
$$w_j^k = \frac{V_j^k}{\sum_{j=1}^{n_k} V_j^k}, \quad j = 1, \ldots, n_k. \tag{14}$$

2.2.3. Standard Deviation Method

Because the standard deviation (SD) is defined as the positive square root of the statistical variance, it can also be regarded as a measure of dispersion around the mean of a data set. However, calculating the variance involves squaring deviations, so it does not have the same unit of measurement as the original observations. This negative feature is not shared by SD, whose values are in the same unit of the original scale. On the other hand, both variance and SD can be greatly affected if the mean gives a poor measure of central tendency, which can happen due to the presence of outliers. A single outlier can raise the standard deviation and, thus, distort the picture of spread.
In the context of multi-criteria decision making, the SD measure can be used to assign a small weight to a criterion if it shows similar values across the alternatives; otherwise, the criteria with larger deviations should be assigned the larger weights. If all available alternatives score almost equally with respect to a given criterion, then such a criterion will be regarded as unimportant by most experts and, thus, could be removed from the analysis.
Formally, the SD method determines the weights of the criteria in terms of their SDs, according to the following equations [32]:
$$\sigma_j^k = \sqrt{\frac{\sum_{i=1}^{M} \left(x_{ij}^k - \overline{x_j^k}\right)^2}{M-1}}, \quad j = 1, \ldots, n_k, \tag{15}$$
$$w_j^k = \frac{\sigma_j^k}{\sum_{j=1}^{n_k} \sigma_j^k}, \quad j = 1, \ldots, n_k, \tag{16}$$
where $\overline{x_j^k}$ and $\sigma_j^k$ are, respectively, the mean and SD values associated with the j-th criterion of the k-th DM.
Although the computation of variance and SD is similar, their differences are rather significant. By employing squared deviations, variance gives more weight to those criteria whose performance values are more spread around their average. Besides, the sum of square root values used in the denominator of Equation (16) yields a different normalization factor than the one adopted in Equation (14), which, in turn, is based on a sum of squared values. As a result, the rankings of alternatives that are produced by these measures need not be the same. In any case, both statistical measures are sensitive to a wide variation between the measurement scales of the different criteria. Of course, such a drawback is more aggravated in variance, due to the squared deviations. Entropy, in contrast, is robust to criteria scaling.
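As a quick sketch of Equations (13)–(16), the two weighting rules differ only in which statistic is normalized (variance vs. standard deviation), which is precisely what produces the different normalization factors discussed above; the function names are illustrative.

```python
import numpy as np

def variance_weights(Dk):
    """Sketch of Equations (13)-(14): weights proportional to sample variance."""
    V = Dk.var(axis=0, ddof=1)
    return V / V.sum()

def sd_weights(Dk):
    """Sketch of Equations (15)-(16): weights proportional to standard deviation."""
    s = Dk.std(axis=0, ddof=1)
    return s / s.sum()
```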

2.2.4. Criteria Importance through Inter-Criteria Correlation

The CRITIC (criteria importance through intercriteria correlation) method, as proposed by Diakoulaki et al. [33], uses correlation analysis to detect contrasts and dependencies between the criteria. Contrary to the other methods discussed so far, CRITIC was specifically conceived for the multi-criteria decision making domain, performing a detailed investigation of the decision matrix for extracting judicious information that is available in the evaluation criteria as well as their mutual/contrastive relationships. According to this method, objective weights are derived in order to quantify the intrinsic information of each evaluation criterion, while using both its standard deviation and its correlation (as calculated by Pearson correlation) with the other criteria. This way, both contrast and conflict intensity contained in the structure of the decision problem are captured by this method.
Consider the decision matrix of the k-th DM, as given in Equation (1). In order to calculate each weight $w_j^k$, the following symbols are used: $r_{ij}^k$ is the normalized performance measure of the i-th alternative with respect to the j-th criterion, $c_j^k$ denotes the quantity of contrastive information contained in the j-th criterion, $\sigma_j^k$ stands for the standard deviation of the j-th criterion, and $\rho_{jj'}^k$ denotes the value of the Pearson correlation coefficient between the j-th and j′-th criteria. Based on these notations, the steps of the CRITIC method are given as follows [32]:
Step 1: 
the score values associated with benefit/cost criteria are first normalized using Equations (17) and (18), respectively.
$$r_{ij}^k = \frac{x_{ij}^k - \min_i(x_{ij}^k)}{\max_i(x_{ij}^k) - \min_i(x_{ij}^k)}, \quad i = 1, \ldots, M; \; j = 1, \ldots, n_k, \quad \text{(benefit criteria)} \tag{17}$$
$$r_{ij}^k = \frac{\max_i(x_{ij}^k) - x_{ij}^k}{\max_i(x_{ij}^k) - \min_i(x_{ij}^k)}, \quad i = 1, \ldots, M; \; j = 1, \ldots, n_k, \quad \text{(cost criteria)} \tag{18}$$
Step 2: 
the correlation between each pair of criteria is calculated via Equation (19).
$$\rho_{jj'}^k = \frac{\sum_{i=1}^{M}\left(r_{ij}^k - \overline{r_j^k}\right)\left(r_{ij'}^k - \overline{r_{j'}^k}\right)}{\sqrt{\sum_{i=1}^{M}\left(r_{ij}^k - \overline{r_j^k}\right)^2} \cdot \sqrt{\sum_{i=1}^{M}\left(r_{ij'}^k - \overline{r_{j'}^k}\right)^2}}, \quad j, j' = 1, \ldots, n_k \tag{19}$$
Step 3: 
finally, Equations (20) and (21) are employed for producing the weights.
$$w_j^k = \frac{c_j^k}{\sum_{j'=1}^{n_k} c_{j'}^k}, \quad j = 1, \ldots, n_k, \tag{20}$$
$$c_j^k = \sigma_j^k \sum_{j'=1}^{n_k} \left(1 - \rho_{jj'}^k\right), \quad j = 1, \ldots, n_k. \tag{21}$$
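A minimal sketch of the CRITIC steps above, assuming each criterion takes at least two distinct values (so the min–max normalization is well defined) and using the standard deviation of the normalized scores, which is one common reading of Equation (21); the function name is illustrative.

```python
import numpy as np

def critic_weights(Dk, is_benefit):
    """Sketch of CRITIC (Equations (17)-(21)) for one M x n_k decision matrix;
    is_benefit is a boolean mask of length n_k."""
    lo, hi = Dk.min(axis=0), Dk.max(axis=0)
    # Step 1: min-max normalization, Equations (17)-(18).
    R = np.where(is_benefit, (Dk - lo) / (hi - lo), (hi - Dk) / (hi - lo))
    # Step 2: Pearson correlation between criteria, Equation (19).
    rho = np.corrcoef(R, rowvar=False)
    # Step 3: contrastive information and final weights, Equations (20)-(21);
    # the SD of the normalized scores is used here (an assumed reading).
    c = R.std(axis=0, ddof=1) * (1.0 - rho).sum(axis=1)
    return c / c.sum()
```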

2.3. DEMATEL

Decision Making Trial and Evaluation Laboratory (DEMATEL) was elaborated as a procedure for solving problems of identifying cause-and-effect relationships [34]. With time, this method has been well adapted for use in multi-criteria decision making. This way, some authors discuss the use of DEMATEL in order to determine the significance of the criteria [42,43,44]. This section describes the approach that was proposed by Kobryń [42], whose formulation was adapted for several DMs.
First, the direct-influence matrix is created for each DM, which is a square matrix whose size is equal to the number of criteria. For this purpose, we have adopted a four-degree scale to express the influence of the i-th criterion on the j-th criterion, where: 0—no influence, 1—medium influence, …, 4—maximum influence [45]. Besides, in the direct-influence matrix $B^k$ of the k-th DM, $k = 1, \ldots, K$, the elements on the main diagonal are null, while the non-zero elements $b_{ij}^k$ ($i \ne j$) reflect the impact of the i-th criterion on the j-th criterion:
$$B^k = \begin{pmatrix} 0 & b_{12}^k & \cdots & b_{1 n_k}^k \\ b_{21}^k & 0 & \cdots & b_{2 n_k}^k \\ \vdots & \vdots & \ddots & \vdots \\ b_{n_k 1}^k & b_{n_k 2}^k & \cdots & 0 \end{pmatrix}. \tag{22}$$
Matrix (22) is then normalized, as follows:
$$\hat{B}^k = B^k \cdot \frac{1}{\max_i \sum_{j=1}^{n_k} b_{ij}^k}. \tag{23}$$
From (23), we calculate the total-influence matrix $T^k$, as described by
$$T^k = \hat{B}^k \left(I - \hat{B}^k\right)^{-1}, \tag{24}$$
where $I$ is the $n_k \times n_k$ identity matrix.
Subsequently, two vectors of indicators are determined based on $T^k$ to express a relation between the criteria, covering both direct and indirect influences. They are defined as the importance indicator vector ($\mathbf{t}^{k+}$) and the relation indicator vector ($\mathbf{t}^{k-}$), whose components are given as follows:
$$t_i^{k+} = \sum_{j=1}^{n_k} t_{ij}^k + \sum_{j=1}^{n_k} t_{ji}^k, \tag{25}$$
$$t_i^{k-} = \sum_{j=1}^{n_k} t_{ij}^k - \sum_{j=1}^{n_k} t_{ji}^k. \tag{26}$$
From Equations (25) and (26), the weights are determined as proportional to the average value ($t_i^{k,\mathrm{av}}$) of the appropriate pair of indicators $t_i^{k+}$ and $t_i^{k-}$, given as follows:
$$t_i^{k,\mathrm{av}} = \frac{1}{2}\left(t_i^{k+} + t_i^{k-}\right). \tag{27}$$
Next, the equation below can be used to calculate each normalized weight:
$$w_i^k = \frac{t_i^{k,\mathrm{av}}}{\sum_{j=1}^{n_k} t_j^{k,\mathrm{av}}}. \tag{28}$$
It is necessary to correct the weight values calculated from Equation (28), as no criterion can be assigned a zero weight. The key issue is to determine the correction value $\delta$, and the final decision should belong to the decision maker [42]. The value $\delta$ should be as small as possible; Kobryń [42] suggests setting $\delta \le \min_i \{ w_i^k : w_i^k > 0 \}$. The corrected and normalized values $w_i^{k,\mathrm{cor}}$ and $w_i^{k,\mathrm{norm}}$ are given as follows:
$$w_i^{k,\mathrm{cor}} = w_i^k + \delta, \tag{29}$$
$$w_i^{k,\mathrm{norm}} = \frac{w_i^{k,\mathrm{cor}}}{\sum_{j=1}^{n_k} w_j^{k,\mathrm{cor}}}. \tag{30}$$
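A minimal sketch of Equations (23)–(30) for one DM's direct-influence matrix follows; the default correction value delta is an assumed placeholder, since the method leaves its exact choice to the decision maker.

```python
import numpy as np

def dematel_weights(B, delta=1e-3):
    """Sketch of Equations (23)-(30) for one DM's direct-influence matrix B
    (square, zero diagonal); delta is an assumed default for the correction."""
    n = B.shape[0]
    B_hat = B / B.sum(axis=1).max()                   # Equation (23)
    T = B_hat @ np.linalg.inv(np.eye(n) - B_hat)      # Equation (24)
    t_plus = T.sum(axis=1) + T.sum(axis=0)            # Equation (25)
    t_minus = T.sum(axis=1) - T.sum(axis=0)           # Equation (26)
    t_avg = 0.5 * (t_plus + t_minus)                  # Equation (27)
    w = t_avg / t_avg.sum()                           # Equation (28)
    w_cor = w + delta                                 # Equation (29)
    return w_cor / w_cor.sum()                        # Equation (30)
```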

2.4. DEMATEL-Based ANP (DANP)

The DANP method was proposed by Yang et al. [35]. It consists in forming a matrix $S^k$, similar to the one used in the Analytic Network Process (ANP) [46], on the basis of the modified total-influence matrix $T^k$ (24), representing the outcome of the application of DEMATEL [47]. The components of $S^k$ are given as:
$$s_{ij}^k = \frac{t_{ji}^k}{\sum_{w=1}^{n_k} t_{jw}^k}, \quad i, j = 1, 2, \ldots, n_k, \tag{31}$$
where $n_k$ is the number of interrelated criteria adopted by the k-th DM, $t_{ji}^k$ denotes the element of the matrix $T^k$ depicting the total influence of the j-th criterion on the i-th criterion, and $s_{ij}^k$ represents a component of the matrix $S^k$.
Next, the matrix $S_{\lim}^k$ is determined based on the matrix $S^k$, while using the procedure characteristic of ANP, as follows:
$$S_{\lim}^k = \lim_{w \to \infty} \left(S^k\right)^w. \tag{32}$$
The matrix $S_{\lim}^k$ consists of $n_k$ identical columns. The elements of the individual columns depict the normalized weights of the criteria.
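A sketch of Equations (31)–(32) is given below, approximating the limiting matrix by repeated squaring until the columns stabilize; the stopping parameters are assumptions, and convergence presumes the limit in Equation (32) exists.

```python
import numpy as np

def danp_weights(T, tol=1e-10, max_iter=100):
    """Sketch of Equations (31)-(32): build S from the total-influence matrix T
    and approximate its limiting power by repeated squaring."""
    S = T.T / T.sum(axis=1)            # Equation (31): s_ij = t_ji / sum_w t_jw
    for _ in range(max_iter):
        S_next = S @ S                 # Equation (32): approximates lim S^w
        if np.abs(S_next - S).max() < tol:
            break
        S = S_next
    return S[:, 0]                     # the identical columns hold the weights
```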

3. Generalized Canonical Correlation Analysis

In multivariate statistical analysis, data usually consist of multiple variables measured on a set of observations [48]. In this context, CCA comprises a family of statistical techniques that model the linear relationships between two (or more) sets of variables [22,49]. More specifically, in CCA, the variables of an observation can be partitioned into two or more sets, with each one regarded as a view of the data [22].
Like principal component analysis (PCA) and linear discriminant analysis (LDA) [48,49], CCA can reduce the dimensionality of the original variables, since only a few factor pairs are normally needed to represent the relevant information. Besides, CCA is invariant to any affine transformation of the input variables [50]. Another appealing property is that CCA does not assume a priori the direction of the relationship between the variable sets. This is in contrast to regression methods, which have to designate an independent and a dependent data set [51]. Finally, CCA characterizes relationships between data sets in an interpretable way, a property that is not displayed by other common correlation methods, which simply quantify the similarity between data sets [51].
Formally, given two zero-mean data sets $X = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^{d \times n}$ and $Y = (y_1, y_2, \ldots, y_m) \in \mathbb{R}^{d \times m}$, with $x_i$ and $y_i$ denoting d-dimensional column vectors, standard CCA finds a canonical coordinate space that maximizes correlations between the projections of the two variable sets onto that space [51]. Associated with the j-th dimension of this new space, there is a pair of projection weight vectors, $\mathbf{a}_j = (a_{1j}, a_{2j}, \ldots, a_{nj})$ and $\mathbf{b}_j = (b_{1j}, b_{2j}, \ldots, b_{mj})$, named canonical weights. The resulting projections of the variable sets $X$ and $Y$ onto the j-th dimension of the canonical space comprise a pair of d-dimensional vectors, $\mathbf{u}_j = \langle \mathbf{a}_j, X \rangle$ and $\mathbf{v}_j = \langle \mathbf{b}_j, Y \rangle$, which are called canonical variates. Here, $\langle \cdot, \cdot \rangle$ denotes the inner (projection) product operator. CCA maximizes the linear correlations between each pair of canonical variates, as given by
$$\rho_j = \max \frac{\langle \mathbf{u}_j, \mathbf{v}_j \rangle}{\|\mathbf{u}_j\| \cdot \|\mathbf{v}_j\|}, \tag{33}$$
where $\|\mathbf{u}_j\|$ denotes the norm of vector $\mathbf{u}_j$.
More precisely, $r = \min(n, m)$ pairs of projection vectors are generated, so that the correlation $\rho_1$ between $\mathbf{u}_1$ and $\mathbf{v}_1$ is maximum; the correlation $\rho_2$ between $\mathbf{u}_2$ and $\mathbf{v}_2$ is maximum, subject to the constraint that the canonical variates $\mathbf{u}_2$ and $\mathbf{v}_2$ are orthogonal to $\mathbf{u}_1$ and $\mathbf{v}_1$, respectively; and so on and so forth, up to the point that the correlation $\rho_r$ between $\mathbf{u}_r$ and $\mathbf{v}_r$ is maximum, provided that they correlate with neither $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_{r-1}$ nor $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_{r-1}$, respectively. The symmetry of the correlation matrices guarantees orthogonality.
To accomplish this, CCA solves the following optimization problem [51]:
$$\rho = \max_{\mathbf{a}, \mathbf{b}} \frac{\mathbf{a}^T C_{XY} \mathbf{b}}{\sqrt{\mathbf{a}^T C_{XX} \mathbf{a}} \cdot \sqrt{\mathbf{b}^T C_{YY} \mathbf{b}}}, \tag{34}$$
where $\rho$ represents the maximum value of the canonical correlation vector, $C_{XY}$ denotes the sample cross-covariance between the two variable sets, $X$ and $Y$, and $C_{XX}$ and $C_{YY}$ are their autocovariances.
The objective function (34) has infinitely many solutions if no restriction is imposed on the weights $\mathbf{a}$ and $\mathbf{b}$. However, the size of the canonical weights can be constrained, such that $\mathbf{a}^T C_{XX} \mathbf{a} = 1$ and $\mathbf{b}^T C_{YY} \mathbf{b} = 1$ [49]. This leads to the following Lagrangian [22,51]:
$$L(\lambda, \mathbf{a}, \mathbf{b}) = \mathbf{a}^T C_{XY} \mathbf{b} - \frac{\lambda_X}{2}\left(\mathbf{a}^T C_{XX} \mathbf{a} - 1\right) - \frac{\lambda_Y}{2}\left(\mathbf{b}^T C_{YY} \mathbf{b} - 1\right), \tag{35}$$
which, in turn, can be formulated as the following generalized eigenvalue problem:
$$\begin{pmatrix} 0 & C_{XY} \\ C_{YX} & 0 \end{pmatrix} \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix} = \rho^2 \begin{pmatrix} C_{XX} & 0 \\ 0 & C_{YY} \end{pmatrix} \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix}. \tag{36}$$
Several variants of CCA have been proposed along the years, among which those that extend the correlation analysis to encompass more than two sets of variables [22]. These extensions, which are typically referred to as GCCA, aim at generating a series of components (variates) that maximize the association between the multiple variable sets. For instance, when considering the availability of three views on the data, the generalized eigenvalue problem that is defined in (36) can be simply extended, as follows [24,51]:
$$\begin{pmatrix} 0 & C_{XY} & C_{XZ} \\ C_{YX} & 0 & C_{YZ} \\ C_{ZX} & C_{ZY} & 0 \end{pmatrix} \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \\ \mathbf{c} \end{pmatrix} = \rho^2 \begin{pmatrix} C_{XX} & 0 & 0 \\ 0 & C_{YY} & 0 \\ 0 & 0 & C_{ZZ} \end{pmatrix} \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \\ \mathbf{c} \end{pmatrix}. \tag{37}$$
The fact that the sets of variables may differ significantly is an interesting property of this extended formulation. Hence, the number of variables in each set does not need to be the same. We argue that this feature is interesting for the MCGDM context, particularly in those circumstances when the DMs make use of different judgment criteria.
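To make the construction concrete, the sketch below assembles the block matrices of Equation (37) for an arbitrary number of views and solves the generalized eigenproblem with SciPy, returning the canonical weights of the leading dimension split per view. It is a plain, unregularized sketch (with a tiny assumed ridge for numerical stability); the paper itself relies on the Pyrcca package, so this is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def gcca_first_weights(views, ridge=1e-8):
    """Sketch of the generalized eigenproblem of Equation (37) for any number
    of views (matrices with the observations/alternatives as rows). Returns
    the canonical weights of the leading dimension, one vector per view."""
    Xs = [V - V.mean(axis=0) for V in views]      # column-center each view
    sizes = [X.shape[1] for X in Xs]
    offsets = np.cumsum([0] + sizes)
    n_total = offsets[-1]
    LHS = np.zeros((n_total, n_total))
    RHS = np.zeros((n_total, n_total))
    for a, Xa in enumerate(Xs):
        ra = slice(offsets[a], offsets[a + 1])
        RHS[ra, ra] = Xa.T @ Xa + ridge * np.eye(sizes[a])   # autocovariance blocks
        for b, Xb in enumerate(Xs):
            if a != b:                                       # cross-covariance blocks
                LHS[ra, offsets[b]:offsets[b + 1]] = Xa.T @ Xb
    # Generalized symmetric eigenproblem; the common covariance scaling factor
    # cancels on both sides, so plain Gram matrices are used here.
    eigvals, eigvecs = eigh(LHS, RHS)
    lead = eigvecs[:, np.argmax(eigvals)]                    # leading canonical dimension
    return [lead[offsets[k]:offsets[k + 1]] for k in range(len(Xs))]
```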

4. A Canonical Multi-Criteria Group Decision Making Approach

By drawing a parallel between GCCA and MCGDM, one can note that methods pertaining to both areas operate on numerical values arranged in two-dimensional (data/decision) matrices. While, in GCCA, there is a set of data instances (rows) represented by two or more variable sets (columns), in MCGDM one has a set of solution alternatives (rows), each assessed in accordance with two or more sets of criteria (columns). By establishing this correspondence, it is possible to make use of GCCA’s functionalities to automatically compute the values of the criteria weights used in MCGDM.
Assume that K decision matrices are available, with $D^k$ pertaining to the k-th DM, $k = 1, \ldots, K$. GCCA is directly applied to all K matrices at once, yielding the criteria weights $\mathbf{w}^k$ related to each matrix (refer to Figure 1). These weights are the canonical weights associated with the first dimension of the canonical space. To bring about the consensus decision between the DMs, CMCGDM employs the same steps of the extended TOPSIS variant proposed by Shih et al. [26] (see Section 2.1), with some minor modifications. More specifically, while the computation of $V^{k+}$ and $V^{k-}$ (Equations (3) and (4)) in the third step now takes into account the positive/negative signs of the canonical weights as calculated by GCCA (i.e., before they are normalized), in the fifth step the computation of $S_i^{k+}$ and $S_i^{k-}$ (Equations (5) and (6)) makes use of the normalized weights.
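The sketch below illustrates one plausible reading of this weight handling for a single DM; the normalization of the canonical weights (absolute values rescaled to sum to one) is an assumption, since the paper does not spell out the exact scheme.

```python
import numpy as np

def cmcgdm_weights(raw_wk):
    """Illustrative handling of one DM's raw canonical weights: the signs decide
    benefit vs cost criteria, while the normalized magnitudes (an assumed
    scheme: absolute values rescaled to sum to one) feed the separations."""
    is_benefit = raw_wk > 0
    w_norm = np.abs(raw_wk) / np.abs(raw_wk).sum()
    return is_benefit, w_norm
```

The two outputs would then play the roles of the benefit mask and the weight vector in the extended TOPSIS sketch shown after Section 2.1.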
We claim that the above methodology is useful for capturing the intrinsic relationships between the DMs’ beliefs regarding the relative performance of the different alternatives. Usually, the weights of the criteria adopted by all DMs are separately calculated for each DM (sometimes via an unrelated methodology, such as in [52]) and, then, the group consensus is achieved by somehow reconciling the different weights adopted by the different DMs. In the case of CMCGDM, because the canonical weights maximize the correlation between the different decision matrices simultaneously, it is expected that the consensus between the experts can be captured more naturally and effectively, bringing about a more reliable ranking of the available alternatives. In this case, the canonical correlation index can be interpreted as a sort of consensus indicator between the DMs’ opinions.
When considering the more complicated setting where different groups of criteria are used by different DMs, our method can work well, even when the sets of criteria are completely disjoint (that is, with no overlap). Moreover, instead of generating separate rankings of alternatives for all criteria subsets, which are then aggregated to compose the final ranking by solving a specific optimization problem, such as in the approach that was proposed by Fan et al. [21], our methodology seems to be more straightforward, not demanding intermediary rankings for different criteria sets.
Another good property of CMCGDM is that it allows for one to interpret each criterion according to the sign of its associated canonical weight, taking the role played by the other criteria as reference. While positively weighted criteria can be considered as benefit ones, those with negative weights can be regarded as cost criteria.
In order to further clarify this important property, consider a fictitious MCGDM problem that involves four DMs and eight alternatives, each of which is evaluated via five judgment criteria, shared by all DMs. Table 1 and Table 2 show the criteria weights as elicited by GCCA, either before or after normalization is applied. As one can notice, for the first and second DMs, all criteria, except the last one, should be interpreted as of the cost type, since their weights are negative. For the fourth DM, in contrast, the number of benefit criteria is larger. Therefore, the interpretation is contextualized for each DM. Although the non-normalized weights are small in magnitude, they were induced by GCCA so as to maximize the correlation between the DMs’ decision matrices (not shown in this example). After normalization is applied, the magnitude of the criteria weights is rescaled, so that the more relevant ones are more noticeable (they are highlighted in Table 2).
Another good property of CMCGDM is that GCCA is invariant to any affine transformation of the input variables (criteria values), as mentioned before. Besides, like entropy, GCCA is robust against the problem of criteria scaling. By another perspective, the use of GCCA renders our approach flexible enough to allow the dynamic (re)calibration of the criteria weights once the sets of alternatives or criteria change over time. GCCA could be used without modification, even when the number of DMs is allowed to change. However, this important property will not be further explored in this paper and it should be investigated in future work.
We argue that these fine properties are not shared by other popular criteria weighting methods, such as those that are reviewed in Section 2.2. In fact, these methods were not conceived to elicit the weights of different groups of criteria based on the intrinsic relations between the DMs’ preferences (decision matrices). Moreover, as we will show in Section 7, the adopted MCGDM method becomes more resilient to the problem of ranking irregularity [27] when using GCCA in place of the aforementioned criteria weighting methods.

5. Example on Human Resource Selection

To demonstrate the utility of the proposed CMCGDM approach (our implementation uses the Python package Pyrcca [51] for GCCA, which is hosted in http://github.com/gallantlab/pyrcca), we first present in this section the same example considered by Shih et al. [26], which is about the recruitment of an on-line manager by a firm. Subsequently, we compare the performance displayed by the CMCGDM approach with that delivered by the extended TOPSIS version, as originally proposed by Shih et al. [26] (that is, without using GCCA to generate the criteria weights). This is done by considering how resilient each method is to the ranking irregularity problem.
According to the example, the human resource department of the firm coordinates some knowledge tests (namely, language, professional, and safety rule tests), skill tests (namely, professional and computer tests), and interviews (namely, panel and one-on-one interviews) with the candidates. In all, 17 qualified candidates are on the list, whereas four recruiters (DMs) are responsible for conducting the selection. The decision matrix that was used for the decision process is split in Table 3 and Table 4, according to the type of judgment criteria (objective vs subjective). Notice that the K = 4 criteria sets are the same for all DMs, and all criteria are regarded as of the benefit type. Moreover, all of the DMs share the same score values for the objective criteria, only changing their assessment on the two subjective criteria. In addition, the normalized criteria weights elicited by the DMs themselves (as originally used in [26]) are shown in Table 5, whereas the canonical weights that are generated by GCCA (normalized or not) are displayed in Table 6.
According to Wang and Triantaphyllou [27], an intriguing problem that may happen with different decision-making methods is that of generating disparate outcomes (rankings) when submitted to the same decision problem instance. Consequently, it is natural to raise the question of how to properly assess the performance of such methods. Because it is practically impossible to know which is the best alternative solution for a given decision problem, in [53] some tests capturing different ranking irregularities are discussed as a way for assessing the performance of multi-criteria methods in general:
Test #1: 
the best alternative selected should not alter when a non-optimal alternative is added or removed from the problem (assuming that the relative importance of each criterion remains unchanged) [54].
Test #2: 
the best alternative selected should not change if a non-optimal alternative is replaced by a worse one [27,55].
Test #3: 
the final ranking of the alternatives should not violate the transitivity property if a non-optimal alternative is added to (or removed from) the problem [27,55].
Table 7 brings the ranking of the alternatives delivered by the extended TOPSIS, as reported in [26], as well as the results of the application of the three ranking irregularity tests that are described above. Table 8 follows the same layout, but it relates to the extended TOPSIS with criteria weights estimated by GCCA (i.e., CMCGDM approach). In both tables, the second and third columns show the new rankings of the alternatives (and their associated scores) that result when an irrelevant alternative is added or removed, respectively, to the alternative set, whereas the last column exhibits the ranking that results from the replacement of a non-optimal alternative by a worse one. Those cases where the transitivity property is violated (Test #3) are indicated by shadowed alternatives.
The extended TOPSIS, using the weights given in Table 5, passed Tests #1 and #2, but failed to respect the transitivity requirement, since there is a ranking position change between alternatives A14 and A17 (A4 and A7) when an irrelevant alternative (namely, A1) is duplicated (removed) during the application of Test #1, as one can readily observe from Table 7. On the other hand, the proposed CMCGDM approach passed all three ranking irregularity tests without generating any ranking inconsistencies.

6. Example on Machine Acquisition

In this section, we conduct the same analysis as before on a second example, taken from [21]. In this example, different criteria sets are associated with different DMs. The decision problem relates to the acquisition of a new machine by a given manufacturing company. There are seven products (alternatives) under analysis by the manufacturing department (DM #1) and the financial department (DM #2). The last alternative was introduced by us in order to make the decision problem a bit more complex. The criteria concerned by DM #1 include $C_1$: positioning accuracy (mm), $C_2$: maximum load (kg), $C_3$: mean time to failure (h), $C_4$: degree of standardization of parts, $C_5$: reliable service life (h), and $C_6$: delivery time (month). The criteria concerned by DM #2 include $C_7$: price ($) and $C_8$: down payment ratio (percent), as well as $C_5$ and $C_6$.
The second, sixth, seventh, and eighth are cost criteria, while the others are benefit criteria. The weights of the criteria that are concerned by DM #1 are ( 0.3 , 0.2 , 0.1 , 0.05 , 0.2 , 0.15 , 0 , 0 ) , whereas those of the criteria concerned by DM #2 are ( 0 , 0 , 0 , 0 , 0.25 , 0.15 , 0.45 , 0.15 ) . The example, as proposed by Fan et al. [21], assumes that such weights were somehow calculated by the DMs themselves.
Table 9 and Table 10 bring the decision matrices associated with each DM, respectively, whereas Table 11 and Table 12 bring the results of the application of the three ranking irregularity tests described in the previous section. The layouts of these tables are the same as those of Table 7 and Table 8. As before, the extended TOPSIS could not satisfy the transitivity requirement when a mid-ranked alternative (namely, A2) is duplicated. CMCGDM, in turn, produced rankings showing no inconsistencies according to all ranking irregularity tests.

7. Simulated Cases

In the previous sections, the criteria weights that were used in the comparative analysis were set up either by the DMs themselves or via GCCA. In this section, we enlarge the assessment by also considering the objective criteria weighting methods that are discussed in Section 2.2. The idea is to study, via several simulated cases, how the choice of the weighting method affects the behavior of the extended TOPSIS that was proposed by Shih et al. [26] with respect to the ranking irregularity problem.
The simulations were performed by following the guidelines provided in [27,53,56]. According to these authors, simulations comprise a reasonable expedient for conducting controllable and reproducible experiments in order to better understand the pros and cons of different multi-criteria decision making methods. By this means, these methods can be deeply assessed by considering different samples and parameter settings, numbers of criteria and alternatives, as well as distinct ways to assign weights to the criteria and scores to the alternatives. In our investigation, the main parameters considered throughout the simulations were the following (a small instance-generation sketch follows the list):
  • number of DMs: { 5 , 7 , 10 } ;
  • number of criteria: 7;
  • number of alternatives: { 17 , 19 , 21 , 23 , 25 , 27 , 30 } ;
  • scores of the alternatives: randomly generated by a uniform distribution in the range [0–100];
  • criteria weighting approach: GCCA and the methods described in Section 2.2;
  • number of trials: 100 for each parameter configuration, thereby yielding 6300 different decision problem instances; and,
  • performance criteria: the three irregularity tests described in Section 5.
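A bare-bones sketch of how such instances could be generated from the parameter grid above; the weighting, ranking, and test logic is only indicated in comments, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_instance(n_dms, n_criteria, n_alternatives):
    """One simulated MCGDM instance: uniform scores in [0, 100], one
    M x n decision matrix per DM, as in the parameter list above."""
    return [rng.uniform(0.0, 100.0, size=(n_alternatives, n_criteria))
            for _ in range(n_dms)]

# Assumed sweep over the grid listed above (weighting, ranking, and the three
# irregularity tests would be applied inside the innermost loop).
for n_dms in (5, 7, 10):
    for n_alt in (17, 19, 21, 23, 25, 27, 30):
        for _ in range(100):
            D = random_instance(n_dms, 7, n_alt)
            # elicit weights (GCCA or a Section 2.2 method), rank with the
            # extended TOPSIS, then perturb the alternative set for Tests #1-#3
```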
Table 13 shows the results delivered by the extended TOPSIS when configured with the different criteria weighting methods (it is worth remembering that the extended TOPSIS using GCCA refers to the CMCGDM approach). The third and fourth columns of this table indicate the number of problem instances (out of 6300) for which the transitivity requirement was violated when an unimportant alternative was inserted or removed, respectively. The fifth column applies the same transitivity test, but it refers to those simulation trials where a non-optimal alternative was replaced by a worse one. Conversely, the sixth and seventh columns express the number of cases where the best alternative solution was altered either by adding/removing an irrelevant alternative or by replacing a non-optimal alternative by a worse one. Finally, the last column indicates the aggregated sum of the values in the third, fourth, and fifth columns. As one can notice, using GCCA to elicit the criteria weights usually achieved better performance than the other competing methods, lagging behind the statistical variance procedure and CRITIC when the best alternative solutions were altered after adding irrelevant alternatives.
In order to better understand the performance of the different criteria weighting methods across the simulations, Figure 2, Figure 3 and Figure 4 show how the number of failure cases varied for each type of irregularity test as the number of alternatives increased from 17 (minimum) to 30 (maximum). Considering Figure 2 (Test #1), one can notice that GCCA’s overall performance falls short of CRITIC and VarProc, although, after the 25 mark, it is only surpassed by VarProc. While the number of test failures caused by DANP grew steadily, the increase of this number for GCCA is less accentuated in the range of 25 to 30 alternatives. Regarding Figure 3 (Test #2), the overall performance of GCCA is much better than that of all other methods, and its stability from the 21 mark onwards is remarkable. In general, all methods (except DANP) show a reasonable resilience to the conditions imposed by this test. Finally, Figure 4 shows how the different methods handled the transitivity requirement (Test #3). The values correspond to the aggregated cases, as shown in the last column of Table 13. All methods displayed a similar monotonically increasing behavior in terms of the number of failed trials; however, the performance of GCCA was usually better, regardless of the number of alternatives.

8. Final Remarks

In this paper, we introduced a novel GDM approach, CMCGDM, which adopts GCCA to automatically elicit the weights of the (possibly distinct) criteria used by the different DMs. By maximizing the correlation between the DMs’ decision matrices, we argue that the elicited weights can better reflect the consensus between the DMs’ preferences regarding the various alternatives. CMCGDM also makes use of the extended version of TOPSIS conceived by Shih et al. [26], which is specific for MCGDM. We revisited two examples taken from the literature and performed a series of simulations considering popular criteria weighting methods to demonstrate the usefulness of CMCGDM. Overall, the results revealed that CMCGDM is a promising approach, being more resilient to the ranking irregularity problem than the extended TOPSIS without using GCCA.
As future work, we plan to extend CMCGDM to work with other GDM methods available in the literature as well as with uncertain criteria, such as those that involve fuzzy, interval, incomplete, or random values [1,14,57]. We also plan to compare CMCGDM with other methods for eliciting criteria weights, such as CSW-DEA [36,37], nonlinear programming methods [39], and swing-weighting [40]. We shall also investigate the use of GCCA to recalibrate the criteria weights in dynamic settings, particularly in those circumstances where the sets of alternatives or criteria are allowed to change over time [58]. Finally, the use of non-linear versions of CCA (such as those that employ kernels [22]) seems to be a good theme to explore.

Author Contributions

Both authors contributed equally to this work. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The first author gratefully acknowledges the financial support given by Banco do Nordeste do Brasil.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Capuano, N.; Chiclana, F.; Fujita, H.; Herrera-Viedma, E.; Loia, V. Fuzzy group decision making with incomplete information guided by social influence. IEEE Trans. Fuzzy Syst. 2018, 26, 1704–1718. [Google Scholar] [CrossRef]
  2. Liu, W.; Dong, Y.; Chiclana, F.; Cabrerizo, F.; Herrera-Viedma, E. Group decision-making based on heterogeneous preference relations with self-confidence. Fuzzy Optim. Decis. Mak. 2016, 16, 429–447. [Google Scholar] [CrossRef] [Green Version]
  3. Heravi, G.; Fathi, M.; Faeghi, S. Multi-criteria group decision-making method for optimal selection of sustainable industrial building options focused on petrochemical projects. J. Clean. Prod. 2017, 142, 2999–3013. [Google Scholar] [CrossRef]
  4. Montajabiha, M. An extended PROMETHE II multi-criteria group decision making technique based on intuitionistic fuzzy logic for sustainable energy planning. Group Decis. Negot. 2015, 25, 221–244. [Google Scholar] [CrossRef]
  5. Yu, D.; Zhang, W.; Xu, Y. Group decision making under hesitant fuzzy environment with application to personnel evaluation. Knowl. Based Syst. 2013, 52, 1–10. [Google Scholar] [CrossRef]
  6. Wu, J.; Chiclana, F. A social network analysis trust–consensus based approach to group decision-making problems with interval-valued fuzzy reciprocal preference relations. Knowl. Based Syst. 2014, 59, 97–107. [Google Scholar] [CrossRef] [Green Version]
  7. Efe, B. An integrated fuzzy multi criteria group decision making approach for ERP system selection. Appl. Soft Comput. 2016, 38, 106–117. [Google Scholar] [CrossRef]
  8. Kara, S.; Cheikhrouhou, N. A multi criteria group decision making approach for collaborative software selection problem. J. Intell. Fuzzy Syst. 2014, 26, 37–47. [Google Scholar] [CrossRef]
  9. Liu, Y.; Jin, L.; Zhu, F. A multi-criteria group decision making model for green supplier selection under the ordered weighted hesitant fuzzy environment. Symmetry 2019, 11, 17. [Google Scholar] [CrossRef] [Green Version]
  10. Wan, S.-P.; Wang, F.; Lin, L.-L.; Dong, J.-Y. An intuitionistic fuzzy linear programming method for logistics outsourcing provider selection. Knowl. Based Syst. 2015, 82, 80–94. [Google Scholar] [CrossRef]
  11. Zhou, J.; Su, W.; Balezentis, T.; Streimikiene, D. Multiple criteria group decision-making considering symmetry with regards to the positive and negative ideal solutions via the Pythagorean normal cloud model for application to economic decisions. Symmetry 2018, 10, 140. [Google Scholar] [CrossRef] [Green Version]
  12. Herrera-Viedma, E.; Alonso, S.; Chiclana, F.; Herrera, F. A consensus model for group decision making with incomplete fuzzy preference relations. IEEE Trans. Fuzzy Syst. 2007, 15, 863–877. [Google Scholar] [CrossRef]
  13. Pérez, I.; Cabrerizo, F.; Herrera-Viedma, E. A mobile decision support system for dynamic group decision-making problems. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 2010, 40, 1244–1256. [Google Scholar] [CrossRef]
  14. Lourenzutti, R.; Krohling, R.A. A generalized TOPSIS method for group decision making with heterogeneous information in a dynamic environment. Inf. Sci. 2016, 330, 1–18. [Google Scholar] [CrossRef]
  15. Luukka, P.; Collan, M.; Fedrizzi, M. A dynamic fuzzy consensus model with random iterative steps. In Proceedings of the 2015 48th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5–8 January 2015; pp. 1474–1482. [Google Scholar]
  16. Dong, Y.; Chen, X.; Liang, H.; Li, C.-C. Dynamics of linguistic opinion formation in bounded confidence model. Inf. Fusion 2016, 32, 52–61. [Google Scholar] [CrossRef] [Green Version]
  17. Dong, Y.; Zhang, H.; Herrera-Viedma, E. Consensus reaching model in the complex and dynamic MAGDM problem. Knowl. Based Syst. 2016, 106, 206–219. [Google Scholar] [CrossRef]
  18. Cabrerizo, F.; Al-Hmouz, R.; Morfeq, A.; Balamash, A.; Martínez, M.; Herrera-Viedma, E. Soft consensus measures in group decision making using unbalanced fuzzy linguistic information. Soft Comput. 2017, 21, 3037–3050. [Google Scholar] [CrossRef]
  19. Dong, Q.; Cooper, O. A peer-to-peer dynamic adaptive consensus reaching model for the group AHP decision making. Eur. J. Oper. Res. 2016, 250, 521–530. [Google Scholar] [CrossRef]
  20. Tervonen, T.; Figueira, J.; Lahdelma, R.; Dias, J.; Salminen, P. A stochastic method for robustness analysis in sorting problems. Eur. J. Oper. Res. 2009, 192, 236–242. [Google Scholar] [CrossRef]
  21. Fan, Z.-P.; Li, M.-Y.; Liu, Y.; You, T.-H. A method for multicriteria group decision making with different evaluation criterion sets. Math. Probl. Eng. 2018, 2018, 7189451. [Google Scholar] [CrossRef]
  22. Uurtio, V.; Monteiro, J.M.; Kandola, J.; Shawe-Taylor, J.; Fernandez-Reyes, D.; Rousu, J. A tutorial on canonical correlation methods. ACM Comput. Surv. 2017, 50, 95. [Google Scholar] [CrossRef] [Green Version]
  23. McGarigal, K.; Cushman, S.; Stafford, S. Multivariate Statistics for Wildlife Ecology Research; Springer: New York, NY, USA, 2000. [Google Scholar]
  24. Kettenring, J.R. Canonical analysis of several sets of variables. Biometrika 1971, 58, 433–451. [Google Scholar] [CrossRef]
  25. Hwang, C.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications—A State-of-the-Art Survey; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  26. Shih, H.-S.; Shyur, H.-J.; Lee, E.S. An extension of TOPSIS for group decision making. Math. Comput. Model. 2007, 45, 801–813. [Google Scholar] [CrossRef]
  27. Wang, X.; Triantaphyllou, E. Ranking irregularities when evaluating alternatives by using some ELECTRE methods. Omega 2008, 36, 45–63. [Google Scholar] [CrossRef]
  28. Banihabib, M.; Hashemi-Madani, F.; Forghani, A. Comparison of compensatory and non-compensatory multi criteria decision making models in water resources strategic management. Water Resour. Manag. 2017, 31, 3745–3759. [Google Scholar] [CrossRef]
  29. Papathanasiou, J.; Ploskas, N. Multiple Criteria Decision Aid—Methods, Examples and Python Implementations; Springer: Cham, Switzerland, 2018. [Google Scholar]
  30. Pöyhönen, M.; Hämäläinen, R.P. On the convergence of multiattribute weighting methods. Eur. J. Oper. Res. 2001, 129, 569–585. [Google Scholar] [CrossRef] [Green Version]
  31. Deng, H.; Yeh, C.-H.; Willis, R.J. Inter-company comparison using modified TOPSIS with objective weights. Comput. Oper. Res. 2000, 27, 963–973. [Google Scholar] [CrossRef]
  32. Jahan, A.; Mustapha, F.; Sapuan, S.; Ismail, M.; Bahraminasab, M. A framework for weighting of criteria in ranking stage of material selection process. Int. J. Adv. Manuf. Technol. 2012, 58, 411–420. [Google Scholar] [CrossRef]
  33. Diakoulaki, D.; Mavrotas, G.; Papayannakis, L. Determining objective weights in multiple criteria problems: The critic method. Comput. Oper. Res. 1995, 22, 763–770. [Google Scholar] [CrossRef]
  34. Gabus, A. DEMATEL, Innovative Methods, Report No. 2 Structural Analysis of the World Problematique; Battelle Geneva Research Institute: Geneva, Switzerland, 1974. [Google Scholar]
  35. Yang, Y.-P.O.; Leu, J.-D.; Tzeng, G.-H. A novel hybrid MCDM model combined with DEMATEL and ANP with applications. Int. J. Oper. Res. 2008, 5, 160–168. [Google Scholar]
  36. Ramezani-Tarkhorani, S.; Khodabakhshi, M.; Mehrabian, S.; Nuri-Bahmani, F. Ranking decision-making units using common weights in DEA. Appl. Math. Model. 2014, 38, 3890–3896. [Google Scholar] [CrossRef]
  37. Hammami, H.; Ngo, T.; Tripe, D.; Vo, D.-T. Ranking with a Euclidean common set of weights in data envelopment analysis: With application to the Eurozone banking sector. Ann. Oper. Res. 2020. accepted. [Google Scholar]
  38. Ramón, N.; Ruiz, J.L.; Sirvent, I. Common sets of weights as summaries of DEA profiles of weights: With an application to the ranking of professional tennis players. Expert Syst. Appl. 2012, 39, 4882–4889. [Google Scholar] [CrossRef]
  39. Movafaghpour, M. An efficient nonlinear programming method for eliciting preference weights of incomplete comparisons. J. Appl. Res. Ind. Eng. 2019, 6, 131–138. [Google Scholar]
  40. Parnell, G.S.; Trainor, T.E. 2.3.1 using the swing weight matrix to weight multiple objectives. INCOSE Int. Symp. 2009, 19, 283–298. [Google Scholar] [CrossRef]
  41. Wang, J.-J.; Jing, Y.-Y.; Zhang, C.-F.; Zhao, J.-H. Review on multi-criteria decision analysis aid in sustainable energy decision-making. Renew. Sustain. Energy Rev. 2009, 13, 2263–2278. [Google Scholar] [CrossRef]
  42. Kobryń, A. DEMATEL as a weighting method in multi-criteria decision analysis. Mult. Criteria Decis. Mak. 2018, 4, 12. [Google Scholar] [CrossRef] [Green Version]
  43. Dytczak, M.; Ginda, G. DEMATEL-based ranking approaches. Cent. Eur. Rev. Econ. Manag. 2016, 16, 191–202. [Google Scholar] [CrossRef]
  44. Zhu, B.-W.; Zhang, J.-R.; Tzeng, G.-H.; Huang, S.-L.; Xiong, L. Public open space development for elderly people by using the danp-v model to establish continuous improvement strategies towards a sustainable and healthy aging society. Sustainability 2017, 9, 420. [Google Scholar] [CrossRef] [Green Version]
  45. Gabus, A.; Fontela, E. World Problems an Invitation to Further thought within the Frame-Work of DEMATEL; Battelle Geneva Research Institute: Geneva, Switzerland, 1972. [Google Scholar]
  46. Saaty, T.L. Decision Making with Dependence and Feedback: The Analytic Network Process; RWS: Singapore, 1996. [Google Scholar]
  47. Fontela, E.; Gabus, A. The DEMATEL Observer; Battelle Geneva Research Institute: Geneva, Switzerland, 1976. [Google Scholar]
  48. Manly, B.F.; Alberto, J.A.N. Multivariate Statistical Methods: A Primer, 4th ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2016. [Google Scholar]
  49. Meloun, M.; Militký, J. Statistical Data Analysis—A Practical Guide; Woodhead Publishing: New Delhi, India, 2011. [Google Scholar]
  50. Donner, R.; Reiter, M.; Langs, G.; Peloschek, P.; Bischof, H. Fast active appearance model search using Canonical Correlation Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1690–1694. [Google Scholar] [CrossRef] [Green Version]
  51. Bilenko, N.Y.; Gallant, J.L. Pyrcca: Regularized kernel canonical correlation analysis in python and its applications to neuroimaging. Front. Neuroinform. 2016, 10, 49. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Shih, H.-S.; Wang, C.-H.; Lee, E. A multiattribute GDSS for aiding problem-solving. Math. Comput. Model. 2004, 39, 1397–1412. [Google Scholar] [CrossRef]
  53. Triantaphyllou, E. Multi-Criteria Decision Making Methods: A Comparative Study; Springer: Boston, MA, USA, 2013. [Google Scholar]
  54. Cinelli, M.; Coles, S.R.; Kirwan, K. Analysis of the potentials of multi criteria decision analysis methods to conduct sustainability assessment. Ecol. Indic. 2014, 46, 138–148. [Google Scholar] [CrossRef] [Green Version]
  55. Triantaphyllou, E. Two new cases of rank reversals when the AHP and some of its additive variants are used that do not occur with the multiplicative AHP. J. Multi Criteria Decis. Anal. 2001, 10, 11–25. [Google Scholar] [CrossRef]
  56. Aires, R.; Ferreira, L. A new approach to avoid rank reversal cases in the TOPSIS method. Comput. Ind. Eng. 2019, 132, 84–97. [Google Scholar] [CrossRef]
57. Celik, E.; Gul, M.; Aydin, N.; Gumus, A.T.; Guneri, A.F. A comprehensive review of multi criteria decision making approaches based on interval type-2 fuzzy sets. Knowl. Based Syst. 2015, 85, 329–341. [Google Scholar] [CrossRef]
  58. Campanella, G.; Ribeiro, R. A framework for dynamic multiple-criteria decision making. Decis. Support Syst. 2011, 52, 52–60. [Google Scholar] [CrossRef]
Figure 1. Steps of the proposed canonical multi-criteria group decision making (CMCGDM) approach.
Figure 2. Performance of the different criteria weighting methods on Test #1.
Figure 3. Performance of the different criteria weighting methods on Test #2.
Figure 4. Performance of the different criteria weighting methods on Test #3.
Table 1. Fictitious multi-criteria group decision making (MCGDM) problem—non-normalized generalized version of canonical correlation analysis (GCCA) weights.

Criteria    DM1        DM2        DM3        DM4
C1          −0.0116    −0.0081    −0.0105     0.0071
C2          −0.0122    −0.0063     0.0037     0.0033
C3          −0.0049    −0.0123    −0.0053    −0.0184
C4          −0.0123    −0.0132     0.0015    −0.0167
C5           0.0148     0.0157    −0.0229     0.0004
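
For readers who wish to experiment with this step, the sketch below implements one classical formulation of generalized CCA (Carroll's MAXVAR variant), treating each DM's decision matrix as a separate view and returning the per-criterion coefficients of the first canonical variate, which play the role of the raw, possibly negative weights shown in Table 1. This is only an illustrative sketch under the assumption that a MAXVAR-style formulation is acceptable; the exact GCCA variant adopted in the paper may differ, and the random matrices below merely stand in for real decision data.

    import numpy as np

    def maxvar_gcca(views):
        # Center each DM's matrix so correlations, not raw magnitudes, drive the weights.
        centered = [v - v.mean(axis=0) for v in views]
        # Orthogonal projector onto the column space of each view.
        projectors = [x @ np.linalg.pinv(x) for x in centered]
        # The shared canonical variate g maximizes the sum of squared multiple
        # correlations with all views; it is the top eigenvector of the summed projectors.
        _, eigvecs = np.linalg.eigh(sum(projectors))
        g = eigvecs[:, -1]
        # Per-view weight vectors: least-squares coefficients of g on each view.
        return [np.linalg.pinv(x) @ g for x in centered]

    # Placeholder data: 30 alternatives, four DMs, five criteria each.
    rng = np.random.default_rng(0)
    weights = maxvar_gcca([rng.random((30, 5)) for _ in range(4)])  # one 5-vector per DM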
Table 2. Fictitious MCGDM problem—normalized GCCA weights.

Criteria    DM1       DM2       DM3       DM4
C1          0.0218    0.1227    0.1528    0.3769
C2          0.0025    0.1652    0.3284    0.3201
C3          0.2107    0.0206    0.2173    0.0000
C4          0.0000    0.0000    0.3014    0.0250
C5          0.7650    0.6915    0.0000    0.2780
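
Comparing Tables 1 and 2 suggests that each DM's raw coefficients are shifted by that DM's minimum and then rescaled so that the column sums to one; applied to the rounded values of Table 1, this closely reproduces Table 2 (e.g., for DM4 it gives approximately 0.3767, 0.3205, 0.0000, 0.0251, 0.2777, with the small discrepancies attributable to rounding of the displayed coefficients). The snippet below illustrates this reading of the two tables; it is a sketch of one normalization consistent with the reported figures, not necessarily the authors' exact procedure.

    import numpy as np

    # Non-normalized GCCA weights from Table 1 (rows C1..C5, columns DM1..DM4).
    raw = np.array([
        [-0.0116, -0.0081, -0.0105,  0.0071],
        [-0.0122, -0.0063,  0.0037,  0.0033],
        [-0.0049, -0.0123, -0.0053, -0.0184],
        [-0.0123, -0.0132,  0.0015, -0.0167],
        [ 0.0148,  0.0157, -0.0229,  0.0004],
    ])

    # Shift each column by its minimum (the worst criterion gets weight 0)
    # and rescale so that every DM's weights sum to one, as in Table 2.
    shifted = raw - raw.min(axis=0)
    normalized = shifted / shifted.sum(axis=0)
    print(normalized.round(4))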
Table 3. Decision matrix (objective criteria) for the first example, adapted from ([26] [Table 6a]).

                            Knowledge Tests                                        Skill Tests
No.   Candidates            Language Test   Professional Test   Safety Rule Test   Professional Skills   Computer Skills
1     James B. Wang         80              70                  87                 77                    76
2     Carol L. Lee          85              65                  76                 80                    75
3     Kenney C. Wu          78              90                  72                 80                    75
4     Robert M. Liang       75              84                  69                 85                    65
5     Sophia M. Cheng       84              67                  60                 75                    85
6     Lily M. Pai           85              78                  82                 81                    79
7     Abon C. Hsieh         77              83                  74                 70                    71
8     Frank K. Yang         78              82                  72                 80                    78
9     Ted C. Yang           85              90                  80                 88                    90
10    Sue B. Ho             89              75                  79                 67                    77
11    Vincent C. Chen       65              55                  68                 62                    70
12    Rosemary I. Lin       70              64                  65                 65                    60
13    Ruby J. Huang         95              80                  70                 75                    70
14    George K. Wu          70              80                  79                 80                    85
15    Philip C. Tsai        60              78                  87                 70                    66
16    Michael S. Liao       92              85                  88                 90                    85
17    Michelle C. Lin       86              87                  80                 70                    72
Table 4. Decision matrix (subjective criteria) for the first example, adapted from ([26] [Table 6b]).

        DM #1                    DM #2                    DM #3                    DM #4
No.     Panel       1-on-1       Panel       1-on-1       Panel       1-on-1       Panel       1-on-1
        Interview   Interview    Interview   Interview    Interview   Interview    Interview   Interview
1       80          75           85          80           75          70           90          85
2       65          75           60          70           70          77           60          70
3       90          85           80          85           80          90           90          95
4       65          70           55          60           68          72           62          72
5       75          80           75          80           50          55           70          75
6       80          80           75          85           77          82           75          75
7       65          70           70          60           65          72           67          75
8       70          60           75          65           75          67           82          85
9       80          85           95          85           90          85           90          92
10      70          75           75          80           68          78           65          70
11      50          60           62          65           60          65           65          70
12      60          65           65          75           50          60           45          50
13      75          75           80          80           65          75           70          75
14      80          70           75          72           80          70           75          75
15      70          65           75          70           65          70           60          65
16      90          95           92          90           85          80           88          90
17      80          85           70          75           75          80           70          75
Table 5. Original criteria weights for the first example, adapted from ([26] [Table 6b]).

                                 The Weights of the Group
No.   Criteria                   DM #1    DM #2    DM #3    DM #4
Knowledge tests
1     Language test              0.066    0.042    0.060    0.047
2     Professional test          0.196    0.112    0.134    0.109
3     Safety rule test           0.066    0.082    0.051    0.037
Skill tests
4     Professional skills        0.130    0.176    0.167    0.133
5     Computer skills            0.130    0.118    0.100    0.081
Interviews
6     Panel interview            0.216    0.215    0.203    0.267
7     1-on-1 interview           0.196    0.215    0.285    0.326
      Sum                        1        1        1        1
Table 6. Criteria weights (non-normalized values in parentheses) for the first example, as elicited by GCCA.

                                 The Weights of the Group
No.   Criteria                   DM #1               DM #2               DM #3               DM #4
Knowledge tests
1     Language test              0.2665 (0.0003)     0.1661 (−0.0001)    0.1668 (0.0014)     0.1251 (0.0005)
2     Professional test          0.0000 (−0.0119)    0.0383 (−0.0092)    0.0156 (−0.0107)    0.0104 (−0.0109)
3     Safety rule test           0.2368 (−0.0011)    0.0626 (−0.0075)    0.1927 (0.0035)     0.1827 (0.0062)
Skill tests
4     Professional skills        0.2057 (−0.0025)    0.1885 (0.0015)     0.0875 (−0.0049)    0.1470 (0.0027)
5     Computer skills            0.0503 (−0.0096)    0.2984 (0.0093)     0.0798 (−0.0056)    0.2033 (0.0083)
Interviews
6     Panel interview            0.1266 (−0.0061)    0.2101 (0.0030)     0.2985 (0.0119)     0.2157 (0.0095)
7     1-on-1 interview           0.1142 (−0.0067)    0.0360 (−0.0094)    0.1590 (0.0008)     0.1158 (−0.0004)
      Sum                        1                   1                   1                   1
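
The ranking tables that follow report relative-closeness scores. As a reference point only, the sketch below computes the classical single-matrix TOPSIS closeness coefficient from a decision matrix, a weight vector, and the benefit/cost orientation of each criterion; the extended, group-oriented TOPSIS of [26] used in the experiments aggregates DM-specific distances to the positive and negative ideal solutions and therefore yields different values, so the snippet only shows how a Score column is obtained in principle. The toy call reuses the first three rows of Table 3 with equal, purely illustrative weights.

    import numpy as np

    def topsis_scores(matrix, weights, benefit):
        # Vector normalization followed by weighting.
        v = matrix / np.linalg.norm(matrix, axis=0) * weights
        # Positive and negative ideal solutions (column-wise best and worst).
        pis = np.where(benefit, v.max(axis=0), v.min(axis=0))
        nis = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_plus = np.linalg.norm(v - pis, axis=1)
        d_minus = np.linalg.norm(v - nis, axis=1)
        return d_minus / (d_plus + d_minus)  # relative closeness, higher is better

    # First three candidates of Table 3, equal weights, all benefit criteria (toy example).
    m = np.array([[80, 70, 87, 77, 76],
                  [85, 65, 76, 80, 75],
                  [78, 90, 72, 80, 75]], dtype=float)
    print(topsis_scores(m, np.full(5, 0.2), np.ones(5, dtype=bool)))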
Table 7. Assessment of the extended TOPSIS [26] on the ranking irregularity tests—first example.

Extended TOPSIS       Test #1–Addition      Test #1–Removal       Test #2
Rank    Score         Rank    Score         Rank    Score         Rank    Score
A16     0.8960        A16     0.8956        A16     0.8963        A16     0.8960
A9      0.8797        A9      0.8797        A9      0.8797        A9      0.8797
A3      0.7860        A3      0.7862        A3      0.7859        A3      0.7860
A6      0.6611        A6      0.6611        A6      0.6611        A6      0.6611
A1      0.6272        A1      0.6259        A14     0.5925        A14     0.5924
A14     0.5924        A1      0.6259        A17     0.5915        A17     0.5919
A17     0.5920        A17     0.5925        A8      0.5700        A8      0.5701
A8      0.5701        A14     0.5924        A13     0.5565        A13     0.5568
A13     0.5568        A8      0.5701        A10     0.5079        A10     0.5080
A10     0.5080        A13     0.5571        A5      0.4660        A5      0.4660
A5      0.4660        A10     0.5082        A7      0.4516        A4      0.4524
A4      0.4527        A5      0.4659        A4      0.4514        A7      0.4522
A7      0.4523        A4      0.4538        A2      0.4401        A2      0.4404
A2      0.4404        A7      0.4530        A15     0.4092        A1      0.4224
A15     0.4091        A2      0.4406        A11     0.2101        A15     0.4091
A11     0.2097        A15     0.4091        A12     0.1673        A11     0.2097
A12     0.1678        A11     0.2093        -       -             A12     0.1677
-       -             A12     0.1682        -       -             -       -
Table 8. Assessment of the CMCGDM approach on the ranking irregularity tests—first example.

CMCGDM                Test #1–Addition      Test #1–Removal       Test #2
Rank    Score         Rank    Score         Rank    Score         Rank    Score
A16     0.9016        A16     0.9013        A16     0.9018        A16     0.9016
A9      0.8797        A9      0.8799        A9      0.8794        A9      0.8796
A3      0.7230        A3      0.7231        A3      0.7228        A3      0.7230
A6      0.6720        A6      0.6719        A6      0.6722        A6      0.6721
A14     0.6608        A14     0.6608        A14     0.6608        A14     0.6608
A1      0.5964        A1      0.5954        A8      0.5854        A8      0.5855
A8      0.5855        A1      0.5954        A5      0.5491        A5      0.5495
A5      0.5495        A8      0.5855        A2      0.5170        A2      0.5173
A2      0.5173        A5      0.5499        A17     0.4813        A17     0.4811
A17     0.4811        A2      0.5176        A13     0.4778        A13     0.4779
A13     0.4779        A17     0.4810        A4      0.4699        A4      0.4705
A4      0.4706        A13     0.4781        A10     0.4634        A10     0.4632
A10     0.4632        A4      0.4713        A7      0.3956        A7      0.3957
A7      0.3957        A10     0.4630        A15     0.3627        A1      0.3627
A15     0.3620        A7      0.3958        A11     0.2259        A15     0.3621
A11     0.2254        A15     0.3615        A12     0.1408        A11     0.2255
A12     0.1409        A11     0.2250        -       -             A12     0.1409
-       -             A12     0.1410        -       -             -       -
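
An irregularity in Tables 7 and 8 is visible whenever the alternatives shared by the baseline ranking and a test ranking change their relative order (for instance, in Table 7 the extended TOPSIS moves A1 from fifth place in the baseline to fourteenth place under Test #2). A small helper of the kind sketched below, written here only for illustration and not taken from the paper, is enough to detect such reversals.

    def order_preserved(baseline, perturbed):
        # True if the alternatives common to both rankings keep their relative
        # order; rankings are given as lists from best to worst.
        common = [a for a in baseline if a in set(perturbed)]
        filtered = [a for a in perturbed if a in set(baseline)]
        return common == filtered

    # Baseline and Test #2 rankings of the extended TOPSIS (Table 7).
    base = ["A16", "A9", "A3", "A6", "A1", "A14", "A17", "A8", "A13",
            "A10", "A5", "A4", "A7", "A2", "A15", "A11", "A12"]
    test2 = ["A16", "A9", "A3", "A6", "A14", "A17", "A8", "A13", "A10",
             "A5", "A4", "A7", "A2", "A1", "A15", "A11", "A12"]
    print(order_preserved(base, test2))  # False: A1 changes its relative position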
Table 9. Decision matrix associated with DM #1 for the second example, adapted from ([21] [Table 2]).

      C1       C2     C3     C4     C5       C6
A1    0.05     500    850    8.6    30,500   1.5
A2    0.01     550    925    8.2    26,500   2.0
A3    0.008    600    960    9.0    28,500   3.0
A4    0.008    450    720    9.2    25,800   2.0
A5    0.015    400    650    8.0    24,000   1.5
A6    0.012    480    710    8.4    23,500   1.0
A7    0.017    505    691    8.0    30,100   1.3
Table 10. Decision matrix associated with DM #2 for the second example, adapted from ([21] [Table 2]).

      C5       C6     C7        C8
A1    30,500   1.5    530,000   50
A2    26,500   2.0    420,000   40
A3    28,500   3.0    450,000   35
A4    25,800   2.0    480,000   40
A5    24,000   1.5    380,000   30
A6    23,500   1.0    40,000    60
A7    30,100   1.3    392,000   57
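
In this second example the two DMs judge partially different criteria sets, C1 to C6 for DM #1 and C5 to C8 for DM #2, with the shared criteria C5 and C6 taking the same values in both matrices; GCCA accommodates this naturally because every DM simply contributes a separate view. One convenient way to hold such data, shown below as an assumed layout rather than the authors' implementation and populated with the first three alternatives of Tables 9 and 10, is one labelled matrix per DM.

    import pandas as pd

    alternatives = ["A1", "A2", "A3"]

    # View of DM #1 (criteria C1-C6), first three rows of Table 9.
    dm1 = pd.DataFrame(
        [[0.05, 500, 850, 8.6, 30500, 1.5],
         [0.01, 550, 925, 8.2, 26500, 2.0],
         [0.008, 600, 960, 9.0, 28500, 3.0]],
        index=alternatives, columns=["C1", "C2", "C3", "C4", "C5", "C6"])

    # View of DM #2 (criteria C5-C8), first three rows of Table 10.
    dm2 = pd.DataFrame(
        [[30500, 1.5, 530000, 50],
         [26500, 2.0, 420000, 40],
         [28500, 3.0, 450000, 35]],
        index=alternatives, columns=["C5", "C6", "C7", "C8"])

    views = {"DM #1": dm1, "DM #2": dm2}  # one view per DM, ready to be passed to GCCA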
Table 11. Assessment of the extended TOPSIS [26] on the ranking irregularity tests—second example.

Extended TOPSIS       Test #1–Addition      Test #1–Removal       Test #2
Rank    Score         Rank    Score         Rank    Score         Rank    Score
A1      1.0000        A1      1.0000        A1      1.0000        A1      1.0000
A3      0.5892        A3      0.5816        A3      0.5984        A3      0.5908
A4      0.5265        A4      0.5189        A4      0.5357        A4      0.5281
A2      0.4741        A7      0.4682        A7      0.4704        A7      0.4694
A7      0.4691        A2      0.4667        A5      0.4011        A5      0.4004
A5      0.4002        A2      0.4667        A6      0.0000        A2      0.3748
A6      0.0000        A5      0.3996        -       -             A6      0.0000
-       -             A6      0.0000        -       -             -       -
Table 12. Assessment of the CMCGDM approach on the ranking irregularity tests—second example.

CMCGDM                Test #1–Addition      Test #1–Removal       Test #2
Rank    Score         Rank    Score         Rank    Score         Rank    Score
A3      0.8455        A3      0.8425        A3      0.8521        A3      0.8478
A4      0.5798        A4      0.5784        A4      0.5800        A4      0.5798
A2      0.5582        A2      0.5581        A1      0.4529        A2      0.4631
A1      0.4578        A2      0.5581        A5      0.3574        A1      0.4577
A5      0.3600        A1      0.4627        A7      0.2919        A5      0.3581
A7      0.2965        A5      0.3581        A6      0.0000        A7      0.2936
A6      0.0000        A7      0.2942        -       -             A6      0.0000
-       -             A6      0.0000        -       -             -       -
Table 13. Assessment of the different criteria weighting methods on the ranking irregularity tests—simulated cases.

                             Test #3
Dist.     Method             Addition   Removal   Replacement   Test #1Test #2   Total Test #3
uniform   GCCA               11,737     13,226    2727          23111            27,690
          Entropy            12,504     12,753    2640          24631            27,897
          Std                12,504     12,753    2640          24631            27,897
          DEMATEL            12,741     12,837    2626          25217            28,204
          DEMATEL-ANP        12,145     13,592    2766          28325            28,503
          CRITIC             12,411     13,276    2869          21816            28,556
          VarProc            12,366     13,313    3059          18323            28,738
Note: VarProc refers to the statistical variance procedure; Std refers to the standard deviation method.
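
Table 13 aggregates, over a collection of simulated MCGDM problems, how many times each weighting method produced a ranking irregularity under each test. The loop below sketches only the bookkeeping for a single perturbation type (removal of a randomly chosen non-optimal alternative); the scoring rule, the problem generator, the number of simulated problems, and the precise definitions of Tests #1 to #3 are assumptions made for illustration, so the snippet is not expected to reproduce the reported figures.

    import numpy as np

    def ranking(matrix, weights):
        # Min-max normalize per criterion (the set-dependent normalization is what
        # makes rank reversals possible), then rank by weighted sum, best first.
        norm = (matrix - matrix.min(axis=0)) / (matrix.max(axis=0) - matrix.min(axis=0))
        return list(np.argsort(-(norm @ weights)))

    rng = np.random.default_rng(42)
    n_problems = 1000  # assumed number of simulated problems
    irregularities = 0
    for _ in range(n_problems):
        m = rng.random((8, 5))           # 8 alternatives, 5 criteria
        w = rng.dirichlet(np.ones(5))    # random weights summing to one
        base = ranking(m, w)
        victim = rng.choice(base[1:])    # remove a non-optimal alternative
        reduced = np.delete(m, victim, axis=0)
        labels = [a for a in range(8) if a != victim]
        new_labels = [labels[i] for i in ranking(reduced, w)]
        old_labels = [a for a in base if a != victim]
        irregularities += new_labels != old_labels
    print(irregularities)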
