Article

Comparing Multi-Criteria Decision Making Models for Evaluating Environmental Education Programs

Department of Environment, Ionian University, 29100 Zakynthos, Greece
Sustainability 2021, 13(20), 11220; https://doi.org/10.3390/su132011220
Submission received: 25 August 2021 / Revised: 23 September 2021 / Accepted: 30 September 2021 / Published: 12 October 2021
(This article belongs to the Special Issue Environmental Education Researches)

Abstract

Educators in the field of Environmental Education often have difficulty identifying and selecting programs that make the best use of available resources and achieve the desired outcomes. This difficulty is, in part, due to their lack of expertise in evaluation knowledge and practice. The use of multi-criteria decision-making models in evaluating environmental education programs is new and, as a result, few models have been used and tested in this domain. Comparisons of multi-criteria decision-making models have been carried out in various domains, but not for the evaluation of environmental education programs. Therefore, we investigate the comparative performance of the SAW, WPM, TOPSIS, and PROMETHEE II models in evaluating and selecting the most appropriate environmental education program. The main objective of this paper is to present the steps of this comparative analysis and to draw conclusions on the suitability and robustness of the SAW, WPM, TOPSIS, and PROMETHEE II models for evaluating environmental education programs.

1. Introduction

The evaluation of environmental education (EE) programs before their implementation can save time and effort [1]. The importance, as well as the difficulty, of evaluating EE programs has been highlighted by many researchers [2,3,4,5,6,7,8,9,10,11,12,13,14]. Due to this difficulty, educators often omit this stage and, in many cases, end up implementing EE programs that do not fulfill their goals, objectives, and/or requirements.
An automated evaluation of EE programs would be very useful for many researchers and educators. Prior automated evaluations of EE programs, such as the one proposed by Zint [12], mainly assisted the evaluation of a single EE program after its implementation rather than before. In Kabassi et al. [15], an automated system is designed to evaluate EE programs based on a multi-criteria decision making (MCDM) approach. In that approach, a set of criteria was formed and the combination of the Analytic Hierarchy Process (AHP) [16] with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [17] was applied to evaluate EE programs prior to their implementation, comparing them and selecting the one that seems most appropriate.
Generally, many MCDM methods are available. However, finding the best MCDM model to apply is not easy. In MCDM, no single method is considered the most suitable for all types of decision-making situations [18,19,20], and different methods may lead to different rankings of the evaluated objects [18,21]. A solution to this problem may be given by comparing MCDM models [22]. Different comparative analyses of MCDM methods have been implemented [18,22,23,24,25,26], but none of these analyses concerns the evaluation of EE programs.
In light of the importance of using MCDM in evaluating EE programs and the need to compare MCDM approaches in each domain, we perform a comparative analysis of four MCDM models for evaluating EE programs. For this purpose, we use AHP to estimate the weights of the criteria, as this theory has a well-defined way of forming the set of criteria and estimating their weights based on pair-wise comparisons. Then, we apply four different MCDM models for processing the results of the evaluation: SAW (Simple Additive Weighting) [17,27], WPM (Weighted Product Model) [28], TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) [17], and PROMETHEE II (Preference Ranking Organization METHod for Enrichment Evaluations II) [29,30]. The main reason for selecting these four models is that they are among the most popular and widely used in the literature.
There is no formal way of comparing MCDM models, as the various comparison experiments have shown [25,26,31,32,33,34,35,36,37,38,39,40]. Indeed, Sałabun et al. [41] concluded that almost every combination of a method and its parameters may produce a different result. Therefore, the comparison of MCDM models in the context of EE projects may be of particular interest. Furthermore, TOPSIS, SAW, WPM, and PROMETHEE II have not been compared before for the evaluation of EE programs. For the comparison of the MCDM theories, we use data from EE programs in Greece; more specifically, the data of 52 EE programs on environmental paths.
After the comparison of the MCDM models, we performed a sensitivity analysis. Sensitivity analysis examines the degree of change in the overall ranking of the alternatives when the input data are slightly modified. More specifically, the scheme of criteria weights is changed in order to examine the robustness of the models and the alternatives. Although sensitivity analyses have been performed before in different domains and for different MCDM models [40,42,43,44,45], this is the first time one is implemented to estimate the consistency of MCDM models in the domain of EE programs.
The rest of the paper is organized as follows: Section 2 presents comparisons of the MCDM models described in the particular paper in other domains. Section 3 describes in detail the steps for the application of AHP for the criteria and their weights. Then, in Section 4, the four MCDM models that are compared are described. In Section 5 and Section 6, those models are applied for the evaluation of EE programs and the results are analyzed and compared. In Section 7, a sensitivity analysis is performed to evaluate the robustness of the four different MCDM models and the conclusions drawn by this study are discussed in Section 8.

2. Bibliographic Review

Quality environmental education involves many partners and stakeholders who collaborate in a research-implementation space where science, decision making, and local culture and environment intersect [46]; environmental education evaluation and assessment often struggle in these productive, yet complex, spaces [3]. Regular program evaluation is needed to better understand the linkages between the issues programs are designed to address, the metrics or indicators used to describe effectiveness, and the actual measured outcomes [6].
Many researchers have referred to the need for evaluation of EE projects [2,3,4,5,6,7,8,9,10,11,12,13,14], while most of them also emphasize the lack of evaluation experiments due to the difficulty of implementing them. For example, the studies by Norris & Jacobson [9] and O’Neil [46], reviewing 56 and 37 EE projects respectively, concluded that less than one-third of the programs reported evaluations. Since then, more evaluation experiments and reviews have been conducted on EE activities [47], EE projects [6,48], and EE Centers [49].
Although progress has been made in terms of environmental education program evaluation practices and use, more emphasis is needed on the subject [48]. The first and most commonly addressed concern is the capacity of educators to evaluate [50]. To help stakeholders in EE, automated systems for evaluation have been developed [12]. However, such systems mainly assist the evaluation of a single EE program after its implementation.
Indeed, most evaluation experiments of EE projects involve surveys after the implementation of the project and/or observation during the implementation [2]. This means that an EE project has to be implemented before one can say whether it is worth being selected, and that costs time, effort, and, probably, money. Therefore, an evaluation of EE projects prior to their implementation is essential.
In Kabassi et al. [15], an automated system has also been designed to evaluate EE programs based on a multi-criteria decision making (MCDM) approach. In that approach, the combination of MCDM models proved rather effective for evaluating EE projects prior to their implementation. However, a major concern with decision-making is that different MCDM methods can provide different results for the same problem. For this reason, comparisons of MCDM models have been made in different domains and concern different MCDM theories [32,36,40,43,46,51,52,53,54,55,56,57]. The MCDM models compared in this paper for the domain of EE program evaluation are SAW, WPM, TOPSIS, and PROMETHEE II. These models have been compared in the past in pairs, and the corresponding comparison experiments are presented in Table 1.
From Table 1, one can easily observe that these four models have never been part of the same comparison. Furthermore, there is no previous experiment that tries to compare MCDM models in the domain of EE programs’ evaluation. As a result, the particular comparison may provide interesting insights and conclusions regarding the evaluation of EE programs.

3. AHP for the First Steps

Analytic Hierarchy Process [16] is one of the most popular MCDM techniques. AHP aims to analyze a qualitative problem through a quantitative method and seems to be very appropriate for implementing the first steps of any MCDM problem. As a result, it has been combined with many other MCDM models [58,59,60].
The application of AHP resulted in the following set of criteria, which was formed by a group of experts and is described in detail in [15]:
  • uc1-Adaptivity: This criterion reveals how flexible the program is and how adaptable it is to each age group of participants.
  • uc2-Completeness: This criterion shows if the available description of the program covers the topic and to what extent.
  • uc3-Pedagogy: This criterion shows whether the EE program is based on a pedagogical theory or if it uses a particular pedagogical method.
  • uc4-Clarity: This criterion represents whether or not the objectives of this program are explicitly expressed or stated.
  • uc5-Effectiveness: This criterion shows the overall impact, depending on the programming and available support material.
  • uc6-Knowledge: This criterion refers to the quantity and quality of a cognitive object offered to students.
  • uc7-Skills: This criterion reveals if skills are cultivated through activities involving active student participation.
  • uc8-Behaviors: This criterion reveals the change in the student’s intentions and behavior through the program.
  • uc9-Enjoyment: This criterion shows the enjoyment of the trainees throughout the EE project.
  • uc10-Multimodality: This criterion represents whether the EE project provides many different kinds of activities, interventions, and methods.
Then AHP was applied to estimate the weights of the criteria. The estimation of the weights is presented in detail in [15]; the values were estimated as follows: $w_{uc_1} = 0.072$, $w_{uc_2} = 0.071$, $w_{uc_3} = 0.036$, $w_{uc_4} = 0.129$, $w_{uc_5} = 0.133$, $w_{uc_6} = 0.127$, $w_{uc_7} = 0.171$, $w_{uc_8} = 0.099$, $w_{uc_9} = 0.111$, $w_{uc_{10}} = 0.051$. The analysis of the weights revealed that the most important criterion when evaluating EE programs is ‘Skills’, while ‘Effectiveness’, ‘Clarity’, and ‘Knowledge’ were also considered rather important.
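The weight-elicitation step can be illustrated with a short sketch. The following Python fragment is a minimal illustration of Saaty’s eigenvector method for deriving weights from a reciprocal pairwise-comparison matrix; the 3 × 3 matrix shown here is purely hypothetical and is not the experts’ judgments reported in [15].

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Weights as the normalized principal eigenvector of a reciprocal
    pairwise-comparison matrix (Saaty's eigenvector method)."""
    eigenvalues, eigenvectors = np.linalg.eig(pairwise)
    principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
    return np.abs(principal) / np.abs(principal).sum()

# Hypothetical 3x3 example (NOT the experts' judgments from [15]):
# criterion A judged 3x as important as B and 5x as important as C.
example = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
print(ahp_weights(example).round(3))  # approx. [0.648 0.230 0.122]
```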

4. MCDM Models

The different MCDM models selected to be combined with AHP are SAW, WPM, TOPSIS, and PROMETHEE II. Using each model, the aim was to determine the value of the EE programs by combining the values of the criteria. The models differ in their basic principles and in the way they combine the criteria values. However, the first three steps are identical for all four models implemented:
Forming the set of alternative EE programs: The set of alternative EE projects was formed after a study of the EE projects in Greece [15]. More specifically, we collected the 553 programs that had been implemented in the past by environmental education centers in Greece. After collecting all this information, we chose only the EE projects related to the subject of Environmental Paths. These projects had been implemented by environmental education centers in different parts of Greece. This specific set was selected due to its suitable number of programs (52) and its characteristics.
Forming a set of evaluators: The group of evaluators consisted only of expert users. More specifically, three users participated in the experiment, all experts in EE programs.
Calculating the values of the criteria: In this step, the evaluators studied the 52 EE programs for paths in Greece and provided values for the 10 criteria for each program. These values were on a nine-point scale. As soon as all the values of the three decision-makers had been collected, the geometric mean of the corresponding values was calculated for each criterion of each EE program (a short sketch of this aggregation follows the list of steps).
Application of the MCDM model to estimate the final value of each EE program and rank the alternatives: To implement this step and calculate a final value $U(EEp_j)$ for each EE program, four different MCDM models were applied: SAW, WPM, TOPSIS, and PROMETHEE II.
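As a rough illustration of how the three evaluators’ ratings can be aggregated into a single decision matrix via the geometric mean (step 3 above), consider the following sketch; the random ratings only stand in for the real judgments on the nine-point scale.

```python
import numpy as np

# ratings[e, j, i]: rating of evaluator e for EE program j on criterion i,
# on the nine-point scale. Random numbers stand in for the real judgments.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 10, size=(3, 52, 10)).astype(float)

# Geometric mean over the three evaluators gives the 52 x 10 decision matrix.
decision_matrix = np.exp(np.log(ratings).mean(axis=0))
print(decision_matrix.shape)  # (52, 10)
```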

4.1. SAW

The SAW model translates a decision problem into the optimization of a multi-attribute utility function $U$ defined on the set of alternatives $A$. The decision-maker estimates the value of the function $U(EEp_j)$ for every alternative $EEp_j$ and selects the one with the highest value. In the SAW method, the multi-attribute utility function $U$ is calculated as a linear combination of the values of the $n$ attributes:
$U(EEp_j) = \sum_{i=1}^{10} w_{uc_i} \, uc_{ij},$
where $EEp_j$ is one alternative and $uc_{ij}$ is the value of criterion $uc_i$ for the alternative $EEp_j$. The higher the value, the more desirable the alternative. The values of $U(EEp_j)$ obtained using SAW are presented in Table 2.
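A minimal sketch of the SAW aggregation above, assuming a 52 × 10 decision matrix (rows are EE programs, columns are the criteria in the order of Section 3, e.g. the matrix from the sketch in the previous section) and the AHP weights reported there:

```python
import numpy as np

weights = np.array([0.072, 0.071, 0.036, 0.129, 0.133,
                    0.127, 0.171, 0.099, 0.111, 0.051])

def saw_scores(decision_matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """U(EEp_j) = sum_i w_i * uc_ij for each alternative (row)."""
    return decision_matrix @ weights

# Example: scores = saw_scores(decision_matrix, weights)
#          ranking = np.argsort(-scores)   # highest U first
```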

4.2. WPM

The classic WPM model is considered complicated in the case of pairwise comparison, as it compares alternatives in pairs by calculating a ratio $U(EEp_K / EEp_L)$. In our case, there are 52 alternatives, and this pair-wise comparison would be complicated and time-consuming. Therefore, we used an alternative application of WPM, proposed by Triantafyllou [21], in which the decision-maker uses only products, without ratios. For each alternative, the following value was calculated:
$U(EEp_j) = \prod_{i=1}^{10} (uc_{ij})^{w_{uc_i}}, \quad \text{for } j = 1, \ldots, 52$
The term $U(EEp_j)$ denotes the total performance value of the alternative $EEp_j$ (Table 2). As in SAW, the alternative with the highest $U(EEp_j)$ is ranked first.
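The ratio-free WPM variant used above can be sketched in the same setting; the criterion values must be strictly positive, which the nine-point scale guarantees.

```python
import numpy as np

def wpm_scores(decision_matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """U(EEp_j) = prod_i (uc_ij ** w_i); requires strictly positive values."""
    return np.prod(decision_matrix ** weights, axis=1)

# Example: ranking = np.argsort(-wpm_scores(decision_matrix, weights))
```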

4.3. TOPSIS

The central principle of the TOPSIS model is that the best alternative should have the shortest distance from the positive-ideal solution and the farthest distance from the negative-ideal solution.
Calculating weighted ratings: The weighted value is calculated as $v_{ij} = w_{uc_i} \, uc_{ij}$, where $w_{uc_i}$ is the weight and $uc_{ij}$ is the value of criterion $uc_i$ for alternative $j$.
Identify the positive-ideal and negative-ideal solutions: The positive-ideal solution is the composite of all the best attribute ratings attainable and is denoted
$EEp^* = \{v_1^*, v_2^*, \ldots, v_i^*, \ldots, v_{10}^*\},$
while the negative-ideal solution is the composite of all the worst attribute ratings attainable, $EEp^- = \{v_1^-, v_2^-, \ldots, v_{10}^-\}$.
Calculate the separation measures from the positive-ideal and negative-ideal solutions: In this step, the system calculates the n-dimensional Euclidean distance of each alternative $j$ from the positive-ideal solution, $S_j^* = \sqrt{\sum_{i=1}^{n} (v_{ij} - v_i^*)^2}$, and from the negative-ideal solution, $S_j^- = \sqrt{\sum_{i=1}^{n} (v_{ij} - v_i^-)^2}$.
Calculate similarity indices and rank the EE projects: The similarity index to the positive-ideal solution for alternative $j$ is given by $U_j^* = \frac{S_j^-}{S_j^* + S_j^-}$. The alternative EE projects are then ranked according to $U_j^*$ in descending order, and the one with the highest value is selected as the most desired (Table 2).
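A minimal sketch of the TOPSIS steps above. All ten criteria are treated as benefit criteria, and the sketch follows the section as written, so the weighted ratings are computed directly from the criterion values without an additional normalization step.

```python
import numpy as np

def topsis_scores(decision_matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Similarity index U*_j = S-_j / (S*_j + S-_j) for each alternative,
    with all criteria treated as benefit criteria."""
    v = decision_matrix * weights                          # weighted ratings v_ij
    ideal = v.max(axis=0)                                  # positive-ideal solution
    anti_ideal = v.min(axis=0)                             # negative-ideal solution
    s_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))        # distance to ideal
    s_neg = np.sqrt(((v - anti_ideal) ** 2).sum(axis=1))   # distance to anti-ideal
    return s_neg / (s_pos + s_neg)

# Example: ranking = np.argsort(-topsis_scores(decision_matrix, weights))
```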

4.4. PROMETHEE II

The PROMETHEE methods belong to the family of the outranking methods. The steps of PROMETHEE II after having defined criteria and their weights of importance as well as the values of the criteria for all EE programs are:
Making comparisons and calculating preference degrees: This step computes, for each pair of possible EE programs and for each criterion, the value of the preference degree. Let $g_j(EEp_i)$ be the value of criterion $j$ for an EE program $EEp_i$. We denote by $d_j(EEp_i, b)$ the difference in the value of criterion $j$ for two EE programs $EEp_i$ and $b$:
$d_j(EEp_i, b) = g_j(EEp_i) - g_j(b)$
$P_j(EEp_i, b)$ is the value of the preference degree of criterion $j$ for the two EE programs $EEp_i$ and $b$. The preference functions used to compute these preference degrees are defined as:
$P_j(EEp_i, b) = 0, \quad \text{if } d_j(EEp_i, b) \leq 0$
$P_j(EEp_i, b) = d_j(EEp_i, b), \quad \text{if } d_j(EEp_i, b) > 0$
Aggregating the preference degrees of all criteria for each pair of EE programs: This step consists in aggregating the preference degrees of all criteria for each pair of possible EE programs into a global preference index. Let $C$ be the set of considered criteria and $w_j$ the weight associated with criterion $j$. The global preference index for a pair of possible EE programs $EEp_i$ and $b$ is computed as follows:
$\pi(EEp_i, b) = \frac{\sum_{j=1}^{n} w_j \, P_j(EEp_i, b)}{\sum_{j=1}^{n} w_j}$
Calculate positive and negative outranking flows: This step, which is the first that concerns the ranking of the possible EE programs, consists in computing the outranking flows. For each possible EE program $EEp_i$, we compute the positive outranking flow $\phi^+(EEp_i)$ and the negative outranking flow $\phi^-(EEp_i)$. Let $A$ be the set of possible EE programs and $m$ the number of possible EE programs. The positive outranking flow of a possible EE program $EEp_i$ is computed by the following formula:
$\phi^+(EEp_i) = \frac{1}{m-1} \sum_{b=1}^{m} \pi(EEp_i, b), \quad EEp_i \neq b$
The negative outranking flow of a possible EE program $EEp_i$ is computed by the following formula:
$\phi^-(EEp_i) = \frac{1}{m-1} \sum_{b=1}^{m} \pi(b, EEp_i), \quad EEp_i \neq b$
Calculate the net outranking flow: The last step of the application of PROMETHEE II consists in using the outranking flows to establish a complete ranking between the possible EE programs. The ranking is based on the net outranking flows, which are computed for each possible EE program from the positive and negative outranking flows. The net outranking flow $U(EEp_i)$ of a possible EE program $EEp_i$ is computed as follows:
$U(EEp_i) = \phi^+(EEp_i) - \phi^-(EEp_i)$
Ranking EE programs: The EE programs are ranked according to the value of $U(EEp_i)$, in descending order.
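A compact sketch of the PROMETHEE II steps above, using the linear preference function $P_j = \max(d_j, 0)$ defined in this section; the variable names are illustrative.

```python
import numpy as np

def promethee_ii_net_flows(decision_matrix: np.ndarray,
                           weights: np.ndarray) -> np.ndarray:
    """Net outranking flow U(EEp_i) = phi+(EEp_i) - phi-(EEp_i)."""
    m = decision_matrix.shape[0]
    # d[a, b, j] = g_j(EEp_a) - g_j(EEp_b) for every ordered pair of programs
    d = decision_matrix[:, None, :] - decision_matrix[None, :, :]
    p = np.maximum(d, 0.0)                            # preference degrees P_j
    pi = (p * weights).sum(axis=2) / weights.sum()    # global preference index
    phi_plus = pi.sum(axis=1) / (m - 1)               # positive outranking flow
    phi_minus = pi.sum(axis=0) / (m - 1)              # negative outranking flow
    return phi_plus - phi_minus

# Example: ranking = np.argsort(-promethee_ii_net_flows(decision_matrix, weights))
```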

5. Application of MCDM Models

As soon as all the MCDM models have been applied, the final values of all EE programs using each model are calculated (Table 2).
After calculating the values of the different MCDM models, we estimated the ranking of each EE program (Table 3).
The similar rankings are highlighted in gray in the table. The similarity in those rankings can be further demonstrated by an analysis of pairwise correlation, which is presented in the next section.

6. Comparison of the Models

The main steps for comparing the MCDM models are:
  • Implementing pairwise comparisons of the values of the models by calculating the Pearson correlation coefficient.
  • Implementing pairwise comparisons of the rankings by calculating Spearman’s rho correlation coefficient.
  • Estimating the Cohen’s kappa for testing the inter-rater comparability, using MCDM models as raters.
  • Performing a sensitivity analysis to evaluate the robustness of those models.
In order to estimate the correlation of the four MCDM models being evaluated, we calculated the Pearson correlation coefficient for all pairs of MCDM models, using the values of Table 2. The values of the Pearson correlation coefficient, presented in Table 4, reveal that all four methods perform very similarly (Pearson correlation coefficients of 0.969 to 0.995, which are very high).
After confirming the correlation of the MCDM models using the values of the Pearson correlation coefficient, we aim at implementing a pairwise comparison of the rankings produced by the MCDM models. This comparison is performed by estimating Spearman’s rho correlation coefficient. More specifically, the rankings of Table 3 are used for calculating Spearman’s rho correlation, for all pairs of MCDM models. The Spearman’s rho correlation is estimated by:
$R = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}$
where $d_i$ is the rank difference at position $i$ and $n$ is the number of ranks.
The results, presented in Table 5, are remarkable, as all four methods perform very similarly (Spearman’s rho correlations of 0.983 to 0.995, which are very high).
Generally, the values of correlations are very high for all pairs of MCDM models. According to the values of the Pearson correlation coefficient, the highest correlation was between SAW and WPM, which was quite expected since their reasoning is very similar, and the lowest correlation was between TOPSIS and PROMETHEE II. The correlation of the rankings of the different alternative EE programs confirmed the high correlation of SAW and WPM but slightly higher was the correlation of SAW with PROMETHEE II.
In order to further analyze the comparison of the four MCDM models, we decided to check the effectiveness of the models in categorizing the EE projects into groups. The EE programs were categorized into five groups, following the five-point Likert scale (1: very good, 2: good, 3: mediocre, 4: not good, 5: bad); the grouping is presented in Table 6.
Since the 52 EE programs have been classified into five mutually exclusive categories, we use Cohen’s kappa to test the reliability of the MCDM models being investigated. Indeed, Cohen’s kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The raters in this particular decision problem are the MCDM models that rate the EE programs, and we make pairwise comparisons.
The values of Cohen’s kappa for the pairwise agreement of the MCDM models are presented in Table 7. A value of Cohen’s kappa above 0.6 is considered quite good; thus, the values of Cohen’s kappa in this experiment (0.806–0.976) confirm the reliability of the four MCDM models being compared for this particular domain.
The agreement between the raters according to Cohen’s kappa is quite high in general (above 0.806 for all pairs of MCDM models). This was also confirmed by the Pearson correlation coefficient and Spearman’s rho correlation presented in Table 4 and Table 5. Cohen’s kappa revealed that the highest agreement was between PROMETHEE II and SAW, which is in line with the results of Spearman’s rho correlation. Both Spearman’s rho correlation and Cohen’s kappa revealed that the correlation of PROMETHEE II with WPM is also very high. Even though some theories seem to correlate more than others, the overall correlation of the four methods is very high. A reasonable degree of disagreement can be observed among the methods, but it does not affect their reliability.
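The pairwise agreement measures used in this section can be computed with standard libraries; the following sketch assumes SciPy and scikit-learn are available and that the scores and 1–5 groupings of two models are given as arrays.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score

def agreement(values_a, values_b, groups_a, groups_b):
    """Pearson r on the raw model values, Spearman's rho on the induced
    rankings, and Cohen's kappa on the five-group classifications."""
    r = pearsonr(values_a, values_b)[0]
    rho = spearmanr(values_a, values_b)[0]
    kappa = cohen_kappa_score(groups_a, groups_b)
    return r, rho, kappa

# Example: agreement(saw_values, topsis_values, saw_groups, topsis_groups)
```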

7. Sensitivity Analysis

A sensitivity analysis was performed to investigate the robustness of each MCDM method compared in this paper. One way of performing a sensitivity analysis is to change the weights of the criteria one by one. Another way is to use a weighting scheme that assigns equal weight to each criterion [40,61]. The values of the criteria are not modified. Since there are 10 criteria, the weight for each criterion was set to 0.1. Table 8 presents the values assigned to each of the alternative EE programs by the four MCDM models. First, the values calculated using weighting scheme 1 are presented; this scheme uses the weights as estimated by AHP. Then, for each model, weighting scheme 2 is used, in which equal weights are given to all criteria. Using the two weighting schemes, the rankings have been estimated for each alternative EE program by the four MCDM models. As a result, Table 9 presents the ranking of the alternative EE programs using the four MCDM models and the two weighting schemes.
The sensitivity analysis aims to check the consistency of the results and to evaluate the robustness of the ranking produced by the method as the scheme of weights changes. For this purpose, we are going to check the Pearson correlation coefficient for the values generated by each MCDM using the two different weighting schemes. The values of the Pearson correlation coefficient are very high and are presented in Table 10. The highest correlation was found between the schemes using PROMETHEE II.
However, what plays the most important role in sensitivity analysis is the consistency of rankings. The consistency is low if the ranking of the alternatives is completely modified after a slight variation of the weights. We compared the rankings of EE programs using the two different schemes by
  • Checking how many identical rankings were among the rankings of each model using the different schemes.
  • Estimating the Spearman’s rho correlation for each model using the two schemes of weights.
Table 10 presents the percentage of identical rankings for the four MCDM models; from these values, it is derived that SAW is least affected by the change in the weights of the criteria. Spearman’s rho correlation also confirmed the high correlation between the rankings estimated by SAW using the two weighting schemes (Table 10). The MCDM model that seems to be least affected by the choice of the set of weights is SAW, while the model that is most affected is TOPSIS, as it has the lowest values of the Pearson correlation coefficient, percentage of identical rankings, and Spearman’s rho correlation coefficient.
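A minimal sketch of the sensitivity check described above, reusing the scoring functions from Section 4 (e.g. saw_scores) and comparing the rankings obtained under the AHP weights and under equal weights:

```python
import numpy as np
from scipy.stats import spearmanr

ahp_w = np.array([0.072, 0.071, 0.036, 0.129, 0.133,
                  0.127, 0.171, 0.099, 0.111, 0.051])   # weighting scheme 1
equal_w = np.full(10, 0.1)                               # weighting scheme 2

def sensitivity(score_fn, decision_matrix):
    """Share of identical rank positions and Spearman's rho between the
    rankings produced under the two weighting schemes."""
    rank1 = np.argsort(np.argsort(-score_fn(decision_matrix, ahp_w)))
    rank2 = np.argsort(np.argsort(-score_fn(decision_matrix, equal_w)))
    return float(np.mean(rank1 == rank2)), spearmanr(rank1, rank2)[0]

# Example: sensitivity(saw_scores, decision_matrix)
```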

8. Conclusions

The front-end evaluation of EE programs prior to their implementation is very important because implementing an EE program costs time, money, and effort; therefore, one would like to implement only a program that is worth implementing. Furthermore, the evaluation of EE programs can support the reuse of EE programs that required a lot of effort to design and have proven to have complete instructions and confirmed success in their application.
In view of the above, the contribution of this paper is the presentation of the use of MCDM models for the front-end evaluation of EE programs and the comparison of the different models. Conclusions have been drawn regarding the application of the different MCDM models. The implementation of the different MCDM models and their comparison can be very useful for Environmental Education Centers, educators, and, generally, stakeholders in EE who want to evaluate EE projects prior to their implementation and select the one that seems most appropriate.
In this paper, AHP was used for defining the set of criteria as well as their weights. We then combined AHP with four different MCDM models and compared them. In order to implement the MCDM models and run the comparison test between the models for an initial evaluation of EE projects prior to their implementation, we ran a simulation using 52 EE projects of the EE Centers in Greece that involve paths. The data of these 52 EE projects were processed by SAW, WPM, TOPSIS, and PROMETHEE II, which are MCDM models with different computational mechanisms, such as additive or multiplicative combination and similarity to an ideal solution.
The application of the different theories revealed the characteristics of each method as well as their advantages and disadvantages. The application of SAW is very easy, but the values of the criteria must be positive to provide valuable results. WPM has the same advantages and disadvantages as SAW and provides similar results, as proved by the values of Spearman’s rho and Cohen’s kappa. Both SAW and WPM can compensate among different criteria. Simplicity is also the main advantage of TOPSIS. An additional advantage of TOPSIS is its ability to maintain the same number of steps regardless of problem size, which has allowed it to be used quickly to review other methods or to stand on its own as a decision-making tool [55]. However, the use of the Euclidean distance does not easily account for the correlation of attributes, although it keeps the consistency of judgment. PROMETHEE II is also quite easy to apply and, additionally, does not require the assumption that the criteria are proportionate. However, all four models have the disadvantage of not having a clear methodology for estimating weights. For this purpose, in the current paper, AHP is used for estimating the weights and is combined in turn with SAW, WPM, TOPSIS, and PROMETHEE II.
The application of the MCDM models for the assessment of EE programs has revealed that MCDM may prove rather effective. However, according to Mulliner et al. [18], different MCDM models can yield different results when applied to the same decision problem, and this was also confirmed in the present study. Therefore, we used the Pearson correlation coefficient, Spearman’s rho correlation, and Cohen’s kappa for the pairwise comparison of the different models and for checking the reliability across the MCDM models.
The sensitivity analysis that was performed in order to evaluate the robustness of the four different MCDM models in evaluating EE programs was implemented by applying a different scheme of weights and comparing the results of each model using the two different weighting schemes. The comparison involved estimating the Pearson correlation coefficient, identical rankings, and Spearman’s rho correlation. The results of the sensitivity test revealed that all models were quite robust. The MCDM model that proved to be more robust and less affected by the choice of the set of weights is SAW, while the MCDM that is most sensitive is TOPSIS.
The high values of correlation between the different MCDM models revealed that the ranking results depend mainly on the nature and the values of the criteria and less on the model selected. The reasonable disagreement that was observed among the methods did not affect their reliability. As a result, MCDM models proved generally very effective for evaluating EE programs before their implementation and selecting the best ones.
However, a possible limitation of this work is that the comparison has been made using only one set of alternative EE projects, and safer conclusions could be drawn if more sets were involved. Furthermore, the comparison could also involve more MCDM models in order to confirm that the results would not change. Therefore, it is among our future plans to extend the experiment with more MCDM models, such as ELECTRE, Delphi, etc. Furthermore, we aim to re-implement the experiment with other sets of EE projects with different characteristics in order to confirm that the set of projects does not affect the conclusions drawn by this study.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Carleton-Hug, A.; Hug, J.W. Challenges and opportunities for evaluating environmental education programs. Eval. Program Plan. 2010, 33, 159–164. [Google Scholar] [CrossRef]
  2. Ardoin, N.; Biedenweg, K.; O’Connor, K. Evaluation in Residential Environmental Education: An Applied Literature Review of Intermediary Outcomes. Appl. Environ. Educ. Commun. 2015, 14, 43–56. [Google Scholar] [CrossRef]
  3. Ardoin, N.M.; Bowers, A.W.; Gaillard, E. Environmental education outcomes for conservation: A systematic review. Biol. Conserv. 2019, 241, 108224. [Google Scholar] [CrossRef]
  4. Ardoin, N.M.; Bowers, A.W.; Roth, N.W.; Holthuis, N. Environmental education and K-12 student outcomes: A review and analysis of research. J. Environ. Educ. 2017, 49, 1–17. [Google Scholar] [CrossRef]
  5. Romero-Gutierrez, M.; Jimenez-Liso, M.R.; Martinez-Chico, M. SWOT analysis to evaluate the programme of a joint online/onsite master’s degree in environmental education through the students’ perceptions. Eval. Program Plan. 2016, 54, 41–49. [Google Scholar] [CrossRef]
  6. Thomas, R.E.W.; Teel, T.; Bruyere, B.; Laurence, S. Metrics and outcomes of conservation education: A quarter century of lessons learned. Environ. Educ. Res. 2018, 25, 172–192. [Google Scholar] [CrossRef]
  7. Marcinkowski, T.; Reid, A. Reviews of research on the attitude–behavior relationship and their implications for future envi-ronmental education research. Environ. Educ. Res. 2019, 25, 459–471. [Google Scholar] [CrossRef]
  8. McNamara, C. Basic Guide to Program Evaluation. 2008–2014; Free Management Library. Available online: https://managementhelp.org/ (accessed on 1 August 2021).
  9. Norris, K.; Jacobson, S.K. A content analysis of tropical conservation education programs: Elements of Success. J. Environ. Educ. 1998, 30, 38–44. [Google Scholar] [CrossRef]
  10. Fien, J.; Scott, W.; Tilbury, D. Education and conservation: Lessons from an evaluation. Environ. Educ. Res. 2001, 7, 379–395. [Google Scholar] [CrossRef]
  11. Zint, M.T.; Dowd, P.F.; Covitt, B.A. Enhancing environmental educators’ evaluation competencies: Insights from an examination of the effectiveness of theMy Environmental Education Evaluation Resource Assistant (MEERA) website. Environ. Educ. Res. 2011, 17, 471–497. [Google Scholar] [CrossRef]
  12. Zint, M. An introduction to My Environmental Education Evaluation Resource Assistant (MEERA), a web-based resource for self-directed learning about environmental education program evaluation. Eval. Program Plan. 2010, 33, 178–179. [Google Scholar] [CrossRef]
  13. Bourke, N.; Buskist, C.; Herron, J. Residential Environmental Education Center Program Evaluation: An Ongoing Challenge. Appl. Environ. Educ. Commun. 2014, 13, 83–90. [Google Scholar] [CrossRef]
  14. Linder, D.; Cardamone, C.; Cash, S.B.; Castellot, J.; Kochevar, D.; Dhadwal, S.; Patterson, E. Development, implementation, and evaluation of a novel multidisciplinary one health course for university undergraduates. One Health 2020, 9, 100121. [Google Scholar] [CrossRef]
  15. Kabassi, K.; Martinis, A.; Charizanos, P. Designing a tool for evaluating programs for environmental education. Appl. Environ. Educ. Commun. 2020, 1–18. [Google Scholar] [CrossRef]
  16. Saaty, T.L. The Analytic Hierarchy Process; McGraw-Hill: New York, NY, USA, 1980. [Google Scholar]
  17. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications A State-of-the-Art Survey. Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  18. Mulliner, E.; Malys, N.; Maliene, V. Comparative analysis of MCDM methods for the assessment of sustainable housing affordability. Omega 2016, 59, 146–156. [Google Scholar] [CrossRef]
  19. Guitouni, A.; Martel, J.M. Tentative guidelines to help choosing an appropriate MCDM method. Eur. J. Oper. Res. 1998, 109, 501–521. [Google Scholar] [CrossRef]
  20. Roy, B.; Slowinski, R. Questions guiding the choice of a multicriteria decision aiding method. EURO J. Decis. Proc. 2013, 1, 69–97. [Google Scholar] [CrossRef] [Green Version]
  21. Triantafyllou, F. Multi Criteria Decision Making Methods: A Comparative Study; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000. [Google Scholar]
  22. Zanakis, S.H.; Solomon, A.; Wishart, N.; Dublish, S. Multi-attribute decision making: A simulation comparison of select methods. Eur. J. Oper. Res. 1998, 107, 507–529. [Google Scholar] [CrossRef]
  23. Banaitiene, N.; Banaitis, A.; Kaklauskas, A.; Zavadskas, E. Evaluating the life cycle of a building: A multivariant and multiple criteria approach. Omega 2008, 36, 429–441. [Google Scholar] [CrossRef]
  24. Mahmoud, M.R.; Garcia, L.A. Comparison of different multicriteria evaluation methods for the Red Bluff diversion dam. Environ. Model Soft 2000, 15, 471–478. [Google Scholar] [CrossRef]
  25. Chitsaz, N.; Banihabib, M.E. Comparison of Different Multi Criteria Decision-Making Models in Prioritizing Flood Management Alternatives. Water Resour. Manag. 2015, 29, 2503–2525. [Google Scholar] [CrossRef]
  26. Kolios, A.; Mytilinou, V.; Lozano-Minguez, E.; Salonitis, K. A Comparative Study of Multiple-Criteria Decision-Making Methods under Stochastic Inputs. Energies 2016, 9, 566. [Google Scholar] [CrossRef] [Green Version]
  27. Fishburn, P.C. Additive Utilities with Incomplete Product Set: Applications to Priorities and Assignments. Oper. Res. 1967, 15, 537–542. [Google Scholar] [CrossRef]
  28. Triantaphyllou, F.; Mann, S.H. An examination of the effectiveness of multi-dimentional decision-making methods: A decision making paradox. Dec. Sup. Sys 1989, 5, 303–312. [Google Scholar] [CrossRef]
  29. Brans, J.P. L’elaboration d’instruments d’aide a la decision. In L’Aide a la Decision: Nature, Instruments et Perspectives d’Avenir; Nadeau, R., Landry, M., Eds.; Le Presses de l’ Universite Laval: Quebec, QC, Canada, 1986; pp. 183–214. [Google Scholar]
  30. Brans, J.P.; Vincke, P. A Preference Ranking Organisation Method (The Promethee Method for Multiple Criteria Deci-sion-Making). Manag. Sci. 1985, 31, 647–656. [Google Scholar] [CrossRef] [Green Version]
  31. Kabassi, K.; Virvou, M. Comparing Two Multi-Criteria Decision Making Theories for the Design of Web-based Individualised Assistance. In Proceedings of the 10th International Conference on Human Computer Interaction (HCI International 2005), Las Vegas, NV, USA, 22–27 July 2005; Lawrence Erlbaum Associates Publishers: Mahwah, NJ, USA, 2005. [Google Scholar]
  32. Hodgett, R.E. Comparison of multi-criteria decision-making methods for equipment selection. Int. J. Adv. Manuf. Technol. 2015, 85, 1145–1157. [Google Scholar] [CrossRef]
  33. Annette, J.R.; Banu, A.; Chandran, P.S. Comparison of Multi Criteria Decision Making Algorithms for Ranking Cloud Renderfarm Services. Indian J. Sci. Technol. 2016, 9. [Google Scholar] [CrossRef] [Green Version]
  34. Erdoğan, N.K.; Altınırmak, S.; Karamaşa, Ç. Comparison of multi criteria decision making (MCDM) methods with respect to performance of food firms listed in BIST. Copernic. J. Financ. Account. 2016, 5, 67–90. [Google Scholar] [CrossRef] [Green Version]
  35. Scholten, L.; Maurer, M.; Lienert, J. Comparing multi-criteria decision analysis and integrated assessment to support long-term water supply planning. PLoS ONE 2017, 12, e0176663. [Google Scholar] [CrossRef]
  36. Widianta, M.M.D.; Rizaldi, T.; Setyohadi, D.P.S.; Riskiawan, H.Y. Comparison of Multi-Criteria Decision Support Methods (AHP, TOPSIS, SAW & PROMENTHEE) for Employee Placement. J. Phys. Conf. Ser. 2018, 953, 12116. [Google Scholar] [CrossRef]
  37. Németh, B.; Molnár, A.; Bozóki, S.; Wijaya, K.; Inotai, A.; Campbell, J.D.; Kaló, Z. Comparison of weighting methods used in multicriteria decision analysis frameworks in healthcare with focus on low- and middle-income countries. J. Comp. Eff. Res. 2019, 8, 195–204. [Google Scholar] [CrossRef] [Green Version]
  38. Abounaima, M.C.; Lamrini, L.; El Makhfi, N.; Ouzarf, M. Comparison by Correlation Metric the TOPSIS and ELECTRE II Multi-Criteria Decision Aid Methods: Application to the Environmental Preservation in the European Union Countries. Adv. Sci. Technol. Eng. Syst. J. 2020, 5, 1064–1074. [Google Scholar] [CrossRef]
  39. Sean, H.; Luisa, N.; David, C. A Statistical Comparison between Different Multicriteria Scaling and Weighting Combinations. Int. J. Ind. Oper. Res. 2020, 3. [Google Scholar] [CrossRef]
  40. Vassoney, E.; Mochet, A.M.; Desiderio, E.; Negro, G.; Pilloni, M.G.; Comoglio, C. Comparing Multi-Criteria Decision-Making Methods for the Assessment of Flow Release Scenarios From Small Hydropower Plants in the Alpine Area. Front. Environ. Sci. 2021, 9. [Google Scholar] [CrossRef]
  41. Sałabun, W.; Wątróbski, J.; Shekhovtsov, A. Are MCDA Methods Benchmarkable? A Comparative Study of TOPSIS, VIKOR, COPRAS, and PROMETHEE II Methods. Symmetry 2020, 12, 1549. [Google Scholar] [CrossRef]
  42. Steele, K.; Carmel, Y.; Cross, J.; Wilcox, C. Uses and Misuses of Multicriteria Decision Analysis (MCDA) in Environmental Decision Making. Risk Anal. 2008, 29, 26–33. [Google Scholar] [CrossRef]
  43. Simanaviciene, R.; Ustinovichius, L. Sensitivity Analysis for Multiple Criteria Decision Making Methods: TOPSIS and SAW. Procedia—Soc. Behav. Sci. 2010, 2, 7743–7744. [Google Scholar] [CrossRef] [Green Version]
  44. Pamučar, D.S.; Božanić, D.; Ranđelović, A. Multi-Criteria Decision Making: An example of sensitivity analysis. Serb. J. Manag. 2017, 12, 1–27. [Google Scholar] [CrossRef] [Green Version]
  45. Yazdani, M.; Zavadskas, E.K.; Ignatius, J.; Abad, M.D. Sensitivity Analysis in MADM Methods: Application of Material Selection. Inzinerine Ekon.-Engine. Econ. 2016, 27, 382–391. [Google Scholar] [CrossRef] [Green Version]
  46. O’Neil, E. Conservation Audits: Auditing the Conservation Process—Lessons Learned, 2003–2007. In Conservation Measures Partnership; Conservation Standards: Bethesda, MD, USA, 2007. [Google Scholar]
  47. Silva, R.L.F.; Ghilard-Lopes, N.P.; Raimundo, S.G.; Ursi, S. Evaluation of Environmental Education Activities. In Coastal and Marine Environmental Education. Brazilian Marine Biodiversity; Ghilardi-Lopes, N., Berchez, F., Eds.; Springer: Cham, Switzerland; Available online: https://link.springer.com/chapter/10.1007%2F978-3-030-05138-9_5 (accessed on 1 August 2021).
  48. Stern, M.J.; Powell, R.B.; Hill, D. Environmental education program evaluation in the new millennium: What do we measure and what have we learned? Environ. Educ. Res. 2013, 20, 581–611. [Google Scholar] [CrossRef]
  49. Chao, Y.-L. A Performance Evaluation of Environmental Education Regional Centers: Positioning of Roles and Reflections on Expertise Development. Sustainability 2020, 12, 2501. [Google Scholar] [CrossRef] [Green Version]
  50. Monroe, M.C. Challenges for environmental education evaluation. Eval. Program Plan. 2010, 33, 194–196. [Google Scholar] [CrossRef]
  51. Kittur, J.; Vijaykumar, S.; Bellubbi, V.P.; Vishal, P.; Shankara, M.G. Comparison of different MCDM techniques used to evaluate optimal generation. In Proceedings of the 2015 International Conference on Applied and Theoretical Computing and Communication Technology, Davangere India, 29–31 October 2015; pp. 172–177. [Google Scholar] [CrossRef]
  52. Vakilipour, S.; Sadeghi-Niaraki, A.; Ghodousi, M.; Choi, S.-M. Comparison between Multi-Criteria Decision-Making Methods and Evaluating the Quality of Life at Different Spatial Levels. Sustainability 2021, 13, 4067. [Google Scholar] [CrossRef]
  53. Thor, J.; Ding, S.H.; Kamaruddin, S. Comparison of Multi Criteria Decision Making Methods from the Maintenance Alternative Selection Perspective. Int. J. Eng. Sci. 2013, 2, 27–34. [Google Scholar]
  54. Yildirim, B.F.; Önder, E. Evaluating Potential Freight Villages in Istanbul using Multi Criteria Decision Making Techniques. J. Logist. Manag. 2014, 3, 1–10. [Google Scholar] [CrossRef]
  55. Velasquez, M.; Hester, P.T. An Analysis of Multi-Criteria Decision Making Methods. Int. J. Oper. Res. 2013, 10, 56–66. [Google Scholar]
  56. Sarraf, R.; McGuire, M.P. Integration and comparison of multi-criteria decision making methods in safe route planner. Expert Syst. Appl. 2020, 154, 113399. [Google Scholar] [CrossRef]
  57. Zlaugotne, B.; Zihare, L.; Balode, L.; Kalnbalkite, A.; Khabdullin, A.; Blumberga, D. Multi-Criteria Decision Analysis Methods Comparison. Environ. Clim. Technol. 2020, 24, 454–471. [Google Scholar] [CrossRef]
  58. Ahmad, N.; Kasim, M.M.; Kalimuthu Rajoo, S. Comparative Analysis of Crisp and Fuzzy Multi- Criteria Decision Making Methods for Supplier Selection in an Automotive Manufacturing Industry. Int. J. Supply Chain Manag. 2019, 8, 951–957. [Google Scholar]
  59. Kabassi, K. Evaluating Museum Using a Combination of Decision-Making Theories. J. Herit. Tour. 2019, 14, 544–560. [Google Scholar] [CrossRef]
  60. Kabassi, K.; Mpalomenou, S.; Martinis, A. AHP & PROMETHEE II for Evaluation of Websites of Mediterranean Protected Areas’ Managing Boards. J. Manag. Inf. Decis. Sci. 2021, 24, 1–17. [Google Scholar]
  61. Kokaraki, N.; Hopfe, C.J.; Robinson, E.; Nikolaidou, E. Testing the reliability of deterministic multi-criteria decision-making methods using building performance simulation. Renew. Sustain. Energy Rev. 2019, 112, 991–1007. [Google Scholar] [CrossRef]
Table 1. Comparison studies that involve SAW, WPM, TOPSIS, and PROMETHEE II.

                SAW   WPM         TOPSIS                              PROMETHEE II
SAW             -     [40], [51]  [36], [40], [43], [51], [52], [53]  [36], [55]
WPM                   -           [40], [51]
TOPSIS                            -                                   [32], [36], [54], [55], [56], [57]
PROMETHEE II                                                          -
Table 2. The values assigned to each one of the alternative EE programs by the four MCDM models.

EE Program   SAW     WPM     TOPSIS   PROMETHEE II
EEp1         5.380   5.057   0.625    −0.201
EEp2         5.015   4.866   0.568    −0.383
EEp3         4.601   4.311   0.434    −0.481
EEp4         2.633   2.260   0.120    −0.789
EEp5         2.633   2.260   0.120    −0.789
EEp6         4.073   3.889   0.327    −0.609
EEp7         4.073   3.889   0.327    −0.609
EEp8         3.203   2.918   0.187    −0.743
EEp9         4.627   4.537   0.414    −0.488
EEp10        2.819   2.341   0.126    −0.732
EEp11        7.257   7.430   0.923    0.459
EEp12        7.202   7.299   0.897    0.448
EEp13        6.984   7.057   0.886    0.373
EEp14        6.882   6.835   0.881    0.343
EEp15        6.882   6.835   0.881    0.343
EEp16        6.882   6.835   0.881    0.343
EEp17        6.882   6.835   0.881    0.343
EEp18        6.882   6.835   0.881    0.343
EEp19        6.882   6.835   0.881    0.343
EEp20        8.066   8.166   0.971    0.798
EEp21        7.657   7.493   0.972    0.692
EEp22        7.863   7.723   0.979    0.714
EEp23        6.018   5.849   0.763    0.031
EEp24        6.018   5.849   0.763    0.031
EEp25        6.018   5.849   0.763    0.031
EEp26        7.771   7.715   0.970    0.666
EEp27        7.106   6.990   0.935    0.420
EEp28        7.106   6.990   0.935    0.420
EEp29        6.836   6.717   0.904    0.324
EEp30        5.719   5.379   0.655    −0.046
EEp31        4.941   4.020   0.455    −0.282
EEp32        1.000   1.000   0.000    −0.974
EEp33        3.363   2.695   0.218    −0.588
EEp34        6.242   5.973   0.786    0.064
EEp35        5.927   5.653   0.753    −0.041
EEp36        6.677   6.358   0.870    0.209
EEp37        4.103   3.929   0.317    −0.618
EEp38        4.103   3.929   0.317    −0.546
EEp39        7.234   7.365   0.909    0.456
EEp40        6.041   6.028   0.758    −0.022
EEp41        5.554   5.464   0.646    −0.157
EEp42        6.313   6.250   0.836    0.112
EEp43        5.421   5.307   0.676    −0.153
EEp44        6.832   6.852   0.886    0.286
EEp45        5.750   5.504   0.679    −0.114
EEp46        7.245   7.119   0.900    0.468
EEp47        7.245   7.119   0.900    0.468
EEp48        4.645   4.510   0.448    −0.484
EEp49        4.957   4.991   0.509    −0.399
EEp50        6.059   5.570   0.720    −0.035
EEp51        6.994   6.911   0.889    0.363
EEp52        7.066   6.975   0.897    0.394
Table 3. The ranking of the alternative EE programs obtained by the four MCDM models.
EE Program  SAW  WPM  TOPSIS  PROMETHEE II
EEp136363636
EEp237383738
EEp342414140
EEp450505050
EEp550505050
EEp645454345
EEp745454345
EEp848474849
EEp941394242
EEp1049494948
EEp115577
EEp1297129
EEp1314101513
EEp1415161715
EEp1515161715
EEp1615161715
EEp1715161715
EEp1815161715
EEp1915161715
EEp201131
EEp214423
EEp222212
EEp2328272626
EEp2428272626
EEp2528272626
EEp263344
EEp271011510
EEp281011510
EEp292122921
EEp3033343432
EEp3139423937
EEp3252525252
EEp3347484744
EEp3425262525
EEp3531303031
EEp3623232323
EEp3743434547
EEp3843434543
EEp398688
EEp4027252929
EEp4134333535
EEp4224242424
EEp4335353234
EEp4422151522
EEp4532323233
EEp4668105
EEp4768105
EEp4840404041
EEp4938373839
EEp5026313130
EEp5113141414
EEp5212131212
(The similar rankings are highlighted in different shades of gray).
Table 4. Pairwise correlation of the four MCDM models using the Pearson correlation coefficient.

                SAW     WPM     TOPSIS   PROMETHEE II
SAW             1       0.995   0.987    0.975
WPM             -       1       0.986    0.973
TOPSIS          -       -       1        0.969
PROMETHEE II    -       -       -        1
Table 5. Pairwise correlation of the four MCDM models using Spearman’s rho correlation.

                SAW     WPM     TOPSIS   PROMETHEE II
SAW             1       0.994   0.983    0.997
WPM             -       1       0.983    0.991
TOPSIS          -       -       1        0.984
PROMETHEE II    -       -       -        1
Table 6. Grouping of the alternative EE programs obtained by the four MCDM models.
EE Program  SAW  WPM  TOPSIS  PROMETHEE II
EEp14444
EEp24444
EEp35555
EEp45555
EEp55555
EEp65555
EEp75555
EEp85555
EEp95455
EEp105555
EEp111111
EEp121121
EEp132222
EEp142222
EEp152222
EEp162222
EEp172222
EEp182222
EEp192222
EEp201111
EEp211111
EEp221111
EEp233333
EEp243333
EEp253333
EEp261111
EEp272212
EEp282212
EEp293313
EEp304444
EEp314544
EEp325555
EEp335555
EEp343333
EEp354444
EEp363333
EEp375555
EEp385555
EEp391111
EEp403333
EEp414444
EEp423333
EEp434444
EEp443223
EEp454444
EEp461121
EEp471121
EEp485555
EEp494444
EEp503444
EEp512222
EEp522222
(The similar rankings are highlighted in different shades of gray).
Table 7. The values of Cohen’s kappa for pairwise comparing the agreement of the MCDM models.

                SAW     WPM     TOPSIS   PROMETHEE II
SAW             1       0.903   0.807    0.976
WPM             -       1       0.806    0.927
TOPSIS          -       -       1        0.831
PROMETHEE II    -       -       -        1
Table 8. The values of the alternative EE programs using the two weighting schemes.
EE Program  SAW-Scheme 1  SAW-Scheme 2  WPM-Scheme 1  WPM-Scheme 2  TOPSIS-Scheme 1  TOPSIS-Scheme 2  PROMETHEE II-Scheme 1  PROMETHEE II-Scheme 2
EEp15.385.1005.0574.5990.6250.535−0.201−0.236
EEp25.0154.7004.8664.5470.5680.452−0.383−0.439
EEp34.6014.4004.3114.2050.4340.378−0.481−0.486
EEp42.6332.5002.262.1400.120.104−0.789−0.802
EEp52.6332.5002.262.1400.120.104−0.789−0.802
EEp64.0733.9003.8893.7970.3270.272−0.609−0.626
EEp74.0733.9003.8893.7970.3270.272−0.609−0.626
EEp83.2033.0002.9182.7130.1870.137−0.743−0.769
EEp94.6274.6004.5374.4470.4140.426−0.488−0.465
EEp102.8192.9002.3412.4210.1260.148−0.732−0.702
EEp117.2577.3007.437.2180.9230.9210.4590.468
EEp127.2027.3007.2997.2180.8970.9210.4480.498
EEp136.9847.1007.0577.0500.8860.9110.3730.432
EEp146.8826.9006.8356.8500.8810.8880.3430.373
EEp156.8826.9006.8356.8500.8810.8880.3430.373
EEp166.8826.9006.8356.8500.8810.8880.3430.373
EEp176.8826.9006.8356.8500.8810.8880.3430.373
EEp186.8826.9006.8356.8500.8810.8880.3430.373
EEp196.8826.9006.8356.8500.8810.8880.3430.373
EEp208.0668.2008.1668.1520.9710.9810.7980.821
EEp217.6577.5007.4937.4680.9720.9490.6920.651
EEp227.8637.6007.7237.5440.9790.9530.7140.653
EEp236.0185.8005.8495.6780.7630.6970.031−0.014
EEp246.0185.8005.8495.6780.7630.6970.031−0.014
EEp256.0185.8005.8495.6780.7630.6970.031−0.014
EEp267.7717.8007.7157.7490.970.9710.6660.688
EEp277.1066.9006.996.8500.9350.8960.420.362
EEp287.1066.9006.996.8500.9350.8960.420.362
EEp296.8366.7006.7176.6700.9040.8750.3240.284
EEp305.7195.6005.3795.3830.6550.645−0.046−0.061
EEp314.9415.1004.024.4940.4550.533−0.282−0.292
EEp3211.00011.00000.000−0.974−0.967
EEp333.3633.2002.6952.7310.2180.195−0.588−0.527
EEp346.2426.2005.9735.9880.7860.7710.0640.054
EEp355.9275.8005.6535.6620.7530.707−0.041−0.075
EEp366.6776.5006.3586.4130.870.8290.2090.13
EEp374.1034.0003.9293.9490.3170.285−0.618−0.61
EEp384.1034.0003.9293.9490.3170.285−0.546−0.468
EEp397.2347.4007.3657.3420.9090.9370.4560.476
EEp406.0416.0006.0285.8800.7580.750−0.022−0.01
EEp415.5545.6005.4645.5020.6460.657−0.157−0.111
EEp426.3136.1006.256.0170.8360.7640.1120.041
EEp435.4215.1005.3074.9810.6760.545−0.153−0.253
EEp446.8326.7006.8526.6240.8860.8550.2860.238
EEp455.755.8005.5045.7410.6790.704−0.114−0.070
EEp467.2457.4007.1197.3310.90.9330.4680.53
EEp477.2457.4007.1197.3310.90.9330.4680.53
EEp484.6454.6004.514.5730.4480.423−0.484−0.467
EEp494.9575.0004.9914.9590.5090.526−0.399−0.355
EEp506.0596.1005.575.9400.720.745−0.0350.004
EEp516.9947.0006.9116.9260.8890.8940.3630.379
EEp527.0667.1006.9757.0540.8970.9110.3940.422
Table 9. The ranking of the alternative EE programs using the two weighting schemes.
EE Program  SAW-Scheme 1  SAW-Scheme 2  WPM-Scheme 1  WPM-Scheme 2  TOPSIS-Scheme 1  TOPSIS-Scheme 2  PROMETHEE II-Scheme 1  PROMETHEE II-Scheme 2
EEp13635363736363635
EEp23739383937393839
EEp34242414241424043
EEp45050505050505050
EEp55050505050505050
EEp64545454543454546
EEp74545454543454546
EEp84848474848494949
EEp94140394142404240
EEp104949494949484848
EEp1158587879
EEp12987812897
EEp131410101115101310
EEp141513161317151513
EEp151513161317151513
EEp161513161317151513
EEp171513161317151513
EEp181513161317151513
EEp191513161317151513
EEp2011113111
EEp2144442434
EEp2223231323
EEp232828272926302628
EEp242828273026302628
EEp252828273126302628
EEp2632324242
EEp27101311135121019
EEp28101311135121019
EEp29212122219212121
EEp303333343434343231
EEp313935424039373737
EEp325252525252525252
EEp334747484747474444
EEp342524262525242524
EEp353128303230283133
EEp362323232323232323
EEp374343434345434745
EEp384343434345434342
EEp3985658588
EEp402727252729262927
EEp413433333335333534
EEp422425242424252425
EEp433535353532353436
EEp442221152215222222
EEp453228322832293332
EEp46658610655
EEp47658610655
EEp484040403840414141
EEp493835373638383938
EEp502625312631273026
EEp511310141214141412
EEp521210131012101211
Table 10. The statistical analysis of the comparison.

                Pearson Correlation Coefficient   Percentage of Identical Rankings   Spearman’s Rho Correlation
SAW             0.997                             42%                                0.994
WPM             0.995                             27%                                0.989
TOPSIS          0.990                             13%                                0.977
PROMETHEE II    0.996                             29%                                0.988
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
