Article

Developing a Fuzzy TOPSIS Model Combining MACBETH and Fuzzy Shannon Entropy to Select a Gamification App

by María Carmen Carnero 1,2
1 Department of Business Management, Technical School of Industrial Engineering, University of Castilla-La Mancha, 13071 Ciudad Real, Spain
2 CEG-IST, Instituto Superior Técnico, Universidade de Lisboa, 1649-004 Lisboa, Portugal
Mathematics 2021, 9(9), 1034; https://doi.org/10.3390/math9091034
Submission received: 5 April 2021 / Revised: 28 April 2021 / Accepted: 30 April 2021 / Published: 2 May 2021
(This article belongs to the Special Issue Advances in Multiple Criteria Decision Analysis)

Abstract:
Due to the important advantages it offers, gamification is one of the fastest-growing industries in the world, and interest from the market and from users continues to grow. This has led to the development of more and more applications aimed at different fields, and in particular the education sector. Choosing the most suitable application is increasingly difficult, and so to solve this problem, our study designed a model which is an innovative combination of fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with the Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) and Shannon entropy theory, to choose the most suitable gamification application for the Industrial Manufacturing and Organisation Systems course in the degree programmes for Electrical Engineering and Industrial and Automatic Electronics at the Higher Technical School of Industrial Engineering of Ciudad Real, part of the University of Castilla-La Mancha. There is no precedent in the literature that combines MACBETH and fuzzy Shannon entropy to simultaneously consider the subjective and objective weights of criteria to achieve a more accurate model. The objective weights computed from fuzzy Shannon entropy were compared with those calculated from De Luca and Termini entropy and exponential entropy. The validity of the proposed method is tested through the Preference Ranking Organisation METHod for Enrichment of Evaluations (PROMETHEE) II, ELimination and Choice Expressing REality (ELECTRE) III, and fuzzy VIKOR method (VIsekriterijumska optimizacija i KOmpromisno Resenje). The results show that Quizizz is the best option for this course, and it was used in two academic years. There are no precedents in the literature using fuzzy multicriteria decision analysis techniques to select the most suitable gamification application for a degree-level university course.

1. Introduction

Gamification is defined as a process that applies gaming elements to non-game contexts [1,2]. Among the most commonly included game elements are levels, points, memes, quests, leader boards, combat, badges, gifting, boss fights, avatars, social graphs, certificates, and content unlocking [3,4].
Many benefits of the application of gamification to teaching have been described, including by Torres-Toukoumidis et al. [5] and Carnero [6]: it encourages autonomous, rigorous, and methodical working; leads to healthy competition; increases the intrinsic involvement and motivation of the participants, and motivates trying again, as feedback is immediate; improves group dynamics; maintains continuous intellectual activity by interacting constantly with the computer; incorporates fun into learning; uses a high level of interdisciplinarity; combines theory and practice, facilitating knowledge acquisition; increases the use of creativity; promotes interaction with other students; develops search and information selection skills; helps in problem solving, visualising simulations; increases interest in class participation and the number of communication channels between teacher and students; drives connectivity and interoperability in mixed distance-classroom learning; combines the application with other teaching methods; improves academic performance; and modernises the educational landscape in the new digital era [7], given the great importance of digital literacy in the modern world [8]. It also allows students to be surveyed on any aspect related to teaching, or to determine their previous level of knowledge of a subject easily. It facilitates analysis of the results obtained during an academic year and comparison of the results with those of previous years, as well as with other subjects. It facilitates the teachers’ self-assessment of their own teaching and control of attendance of the students [9]. Furthermore, some applications include social media, which allows students to create, share, and exchange content with classmates, and thereby create a sense of community [10]; this goes with the fact that student assessment is undertaken in an innovative way, since the level of knowledge acquisition is measured during the learning process, and not simply given as a final mark. Moreover, gamification provides a source of data on student learning, guaranteeing more effective, accurate, and useful information for teachers, parents, administrators, and public education policy managers [11].
In the literature review carried out on the databases Emerald, MDPI, Hindawi, Proquest, Science Direct, and Scopus using the terms “select app multicriteria”, “education game multicriteria”, and “gamification multicriteria”, the only precedents found were those of Kim [12] and Rajak and Shaw [13]. Kim [12] sets out an AHP model, designed for business managers, to assess three gamification platforms applied to a project in a company located in South Korea, using criteria typical of the selection of business software, such as credibility of the supplier, competitiveness of the product, and continuity of service. Rajak and Shaw [13] assess 10 mHealth applications on the market via AHP and fuzzy TOPSIS, recognising that a suitable framework for assessing the efficacy of mobile applications is not to be found in the literature. Therefore, there is no study in the literature analysing the choice of gamification apps for degree-level university courses with fuzzy multicriteria techniques. However, as recognised in Boneu [14], the selection of the e-learning platform is very important, as it identifies and defines the pedagogical methodologies that can be designed based on the tools and services it offers.
This study describes a method that combines fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with the Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH) approach and fuzzy Shannon entropy for the choice of the most suitable gamification app for the Manufacturing Systems and Industrial Organisation course, taught in the second-year degree programmes of Electrical Engineering and Industrial and Automatic Electronic Engineering at the Higher Technical School of Industrial Engineering at the Ciudad Real campus of the University of Castilla-La Mancha. The objective weights calculated with fuzzy Shannon entropy are compared with those calculated from fuzzy De Luca and Termini entropy and exponential entropy. The feasibility and validity of the proposed method are tested through the Preference Ranking Organisation METHod for Enrichment of Evaluations (PROMETHEE) II, ELimination and Choice Expressing REality (ELECTRE) III, and the fuzzy VIKOR method (VIsekriterijumska optimizacija i KOmpromisno Resenje), some of the methods proposed by the application for recommending the most suitable Multi-Criteria Decision Analysis (MCDA) methods developed by Wątróbski et al. [15].
The inclusion of fuzzy logic allows the uncertainties, ambiguities, or indecisions typical of real decision-making processes to be taken into account. Fuzzy TOPSIS was chosen rather than other fuzzy multicriteria decision analysis techniques because it has been shown to be a robust technique for handling complex real-life problems [16] and is widely used in many areas [17]. This is the first study in the literature to integrate the subjective weights from the judgements given by the lecturer who teaches the course, processed using the MACBETH approach, with objective weights based on objective information (the fuzzy decision matrix), calculated using fuzzy Shannon entropy. In this way, the deficiencies that occur in both the subjective and the objective approach are overcome: subjective weighting methods are highly subjective, and it is difficult to fully express the effectiveness of the weights in evaluating the criteria, while objective weighting methods can easily lose key information due to the limited sample of measurement data [18]. Subsequently, fuzzy TOPSIS is used to obtain the ranking of the alternatives. The MACBETH approach was chosen because it provides additional tools for handling ambiguous, imprecise, or inadequate information, or the impossibility of giving precise values. In the literature, fuzzy TOPSIS is usually combined only with the subjective weighting methods AHP or fuzzy AHP to calculate the relative weights of the criteria [19], while fuzzy TOPSIS is used to rank the alternatives; examples of these combinations can be seen in Torfi et al. [20], Amiri [21], Sun [22], Kutlu and Ekmekçioğlu [23], Senthil et al. [24], Beikkhakhian et al. [25], Shaverdi et al. [26], Samanlioglu et al. [27], and Nojavan et al. [28]. This is because the strengths of the two methods are complementary: TOPSIS is used to compare and rank the alternatives, while AHP gives the weightings of the criteria based on consistency ratio analysis [29]. However, this study chose MACBETH instead of AHP because, although both methods use pairwise comparisons, the scales used by the decision maker to give judgements are different: AHP uses a 9-point ratio scale, whereas MACBETH uses an ordinal scale with six semantic categories. AHP uses an eigenvalue method for determining the weights, while MACBETH uses linear programming. AHP allows up to 10% inconsistency in the judgements given in each matrix, while MACBETH does not allow any inconsistency [30]. The main advantage of MACBETH is that it provides a complete methodology for ensuring accuracy in the weightings of the criteria, such as the reference levels and the definition of a descriptor associated with each criterion; it also gives the aforementioned tools for including doubts or incomplete knowledge of the decision maker. Furthermore, MACBETH has the advantage of creating quantitative measurement scales from qualitative judgements by linear programming.
It is difficult, but essential, to determine the most suitable MCDA method for any given problem [31], as none of the methods are perfect, nor can any one method be used for all decision problems [32]. This is an important question that is still being widely discussed in the literature, but to which no answer has yet been found. The reasons may be related to the large number of MCDA methods available, both those specific to certain areas and general-purpose methods [33]. Furthermore, different methods can give different results for the same problem [34], even when the same weights are applied to the criteria. One reason for this is that, at times, the alternatives are very similar and are close to each other. However, it may also come about because each MCDA technique can use weights in the calculations in a different way, because the algorithms are different, because the algorithms may scale the objectives, affecting the already-chosen weights, or because the algorithms introduce extra parameters which affect the classification [15]. Each method may, therefore, assign a different rating, depending on its exact working, and thus the final ranking can vary from one method to another [35]. Since the correct ranking is not known, and so cannot be compared to the results obtained, it is not possible to determine which method to choose [36]. The decision maker thus faces a paradox by which the choice of an MCDA method becomes a decision problem in itself [37,38]. It should also not be forgotten that MCDA methods include subjective information provided by the decision maker, such that a change of decision maker can lead to a change in the solution [32]. The literature agrees that a number of methods should be applied to the same problem, as in the literature review undertaken by Zavadskas et al. [39], which states that there is a significant number of publications applying comparative analyses of separate MCDA methods (see, for example, [40,41,42]). If all or most of the methods agree on the first-placed alternative, it may be concluded that this alternative is the most suitable; this, however, does not lead to conclusions about how the behaviour of the methods might be generalised. As a result, the choice of methodology and the framework for assessment of decisions are current and future lines of research [15,38]. Initially, Guitouni and Martel [43] produced guidelines for the selection of the most suitable MCDA method, and Zanakis et al. [34] used 12 measures of similarity of performance to compare the performance of eight MCDA methods. Subsequently, Ishizaka and Nemery [32] showed how analysing the required input information (data and parameters), the outcomes (choice, sorting, or partial or complete ranking), and their granularity can be an approach to choosing the appropriate MCDA method. Saaty and Ergu [38] proposed 16 criteria for evaluating a number of MCDA methods. More recently, Wątróbski et al. [15] set out a guideline for MCDA method selection, independently of the problem domain, taking into account the lack of knowledge about the description of the situation; an online application is also proposed to assist in making this choice at http://www.mcda.it/, accessed on 4 April 2021 [44].
Including a set of properties of the case study described in this paper, this application proposes, from a total of 56 MCDA methods, a group of suitable methods: fuzzy TOPSIS, fuzzy VIKOR, fuzzy AHP, fuzzy Analytic Network Process (ANP), fuzzy AHP + fuzzy TOPSIS, fuzzy ANP + fuzzy TOPSIS, PROMETHEE I, PROMETHEE II, ELECTRE III, ELECTRE TRI, ELECTRE IS, Organization, Rangement Et Synthese De Donnes Relationnelles (ORESTE), etc. The proposed model, combining MACBETH with fuzzy TOPSIS, will be compared with the ranking obtained from applying PROMETHEE II, ELECTRE III, and fuzzy VIKOR.
Shannon entropy was chosen as an objective weighting method because it is a method applied widely and with success in the literature and is a data-based weight-determination technique that computes optimal criteria weights based on the initial decision matrix. Therefore, its use is recognised as enhancing the reliability of results [45]. A comparison was made of the results obtained with objective fuzzy Shannon weights with those computed from fuzzy De Luca and Termini entropy and exponential Pal and Pal entropy.
TurningPoint, Socrative, Quizizz, Mentimeter, and Kahoot! are the apps assessed in this study because they are the ones most commonly used in teaching, they have a free version for the number of students signed up to the course analysed, and they do not place constraints on the number of questions that can be included in questionnaires [46].
The article is laid out as follows. Section 2 contains a review of the literature. Next, the fuzzy TOPSIS methodology is introduced. The model built is then described, with the structuring results, the subjective weights obtained via the MACBETH approach, and the objective weights computed by fuzzy Shannon entropy, fuzzy De Luca and Termini entropy, and exponential Pal and Pal entropy, with a prior introduction to all methods. The intermediate decision matrices resulting from applying fuzzy TOPSIS are shown below. Finally, the results, the validity of the proposed method, the sensitivity analysis, the conclusions, and future lines of work are set out.

2. Literature Review

There is increasingly powerful evidence of the favourable acceptance of gamification and its effectiveness in promoting highly engaging learning experiences [7,47,48]. For example, Hamari et al. [49], in their literature review, analysed 24 empirical studies in which gamification of education or learning was the most common field of application, and all the studies considered the learning results to be mostly positive, in terms of increased motivation and participation in learning activities and enjoyment. However, some studies bring out the negative effects of greater competition, difficulties in assessing tasks, and the importance of the design characteristics of the application on the results. In their literature review of 93 studies using Kahoot!, Wang and Tahir [50] show that gamification can have a beneficial effect on learning in K-12 and higher education, reducing student anxiety and giving favourable results for attention, confidence, concentration, engagement, enjoyment, motivation, perceived learning, and satisfaction. It also has positive effects from the point of view of teachers, such as increasing their own motivation, ease of use, support for training, assessment of the knowledge of students in real time, stimulating students to express their opinions in class, increasing class participation, or reducing the teacher’s workload. Nevertheless, it also states that there are studies that show little or no effect, and that things such as unreliable internet connections, questions and answers that are difficult to read on projector screens, the impossibility of changing the answers once they have been given, the time pressure to respond, or not having enough time or fear of losing are some of the problems mentioned by students. Licorish et al. [10] note, as a result of their experiment with Kahoot!, that the use of educational games probably minimises distractions and therefore improves the quality of teaching and learning beyond that which comes from traditional teaching methods. Zainuddin et al. [48], in their experiment with 94 students using Socrative, Quizizz, and iSpring Learn LMS, showed that its application was effective in assessing students’ learning performance, especially with the formative assessments after finishing each unit. Dell et al. [51] describe how the performance of students during the game shows a significant correlation with the marks for the course, and also see games as fun tools for reviewing course content which can serve as an effective method of determining students’ understanding, progress, and knowledge. Other authors, such as Knutas et al. [52], Iosup and Epema [53], Laskowski [54], and Dicheva et al. [55], also found improvements in the marks of the participants at all levels of the education system, especially university education [5]. Huang and Hew [56] suggest that university students in Hong Kong were more motivated to do activities using gamification outside class, while Huang et al. [57] worked with pre-degree students and concluded that the group with gamification-enhanced flipped learning was more likely to do pre-class and post-class activities on time, and achieved significantly better marks on the post-course test than those who did not use gamification.
Since the publication of the Gartner [58] and IEEE [59] reports predicting that most companies and organisations would be using gamification in the near future [60], gamification has become one of the fastest-growing industries worldwide, with multi-billion-dollar profits, and interest in gamification from the market and from users is still growing [61]. This increasing interest in gamification has led to the production of ever more applications aimed at different fields, such as advertising, commerce, education, environmental behaviour, enterprise resource planning, exercise, intra-organisational communication and activity, government services and public engagement, science, health, marketing, etc. [60]. Gamification applications for the education sector include: Padlet, Blinkist, BookWidgets, Brainscape, Breakout EDU, Cerebriti, Classcraft, Mentimeter, ClassDojo, Arcademics, Coursera, Minecraft: Education Edition, Duolingo, Toovari, Edmodo Gamification, Maven, Quizlet, Goose Chase, Knowre, Tinycards, Kahoot!, keySkillset, Khan Academy, Gimkit, Memrise, Pear Deck, Google Forms and Flubaroo, Play Brighter, Udemy, Quizizz, TEDEd, CodeCombat, The World Peace Game, Trivinet, SoloLearn, Class Realm, Yousician, Edpuzzle, Virtonomics, etc. [62,63,64,65,66]; additionally, a number of applications produced for a less commercial environment or more related to research include StudyAid [67], GamiCAD [68], WeBWorK [69], the online quiz system designed by Snyder and Hartig [70] aimed at medical residents, or the gamification plugin of Domínguez et al. [1] designed as an e-learning platform, and the number of teaching apps is expected to rise considerably in the future [71].
However, the many apps available on the market make it difficult to choose the most suitable one for a particular degree or course. Some studies offer guidance: Zainuddin et al. [7] provide a literature review of gamification in the educational domain and state that the platforms and apps most commonly used in research are ClassDojo and ClassBadges, Ribbonhero of Microsoft, Rain classroom, Quizbot, Duolingo, Kahoot! and Quizizz, Math Widgets, Google + Communities, and iSpring Learn LMS. Acuña [72] says that FlipQuiz, Quizizz, Socrative, Kahoot, and uLearn Play are the five best applications for university students. Roger et al. [73] state that Kahoot! and Socrative are the two applications most commonly used in teaching, while Plump and LaRosa [74] say that Kahoot! is the most used gamification app, with more than 70 million users [50]. In the statistical study carried out by Göksün and Gürsoy [65], the activities gamified with Kahoot! had a more positive impact on academic performance and student engagement when compared with a control group and another group that did activities with Quizizz; the impact of the activities carried out with Quizizz was lower than that of the instruction method used with the control group, both in academic performance and in student engagement. The study in [9] points out that the use of TurningPoint in the university subject pharmacology improves the performance of students by increasing their participation in class and consolidating the knowledge provided by the teacher, as well as allowing the teacher to know which aspects of the class should be better explained before taking the concepts as known. TurningPoint has also been used in the Faculty of Economics of the University of Valencia in different subjects and teaching sessions, with the result that 82.8% of students consider that its use in class is useful for the development and understanding of the subject; in addition, the participation of the attendees increased, since more than 90% of them used the tool, and the interaction between the audience and the speakers increased notably [75]. Gokbulut [76] notes that Mentimeter, like Kahoot!, actively engages students in classroom activities and makes learning enjoyable. However, in Mentimeter, the personal information of the student is not collected or displayed on the teacher’s screen, so participation in class increases and students feel more comfortable, especially those who are less likely to participate due to the influence of cultural factors, gender, shyness, anxiety, or other factors such as speech impediments [77]. In the study carried out in [77], 68% of the students who answered indicated that Mentimeter did not increase learning, but other students, across disciplinary areas, stated that Mentimeter improved content retention and that most of the students increased their learning. A model is thus required that uses multiple criteria, both objective and based on the perceptions of teachers and students, to facilitate decision-making in this field.
Contributions in the literature related to the selection of apps in different fields are very few; for example, Basilico et al. [78] analyse mobile apps for diabetes self-care, because the large number makes it difficult for patients who have no tools for judgement to assess them properly. A pictorial identification scheme was developed for diabetes self-care tools, which identifies the strengths and weaknesses of a diabetes self-care app. Similarly, in the area of diabetes treatment, Krishnan and Selvam [79] use multiple regression analysis to identify success factors in diabetes smartphone apps. Mao et al. [80] propose a behavioural change technique based on an mHealth app recommendation method to choose the most suitable mHealth apps for users. They do this by codifying information on the behavioural change techniques included in each mHealth app and, in a similar way, for each user. They next developed a prediction model which, together with the AdaBoost algorithm, relates behavioural change techniques to a possible user; this then recommends the app with the highest level of matching between its behavioural change techniques and those of the user. Păsărelu et al. [81] identify 109 apps that analyse assessment, treatment, or both in attention-deficit/hyperactivity disorder. The following information was collected for each app: target population, confidentiality, available language besides English, cost, number of downloads, category, ratings, main purpose, and empirical support and type of developer. Descriptive statistics were produced for each of these categories. Robillard et al. [82] assess mental health apps in terms of availability, readability, and the privacy-related content of the privacy policies and terms of agreement. In a field other than medicine, Beck et al. [83] identify 57 apps out of a total of 2400 that target direct energy use and include an element of gamification; the apps are then assessed statistically in the categories: gamification components, game elements, and behavioural constructs.

3. Fuzzy TOPSIS

TOPSIS was developed by Hwang and Yoon [84] as a method of choosing the alternative with the shortest distance to a positive ideal solution (PIS) and the longest distance to a negative ideal solution (NIS). While the PIS is the solution preferred by the decision maker, which maximises the benefit-type criteria and minimises the cost-type criteria, the NIS acts in the opposite way. TOPSIS provides a cardinal ranking of the alternatives according to the shortest distance to the PIS and the greatest distance to the NIS. It does not require the attributes to be independent [85,86]. A broad literature review including 266 studies up to the year 2012 can be seen in Behzadian et al. [87].
Subsequently, Chen [88] adapted TOPSIS to the fuzzy environment. Fuzzy TOPSIS has been widely and successfully used in real-world decision problems [19]. Some examples of these applications can be seen in the literature reviews carried out by Salih et al. [89] and Palczewski and Sałabun [19].
Bottani and Rizzi [90] and Asuquo [17] explain the advantages of choosing fuzzy TOPSIS as a multicriteria technique:
  • It is easy to understand;
  • It is a realistic compensatory method that can include or exclude alternatives based on hard cut-offs;
  • It is easy to add more criteria without the need to start again;
  • The mathematical notions behind fuzzy TOPSIS are simple.
However, TOPSIS and fuzzy TOPSIS have some disadvantages, such as rank reversal [91]: that is, the ranking of the alternatives changes when an alternative is added to or removed from the hierarchy, so the validity of the method could be called into question. Furthermore, in fuzzy TOPSIS, the problems are related to the fact that there are no consistency and reliability checks; these aspects are highly relevant in decision-making, and their absence may lead to misleading results [92]. The assessment of alternatives is also carried out through linguistic expressions, in which the linguistic terms must be quantified within a previously established value scale. The quantifying of qualitative values generally involves translating the standard linguistic terms into values on a previously agreed scale. Therefore, to address problems defined in this way, the uncertain information given by the linguistic terms must be taken into account [93].
Zadeh [94] proposed fuzzy set theory to formulate real decision problems in which alternative ratings and criteria weights cannot be precisely defined, due to the existence of: unquantifiable information, incomplete information, unobtainable information, and partial ignorance [95].
A Triangular Fuzzy Number (TFN) $\tilde{A}$ can be defined as a triplet $(l, m, u)$ with a membership function $\mu_{\tilde{A}}(x): \mathbb{R} \rightarrow [0, 1]$, as shown in Equation (1) [88]:
$$\mu_{\tilde{A}}(x) = \begin{cases} 0, & x < l \\ (x - l)/(m - l), & l \le x \le m \\ (u - x)/(u - m), & m \le x \le u \\ 0, & x > u \end{cases} \quad (1)$$
where $l \le m \le u$; $l$ and $u$ are the lower and upper values of the fuzzy number $\tilde{A}$, and $m$ is the modal value (see Figure 1).
Let $\tilde{A} = (l_1, m_1, u_1)$ and $\tilde{B} = (l_2, m_2, u_2)$ be two TFNs; the operational laws of these triangular fuzzy numbers are as follows [96]:
$$\tilde{A} \oplus \tilde{B} = (l_1 + l_2,\; m_1 + m_2,\; u_1 + u_2) \quad (2)$$
$$\tilde{A} \ominus \tilde{B} = (l_1 - u_2,\; m_1 - m_2,\; u_1 - l_2) \quad (3)$$
$$\tilde{A} \otimes \tilde{B} \approx (l_1 l_2,\; m_1 m_2,\; u_1 u_2) \quad (4)$$
$$\tilde{A} \oslash \tilde{B} \approx (l_1 / u_2,\; m_1 / m_2,\; u_1 / l_2) \quad (5)$$
$$\tilde{A}^{-1} \approx (1/u_1,\; 1/m_1,\; 1/l_1), \quad \text{for } l_1, m_1, u_1 > 0 \quad (6)$$
$$k\tilde{A} \approx (k l_1,\; k m_1,\; k u_1), \quad k > 0,\; k \in \mathbb{R} \quad (7)$$
The distance between two TFNs $\tilde{A}$ and $\tilde{B}$, $d(\tilde{A}, \tilde{B})$, according to the vertex method established in Chen [88], is calculated by Equation (8):
$$d(\tilde{A}, \tilde{B}) = \sqrt{\tfrac{1}{3}\left[(l_1 - l_2)^2 + (m_1 - m_2)^2 + (u_1 - u_2)^2\right]} \quad (8)$$
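To make the fuzzy arithmetic above concrete, the following is a minimal Python sketch of a triangular fuzzy number with the addition, multiplication, scalar product, and vertex-distance operations of Equations (2), (4), (7), and (8); the names `TFN` and `vertex_distance` are illustrative and do not come from the paper.

```python
from dataclasses import dataclass
from math import sqrt


@dataclass(frozen=True)
class TFN:
    """Triangular fuzzy number (l, m, u) with l <= m <= u."""
    l: float
    m: float
    u: float

    def __add__(self, other: "TFN") -> "TFN":
        # Fuzzy addition, Eq. (2)
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def __mul__(self, other):
        # Fuzzy multiplication, Eq. (4), or product by a positive scalar k, Eq. (7)
        if isinstance(other, TFN):
            return TFN(self.l * other.l, self.m * other.m, self.u * other.u)
        return TFN(other * self.l, other * self.m, other * self.u)


def vertex_distance(a: TFN, b: TFN) -> float:
    """Vertex distance between two TFNs, Eq. (8)."""
    return sqrt(((a.l - b.l) ** 2 + (a.m - b.m) ** 2 + (a.u - b.u) ** 2) / 3)


# Example: d((1, 2, 3), (2, 3, 4)) = 1.0
print(vertex_distance(TFN(1, 2, 3), TFN(2, 3, 4)))
```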
In a decision problem with criteria ( C 1 ,   C 2 ,   ,   C n ) and alternatives ( A 1 ,   A 2 ,   ,   A m ) , the best alternative in fuzzy TOPSIS should have the shortest distance to a fuzzy positive ideal solution (FPIS) and the farthest distance from a fuzzy negative ideal solution (FNIS). The FPIS is computed using the best performance values for each criterion and the FNIS is generated from the worst performance values.
In fuzzy TOPSIS, the criteria should satisfy one of the following conditions to ensure that they are monotonic [17]:
  • As the value of the variable increases, the other variables will also increase;
  • As the value of the variable increases, the other variables decrease.
Monotonic criteria can be classified into benefit or cost type. A criterion can be classified as of benefit type if, the more desirable the alternative, the higher the score of the criterion. On the other hand, cost type criteria will classify the alternative as less desirable the higher its value in that criterion.
In fuzzy TOPSIS, the decision makers use linguistic variables to obtain the weightings of the criteria and the ratings of the alternatives. If there is a decision group made up of k individuals, the fuzzy weight and rating of the kth decision maker with respect to the ith alternative in the jth criterion are respectively:
$$\tilde{w}_j^k = (w_{j1}^k,\; w_{j2}^k,\; w_{j3}^k) \quad (9)$$
$$\tilde{x}_{ij}^k = (a_{ij}^k,\; b_{ij}^k,\; c_{ij}^k) \quad (10)$$
where $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$.
The aggregate fuzzy weight $\tilde{w}_j$ of each criterion given by the $K$ decision makers is calculated using Equation (11):
$$\tilde{w}_j = \frac{1}{K}\left(\tilde{w}_j^1 \oplus \tilde{w}_j^2 \oplus \cdots \oplus \tilde{w}_j^K\right) \quad (11)$$
Equation (12) is used to calculate the aggregate ratings of the alternatives [97]:
$$\tilde{x}_{ij} = \frac{1}{K}\left(\tilde{x}_{ij}^1 \oplus \tilde{x}_{ij}^2 \oplus \cdots \oplus \tilde{x}_{ij}^K\right) \quad (12)$$
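As a brief illustration of Equations (11) and (12), the sketch below averages the triples given by several decision makers component-wise; it works on plain `(l, m, u)` tuples, and the function name is hypothetical.

```python
def aggregate(tfns):
    """Component-wise average of a list of (l, m, u) triples, as in Eqs. (11) and (12)."""
    K = len(tfns)
    ls, ms, us = zip(*tfns)
    return (sum(ls) / K, sum(ms) / K, sum(us) / K)


# Two decision makers rate one alternative on one criterion:
print(aggregate([(0.5, 0.7, 0.9), (0.7, 0.9, 1.0)]))  # -> (0.6, 0.8, 0.95)
```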
A fuzzy multicriteria decision-making problem can be expressed in matrix form as is shown in Equation (13) [88]:
$$\tilde{D} = \begin{bmatrix} \tilde{x}_{11} & \tilde{x}_{12} & \cdots & \tilde{x}_{1n} \\ \tilde{x}_{21} & \tilde{x}_{22} & \cdots & \tilde{x}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{x}_{m1} & \tilde{x}_{m2} & \cdots & \tilde{x}_{mn} \end{bmatrix} \quad (13)$$
$$\tilde{W} = (\tilde{w}_1,\; \tilde{w}_2,\; \ldots,\; \tilde{w}_n) \quad (14)$$
with $\tilde{w}_j$ and $\tilde{x}_{ij}$ linguistic variables described by triangular fuzzy numbers.
The weightings of the criteria can be calculated by directly assigning the following linguistic variables:
Very low = (0, 0, 0.1)
Low = (0, 0.1, 0.3)
Medium low = (0.1, 0.3, 0.5)
Medium = (0.3, 0.5, 0.7)
Medium high = (0.5, 0.7, 0.9)
High = (0.7, 0.9, 1.0)
Very high = (0.9, 1.0, 1.0)
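For reference, this scale can be written down directly as a small lookup table; the following Python mapping (the name `WEIGHT_SCALE` is illustrative) simply restates the seven TFNs above.

```python
# Illustrative lookup table for the linguistic weighting scale above.
WEIGHT_SCALE = {
    "very low":    (0.0, 0.0, 0.1),
    "low":         (0.0, 0.1, 0.3),
    "medium low":  (0.1, 0.3, 0.5),
    "medium":      (0.3, 0.5, 0.7),
    "medium high": (0.5, 0.7, 0.9),
    "high":        (0.7, 0.9, 1.0),
    "very high":   (0.9, 1.0, 1.0),
}
```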
The ratings of the alternatives are found using the linguistic variables of Table 1 [88].
The linear scale transformation is used to transform the various criteria scales into a comparable scale, which ensures compatibility between the assessments of the criteria and the linguistic ratings of the subjective criteria [98]. Thus, the normalised fuzzy decision matrix $\tilde{R}$ is obtained:
$$\tilde{R} = [\tilde{r}_{ij}]_{m \times n}, \quad i = 1, 2, \ldots, m;\; j = 1, 2, \ldots, n \quad (15)$$
where $\tilde{r}_{ij} = \left(\dfrac{l_{ij}}{u_j^+},\; \dfrac{m_{ij}}{u_j^+},\; \dfrac{u_{ij}}{u_j^+}\right)$ and $u_j^+ = \max_i u_{ij}$ in the case of benefit-type criteria, and $\tilde{r}_{ij} = \left(\dfrac{l_j^-}{u_{ij}},\; \dfrac{l_j^-}{m_{ij}},\; \dfrac{l_j^-}{l_{ij}}\right)$ and $l_j^- = \min_i l_{ij}$ in the case of cost-type criteria.
Next, the weighted normalised decision matrix $\tilde{V}$ is calculated by multiplying the weightings of the criteria $\tilde{w}_j$ by the elements $\tilde{r}_{ij}$ of the normalised fuzzy decision matrix:
$$\tilde{V} = [\tilde{v}_{ij}]_{m \times n}, \quad \text{where } \tilde{v}_{ij} = \tilde{r}_{ij} \otimes \tilde{w}_j$$
A positive ideal point $A^+$ and a negative ideal point $A^-$ should be defined using the following equations [99]:
$$A^+ = \{\tilde{v}_1^+, \ldots, \tilde{v}_n^+\}, \quad \text{where } \tilde{v}_j^+ = \begin{cases} \max_i(\tilde{v}_{ij}) & \text{if } j \in J \\ \min_i(\tilde{v}_{ij}) & \text{if } j \in J' \end{cases}, \quad j = 1, 2, \ldots, n \quad (16)$$
$$A^- = \{\tilde{v}_1^-, \ldots, \tilde{v}_n^-\}, \quad \text{where } \tilde{v}_j^- = \begin{cases} \min_i(\tilde{v}_{ij}) & \text{if } j \in J \\ \max_i(\tilde{v}_{ij}) & \text{if } j \in J' \end{cases}, \quad j = 1, 2, \ldots, n \quad (17)$$
where $J$ is the set of benefit-type criteria and $J'$ the set of cost-type criteria. The Euclidean distances $d_i^+$ and $d_i^-$ of each weighted alternative from the FPIS ($A^+$) and the FNIS ($A^-$) are computed using Equations (18) and (19) [100]:
$$d_i^+ = \left\{\sum_{j=1}^{n}\left(\tilde{v}_{ij} - \tilde{v}_j^+\right)^2\right\}^{1/2}, \quad i = 1, 2, \ldots, m \quad (18)$$
$$d_i^- = \left\{\sum_{j=1}^{n}\left(\tilde{v}_{ij} - \tilde{v}_j^-\right)^2\right\}^{1/2}, \quad i = 1, 2, \ldots, m \quad (19)$$
Finally, the closeness coefficient $CC_i$ of each alternative $i$ is calculated using Equation (20) [88]:
$$CC_i = \frac{d_i^-}{d_i^+ + d_i^-} \quad (20)$$
The ranking of alternatives is obtained by considering that an alternative is closer to the FPIS and further from the FNIS as $CC_i$ approaches 1. $CC_i$ is the fuzzy satisfaction degree of the $i$th alternative, and $d_i^+/(d_i^+ + d_i^-)$ is considered to be the fuzzy gap degree of the $i$th alternative.
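Putting the steps of this section together, the following is a minimal Python/NumPy sketch of the fuzzy TOPSIS ranking for benefit-type criteria only; it assumes component-wise FPIS/FNIS and uses the vertex distance of Equation (8) per criterion rather than the aggregate norm of Equations (18) and (19), and all names (`fuzzy_topsis`, the array shapes, the toy data) are illustrative rather than taken from the paper.

```python
import numpy as np


def fuzzy_topsis(ratings, weights):
    """Rank alternatives with fuzzy TOPSIS (benefit-type criteria only).

    ratings: array of shape (m, n, 3), TFNs (l, m, u) for m alternatives and n criteria.
    weights: array of shape (n, 3), fuzzy criteria weights.
    Returns the closeness coefficients CC_i (larger is better).
    """
    ratings = np.asarray(ratings, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # Linear-scale normalisation for benefit criteria: divide by the largest upper value u_j^+.
    u_plus = ratings[:, :, 2].max(axis=0)
    r = ratings / u_plus[None, :, None]

    # Weighted normalised matrix, v_ij = r_ij (x) w_j.
    v = r * weights[None, :, :]

    # FPIS and FNIS taken component-wise over the alternatives (a common simplification).
    a_pos = v.max(axis=0)
    a_neg = v.min(axis=0)

    # Distances to FPIS/FNIS, using the vertex distance of Eq. (8) per criterion.
    d_pos = np.sqrt(((v - a_pos[None]) ** 2).mean(axis=2)).sum(axis=1)
    d_neg = np.sqrt(((v - a_neg[None]) ** 2).mean(axis=2)).sum(axis=1)

    # Closeness coefficient, Eq. (20).
    return d_neg / (d_pos + d_neg)


# Toy example: two alternatives rated on two benefit criteria.
ratings = [[(3, 5, 7), (7, 9, 10)], [(5, 7, 9), (5, 7, 9)]]
weights = [(0.3, 0.5, 0.7), (0.5, 0.7, 0.9)]
print(fuzzy_topsis(ratings, weights))
```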

4. Fuzzy TOPSIS Model Combining MACBETH and Fuzzy Shannon Entropy to Select a Gamification App

Figure 2 shows the flow diagram for this research.
This section firstly describes the structuring process, which allows the problem hierarchy to be built, then the subjective and objective weighting.

4.1. Structuring

The criteria used in this study are original and specific to the Manufacturing Systems and Industrial Organisation course, taught in the second-year degree programmes of Electrical Engineering and Industrial and Automatic Electronic Engineering (jointly) at the Higher Technical School of Industrial Engineering at the Ciudad Real campus of the University of Castilla-La Mancha (Spain).
The Manufacturing Systems and Industrial Organisation course has a large number of students registered, typically between 60 and 80 each year. For the purposes of the gamification experiment, the class was divided into two practical groups, so the number of students in each experiment was half that figure. Since only free versions of the gamification apps could be used, neither cost nor criteria related to sales conditions, such as Price, Market program, Contract terms, or Warranty—typical of other software selection models such as Kim [12], based on criteria used in the selection of Business Information Systems [101] and Information Security Management Systems [12]—have been considered, as they do not apply to the specific field of gamification in university teaching under the conditions previously laid out. Rajak and Shaw [13] use the following criteria to choose an mHealth application: user satisfaction, compatibility, functionality, security, accessibility, ease of learning and use, empathy, information quality, and responsiveness. This set of criteria is also not used in this study because, for example, the security of the information contained in gamification apps for teaching is not as critical as that of health data.
Therefore, after analysing the literature on gamification—as well as the international standard ISO/IEC 9126 [102] on evaluation of software quality, which takes the following factors into account: functionality, reliability, usability, efficiency, maintainability, and portability, as well as results of direct experiments on the use of gamification in the classroom with the alternatives assessed—the following decision criteria were established:
  • Capacity to combine with other methodologies or novel teaching tools (C1). The possibility of using weak Just In Time Teaching (JITT) was considered, with the consequent need for a prior study (open questions about this prior study, questions about what is understood and what is not, and about material to be revised or supporting activities to be provided), and a strong JITT (closed, direct questions to check knowledge of contents). Peer Instruction was also considered, in which, after the teacher has explained the concept, students must answer a series of multiple-choice or yes/no questions, with the aim of examining the understanding of the fundamental concepts of the subject. In Flipped Classroom, certain learning processes are moved out of the classroom, using class time to facilitate participation of students in active learning through questions, discussions, and applied activities to encourage exploration, articulation, and application of these ideas. The possibility of connecting the questionnaires with other tools—for example, Google Classroom—was also looked at;
  • Academic performance (C2). The capacity of the app to improve academic performance is also assessed. The following studies were considered in assessing the alternatives: Göksün and Gürsoy [65] about the academic results of applying Kahoot! and Quizizz, the literature review of Wang and Tahir [50] on the effects of Kahoot!, and Zainuddin et al. [48], who compare Quizizz, Socrative, and iSpring Learn LMS. The results obtained in [75] are used for TurningPoint, and those of [77] for Mentimeter;
  • Flexibility in the creation of questionnaires (C3). Ease in the creation of questionnaires was assessed, as well as the options for editing, duplicating, and downloading questionnaires in widely used formats such as pdf, or importing questionnaires and developing a questionnaire from the questions of others. Furthermore, the flexibility of the applications to include different types of questions (multiple choice, true/false, short questions), including mathematical equations in the questions, number of answers, and correct answers associated with each question, the possibility of including questions with images or video, or feedback to student questions, or providing teachers’ explanations of the correct answers, was also assessed;
  • Students’ perceptions (C4). The perception students have of the application with respect to motivation, engagement, concentration, perceived learning, attention, enjoyment, satisfaction, interest, enthusiasm, curiosity, and confidence was explored. The studies of Wang and Tahir [50] and Zainuddin et al. [48], among others, were used to assess the alternatives. The motivation provided by an app can be considered to be correlated with the quantity and quality of the gamification elements in the games, including memes, designing an avatar or choosing between those available, the possibility of adding images, getting rewards or bonuses, embedding YouTube videos, adding music to the questions or while completing the questionnaire, or showing the final ranking of participants;
  • Results reports (C5). The capacity to have reports in Excel files with aggregated data, by student and by question, as well as showing participants’ results during the game. The option of hiding names of participants in real time was also assessed, since in some cases, students prefer results to be anonymous for their classmates. The possibility and ease of sending the scores to students’ parents was also analysed;
  • Versatility in assessment of the questionnaire (C6). Versatility in assigning a score to each question was considered, as well as the option to count or not count the time used in answering, or just counting the number of right answers. The more options an app has for scoring the questionnaire, the more versatile the questionnaire is held to be, and the worst case is when it only counts the number of right answers;
  • Capacity for group competition (C7). The option of interacting by teams in different modes in the development of gamification experiences such as team or blackboard mode, as well as allowing random or predefined team set-up, was analysed;
  • Ease of use (C8). Our study analysed the need for students to install the app or to register to access the game, as well as the versatility of the application in class on a range of devices, and the need for auxiliary devices, such as overhead projectors. It also considered the consumption of Internet resources, since some applications might consume more resources than others, slowing the game down;
  • Support (C9). The existence and quality of the support offered for the app, the number of publicly available questionnaires on the platform, and the number of active forums for exchanging experiences, doubts about solutions, and information about the app were assessed;
  • Control of learning rate (C10). Each question on the questionnaire may take a different time to complete, which might change even from year to year, and so it is useful for the teacher to control the rate of activity, and so, of learning. This criterion assesses the capacity of the teacher and/or student to control the rate according to a predetermined time limit, unlimited time, and repeating the task as often as necessary.
The hierarchy of the model is shown in Figure 3.

4.2. Weighting

4.2.1. Subjective Weights

The subjective weights are generated from the judgements of decision makers to which mathematical methods such as the eigenvector method, weighted least squares method, Delphi method, mathematical programming models, etc., are applied [103].
The MACBETH approach [104] is a complete multicriteria methodology that only requires qualitative judgements provided by a decision maker or group to obtain a quantitative evaluation of the alternatives. The theoretical foundations can be seen in [105,106]; examples of real-world applications may be consulted in [107,108,109,110,111,112,113,114,115,116].
MACBETH uses an additive value aggregation model, chosen due to the advantages of this type of model [117]: it is simple to apply and well known, its technical parameters are clear and easy to explain, and it facilitates precise analysis of a complex problem while avoiding the difficulties presented by ordinal aggregation.
The MACBETH approach is an interactive methodology that facilitates objective decision-making through an exhaustive procedure which other techniques do not have: the definition of indicators associated with each criterion, the assignment of reference levels to the scale levels of each descriptor, and the construction of value functions that measure attractiveness on a scale from 0 to 100. This ensures a comparison of the criteria on a common scale, verification of the assigned values and results for each alternative, and consistency in the judgements given.
The application first requires assessment criteria to be defined and structured into a value tree. For each criterion or fundamental point of view, a descriptor must be defined which identifies two reference levels of the scale. The decision maker makes pairwise comparisons between the scale levels of each descriptor, and between the criteria, based on difference of attractiveness using the semantic scales shown in Table 2.
The judgements of the decision maker obtained by pairwise comparisons are transformed into a MACBETH scale by linear programming. Let $v(x)$ denote the score assigned to an option $x$ of $X$, where $x^+$ is an option that is at least as attractive as any other option of $X$ and $x^-$ is an option that is at most as attractive as any other option of $X$ [117]; the linear program applied is:
$$\text{Min } \left[ v(x^+) - v(x^-) \right]$$
subject to:
$$v(x^-) = 0 \ (\text{arbitrarily assigned})$$
$$\forall (x, y) \in C_0: \quad v(x) - v(y) = 0$$
$$\forall (x, y) \in C_i \cup \cdots \cup C_s, \ \text{with } i, s \in \{1, 2, 3, 4, 5, 6\} \text{ and } i \le s: \quad v(x) - v(y) \ge i$$
$$\forall (x, y) \in C_i \cup \cdots \cup C_s \ \text{and } \forall (w, z) \in C_{i'} \cup \cdots \cup C_{s'},$$
$$\text{with } i, s, i', s' \in \{1, 2, 3, 4, 5, 6\}, \ i \le s, \ i' \le s', \text{ and } i > s': \quad v(x) - v(y) \ge v(w) - v(z) + i - s'$$
where $C_1, \ldots, C_6$ denote the sets of pairs judged in the corresponding semantic categories of Table 2, and $C_0$ denotes a null difference of attractiveness.
If this linear program is unfeasible, the judgements are considered inconsistent. If it is feasible, multiple optimal solutions may exist. In this case, the mean is given as the MACBETH scale [118].
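As an illustration only, the sketch below solves a heavily simplified version of this linear program with `scipy.optimize.linprog`: each judged pair contributes only the lower-bound constraint $v(x) - v(y) \ge i$, while the cross-pair constraints, judgement ranges, and the averaging over multiple optima performed by M-MACBETH are omitted; the function name, the data format, and the four-level example are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog


def macbeth_scale(n, judgements):
    """Simplified MACBETH-style value scale for n options, where option 0 is the
    most attractive and option n-1 the least attractive.

    judgements: dict {(x, y): category} meaning x is more attractive than y with
    semantic category 1 (very weak) ... 6 (extreme). Only the lower-bound
    constraints v(x) - v(y) >= category are modelled here.
    """
    c = np.zeros(n)
    c[0], c[n - 1] = 1.0, -1.0                    # minimise v(x+) - v(x-)

    A_ub, b_ub = [], []
    for (x, y), cat in judgements.items():
        row = np.zeros(n)
        row[x], row[y] = -1.0, 1.0                # -v(x) + v(y) <= -category
        A_ub.append(row)
        b_ub.append(-float(cat))

    A_eq = np.zeros((1, n))
    A_eq[0, n - 1] = 1.0                          # v(x-) = 0 (arbitrarily assigned)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0],
                  bounds=[(0, None)] * n, method="highs")
    if not res.success:
        raise ValueError("judgements are inconsistent (the LP is infeasible)")
    return 100 * res.x / res.x[0]                 # rescale so the best level scores 100


# Hypothetical descriptor with four levels and single-category judgements.
print(macbeth_scale(4, {(0, 1): 2, (1, 2): 2, (2, 3): 3, (0, 3): 5}))
```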
The M-MACBETH software, which supports the MACBETH approach, is described in Bana e Costa et al. [119] (a demo and a user’s guide can be downloaded in http://m-macbeth.com/demo/, accessed on 4 April 2021) [120]. M-MACBETH performs a consistency check on the judgements given by the decision maker, and may suggest improvements in the judgements to guarantee consistency. Therefore, unlike other multicriteria methods, MACBETH does not allow any inconsistency [30].
MACBETH requires a descriptor to be defined for each criterion to be assessed. A descriptor is an ordered set of possible impact levels associated with a criterion, to objectively describe the impact of the alternatives with respect to that criterion [105]. The greater the objectivity with which the descriptor is created, the lower the ambiguity, and the better the model will be understood and accepted. Two of the levels of the descriptor are considered for reference and are called Good, considered by the decision maker to be undoubtedly satisfactory, and Neutral, if the decision maker considers a level neither satisfactory nor unsatisfactory. This has been checked by experiment, which significantly adds to the intelligibility of the criterion [106]. In the decision-making problem analysed in this study, almost all the descriptors were built by combining various basic interrelated qualitative features, making multi-dimensional descriptors [106]. As an example, Table 3 shows the descriptor created for the Academic performance criterion with the identified reference levels Good and Neutral. Other descriptors have been built for the other criteria.
To compute the subjective weights with the MACBETH approach, it is first necessary to generate the value functions for each criterion. The lecturer provided judgements between the scale levels of each descriptor using the MACBETH semantic categories or a range of two or more adjacent categories, which are shown in Table 2 [30]. When the difference in attractiveness between scale levels cannot be determined exactly, a positive category can be used. This feature of MACBETH is very useful for reflecting uncertainty by the decision maker in giving the judgements, and strengthens the fuzzy logic within fuzzy TOPSIS. The MACBETH questioning procedure should be used to complete the judgement matrix; this is done firstly by comparing the most attractive level of each descriptor with the least attractive, followed by the second most attractive with the least attractive, and so on. The most attractive level was next compared with the other options in decreasing order of attractiveness, and then the judgements making up the diagonal border of the upper triangular portion of the matrix are completed; finally, the remaining judgements from the upper diagonal are given [121]. For example, in the Academic performance criterion, the decision maker gave the judgements shown in Figure 3. The reference level Good is shown in green (L21) and the Neutral in blue (L24). The Figure shows that, when level L21 is compared with level L22, the decision maker hesitates between assigning very weak or weak, and so the range very weak–weak is assigned; it also shows the range of judgements when comparing L21 with L23 (weak–moderate) or L21 with L24 (moderate–strong). All the judgements given are consistent.
By linear programming, the M-MACBETH software creates a value function that associates a value 100 to the level Good and the value 0 to the Neutral level. As an example, Figure 4 shows the value function obtained for the Academic performance criterion. A similar process is applied to the other criteria, giving, in all cases, consistent judgement matrices and their respective value functions. The resulting value functions should be checked by the decision maker, to ensure that they properly represent the relative magnitude of the decision maker’s judgements [121].
To complete the weighting process between the criteria, an additional alternative must be created that includes all the criteria at the Neutral level in all the descriptors. The decision maker should give the judgements using the MACBETH semantic categories which evaluate the increase in attractiveness due to a change from the Neutral level to the Good level in one of its descriptors. This allows the criteria to be ranked from greatest to least attractiveness. Then, the most attractive swing will be compared to the second most attractive swing, and the most attractive swing with the third most attractive swing, and so on. This process continues row by row until the matrix is complete [121]. Figure 5 shows the MACBETH judgement matrix with the judgements given by the teacher of the subject. It can be seen that all the judgements are consistent.
Using the judgements, M-MACBETH computes the weightings associated with each criterion, giving values with the percentages shown in the bar graph in Figure 6. The red vertical line shows the range of weighting values compatible with the judgements of the decision maker [106]. These ranges are those used as thresholds (maximum and minimum values) of the fuzzy numbers, using as a modal value the value assigned and checked by the decision maker. In this way, the weightings obtained by MACBETH, converted into per units as TFN, are shown in Table 4.
The lecturer justified the judgements given to obtain the weightings by stating that the Capacity for team competition, though useful, especially in the early years of the degree, would be used less often than individual competition. With regard to Support, although it is important to have questionnaires to copy or edit, or to have information to assist in creating questionnaires or making the most of the utilities or combining them with the rest of the teaching in the course, the teacher felt that as it was very specific material, he would prefer to produce his own questionnaires rather than copy or adapt others.
On the other hand, Academic performance was felt to be the most important criterion, since the final goal of gamification is to improve academic results; he also felt that Students’ perceptions in respect of student motivation was very important, as it could provide a significant incentive to study difficult or dull material.
The lecturer also remarked that if another course or degree programme were to be assessed, even within the field of Engineering, the weighting would change.

4.2.2. Objective Weights

Objective weights are obtained from mathematical models, for example, the entropy method, the CRiteria Importance Through Intercriteria Correlation (CRITIC) method, statistical variance, principal element analysis, multiple objective programming, etc., without any consideration of the decision maker’s judgements. The objective weights are especially applicable in decision problems where reliable subjective weights cannot be obtained [122].
The concept of information entropy was introduced by Shannon [123]. Information entropy is the measurement of the level of disorder of a system, but it can also measure the amount of useful information contained in the data. When the difference in the values between alternatives in the same criterion is high, the entropy is small, indicating that this criterion provides a lot of useful information and, therefore, the weight of this criterion should be high. However, if the difference is small and the entropy is therefore high, the weighting of that criterion should be small. That is, a broad distribution includes more uncertainty than a more sharply spiked one [122]. Therefore, the entropy theory is an objective method of weight determination [124] because the criteria weights are obtained directly from the performance matrix using an unbiased procedure.
Shannon developed the following three properties for the entropy measure H, for all p i within the estimated joint probability distribution P [103]:
  • H is a continuous positive function;
  • If all p i are equal, p i = 1 / n , then H should be a monotonic increasing function of n;
  • For all $n \ge 2$: $H(p_1, p_2, \ldots, p_n) = H(p_1 + p_2, p_3, \ldots, p_n) + (p_1 + p_2)\, H\!\left(\dfrac{p_1}{p_1 + p_2}, \dfrac{p_2}{p_1 + p_2}\right)$.
Shannon showed that the only function satisfying these properties is:
$$H(x) = -k \sum_{i=1}^{n} p(x_i) \log p(x_i) \quad (21)$$
where $k = (\ln(m))^{-1}$ and $x = (x_1, x_2, \ldots, x_n)$ is a discrete random variable taking values in a finite set, with probabilities $p(x_i)$.
Suppose there is a fuzzy decision matrix $\tilde{X}$ resulting from evaluating $m$ alternatives on $n$ criteria, whose elements $\tilde{x}_{ij} = (l_{ij}, m_{ij}, u_{ij})$ are the ratings of the $i$th alternative with respect to the $j$th criterion, as shown in Equation (22):
$$\tilde{X} = \begin{bmatrix} \tilde{x}_{11} & \tilde{x}_{12} & \cdots & \tilde{x}_{1n} \\ \tilde{x}_{21} & \tilde{x}_{22} & \cdots & \tilde{x}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{x}_{m1} & \tilde{x}_{m2} & \cdots & \tilde{x}_{mn} \end{bmatrix} \quad (22)$$
The objective criteria weights were computed using the procedure described in [125] (a code sketch of this procedure is given after the list):
  • Calculate the normalised fuzzy decision matrix $\tilde{Z} = (\tilde{z}_{ij})$ via:
    $$\tilde{z}_{ij} = \left(\frac{l_{ij}}{\sum_{i=1}^{m} l_{ij}},\; \frac{m_{ij}}{\sum_{i=1}^{m} m_{ij}},\; \frac{u_{ij}}{\sum_{i=1}^{m} u_{ij}}\right) \quad (23)$$
  • Calculate the fuzzy Shannon entropy vector $\tilde{e} = (\tilde{e}_1, \tilde{e}_2, \ldots, \tilde{e}_n)$ from Equation (24):
    $$\tilde{e}_j = \left(-\frac{1}{\ln m}\sum_{i=1}^{m} l_{ij} \ln l_{ij},\; -\frac{1}{\ln m}\sum_{i=1}^{m} m_{ij} \ln m_{ij},\; -\frac{1}{\ln m}\sum_{i=1}^{m} u_{ij} \ln u_{ij}\right) \quad (24)$$
    where $l_{ij} \ln l_{ij}$, $m_{ij} \ln m_{ij}$, and $u_{ij} \ln u_{ij}$ are defined as 0 if $l_{ij}$, $m_{ij}$, or $u_{ij}$ are 0, respectively.
  • Obtain the degree of divergence $\tilde{d}_j$ of the intrinsic information of each criterion $C_j$:
    $$\tilde{d}_j = \left(1 + \frac{1}{\ln m}\sum_{i=1}^{m} l_{ij} \ln l_{ij},\; 1 + \frac{1}{\ln m}\sum_{i=1}^{m} m_{ij} \ln m_{ij},\; 1 + \frac{1}{\ln m}\sum_{i=1}^{m} u_{ij} \ln u_{ij}\right) \quad (25)$$
  • Calculate the fuzzy weight vector $\tilde{w} = (\tilde{w}_1, \tilde{w}_2, \ldots, \tilde{w}_n)$ from Equation (26), where $d_j^{l}$, $d_j^{m}$, and $d_j^{u}$ denote the components of $\tilde{d}_j$:
    $$\tilde{w}_j = \left(\frac{d_j^{l}}{\sum_{j=1}^{n} d_j^{l}},\; \frac{d_j^{m}}{\sum_{j=1}^{n} d_j^{m}},\; \frac{d_j^{u}}{\sum_{j=1}^{n} d_j^{u}}\right) \quad (26)$$
  • Normalise the fuzzy criteria weights.
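The sketch below is a hedged NumPy rendering of Equations (23)–(26); the per-component normalisation in the last step and the defuzzification by the mean of (l, m, u) are assumptions made for illustration, since the text does not specify them, and the function names are not from the paper.

```python
import numpy as np


def fuzzy_shannon_weights(ratings):
    """Fuzzy Shannon entropy weights from a fuzzy decision matrix.

    ratings: array of shape (m, n, 3) of TFNs (l, m, u), m alternatives, n criteria.
    Returns the fuzzy weight matrix of shape (n, 3).
    """
    x = np.asarray(ratings, dtype=float)
    m = x.shape[0]

    # Eq. (23): column-wise normalisation of each TFN component.
    z = x / x.sum(axis=0, keepdims=True)

    # Eq. (24): component-wise Shannon entropy, taking 0 * ln(0) as 0.
    plogp = np.where(z > 0, z * np.log(np.where(z > 0, z, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)

    # Eq. (25): degree of divergence of each criterion.
    d = 1.0 - e

    # Eq. (26): fuzzy weights (each component normalised by its own column sum).
    return d / d.sum(axis=0, keepdims=True)


def defuzzify(fuzzy_weights):
    """Crisp weights as the mean of (l, m, u), renormalised to sum to 1 (an assumption)."""
    crisp = np.asarray(fuzzy_weights, dtype=float).mean(axis=1)
    return crisp / crisp.sum()
```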
Additionally, De Luca and Termini [126] defined fuzzy entropy as a measure of fuzziness using Equation (27):
$$H(x) = -\sum_{i=1}^{n} \left[\mu_A(x_i) \log\left(\mu_A(x_i)\right) + \left(1 - \mu_A(x_i)\right) \log\left(1 - \mu_A(x_i)\right)\right] \quad (27)$$
where $\mu_A(x_i) \in [0, 1]$ is the membership degree of $x_i$ in the fuzzy set $A$.
Pal and Pal [127,128] proposed the entropy function based on the exponential gain function described in Equation (28):
$$H(x) = k \sum_{i=1}^{n} \left[\mu_A(x_i)\, e^{1 - \mu_A(x_i)} + \left(1 - \mu_A(x_i)\right) e^{\mu_A(x_i)}\right] \quad (28)$$
where $k$ is a normalising constant.
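For completeness, minimal sketches of the two alternative entropy measures of Equations (27) and (28) are given below; they take an array of membership degrees in [0, 1], the normalising constant k is left as a parameter, and the clipping used to avoid log(0) is an implementation choice, not part of the original definitions.

```python
import numpy as np


def de_luca_termini_entropy(mu):
    """Fuzzy entropy of De Luca and Termini, Eq. (27)."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)  # avoid log(0)
    return -np.sum(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))


def exponential_entropy(mu, k=1.0):
    """Exponential (Pal and Pal) entropy, Eq. (28), with normalising constant k."""
    mu = np.asarray(mu, dtype=float)
    return k * np.sum(mu * np.exp(1 - mu) + (1 - mu) * np.exp(mu))
```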
The crisp objective weights obtained from fuzzy Shannon entropy, fuzzy De Luca and Termini entropy, and exponential entropy are shown in Table 5. The criteria with the highest weights in Shannon entropy are: Control of learning rate, Support, and Versatility in assessment of the questionnaire. However, for De Luca and Termini entropy they are: Flexibility in the creation of questionnaires, Academic performance, and Capacity to combine with other methodologies or novel teaching tools. In exponential entropy, the most important criteria are: Capacity to combine with other methodologies or novel teaching tools, Flexibility in the creation of questionnaires, and Control of learning rate. Therefore, a certain agreement can be seen between the different techniques when identifying the most important criteria. The error measures Median Absolute Deviation ($MAD$) and Cumulative sum of Forecast Errors ($CFE$) were jointly applied to assess the error in each entropy measure with respect to the weights obtained by fuzzy Shannon entropy; in this way, both systematic and random errors could be analysed. It can be seen that the errors in the weightings obtained from De Luca and Termini entropy are much greater (including inverse values, that is, higher weights in Shannon entropy are associated with lower weights in De Luca and Termini entropy and vice versa). The errors calculated from exponential entropy, on the other hand, are much lower, with values both above and below those generated from Shannon entropy. It is also shown that, in both cases, the $MAD$s are small, although exponential entropy returns an error 65.48% lower than that obtained from De Luca and Termini entropy. For $CFE$, random errors are practically non-existent in both cases.
The normalised decision matrix obtained by applying Equation (23) is shown in Table 6. The fuzzy Shannon entropy vector, the fuzzy diversification vector, and the fuzzy criteria weights, obtained by applying Equations (24)–(26) and then normalising the fuzzy criteria weights, are shown in Table 7.

4.2.3. Resulting Weights

Subjective weights w̃_j^S and objective weights w̃_j^O of each criterion C_j were aggregated using Equation (29). W_S and W_O are the weightings associated with the subjective and objective weights, respectively, and take values between 0 and 1. The ranking of alternatives is calculated by assuming that subjective and objective weights are of similar importance, that is, W_S = 0.5 and W_O = 0.5.
$$\tilde{w}_{j} = W_{S}\,\tilde{w}_{j}^{S} + W_{O}\,\tilde{w}_{j}^{O}, \qquad W_{S} + W_{O} = 1 \quad (29)$$
The objective weights considered in the model are those obtained from fuzzy Shannon entropy, but those derived from De Luca and Termini entropy, and those given by Equation (28) from Pal and Pal [127,128], were also computed. This allows the rankings produced by the different techniques for obtaining objective weights to be compared (see Section 5, Results and Discussion).
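A minimal sketch of this aggregation step (Equation (29)), assuming both weight sets are stored as arrays of triangular fuzzy numbers, could look as follows (illustrative only):

```python
import numpy as np

def combine_weights(w_subjective, w_objective, w_s=0.5, w_o=0.5):
    """Equation (29): inputs are arrays of shape (n_criteria, 3) holding TFNs (l, m, u)."""
    assert abs(w_s + w_o - 1.0) < 1e-9, "W_S and W_O must sum to 1"
    return w_s * np.asarray(w_subjective, dtype=float) + w_o * np.asarray(w_objective, dtype=float)
```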

5. Results and Discussion

The detailed characteristics of each app included in the research are described on the official website of each application: TurningPoint (https://account.turningtechnologies.com/account/, accessed on 4 April 2021) [129]; Socrative (https://www.socrative.com/, accessed on 4 April 2021) [130]; Quizizz (https://quizizz.com/, accessed on 4 April 2021) [131]; Mentimeter (https://www.mentimeter.com/, accessed on 4 April 2021) [132]; and Kahoot! (https://kahoot.com/, accessed on 4 April 2021) [133]. TurningPoint and Quizizz are free applications, while Socrative has a number of versions (free, Socrative PRO for K–12 teachers, and Socrative PRO for Higher Ed & Corporate), as do Kahoot! (free, Standard, Pro, Kahoot! 360, and Kahoot! 360Pro) and Mentimeter (free, Basic, Pro, and Enterprise). This analysis used the free version of each app.
Once the weightings and the means of integrating them have been obtained, fuzzy TOPSIS is applied to obtain the ranking of the apps. The fuzzy weighted normalised decision matrix that results from combining the subjective and objective weights of Table 4 and Table 7, respectively, applying W_S = 0.5 and W_O = 0.5, is shown in Table 8. These are the values considered most appropriate, since they give similar importance to the subjective and objective weightings.
Since all the criteria are of the benefit type, the positive ideal point A+ and the negative ideal point A− are defined from Equations (16) and (17). The Euclidean distances d_i^+ and d_i^− of each alternative from A+ and A−, and the closeness coefficient CC, in the case W_S = 0.5 and W_O = 0.5, are shown in Table 9. It can be seen that Quizizz, Socrative, and Kahoot! are placed first, second, and third, respectively. Table 10 shows the distances, normalised closeness coefficient, and ranking of alternatives for W_S = 0.5 and W_O = 0.5 using the objective weights from fuzzy De Luca and Termini entropy, and Table 11 includes the same parameters but applying the objective weights from exponential entropy. Exponential entropy yields the same ranking in the first three positions as Shannon entropy but, in the case of De Luca and Termini entropy, Socrative is in first place in the ranking, followed by TurningPoint and Quizizz. This is quite surprising, as TurningPoint occupies a lower position in the ranking with the other objective weights and with the MCDA techniques used to validate the method (see the Validity of the proposed method section following).
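The ranking step can be sketched generically in the style of Chen's fuzzy TOPSIS [88], assuming FPIS = (1, 1, 1) and FNIS = (0, 0, 0) per benefit criterion and the vertex distance between triangular fuzzy numbers; the exact distance and ideal-point conventions used in the paper may differ, so this sketch is illustrative rather than a reproduction of Table 9:

```python
import numpy as np

def tfn_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers a = (l, m, u) and b = (l, m, u)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def fuzzy_topsis_closeness(V):
    """V: fuzzy weighted normalised matrix, shape (m_alternatives, n_criteria, 3)."""
    fpis = np.array([1.0, 1.0, 1.0])   # positive ideal point for benefit criteria
    fnis = np.array([0.0, 0.0, 0.0])   # negative ideal point for benefit criteria
    d_plus = np.array([sum(tfn_distance(v, fpis) for v in row) for row in V])
    d_minus = np.array([sum(tfn_distance(v, fnis) for v in row) for row in V])
    return d_minus / (d_plus + d_minus)  # closeness coefficient; larger is better
```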

5.1. Validity of the Proposed Method

The feasibility and validity of the proposed method are tested through PROMETHEE II, ELECTRE III, and fuzzy VIKOR (some of the methods suggested by the MCDA method selection tool developed by Wątróbski et al. [15]). Objective weights from fuzzy Shannon entropy, subjective weights from MACBETH, and W_S = 0.5 and W_O = 0.5 were used in all the MCDA methods applied.
In PROMETHEE II, the Type I (usual criterion) preference function, which expresses strict preference for any difference in favour of an alternative, was used for all criteria. The positive outranking flow φ+(A), the negative outranking flow φ−(A), and the net outranking flow φ(A) = φ+(A) − φ−(A) of each alternative A are shown in Table 12.
ELECTRE III produces the ranking from two antagonistic classifications (ascending and descending distillation), ordering the alternatives from best to least good and from worst to least bad, using a fuzzy outranking relation. The data for the alternatives were normalised to a scale of 0 to 10; therefore, for each criterion j, the indifference threshold q_j and the preference threshold p_j were taken to be the same for all criteria [134]. The veto threshold is not considered because, after normalisation of the values of the alternatives with respect to the criteria, the differences between these values are very small, so introducing high values for this parameter makes no sense in this case; in some criteria, the preference threshold p_j plays this role. Table 13 shows the ascending ranking (from the worst alternative to the best), the descending ranking (from the best alternative to the worst), and the median preorder (to obtain a complete ranking, the final ranking is taken as an average of the ascending and descending rankings). Table 14 shows the dominance matrix.
In the fuzzy VIKOR method, v = (n + 1)/2n = 0.6, where n is the number of alternatives. S̃_j and R̃_j are, respectively, the fuzzy separation of alternative A_j from the fuzzy best value f̃_i* and the fuzzy separation of alternative A_j from the fuzzy worst value f̃_i^0, and Q̃_j gives the fuzzy separation measure of an alternative from the best alternative. S̃_j, R̃_j, and Q̃_j are defuzzified using the Centre of Area (COA) method. The resulting crisp S_j, R_j, and Q_j and the corresponding rankings are shown in Table 15. The lower Q_j, the better the alternative.
Table 16 summarises the rankings obtained with MACBETH+TOPSIS, fuzzy VIKOR, PROMETHEE II, and ELECTRE III. It can be seen that all the techniques give Quizizz as the best solution, followed by Socrative.
In order to validate the proposed method, the similarity of the rankings obtained with all the MCDA techniques used was assessed using the Value of Similarity (WS) coefficient developed by Sałabun and Urbaniak [135]. This coefficient is strongly correlated with the difference between two rankings at particular positions, and the top of the ranking has a more significant influence on similarity than the bottom. The WS coefficient is calculated using Equation (30):
$$WS = 1 - \sum_{i=1}^{n}\left( 2^{-R_{xi}} \cdot \frac{\left| R_{xi} - R_{yi} \right|}{\max\left\{ \left| 1 - R_{xi} \right|,\; \left| n - R_{xi} \right| \right\}} \right) \quad (30)$$
where n is the length of the ranking and R_xi and R_yi are the positions of the ith element in ranking x and ranking y, respectively. If the WS coefficient is less than 0.234, the similarity is low; if it is higher than 0.808, the similarity is high [135]. Table 17 shows the WS coefficients of the ranking produced by the method described in this research with respect to the rankings used to validate it. In all cases the similarity is high, though slightly higher for the ranking obtained from ELECTRE III.
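A minimal sketch of Equation (30), with rankings given as integer positions (1 = best), is shown below; the example call is purely illustrative and is not taken from Table 17:

```python
import numpy as np

def ws_coefficient(rank_x, rank_y):
    """WS similarity of two rankings given as positions (1 = best) of the same items."""
    rx = np.asarray(rank_x, dtype=float)
    ry = np.asarray(rank_y, dtype=float)
    n = len(rx)
    denom = np.maximum(np.abs(1 - rx), np.abs(n - rx))
    return 1.0 - float(np.sum(2.0 ** (-rx) * np.abs(rx - ry) / denom))

# Illustrative call: MACBETH + fuzzy TOPSIS vs. PROMETHEE II rankings of Table 16
# (order: KAHOOT!, QUIZIZZ, SOCRATIVE, TURNINGPOINT, MENTIMETER).
print(ws_coefficient([3, 1, 2, 5, 4], [4, 1, 2, 3, 5]))
```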

5.2. Sensitivity Analysis

The sensitivity analysis performed by modifying the values of W_S and W_O can be seen in Table 18, Table 19 and Table 20 for the objective weights obtained from fuzzy Shannon entropy, fuzzy De Luca and Termini entropy, and exponential entropy, respectively. Table 18 shows that, as the influence of the subjective weights increases, especially for very high values (W_S ≥ 0.8), a permutation in the ranking of the fourth and fifth positions takes place. If only the subjective weights are considered (W_S = 1), Socrative is the alternative ranked first, but in all other cases, when the objective weights are taken into account, Quizizz is the alternative chosen. The subjective weights therefore have no influence on the ranking when W_S < 0.8, and Quizizz, Socrative, and Kahoot! are ranked in first, second, and third place, respectively, in these cases. The results demonstrate the need to combine subjective and objective weights to obtain more accurate decision models, since, if only the subjective weights had been used, the results could have been misleading.
The sensitivity analysis performed by modifying the values of W_S and W_O using objective weights from fuzzy De Luca and Termini entropy is shown in Table 19. It can be seen that, when W_S ≥ 0.2, Socrative is the first-placed alternative, followed by Quizizz. When only the objective weights are included, TurningPoint is the highest-valued alternative, followed by Socrative. The other alternatives also undergo changes in the ranking; for example, Kahoot! goes from last place, when only objective weights are considered, to fourth place when objective and subjective weights are given equal weight, and finally to third place when W_S ≥ 0.8. TurningPoint goes from being the best alternative, when only objective weights are considered, to second place when 0.2 ≤ W_S < 0.6, third place when W_S = 0.6, and finally fourth place when W_S ≥ 0.8. In this case, therefore, the ranking is unstable, and the classification of an alternative may vary as the contributions of the objective and subjective weights are altered.
The sensitivity analysis performed by modifying the values of W_S and W_O using objective weights from Pal and Pal exponential entropy is shown in Table 20. Quizizz and Socrative are the first and second alternatives, respectively, in all cases except when only subjective weights are considered. Kahoot!, TurningPoint, and Mentimeter are in third, fourth, and fifth place, respectively, in all cases; there is no change in the classification as the contributions of the objective and subjective weights are altered. These results show that the model is very stable and robust; furthermore, they agree more closely with those obtained with fuzzy Shannon entropy.
A sensitivity analysis was also carried out by increasing and decreasing by 10% and 20% the weight of each criterion with respect to that obtained from MACBETH + fuzzy Shannon entropy, while maintaining the weights assigned to the other criteria, to see whether this leads to any changes in the ranking of alternatives. Figure 7 shows the results of these variations. There is only one permutation in the ranking, between Mentimeter and TurningPoint, which move to fifth and fourth place, respectively, when the weighting of the Control of learning rate criterion is decreased by 20%. The model is therefore seen to be stable.
The results were shown to the teacher, who was asked his opinion. He remarked that due to the characteristics of the course, subject, and student body, Quizizz was the alternative he considered most suitable, too.
Quizizz was therefore chosen as the application for gamification in the Manufacturing Systems and Industrial Organisation course. Specifically, Quizizz was applied in the practical and problem classes as a way of reviewing concepts and increasing student motivation. Students were divided into two groups (Group 1 and Group 2) for these practical exercises and problems. The questionnaires include an extra question asking students to assess what can be learnt with the tool used. In the first year that Quizizz was used, 59.7% of students in Group 1 considered that the app was good or very good for learning, with 39% of answers correct; in Group 2, 90% of students rated the learning with Quizizz as good or very good, with 44% of answers correct. The following year, 73.17% of students in Group 1 valued it positively, with an academic result of 52% correct answers; 72% of the students in Group 2 valued it positively, with 61% of answers correct. Group 2 had better academic results than Group 1 in both years. The gamification activities were undertaken first in Group 1, and then a week later in Group 2; it seems that the students in Group 2, once they knew what Group 1 had done, performed better academically. The academic results have also increased over the academic years, so a further improvement of around 10% in the next year seems likely.

6. Conclusions

There is ever stronger evidence of the favourable acceptance of gamification and its effectiveness in favouring highly engaging learning experiences. The many benefits described have led to a considerable increase in the number of applications aimed at gamification in teaching. Choosing the best one for a programme or year has thus become a complex decision. Nevertheless, the literature review carried out on different databases has shown that there are no studies using fuzzy multicriteria techniques to analyse the selection of gamification apps in university courses.
This study describes a model combining fuzzy TOPSIS with the MACBETH approach and fuzzy Shannon entropy, in order to choose the most suitable gamification application in the second-year degree programmes in Electrical Engineering and Industrial and Automatic Electronic Engineering at the Higher Technical School of Industrial Engineering at the Ciudad Real campus of the University of Castilla-La Mancha (Spain).
In the literature, fuzzy TOPSIS is usually combined with AHP or fuzzy AHP, despite the many criticisms directed at AHP. This study is the first in the literature to combine subjective weights obtained via MACBETH with objective weights computed using fuzzy Shannon entropy, and with the fuzzy TOPSIS methodology, to obtain the ranking of alternatives. MACBETH provides a complete methodology for ensuring the accuracy of the criteria weightings, such as the reference levels and the definitions of the descriptors associated with each criterion; it also supplies a variety of tools to handle doubts or incomplete knowledge of the decision maker, as well as to validate the results as they are obtained; furthermore, it avoids the many criticisms aimed at AHP. Additionally, weights derived from the data and computed via fuzzy Shannon entropy are included in the study, giving greater reliability to the results. Objective weights from fuzzy De Luca and Termini entropy and from exponential entropy computed with the Pal and Pal definition are compared with those obtained from fuzzy Shannon entropy, as are the rankings obtained using these objective weights combined with the subjective weights produced by MACBETH. The same ranking in the top three places as with Shannon entropy is obtained from exponential entropy but, in the case of De Luca and Termini entropy, Socrative is in first place in the ranking, followed by TurningPoint and Quizizz.
Objective and subjective weights are combined by assuming they are of equal importance. The results show that Quizizz, Socrative, and Kahoot! are in first, second, and third place, respectively. The results of the proposed method are validated with PROMETHEE II, ELECTRE III, and fuzzy VIKOR. All the MCDA techniques used return Quizizz as the best solution, followed by Socrative. The similarity between the rankings of the various techniques was computed using the WS coefficient; values greater than 0.808 were obtained in all cases, indicating high similarity, which was slightly greater for ELECTRE III.
The results obtained by the model were shown to the teacher of the subject, who also considered that Quizizz was the most suitable gamification tool. The solution was also contrasted with the real experience of the use of Quizizz over a number of academic years in a course. An average of 74.85% of the students considered that, in the first year that it was used, the learning experience was very good or good. In the second year, an average of 72.59% of students considered the learning experience to be very good or good. With respect to the learning results, the first year achieved a percentage of correct answers to the questionnaire of 41.5%, while in the second year the average of correct answers was 56.5%.
The models, criteria, and weightings of the criteria can be used as described in this study in other courses and programmes, or, indeed, adapted to the specifics of each course.
As future lines of development, the aim is to include additional applications alongside the apps assessed, to see whether Quizizz remains the first choice; the alternatives assessed in this study also continuously introduce new utilities, so their valuation with respect to some of the criteria may change. Students' achievements with different apps will also be tested over a number of years in the course or degree programme, as an increase in learning has been detected as they are used over successive academic years. It is also intended to carry out a study of the most suitable apps for master's degrees. New methods to obtain the objective weights could be developed and compared. Additionally, group decision making, involving all the teachers in the field to which the course belongs, could be incorporated into the proposed method; this could be applied to the course analysed in this study or to other courses, and might allow the most suitable gamification apps to be identified for each subject taught. Modern MCDA methods could also be used to validate the proposed method, provided they were adapted to the characteristics of the problem described in this research; one such proposal is the Characteristic Objects METhod (COMET), which is completely free from the rank reversal phenomenon.

Funding

This research was funded by the University of Castilla-La Mancha and the European Union through the European Regional Development Fund to the Predictive Analysis Laboratory (PREDILAB) group (2020-GRIN-28770).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Domínguez, A.; Saenz-de-Navarrete, J.; de-Marcos, L.; Fernández-Sanz, L.; Pagés, C.; Martínez-Herráiz, J.J. Gamifying learning experiences: Practical implications and outcomes. Comput. Educ. 2013, 63, 380–392. [Google Scholar] [CrossRef]
  2. Ding, L.; Er, E.; Orey, M. An exploratory study of student engagement in gamified online discussions. Comput. Educ. 2018, 120, 213–226. [Google Scholar] [CrossRef]
  3. Barata, G.; Gama, S.; Jorge, J.; Gonçalves, D. Studying student differentiation in gamified education: A long-term study. Comput. Hum. Behav. 2017, 71, 550–585. [Google Scholar] [CrossRef]
  4. Buckley, P.; Doyle, E. Individualising gamification: An investigation of the impact of learning styles and personality traits on the efficacy of gamification using a prediction market. Comput. Educ. 2017, 106, 43–55. [Google Scholar] [CrossRef]
  5. Torres-Toukoumidis, A.; Romero-Rodríguez, L.M.; Pérez-Rodríguez, A.M. Ludificación y sus posibilidades en el entorno de blended learning: Revisión documental. RIED. Rev. Iberoam. Educ. Distancia 2018, 21, 95–111. [Google Scholar] [CrossRef] [Green Version]
  6. Carnero, M.C. Fuzzy Multicriteria Models for Decision Making in Gamification. Mathematics 2020, 8, 682. [Google Scholar] [CrossRef]
  7. Zainuddin, Z.; Chu, S.K.W.; Shujahat, M.; Perera, C.J. The impact of gamification on learning and instruction: A systematic review of empirical evidence. Educ. Res. Rev. 2020, 30, 100326. [Google Scholar] [CrossRef]
  8. Ruiz, R.; Tesouro, M. Beneficios e inconvenientes de las nuevas tecnologías en el aprendizaje del alumno. Propuestas formativas para alumnos, profesores y padres. Rev. Educ. Y Futuro Digit. 2013, 7, 17–27. [Google Scholar]
  9. García, M.J. Evaluación Dinámica de la Farmacología Mediante la Aplicación TurningPoint Cloud para Dispositivos Móviles: Un Acercamiento a la “Gamificación” en el aula. Informe Final del Proyecto Docente. Universidad de Salamanca. Available online: https://gredos.usal.es/handle/10366/138277 (accessed on 11 March 2021).
  10. Licorish, S.A.; Owen, H.E.; Daniel, B.; George, J.L. Students’ perception of Kahoot!’s influence on teaching and learning. Res. Pract. Technol. Enhanc. Learn. 2018, 13. [Google Scholar] [CrossRef] [Green Version]
  11. Zainuddin, Z. Students’ learning performance and perceived motivation in gamified flipped-class instruction. Comput. Educ. 2018, 126, 75–88. [Google Scholar] [CrossRef]
  12. Kim, S.H.; Lee, J. A study on decision consolidation methods using analytic models for security systems. Comput. Secur. 2007, 26, 145–153. [Google Scholar] [CrossRef]
  13. Rajak, M.; Shaw, K. Evaluation and selection of mobile health (mHealth) applications using AHP and fuzzy TOPSIS. Technol. Soc. 2019, 59, 101186. [Google Scholar] [CrossRef]
  14. Boneu, J.M. Plataformas abiertas de e-learning para el soporte de contenidos educativos abiertos. Rev. Univ. Y Soc. Del Conoc. 2007, 4. Available online: http://www.uoc.edu/rusc/4/1/dt/esp/boneu.pdf (accessed on 11 March 2021).
  15. Wątróbski, J.; Jankowski, J.; Ziemba, P.; Karczmarczyk, A.; Zioło, M. Generalised framework for multi-criteria method selection. Omega 2019, 86, 107–124. [Google Scholar] [CrossRef]
  16. Vahdani, B.; Mousavi, M.S.; Moghaddam, R.T. Group decision making based on novel fuzzy modified TOPSIS method. Appl. Math. Model. 2011, 35, 4257–4269. [Google Scholar] [CrossRef]
  17. Asuquo, M.P.; Wang, J.; Zhang, L.; Phylip-Jones, G. Application of a multiple attribute group decision making (MAGDM) model for selecting appropriate maintenance strategy for marine and offshore machinery operations. Ocean Eng. 2019, 179, 246–260. [Google Scholar] [CrossRef]
  18. Bina, C.; Xiaohuia, L.; Haowua, L.; Leijiao, G. Hybrid Subjective and Objective Evaluation Method of the Equipment for First Class Distribution Network. Energy Procedia 2019, 158, 3452–3457. [Google Scholar] [CrossRef]
  19. Palczewski, K.; Sałabun, W. The fuzzy TOPSIS applications in the last decade. Procedia Comput. Sci. 2019, 159, 2294–2303. [Google Scholar] [CrossRef]
  20. Torfi, F.; Farahani, R.Z.; Rezapour, S. Fuzzy AHP to determine the relative weights of evaluation criteria and Fuzzy TOPSIS to rankthe alternatives. Appl. Soft Comput. 2010, 10, 520–528. [Google Scholar] [CrossRef]
  21. Amiri, M.P. Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods. Expert Syst. Appl. 2010, 37, 6218–6224. [Google Scholar]
  22. Sun, C.C. A performance evaluation model by integrating fuzzy AHP and fuzzy TOPSIS methods. Expert Syst. Appl. 2010, 37, 7745–7754. [Google Scholar] [CrossRef]
  23. Kutlu, A.C.; Ekmekçioğlu, M. Fuzzy failure modes and effects analysis by using fuzzy TOPSIS-based fuzzy AHP. Expert Syst. Appl. 2012, 39, 61–67. [Google Scholar] [CrossRef]
  24. Senthil, S.; Srirangacharyulu, B.; Ramesh, A. A robust hybrid multi-criteria decision making methodology for contractor evaluation and selection in third-party reverse logistics. Expert Syst. Appl. 2014, 41, 50–58. [Google Scholar] [CrossRef]
  25. Beikkhakhian, Y.; Javanmardi, M.; Karbasian, M.; Khayambash, B. The application of ISM model in evaluating agile suppliers selection criteria and ranking suppliers using fuzzy TOPSIS-AHP methods. Expert Syst. Appl. 2015, 42, 6224–6236. [Google Scholar] [CrossRef]
  26. Shaverdi, M.; Ramezani, I.; Tahmasebi, R.; Rostamy, A.A.A. Combining Fuzzy AHP and Fuzzy TOPSIS with Financial Ratios to Design a Novel Performance Evaluation Model. Int. J. Fuzzy Syst. 2016, 18, 248–262. [Google Scholar] [CrossRef]
  27. Samanlioglu, F.; Taskaya, Y.E.; Gulen, U.C.; Cokcan, O. A Fuzzy AHP–TOPSIS-Based Group Decision-Making Approach to IT Personnel Selection. Int. J. Fuzzy Syst. 2018, 20, 1576–1591. [Google Scholar] [CrossRef]
  28. Nojavan, M.; Heidari, A.; Mohammaditabar, D. A fuzzy service quality based approach for performance evaluation of educational units. Socio-Econ. Plan. Sci. 2021, 73, 100816. [Google Scholar] [CrossRef]
  29. Saluja, R.S.; Singh, V. A fuzzy multi-attribute decision making model for selection of welding process for grey cast iron. Mater. Today Proc. 2020, 28, 1194–1199. [Google Scholar] [CrossRef]
  30. Kundakcı, N. An integrated method using MACBETH and EDAS methods for evaluating steam boiler alternatives. J. Multicriteria Decis. Anal. 2019, 26, 27–34. [Google Scholar] [CrossRef]
  31. Roy, B. Multicriteria Methodology for Decision Aiding; Springer: Boston, MA, USA, 1996. [Google Scholar]
  32. Ishizaka, A.; Nemery, P. Multi-Criteria Decision Analysis: Methods and Software; Wiley: Chichester, UK, 2013. [Google Scholar]
  33. Ehrgott, M.; Figueira, J.R.; Greco, S. Trends in Multiple Criteria Decision Analysis; Springer: Boston, MA, USA, 2010. [Google Scholar]
  34. Zanakis, S.H.; Solomon, A.; Wishart, N.; Dublish, S. Multi-attribute decision making: A simulation comparison of select methods. Eur. J. Oper. Res. 1998, 107, 507–529. [Google Scholar] [CrossRef]
  35. Ferreira, L.; Borenstein, D.; Santi, E. Hybrid fuzzy MADM ranking procedure for better alternative discrimination. Eng. Appl. Artif. Intell. 2016, 50, 71–82. [Google Scholar] [CrossRef] [Green Version]
  36. Figueira, J.R.; Roy, B. A note on the paper, “ranking irregularities when evaluation alternatives using some electre methods”. Omega 2008, 37, 731–733. [Google Scholar] [CrossRef]
  37. Triantaphyllou, E. Multi-criteria decision making methods. In Multi-Criteria Decision Making Methods: A Comparative Study; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  38. Saaty, T.L.; Ergu, D. When is a decision-making method trustworthy? Criteria for evaluating multi-criteria decision-making methods. Int. J. Inf. Technol. Decis. Mak. 2015, 14, 1171–1187. [Google Scholar] [CrossRef]
  39. Zavadskas, E.K.; Turskis, Z.; Kildienė, S. State of art surveys of overviews on MCDM/MADM methods. Technol. Econ. Dev. Econ. 2014, 20, 165–179. [Google Scholar] [CrossRef] [Green Version]
  40. Liang, H.; Ren, J.; Gao, S.; Dong, L.; Gao, Z. Comparison of Different Multicriteria Decision-Making Methodologies for Sustainability Decision Making. Hydrog. Econ. Supply Chain Life Cycle Anal. Energy Transit. Sustain. 2017, 189–224. [Google Scholar] [CrossRef]
  41. Chauvy, R.; Lepore, R.; Fortemps, P.; Weireld, G. Comparison of multi-criteria decision-analysis methods for selecting carbon dioxide utilization products. Sustain. Prod. Consum. 2020, 24, 194–210. [Google Scholar] [CrossRef]
  42. Arabameri, A.; Pal, S.C.; Rezaie, F.; Chakrabortty, R.; Chowdhuri, I.; Blaschke, T.; Ngo, P.T.T. Comparison of multi-criteria and artificial intelligence models for land-subsidence susceptibility zonation. J. Environ. Manag. 2021, 284, 112067. [Google Scholar] [CrossRef]
  43. Guitouni, A.; Martel, J.M. Tentative guidelines to help choosing an appropriate MCDA method. Eur. J. Oper. Res. 1998, 109, 501–521. [Google Scholar] [CrossRef]
  44. Wątróbski, J.; Jankowski, J.; Ziemba, P.; Karczmarczyk, A.; Zioło, M. MCDA Method Selection Tool. Available online: http://www.mcda.it/ (accessed on 1 May 2021).
  45. Yazdani, M.; Torkayesh, A.E.; Santibanez-Gonzalez, E.D.R.; Otaghsara, S.K. Evaluation of renewable energy resources using integrated Shannon Entropy-EDAS model. Sustain. Oper. Consum. 2020, 1, 35–42. [Google Scholar]
  46. Fuertes, A.; García, M.; Castaño, M.A.; López, E.; Zacares, M.; Cobos, M.; Ferris, R.; Grimaldo, F. Uso de herramientas de respuesta de audiencia en la docencia presencial universitaria. Un primer contacto. In Actas de las XXII JENUI; Universidad de Almería: Almería, Spain, 2016; pp. 261–268. [Google Scholar]
  47. Çakıroğlu, Ü.; Başıbüyük, B.; Güler, M.; Atabay, M.; Memiş, B.Y. Gamifying an ICT course: Influences on engagement and academic performance. Comput. Hum. Behav. 2017, 69, 98–107. [Google Scholar] [CrossRef]
  48. Zainuddin, Z.; Shujahat, M.; Haruna, H.; Chu, S.K.W. The role of gamified e-quizzes on student learning and engagement: An interactive gamification solution for a formative assessment system. Comput. Educ. 2020, 145. [Google Scholar] [CrossRef]
  49. Hamari, J.; Koivisto, J.; Sarsa, H. Does gamification work?—A literature review of empirical studies on gamification. In Proceedings of the 47th 2014 Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 6–9 January 2014; IEEE: New York, NY, USA, 2014. [Google Scholar]
  50. Wang, A.I.; Tahir, R. The effect of using Kahoot! for learning—A literature review. Comput. Educ. 2020, 149, 103818. [Google Scholar] [CrossRef]
  51. Dell, K.A.; Chudow, M.B. A web-based review game as a measure of overall course knowledge in pharmacotherapeutics. Curr. Pharm. Teach. Learn. 2019, 11, 838–842. [Google Scholar] [CrossRef] [PubMed]
  52. Knutas, A.; Ikonen, J.; Nikula, U.; Porras, J. Increasing collaborative communications in a programming course with gamification: A case study. In Proceedings of the 15th International Conference on Computer Systems and Technologies, Ruse, Bulgaria, 27 June 2014. [Google Scholar]
  53. Iosup, A.; Epema, D. An Experience Report on Using Gamification in Technical Higher Education. In Proceedings of the 45th ACM Technical Symposium on Computer Science, Education (SIGCSE ’14), Atlanta, GA, USA, 5–8 March 2014; ACM: New York, NY, USA. Available online: https://goo.gl/ISLuL6 (accessed on 1 May 2021).
  54. Laskowski, M. Implementing gamification techniques into university study path—A case study. In Proceedings of the Global Engineering Education Conference (EDUCON), Tallinn, Estonia, 18–20 March 2015; IEEE: New York, NY, USA, 2015; pp. 582–586. [Google Scholar]
  55. Dicheva, D.; Dichev, C.; Agre, G.; Angelova, G. Gamification in education: A systematic mapping study. Educ. Technol. Soc. 2015, 18, 75–88. [Google Scholar]
  56. Huang, B.; Hew, K.F. Implementing a theory-driven gamification model in higher education flipped courses: Effects on out-of-class activity completion and quality of artifacts. Comput. Educ. 2018, 125, 254–272. [Google Scholar] [CrossRef]
  57. Huang, B.; Hew, K.F.; Warning, P. Engaging learners in a flipped information science course with gamification: A quasi-experimental study. Commun. Comput. Inf. Sci. 2018, 843, 130–141. [Google Scholar]
  58. Gartner. Gartner Says by 2015, More Than 50 Percent of Organizations that Manage Innovation Processes Will Gamify Those Processes. Available online: https://www.pressebox.com/pressrelease/gartner-uk-ltd/Gartner-Says-By-2015-More-Than-50-Per-Cent-of-Organisations-That-Manage-Innovation-Processes-Will-Gamify-Those-Processes/boxid/417583 (accessed on 4 September 2020).
  59. IEEE. Everyone’s a Gamer—IEEE Experts Predict Gaming Will Be Integrated into More Than 85 Percent of Daily Tasks by 2020. Available online: https://www.prnewswire.com/news-releases/everyones-a-gamer---ieee-experts-predict-gaming-will-be-integrated-into-more-than-85-percent-of-daily-tasks-by-2020-247100431.html (accessed on 4 September 2020).
  60. Koivisto, J.; Hamari, J. The rise of motivational information systems: A review of gamification research. Int. J. Inf. Manag. 2019, 45, 191–210. [Google Scholar] [CrossRef]
  61. McGonigal, J. Reality Is Broken: Why Games Make Us Better and How They Can Change the World; Penguin: London, UK, 2011. [Google Scholar]
  62. Gupta, P. Tools, Tips & Resources Teachers Must Know to Learn about Gamification of Education. Available online: https://edtechreview.in/trends-insights/insights/2293-gamification-of-education (accessed on 4 September 2020).
  63. Lynch, M. 8 Must Have Gamification Apps, Tools, and Resources. Available online: https://www.thetechedvocate.org/8-must-gamification-apps-tools-resources (accessed on 4 September 2020).
  64. Educación 3.0. 25 Herramientas de Gamificación para Clase que Engancharán a Tus Alumnos. 2019. Available online: https://www.educaciontrespuntocero.com/recursos/herramientas-gamificacion-educacion/33094.html (accessed on 4 September 2020).
  65. Göksün, D.O.; Gürsoy, G. Comparing success and engagement in gamified learning experiences via kahoot and Quizizz. Comput. Educ. 2019, 135, 15–29. [Google Scholar] [CrossRef]
  66. Loayza, J. The 10 Best Educational Apps that Use Gamification for Adults in 2019. Available online: https://yukaichou.com/gamification-examples/top-10-education-gamification-examples/ (accessed on 4 September 2020).
  67. Gåsland, M. Game Mechanic Based e-Learning. Master’s Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2011. [Google Scholar]
  68. Li, W.; Grossman, T.; Fitzmaurice, G. GamiCAD: A gamified tutorial system for first time AutoCAD users. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Cambridge, MA, USA, 7 October 2012; ACM: Cambridge, MA, USA, 2012; pp. 103–112. [Google Scholar]
  69. Goehle, G. Gamification and Web-based Homework. Probl. Resour. Issues Math. Undergrad. Stud. 2013, 23, 234–246. [Google Scholar] [CrossRef]
  70. Snyder, E.; Hartig, J. Gamification of board review: A residency curricular innovation. Med Educ. 2013, 47, 524–525. [Google Scholar] [CrossRef]
  71. Rodríguez, F.; Santiago, R. Gamificación: Cómo Motivar a tu Alumnado y Mejorar el Clima en el aula. Innovación Educative; Editorial Océano: Barcelona, Spain, 2015. [Google Scholar]
  72. Acuña, M. Las 5 Mejores Herramientas de Gamificación Para Universitarios. Available online: https://www.evirtualplus.com/herramientas-de-gamificacion-para-universitarios/ (accessed on 4 September 2020).
  73. Roger, S.; Cobos, M.; Arevalillo-Herráez, M.; García-Pineda, M. Combinación de Cuestionarios Simples y Gamificados Utilizando Gestores de Participación en el aula: Experiencia y Per-cepción del Alumnado. Congreso Nacional de Innovación Educativa y de Docencia en red (INRED 2017); Universitat Politècnica de València: Valencia, Spain, 2017; pp. 1–12. [Google Scholar] [CrossRef]
  74. Plump, C.M.; LaRosa, J. Using kahoot! In the classroom to create engagement and active learning: A game-based technology solution for elearning novices. Manag. Teach. Rev. 2017, 2, 151–158. [Google Scholar] [CrossRef]
  75. Marín, A.; Pastor, J.M.; Villagrasa, J. La aplicación TurningPoint como herramienta de aprendizaje transformacional en los procesos educativos. Rev. D’innovació Educ. 2016, 16, 20–29. [Google Scholar]
  76. Gokbulut, B. The effect of Mentimeter and Kahoot applications on university students’ e-learning. World J. Educ. Technol. Curr. Issues 2020, 12, 107–116. [Google Scholar] [CrossRef]
  77. Mayhew, E.; Davies, M.; Millmore, A.; Thompson, L.; Pena Bizama, A. The impact of audience response platform Mentimeter on the student and staff learning experience. Res. Learn. Technol. 2020, 28, 2397. [Google Scholar] [CrossRef]
  78. Basilico, A.; Marceglia, S.; Bonacina, S.; Pinciroli, F. Advising patients on selecting trustful apps for diabetes self-care. Comput. Biol. Med. 2016, 711, 86–96. [Google Scholar] [CrossRef] [PubMed]
  79. Krishnan, G.; Selvam, G. Factors influencing the download of mobile health apps: Content review-led regression analysis. Health Policy Technol. 2019, 8, 356–364. [Google Scholar] [CrossRef]
  80. Mao, X.; Zhao, X.; Liu, Y. mHealth App recommendation based on the prediction of suitable behavior change techniques. Decis. Support. Syst. 2020, 132, 113248. [Google Scholar] [CrossRef]
  81. Păsărelu, C.R.; Andersson, G.; Dobrean, A. Attention-Deficit/ Hyperactivity Disorder Mobile Apps: A Systematic Review. Int. J. Med. Inform. 2020, 138, 104133. [Google Scholar] [CrossRef] [PubMed]
  82. Robillard, J.M.; Feng, T.L.; Sporn, A.B.; Lai, J.A.; Lo, C.; Ta, M.; Nadler, R. Availability, readability, and content of privacy policies and terms of agreements of mental health apps. Internet Interv. 2019, 17, 100243. [Google Scholar] [CrossRef] [PubMed]
  83. Beck, A.L.; Chitalia, S.; Rai, V. Not so gameful: A critical review of gamification in mobile energy applications. Energy Res. Soc. Sci. 2019, 51, 32–39. [Google Scholar] [CrossRef]
  84. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  85. Chen, S.J.; Hwang, C.L. Fuzzy Multiple Attribute Decision Making: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1992. [Google Scholar]
  86. Yoon, K.P.; Hwang, L. Multiple Attribute Decision Making; Sage Publication: Thousand Oaks, CA, USA, 1995. [Google Scholar]
  87. Behzadian, M.; Otaghsara, S.K.; Yazdani, M.; Ignatius, J. A state-of the-art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069. [Google Scholar] [CrossRef]
  88. Chen, C.T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst. 2000, 114, 1–9. [Google Scholar] [CrossRef]
  89. Salih, M.M.; Zaidan, B.B.; Zaidan, A.A.; Ahmed, M.A. Survey on fuzzy TOPSIS state-of-the-art between 2007 and 2017. Comput. Oper. Res. 2019, 104, 207–227. [Google Scholar] [CrossRef]
  90. Bottani, E.; Rizzi, A. A fuzzy TOPSIS methodology to support outsourcing of logistics services. Supply Chain Manag. 2006, 11, 294–308. [Google Scholar] [CrossRef]
  91. Bairagi, B.; Dey, B.; Sarkar, B.; Sanyal, S.K. A De Novo multi-approaches multi-criteria decision making technique with an application in performance evaluation of material handling device. Comput. Ind. Eng. 2015, 87, 267–282. [Google Scholar] [CrossRef]
  92. Madi, E.; Garibaldi, J.M.; Wagner, C. An exploration of issues and limitations in current methods of TOPSIS and fuzzy TOPSIS. In Proceedings of the 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Vancouver, BC, Canada, 24–29 July 2016; Available online: https://nottingham-repository.worktribe.com/output/799731 (accessed on 17 April 2021).
  93. Roszkowska, E.; Kacprzak, D. The fuzzy saw and fuzzy TOPSIS procedures based on ordered fuzzy numbers. Inf. Sci. 2016, 369, 564–584. [Google Scholar] [CrossRef]
  94. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  95. Yeh, C.H.; Deng, H. An algorithm for fuzzy multi-criteria decision making. In Proceedings of the IEEE International Conference on Intelligent Processing Systems, Beijing, China, 28–31 October 1997; pp. 1564–1568. [Google Scholar]
  96. Kaufmann, A.; Gupta, M.M. Fuzzy Mathematical Models in Engineering and Management Science; North Holland: Amsterdam, The Netherlands, 1988. [Google Scholar]
  97. Ouma, Y.O.; Opudo, J.; Nyambenya, S. Comparison of Fuzzy AHP and Fuzzy TOPSIS for Road Pavement Maintenance Prioritization: Methodological Exposition and Case Study. Adv. Civ. Eng. 2015, 2015, 140189. [Google Scholar] [CrossRef] [Green Version]
  98. Kuo, M.S.; Tzeng, G.H.; Huang, W.C. Group decision making based on concepts of ideal and anti-ideal points in fuzzy environment. Math. Comput. Model. 2007, 45, 324–339. [Google Scholar] [CrossRef]
  99. Javad, M.O.M.; Darvishi, M.; Javad, A.O.M. Green supplier selection for the steel industry using BWM and fuzzy TOPSIS: A case study of Khouzestan steel company. Sustain. Futures 2020, 2, 100012. [Google Scholar] [CrossRef]
  100. Sirisawat, P.; Kiatcharoenpol, T. Fuzzy AHP-TOPSIS approaches to prioritizing solutions for reverse logistics barriers. Comput. Ind. Eng. 2018, 117, 303–318. [Google Scholar] [CrossRef]
  101. Leem, C.S.; Kim, S. Introduction to an integrated methodology for development and implementation of enterprise information systems. J. Syst. Softw. 2002, 60, 249–261. [Google Scholar] [CrossRef]
  102. ISO 9126-1. Software Engineering—Product Quality—Part 1: Quality Model, ISO/IEC 9126-1:2001; International Organization for Standardization: London, UK, 2001.
  103. Wang, T.C.; Lee, H.D. Developing a fuzzy TOPSIS approach based on subjective weights and objective weights. Expert Syst. Appl. 2009, 36, 8980–8985. [Google Scholar] [CrossRef]
  104. Bana e Costa, C.A.; Vansnick, J.C. MACBETH—An interactive path towards the construction of cardinal value functions. Int. Trans. Oper. Res. 1994, 1, 489–500. [Google Scholar] [CrossRef]
  105. Bana e Costa, C.A.; Ensslin, L.; Correa, E.C.; Vansnick, J.C. Decision Support System in action: Integrated application in a multicriteria decision aid process. Eur. J. Oper. Res. 1999, 113, 315–335. [Google Scholar] [CrossRef]
  106. Bana e Costa, C.A.; De Corte, J.M.; Vansnick, J.C. Macbeth. Int. J. Inf. Technol. Decis. Mak. 2012, 11, 359–387. [Google Scholar] [CrossRef]
  107. Montignac, F.; Noirot, I.; Chaudourne, S. Multi -Criteria evaluation of on-board hydrogen storage technologies using the MACBETH approach. Int. J. Hydrogen Energy 2009, 34, 4561–4568. [Google Scholar] [CrossRef]
  108. Fakhfakh, N.; Verjus, H.; Pourraz, F.; Moreaux, P. Measuring the satisfaction degree of quality attributes requirements for services orchestrations. In Proceedings of the 4th International Conference on Communication Theory, Reliability, and Quality of Service, Budapest, Hungary, 17 April 2011; pp. 89–94. [Google Scholar]
  109. Karande, P.; Chakraborty, S. Using MACBETH method for supplier selection in manufacturing environment. Int. J. Ind. Eng. Comput. 2013, 4, 259–272. [Google Scholar] [CrossRef]
  110. Rodrigues, T.C. The MACBETH approach to health value measurement: Building a population health index in group processes. Procedia Technol. 2014, 16, 1361–1366. [Google Scholar] [CrossRef] [Green Version]
  111. Tosun, Ö. Using Macbeth Method for Technology Selection in Production Environment. Am. J. Data Min. Knowl. Discov. 2017, 2, 37–41. [Google Scholar] [CrossRef]
  112. Yazdi, A.K.; Esfeden, G.A. Designing robust model of six Sigma implementation based on critical successful factors and MACBETH. Int. J. Process. Manag. Benchmarking 2017, 7, 158–171. [Google Scholar] [CrossRef]
  113. Vieira, A.C.L.; Oliveira, M.D.; Bana e Costa, C.A. Enhancing knowledge construction processes within multicriteria decision analysis: The Collaborative Value Modelling framework. Omega 2020, 94, 102047. [Google Scholar] [CrossRef]
  114. Teotónio, I.; Cabral, M.; Cruz, C.O.; Silva, C.M. Decision support system for green roofs investments in residential buildings. J. Clean. Prod. 2020, 249, 119365. [Google Scholar] [CrossRef]
  115. Baltazar, M.E.; Silva, J. Spanish airports performance and efficiency benchmark. A PESA-AGB study. J. Air Transp. Manag. 2020, 89, 2020. [Google Scholar] [CrossRef]
  116. Pereira, M.A.; Machete, I.F.; Ferreira, D.C.; Marques, R.C. Using multi-criteria decision analysis to rank European health systems: The Beveridgian financing case. Socio-Econ. Plan. Sci. 2020, 72, 100913. [Google Scholar] [CrossRef]
  117. Bana e Costa, C.A.; De Corte, J.M.; Vansnick, J.C. On the mathematical foundations of MACBETH. In Multiple Criteria Decision Analysis. International Series in Operations Research & Management Science; Greco, S., Ehrgott, M., Figueira, J., Eds.; Springer: New York, NY, USA, 2016; p. 233. [Google Scholar]
  118. Bana e Costa, C.A.; De Corte, J.M.; Vansnick, J.C. On the Mathematical Foundations of MACBETH. In Multi Criteria Decision Analysis: State of the Art Surveys; Figueira, J., Greco, S., Ehrgott, M., Eds.; Springer: New York, NY, USA, 2005; pp. 409–442. [Google Scholar]
  119. Bana e Costa, C.A.; De Corte, J.M.; Vansnick, J.C. MACBETH. User’s Guide. Available online: http://m-macbeth.com/wp-content/uploads/2017/10/M-MACBETH-Users-Guide.pdf (accessed on 2 February 2021).
  120. MACBETH. Available online: http://m-macbeth.com/demo/ (accessed on 1 May 2021).
  121. Bana e Costa, C.A.; Chagas, M.P. A career choice problem: An example of how to use MACBETH to build a quantitative value model based on qualitative value judgments. Eur. J. Oper. Res. 2004, 153, 323–331. [Google Scholar] [CrossRef]
  122. Deng, H.; Yeh, C.H.; Willis, R.J. Inter-company comparison using modified TOPSIS with objective weights. Comput. Oper. Res. 2000, 27, 963–973. [Google Scholar] [CrossRef]
  123. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  124. Zou, Z.H.; Yi, Y.; Sun, J.N. Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. J. Environ. Sci. 2006, 18, 1020–1023. [Google Scholar] [CrossRef]
  125. Kacprzak, D. Objective Weights Based on Ordered Fuzzy Numbers for Fuzzy Multiple Criteria Decision-Making Methods. Entropy 2017, 19, 373. [Google Scholar] [CrossRef] [Green Version]
  126. De Luca, A.; Termini, S. A definition of a nonprobabilistic entropy in the setting of fuzzy sets theory. Inf. Control. 1972, 20, 301–312. [Google Scholar] [CrossRef] [Green Version]
  127. Pal, N.R.; Pal, S.K. Object background segmentation using new definitions of entropy. IEEE Proc. 1989, 136, 284–295. [Google Scholar] [CrossRef] [Green Version]
  128. Pal, N.R.; Pal, S.K. Entropy: A new definition and its applications. IEEE Trans. Syst. Man Cybernet. 1991, 21, 1260–1270. [Google Scholar] [CrossRef] [Green Version]
  129. Turning. Available online: https://account.turningtechnologies.com/account/ (accessed on 1 May 2021).
  130. Socrative. Meet Socrative. Available online: https://www.socrative.com/ (accessed on 1 May 2021).
  131. Quizizz. The 100% Engagement Platform. Available online: https://quizizz.com/ (accessed on 1 May 2021).
  132. Mentimeter. Create Interactive Presentations & Meetings, Wherever You Are. Available online: https://www.mentimeter.com/ (accessed on 1 May 2021).
  133. Kahoot! Available online: https://kahoot.com/ (accessed on 1 May 2021).
  134. García, L.I.; Muñoz, A. Localización empresarial en Aragón: Una aplicación empírica de la ayuda a la decisión multicriterio tipo ELECTRE I y III. Robustez de los resultados obtenidos. Rev. Métodos Cuantitativos Econ. Y Empresa 2009, 7, 31–56. [Google Scholar]
  135. Sałabun, W.; Urbaniak, K. A New Coefficient of Rankings Similarity in Decision-Making Problems. In Computational Science—ICCS 2020. ICCS 2020. Lecture Notes in Computer Science; Krzhizhanovskaya, V., Ed.; Springer: Cham, Switzerland, 2020; Volume 12138. [Google Scholar]
Figure 1. The membership function of TFN.
Figure 2. Flow diagram of the study.
Figure 3. Hierarchy.
Figure 4. (a) MACBETH judgement matrix of the Academic performance criterion; (b) value function.
Figure 5. MACBETH judgement matrix.
Figure 6. Weightings and value ranges consistent with the judgements given by the decision maker.
Figure 7. Sensitivity analysis of the model proposed: (a) Capacity to combine with other methodologies or novel teaching tools; (b) Academic performance; (c) Flexibility in the creation of questionnaires; (d) Students’ perceptions; (e) Results reports; (f) Versatility in assessment of the questionnaire; (g) Capacity for group competition; (h) Ease of use; (i) Support; (j) Control of learning rate.
Table 1. Linguistic variables for the ratings.
Linguistic Terms for the Ratings | Fuzzy Number (l, m, u) | Inverse Fuzzy Number (1/u, 1/m, 1/l)
Very Poor | (0, 0, 1) | (1, 0, 0)
Poor | (0, 1, 3) | (1/3, 1, 0)
Medium Poor | (1, 3, 5) | (1/5, 1/3, 1)
Fair | (3, 5, 7) | (1/7, 1/5, 1/3)
Medium Good | (5, 7, 9) | (1/9, 1/7, 1/5)
Good | (7, 9, 10) | (1/10, 1/9, 1/7)
Very Good | (9, 10, 10) | (1/10, 1/10, 1/9)
Table 2. MACBETH semantic scale.
Semantic Scale | Equivalent Numerical Scale | Description
Null | 0 | Indifference between the alternatives
Very weak | 1 | An alternative is very weakly attractive over another
Weak | 2 | An alternative is weakly attractive over another
Moderate | 3 | An alternative is moderately attractive over another
Strong | 4 | An alternative is strongly attractive over another
Very strong | 5 | An alternative is very strongly attractive over another
Extreme | 6 | An alternative is extremely attractive over another
Table 3. Descriptor associated with the Academic performance criterion.
Scale Levels | Description
L21 | A considerable improvement (>10%) can be seen in academic results from the use of continuous formative activities over the year via the app (Good).
L22 | Some improvement (5–10%) can be seen in academic results from the use of continuous formative activities over the year via the app.
L23 | A small improvement (up to 5%) can be seen in academic results from the use of continuous formative activities over the year via the app.
L24 | No improvement can be seen in academic results from the use of continuous activities over the year via the app (Neutral).
L25 | Results are worse after using continuous activities over the year via the app.
Table 4. Weightings of the criteria obtained via the MACBETH approach in the form of triangular fuzzy numbers.
Criteria | l | m | u
Capacity to combine with other methodologies or novel teaching tools | 0.037 | 0.067 | 0.079
Academic performance | 0.138 | 0.167 | 0.193
Flexibility in the creation of questionnaires | 0.123 | 0.133 | 0.151
Students' perceptions | 0.123 | 0.133 | 0.151
Results reports | 0.123 | 0.133 | 0.151
Versatility in assessment of the questionnaire | 0.086 | 0.100 | 0.113
Capacity for group competition | 0.003 | 0.033 | 0.064
Ease of use | 0.037 | 0.067 | 0.079
Support | 0.037 | 0.067 | 0.079
Control of learning rate | 0.086 | 0.100 | 0.113
Table 5. Crisp objective weights from fuzzy Shannon entropy, De Luca and Termini entropy, and exponential entropy, and the errors MAD and CFE.
Criteria | Fuzzy Shannon Entropy | De Luca and Termini Entropy | Exponential Entropy | Error Shannon − De Luca and Termini | Error Shannon − Exponential
Capacity to combine with other methodologies or novel teaching tools | 0.033 | 0.179 | 0.107 | −0.146 | −0.074
Academic performance | 0.014 | 0.348 | 0.099 | −0.334 | −0.085
Flexibility in the creation of questionnaires | 0.012 | 0.369 | 0.105 | −0.357 | −0.093
Students' perceptions | 0.094 | 0.014 | 0.098 | 0.080 | −0.004
Results reports | 0.096 | 0.015 | 0.100 | 0.081 | −0.004
Versatility in assessment of the questionnaire | 0.131 | 0.001 | 0.096 | 0.130 | 0.035
Capacity for group competition | 0.114 | 0.015 | 0.099 | 0.099 | 0.015
Ease of use | 0.073 | 0.055 | 0.103 | 0.018 | −0.030
Support | 0.175 | 0.000 | 0.090 | 0.175 | 0.085
Control of learning rate | 0.258 | 0.003 | 0.104 | 0.255 | 0.154
MAD | | | | 0.168 | 0.058
CFE | | | | 0.001 | −0.001
Table 6. The normalised decision matrix.
Alternatives | C1 | C2 | C3 | C4 | C5
KAHOOT! | (0.030, 0.075, 0.114) | (0.280, 0.257, 0.233) | (0.097, 0.125, 0.152) | (0.219, 0.237, 0.244) | (0.167, 0.194, 0.225)
QUIZIZZ | (0.152, 0.175, 0.205) | (0.120, 0.143, 0.163) | (0.161, 0.175, 0.196) | (0.281, 0.263, 0.244) | (0.233, 0.250, 0.250)
SOCRATIVE | (0.273, 0.250, 0.227) | (0.200, 0.200, 0.209) | (0.226, 0.225, 0.217) | (0.219, 0.237, 0.244) | (0.300, 0.278, 0.250)
TURNINGPOINT | (0.273, 0.250, 0.227) | (0.280, 0.257, 0.233) | (0.226, 0.225, 0.217) | (0.000, 0.000, 0.024) | (0.300, 0.278, 0.250)
MENTIMETER | (0.273, 0.250, 0.227) | (0.120, 0.143, 0.163) | (0.290, 0.250, 0.217) | (0.280, 0.263, 0.244) | (0.000, 0.000, 0.025)
Alternatives | C6 | C7 | C8 | C9 | C10
KAHOOT! | (0.292, 0.290, 0.278) | (0.167, 0.200, 0.219) | (0.107, 0.143, 0.175) | (0.529, 0.435, 0.333) | (0.045, 0.103, 0.143)
QUIZIZZ | (0.389, 0.290, 0.278) | (0.500, 0.400, 0.313) | (0.321, 0.286, 0.250) | (0.294, 0.304, 0.300) | (0.227, 0.241, 0.257)
SOCRATIVE | (0.038, 0.097, 0.139) | (0.167, 0.200, 0.219) | (0.250, 0.257, 0.250) | (0.176, 0.217, 0.233) | (0.409, 0.345, 0.286)
TURNINGPOINT | (0.529, 0.323, 0.278) | (0.000, 0.000, 0.031) | (0.321, 0.286, 0.250) | (0.000, 0.000, 0.033) | (0.000, 0.000, 0.029)
MENTIMETER | (0.000, 0.000, 0.028) | (0.167, 0.200, 0.219) | (0.000, 0.029, 0.075) | (0.000, 0.043, 0.100) | (0.318, 0.310, 0.286)
Table 7. The fuzzy Shannon entropy vector, the fuzzy diversification vector, and the fuzzy criteria weights. Each entry is a triangular fuzzy number (l, m, u).
 | C1 | C2 | C3 | C4 | C5
ẽ_j | (0.904, 0.956, 0.983) | (0.959, 0.980, 0.993) | (0.964, 0.983, 0.994) | (0.857, 0.861, 0.911) | (0.845, 0.855, 0.912)
d̃_j | (0.096, 0.044, 0.017) | (0.041, 0.020, 0.007) | (0.036, 0.017, 0.006) | (0.143, 0.139, 0.089) | (0.155, 0.145, 0.088)
w̃_j | (0.054, 0.021, 0.025) | (0.023, 0.010, 0.010) | (0.020, 0.008, 0.009) | (0.081, 0.067, 0.133) | (0.087, 0.070, 0.132)
 | C6 | C7 | C8 | C9 | C10
ẽ_j | (0.738, 0.814, 0.896) | (0.772, 0.828, 0.913) | (0.817, 0.898, 0.956) | (0.623, 0.740, 0.876) | (0.749, 0.000, 0.898)
d̃_j | (0.262, 0.186, 0.104) | (0.228, 0.172, 0.087) | (0.183, 0.102, 0.044) | (0.377, 0.260, 0.124) | (0.251, 1.000, 0.102)
w̃_j | (0.148, 0.089, 0.156) | (0.129, 0.082, 0.130) | (0.103, 0.049, 0.066) | (0.213, 0.125, 0.186) | (0.142, 0.480, 0.153)
Table 8. Fuzzy weighted normalised decision matrix with W_S = 0.5 and W_O = 0.5, in the case of objective weights from fuzzy Shannon entropy.
Criteria | Alternatives | (l, m, u)
Capacity to combine with other methodologies or novel teaching tools | KAHOOT! | (0.005, 0.013, 0.026)
 | QUIZIZZ | (0.023, 0.031, 0.047)
 | SOCRATIVE | (0.041, 0.044, 0.052)
 | TURNINGPOINT | (0.041, 0.044, 0.052)
 | MENTIMETER | (0.041, 0.044, 0.052)
Academic performance | KAHOOT! | (0.057, 0.080, 0.102)
 | QUIZIZZ | (0.024, 0.045, 0.071)
 | SOCRATIVE | (0.041, 0.062, 0.092)
 | TURNINGPOINT | (0.057, 0.080, 0.102)
 | MENTIMETER | (0.024, 0.045, 0.071)
Flexibility in the creation of questionnaires | KAHOOT! | (0.022, 0.036, 0.056)
 | QUIZIZZ | (0.036, 0.050, 0.072)
 | SOCRATIVE | (0.050, 0.064, 0.080)
 | TURNINGPOINT | (0.050, 0.064, 0.080)
 | MENTIMETER | (0.065, 0.071, 0.080)
Students' perceptions | KAHOOT! | (0.071, 0.090, 0.142)
 | QUIZIZZ | (0.092, 0.100, 0.142)
 | SOCRATIVE | (0.071, 0.090, 0.142)
 | TURNINGPOINT | (0.000, 0.000, 0.014)
 | MENTIMETER | (0.092, 0.100, 0.142)
Results reports | KAHOOT! | (0.053, 0.071, 0.128)
 | QUIZIZZ | (0.074, 0.092, 0.142)
 | SOCRATIVE | (0.095, 0.102, 0.142)
 | TURNINGPOINT | (0.095, 0.102, 0.142)
 | MENTIMETER | (0.000, 0.000, 0.014)
Versatility in assessment of the questionnaire | KAHOOT! | (0.082, 0.086, 0.135)
 | QUIZIZZ | (0.082, 0.086, 0.135)
 | SOCRATIVE | (0.012, 0.029, 0.068)
 | TURNINGPOINT | (0.105, 0.095, 0.135)
 | MENTIMETER | (0.000, 0.000, 0.014)
Capacity for group competition | KAHOOT! | (0.020, 0.029, 0.068)
 | QUIZIZZ | (0.059, 0.058, 0.097)
 | SOCRATIVE | (0.020, 0.029, 0.068)
 | TURNINGPOINT | (0.000, 0.000, 0.010)
 | MENTIMETER | (0.020, 0.029, 0.068)
Ease of use | KAHOOT! | (0.021, 0.029, 0.051)
 | QUIZIZZ | (0.063, 0.058, 0.073)
 | SOCRATIVE | (0.049, 0.052, 0.073)
 | TURNINGPOINT | (0.063, 0.058, 0.073)
 | MENTIMETER | (0.000, 0.006, 0.022)
Support | KAHOOT! | (0.113, 0.096, 0.133)
 | QUIZIZZ | (0.063, 0.067, 0.120)
 | SOCRATIVE | (0.038, 0.048, 0.093)
 | TURNINGPOINT | (0.000, 0.000, 0.013)
 | MENTIMETER | (0.000, 0.010, 0.040)
Control of learning rate | KAHOOT! | (0.011, 0.087, 0.067)
 | QUIZIZZ | (0.057, 0.202, 0.120)
 | SOCRATIVE | (0.103, 0.290, 0.133)
 | TURNINGPOINT | (0.000, 0.000, 0.013)
 | MENTIMETER | (0.080, 0.261, 0.133)
Table 9. The distances, normalised closeness coefficient, and ranking of alternatives in the case of W_S = 0.5 and W_O = 0.5, using objective weights from fuzzy Shannon entropy.
Alternatives | d_i^+ | d_i^− | Normalised CC | Ranking
KAHOOT! | 17.3427 | 0.6973 | 0.2052 | 3rd
QUIZIZZ | 17.2100 | 0.8302 | 0.2439 | 1st
SOCRATIVE | 17.2483 | 0.8045 | 0.2365 | 2nd
TURNINGPOINT | 17.5048 | 0.5151 | 0.1516 | 5th
MENTIMETER | 17.4964 | 0.5548 | 0.1628 | 4th
Table 10. The distances, normalised closeness coefficient, and ranking of alternatives in the case of W_S = 0.5 and W_O = 0.5, using objective weights from fuzzy De Luca and Termini entropy.
Alternatives | d_i^+ | d_i^- | Normalised CC | Ranking
KAHOOT! | 17.3549 | 0.6924 | 0.1843 | 4th
QUIZIZZ | 17.2916 | 0.7609 | 0.2020 | 3rd
SOCRATIVE | 17.2017 | 0.8487 | 0.2255 | 1st
TURNINGPOINT | 17.2623 | 0.7753 | 0.2063 | 2nd
MENTIMETER | 17.3602 | 0.6834 | 0.1819 | 5th
Table 11. The distances, normalised closeness coefficient, and ranking of alternatives in the case of W_S = 0.5 and W_O = 0.5, using objective weights from exponential entropy.
Alternatives | d_i^+ | d_i^- | Normalised CC | Ranking
KAHOOT! | 17.3427 | 0.6905 | 0.1980 | 3rd
QUIZIZZ | 17.2152 | 0.8104 | 0.2327 | 1st
SOCRATIVE | 17.2274 | 0.7986 | 0.2291 | 2nd
TURNINGPOINT | 17.3949 | 0.6232 | 0.1789 | 4th
MENTIMETER | 17.4654 | 0.5630 | 0.1613 | 5th
Table 12. Positive, negative, and net outranking flows in PROMETHEE II.
Alternatives | KAHOOT! | QUIZIZZ | SOCRATIVE | TURNINGPOINT | MENTIMETER
ϕ+(A) | 1.4400 | 1.9630 | 1.8110 | 1.5280 | 1.2000
ϕ−(A) | 1.7260 | 1.2840 | 1.2780 | 1.6660 | 1.9880
ϕ(A) | −0.2860 | 0.6790 | 0.5330 | −0.1380 | −0.7880
Ranking | 4th | 1st | 2nd | 3rd | 5th
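The net flow row of Table 12 is simply ϕ(A) = ϕ+(A) − ϕ−(A), and the PROMETHEE II ranking orders the alternatives by decreasing net flow. A short check:

```python
# Sketch: net outranking flow in PROMETHEE II, phi = phi_plus - phi_minus,
# reproducing the last two rows of Table 12.

flows = {
    "KAHOOT!":      (1.4400, 1.7260),
    "QUIZIZZ":      (1.9630, 1.2840),
    "SOCRATIVE":    (1.8110, 1.2780),
    "TURNINGPOINT": (1.5280, 1.6660),
    "MENTIMETER":   (1.2000, 1.9880),
}

net = {a: plus - minus for a, (plus, minus) in flows.items()}
ranking = sorted(net, key=net.get, reverse=True)
print(net)       # KAHOOT!: -0.286, QUIZIZZ: 0.679, ...
print(ranking)   # ['QUIZIZZ', 'SOCRATIVE', 'TURNINGPOINT', 'KAHOOT!', 'MENTIMETER']
```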
Table 13. Rankings in the ascending and descending distillation and final ranking in ELECTRE III.
Alternatives | Ranking in the Ascending Distillation | Ranking in the Descending Distillation | Ranking in the Median Preorder
KAHOOT! | 2nd | 3rd | 3rd
QUIZIZZ | 1st | 1st | 1st
SOCRATIVE | 2nd | 2nd | 2nd
TURNINGPOINT | 2nd | 3rd | 3rd
MENTIMETER | 3rd | 3rd | 4th
Table 14. Dominance matrix in ELECTRE III. P+ means that alternative A_i is preferred to alternative A_j; I means that alternative A_i is equivalent to alternative A_j; P− means that alternative A_j is preferred to A_i.
Alternatives | KAHOOT! | QUIZIZZ | SOCRATIVE | TURNINGPOINT | MENTIMETER
KAHOOT! | 0 | P− | P− | I | P+
QUIZIZZ | P+ | 0 | P+ | P+ | P+
SOCRATIVE | P+ | P− | 0 | P+ | P+
TURNINGPOINT | I | P− | P− | 0 | P+
MENTIMETER | P− | P− | P− | P− | 0
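The relations in Table 14 are consistent with intersecting the two complete preorders of Table 13: A_i P+ A_j when A_i is ranked at least as well in both distillations and strictly better in one, I when the two rank pairs coincide, and P− in the symmetric case (incomparability, also possible in ELECTRE III, does not occur here). The sketch below is a hypothetical reconstruction of the matrix from the distillation ranks, not the distillation algorithm itself:

```python
# Illustration: deriving the relations of Table 14 from the two distillations
# in Table 13 (lower rank = better).

ranks = {  # (ascending distillation, descending distillation)
    "KAHOOT!":      (2, 3),
    "QUIZIZZ":      (1, 1),
    "SOCRATIVE":    (2, 2),
    "TURNINGPOINT": (2, 3),
    "MENTIMETER":   (3, 3),
}

def relation(a, b):
    (a1, a2), (b1, b2) = ranks[a], ranks[b]
    if (a1, a2) == (b1, b2):
        return "I"
    if a1 <= b1 and a2 <= b2:
        return "P+"
    if a1 >= b1 and a2 >= b2:
        return "P-"
    return "R"  # incomparable (does not occur with these data)

for a in ranks:
    print(a, [relation(a, b) if a != b else "0" for b in ranks])
```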
Table 15. Values of S_j, R_j, and Q_j and the corresponding rankings in fuzzy VIKOR, using objective weights from fuzzy Shannon entropy, subjective weights from MACBETH, and W_S = 0.5 and W_O = 0.5.
Alternatives | S_j (Value / Ranking) | R_j (Value / Ranking) | Q_j (Value / Ranking)
KAHOOT! | 0.3483 / 4th | 0.1266 / 5th | 0.3281 / 5th
QUIZIZZ | 0.2177 / 2nd | 0.0636 / 1st | 0.0594 / 1st
SOCRATIVE | 0.2170 / 1st | 0.0651 / 2nd | 0.0632 / 2nd
TURNINGPOINT | 0.3365 / 3rd | 0.1247 / 4th | 0.3144 / 4th
MENTIMETER | 0.3924 / 5th | 0.0954 / 3rd | 0.2710 / 3rd
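As a point of reference, in the crisp formulation of VIKOR the compromise index Q_i combines the group utility S_i and the individual regret R_i as

Q_i = v (S_i − S*) / (S⁻ − S*) + (1 − v) (R_i − R*) / (R⁻ − R*),

where S* and R* are the best (smallest) and S⁻ and R⁻ the worst (largest) values of S and R over the alternatives, and v (commonly 0.5) weights the strategy of the majority of criteria. The values in Table 15 are obtained with the fuzzy counterpart of this aggregation, so the crisp formula is given only for orientation and does not reproduce them exactly.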
Table 16. Comparison of ranking with different MCDA techniques using objective weights from fuzzy Shannon entropy and subjective weights from MACBETH and W_S = 0.5 and W_O = 0.5.
Alternatives | MACBETH + Fuzzy TOPSIS | Fuzzy VIKOR | PROMETHEE II | ELECTRE III
KAHOOT! | 3rd | 5th | 4th | 3rd
QUIZIZZ | 1st | 1st | 1st | 1st
SOCRATIVE | 2nd | 2nd | 2nd | 2nd
TURNINGPOINT | 5th | 4th | 3rd | 3rd
MENTIMETER | 4th | 3rd | 5th | 4th
Table 17. WS coefficients of rankings similarity between the ranking of the proposed MACBETH and fuzzy TOPSIS model and those of the other MCDA techniques.
 | FUZZY VIKOR | PROMETHEE II | ELECTRE III
WS coefficient | 0.846 | 0.901 | 0.984
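The coefficients in Table 17 are consistent with the WS coefficient of rankings similarity proposed by Sałabun and Urbaniak, taking the MACBETH and fuzzy TOPSIS ranking of Table 16 as the reference ranking. A short Python check, with the rankings hard-coded from Table 16:

```python
# Sketch: WS coefficient of rankings similarity.
# WS = 1 - sum_i 2^(-Rx_i) * |Rx_i - Ry_i| / max(|Rx_i - 1|, |Rx_i - N|),
# where Rx is the reference ranking and Ry the ranking being compared.

def ws(rx, ry):
    n = len(rx)
    return 1.0 - sum(
        2.0 ** (-rx[a]) * abs(rx[a] - ry[a]) / max(abs(rx[a] - 1), abs(rx[a] - n))
        for a in rx
    )

reference = {"KAHOOT!": 3, "QUIZIZZ": 1, "SOCRATIVE": 2, "TURNINGPOINT": 5, "MENTIMETER": 4}
others = {
    "Fuzzy VIKOR":  {"KAHOOT!": 5, "QUIZIZZ": 1, "SOCRATIVE": 2, "TURNINGPOINT": 4, "MENTIMETER": 3},
    "PROMETHEE II": {"KAHOOT!": 4, "QUIZIZZ": 1, "SOCRATIVE": 2, "TURNINGPOINT": 3, "MENTIMETER": 5},
    "ELECTRE III":  {"KAHOOT!": 3, "QUIZIZZ": 1, "SOCRATIVE": 2, "TURNINGPOINT": 3, "MENTIMETER": 4},
}

for name, ranking in others.items():
    print(name, round(ws(reference, ranking), 3))   # 0.846, 0.901, 0.984
```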
Table 18. Sensitivity analysis with objective weights from fuzzy Shannon entropy.
Entries are Normalised CC / Ranking.
Alternatives | W_S = 0, W_O = 1 | W_S = 0.2, W_O = 0.8 | W_S = 0.4, W_O = 0.6 | W_S = 0.5, W_O = 0.5 | W_S = 0.6, W_O = 0.4 | W_S = 0.8, W_O = 0.2 | W_S = 1, W_O = 0
KAHOOT! | 0.2063 / 3rd | 0.2054 / 3rd | 0.2056 / 3rd | 0.2052 / 3rd | 0.2047 / 3rd | 0.2035 / 3rd | 0.2024 / 3rd
QUIZIZZ | 0.2637 / 1st | 0.2556 / 1st | 0.2480 / 1st | 0.2439 / 1st | 0.2400 / 1st | 0.2333 / 1st | 0.2273 / 2nd
SOCRATIVE | 0.2454 / 2nd | 0.2420 / 2nd | 0.2383 / 2nd | 0.2365 / 2nd | 0.2347 / 2nd | 0.2312 / 2nd | 0.2283 / 1st
TURNINGPOINT | 0.1191 / 5th | 0.1324 / 5th | 0.1449 / 5th | 0.1516 / 5th | 0.1579 / 5th | 0.1701 / 4th | 0.1805 / 4th
MENTIMETER | 0.1655 / 4th | 0.1646 / 4th | 0.1632 / 4th | 0.1628 / 4th | 0.1626 / 4th | 0.1619 / 5th | 0.1615 / 5th
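Tables 18, 19, and 20 are produced by repeating the whole procedure for each split of the weight shares W_S and W_O = 1 − W_S: the subjective and objective weights are blended, the fuzzy weighted normalised matrix is rebuilt, and the normalised closeness coefficients are recomputed. The self-contained toy example below, with hypothetical data for two criteria and three alternatives (not the paper's data) and the vertex method assumed for distances between triangular fuzzy numbers, illustrates this loop:

```python
# Toy sensitivity loop for a simplified fuzzy TOPSIS with combined weights.
# All numbers are hypothetical; the ideal solutions (1,1,1) and (0,0,0) and the
# normalisation scheme are common choices, not necessarily those of the paper.

from math import sqrt

def d(a, b):    # vertex distance between two triangular fuzzy numbers
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def mult(a, b):  # approximate componentwise product of two triangular fuzzy numbers
    return tuple(x * y for x, y in zip(a, b))

R = {"A1": [(0.6, 0.8, 1.0), (0.2, 0.4, 0.6)],
     "A2": [(0.4, 0.6, 0.8), (0.6, 0.8, 1.0)],
     "A3": [(0.2, 0.4, 0.6), (0.4, 0.6, 0.8)]}
w_subj = [0.7, 0.3]                         # crisp subjective weights
w_obj = [(0.4, 0.5, 0.6), (0.4, 0.5, 0.6)]  # fuzzy objective weights

for W_S in (0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0):
    W_O = 1.0 - W_S
    w = [tuple(W_S * ws + W_O * c for c in wo) for ws, wo in zip(w_subj, w_obj)]
    V = {a: [mult(r, wj) for r, wj in zip(ratings, w)] for a, ratings in R.items()}
    fpis = [(1, 1, 1)] * 2   # fuzzy positive ideal solution
    fnis = [(0, 0, 0)] * 2   # fuzzy negative ideal solution
    cc = {}
    for a, vals in V.items():
        d_plus = sum(d(v, p) for v, p in zip(vals, fpis))
        d_minus = sum(d(v, n) for v, n in zip(vals, fnis))
        cc[a] = d_minus / (d_plus + d_minus)
    total = sum(cc.values())
    ranking = sorted(cc, key=cc.get, reverse=True)
    print(f"W_S={W_S:.1f}", {a: round(cc[a] / total, 3) for a in cc}, ranking)
```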
Table 19. Sensitivity analysis with objective weights from De Luca and Termini entropy.
Entries are Normalised CC / Ranking.
Alternatives | W_S = 0, W_O = 1 | W_S = 0.2, W_O = 0.8 | W_S = 0.4, W_O = 0.6 | W_S = 0.5, W_O = 0.5 | W_S = 0.6, W_O = 0.4 | W_S = 0.8, W_O = 0.2 | W_S = 1, W_O = 0
KAHOOT! | 0.1672 / 5th | 0.1733 / 5th | 0.1805 / 5th | 0.1843 / 4th | 0.1874 / 4th | 0.1950 / 3rd | 0.2024 / 3rd
QUIZIZZ | 0.1811 / 4th | 0.1887 / 4th | 0.1976 / 3rd | 0.2020 / 3rd | 0.2073 / 2nd | 0.2172 / 2nd | 0.2273 / 2nd
SOCRATIVE | 0.2238 / 2nd | 0.2244 / 1st | 0.2252 / 1st | 0.2255 / 1st | 0.2262 / 1st | 0.2270 / 1st | 0.2283 / 1st
TURNINGPOINT | 0.2297 / 1st | 0.2212 / 2nd | 0.2110 / 2nd | 0.2063 / 2nd | 0.2015 / 3rd | 0.1910 / 4th | 0.1805 / 4th
MENTIMETER | 0.1982 / 3rd | 0.1924 / 3rd | 0.1857 / 4th | 0.1819 / 5th | 0.1777 / 5th | 0.1698 / 5th | 0.1615 / 5th
Table 20. Sensitivity analysis with objective weights from the exponential entropy of Pal and Pal.
Entries are Normalised CC / Ranking.
Alternatives | W_S = 0, W_O = 1 | W_S = 0.2, W_O = 0.8 | W_S = 0.4, W_O = 0.6 | W_S = 0.5, W_O = 0.5 | W_S = 0.6, W_O = 0.4 | W_S = 0.8, W_O = 0.2 | W_S = 1, W_O = 0
KAHOOT! | 0.1928 / 3rd | 0.1946 / 3rd | 0.1969 / 3rd | 0.1980 / 3rd | 0.1991 / 3rd | 0.2008 / 3rd | 0.2024 / 3rd
QUIZIZZ | 0.2400 / 1st | 0.2369 / 1st | 0.2339 / 1st | 0.2327 / 1st | 0.2315 / 1st | 0.2292 / 1st | 0.2273 / 2nd
SOCRATIVE | 0.2298 / 2nd | 0.2295 / 2nd | 0.2292 / 2nd | 0.2291 / 2nd | 0.2289 / 2nd | 0.2287 / 2nd | 0.2283 / 1st
TURNINGPOINT | 0.1762 / 4th | 0.1777 / 4th | 0.1786 / 4th | 0.1789 / 4th | 0.1790 / 4th | 0.1800 / 4th | 0.1805 / 4th
MENTIMETER | 0.1612 / 5th | 0.1613 / 5th | 0.1614 / 5th | 0.1613 / 5th | 0.1615 / 5th | 0.1613 / 5th | 0.1615 / 5th