Article

SMART: Selection Model for Assessment Resources and Techniques

by Isabel C. Gil-García 1,† and Ana Fernández-Guillamón 2,*,†
1 Faculty of Engineering, Distance University of Madrid (UDIMA), C/Coruña, km 38500, Collado Villalba, 28400 Madrid, Spain
2 Department of Applied Mechanics and Projects Engineering & Renewable Energy Research Institute, Universidad de Castilla–La Mancha, 02071 Albacete, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Educ. Sci. 2024, 14(1), 23; https://doi.org/10.3390/educsci14010023
Submission received: 22 November 2023 / Revised: 19 December 2023 / Accepted: 20 December 2023 / Published: 25 December 2023
(This article belongs to the Section Higher Education)

Abstract

The European Higher Education Area has ushered in a significant shift in university teaching, aiming to engage students more actively in classes. Professors have leveraged virtual platforms and external tools to introduce interactive tasks. With the proliferation of technology, educators face a challenge in choosing the most suitable approach. This paper presents SMART (Selection Model for Assessment Resources and Techniques), a methodology that determines the optimal assessment activities for university-level education. The methodology employs multicriteria decision-making techniques, specifically AHP and TOPSIS methods, to optimize activities based on various subject-, lecturer-, activity-, and student-related criteria. According to SMART, the top five assessment tasks are group and individual report submissions, workshops, complex H5P activities, and questionnaires. Therefore, it is advisable to prioritize these activities based on the methodology’s results, emphasizing their importance over other assessment methods.

1. Introduction

The first attempt to create the European Higher Education Area (EHEA) was in 1999, when 29 education ministers signed the “Bologna Declaration”. Currently, the EHEA consists of 48 countries, and its implementation has involved a major change in most universities, moving from a traditional approach (teacher-focused) to a more student-centred approach [1]. All this has led to more dynamic classes, usually through interactive tasks based on information and communication technologies (ICTs) [2]. In fact, some authors consider ICTs to be an essential element in 21st-century education [3,4]. Thus, universities usually have virtual platforms, called learning management systems, where all the necessary elements of the subjects are included. In Spain, the most common one is based on Moodle [5]. According to [6], Moodle is characterized by a series of functionalities grouped into two classes: resources (which include teaching materials: web pages, documents, presentations, etc.) and modules (which provide interaction between students and teachers: databases, assignments, forums, questionnaires, wikis, activities based on the HTML5 package (H5P), etc.) [7,8,9]. These modules, in turn, are related to different types of activities: creation, organization, delivery, communication, collaboration, and assessment [10]. In [11], the different Moodle modules are grouped into the activity classes previously indicated.
Along with the variety of Moodle modules, in recent years, numerous virtual tools have also appeared (such as Kahoot! [12], Socrative [13], Quizizz [14] or Genially [15]), which allow for the gamifying of classes [16,17]. Gamification is defined as a methodology used to increase motivation, competitiveness, and people’s effort by using typical game techniques [18,19]. Gamification can be conducted individually (each student competes against the rest of their classmates) or cooperatively (in groups); in the latter case, in addition to the fun and motivating dynamics created by gamification, participation and interpersonal relationships are also encouraged, creating a more suitable environment for learning [20].
This change in educational philosophy implies a transformation not only in the teaching methodology, but also in the way we evaluate learning. The transition towards student-centred teaching involves a more active and participatory approach, where students are seen as active agents in their own learning process. This shift requires assessments that go beyond simple memorization tests and truly reflect the deep understanding, practical application, and critical skills that students acquire throughout their education [21]. Furthermore, the transition to learner-centred education demands that students are actively involved in evaluating their own learning progress. In fact, students should have opportunities to assess their acquisition of knowledge and skills, as well as provide peer feedback, throughout a course. Involving students directly in these assessment processes, whether through self-evaluations, peer reviews, or other methods, can promote higher-order thinking, improved metacognition, and collaborative learning [22]. Some alternatives analysed in the methodology proposed in this article involve students in self-assessment and the evaluation of their peers.
For the lecturer, having all these alternatives often makes it difficult to decide which of them are the most appropriate, thus becoming a decision problem on which one to use. It should be remembered that any activity carried out in class must make sense for the subject, that is, encompass competencies and learning outcomes, and encourage collaborative learning or the use of new technologies [23]. In addition, from the lecturer’s point of view, it is also important to assess other aspects, such as the complexity of preparing and/or grading it. Multicriteria decision-making (MCDM) methodologies are a branch of operational research that deals with finding the optimal solution in complex scenarios (which include conflicting objectives and factors), allowing one to objectively assess the different alternatives and order them according to the criteria analysed [24]. These methodologies have become popular in recent years, and have been used in very diverse fields, such as the selection of materials for optimal design [25], the parametric optimization of machining processes [26], the selection of green technologies for the rehabilitation of existing buildings [27], the optimal selection of wind power plant locations [28], and many other applications [29].
However, as far as the authors of this work know, MCDM has not been used in educational contexts. Thus, here, SMART (Selection Model for Assessment Resources and Techniques) is proposed: a methodology that aims to determine the best activities to perform in class using two MCDM methods, the analytic hierarchy process (AHP), to weight the defined criteria, and the technique for order of preference by similarity to ideal solution (TOPSIS), to rank the different activity alternatives against those criteria. The rest of the document is organized as follows: Section 2 explains the methodology of the proposed model; the model evaluation is presented in Section 3, analysing the results in Section 4 and Section 5; finally, Section 6 summarizes the main conclusions obtained after this study.

2. Methodology

SMART aims to answer the following question: what assessment activities should be implemented in a class? At first glance, the question seems simple to answer, but it is actually a complex decision. To solve this problem, an optimized methodology is proposed that obtains the most appropriate assessment activities according to a set of defined criteria. Figure 1 shows an overview of SMART, including the following phases: “Data”, “Analysis”, and “Results”. While the “Data” phase is the basis of the model, the “Analysis” and “Results” phases make up its optimization process. They are described in the following sections.

2.1. Data Phase

This phase is the structure of the problem to be solved, where all the information is collected and prepared to execute the optimization process. In general, it is necessary to know the education level, modality, whether a virtual platform is used as the main medium for classes or as support, etc. Based on these data, criteria and alternatives are defined.
  • The criteria are the most important indicators that are considered in the evaluation process of the alternatives. They can be quantitative or qualitative and can be organized into main criteria (categories), subcriteria, etc. For example, for the problem analysed in this paper, several categories are involved: students, teachers, subject, and the activity in question. These categories deploy a series of indicators to be evaluated.
  • The alternatives are the different options involved in decision making. In this model, the alternatives correspond to the different assessment activities being analysed.
To effectively determine and define the criteria and alternatives, the authors recommend consulting different people related to the educational context (e.g., teachers, counsellors, etc.) with extensive experience in the field. In this way, it is possible to consider a more comprehensive set of criteria and alternatives.

2.2. Analysis Phase

First, the weight of the criteria is determined according to the AHP model and then the decision matrix is created. The authors recommend using AHP whenever the group of experts has extensive experience in the topic addressed (in this case, online teaching experience).

2.2.1. AHP

In 1980, Professor Saaty formulated the AHP [30], a method used in decision-making procedures. The hierarchy that characterizes the method places the goal to be achieved at the upper level, incorporates all levels of criteria and subcriteria linked to the model at the intermediate level, and places the alternatives to be evaluated at the base. The fundamentals of the method are mathematical and psychological, and it can be used to evaluate alternatives directly. In this work, however, it is used to calculate the weight of the criteria, as detailed below:
1. Design the hierarchical model of the problem. In this step, the problem is modelled with a three-level hierarchical structure, encompassing the goal or objective, criteria, and alternatives; see Figure 2.
2. Assignment and assessment of priorities. The objective of this step is to obtain the weight of the criteria based on their evaluation, which can be performed directly with a rating scale or indirectly through pairwise comparison of criteria, where the different priorities are recorded in a matrix R. Each entry is a positive numeric value that expresses the relative priority of the row criterion with respect to the column criterion; see Table 1.
The complete mathematical procedure is as follows. The aim is to determine a priority vector, as shown in Equation (1). For this, Equation (2) is proposed, from which the matrix $W$ is obtained by assigning the weights ($w_j$) associated with the comparison of criteria $C_j$ ($j = 1, 2, \ldots, n$). The elements of the matrix are positive numbers.

$$w = [w_1, w_2, \ldots, w_n] \qquad (1)$$

$$\begin{bmatrix} \frac{w_1}{w_1} & \cdots & \frac{w_1}{w_n} \\ \vdots & \ddots & \vdots \\ \frac{w_n}{w_1} & \cdots & \frac{w_n}{w_n} \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} = \mu \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} \qquad (2)$$

A simplified way to state the above equation is given in Equation (3):

$$W \cdot w = \mu \cdot w \qquad (3)$$

where, for row $i$, the sum of the elements is $w_i \cdot \sum_{j=1}^{n} \frac{1}{w_j}$, and, for column $j$, the sum of the elements is $\frac{1}{w_j} \cdot \sum_{i=1}^{n} w_i$.

Once the matrix is normalized, the sum of the columns yields the vector $w$. The method satisfies the mathematical properties of reciprocity, homogeneity, and consistency [24]. Specifically, consistency is verified through the consistency ratio ($CR$) indicator according to Equation (4), where $RI$ is the random consistency index (obtained from a simulation of 100,000 random reciprocal matrices [31]) and $CI$ is the consistency index:

$$CR = \frac{CI}{RI}; \qquad CI = \frac{\lambda_{\max} - n}{n - 1} \qquad (4)$$

Given the dimensions of the matrix considered here, the weights of the criteria are valid if $CR \leq 0.1$. This threshold varies for matrices of other dimensions.
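To make the weighting step concrete, the following minimal sketch (in Python with NumPy; it is not part of the original study) computes an approximate priority vector by column normalization and row averaging and then checks consistency via λmax, CI, and CR as in Equation (4). The example matrix and the RI value are illustrative assumptions taken from Saaty's classical table, not from the expert matrices of Section 4.

```python
import numpy as np

def ahp_weights(pairwise, ri):
    """Approximate AHP priority vector and consistency ratio (CR).

    pairwise : (n, n) reciprocal Saaty-scale comparison matrix.
    ri       : random consistency index for this matrix size (assumed value).
    """
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    # Normalize each column and average across rows: a standard
    # approximation of the principal eigenvector of the matrix.
    w = (a / a.sum(axis=0)).mean(axis=1)
    # Estimate the principal eigenvalue lambda_max from A.w.
    lambda_max = float(np.mean((a @ w) / w))
    ci = (lambda_max - n) / (n - 1)   # consistency index, Equation (4)
    cr = ci / ri                      # consistency ratio, Equation (4)
    return w, cr

# Toy 3x3 example (illustrative only, not the paper's expert matrices);
# RI = 0.58 is Saaty's classical value for n = 3.
R = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(R, ri=0.58)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```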

2.3. Decision Matrix

The decision matrix database for the TOPSIS method is the set of evaluations of each alternative with respect to the criteria; see Table 2, where:
  • Ai: alternatives, i = 1, …, m;
  • Cj: criteria, j = 1, …, n;
  • vij: evaluation of alternative Ai with respect to criterion Cj;
  • W: vector of weights associated with the criteria, obtained according to Section 2.2.1.

2.4. Results Phase: TOPSIS

Finally, the TOPSIS method is applied in order to obtain a ranking of alternatives. With the classification of alternatives provided by SMART, the most appropriate assessment activities to perform in class are determined. The TOPSIS method, created by [32], is based on the definition of the ideal and anti-ideal solutions for the selection of alternatives. It states that the selected alternatives should minimize the distance to the positive ideal solution and maximize the distance to the negative ideal solution.
Figure 3 shows a graphical representation of the method with five alternatives ( A 1 , , A 5 ), two criteria ( C 1 and C 2 ), and the plot of the ideal and anti-ideal points. Alternative A 3 is the closest to the ideal and A 2 and A 4 are the farthest from the anti-ideal. TOPSIS solves the problem by calculating the weighted distances to the ideal and anti-ideal for each alternative, through a multivariate data analysis [33].
The TOPSIS method algorithm is as follows [24] (a minimal code sketch is provided after the list):
  • Construction of the decision matrix.
  • Normalization of the decision matrix.
  • Construction of the normalized weighted matrix.
  • Determination of the positive and negative ideal solution.
  • Calculation of the distance of each alternative to the positive and negative ideal solutions.
  • Calculation of the relative proximity of each alternative to the positive ideal solution.
  • Ordering of the alternatives according to their relative proximity.
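The sketch below (Python with NumPy; not taken from the paper) follows these steps using vector normalization and Euclidean distances, which is the usual TOPSIS formulation; the data, criteria directions, and weights in the example are purely illustrative, and the paper's own implementation details may differ.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives with TOPSIS; returns the relative proximity PR
    of each alternative to the positive ideal solution (higher is better).

    decision : (m, n) matrix, rows = alternatives, columns = criteria.
    weights  : (n,) criteria weights (e.g., obtained with AHP).
    benefit  : (n,) booleans, True = maximize, False = minimize.
    """
    x = np.asarray(decision, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    # Steps 1-2: build and normalize the decision matrix (vector normalization).
    r = x / np.sqrt((x ** 2).sum(axis=0))
    # Step 3: weighted normalized matrix.
    v = r * w
    # Step 4: positive and negative ideal solutions, per criterion direction.
    ideal_pos = np.where(benefit, v.max(axis=0), v.min(axis=0))
    ideal_neg = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # Step 5: Euclidean distances to both ideal solutions.
    d_pos = np.sqrt(((v - ideal_pos) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - ideal_neg) ** 2).sum(axis=1))
    # Steps 6-7: relative proximity; sorting it gives the ranking.
    return d_neg / (d_pos + d_neg)

# Illustrative example: three alternatives, three criteria
# (the second criterion is minimized).
X = [[45, 4, 5],
     [30, 2, 3],
     [20, 5, 4]]
pr = topsis(X, weights=[0.5, 0.2, 0.3], benefit=[True, False, True])
print("PR:", np.round(pr, 3), "ranking (best first):", np.argsort(pr)[::-1] + 1)
```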

3. Model Evaluation

As stated in Section 2, SMART consists of three phases: “Data”, “Analysis”, and “Results”. Each of these phases is evaluated following the case study.

3.1. Data

The education level is university, specifically in a technical subject of an official master’s degree, designed under Royal Decree 861/2010 of 2 July [34], which determines the basic competencies that an official degree must develop according to its level (bachelor’s, master’s, or doctorate).
The educational modality is virtual, using version 4.2 of the Moodle learning platform, which is free software released under a General Public Licence.

3.1.1. Criteria

Table 3 shows the selected criteria organized by category.
With respect to the subject category, criteria C1–C3 are linked to the classification used by the Ministry of Education in the University Register [35]. Criterion C1 (general and transversal competencies) refers to personal and interpersonal competencies; specific competencies (C2) are related to competencies of a training nature and the achievement of knowledge related to the master's degree; the learning outcomes criterion (C3) encompasses a set of indicators that students are expected to understand, know, and be able to perform at the end of the subject [36]. Therefore, an activity is rated more positively the more learning outcomes and competencies (specific, general, and transversal) it encompasses.
In the lecturer category, the criteria correspond to the complexity of designing the activity from a technological point of view (C4) and the complexity of grading it (C5). That does not mean that activities with a simple design are better; quite the opposite. The aim is for teachers to be prepared to develop robust and interactive activities and to have the necessary tools for grading them, whether directly from the Moodle platform itself or by implementing new applications [37] capable of extracting and analysing the results. The objective will always be to promote student learning.
The activity category includes criteria related to the use of innovative new technologies (C6), as is the case with H5P activities [38]; the encouragement of collaborative learning (C7), through group activities in which students not only share knowledge but also participate in the assessment process [39]; and the integration of the activity within the platform where the course is developed (C8), Moodle in this case.
Finally, the student category includes the degree of difficulty of performing the activity (C9) from a technological point of view and the feedback from the teaching team (C10) received by the students; feedback is especially valuable for students, since mistakes are an important source of learning [40].

3.1.2. Alternatives

Ten assessment activities are designed, which make up the proposed alternatives (A1–A10); see Table 4.
The different case study alternatives are described below:
A1: Moodle assignment where students, based on the instructions specified in the assignment itself (activity content, format, structure, etc.), submit a report they have prepared individually. The activity has associated submission dates. To carry out the activity, they need advanced use of different programs: a word processor, a spreadsheet for statistical analysis and graph insertion, and the technical programs stipulated for the subject. The activity is graded with a points rubric, with individual feedback by section and general feedback for the entire activity.
A2: Same type as the previous one, but carried out in groups. The lecturer forms the different work groups beforehand, the activity content is much more extensive, and grading is performed per group, also with a rubric and with partial and overall feedback.
A3: Moodle questionnaire. It consists of blocks of questions of different types (multiple choice, true/false, single choice, match options, small calculations, fill-in texts, etc.). Once the questionnaire is completed, students can view their grade with brief feedback, as well as view their responses. The teaching staff prepares the questionnaire based on the random selection of questions from a question database.
A4: Moodle lesson. A group of pages with different types of information and associated questions of different modalities (essays, multiple choice, true/false, single choice, etc.). Movement between pages can follow different itineraries, depending on student responses. At the end of the lesson, the student sees brief feedback and their grade.
A5: Moodle workshop. As in alternative A1, students submit a report according to the professor's specifications. Grading is performed by the students themselves, through a rubric designed by the teacher. Self-assessment weighs 20% and peer assessment 80%.
A6: H5P activity. The H5P plugin is installed on the Moodle platform; therefore, the activity and its grading are contained in the classroom. It is composed of a multicolumn object made up of different types of activities: fill-in texts, drag images or texts, select in an image, etc.; see Figure 4. Students receive the grade immediately after finishing the activity, and the questions can be reviewed and edited before ending it. Designing the activity has required the teacher to learn beforehand about the different objects to be used.
A7: Moodle forum. Discussion on a subject topic; students can start a thread and the rest intervene, positively or not, in the different threads. At the end of the activity, the teacher grades both types of interventions. The grade is reflected in the grade book using a rubric.
A8: Moodle glossary. In a collaborative way, students prepare a list of definitions on a topic specified by the teacher. The teacher reflects the grade in the grade book.
A9: Moodle database. Students create a technical data sheet of a certain technology with the fields defined by the teacher. The teacher reflects the grade in the grade book.
A10: Activity with Genially. The activity is not integrated into the Moodle platform, but is an external web application. It is an interactive Escape Room-type activity, carried out in groups, comprising different tests, each of which provides a number; see Figure 5. When the students have the complete sequence of numbers, the Escape Room ends. The teacher reflects the grade in the grade book for the different groups, according to the time it takes each group to solve the activity.

4. Analysis

Continuing the methodology, the “Analysis” phase is entered, where the weights of the previously defined criteria are determined, and the decision matrix is created.

4.1. Criteria Weights

Using the described sequence of the AHP method, the weight of the criteria is determined. A group of three experts (Ex1, Ex2, and Ex3) participates in the decision process. They independently carry out the pairwise assessment of each criterion. The expert matrices are shown in Table 5, Table 6 and Table 7. An example of this assessment process is detailed below for expert 1 (Ex1) and the specific competencies criterion (C2); see Figure 6. It is moderately important compared to learning outcomes (C3) and the degree of difficulty to perform the activity (C9); strongly important compared to general and transversal competencies (C1); very strongly important compared to the use of new technologies (C6) and encourages collaborative learning (C7); and extremely important compared to complexity in preparing the activity (C4), grading of the activity (C5), integration with the platform (C8), and feedback from the teaching team (C10).
The C R indicator is calculated, resulting in the following: C R E x 1 = 0.0922 , C R E x 2 = 0.0928 , and C R E x 3 = 0.0972 , all < 0.10 . Therefore, the matrices are valid and the weights of the criteria are obtained; see Table 8.
The criteria with the highest individual weights are the specific competencies (C2) and learning outcomes (C3), representing approximately 31% and 23% of the total weight, respectively; see Figure 7.

4.2. Decision Matrix

The decision matrix, which is the cross-reference between the attributes of the criteria and the alternatives, is created according to the objective function and unit of each criterion, as shown in Table 9. Criteria C1–C3 are expressed as percentages of the total competencies (C1 and C2) or learning outcomes (C3). The remaining criteria are evaluated according to a Likert scale [41], as shown in Table 10. Table 11 finally shows the decision matrix that will be used in the following phase (“Results”).
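As a simple illustration of how these inputs could be encoded for the optimization step (an assumption on our part, not code from the paper), the criteria of Table 9 can be stored together with their unit and objective direction; the resulting list of maximize/minimize flags is exactly what a TOPSIS implementation such as the sketch in Section 2.4 expects.

```python
# Criteria from Table 9 as (description, unit, maximize?); illustrative encoding only.
CRITERIA = {
    "C1": ("General and transversal competencies", "%", True),
    "C2": ("Specific competencies", "%", True),
    "C3": ("Learning outcomes", "%", True),
    "C4": ("Complexity in preparing the activity", "Likert", False),
    "C5": ("Grading of the activity", "Likert", False),
    "C6": ("Use of new technologies", "Likert", True),
    "C7": ("Encourages collaborative learning", "Likert", True),
    "C8": ("Integration with the platform", "Likert", True),
    "C9": ("Degree of difficulty to perform the activity", "Likert", False),
    "C10": ("Feedback from the teaching team", "Likert", True),
}

# Maximize/minimize flags in criterion order, ready to be passed to TOPSIS.
benefit = [maximize for (_, _, maximize) in CRITERIA.values()]
print(benefit)
```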

5. Results

In the “Results” phase, the alternatives are evaluated according to the TOPSIS method described in the Methodology section (Section 2.4). Based on the criteria weights (Table 8) and the decision matrix (Table 11) from the previous step (“Analysis”; see Section 4), the matrix is normalized (Table 12) and the normalized weighted matrix is obtained. The relative proximity of each alternative to the positive ideal solution (PRi) is calculated, as shown in Table 13.
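For reference, the relative proximity reported in Table 13 corresponds to the standard TOPSIS closeness coefficient; as a worked check with the values of Table 13, alternative A2 gives

$$PR_i = \frac{d_i^-}{d_i^+ + d_i^-}, \qquad PR_{A_2} = \frac{0.1563}{0.0249 + 0.1563} \approx 0.863,$$

which matches the value in the last column of Table 13 up to rounding.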
The top five alternatives in the ranking obtained by SMART are the following: Assignment: Report (Group) (A2); Assignment: Report (Individual) (A1); Workshop (A5); Complex H5P Activity (A6); and Questionnaire (A3), as shown in Figure 8. These five alternatives represent 72% of the ideal solution; therefore, it can be confirmed that they are the best assessment activities for fulfilling the objective functions of each criterion within a technical subject of an official master's degree.

5.1. Discussion

The SMART methodology is useful both for educators and institutions.
The key implications of this methodology for educators are as follows:
  • Understanding which activities are optimal for assessment based on the specific criteria and educational context. In this particular case study, assignments, reports, workshops, complex H5P activities, and questionnaires were prioritized. However, depending on the criteria and educational context, traditional methods could also be suitable.
  • Understanding tradeoffs between different activity features based on the criteria evaluations.
  • Potentially saving preparation and grading time by replacing less optimal activities identified by the model.
The key implications of this methodology for institutions are the following:
  • Using ranked activity data to provide teachers/lecturers with standardized recommendations or resources for assessments.
  • Allocating educational technology budgets based on the activity ranking given by the model (e.g., tools for creating simulations).
  • Establishing faculty training priorities around highly ranked activities, if skill gaps exist.

5.2. Limitations of the Study and Future Works

The focus of this study is to support teachers/lecturers to choose what assessment activity to prepare. However, there are also some points that could be considered in the future:
  • Even though all the activities under analysis have been carried out throughout the described course, it would also be interesting to confirm whether implementing the top five ranked alternatives improves students' grades.
  • Only instructor and activity factors were evaluated. Students’ preferences and perspectives could be incorporated into the SMART methodology by including student-related criteria in the criteria set used for evaluation, or by conducting a student survey to identify highly rated or preferred activities and using these as the activities (alternatives) for the SMART method.
  • Analysing the results using other combinations of MCDM techniques. For instance, to weight the criteria, entropy (an objective method), the analytic network process (ANP), or the best worst method (BWM) could be used. Moreover, the alternatives could be ranked with VIseKriterijumska Optimizacija I Kompromisno Resenje (multicriteria optimization and compromise solution, VIKOR), among others, and the results compared with ÉLimination Et Choix Traduisant la REalité (elimination and choice translating reality, ELECTRE) to discard the least favourable options.

6. Conclusions

The European Higher Education Area has promoted major changes in university teaching so that classes follow a more student-centred approach. Thus, teaching staff have begun to generate interactive content, taking advantage of new technologies, either through the modules included in virtual platforms (like Moodle, in the Spanish case) or through other external tools. Therefore, the difficulty encountered by the lecturer lies in determining which activity is most appropriate or preferable to use. SMART aims to solve this problem by analysing different university-level activities according to various criteria (related to the subject, lecturer, activity, and student) using the AHP and TOPSIS multicriteria decision-making techniques. According to SMART, and with the activities and criteria considered, the five best activities are submission tasks such as reports (group and individual), workshops, complex H5P activities, and questionnaires. While SMART was originally designed for a technical master's subject, it is important to acknowledge that its applicability extends to other modalities and educational levels. However, it is essential to emphasize that, while the methodology offers valuable insights into activity selection, the specific findings may not be universally applicable. The ranking process should be conducted independently for each educational scenario to ensure optimized, context-specific results, considering the unique criteria and needs of each case.

Author Contributions

Conceptualization, I.C.G.-G. and A.F.-G.; methodology, I.C.G.-G.; software, I.C.G.-G.; validation, A.F.-G.; investigation, I.C.G.-G.; resources, I.C.G.-G. and A.F.-G.; writing—original draft preparation, I.C.G.-G.; writing—review and editing, A.F.-G.; visualization, I.C.G.-G.; supervision, A.F.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHP: Analytic Hierarchy Process
ANP: Analytic Network Process
BWM: Best Worst Method
EHEA: European Higher Education Area
ELECTRE: ÉLimination Et Choix Traduisant la REalité
ICTs: Information and Communication Technologies
MCDM: Multicriteria Decision Making
SMART: Selection Model for Assessment Resources and Techniques
TOPSIS: Technique for Order of Preference by Similarity to Ideal Solution
VIKOR: VIseKriterijumska Optimizacija I Kompromisno Resenje

References

  1. Fernández-Guillamón, A.; Molina-García, Á. Comparativa de herramientas de gamificación de acceso libre: Aplicación en asignatura de Grados en Ingenierías industriales. In Innovación Docente e Investigación en Ciencias, Ingeniería y Arquitectura; Dykinson: Madrid, Spain, 2019; pp. 783–800. [Google Scholar]
  2. Abdel-Aziz, A.A.; Abdel-Salam, H.; El-Sayad, Z. The role of ICTs in creating the new social public place of the digital era. Alex. Eng. J. 2016, 55, 487–493. [Google Scholar] [CrossRef]
  3. Chiappe, A. Trends in Digital Educational Content in Latin America; Universidad de La Sabana: Chía, Colombia, 2016. [Google Scholar]
  4. López-Gorozabel, O.; Cedeño-Palma, E.; Pinargote-Ortega, J.; Zambrano-Romero, W.; Pazmiño-Campuzano, M. Bootstrap as a tool for web development and graphic optimization on mobile devices. In XV Multidisciplinary International Congress on Science and Technology; Springer: Cham, Switzerland, 2020; pp. 290–302. [Google Scholar]
  5. Moodle: A Free Open Source Learning Platform or Course Management System. 2002. Available online: https://moodle.org/ (accessed on 13 October 2023).
  6. Blin, F.; Munro, M. Why hasn’t technology disrupted academics’ teaching practices? Understanding resistance to change through the lens of activity theory. Comput. Educ. 2008, 50, 475–490. [Google Scholar] [CrossRef]
  7. Ashrafi, A.; Zareravasan, A.; Rabiee Savoji, S.; Amani, M. Exploring factors influencing students’ continuance intention to use the learning management system (LMS): A multi-perspective framework. Interact. Learn. Environ. 2022, 30, 1475–1497. [Google Scholar] [CrossRef]
  8. Chichernea, V. Campus information systems for enhancing quality and performance in a smart city high education environment. In Conference Proceedings of «eLearning and Software for Education» (eLSE); Carol I National Defence University Publishing House: Bucharest, Romania, 2016; Volume 12, pp. 50–56. [Google Scholar]
  9. Fuentes Pardo, J.M.; Ramírez Gómez, Á.; García García, A.I.; Ayuga Téllez, F. Web-based education in Spanish Universities. A Comparison of Open Source E-Learning Platforms. J. Syst. Cybern. Inform. 2012, 10, 47–53. [Google Scholar]
  10. Piotrowski, M. What is an e-learning platform? In Learning Management System Technologies and Software Solutions for Online Teaching: Tools and Applications; IGI Global: Hershey, PA, USA, 2010; pp. 20–36. [Google Scholar]
  11. Costa, C.; Alvelos, H.; Teixeira, L. The Use of Moodle e-learning Platform: A Study in a Portuguese University. In Proceedings of the 4th Conference of ENTERprise Information Systems—Aligning Technology, Organizations and People (CENTERIS 2012), Algarve, Portugal, 3–5 October 2012; Volume 5, pp. 334–343. [Google Scholar] [CrossRef]
  12. Versvik, M.; Brand, J.; Brooker, J. Kahoot. 2012. Available online: https://kahoot.com/ (accessed on 10 October 2023).
  13. Socrative—Showbie Inc. 2010. Available online: https://www.socrative.com/ (accessed on 10 October 2023).
  14. Quizizz Inc. 2015. Available online: https://quizizz.com/ (accessed on 10 October 2023).
  15. Genially Inc. 2015. Available online: https://app.genial.ly/ (accessed on 10 October 2023).
  16. Fernández-Guillamón, A.; Molina-García, Á. Simulation of variable speed wind turbines based on open-source solutions: Application to bachelor and master degrees. Int. J. Electr. Eng. Educ. 2021. [Google Scholar] [CrossRef]
  17. Valda Sanchez, F.; Arteaga Rivero, C. Diseño e implementación de una estrategia de gamificacion en una plataforma virtual de educación. Fides Ratio-Rev. Difus. Cult. Cient. Univ. Salle Boliv. 2015, 9, 65–80. [Google Scholar]
  18. Hamari, J.; Koivisto, J.; Sarsa, H. Does Gamification Work?—A Literature Review of Empirical Studies on Gamification. In Proceedings of the 2014 47th Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 6–9 January 2014; pp. 3025–3034. [Google Scholar] [CrossRef]
  19. Oliva, H.A. La gamificación como estrategia metodológica en el contexto educativo universitario. Real. Reflex. 2016, 44, 108–118. [Google Scholar] [CrossRef]
  20. Mohamad, J.R.J.; Farray, D.; Limiñana, C.M.; Ramírez, A.S.; Suárez, F.; Ponce, E.R.; Bonnet, A.S.; Iruzubieta, C.J.C. Comparación de dos herramientas de gamificación para el aprendizaje en la docencia universitaria. In V Jornadas Iberoamericanas de Innovación Educativa en el ámbito de las TIC y las TAC: InnoEducaTIC 2018, Las Palmas de Gran Canaria, 15 y 16 de noviembre de 2018; Universidad de Las Palmas de Gran Canaria: Las Palmas, Spain, 2018; pp. 199–203. [Google Scholar]
  21. Krahenbuhl, K.S. Student-centered education and constructivism: Challenges, concerns, and clarity for teachers. Clear. House J. Educ. Strateg. Issues Ideas 2016, 89, 97–105. [Google Scholar] [CrossRef]
  22. Deneen, C.C.; Hoo, H.T. Connecting teacher and student assessment literacy with self-evaluation and peer feedback. Assess. Eval. High. Educ. 2023, 48, 214–226. [Google Scholar] [CrossRef]
  23. Vinent, M.E.S. Del proceso de enseñanza aprendizaje tradicional, al proceso de enseñanza aprendizaje para la formación de competencias, en los estudiantes de la enseñanza básica, media superior y superior. Cuad. Educ. Desarro. 2009. [Google Scholar]
  24. García Cascales, M.S. Métodos Para la Comparación de Alternativas Mediante un Sistema de Ayuda a la Decisión SAD y “Soft Computing”. Ph.D. Thesis, Universidad Politécnica de Cartagena, Cartagena, Spain, 2009. [Google Scholar]
  25. Emovon, I.; Oghenenyerovwho, O.S. Application of MCDM method in material selection for optimal design: A review. Results Mater. 2020, 7, 100115. [Google Scholar] [CrossRef]
  26. Chakraborty, S.; Chakraborty, S. A scoping review on the applications of MCDM techniques for parametric optimization of machining processes. Arch. Comput. Methods Eng. 2022, 29, 4165–4186. [Google Scholar] [CrossRef]
  27. Si, J.; Marjanovic-Halburd, L.; Nasiri, F.; Bell, S. Assessment of building-integrated green technologies: A review and case study on applications of Multi-Criteria Decision Making (MCDM) method. Sustain. Cities Soc. 2016, 27, 106–115. [Google Scholar] [CrossRef]
  28. Gil-García, I.C.; Ramos-Escudero, A.; García-Cascales, M.S.; Dagher, H.; Molina-García, A. Fuzzy GIS-based MCDM solution for the optimal offshore wind site selection: The Gulf of Maine case. Renew. Energy 2022, 183, 130–147. [Google Scholar] [CrossRef]
  29. Toloie-Eshlaghy, A.; Homayonfar, M. MCDM methodologies and applications: A literature review from 1999 to 2009. Res. J. Int. Stud. 2011, 21, 86–137. [Google Scholar]
  30. Saaty, T.L. What Is the Analytic Hierarchy Process? Springer: Berlin/Heidelberg, Germany, 1988. [Google Scholar]
  31. Aguarón, J.; Moreno-Jiménez, J.M. The geometric consistency index: Approximated thresholds. Eur. J. Oper. Res. 2003, 147, 137–145. [Google Scholar] [CrossRef]
  32. Hwang, C.L.; Yoon, K.; Hwang, C.L.; Yoon, K. Methods for multiple attribute decision making. In Multiple Attribute Decision Making; Lecture Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1981; pp. 58–191. [Google Scholar]
  33. Dasarathy, B. SMART: Similarity Measure Anchored Ranking Technique for the Analysis of Multidimensional Data Arrays. IEEE Trans. Syst. Man Cybern. 1976, SMC-6, 708–711. [Google Scholar] [CrossRef]
  34. BOE-A-2010-10542; Real Decreto 861/2010, de 2 de Julio, Por el Que se Modifica el Real Decreto 1393/2007, de 29 de Octubre, Por el Que se Establece la Ordenación de las Enseñanzas Universitarias Oficiales. Agencia Estatal Boletín Oficial del Estado: Madrid, Spain, 2010.
  35. BOE-A-2010-10542; Real Decreto 1509/2008, de 12 de Septiembre, Por el Que se Regula el Registro de Universidades, Centros y Títulos. Agencia Estatal Boletín Oficial del Estado: Madrid, Spain, 2008.
  36. Mohammad-Davoudi, A.H.; Parpouchi, A. Relation between team motivation, enjoyment, and cooperation and learning results in learning area based on team-based learning among students of Tehran University of medical science. Procedia-Soc. Behav. Sci. 2016, 230, 184–189. [Google Scholar] [CrossRef]
  37. Marticorena-Sánchez, R.; López-Nozal, C.; Ji, Y.P.; Pardo-Aguilar, C.; Arnaiz-González, Á. UBUMonitor: An open-source desktop application for visual E-learning analysis with Moodle. Electronics 2022, 11, 954. [Google Scholar] [CrossRef]
  38. Gil-García, I.C.; Fernández-Guillamón, A.; García-Cascales, M.S.; Molina-García, Á. Virtual campus environments: A comparison between interactive H5P and traditional online activities in master teaching. Comput. Appl. Eng. Educ. 2023, 31, 1648–1661. [Google Scholar] [CrossRef]
  39. Sánchez-González, A. Peer assessment between students of energy in buildings to enhance learning and collaboration. Adv. Build. Educ. 2021, 5, 23–38. [Google Scholar] [CrossRef]
  40. de la Vega, I.N. Una aproximación al concepto de evaluación para el aprendizaje y al feedback con función reguladora a partir de los diarios docentes. J. Neuroeduc. 2022, 3, 69–89. [Google Scholar] [CrossRef]
  41. Luna, S.M.M. Manual práctico para el diseño de la Escala Likert. Xihmai 2007, 2. [Google Scholar] [CrossRef]
Figure 1. Framework of SMART. Source: own elaboration.
Figure 2. AHP process hierarchy. Source: own elaboration based on [30].
Figure 3. Ideal and anti-ideal alternatives. Source: own elaboration based on [33].
Figure 4. A6 Alternative. Activity H5P. Source: own elaboration.
Figure 5. A10 Alternative. Activity: Escape Room. Source: own elaboration.
Figure 6. Expert 1 (Ex1) evaluation of specific competencies criterion (C2) with respect to the rest of criteria. Source: own elaboration.
Figure 7. Comparison of criteria weights. Source: own elaboration.
Figure 8. Alternative ranking. Source: own elaboration.
Table 1. Fundamental paired comparison scale proposed by Saaty. Source: own elaboration based on [30].
Scale | Verbal Scale | Explanation
1 | Equal importance | Two criteria contribute equally to the objective
3 | Moderate importance | Experience and judgement favour one criterion over another
5 | Strong importance | One criterion is strongly favoured
7 | Very strong importance | One criterion is very dominant
9 | Extreme importance | One criterion is favoured by at least one order of magnitude of difference
Table 2. Structure of the decision matrix. Source: own elaboration based on [24].
    | w1  | w2  | … | wj  | … | wn
    | C1  | C2  | … | Cj  | … | Cn
A1  | v11 | v12 | … | v1j | … | v1n
A2  | v21 | v22 | … | v2j | … | v2n
…   | …   | …   | … | …   | … | …
Am  | vm1 | vm2 | … | vmj | … | vmn
Table 3. Criteria organized by category. Case study. Source: own elaboration.
Category | Criterion
Subject  | C1 General and transversal competencies
         | C2 Specific competencies
         | C3 Learning outcomes
Lecturer | C4 Complexity in preparing the activity
         | C5 Grading of the activity
Activity | C6 Use of new technologies
         | C7 Encourages collaborative learning
         | C8 Integration with the platform
Student  | C9 Degree of difficulty to perform the activity
         | C10 Feedback from the teaching team
Table 4. Alternatives. Case study. Source: own elaboration.
Activities
A1 | Assignment: Report (Individual)
A2 | Assignment: Report (Group)
A3 | Questionnaire
A4 | Lesson
A5 | Workshop
A6 | Complex H5P Activity
A7 | Forums
A8 | Glossary
A9 | Databases
A10 | Genially: Escape Room case
Table 5. Expert 1 (Ex1) matrix. Source: own elaboration.
    | C1  | C2  | C3  | C4  | C5 | C6  | C7  | C8  | C9  | C10
C1  | 1   | 1/5 | 1/3 | 3   | 3  | 3   | 1/3 | 3   | 3   | 1/5
C2  | 5   | 1   | 3   | 9   | 9  | 7   | 7   | 9   | 9   | 3
C3  | 3   | 1/3 | 1   | 9   | 9  | 5   | 5   | 5   | 7   | 3
C4  | 1/3 | 1/9 | 1/9 | 1   | 3  | 1/5 | 1/3 | 1/3 | 1/3 | 1/7
C5  | 1/3 | 1/9 | 1/9 | 1/3 | 1  | 1/3 | 1/5 | 1/5 | 1/5 | 1/7
C6  | 1/3 | 1/7 | 1/5 | 5   | 3  | 1   | 1/3 | 3   | 3   | 1/3
C7  | 3   | 1/7 | 1/5 | 3   | 5  | 3   | 1   | 3   | 3   | 1/3
C8  | 1/3 | 1/9 | 1/5 | 3   | 5  | 1/3 | 1/3 | 1   | 3   | 1/3
C9  | 1/3 | 1/9 | 1/7 | 3   | 5  | 1/3 | 1/3 | 1/3 | 1   | 1/5
C10 | 5   | 1/3 | 1/3 | 7   | 7  | 3   | 3   | 3   | 5   | 1
Table 6. Expert 2 (Ex2) matrix. Source: own elaboration.
    | C1  | C2  | C3  | C4  | C5 | C6  | C7  | C8  | C9  | C10
C1  | 1   | 1/7 | 1/7 | 3   | 3  | 1/3 | 1/3 | 3   | 1/3 | 1/3
C2  | 7   | 1   | 3   | 9   | 9  | 5   | 5   | 7   | 3   | 3
C3  | 7   | 1/3 | 1   | 9   | 9  | 5   | 5   | 7   | 3   | 3
C4  | 1/3 | 1/9 | 1/9 | 1   | 3  | 1/5 | 1/5 | 1/3 | 1/5 | 1/5
C5  | 1/3 | 1/9 | 1/9 | 1/3 | 1  | 1/5 | 1/3 | 1/3 | 1/5 | 1/3
C6  | 3   | 1/5 | 1/5 | 5   | 5  | 1   | 1/3 | 3   | 1/3 | 1/5
C7  | 3   | 1/5 | 1/5 | 5   | 3  | 3   | 1   | 3   | 1/3 | 3
C8  | 1/3 | 1/7 | 1/7 | 3   | 3  | 1/3 | 1/3 | 1   | 1/3 | 1/3
C9  | 3   | 1/3 | 1/3 | 5   | 5  | 3   | 3   | 3   | 1   | 3
C10 | 3   | 1/3 | 1/3 | 5   | 3  | 5   | 1/3 | 3   | 1/3 | 1
Table 7. Expert 3 (Ex3) matrix. Source: own elaboration.
    | C1  | C2  | C3  | C4  | C5 | C6  | C7  | C8  | C9  | C10
C1  | 1   | 1/3 | 1/3 | 9   | 9  | 5   | 3   | 3   | 5   | 5
C2  | 3   | 1   | 3   | 9   | 9  | 9   | 7   | 7   | 9   | 3
C3  | 3   | 1/3 | 1   | 9   | 9  | 7   | 5   | 5   | 7   | 5
C4  | 1/9 | 1/9 | 1/9 | 1   | 3  | 1/5 | 1/3 | 1/3 | 1/3 | 1/3
C5  | 1/9 | 1/9 | 1/9 | 1/3 | 1  | 1/5 | 1/3 | 1/3 | 1/3 | 1/3
C6  | 1/5 | 1/9 | 1/7 | 5   | 5  | 1   | 3   | 3   | 5   | 3
C7  | 1/3 | 1/7 | 1/5 | 3   | 3  | 1/3 | 1   | 3   | 3   | 3
C8  | 1/3 | 1/7 | 1/5 | 3   | 3  | 1/3 | 1/3 | 1   | 3   | 3
C9  | 1/5 | 1/9 | 1/7 | 3   | 3  | 1/5 | 1/3 | 1/3 | 1   | 3
C10 | 1/5 | 1/5 | 1/5 | 3   | 3  | 1/3 | 1/3 | 1/3 | 1/3 | 1
Table 8. Criteria weight. Source: own elaboration.
Criterion | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10
Weight | 0.0747 | 0.3127 | 0.2295 | 0.0204 | 0.0164 | 0.0622 | 0.0764 | 0.0405 | 0.0509 | 0.0722
Table 9. Units and objective function of decision matrix attributes. Maximize (↑). Minimize (↓). Source: own elaboration.
Criterion | Unit | Objective Function
C1 General and transversal competencies | % | Maximize (↑)
C2 Specific competencies | % | Maximize (↑)
C3 Learning outcomes | % | Maximize (↑)
C4 Complexity in preparing the activity | Likert | Minimize (↓)
C5 Grading of the activity | Likert | Minimize (↓)
C6 Use of new technologies | Likert | Maximize (↑)
C7 Encourages collaborative learning | Likert | Maximize (↑)
C8 Integration with the platform | Likert | Maximize (↑)
C9 Degree of difficulty to perform the activity | Likert | Minimize (↓)
C10 Feedback from the teaching team | Likert | Maximize (↑)
Table 10. Likert scale. Source: own elaboration based on [41].
Value | Description
1 | Very little
2 | Little
3 | Moderately sufficient
4 | Sufficient
5 | A lot
Table 11. Decision matrix. Source: own elaboration.
    | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10
A1  | 45 | 70 | 80 | 4  | 5  | 2  | 1  | 5  | 2  | 5
A2  | 48 | 80 | 90 | 5  | 5  | 2  | 5  | 5  | 3  | 5
A3  | 35 | 50 | 95 | 5  | 1  | 3  | 1  | 5  | 2  | 4
A4  | 30 | 45 | 60 | 5  | 1  | 4  | 1  | 5  | 3  | 4
A5  | 48 | 55 | 70 | 5  | 4  | 3  | 5  | 5  | 4  | 5
A6  | 40 | 60 | 65 | 5  | 1  | 5  | 1  | 5  | 4  | 4
A7  | 20 | 25 | 35 | 2  | 3  | 2  | 5  | 5  | 2  | 3
A8  | 18 | 20 | 20 | 2  | 1  | 2  | 1  | 5  | 2  | 2
A9  | 15 | 20 | 18 | 2  | 1  | 2  | 1  | 5  | 2  | 2
A10 | 25 | 45 | 45 | 5  | 4  | 5  | 3  | 1  | 4  | 4
Table 12. Normalized matrix. Source: own elaboration.
    | C1    | C2    | C3    | C4    | C5    | C6    | C7    | C8    | C9    | C10
A1  | 0.427 | 0.457 | 0.411 | 0.328 | 0.562 | 0.231 | 0.112 | 0.353 | 0.246 | 0.429
A2  | 0.456 | 0.522 | 0.462 | 0.410 | 0.562 | 0.231 | 0.559 | 0.353 | 0.369 | 0.429
A3  | 0.332 | 0.326 | 0.488 | 0.410 | 0.113 | 0.346 | 0.112 | 0.353 | 0.246 | 0.343
A4  | 0.285 | 0.294 | 0.308 | 0.410 | 0.113 | 0.462 | 0.112 | 0.353 | 0.369 | 0.343
A5  | 0.456 | 0.359 | 0.359 | 0.410 | 0.450 | 0.346 | 0.559 | 0.353 | 0.492 | 0.429
A6  | 0.380 | 0.392 | 0.334 | 0.410 | 0.113 | 0.577 | 0.112 | 0.353 | 0.492 | 0.343
A7  | 0.190 | 0.164 | 0.180 | 0.164 | 0.337 | 0.231 | 0.559 | 0.353 | 0.246 | 0.257
A8  | 0.171 | 0.131 | 0.103 | 0.164 | 0.113 | 0.231 | 0.112 | 0.353 | 0.246 | 0.171
A9  | 0.142 | 0.131 | 0.093 | 0.164 | 0.113 | 0.231 | 0.112 | 0.353 | 0.246 | 0.171
A10 | 0.237 | 0.294 | 0.231 | 0.410 | 0.450 | 0.577 | 0.335 | 0.070 | 0.492 | 0.343
Table 13. Normalized weighted matrix. Source: own elaboration.
    | C1    | C2    | C3    | C4    | C5    | C6    | C7    | C8    | C9    | C10   | di+    | di−    | PRi
A1  | 0.032 | 0.143 | 0.094 | 0.007 | 0.009 | 0.014 | 0.009 | 0.014 | 0.013 | 0.031 | 0.0493 | 0.1298 | 0.7248
A2  | 0.034 | 0.163 | 0.106 | 0.008 | 0.009 | 0.014 | 0.043 | 0.014 | 0.019 | 0.031 | 0.0249 | 0.1563 | 0.8628
A3  | 0.025 | 0.102 | 0.112 | 0.008 | 0.002 | 0.022 | 0.008 | 0.014 | 0.013 | 0.025 | 0.0726 | 0.1129 | 0.6086
A4  | 0.021 | 0.092 | 0.071 | 0.008 | 0.002 | 0.029 | 0.008 | 0.014 | 0.019 | 0.025 | 0.0911 | 0.0759 | 0.4545
A5  | 0.034 | 0.112 | 0.083 | 0.008 | 0.007 | 0.022 | 0.043 | 0.014 | 0.025 | 0.031 | 0.0624 | 0.1054 | 0.6282
A6  | 0.028 | 0.123 | 0.077 | 0.008 | 0.002 | 0.036 | 0.008 | 0.014 | 0.025 | 0.025 | 0.0659 | 0.1042 | 0.6127
A7  | 0.014 | 0.051 | 0.041 | 0.003 | 0.006 | 0.014 | 0.043 | 0.014 | 0.013 | 0.019 | 0.1365 | 0.0453 | 0.2491
A8  | 0.013 | 0.041 | 0.024 | 0.003 | 0.002 | 0.014 | 0.008 | 0.014 | 0.013 | 0.013 | 0.1589 | 0.0194 | 0.1090
A9  | 0.011 | 0.041 | 0.021 | 0.003 | 0.002 | 0.014 | 0.008 | 0.014 | 0.013 | 0.013 | 0.1605 | 0.0192 | 0.1067
A10 | 0.018 | 0.092 | 0.053 | 0.008 | 0.007 | 0.036 | 0.026 | 0.003 | 0.025 | 0.023 | 0.0976 | 0.0677 | 0.4096
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
