Article

Knowledge Assessment Patterns for Distant Education: The Perceived Impact on Grading, Motivation, and Satisfaction

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(10), 4166; https://doi.org/10.3390/su16104166
Submission received: 4 March 2024 / Revised: 10 May 2024 / Accepted: 13 May 2024 / Published: 16 May 2024
(This article belongs to the Special Issue Sustainable Higher Education: From E-learning to Smart Education)

Abstract
With the increase in remote learning, the efficient implementation of distant knowledge assessments has become an essential topic. New challenges have arisen that had to be adequately addressed and successfully solved. In response, we introduced an assessment patterns catalogue for distant education, handling various challenges by proposing possible solutions. The catalogue presents a collection of proven practices targeting knowledge assessment in digital and distant environments. This paper presents the survey results from the final step of catalogue creation. The objective was to verify the catalogue's suitability and expedite its use, focusing on several aspects of knowledge assessment. We focused on the patterns' perceived impact on grading objectivity and consistency, as well as on students' motivation and satisfaction with an implemented assessment, explored from the students' and teachers' perspectives. We gathered data using a uniform questionnaire distributed to students and teachers, both actively involved in distant knowledge assessment. A detailed data analysis highlighted the patterns with the highest perceived impact on the aforementioned assessment aspects. We also analysed the top-rated patterns within the pattern categories. The results show a high overlap between students' and teachers' perspectives, wherein patterns like Pentathlon, Statistical Validator, Game Rules, and Bonus Points were perceived as the patterns with the highest impact on grading objectivity and consistency, as well as the patterns with the most significant impact on students' motivation and satisfaction.

1. Introduction

The main goal of education is to provide knowledge and skills within the teaching domain. To achieve this goal, a variety of approaches and methods are used and constantly adapted in order to cope with emerging challenges within education. Quality education is also one of the UN Sustainable Development Goals [1], and adaptation to current challenges is crucial for its achievement. Higher education institutions play an important role in providing sustainable development in society [2,3]. Undoubtedly, the most significant changes in recent years were driven by the COVID-19 pandemic. Remote learning arose even in environments where, until then, the activities were primarily practised in a person-to-person setting [4]. Higher education institutions had to move their learning activities to digital environments in a very short time, where reduced quality was not an option [5].
Together with teaching, knowledge assessments, a fundamental part of the educational process [6], were also moved to the remote environment. This added further challenges to the already demanding task of implementing an efficient and fair assessment. The proven and widely used knowledge assessment practices (e.g., written exams) became practically unusable; therefore, an adaptation of what was used and known was necessary. Different studies addressed the educational process during the pandemic, some focusing specifically on knowledge assessment practices [4,6,7,8,9,10].
Many definitions of assessment exist. However, as pointed out by Ghaicha [11], Bachman [12] defined assessment as an “act of interpreting information regarding student performance collected through any of a multitude of means or practices”. Various new practices and methods arose to facilitate the assessment process in distant environments. The restrictions that kept the process from being held in person are now mostly gone; however, some parts of the educational process are still, and will remain, online. Although the shift in teaching approaches was not made under ideal circumstances, the conditions that pushed us out of our comfort zones helped to push progress in the educational domain even further. This is especially true for the use and involvement of information technology in teaching activities, including knowledge assessment. Progressively, a new term emerged that describes the effective use of information and communication technology to achieve learning outcomes using a suitable pedagogical approach—smart education [13].
Already-proven best practices for distant education were collected in a catalogue of assessment patterns [14]. The catalogue was created in response to the situation in 2020, wherein we collected best practices and experiences that proved to be adequate in distance environments in higher education. It includes 47 educational patterns, organised into categories covering topics from designing knowledge assessments to communication before, during, and after a knowledge assessment. Since the introduced catalogue [14] includes recurring practices derived from practical experience, its general recognition, and in turn broader acceptance, can only be achieved if the involved stakeholders see it as an effective and practical approach to addressing emerging challenges. Therefore, validating the created catalogue represents an essential step in its introduction.
An initial validation was already performed, focusing on the patterns that are crucial for successful and efficient remote knowledge assessment from the teachers' [14] and students' [15] perspectives. Building on this, the present paper provides an in-depth analysis of teachers' and students' opinions regarding the proposed catalogue of assessment patterns for distant education. Our research focused on two aspects. The first covers grading objectivity and consistency, and the second focuses on students' motivation and satisfaction. When grading is objective and consistent, the differences that may appear when assessing students' work are minimised [16].
Motivation represents an important element of teaching and learning [17,18]. According to Alsadoon et al. [18], motivation elevates students' interest in the learning domain and encourages their readiness for learning. Satisfaction, on the other hand, is an emotion of joy and happiness from the individual's point of view, achieved when needs and desires are fulfilled [19]. It is a short-term attitude connected to the subjective fulfilment of expectations regarding the educational experience [20].
However, research [21] suggests that students' motivation decreases during online learning. Providing a positive and supportive environment can increase students' motivation [22]. Low engagement can result in poorly acquired knowledge; therefore, an important consideration when designing the catalogue was the impact of the patterns on students' motivation and satisfaction.
Our research was performed from the teachers' and students' perspectives, using a uniform data collection approach that allows a valid comparison between the two groups. The presented research followed two research questions:
  • RQ1—Which assessment patterns have the highest impact on grading objectivity and consistency?
  • RQ2—Which assessment patterns have the highest impact on students’ motivation and satisfaction?
The rest of the paper is organised as follows. Section 2 summarises the existing related work and presents the structure and the creation of the catalogue of assessment patterns for distant education. The section depicts the catalogue's structure and briefly describes the defined patterns. Section 3 presents the applied research methodology, including the defined data collection method and research instrument. The results are presented in Section 4, starting with an overall analysis, which is deepened in Section 4.1 and Section 4.2. The research findings are discussed in Section 5, and the paper is concluded in Section 6.

2. Distant Assessment Pattern Catalogue

Although distant education brings many benefits, various challenges arise that must be addressed efficiently [23]. Among the most important are the challenges connected to the lack of teachers' knowledge and experience. Therefore, systematically collected and aggregated best practices can contribute significantly to the efficiency of remote education. Although the pandemic has ended, distant education in higher education has not disappeared. It proved to be a suitable solution in many contexts, and, in addition, many educational institutions have significantly developed their remote education infrastructure [6].
In distant education, a major challenge is the management and implementation of knowledge assessments [10]. The assessments have to be reliable and valid even when implemented in remote settings, which brings additional challenges [10]. To cope successfully with those challenges, a well-defined and validated assessment pattern catalogue can be of great help. Its main goal is to group various ideas and proven practices, helping the involved parties implement an effective, fair, and motivational knowledge assessment process. Köppe et al. [24] defined educational patterns as “hypothesis solutions to recurring problems in an educational context”. According to a literature review [14], several pedagogical pattern catalogues are available, e.g., [24,25,26,27]. The identified shortcoming lies in the lack of categorisation or cataloguing and the lack of consistency among authors [14]. The catalogues cover a wide educational area and connected aspects, wherein a few authors also address knowledge assessment practices [28,29,30,31,32].
In response to the existing shortfall, we introduced a novel catalogue of assessment patterns for distant education [14]. The catalogue addresses two important challenges: efficient knowledge assessment and education in remote digital environments. The creation of the assessment patterns catalogue was conducted in a few sequential steps: (1) preliminary cataloguing; (2) organising workshops with practitioners and evaluating novelty with focus group members; (3) cataloguing; and (4) catalogue validation with pattern users [14].
Based on first-hand experiences gathered during the COVID-19 pandemic, the preliminary cataloguing of assessment patterns began. A focus group of teachers and teaching assistants identified frequently used practices for distant knowledge assessments, resulting in the first set of patterns combined into the categorised catalogue. The initial version of the catalogue was later presented to practitioners in organised workshops. Workshop participants provided valuable feedback based on the patterns' presentation and a focused discussion. On the basis of the first two activities, the focus group iteratively finalised the cataloguing. The catalogue of assessment patterns for distant education consists of 47 patterns, divided into four semantic and two container categories [14]:
  • D—Patterns for the assessment conceptual design;
  • Q—Patterns for defining questions, answers, and schedules;
  • E—Execution and grading patterns;
  • C—Communication patterns;
  • A—Anti-patterns;
  • O—Other patterns.
In addition to pattern categorisation, we described each of the patterns uniformly through a set of attributes, namely, the pattern category, the challenge solved by the pattern, the main idea of the pattern, the context, participants, and environment, the implementation steps, variations, advantages and disadvantages, known uses, related patterns, and the patterns it should not be used with. A short description of each pattern in the catalogue is presented in Table 1. In addition, an example of a complete description of selected patterns can be found in the initial paper by Pavlič et al. [14].
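To make the uniform pattern description more tangible, the following minimal sketch models the listed attributes as a simple data structure. The field names mirror the attributes above; the class itself and the abbreviated Pentathlon values are illustrative assumptions, not an artefact published with the catalogue.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssessmentPattern:
    """One catalogue entry; the fields mirror the attributes listed above."""
    name: str
    category: str                      # D, Q, E, C, A, or O
    challenge: str                     # challenge solved by the pattern
    main_idea: str
    context: str                       # context, participants, and environment
    implementation_steps: List[str] = field(default_factory=list)
    variations: List[str] = field(default_factory=list)
    advantages: List[str] = field(default_factory=list)
    disadvantages: List[str] = field(default_factory=list)
    known_uses: List[str] = field(default_factory=list)
    related_patterns: List[str] = field(default_factory=list)
    do_not_use_with: List[str] = field(default_factory=list)

# Abbreviated, illustrative entry; see Pavlič et al. [14] for the full description.
pentathlon = AssessmentPattern(
    name="Pentathlon",
    category="D",
    challenge="A single assessment may not be representative of a student's knowledge.",
    main_idea="Compose the final grade from multiple assignments (e.g., lab work, quizzes, exams).",
    context="Higher education courses assessed remotely or in a hybrid setting.",
    related_patterns=["Student Achievement Portfolio", "Stagewise Approach"],
)
print(pentathlon.name, "->", pentathlon.related_patterns)
```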
The group of patterns for the knowledge assessment conceptual design (D) combines proven practices that are useful in the design and organisation of students' knowledge assessments. Within this group, teachers decide about allowed assets (e.g., Open Book) and the assessment structure (e.g., Pentathlon, Student Achievement Portfolio). The second group consists of patterns for defining questions, answers, and schedules (Q), wherein more detailed planning of the knowledge assessment instances is conducted. This includes question preparation (e.g., Question Donor, Colleague Veto), duration and content validation patterns (e.g., Professional Multiplier, Hidden Validator), and patterns used for assessment preparation (e.g., Dress Rehearsal). The groups of execution and grading patterns (E) and communication patterns (C) consist of practices that can be applied during the knowledge assessment session, after the assessment design is finalised and the questions are determined. These are the patterns connected to grading (e.g., Criteria List, Results With Delay), the implementation method (e.g., Eurosong, Personal Defence), and adjustment patterns (e.g., Accessibility Adjustments, Impro League). An important part of an efficient knowledge assessment is also appropriate communication. Therefore, practices that can be used when explaining the assessment process (e.g., Game Rules and To-Do List) are important, as well as solutions that can be used when challenges arise (e.g., Firewall and Emergency Call).
The last sets of patterns are the anti-patterns (A), which are poor practices that should be avoided (e.g., Certification Centre), and other patterns (O), whose categorisation is not precise, since it depends on different factors and environments (e.g., Big Brother). As mentioned, a short description of each pattern is presented in Table 1, while the entire catalogue and its graphical representation can be found in our previous work [14].
The final step in the catalogue design was the validation process. Validation is crucial for catalogue development and the expansion of its use. Although the catalogue was designed based on experiences gained in remote learning, most patterns could also be used in classroom settings. In related work, documented educational patterns were validated through peer reviews and discussions at domain events [33,34]; beyond this, no proven and established validation approach was detected. As stated by Fioravanti and Barbosa [35], there is a gap in the validation of proposed pedagogical patterns. Consequently, we developed a questionnaire in order to review different quality aspects of the defined patterns. Teachers performed the first validation phase, while the second phase was conducted independently by students who were actively involved in the distant knowledge assessment process in which a majority of the presented patterns were implemented. The initial findings of the validation, addressing the top-rated patterns crucial for successful and effective remote knowledge assessment, were already presented in our previous papers [14,15]. Hereinafter, we present an in-depth validation, analysing the aspects addressed by the formulated research questions.

3. Research Methodology

A novel catalogue of assessment patterns for distant education was designed [14] in order to gather, define, and share proven practices for remote knowledge assessment. The final step of catalogue creation and validation aimed to research the perceived impact of the included patterns on grading objectivity and consistency and the impact of their use on students' motivation and satisfaction. Since we are aware that the opinions of teachers and students can vary significantly, we conducted the validation from both perspectives. The first validation phase was performed among teachers, namely, higher education teachers and their teaching assistants, and the second phase was conducted among students.
To answer the research questions presented in Section 1, we prepared a uniform questionnaire for teachers and students. The questionnaire was developed in sequential steps, considering the concepts of validity and reliability. The context of the research questions guided the definition of the variables, wherein each variable is represented by one statement. With this, we addressed construct and content validity, ensuring our tool measured the intended constructs of interest. The development of the questionnaire was conducted within a focus group, which had already been a part of the catalogue creation presented in [14]. The focus group consisted of field experts, which further contributed to the tool's validity. The instrument was then tested using a pilot group of teachers and students, in which some minor changes were proposed. We implemented the test–retest approach to evaluate the reliability of the questionnaire. After introducing the changes to the questionnaire, the preparation of the research tool was concluded with a pilot data analysis to ensure suitable survey results.
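As an illustration of the test–retest step, the sketch below correlates two administrations of a pilot questionnaire item by item. It is a minimal sketch under assumptions: the pilot ratings are hypothetical, and the 0.7 acceptance threshold is an assumed cut-off, not a value reported in the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical pilot data: rows = pilot participants, columns = questionnaire items,
# values = 5-point Likert ratings from the first (t1) and second (t2) administration.
t1 = np.array([[4, 5, 3, 4], [3, 4, 4, 5], [5, 5, 4, 4], [2, 3, 3, 4]])
t2 = np.array([[4, 4, 3, 4], [3, 4, 5, 5], [5, 5, 4, 3], [2, 3, 3, 4]])

for item in range(t1.shape[1]):
    r, p = pearsonr(t1[:, item], t2[:, item])
    verdict = "stable" if r >= 0.7 else "review wording"   # 0.7 is an assumed cut-off
    print(f"Item {item + 1}: test-retest r = {r:.2f} ({verdict})")
```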
In the questionnaire, participants evaluated four statements using a 5-point Likert scale for each of the 47 defined patterns. On the scale, 1 stands for completely disagree, while 5 stands for completely agree. The questionnaire is depicted in Table 2.
The demographics of our research participants are summarised in Table 3. Opinions were gathered from 33 participants in the teachers' group, comprising teachers and teaching assistants. On average, they had 11.1 years of teaching experience and were involved in 5.4 different courses. The teachers answered the survey in the context of assessment pattern catalogue creation.
In addition, a sample of 51 students was involved in the research. All participating students had been actively involved in the remote study process during the COVID-19 pandemic for almost two years, which also included remote knowledge assessment. Since each study year comprises 10 study courses, each participant had taken part in knowledge assessment sessions in 20 courses. Therefore, they had participated first-hand in assessments employing many of the catalogue's assessment patterns. At the time of the research, they were enrolled in the 1st year of a Master's Degree study programme focused on informatics.
Participation in the study was optional and anonymous, which was clearly stated to the potential participants. In order to take part in the study, the participants had to give informed consent at the beginning of the questionnaire; only then could they start answering. After the participants were recruited, we organised a two-part session. In the first part, the students were familiarised with the assessment patterns, and in the second part, all of them answered the questionnaire, which was prepared for all 47 patterns.
After the survey, the results of both teachers and students were statistically analysed. Following the structure of the uniform questionnaire, we focused on two main viewpoints: (1) grading objectivity and consistency; and (2) students' motivation and satisfaction. Each variable was represented by one statement, assessed on a 5-point Likert scale. The study's main objective was to determine which of the defined assessment patterns were the most important for each concept. Therefore, a sorted list for each pattern category was created, based on mean values computed separately for students and teachers.
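The following minimal sketch illustrates this analysis step: it computes the mean rating per pattern for one statement and sorts the patterns within each category, separately for each respondent group. The responses here are a made-up toy sample; the actual study data are summarised in the tables in Section 4.

```python
from statistics import mean
from collections import defaultdict

# Toy responses: (group, category, pattern, Likert rating for one statement); illustrative only.
responses = [
    ("students", "D", "Pentathlon", 5), ("students", "D", "Pentathlon", 4),
    ("students", "D", "Open Book", 4), ("teachers", "D", "Pentathlon", 5),
    ("teachers", "D", "Open Book", 3), ("teachers", "C", "Game Rules", 5),
    ("students", "C", "Game Rules", 4), ("students", "C", "To-Do List", 4),
]

# Collect all ratings per (group, category, pattern) triple.
ratings = defaultdict(list)
for group, category, pattern, value in responses:
    ratings[(group, category, pattern)].append(value)

# Mean value per pattern, then a descending ranking within each (group, category) pair.
rankings = defaultdict(list)
for (group, category, pattern), values in ratings.items():
    rankings[(group, category)].append((mean(values), pattern))

for (group, category), items in sorted(rankings.items()):
    ranked = sorted(items, reverse=True)
    print(group, category, [f"{p} ({m:.2f})" for m, p in ranked])
```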

4. Results: The Impact of Defined Knowledge Assessment Patterns

Teachers and students answered four questions for each of the 47 identified patterns using a 5-point Likert scale. Table 9 and Table 10 present the average values for each pattern for students and teachers separately. The patterns are organised according to their categorisation presented in Section 2. Since agreement with each statement was expressed on a 5-point scale, average values closer to 5 express strong agreement with the statement, while average values closer to 1 express strong disagreement.
The detailed analysis of the results is presented in Section 4.1 and Section 4.2, while Table 4, Table 5, Table 6 and Table 7 present a bird's-eye view of the analysed data. Table 4 summarises the results for the assessment conceptual design patterns. The table shows the top-rated patterns according to the students and teachers regarding the perceived impact on the aspects researched in the presented study, namely, grading objectivity and consistency and students' motivation and satisfaction. An overlap can be seen between students and teachers, with some of the patterns taking precedence in related categories. The pattern Pentathlon was perceived as the pattern with the highest impact on grading objectivity and consistency by both students and teachers. The pattern addresses the challenge connected to the representativeness of a single assessment; it proposes that the student's final grade be composed of multiple assignments, e.g., lab work, quizzes, and oral and written exams. This also positively impacts students' motivation, since teachers can engage students with partial assessments throughout the semester, which was confirmed by the empirical results, as seen in Table 4. Other patterns that anticipate a gradual path to the final grade, namely, Student Achievement Portfolio and Stagewise Approach, were also perceived as very important for grading objectivity and consistency.
On the other hand, the Open Book pattern was perceived as important for students' motivation and satisfaction, especially from the students' perspective. The Open Book pattern allows the use of all resources during the knowledge assessment. While appealing to students, the technique needs special attention from teachers, especially regarding the types of questions to be prepared. Another pattern that was perceived by both students and teachers as highly impacting students' motivation and satisfaction is the Innovator pattern. The pattern encourages the use of innovative, out-of-the-box tools and approaches, such as business simulation games in applicable study courses. As is known, in addition to the challenges connected to remote education, challenges associated with maintaining the motivation and engagement of the digital generation are also present. The Innovator pattern allows one to address both challenges at the same time.
Table 5 depicts the top three patterns for the researched aspects within the group of patterns for defining questions, answers, and schedules. The highest impact on grading objectivity and consistency was perceived for the content validation patterns, namely, Expert Validator, Statistical Validator, and Colleague Veto. When applying the patterns Expert Validator and Colleague Veto, a domain expert or colleague reviews the questions and answers and eliminates the problematic ones. While this is performed before the knowledge assessment is implemented, the pattern Statistical Validator is applied after the assessment is completed, looking for a potential negative deviation from the expected average result.
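To illustrate the Statistical Validator idea, the sketch below flags questions whose post-assessment mean score falls noticeably below an expected average. The per-question scores, the expected value, and the tolerance are assumed example values for demonstration only, not part of the pattern definition.

```python
from statistics import mean

# Hypothetical per-question scores (percentage points) collected after an assessment.
question_scores = {
    "Q1": [85, 90, 70, 80],
    "Q2": [30, 25, 40, 20],   # suspiciously low: candidate for review or elimination
    "Q3": [65, 70, 60, 75],
}

EXPECTED_AVERAGE = 65      # assumed expected average result for this assessment
TOLERANCE = 20             # assumed allowed negative deviation

for question, scores in question_scores.items():
    avg = mean(scores)
    if avg < EXPECTED_AVERAGE - TOLERANCE:
        print(f"{question}: mean {avg:.1f} deviates negatively -> review (see Objective Assessor)")
    else:
        print(f"{question}: mean {avg:.1f} within expectations")
```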
In the area of students’ motivation and satisfaction, the patterns Question Donor and Example Questions were perceived as very important among students and teachers. Both patterns give students the opportunity to practise and get to know the examples of possible questions. Even more, within the pattern Question Donor, students provide their own questions and answers that later become the candidates for the knowledge assessment. The pattern offers an alternative way for motivating students to examine the study materials by proactively participating in defining the knowledge assessment.
The third group within the catalogue consists of the execution and grading patterns. The patterns with the most significant perceived impact on the studied domains are presented in Table 6. The students perceived the pattern Objective Assessor as the pattern with the highest impact on grading objectivity and consistency. The pattern is closely related to the previously mentioned pattern Statistical Validator, wherein the teachers eliminate unclear questions from the grading. A high impact on objectivity and consistency was also perceived for the pattern Results With Delay, where assessment results are published only after a review, and, from the teachers' perspective, for the pattern Criteria List. The latter requires that grading follows a previously defined scoring rubric.
The teachers and students agreed that the pattern with the highest impact on student motivation is the Bonus Points pattern. The pattern provides bonus points as a reward in order to encourage the students' engagement. This is especially important in remote environments, since the teachers do not have direct access to their audience. High on the list was also the pattern Impro League, which means that teachers react agilely and quickly to possible problems; such problems frequently emerge when knowledge assessments are implemented in distant digital environments. Regarding students' satisfaction, the pattern Third Shift was placed among the top three according to both the teachers and the students. The pattern anticipates that the teachers will also be available to students outside the dedicated course time slots. However, particular caution is required so that a good practice does not turn into a poor one, i.e., the anti-pattern Full-Time Job.
The last content group in the published catalogue comprises the communication patterns. The patterns with the highest impact on objectivity, consistency, motivation, and satisfaction are captured in Table 7. Appropriately planned and implemented communication can significantly improve grading efficiency. Knowing what to do and when to do it, whether explaining the course assessment structure at the beginning or the knowledge assessment structure before the assessment, greatly impacts grading objectivity and consistency. This was confirmed by students and teachers, wherein the pattern Game Rules was perceived as the pattern with the highest impact on both of the mentioned categories. In addition, students believe that the pattern To-Do List also impacts grading objectivity and consistency. The pattern assumes that the teachers provide a checklist of the key parts and tasks of the knowledge assessment.
On the other hand, a significant impact on students' motivation and satisfaction was perceived for the pattern Appetizer. The pattern assumes that preparation can be more efficient if students are familiar with examples of possible assessment questions. Therefore, teachers present example questions to students during the lectures prior to the exam or even during the lectures throughout the semester. The teachers also emphasise the pattern Proactive Teacher as having a high impact on students' satisfaction and the pattern Student Proxy as having an impact on students' motivation. Both patterns are intended to avoid unattended situations, which can be especially common when the knowledge assessment is implemented at a distance.
As depicted in the presented analysis, the patterns in the catalogue were categorised into different groups, and Table 8 presents the synthesised average values of the perceived impact on the researched views. The table covers the patterns for the assessment conceptual design (D), the patterns for defining questions, answers, and schedules (Q), the execution and grading patterns (E), the communication patterns (C), the anti-patterns (A), and the other patterns (O). For each group, the perceived impact on grading objectivity and consistency and on students' motivation and satisfaction is presented, as assessed by students and teachers.
According to the students, the communication patterns most significantly impact grading objectivity, followed by the patterns for the assessment conceptual design for objectivity and the patterns for defining questions, answers, and schedules for consistency. The group of patterns with the highest impact on students' motivation and satisfaction was again the communication patterns, followed by the patterns for defining questions, answers, and schedules, and the execution and grading patterns. The analysis of the teachers' perspective, on the other hand, shows that the teachers perceived the group of patterns for the assessment conceptual design as having the strongest influence on grading objectivity and consistency, followed by the patterns for defining questions, answers, and schedules. The ranking is understandable, since the teachers are much more aware of the importance of knowledge assessment design; at the same time, this aspect was perceived as less significant by the students, since they are not actively involved in preparing knowledge assessments. Regarding the perceived impact on students' motivation, the teachers again ranked the patterns for the assessment conceptual design the highest, while in the area of satisfaction, they assessed the communication patterns as the most important.

4.1. Perceived Impact on Grading Objectivity and Consistency

Table 4 and Table 6 provide a summary and highlight the top three patterns for which students and teachers perceived the highest impact on grading objectivity and consistency. The detailed analysis of gathered data, namely, average values for each pattern in the introduced assessment pattern catalogue, is presented in Table 9. The table is colour-coded:
  • Blue standing for students;
  • Orange standing for teachers.
Additionally, the top-rated patterns are highlighted with increasing colour intensity.
While the overlap between the students' and teachers' perspectives was already seen in Section 4, the colour coding again shows a similar mindset, but with some outliers. Focusing solely on the values and ignoring the pattern rankings, we point out hereinafter the patterns with the largest difference between the average values assessed by the students and the teachers. Such a difference indicates a difference in perceived objectivity and consistency between students and teachers.
The biggest differences in the assessment of the perceived impact on objectivity were for the pattern Member Channel in favour of the students and the pattern Big Brother in favour of the teachers. The difference for the pattern Member Channel was 1.69, wherein the students perceived the pattern as more important for objectivity than the teachers did. The pattern Member Channel is a communication pattern that encourages the use of separate channels where students can communicate in smaller groups. On the other hand, the largest difference in average values in favour of the teachers was for the pattern Big Brother: with a difference of 0.86, the teachers perceived the pattern as having more influence on grading objectivity than the students did. In the published catalogue, we categorised the Big Brother pattern under other patterns. Although we are aware that using the mentioned pattern is sometimes unavoidable, we believe that the objectivity and efficiency of the knowledge assessment can be ensured by using a combination of the other proposed patterns. The same two patterns also showed the largest difference in average values regarding the perceived impact on grading consistency; the differences were 1.63 for Member Channel and 0.66 for Big Brother.
Differences can also be observed between students and teachers for the top-rated patterns. For example, the difference for the Pentathlon pattern was 0.33 regarding grading objectivity and 0.23 regarding grading consistency, in favour of the students. The difference for the Objective Assessor pattern was 0.66 regarding objectivity and 0.55 regarding consistency, wherein, again, the students perceived a greater impact.
It is also interesting to analyse the average values of the patterns regardless of the defined categories. The students perceived the pattern Criteria List as the pattern with the highest impact on grading objectivity, with an average value of 4.70. It was followed by the patterns Expert Validator, with an average value of 4.33, and Statistical Validator, with a value of 4.24. According to the teachers, the patterns that impact grading objectivity the most were the pattern Objective Assessor, with an average value of 4.75; the pattern Game Rules, graded 4.65 on average; and the pattern To-Do List, with a value of 4.53. According to the students, the patterns with the least influence on grading objectivity were the 24/7 pattern, valued at 2.15; Full-Time Job, with an average value of 2.24; and the pattern Certification Centre, with a value of 2.45. Similarly, the teachers assessed the patterns Full-Time Job and 24/7 as the third and second least important patterns, while the pattern with the least influence on grading objectivity was, according to the teachers, Big Brother.

4.2. Perceived Impact on Students’ Motivation and Satisfaction

Table 6 and Table 7 provide a summary and highlight the top three patterns for which students and teachers perceived the highest impact on student motivation and satisfaction. The detailed analysis of the gathered data, namely, the average values for each of the patterns, is presented hereinafter in Table 10. The table is colour-coded:
  • Blue standing for students;
  • Orange standing for teachers.
Additionally, the top-rated patterns are highlighted with increasing colour intensity.
The biggest differences in average values when assessing the perceived impact on students' motivation were detected for the pattern Expert Validator and the pattern 24/7. With a difference of 1.14, the students perceived a greater influence of the Expert Validator pattern on their motivation. The students therefore believe that involving a colleague teacher or domain expert in the validation of questions and answers significantly impacts motivation. On the other hand, the biggest difference in favour of the teachers was detected for the 24/7 pattern. The teachers had a stronger perception of the influence of this anti-pattern on students' motivation, with a difference of 1.17 in the average values. The same pattern also occupies the same position in the assessment of the impact on students' satisfaction, where the difference was even bigger, namely, 1.58. The 24/7 pattern is an anti-pattern covering unrealistic student expectations regarding teachers' online responsiveness. However, since the students assessed the pattern's impact with only 2.62, we can assume that the students themselves perceive the pattern as an anti-pattern and understand the reasonable limits of teacher availability.
Looking into the top-rated patterns regarding their impact on students' motivation and satisfaction, we can detect only minor differences. Again, a very highly rated pattern is the Pentathlon pattern, wherein the difference between the students' and teachers' assessments was only 0.02 for both aspects: motivation and satisfaction. A slightly greater difference was seen for the pattern Bonus Points, a pattern within the top three of the execution and grading patterns. The difference when assessing the impact on motivation was 0.26, and when assessing the impact on satisfaction, it was 0.17.
The pattern Bonus Points received the highest average value when assessing the patterns' impact on students' motivation. The students assessed the pattern with an average value of 4.81 and the teachers with 4.55. From the teachers' point of view, the pattern was followed by the patterns Pentathlon and Question Donor, with 4.36, and the pattern Proactive Teacher, with 4.30. The students' perspective showed a slightly different sequence: the Bonus Points pattern was followed by Appetizer, with 4.73, and Objective Assessor, with an average value of 4.69. Regardless of the categories, the highly rated patterns regarding students' satisfaction overlapped only partially between students and teachers. The highest teachers' average values were given to the patterns Third Shift, Bonus Points, and Emergency Call, namely, 4.64, 4.58, and 4.52. On the other hand, the students assessed Appetizer, with 4.77, Bonus Points and Emergency Call, with 4.75, and Impro League and Objective Assessor, with 4.73, as the patterns with the greatest perceived impact on students' satisfaction.

5. Discussion

Our catalogue of assessment patterns for distant education provides an extended group of proven practices, complementing the existing pedagogical patterns research domain (e.g., [24,25,26,27]) and, in particular, providing an essential addition to the knowledge assessment domain. The presented study explored the perceived impact of the defined patterns on grading objectivity and consistency and on students' motivation and satisfaction.
We defined a research instrument to implement the validation. The questionnaire uses a Likert scale, and the results are presented as average values from the students' and teachers' perspectives. The research followed two research questions, with the results presented in Section 4 and the associated tables. The existing literature does not present any proven or established validation approach; moreover, Fioravanti and Barbosa [35] stated that there is a gap in the validation of proposed pedagogical patterns. For example, Köppe et al. [28] researched their proposed patterns, looking into the value for students and teachers and the effect on the final assessment; however, a research instrument was not defined, which significantly limits extensibility and reproducibility. Therefore, the questionnaire defined here, presented in Table 2, is a significant contribution to the existing body of knowledge.
Table 4, Table 5, Table 6 and Table 7 and Table 9 present the results of the analysis addressing RQ1. We researched which patterns from the created catalogue significantly impact grading objectivity and consistency. Table 9 presents a detailed analysis of the top patterns across the categories. It can be observed that the patterns with the highest perceived impact on grading objectivity are, according to the students, Objective Assessor, Game Rules, and To-Do List, and according to the teachers, Criteria List, Expert Validator, and Statistical Validator. The patterns with the highest perceived impact on grading consistency are, according to the students, Game Rules, To-Do List, and Objective Assessor, and according to the teachers, Criteria List, Pentathlon, and Game Rules. We detected a great overlap, with some differences connected to the patterns' ratings.
The results of the analysis of the perceived impact on students' motivation and satisfaction, as researched with RQ2, are depicted in Table 4, Table 5, Table 6 and Table 7 and Table 10. As can be observed from Table 10, the top-rated patterns regarding their impact on students' motivation are, according to the students, Bonus Points, Appetizer, and Objective Assessor, and according to the teachers, Bonus Points, Question Donor, and Pentathlon. Researching the area of students' satisfaction, the top-rated patterns, according to the students, are Appetizer, Bonus Points, and Emergency Call, and according to the teachers, Third Shift, Bonus Points, and Emergency Call.
The implemented analysis also offers some additional insights into the considered domain, presented in Table 4, Table 5, Table 6 and Table 7. First, we will focus on communication patterns, since the patterns from this group are highly important for every researched area. As depicted in Table 7, we can see that the top three patterns, according to the students, with the highest perceived impact on grading objectivity are Game Rules, To-Do List, and Appetizer. On the other hand, the top three communication patterns with the highest perceived impact on students’ motivation, according to the teachers, are Proactive Teacher, Appetizer, and Student Proxy. Further, let us also look more deeply into the patterns for defining questions, answers, and schedules. We can see from Table 5 that according to the students, the top three patterns according to perceived impact on grading consistency are Expert Validator, Statistical Validator, and Colleague Veto. The top three patterns, according to the teachers, with the greatest perceived impact on students’ satisfaction are Example Questions, Question Donor, and Hidden Validator. The overview of Table 4, Table 5, Table 6 and Table 7 reveals the great overlap between students’ and teachers’ perspectives, wherein the top three patterns, if the rating is ignored, are often similar.
Our research was performed in the higher education environment, including students and teachers from IT-oriented study programmes. Our participants therefore possessed advanced digital skills, which can foster a more positive attitude towards remote education and knowledge assessments as a whole, since they do not need to struggle with the technological component. In addition, a positive attitude towards technology could also slightly affect the assessment scores of the patterns with a stronger connection to IT, for example, the communication patterns, wherein the communication is realised remotely using technology. However, the defined patterns do not require advanced IT knowledge from teachers or students. Therefore, we do not expect the results of the study to be contradicted with participants from different backgrounds.
During the presentation of the results and the detailed analysis, it was noticed that the applicability of the majority of the presented patterns is not limited to remote knowledge assessments. Undoubtedly, a variety of them could also be used in classroom settings. Therefore, one of our previous papers [36] researched the classroom applicability of the introduced catalogue. According to the results, the subset of the assessment patterns most suitable for the classroom consists of 12 patterns, among which are Pentathlon, Expert Validator, Colleague Veto, Bonus Points, Game Rules, Time Reminder, and others [36].

Limitations

As in any research, the results can be affected by various factors. The limitations and potential threats to validity are presented hereinafter. The research participants were from the IT domain, namely, students in IT-oriented study programmes and teachers involved in IT-oriented study programmes. Therefore, all of the participants had advanced IT knowledge. The research was performed in the higher education domain, involving students enrolled in a Master's Degree study programme at the time of the research.

6. Conclusions

Different situations can confront us with previously unknown challenges, forcing us to find, define, and use proven solutions. One of the recent challenges we have had to cope with was distant education, which, practically overnight, became a part of our everyday lives. Settings previously reserved for in-person sessions were moved to digital environments, including the knowledge assessment process. Some of the previous practices were, therefore, no longer efficient, and a need for new, proven practices arose.
As an answer, we defined a novel catalogue of assessment patterns for distant education [14]. The final step of catalogue creation was the validation of the defined knowledge assessment patterns, and this paper presents the results of the survey conducted among teachers and students. The objective of the survey was to verify the catalogue's suitability and expedite its use, focusing on several aspects of knowledge assessment. We approached the validation from different angles, addressing grading objectivity and consistency as well as students' motivation and satisfaction. Using a uniform questionnaire, we gathered perspectives on the perceived impact on the researched domains. According to the results, patterns such as Pentathlon, Expert Validator, Statistical Validator, Game Rules, Results With Delay, and Objective Assessor had a high perceived impact on grading objectivity and consistency according to students and teachers. On the other hand, patterns such as Innovator, Example Questions, Question Donor, Bonus Points, and Appetizer had a high perceived impact on students' motivation and satisfaction. The paper presents a detailed analysis, depicting the top-rated patterns within the defined pattern categories with their corresponding average values, allowing an in-depth comparison among different aspects and views.
In future work, we plan to develop our catalogue of knowledge assessment patterns even further. Since one of our previous studies clearly confirmed the suitability of the defined patterns for classroom settings, further development of the catalogue is meaningful, wherein new and emerging challenges must be addressed. One of them is, of course, the set of practices related to the increased involvement of AI in the teaching process. We also plan to push catalogue dissemination further by providing additional analyses regarding the attitude towards the catalogue. Currently, the catalogue's online collaborative platform is under construction in a limited number of languages. More translations will allow further propagation and development, as well as the ability to propose new patterns.

Author Contributions

Conceptualization, T.B., M.H. and L.P.; methodology, T.B., M.H. and L.P.; validation, T.B., M.H. and L.P.; data curation, T.B., M.H. and L.P.; writing—original draft preparation, T.B. and L.P.; writing—review and editing, T.B., M.H. and L.P.; visualization, T.B.; supervision, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge financial support from the Slovenian Research and Innovation Agency (Research Core Funding No. P2-0057).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Participation was optional and anonymous.

Data Availability Statement

Data can be obtained on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. United Nations. Sustainable Development Goals. 2020. Available online: https://sdgs.un.org/goals (accessed on 3 May 2024).
  2. Viera Trevisan, L.; Eustachio, J.; Galleli, B.; Filho, W.; Pedrozo, E. Digital transformation towards sustainability in higher education: State-of-the-art and future research insights. Environ. Dev. Sustain. 2023, 26, 2789–2810. [Google Scholar] [CrossRef]
  3. Kräusche, K.; Pilz, S. Integrated sustainability reporting at HNE Eberswalde—A practice report. Int. J. Sustain. High. Educ. 2017, 19, 291–312. [Google Scholar] [CrossRef]
  4. Chan, C.K.Y. A review of the changes in higher education assessment and grading policy during COVID-19. Assess. Eval. High. Educ. 2023, 48, 874–887. [Google Scholar] [CrossRef]
  5. Pokhrel, S.; Chhetri, R. A Literature Review on Impact of COVID-19 Pandemic on Teaching and Learning. High. Educ. Future 2021, 8, 133–141. [Google Scholar] [CrossRef]
  6. Montenegro-Rueda, M.; Luque-de la Rosa, A.; Sarasola Sánchez-Serrano, J.L.; Fernández-Cerero, J. Assessment in Higher Education during the COVID-19 Pandemic: A Systematic Review. Sustainability 2021, 13, 10509. [Google Scholar] [CrossRef]
  7. Şenel, S.; Şenel, H.C. Remote Assessment in Higher Education during COVID-19 Pandemic. Int. J. Assess. Tools Educ. 2021, 8, 181–199. [Google Scholar] [CrossRef]
  8. Ferretti, F.; Santi, G.R.P.; Del Zozzo, A.; Garzetti, M.; Bolondi, G. Assessment Practices and Beliefs: Teachers’ Perspectives on Assessment during Long Distance Learning. Educ. Sci. 2021, 11, 264. [Google Scholar] [CrossRef]
  9. Jacques, S.; Ouahabi, A.; Lequeu, T. Remote Knowledge Acquisition and Assessment During the COVID-19 Pandemic. Int. J. Eng. Pedagog. (iJEP) 2020, 10, 120–138. [Google Scholar] [CrossRef]
  10. Tuah, N.A.A.; Naing, L. Is Online Assessment in Higher Education Institutions during COVID-19 Pandemic Reliable? Siriraj Med. J. 2020, 73, 61–68. [Google Scholar] [CrossRef]
  11. Ghaicha, A. Theoretical Framework for Educational Assessment: A Synoptic Review. J. Educ. Pract. 2016, 7, 212–231. [Google Scholar]
  12. Bachman, L.F. Statistical Analyses for Language Assessment Book; Cambridge Language Assessment, Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  13. Demir, K. Smart education framework. Smart Learn. Environ. 2021, 8, 29. [Google Scholar] [CrossRef]
  14. Pavlič, L.; Beranič, T.; Brezočnik, L.; Heričko, M. Towards a novel catalog of assessment patterns for distant education in the information technology domain. Comput. Educ. 2022, 182, 104470. [Google Scholar] [CrossRef]
  15. Beranič, T.; Pavlič, L.; Brezočnik, L.; Heričko, M. The Students’ Perspective on Assessment Pattern Catalog for a Distant Education. In Learning Technology for Education Challenges, Proceedings of the 11th International Workshop, LTEC 2023, Bangkok, Thailand, 24–27 July 2023; Uden, L., Liberona, D., Eds.; Springer: Cham, Switzerland, 2023; pp. 204–213. [Google Scholar]
  16. Ragupathi, K.; Lee, A. Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In Diversity and Inclusion in Global Higher Education: Lessons from Across Asia; Sanger, C.S., Gleason, N.W., Eds.; Springer: Singapore, 2020; pp. 73–95. [Google Scholar] [CrossRef]
  17. Gopalan, V.; Abubakar, J.; Zulkifli, A.N.; Alwi, A.; Che Mat, R.C. A review of the motivation theories in learning. AIP Conf. Proc. 2017, 1891, 020043. [Google Scholar] [CrossRef]
  18. Alsadoon, E.; Alkhawajah, A.; Suhaim, A.B. Effects of a gamified learning environment on students’ achievement, motivations, and satisfaction. Heliyon 2022, 8, e10249. [Google Scholar] [CrossRef] [PubMed]
  19. Saif, N.I. The Effect of Service Quality on Student Satisfaction: A Field Study for Health Services Administration Students. Int. J. Humanit. Soc. Sci. 2014, 4, 172–181. [Google Scholar]
  20. Wong, W.H.; Chapman, E. Student satisfaction and interaction in higher education. High. Educ. 2022, 85, 957–978. [Google Scholar] [CrossRef]
  21. Slack, H.R.; Priestley, M. Online learning and assessment during the COVID-19 pandemic: Exploring the impact on undergraduate student well-being. Assess. Eval. High. Educ. 2023, 48, 333–349. [Google Scholar] [CrossRef]
  22. Gauthier, L. How Learning Works: 7 Research-Based Principles for Smart Teaching. J. Scholarsh. Teach. Learn. 2013, 14, 126. [Google Scholar] [CrossRef]
  23. Ali, W. Online and Remote Learning in Higher Education Institutes: A Necessity in light of COVID-19 Pandemic. High. Educ. Stud. 2020, 10, 16. [Google Scholar] [CrossRef]
  24. Köppe, C.; Nørgård, R.; Pedersen, A.Y. Towards a Pattern Language for Hybrid Education. In Proceedings of the VikingPLoP 2017 Conference on Pattern Languages of Program, Grube, Schleswig-Holstein, Germany, 30 March–2 April 2017. [Google Scholar] [CrossRef]
  25. Bennedsen, J.; Eriksen, O. Categorizing Pedagogical patterns by teaching activities and Pedagogical values. Comput. Sci. Educ. 2006, 16, 157–172. [Google Scholar] [CrossRef]
  26. EduPLoP. Writing Educational Patterns—Patterns & Publications Repository. Available online: https://goo.gl/lcfCFp (accessed on 11 January 2021).
  27. Open Pattern Repository for Online Learning Systems. Main Page-Open Pattern Repository for Online Learning Systems. Available online: https://www.learningenvironmentslab.org/openpatternrepository/index.php (accessed on 11 January 2021).
  28. Köppe, C.; Verhoeff, R.; van Joolingen, W. Incremental Grading in Practice: First Experiences in Higher Education. In Proceedings of the European Conference on Pattern Languages of Programs 2020, Virtual Event, 1–4 July 2020; pp. 1–11. [Google Scholar]
  29. Köppe, C.; Manns, M.; Middelkoop, R. Educational Design Patterns for Student-Centered Assessments. In Proceedings of the Preprints of the 26th Conference on Pattern Languages of Programs, Urbana, IL, USA, 7–10 October 2020; pp. 1–19. [Google Scholar]
  30. Bergin, J.; Kohls, C.; Köppe, C.; Mor, Y.; Portier, M.; Schümmer, T.; Warburton, S. Student’s Choice of Assessment. In Proceedings of the 21st European Conference on Pattern Languages of Programs, Kaufbeuren, Germany, 6–10 July 2016; pp. 1–10. [Google Scholar]
  31. Warburton, S.; Bergin, J.; Kohls, C.; Köppe, C.; Mor, Y. Dialogical Assessment Patterns for Learning from Others. In Proceedings of the 10th Travelling Conference on Pattern Languages of Programs, Leerdam, The Netherlands, 7–10 April 2016; pp. 1–14. [Google Scholar]
  32. Seoane Pardo, A.M.; García-Peñalvo, F. Pedagogical Patterns and Online Learning; IGI Global: Hershey, PA, USA, 2014; Chapter 15; pp. 298–317. [Google Scholar] [CrossRef]
  33. Köppe, C.; Niels, R.; Holwerda, R.; Tijsma, L.; Van Diepen, N.; Van Turnhout, K.; Bakker, R. Flipped classroom patterns: Designing valuable in-class meetings. In Proceedings of the 20th European Conference on Pattern Languages of Programs, Kaufbeuren, Germany, 8–12 July 2015. [Google Scholar]
  34. Köppe, C. Using pattern mining for competency-focused education. In Proceedings of the Second Computer Science Education Research Conference, Wroclaw, Poland, 9–12 September 2012; pp. 23–26. [Google Scholar]
  35. Fioravanti, M.L.; Barbosa, E.F. A systematic mapping on pedagogical patterns. In Proceedings of the 2016 IEEE Frontiers in Education Conference (FIE), Erie, PA, USA, 12–15 October 2016; pp. 1–9. [Google Scholar] [CrossRef]
  36. Heričko, M.; Beranič, T.; Pavlič, L. The Assessment Pattern Catalog for a Distant Education: The Study of the Classroom Applicability. In Learning Technology for Education Challenges, Proceedings of the 10th International Workshop, LTEC 2022, Hagen, Germany, 11–14 July 2022; Uden, L., Liberona, D., Eds.; Springer: Cham, Switzerland, 2022; pp. 30–41. [Google Scholar]
Table 1. Short descriptions of the defined patterns [14].
CategoryPatternDescription
DOpen BookAll assets are allowed during an online assessment.
DStagewise ApproachStagewise learning and remote assessment.
DPentathlonThe final grade comprises multiple assignments, e.g., hands-on, quizzes.
DContinuous TestingIntegrate remote assessment into lectures as a tool for teaching.
DExpert-LevelRemote assessments follow the complexity required for an expert.
DInnovatorBe innovative in all aspects—use out-of-the-box online tools, and approaches.
DStudent Achievement PortfolioUse assignments evolution/progress as an auditing trail for a later retrospective.
DColleague BloomUse Bloom’s taxonomy as guidance for remote assessment preparation.
QExpert ValidatorUse colleague teacher or domain expert to validate questions and answers.
QStatistical ValidatorStatistically validate online assignment responses.
QHidden ValidatorContinuously collect and validate students’ feedback in order to validate questions.
QColleague VetoEliminate or correct questions in case of doubts or concerns of a colleague.
QMathematical ValidatorFollow method (e.g., 1 min/question) to set time-limits for remote assessment.
QProfessional MultiplierMultiply time required for completion by expert by three (or two).
QDress RehearsalRehearse online assignment or exam submission.
QExample QuestionsProvide online question examples.
QQuestions DonorMotivate students to provide potential exam questions.
QTheme VariantsPrepare different online assignment variants for the same assessment type.
ETimeboxIntroduce time-limit for online assignments.
ERandomised OrderIntroduce random ordering of online questions and their answers.
EResults With DelayDisplay results with delay, fine-tune results—related to Objective Assessor.
EIdentity GuaranteeUse techniques that guarantee the authenticity of students (e.g., handwritten solutions).
EAccessibility AdjustmentsAllow online accessibility adjustments for students with special needs.
EObjective AssessorEliminate questions that turn out to be problematic.
ECriteria ListDefine the list of criteria for a more objective e-solutions assessment.
EEurosongIntroduce peer evaluation with the Eurosong points system.
EBonus PointsIntroduce bonus points to better engage students in an online setting.
EThird ShiftBe available for remotely assisting students outside course time slots.
ENumber DrawIntroduce random order of online assignment presentations.
EImpro LeagueAgile response to unexpected problems (e.g., internet failure).
ESelf-AssessmentMaking a list of questions with answers for students’ preparation/self-assessment.
EPersonal DefenceIntroduce remote oral defence of students’ e-solutions.
CAppetizerProvide example online exam questions.
CGame rulesExplain remote assessment rules at first lecture and before assessment.
CTime ReminderDuring an online assessment, remind students about the remaining time.
CTo-Do ListProvide a checklist of the key parts of the remote assignments.
CEmergency CallBe available in the separated channel to address urgent student calls.
CMember ChannelEncourage separated channel where students can communicate in smaller groups.
CAcademic Integrity AppealEmphasise the importance of academic integrity during online assessments.
CProactive TeacherA teacher periodically contacts students to see if there are any problems.
CFirewallExplain/present assessment results remotely to minimise exam reviews.
CStudent ProxyUrgent information can be distributed by student proxy. A voice of a class.
A24/7Unrealistic student expectations regarding teacher’s online responsiveness.
AFull-Time JobAddressing student problems remotely can become a full-time job.
ACertification CentreAvoid acting like a certification centre, where you focus only on assignment criteria.
OSafe Exam BrowserIntroduce the Safe exam browser to minimise cheating during the online assessment.
OBig BrotherIntroduce one or more cameras to supervise students remotely.
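The Statistical Validator pattern is described only at the catalogue level. As one plausible illustration of what "statistically validating online assignment responses" could look like in practice, the sketch below computes two classical item-analysis statistics for each question: difficulty (the share of correct answers) and discrimination (the point-biserial correlation between the item score and the total score). The data layout, the function name, and the choice of statistics are assumptions made for this example; they are not taken from the pattern catalogue itself.

```python
from statistics import mean, pstdev

def item_analysis(responses):
    """Per-question difficulty and discrimination for a 0/1-scored online exam.

    responses: list of dicts, one per student, mapping question id -> 0 or 1.
    Returns {question_id: (difficulty, discrimination)}, where difficulty is the
    share of correct answers and discrimination is the point-biserial
    correlation between the item score and the student's total score.
    """
    questions = sorted({q for r in responses for q in r})
    totals = [sum(r.values()) for r in responses]
    report = {}
    for q in questions:
        scores = [r.get(q, 0) for r in responses]
        difficulty = mean(scores)
        # Population covariance and correlation between item and total score.
        cov = mean(s * t for s, t in zip(scores, totals)) - mean(scores) * mean(totals)
        spread = pstdev(scores) * pstdev(totals)
        discrimination = cov / spread if spread else 0.0
        report[q] = (round(difficulty, 2), round(discrimination, 2))
    return report

# Questions that almost everyone fails, or whose scores correlate negatively
# with the total score, are candidates for revision or removal.
demo = [
    {"Q1": 1, "Q2": 0, "Q3": 1},
    {"Q1": 1, "Q2": 1, "Q3": 1},
    {"Q1": 0, "Q2": 0, "Q3": 1},
    {"Q1": 1, "Q2": 0, "Q3": 0},
]
for q, (p, rpb) in item_analysis(demo).items():
    print(q, "difficulty:", p, "discrimination:", rpb)
```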
Table 2. The questionnaire.

Statement | Scale
The pattern has a positive impact on grading objectivity. | 5-point Likert scale
The pattern has a positive impact on grading consistency. | 5-point Likert scale
The pattern has a positive impact on students’ motivation. | 5-point Likert scale
The pattern has a positive impact on students’ satisfaction. | 5-point Likert scale
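Tables 8–10 report perceived impact as values on the questionnaire’s 1–5 scale. A plausible way to arrive at such figures is to average the Likert responses per pattern, aspect, and respondent group; the minimal sketch below does exactly that. The record layout and field names are illustrative assumptions and do not reproduce the authors’ actual analysis code.

```python
from collections import defaultdict

def mean_ratings(responses):
    """Average 5-point Likert ratings per (pattern, aspect, group).

    responses: iterable of dicts such as
        {"group": "students", "pattern": "Pentathlon",
         "aspect": "motivation", "rating": 5}
    Returns {(pattern, aspect, group): mean rating}, rounded to two decimals,
    mirroring the precision used in the result tables.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in responses:
        key = (r["pattern"], r["aspect"], r["group"])
        sums[key] += r["rating"]
        counts[key] += 1
    return {key: round(sums[key] / counts[key], 2) for key in sums}

# Two hypothetical answers for the same pattern, aspect, and group:
print(mean_ratings([
    {"group": "students", "pattern": "Pentathlon", "aspect": "motivation", "rating": 5},
    {"group": "students", "pattern": "Pentathlon", "aspect": "motivation", "rating": 4},
]))
# {('Pentathlon', 'motivation', 'students'): 4.5}
```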
Table 3. Demographics of the research participants.

Teachers | Students
33 teachers and teaching assistants | 51 students actively involved in remote assessment
11.1 years of teaching experience | 1st year Master’s Degree
5.4 different courses | focused on informatics
Table 4. Assessment conceptual design patterns’ impact on objectivity, consistency, motivation, and satisfaction.

Patterns for the Assessment Conceptual Design (D)
Aspect | Rank | Students | Teachers
Objectivity | 1 | Pentathlon | Pentathlon
Objectivity | 2 | Student Achievement Portfolio | Stagewise Approach
Objectivity | 3 | Innovator, Colleague Bloom | Student Achievement Portfolio
Consistency | 1 | Pentathlon | Pentathlon
Consistency | 2 | Student Achievement Portfolio | Continuous Testing
Consistency | 3 | Colleague Bloom | Stagewise Approach
Motivation | 1 | Pentathlon | Pentathlon
Motivation | 2 | Open Book | Innovator
Motivation | 3 | Innovator | Continuous Testing, Stagewise Approach
Satisfaction | 1 | Open Book | Innovator
Satisfaction | 2 | Innovator | Open Book, Pentathlon
Satisfaction | 3 | Pentathlon | Stagewise Approach
Table 5. Impact of the patterns for defining questions, answers, and schedules on objectivity, consistency, motivation, and satisfaction.

Patterns for Defining Questions, Answers, and Schedules (Q)
Aspect | Rank | Students | Teachers
Objectivity | 1 | Expert Validator | Expert Validator
Objectivity | 2 | Statistical Validator | Statistical Validator
Objectivity | 3 | Colleague Veto | Colleague Veto
Consistency | 1 | Expert Validator | Statistical Validator
Consistency | 2 | Statistical Validator | Colleague Veto
Consistency | 3 | Colleague Veto | Expert Validator
Motivation | 1 | Questions Donor | Questions Donor
Motivation | 2 | Example Questions | Example Questions, Hidden Validator
Motivation | 3 | Statistical Validator | Dress Rehearsal
Satisfaction | 1 | Example Questions | Example Questions
Satisfaction | 2 | Questions Donor | Questions Donor
Satisfaction | 3 | Expert Validator | Hidden Validator
Table 6. Execution and grading patterns’ impact on objectivity, consistency, motivation, and satisfaction.

Execution and Grading Patterns (E)
Aspect | Rank | Students | Teachers
Objectivity | 1 | Objective Assessor | Criteria List
Objectivity | 2 | Results With Delay | Results With Delay
Objectivity | 3 | Impro League | Objective Assessor
Consistency | 1 | Objective Assessor | Criteria List
Consistency | 2 | Impro League | Results With Delay
Consistency | 3 | Results With Delay | Objective Assessor
Motivation | 1 | Bonus Points | Bonus Points
Motivation | 2 | Objective Assessor | Eurosong
Motivation | 3 | Impro League | Self-Assessment
Satisfaction | 1 | Bonus Points | Third Shift
Satisfaction | 2 | Objective Assessor, Impro League | Bonus Points
Satisfaction | 3 | Third Shift | Impro League
Table 7. Communication patterns’ impact on objectivity, consistency, motivation, and satisfaction.

Communication Patterns (C)
Aspect | Rank | Students | Teachers
Objectivity | 1 | Game Rules | Game Rules
Objectivity | 2 | To-Do List | Academic Integrity Appeal
Objectivity | 3 | Appetizer | To-Do List
Consistency | 1 | Game Rules | Game Rules
Consistency | 2 | To-Do List | Academic Integrity Appeal
Consistency | 3 | Appetizer | Appetizer
Motivation | 1 | Appetizer | Proactive Teacher
Motivation | 2 | Game Rules | Appetizer
Motivation | 3 | Emergency Call | Student Proxy
Satisfaction | 1 | Emergency Call | Emergency Call, Game Rules
Satisfaction | 2 | Appetizer | Appetizer
Satisfaction | 3 | Game Rules | Proactive Teacher
Table 8. Impact of the pattern groups on grading objectivity and consistency, and on students’ motivation and satisfaction.

Group | Objectivity (Students) | Objectivity (Teachers) | Consistency (Students) | Consistency (Teachers) | Motivation (Students) | Motivation (Teachers) | Satisfaction (Students) | Satisfaction (Teachers)
D | 4.11 | 3.58 | 3.92 | 3.59 | 3.85 | 3.86 | 3.77 | 3.68
Q | 3.98 | 3.53 | 4.07 | 3.56 | 4.00 | 3.45 | 4.03 | 3.72
E | 4.06 | 3.50 | 4.04 | 3.54 | 3.91 | 3.64 | 3.85 | 3.75
C | 4.23 | 3.18 | 4.24 | 3.25 | 4.26 | 3.85 | 4.27 | 4.20
A | 2.39 | 2.64 | 2.36 | 2.69 | 1.58 | 2.18 | 1.45 | 2.17
O | 2.50 | 2.28 | 2.51 | 2.36 | 2.71 | 3.17 | 2.88 | 3.54
Table 9. Impact on grading objectivity and consistency perceived by students and teachers.

Category | Pattern | Objectivity (Students) | Objectivity (Teachers) | Consistency (Students) | Consistency (Teachers)
D | Open Book | 3.94 | 3.03 | 3.71 | 2.82
D | Stagewise Approach | 3.68 | 3.82 | 3.85 | 3.88
D | Pentathlon | 4.51 | 4.18 | 4.50 | 4.27
D | Continuous Testing | 3.84 | 3.64 | 3.85 | 3.91
D | Expert-Level | 3.52 | 3.27 | 3.37 | 3.18
D | Innovator | 4.02 | 3.18 | 3.92 | 3.09
D | Student Achievement Portfolio | 4.29 | 3.67 | 4.21 | 3.79
D | Colleague Bloom | 4.02 | 3.42 | 3.96 | 3.76
Q | Expert Validator | 4.47 | 4.33 | 4.42 | 4.03
Q | Statistical Validator | 4.46 | 4.24 | 4.37 | 4.18
Q | Hidden Validator | 4.04 | 3.27 | 4.02 | 3.42
Q | Colleague Veto | 4.27 | 4.18 | 4.19 | 4.06
Q | Mathematical Validator | 3.57 | 3.58 | 3.58 | 3.55
Q | Professional Multiplier | 4.12 | 3.79 | 4.04 | 3.94
Q | Dress Rehearsal | 3.96 | 3.00 | 3.98 | 2.94
Q | Example Questions | 4.16 | 3.00 | 4.18 | 3.21
Q | Questions Donor | 3.96 | 2.94 | 3.84 | 2.91
Q | Theme Variants | 4.12 | 3.48 | 4.10 | 3.39
E | Timebox | 3.69 | 3.52 | 3.65 | 3.64
E | Randomised Order | 4.10 | 3.67 | 4.14 | 3.67
E | Results With Delay | 4.45 | 4.15 | 4.41 | 4.15
E | Identity Guarantee | 4.10 | 3.73 | 4.12 | 3.55
E | Accessibility Adjustments | 4.35 | 3.55 | 4.37 | 3.55
E | Objective Assessor | 4.75 | 4.09 | 4.58 | 4.03
E | Criteria List | 4.37 | 4.70 | 4.35 | 4.79
E | Eurosong | 3.04 | 3.03 | 2.96 | 3.09
E | Bonus Points | 4.22 | 3.12 | 4.21 | 3.21
E | Third Shift | 4.08 | 2.82 | 3.98 | 2.94
E | Number Draw | 3.63 | 2.55 | 3.60 | 2.61
E | Impro League | 4.40 | 3.18 | 4.46 | 3.24
E | Self-Assessment | 4.04 | 3.03 | 4.12 | 3.12
E | Personal Defence | 3.62 | 3.91 | 3.54 | 4.00
C | Appetizer | 4.46 | 3.27 | 4.52 | 3.45
C | Game Rules | 4.65 | 4.18 | 4.67 | 4.24
C | Time Reminder | 4.40 | 2.88 | 4.42 | 2.94
C | To-Do List | 4.53 | 3.45 | 4.61 | 3.42
C | Emergency Call | 4.45 | 3.06 | 4.40 | 3.06
C | Member Channel | 4.17 | 2.48 | 4.15 | 2.52
C | Academic Integrity Appeal | 4.08 | 3.55 | 4.02 | 3.55
C | Proactive Teacher | 3.98 | 3.18 | 3.98 | 3.30
C | Firewall | 3.74 | 3.00 | 3.74 | 3.18
C | Student Proxy | 3.86 | 2.73 | 3.86 | 2.85
A | 24/7 | 2.48 | 2.15 | 2.50 | 2.18
A | Full-Time Job | 2.49 | 2.24 | 2.51 | 2.30
A | Certification Centre | 2.53 | 2.45 | 2.53 | 2.61
O | Safe Exam Browser | 2.55 | 2.82 | 2.53 | 2.76
O | Big Brother | 2.23 | 3.09 | 2.19 | 2.85
Table 10. Impact on students’ motivation and satisfaction perceived by students and teachers.

Category | Pattern | Motivation (Students) | Motivation (Teachers) | Satisfaction (Students) | Satisfaction (Teachers)
D | Open Book | 4.25 | 3.73 | 4.49 | 4.09
D | Stagewise Approach | 3.66 | 4.06 | 3.62 | 3.82
D | Pentathlon | 4.34 | 4.36 | 4.11 | 4.09
D | Continuous Testing | 3.51 | 4.06 | 3.25 | 3.45
D | Expert-Level | 3.17 | 3.24 | 2.92 | 3.03
D | Innovator | 4.11 | 4.18 | 4.13 | 4.12
D | Student Achievement Portfolio | 3.84 | 4.00 | 3.82 | 3.61
D | Colleague Bloom | 3.90 | 3.24 | 3.80 | 3.24
Q | Expert Validator | 4.08 | 2.94 | 4.21 | 3.42
Q | Statistical Validator | 4.11 | 3.09 | 4.17 | 3.70
Q | Hidden Validator | 4.09 | 4.21 | 4.08 | 4.09
Q | Colleague Veto | 3.98 | 3.18 | 4.12 | 3.58
Q | Mathematical Validator | 3.24 | 2.91 | 3.10 | 3.33
Q | Professional Multiplier | 3.84 | 3.18 | 3.88 | 3.73
Q | Dress Rehearsal | 4.04 | 3.39 | 4.02 | 4.00
Q | Example Questions | 4.31 | 4.21 | 4.48 | 4.30
Q | Questions Donor | 4.52 | 3.36 | 4.44 | 4.18
Q | Theme Variants | 3.82 | 3.03 | 3.82 | 2.91
E | Timebox | 2.94 | 2.61 | 2.73 | 2.42
E | Randomised Order | 3.16 | 2.73 | 2.90 | 2.48
E | Results With Delay | 3.80 | 2.82 | 3.56 | 3.39
E | Identity Guarantee | 3.19 | 2.88 | 3.10 | 2.76
E | Accessibility Adjustments | 4.47 | 4.06 | 4.48 | 4.39
E | Objective Assessor | 4.69 | 3.76 | 4.73 | 4.39
E | Criteria List | 4.26 | 3.55 | 4.22 | 4.15
E | Eurosong | 3.48 | 4.27 | 3.63 | 3.91
E | Bonus Points | 4.81 | 4.55 | 4.75 | 4.58
E | Third Shift | 4.49 | 4.00 | 4.61 | 4.64
E | Number Draw | 3.51 | 3.94 | 3.18 | 3.06
E | Impro League | 4.66 | 3.64 | 4.73 | 4.42
E | Self-Assessment | 4.36 | 4.15 | 4.39 | 4.36
E | Personal Defence | 2.90 | 4.06 | 2.88 | 3.48
C | Appetizer | 4.73 | 4.24 | 4.77 | 4.48
C | Game Rules | 4.67 | 4.00 | 4.63 | 4.52
C | Time Reminder | 4.40 | 3.58 | 4.40 | 4.27
C | To-Do List | 4.62 | 3.79 | 4.56 | 4.27
C | Emergency Call | 4.63 | 3.97 | 4.75 | 4.52
C | Member Channel | 4.43 | 4.00 | 4.40 | 4.06
C | Academic Integrity Appeal | 3.65 | 3.42 | 3.73 | 3.58
C | Proactive Teacher | 3.98 | 4.30 | 3.92 | 4.30
C | Firewall | 3.45 | 3.21 | 3.43 | 3.85
C | Student Proxy | 4.06 | 4.03 | 4.06 | 4.18
A | 24/7 | 2.62 | 3.79 | 2.75 | 4.33
A | Full-Time Job | 2.98 | 3.45 | 3.16 | 3.82
A | Certification Centre | 2.52 | 2.27 | 2.73 | 2.48
O | Safe Exam Browser | 1.58 | 2.09 | 1.48 | 1.85
O | Big Brother | 1.57 | 2.06 | 1.42 | 1.76
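The per-category rankings in Tables 4–7 can be derived directly from the per-pattern means in Tables 9 and 10: within each category and assessment aspect, the patterns are ordered by their mean rating, separately for students and teachers. The following sketch shows that derivation under an assumed data structure; the field layout and tie-breaking are illustrative only.

```python
def top_patterns(means, category, aspect, group, k=3):
    """Rank the patterns of one category by mean rating for one aspect and group.

    means: dict mapping (category, pattern) -> {(aspect, group): mean rating},
           e.g. {("D", "Pentathlon"): {("motivation", "students"): 4.34}}
    Returns the k best-rated pattern names, highest mean first.
    """
    rated = [
        (ratings[(aspect, group)], pattern)
        for (cat, pattern), ratings in means.items()
        if cat == category and (aspect, group) in ratings
    ]
    rated.sort(reverse=True)  # equal means fall back to reverse-alphabetical order
    return [pattern for _, pattern in rated[:k]]

# A few of the student motivation means from Table 10:
means = {
    ("D", "Pentathlon"): {("motivation", "students"): 4.34},
    ("D", "Open Book"): {("motivation", "students"): 4.25},
    ("D", "Innovator"): {("motivation", "students"): 4.11},
    ("D", "Colleague Bloom"): {("motivation", "students"): 3.90},
}
print(top_patterns(means, "D", "motivation", "students"))
# ['Pentathlon', 'Open Book', 'Innovator'] (cf. Table 4, Motivation, Students)
```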
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
