Article

Online Peer Assessment for Learning: Findings from Higher Education Students

by Paula Loureiro and Maria João Gomes *
Research Centre on Education, Campus of Gualtar, University of Minho, 4710-057 Braga, Portugal
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(3), 253; https://doi.org/10.3390/educsci13030253
Submission received: 1 December 2022 / Revised: 20 February 2023 / Accepted: 22 February 2023 / Published: 27 February 2023
(This article belongs to the Special Issue Assessment and Evaluation in Higher Education—Series 2)

Abstract

Assessment practices in the higher education (HE) context have undergone profound changes over recent years, particularly regarding their purpose, strategies, and available resources. This exploratory study analyzes, through the perceptions of HE students, the contribution and adequacy of an assessment for learning strategy, namely online peer assessment (OPA), inspired by the conceptual framework of the PrACT Model, which aims to contribute to the dissemination of alternative assessment practices. The main data collection technique was a survey questionnaire, and the study participants (n = 16) were students from a higher education institution in Portugal. Results point to students' lack of experience with OPA and are discussed according to the dimensions of the PrACT framework. From the students' perspective, OPA is an adequate alternative digital assessment strategy, contributing to student motivation as well as to the development of cognitive, metacognitive, and digital skills.

1. Introduction

Technology has taken up a major role in every aspect of society, and education is no exception, as universities evolve to better suit the needs and requirements of an increasingly technological society [1] and master the constant introduction of “digital technologies for communication, collaborative knowledge construction, reading, and multimedia learning in education systems” [2] (p. 3100). The digital transformation of the 21st century has influenced education in an unprecedented way. Information is no longer restricted to print books or encyclopedias in the traditional sense; it is now spread across the network of connected digital technologies and may be consumed anytime [3], and “students can learn from anywhere without losing the facets of a seated classroom” [4] (p. 41). Beyond increased and diversified access to information, opportunities to contribute to the production and dissemination of knowledge, once exclusive to educational environments, are now open to every individual, who has become both a producer and a consumer of information, giving the individual an unprecedented role in society [3]. Education has shifted from a one-way dissemination of knowledge to a mutual-learning community where knowledge is created collaboratively [5].
Higher education institutions (HEIs) are currently preparing students for jobs that do not yet exist [1], and this implies equipping learners with a set of requirements seen as quality indicators and necessary traits for future success. Being able to perform meaningful, authentic tasks and to apply the knowledge and skills acquired in real-world contexts are essential 21st century skills that help students contribute effectively to the global workforce [6,7,8,9,10]. Saykili [3] classifies 21st century skills into three main categories: learning and innovation skills (critical thinking and problem solving, communication and collaboration, creativity, and innovation skills); information, media, and technology skills (information literacy, media literacy, and information and communication literacy); and life and career skills (flexibility and adaptability, initiative and self-direction, social and cross-cultural skills, productivity and accountability, leadership, and responsibility skills). Continuous development and lifelong learning in a fast-changing digital era will make the learners of today an attractive, hirable, and employable workforce tomorrow [4,11].

1.1. Assessment in Higher Education

Assessment influences the way students define their priorities and engage in the learning process. The evolution of this relationship has been a concern for educational researchers, and the terminology used to describe assessment has evolved to reflect its purpose [12].
The way in which assessment is operationalized influences students' motivation and learning, and assessment is increasingly seen as “a tool for learning” [13].
Earl [14] presents three distinct and intertwined purposes of assessment: Assessment of Learning (AoL), Assessment for Learning (AfL), and Assessment as Learning (AaL). AoL is “assessment used to confirm what students know, to demonstrate whether or not the students have met the standards and/or show how they are placed in relation to others” [15] (p. 7). Over the years, assessment in schools has largely had this summative function, where students’ results are measured through tests and examinations, and their progress is judged against socially implemented standards [16]. More recently, educational policies have focused on using assessment more effectively to enhance student learning [15], highlighting the formative purpose of AfL and AaL. AfL focuses on learning and is “part of everyday practice by students, teachers and peers that seeks, reflects upon and responds to information from dialogue, demonstration and observation in ways that enhance ongoing learning” [12] (p. 2). AaL uses “assessment as a process of developing and supporting metacognition for students … focuses on the role of the student as the critical connector between assessment and learning” [15] (p. 7). AaL and AfL support student-centered learning and aim at strengthening student autonomy, decision making, and participation, particularly through feedback about ongoing progress [17].
In their study, Pereira et al. [18] refer to the emergence of research on alternative or learner-oriented assessment methods in HEIs over the period 2006–2013, highlighting portfolio assessment and self- and peer-assessment practices as the main foci of the studies analyzed. Research on assessment and evaluation in higher education over that period underlines, for example, the advantages of formative assessment, such as enhanced student learning and motivation; the benefits of e-learning assessment, such as opportunities for reflection and innovation; the adequacy of alternative assessment methods, other than conventional written tests, as they enable effective learning and professional development; and the effectiveness of peer assessment, from the student’s perspective, as it allows interaction between students and produces formative feedback.

1.2. Alternative Digital Assessment

Enhanced technological development and the demand for 21st century skills have resulted in the use of alternative and sustainable assessment methods and new learning scenarios [8] capable of providing a more effective learning environment. Portfolios, checklists, rubrics, surveys, student-centered assessments, as well as reflections are some examples of non-standardized forms of assessment which seek to demonstrate student achievement in a more comprehensive way [19] and make learning more significant through assessment. Assessment regimes are complex, and aligning instruction, assessment, and effective learning [20] has become quite challenging for instructors at various levels within the education setting [21], as all competing priorities must be carefully balanced. As such, a new assessment culture [22] emerges where assessment goes beyond its summative role of certification and ranking and undertakes a formative perspective in a change of roles which empowers students to actively build their knowledge and develop their skills [23,24] and challenges instructors to design deep and meaningful learning experiences free of constraints from time and place [9]. It also highlights the importance of features such as authenticity, flexibility, adaptability, social interaction, feedback, self-regulation, and scaffolding when designing an assessment for learning strategy [25].
Recent studies [26,27] have found that, although assessment practices have changed to include the assessment of skills and learning enhancement in online environments, alternative digital assessment strategies such as online portfolios, podcasts, storytelling, or self and peer assessment still depend on individual teacher proactivity rather than on any assessment process institutionally planned or implemented [28]. According to Amante et al. [25], the “assessment culture”, which diverges from the “test culture”, respects the three elements of the learning process—student, teacher, and knowledge—in a balanced and dynamic way. Online multiple-choice quizzes, for example, incorporate the technological dimension but still favor the psychometric paradigm of assessment rather than the development of the student, as in a learner-centered formative approach. The authors hold that the concept of an “alternative digital assessment strategy” refers to all technology-enabled tasks where design, performance, and feedback are technologically mediated. The literature [20,28,29] points out that assessment is the most powerful tool to influence the way students spend their time, orient their effort, and perform their tasks. Therefore, in a competency based learning environment, assessment requirements must mirror a competency based curriculum, and assessment strategies need to adapt to online contexts and new learning resources while assessing knowledge, abilities, and attitudes [25,30].

1.3. The PrACT Model: An Alternative Digital Assessment Framework

Within an alternative evaluation culture, and based on the concerns for quality and validity of assessment practices, the authors Pereira, Oliveira, and Tinoca [30] propose a conceptual framework that led to the PrACT model [16,17,21,22]. This framework can be used “as a reference in the definition of an alternative digital strategy for online, hybrid (blended learning) or face-to-face contexts with strong use of technologies” and “constitutes a framework for the quality of a given assessment strategy” [24] (p. 318). The PrACT model is intended to impact the quality standards for e-assessment tasks in an assessment for learning context and emphasizes four main dimensions—Practicability, Authenticity, Consistency, and Transparency—as shown in Figure 1.
The Practicability dimension is related to the feasibility and sustainability of the assessment strategy used and it considers the resources, time, and training costs for assessors and organizations. Learners must also recognize the assessment tasks as feasible, relevant, and useful for the learning process. This dimension considers the following criteria: costs (related to training time and resources for assessors and assessees), efficiency (related to the costs/effects based on the expected results), and sustainability (related to the feasibility of the implementation of the assessment strategy) [24,25,30,31,32].
The Authenticity dimension is related to lifelong learning and the need for the assessment tasks to be complex and significant in order to resemble the skills required for professional life. This dimension includes a set of criteria intended to measure the level of authenticity of the assessment task: similarity (represents the connection to the real-world context), complexity (represents the nature of the e-assessment tasks), adequacy (represents the suitability of the assessment conditions) and significance (represents the meaningful value of the tasks for learners, instructors, and employers) [24,25,30,31,32].
The Consistency dimension is linked to the psychometric properties of validity and reliability, implying a variety of assessment indicators. To assess the degree of consistency in the assessment strategy, the following reference criteria are proposed: instruction-assessment alignment (meaning the work done during the learning process is coherent with the assessment tasks), multiplicity of indicators (meaning there is a variety of e-assessment methods, contexts, moments, and assessors), relevant criteria (meaning the applicability of the assessment criteria to assess competencies and skills), competency-assessment alignment (meaning the balance between the competencies developed and the assessment design used) [24,25,30,31,32].
The Transparency dimension refers to learner engagement, meaning the e-assessment strategy must be visible and intelligible by all participants. The following criteria are contemplated in this dimension: democratization (meaning the participation of students in defining the assessment criteria), engagement (meaning the participation of students in defining the learning goals and performance criteria), visibility (meaning the possibility for learners to share their learning process/products with others) and impact (meaning the effects that the e-assessment strategies have on the learning process and on the design of the educational program) [24,25,30,31,32].
The PrACT e-assessment framework clearly considers an assessment for learning perspective, placing learners as active agents in the learning process and sharing responsibility for the assessment of their own learning and that of others in a collaborative constructivist process that assumes a wide range of sources in a variety of situations and circumstances.
Research on alternative digital assessment [25,27] indicates that online peer assessment (OPA) is an important contribution to the development of lifelong learning skills and sustainable assessment practices. According to Amendola and Miceli [33] (p. 72), “peer assessment (also called peer review) is a collaborative learning technique based on a critical analysis by learners of a task or artefact previously undertaken by peers”, where “students reciprocally express a critical judgment about the way their peers performed a task assigned by the teacher and, in some cases, they give a grade to it” in a constructive revision of their work. Wang et al. define peer assessment as an effective formative assessment tool since “peers share ideas with each other and evaluate each other’s work by giving grades or ratings based on assessment criteria or by giving feedback to peers in written or oral comments” [34] (p. 714). According to Liu et al. [35] (p. 280), “peer feedback is primarily about rich detailed comments but without formal grades, whilst peer assessment denotes grading (irrespective of whether comments are also included)”. Despite these differences, the authors consider that “whether grades are awarded or not, the emphasis is on standards and how peer interaction can lead to enhanced understandings and improved learning” [35] (p. 280). The usefulness of feedback for subsequent assessment (i.e., feed-forward) also plays a critical role in the definition of feedback, as research indicates that students often need help to make use of the feedback they receive [36]. The advancement of technology and the variety of learning management systems (LMS) available in universities allow instructors to boost the potential of online peer assessment by easily integrating it into classrooms, replacing traditional face-to-face peer assessment [37]. In this study, we adopted the term “online peer assessment” (OPA), as it is commonly used in the literature to refer to a “peer learning strategy that allows learners to assess peers’ performance by scoring or providing feedback” [38] (p. 387) as a means to promote learning in a technologically mediated environment.
Learning with peers is a key skill required for lifelong learning and, according to the literature, OPA enhances learning efficiency and student critical thinking while fostering the development of essential professional skills such as higher-order thinking, motivation, responsibility, and autonomy [39,40]. Being able to autonomously and critically assess their own performance and that of a peer against the required standards engages students in a self-reflection process that promotes their understanding of the learning content, stimulates them to explore and justify their own artefacts, and strengthens their critical thinking [33,38,41,42,43]. Peer assessment also boosts students’ social and collaborative skills while fostering a learning environment with high levels of interactivity [38,42]. Peer assessment mediated by technology can ensure anonymity of authorship, enhancing a more honest and fair assessment of peers [37,44,45,46], and reduce restrictions related to time and place, hence providing a ubiquitous learning environment [38].
Given the relevance of educational assessment of learning in the students’ learning process, this study aims to determine and discuss higher education (HE) students’ perceptions regarding an e-assessment strategy, namely OPA tasks designed based on the PrACT framework, as an assessment for learning strategy. Thus, the following general research question was defined:
  • What are the perceptions of HE students regarding online peer assessment practices inspired by the PrACT model, as an assessment for learning strategy?
The goal of the present article is to present the main findings concerning HE students’ perceptions of online peer assessment and how those practices can influence the development of essential 21st century lifelong learning skills. It also aims at validating the potential of a peer assessment strategy designed according to the four dimensions of the e-assessment framework which led to the PrACT model. Finally, it intends to determine the students’ perceptions of the adequacy and suitability of the peer assessment functionality available in the learning management system (LMS) used to support teaching and assessment at the university where the study took place.

2. Methodology

This study reports the implementation of an online peer assessment strategy in an HEI in Portugal. The focus of the study was the students’ perceptions of the suitability of an OPA strategy based on the PrACT model and with an assessment for learning approach. A class of 21 students from the second year of an undergraduate program in education at a Portuguese university participated in the study. The participants were selected through non-probabilistic convenience sampling, as this curricular unit was taught by one of the authors, who uses online peer assessment practices. It is important to note that, although this undergraduate program is a degree in education, it does not train teachers; instead, this cycle of studies provides graduates with knowledge and skills that enable them to act within and outside the educational system, particularly in terms of education, training, training management, social and community intervention, and educational mediation. The classes began on 14 February and ended on 1 July 2022. The assessment tasks were proposed and discussed with the students in the first class of the semester, and, from time to time, the teacher reminded the students of the aims of the assessment tasks and of how important they were for their learning. At the same time, the teacher briefly explained and discussed with the students the importance of peer assessment for self-learning and for peer learning. All the students agreed, in the first class of the curricular unit, to take part in the proposed online peer assessment activities. Nevertheless, only 16 students from the class actually responded to the online questionnaire sent to them.
The OPA process was implemented using the functionality available for that purpose in the Blackboard Collaborate platform, the LMS adopted by the HEI. This peer assessment tool allows teachers to assign the different tasks to the students randomly and anonymously, so that students do not know which colleagues are assigned to evaluate their work. As such, this OPA tool allowed students to submit their work, designated the assessors randomly and anonymously, collected the assessment remarks anonymously, and let students consult each peer assessment document as soon as it was completed. The random and anonymous designation of assessors also encourages students to submit their assessment tasks on time: students who do not submit on time are not included in the pool of assessors and therefore do not participate in that peer assessment task. Some students did not submit their assessment tasks on time and so did not participate in that specific peer assessment task. The peer assessment tool used does not have a proper process for group peer assessment. To overcome this limitation, only one member of each group was included in the group assessment task, so that we could also designate, randomly and anonymously, the students to carry out the group peer assessment task. It is important to note that the students did not know who their peer assessors were, but each assessor knew whom they were assessing. This situation was impossible to avoid, as they were assessing individual learning portfolios, which, by their nature, reveal their authorship. Concerning the group essays, every student knew which theme was discussed by each group, so it was also very easy to identify their authors.
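For illustration only, the following minimal sketch (in Python, with hypothetical names such as assign_assessors; it is not the Blackboard Collaborate implementation) shows the kind of assignment logic described above: only students who submitted on time enter the assessor pool, each on-time submission is reviewed by a fixed number of peers chosen at random, and no student reviews their own work.

```python
import random

def assign_assessors(submissions, num_assessors=2, seed=None):
    """Illustrative random, anonymous peer-assessor assignment.

    `submissions` maps each student id to True if the work was submitted
    on time. Only on-time submitters enter the assessor pool. The pool is
    shuffled and assessors are taken from rotated positions, so each
    student reviews exactly `num_assessors` peers, each work receives
    `num_assessors` reviews, and nobody reviews their own work.
    In a real tool, assessor identities would be hidden from authors.
    """
    rng = random.Random(seed)
    pool = [student for student, on_time in submissions.items() if on_time]
    if len(pool) <= num_assessors:
        raise ValueError("Not enough on-time submissions for peer assessment.")

    rng.shuffle(pool)
    assignments = {}
    for i, author in enumerate(pool):
        assignments[author] = [pool[(i + k) % len(pool)]
                               for k in range(1, num_assessors + 1)]
    return assignments


if __name__ == "__main__":
    # Students who missed the deadline (here "s03") are excluded from the
    # assessor pool and do not take part in this peer assessment round.
    submitted = {"s01": True, "s02": True, "s03": False, "s04": True, "s05": True}
    for author, assessors in assign_assessors(submitted, seed=7).items():
        print(f"{author} is reviewed by {assessors}")
```

The rotation over a shuffled pool is one simple way to keep the reviewing workload balanced (every on-time submitter reviews and is reviewed the same number of times); the actual platform may distribute work differently.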
The peer assessment tool allows the teacher to define when the assessment process begins and ends. As an online tool, it also allows students to access the assessment task anytime and almost anywhere. Although it is true that access to technology may still be limited in some cases, the literature strongly emphasizes these benefits of technology-mediated peer assessment. It is important to note that the students have free internet access everywhere on the university campus and in almost the whole city. It is very common for public places (such as libraries, public services, or city plazas) and commercial places (such as coffee shops or malls) to have free internet access. The students could access the university LMS (and therefore the online peer assessment tool) on their desktop computers or mobile devices. The OPA strategy was designed around two main assessment tasks: an individual portfolio and a group essay, as shown in Figure 2.
Peer assessment of the portfolio occurred at two separate moments—the middle and the end of the class calendar—and was performed by two anonymous (student) assessors. Peer assessment of the group essays was performed by two different groups of students at the end of the semester. The multiple moments and assessors involved in the peer assessment process created opportunities for students to improve their work upon receiving written feedback from different peers. To support the peer assessment activities, students were asked to use rubrics indicating assessment criteria and performance levels. The two rubrics were presented, explained, and discussed with the students at the beginning of the semester to help them better understand how to carry out the peer assessment tasks. The students did not define the criteria but were asked to discuss the rubrics and were given an opportunity to express their opinion about the relevance and value of the criteria and to suggest modifications. None of the students suggested modifications, and all the students present in the class agreed with the criteria used.
The teacher also developed a rubric to assess student participation and performance in the OPA activities and took that into consideration for the students’ final marks, as discussed with the students in their first class of the semester. As suggested by [35], awarding a percentage of the assignment marks for the quality of peer marking may encourage students to think carefully about the assessment criteria and the writing of feedback. This rubric was also discussed with the students at the beginning of the semester, with the aim of helping them better understand how to perform during the peer assessment tasks. The rubric included criteria on: (i) effective use of the rubrics to assess colleagues, indicating and justifying the levels attributed; (ii) quality of writing; (iii) evidence of scientific knowledge about the themes of their colleagues’ work; (iv) evidence of critical analysis of peer work; and (v) participation in the peer assessment activities (participation in some or all of the activities).
Data collection was based on an online questionnaire organized in six parts, four of which corresponded to the dimensions of the PrACT framework. It included a total of 36 multiple-choice questions, using a 4-point Likert scale (1—totally disagree; 2—disagree; 3—agree; 4—totally agree) with a no-opinion option, and two open-ended questions. An initial draft of the questionnaire was sent to two authors of the PrACT framework for analysis, and two working meetings were then organized with them to discuss it. These meetings provided valuable suggestions and improvements and were very important in developing the final version of the questionnaire. The questionnaire was then submitted for approval to the Ethics Committee of the HEI, where it was validated. Ethical concerns were considered in all phases of data collection and analysis, including informed consent, confidentiality, and anonymity. After this process, an email with the link to the online survey was sent to the students, only after the classes of the curricular unit in which the study took place had ended and the students had been graded. The link was available for approximately one month, and no reminders were sent, considering that it was the exam period for some of the students and the holiday period for others.
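Purely as an illustration (this is not the analysis procedure reported in the study, and all names and data are hypothetical), the short sketch below shows how responses to one Likert item of this kind could be tallied to produce the “x out of 16 agree”-style counts reported in the results section.

```python
from collections import Counter

# Hypothetical response codes mirroring the questionnaire scale described above:
# 1 = totally disagree, 2 = disagree, 3 = agree, 4 = totally agree, None = no opinion.
LABELS = {1: "totally disagree", 2: "disagree", 3: "agree", 4: "totally agree", None: "no opinion"}

def summarize_item(responses):
    """Tally one Likert item and report how many respondents agree overall."""
    counts = Counter(responses)
    agree = counts[3] + counts[4]                                   # "agree" plus "totally agree"
    answered = sum(v for k, v in counts.items() if k is not None)   # exclude "no opinion"
    breakdown = {LABELS[k]: counts[k] for k in LABELS}
    return breakdown, f"{agree} out of {answered} agree"

if __name__ == "__main__":
    # Illustrative (made-up) data for a single Practicability item, n = 16.
    item = [4, 4, 3, 3, 3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 3, None]
    breakdown, headline = summarize_item(item)
    print(breakdown)
    print(headline)
```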
Figure 3 presents the different phases of the data collection process.

3. Results and Discussion

Results from the survey are presented and discussed according to the four main sections of the questionnaire which correspond to the four dimensions of the PrACT model—Practicability, Authenticity, Consistency and Transparency—and the criteria which contribute to the definition of each of the dimensions.

3.1. Practicability Dimension

When implementing competency based e-assessment, the Practicability dimension could help assure the quality of the process, particularly through cost reduction, higher efficiency and recognized sustainability.
Regarding the costs related to the time or digital resources required to implement e-assessment activities, studies have shown that technology can help reduce teacher workload and the amount of time required to assess tasks and provide individual feedback [34,45,47,48]. Technology can also eliminate restrictions related to time or place and promote a ubiquitous learning environment [38,49]. According to the questionnaire and the students’ perceptions of the time costs of the OPA activities, all students agreed that the use of a digital tool facilitated the peer assessment process.
Efficiency relates costs to the expected results. Several authors [34,37,50] explain that online peer assessment has been replacing traditional peer assessment due to the proliferation of LMSs in academic institutions, the multiple benefits associated with significant learning, and the anonymity it provides. According to the perceptions of the students involved in the current study, the digital tool used to perform the online peer assessment activities contributed to the efficiency of the assessment strategy. Most of the students agreed that the assessment tool used was intuitive and user-friendly, and disagreed that it might have made the process slow or tiring. All students except one enjoyed using this tool. The students also responded well to the peer assessment activities, feeling no social pressure or constraints due to the anonymity of the online process, as seen in previous research [37,44,45]. The students were, however, divided on the quality of the anonymous feedback, as half of them (8 out of 16) considered anonymous feedback less valid or reliable than non-anonymous feedback [37].
Most of the students (12 out of 16) responded that they would like to see the current OPA process implemented in other curricular units (classes), which leads us to conclude that they consider it feasible, relevant, and useful “assuring that it is possible to successfully implement and sustain the proposed assessment design” [23].
Practicability is particularly important in online contexts, given their specificities concerning resources, time, and training costs, as well as efficiency and sustainability. Student perceptions concerning this dimension are summarized in Figure 4.

3.2. Authenticity Dimension

The Authenticity dimension considers the degree of similarity between the skills assessed and those required in professional life. In this section of the questionnaire, students’ answers were almost unanimous, as a great majority of the students acknowledged the enhanced learning and competency development promoted by online peer assessment practices.
Similarity is one of the reference criteria which contributes to the quality of competency based e-assessment tasks and refers precisely to the resemblance of the tasks to real-world contexts. OPA is known for fostering soft skills useful for students’ job placement, such as critical thinking, self-assessment, responsibility, autonomy, and team building [33,38,39,40,41,42,43]. Consequently, in the survey, all the students agreed that participating in the OPA tasks contributed to the development of assessment and analytical skills, digital competencies, critical reasoning, and the promotion of collaborative learning through contact with other points of view.
Real, professional, daily-life contexts are often complex, with a variety of possible solutions, imposing a cognitive challenge that can be met through significant learning and e-assessment tasks such as OPA [2,32], particularly for HE students [38]. Hence, most students in this study (14 of 16) agreed that peer assessment is, indeed, a complex task, a perception reinforced by their lack of any previous experience with this type of task. Panadero [51] suggests that more intensive peer assessment implementations can produce better human and social results, as students become aware of the complexities of peer assessment.
It is known that, in their professional life, students will face situations in which they are asked to comment on or evaluate the work of others, and HEIs should reflect this demand. Nicol et al. [52] refer to recent studies to highlight the benefits of producing feedback and developing assessment skills for meaningful learning. This reflects the significance of the e-assessment task for students, instructors, and employers. In the survey, almost all students believed that peer assessment relates to what they will do in the future, although only half of the students agreed that this task could actually be useful for their future. Based on this discrepancy, we might conclude that skills such as the ability to take ownership of the assessment criteria, to make informed judgments and articulate those judgments in written feedback, or even the ability to improve one’s work based on this reflective process, are currently not specifically developed through the curriculum as students recognize it, despite being an important requirement beyond university [52].
The quality of an alternative assessment task also depends on how well its complexity matches the conditions under which it is performed. In the survey, when asked about the time available to perform the OPA tasks, all students but one considered it appropriate. The presentation of the rubrics and the setting of deadlines in the first class of the semester may have contributed to effective time management by the students.
The Authenticity dimension emphasizes the need for students to perceive online assessment tasks as complex, related to real-life context, and significant. Figure 5 illustrates student perceptions concerning the online peer assessment tasks they participated in.

3.3. Consistency Dimension

The Consistency dimension “takes into account that the assessment of competences requires the implication of a variety of assessment methods, in diverse contexts, by different assessors, as well as the adequacy of the employed strategies” [23].
One of the criteria this dimension comprises is the multiplicity of indicators, to which the entire online peer assessment process contributes: products are assessed by multiple peers assigned randomly and anonymously by technology [37], products are reviewed at multiple moments thanks to the ubiquitous learning environment technology creates [49], and products can be saved, recovered, and shared in multiple formats and contexts [40]. Nicol et al. [52] suggest that student exposure to work of varying levels and to feedback from different sources helps produce high quality work. Besides the impact on learning, Amante et al. [25] note that skill assessment implies a variety of methods, contexts, and assessors to help provide validity and credibility to the assessment process. Corroborating this perspective, in the survey, all the students but one agreed that receiving feedback about their work at multiple moments and from multiple peers helps ensure a reliable assessment process.
The literature [42,53] indicates that the use of rubrics improves both the quantity and quality of peer assessment feedback and contributes to more appropriate and reasonable feedback. As mentioned in Section 2, all the students had access to two rubrics with relevant criteria and performance indicators to support their revision and production of feedback concerning both the portfolio and the group essay. When asked about the appropriateness of the rubrics, all the students agreed that they assured the quality of the assessment strategy. Although the rubrics were presented and discussed at the beginning of the semester, when questioned about them in the online survey at the end of the semester, all the students considered the criteria used for the assessment of the portfolio adequate, and all but one agreed with the criteria used to assess the group essay.
According to the student perceptions stated in the questionnaire, all the students agreed that the online peer assessment design used was aligned with the competences meant to be developed in the curricular unit, assuring the quality of the e-assessment task according to the criteria proposed [23]. Panadero [51] holds that, when the purpose of the peer assessment is enhanced learning, focus should be given to the clarity and validity of the feedback provided by the peers instead of the assignment of a specific mark. The importance of feedback production and critical reasoning was discussed with the students in the first class along with the implementation of the OPA process.
Finally, the Consistency dimension of the PrACT model comprises the criterion which measures “instruction-assessment alignment”, reflecting “the need to provide e-assessment scenarios that are representative of the learning situations experienced by the students” [23] (p. 13). All the students agreed that the work developed during classes was consistent with the assessment tasks used, and only two students disagreed that the assessment criteria were aligned with the work methodology of the curricular unit. All the students except one believed that student participation in the peer assessment process should be included in their final marks, acknowledging the validity and reliability of this OPA design.
Figure 6 shows students’ perceptions concerning the consistency domain which stresses “the importance of aligning the competences being assessed with the e-assessment strategies being used and the assessment criteria, as well as the need to use a variety of indicators” [23] (p. 12).

3.4. Transparency Dimension

“The transparency dimension promotes student engagement in online tasks through the democratization and visibility of the e-assessment strategies being used” [24].
As mentioned before, the students were made aware of the OPA strategy and of the assessment goals and rubrics from the first class of the semester. This awareness helps students know what is expected of them and adjust their learning process [23], promoting autonomy and significant learning [54]. Due to this democratization of the e-assessment task, all the students in the survey regarded the assessment process as transparent. Panadero [51] suggests that, through peer assessment, students are given the opportunity to state their point of view and share responsibility for the assessment process, involving human and social factors undervalued in the literature until very recently. This author believes peer assessment “does not happen in a vacuum; rather it produces thoughts, actions, and emotions as a consequence of the interaction of assessees and assessors” [51] (p. 2). When questioned during the survey about the validity of the feedback received, students showed different opinions, and the majority believed that the peer review of their work was not done according to the established criteria. This perspective is supported by previous results in the literature suggesting that students do not acknowledge competence or authority among their peers to perform assessment tasks and still prefer assessment by the teacher alone [26,37,40,51,52]. To reduce the distrust felt by students and increase the reliability of the online peer assessment strategy, regular training and interaction is strongly recommended in the literature [37,40,51,55]. It is important to note that, when asked in the open-ended questions, these students responded that they had never been involved in peer assessment activities before, which could help explain the distrust felt.
Student engagement in the definition of the learning goals contributes to their active participation, commitment, and responsibility, and defines the degree of transparency of an assessment task [23,45]. Considered a collaborative learning method, online peer assessment creates opportunities for students to articulate and negotiate their perspective of the assessment criteria and to co-construct knowledge by engaging in explanation, justification, and questioning processes to “negotiate meaning and reach consensus” [40] (p. 209). Accordingly, in our study, all the students responded that having participated in the peer assessment activities enabled greater engagement in the activities of the curricular unit, making the assessment process more transparent.
Web-based peer assessment has several advantages which contribute to the visibility of an assessment strategy, that is, the possibility of students sharing their learning processes or products with others [23]. Besides other benefits already mentioned (ubiquity, anonymity, format diversity…), technology permits “instant electronic submission, storage, distribution and retrieval of student work as well as assessment data” [40] (p. 208), making assessment visible to the student or to others at any moment of the learning process and supporting collaboration within the learning community. As such, all the students in the study agreed that using a digital tool allowed them to keep a record of the peer feedback received. All the students but one also agreed that the random distribution and anonymity of assessors provided by technology made the assessment process transparent.
According to Nicol et al. [52], peer assessment presents the sole opportunity for students to constructively use the significant feedback they have received to reformulate their initial products before final submission. This does not usually happen when the feedback is provided by the teacher, as teacher correction generally means moving on to the next task in the curriculum rather than reformulating the task reviewed. This opportunity for improvement can be related to the impact, or effects, that e-assessment strategies have on the learning process. Accordingly, in the questionnaire, only one student disagreed that the feedback provided by peers helped them reach their learning goals. Fukuda et al. [56] highlight that learners can be taught how to learn through the development of self-regulated learning skills which monitor and regulate their learning process, hence impacting their academic success. Skills such as self-reflection and self-efficacy can be fostered by the participation, critical judgement, and self-regulation required for the formulation of evaluative feedback in peer assessment activities [33,37,55,57,58]. Corroborating the literature, all the participants in the study reported that having received feedback from their peers impacted their self-assessment and helped them define learning goals for competence development.
Students’ perceptions of their engagement in the learning process through online peer assessment, as promoted by the Transparency dimension, are summarized in Figure 7.

3.5. Open-Ended Responses

Student responses in the open-ended question section of the survey corroborated the findings from the previous sections of this study regarding the dimensions of the PrACT model. There were two open-ended questions in the survey. The first question (Q1) asked students whether they had participated in all the set OPA activities, requesting that they either describe their experience or explain why they had not participated. The second question (Q2) asked whether they had previously participated in any other OPA activities, and whether that previous experience (or lack of experience) influenced their engagement in the present task. Concerning Q1, four students mentioned they had not participated in all the activities, either for professional reasons or for lack of time. The 12 students who did participate in all the activities either simply replied that they had enjoyed it or listed some of the positive outcomes they had perceived (see Table 1). Regarding Q2, the results were surprising, as none of the students had ever participated in OPA activities. Twelve students felt that this lack of experience could have influenced their engagement, stating it as a main reason for the complexity and difficulties associated with the task (see Table 1). The other four students felt that previous practice would not have influenced their engagement, simply mentioning that it was a new experience and that they had never assessed portfolios.
After performing a six-phase thematic analysis [59] of the open-ended answers to find patterns of meaning, it was possible to code the students’ perceptions concerning online peer assessment into two main themes: the benefits of OPA and the constraints of OPA. Within the first theme, three sub-themes were identified:
  • Improvement of critical thinking skills;
  • Improvement of information and communication technology (ICT) skills;
  • General improvement.
Within the second theme, it was possible to identify one sub-theme:
  • Complexity of the peer assessment tasks due to lack of experience.
Generally speaking, the students pointed out the benefits they associated with the practice of peer assessment tasks and highlighted the complexity it represents. Emphasis was placed on the development of general skills, with specific mention of the promotion of critical thinking and digital literacy skills (Q1). Findings also reveal that none of the students had participated in peer assessment activities before, which led them to consider peer assessment a complex and difficult task (Q2). Table 1 shows some sample comments for each of the main themes mentioned.
The limited participation, or lack of participation, of students in assessment processes is corroborated by Iglesias Péres et al. [60] and by Flores et al. [13], who highlight the predominance of traditional assessment practices, and by Ibarra Saiz and Gómez [61], who call attention to the scarce training in assessment recognized by teachers and students themselves. As stated by Panadero and Brown [62], HE is the most suitable context for the implementation of peer assessment due, on the one hand, to the considerable maturity and competence young adults possess to perform such a task and, on the other hand, to the implicit need to be able to assess their own work and that of others, as well as to have their work assessed, in the very near professional future. Nevertheless, when teachers’ motivation for the use of peer assessment is studied in the Spanish context [62], it is possible to conclude that it is in the HE setting that the use of peer assessment is lowest, possibly as a natural result of its non-use at previous levels of education, the duration of courses with reduced opportunities for collaborative activities, the large number of students in HE classes preventing a trusting relationship between student assessor and assessee, a teaching-learning perspective focused on the transmission of knowledge rather than on the development of competences, or, finally, the minimal pedagogical training of HE teachers in assessment [62].

3.6. Main Findings

This study intended to portray and discuss student perceptions in relation to the implementation of an OPA design based on the dimensions and criteria of the PrACT framework. This framework seeks to contribute to the promotion of the quality of competency based e-assessment strategies considering technological mediation and teacher and learner needs in an assessment for learning approach. As such, Figure 8 shows the main findings obtained from the online survey applied to the HE students who, during one semester, participated in the OPA tasks promoted in one of their curricular units.

4. Conclusions

The demands and concerns of the 21st century knowledge society have triggered awareness of alternative assessment practices. This investigation reflects upon three key areas of assessment: formative assessment, digital assessment, and assessment in higher education. Grounded in the principles of these forms of assessment, our research comprises the implementation, in a higher education setting, of an alternative digital assessment strategy with an assessment for learning foundation, inspired by the PrACT framework.
When we consider the several roles and conceptions of assessment, we notice the emphasis given in recent decades to competency based or formative assessment. This conception of assessment implies the use of a variety of strategies and assessment methods and emerges in the context of an “assessment culture” [22] which is essentially qualitative and student-centered, supports the assessment of authentic, real-world tasks, comprises reflection and self-knowledge, includes collaborative and co-regulated learning, and prioritizes the learning process [63]. Because recent technological advances have created a shift to a highly connected, information-rich society, today’s students have different educational needs, namely, the need for lifelong learning and continuous development which prepares them for their professional future. Traditional assessment only allows the student to find out his/her level of intellectual knowledge, limiting and regulating access to new knowledge [7]. Alternative assessment seeks to create opportunities for continuous development, monitoring learning strategies centered on student singularity and on the application of acquired skills to the real world.
In the new assessment culture, technology-mediated assessment also has the potential to contribute to more meaningful and powerful learning [4,64]. The diversification and improvement of methodologies and tools used to carry out digital assessment has increased over time, providing instructors with a set of mobile applications and digital platforms which make the assessment process easier and far more beneficial, promoting the constructivist and formative dimension of the learning process [5]. Self-assessment and peer assessment tasks can also be simplified with the use of technology and can help increase student motivation and participation [64] if implemented thoughtfully and carefully. However, successfully integrating technology in the classroom and validating its impact on assessment for learning remains quite challenging and requires technology to effectively blend in with the learning objectives. The use of technology per se may not represent a transformative and worthwhile pedagogical practice. The effective application of technology, on the other hand, enables greater student involvement, access to feedback from teachers and peers in a collaborative construction of knowledge, storage of the learning artefacts reinforcing reflection and self-regulation, promotion of autonomy, and the obvious elimination of time and place restrictions [65]. The literature suggests that technology may be a solution to the challenges posed to HEIs as it responds to the new learning environments, to students’ educational needs, and to the pedagogical skills required for 21st century teachers [3].
The PrACT model conceptual framework emerges from the need to ensure the validity of competency based assessment and to authenticate the design of an alternative digital assessment strategy [23,25]. This study focuses on the implementation of an online peer assessment process based on the dimensions of the PrACT framework. According to the results from the student survey, it is possible to infer student awareness of three main learning achievements from the implementation of the mentioned OPA design:
  • Development of scientific/cognitive skills due to greater student participation and motivation in the activities;
  • Development of metacognitive skills, critical thinking, and collaboration due to the production and reception of constructive and significant feedback;
  • Development of digital skills and consolidation of learning due to the use of a digital tool.
On the one hand, the students believe that the implementation of this digital assessment strategy may have contributed to increased student engagement and motivation in the activities presented in class [66,67]. Consequently, this participation could have motivated the students to play an active role in the learning process by setting learning goals and doing their best to fulfill them, stimulating the development of cognitive skills [33,34,62,66,68].
On the other hand, the students also perceived that producing constructive and meaningful feedback, a cognitively more demanding and complex task than receiving it [68], helped them engage in metacognitive processes and promoted the development of essential skills such as critical thinking, self-assessment, and analytical skills which enhance significant learning [32,37,43,60,66,67].
Finally, according to the students’ perceptions, the technological dimension not only fosters the development of critical digital skills [5,23], but also provides the setting for future reference and consolidation of learning through storage of the entire learning process [23,40,67].
The survey results also point out some concerns the students have when involved in peer assessment practices, such as the accuracy and quality of anonymous feedback or the usefulness of such activities for their future. On this particular point, it is important to note that contradictory results were found: students considered peer assessment relevant to their future but did not see the task of doing it as developing their skills, which suggests the need for further studies.
In general terms, the results obtained in this study add to the existing literature [5,32,33,37,43,60,62,66,67] regarding the benefits and constraints felt by HE students when engaged in technology-mediated peer assessment. Although there are several studies worldwide considering different variables about digital peer assessment, in the context of Portuguese higher education institutions, there is very little literature on this topic, particularly considering the conceptual framework of the PrACT model used in this study.
Online peer assessment, as presented in this study, is a student-centered alternative assessment strategy which enhances learning through the development of core competencies. Training teachers and students to understand the benefits of peer assessment, as well as implementing it regularly, should be a primary objective in higher education curricula, given that the lack of such training and practice has a strong impact on the development of essential skills in students during HE and beyond. Given the demands of today’s society, resorting to innovative, authentic, and multifaceted assessment practices which enhance meaningful and lifelong learning is one of the greatest challenges facing educational institutions, namely HEIs.

5. Limitations of the Study and Future Research

In spite of the encouraging results of this study, some limitations should be acknowledged which indicate the need for future studies. First, the sample size was relatively small, as the participants were undergraduate students from one single curricular unit. Due to the characteristics and size of the sample, at this point of the study we did not consider using inferential statistics. Additionally, the study was implemented over the course of a single semester, whereas studies [37,40,55] indicate that regular training and practice are required to enhance student motivation and the quality of feedback, as students’ beliefs about the utility of the task may relate to their effort and performance. Finally, the only data collection method used was the student survey. Other data collection methods, such as interviews or document analysis, were considered at the beginning of the study but were not possible to implement.
In future studies, it would be interesting to consider implementing this assessment design at different levels of education across consecutive school years, including other data collection methods such as observation, document analysis, and interviews, in order to capture not only student perceptions but also teachers’ and employers’ perspectives on the usefulness of an online peer assessment strategy based on the PrACT framework during students’ school years, particularly in tertiary education.

Author Contributions

Conceptualization, P.L. and M.J.G.; Methodology, P.L. and M.J.G.; Validation, P.L. and M.J.G.; Formal analysis, P.L. and M.J.G.; Investigation, P.L. and M.J.G.; Data curation, P.L. and M.J.G.; Writing—original draft, P.L.; Writing—review & editing, M.J.G.; Supervision, M.J.G.; Project administration, P.L. and M.J.G.; Funding acquisition, M.J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by CIEd—Research Centre on Education, Institute of Education, University of Minho, projects UIDB/01661/2020 and UIDP/01661/2020, through national funds of FCT/MCTES-PT.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee for Social and Human Sciences of University of Minho (protocol code CEICSH 07172022—date of approval: 28 June 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to consent provided by participants on the use of confidential data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fleming, E.C.; Robert, J.; Sparrow, J.; Wee, J.; Dudas, P.; Slattery, M.J. A Digital Fluency Framework to Support 21st-Century Skills. Change Mag. High. Learn. 2021, 53, 41–48. [Google Scholar] [CrossRef]
  2. Silber-Varod, V.; Eshet-Alkalai, Y.; Geri, N. Tracing research trends of 21st-century learning skills. Br. J. Educ. Technol. 2019, 50, 3099–3118. [Google Scholar] [CrossRef]
  3. Saykili, A. Higher Education in The Digital Age: The Impact of Digital Connective Technologies. J. Educ. Technol. Online Learn. 2019, 2, 1–15. [Google Scholar] [CrossRef] [Green Version]
  4. Riegel, C.; Kozen, A. Attaining 21st Century Skills in a virtual classroom. Educ. Plan. 2016, 23, 41–55. [Google Scholar]
  5. Songkram, N.; Chootongchai, S.; Khlaisang, J.; Koraneekij, P. Education 3.0 system to enhance twenty-first century skills for higher education learners in Thailand. Interact. Learn. Environ. 2021, 29, 566–582. [Google Scholar] [CrossRef]
  6. Oliveira, I.; Pereira, A. Avaliação digital autêntica: Questões e desafios. Rev. Educ. A Distância Elearning 2021, 4, 22–40. [Google Scholar]
  7. Volodina, A. Evaluation in lifelong learning. In Proceedings of the 2011 14th International Conference on Interactive Collaborative Learning, Piestany, Slovakia, 21–23 September 2011; pp. 1–4. [Google Scholar] [CrossRef]
  8. Boud, D. Sustainable assessment: Rethinking assessment for the learning society. Stud. Contin. Educ. 2000, 22, 151–167. [Google Scholar] [CrossRef]
  9. Boud, D.; Falchikov, N. Redesigning Assessment for Learning beyond Higher Education. pp. 34–41, January 2005. Available online: https://www.researchgate.net/publication/228337704 (accessed on 28 November 2022).
  10. Mueller, J. The Authentic Assessment Toolbox: Enhancing Student Learning through Online Faculty Development. J. Online Learn. Teach. 2005, 1, 1–7. [Google Scholar]
  11. Harishree, C.; Mekala, S. Fostering 21 st Century Skills in the Students of Engineering in ESL Classroom. IUP J. Soft Ski. 2020, 14, 59–69. [Google Scholar]
  12. Schellekens, L.H.; Bok, H.G.J.; Jong, L.H.; Schaaf, M.F.; Kremer, W.D.J.; Vleuten, C.P.M. A scoping review on the notions of Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). Stud. Educ. Eval. 2021, 71, 101094. [Google Scholar] [CrossRef]
  13. Flores, M.A.; Brown, G.; Pereira, D.; Coutinho, C.; Santos, P.; Pinheiro, C. Portuguese university students’ conceptions of assessment: Taking responsibility for achievement. High. Educ. 2020, 79, 377–394. [Google Scholar] [CrossRef]
  14. Earl, L. Assessment as Learning: Using Classroom Assessment to Maximize Student Learning; Corwin: Thousand Oaks, CA, USA, 2003. [Google Scholar]
  15. Earl, L. Assessment-A Powerful Lever for Learning. Brock Educ. J. 2007, 16, 1–15. [Google Scholar] [CrossRef] [Green Version]
  16. Vaz, R.F.N.; Nasser, L.; Lima, D.D.O. Avaliar para aprender: Um ato de insubordinação criativa. Revista@mbienteeducação 2021, 14, 214–243. [Google Scholar] [CrossRef]
  17. Pereira, D. A Avaliação das Aprendizagens no Ensino Superior na Perspetiva dos Estudantes: Um estudo exploratório. Master’s Thesis, Universidade do Minho, Braga, Portugal, 2011. [Google Scholar]
  18. Pereira, D.; Flores, M.A.; Niklasson, L. Assessment revisited: A review of research in Assessment and Evaluation in Higher Education. Assess. Eval. High. Educ. 2016, 41, 1008–1032. [Google Scholar] [CrossRef] [Green Version]
  19. Donovan, R.; Larson, B.; Stechschulte, D.; Taft, M. Building Quality Assessment; Saint Xavier University: Chicago, IL, USA, 2002.
  20. Van de Watering, G.; Gijbels, D.; Dochy, F.; van der Rijt, J. Students’ assessment preferences, perceptions of assessment and their relationships to study results. High. Educ. 2008, 56, 645–658. [Google Scholar] [CrossRef] [Green Version]
  21. Birenbaum, M. Evaluating the assessment: Sources of evidence for quality assurance. Stud. Educ. Eval. 2007, 33, 29–49. [Google Scholar] [CrossRef]
  22. Dierick, S.; Dochy, F. New lines in edumetrics: New forms of assessment lead to new assessment criteria. Stud. Educ. Eval. 2001, 27, 307–329. [Google Scholar] [CrossRef]
  23. Tinoca, L.; Pereira, A.; Oliveira, I. A conceptual framework for e-assessment in higher education: Authenticity, consistency, transparency, and practicability. In Handbook of Research on Transnational Higher Education; IGI Global: Hershey, PA, USA, 2013; Volume 2, pp. 652–673. [Google Scholar] [CrossRef]
  24. Amante, L.; Oliveira, I.R.; Gomes, M.J. E-Assessment in Portuguese Higher Education; IGI Global: Hershey, PA, USA, 2018; pp. 312–333. [Google Scholar] [CrossRef] [Green Version]
  25. Amante, L.; Oliveira, I.; Pereira, A. Cultura da Avaliação e Contextos digitais da aprendizagem: O modelo PrACT. Culture of Evaluation and digital contexts of learning: The model PrACT. Cultura de la evaluación y contextos digitales de aprendizaje: El modelo PrACT. Rev. Docência E Cibercultura 2017, 1, 135–150. [Google Scholar] [CrossRef] [Green Version]
  26. Souza, E.; Amante, L. A autoavaliação e a avaliação entre pares: Estudo piloto numa Unidade Curricular do 2o Ciclo do ensino superior em Portugal. RE@D Rev. Educ. A Distância Elearning 2021, 4, 97–115. [Google Scholar]
  27. Ferrarini, R.; Amante, L.; Torres, P.L. Avaliações alternativas em ambiente digital: Em busca de um novo modelo teório-prático. Educ. Cult. Contemp. 2019, 16, 190–217. [Google Scholar] [CrossRef]
  28. Biggs, J.B. Teaching for Quality Learning at University; SRHE and Open University Press: London, UK, 2003; Available online: https://www.researchgate.net/publication/215915395 (accessed on 28 November 2022).
  29. Gibbs, G. Using assessment to support student learning. In Assessment Matters in Higher Education; Leeds Met Press: Leeds, UK, 1999; pp. 41–53. [Google Scholar]
  30. Pereira, A.; Oliveira, I.; Tinoca, L. A Cultura de Avaliação: Que dimensões? In I Encontro Internacional TIC e Educação; Instituto de Educação da Universidade de Lisboa: Lisboa, Portugal, 2010; pp. 127–133. [Google Scholar]
  31. Amante, L.; Oliveira, I.; Gomes, M.J. Avaliação Digital nas Universidades Públicas Portuguesas: Perspetivas de Professores e de Estudantes. In EUTIC2014: The Role of ICT in the Design of Informational and Cognitive Processes; Lisboa, Portugal, 2014.
  32. Pereira, A.; Oliveira, I.; Pinto, M.C.; Amante, L. Desafios da Avaliação Digital no Ensino Superior; Universidade Aberta-LE@D: Lisboa, Portugal, 2015; pp. 1–121. [Google Scholar]
  33. Amendola, D.; Miceli, C. Online peer assessment to improve students’ learning outcomes and soft skills. Ital. J. Educ. Technol. 2018, 26, 71–84. [Google Scholar] [CrossRef]
  34. Wang, J.; Gao, R.; Guo, X.; Liu, J. Factors associated with students’ attitude change in online peer assessment–a mixed methods study in a graduate-level course. Assess. Eval. High. Educ. 2020, 45, 714–727. [Google Scholar] [CrossRef]
  35. Liu, N.F.; Carless, D. Peer feedback: The learning element of peer assessment. Teach. High. Educ. 2006, 11, 279–290. [Google Scholar] [CrossRef] [Green Version]
  36. Penn, P.; Wells, I. Enhancing Feedback and Feed-Forward via Integrated Virtual Learning Environment Based Evaluation and Support. 2017. Available online: https://moodle.uel (accessed on 24 November 2022).
  37. Kobayashi, M. Does anonymity matter? Examining quality of online peer assessment and students’ attitudes. Australas. J. Educ. Technol. 2020, 36, 98–110. [Google Scholar] [CrossRef]
  38. Lin, C.J. An online peer assessment approach to supporting mind-mapping flipped learning activities for college English writing courses. J. Comput. Educ. 2019, 6, 385–415. [Google Scholar] [CrossRef]
  39. Hoang, L.P.; Le, H.T.; Van Tran, H.; Phan, T.C.; Vo, D.M.; Le, P.A.; Nguyen, D.T.; Pong-Inwong, C. Does evaluating peer assessment accuracy and taking it into account in calculating assessor’s final score enhance online peer assessment quality? Educ. Inf. Technol. 2022, 27, 4007–4035. [Google Scholar] [CrossRef]
  40. Liu, X.; Li, L.; Zhang, Z. Small group discussion as a key component in online assessment training for enhanced student learning in web-based peer assessment. Assess. Eval. High. Educ. 2018, 43, 207–222. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Pi, Z.; Chen, L.; Zhang, X.; Yang, J. Online peer assessment improves learners’ creativity: Not only learners’ roles as an assessor or assessee, but also their behavioral sequence matter. Think. Skills Creat. 2021, 42, 100950. [Google Scholar] [CrossRef]
  42. Formanek, M.; Wenger, M.C.; Buxner, S.R.; Impey, C.D.; Sonam, T. Insights about large-scale online peer assessment from an analysis of an astronomy MOOC. Comput. Educ. 2017, 113, 243–262. [Google Scholar] [CrossRef]
  43. Zhan, Y. What matters in design? Cultivating undergraduates’ critical thinking through online peer assessment in a Confucian heritage context. Assess. Eval. High. Educ. 2020, 46, 615–630. [Google Scholar] [CrossRef]
  44. Kumar, K.; Sharma, B.N.; Nusair, S.; Khan, G.J. Anonymous online peer assessment in an undergraduate course: An analysis of Students’ perceptions and attitudes in the South Pacific. In Proceedings of the 2019 IEEE International Conference on Engineering, Technology and Education (TALE), Yogyakarta, Indonesia, 10–13 December 2019. [Google Scholar]
  45. Naveh, G.; Bykhovsky, D. Online Peer Assessment in Undergraduate Electrical Engineering Course. IEEE Trans. Educ. 2021, 64, 58–65. [Google Scholar] [CrossRef]
  46. Yang, R. Student responses to online peer assessment in Tertiary English Language Classrooms. Electron. J. Engl. A Second. Lang. 2019, 23, 1–24. [Google Scholar]
  47. Phillips, F. The power of giving feedback: Outcomes from implementing an online peer assessment system. Issues Account. Educ. 2016, 31, 1–15. [Google Scholar] [CrossRef]
  48. Rosa, S.S.; Coutinho, C.P.; Flores, M.A. Online Peer Assessment: Method and Digital Technologies. Procedia Soc. Behav. Sci. 2016, 228, 418–423. [Google Scholar] [CrossRef] [Green Version]
  49. Chew, E.; Snee, H.; Price, T. Enhancing international postgraduates’ learning experience with online peer assessment and feedback innovation. Innov. Educ. Teach. Int. 2016, 53, 247–259. [Google Scholar] [CrossRef]
  50. Wang, X.-M.; Hwang, G.-J.; Liang, Z.-Y.; Wang, H.-Y. Enhancing Students’ Computer Programming Performances, Critical Thinking Awareness and Attitudes towards Programming: An Online Peer Assessment Attempt. J. Educ. Technol. Soc. 2022, 20, 58–68. Available online: https://eds.p.ebscohost.com/eds/command/detail?vid=13&sid=d77d16b9-d122-43ee-a49b-052c66ae9638%40redis&bdata=JkF1dGhUeXBlPWlwLHNoaWImbGFuZz1wdC1wdCZzaXRlPWVkcy1saXZlJnNjb3BlPXNpdGU%3d#AN=125829901&db=a9h (accessed on 14 March 2022).
  51. Panadero, E. Is It Safe? Social, Interpersonal, and Human Effects of Peer Assessment: A Review and Future Directions. 2016. Available online: https://www.researchgate.net/publication/328476401 (accessed on 22 November 2022).
  52. Nicol, D.; Thomson, A.; Breslin, C. Rethinking feedback practices in higher education: A peer review perspective. Assess. Eval. High. Educ. 2014, 39, 102–122. [Google Scholar] [CrossRef]
  53. Double, K.S.; McGrane, J.A.; Hopfenbeck, T.N. The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educ. Psychol. Rev. 2020, 32, 481–509. [Google Scholar] [CrossRef] [Green Version]
  54. Jennings, D. An Introduction to Self & Peer Assessment; UCD Teaching and Learning: Dublin, Ireland, 2013; Available online: www.ucdoer.ie (accessed on 24 November 2022).
  55. Bürgermeister, A.; Glogger-Frey, I.; Saalbach, H. Supporting Peer Feedback on Learning Strategies: Effects on Self-Efficacy and Feedback Quality. Psychol. Learn. Teach. 2021, 20, 383–404. [Google Scholar] [CrossRef]
  56. Fukuda, T.; Lander, B.W.; Pope, C.J. Formative Assessment for Learning How to Learn: Exploring University Student Learning Experiences. RELC J. 2022, 53, 118–133. [Google Scholar] [CrossRef]
  57. Ibarra-Sáiz, M.S.; Rodríguez-Gómez, G.; Boud, D. Developing student competence through peer assessment: The role of feedback, self-regulation and evaluative judgement. High. Educ. 2020, 80, 137–156. [Google Scholar] [CrossRef] [Green Version]
  58. Zheng, L.; Cui, P.; Li, X.; Huang, R. Synchronous discussion between assessors and assessees in web-based peer assessment: Impact on writing performance, feedback quality, meta-cognitive awareness and self-efficacy. Assess. Eval. High. Educ. 2018, 43, 500–514. [Google Scholar] [CrossRef]
  59. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef] [Green Version]
  60. Pérez, M.C.I.; Vidal-Puga, J.; Juste, M.R.P. The role of self and peer assessment in Higher Education. Stud. High. Educ. 2022, 47, 683–692. [Google Scholar] [CrossRef]
  61. Saiz, M.S.I.; Gómez, G.R. Modalidades participativas de evaluación: Un análisis de la percepción del profesorado y de los estudiantes universitarios. Rev. Investig. Educ. 2014, 32, 339–362. [Google Scholar] [CrossRef]
  62. Panadero, E.; Brown, G.T.L. Teachers’ reasons for using peer assessment: Positive experience predicts use. Eur. J. Psychol. Educ. 2017, 32, 133–156. [Google Scholar] [CrossRef] [Green Version]
  63. Amante, L.; Oliveira, I. Modelo Pedagógico Virtual|Avaliação e Feedback. Desafios Atuais; Universidade Aberta: Lisboa, Portugal, 2019. [Google Scholar]
  64. Looney, J. Digital Formative Assessment: A Review of the Literature; European Schoolnet: Brussels, Belgium, 2019; Available online: http://www.eun.org/documents/411753/817341/Assess%40Learning+Literature+Review/be02d527-8c2f-45e3-9f75-2c5cd596261d (accessed on 23 November 2022).
  65. Santos, J. O Contributo da Tecnologias Digitais na Transparência da Avaliação Digital no Contexto de Educação Superior a Distância; Universidade Aberta: Lisboa, Portugal, 2018. [Google Scholar]
  66. Dochy, F.; Segers, M.; Sluijsmans, D. The Use of Self-, Peer and Co-assessment in Higher Education: A review. Stud. High. Educ. 1999, 24, 331–350. [Google Scholar] [CrossRef] [Green Version]
  67. Zheng, L.; Zhang, X.; Cui, P. The role of technology-facilitated peer assessment and supporting strategies: A meta-analysis. Assess. Eval. High. Educ. 2020, 45, 372–386. [Google Scholar] [CrossRef]
  68. Rosa, S.; Coutinho, C.; Flores, M. Online Peer Assessment no ensino superior: Uma revisão sistemática da literatura em práticas educacionais. Avaliação Rev. Avaliação Educ. Super. 2017, 22, 55–83. [Google Scholar] [CrossRef] [Green Version]
Figure 1. E-Assessment Framework—PrACT Model. Adapted from [31] (p. 4).
Figure 2. Online peer assessment strategy.
Figure 3. Data collection process.
Figure 4. Practicability dimension: survey results.
Figure 5. Authenticity dimension: survey results.
Figure 6. Consistency dimension: survey results.
Figure 7. Transparency dimension: survey results.
Figure 8. Student perceptions: online peer assessment design based on PrACT model.
Table 1. Thematic categorization of student responses.

Theme: Benefits of OPA
Sub-theme: Improvement of critical thinking skills.
Student 6: I really liked this experience because it helped me develop my critical thinking skills.
Student 9: It was a very rewarding experience, as I had never participated in an identical activity before, which made me develop my critical thinking.
Sub-theme: Improvement of Information and Communication Technology (ICT) skills.
Student 1: It allowed me to improve my ICT skills.
Student 15: (…) for a better handling of ICT in the socio-educational context.
Sub-theme: General improvement.
Student 13: It was quite different, a super enriching experience. I really liked this type of format.
Student 12: My experience was quite enriching where I acquired several learnings.

Theme: Constraints of OPA
Sub-theme: Complexity of the peer assessment tasks, influenced by lack of experience.
Student 3: Yes, because I had to carry out this process in a more time-consuming and thoughtful way, as it is a very complex task.
Student 2: Yes, it influences because we have a sense of what it means to evaluate and the implications it has.
