Article

Faculty and Students’ Perceptions about Assessment in Blended Learning during Pandemics: The Case of the University of Barcelona

1 Facultat de Psicologia, Universitat de Barcelona, 08035 Barcelona, Spain
2 Facultat d’Educació, Universitat de Barcelona, 08035 Barcelona, Spain
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(15), 6596; https://doi.org/10.3390/su16156596
Submission received: 12 May 2024 / Revised: 26 July 2024 / Accepted: 31 July 2024 / Published: 1 August 2024
(This article belongs to the Section Sustainable Education and Approaches)

Abstract

Blended teaching and learning modalities are well established in higher education, particularly after the period of remote learning imposed by the pandemic. This study aims to explore faculty and students’ perceptions of potentially empowering assessment practices in blended teaching and learning environments during remote teaching and learning. Two samples of 129 university educators and 265 students of the University of Barcelona responded to a survey. The most salient agreement between faculty and students concerns the accreditation purpose, and thus the summative function, of assessment and a lack of student participation in assessment practices. At the same time, the results show some disagreements regarding formative assessment purposes and feedback. Our results offer implications for future blended teaching and learning designs, for training students and faculty in the pursuit of assessment literacy, and for institutional policies to ensure the sustainability of formative assessment practices.

1. Introduction

The COVID-19 pandemic generated a new hybrid scenario in which face-to-face and online teaching (synchronous and asynchronous) eventually blended. For this new instructional context, educators must consider purely technical aspects, such as the attention span in front of screens and its implications for managing learning time [1] or the differential characteristics of students with difficulties in accessing digital information [2]. However, these so-called technical conditions alone are not enough to generate a high-quality educational experience. Educators must also consider pedagogical, or rather techno-pedagogical, aspects, such as the distribution of knowledge sources and the diversity of roles to be exercised in the virtual classroom, the potential for multidirectional communicative interaction, and the very agency of the learning process, mediated by digital tools [3,4]. These specific pedagogical decisions must extend to assessment practices [5]. Therefore, knowing how assessment worked in the exceptional pandemic-affected terms can help avoid perpetuating ineffective practices and identify training needs among higher education faculty [6]. Furthermore, in this scenario, it is relevant to study the perceptions of both instructors and students regarding the frequency and quality of certain assessment practices during the first confinement term, in terms of their perceived purposes and features.
In any educational system, two primary purposes or functions of assessment are recognized: on the one hand, the regulation of teaching and learning processes, also known as formative assessment; on the other, the accreditation of learning results, or summative assessment. While many studies emphasize the regulatory function and highlight the relevance of assessment for learning [7], minimizing the value of summative purposes [8], other authors point out the complementarity of both functions and their indissolubility in any educational system, precisely because of the social function that academic education fulfills [9]. Previous research also shows that the tendency towards the summative function or purpose is more present at higher educational levels, where education adopts a final character [10], which makes teacher training essential to enable instructors to develop rich and balanced assessment practices [11,12]. Pastore and Andrade define teacher assessment literacy as the “interrelated set of knowledge, skills, and dispositions that a teacher can use to design and implement a coherent approach to assessment within the classroom context and the school system. An assessment literate teacher understands and differentiates the aims of assessment and articulates a sound, cyclical process of collection, interpretation, use of evidence and communication of feedback” (pp. 134–135) [13]. Other authors [14] proposed a hierarchical model for teacher assessment literacy, including six components: knowledge base; teacher conceptions of assessment; institutional and socio-cultural contexts; teacher assessment literacy in practice; teacher learning; and teacher identity in its (re)construction as an assessor. Assessment literacy still needs to be improved, particularly among university faculty [15], whose specific pedagogical training generally depends on the voluntary, individual uptake of in-service training and professional development programs [6,16].
The characteristics of good assessment practices have been discussed at length in the previous literature. There is now a broad consensus [17,18,19] that assessment should encompass the following:
  • Be focused on supporting the learning process.
  • Be aligned with didactic goals.
  • Take place throughout the learning process.
  • Follow clear criteria.
  • Progressively foster students’ responsibility in their learning and assessment process by developing evaluative judgment.
These characteristics are common in face-to-face and hybrid contexts, but some other features must be added for the virtual context. The Joint Information Systems Committee (JISC) [20] advocates that assessment practices should encompass the following:
  • Be accessible and designed under universal learning design criteria.
  • Be easily automated so educators’ workload—especially clerical tasks—can be minimized.
  • Be safe, respecting students’ rights and attentive to online risks.
Online assessment, hence, presents some differential features [21] that appeal not only to technical or technological issues of security and accessibility [22] but also to the instructional design itself and the new possibilities of interaction with and among students [23]. In that sense, some authors [24] propose productive rather than reproductive online assessment tasks, where students must elaborate, compare, and revise their productions in a cyclic way to strengthen evaluative judgment [25]. Indeed, virtual environments may constitute a privileged scenario to promote assessment for learning [23,26].
Formative assessment, in the pursuit of a steady improvement in teaching and learning processes, seeks to support these processes so that students can benefit from assessment and develop their abilities to become effective evaluators of their own and others’ work, which is an essential competence in today’s professional world [27]. It is the so-called assessment for learning (AFL), which we understand, together with Klenowski, as a “process that is part of the daily practice of students, teachers and peers by which they seek, reflect on and respond to information (derived) from dialogue, demonstration and observation in a way that enhances continuous learning” (p. 264) [28]. The concept was initially put forward by Sadler [29] and extensively developed later on by Hawe and Dixon [30], among others. Assessment for learning is associated with participatory processes [31], for example, through peer assessment strategies [32,33] and through self-assessment practices [34].
Involving students explicitly in the assessment process implies helping them develop their evaluative competence—and assessment literacy—by fostering evaluative judgment. Previous works define evaluative judgment as the ability to make decisions about the quality of one’s own work and that of others, which involves being able to spot the excellence of a production or a process, as well as applying that understanding in the assessment of one’s own work and others’ work [19]. In addition, feedback appears as another element with an impact on learning, focused on processes and with a self-regulatory character [35,36,37].
The need for formative feedback may have been even more significant during the pandemic, as online instructional designs need carefully planned feedback to maintain learner engagement [38]. This requires a specific evaluative literacy [39,40] and awareness of the formative potential of assessment processes.
We propose to study the purposes and characteristics that both faculty and students attribute to the assessment practices implemented in the context of confined terms during the pandemic, following these research questions: Do students and faculty share similar evaluations of the experienced remote assessment practices? To what extent did students participate appropriately in these assessment practices? Which personal and contextual variables most affect the participants’ evaluations?
In this project, we conducted descriptive and exploratory research to explore instructors and students’ perceptions during the two academic terms affected by COVID-19 at the University of Barcelona. Data were collected using two different surveys for students and instructors. The specific research goals (RGs) were as follows:
  • RG1: Explore faculty and students’ perceptions of the purposes of assessment practices carried out in blended teaching environments during the academic terms affected by total or partial lockdown.
  • RG2: Explore faculty and students’ perceptions of the characteristics of assessment practices carried out in blended teaching environments during the academic terms affected by total or partial lockdown.
  • RG3: Compare students’ and faculty’s perspectives, specifically on those assessment practices associated with a formative purpose and an increase in students’ agency.
  • RG4: Explore likely connections between the two collectives’ perceptions by considering the following variables: general satisfaction with the experience of remote teaching and learning, gender, previous experience in online teaching and learning, academic course, and teaching experience.
The conceptual framework guiding this study is grounded in the dual purposes of assessment: formative (regulating teaching and learning processes) and summative (accrediting learning outcomes). While the literature often highlights the importance of formative assessment, the complementarity and necessity of both functions in educational systems are acknowledged. Teacher assessment literacy is crucial for developing effective assessment practices. Also, promoting active participation of both faculty and students in high-quality assessment practices is essential for sustainable education. Engaging in these practices ensures continuous improvement and fosters a culture of lifelong learning, making the educational ecosystem more resilient and adaptable. By investigating these aspects, this study aims to contribute to sustainable education by promoting assessment literacy among faculty and students, thereby fostering more balanced and effective assessment practices in blended learning environments.

2. Method

We conducted descriptive, exploratory survey research. The research team, composed of an interdisciplinary group of instructors, invited all faculty and students from the Schools of Law, Pharmacy, Mathematics, History, Information and Audiovisual Communication, Psychology, and Education to participate.

2.1. Participants

A total of 394 individuals responded to the invitation: 129 instructors and 265 students. Only 28% of the teaching staff declared having previous experience in online teaching, while for the students, the percentage with previous e-learning experience dropped to 20%. Table 1, Table 2, Table 3 and Table 4 further describe the participants.

2.2. Instruments and Data Collection

Data were collected online in March 2021 through surveys for students and instructors. We disseminated the surveys to potential participants via institutional communication channels. The responses collected refer to the period from the second semester of the academic year 2019–2020 to the second semester of the academic year 2020–2021 (still ongoing during the data collection). For the construction and application of these surveys, we followed all the procedures of responsible research and the institutional Code of Good Research Practices. All participants gave informed consent, and data were treated anonymously, stored in institutional facilities, and duly returned to participants.
The first section of each survey included the informed consent and demographic data. Next, 13 items were presented to investigate the perceptions of students and instructors on the following aspects (Table 5).
  • The primary purposes of assessment practices in their courses. According to the literature, the survey addressed four purposes (P1 to P4).
  • Quality characteristics of the assessment practices in their courses. We considered nine characteristics according to the previous theoretical review (C1 to C9).
All items were rated on a five-point Likert scale. Participants could answer on a scale from 1 to 5, with 1 meaning “Do not agree at all/Not at all/Never” and 5 meaning “Strongly agree/Very much/Very often”, with the additional option of “No answer/Do not know/Not applicable”.

2.3. Data Analysis and Procedure

For the data analysis, we first explored the data descriptively to examine their distribution and behavior. Subsequently, to address RG1, RG2, and RG3, we performed a non-parametric comparison for unpaired ordinal data (Mann–Whitney U-test) to determine whether there were significant differences between the students’ and teaching staff’s perceptions and, eventually, the effect size of the differences found. To address RG4, we carried out Chi-square contrasts.
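As an illustration of this analysis, the sketch below shows how such group contrasts can be computed in Python with SciPy. It is a minimal sketch, not the study’s actual script: it assumes each item’s 1–5 Likert responses are available as one array per group with “No answer” responses removed, and the ratings shown are hypothetical.

```python
# Minimal sketch of the Mann-Whitney U contrast with a Cohen's d effect size.
# The rating arrays below are hypothetical and stand in for one survey item.
import numpy as np
from scipy.stats import mannwhitneyu


def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation (one common convention)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)


faculty = np.array([4, 5, 3, 4, 5, 4, 3, 5, 4, 4])   # hypothetical instructor ratings
students = np.array([3, 2, 4, 1, 3, 5, 2, 3, 4, 2])  # hypothetical student ratings

u_stat, p_value = mannwhitneyu(faculty, students, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.5f}, Cohen's d = {cohens_d(faculty, students):.3f}")
```

Effect sizes can be computed in several ways; the pooled-standard-deviation version of Cohen’s d shown here is one common convention and may differ slightly from the exact procedure used by the authors.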

3. Results

We have organized this Results Section into separate subsections dedicated to each research goal. As a first general result, we highlight higher global satisfaction on the faculty’s side compared to the students’: the faculty’s mean value was M = 3.56 (SD = 0.95), while that of the students was M = 2.99 (SD = 1.09). We indeed identified a significant difference between groups (U = 12,088.5, p < 0.00001), with a moderate effect size (d = 0.557), which confirms the differential perspective of instructors and students in this emergency teaching and learning experience, thus supporting the need for the subsequent analyses.

3.1. Assessment Purposes (RG1)—Differences Found between Collectives (RG3)

As for the four assessment purposes included in the survey (see Table 6), the two items of the survey relating to the summative purpose of assessment received higher scores—though different—from both groups: P2 (identify the level of learning performance) and P4 (certify learning). In contrast, the purposes linked to formative assessment (P1 identify students’ needs and P3 orient the learning process) show a lower mean for both groups. The Mann–Whitney U-test showed significant differences in the four items between the perception of the faculty and that of students, with consistently higher values among instructors. Effect sizes varied from moderate to high.
Figure 1 shows these results graphically. We can observe that the instructors’ perceptions were consistently higher and that the horizontal axis (summative purposes) predominates over the vertical axis (formative purposes). Although there was a significant difference between the participant groups for all the assessment functions, the effect size was only moderate for the formative function of needs identification, which also showed the lowest mean for both groups of participants. This coherently points to a lower presence of this important formative function of assessment, while the other three functions were much more present in both faculty’s and students’ perceptions.

3.2. Assessment Features (RG2)—Differences Found between Collectives (RG3)

The results refer to the dimensions and categories presented in Table 7 and show that, in line with their professional responsibility, instructors perceive specific characteristics of good assessment practices more frequently than students, since their declared perception was systematically higher. Students’ evaluation is below three points for all items but one; in other words, their perception of eight out of nine assessment features is rather negative.
For both groups, the assessment characteristics with the highest (C2) and lowest (C4) perceived frequency coincide. However, for students, the item with the highest reported frequency is the only one with a value of barely over three points (3.13). Meanwhile, the instructors’ perceptions of their own pedagogical coherence are much higher, hovering around four points in five out of nine items. It is also noteworthy that the four remaining items (C3, C4, C5, and C6), for which the faculty’s declared perception was below a mean of three points, refer precisely to the assessment characteristics most focused on students’ agency in assessment. These four items also present the lowest effect sizes, contrasting with strong effect sizes for the differences between participants on all the other items.
Figure 2 depicts the distribution of results and allows us to visualize the coincidences and divergences between the responses of both groups.

3.3. Incidence of Mediating Variables (RG4)

We considered several variables that could affect the perceptions of both collectives regarding the assessment practices. In the case of instructors, we asked them about the following:
  • General global satisfaction with the remote teaching experience;
  • Gender;
  • Previous experience in online teaching;
  • The course with main teaching duties in those semesters (first to fourth year of Bachelor’s degree);
  • Years of teaching experience (up to 10 years, 11–20 years, 21–30 years, more than 30 years).
In the case of students, we asked them about the following:
  • General global satisfaction with the remote learning experience;
  • Gender;
  • Previous experience in online teaching;
  • The course enrolled (first to fourth year of Bachelor’s degree).
The following subsections group results from both participant samples, referring first to variables shared by students and instructors and then to instructors’ teaching experience. We present only those results for which significant differences could be identified, together with at least a moderate effect size.

3.3.1. Students’ and Faculty’s Global Satisfaction

First of all, regarding students (see Table 8), all aspects explored through the survey showed a connection between students’ satisfaction (asked beforehand) and their perception of both assessment purposes and features of the assessment practices during remote teaching, except for the sixth characteristic, referring to peer assessment, and the coherence between assessment activities and course goals. The more satisfied the students declared themselves, the more sensitive they proved to be towards assessment practices. These results are strongly significant in all cases, with moderate effect sizes particularly for the four assessment purposes and the first characteristics of assessment practices, notably C1, dealing with the presence of complex elaborative assessment tasks.
Faculty, on their side, showed some connection between their general satisfaction with remote teaching and the recognition of the assessment purpose of identifying students’ needs, and there was also a higher perception of peer assessment practices, usable feedback, and the integration of digital tools in assessment (C6, C7, and C9) in all cases with a moderate effect size, as shown in Table 9.

3.3.2. Students’ and Faculty’s Gender

We searched for differences in participants’ responses in connection with their gender. In this case, we found no differences among instructors and, for students, only weak to moderate connections for the assessment characteristics referring to self-assessment and peer assessment (C5 and C6), with a slightly stronger connection (medium effect size) for peer assessment, as shown in Table 10. In particular, women appeared to be more sensitive to these practices and evaluated them higher.

3.3.3. Faculty’s Previous Experience with Online Teaching and Learning

As indicated in a previous section, both groups—faculty and students—had scarce experience in online teaching and learning prior to the pandemic (under 30% in both cases), and they are similar in this respect. This is understandable, given that the University of Barcelona is a traditional face-to-face institution where online tools are considered a complement rather than a requirement.
In searching for connections with prior online experience, we found no relevant result for faculty: there was no likely connection between previous online experience, or its lack, and the perception of assessment purposes and features in the remote teaching semesters. For students, we found only a weakly significant result, with a minimal effect size, regarding those without previous online experience with respect to the formative purpose of identifying students’ needs (P1) (χ2 = 9.688 (df = 4), n = 265, p = 0.0460 *, phi = 0.191, gamma = 0.112).
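For the RG4 contrasts, a Chi-square test on a contingency table of a personal variable against the Likert ratings, together with phi and Goodman–Kruskal gamma, is one plausible way to obtain statistics of the kind reported above. The sketch below is illustrative only: the counts are invented, and phi is derived with the common convention phi = sqrt(χ²/n), which may differ from the authors’ exact procedure.

```python
# Minimal sketch of a Chi-square contrast with phi and Goodman-Kruskal gamma.
# Hypothetical table: rows = no previous online experience / previous experience,
# columns = Likert ratings 1..5 for one assessment purpose item.
import numpy as np
from scipy.stats import chi2_contingency


def goodman_kruskal_gamma(table):
    """Gamma for two ordinal variables, computed from their contingency table."""
    concordant = discordant = 0
    rows, cols = table.shape
    for i in range(rows):
        for j in range(cols):
            concordant += table[i, j] * table[i + 1:, j + 1:].sum()
            discordant += table[i, j] * table[i + 1:, :j].sum()
    return (concordant - discordant) / (concordant + discordant)


table = np.array([[30, 45, 60, 50, 27],
                  [ 5, 10, 15, 14,  9]])  # invented counts for illustration

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
phi = np.sqrt(chi2 / n)  # effect-size convention: phi = sqrt(chi2 / n)
print(f"chi2 = {chi2:.3f} (df = {dof}), n = {n}, p = {p:.4f}, "
      f"phi = {phi:.3f}, gamma = {goodman_kruskal_gamma(table):.3f}")
```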

3.3.4. Students’ Enrolled Course during the Study and Faculty’s Main Course of Teaching

Concerning the course in which students were enrolled during the data collection process, very weak differences were found regarding the perception of purposes of assessment, both summative (P2) and formative (P3). Stronger differences, however, still of minimal effect size, referred to two features of the assessment, both related to students’ opportunity to reflect upon learning (C5 and C8). In this case, students of lower courses (first and second year) were more positive in their perception of these practices (see Table 11).
Regarding instructors, only weak differences were identified with respect to assessment purposes P1 and P2, with a minimal effect size. As both collectives showed internal differences regarding the purpose of identifying the level of learning performance (P2), we searched for intergroup differences, presented in Table 12. Following the results, less experienced students (first and second year) coincided with instructors in their higher perception of this assessment purpose, whereas in higher courses (third and fourth year), we found more differences among participants, with students less sensitive to and satisfied with this aspect and disagreeing more with faculty. In other words, students with higher education experience prior to the pandemic seemed to be more critical regarding the assessment of performance levels during remote teaching than students without previous higher education experience.

3.3.5. Faculty’s Teaching Experience

Table 13 shows the results of the connection between faculty’s teaching experience and the perception of assessment purposes and practice characteristics. Although significance was not very strong in any of the cases, effect sizes were close to moderate. Faculty with more teaching experience appeared to be more negative in their perception of the purpose of determining students’ performance level than less experienced faculty. The less experienced faculty also proved more positive towards assessment practices where students could assume an active role in defining assessment criteria (C4) and towards assessment practices with a formative use of feedback (C7, C8, and C9).

4. Discussion

In this paper, we share the results of the perceptions of a sample of instructors and students of the University of Barcelona on the assessment practices carried out during the period of blended education affected by the pandemic. In a certain way, these perceptions also refer to both collectives’ conceptions of assessment.
Firstly, with respect to RG1 and RG2 (to explore faculty and students’ perceptions of the purposes and characteristics of the assessment practices carried out), we must highlight that it would have been desirable for the formative purposes of assessment (P1 and P3) to emerge as the predominant perceptions [7]. However, in our results, the participants perceived summative rather than formative assessment purposes. Both instructors and students report a similar pattern of perceptions, which reinforces the consistency and validity of these results [9].
Teachers highly value purposes P2—identify the level of learning performance (summative) and P4—certify learning (summative). This points to an assessment culture closely linked to a summative vision. However, the diagnostic purpose of assessment deserves special attention. The lack of attribution of a diagnostic purpose to assessment (both on the part of the students and on the part of the faculty) is certainly alarming, since the adjustment of assessment procedures to particular students and the possibility of personalizing certain proposals are being lost. Higher education has, in itself, a terminal, certification-oriented nature. However, research indicates that the diagnostic function of assessment can and should be performed throughout the educational experience to adjust teaching practices and resources to student characteristics, adapt programming and curricular materials, and eventually offer educational support specific to those who need it, thereby accomplishing formative assessment. Formative assessment is valid at any time in the teaching and learning process [11,41], but, in this case, it was not valued. This result is also consistent with previous studies at preuniversity educational levels that locate a predominance of summative and accreditive purposes in conceptions and practices at the end of compulsory education [10].
This seems far from the advocated active role of students in the assessment process [32,42], in which they make sense of instructors’ feedback and use it efficiently for further learning, and it reveals the need for sustainable assessment practices. Furthermore, as reported by some previous studies [7], these imbalances toward summative purposes would also not nurture or support evaluative judgment [19,43].
Regarding the characteristics of the assessment processes, as evaluated by the participants in our study, both students and faculty coincide, reinforcing what is indicated in the literature [9], in that they value the second characteristic, C2 (assessment activities are consistent with course goals and pursued competencies), most and the fourth, C4 (students are invited/expected to assume an active role in defining and understanding assessment criteria), least. Constructive alignment is valued [17]. However, it is alarming that neither students nor instructors consider an active role in understanding and establishing assessment criteria to be important. To strengthen learning self-regulation processes [30,34], this first phase of appropriating and participating in the criteria is decisive.
Secondly, regarding RG3 (to compare students’ and faculty’s perspectives), there is a notable difference in satisfaction with the experience, which is significantly greater for the teaching staff than for the students.
Regarding purposes and characteristics, several points deserve comment. Activities that require creative elaboration or production by students, the coherence between assessment practices and degree goals, generic competencies, and course objectives, and the opportunity to reflect on and react to feedback were all characteristics of assessment practices reported more frequently and strongly by faculty than by students.
Students generally value every function of assessment less than instructors do, which suggests that deeper assessment and feedback literacy is required among students [32]. In summary, instructors’ discourse and practices seem less aligned than expected, since students do not confirm their perceptions [18,40].
However, regarding the characteristics of assessment practices, there are some noteworthy similarities between participants. Once again, students place little value on any of the characteristics of this experience with emergency online assessment. They are only closer to faculty in relation to characteristic C5 (students may self-assess). This perception of their chance to make judgments about the quality of their own processes and products could be the starting point for the development of self-assessment processes, with adequate training [34].
Finally, in relation to RG4 (to analyze assessment perceptions considering satisfaction, academic course, gender, or previous experience in online teaching and learning), regarding the students and assessment purposes, the results of our study show that first- and second-year students were somewhat more positive than older students in seeing assessment as an opportunity to reflect on learning. This is also consistent with other recent studies [38] and underscores the importance of articulating first-year experiences in higher education to consolidate this vision and maintain it throughout the curriculum. Nevertheless, it is also this subset of less experienced students who reveal more sensitivity to the certifying purpose of assessment, as they have recently come from secondary education, where grades continue to have great importance, especially because of the role they play in access to university.
Also, the variable “level of satisfaction” seems to correlate with students’ sensitivity, since the more satisfied the students declared themselves, the more positive they were in perceiving assessment practices. Specifically, despite the generally lower perception of formative assessment, our results also point to a moderate relationship between students’ perception of this purpose and higher levels of satisfaction. Further studies would be required to understand the actual association between both constructs and to be able to make decisions regarding training or institutional assessment policy.
In contrast, students with experience in higher education before the pandemic turned out to be more critical regarding online assessment than students without such experience. Finally, women proved to be more sensitive to peer assessment practices, which corroborates previous studies that attribute to women a more conscious and dedicated role [44].
Regarding instructors, the lack of previous online experience did not appear to influence their responses. The situation was so unexpected and exceptional that everyone made an extraordinary effort to adapt. In this sense, it is worth considering whether these results reflect only the urgent measures taken in response to the exceptional situation of the pandemic, for which almost three quarters of the teaching staff lacked previous experience in blended education contexts [45], or whether they reveal previous deficiencies [46,47,48]. The years of teaching experience, however, showed differences among instructors: the less experienced instructors did not assign as much importance to the certifying purpose as their more experienced colleagues did. This less experienced group was also more inclined towards assessment practices where students could take an active role in defining the assessment criteria (C4), and towards assessment practices with a formative use of feedback (C7, C8, and C9).
The finding that instructors, globally, rarely refer to practices in which students have opportunities to participate actively in the assessment process is worrying. It is likely that their initial training, many years back, shaped their beliefs, leaving them more traditionally oriented. Older faculty might also be more critical of professional development programs and more resistant to change. However, previous studies during the pandemic prevent us from associating age (or gender) with tackling the challenge of using new digital resources [49]. Other studies, in contrast, do point to instructors’ digital competence prior to the global crisis, as well as more general conceptions of teaching and learning, as being at the heart of the challenges encountered during the pandemic [50]. As some authors state [51], faculty assessment literacy, particularly feedback literacy, is at risk unless sustained institutional support from leadership and administrators is ensured. In that sense, fostering institutional actions to improve lifelong learning and lowering barriers to teaching innovation—such as boosting teaching teams or regulating qualification norms [6]—becomes crucial. Previous research has also recently warned of the danger of considering feedback literacy as something purely subjective, linked to the individual profiles of instructors, and has advocated instead for approaching this construct from a more communal and institutional perspective [39].
Thus, not only individual but also team-based teacher training is critical for the development of good assessment practices. The results also show that previous experience with online teaching allowed for better self-reported use of online assessment strategies, so that, without diminishing the value of face-to-face education at most universities, the pandemic and the experience of emergency remote teaching have perhaps given us evidence of a general need to treat online teaching and learning resources as a continuous companion, moving from the extraordinary to the ordinary. Also, in institutional terms, we advocate for the creation of teaching teams that may share and consolidate good assessment practices and collaborate to foster the progressive development of evaluative judgment and self-regulated learning [52].
We presented a first exploratory approach to assessment practices in emergency blended learning in our face-to-face institution. Overall, our results outline assessment practices that are still far from active and formative proposals, with scarce space for student participation in negotiating goals or for reflection on the practices themselves or on assessment criteria, especially in courses approaching the end of the degree program, where a summative and accreditation-oriented perspective dominates. This constitutes a future challenge to support assessment literacy [32,53].
Regarding our results, the lower perception of some characteristics of assessment practices in relation to teaching experience deserves special attention. It alerts us to the need for ongoing professional development for senior faculty. The fading of the diagnostic purpose and of more participatory practices are other results that point to the need for a deeper pedagogical reflection on online assessment proposals [5]. Also, as noted by previous studies, the scarcity of previous experience in online teaching leads to deficits in digital teaching competence [54] and makes it difficult to draw conclusions from the instructors’ sample. However, concerning students, our results point to previous online experience as a differential factor in the higher appreciation of more creative and productive assessment tasks with integrated, reusable feedback. One could state that these assessment features—related to formative assessment—are more salient or accessible to students’ perceptions in online or hybrid educational contexts, where participants’ actions persist over time [23], and this particularity should be valued [55]. For online assessment practices, Forsyth and colleagues [56] suggest four desirable differential features: (a) there should be a diversity of presentation, grading, and feedback forms, catering to participants’ diversity; (b) assessment programs should be flexible and adaptive, fostering innovative uses over the replication of traditional practices; (c) online assessment should lessen faculty’s workload by facilitating automatable tasks, so that instructors can focus on core pedagogical issues and actual formative practices; (d) administrative student profiles should be integrated into the LMS to eventually ease clerical tasks.
In addition, to develop students’ assessment literacy, educators should integrate self-assessment activities that encourage reflection on learning and promote understanding of assessment criteria. Implementing peer assessment practices can help students critically analyze work and provide constructive feedback. Providing timely, specific formative feedback guides students in closing performance gaps and improving their work. Engaging students in defining assessment criteria demystifies assessment processes and fosters ownership and accountability in learning. Leveraging digital tools for ongoing assessment and feedback enhances interactivity and engagement, while offering professional development for instructors equips them with the strategies needed to integrate these practices effectively, thus fostering a culture of continuous feedback and sustainable education.
We encourage future research to explore in more depth the reasons behind these descriptive results, free from pandemic-related constraints. Understanding faculty and students is necessary to generate more fine-tuned, digitally supported educational experiences through iterative designs [57]. The results of this study have crucial implications for future blended educational proposals; finding a way to implement more competence-based assessment supported by technology remains challenging. On the one hand, more teacher training is required to improve assessment literacy [13]. On the other hand, students also need to become aware of their responsibility in the learning and assessment process, to be empowered and gain agency, so that instructors’ pedagogical efforts become more sustainable [32,58,59]. We must thus look to teaching programs that increase the complexity of assessment processes so that students become active participants [31], especially in hybrid or blended designs, to promote self-regulated learning and sustain lifelong learning skills. These decisions, in turn, shall improve the quality of programmatic assessment, allowing for more inclusive, personalized, and coherent assessment proposals [60,61].

5. Conclusions

This paper presents the results of a survey study that aimed to compare and contrast faculty and students’ perceptions of learning assessment practices during the emergency remote learning imposed by the COVID-19 lockdown at one of the leading higher education institutions in Spain. Our findings reveal a predominant emphasis on summative assessment purposes, highlighting the need for a shift towards more formative assessment practices that support continuous learning and development.
This research contributes significantly to the existing body of knowledge by identifying a gap between the perceived and ideal purposes of assessment in higher education, emphasizing the necessity of balancing formative and summative assessment functions to enhance learning outcomes. It also provides valuable insights into the current state of assessment literacy among both faculty and students, underlining the importance of developing assessment competencies to foster self-regulation and lifelong learning skills. Furthermore, this study offers practical guidelines for educators to develop students’ assessment literacy, such as integrating self-assessment, promoting peer assessment, providing formative feedback, involving students in defining assessment criteria, leveraging and normalizing digital tools, and offering professional development for instructors.
Future research should explore the long-term effects of enhanced assessment literacy on student outcomes, investigating how improved assessment competencies influence academic performance, self-regulation, and career readiness. Additionally, further studies are needed to examine the effectiveness of digital tools in supporting sustainable assessment practices, providing insights into innovative strategies for higher education. Comparative research across different disciplines and educational contexts could offer a deeper understanding of how assessment practices and literacy vary, helping to tailor interventions to specific needs and promote best practices universally. Finally, research should focus on the role of institutional support and policy in fostering sustainable assessment practices, understanding how organizational factors influence assessment literacy and practices and informing the development of comprehensive strategies to support educators and students.
Our study, while comprehensive, has some limitations that are important to acknowledge. First, the participant samples are limited and represent only a small percentage of all the faculty and students invited to respond. However, we considered this acceptable in the context of the pandemic: such response rates are common in online data collection designs, and the emergency context posed an added challenge to the call for participation. The low response rate did hinder us from exploring likely differences between disciplinary areas, for example. Also, the results mostly showed moderate to small effect sizes. The second limitation concerns the nature of the data, which are self-reported through surveys rather than observational.
Despite these limitations, we underline two concluding points after presenting and discussing the results. First, there is a need for improvement in assessment practices, as mass access to higher education, new blended modalities, and the rise of artificial intelligence render a traditional assessment model, in which educators are the only feedback providers, obsolete. Second, to address this first need, we must also tackle the development of both faculty’s and students’ assessment literacy. Institutional stakeholders should promote professional development programs to enhance assessment for learning practices from a sustainability point of view. Feedback practices must become sustainable for instructors to engage in them and, in turn, promote students’ agency in the assessment process. It is crucial to emphasize that improvement will not happen without strong institutional commitment.

Author Contributions

Conceptualization, A.R., E.C. and L.L.; methodology, E.C.; formal analysis, L.L.; investigation, A.R., E.C. and L.L.; resources, A.R., E.C. and L.L.; data curation, L.L.; writing—original draft preparation, A.R., E.C. and L.L.; writing—review and editing, A.R.; supervision, E.C.; project administration, E.C.; funding acquisition, E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Universitat de Barcelona “Análisis de las prácticas de evaluación en entornos de docencia mixta orientadas al desarrollo de las competencias transversales” (REDICE20-2380), Institut de Desenvolupament Professional (IDP).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the Universitat de Barcelona.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data of this study are available upon request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lissak, G. Adverse physiological and psychological effects of screen time on children and adolescents: Literature review and case study. Environ. Res. 2018, 164, 149–157.
  2. Rodrigo, C.; Tabuenca, B. Ecologías de aprendizaje en estudiantes online con discapacidades. Comunicar 2020, 28, 53–65.
  3. Coll, C.; Bustos, A.; Engel, A.; de Gispert, I.; Rochera, M.J. Distributed educational influence and computer-supported collaborative learning. Digit. Educ. Rev. 2013, 24, 23–42. Available online: https://raco.cat/index.php/DER/article/view/271198 (accessed on 15 February 2024).
  4. Stenalt, M.H.; Lassesen, B. Does student agency benefit student learning? A systematic review of higher education research. Assess. Eval. High. Educ. 2022, 47, 653–669.
  5. Barberá, E.; Suárez-Guerrero, C. Evaluación de la educación digital y digitalización de la evaluación. RIED 2021, 24, 33–40.
  6. Malagón, F.J.; Graell, M. La formación continua del profesorado en los planes estratégicos de las universidades españolas. Educación XX1 2022, 25, 433–458.
  7. Yan, Z. Assessment-as-learning in classrooms: The challenges and professional development. J. Educ. Teach. 2021, 47, 293–295.
  8. Sridharan, B.; Tai, J.; Boud, D. Does the use of summative peer assessment in collaborative group work inhibit good judgement? High. Educ. 2019, 77, 853–870.
  9. Veugen, M.J.; Gulikers, J.T.M.; den Brok, P. We agree on what we see: Teacher and student perceptions of formative assessment practice. Stud. Educ. Eval. 2021, 70, 101027.
  10. Remesal, A. Primary and secondary teachers’ conceptions of assessment: A qualitative study. J. Teach. Teach. Educ. 2011, 27, 472–482.
  11. Cañadas, L. Evaluación formativa en el contexto universitario: Oportunidades y propuestas de actuación. Rev. Digit. Investig. Docencia Univ. 2020, 14, e1214.
  12. Looney, A.; Cumming, J.; Van Der Kleij, F.; Harris, K. Reconceptualising the role of teachers as assessors: Teacher assessment identity. Assess. Educ. Princ. Policy Pract. 2018, 25, 442–467.
  13. Pastore, S.; Andrade, H.L. Teacher assessment literacy: A three-dimensional model. Teach. Teach. Educ. 2019, 84, 128–138.
  14. Xu, Y.; Brown, G.T.L. Teacher assessment literacy in practice: A reconceptualization. Teach. Teach. Educ. 2016, 58, 149–162.
  15. Remesal, A.; Estrada, F.G. Synchronous Self-Assessment: First Experience for Higher Education Instructors. Front. Educ. 2023, 8, 1115259.
  16. Offerdahl, E.G.; Tomanek, D. Changes in instructors’ assessment thinking related to experimentation with new strategies. Assess. Eval. High. Educ. 2011, 36, 781–795.
  17. Biggs, J.; Tang, C. Teaching for Quality Learning at University; Open University Press: Oxford, UK, 2011.
  18. Laveault, D.; Allal, L. (Eds.) Assessment for Learning: Meeting the Challenge of Implementation; Springer: London, UK, 2016.
  19. Tai, J.; Ajjawi, R.; Boud, D.; Dawson, P.; Panadero, E. Developing evaluative judgement: Enabling students to make decisions about the quality of work. High. Educ. 2018, 76, 467–481.
  20. JISC. The Future of Assessment: Five Principles, Five Targets for 2025. 2020. Available online: https://repository.jisc.ac.uk/7733/1/the-future-of-assessment-report.pdf (accessed on 15 February 2024).
  21. García-Peñalvo, F.J.; Corell, A.; Abella-García, V.; Grande, M. Online assessment in higher education in the time of COVID-19. Educ. Knowl. Soc. 2020, 21, 1–26.
  22. Robertson, S.N.; Humphrey, S.M.; Steele, J.P. Using technology tools for formative assessments. J. Educ. Online 2019, 16, n2.
  23. Lafuente, M.; Remesal, A.; Álvarez Valdivia, I. Assisting Learning in e-Assessment: A Closer Look at Educational Supports. Assess. Eval. High. Educ. 2014, 39, 443–460.
  24. Sambell, K.; Brown, S. Changing assessment for good: Building on the emergency switch to promote future-oriented assessment and feedback designs. In Assessment and Feedback in a Post-Pandemic Era: A Time for Learning and Inclusion; Baughan, P., Ed.; Advance HE: York, UK, 2021; pp. 11–21.
  25. Fischer, J.; Bearman, M.; Boud, D.; Tai, J. How does assessment drive learning? A focus on students’ development of evaluative judgement. Assess. Eval. High. Educ. 2023, 49, 233–245.
  26. Ruiz-Morales, Y.; García-García, M.; Biencinto, C.; Carpintero, E. Evaluación de competencias genéricas en el ámbito universitario a través de entornos virtuales: Una revisión narrativa. RELIEVE 2017, 23, 2.
  27. Abelha, M.; Fernandes, S.; Mesquita, D.; Seabra, F.; Ferreira, A.T. Graduate employability and competence development in higher education—A systematic literature review using PRISMA. Sustainability 2020, 12, 5900.
  28. Klenowski, V. Assessment for learning revisited: An Asia-Pacific perspective. Assess. Educ. Princ. Policy Pract. 2009, 16, 263–268.
  29. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144.
  30. Hawe, E.; Dixon, H. Assessment for learning: A catalyst for student self-regulation. Assess. Eval. High. Educ. 2017, 42, 1181–1192.
  31. Molina, M.; Pascual, C.; López Pastor, V.M. Los proyectos de aprendizaje tutorado y la evaluación formativa y compartida en la docencia universitaria española. Perfiles Educ. 2022, 44, 96–112.
  32. Carless, D.; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assess. Eval. High. Educ. 2018, 43, 1315–1325.
  33. Henderson, M.; Ajjawi, R.; Boud, D.; Molloy, E. The Impact of Feedback in Higher Education Improving Assessment Outcomes for Learners; Palgrave/MacMillan: London, UK, 2019.
  34. Panadero, E.; Jonsson, A.; Strijbos, J.-W. Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation. In Assessment for Learning: Meeting the Challenge of Implementation; Laveault, D., Allal, L., Eds.; Springer: London, UK, 2016; pp. 311–326.
  35. Hortigüela, D.; Pérez-Pueyo, A.; López-Pastor, V. Implicación y regulación del trabajo del alumnado en los sistemas de evaluación formativa en educación superior. RELIEVE 2015, 21, ME6.
  36. Nicol, D. The power of internal feedback: Exploiting natural comparison processes. Assess. Eval. High. Educ. 2020, 46, 756–778.
  37. Nicol, D.; Serbati, A.; Tracchi, M. Competence development and portfolios: Promoting reflection through peer review. AISHE-J. 2019, 11, 1–13. Available online: https://ojs.aishe.org/index.php/aishe-j/article/view/405/664 (accessed on 20 February 2024).
  38. Azevedo, R. Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educ. Psychol. 2015, 50, 84–94.
  39. Nieminen, J.H.; Carless, D. Feedback literacy: A critical review of an emerging concept. High. Educ. 2023, 85, 1381–1400.
  40. O’Donovan, B.; Rust, C.; Price, M. A scholarly approach to solving the feedback dilemma in practice. Assess. Eval. High. Educ. 2016, 41, 938–949.
  41. Lui, A.M.; Andrade, H.L. The Next Black Box of Formative Assessment: A Model of the Internal Mechanisms of Feedback Processing. Front. Educ. 2022, 7, 751548.
  42. Boud, D. Retos en la reforma de la evaluación en educación superior: Una mirada desde la lejanía. RELIEVE 2020, 26, M3.
  43. Winstone, N.E.; Mathlin, G.; Nash, R.A. Building feedback literacy: Students’ perceptions of the developing engagement with feedback toolkit. Front. Educ. 2019, 4, 39.
  44. Ocampo, J.C.; Panadero, E.; Zamorano, D.; Sánchez-Iglesias, I.; Diez Ruiz, F. The effects of gender and training on peer feedback characteristics. Assess. Eval. High. Educ. 2023, 49, 539–555.
  45. Cano, E.; Lluch, L. Competence-Based Assessment in Higher Education during COVID-19 Lockdown: The Demise of Sustainability Competence. Sustainability 2022, 14, 9560.
  46. Mishra, L.; Gupta, T.; Shree, A. Online teaching-learning in higher education during lockdown period of COVID-19 pandemic. Int. J. Educ. Res. Open 2020, 1, 100012.
  47. Sharma, A.; Alvi, I. Evaluating pre and post COVID-19 learning: An empirical study of learners’ perception in higher education. Educ. Inf. Technol. 2021, 26, 7015–7032.
  48. Tillema, H.H.; Kremer-Hayon, L. “Practising what we preach”—Teacher educators’ dilemmas in promoting self-regulated learning: A cross case comparison. Teach. Teach. Educ. 2002, 18, 593–607.
  49. Hidalgo, B.G.; Gisbert, M. La adopción y uso de las tecnologías digitales en el profesorado universitario: Un análisis desde la perspectiva del género y la edad. RED 2021, 21, 1–19.
  50. Dorfsman, M.; Horenczyk, G. El cambio pedagógico en la docencia universitaria en los tiempos de COVID-19. RED 2021, 21, 1–27.
  51. Carless, D.; Winstone, N. Teacher feedback literacy and its interplay with student feedback literacy. Teach. High. Educ. 2023, 28, 150–163.
  52. Jiang, L.; Yu, S. Understanding Changes in EFL Teachers’ Feedback Practice During COVID-19: Implications for Teacher Feedback Literacy at a Time of Crisis. Asia-Pac. Educ. Res. 2021, 30, 509–518.
  53. Gulikers, J.T.M.; Biemans, H.J.A.; Wesselink, R.; van der Wel, M. Aligning formative and summative assessments: A collaborative action research challenging teacher conceptions. Stud. Educ. Eval. 2013, 39, 116–124.
  54. Pérez-López, E.; Yuste, R. La competencia digital del profesorado universitario durante la transición a la enseñanza remota de emergencia. RED 2023, 23, 1–19.
  55. Gu, X.; Crook, C.; Spector, M. Facilitating innovation with technology: Key actors in educational ecosystems. Br. J. Educ. Technol. 2019, 50, 1118–1124.
  56. Forsyth, R.; Cullen, R.; Stubbs, M. Implementing Electronic Management of Assessment and Feedback in Higher Education. In Research Handbook on Innovations in Assessment and Feedback in Higher Education; Evans, C., Waring, M., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2024.
  57. Badwan, B.; Bothara, R.; Latijnhouwers, M.; Smithies, A.; Sandars, J. The importance of design thinking in medical education. Med. Teach. 2018, 40, 425–426.
  58. Brown, G.T.L. Student Conceptions of Assessment: Regulatory Responses to Our Practices. ECNU Rev. Educ. 2022, 5, 116–139.
  59. Tari, E.; Selfina, E.; Wauran, Q.C. Responsibilities of students in higher education during the COVID-19 pandemic and new normal period. J. Jaffray 2020, 18, 129–152.
  60. Torre, D.M.; Schuwirth, L.W.T.; Van der Vleuten, C.P.M. Theoretical considerations on programmatic assessment. Med. Teach. 2020, 42, 213–220.
  61. Tai, J.; Ajjawi, R.; Bearman, M.; Boud, D.; Dawson, P.; Jorre de St Jorre, T. Assessment for inclusion: Rethinking contemporary strategies in assessment design. High. Educ. Res. Dev. 2023, 42, 483–497.
Figure 1. Contrasting perceptions of assessment goals: faculty versus students.
Figure 2. Contrasting perceptions of assessment practices: faculty versus students.
Table 1. Participants: schools and degrees.
Students, by degree (%): Mathematics (26.4); Primary Teacher (19.3); Pharmacy (18.1); Informatics (10.6); Archeology (9.4); Management and Public Administration (7.9); Audiovisual Communication (5.3); Psychology (3).
Instructors, by faculty (%): Education (33.3); Psychology (16.3); Geography and History (15.5); Pharmacy (14.7); Mathematics (8.5); Law (7); Information and Audiovisual Media (4.7).
Table 2. Participants: academic courses.
Course | Students (%) | Instructors (%)
First course | 77 (29.1) | 36 (27.9)
Second course | 70 (26.4) | 26 (20.2)
Third course | 49 (18.5) | 35 (27.1)
Fourth course | 69 (26.0) | 32 (24.8)
Table 3. Participants: years of teaching experience.
Teaching experience | Instructors (%)
Less than ten years | 42 (32.6)
Between 11 and 20 years | 27 (20.9)
Between 21 and 30 years | 34 (26.3)
More than 30 years | 26 (20.2)
Table 4. Participants: gender.
Gender | Female (%) | Male (%)
Students | 165 (62.26) | 100 (37.73)
Instructors | 79 (61.24) | 50 (38.76)
Table 5. Dimensions and items of the surveys linked to research goals.
RG1. To explore the primary purposes of assessment practices in the blended courses of confinement. Dimension in the student survey: primary goals of assessment practice as perceived. Dimension in the instructor survey: primary goals of assessment practice as intentionally designed. Items:
P1—Identify students’ needs (formative)
P2—Identify the level of learning performance (summative)
P3—Orient the learning process (formative)
P4—Certify learning (summative)
RG2. To explore the features of assessment practices. Dimension in the student survey: types and characteristics of assessment practices as perceived. Dimension in the instructor survey: types and characteristics of assessment practices as intentionally designed. Items:
C1—Assessment activities are productive, requiring active elaboration on the students’ side.
C2—Assessment activities are coherent with the course goals and pursued competencies.
C3—Students are invited/expected to assume an active role in defining and comprehending assessment goals.
C4—Students are invited/expected to assume an active role in defining and comprehending assessment criteria.
C5—Students may self-assess.
C6—Students may carry out peer assessment.
C7—Students may integrate feedback into subsequent steps of learning tasks.
C8—Students have the chance to reflect upon feedback.
C9—Assessment practices promote using digital tools to offer and receive feedback.
RG3. To compare these purposes and features. Dimension in both surveys: goals and characteristics of assessment practices. Items: same items as RG1 and RG2.
RG4. To explore links between personal characteristics and online assessment perceptions. Dimension in both surveys: identification data. Items: see Table 1, Table 2, Table 3 and Table 4.
Table 6. Purposes of assessment practices as perceived by faculty and students. ** values of p indicate significant differences at 99%.
Assessment purpose | Students (n = 265), M (SD) | Instructors (n = 129), M (SD) | Mann–Whitney U | p (two-tailed) | Effect size (Cohen’s d)
P1—Identify students’ needs (formative) | 2.91 (1.37) | 3.48 (1.33) | 12,866 | 0.00003 ** | 0.422
P2—Identify the level of learning performance (summative) | 3.14 (1.20) | 4.07 (1.20) | 9203.5 | <0.00001 ** | 0.775
P3—Orient the learning process (formative) | 3.06 (1.23) | 4.02 (1.13) | 9515 | <0.00001 ** | 0.813
P4—Certify learning (summative) | 3.47 (1.31) | 4.33 (1.17) | 9944.5 | <0.00001 ** | 0.692
Table 7. Characteristics of assessment practices as perceived by faculty and students. * values of p indicate significant differences at 95%; ** values of p indicate significant differences at 99%.
Characteristic of assessment practice | Students (n = 265), M (SD) | Instructors (n = 129), M (SD) | Mann–Whitney U | p (two-tailed) | Effect size (Cohen’s d)
C1—Assessment activities are productive, requiring active elaboration from students. | 2.57 (1.13) | 3.83 (1.17) | 7652 | <0.00001 ** | 1.095
C2—Assessment activities are coherent with the course goals and pursued competencies. | 3.13 (1.00) | 4.16 (1.02) | 7898.5 | <0.00001 ** | 1.019
C3—Students are invited/expected to assume an active role in defining and comprehending assessment goals. | 2.47 (1.17) | 2.75 (1.20) | 14,865 | 0.01786 * | 0.185
C4—Students are invited/expected to assume an active role in defining and comprehending assessment criteria. | 2.12 (1.14) | 2.45 (1.14) | 14,248.5 | 0.00368 ** | 0.289
C5—Students may self-assess. | 2.58 (1.22) | 2.83 (1.47) | 15,546 | 0.07215 | 0.185
C6—Students may carry out peer assessment. | 2.37 (1.20) | 2.72 (1.49) | 15,013.5 | 0.025 * | 0.258
C7—Students may integrate feedback into subsequent steps of learning tasks. | 2.56 (1.15) | 3.88 (1.03) | 7001.5 | <0.00001 ** | 1.209
C8—Students have the chance to reflect upon feedback. | 2.74 (1.23) | 4.00 (0.91) | 7508 | <0.00001 ** | 1.164
C9—Assessment practices promote using digital tools to offer and receive feedback. | 2.91 (1.24) | 3.81 (1.16) | 10,079.5 | <0.00001 ** | 0.749
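As an orientation for readers who wish to reproduce contrasts of the kind reported in Tables 6 and 7, a minimal sketch follows. It is not the authors’ analysis script: the data, the function name, and the pooled-standard-deviation variant of Cohen’s d are illustrative assumptions, since the article does not detail how the effect size was computed.

import numpy as np
from scipy.stats import mannwhitneyu

def compare_ratings(student_ratings, instructor_ratings):
    """Contrast two sets of 1-5 ratings: Mann-Whitney U, two-tailed p, pooled-SD Cohen's d."""
    s = np.asarray(student_ratings, dtype=float)
    i = np.asarray(instructor_ratings, dtype=float)
    # Two-tailed Mann-Whitney U test on the two independent groups.
    u_stat, p_value = mannwhitneyu(s, i, alternative="two-sided")
    # Cohen's d with a pooled standard deviation (one common convention; assumed here).
    n1, n2 = s.size, i.size
    pooled_sd = np.sqrt(((n1 - 1) * s.std(ddof=1) ** 2 + (n2 - 1) * i.std(ddof=1) ** 2) / (n1 + n2 - 2))
    d = (i.mean() - s.mean()) / pooled_sd
    return u_stat, p_value, d

# Hypothetical call with made-up ratings, only to show the intended usage:
# u, p, d = compare_ratings([3, 4, 2, 5, 3, 2], [4, 5, 4, 5, 3, 4])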
Table 8. Students’ general satisfaction, purposes, and characteristics of assessment practices (n = 265, df = 16). ** values of p indicate significant differences at 99%.
Item | χ² | p (two-tailed) | Phi | Gamma
P1—Identify students’ needs (formative) | 78.21 | 0 ** | 0.543 | 0.51
P2—Identify the level of learning performance (summative) | 99.688 | 0 ** | 0.613 | 0.515
P3—Orient the learning process (formative) | 99.39 | 0 ** | 0.612 | 0.549
P4—Certify learning (summative) | 86.145 | 0 ** | 0.57 | 0.484
C1—Assessment activities are productive, requiring active elaboration from students. | 76.377 | 0 ** | 0.537 | 0.496
C2—Assessment activities are coherent with the course goals and pursued competencies. | 75.316 | 0 ** | 0.529 | 0.501
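As a reading aid for the chi-square contrasts in Tables 8–13, the two effect-size columns are conventionally obtained as shown below. The article does not restate these definitions, so the formulas should be read as an assumption about the procedure rather than a description of it.

\[
\phi = \sqrt{\frac{\chi^{2}}{n}}, \qquad \gamma = \frac{n_{c} - n_{d}}{n_{c} + n_{d}},
\]

where \( n \) is the number of respondents in the contrast and \( n_{c} \) and \( n_{d} \) are the numbers of concordant and discordant pairs of ordinal ratings (Goodman–Kruskal gamma).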
Table 9. Faculty’s general satisfaction, purposes, and characteristics of assessment practices (n = 129, df = 16). ** values of p indicate significant differences at 99%.
Item | χ² | p (two-tailed) | Phi | Gamma
P1—Identify students’ needs (formative) | 109.078 | 0 ** | 0.522 | 0.294
C6—Students may carry out peer assessment. | 36.366 | 0.0026 ** | 0.529 | 0.403
C7—Students may integrate feedback into subsequent steps of learning tasks. | 33.164 | 0.0070 ** | 0.507 | 0.466
C9—Assessment practices promote using digital tools to offer and receive feedback. | 40.14 | 0.0007 ** | 0.56 | 0.443
Table 10. Students’ gender and features of assessment practices (n = 265, df = 16). * values of p indicate significant differences at 95%; ** values of p indicate significant differences at 99%.
Item | χ² | p (two-tailed) | Phi | Gamma
C5—Students may self-assess. | 9.805 | 0.0438 * | 0.193 | −0.239
C6—Students may carry out peer assessment. | 36.366 | 0.0026 ** | 0.529 | 0.403
Table 11. Students’ academic course and features of assessment practices (n = 265, df = 16). ** values of p indicate significant differences at 99%.
Item | χ² | p (two-tailed) | Phi | Gamma
C5—Students may self-assess. | 28.338 | 0.0049 ** | 0.328 | −0.241
C8—Students have the chance to reflect upon feedback. | 28.106 | 0.0053 ** | 0.275 | −0.068
Table 12. Difference between faculty and students regarding P2 and academic course. * values of p indicate significant differences at 95%; ** values of p indicate significant differences at 99%.
Students/Faculty | χ² | p (two-tailed) | Phi | Gamma
First grade (n = 123, df = 4) | 18.188 | 0.0011 ** | 0.358 | 0.533
Second grade (n = 95, df = 4) | 26.273 | 0 ** | 0.526 | 0.745
Third grade (n = 84, df = 4) | 13.074 | 0.0109 * | 0.395 | 0.497
Fourth grade (n = 101, df = 4) | 24.233 | 0.0001 ** | 0.49 | 0.46
Table 13. Difference between faculty and students regarding P2 and academic course. * values of p indicate significant differences at 95%.
Faculty (n = 129, df = 12)
Item | χ² | p (two-tailed) | Phi | Gamma
P2—Identify the level of learning performance (summative). | 21.839 | 0.0394 * | 0.411 | 0.115
C4—Students are invited/expected to assume an active role in defining and comprehending assessment criteria. | 23.901 | 0.021 * | 0.43 | −0.239
C7—Students may integrate feedback into subsequent steps of learning tasks. | 21.849 | 0.0393 * | 0.412 | −0.337
C8—Students have the chance to reflect upon feedback. | 24.587 | 0.0169 * | 0.437 | −0.266
C9—Assessment practices promote using digital tools to offer and receive feedback. | 21.306 | 0.0461 * | 0.406 | −0.083
