Article

Sustainable Development in Action: A Retrospective Case Study on Students’ Learning Before, During, and After the Pandemic

by Maura A. E. Pilotti 1,*, Khadija El Alaoui 1, Hanadi M. Abdelsalam 1 and Rahat Khan 2

1 Department of Sciences and Human Studies, Prince Mohammad Bin Fahd University, Al Khobar 31952, Saudi Arabia
2 Department of Information Resources, Prince Mohammad Bin Fahd University, Al Khobar 31952, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(9), 7664; https://doi.org/10.3390/su15097664
Submission received: 6 April 2023 / Revised: 2 May 2023 / Accepted: 5 May 2023 / Published: 6 May 2023
(This article belongs to the Section Sustainable Education and Approaches)

Abstract
Adherence to sustainable development in higher education rests on the assessment of students’ academic attainment, especially during unexpected environmental changes, such as the sudden move from face-to-face to online courses during the recent pandemic. Most studies devoted to this issue have compared students’ performance online with that of face-to-face courses before the pandemic, pooling together a variety of courses, often from specific disciplines. Besides yielding mixed results and favoring generality over course specificity, such studies do not address the issue of students’ adjustment to the post-pandemic learning environment. The present retrospective case study offered a simple evidence-based model for educators to measure the relationship between environmental changes and students’ behavior for self-reflection and adjustment. It examined students’ academic attainment (as measured by grades) within a broader timeframe, including courses taught by the same instructors face-to-face before and after the pandemic and online during the pandemic. Specific courses of the general education curriculum were selected to include a broad spectrum of students. The study then assessed whether students’ activities before, during, and after the pandemic predicted summative assessment performance (i.e., final exam grades) differently. In this study, performance differences were recorded, usually in favor of post-pandemic face-to-face classes. Midterm examinations were the best predictors of final exam grades irrespective of the modality of instruction and timeframe. Implications and applications of the methodology used and the results obtained were considered.

1. Introduction

The notion of sustainable development in higher education means instruction that helps learners develop knowledge, skills, and values for making informed decisions and performing responsible actions in the name of equity, environmental integrity, and economic viability [1]. Thus, a key aspect of sustainable development in higher education is instruction that fosters in learners the ability to cope with any of the crises that may plague contemporary society [2]. Undoubtedly, the recent COVID-19 pandemic exemplifies not only such a crisis but also an opportunity for educators to assess the instruction delivered in the classroom and reflect upon its effectiveness [3].
During the COVID-19 pandemic, courses traditionally offered face-to-face were forced online and often taught either asynchronously (requiring students to access instructional materials at a time of their choosing within a predefined timeframe) or synchronously (involving real-time interactions in a virtual classroom). Research on the potential effects of sudden transitions (i.e., emergency remote teaching), particularly on academic performance, has focused mostly on students’ ability to adjust to online learning, using their pre-pandemic face-to-face learning as the reference point [4]. Even though synchronous online instruction mimics most of the features of face-to-face instruction, several educators and administrators have raised concerns regarding the ability of the online mode to deliver satisfactory learning [5]. The usefulness of the extant literature on transitions from face-to-face to online instruction has also been questioned, as it has relied on comparisons predicated on the premise that transitions had been carefully planned [6].

1.1. The Present Study

This study arises from the acknowledgment that, by and large, the assessment of the impact of the pandemic on learning has tended to be limited either to the time of the pandemic or to a comparison of the periods before and during the pandemic [7,8,9,10,11]. Furthermore, research on the impact of the pandemic on learning has entailed either a combination of courses, thereby overlooking course specificity in a quest for generalization [12,13,14], or a few courses within specific disciplines [8,9]. Concerning the distinction between the Global North and Global South and its related biases [15], research has been surprisingly inclusive not only of student populations of the Global North but also of the traditionally understudied populations of the Global South [16,17,18,19]. In the extant literature, however, the examination of students’ re-adjustment to the once-familiar face-to-face learning environment has yet to become a relevant interest. To this end, the present study examines, through a successive independent-sample design, whether performance changes have occurred across a broader time frame that includes three semesters each before, during, and after the pandemic. The university selected for the study is one that has returned, in all courses taught in the post-pandemic world, to the original face-to-face course format used before the pandemic, thereby allowing for clear-cut comparisons of an instructional period defined by online courses and two periods defined by face-to-face courses. The understudied population selected for the present investigation consists of female college students from a society that is in the process of shedding its patriarchy to foster gender equity. Students had no prior experience with online learning in formal education before the pandemic. As such, the online mode can be compared with two unadulterated instantiations of the face-to-face mode that differ in the amount of formal experience with online learning that students possess (before and after the pandemic).
The present research has emerged not only from the need to adopt a broader perspective on the relationship between environmental changes and academic performance but also from the recognition of the critical role played by assessment in education. Assessment can inform educators about students’ performance changes as a function of the instructional mode adopted as well as justify interventions intended to ensure optimal learning. The pandemic may be perceived as over by most students and educators, but its aftermath has yet to be thoroughly ascertained, thereby keeping the issue of quality assurance and the necessity of quality control at the forefront of the concerns of educators and administrators alike [20,21]. The present study suggests a simple model for educators who wish to assess (i.e., go beyond intuitions) whether students’ performance (as measured by grades) has changed over time, including the instructional periods before, during, and after the pandemic. The research further delves into potential differences in students’ performance between instructional periods by focusing on the distinction between formative assessment measures, which give students ongoing feedback about their learning in particular areas of the curriculum of a course, and summative assessment measures, which are intended to evaluate learning at the end of the semester against the standards set by the course. It asks whether formative performance measures (i.e., midterm and take-home assignments or tests before the midterm) differ in their ability to predict summative assessment scores (i.e., final test grades) in face-to-face courses before and after the pandemic and in online courses during the pandemic [22,23]. Differences in the extent to which formative assessment performance predicts summative assessment performance are assumed to be symptomatic of changes in students’ learning that need to be acknowledged, as they may demand adjustments in instruction.
Three courses are chosen as representative of key competencies of the undergraduate curriculum: (1) written communication in research as an index of students’ ability to communicate effectively in a second language (i.e., English) within a technical field; (2) introduction to statistics as an index of students’ computational competence, including students’ ability to reason logically to make informed decisions based upon facts; and (3) introduction to psychology, a topical course that requires application of knowledge (i.e., it relies on students’ ability to understand human nature by acquiring information from scientific sources, and using such information to reason and solve problems). These courses are specifically selected for three reasons. First, they are part of the general education undergraduate curriculum. In universities across the globe, general education courses are intended to ensure that all undergraduate students acquire the background knowledge as well as the analytical and communication skills that are deemed necessary to address the demands of their majors as well as those of their chosen professions [24]. Not surprisingly, performance failures or deficiencies in general education courses may ignite a cascade of undesirable effects, which can span from mild (e.g., the repetition of a course) to severe (e.g., academic dismissal, delayed degree attainment, and loss of financial aid eligibility). Thus, how students perform in such courses is considered key in determining the quality and nature of their academic success, including retention and graduation [25,26]. Second, general education courses enroll students from all majors [27], thereby rendering the sample of participants in our research representative of the diversity of students at the university selected for the present study. Third, these courses have been consistently taught by the same instructors across three instructional periods (before, during, and after the pandemic) without changes in content, requirements, and delivery, except for the synchronous online versus the in-person mode change dictated by the pandemic.

1.2. Hypotheses

Three distinct predictions are made. Each concerns the stability or change in students’ course performance across three instructional periods.
H1.
If online learning replicates the main features of face-to-face learning, no differences in students’ performance will be observed over time (before, during, and after the pandemic). Furthermore, no performance differences will be expected if post-pandemic learning merely represents a return to pre-pandemic learning. Alternatively, if online learning during the pandemic has created new study habits (see [12,28]), then differences between before and after the pandemic will be detected.
H2.
Relationships between key variables, such as formative assessment measures and summative assessment measures, are reflected in the ability of the former to predict the latter. Thus, another indicator of learning stability or change is the extent to which formative assessment measures predict students’ summative assessment performance across the three instructional periods. No change in students’ learning will be assumed if formative assessment measures predict summative assessment performance to the same extent in all three instructional periods.
H3.
Of course, we are mindful that courses often teach and assess a unique set of competencies. Thus, we also hypothesize that differences may emerge depending on the competencies that are to be acquired and practiced in a particular course. For instance, the acquisition of computational competencies (as taught in a statistics course) may be more reliant on the interaction between students and the instructor for timely corrective feedback [29] than the acquisition of communication competencies (as taught in a written communication course). As such, the former may be more likely to be sensitive to changes in the mode of instruction than the latter, yielding performance differences in statistics between online instruction and face-to-face instruction (either before or after the pandemic).

2. Background

The COVID-19 pandemic has propelled several studies on the attitudes of students and instructors toward online learning, e.g., [30,31,32,33], and on the psychological impact of forced instructional changes dictated by the pandemic, including engagement, e.g., [34], and stress, e.g., [35]. Other studies have focused on the best practices for online instruction, e.g., [36]. Fewer studies though have directly assessed learning in students suddenly forced to move online, e.g., [37].
Interestingly, most of the studies that have examined whether performance differences exist between online and face-to-face instruction are much less recent (i.e., they pertain to the pre-pandemic period) and offer mixed evidence. For instance, when examining course performance, Larson and Sung [38] and Driscoll et al. [39] found no significant differences between online (asynchronous) and face-to-face instruction. Instead, in a large-scale study including approximately 500,000 online and face-to-face courses, Xu and Jaggars [40] found online instruction to yield inferior outcomes. In another large-scale study, including 433 summer courses, Fischer et al. [41] reported that students’ grades were slightly lower in online courses. Yet, even if a cohesive set of findings existed, it would not adequately address the issue of performance in online courses offered in response to a sudden pandemic dictating social distancing.
Recent studies on students’ performance involve the pandemic period. They reflect the idiosyncrasies of the responses of diverse educational institutions as well as the particular curriculum assessed, thereby questioning whether their findings generalize to other subject matters and a broader range of students. For instance, in a case study, Iglesias-Pradas et al. [13] reported that emergency remote teaching, across a variety of engineering courses of different sizes, taught synchronously or asynchronously, was accompanied by an increase in students’ academic performance. A similar finding was reported by Gonzalez et al. [12] with a set of three science courses (i.e., applied computing, design of water treatment facilities, and metabolism). Instead, Chisadza et al. [42] reported lower performance in microeconomics courses offered online. With students enrolled in introductory physics who had no formal exposure to online instruction, Al-Zohbi et al. [8] found that male students performed better online than face-to-face on both formative and summative assessment measures, whereas female students either performed better online or performed at equivalent levels between instructional mediums. When students taking introductory physics were organized by major, Al-Zohbi et al. [9] found differences in formative assessment measures. Students in STEM majors performed better than non-STEM majors on lab assignments and better online than face-to-face on tests. In this study, the performance of non-STEM majors on both lab assignments and tests was unaffected by the mode of instruction.
Irrespective of the nature of the uncovered differences, scholars’ explanations have relied on a diverse array of variables. They include psychological factors pertaining to learners [43], such as changes in learning strategies, self-regulation [12,44], and attitudes and study preferences [42]. They also consider organizational factors, such as the faculty’s ability to preserve the quality of the instruction delivered, the readiness of the technical infrastructure, and the technical literacy of all parties involved [13]. Thus, in our view, it seems more practical for individual educators to perform independent examinations of the relationship between emergency remote teaching and their students’ learning. Our case study offers a simple roadmap on how to examine such a relationship and then use the uncovered outcomes for fruitful self-reflection [45,46]. We contend that the inclusion of post-pandemic learning is critical to such examinations, even though the issue of readjustment to face-to-face learning has yet to reach prominence. The question of readjustment to face-to-face instruction brings with it the construct of academic adaptability, which refers to the ability of students to overcome difficulties and achieve desired academic outcomes in a changing educational environment [47]. Academic adaptability has been reported to have a negative relationship with academic burnout and a positive relationship with academic performance [48]. Its relationship with good performance is assumed to reflect personal dispositions, such as self-discipline, lack of defensive rigidity [49], self-efficacy (i.e., confidence in one’s abilities [50,51]), and degree of engagement [52].
Direct comparisons of students’ academic performance before, during, and after the pandemic have been largely overlooked. Reasons may include the recency of the post-pandemic period or the fact that visible remnants of the pandemic linger. In fact, at some universities, a number of courses and programs may be either still online or offered in a hybrid mode (i.e., partly online and partly face-to-face; [53]), thereby challenging clear-cut comparisons between face-to-face learning before and after the pandemic. The few studies that have examined performance in the post-pandemic environment have either limited their focus to the post-pandemic environment [54,55] or reasoned on the lessons learned from online instruction (during the pandemic) as a means of enhancing student success in face-to-face classes after the pandemic in particular courses or subject matters [56,57,58].
Adaptability is particularly critical to women in a society that has only recently attempted to move away from strict patriarchy to develop a social system that recognizes gender equity in education, employment, and personal matters. Saudi Arabia is a prototypical example of such a society. Its recent pursuit of gender equity, amid all the challenges that this may entail, has resulted in top-down legal changes and investments that have granted women agency to levels aspiring to the international standards set by the United Nations [59,60,61]. It is important to note though that the country’s transition to a more gender-equitable society has been driven primarily by economic considerations. Namely, it rests on moving the country away from fossil fuels and their byproducts to develop a sustainable economy diverse at its core and reliant on renewable energy sources [62,63,64,65]. Although both men and women are expected to contribute to the sustainable economic ecosystem envisioned from the top, women of college age are considered the main propellers of change and the necessary constituency of the envisioned economic engine [66,67,68,69]. As a result, women of college age are the most impacted by the top-down structural and functional changes reshaping the economy of the country. In light of the relevance of such women to the future of Saudi Arabia, in our study, we focus on a convenience sample of female college students. They are not only an understudied population in the extant literature but also a population on which the future economic engine of the country heavily rests. Their ability to re-adapt to face-to-face learning after the pandemic will determine their academic and professional success as well as the success of the economic plan envisioned from the top.

3. Materials and Methods

The present retrospective case study included three courses in the general education program of a university located in the Eastern Province of Saudi Arabia. The program served freshmen and sophomores who had chosen to major in Engineering, Computer Engineering, Computer Science, Interior Design, Architecture, Law, or Business. Each of the selected courses was taught by the same instructor face-to-face (3 semesters before the pandemic), synchronously online (3 semesters during the pandemic), and face-to-face (3 semesters after the pandemic). Semesters included Spring, Summer, and Fall. For each class taught by a faculty member online, there was a corresponding one taught face-to-face before and after the pandemic to maintain consistency in the course material, instructor, pedagogical approach, and assessment between the two delivery formats. Courses were taught in English. At the selected university, institutional policies stressed faculty’s adoption of best practices (i.e., active learning, abundant learner–instructor interactions, clear organization and structure, and focus on valued content) regardless of the mode of instruction of each course to ensure that the online mode would deliver experiences as similar as possible to those of the face-to-face mode.
Three courses were chosen: a written communication course devoted to writing research reports, a statistics course concerned with data analyses, and an introductory psychology course that covered key applications of behavioral sciences. They were chosen for their ability to offer an adequate representation of all majors at the university, a robust enrollment, minimal overlap of students, a similar overall structure of class activities (to be described below), and a reputation for being challenging (as per students’ end-of-course evaluations). They were all 3-credit-hour courses that met 3 times per week. The statistics course had an extra hour for recitation, which did not add to the credit hour count of the course. At the selected university, all general education courses conform in content and format to the guidelines of the Texas International Education Consortium and are specifically devoted to the development of basic competencies across knowledge domains (e.g., written communication and statistics) or the acquisition of foundational knowledge within a specialized domain (e.g., psychology).
All participants completed the course in which they were enrolled, thereby receiving a grade that included all course activities. Their ages ranged from 18 to 25. Students were of Middle Eastern descent and bilingual, reporting Arabic as their first language and English as their second language. English language proficiency was assessed before enrollment through standardized tests. For face-to-face classes before and after the pandemic, all participants commuted daily by car or bus from the urban centers surrounding the university. Reported travel time varied from ½ hour to 2 h each way. If a student was enrolled in more than one course, only one course was chosen for that student through randomization. All participants were females. The online instruction sample included 695 participants. The face-to-face instruction sample included 693 participants before the pandemic and 750 participants after the pandemic. Class sizes varied from medium (15–34 students) to large (35–49 students; see also [13]). Students, freshmen or sophomores, qualified for participation by virtue of their being enrolled in one of the selected courses. To preserve the anonymity of the participants, grades were anonymized prior to their being used as research data. Participation complied with the guidelines for educational research set by the Office for Human Research Protections of the U.S. Department of Health and Human Services and with those set by the American Psychological Association. The research at the selected institution was conducted under the purview of the Deanship of Research.
Institutional guidelines required that instructors adopt a student-centered pedagogy, according to which the educator’s role is that of a facilitator rather than a provider of knowledge, and students’ diverse needs and abilities inform and guide instruction. According to this student-centered approach, students are not expected to be passive receivers of knowledge, but rather active contributors who not only acquire knowledge but also transform it and themselves in the process. Each course was taught in English (i.e., the primary vehicle of communication at the selected university). The written communication and psychology courses entailed a midterm and a final examination, and 5 take-home assignments before and after the midterm examination (i.e., 2–3 assignments in each half of the semester). The statistics course instead entailed a final examination and 4 tests (2 tests in each half of the semester). In all courses, attendance was mandatory before the pandemic and remained so during and after the pandemic. As per institutional policies, all types of assessment covered five of the six levels of the Bloom taxonomy [70,71,72]. Namely, tests and assignments required not only remembering and understanding but also the application, analysis, and evaluation of information. Due to the introductory nature of the selected courses, the sixth level of the taxonomy (i.e., synthesis/creation of work) was not covered.
Peer observations confirmed each instructor’s adoption of a student-centered pedagogy (as per institutional policy). Each was also recognized as learning-oriented based on the LOGO: F scale [73], peer observations, and students’ end-of-term evaluations. The LOGO: F scale was administered via email before data collection to assess the educators’ orientation towards students’ learning (LO) and grades (GO). The scale measures LO and GO attitudes on a 5-point scale from “strongly disagree” (0) to “strongly agree” (4), as well as the frequency of LO and GO behaviors on a 5-point scale from “never” (0) to “always” (4). Each instructor received an average score above 3 for attitudes and behaviors promoting learning.
At the very onset of the pandemic, courses traditionally taught face-to-face were quickly moved to synchronous online instruction. In both modes of instruction, and thus before, during, and after the pandemic, the learning management system was Blackboard, which served for the storage and use of class materials as well as for assignment submission. WhatsApp student groups functioned as informal support systems for learning and exchanges of course-related information. In online instruction, Blackboard Collaborate, a web-based platform that offers students and instructors a virtual classroom, was used for lectures, class discussions, and office hours. Blackboard Collaborate allowed students to communicate in real time with instructors and with each other through messages written in a chatbox or expressed orally through a microphone. Due to technical restrictions, the camera function of Blackboard Collaborate was only sporadically used during class meetings, except for testing, during which cameras and microphones were activated to monitor students’ behavior. Thus, synchronous online instruction resembled face-to-face instruction with the exception that it deprived students of the richness of human contact that characterizes customary face-to-face interactions in the Middle East [74]. Even before the pandemic, all students used personal computers and had Internet access to carry out academic work. They also had easy access to a team of Information Technology specialists who could offer assistance with technical issues. Thus, the sudden transition to online learning during the pandemic did not burden students with unequal technological access to online learning. According to instructors’ testimonials and institutional policies, post-pandemic face-to-face instruction no longer involved the opportunity for lecture capture. Thus, in its format, it replicated pre-pandemic instruction with the exception that materials updated during the pandemic and posted on Blackboard were used in the post-pandemic face-to-face classes.

Data Analyses

The results reported below are considered significant at the 0.05 level. Performance in each course was examined separately to identify patterns that might be unique to a given course. In descriptive and inferential statistics, the period of instruction (before, during, and after the pandemic) and class activities were treated as the main variables in our analyses. To this end, students’ performance in a course, whether taught online or face-to-face, was chronologically organized into 4 performance indices at each of 3 periods (before, during, and after the pandemic): pre-midterm assignments or tests, the midterm examination, post-midterm assignments or tests, and the final examination. In the statistics course, however, there was no midterm examination. Thus, only 3 indices were used. Across all selected courses, pre-midterm activities, midterm examinations, and post-midterm activities were considered formative assessment measures, whereas final exams were treated as summative assessment measures.
To determine whether performance changed over time, performance indices served as dependent measures in Analyses of Variance (ANOVAs) with the period of instruction as the independent variable. A posteriori, pairwise Bonferroni comparisons were conducted to determine the location of significant effects. Pearson correlation analyses were then performed to assess the contribution of each formative assessment index (i.e., the measures collected during each semester) to final exam scores (i.e., the summative assessment indicator).
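
To make this analytic roadmap concrete for educators who may wish to replicate it, the sketch below reproduces the pipeline with standard Python tooling (pandas and SciPy). It is a minimal illustration under assumptions of our own: the column names (period, midterm, final) and the randomly generated grades are hypothetical placeholders, not the study’s data or the authors’ actual scripts.

import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats

# Hypothetical example data: one row per student in a single course.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "period": rng.choice(["before", "during", "after"], size=300),
    "midterm": rng.normal(75, 10, 300),  # formative index (0-100 scale)
    "final": rng.normal(72, 12, 300),    # summative index (0-100 scale)
})

# One-way ANOVA: does final-exam performance differ across the three
# instructional periods (before, during, and after the pandemic)?
groups = [g["final"].to_numpy() for _, g in df.groupby("period")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# A posteriori pairwise comparisons with a Bonferroni correction:
# each pairwise p-value is multiplied by the number of comparisons.
pairs = list(combinations(df["period"].unique(), 2))
for a, b in pairs:
    t, p = stats.ttest_ind(df.loc[df["period"] == a, "final"],
                           df.loc[df["period"] == b, "final"])
    print(f"{a} vs. {b}: Bonferroni-corrected p = {min(p * len(pairs), 1.0):.4f}")

# Formative-to-summative prediction: Pearson r between a formative index
# and the final exam, with r^2 reported as the % of variance explained.
for period, g in df.groupby("period"):
    r, p = stats.pearsonr(g["midterm"], g["final"])
    print(f"{period}: r = {r:.2f}, r^2 = {100 * r ** 2:.1f}% (p = {p:.4f})")

Run on real gradebook exports, the same few lines would yield, per course, the omnibus test of period effects, the location of any significant differences, and the predictive strength of each formative index.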

4. Results

4.1. Assessing Performance in Face-to-Face and Synchronous Online Instruction (H1 and H3)

Table 1 displays students’ mean (M) performance and standard errors of the mean (SEM) in face-to-face courses before and after the pandemic and in online courses during the pandemic. In written communication, there were no differences in performance for pre-midterm assignments [F = 1.86, ns]. There were differences, however, in the scores of the midterm examination [F(2, 1040) = 15.16, MSE = 343.75, p < 0.001, ηp² = 0.028], post-midterm assignments [F(2, 1040) = 3.20, MSE = 391.81, p = 0.041, ηp² = 0.006], and final examination [F(2, 1040) = 9.56, MSE = 533.61, p < 0.001, ηp² = 0.018]. A posteriori, pairwise Bonferroni comparisons (see Table 1) revealed that for both the midterm and final examinations, face-to-face performance after the pandemic was superior to online performance. Across all indices, performance in face-to-face classes before the pandemic was not different from online performance. However, for tests (i.e., midterm and final examinations), it was inferior to performance in face-to-face classes after the pandemic.
For introductory psychology, as in written communication, there were no differences in performance for pre-midterm assignments [F = 2.40, ns]. There were differences, however, in the scores of the midterm examination [F(2, 597) = 10.52, MSE = 324.25, p < 0.001, ηp² = 0.034], post-midterm assignments [F(2, 597) = 4.10, MSE = 313.63, p = 0.017, ηp² = 0.014], and final examination [F(2, 597) = 45.36, MSE = 498.27, p < 0.001, ηp² = 0.132]. A posteriori, pairwise Bonferroni comparisons (see Table 1) revealed that face-to-face performance after the pandemic was greater than online performance during the pandemic on the midterm examination, post-midterm assignments, and final examination. Online performance was lower than performance after the pandemic on the midterm examination and also lower than performance before the pandemic on the final examination.
For statistics, there were differences in pre-midterm tests [F(2, 492) = 54.83, MSE = 138.10, p < 0.001, ηp² = 0.182], post-midterm tests [F(2, 492) = 28.23, MSE = 168.58, p < 0.001, ηp² = 0.103], and the final exam [F(2, 492) = 16.89, MSE = 110.25, p < 0.001, ηp² = 0.064]. Across all indices, a posteriori, pairwise Bonferroni comparisons (see Table 1) revealed that face-to-face performance after the pandemic was higher than both online performance and face-to-face performance before the pandemic. The only exception was face-to-face pre-midterm performance before the pandemic, which did not differ from online performance.
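
For reference when interpreting the effect sizes above, the partial eta squared (ηp²) values reported alongside each F-test follow the conventional definition (a standard formula, stated here for the reader’s convenience rather than drawn from the article’s analyses):

$$ \eta_p^2 = \frac{SS_\text{effect}}{SS_\text{effect} + SS_\text{error}} $$

By commonly used benchmarks (roughly 0.01 small, 0.06 medium, and 0.14 large), most of the effects reported above are small to medium, with the pre-midterm difference in statistics (ηp² = 0.182) standing out as large.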

4.2. Predicting Performance in Face-to-Face and Online Courses (H2)

For all courses, the final examination served as the summative assessment measure. Thus, Pearson correlation analyses (r) were conducted to assess whether course activities during the semester (i.e., midterm and pre- and post-midterm assignments or exams) predicted performance on the final examination. Bivariate correlations were intended to illustrate the relationship between two variables irrespective of other variables (i.e., the relationship between each predictor and the selected outcome variable when variables are taken two at a time). Coefficients of determination (r², expressed as percentages) were then computed to determine the extent to which each type of activity alone predicted summative assessment performance. In Table 2, all correlations significant at the 0.05 level are displayed along with the percentage of variance in the final examination that was accounted for by each activity.
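
As a worked illustration of this computation (the correlation value is invented for the example, not drawn from Table 2), a midterm–final correlation of r = 0.60 would yield

$$ r^2 = (0.60)^2 = 0.36, $$

that is, midterm grades alone would account for 36% of the variance in final examination grades.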
When the midterm exam and assignments were included in the assessment protocol of a course (as in written communication and psychology), the midterm exam accounted for the largest portion of the variance in students’ final exam scores. When only tests during the semester were included (as in statistics), tests were better predictors of performance in face-to-face classes before the pandemic than in online or face-to-face classes after the pandemic. Tests performed during the second half of the semester were better predictors of final exam performance for online classes, whereas those performed during the first half of the semester were better predictors of final exam performance for face-to-face classes after the pandemic.

5. Discussion

Higher education institutions are key players in the United Nations’ Sustainable Development Goal 4 (i.e., quality education), which promotes values such as sustainable lifestyles, human rights, gender equity, and non-violence [75]. Severe challenges to Goal 4 include environmental disruptions that are unpredictable in nature, duration, scope, and consequence, such as pandemics, natural disasters, wars, and uprisings. Academic institutions are agents of change that are expected to exist in a world where such disruptions tend to occur with increasing frequency. The present retrospective case study was motivated by the conviction that understanding whether the goals of quality education are preserved over time regardless of environmental disruptions would rest, at the very minimum, on the assessment of students’ performance. Such an assessment would determine the extent to which measurable outcomes of instruction (e.g., grades) either online or face-to-face remain equivalent or improve over time. It would also require the timely identification of at-risk students to enact suitable remedial interventions that prevent avoidable failures. Essential to the identification of at-risk students would be the selection of indices that adequately predict key course outcomes, such as grades in summative assessment measures.
The results of the present study can be summarized in two main points: First, for the most part, formative and summative assessment grades were not significantly different between online courses and face-to-face courses before the pandemic, a finding that is inconsistent with the findings of other studies of emergency remote teaching that have reported either higher performance online [12,13,76] or lower performance online [77]. Furthermore, in our study, most formative and summative assessment measures in face-to-face courses after the pandemic were higher than online measures. For written communication and statistics, they were also higher than those in face-to-face classes before the pandemic. These patterns suggest that students benefited from a return to fully face-to-face classes.
The widespread lack of differences between face-to-face before the pandemic and online during the pandemic supports the idea that online learning in the selected courses withstood the disruptions brought about by the pandemic. According to Gonzalez et al. [12], during the pandemic, online students treated learning more as a continuous activity rather than as an intermittent enterprise dictated by deadlines. Although the synchronous online instruction adopted by the selected institution replicated many aspects of the traditional face-to-face instruction (e.g., real-time interactions in a virtual classroom), thereby easing the transition to online, it might have fostered a different approach to learning. Informal comments made by students to instructors appeared to support the assumption that the quality of students’ homework assignments was the primary beneficiary of the greater continuity in study activities. Students claimed to spread out the work required for assignments (instead of doing all the work in the days before deadlines) and devote more time to searching for information, writing, revising, and practicing skills. They also claimed that their approach to test preparation had changed, but they felt more anxious or merely more concerned about taking tests online due to the potential for unforeseen technical issues (e.g., a sudden interruption in internet connectivity). Even though both face-to-face and online students took tests through Blackboard, technical issues were not seen as similarly problematic in the face-to-face classroom because there was an instructor physically present who could verify claims and offer solutions. Yet, test performance might have equally benefitted from continuity in test preparation due to psychological factors.
The widespread advantage of face-to-face performance after the pandemic over online performance during the pandemic supports the idea that adjustments to learning during the pandemic were transferred to face-to-face classes, benefitting performance. Alternatively, the return to the social environment of face-to-face classes might have motivated students to higher performance levels. It is difficult to assess which of these two scenarios prevailed in our study, but some evidence seems to point to an interaction between personal excitement about returning to campus (a positive influence) and the challenges of time management arising from the demands of face-to-face classes (a negative influence). In fact, in interactions with instructors, students often reported that they were excited about returning to campus, primarily for the opportunity it offered to reaffirm the in-person relationships they had before. Yet, they also mentioned concerns about readjusting their schedules to include travel to and from campus. Also reported were concerns about having less time for class activities, difficulties in distributing their learning across time, and an increased likelihood of cramming. Thus, changes in study habits made during the pandemic appeared to have been challenged by the realities of commuting and by an expanding social life. Not surprisingly, the advantage of post-pandemic learning over online learning was most evident in later formative assessment measures (i.e., midterm examinations and post-midterm activities) rather than earlier ones (i.e., pre-midterm activities) in two of the selected classes (i.e., written communication and introduction to psychology). Namely, in such classes, the experience of face-to-face classes did not benefit students at the start of the academic year.
Second, summative assessment grades were best predicted by performance on the midterm test as opposed to assignments for written communication and psychology (courses in the social sciences). This finding suggests that summative assessment performance in both online and face-to-face classrooms can best be predicted by midpoint test performance. In such courses, midpoint performance can be a useful indicator of at-risk students, particularly when the information comes from tests rather than assignments. Although both tests and assignments had deadlines to be met, tests were to be completed within a much shorter timeframe (1–1.5 h). As a result, assignments were reported to be largely devoid of the apprehension generated by tests, thereby potentially making them less sensitive indicators of summative assessment performance. For statistics, tests predicted less of the summative assessment performance (i.e., final test grades) over time, even though the content and format of the assessment protocol did not change across the selected periods. This pattern was deemed to defeat the purpose of formative assessment measures, which, besides serving as tools for giving students feedback, were treated as useful indicators of at-risk students. Thus, a self-study was initiated and is currently underway to determine the sources of the degraded predictability of formative assessment measures in the post-pandemic learning environment.

Implications and Limitations

The procedure used to obtain these findings offers a simple roadmap for educators eager to determine whether the performance of students enrolled in their courses benefited, suffered, or was unaffected by the switch from face-to-face to online instruction and from the latter to face-to-face instruction again. Of course, our data were collected through a successive-sample design in real classrooms, instead of through an experiment in a lab, thereby making cause–effect inferences merely speculative, rather than statistical certainties. Nevertheless, data that emerge from this design can shed light on the extent to which instructional modes are different or similar if an educator wishes to predict students’ end-of-term performance to develop timely remedial interventions. Prediction errors are costly though. Unrecognized difficulties on the part of students may lead to course failure (a miss), whereas temporary difficulties mistakenly identified as enduring and/or severe (a false alarm) may lead to unnecessary stress experienced by the misidentified students. While the extant literature contains a wealth of machine learning approaches with sophisticated algorithms, these approaches are often seen as mostly inaccessible to educators without a computer science background [78]. Aside from their complexity, such approaches entail many decisions that determine the feasibility and effectiveness of the approach selected, ranging from how to measure students’ performance to the identification of the appropriate algorithm. Our approach is much less complex and thus more user-friendly, ensuring that data can be used by any instructor to foster evidence-based self-reflection and instructional change. The use of more than one semester of data is advisable though to reduce the impact of idiosyncratic influences.
It is important to note that the results of the present study led to fruitful discussions on learning and teaching by the participating faculty and their colleagues, who judged the study as an opportunity for self-reflection. The study dispelled the fear that learning had been compromised by the online switch. Yet, it generated more questions than it could answer, all related to the interpretation of the data for the institution and faculty concerned. Grades are indeed merely raw indicators of the work that precedes and defines the quality of students’ assignments and test performance. A qualitative review of students’ end-of-course surveys did not indicate a preference for one mode of instruction over another. Yet, participating faculty suggested a multitude of accounts involving changes in psychological factors linked to online enrollment, including learning strategies, self-regulation [12], attitudes, and study preferences [42]. They all agreed, though, with Iglesias-Pradas et al. [13] that organizational factors, such as the faculty’s ability to preserve the quality of the instruction delivered, the readiness of the technical infrastructure, and the technical literacy of all parties involved, are critical to successfully transferring face-to-face courses to the online mode. The study might not have fully satisfied the faculty’s curiosity, but it energized their inquiries into teaching and learning, undoubtedly a desirable outcome. A sustainable education critically relies on such inquiries.
While the present findings suggest that students taking a course face-to-face after the pandemic may perform better than students taking the same course online, limitations exist. First, these findings may not apply to courses for which a serious institutional effort was not made to equate synchronous online instruction to face-to-face instruction. Similarly, they do not apply to higher education institutions in which the technical support staff was unable to offer adequate assistance to both faculty and students during the transition to the online mode [79]. Second, they include only students who completed the courses in which they were enrolled. Although information was not available for all instantiations of a course and for the specific timing of the withdrawals in all courses, withdrawal rates from the subset at our disposal appeared to be numerically equivalent between the two modes. Third, only female students were included due to gender-segregation policies that prevented the researchers from accessing male students’ data. Furthermore, students were by and large from families whose socioeconomic background was middle class, thereby questioning whether students with a lower socioeconomic status would yield the same pattern of performance data [80]. Fourth, the successive-independent-sample design of the present study does not support straightforward cause–effect interpretations of the data. Although this design gives educators the opportunity to examine changes in a student population over time, it does not allow them to infer how individual students have changed over time. Furthermore, predictions of performance within a student sample merely highlight the relationship between particular measures of formative assessment and summative assessment outcomes. Discrepancies in the predictive validity of assessment measures in students sampled at different times may nevertheless be informative. Fifth, at the institution selected for the present study, attendance was mandatory for all courses irrespective of the instructional period in which they were offered. Not surprisingly, instructors reported that in their courses attendance was reasonably high and that students knew the value of class attendance. Instructors noted that most students recognized the impact of absences on their performance and that they might have counteracted it by engaging in compensatory activities (e.g., seeking information post-facto from classmates and instructors). Instructors also reported that the frequency of compensatory activities involving requests to meet before or after class and visits during office hours was rather elevated without marked differences over time between online and face-to-face modes of instruction. Of course, for online students, meetings occurred in the virtual classroom or office. Although instructors noted that good grades were linked to high attendance, thereby agreeing with the preponderance of the extant literature [77,81,82], they did not corroborate anecdotes with quantifiable evidence. Sixth, it is important to keep in mind that particular courses might yield idiosyncratic responses, such as those exhibited by statistics, for which even the early formative assessment measure (pre-midterm tests) showed higher performance in classes offered face-to-face after the pandemic than online during the pandemic. Comments made by students can be quite revealing as to the nature and perhaps sources of such idiosyncratic responses.
For instance, in our study, female students’ comments underscored their anxiety about computational abilities, which was amplified by the online mode. A return to face-to-face classes was seen as fostering less math-related anxiety since such classes offered opportunities for in-person interactions with instructors, tutors, and peers when help was needed. Other examples of idiosyncratic response patterns may be found in earlier studies. For instance, in a physics course, Al-Zohbi et al. [9] reported that STEM female students showed better test performance online than face-to-face before the pandemic but equivalent performance on homework assignments. Instead, non-STEM students did not display any differences across modes of instruction. This course also showed that idiosyncratic responses to a particular curriculum might interact with other factors, such as the task at hand (e.g., tests versus homework assignments), as well as students’ academic majors. Seventh, the findings of our study pertained to academic performance measured by instructors in general education courses. Such courses assess key competencies, which are the foundations of major-specific competencies. We did not measure the latter. Thus, we did not determine whether any improvement, decline, or stability over time was associated with students’ major-specific competencies, such as those measured by practicums during the senior year. Students’ performance in practicums is generally treated as a direct measure of their work readiness and ability to adapt to the workplace. Some evidence exists though that declines in work readiness emerged from the pandemic period [83], and that the post-pandemic work environment might demand different skills [84]. This evidence suggests that determining work readiness in the post-pandemic environment is a key aspect of higher education institutions’ quest to ensure a quality education for their students.

6. Conclusions

We recognize that our study is merely a drop in an ocean of classes and students who have been exposed to online instruction during the pandemic. Yet, our study focuses on a convenience sample of young women whose learning is seen as key to the sustainable development of their country’s economy. As such, their learning is an important socio-economic predictor of the changes that are likely to define Saudi Arabia in the years to come.
Not surprisingly, following the results of our research, we discovered that several faculty members had developed a keen appreciation for action research [85,86] to ensure a sustainable education for their students. Thus, the assessment procedure that we demonstrate and advocate for here may be of interest to educators around the world for two key reasons. Not only does it provide useful information on students’ attainment, but it also has the potential to engage individual faculty members in self-motivated and continuous assessment practices. If such practices are performed with objectivity and precision and are institutionally supported, they can offer a window into the impact of one’s teaching on students’ attainment, independently spearheading self-reflection and improvement [87,88,89]. They may lessen the need for institutionally mandated professional development opportunities for both faculty and staff in the areas of teaching and learning.
As a result of our past and current work, a Cognitive Science Research Center has been created at the university where the present study was conducted. The center is associated with the Department of Student Affairs so that action research can continue unabated in the post-pandemic learning environment. The association with the Department of Student Affairs is intended to foster research devoted to the identification of individual differences in academic attainment in specific courses or majors, and their sources. Particular attention will be given to the impact of educational interventions on information processing in the human brain [90]. Both aims involve faculty members as their quest to understand learning in their students intersects with their teaching practices.

Author Contributions

All authors have contributed to the research and related manuscript equally. Conceptualization, M.A.E.P., K.E.A., H.M.A. and R.K.; methodology, M.A.E.P., K.E.A., H.M.A. and R.K.; formal analysis, M.A.E.P., K.E.A., H.M.A. and R.K.; investigation, M.A.E.P., K.E.A., H.M.A. and R.K.; data curation, M.A.E.P., K.E.A., H.M.A. and R.K.; writing—original draft preparation, M.A.E.P., K.E.A., H.M.A. and R.K.; writing—review and editing, M.A.E.P., K.E.A., H.M.A. and R.K.; project administration, M.A.E.P., K.E.A., H.M.A. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the guidelines for educational research of the Office for Human Research Protections (OHRP) of the U.S. Department of Health and Human Services as well as with the American Psychological Association’s ethical standards for educational research. The research was conducted under the purview of the Deanship of Research at the selected institution.

Informed Consent Statement

Data collection and use were granted exempt status in compliance with the guidelines for educational research set by the Code of Federal Regulations Governing the Protection of Human Subjects in Research (45 CFR §46; https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/common-rule-subpart-a-46104/index.html).

Data Availability Statement

Data are available upon request.

Acknowledgments

We would like to express our appreciation and gratitude to the students who participated in the study and to the members of the Undergraduate Research Society who contributed to the data collection. We are also grateful for the support of colleagues of the PMU Cognitive Science Research Cluster.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Agbedahin, A.V. Sustainable development, education for sustainable development, and the 2030 agenda for sustainable development: Emergence, efficacy, eminence, and future. Sustain. Dev. 2019, 27, 669–680. [Google Scholar] [CrossRef]
  2. Amador, F.; Martinho, A.P.; Bacelar-Nicolau, P.; Caeiro, S.; Oliveira, C.P. Education for sustainable development in higher education: Evaluating coherence between theory and praxis. Assess. Eval. High. Educ. 2015, 40, 867–882. [Google Scholar] [CrossRef]
  3. Ahmed, V.; Opoku, A. Technology supported learning and pedagogy in times of crisis: The case of COVID-19 pandemic. Educ. Inf. Technol. 2022, 27, 365–405. [Google Scholar] [CrossRef]
  4. Schult, J.; Mahler, N.; Fauth, B.; Lindner, M.A. Did students learn less during the COVID-19 pandemic? Reading and mathematics competencies before and after the first pandemic wave. Sch. Eff. Sch. Improv. 2022, 33, 544–563. [Google Scholar] [CrossRef]
  5. Rehman, M.A.; Soroya, S.H.; Abbas, Z.; Mirza, F.; Mahmood, K. Understanding the challenges of e-learning during the global pandemic emergency: The students’ perspective. Qual. Assur. Educ. 2021, 29, 259–276. [Google Scholar] [CrossRef]
  6. Hodges, C.; Moore, S.; Lockee, B.; Trust, T.; Bond, A. The difference between emergency remote teaching and online learning. Educ. Rev. 2020, 27, 1–12. Available online: http://hdl.handle.net/10919/104648 (accessed on 5 April 2023).
  7. Adeyeye, B.; Ojih, S.E.; Bello, D.; Adesina, E.; Yartey, D.; Ben-Enukora, C.; Adeyeye, Q. Online learning platforms and Covenant University students’ academic performance in practical related courses during COVID-19 pandemic. Sustainability 2022, 14, 878. [Google Scholar] [CrossRef]
  8. Al-Zohbi, G.; Pilotti, M.A.E.; Barghout, K.; Elmoussa, O.; Abdelsalam, H. Lesson learned from the pandemic for learning physics. J. Comput. Assist. Learn. 2022, 39, 591–602. [Google Scholar] [CrossRef]
  9. Al-Zohbi, G.; Pilotti, M.A.E.; Abdelsalam, H.; Elmoussa, O. Learning physics online or face-to-face: A case study of STEM and non-STEM students. Front. Psychol. 2022, 18, 1041187. [Google Scholar] [CrossRef]
  10. Gopal, R.; Singh, V.; Aggarwal, A. Impact of online classes on the satisfaction and performance of students during the pandemic period of COVID 19. Educ. Inf. Technol. 2021, 26, 6923–6947. [Google Scholar] [CrossRef]
  11. Muhammad, N.; Srinivasan, S. Online education during a pandemic-adaptation and impact on student learning. Int. J. Eng. Pedagog. 2021, 11, 71–83. [Google Scholar] [CrossRef]
  12. Gonzalez, T.; De La Rubia, M.A.; Hincz, K.P.; Comas-Lopez, M.; Subirats, L.; Fort, S.; Sacha, G.M. Influence of COVID-19 confinement on students’ performance in higher education. PLoS ONE 2020, 15, e0239490. [Google Scholar] [CrossRef] [PubMed]
  13. Iglesias-Pradas, S.; Hernández-García, Á.; Chaparro-Peláez, J.; Prieto, J.L. Emergency remote teaching and students’ academic performance in higher education during the COVID-19 pandemic: A case study. Comput. Hum. Behav. 2021, 119, 106713. [Google Scholar] [CrossRef] [PubMed]
  14. Vargas-Ramos, J.C.; Lerma, C.; Guzmán-Saldaña, R.M.E.; Lerma, A.; Bosques-Brugada, L.E.; González-Fragoso, C.M. Academic performance during the COVID-19 pandemic and its relationship with demographic factors and alcohol consumption in college students. Int. J. Environ. Res. Public Health 2021, 19, 365. [Google Scholar] [CrossRef]
  15. Basu, S. The global south writes 1325 (too). Int. Political Sci. Rev. 2016, 37, 362–374. [Google Scholar] [CrossRef]
  16. Limniou, M.; Varga-Atkins, T.; Hands, C.; Elshamaa, M. Learning, student digital capabilities and academic performance over the COVID-19 pandemic. Educ. Sci. 2021, 11, 361. [Google Scholar] [CrossRef]
  17. Mahande, R.D.; Malago, J.D.; Abdal, N.M.; Yasdin, Y. Factors affecting students’ performance in web-based learning during the COVID-19 pandemic. Qual. Assur. Educ. 2022, 30, 150–165. [Google Scholar] [CrossRef]
  18. Rodríguez-Planas, N. COVID-19, college academic performance, and the flexible grading policy: A longitudinal analysis. J. Public Econ. 2022, 207, 104606. [Google Scholar] [CrossRef]
  19. Realyvásquez-Vargas, A.; Maldonado-Macías, A.A.; Arredondo-Soto, K.C.; Baez-Lopez, Y.; Carrillo-Gutiérrez, T.; Hernández-Escobedo, G. The impact of environmental factors on academic performance of university students taking online classes during the COVID-19 pandemic in Mexico. Sustainability 2020, 12, 9194. [Google Scholar] [CrossRef]
  20. Maatuk, A.M.; Elberkawi, E.K.; Aljawarneh, S.; Rashaideh, H.; Alharbi, H. The COVID-19 pandemic and E-learning: Challenges and opportunities from the perspective of students and instructors. J. Comput. High. Educ. 2022, 34, 21–38. [Google Scholar] [CrossRef]
  21. Zhu, X.; Liu, J. Education in and after COVID-19: Immediate responses and long-term visions. Postdigital Sci. Educ. 2020, 2, 695–699. [Google Scholar] [CrossRef]
  22. Bacquet, J.N. Implications of summative and formative assessment in Japan—A review of the current literature. Int. J. Educ. Lit. Stud. 2020, 8, 28–35. [Google Scholar] [CrossRef]
  23. Lau, A.M.S. ‘Formative good, summative bad?’—A review of the dichotomy in assessment literature. J. Furth. High. Educ. 2016, 40, 509–525. [Google Scholar] [CrossRef]
  24. Lei, S.A.; Lei, S.Y. General education curricula affecting satisfaction and retention of undergraduate students: A review of the literature. Education 2019, 139, 197–202. [Google Scholar]
  25. Millea, M.; Wills, R.; Elder, A.; Molina, D. What matters in college student success? Determinants of college retention and graduation rates. Education 2018, 138, 309–322. [Google Scholar]
  26. Warner, D.B.; Koeppel, K. General education requirements: A comparative analysis. J. Gen. Educ. 2009, 58, 241–258. [Google Scholar] [CrossRef]
  27. Brint, S.; Proctor, K.; Murphy, S.P.; Turk-Bicakci, L.; Hanneman, R.A. General education models: Continuity and change in the US undergraduate curriculum, 1975–2000. J. High. Educ. 2009, 80, 605–642. [Google Scholar] [CrossRef]
  28. Munir, H. Reshaping sustainable university education in post-pandemic world: Lessons learned from an empirical study. Educ. Sci. 2022, 12, 524. [Google Scholar] [CrossRef]
  29. Mullen, C.; Pettigrew, J.; Cronin, A.; Rylands, L.; Shearman, D. Mathematics is different: Student and tutor perspectives from Ireland and Australia on online support during COVID-19. Teach. Math. Appl. Int. J. IMA 2021, 40, 332–355. [Google Scholar] [CrossRef]
  30. Adnan, M.; Anwar, K. Online learning amid the COVID-19 pandemic: Students’ perspectives. J. Pedagog. Sociol. Psychol. 2020, 2, 45–51. [Google Scholar] [CrossRef]
  31. Agarwal, S.; Kaushik, J.S. Student’s perception of online learning during COVID pandemic. Indian J. Pediatr. 2020, 87, 554. [Google Scholar] [CrossRef] [PubMed]
  32. Baber, H. Determinants of students’ perceived learning outcome and satisfaction in online learning during the pandemic of COVID-19. J. Educ. E-Learn. Res. 2020, 7, 285–292. [Google Scholar] [CrossRef]
  33. Mushtaha, E.; Dabous, S.A.; Alsyouf, I.; Ahmed, A.; Abdraboh, N.R. The challenges and opportunities of online learning and teaching at engineering and theoretical colleges during the pandemic. Ain Shams Eng. J. 2022, 13, 101770. [Google Scholar] [CrossRef]
  34. Zhao, H.; Xiong, J.; Zhang, Z.; Qi, C. Growth mindset and college students’ learning engagement during the COVID-19 pandemic: A serial mediation model. Front. Psychol. 2021, 12, 224. [Google Scholar] [CrossRef]
  35. Yang, C.; Chen, A.; Chen, Y. College students’ stress and health in the COVID-19 pandemic: The role of academic workload, separation from school, and fears of contagion. PLoS ONE 2021, 16, e0246676. [Google Scholar] [CrossRef]
  36. Morgan, H. Best practices for implementing remote learning during a pandemic. Clear. House J. Educ. Strateg. Issues Ideas 2020, 93, 135–141. [Google Scholar] [CrossRef]
  37. Jia, C.; Hew, K.F.; Bai, S.; Huang, W. Adaptation of a conventional flipped course to an online flipped format during the COVID-19 pandemic: Student learning performance and engagement. J. Res. Technol. Educ. 2020, 54, 281–301. [Google Scholar] [CrossRef]
  38. Larson, D.K.; Sung, C.H. Comparing student performance: Online versus blended versus face-to-face. J. Asynchronous Learn. Netw. 2009, 13, 31–42. [Google Scholar] [CrossRef]
  39. Driscoll, A.; Jicha, K.; Hunt, A.N.; Tichavsky, L.; Thompson, G. Can online courses deliver in-class results? A comparison of student performance and satisfaction in an online versus a face-to-face introductory sociology course. Teach. Sociol. 2012, 40, 312–331. [Google Scholar] [CrossRef]
  40. Xu, D.; Jaggars, S.S. Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. J. High. Educ. 2014, 85, 633–659. [Google Scholar] [CrossRef]
  41. Fischer, C.; Xu, D.; Rodriguez, F.; Denaro, K.; Warschauer, M. Effects of course modality in summer session: Enrollment patterns and student performance in face-to-face and online classes. Internet High. Educ. 2020, 45, 100710. [Google Scholar] [CrossRef]
  42. Chisadza, C.; Clance, M.; Mthembu, T.; Nicholls, N.; Yitbarek, E. Online and face-to-face learning: Evidence from students’ performance during the COVID-19 pandemic. Afr. Dev. Rev. 2021, 33, S114–S125. [Google Scholar] [CrossRef]
  43. Atlam, E.S.; Ewis, A.; Abd El-Raouf, M.M.; Ghoneim, O.; Gad, I. A new approach in identifying the psychological impact of COVID-19 on university students’ academic performance. Alex. Eng. J. 2022, 61, 5223–5233. [Google Scholar] [CrossRef]
  44. Chiecher, A.C.; Ficco, C.R.; Bersía, P.B.; Luna Valenzuela, J. Learning strategies in the virtuality forced by the pandemic: Contributions to the design of learning contexts in the post-pandemic stage. Sapienza Int. J. Interdiscip. Stud. 2023, 4, e23005. [Google Scholar] [CrossRef]
  45. Al-Amin, M.; Jahan, I.; Rabbi, M.F.; Islam, U. Can blended learning be the new normal in the post-pandemic higher educational institutions? Int. J. Educ. Res. Rev. 2021, 6, 306–317. [Google Scholar]
  46. Ladson-Billings, G. I’m here for the hard re-set: Post-pandemic pedagogy to preserve our culture. Equity Excell. Educ. 2021, 54, 68–78. [Google Scholar] [CrossRef]
  47. McAbee, S.T.; Oswald, F.L.; Connelly, B.S. Bifactor models of personality and college student performance: A broad versus narrow view. Eur. J. Personal. 2014, 28, 604–619. [Google Scholar] [CrossRef]
  48. Xie, Y.J.; Cao, D.P.; Sun, T. The effects of academic adaptability on academic burnout, immersion in learning, and academic performance among Chinese medical students: A cross-sectional study. BMC Med. Educ. 2019, 19, 211. [Google Scholar] [CrossRef]
  49. Mumford, M.D.; Baughman, W.A.; Threlfall, K.V.; Uhlman, C.E.; Costanza, D.P. Personality, adaptability, and performance: Performance on well-defined problem-solving tasks. Hum. Perform. 1993, 6, 241–285. [Google Scholar] [CrossRef]
  50. Kozlowski, S.W.; Gully, S.M.; Brown, K.G.; Salas, E.; Smith, E.M.; Nason, E.R. Effects of training goals and goal orientation traits on multidimensional training outcomes and performance adaptability. Organ. Behav. Hum. Decis. Process. 2001, 85, 1–31. [Google Scholar] [CrossRef]
  51. Yongmei, H.; Chen, Y. The relationship between locus of control and academic adaptability among college students: Mediating effect of academic self-efficacy. Eur. J. Educ. Pedagog. 2023, 4, 73–77. [Google Scholar] [CrossRef]
  52. Elphinstone, B.; Whitehead, R.; Tinker, S.P.; Bates, G. The academic benefits of ‘letting go’: The contribution of mindfulness and nonattachment to adaptability, engagement, and grades. Educ. Psychol. 2019, 39, 784–796. [Google Scholar] [CrossRef]
  53. Levine-West, G.; Lam, Y.; Schaeffer, G. Study abroad in a (post-) pandemic world: Our new normal and some reasons for optimism. L2 J. 2023, 15, 29–53. [Google Scholar] [CrossRef]
  54. Pilotti, M.A.E.; Hassan, S.A.M.; El Alaoui, K.; Aldossary, F. Are law students’ individual differences in the post-pandemic world related to performance? Front. Educ. 2023, 7, 1064392. [Google Scholar] [CrossRef]
  55. Pilotti, M.A.E.; Abdelsalam, H.; Anjum, F.; Muhi, I.; Nasir, S.; Daqqa, I.; Gunderson, G.D.; Latif, R.M. Adaptive individual differences in Math courses. Sustainability 2022, 14, 8197. [Google Scholar] [CrossRef]
  56. Armadi, S. Problems of the online-based curriculum of language learning in higher education in the post-pandemic. J. Linguist. Engl. Teach. 2022, 7, 158–175. [Google Scholar] [CrossRef]
  57. Cicco, G. Higher education in a “Post”-pandemic era: Lessons learned for faculty and administration. i-Manag. J. Educ. Technol. 2022, 19, 1–6. [Google Scholar] [CrossRef]
  58. Tahir, I.; Van Mierlo, V.; Radauskas, V.; Yeung, W.; Tracey, A.; da Silva, R. Blended learning in a biology classroom: Pre-pandemic insights for post-pandemic instructional strategies. FEBS Open Bio 2022, 12, 1286–1305. [Google Scholar] [CrossRef]
  59. Else-Quest, N.M.; Grabe, S. The political is personal: Measurement and application of nation-level indicators of gender equity in psychological research. Psychol. Women Q. 2012, 36, 131–144. [Google Scholar] [CrossRef]
  60. Hepp, P.; Somerville, C.; Borisch, B. Accelerating the United Nation’s 2030 global agenda: Why prioritization of the gender goal is essential. Glob. Policy 2019, 10, 677–685. [Google Scholar] [CrossRef]
  61. Koehler, G. Tapping the Sustainable Development Goals for progressive gender equity and equality policy? Gend. Dev. 2016, 24, 53–68. [Google Scholar] [CrossRef]
  62. Albassam, B.A. Economic diversification in Saudi Arabia: Myth or reality? Resour. Policy 2015, 44, 112–117. [Google Scholar] [CrossRef]
  63. Barhoumi, E.M.; Okonkwo, P.C.; Zghaibeh, M.; Belgacem, I.B.; Alkanhal, T.A.; Abo-Khalil, A.G.; Tlili, I. Renewable energy resources and workforce case study Saudi Arabia: Review and recommendations. J. Therm. Anal. Calorim. 2020, 141, 221–230. [Google Scholar] [CrossRef]
  64. Le Ha, P.; Barnawi, O.Z. Where English, neoliberalism, desire and internationalization are alive and kicking: Higher education in Saudi Arabia today. Lang. Educ. 2015, 29, 545–565. [Google Scholar] [CrossRef]
  65. Nurunnabi, M. Transformation from an oil-based economy to a knowledge-based economy in Saudi Arabia: The direction of Saudi vision 2030. J. Knowl. Econ. 2017, 8, 536–564. [Google Scholar] [CrossRef]
  66. Abdelwahed, N.A.A.; Bastian, B.L.; Wood, B.P. Women, entrepreneurship, and sustainability: The case of Saudi Arabia. Sustainability 2020, 14, 11314. [Google Scholar] [CrossRef]
  67. Amirat, A.; Zaidi, M. Estimating GDP growth in Saudi Arabia under the government’s vision 2030: A knowledge-based economy approach. J. Knowl. Econ. 2020, 11, 1145–1170. [Google Scholar] [CrossRef]
  68. Macias-Alonso, I.; Kim, H.; González, A.L. Self-driven Women: Gendered mobility, employment, and the lift of the driving ban in Saudi Arabia. Gend. Place Cult. 2023; advance online publication. [Google Scholar] [CrossRef]
  69. Saleh, W.; Malibari, A. Saudi women and vision 2030: Bridging the gap? Behav. Sci. 2021, 11, 132. [Google Scholar] [CrossRef]
  70. Anderson, L.W.; Krathwohl, D. A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Longman: New York, NY, USA, 2001. [Google Scholar]
  71. Barari, N.; RezaeiZadeh, M.; Khorasani, A.; Alami, F. Designing and validating educational standards for E-teaching in virtual learning environments (VLEs), based on revised Bloom’s taxonomy. Interact. Learn. Environ. 2022, 30, 1640–1652. [Google Scholar] [CrossRef]
  72. Bloom, B.S. (Ed.) Taxonomy of Educational Objectives. Cognitive Domain; McKay: New York, NY, USA, 1956. [Google Scholar]
  73. Eison, J.; Janzow, F.; Pollio, H.R. Assessing faculty orientations towards grades and learning: Some initial results. Psychol. Rep. 1993, 73, 643–656. [Google Scholar] [CrossRef]
  74. Lim, T.S. Verbal communication across cultures. In Handbook of Intercultural Communication; Chen, L., Ed.; De Gruyter Mouton: Berlin, Germany, 2017; pp. 179–198. [Google Scholar]
  75. Bartusevičienė, I.; Pazaver, A.; Kitada, M. Building a resilient university: Ensuring academic continuity—Transition from face-to-face to online in the COVID-19 pandemic. WMU J. Marit. Aff. 2021, 20, 151–172. [Google Scholar] [CrossRef]
  76. Pilotti, M.A.E.; Al Mulhem, H.; Al Kuhayli, H.; El Alaoui, K. A case study of university students’ performance in Arabic and Islamic culture courses before and during the pandemic (face-to-face and online instruction). J. Educ. Muslim Soc. 2023, 4, 96–115. [Google Scholar]
  77. Chen, J.; Lin, T.F. Class attendance and exam performance: A randomized experiment. J. Econ. Educ. 2008, 39, 213–227. [Google Scholar] [CrossRef]
  78. Alyahyan, E.; Düştegör, D. Predicting academic success in higher education: Literature review and best practices. Int. J. Educ. Technol. High. Educ. 2020, 17, 3. [Google Scholar] [CrossRef]
  79. Pedro, N.S.; Kumar, S. Institutional support for online teaching in quality assurance frameworks. Online Learn. 2020, 24, 50–66. [Google Scholar] [CrossRef]
  80. Alam, G.M. Has secondary science education become an elite product in emerging nations?—A perspective of sustainable education in the era of MDGs and SDGs. Sustainability 2023, 15, 1596. [Google Scholar] [CrossRef]
  81. Thomas, P.V.; Higbee, J.L. The relationship between involvement and success in developmental algebra. J. Coll. Read. Learn. 2000, 30, 222–232. [Google Scholar] [CrossRef]
  82. Moore, R. Attendance and performance. J. Coll. Sci. Teach. 2003, 32, 367–371. [Google Scholar]
  83. Alam, G.M. Does online technology provide sustainable HE or aggravate diploma disease? Evidence from Bangladesh—A comparison of conditions before and during COVID-19. Technol. Soc. 2021, 66, 101677. [Google Scholar] [CrossRef]
  84. Bayerlein, L.; Hora, M.T.; Dean, B.A.; Perkiss, S. Developing skills in higher education for post-pandemic work. Labour Ind. J. Soc. Econ. Relat. Work 2021, 31, 418–429. [Google Scholar] [CrossRef]
  85. Keahey, J. Sustainable development and participatory action research: A systematic review. Syst. Pract. Action Res. 2021, 34, 291–306. [Google Scholar] [CrossRef]
  86. Liddy, M. From marginality to the mainstream: Learning from action research for sustainable development. Ir. Educ. Stud. 2021, 31, 139–155. [Google Scholar] [CrossRef]
  87. Al Kuhayli, H.; Pilotti, M.A.E.; El Alaoui, K.; Cavazos, S.E.; Hassan, S.A.M.; Al Ghazo, R. An exploratory non-experimental design of self-assessment practice. Int. J. Assess. Eval. 2019, 26, 49–65. [Google Scholar] [CrossRef]
  88. Pedrosa-de-Jesus, H.; Guerra, C.; Watts, M. University teachers’ self-reflection on their academic growth. Prof. Dev. Educ. 2017, 43, 454–473. [Google Scholar] [CrossRef]
  89. Brownhill, S. Asking additional key questions of self-reflection. Reflective Pract. 2023, 24, 400–412. [Google Scholar] [CrossRef]
  90. Thomas, M.S.; Ansari, D.; Knowland, V.C. Annual research review: Educational neuroscience: Progress and prospects. J. Child Psychol. Psychiatry 2019, 60, 477–492. [Google Scholar] [CrossRef]
Table 1. Mean (M) students’ performance and standard error of the mean (SEM) by course activities and instructional mode.

Written Communication (FtF-Before n = 341; Online n = 356; FtF-After n = 346)

| Activity                  | M FtF-Before | SEM  | M Online | SEM  | M FtF-After | SEM  |
|---------------------------|--------------|------|----------|------|-------------|------|
| Pre-Midterm Assignments   | 82.19        | 0.95 | 83.75    | 0.93 | 84.77       | 0.95 |
| Midterm Exam              | 75.29 b      | 1.00 | 76.73 b  | 0.98 | 82.62 a     | 1.00 |
| Post-Midterm Assignments  | 85.42 b      | 1.07 | 87.32    | 1.05 | 89.24 a     | 1.06 |
| Final Exam                | 63.91 b      | 1.25 | 66.26 b  | 1.22 | 71.43 a     | 1.24 |

Introductory Psychology (FtF-Before n = 176; Online n = 250; FtF-After n = 174)

| Activity                  | M FtF-Before | SEM  | M Online | SEM  | M FtF-After | SEM  |
|---------------------------|--------------|------|----------|------|-------------|------|
| Pre-Midterm Assignments   | 89.95        | 1.23 | 87.41    | 1.03 | 90.65       | 1.23 |
| Midterm Exam              | 76.88 b      | 1.36 | 76.93 b  | 1.14 | 84.34 a     | 1.37 |
| Post-Midterm Assignments  | 87.59        | 1.34 | 84.32 b  | 1.12 | 89.11 a     | 1.34 |
| Final Exam                | 75.27 a      | 1.68 | 60.52 b  | 1.41 | 80.17 a     | 1.69 |

Statistics (FtF-Before n = 176; Online n = 165; FtF-After n = 154)

| Activity                  | M FtF-Before | SEM  | M Online | SEM  | M FtF-After | SEM  |
|---------------------------|--------------|------|----------|------|-------------|------|
| Pre-Midterm Tests         | 79.38 b      | 0.89 | 81.24 b  | 0.92 | 92.11 a     | 0.95 |
| Post-Midterm Tests        | 77.68 c      | 0.98 | 81.92 b  | 1.01 | 88.41 a     | 1.05 |
| Final Exam                | 76.73 c      | 0.79 | 79.15 b  | 0.82 | 83.42 a     | 0.85 |

Note: FtF = face-to-face. Within each row, scores labeled with different letters differ significantly from each other.
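For readers who wish to apply the same kind of before/during/after audit to their own courses, what follows is a minimal sketch (in Python, with NumPy and SciPy) of how the quantities in Table 1 can be computed: the mean and SEM of a course activity under each instructional mode, then an omnibus test and pairwise comparisons of the sort summarized by the letter labels. The score arrays are randomly generated placeholders, and the specific procedure (a one-way ANOVA followed by independent-samples t-tests) is an illustrative assumption, not the authors’ analysis pipeline.

```python
# Sketch: descriptive statistics and pairwise comparisons across
# instructional modes, using hypothetical (randomly generated) scores.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder final-exam scores for one course; ns mirror Table 1.
scores = {
    "FtF-Before": rng.normal(64, 18, 341).clip(0, 100),
    "Online": rng.normal(66, 18, 356).clip(0, 100),
    "FtF-After": rng.normal(71, 18, 346).clip(0, 100),
}

# Mean (M) and standard error of the mean (SEM) per mode, as in Table 1.
for mode, x in scores.items():
    print(f"{mode}: M = {x.mean():.2f}, SEM = {stats.sem(x):.2f}")

# Omnibus test across the three modes ...
f_stat, p_omnibus = stats.f_oneway(*scores.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# ... followed by pairwise comparisons; differing letters in Table 1
# correspond to pairs whose difference reached significance.
for (mode_a, x_a), (mode_b, x_b) in combinations(scores.items(), 2):
    t_stat, p_pair = stats.ttest_ind(x_a, x_b)
    print(f"{mode_a} vs. {mode_b}: t = {t_stat:.2f}, p = {p_pair:.4f}")
```

In practice, the pairwise tests would be accompanied by a correction for multiple comparisons (e.g., Bonferroni or Tukey’s HSD).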
Table 2. Course activities in each period as predictive of final exam scores: Pearson correlations (r) and coefficients of determination (r², %).

Final exam grades, Face-to-Face Before:

| Predictor         | Written Comm r | r²  | Psychology r | r²  | Statistics r | r²  |
|-------------------|----------------|-----|--------------|-----|--------------|-----|
| Pre-Midterm Act.  | 0.484          | 23% | 0.456        | 21% | 0.774        | 60% |
| Midterm Exam      | 0.649          | 42% | 0.631        | 40% | —            | —   |
| Post-Midterm Act. | 0.486          | 24% | 0.462        | 21% | 0.820        | 67% |

Online:

| Predictor         | Written Comm r | r²  | Psychology r | r²  | Statistics r | r²  |
|-------------------|----------------|-----|--------------|-----|--------------|-----|
| Pre-Midterm Act.  | 0.374          | 14% | 0.341        | 12% | 0.584        | 34% |
| Midterm Exam      | 0.624          | 39% | 0.535        | 29% | —            | —   |
| Post-Midterm Act. | 0.375          | 14% | 0.442        | 20% | 0.700        | 49% |

Face-to-Face After:

| Predictor         | Written Comm r | r²  | Psychology r | r²  | Statistics r | r²  |
|-------------------|----------------|-----|--------------|-----|--------------|-----|
| Pre-Midterm Act.  | 0.482          | 23% | 0.415        | 17% | 0.618        | 38% |
| Midterm Exam      | 0.516          | 27% | 0.549        | 30% | —            | —   |
| Post-Midterm Act. | 0.412          | 17% | 0.529        | 28% | 0.357        | 13% |

Note: Act. = activities (assignments or tests, depending on the course). Dashes mark cells that do not apply to the Statistics course, whose assessments consisted of pre- and post-midterm tests rather than a single midterm exam (see Table 1).
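The entries in Table 2 are ordinary Pearson correlations, and each coefficient of determination is simply the squared correlation expressed as a percentage; for instance, the midterm exam in written communication before the pandemic yields r = 0.649, so r² = 0.649² ≈ 0.42, i.e., roughly 42% of the variance in final exam grades. Below is a minimal sketch of the computation, again with randomly generated placeholder data rather than the study’s records.

```python
# Sketch: Pearson correlation between a course activity and final exam
# grades, plus the coefficient of determination as a percentage.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
midterm = rng.normal(77, 10, 200).clip(0, 100)
# Hypothetical final exam grades loosely driven by midterm performance.
final_exam = (0.6 * midterm + rng.normal(30, 8, 200)).clip(0, 100)

r, p_value = stats.pearsonr(midterm, final_exam)
r_squared_pct = 100 * r**2  # e.g., r = 0.649 gives about 42%
print(f"r = {r:.3f}, r^2 = {r_squared_pct:.0f}%, p = {p_value:.4f}")
```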