Article

Perceptions about the Assessment in Emergency Virtual Education Due to COVID-19: A Study with University Students from Lima

by Iván Montes-Iturrizaga 1,*, Gloria María Zambrano Aranda 2, Yajaira Licet Pamplona-Ciro 3 and Klinge Orlando Villalba-Condori 4

1 Department of Education, Pontifical Catholic University of Peru, San Miguel 15088, Peru
2 Department of Accounting Sciences, Pontifical Catholic University of Peru, San Miguel 15088, Peru
3 Department of Psychology, Universidad Continental, Huancayo 12000, Peru
4 Department of Education, Catholic University of Santa María, Arequipa 04013, Peru
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(4), 378; https://doi.org/10.3390/educsci13040378
Submission received: 3 March 2023 / Revised: 2 April 2023 / Accepted: 4 April 2023 / Published: 7 April 2023

Abstract

The COVID-19 pandemic forced a large proportion of Peruvian universities to design systems for emergency virtual education. This required professors to quickly learn how to use teaching platforms and digital tools and to acquire a wide range of technological skills. In this context, formative assessment was likely the pedagogical activity that faced the greatest number of challenges, tensions, and problems, owing to the lack of preparation of many professors to administer performance tests and provide effective feedback. It is presumed that these insufficiencies (previously exhibited in face-to-face education) were carried over into virtual classrooms during the health emergency. A survey study was carried out with 240 students from a private university in Lima to learn their perceptions and preferences regarding the tests their professors administered in the virtual classrooms. It was found that students were assessed, for the most part, with multiple-choice tests. In addition, students recognized essay tests as the most important for their education, yet they preferred multiple-choice tests. Finally, law students were mostly assessed with essay tests, and psychology students with oral tests.

1. Introduction

The assessment of learning in the classroom is one of the most relevant processes for leading students towards the achievement of the competencies, skills, contents, or abilities included in a curriculum or program [1,2,3,4]. This is because evaluation, if it is formative, should go hand in hand with two relevant components. The first is associated with the quality of the evidence of learning that is generated through the review of assignments, oral participation, and exams applied throughout a semester or academic cycle [5]. This quality of evidence is directly associated with the meaningfulness, realism, and contextual validity of the assessment conditions set for students [2,3,6]. For example, an essay exam will correspond more closely to the intellectual demands of a given professional field than one where students must simply select the correct answer from a list of alternatives [1,2,7].
The second component is related to the possibility that university professors use this evidence to provide feedback on their teaching and on students' learning processes [3,4,7]. Only then could we say that we are facing an evaluative process of a formative nature; otherwise, we would merely be measuring or observing how students are progressing in terms of their learning [8]. For this reason, it could be affirmed that formative assessment is a process oriented towards students learning what they should learn. In other words, assessment understood in this way is aimed at learning, in total opposition to practices that understand assessment as a simple verification of what has been learned. In this sense, the assessment of learning is above all a permanent disposition to know, in a reflexive way, the students' learning processes in order to provide feedback [2,3]. This, as mentioned above, requires university professors to be attentive to the progress of their students and to the possible problems they may encounter when learning. Likewise, it is very important that the evidence (in the form of tests or exams) be as reliable as possible, since it is on the basis of this evidence, and of the accompanying reflective processes, that optimizing decisions will be taken in the form of feedback on both learning and teaching [2].
In this light, assessment is not the same as grading, which measures or verifies the quantity of what has been learned. Nor can we equate assessment with a mere value judgement, as this would lead us to the contemplation of a reality in which there is no intention of improving learning [3,5,7].
In view of the above, the more formative approaches to assessment seem to be more deeply rooted in the thinking and practice of basic education teachers (pre-school, primary, and secondary) than in university lecturers. It is possible that the reasoning that justifies not practicing truly formative assessment still persists, alluding to the fact that university students are adults and do not need help from their professors, when in fact, like any other students, they need help within a framework of warm treatment [9]. All this points towards the personal qualities that complement the pedagogical and professional competences every university professor must possess, qualities that are projected in the assessment practices displayed in the classroom [10].
In short, assessment is a profoundly human act, committed to the students whom we wish at all times to see succeed in their university studies. Such a well-meaning disposition elevates formative assessment to a practice that can lead the greatest number of students (hopefully all of them) to achieve what the degree program expects of them. Thus, the success of university education and the quality of graduates' professional performance would depend to a large extent on the timely help, pedagogical adjustments, and meaningful feedback that we offer to learners [3].
On the other hand, it is worth mentioning that Peruvian universities, in the framework of the reform generated by the current University Law 30220 of 2014, have seen the need to deploy greater efforts towards better teaching. Under these influences, assessment had to improve significantly thanks to the development of courses, workshops, and specializations in university teaching. A series of very determined actions had thus been initiated to optimize the teaching skills of professors and to investigate the impact of the measures undertaken, all this in a context of greater competitiveness among Peruvian universities seeking better positions in terms of scientific production, respectability, and standing within higher education [11].
However, these dynamics underwent abrupt changes as a result of the COVID-19 pandemic [2]. It is not that improvements in teaching were halted, but rather that more urgent issues had to be addressed at that time, such as the acquisition of platforms for online teaching, professor training in the use of these platforms, and the transfer of teaching materials to a virtual space little known by the vast majority of professors [7]. Likewise, programs had to be designed to develop teaching skills for digital platforms. Some of these training spaces were reportedly designed as if for face-to-face education, and others from a perspective closer to engineering than to pedagogy. In other cases, it was reported that some university owners ordered the merger of several sections or classrooms into one, so that a professor who previously taught 25 or 35 students went, from one moment to the next, to having more than 100 students in a single virtual classroom [2,3]. This reportedly occurred to a greater extent in private for-profit universities. Excessive increases in the number of students per classroom have also been reported in other latitudes [12].
Thus, the situation described above would have had a significant impact on the quality of teaching and of assessment itself: since it is practically impossible to assess 100 or 125 students through essay exams, professors would have resorted to an excessive use of multiple-choice exams, a situation that would also have negatively affected their emotional state and motivation and caused burnout in the context of the pandemic [13,14,15]. At the same time, it is to be expected that, in other cases, professors transferred the strengths and weaknesses of their face-to-face teaching (and their ways of assessing) to the online modality. However, this is also a problem for students, because a significant percentage of them graduate from secondary education without having achieved the basic competencies needed to enter the university world [16]. This translates into problems in oral expression, in searching for reliable information, in writing, and in thinking in general, among others. This is largely borne out by the results of PISA 2018 (Peru), which revealed the serious deficiencies of young people close to finishing high school [16,17].
In addition, recent studies report problems that became widespread during the pandemic, such as plagiarism, lack of academic integrity, the increase in artificial grading, the difficulty of grading group work, and the absence of face-to-face relationships between professors and students [12,18]. However, the evidence also points to favorable perceptions of inclusion, well-being, and satisfaction with online assessment in the context of the pandemic [12,19].
On the other hand, other studies report that during the pandemic test scores rose considerably [20], and that students recognized that the online modality demanded greater effort but carried important advantages [21]. It should be noted that most of these studies compare scores before and during the pandemic, but by means of multiple-choice exams [22]. Few studies (such as this one) analyze, from a formative assessment perspective, the types of exams used in online classes, where the type of exam matters, especially given the need to provide feedback to students [23]. Other researchers have reported the need to transcend the traditional multiple-choice exam that was used so insistently during this health crisis [12,20,23,24,25]. Specifically, perceptual studies of students indicate that during the pandemic, multiple-choice tests were favored over oral and essay tests [26,27,28]. However, we have not found studies that examine students' perceptions of assessment by university degree program; research has instead concentrated on reporting global results [5,23]. Given this, we anticipate that each degree program has a particular culture that is reflected in the assessment practiced in the classroom [29,30].

Purpose of Study

In view of the situation described above, the general objective of the present research is to study the perceptions and preferences of university students with respect to the assessment of their learning in the classroom, in the context of the emergency online teaching provided during the COVID-19 pandemic. We thus have the following specific objectives: (1) to characterize perceptions regarding the tests given by professors; (2) to describe the perceived emphases of the multiple-choice tests applied by professors; (3) to identify preferences regarding the types of tests; and (4) to identify students' perceptions of which tests are the most relevant for their professional education.
In this way, this study is intended to provide inputs for debate, reflection, and decision making in this thematic field, where technological criteria (applications, programs, and automated systems to deliver multiple-choice tests) seem to have taken precedence [31]. Researchers in this field believe that, in the context of the COVID-19 pandemic, traditional tests focused more on memorization and on aspects of little relevance to students were applied [32,33]. Thus, this study also aims to contribute to this knowledge problem.

2. Materials and Methods

2.1. Research Design

A quantitative study was developed using the survey method, with a questionnaire constructed as the instrument [34]. In terms of its characteristics, this research is observational (non-experimental), cross-sectional, descriptive, comparative, and correlational.

2.2. Selection of Participants

The sample consisted of 240 undergraduate university students from a private for-profit university that serves vulnerable socioeconomic sectors of Metropolitan Lima (Peru). These students had entered the university in March 2021, during the COVID-19 pandemic, and were surveyed between October and November 2021, while completing their second semester. It should be noted that in that second semester the total population of students was 303; the sample therefore represents 79.21% of that population. Likewise, these students (from this on-site university) were immersed from the beginning of their training in emergency virtual education due to the health situation. It is important to mention that the study was census-based: the link to the instrument was sent to 100% of the students enrolled in the second semester.
Table 1 shows the distribution of the sample by degree program.
Ages ranged from 16 to 54 years, with a mean of 25.4 and a standard deviation of 8.105. It should be noted that 72% of the subjects in the sample mentioned that they both study and work, in various private and public companies. Likewise, 90% of them were born in a province or region other than Lima, coming from the highlands (Andean and Altiplano zones) at altitudes above 1500 m above sea level. Fundamentally, these students came from Huancayo, Ayacucho, Abancay, Arequipa, and Puno. Additionally, according to the internal census of this university, 89% of those enrolled in this institution will be the first generation of university graduates in their families. Finally, in terms of socioeconomic status, 70% of the university's students live in poverty, and it charges some of the lowest tuition fees in the country.

2.3. Questionnaire

A questionnaire (Appendix A) was created; the first part collects identification data such as sex, age, and degree program. The second part comprises 11 multiple-choice questions that explore students' perceptions of the most common types of tests (and their emphases) given by their university professors. The third part explores preferences as to how to be assessed (essay tests or multiple-choice tests, for example). The fourth part asks about evaluative perceptions regarding which type of test they consider most relevant to their preparation as future professionals.
This self-administered instrument was delivered through Google Forms during the second COVID-19 wave. It is also important to point out that, at all times, we asked students about their perceptions of the assessment practiced by their teachers (especially the tests provided through the virtual classrooms) in emergency virtual education in Peru. The questionnaire had content validity, determined through the participation of three expert judges. All of the judges were academics with doctoral degrees in educational sciences (one of them held a doctorate in educational psychology) and with at least 10 years of experience in creating this type of instrument and in the thematic field of assessment. The questionnaire (anonymous) was answered voluntarily under informed consent. It should be noted that the instrument was developed on the basis of a series of exploratory group interviews conducted by the main author of this study with students at the university.

2.4. Procedures

Access to the university students was granted thanks to coordination with the university's highest authorities. These authorities were informed in writing about the nature of this research, and the commitment to keep the name of the institution confidential was made explicit. A list was then received with the e-mail addresses of all students in their second semester. The link to the questionnaire was sent by email, followed by two reminders inviting participation in the study. Likewise, a general report of the results, with recommendations regarding classroom evaluation, was given to the president of the university.

2.5. Data Analysis

Statistical analyses were performed in SPSS (Version 27 for Windows). Specifically, we performed descriptive and comparative analyses. The chi-square test (χ2) was also applied to study whether the phenomena under study were significantly associated with the sex and age of the students.
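For reference, the chi-square statistic used here has the standard textbook form below (a general formulation, not a detail reported by the authors): for a contingency table with r rows and c columns,

```latex
\chi^2=\sum_{i=1}^{r}\sum_{j=1}^{c}\frac{\left(O_{ij}-E_{ij}\right)^{2}}{E_{ij}},
\qquad E_{ij}=\frac{R_i\,C_j}{n},
\qquad \mathrm{df}=(r-1)(c-1),
```

where O_ij is the observed count in cell (i, j), E_ij the count expected under independence, R_i and C_j the row and column totals, and n the sample size. The reported p-values come from comparing the statistic against the chi-square distribution with the corresponding degrees of freedom.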

3. Results

This research focuses on characterizing, from a quantitative approach, the perceptions that students at a university in Lima (Peru) have of the tests used by their professors in the context of emergency virtual education due to the health crisis caused by COVID-19. This is because each professional training program would present its own habits, idiosyncrasies, and evaluative practices, associated with its own pedagogical traditions. These degree-specific emphases (and those common to all programs) would manifest themselves through the teaching staff, who have the power to decide which tests they use (and which they exclude) in their classrooms.

3.1. Tests and Examinations Used according to Degree Program

In view of the above, Table 2 shows that students indicated having mostly taken response-selection tests (the so-called "objective tests" in the Peruvian educational system). Law students were the ones who reported sitting these tests least frequently (37.5%); students of all other programs considered that these were the tests most used in their assessment. Psychology (91.7%) and nutrition (88.9%) students considered these multiple-choice (MC) tests to be the most used by their professors.
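To make the construction of tables such as Table 2 concrete, here is a minimal sketch on hypothetical data (the authors worked in SPSS; the long-format layout, column names, and counts below are assumptions for illustration). Because question 6 of the questionnaire allowed multiple selections, each row records one test type reported by one student, and row percentages need not sum to 100:

```python
# Sketch on hypothetical data: deriving "f (%)" cells like those in Table 2.
import pandas as pd

# One row per (student, reported test type); question 6 allowed several picks.
responses = pd.DataFrame({
    "student_id": [1, 1, 2, 3, 3, 3],
    "program":    ["Law", "Law", "Law", "Nursing", "Nursing", "Nursing"],
    "test_type":  ["Essay", "Oral", "Essay", "MC", "Essay", "Case"],
})

# Percentage base: number of distinct students in each program.
n_per_program = responses.drop_duplicates("student_id").groupby("program").size()

# f: students in each program who reported each test type.
f = pd.crosstab(responses["program"], responses["test_type"])

# %: share of the program's students (rows need not sum to 100).
pct = f.div(n_per_program, axis=0).mul(100).round(1)
print(f, pct, sep="\n\n")
```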

3.2. Types of Multiple-Choice Tests

Additionally, we sought to understand students' appreciation of the multiple-choice (MC) tests given by their professors, by degree program. For this item, whose results are shown in Table 3, students were asked about the way in which these tests were applied by their teachers (in general terms). It was found that the most frequently given tests combined memorization items (demanding that irrelevant data be memorized) with items oriented towards thinking or reasoning. These were followed by tests with items for understanding and thinking (reasoning) and, to a much lesser extent, exclusively memorization-based tests, which show much lower percentages (from 0.0% to 9.8%) across all degree programs. It is worth noting, as something positive, that no student of psychology, nutrition, or law reported exclusively memorization-based response-selection tests in their university courses. As for MC tests oriented more exclusively towards thinking or reasoning, students perceived that these were given to a greater extent in nutrition (44.4%), administration (39.4%), and law (37.5%).
However, these results (especially those presented in Table 3) should be considered with caution, since students' perceptions of the multiple-choice tests depend on their categories, knowledge of the subject matter, and thoughts about them. It is likely that, at the time of answering the questionnaire, many students were reflecting on these issues for the first time. In any case, and in spite of these epistemological tensions, these perceptions tell us how students regarded the tests they had to take.

3.3. Preferences on How to Be Assessed

We were also interested in the preferences of students in all degree programs between two types of instruments for evaluating their learning throughout their professional training (Table 4): essay tests (ET) and multiple-choice tests (MC). We found that university students' preferences pointed more to MC than to ET. This preference for MC was present in all degree programs except accounting (26.1%) and marketing (43.5%). Nursing (68.3%) and pharmacy (65.5%) students were the ones who preferred MC to the greatest extent. Additionally, we found that the proportions of law students preferring multiple-choice and essay tests were the same (50%), which is striking, as theirs is a career that demands a great deal of reading, discussion, debate, and handling of sources, which had led us to conjecture that these students would prefer essay tests to a greater extent. We had the same expectation with respect to psychology students.
Given this, we have no further information to contextualize these results in terms of the actual tests and exams that had to be taken in these emergency virtual classes.

3.4. Perceptions about the Most Important Tests

However, when students were asked about the most important tests for their university education, we found that essay tests stood out in all the degree programs (Table 5). In other words, MC tests are preferred in almost all degree programs even though students know they are not as important, or are the least relevant, for their preparation for the professional field. In this regard, it should be noted that 85% of the students at this university come (according to data from its academic office) from the public schools with the lowest learning results in Metropolitan Lima (Peru), as measured by the tests applied by the Ministry of Education. As is well known, students who graduate from these basic education schools evidence poor study habits, lower reading comprehension, and fewer resources to express themselves adequately in writing. Thus, this paradoxical finding reveals a clear understanding that essay tests (ET) are the most important; however, they are not preferred because students anticipated problems in answering them successfully, given the educational deficiencies of the high schools or middle schools they attended.
In more specific terms, we found that law (87.5%), administration (84.8%), and marketing (82.6%) students were those who most valued essay tests as the most important or relevant for professional training. The other degree programs whose students perceived essay tests favorably were pharmacy (61.8%) and accounting (69.6%).
We also wanted to know whether age (ordinal variable) was associated with the preference for response-selection tests or essay tests. There was no association between these variables (χ2 = 0.430, df = 1, p = 0.512). Similarly, we did not find a significant association between students' age and their perception of which tests are the most relevant for professional training (χ2 = 1.566, df = 1, p = 0.211). In this sense, the marked preference for multiple-choice tests and the belief that essay tests are the most relevant for professional training were not associated with age: in all age groups, the perceptions are similar.
Along the same lines, we found that the sex of the students was not associated with preferences for either multiple-choice or essay tests (χ2 = 3.175, df = 2, p = 0.204). Nor was there an association between the sex of the students and their perception of essay tests (ET) as the most important for their professional training (χ2 = 1.500, df = 2, p = 0.472). These last two analyses are consistent with the idea that the preference for multiple-choice tests, and the view of essay tests as the most important, are associated with previous school experiences (unsatisfactory in terms of communicative competencies) and the search for maximum ease. Alternatively, these results may reflect students anticipating (out of habit and developed abilities) that they would do worse with essay tests, or would require a different approach to studying for them.
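As an illustration of the kind of association test reported in this section, the sketch below runs a Pearson chi-square test on a hypothetical sex-by-preference table with SciPy (this is not the authors' SPSS workflow, and the counts are invented for the example):

```python
# Sketch: Pearson chi-square test of association on hypothetical counts.
from scipy.stats import chi2_contingency

# Rows: sex (male, female); columns: preferred test (essay, multiple-choice).
observed = [
    [52, 68],
    [55, 65],
]

# correction=False gives the plain Pearson statistic (no Yates correction),
# which is the statistic typically reported from SPSS output.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")
# A p-value above 0.05, as in the results above, indicates no significant
# association between the two variables.
```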

4. Discussion

This quantitative study aimed to explore the perceptions and preferences regarding classroom assessment of students at a private for-profit university that mostly serves vulnerable populations. According to this university's own figures, 70% of students live in poverty and 85% of them completed their high school studies in public schools in the peripheral areas of the city of Lima. These data are very important because we are probably dealing with a student population (represented in the sample) with largely unsatisfactory school experiences and poor cultural stimuli (books, reflective dialogues, and student role models). This has most probably generated the paradox identified here, whereby multiple-choice exams (MC) are preferred while, at the same time, essay tests (ET) are recognized as the most appropriate for university education. It is possible that, for these students, having to write an answer is more complicated because, most likely, they had few opportunities in basic education to develop this competence. On the other hand, answering a multiple-choice exam (even one of simple memorization) would be more within reach, given the low cognitive demand of these tests; even more so when much of the Peruvian educational system still has professors focused on having their students retain large amounts of data, dates, principles, and formulas in their memory [2,7,23,29].
In addition, it was clearly identified that the tests given by the university professors in this study were mostly response-selection (MC) tests and, within these, the emphasis fell on tests that combine memorization items with items that explore thinking or reasoning [23]. However, as noted in the results, the degree program variable was very revealing, in the sense that it allowed us to identify differentiated perceptions. This is most likely related to the existence of different professional cultures within each university, which would explain the ways of approaching teaching, assessment, and the tasks entrusted to students [35,36].
In any case, it will be necessary to develop qualitative studies aimed at understanding the perceptions, attributions, and reasons professors have for using some types of tests and not others. To know why certain tests are chosen, we would have to consider the specialty of the professors teaching these courses since, in the current study, we found that not only do students experience the assessment process differently, but so do professors [37]. Additionally, other student variables or characteristics should be considered in future studies in order to discriminate this phenomenon more clearly (all the more so given that the degree program was the only relevant variable we found). For example, it would be advisable to consider the semester and the type of subject (general education versus specialization), which would make it possible to identify and understand more precisely the perceptions of the exams given in the classroom [37]. It would also be pertinent to study university professors themselves, to learn about the tests they give and their reasons for choosing them for classroom work.
On the other hand, we must understand that during the pandemic some universities in Peru (especially for-profit ones) increased the number of students in virtual classrooms. This fact, which has not been investigated in depth, was probably due to the need to compensate for the economic crisis that many Peruvian universities went through in the face of the large-scale student dropout that occurred mainly during the first year of the COVID-19 health crisis.
As a consequence of the above (working with classrooms of 80 or 100 students), it is very likely that a significant number of professors opted for multiple-choice exams instead of essay tests. It should also be noted that Peru was, according to the World Health Organization, the country with the highest COVID-19 mortality rate per capita in the world. This undoubtedly affected students and their university professors alike; the emotional impact was compounded by the economic one, along with the shortage of medical oxygen, medical supplies, and beds in public hospitals. It is therefore likely that this situation had some influence on the perceptions of the students who responded to the questionnaire, and also on the emphases of the tests given by their professors in the framework of emergency virtual education.

5. Limitations

It is suggested that future studies consider at least three aspects or dimensions in order to understand in greater depth the phenomenon that is the subject of this report. Firstly, it is necessary to analyze (by means of analytical rubrics or a checking system) the tests and exams used by university professors. Secondly, it would be interesting to explore the thoughts (beliefs, implicit theories, and/or attitudes) of university professors with respect to the assessment they practice. Thirdly, it is essential to consider the dedication and workload of these professors; in other words, to know the details of their work as academics in one or more universities. All this will most probably lead to case studies. It should be added that this study did not consider students' perceptions of the feedback received from university professors; it is especially recommended that this aspect be considered in future research in this area.
We also did not take into account the academic achievement tests that the professors of the sampled students provided in the virtual classrooms. Likewise, we did not have access to students' grades or scores on the tests administered by professors, as this information is confidential under the Peruvian legal framework. However, it is important to clarify that this study focuses on perceptions of the exams given by professors in their classes and on students' preferences in terms of assessment [22].

6. Conclusions

Finally, it is important to note that there are no previous studies similar to this one that would have allowed reliable comparisons indicating whether assessment in emergency e-learning is the same as, or different from, that practiced before the pandemic. Based on the above, we can hypothesize that assessment suffered a setback during this health crisis at the university considered in this research [38]. In this context, our study revealed that: (1) multiple-choice tests were favored in some degree programs; (2) there are differences in the assessment cultures of each degree program; (3) gender and age are not linked to preferences or to perceptions of the most important tests for university education; and (4) students preferred multiple-choice tests even though they did not consider them to be the most important.

Author Contributions

Conceptualization, I.M.-I. and Y.L.P.-C.; methodology, I.M.-I.; software, K.O.V.-C.; validation, G.M.Z.A., I.M.-I. and Y.L.P.-C.; formal analysis, I.M.-I.; investigation, I.M.-I.; resources, K.O.V.-C. and Y.L.P.-C.; data curation, G.M.Z.A.; writing—original draft preparation, I.M.-I.; writing—review and editing, I.M.-I. and G.M.Z.A.; visualization, Y.L.P.-C.; supervision, I.M.-I.; project administration, Y.L.P.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the research ethics committee of the Faculty of Health Sciences of the UMA.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data supporting the results reported in this study are stored confidentially in a repository and are available on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Questionnaire

Questionnaire about Evaluation in the University

Dear university student.
I express my warmest greetings and hope you are in good health.
I am currently developing a study about the way you have been assessed at the university in these times of emergency virtual education. For this reason, we ask you to answer this anonymous questionnaire with complete sincerity. Your answers will be used for strictly scientific purposes.
If we have your consent, you may start now; otherwise, you may simply leave this page.
Thank you very much for your participation!
  • Iván Montes-Iturrizaga PhD
  • Researcher
  • Personal data section
    • What is your sex?
      • ( ) Male.
      • ( ) Female.
    • How old are you (put your age in numbers)?
    • In which province were you born?
    • What is your current situation?
      • ( ) Study only.
      • ( ) Study and work.
    • What degree program are you studying at the university?
      • ( ) Administration.
      • ( ) Marketing.
      • ( ) Accounting.
      • ( ) Law.
      • ( ) Nursing.
      • ( ) Pharmacy.
      • ( ) Nutrition.
      • ( ) Psychology.
  • Questions about assessment
    6.
    In what ways were you assessed in university this semester? Check all that apply to your experience.
    • ( ) Multiple-choice test.
    • ( ) Essay test.
    • ( ) Oral test (live or by sending in a video).
    • ( ) Test where you had to respond to a case.
    7.
    How were the multiple-choice tests that you were given at university this semester? (Mark only one alternative)
    • ( ) Memorizing because you had to retain a lot of information.
    • ( ) To think and reflect on each question.
    • ( ) They were a mixture between memorizing and thinking and reflecting.
    8.
    Which of these two tests would you like to see more of in university?
    • ( ) Multiple-choice test.
    • ( ) Essay test.
    9.
    Which of these two types of tests do you consider most important for your university education?
    • ( ) Multiple-choice test.
    • ( ) Essay test.

References

  1. Joya, M.Z. La evaluación formativa, una práctica eficaz en el desempeño docente. Rev. Sci. 2020, 5, 179–193. [Google Scholar] [CrossRef]
  2. Montes-Iturrizaga, I. La evaluación en la universidad en tiempos de la virtualidad: ¿retroceso u oportunidad? Rev. Signo Educ. 2020, 29, 26–27. [Google Scholar]
  3. Montes-Iturrizaga, I. Evaluación Educativa: Reflexiones Para el Debate; UDL Editores: Madrid, Spain, 2020. [Google Scholar]
  4. Jáuregui, R.; Carrasco, L.; Montes, I. Evaluando, evaluando: ¿Qué Piensa y Qué Hace el Docente en el Aula, Informe Final de Investigación; Universidad Católica Santa María: Arequipa, Peru, 2003. [Google Scholar]
  5. Rosales, C. Evaluar es Reflexionar Sobre la Enseñanza, 3rd ed.; Narcea Ediciones: Madrid, Spain, 2014. [Google Scholar]
  6. Popham, W.J. Evaluación Trans-Formativa: El poder Transformador de la Evaluación Formativa; Narcea Ediciones: Madrid, Spain, 2013. [Google Scholar]
  7. Román, J.A. La educación superior en tiempos de pandemia: Una visión desde dentro del proceso formativo. Rev. Latinoam. Estud. Educ. 2020, 50, 13–40. [Google Scholar] [CrossRef]
  8. Stake, R. The countenance of educational evaluation. Teach Coll. Rec. 1967, 68, 523–540. [Google Scholar] [CrossRef]
  9. Garbizo, N.; Ordaz, M.; Lezcano, A.M. El profesor universitario ante el reto de educar: Su formación integral desde la Responsabilidad Social Universitaria. REXE. Rev. Estud. Exp. Educ. 2020, 19, 151–168. [Google Scholar] [CrossRef]
  10. Pino-Calderón, J.d.P. La formación del maestro y la escuela del desarrollo. VARONA. Rev. Científico-Metodológica 2015, 60, 13–18. [Google Scholar]
  11. Montes-Iturrizaga, I. Apreciaciones en torno a la propuesta de nueva Ley Universitaria. Rev. Signo Educ. 2014, 23, 26–28. [Google Scholar]
  12. Al-Maqbali, A.-H.; Hussain, R.M.R. The impact of online assessment challenges on assessment principles during COVID-19 in Oman. J. Univ. Teach. Learn. Pract. 2022, 19, 73–92. [Google Scholar] [CrossRef]
  13. Jarrín-García, G.H.; Patiño-Campoverde, M.M.; Moya-Lara, I.N.; Barandica-Macías, A.E.; Bravo-Zurita, V.E. Prevalencia del Síndrome de Burnout en docentes ecuatorianos de educación superior en tiempos de pandemia COVID-19. Polo Del Conoc. 2022, 7, 183–197. [Google Scholar] [CrossRef]
  14. Giler-Zambrano, R.M.; Loor-Moreira, G.G.; Urdiales-Baculima, S.B.; Villavicencio-Romero, M.E. Síndrome de Burnout en docentes universitarios en el contexto de la pandemia COVID-19. Dominio De Las Cienc. 2022, 8, 70. [Google Scholar]
  15. Cortez Silva, D.M.; Campana Mendoza, N.; Huayama Tocto, N.; Aranda Turpo, J. Satisfacción laboral y síndrome de Burnout en docentes durante el confinamiento por la pandemia COVID-19. Propósitos Y Represent. 2021, 9, e812. [Google Scholar] [CrossRef]
  16. Díaz-Pinzón, J.E. Análisis de los resultados de la prueba PISA 2018 en matemáticas para América. Rev. De Investig. Univ. Del Quindío 2021, 33, 104–114. [Google Scholar] [CrossRef]
  17. Taboada-Caro, M. Resultados de la PRUEBA PISA en el Perú: Análisis de la Problemática y Elaboración de una Propuesta Innovadora. Bachelor’s Thesis, Universidad de Piura, Piura, Peru, 2019. Available online: https://hdl.handle.net/11042/3949 (accessed on 17 December 2022).
  18. Friedman, A.; Blau, I.; Eshet-Alkalai, Y. Cheating and Feeling Honest: Committing and Punishing Analog versus Digital Academic Dishonesty Behaviors in Higher Education. Interdiscip. J. e-Ski. Lifelong Learn. 2016, 12, 193–205. [Google Scholar] [CrossRef]
  19. Rahman, M.A.; Novitasari, D.; Handrianto, C.; Rasool, S. Challenges In Online Learning Assessment During The COVID-19 Pandemic. Kolokium J. Pendidik. Luar Sekol. 2022, 10, 15–25. [Google Scholar] [CrossRef]
  20. Domínguez-Figaredo, D.; Gil-Jaurena, I.; Morentin-Encina, J. The Impact of Rapid Adoption of Online Assessment on Students’ Performance and Perceptions: Evidence from a Distance Learning University. Electron. J. e-Learn. 2022, 20, 224–241. [Google Scholar] [CrossRef]
  21. Slack, H.R.; Priestley, M. Online learning and assessment during the Covid-19 pandemic: Exploring the impact on undergraduate student well-being. Assess. Eval. High. Educ. 2022, 1–17. [Google Scholar] [CrossRef]
  22. Hassan, B.; Shati, A.A.; Alamri, A.; Patel, A.; Asseri, A.A.; Abid, M.; Al-Qahatani, S.M.; Satti, I. Online assessment for the final year medical students during COVID-19 pandemics; the exam quality and students’ performance. Oncol. Radiother. 2020, 16, 001–006. [Google Scholar]
  23. Montes-Iturrizaga, I.; Franco-Chalco, E. Formative learning assessment in the context of COVID 19 pandemic: Research from the perceptions of university students in Peru. In Proceedings of the 13th annual International Conference on Education and New Learning Technologies, Online conference, 5–6 July 2021. [Google Scholar] [CrossRef]
  24. Popham, W.J. Classroom Assessment, What Teachers Need to Know, 8th ed.; Pearson: Los Angeles, CA, USA, 2017. [Google Scholar]
  25. Ali, L.; al Dmour, N.A.H.H. The shift to online assessment due to COVID-19: An empirical study of university students, behaviour and performance, in the region of UAE. Int. J. Inf. Educ. Technol. 2021, 11, 220–228. [Google Scholar] [CrossRef]
  26. Universidad de Salamanca. Informe Encuesta Sobre el Impacto Académico de la COVID-19 en la Universidad de Salamanca. Salamanca, España. Available online: https://www.usal.es/informe-encuesta-impacto-academico-covid-19 (accessed on 27 December 2022).
  27. Vinet, S.; Casablancas, S.; Dari, N. La pandemia, las universidades y las prácticas de evaluación. Virtualidad Educ. Y Cienc. 2021, 12, 72–85. [Google Scholar]
  28. Álvarez, N.T.; Habib, L. Retos y Desafíos de las Universidades Ante la Pandemia de la COVID-19; Labýrinthos Editores: San Nicolás de los Garza, México, 2021; ISBN 978-607-99076-1-7. [Google Scholar]
  29. García-Peñalvo, F.J.; Corell, A.; Abella-García, V.; Grande, M. Online assessment in higher education in the time of COVID-19. Educ. Knowl. Soc. 2020, 21, 1–26. [Google Scholar] [CrossRef]
  30. Popham, W.J. Test Better, Teach Better: The Instructional Role of Assessment; Association for Supervision and Curriculum Development: Alexandria, VA, USA, 2003; ISBN 0-87120-667-6. [Google Scholar]
  31. Ferrada-Bustamante, V.; Ibarra-Caroca, M.; Vergara-Correa, D.; González-Oro, N.; Ried-Donaire, A.; Castillo-Retamal, F. Formación docente en TIC y su evidencia en tiempos de COVID-19. Saberes Educ. 2021, 6, 144–168. [Google Scholar] [CrossRef]
  32. Cañadas, L. Evaluación formativa en el contexto universitario: Oportunidades y propuestas de actuación. Rev. Digit. De Investig. En Docencia Univ. 2020, 14, e1214. [Google Scholar] [CrossRef]
  33. Paucca, N.; Cervantes, E.F.; Romaní, F.G.; Carrillo, J.W.; Cornejo, M. Evaluación remota desde la perspectiva de los estudiantes universitarios de Lima Metropolitana. Rev. De Investig. En Cienc. De La Educ. Horiz. 2022, 6, 319–331. [Google Scholar] [CrossRef]
  34. Montes Iturrizaga, I.; Sime, L.M.; Salcedo, E.; Soria, E.; Briceño, D. Investigación Educativa: Técnicas Para el recojo y Análisis de la Información. Master’s Thesis, Pontificia Universidad Católica del Perú, Escuela de Posgrado, Lima, Perú, 2021. Available online: https://repositorio.pucp.edu.pe/index/handle/123456789/182800 (accessed on 23 January 2023).
  35. Vaccarezza, G.; Sánchez, I.; Alvarado, H. Caracterización de prácticas pedagógicas en carreras de ingeniería civil de universidades de Chile. Espacios 2018, 39, 15. [Google Scholar]
  36. Cattaneo, F. Satisfacción Respecto de la Carrera Elegida, Según Orientación Vocacional y Rendimiento Académico en Estudiantes Universitarios de Mendoza. Bachelor’s Thesis, Pontificia Universidad Católica de Argentina, Mendoza, Argentina, 2022. Available online: https://repositorio.uca.edu.ar/handle/123456789/14040 (accessed on 15 November 2022).
  37. Wayne, D.B.; Green, M.; Neilson, E.G. Medical education in the time of COVID-19. Sci. Adv. 2020, 6, eabc7110. [Google Scholar] [CrossRef]
  38. Elzainy, A.; El Sadik, A.; Al Abdulmonem, W. Experience of e-learning and online assessment during the COVID-19 pandemic at the College of Medicine, Qassim University. J. Taibah. Univ. Med. Sci. 2020, 15, 456–462. [Google Scholar] [CrossRef] [PubMed]
Table 1. Distribution of the sample by degree program.

Degree Program     f     %
Administration     33    13.8
Marketing          23    9.6
Accounting         23    9.6
Law                8     3.3
Nursing            41    17.1
Pharmacy           55    22.9
Nutrition          9     3.8
Psychology         48    20.0
Total              240   100.0
Table 2. Types of tests given by professors from the students' perception, by degree program (f (%); students could report more than one type).

Degree Program     Multiple-Choice Test   Essay Test   Oral Test    Case-Based Test
Administration     24 (72.7)              26 (78.8)    14 (42.4)    19 (57.6)
Marketing          13 (59.1)              17 (77.3)    4 (18.2)     11 (50.0)
Accounting         17 (73.9)              19 (82.6)    7 (30.4)     12 (52.2)
Law                3 (37.5)               7 (87.5)     3 (37.5)     4 (50.0)
Nursing            36 (87.8)              25 (61.0)    18 (43.9)    21 (51.2)
Pharmacy           48 (87.3)              37 (67.3)    23 (41.8)    24 (43.6)
Nutrition          8 (88.9)               7 (77.8)     3 (33.3)     2 (22.2)
Psychology         44 (91.7)              35 (77.8)    26 (54.2)    15 (31.3)
Table 3. Emphasis on multiple-choice tests perceived by students, by degree program (f (%)).

Degree Program     Memorizing + Thinking Test   Memorizing Test   Thinking Test
Administration     18 (54.5)                    2 (6.1)           13 (39.4)
Marketing          15 (65.2)                    2 (8.7)           6 (26.1)
Accounting         16 (69.6)                    1 (4.3)           6 (26.1)
Law                5 (62.5)                     0 (0.0)           3 (37.5)
Nursing            25 (61.0)                    4 (9.8)           12 (29.3)
Pharmacy           41 (74.5)                    1 (1.8)           13 (23.6)
Nutrition          5 (55.6)                     0 (0.0)           4 (44.4)
Psychology         29 (60.4)                    0 (0.0)           19 (39.6)
Table 4. Students' preferences regarding the type of exams, by degree program (f (%)).

Degree Program     Essay Test   Multiple-Choice Test
Administration     15 (45.5)    18 (54.5)
Marketing          13 (56.5)    10 (43.5)
Accounting         17 (73.9)    6 (26.1)
Law                4 (50.0)     4 (50.0)
Nursing            13 (31.7)    28 (68.3)
Pharmacy           19 (34.5)    36 (65.5)
Nutrition          4 (44.4)     5 (55.6)
Psychology         22 (45.8)    25 (52.1)
Table 5. Perceptions of students regarding the most important types of tests for their university education, by degree program (f (%)).

Degree Program     Essay Test   Multiple-Choice Test
Administration     28 (84.8)    5 (15.2)
Marketing          19 (82.6)    4 (17.4)
Accounting         16 (69.6)    7 (30.4)
Law                7 (87.5)     1 (12.5)
Nursing            28 (68.3)    13 (31.7)
Pharmacy           34 (61.8)    21 (38.2)
Nutrition          6 (66.7)     3 (33.3)
Psychology         31 (64.6)    16 (33.3)