**E-Learning as a Factor Optimizing the Amount of Work Time Devoted to Preparing an Exam for Medical Program Students during the COVID-19 Epidemic Situation**

**Magdalena Roszak <sup>1,†</sup>, Bartosz Sawik <sup>2,3,4,\*,†</sup>, Jacek Stańdo <sup>5</sup> and Ewa Baum <sup>6</sup>**


**Abstract:** The COVID-19 pandemic had a huge impact on the learning and teaching processes, particularly in healthcare education and training, because of the central role of direct student–patient interaction. Replacing the traditional form of organization and implementation of knowledge evaluation with its web-based equivalent on an e-learning platform optimizes the whole didactic process, not only for the unit carrying it out but, above all, for students. This research focuses on the effectiveness of the application of e-learning for computer-based knowledge evaluation and on optimizing exam administration for students of medical sciences. The proposed approach is considered from two perspectives: that of the providers of the evaluation process, that is, the teaching unit; and that of its recipients, that is, the students.

**Keywords:** e-learning; digital training; healthcare education; innovation in teaching; clinical teaching; e-exams

**Citation:** Roszak, M.; Sawik, B.; Stańdo, J.; Baum, E. E-Learning as a Factor Optimizing the Amount of Work Time Devoted to Preparing an Exam for Medical Program Students during the COVID-19 Epidemic Situation. *Healthcare* **2021**, *9*, 1147. https://doi.org/10.3390/healthcare9091147

Academic Editors: José João Mendes, Vanessa Machado, João Botelho and Luís Proença

Received: 25 June 2021; Accepted: 30 August 2021; Published: 2 September 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **1. Introduction**

During the COVID-19 pandemic and mandatory lockdown, academic institutions shifted to distance learning. The pandemic had a massive impact on the learning and teaching processes, especially in healthcare education, due to the predominant role of direct student–patient interaction. Replacing the traditional form of organization and implementation of knowledge evaluation with its web-based equivalent on an e-learning platform optimizes the whole didactic process, not only for the unit carrying it out but, above all, for students. This research focuses on the effectiveness of the application of e-learning for computer-based knowledge evaluation and on optimizing exam administration for students of medical sciences. The proposed approach is considered from two perspectives: that of the providers of the evaluation process, that is, the teaching unit; and that of its recipients, that is, the students. Higher education institutions worldwide were forced to accelerate the introduction of web-based learning methodologies in areas where these were not the main core, such as clinical teaching. This paper presents the current trends and new challenges that emerge from this new e-learning environment, focusing on its potential to revolutionize healthcare education and exploring how it may help to better prepare future healthcare professionals for their daily practice. The process of optimization through e-learning should become a natural part of the didactic process conducted in every subject at all types of higher education institutions, including medical universities.

For more than a decade, medical schools have been working to transform pedagogy by eliminating or reducing lectures; using technology to replace or enhance anatomy and laboratory instruction; implementing team-facilitated, active, and self-directed learning; and promoting individualized and interprofessional education [1–3]. The spread of the novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) accelerated the adoption of online teaching and examination in medical schools around the world. Many authors have recently considered these issues; for instance, a recently published paper by Bianchi et al. on the effects of the SARS-CoV-2 pandemic on medical education presented considerations and tips for the Italian medical education system under the new circumstances of COVID-19 [4]. While the COVID-19 outbreak has been one of the most significant challenges faced in the history of medical education, it has also provided an impetus to develop innovative teaching practices, bringing unprecedented success in allowing medical students to continue their education, for instance in ophthalmology, despite these challenges [5]. Different types of online courses are currently provided to develop and implement an effective learning process for medical students. Paper [3], by Rose, presents and discusses the challenges in medical students' education in the time of COVID-19. The author also pointed out that additional unknown academic issues will require attention, including standardized examinations when testing centers are closed, the timeline for residency applications for current third-year students, and the ability to meet the requirements for certain subspecialties prior to applying to residency [3].
Another interesting example of the educational challenges during this pandemic can be found in medical and surgical nursing, a core course in baccalaureate nursing programs that requires active and effective teaching and learning strategies to enhance students' engagement [6]. The unprecedented, abrupt shift to remote online learning within the context of the national lockdown due to the 2019 coronavirus disease (COVID-19) highlights the importance of addressing students' preparedness in managing their first experiences with online learning [7]. Many authors have tried to explore medical students' and faculty members' perspectives of online learning during the COVID-19 era [8]. As examples, two recent papers, by Varvara et al. [9] and by Iurcov et al. [10], consider the impact of the pandemic on academic activity and health status among dentistry students in Romania [9] and the challenges of dental education during COVID-19 for undergraduate dentistry students in Italy [10]. At present, during the COVID-19 pandemic, e-learning has become a viable approach to technology in education that provides contemporary learners with authentic knowledge acquisition. As a practical contribution, the electronic examination (e-exam) is a novel approach within e-learning, designed to solve traditional examination issues. It is a combination of assorted questions, assembled by specialized software, designed to assess an individual's performance. Despite the intensive research carried out in this area, the completion of e-exams brings challenges, such as authentication of the examinee's identity and of the answered papers [11]. It is important to explore the factors affecting students' preference for remote e-exams, methods of course assessment/evaluation, factors related to students' exam dishonesty/misconduct during remote e-exams, and measures that can be considered to reduce this behavior [12].
This type of research has been carried out in many medical schools around the world to evaluate the experience of students at faculties of Medicine, Dentistry, Pharmacy, Nursing and Applied Medical Sciences regarding remote e-exams preferences and academic dishonesty during the pandemic [12].

#### **2. Technologies in Education**

Technology has always changed methods of learning and knowledge transfer. Generalized access to the Internet has brought about a revolution in learning and teaching. New, technology-based ways of publishing educational content now combine previously known methods with new elements that had no equivalent in the past [13–16]. Their emergence in education is determined by the application of modern digital technologies of sound and image recording and their integration with traditional text-based instruction [17,18]. The instant sharing of e-materials with education participants and their prompt updating by teachers is also of crucial significance. The evaluation of student knowledge and the learning process has also been revolutionized [19]. Online tests with a broad range of question types and automatic verification of answers are now available, as well as self-study tests with explanations and decision-making labyrinths, which encourage creative thinking [20–22]. The authors of papers [23–26] have published very interesting recent examples of the use of technological innovation in medicine. In paper [23], by Guiter et al., the authors present the development of a remote online collaborative medical school pathology curriculum and explain how students across several international sites, throughout the COVID-19 pandemic, could control the digital slides and offer their own diagnoses, followed by group discussions. In publication [24], by Guadalajara et al., the authors demonstrate whether it is possible to create a technological solution to flexibly self-manage undergraduate general surgery practices within hospitals. In this interesting research study, it was proven that the use of innovative educational technology can be efficient. A mobile-learning application designed as a manager tool for educational opportunities might be very helpful in promoting self-directed learning, flexible teaching, and bidirectional assessments. The authors also note a limitation concerning teachers who employ a personal teaching style and may not need either checkerboards or such a tool. The presented solution [24] supports teaching at hospitals during a pandemic without checkerboards.
In paper [25], by Bianchi et al., the authors concentrated on an evaluation of the effectiveness of digital technologies during anatomy learning in nursing school. Nicholson et al. also considered anatomy in paper [26], but as an interactive, multi-modal anatomy workshop. The authors proved that an interactive workshop improved attendees' examination performance and promoted engaged enquiry and deeper learning. This tool accommodates varied learning styles and improves self-confidence, which may be a valuable supplement to traditional anatomy teaching [26].

#### **3. Changes and Challenges in the Process of Education at an Academic Level**

At present, students take full advantage of new digital technologies, both in their daily lives and in the process of formal and informal education [27,28]. In higher education institutions where teaching and learning are pursued only in the form of traditional practical and laboratory classes held in classrooms, learners do not find the means of information transfer that they know from the Internet. This contributes to a decrease in effective memorization and generally reduced motivation to learn [29–31]. Distance education is a new method of working with students, which is becoming more crucial in current academic education, particularly in the face of the COVID-19 epidemic [32–34].

Technological progress inevitably leads to the implementation of up-to-date technologies in distance education at every level, including continuing education [17,35–37]. Students particularly favour interactive online courses, as they seem to produce better effects than traditional methods in terms of knowledge acquisition [38–41]. The numerous advantages that online learning offers lead students to turn to the Internet and multimedia sources of knowledge more than to traditional textbooks [42,43]. Therefore, it seems appropriate to implement online learning and provide access to multimedia materials that include reliable educational content, which can replace traditional classes. The application of the methods and techniques of distance learning can be a source of competitive advantage for a school and can considerably contribute to the quality and efficiency of contemporary student education [44,45].

The methods and techniques of distance education are commonly applied in medical schools, and scientific reports confirm their comparable or even greater effectiveness in comparison with traditional forms [16,37,38,44]. Acquiring skills in virtual reality will translate to a higher quality of medical procedures being administered to patients.

To date, most online training courses have been intended for doctors (58%), followed by nurses, pharmacists and stomatologists [46]. It should be remembered that e-learning is not exclusive to higher education [47–49] but is also used for courses or training that future graduates of medical schools will attend, broadening their knowledge after their medical studies [50]. A lack of experience with participating in online education is a burden for a graduate, as there is no easy way to gain such skills outside of school. Thus, distance learning allows students to develop additional competences: not just digital ones but also soft competences [51]. All of this facilitates the development of skills such as collaborative work, time management and problem-solving, as well as encouraging creativity and flexibility. Consequently, embracing state-of-the-art technologies may result in a growing number of better-educated graduates who can adapt to the changing labour market and are interested in positions that present nontypical professional and scientific challenges.

Before the pandemic, distance education in medical universities, due to its character, was pursued mainly as part of a hybrid system, in which part of the learning process takes place in direct contact with the teacher in the classroom, and the other part takes place online [39,40,48]. In the case of practical classes, virtual education is combined with supervised hands-on practice, performed on patients in a hospital or medical simulation centres. Such is the nature of most courses taught at medical universities.

Additionally, education through simulation is becoming increasingly popular in the medical academic environment. It is among the best teaching methods, enabling the re-creation of real situations in risk-free conditions. Decision-making games can be used successfully in the educational process of future medical staff. For example, one study [52] aimed to create a didactic computer program, "Trauma", analyze its impact on the knowledge of students of the medical rescue program, and evaluate the attractiveness of classes conducted using this method. The results show that the use of the "Trauma" program in didactics improved the knowledge and skill levels of the participating students in the field of trauma patient treatment. Students assessed the classes in which the program was used as interesting, and the vast majority of respondents would like to participate in such classes again [52].

There are few reports concerning distance education in medical schools in Poland, especially its application in teaching and learning, as well as in evaluation, comprising credits and examinations [19–21,39,40,53]. It can, therefore, be concluded that, prior to the outbreak of the COVID-19 pandemic, distance learning was not common, or its scope was limited. The present paper is a contribution to an academic discussion on e-learning for the basic sciences, and particularly its use to optimize the amount of time devoted to preparing an exam for medical programs. It looks at two categories: its usefulness for the educational institution and for the recipients of the process, in this case medical university students. The scope of the article involves a comparison of the amount of work time needed before an examination administered entirely in a computer-based form with the amount needed before one given in a traditional paper-based form.

#### **4. Data and Methods**

The data collected in the article present the results of research into the working time and organization of the electronic (on an e-learning portal) and traditional (without e-learning technologies) versions of the evaluation of medical knowledge, expressed in minutes (clock hours). The statistical analysis is based on descriptive statistics with the use of Excel [54] and the R language [55]. The analysis shows the sum of the working time (clock hours) according to the authors' own formulas describing the same process, including a way to build databases for knowledge evaluation. There was no need to analyze the collected data using statistical tests, as the complete data were compared.

The research conducted by the authors is very demanding, due to the labor-intensive preparation of databases, the level of technological competence required, and the slower implementation of e-learning in medical education, which differs significantly from education in other fields, together with its use by a large group of students from one year. For this reason, standard educational theories were insufficient and could not fully meet all the conditions for the study and the stated goal of the authors' analyses. It was necessary to develop the authors' own procedures, accounting for the e-learning methodology.

The research necessary to carry out an analysis of the optimization of knowledge evaluation was conducted in the Department of Pathophysiology in cooperation with the Department of Computer Science and Statistics of Poznan University of Medical Sciences, Poland (PUMS) in the academic years of 2009–2019. Academic teachers (knowledge supply), technical and administrative staff (organizational support) and medical e-learning experts were involved in this research. An electronic evaluation of knowledge was conducted on the Online Learning and Training (OLAT) e-learning portal under an open source license.

The analysis of working time and the organization of electronic knowledge evaluation was carried out on 333 students in their second year of medical studies, in the preclinical subject of pathophysiology, in the 2018–2019 academic year. All of these students were in the same year of study and completed the subject at the same time.

The analysis of teachers' working time in the preparation of substantive content in pathophysiology, compliant with the international standards for testing and assessment [19,20], the Question and Test Interoperability (QTI) specification, was carried out in the 2015–2016 academic year. The same team of teachers and support employees participated in the organization and implementation of the evaluation, delivered both traditionally and with the use of an e-learning portal. This made it possible to compare both versions of implementing the evaluation of knowledge in pathophysiology for a large group of students at the same time.

#### **5. Preliminary Conditions for the Analysis of Work Time Devoted to Administering Examinations in Medical Sciences**

The analysis of the amount of time devoted to the evaluation of students' performance requires data on the distribution of tasks necessary to set a regular exam and a paper-based credit test, in comparison with their electronic counterparts on an e-learning platform. The organization of the evaluation of students' knowledge usually involves teachers, supported by administrative and/or technical-engineering staff. Student evaluation conducted in large groups in the traditional form requires cooperation between these staff members in the auditorium or lecture hall in which the exam or credit test is held; their work is purely organizational, not substantive.

The analysis of the amount of time devoted to the evaluation of students' knowledge in the medical program was conducted using the subject of pathophysiology as an example. This subject is taught in the 2nd year of studies, where the number of students is very high and often varies between 200 and 400. This serves to demonstrate the usefulness of e-learning for the evaluation of students' knowledge. The example presents how it is carried out on an e-learning platform, which complies with the international standards for testing and assessment [19,20], Question and Test Interoperability (QTI).

The study was conducted on a sample of one grade-level group, comprising 333 students taking the course over one semester of an academic year at the Department of Pathophysiology in PUMS [53].

#### **6. Implementation of Evaluation of Knowledge in Pathophysiology**

Continuous assessment tests (benchmark tests) given throughout the course, as well as the final test, had to be administered on the premises of the university, according to the act on studies pre-COVID [56].

The course in pathophysiology includes three tests:


Lectures in the subject also end with a summative test, which constitutes either part of the final exam or the credit required for course completion.

The analysis of the process of knowledge evaluation does not account for retakes (two additional attempts) to which students who fail continuous assessment are entitled. Similarly, the report does not consider retake final exams, which are available to those who fail the final exam. Sometimes a need arises to organize a committee credit for students who do not pass the course according to the above rules, or a committee exam granted by the Dean following the consideration of an individual application submitted by the student or teacher.

Following the School Regulations, there must be two dates for the exam, settled between the student representative and the examiner. Additionally, a so-called pre-term exam date may be established, which increases the number of exam dates. Consequently, the minimum number of dates scheduled for the first-attempt examination is three. Every student independently decides on which day he or she wants to take the exam, considering his or her credit and exam calendar. Still, other individual cases have to be noted, when a student or a group of students requests a date different from those already scheduled, which may be due to a fortuitous event or an individual organization of studies. As is apparent from the above, the evaluation of knowledge is somewhat burdensome for the unit responsible for the teaching process and the exam session. It requires excellent organization and flexibility, reconciling teachers' available dates with students' requests and with the availability of the halls where the evaluation is to take place. Accommodating all these aspects is a tough challenge for those coordinating the teaching process in a given department.

The university runs a continuous examination session, covering a period of one or more years of study, during which exams can be taken on any date. A regular two-week exam session, held directly after the end of the term for all subjects, is unworkable here, not least for organizational reasons. Such a session is hardly feasible, especially for students of medical programs, where year groups are very large.

#### **7. Traditional Examination vs. an E-Exam on an E-Learning Platform**

Table 1 presents a comparison of the organization and implementation of knowledge evaluation, using a traditional paper version and an online evaluation. This is a complement to earlier classifications and comparisons, compiled at Poznan University of Medical Sciences in the years 2009–2013 [19], using the example of courses in pathophysiology, medical didactics and andragogy, biostatistics, mathematics or information technology, for different study programs.

Requirements that have to be met to conduct a computer-based evaluation of knowledge on an e-learning platform include:



**Table 1.** A comparison of tasks necessary for the implementation of student evaluation.

#### **8. The Amount of Time Devoted to Preparing a Computer-Based Exam, on the School Premises, Compared to Its Traditional (Paper-Based) Version**

Table 2 presents a sample application of the above comparison, including the analysis of the work time and tasks necessary to evaluate the basic science knowledge (pathophysiology) of 333 students attending the course on the school premises. The set comprises one final exam and three benchmark tests throughout the course: four tests in total. Benchmark tests are scheduled to be taken on predetermined dates during the classes. For some of the tasks, the amount of time is difficult to estimate, so they are described without a determined duration; however, staff members can estimate the duration on an individual basis, depending on the department in which they work.

**Table 2.** The amount of time devoted to preparing and administering knowledge evaluation using traditional paper-based tests vs. online tests on an e-learning platform.


| No. | Task | Paper-Based | Computer-Based |
|---|---|---|---|
| 10. | Test result delivery (44.4 min) | A: Entering the results into an e-form of student achievement (virtual student office) and notifying students. Assuming a required time of 2 s per result: 333 papers per test × 2 s = 666 s; with four tests, this amounts to 4 × 666 s = 2664 s = 44.4 min. There is a likelihood of committing errors while entering the results. | 0 min. |
| 11. | Archiving test papers on the premises | A: 333 papers per test; for 4 tests, 1332 papers are stored in cabinets on the premises. | None. All works are available online (see the example below). |
| 12. | Calculating the average score for crediting after three benchmark tests (15–30 min) | A: Advanced use of spreadsheets required. Result analysis concerning retakes and preparing a list of people who failed the course. | 0 min. Scores are automated on the platform; result lists can be imported as an .xls file. |
| 13. | Access to current and past results and papers | A: Searching for paper-based tests in department archives and/or in files on a computer disc is time-consuming. | A/T: Full access on the platform (example: figure). |
| 14. | Result availability for students | None. | Yes. All test results are available in one place on the platform (example: figure). |
| 15. | Course evaluation / study surveys | A: Difficult to conduct; rarely practised. (1) Survey/questionnaire printout necessary. (2) Significant amount of time required to enter hand-written answers into the file. (3) Poor hand-writing legibility. | Frequently practised; guarantees close to 100% student participation. (1) Automated process. (2) 0 min required to enter hand-written answers. (3) Everything is instantly recorded in a file on the platform, ready to be imported to an external data storage device. |
| | Total work time for the preparation and implementation of knowledge evaluation | 4245.4–4420.4 min = 70.8–73.7 h; assumed average = 72.2 h | 400 min = 6.7 h |
| | Recapitulation of significant aspects of the organization of knowledge evaluation | (1) Printout of large numbers of test papers. (2) Space required for archiving thousands of test papers. (3) Score review possible only on the premises, during the school's working time. (4) T. needs support from A. staff for distributing, assessing and archiving tests. (5) T. creates a few test versions for every course edition. Work is demanding, taking place under time and deadline pressure. | (1) More people physically involved in conducting the exam. (2) Access to computer rooms required. (3) Labour-intensive creation of the question bank in the first year of testing, which is capital for the future; subsequent course editions require only minor corrections and database updates. New questions created by T. are supplied continually over the whole year, without time and deadline pressure. |

A. stands for administrative/technical staff; T. stands for teacher.


Symbol *h* in the table stands for *one hour*—60 min.
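The per-row arithmetic in Table 2 can be checked directly. The following minimal sketch uses Python purely for verification; it is not part of the study's toolchain, which used Excel and R:

```python
# Row 10: entering 333 results at 2 s each, for four tests
seconds = 333 * 2 * 4
minutes = seconds / 60
assert seconds == 2664
assert round(minutes, 1) == 44.4

# Column totals: converting minutes to clock hours (h = 60 min)
low, high = 4245.4, 4420.4           # paper-based range from Table 2
assert round(low / 60, 1) == 70.8    # 70.8 h
assert round(high / 60, 1) == 73.7   # 73.7 h
assert round(400 / 60, 1) == 6.7     # computer-based: 400 min = 6.7 h
```

The figures reported in the table are internally consistent with these conversions.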

#### **9. The Analysis of the Amount of Time Needed to Create an Examination Question Bank**

The creation of an electronic database of questions used for exams and credit tests consists of two stages of work, carried out by teachers alone or teachers supported by administrative or technical-engineering staff. The preparation of questions involves the time spent working on the actual substantive content and then the time needed to save them in the QTI format. This is a standard format used in computer-based knowledge evaluation held on e-learning platforms [19,20].

Studies conducted in the years 2015–2016 [58] demonstrated that teacher working time devoted to writing 20 pathophysiology test questions, with 4–5 answer choices in an electronic form, varied between 40 and 150 min. Therefore, the time needed by pathophysiology teachers to develop a bank of 200 questions is 18.5 h of work and an additional 8 h to export the questions in QTI format. This is performed by entering the questions from a document created by the author (teacher), followed by parameterization. This relates to, for example, answer keys, the random selection of answers, ways of displaying the question on the screen, or a random selection of questions from the bank. This work is performed once, and it serves its purpose for many years in the future.
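As an illustration of the export and parameterization step described above, a single multiple-choice item in the QTI 1.2 format (the item format supported by OLAT-era platforms) has roughly the following shape. The question text, identifiers and scoring shown here are invented for illustration and are not taken from the PUMS question bank:

```xml
<questestinterop>
  <item ident="patho_q001" title="Illustrative pathophysiology item">
    <presentation>
      <material>
        <mattext>Illustrative question stem goes here.</mattext>
      </material>
      <response_lid ident="resp_1" rcardinality="Single">
        <render_choice shuffle="Yes"> <!-- random answer order, as parameterized -->
          <response_label ident="A"><material><mattext>Choice A</mattext></material></response_label>
          <response_label ident="B"><material><mattext>Choice B</mattext></material></response_label>
          <response_label ident="C"><material><mattext>Choice C</mattext></material></response_label>
          <response_label ident="D"><material><mattext>Choice D</mattext></material></response_label>
        </render_choice>
      </response_lid>
    </presentation>
    <resprocessing>
      <outcomes><decvar varname="SCORE" vartype="Decimal" minvalue="0" maxvalue="1"/></outcomes>
      <respcondition>
        <conditionvar><varequal respident="resp_1">B</varequal></conditionvar>
        <setvar varname="SCORE" action="Set">1</setvar>
      </respcondition>
    </resprocessing>
  </item>
</questestinterop>
```

The parameterization mentioned in the text (answer keys, randomization, display options) corresponds to attributes and elements of this kind, which is why the export step adds measurable time on top of composing the substantive content.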

Preparing a question bank of 200 questions for use in e-testing on an e-learning platform thus takes 26.5 h of work time. The analysis of the results in the above table for a computer-based test shows a calculated work time of 6.7 h, plus 26.5 h to create a question bank in the QTI standard format, giving a total of 33.2 h. Traditional testing takes longer: approximately 72.2 h, the result of the analysis of the testing process presented in Table 2, plus the time spent composing the actual test questions (item 3, Table 2). We assume that 200 questions have to be developed for the paper-based test versions, which takes teachers about 18.5 h, so the work time required in the case of a traditional test amounts to 90.7 h. Comparing the conventional form with the computer-based form, we can conclude that the latter is much more effective and beneficial for the educational institution, as it takes 37% of the time necessary to conduct a traditional form of testing. The time saved can be allotted to other teaching assignments, such as expanding the question bank at any time the teachers see fit.
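The totals above can be restated as a short calculation. This is only a restatement of the figures already given in this section, with Python used purely for illustration:

```python
# Computer-based: process time (Table 2) + one-off creation of a
# 200-question bank, including QTI export
computer_total = 6.7 + 26.5          # 33.2 h
# Paper-based: process time (Table 2) + composing 200 questions
paper_total = 72.2 + 18.5            # 90.7 h

ratio = computer_total / paper_total
print(round(computer_total, 1), round(paper_total, 1), round(ratio * 100))
# The computer-based form takes about 37% of the traditional work time.
```

Note that the 26.5 h bank-creation cost is a one-off investment, while the paper-based question-writing effort recurs with each course edition, a point developed in Section 9.1.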

The substantive content, depending on the nature of the subject, has a predicted lifespan of 5 to 7 years of teaching the course [19,58,59]. The amount of time required to create an electronic question bank in the first year of e-testing and evaluation is greater than that required for paper-based test versions, but it is spread over 5 to 7 years of use. The obtained values then have to be divided by at least 5, which gives the real hourly workload needed to develop an electronic question bank for a given academic year.

The size of the question bank developed for a given unit should depend on the number of groups, in which credit tests are administered as well as the number of course editions over one academic year. It is also important that a few test/exam dates are available per attempt, which is typical of the continuous examination session. The more students, test dates and course editions there are, the larger the base should be, to ensure an objective evaluation of student knowledge.

The amount of time required to expand the question bank in a database was also analyzed, and the results were calculated with regard to the work time needed for its development, along with the work time needed to prepare and carry out the testing process, comparing paper- and computer-based forms (Table 3).

Examining the results in Table 3, we can see that, in the case of a question bank comprising 1600 test questions, the amount of time needed to organize and prepare the process of knowledge evaluation is similar for paper-based and computer-based testing. When the number is increased to 2000 questions, the computer-based form requires more time than the traditional version: work time is longer by 14.5 h, an increase of 5.6% compared with the traditional form. Plans to develop a base of 3000 or even 5000 questions cause these values to rise further: the extra work time compared with the traditional form is 54.5 h for a 3000-question database (15.6%) and 134.5 h for a 5000-question database.
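The pattern in Table 3 is consistent with a simple linear model in which every additional 200 questions cost 18.5 h of content work for the paper-based form and 26.5 h (content plus QTI export) for the computer-based form, on top of the fixed process times of 72.2 h and 6.7 h from Table 2. A hedged sketch, assuming exactly this linearity:

```python
def paper_hours(n_questions: float) -> float:
    """Paper-based: 72.2 h of process time + 18.5 h per 200 questions."""
    return 72.2 + 18.5 * n_questions / 200

def computer_hours(n_questions: float) -> float:
    """Computer-based: 6.7 h of process time + 26.5 h per 200 questions."""
    return 6.7 + 26.5 * n_questions / 200

for n in (200, 1600, 2000, 3000, 5000):
    diff = computer_hours(n) - paper_hours(n)
    print(n, round(paper_hours(n), 1), round(computer_hours(n), 1), round(diff, 1))
# The model reproduces Table 3: an extra 14.5 h at 2000 questions,
# 54.5 h at 3000 and 134.5 h at 5000, with the two forms taking
# similar time near 1600 questions.
```

Under this model, the first-year break-even point lies just above 1600 questions, which matches the observation that the two forms are similar at that bank size.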

Therefore, it seems legitimate to ask whether computer-based evaluation, which requires a database of over 1000 questions, is, in fact, as useful as previous studies have suggested.


**Table 3.** The analysis of work time devoted to developing a question bank for paper-based and computer-based knowledge evaluation.

I: time for developing the content of 200 questions (18.5 h); II: the result of adding the time from the *paper-based test* column and the time calculated in Table 2 (72.2 h); III: time for composing 200 questions and saving them in QTI format (26.5 h); IV: the result of adding the time from the *computer-based test* column and the time calculated in Table 2 (6.7 h).

To answer this question, other variables of the evaluation process that influence the development of a question bank must be analyzed. These include the deadline for composing new questions, the number of teachers involved in the task and their IT competencies, and cooperation between units implementing e-evaluation. These aspects make it apparent that the traditional, paper-based form has severe limitations and is less useful when conducted with large groups of students, despite the reduced work time. When these aspects are added to the workload and organization time of the computer-based form, the excess time for databases of over 1000 questions is leveled out.

#### *9.1. Time Pressure, Question Reusability*

The sets of questions composed for paper-based tests form a base that has to contain different or updated questions in every exam session. Paper-based test versions used with large groups of students in a given academic year quickly become known, so in order to be reused in the next academic year, they have to be revised and adjusted, which is as time-consuming as composing new questions. The work of writing questions for traditional tests has to be completed every year, with a deadline determined by the pre-established schedule of tests. Consequently, the work is completed under time pressure, irrespective of the other assignments that teachers may have. That is why the development of question banks is sometimes abandoned, or too few questions are provided for the evaluation to be conducted properly; the result is that question banks are easily made public, or test versions lack randomness. For computer-based evaluation, the bank of questions can be enlarged at any time in the academic year, and the work can be combined with other activities. Inspiration for a valuable question may come from a discussion with students during a lecture or a clinical case study in a seminar. New test questions often emerge from the analysis of students' work on the e-learning platform or their self-test scores. The teacher then enters the new questions, in the QTI format, into the database at a convenient time. The work is calmer and more thoughtful, which translates into valuable testing material that will serve well in the verification of learning outcomes.
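For illustration, a minimal single-choice item in the IMS QTI 2.1 XML format might be generated as below; whether the OLAT-based platform expects this exact QTI profile is an assumption (older OLAT versions used QTI 1.2), and the question content is invented:

```python
# Sketch: generating a minimal single-choice item in IMS QTI 2.1 XML.
# This only illustrates the kind of structured, portable format into
# which teachers enter questions; it is not the authors' tooling.
from xml.sax.saxutils import escape

def qti_choice_item(item_id: str, prompt: str,
                    choices: dict[str, str], correct: str) -> str:
    options = "\n".join(
        f'      <simpleChoice identifier="{cid}">{escape(text)}</simpleChoice>'
        for cid, text in choices.items())
    return f"""<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
  identifier="{item_id}" title="{escape(prompt[:40])}"
  adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>{correct}</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true" maxChoices="1">
      <prompt>{escape(prompt)}</prompt>
{options}
    </choiceInteraction>
  </itemBody>
</assessmentItem>"""

# Invented pathophysiology example question
xml = qti_choice_item(
    "patho_001",
    "Which mediator is central to anaphylactic shock?",
    {"A": "Histamine", "B": "Thrombin", "C": "Insulin"},
    correct="A")
print(xml)
```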

#### *9.2. Cooperation between Units*

It is rare for a unit to have a question bank of over 1000 questions. Composing such a large number of questions is a challenge in terms of time expenditure and content-related effort. To support the development of substantive content, collaboration within a team of experts from a given unit, or from the whole school, would be advantageous, as the time devoted to creating questions would be spread across its members. Such a question bank, saved in the international QTI standard format, can easily be relocated to another e-learning platform that implements computer-based evaluation standards. This, in turn, allows different universities to share their databases, which naturally enlarges the pool of proven test questions. As a result, the time needed to develop questions for a single unit, calculated in Table 3, is significantly reduced. Such resources are invaluable to units collaborating in their creation for particular courses whose learning outcomes are the same in the respective institutions. Such cooperation between teams of experts enables the workload to be significantly reduced. Writing exam questions is a complex and difficult process, so databases of over 1000 questions are an asset for many years to come, and clearly worth investing in.

#### *9.3. A Further Period of Use*

A question bank of the right size gradually reduces the amount of time necessary for creating substantive content and restricts the work to revising and updating the questions. This is not as time-intensive, and levels out the excess time seen in Table 3 for pools containing over 1000 questions. In the case of paper-based exams, the amount of time taken to develop the substantive content of questions is always the same, which is a strain for teachers.

#### *9.4. Experience Backed by Statistics*

Another important aspect concerns analyzing the usefulness of the test questions in the database. Keeping statistics on, and assessing, the question bank after each year's evaluation seems to be a necessity. It allows the user to review the content with a view to eliminating flawed test questions, ones that did not work, or those at the wrong level of difficulty. It also serves to objectively analyze suggestions from students, who can express their reservations about questions after the exam or credit test. Questions should be thoroughly verified, with an emphasis on the scores achieved by all students taking the test. This will contribute to a reduction in the work time needed to supply new questions in the future, and will definitely shorten the time spent on the substantive content of questions for banks containing over 1000 items.
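One common way to carry out such verification (classical test theory; our own illustration of the procedure, not the authors' exact method) is to compute each question's difficulty index, i.e., the fraction of students answering it correctly, and flag out-of-range items for review:

```python
# Flagging questions whose difficulty index falls outside an acceptable
# range. The [0.2, 0.9] thresholds are illustrative defaults, not values
# from the study.

def difficulty_index(correct: int, attempts: int) -> float:
    """Fraction of students who answered the question correctly."""
    return correct / attempts

def flag_items(results: dict[str, tuple[int, int]],
               low: float = 0.2, high: float = 0.9) -> list[str]:
    """Return question ids whose difficulty falls outside [low, high]."""
    return [qid for qid, (ok, n) in results.items()
            if not low <= difficulty_index(ok, n) <= high]

# question id -> (number correct, number of attempts); 333 matches the
# cohort size mentioned in the Discussion, the counts are invented
results = {"q1": (310, 333), "q2": (50, 333), "q3": (200, 333)}
print(flag_items(results))  # -> ['q1', 'q2']
```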

#### *9.5. ICT Competences of the Teaching Staff*

A computer-based evaluation of student knowledge encourages the development of ICT competences in its participants, both students and teachers. Online testing helps teachers improve their computer proficiency and develop their competences in this field, which will also reduce the work time needed in the future to convert questions to the QTI standard.

#### *9.6. Summary of Work Time Analysis*

The calculations made for databases of over 1000 questions demonstrate a longer work time needed for computer-based testing than paper-based testing; however, in the long term, the overall workload for a unit is reduced. It can thus be concluded that the electronic form is more advantageous and efficient than the traditional form.

Determining the labor cost and time involved in the process of knowledge evaluation in a particular teaching unit in one academic year must also consider the gains derived from the switch to an automated process. These include exempting assistants, administrative staff and technical-engineering staff from organizational duties connected with preparing and implementing the evaluation. The time devoted to preparing paper-based versions of tests, marking them using a template and archiving the results can be saved and invested in developing their ICT competences. Their work on the e-learning platform will become more proficient and will guarantee support to the authors of test questions in creating and updating items on the platform.

#### **10. Discussion**

The research was performed at the Department of Pathophysiology in cooperation with the Department of Computer Science and Statistics of Poznan University of Medical Sciences, Poland (PUMS). Academic teachers participated in the research, as did technical and administrative staff and e-learning experts. In 2009, this team introduced an e-learning portal for the entire university, and has since conducted its own research on the effectiveness and optimization of medical e-learning.

The research presented in this article was conducted by the Department of Pathophysiology, Poznan University of Medical Sciences, Poland on the ESTUDENT portal for remote education, an installation of the OLAT application developed by the University of Zurich under an open-source license. The ESTUDENT portal is the university's own LCMS installation, adapted to e-learning in the field of pathophysiology.

The described analysis of the work time and organization of electronic knowledge evaluation was carried out using the example of a large cohort of second-year medical students in the preclinical subject of pathophysiology.

The work time of the analyzed knowledge evaluation through the e-learning portal is about 10% of the work time needed to carry out the evaluation in the traditional way. Electronic knowledge testing requires more work time in the first year of application, due to the preparation of a larger database of questions compared with the number required for evaluation conducted in the traditional (paper) version. However, teachers' work time is spread over the 5–7 years of using the electronic question base. As part of the research, an analysis was performed of the work time needed to build databases of 200 to 5000 test questions, both for knowledge evaluation in e-learning and for evaluation carried out in the traditional version.

The data presented in this article are the result of pioneering research conducted by the authors in 2009–2019 on the evaluation of the preclinical knowledge of very large cohorts of medical students using the e-learning portal. The described electronic realization used the example of the academic year 2018–2019, when 333 students were studying medicine. On the basis of our own research in 2015–2016, the time spent by teachers on exam questions was measured for those participating in traditional education and in the e-learning portal. To conduct this research, the same team of academic teachers had to participate, and the same conditions had to be met, in the implementation of both traditional knowledge evaluation and evaluation using e-learning methods.

The usefulness of e-learning in medical education is confirmed by comparing the benefits and limitations of the electronic evaluation of knowledge, and of the didactic process, with those of the traditional implementation. The work contains such analyses, also indicating the different stages of conducting these components of education in the e-learning portal. An important element of the research was the analysis of the work time needed to prepare and implement electronic knowledge evaluation. The results clearly indicate the advantage of e-learning over the traditional organization of examinations and surveys.

#### **11. Summary of Study Results**

The application of e-learning for computer-based knowledge evaluation and optimizing the administration of exams for students of medical sciences should be considered in two categories: from the perspective of the providers of the evaluation process, that is, the teaching unit, and from the recipients of the evaluation process, that is, the students.

The advantages to computer-based evaluation providers, that is, the teaching unit, include:


The advantages to computer-based evaluation recipients, that is, the students, include:


#### **12. Limitations**

The authors' research indicates several factors common to the evaluation of knowledge in large groups of students completing a subject at one time, as well as the factors significantly influencing the optimization of this process. These include the labor-intensive preparation of question databases, the competence level of the suppliers (teachers) and recipients (students) of knowledge in the field of e-learning technologies, ensuring conditions for independent work (parameterization of tests or examination rooms), archiving results, and the speed of feedback after knowledge evaluation.

Certain elements of this process are changeable and difficult to standardize, depending on the university's IT infrastructure. The differences may be related to the type of e-learning application or to technical support, which may be provided centrally (a university-wide center) or locally (the unit's own resources).

In order to optimize the implementation of the process, both the traditional and e-learning variants should be carefully analyzed by the same team at a given university, paying attention to its individual conditions (limitations and possibilities during a pandemic) and the specificity of its courses. The final calculations may, therefore, differ slightly from those presented in this article. The analysis presented by the authors as an example indicates the superiority of evaluation using e-learning technologies over traditional evaluation. It proposes solutions for the optimal direction of this process, paying attention, for example, to cooperation between units and the sharing of resources, which will minimize the time spent working on question databases. The presented analysis is typical of universities working on open-source portals with limited funding, which is common in Eastern and Central Europe. It allows evaluation to be executed remotely and quickly within one's own unit, in periods such as the SARS-CoV-2 pandemic.

There are no complete, detailed analyses of the work time for all stages of the work and organization of medical knowledge evaluation for groups of more than 300 students. In order to perform such analyses, the same team of academic teachers should be involved, and the same requirements should be met, in the implementation of both traditional and remote knowledge evaluation. Such research also requires time and experience in the field of e-learning, which significantly affects the effectiveness of the process. There is no well-established educational theory for e-medical education, as remote methods involve parameters that are not known in traditional medical education. Research and discussion on the standardization of e-education and the development of the patterns into which medical universities and schools are forced by the pandemic, the testing of existing solutions, the indication of limitations and the addition of new variables are necessary in the important process of evaluating the knowledge of medical students, who will become doctors (physicians).

#### **13. Conclusions**

Replacing the traditional form of organizing and implementing knowledge evaluation with a web-based equivalent on an e-learning platform optimizes the whole didactic process, not only for the unit carrying this out but, above all, for students. Due to this innovation, course participants have the opportunity to take full advantage of all the technological solutions that e-learning provides, with an implementation that can start from computer-based evaluation. The process of optimization through e-learning should become a natural part of the didactic process, conducted in every subject at all types of higher education institutions, including medical universities. The obtained results encourage their implementation, considering the nature and conditions of medical training, which is a key program in medical universities.

**Author Contributions:** Conceptualization, M.R., B.S., J.S. and E.B.; methodology, M.R., B.S., J.S. and E.B.; software, M.R., B.S., J.S. and E.B.; validation, M.R., B.S., J.S. and E.B.; formal analysis, M.R., B.S., J.S. and E.B.; investigation, M.R., B.S., J.S. and E.B.; resources, M.R., B.S., J.S. and E.B.; data curation, M.R., B.S., J.S. and E.B.; writing—original draft preparation, M.R., B.S., J.S. and E.B.; writing—review and editing, M.R. and B.S.; visualization, M.R., B.S., J.S. and E.B.; supervision, M.R., B.S., J.S. and E.B.; project administration, M.R., B.S., J.S. and E.B.; funding acquisition, M.R., B.S., J.S. and E.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was partly supported by Lodz University of Technology in Poland, Poznan University of Medical Sciences in Poland and AGH University of Science and Technology in Krakow, Poland.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Department of Pathophysiology, Poznan University of Medical Sciences, Poland; www.estudent.ump.edu.pl (accessed on 29 August 2021).

**Acknowledgments:** The authors are grateful to the anonymous reviewers for their comments. The authors acknowledge the teachers, assistants, administrative staff and technical-engineering staff who agreed to participate in this study.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**

