Article

Digital Assessment: A Survey of Romanian Higher Education Teachers’ Practices and Needs

by Gabriela Grosseck 1, Ramona Alice Bran 1,* and Laurențiu Gabriel Țîru 2

1 Department of Psychology, Faculty of Sociology and Psychology, West University of Timisoara, 300223 Timisoara, Romania
2 Department of Sociology, Faculty of Sociology and Psychology, West University of Timisoara, 300223 Timisoara, Romania
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(1), 32; https://doi.org/10.3390/educsci14010032
Submission received: 31 October 2023 / Revised: 11 December 2023 / Accepted: 21 December 2023 / Published: 27 December 2023
(This article belongs to the Special Issue Application of New Technologies for Assessment in Higher Education)

Abstract:
Within the European Commission’s Digital Education Action Plan (2021–2027) and the DigCompEdu framework, our research focuses on the competence area of teachers’ assessment practices and needs. We designed a 24-item online questionnaire for Romanian higher education teachers who use digital technologies to assess students’ learning, learning outcomes and practical skills. The present paper analyzes how the 60 respondents from Romanian universities evaluate their own digital competence and how they use digital assessment, as well as what training needs they have in this regard. This study, carried out in May–June 2022, therefore attempts to identify the main concerns, challenges and obstacles higher education teachers encounter when designing and using digital assessment. Our findings indicate the importance of empowering teachers through continuous learning, embracing flexible hybrid models and reimagining assessment strategies for digital literacy. An ANOVA reveals variations in the use of digital tools among three groups categorized by self-reported digital competence. Responsible knowledge-sharing, AI literacy and adaptive curriculum design emerged as critical imperatives. Our study advocates for a transformative shift towards AI-based pedagogy, emphasizing personalized learning that aligns with teachers’ competencies and specific assessment needs while adhering to fundamental teaching principles.

1. Introduction

Educational assessment as part of didactic activity mediated by digital technology is a dynamic process, “a complex and contested practice” [1], which should facilitate self-reflection and learning self-regulation by moving past traditional hierarchical levels of checking knowledge acquisition and classifying students accordingly. Assessment involves, first of all, measuring (obtaining information about acquired knowledge), then appraising (judging the results) and finally making a decision aimed at improvement (the purpose of the evaluation) [2,3,4]. Other researchers [5] underline the fact that, as technology evolves, so does terminology. Thus, we now make the distinction between online and digital assessment.
Online assessment is a type of assessment in which the evaluation tasks are completed online, with the teacher and student not sharing the same physical space while the evaluation is undertaken [6]. It follows that the assessment is dependent on technical conditions and on the digital competencies of both teachers and students. The evaluation is mediated by digital audio-visual channels and can also include an asynchronous component, involving limited control and high risk (technical issues, fraud, etc.), which vary according to the tasks assigned. As pointed out by [7], the sudden onset of the COVID-19 pandemic prompted educational institutions to quickly shift gears and adapt to a dramatically changed learning environment, which meant swiftly adopting different online assessment methods.
Digital assessment, on the other hand, refers to all the evaluation tasks based on technology in which designing, achieving performance and giving feedback are mediated by digital technologies. When it comes to this method, the digital competencies framework DigCompEdu highlights three fundamental directions which are needed to improve assessment based on technology [8,9]: the use of digital technologies for formative and summative assessment, strengthening the diversity and relevance of assessment methods and tools; evidence analysis, where the educator is involved in generating, selecting, analyzing and critically interpreting digital evidence of learner activity, performance and progress in order to adapt and document teaching strategies and learning activities [10]; and, finally, the use of digital technologies to plan and give directed, constructive feedback. Regarding the types of digital assessment, Gomez et al. [11] have extensively classified various methods that encompass a wide range of formats. These include, but are not limited to, forms, polls and quizzes, online tests; essays, case studies; e-portfolios; digital/online presentations; projects (individual, group); reflection, learning journals; peer assessment and self-assessment; gamification; online interviews; conversational simulations, posts in discussion forums, (peer) comments, debates; collaborative wikis; microblogs; creating videos, vlogs, podcasts (other multimedia).
Both in the physical and virtual classrooms, we assess the learning (summative assessment), but we also assess for learning improvement, aiming at formative assessment, including reflection on one’s own learning [12]. Formative and summative assessment tests in digital format may be similar in structure, but generally have different objectives. The main purpose of formative assessment is to provide feedback that can be used by the teacher and students to improve teaching and learning in formal teaching activities. The main aim of summative assessment, on the other hand, is to measure the level of success or competence that has been achieved at the end of a learning unit/semester/academic year by comparing it with a chosen standard or reference point. The digital tools used by the teacher in summative and/or formative assessment activities also provide immediate feedback (e.g., track changes, check lists, quizzes, but also audio-video) to the student [13].
By exploring the global perspective on digital assessment and its relationship to teacher training needs and practices, this article seeks to close a research gap. The authors noticed that this aspect currently receives insufficient attention in the literature. This study focuses on the use of digital assessment by Romanian higher education teachers, offering insights that go beyond the local context. By examining the unique obstacles and approaches in Romania, the study contributes to the global discourse on efficient digital evaluation techniques. The results serve as a helpful point of reference that advances knowledge in digital education by encouraging cross-cultural understanding and directing the creation of international best practices. This study is important because there has not been much research conducted in Romania on this subject, particularly in the aftermath of the COVID-19 pandemic. The majority of research concentrates on pre-university levels, which leaves a gap in our knowledge regarding the difficulties and advancements in higher education. Therefore, our research addresses a significant gap in the scholarly discussion, providing insights into an area that has not yet been thoroughly examined.
The structure of the paper is as follows: After the Introduction, we continue with Section 2, which presents a literature review of teachers’ digital assessment needs and practices and delineates the domain. In Section 3, we describe the methodology of the study. Section 4 is dedicated to analyzing data and results, followed by discussions, limitations and suggestions for further research (Section 5). In Section 6, we draw several conclusions.

2. Literature Review

The rise of online learning and the need for efficient and effective ways to assess student progress have increased the importance of digital assessment in higher education. Teachers’ practices and needs in digital assessment encompass the use of diverse tools and techniques to support student learning and provide timely feedback. This is in agreement with the European Commission’s Digital Education Action Plan (2021–2027), which seeks to catalyze a radical shift in education by “prioritizing the development of a robust digital education ecosystem and the enhancement of teachers’ digital transformation skills” [14].
The imperative for teachers to master digital competence, as emphasized by [15] and echoed by [16], directly aligns with the overarching theme of enhancing teachers’ digital transformation skills outlined in the European Commission’s Digital Education Action Plan (2021–2027). Teachers, therefore, need digital competence in order to implement innovative education, as they have to train their students’ skills for the 21st century. Simply put, educators have to develop their digital competencies so that they can be effective in teaching their students, who are deemed digital natives. However, some challenges arise as university professors’ training primarily relies on digital competence models and frameworks focusing mainly on pre-university levels. These include the UNESCO ICT Competency Framework for Teachers; the Digital Competence Framework for Educators (DigCompEdu); the International Society for Technology in Education (ISTE) Framework for teachers; and the Common Digital Competence Framework for Teachers (INTEF). Hence, there is a need to reshape digital competence training for university professors from a comprehensive perspective. To achieve this goal, [17] argues that teachers’ level of digital competence should first be assessed, and that personalized training can then be designed based on an analysis of the results. Another study [12] concurs that teacher training in higher education should be continuous, open and carried out online. Moreover, educators should be offered the opportunity to decide “what, when, and how to learn” based on their needs and level of competence. Instefjord and Munthe [18] show that teachers’ digital competence comprises three knowledge areas: technology proficiency, pedagogical compatibility and social awareness.
Digital assessment has become relevant as part of the digital learning process, and it is essential for teachers to have digital competencies to effectively use digital assessment tools [15]. At the same time, Muammar et al. [19] indicate, for the assessment component, that teachers use a variety of digital tools to monitor student progress on a regular basis. Yet, some of them do not provide feedback in digital form, while others consistently use digital approaches to offer feedback to their students.
Findings of a recent study carried out by [14] involving 118 professors in a Portuguese university showed that teachers had a high overall level of digital competence, falling between the B2 (Expert) and C1 (Leader) categories on the DigCompEdu scale. However, certain areas were found to require improvement, especially in DigCompEdu’s core pedagogical components—Digital Technologies Resources and Assessment—since most university teachers possess only moderate digital competencies in these components and scored at lower proficiency levels, results reinforced by the research of [20]. Yet, this analysis was carried out with teachers who work exclusively in virtual environments and use digital assessment software, so for them such technologies are not merely a medium for information dissemination, but actual pedagogical resources which improve online educational practices.
All things considered, [21] suggest ensuring continuous progress and alignment with the Digital Education Action Plan’s strategic priorities by creating targeted teacher training initiatives with a focus on enhancing competences related to technological resources and assessment. It is essential to support capacity-building initiatives that examine various assessment formats and approaches (diagnostic, formative and summative). Furthermore, it is important to critically assess the data on student activity, performance and advancement obtained from digital technologies.
Vasconcelos and coauthors [22] identify the main challenges when aiming to conduct transparent assessment in higher education. The study, involving five European higher education institutions, describes the process of creating a self-learning course for educators. The output covers key concepts and good practices regarding assessment for online learning and includes reflective tasks that can help develop practitioners’ competence in this field.
The digital transformation was already upon us even before the COVID-19 pandemic, but the pandemic accelerated the process and highlighted the need for digital assessment tools, as well as the needs and practices of teachers in this context [23,24]. Contrary to this, a work published in 2019 [25] drawing from two separate studies involving Swedish university teachers revealed that higher education teachers are ambivalent about technology integration. They are hesitant yet positive towards embedding digital technology in their pedagogical practice, and the authors argue that one-sided theoretical assumptions about technology integration may be a possible explanation for said ambivalence.
The study conducted by [21] used DigCompEdu to measure educators’ digital competence from the students’ point of view and their self-perception of learning. The study found that more than 70% of students think that their learning process is positively impacted by their faculty’s digital competence. This is in line with the broader discussion on the challenges and opportunities presented by the digital transformation in education. The authors propose a comprehensive model comprising four elements which affect the student self-perception of learning: educators’ digital skills; the use of technology for communication, monitoring and assessment; educators’ engaging in digital ecosystems; and students’ data security in the learning process.
Integrating existing and emerging technologies aims for smarter, faster, fairer and more efficient assessment. Hence, the overarching goal is that by 2025 evaluation should become authentic, accessible, properly automated, continuous and secure [26]. In broad terms, this means that, first, assessment should be authentic—preparing the student for the labor market by testing knowledge and skills in a more realistic, motivating and contextualized way. Second, it should be accessible—designed to be usable by everyone, including people with disabilities, providing students with different opportunities to express and share what they know, but also establishing clearly what the assessment grids are. Properly automated assessment (e.g., AI-based assessment, automated feedback systems, adaptive learning tools based on test scores) will make grading easier and give more efficient feedback to students, thus improving the learning experience of students and facilitating teachers’ work [1]. Continuous assessment implies abandoning final summative assessments or reducing their importance in favor of continuous formative assessment (rich in opportunities for practice and reflection) in order to adapt to changes in the labor market. Finally, we should make sure assessment is secure—meaning it is based on the student’s own activity and completed by following the rules set by the teacher [26]. Using tools such as the Turnitin platform to avoid plagiarism, but also the differentiated, personalized assessment of competencies, can make the process safer by reducing fraud.

3. Methodology

This section describes the approach undertaken to investigate the digital assessment practices and needs of higher education teachers. The study focuses on educators who employ digital technologies in assessing students’ learning, learning outcomes and practical skills. The research aims to analyze the use of digital assessment tools, while also identifying the training requirements essential for proficient implementation. Thus, we sought answers for the following research questions:
  • What are the digital assessment practices employed by higher education teachers?
  • How do educators utilize digital technologies in assessing students’ learning, learning outcomes and practical skills?
  • What are the specific digital assessment tools commonly used in higher education settings?
  • What are the perceived benefits and challenges of using digital assessment tools in the evaluation process?
  • What are the training and support requirements for teachers to proficiently implement digital assessment tools?
  • What kind of support and training do our teachers need to make the most out of these digital tools in the evaluation process?
The methodology is provided below, encompassing participant demographics, sampling procedures, data collection instrument, survey administration and subsequent data analysis (ANOVA). The study respected ethical standards, ensuring confidentiality and voluntary participation.

3.1. Methods and Tools

The data collection instrument was a structured questionnaire designed specifically within the Erasmus+ project D-eva “Practical skills evaluation with digital technologies in teacher education” (d-eva.eu). The questionnaire was developed to assess the digital assessment practices, training needs, concerns, challenges and obstacles encountered by higher education teachers. The questions were designed to provide both quantitative and qualitative insights into the participants’ experiences. The survey was administered digitally through the QuestionPro platform and is available at https://bit.ly/DA_Teacher_Needs (accessed on 1 June 2022). Invitations were sent via email to potential participants, along with a brief explanation of the research purpose and assurances regarding confidentiality and voluntary participation. Participants were provided with a specific timeframe for completing the survey (May–June 2022).
The survey garnered a good level of engagement, with a total of 164 responses recorded. However, out of these, 60 participants successfully completed the survey, reflecting a completion rate of approximately 36.59%. The completion rate suggests a satisfactory level of participant commitment. We believe that the data provide valuable insights into participant behavior and engagement with the survey.

3.2. Participants

The 60 participants in our study are teachers from Romanian higher education institutions (HEIs). The majority of respondents are female, comprising 78.33% of the total sample. Male participants constitute a smaller proportion (20.00%), while 1.67% of respondents preferred not to disclose their gender or identified as ‘Other’. The dataset encompasses a range of ages, spanning from the lower twenties to the mid-sixties, reflecting a diverse cross-section of educators. The mean (average) age of the respondents is approximately 45 years, which indicates that educators in their mid-forties constitute a substantial portion of the sample.
Regarding the participants’ academic position, the data indicate a predominant presence of practitioners, comprising 83.33% of the total respondents (Table 1). Learning technologists constitute a smaller segment, representing 3.33% of the respondents. These individuals play a pivotal role in integrating technology into educational practices, acting as valuable resources in advancing digital assessment methods. Four participants (6.67%) hold leadership positions as heads of departments or organizations. Their input is of great importance as they likely influence the adoption and implementation of digital assessment strategies at an institutional level. Another 6.67% identify as ‘Other’. This category may cover a variety of roles not explicitly listed in the options provided. This diversity within the ‘Other’ category could potentially yield unique perspectives on digital assessment practices.
The data shown in Table 2 indicate a diverse distribution of the educational levels taught by the participants. A significant part of the respondents in our survey are involved in teaching at the graduate level (28.05%). This suggests a strong representation of educators who work with students pursuing advanced studies, potentially in master’s programs or the equivalent. At the postgraduate level, we have 21.95% of educators engaged with students enrolled in studies beyond their initial degree. This might include specialized programs, diplomas or certificates designed for individuals seeking advanced knowledge and skills in their respective fields. The notable presence of educators teaching at the doctoral level (34.15%) underscores a focus on advanced research and scholarship. This group likely includes professors and mentors guiding students through the process of earning their doctoral degrees. While constituting a smaller percentage, the inclusion of educators at the undergraduate level is still significant (10.98%). These individuals play a pivotal role in laying the foundation for the students’ academic journeys and fostering a strong educational base. We also have 4.88% of educators that do not fit neatly into the provided options. This is worth exploring further to understand the specific contexts and levels where they are professionally involved.
It is worth noting that some respondents are active at several levels (i.e., they teach both at the postgraduate and doctoral level), which is why Table 2 counts 82 responses from the 60 academics who completed our survey.
Therefore, our study gathered data from a diverse group of educators, representing a wide range of academic disciplines and roles. These educators specialized in subjects spanning applied informatics, linguistics, economics, counseling, literature and various other fields. Their expertise covered a broad spectrum of academic levels, from undergraduate to doctoral. The majority were academic practitioners actively engaged in teaching, while others held leadership positions in educational institutions. We believe this diversity of roles and subjects provides a comprehensive overview of the educational landscape.
The data regarding the participants’ years of experience with Technology Enhanced Learning Environments bring an interesting insight into their familiarity with digital learning platforms. The majority of participants possess a considerable level of experience, with 18 respondents reporting over 10 years of familiarity with virtual environments like Moodle or Blackboard. Some participants have accumulated over 20 years of experience, showcasing a deeply rooted understanding and expertise in this domain. A few participants reported lower levels of experience, with some indicating only 2 or 3 years of engagement. This suggests a mix of early-stage adopters and individuals who may have recently started integrating these technologies into their teaching practices.
Only a small number of respondents provided vague or unspecified information about their experience with the technology learning environment, with entries such as “few” or simply “22”. While these responses may require clarification, they underscore the diverse range of experiences and comfort levels among participants. A subset of participants mentioned specific platforms like Moodle and GAFE or MS Teams, along with the number of years they have been using them. This specificity not only highlights the prevalence of these platforms but also indicates a depth of familiarity and expertise.

4. Data Analysis and Results

We asked participants at the beginning and at the end of the survey to assess their digital competence levels as per the DigCompEdu Framework to provide a self-reflection on their proficiency in utilizing digital tools in teaching (Table 3).
Initially, 8.70% regarded themselves as “A1. Newcomers”, denoting a basic familiarity with digital tools. Following their engagement with the questionnaire, there was a slight decrease to 6.67%, potentially indicating a growing confidence or familiarity with educational technologies. At the same time, 11.96% identified as “A2. Explorers”, signifying a consistent commitment to exploring the potential of digital tools in education. A significant segment of 42.39% positioned themselves as “B1. Integrators”, indicating a proficiency in incorporating digital tools into teaching practices. Following their engagement with the questionnaire, this group experienced an increase to 53.33%, showcasing a growing proficiency in integrating digital technologies into their pedagogical approaches. In contrast, 27.17% of teachers initially assessed themselves as “B2. Experts”. However, after responding to the questionnaire, there was a substantial decrease to 13.33% in this self-assessment, suggesting a potential reevaluation of their perceived expertise in utilizing digital tools for teaching, possibly indicating a recognition of areas for further growth or development.
Only 8.70% of participants perceived themselves as “C1. Leaders”, but this means they are proficient not only in using digital tools but also in guiding others in their implementation. After completing the questionnaire, this confidence in leadership abilities appeared to strengthen to 13.33%. Finally, a smaller fraction of participants (1.09%) viewed themselves as “C2. Pioneers”, meaning they adopted cutting-edge technologies in teaching. This self-assessment experienced a slight increase to 1.67%, suggesting a growing sense of innovation and pioneering within this subset of participants.
These shifts in self-perceived competence levels highlight a dynamic process of growth and self-reflection in the participants’ digital competence journey. Such nuanced adjustments indicate their willingness to engage with and adapt to digital technologies, ultimately enhancing their teaching practices. While the majority position themselves within the “Integrator” to “Expert” range, there is a wider range of self-perceived competence levels covered by our participants.
The data in Figure 1 point to a diverse landscape of assessment practices employed by teachers.
Initial diagnosis assessments, designed to gauge students’ baseline knowledge or skills, yielded a moderate average score of 2.91. Summative assessments, which measure learning outcomes at the culmination of an instruction period, emerged as a prominent feature in higher education. With an average score of 3.48, these assessments demonstrated a high level of effectiveness and alignment with learning objectives. On the other hand, formative assessments, aimed at providing ongoing feedback for improvement, garnered the highest average score of 3.63. This emphasizes their critical role in the learning process and indicates a strong preference among educators for this type of assessment. Self-assessment, where students evaluate their own progress, also played a significant role in the evaluation landscape. With an average score of 3.03, self-assessment proved to be a valuable tool, though there may be opportunities for further enhancement in this area. Peer-assessment, where students evaluate their peers’ work, received a slightly lower average score of 2.72. There is room for improvement in the implementation of this assessment method and it would be interesting to further investigate the challenges which have led to this lower score. Within the category labeled ‘others’, a spectrum of unique assessment methods emerged, revealing a spirit of innovation and customization. This category garnered an average score of 2.77, representing a moderate level of effectiveness.
It is evident that digital tools play a pivotal role in the assessment process carried out by our respondents (Figure 2). The high average score of 3.7 indicates a widespread reliance on technology to support various aspects of evaluation, which suggests a strong inclination towards leveraging digital resources to enhance assessment. Furthermore, the data demonstrate a proactive approach to monitoring and evaluating students’ learning journeys. With an average score of 3.53, teachers expressed a clear preference for using digital tools to assess the ongoing learning progress and development of their students.
The commitment to data-driven evaluation is also apparent, as indicated by the average score of 3.44 for using digital tools to assess learning outcomes and results. This reflects a structured and systematic approach to gauging the effectiveness of teaching methods. Moreover, teachers show a preference for designing digital tests for their students, as evidenced by the average score of 3.47. Higher education teachers are, therefore, embracing technology in crafting assessments that align with their particular instructional goals. This suggests a shift towards adopting digital assessments over traditional paper-based tests.
At the same time, there is a targeted use of digital tools to focus on specific facets of students’ learning experiences, as reflected in the average score of 3.38 for assessing specific aspects of learning. While the average score for using digital tools to support peer review processes is slightly lower at 2.89, it still shows a significant utilization of technology for facilitating collaborative evaluation. But there may also be opportunities for customization in this respect, to better align with the specific needs and preferences of the respondents. The score of 3.01 for using digital tools to support self-reflection in teaching practice indicates a recognition of the value of technology in fostering professional growth among teachers and reflects a willingness to leverage digital resources for continuous improvement.
Data from Figure 3 show how teachers track students’ progress with digital tools. It can be noted that 15.38% of teachers do not currently employ digital means to monitor students’ progress.
There is an area for development here, as incorporating digital assessment formats can offer them more efficient and comprehensive tracking methods. Similarly, another 15.38% of respondents mentioned that they do monitor students’ progress regularly, but without making use of digital tools. So, they are committed to monitoring, but have yet to explore the benefits that digital assessment formats can offer in terms of accuracy and efficiency. A significant portion (37.18%) of educators reported that they occasionally use digital tools like quizzes to assess their students’ progress. Meanwhile, 17.95% of respondents stated that they employ a variety of digital tools to monitor student progress, thus actively leveraging technology to track and analyze students’ development. Almost 9% (8.97%) indicated that they use learning analytics data to monitor and track student learning progress. This reflects a more advanced and data-driven approach, demonstrating a sophisticated understanding of leveraging digital tools for assessment. Only a small fraction (3.85%) systematically uses a range of digital tools to monitor student progress. While this group is relatively small, they represent a dedicated cohort actively integrating digital assessment formats into their teaching practice. Lastly, one respondent (1.28%) indicated an “Other” approach, suggesting a unique and personalized method for tracking student progress that does not fall into the predefined categories.
When it comes to providing feedback, the data from Figure 4 show a marginal fraction of respondents (3.85%) who indicated that feedback with digital technologies is dispensable within their specific work environment. This minority perspective probably reflects an educational context where feedback may not traditionally be emphasized or even deemed necessary.
In contrast, a segment of 20.51% of participants acknowledged the significance of feedback but disclosed that they do not employ digital means for its delivery. This cohort likely relies on traditional, non-digital methods such as face-to-face interactions or handwritten notes for providing feedback.
The most substantial cohort, encompassing 42.31% of respondents, reported that they occasionally use digital methods to offer feedback. This large category employs practices ranging from using automated scoring systems in online quizzes to making comments or giving “likes” in virtual learning environments.
Approximately one-fifth of the respondents (19.23%) specified that they employ a diverse array of digital tools and techniques for feedback provision. This group’s approach is notably multifaceted, suggesting a comprehensive integration of digital resources in their feedback processes. However, educators who systematically employ digital approaches for feedback constituted only 12.82% of our respondents. This subset consistently relies on digital technologies as their primary medium for providing feedback and their systematic approach indicates a high level of proficiency and comfort with digital tools for this purpose. Additionally, 1.28% of teachers (a very small percentage) outlined an alternative method for offering feedback that did not align with the predefined categories. This distinctive approach may reflect a personalized strategy tailored to their specific teaching context, underscoring the diversity of instructional approaches.
The data from Table 4 highlight a diverse and nuanced approach to digital assessment methods used in higher education, reflecting the adaptability and creativity of teachers in adapting assessments to suit different learning contexts and objectives. On average, respondents reported using digital assessment methods fairly regularly, as indicated by the mean score of 1.97 on a scale of “Often”, “Not very often” and “Rarely”. This suggests a meaningful integration of technology into their teaching practices.
Among the various methods, “hosting and file sharing services” emerged as the most frequently utilized, with a score of 1.84. This implies that teachers rely on platforms like Dropbox, Google Drive and similar tools to facilitate seamless sharing of educational resources. “Tests and quizzes online” as well as “projects” received high scores of 2.07, attesting to the popularity of interactive assessments and project-based learning approaches, methods which offer engaging ways to evaluate student comprehension and application of knowledge. Other methods, such as “reflection—individual and/or in groups” and “learning management platforms” received scores of 1.91, indicating that while they are utilized, they may not be central to instructional practices. Nevertheless, they remain valuable tools for enhancing the learning experience.
The data also revealed a range of scores, signifying varying levels of familiarity and integration of these digital assessment methods. For instance, “content mapping” and “debating apps” are employed more frequently, indicating a notable comfort level with these tools. On the other hand, some interesting methods such as “rubrics maker” and “3D virtual environments, VR, AR and/or AI” received slightly lower scores, perhaps because they are new and teachers need time to explore and integrate them in teaching and assessment.
Table 5 presents an overview of the types of competences assessed in the study.
Upon scrutinizing the data presented in Table 5, distinctive patterns emerge, shedding light on the assessment priorities among educators in higher education. A majority of teachers (63.51%) emphasize the evaluation of subject-specific skills, underscoring the critical significance of honing expertise within the scientific domains of their respective subjects. Analytical and research skills occupy a substantial role, with 48.65% of respondents frequently assessing these capabilities. This underscores the acknowledged value of cultivating analytical prowess and research acumen for robust academic inquiry and effective problem-solving.
In both academia and professional arenas, effective communication skills reign supreme. A significant 55.41% of respondents place frequent emphasis on evaluating this indispensable skill, recognizing its pivotal role in both educational and professional contexts. The assessment of collaborative activities demonstrates a balanced distribution, with 40.54% of educators assessing group and team work both “often” and “sometimes”. This dual frequency underscores a concerted effort in fostering collaborative aptitudes among students.
Critical thinking, a cornerstone of higher education, receives frequent assessment from 48.65% of educators. This percentage underscores the universal importance of nurturing critical thinking abilities, irrespective of students’ chosen subjects or specializations. The capacity for decision-making and problem-solving is deemed vital, with 44.59% of respondents frequently assessing this competency. This acknowledgment reflects the recognized need for students to make informed decisions and tackle complex problems.
Reflective practice, a crucial element of self-directed learning, is evaluated often by 37.84% of respondents, and an additional 36.49% assess it sometimes, indicating a collective recognition of its importance in the educational landscape.
Leadership and management skills, essential for future professionals, are evaluated often by 37.84% of higher education teachers. Organizational skills, encompassing the ability to plan, organize and prioritize work, are acknowledged as pivotal, with a balanced distribution of assessment frequencies.
Encouraging creativity and originality in students is a priority for 41.89% of respondents, who assess this competence often, hence showing they value and nurture innovative thinking.
In the contemporary digital society, digital literacy skills are extremely important for the 35.14% of respondents who say they frequently assess these skills. This reflects the evolving landscape of education in a digital age.
As we delve into the experiences of educators, we gain a real-world understanding of the complexities of using digital tools in assessment. Overcoming potential hurdles demands a flexible and human-centered approach. Based on the responses to the question “What was the biggest challenge you’ve encountered, and how did you navigate through it?”, the challenges can be categorized into seven distinct areas (Table 6). Each category presents unique obstacles that require specific strategies for resolution.
An exploratory factor analysis was conducted using Principal Component Analysis to identify the number of factors underlying the seven items related to digital assessment practices. The Kaiser–Meyer–Olkin measure was 0.841, indicating a very good level of sampling adequacy for the factor analysis. Bartlett’s Test of Sphericity yielded an approximate chi-square of 297.415 (df = 21, p < 0.001), indicating that the correlation matrix was not an identity matrix and therefore suitable for factor analysis. The extraction communalities after the analysis ranged from 0.470 (‘to support peer review processes’) to 0.683 (‘to assess the students’ learning process’), indicating that a substantial amount of variance was shared among the items.
The total variance explained by the extracted component was substantial, with the first component accounting for 59.21% of the variance. The factor loadings on the single extracted component ranged from 0.685 (‘to support peer review processes’) to 0.827 (‘to assess the students’ learning process’), suggesting that all items loaded significantly onto this factor. The Cronbach’s alpha for the seven-item scale was 0.882, indicating a very good internal consistency.
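To make the reported procedure concrete, the sketch below illustrates the same sequence of steps (KMO measure, Bartlett’s Test of Sphericity, a one-component Principal Component extraction and Cronbach’s alpha) in Python. It is only an illustration under stated assumptions: the item responses are assumed to sit in a hypothetical pandas DataFrame called items, the file name is a placeholder, and the factor_analyzer package is used here even though the survey analysis software actually employed by the authors is not specified.

```python
# Illustrative sketch (not the authors' script): adequacy checks, one-component
# PCA extraction and internal consistency for a seven-item practice scale.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

# Hypothetical data: one row per respondent, one numeric column per item.
items = pd.read_csv("digital_assessment_items.csv")  # placeholder file name

chi2, p_value = calculate_bartlett_sphericity(items)  # Bartlett's Test of Sphericity
_, kmo_total = calculate_kmo(items)                   # Kaiser-Meyer-Olkin measure

pca = FactorAnalyzer(n_factors=1, rotation=None, method="principal")
pca.fit(items)
loadings = pca.loadings_                     # loadings on the single component
communalities = pca.get_communalities()      # shared variance per item
explained = pca.get_factor_variance()[1][0]  # proportion of variance explained

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose items are the DataFrame columns."""
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi2:.2f} (p = {p_value:.4f})")
print(f"Variance explained = {explained:.2%}, alpha = {cronbach_alpha(items):.3f}")
```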
Based on the EFA results, we constructed a composite variable representing the mean of the seven-item scale on digital assessment practices. We were interested in comparing respondents with different levels of self-reported digital competence regarding their use of digital tools in academic practice.
The ANOVA explores the differences in the mean values of the composite variable (mean use of digital tools in the academic setting) across three groups (self-reported digital competencies). Descriptive statistics indicate that Group A (level A, which includes A1 = newcomer and A2 = explorer) had a mean of 2.67 (SD = 0.60), Group B (level B, which includes B1 = integrator and B2 = expert) a mean of 3.46 (SD = 0.82) and Group C (self-reported C1 = leader and C2 = pioneer) a mean of 3.59 (SD = 0.80). The ANOVA results show a significant difference between groups (F = 8.584, p < 0.001), suggesting that at least one group mean is significantly different from the others (Table 7).
Levene’s Test for Homogeneity of Variances was not significant (p = 0.175), indicating that the assumption of equal variances across groups is not violated. The Tukey HSD post-hoc test reveals significant mean differences between Groups A and B and between Groups A and C (both p < 0.001), but not between B and C (p = 0.794). This suggests that while Group A significantly differs from Groups B and C, Groups B and C are not significantly different from each other in their mean values for the composite variable considered. So, those who identified at level A in digital competencies (A1 = Newcomer and A2 = Explorer) have a significantly lower mean (M = 2.67, SD = 0.60) in their use of digital tools for assessment than those who self-reported at B or C levels.
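For readers who wish to reproduce this type of comparison, a minimal sketch follows, assuming a hypothetical DataFrame with a 'composite' column (mean of the seven practice items) and a 'group' column coding the self-reported competence band (A, B or C); it relies on SciPy and statsmodels, which may differ from the statistical package the authors actually used.

```python
# Illustrative sketch (not the authors' script): one-way ANOVA of the composite
# digital-assessment-practice score across self-reported competence groups.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: 'composite' = mean of the seven items, 'group' in {"A","B","C"}.
df = pd.read_csv("survey_scores.csv")  # placeholder file name

groups = [df.loc[df["group"] == g, "composite"] for g in ("A", "B", "C")]

print(stats.levene(*groups))    # Levene's test for homogeneity of variances
print(stats.f_oneway(*groups))  # one-way ANOVA: F statistic and p-value

# Tukey HSD post-hoc test for pairwise differences between the three groups
tukey = pairwise_tukeyhsd(endog=df["composite"], groups=df["group"], alpha=0.05)
print(tukey.summary())
```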
We conducted two more EFAs using Principal Component Analysis to identify the latent factors of advantages and disadvantages of using digital assessments. The first one, regarding the items that reflect advantages, revealed that all the variables are reflected in one latent construct. The Kaiser–Meyer–Olkin measure was 0.882, indicating very good sampling adequacy for the EFA. The extraction communalities after the analysis ranged from 0.439 (‘It helps to reduce the teachers’ workload’) to 0.801 (‘Individual approach. Better quality feedback’), indicating that a large amount of variance was shared among the items. Only one factor was extracted, explaining 66.86% of the total variance. The Cronbach’s alpha for the 10-item scale was 0.94, indicating very good internal consistency. The second EFA aimed to identify the latent factors of disadvantages of the digital assessment process. The Kaiser–Meyer–Olkin measure (0.93) and Bartlett’s Test of Sphericity (chi-square = 1096.52, df = 36, p < 0.001) indicated that our items are suitable for factor analysis. Again, only one factor was extracted, accounting for 83.71% of the variance of the individual items (disadvantages in the digital assessment process).
To explore the differences between groups of respondents with different levels of self-reported digital competencies, we conducted a one-way ANOVA analysis for Group A (level A, which includes A1 = newcomer and A2 = explorer), Group B (level B, which includes B1 = integrator and B2 = expert) and Group C (self-reported C1 = leader and C2 = pioneer).
The ANOVA explores the differences in the mean values of two composite variables (advantages in the digital assessment process and disadvantages in the digital assessment process) across three groups (self-reported digital competencies). Regarding the advantages, descriptive statistics indicate only small differences between Group A (level A, which includes A1 = newcomer and A2 = explorer), with a mean of 3.55 (SD = 1.01), Group B (level B, which includes B1 = integrator and B2 = expert), with a mean of 4.06 (SD = 0.56), and Group C (self-reported C1 = leader and C2 = pioneer), with a mean of 3.82 (SD = 0.88). As can be observed from Table 8, the ANOVA results show a nonsignificant difference between groups (F = 0.735, p = 0.483). For the composite variable that reflects disadvantages, the means for the considered groups were Group A (M = 2.29, SD = 1.85), Group B (M = 2.56, SD = 1.61) and Group C (M = 2.86, SD = 1.45). The ANOVA (F = 2.117, p = 0.12) showed that there are no differences between groups regarding the perception of disadvantages in the digital assessment process.

5. Discussion

Looking at our data, the fact that 78.33% of respondents are female may reflect the evolving landscape of Romanian higher education, where female educators are increasingly taking prominent roles in academia. This distribution also underscores the willingness of educators, regardless of gender, to contribute their insights towards advancing digital assessment practices. This gender-related shift in academia is not unique to Romania [27]; it resonates with the global discourse on gender equity in education [28]. Furthermore, the representation of female educators in our study adds depth to the broader conversation on diversity and inclusion in Romanian higher education settings [29].
Moreover, a majority of participants (83.33%) are actively engaged in the direct practice of teaching and learning. Their involvement as classroom teachers, trainers, assessors and tutors positions them as primary stakeholders in the digital assessment process. None of the respondents identified themselves as staff developers. This absence in our sample could signal a gap in the existing support structures, highlighting an area that warrants further investigation and attention.
Our data also indicate a varied spectrum of experience levels with different technology-based learning environments, ranging from novices to seasoned practitioners. This diversity in years of academic experience enables us to have a comprehensive perspective on the challenges and opportunities faced by educators in leveraging technology for enhanced learning environments. As [30] indicated, this information will be instrumental in tailoring recommendations and strategies to cater to the varying levels of technological proficiency among educators.
The data in Figure 2 paint a clear picture of the proactive integration of digital tools in the assessment process among higher education teachers. They underscore a collective effort to enhance evaluation practices through technology, ranging from tracking learning progress to designing assessments and supporting self-reflection. The consistent average score of 3.35 further highlights the robust and reliable use of digital tools in assessment practices. This aligns with the findings of [24,31], revealing a growing reliance on digital tools for assessment purposes.
Our results also highlight a range of practices when it comes to tracking student progress through digital assessment formats (Figure 3). While some teachers have fully embraced digital tools, there is room for growth in encouraging broader adoption and exploring the diverse benefits they can offer in terms of accurate and efficient assessment practices. The positive reception of technology in assessment is further supported by a study conducted by [32], demonstrating that technology-enhanced assessments often lead to improved student outcomes.
Additionally, our findings on the diversified spectrum of feedback practices among higher education teachers, as depicted in Figure 4, align with the existing literature in the field [33]. From those who perceive feedback as non-essential in their particular environment to those who systematically employ digital approaches to provide feedback, the versatility of practices points to the adaptability of educators in tailoring feedback strategies to their instructional contexts. There is, indeed, no singularly prescribed method for providing feedback, but rather a variety of approaches guided by educators’ pedagogical styles, course dynamics and student preferences. Studies by [33,34] emphasize the importance of recognizing the diversity in feedback practices, attributing it to educators’ distinct pedagogical styles, the dynamics of their courses and the preferences of their students. By incorporating these insights, our study contributes to the ongoing discourse on the multifaceted nature of feedback methodologies in higher education.
The diversity in assessment practices, as shown in Figure 1, resonates with the literature on pedagogy and evaluation methods. In particular, research by [5] emphasizes the importance of adopting a comprehensive approach to assess student performance, considering various stages of learning. As such, initial diagnosis assessments, designed to gauge students’ baseline knowledge or skills, yielded a moderate average score of 2.91. This suggests a potential for refinement in the design and implementation of these assessments.
Table 4 reflects a good integration of diverse digital assessment methods. Higher education teachers exhibit a well-rounded toolkit of assessment approaches, demonstrating an awareness of the benefits that technology can bring to the teaching and learning process. The scores provide insights into the relative frequency of usage, serving as a foundation for continued exploration and the refinement of digital assessment strategies in higher education settings towards the goal of enhanced learning outcomes. This aligns with what experts like [6,14,24] have found about the benefits of using various digital tools in assessment. It amounts to a continuous process of checking and adjusting to ensure technology is used in the way that best serves students.
From the results we obtained (Table 5), it is clear that teachers recognize the multifaceted nature of skills and knowledge necessary for students’ success, encompassing subject-specific expertise, critical thinking, communication and digital literacy. These findings are in line with the research of [9], affirming that educators acknowledge the importance of subject-specific expertise, critical thinking, communication and digital literacy for student achievement. Moreover, our findings underscore a sound understanding of the diverse competences essential for academic and professional growth in higher education. The diverse landscape of assessment practices revealed by the responses shows that higher education teachers are able to address the specific needs and abilities of their students and to assess their knowledge and skills on multiple dimensions while also encouraging individual growth and development. These results are consistent with the study of [35], who argue that a diversified approach to assessment allows for a more comprehensive evaluation of students’ knowledge and skills across multiple dimensions.
In the realm of higher education, educators face a myriad of challenges when it comes to integrating digital tools into their assessment practices. As Table 6 depicts, these challenges can be categorized into distinct areas, each presenting its own unique set of obstacles.
First and foremost, educators grapple with the task of selecting the most suitable digital tool for their assessments. Choosing an app or a digital instrument is a recurring theme in the educational technology literature [5] and can be a difficult task which demands meticulous research and evaluation. Therefore, when deciding to introduce technology into our teaching activity, Henshaw [36] suggests considering the answers to at least the following three questions: Why am I doing this activity/task? What do I want students to do? How can I make it happen? Henshaw’s framework agrees with the recommendations put forth by [24], emphasizing the importance of educators critically evaluating the purpose and pedagogical impact of technology in their teaching practices.
Selecting appropriate online assessment methods is a critical aspect of integrating technology into education and needs to be explained, simulated and introduced gradually. This process is well supported by the existing literature. As noted by [6], a thoughtful and gradual introduction of digital assessment methods is crucial for their effective implementation. Initially, it is important to stick to one or two digital/online applications and tools that students are/are becoming familiar with. Once familiarity is established, educators can have a specific, purposeful and engaging task in mind, and there will certainly be more options in terms of tools available to accomplish it. Aligning with [37]’s research on the positive impact of varied digital tools on student engagement and learning outcomes, there are some questions to consider when deciding upon the digital tool to use: Do I need an account? Is it free? And if not—is it worth it? Does the app/tool allow us to do something we cannot do with other tools we already use (e.g., integrate multimedia)? Does it have any special equipment requirements? Can it be used on any device, with any browser, or even offline? Is it safe, intuitive and easy to use? How reliable and scalable is this tool? Does it allow assessment (i.e., can grades/points be awarded)? Is it possible to obtain reports to track progress? Are there elements of gamification? Are there different ways of giving feedback (e.g., written, oral, emoji, meme, social media, audio, video, etc.)?
Secondly, supervision during testing can be problematic. Ensuring all students receive proper oversight in a digital environment can be logistically challenging. Then, poor internet connectivity or technical glitches can unexpectedly disrupt the evaluation process, leading to delays. Educators have to be ready to find swift and effective solutions to such problems and be able to address hardware limitations to enhance software performance. The technological landscape is ever-evolving and teachers constantly seek out opportunities for formal education (attending workshops, watching tutorials) in order to stay well informed and proficient in this rapidly advancing field. The challenges associated with remote supervision echo the concerns raised by [38] in the study on online assessments. Moreover, [39] highlights the importance of providing personalized assessments as a priority for many educators. But tailoring assessments to meet the individual needs of students is an endeavor that requires careful consideration and planning. However, this personalized approach ultimately leads to more meaningful learning experiences. Research by [40] demonstrates the positive impact of personalized assessments on student engagement and learning outcomes. Lastly, time remains a constant constraint and balancing the demands of learning and configuring new tools with other responsibilities can be a daunting task.
In this tapestry of digital education, each challenge acts as a distinct puzzle piece. Every solution signifies a stride towards progress and innovation. Thus, from the educators’ responses regarding the risks associated with the use of digital tools in assessments, several concerns emerged, each demanding nuanced consideration.
One predominant puzzle piece in this landscape is the concern surrounding academic integrity, particularly the apprehension surrounding plagiarism, academic fraud and the unauthorized use of online resources, also revealed by previous studies [41,42]. Despite the enthusiasm and technological anticipation surrounding the educational potential of new digital tools, when asked about ensuring academic integrity, the respondents admitted that there still are many aspects to be specified and/or refined. The vast majority support the idea that a revision of digital assessment principles is needed, from the perspective of students, teachers and the institutional framework alike, which resonates strongly within the literature [43].
Another pivotal concern revolves around online responsibilities and etiquette. This encompasses a wide array of considerations, from grappling with issues of anonymity to contending with trolling, memes, jokes, cyberbullying and the need to uphold proper netiquette, as previous studies have also shown [44].
Academic misconduct and cybercrime represent a formidable set of challenges, as documented in several studies [45,46]. These challenges span from combating activities such as illegal downloads, plagiarism, copyright infringement, academic cheating, exam fraud, data or result fabrication, to addressing subtler activities like the utilization of essay writing services (paper mills, for instance). Tackling these issues necessitates robust measures to uphold academic rigor and fairness, as highlighted by [47], in the face of evolving digital threats.
The realm of cybersecurity constitutes yet another puzzle piece, with the literature emphasizing the need to safeguard personal data, prevent incidents like “Zoom bombing” and address technical glitches during exams [48]. Additionally, the potential for legal disputes and the necessity of implementing surveillance and monitoring measures constitute critical concerns in the digital assessment landscape, being recurrent themes in the literature [49].
The advent of advanced technologies, particularly the use of AI applications, adds a layer of complexity [50,51]. Of particular note are generative AI tools like ChatGPT, which introduce novel considerations and challenges. As [50] suggests, educators should consider questions such as: Are students who use, for example, AI-based text generators to write papers really cheating? Is AI a tool that violates academic integrity? Will it encourage scientific fraud? How could artificial intelligence be used to personalize and differentiate learning? What responsibilities do students have regarding the proper citation of sources when using AI text generators in their academic work? Is a new definition of plagiarism needed? Will the language in school policy documents and similar texts be revised? Do we need AI literacy [52]?
Hence, critical thinking takes on heightened significance in the digital sphere. Educators expressed concerns about cultivating discernment in online interactions, particularly in contexts such as teacher–student relationships on social media. Previous studies, such as that conducted by [53], stress the need to address challenges associated with combating misinformation and disinformation and managing the increased exposure brought about by audio-visual interactions.
Taken together, these insights shed light on the multifaceted nature of risks that educators associate with the integration of digital tools into assessment practices. From safeguarding academic integrity to managing the complexities of online interactions and grappling with the implications of advanced technologies, these concerns bring to the forefront the imperative for well-informed strategies and ethical guidelines in the realm of digital assessment. Although integrating digital tools in assessment is a complex task, it is a process that can be broken down into phases to avoid being overwhelmed, as articulated by [54].
Among the critical threads are issues of accessibility and inequalities, as emphasized by the Berkman Klein Center for Internet & Society [55]. Beyond the conventional rural–urban divide, disparities in internet connectivity, access to digital devices and proficiency with tools intersect with variables such as age, gender, ethnicity, race, education and socioeconomic status.
It is important that students are informed about the policies their institution has in place and how these ensure that the tools being used work for everyone (e.g., whether they require facial recognition or voice recognition). We need to make sure that all students have the same rich digital experience [56].
In the midst of the digital transformation, concerns about security, safety and privacy echo through the virtual corridors of education. Online educational platforms generally collect a great amount of student data, yet it is often unclear how the data are collected, where they are stored and how they might be used. Finefter-Rosenbluh and Perrotta [57] emphasize that all educators should make sure that students have the skills to manage personal data profiles and their online social identities. Moreover, academic institutions must help protect student privacy (e.g., by opting for less invasive technologies, adopting policies that mitigate privacy concerns, developing programs and resources to address online privacy concerns, etc.). Likewise, policy makers need to recognize and debate the ethical issues linked to the rapidly increasing amounts of educational data being collected and stored, and to center students’ rights to access and control their own data.
The journey into the digital realm also unveils the challenge of protecting users’ safety and wellbeing. The 2021 report on digital ethics issued by the Berkman Klein Center for Internet & Society [55] provides crucial evidence that students are not a monolithic group. For example, “surveillance technologies may be useful for students who have a learning disability to help tailor content to their specific needs. At the same time, the collection, storage, and use of student data must be overseen, particularly for vulnerable groups” (p. 25). Moreover, students may change their behaviors for fear that those surveilling them with technologies such as closed-circuit television (CCTV) and online proctoring tools may misinterpret their actions or ideas. Such tools have come under increased scrutiny because some fail to reliably detect students of color.
New technologies, especially those relying on artificial intelligence or data analytics, are exciting but also present ethical challenges that deserve our attention and action. AI technologies have the capacity to make predictions and draw inferences about individuals and groups of students by algorithmically detecting patterns in large volumes of data. Teachers must understand the technology they want to use; otherwise, all kinds of ethical problems can arise. Finding balance is the key.
Lastly, our main goal was to truly understand what educators require in this ever-changing educational environment. We have identified some critical focal points, and these findings guide us towards creating proposals that not only meet these needs, but also foster an environment where education is flexible and seamlessly integrated with technology.
One critical focal point revolves around the empowerment of educators to take control of their assessment practices. This empowerment is intricately linked to continuous learning and professional growth, including upskilling, reskilling and the provision of high-quality teacher training [58,59]. Educators need to cultivate online assessment literacy, acquiring a nuanced skill set tailored to the digital realm.
Another focal point, recognizing the significance of flexibility in learning, advocates for the exploration of hybrid models. These models, as discussed in the works of [60], present diverse learning opportunities tailored to individual students’ needs, seamlessly integrating traditional and online modes of education.
A paradigm shift in assessment practices (rethinking assessment tasks) constitutes another focal point [61]. Encouraging a fresh perspective involves crafting distinctive, creative and open-book assignments, fostering true digital literacy beyond traditional testing. This approach blends ingredients of creativity, imagination, interactive learning, open conversations, the skill of navigating difficult and uncomfortable conversations (e.g., the use of controversy pedagogy) and collaborative efforts to make this happen.
Responsibly sharing knowledge is a pivotal focal point, encompassing an understanding of copyright, Creative Commons licenses and the utilization of Open Educational Resources (OERs). Opportunities for open pedagogical methods, including the incorporation of tools like ChatGPT in open education [62,63], aim to personalize content and explore innovative teaching methods.
The necessity for AI awareness emerges as a crucial focal point: everyone involved needs a shared understanding of AI as an integral part of the educational journey. Recent studies argue that understanding AI (having basic AI literacy) is becoming an essential skill for educators and students alike [64,65].
The rise of AI underscores the need to adapt and redesign curricula. For instance, Oregon State University [66] has introduced special icons that facilitate the coherent integration of AI technology into syllabi and assignments.
In the age of AI, a transformative shift in the concept of pedagogy, termed “pedAIgogy” by Donald Clark [67], emphasizes the need for educators to (re)learn to teach. Crafting prompts for a more personalized learning journey, as seen in the practices at the University of Sydney [68], signifies a shift to adjust to the evolving educational environment while preserving core principles of effective teaching.

Limitations

Our research has taken a comprehensive look at various digital assessment methods and has provided a nuanced understanding of how teachers employ digital tools for subject-specific assessment purposes. It emphasizes not just adopting the latest tools, but using them thoughtfully to enhance the overall assessment experience. Additionally, it demonstrates a thoughtful consideration of ethical implications related to the integration of generative AI and it underscores the importance of maintaining a balance between automated evaluation and human judgment.
However, in our pursuit to understand how teachers in higher education use digital assessment, we have encountered some limitations. First, although the group of respondents represents a diverse range of educators, we focused on specific universities in Romania; in the future, we should invite teachers from other academic settings and countries. While our findings are relevant for our sample, we need to be careful about generalizing them to other educational settings. Secondly, our study adopted a cross-sectional design, but we acknowledge the potential benefits of conducting longitudinal research; follow-up studies could delve deeper into how digital assessment practices evolve over time. The small sample size of 60 participants also reflects the fact that this field is still emerging in Romanian higher education. Notwithstanding the limited sample, the knowledge obtained from this research constitutes a fundamental contribution to the discussion of digital assessment practices in higher education, providing insightful viewpoints that can guide policy choices, instructional strategies and future research projects in Romania’s quickly changing educational environment.
Then, we need to keep in mind that the information we gathered is based on what teachers reported about their practices and views. Some respondents might have given answers they considered socially acceptable, or might have remembered their digital assessment practices imperfectly, which could introduce some inaccuracies into our data.
Surveys are a useful tool, but they have their limits. The questions we asked cannot capture every detail of what teachers do, and the way the questions were worded, and how specific they were, may have influenced how teachers understood and answered them.
Moreover, although digital assessment tools have come a long way, they probably cannot cover all the different ways teachers assess their students; some subjects or skills might still need traditional methods. And with the introduction of technologies like ChatGPT at the end of 2022, the outlook on assessment is changing dramatically. By recognizing these limitations, we are setting the stage for future research on digital assessment practices and on how teachers use technology in education as a whole. Some important questions therefore remain to be answered: How do demographic factors (such as age, experience and technological proficiency) influence teachers’ use of digital assessment methods? How can the implementation of digital assessment tools be optimized to enhance the overall assessment process in higher education? How do teachers perceive the impact of digital assessment on students’ learning experiences and outcomes? What specific pedagogical strategies are associated with the most effective digital assessment practices in higher education?

6. Conclusions

Digital assessment practices in Romanian higher education have become an integral part of the teaching process [31,54]. Teachers are embracing a diverse array of digital tools, including 3D, VR or AR environments, Open Educational Resources, annotation tools and dynamic discussion forums. Yet, this shift towards digital assessment is not without its complexities. The emergence of generative AI, powered by large language models such as those behind ChatGPT, introduces a new dynamic. These advanced technologies offer unprecedented potential for automating assessment tasks (hence more efficient grading, particularly in scenarios with a large volume of student work), providing tailored feedback and offering personalized guidance. Furthermore, generative AI models can adapt and improve based on the diverse range of student responses they encounter; this adaptability can result in the development of more sophisticated and contextually relevant assessments over time, in the manner discussed by Williams [1]. However, these technologies also raise valid concerns about finding the right balance between automated and human evaluation, as well as crucial ethical considerations. When addressing the “honesty” of digital assessments, most professionals start from the wrong assumptions: How do we stop cheating and copying online? What digital assessment security techniques should we use? We do not believe that this is an effective way to address the issue. Rather, as [69] suggests, we must ask ourselves: “Why do students cheat? How can we make students confident enough to take assessments without cheating? What tools and applications address the weakest links in the digital assessment lifecycle?”.
It is clear that we need maturity in understanding digital assessment, not only with regard to adapting to technology, but also in overcoming teacher frustration around technological adoption; participating in adequate training in digital assessment methods and raising awareness in universities; increasing “students’ genuine engagement and deepen[ing] their sense of belonging in order to change their motivations and mitigate cheating” [69]; redesigning assessment tasks in multimodal formats; abandoning grid-type (multiple-choice) tests, or at least ensuring a sufficiently generous diversity of question types that require higher-level critical thinking skills [70]; recognizing that computer-assisted assessment for multiple-choice testing involves “significant institutional commitment, technical infrastructure, and high levels of quality assurance practices” [71]; and, possibly, instituting “academic integrity contracts” for students to sign. In general, contracts have a psychological effect on people, and examinees are more likely to be authentic if they sign a form of contract, as [72] notes.
In conclusion, our study reveals the dynamics of evaluation practices in the Romanian academic context, covering the frequency with which digital tools are used and the range of assessment methods employed. Teachers are preoccupied with understanding the evolving educational landscape, leveraging digital tools and inventive methods to elevate their students’ learning experience and achieve effective digital assessment. At the same time, respondents admit that they require adequate training and updated skills. Based on these results, and in line with what other researchers have suggested, we believe teacher digital competence training should also include open-access online courses with the nano-MOOC structure (NOOCs), cMOOCs, mini videos, online micro-courses, Web 2.0 tools and online techno-pedagogical handbooks [73]. Another option could be designing platforms that self-assess teachers’ digital competence and propose an adequate training plan.
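To illustrate this last suggestion, the sketch below shows one way such a self-assessment platform could map a teacher’s self-reported DigCompEdu level onto a suggested training plan. The level labels follow the A1–C2 scale used in our questionnaire, while the function name, thresholds and plan contents are purely illustrative assumptions, not a description of an existing system or a recommendation validated by this study.

```python
# Illustrative sketch only: mapping a self-assessed DigCompEdu level (A1-C2)
# to a suggested digital-assessment training plan. The plan contents are
# hypothetical examples, not outputs of an existing platform.
TRAINING_PLANS = {
    "A1": ["introductory NOOC on digital assessment basics", "guided LMS quiz workshop"],
    "A2": ["nano-MOOC on formative e-assessment", "peer observation of a digital exam"],
    "B1": ["micro-course on rubric and e-portfolio design", "hands-on session with feedback tools"],
    "B2": ["cMOOC on assessment analytics", "workshop on academic integrity in online exams"],
    "C1": ["mentoring colleagues in digital assessment", "drafting institution-level assessment guidelines"],
    "C2": ["leading innovation projects", "piloting AI-assisted assessment responsibly"],
}

def suggest_training_plan(self_assessed_level: str) -> list[str]:
    """Return a suggested training plan for a self-assessed DigCompEdu level."""
    level = self_assessed_level.strip().upper()
    if level not in TRAINING_PLANS:
        raise ValueError(f"Unknown DigCompEdu level: {self_assessed_level!r}")
    return TRAINING_PLANS[level]

if __name__ == "__main__":
    # Example: a teacher who rates themselves as a B1 "Integrator"
    for activity in suggest_training_plan("B1"):
        print("-", activity)
```

In practice, such a mapping would need to be refined with finer-grained diagnostic questions and institution-specific training offers, but the basic level-to-plan lookup captures the idea of personalizing professional development to the teacher’s self-assessed competence.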
Our findings contribute to the global discourse on digital assessment practices, emphasizing the need for a nuanced approach that combines technological adaptation, effective training and a focus on the psychological aspects of academic integrity. As we navigate this novel direction, personalized teacher training emerges as a key strategy, tailoring competence development to individual needs and paving the way for a future where educators are well equipped to navigate the challenges of digital education.

Author Contributions

Conceptualization, G.G. and R.A.B.; methodology, G.G. and L.G.Ț.; software, G.G. and L.G.Ț.; validation, G.G., R.A.B. and L.G.Ț.; formal analysis, G.G. and L.G.Ț.; investigation, G.G.; resources, R.A.B.; data curation, G.G.; writing—original draft preparation, R.A.B. and G.G.; writing—review and editing, G.G., R.A.B. and L.G.Ț.; visualization, L.G.Ț.; supervision, L.G.Ț. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partially funded with support from the European Commission and has been approved by the Spanish National Agency, grant number 2020-1-ES01-KA226-HE-095485, project title “D-EvA Practical Skills Evaluation with Digital Technologies in Teacher Education”.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Scientific Council of University Research and Creation of West University of Timisoara (no. 88093/23.11.2023).

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Williams, P. AI, Analytics and a New Assessment Model for Universities. Educ. Sci. 2023, 13, 1040. [Google Scholar] [CrossRef]
  2. Saville, N. Digital Assessment. In Digital Language Learning and Teaching: Research, Theory, and Practice; Carrier, M., Damerow, R.M., Bailey, K.M., Eds.; Global Research on Teaching and Learning English; Routledge: London, UK, 2017; pp. 198–207. [Google Scholar]
  3. Wall, A.F.; Hursh, D.; Rogers, J. Assessment for Whom: Repositioning Higher Education Assessment as an Ethical and Value-Focused Social Practice. Res. Pract. Assess. 2014, 9, 5–17. [Google Scholar]
  4. Devran, B.Ç.; Elçi, A. Traditional Versus Digital Assessment Methods: Faculty Development. In Assessment, Testing, and Measurement Strategies in Global Higher Education; Railean, E., Ed.; IGI Global: Philadelphia, PA, USA, 2020; pp. 20–34. [Google Scholar]
  5. Timmis, S.; Broadfoot, P.; Sutherland, R.; Oldfield, A. Rethinking assessment in a digital age: Opportunities, challenges and risks. Br. Educ. Res. J. 2016, 42, 454–476. [Google Scholar] [CrossRef]
  6. Heil, J.; Ifenthaler, D. Online assessment in higher education: A systematic review. Online Learn. 2023, 27, 187–218. [Google Scholar] [CrossRef]
  7. Archer, E.; Bulut, O.; Zenisky, A.; Grover, R.; Randall, J. Editorial: Online assessment for humans: Advancements, challenges and futures for digital assessment. Front. Educ. 2023, 8, 2. [Google Scholar] [CrossRef]
  8. Punie, Y.; Redecker, C. (Eds.) EUR 28775 EN; European Framework for the Digital Competence of Educators: DigCompEdu. Publications Office of the European Union: Luxembourg, 2017. [CrossRef]
  9. Caena, F.; Redecker, C. Aligning teacher competence frameworks to 21st century challenges: The case for the European Digital Competence Framework for Educators (DIGCOMPEDU). Eur. J. Educ. 2019, 54, 356–369. [Google Scholar] [CrossRef]
  10. Refnaldi; Zaim, M.; Moria, E. Teachers’ Need for Authentic Assessment to Assess Writing Skill at Grade VII of Junior High Schools in Teluk Kuantan. In Proceedings of the Fifth International Seminar on English Language and Teaching (ISELT 2017), Kota Padang, Indonesia, 9–10 May 2017; Volume 110, pp. 179–185. [Google Scholar]
  11. Gomez, M.J.; Ruipérez-Valiente, J.A. Analyzing the Evolution of Digital Assessment in Education Literature Using Bibliometrics and Natural Language Processing. In Handbook of Research on Digital-Based Assessment and Innovative Practices in Education; Keengwe, J., Ed.; IGI Global: Philadelphia, PA, USA, 2022; pp. 178–200. [Google Scholar]
  12. Viñoles-Cosentino, V.; Esteve-Mon, F.M.; Llopis-Nebot, M.A.; Adell-Segura, J. Validation of a Platform for Formative Assessment of Teacher Digital Competence in Times of COVID-19. Ried-Rev. Iberoam. Educ. Distancia 2021, 24, 87–106. [Google Scholar] [CrossRef]
  13. Bearman, M.; Nieminen, J.H.; Ajjawi, R. Designing assessment in a digital world: An organising framework. Assess. Eval. High. Educ. 2023, 48, 291–304. [Google Scholar] [CrossRef]
  14. Moreira, J.A.; Nunes, C.S.; Casanova, D. Digital Competence of Higher Education Teachers at a Distance Learning University in Portugal. Computers 2023, 12, 169. [Google Scholar] [CrossRef]
  15. Basilotta-Gómez-Pablos, V.; Matarranz, M.; Casado-Aranda, L.; Otto, A. Teachers’ Digital Competencies in Higher Education: A Systematic Literature Review. Int. J. Educ. Technol. High. Educ. 2022, 19, 8. [Google Scholar] [CrossRef]
  16. Basantes-Andrade, A.; Casillas-Martín, S.; Cabezas-González, M.; Naranjo-Toro, M.; Guerra-Reyes, F. Standards of Teacher Digital Competence in Higher Education: A Systematic Literature Review. Sustainability 2022, 14, 13983. [Google Scholar] [CrossRef]
  17. Mengual-Andrés, S.; Roig-Vila, R.; Mira, J.B. Delphi study for the design and validation of a questionnaire about digital competences in higher education. Int. J. Educ. Technol. High. Educ. 2016, 13, 11. [Google Scholar] [CrossRef]
  18. Instefjord, E.; Munthe, E. Preparing pre-service teachers to integrate technology: An analysis of the emphasis on digital competence in teacher education curricula. Eur. J. Teach. Educ. 2016, 39, 77–93. [Google Scholar] [CrossRef]
  19. Muammar, S.; Bin Hashim, K.F.; Panthakkan, A. Evaluation of digital competence level among educators in UAE Higher Education Institutions using Digital Competence of Educators (DigComEdu) framework. Educ. Inf. Technol. 2023, 28, 2485–2508. [Google Scholar] [CrossRef] [PubMed]
  20. Fernández-Batanero, J.M.; Román-Graván, P.; MontenegroRueda, M.; López-Meneses, E.; Fernández-Cerero, J. Digital Teaching Competence in Higher Education: A Systematic Review. Educ. Sci. 2021, 11, 689. [Google Scholar] [CrossRef]
  21. de Obesso, M.d.l.M.; Núñez-Canal, M.; Pérez-Rivero, C.A. How do students perceive educators’ digital competence in higher education? Technol. Forecast. Soc. Change 2023, 188, 122284. [Google Scholar] [CrossRef]
  22. Vasconcelos, S.; Balula, A. Promoting transparent assessment in higher education—Preliminary insight from the Digi-prof project. In Proceedings of the INTED 2023—17th International Technology, Education and Development Conference Proceedings, Valencia, Spain, 6–8 March 2023; pp. 2767–2772. [Google Scholar] [CrossRef]
  23. Palacios-Rodríguez, A.; Del Carmen Llorente Cejudo, M.; Cabero-Almenara, J. Editorial: Educational Digital Transformation: New Technological Challenges for Competence Development. Front. Educ. 2023, 8, 1267939. [Google Scholar] [CrossRef]
  24. Jurāne-Brēmane, A. Digital Assessment in Technology-Enriched Education: Thematic Review. Educ. Sci. 2023, 13, 522. [Google Scholar] [CrossRef]
  25. Sjöberg, J.L.P. University Teachers’ Ambivalence about the Digital Transformation of Higher Education. Int. J. Learn. Teach. Educ. Res. 2019, 18, 133–149. [Google Scholar] [CrossRef]
  26. JISC. Reimagining Digital Assessment in Higher Education. Available online: https://beta.jisc.ac.uk/guides/reimagining-digital-assessment-in-higher-education (accessed on 2 November 2020).
  27. Ilie, A.G.; Dumitriu, D.E.; Sârbu, R. Can Universities Close the Gender Gap? Exploring and Measuring the Gender Gap in Romanian Higher Education. J. East. Eur. Res. Bus. Econ. 2014, 2014, 49962. [Google Scholar] [CrossRef]
  28. World Economic Forum. Global Gender Gap Report. June 2023. Available online: https://www3.weforum.org/docs/WEF_GGGR_2023.pdf (accessed on 12 September 2023).
  29. Robayo-Abril, M.; Chilera, C.P.; Rude, B.; Costache, I. Gender Equality in Romania: Where Do We Stand?—Romania Gender Assessment©; World Bank: Washington, DC, USA, 2023; Available online: http://hdl.handle.net/10986/40666 (accessed on 11 May 2020).
  30. Washington, L.D.; Penny, G.R.; Jones, D. Perceptions of Community College Students and Instructors on Traditional and Technology-Based Learning in a Hybrid Learning Environment. J. Instr. Pedagog. 2020, 23. Available online: https://www.aabri.com/manuscripts/193074.pdf (accessed on 1 October 2023).
  31. Mosoiu, O.; Popa, L.O. Student Online Assessment: A Quantitative Snapshot Of Teachers’ Challenges And Viewpoints In The Context Of Digitalisation. In Proceedings of the 14th International Conference on Education and New Learning Technologies, Palma, Spain, 4–6 July 2022. [Google Scholar] [CrossRef]
  32. Sweeney, T.; West, D.; Groessler, A.; Haynie, A.; Higgs, B.; MacAulay, J.; Mercer-Mapstone, L.; Yeo, M. Where’s the Transformation? Unlocking the Potential of Technology-Enhanced Assessment. Teach. Learn. Inq. ISSOTL J. 2016, 5. [Google Scholar] [CrossRef]
  33. Henderson, M.; Selwyn, N.; Aston, R. What Works and Why? Student Perceptions of ‘Useful’ Digital Technology in University Teaching and Learning. Stud. High. Educ. 2015, 42, 1567–1579. [Google Scholar] [CrossRef]
  34. Henderson, M.; Ryan, T.; Phillips, M. The Challenges of Feedback in Higher Education. Assess. Eval. High. Educ. 2019, 44, 1237–1252. [Google Scholar] [CrossRef]
  35. Andersson, C.; Palm, T. The Impact of Formative Assessment on Student Achievement: A Study of the Effects of Changes to Classroom Practice after a Comprehensive Professional Development Programme. Learn. Instr. 2017, 49, 92–102. [Google Scholar] [CrossRef]
  36. Henshaw, F. These Are a Few of My Favorite Tools: Choosing the Right Tech for the Right Task. Available online: https://fltmag.com/favorite-tools-choosing-tech/ (accessed on 21 February 2021).
  37. Lin, L.-C.; Hung, I.; Kinshuk, K.; Chen, N.-S. The Impact of Student Engagement on Learning Outcomes in a Cyber-Flipped Course. Educ. Technol. Res. Dev. 2019, 67, 1573–1591. [Google Scholar] [CrossRef]
  38. Premawardhena, N.C. Remote Supervision: A Boost for Graduate Students; Springer: Berlin/Heidelberg, Germany, 2022; pp. 634–644. [Google Scholar]
  39. Zhang, L.; Basham, J.D.; Yang, S. Understanding the Implementation of Personalized Learning: A Research Synthesis. Educ. Res. Rev. 2020, 31, 100339. [Google Scholar] [CrossRef]
  40. Šimonová, I.; Poulová, P. Personalized Learning and Assessment; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2014; pp. 159–165. [Google Scholar]
  41. Eaton, S.E. Handbook of Academic Integrity; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar] [CrossRef]
  42. Holden, O.L.; Norris, M.E.; Kuhlmeier, V.A. Academic Integrity in Online Assessment: A Research Review. Front. Educ. 2021, 6, 639814. [Google Scholar] [CrossRef]
  43. Sabrina, F.; Azad, S.; Sohail, S.; Thakur, S. Ensuring Academic Integrity in Online Assessments: A Literature Review and Recommendations. Int. J. Inf. Educ. Technol. 2022, 12, 60–70. [Google Scholar] [CrossRef]
  44. Costa, R.S.; Ostáriz, P.L.; Mauri-Medrano, M.; Moreno-Guerrero, A.-J. Netiquette: Ethic, Education, and Behavior on Internet—A Systematic Literature Review. Int. J. Environ. Res. Public Health 2021, 18, 1212. [Google Scholar] [CrossRef]
  45. Awasthi, S. Plagiarism and Academic Misconduct: A Systematic Review. DESIDOC J. Libr. Inf. Technol. 2019, 39, 94–100. [Google Scholar] [CrossRef]
  46. Johnson, C.; Davies, R.A.; Reddy, M.C.M. Using Digital Forensics in Higher Education to Detect Academic Misconduct. Int. J. Educ. Integr. 2022, 18, 12. [Google Scholar] [CrossRef]
  47. Davis, A. Academic Integrity in the Time of Contradictions. Cogent Educ. 2023, 10, 2289307. [Google Scholar] [CrossRef]
  48. Grandinetti, J. “From the Classroom to the Cloud”: Zoom and the Platformization of Higher Education. First Monday 2022, 27, 2. [Google Scholar] [CrossRef]
  49. Ulven, J.B.; Wangen, G. A Systematic Review of Cybersecurity Risks in Higher Education. Future Internet 2021, 13, 39. [Google Scholar] [CrossRef]
  50. Gu, J.J.; Wang, X.L.; Li, C.N.; Zhao, J.H.; Fu, W.J.; Liang, G.Q.; Qiu, J. AI-enabled image fraud in scientific publications. Patterns 2022, 3, 6. [Google Scholar] [CrossRef] [PubMed]
  51. Van Noorden, R.; Perkel, J.M. AI and science: What 1,600 researchers think. Nature 2023, 10, 621. [Google Scholar] [CrossRef] [PubMed]
  52. European Commission, Department of Education, Youth, Sport and Culture. Publications Office of the European Union. Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators. 2022. Available online: https://education.ec.europa.eu/news/ethical-guidelines-on-the-use-of-artificial-intelligence-and-data-in-teaching-and-learning-for-educators (accessed on 11 May 2020).
  53. Bran, R.; Țîru, L.G.; Grosseck, G.; Holotescu, C.; Maliţa, L. Learning from Each Other—A Bibliometric Review of Research on Information Disorders. Sustainability 2021, 13, 10094. [Google Scholar] [CrossRef]
  54. Grosseck, G. Digital ethics in higher education. In Self Learning Resources for Assessment with Digital Technologies; Ion, G., Mercader, C., Eds.; Universitat Autonoma de Barcelona: Barcelona, Spain, 2023; Available online: https://d-eva.eu/wp-content/uploads/2023/06/IO3_ENG_All_together.pdf (accessed on 11 May 2020).
  55. Berkman Klein Center for Internet & Society. Participants in an Ethics of Digitalization Research Sprint. (2021). Digital Ethics in Times of Crisis: COVID-19 and Access to Education and Learning Spaces; Harvard University: Boston, MA, USA, 2021; Available online: https://cyber.harvard.edu/sites/default/files/2021-02/Digital%20Ethics%20In%20Times%20of%20Crisis%20Report.pdf (accessed on 11 May 2020).
  56. Wilson, C.B.; Slade, C.; Kirby, M.M.; Downer, T.; Fisher, M.B.; Nuessler, S. Digital Ethics and the Use of ePortfolio: A Scoping Review of Literature. Int. J. E-Portfolio 2018, 8, 115–125. [Google Scholar]
  57. Finefter-Rosenbluh, I.; Perrotta, C. How do teachers enact assessment policies as they navigate critical ethical incidents in digital spaces? Br. J. Sociol. Educ. 2023, 44, 220–238. [Google Scholar] [CrossRef]
  58. Eyal, L. Digital Assessment Literacy—The Core Role of the Teacher in a Digital Environment. Educ. Technol. Soc. 2012, 15, 37–49. [Google Scholar]
  59. Husain, F. Digital Assessment Literacy: The Need of Online Assessment Literacy and Online Assessment Literate Educators. Int. Educ. Stud. 2021, 14, 10. [Google Scholar] [CrossRef]
  60. Eduljee, N.B.; Chakravarty, R.; Croteau, K.A.; Murphy, L. Understanding Research Trends in HyFlex (Hybrid Flexible) Instruction Model: A Scientometric Approach. Int. J. Instr. 2022, 15, 935–954. [Google Scholar] [CrossRef]
  61. Williams, P. Rethinking University Assessment. Int. J. Technol. Incl. Educ. 2014, 3, 257. [Google Scholar] [CrossRef]
  62. Lalonde, C. ChatGPT and Open Education. BCCampus. Available online: https://bccampus.ca/2023/03/06/chatgpt-and-open-education/ (accessed on 6 March 2023).
  63. Mills, A.; Bali, M.; Eaton, L. How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process. J. Appl. Learn. Teach. 2023, 6, 16–30. [Google Scholar] [CrossRef]
  64. Eaton, S.E.; Brennan, R.; Wiens, J.; McDermott, B. Artificial Intelligence and Academic Integrity. The Ethics of Teaching and Learning with Algorithmic Writing Technologies. University of Calgary. Taylor Institute for Teaching and Learning. Available online: https://prism.ucalgary.ca/handle/1880/115769 (accessed on 25 January 2023).
  65. Nguyen, G. Generative AI in Teaching and Learning: The Least You Need to Know. BCCampus. Available online: https://bccampus.ca/2023/09/18/generative-ai-in-teaching-and-learning-the-least-you-need-to-know/ (accessed on 18 September 2023).
  66. Oregon State University. Syllabi and Assignment AI Icons. 2023. Available online: https://oregonstate.app.box.com/s/drnx7bwf749scblv50mov53j13cr3dd2 (accessed on 11 May 2020).
  67. Clark, D. PedAIgogy—New Era of Knowledge and Learning Where AI Changes Everything. Donald Clark Plan B. Available online: https://donaldclarkplanb.blogspot.com/2023/03/pedaigogy-new-era-of-knowledge-and.html (accessed on 23 March 2023).
  68. Liu, D. Prompt Engineering for Educators—Making Generative AI Work for You. The University of Sydney. Available online: https://educational-innovation.sydney.edu.au/teaching@sydney/prompt-engineering-for-educators-making-generative-ai-work-for-you/ (accessed on 27 April 2023).
  69. Pope, D.; Schrader, D. OPINION: We Can Add ChatGPT to the Latest List of Concerns about Student Cheating, but Let’s Go Deeper; Hechinger Report. Available online: https://hechingerreport.org/opinion-we-can-add-chatgpt-to-the-latest-list-of-concerns-about-student-cheating-but-lets-go-deeper/ (accessed on 14 February 2023).
  70. Zhai, X. ChatGPT: Artificial Intelligence for Education; University of Georgia: Athens, GA, USA, 2022. [Google Scholar] [CrossRef]
  71. Oldfield, A.; Broadfoot, P.; Sutherland, R.; Timmis, S. Assessment in a Digital Age: A Research Review; Graduate School of Education, University of Bristol: Bristol, UK, 2010; Available online: https://www.bristol.ac.uk/media-library/sites/education/documents/researchreview.pdf (accessed on 11 May 2020).
  72. Smith Budhai, S. Fourteen Simple Strategies to Reduce Cheating on Online Examinations. Faculty Focus. Available online: https://www.facultyfocus.com/articles/educational-assessment/fourteen-simple-strategies-to-reduce-cheating-on-online-examinations/ (accessed on 11 May 2020).
  73. Basantes-Andrade, A.; Cabezas-Gonzalez, M.; Casillas-Martin, S.; Naranjo-Toro, M.; Benavides-Piedra, A. NANO-MOOCs to train university professors in digital competences. Heliyon 2022, 8, 8. [Google Scholar] [CrossRef]
Figure 1. Type of assessment utilized. Legend: Responses to the question ‘What kind of assessment do you use?’ on a scale ranging from “to a very small extent”… “to a very large extent”.
Figure 2. Use of digital tools for assessment. Legend: Responses to the question: “I use digital tools…” on a scale ranging from never, once in a while, about half the time, most of the time and always.
Figure 3. Monitoring student progress with digital assessment formats. Legend: Responses to the question: “Tracking student progress: I use digital assessment formats to monitor student progress. Please choose the option that best reflects your current practice.”.
Figure 4. The use of digital technologies for feedback. Legend: Responses to the question “Feedback: I use digital technologies to provide effective feedback. Please choose the option that best reflects your current practice.”.
Table 1. Participants’ academic position.

Academic Position | Count | Percent
Academic practitioner (teacher, trainer, tutor, etc.) | 50 | 83.33%
Learning technologist | 2 | 3.33%
Staff developer | 0 | 0%
Head of department or organization | 4 | 6.67%
Other | 4 | 6.67%
Total | 60 | 100%
Table 2. Levels taught.

Level | Count | Percent
Graduate | 23 | 28.05%
Postgraduate | 18 | 21.95%
Doctoral | 28 | 34.15%
Undergraduate | 9 | 10.98%
Other | 4 | 4.88%
Total | 82 | 100%
Table 3. Digital competence self-assessment.

Digital Competence Level | At the Beginning | At the End
A1. Newcomer | 8.70% | 6.67%
A2. Explorer | 11.96% | 11.67%
B1. Integrator | 42.39% | 53.33%
B2. Expert | 27.17% | 13.33%
C1. Leader | 8.70% | 13.33%
C2. Pioneer | 1.09% | 1.67%
Total | 100% | 100%
Legend: Responses to the question: “How do you currently assess your digital competence level as a teacher? Assign a level of competence from A1 to C2, where A1 is the lowest and C2 the highest level?”.
Table 4. Forms and methods of digital assessment.

Statement | Often | Not Very Often | Rarely | N/A
3D virtual environments, VR, AR and/or AI (chatbots) | 14.10% | 32.05% | 21.79% | 32.05%
annotation (on text, web pages—e.g., Hypothes.is) | 23.08% | 47.44% | 17.95% | 11.54%
audio sharing channels/podcasts/audio tools | 19.23% | 39.74% | 21.79% | 19.23%
commenting and discussion posts (forum, board, blog, chat) | 24.36% | 37.18% | 26.92% | 11.54%
communication channels (e.g., email, Skype, Zoom, WhatsApp, etc.) | 39.74% | 28.21% | 24.36% | 7.69%
content mapping (Mindomo, Mindmeister, Coggle, etc.) | 11.54% | 39.74% | 26.92% | 21.79%
content presentation (posters, PowerPoint, Google Slides, Canva, Prezi) | 41.03% | 21.79% | 29.49% | 7.69%
debating apps (Kialo, Weje, etc.) | 14.10% | 25.64% | 28.21% | 32.05%
e-portfolios (for classroom learning, as personal development profile, etc.) | 37.18% | 34.62% | 19.23% | 8.97%
games and/or gamification apps (Kahoot, Socrative, etc.) | 19.23% | 38.46% | 26.92% | 15.38%
group work—virtual panel tools (Padlet, Jamboard, Miro, dotstorming) | 23.08% | 37.18% | 21.79% | 17.95%
hosting and file sharing services (Dropbox, Google Drive, OneDrive, etc.) | 39.74% | 29.49% | 24.36% | 6.41%
image sharing channels (e.g., Flickr, Unsplash, infographics, 360, 3D, etc.) | 17.95% | 30.77% | 26.92% | 24.36%
learning management platforms (Moodle, Blackboard, Canvas, G-Suite) | 32.05% | 32.05% | 24.36% | 11.54%
peer-assessment (PeerScholar, Compair, etc.) | 15.38% | 37.18% | 21.79% | 25.64%
projects (Miro, Weje, Trello, Google Keep, Slack, etc.) | 15.38% | 37.18% | 20.51% | 26.92%
reflection—individual and/or in groups (WordPress, Blogspot, etc.) | 20.51% | 50.00% | 12.82% | 16.67%
rubrics maker (Colab, RubricMaker, etc.) | 14.10% | 25.64% | 25.64% | 34.62%
social media, meme, emoji and drawings | 25.64% | 37.18% | 20.51% | 16.67%
social networks (e.g., Facebook, Twitter, Instagram, TikTok, etc.) | 25.64% | 37.18% | 21.79% | 15.38%
tests and quizzes online (e.g., Socrative, Kahoot!, Quizizz, etc.) | 25.64% | 29.49% | 32.05% | 12.82%
text-based assignments (essays, laboratory reports, storytelling, etc.) | 39.74% | 30.77% | 21.79% | 7.69%
tools for interactivity, attention grabbing and brainstorming (Slido, etc.) | 25.64% | 39.74% | 16.67% | 17.95%
video sharing channels, vlogs, video content (EdPuzzle, Ted, FlipGrid, etc.) | 34.62% | 42.31% | 14.10% | 8.97%
wikis (Mediawiki, Wikipedia, etc.) | 21.79% | 39.74% | 25.64% | 12.82%
Legend: Responses to the question: “Below are methods and forms for digital assessment. Please indicate which forms and methods you have used, by selecting the appropriate column. This is a rather long question. We appreciate your patience.” (N/A—Not applicable).
Table 5. Types of assessed competences.

Statement | Never | Rarely | Sometimes | Often | N/A
Specific to the scientific field of the individual subject | 1.35% | 12.16% | 17.57% | 63.51% | 5.41%
Analytics and research skills | 2.70% | 14.86% | 48.65% | 28.38% | 5.41%
Communication skills | 4.05% | 13.51% | 24.32% | 55.41% | 2.70%
Group/team work | 2.70% | 13.51% | 40.54% | 40.54% | 2.70%
Critical thinking | 4.05% | 16.22% | 28.38% | 48.65% | 2.70%
Ability to make decisions and solve problems | 1.35% | 18.92% | 31.08% | 44.59% | 4.05%
Reflection | 1.35% | 21.62% | 37.84% | 36.49% | 2.70%
Leadership and management skills | 10.81% | 29.73% | 37.84% | 17.57% | 4.05%
Ability to plan, organize and prioritize work | 5.41% | 18.92% | 41.89% | 31.08% | 2.70%
Those associated with creativity and originality | 2.70% | 17.57% | 33.78% | 41.89% | 4.05%
ICT/digital/media literacy | 6.76% | 35.14% | 24.32% | 31.08% | 2.70%
Others | 8.11% | 17.57% | 29.73% | 9.46% | 35.14%
Legend: Responses to the question: “What type of competences do you assess?”. (N/A = not applicable).
Table 6. Challenges faced by higher education teachers in digital assessment.

Category | Challenges
Tool selection and familiarity | Finding the best and easiest tool to use. Unfamiliarity with the existing set of tools. Updating knowledge on available tools.
Supervision and engagement | Impossibility to supervise all students during testing. Familiarizing students with technological tools and developing their confidence.
Human connection and adaptability | Maintaining the human connection in a digital environment. Resistance to change. Trying and practicing extensively.
Technical issues and solutions | Unexpected technical problems leading to evaluation postponement. Addressing hardware limitations for better performance.
Knowledge and skill enhancement | Scarce knowledge of digital tools and technologies. Attending courses, workshops, and watching tutorials for professional development.
Personalization and time constraints | Providing personalized assessments. Facing time constraints for learning and configuration.
Connectivity and assessment method adjustments | Dealing with poor internet connectivity by contracting multiple providers. Adapting assessment methods to accommodate large cohorts of students.
Table 7. Analysis of variance for composite variable ‘I use digital tools…’ in A, B, C groups.

Source | Sum of Squares | df | Mean Square | F | Sig.
Between groups | 10.456 | 2 | 5.228 | 8.584 | 0.000
Within groups | 52.986 | 87 | 0.609
Total | 63.442 | 89
Table 8. Analysis of variance for composite variables ‘Advantages in digital assessment process’ and ‘Disadvantages in digital assessment process’ in A, B, C groups.

Variable / Source | Sum of Squares | df | Mean Square | F | Sig.
Disadvantages: Between groups | 3.868 | 2 | 1.934 | 0.735 | 0.483
Disadvantages: Within groups | 231.716 | 88 | 2.633
Disadvantages: Total | 235.584 | 90
Advantages: Between groups | 2.484 | 2 | 1.242 | 2.117 | 0.128
Advantages: Within groups | 39.309 | 67 | 0.587
Advantages: Total | 41.793 | 69
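For readers who want to see how the group comparisons summarized in Tables 7 and 8 can be reproduced, the following is a minimal sketch of a one-way ANOVA across the three self-reported competence groups (A, B and C). The scores and group sizes in the example are synthetic placeholders, not the survey data, and the scipy routine shown is a generic illustration rather than the exact software used in this study.

```python
# Minimal sketch: one-way ANOVA of a composite score across three
# self-reported digital-competence groups (A, B, C).
# The values below are synthetic placeholders, not the survey data.
import numpy as np
from scipy import stats

group_a = np.array([2.1, 2.4, 1.9, 2.8, 2.2])  # A-level respondents (Newcomer/Explorer)
group_b = np.array([3.0, 3.4, 2.9, 3.6, 3.1])  # B-level respondents (Integrator/Expert)
group_c = np.array([3.8, 4.1, 3.5, 4.3, 3.9])  # C-level respondents (Leader/Pioneer)

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A p-value below 0.05 would suggest that mean digital-tool use differs
# between at least two of the competence groups, as reported in Table 7.
```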
