Article

University Teachers’ Digital Competence and AI Literacy: Moderating Role of Gender, Age, Experience, and Discipline

by Ida Dringó-Horváth 1, Zoltán Rajki 2 and Judit T. Nagy 1,*
1 ICT Research Centre, Károli Gáspár University of the Reformed Church in Hungary, 1092 Budapest, Hungary
2 Department of Social Research, Pázmány Péter Catholic University, 1088 Budapest, Hungary
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(7), 868; https://doi.org/10.3390/educsci15070868
Submission received: 30 May 2025 / Revised: 3 July 2025 / Accepted: 4 July 2025 / Published: 7 July 2025
(This article belongs to the Section Higher Education)

Abstract

The present research aims to contribute to the effective development of AI literacy and thus to its proper educational integration by investigating (i) the relationship between teachers’ AI literacy and digital competence and (ii) whether this relationship varies by gender, discipline, age, and teaching experience. This is the first large-sample study in Hungary to comprehensively analyze such relationships, based on a representative sample of 1103 teachers from 13 fields of education. After a theoretical grounding and literature review, the study describes the research methodology, analyzes the empirical results, and draws conclusions. The research contributes to the AI literacy literature by providing empirical evidence from a previously understudied population—Hungarian university teachers—and by refining the understanding of the role of digital competence in the context of technological transformation. The findings highlight that the development of AI literacy does not require a one-size-fits-all approach but rather strategies tailored to the specific needs of target groups (e.g., gender, scientific fields, and experience levels).

1. Introduction

Artificial intelligence (AI), particularly generative AI tools, has emerged as a transformative force in 21st-century education, reshaping teaching methodologies, administrative processes, and learning experiences. Educators play a key role in the integration of technology into education—the successful application of AI tools in classrooms primarily depends on whether teachers are willing to adopt and integrate them into their teaching and learning strategies (Bozkurt, 2023; Mujiono, 2023). However, the effective and ethical use of AI tools requires specific competences, referred to in the literature as AI literacy (Long & Magerko, 2020; Hornberger et al., 2023; Ng et al., 2021). Long and Magerko (2020) define it as “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace”.
Its relevance and complexity are illustrated by the successive AI-related additions to the various frameworks summarizing digital competences for educators (for more details, see later sections). In addition, new pedagogical competency frameworks focusing specifically on AI literacy are also emerging. The AI Competency Framework for Teachers developed by UNESCO defines the knowledge, skills, and values educators should master in the age of AI. The framework is divided into five key dimensions: human-centered mindset, ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional development. Within these, it outlines 15 competencies that can be achieved at 3 levels: acquire, deepen, and create. The framework aims to serve as a global reference for developing national AI competency frameworks, informing teacher training programs, and designing assessment parameters. Additionally, it provides strategies for teachers to enhance their AI knowledge, apply ethical principles, and support their professional growth (UNESCO, 2024).
The importance of AI literacy in higher education is, therefore, undeniable, but there are a number of challenges in developing it. The lack of clear guidelines for integration can hinder educators’ ability to effectively utilize AI tools, which leads to uncertainty and reluctance in adoption (Michel-Villarreal et al., 2023). In addition, Kizilcec (2023) pointed out psychological barriers to the adoption of AI technologies, which hinder effective implementation. Another challenge is the varying levels of digital competence among educators, which can affect their ability to engage with AI technologies: educators with limited digital skills may find it difficult to grasp AI concepts and applications, thereby impeding their overall AI literacy (Walter, 2024). Furthermore, the need for continuous professional development is critical, as the fast-paced evolution of AI technologies requires educators to stay updated with the latest advancements (Walter, 2024).
The present research aims to contribute to the effective development of AI literacy and thus to its proper educational integration by investigating (i) the relationship between teachers’ AI literacy and digital competence and (ii) whether this relationship varies by gender, discipline, age, and teaching experience. This is the first large-sample study in Hungary to comprehensively analyze such relationships, based on a representative sample of 1103 teachers from 13 fields of education. After a theoretical grounding and literature review, the study describes the research methodology, analyzes the empirical results, and draws conclusions, contributing to the global discourse on AI literacy.

2. Theoretical Background

2.1. AI Literacy and Digital Competence

The relationship between AI literacy and digital competence is interconnected yet distinct, as AI literacy can be viewed as a specialized subset within the broader framework of digital competence. With AI becoming an integral part of the digital learning environment and tools, existing digital competence frameworks for education are expanding to include AI-related skills.
The supplement to the DigCompEDU Framework (Bekiaridis & Attwell, 2024) expands the EU’s DigCompEdu framework by integrating AI-related competencies in education, recognizing AI’s impact on teaching and learning and the need for educators to use it effectively. It explores AI both as a learning tool and as a subject of learning, aligning competencies with DigCompEdu’s six key areas and providing guidance on applications, skill development, competency progression, challenges, and solutions. A further supplementary proposal (Georgopoulou et al., 2024) focuses on strengthening critical thinking, which, combined with AI features, can enable educators to empower students to become responsible and informed digital citizens in the era of generative AI. The AI-TPACK, as an extension of the well-known Technological Pedagogical Content Knowledge framework, emphasizes human–AI collaboration in education, integrating AI not just as a tool but as a fundamental component that reshapes teaching, learning, and content delivery in the AI era (Mishra et al., 2023; Ning et al., 2024).
The relationship between university teachers’ AI literacy and their digital competence is an increasingly pertinent topic in the context of higher education. This literature review synthesizes existing research to explore this relationship and examines how factors such as gender, discipline, age, and teaching experience may moderate it. Kizilcec argues that understanding educators’ perspectives on emerging technologies is essential for maximizing their benefits, suggesting that digital competence is closely tied to educators’ readiness to adopt AI tools (Kizilcec, 2023). The interplay between AI literacy and digital competence is further supported by the Common Framework for Artificial Intelligence in Higher Education (AAI-HE Model) proposed by Jantakun et al., which illustrates how these competencies can enhance educational outcomes (Jantakun et al., 2021).

2.2. AI Literacy and Demographic Factors

The UNESCO AI Competency Framework for Teachers (UNESCO, 2024) emphasizes that the development of AI literacy must be inclusive and equitable, taking into account different social and demographic groups. The DigCompEdu framework likewise highlights the importance of personalized, differentiated approaches, and according to Venkatesh et al. (2003), gender, age, and experience significantly influence technology acceptance; it is therefore reasonable to conclude that they are also key factors in the development of AI literacy. For women, older people, and those with less experience, ease of use and social support increase the acceptance of AI tools, while for men and younger people, emphasizing usefulness increases acceptance. Targeted training and a supportive environment tailored to these demographic groups are therefore necessary.
Moreover, the moderating effects of demographic factors, such as age, gender, teaching experience, and field of study, are critical to understanding the nuances of this relationship. Møgelvang’s research indicates that gender differences persist in technology acceptance and usage, which may extend to AI tools in educational contexts (Møgelvang et al., 2024). This suggests that male and female educators might exhibit different levels of AI literacy and digital competence, potentially influencing their engagement with AI technologies. Research suggests that gender differences in attitudes toward AI among educators are partly due to differences in perceptions of the technology, partly due to differences in participation in professional settings, and partly due to the social embeddedness of the technology. According to a meta-analysis by Cai et al. (2017), women tended to have fewer positive attitudes toward the use of technology than men, which may be reflected in educational applications of AI, although the difference was small. Gibert and Valls (2022) emphasized that women’s underrepresentation in the field of AI stems from structural inequalities, which may affect their participation and attitudes toward AI. Research by Møgelvang et al. (2024) showed that women in higher education used generative AI chatbots less often and for a narrower range of purposes, focusing mainly on text tasks and expressing greater concern for critical thinking, while men used them more frequently and more widely (see also McGrath et al., 2023, for similar gender differences in AI knowledge among Swedish university teachers). Venkatesh et al. (2003), in their model of information technology adoption, found that women tended to evaluate technology use more in terms of effort and social norms, while men tended to prioritize utility.
Empirical studies provide further insight into these gender dynamics. For instance, Al-Riyami et al. (2023) found in their research of Omani educators that gender significantly moderated the acceptance of Fourth Industrial Revolution (4IR) technologies, including AI. Specifically, women were more influenced by social factors, while men placed greater emphasis on facilitating conditions, such as infrastructure and technical support (Al-Riyami et al., 2023). However, the overall impact of gender was limited, suggesting that other contextual factors like training and infrastructure may overshadow gender differences in this context (Al-Riyami et al., 2023). Similarly, Zhang and Villanueva (2023) observed significant gender differences among Chinese university teachers regarding generative AI preparedness and digital competence. Female educators scored higher in digital competence areas, such as subject matter knowledge and pedagogical strategies, while men rated themselves higher in creativity and problem-solving related to AI. These findings indicate that women may excel in integrating AI into teaching practices, while men may focus more on its creative applications, potentially reflecting differing priorities or training experiences.
In contrast, several studies reported no significant gender effects. Berber et al. (2023) found that among Turkish academics, gender did not significantly influence digital competence, suggesting that other factors like age or experience may be more determinative. Similarly, Xu et al. (2024) concluded that among Chinese university educators, gender did not moderate the acceptance or intention to use AI tools under the UTAUT2 model, with no significant impact on constructs like facilitating conditions or behavioral intention (Xu et al., 2024). Lérias et al. (2024) also found no correlation between gender and AI literacy levels among Portuguese polytechnic educators, indicating that individual skills and training opportunities may outweigh gender differences (Lérias et al., 2024).
These mixed findings align with broader theoretical frameworks. Venkatesh et al.’s (2003) observation that women prioritize effort and social influence while men focus on utility may explain some of the differences seen in the study by Al-Riyami et al. (2023), where social factors were more critical for women. Conversely, the lack of gender effects in the works of Xu et al. (2024) and Lérias et al. (2024) could reflect contexts where professional training or institutional support minimize gender-based disparities, as suggested by Gibert and Valls (2022). Møgelvang et al.’s (2024) findings on women’s narrower use of AI chatbots and greater concern for critical thinking might resonate with Zhang and Villanueva’s (2023) results, where women showed higher digital competence, potentially indicating a more cautious or purpose-driven approach to AI. Meanwhile, the lower GAI-preparedness observed by Zhang and Villanueva (2023) among female teachers could potentially indicate a latent barrier for female educators, although this requires further investigation.
Research examining the relationship between educators’ teaching experience and their AI or digital competence yielded varied results. According to Ghimire et al. (2024), at a research university in the United States, the length of teaching experience did not significantly influence familiarity with or acceptance of generative AI tools, regardless of whether the educators were novices or had been teaching for a longer period. In contrast, Berber et al. (2023) determined in Turkey that academics with shorter teaching experience (1 month to 2 years) exhibited higher digital competence than those with over 15 years, suggesting that recent technological knowledge may provide an advantage. Xu et al. (2024) found in China that experience with AI tool usage (1 to 7+ years) did not moderate acceptance. Regarding educators teaching at different educational levels, specific observations about the relationship between teaching experience and AI literacy are scarce. Lérias et al. (2024) reported from the Portalegre Polytechnic University in Portugal that educators’ teaching cycles did not affect their AI literacy levels, indicating that experience across educational levels is not a decisive factor in AI literacy.
The studies known to us on age-related differences among university educators present a mixed picture of technological acceptance and competence in higher education. Several studies suggested that younger educators are more open to technology and exhibit greater competence: Al-Riyami et al. (2023) found that for faculty members under 46 years old, social influence significantly affected their behavioral intention to use 4IR-related technologies, as evidenced by the path analysis, while this effect was not significant for those aged 46 and above. Zhang and Villanueva (2023) noted that 21–30-year-old teachers demonstrated higher digital competence. Similarly, Berber et al. (2023) reported outstanding competence among 21–27-year-olds, and Mah and Groß (2024) identified age-related differences in the positive perception of AI, with those under 30 rating it lower compared to older groups. In contrast, several studies found no significant correlation between age and technological attitudes or literacy. Ghimire et al. (2024) concluded that age did not influence awareness or attitudes toward generative AI. Xu et al. (2024) showed that age did not moderate the acceptance of AI tools among Chinese educators. Likewise, Lérias et al. (2024) determined that age was not a predictor of AI literacy. These findings suggest that the impact of age may be context dependent, and other factors, such as professional background or training, might play a more dominant role in technological acceptance.
Furthermore, the field of study could also moderate this relationship: university teachers’ attitudes toward AI tools are fundamentally shaped by their field of training or profession through a unique combination of digital competences, pedagogical paradigms, and ethical contexts. In the humanities and social sciences, the emphasis on creativity and critical thinking calls for different AI applications, such as text analysis or ethical reflection, as opposed to the natural sciences, where data analysis and simulations dominate (Marciniak & Baksa, 2024). Ghimire et al. (2024) found that in the United States, instructors from the College of Science and the School of Business exhibited greater awareness and more positive attitudes toward generative AI tools, while those from the College of Arts scored lower, particularly in technical understanding. Similarly, Al-Riyami et al. (2023) in Oman observed that instructors with IT and engineering backgrounds showed stronger acceptance of 4IR technologies compared to those from non-technological fields. Zhang and Villanueva (2023) in China highlighted the high generative AI preparedness of instructors from the Faculty of Physics and Information Science, while those from the Physical Education Faculty demonstrated lower levels. In contrast, Lérias et al. (2024) in Portugal found no significant correlation between training area and AI literacy, suggesting that the impact of departmental affiliation may be context dependent. Overall, instructors from technological and scientific faculties generally hold an advantage in AI-related competencies.
Ethical considerations further widen the gap between disciplines. In healthcare, data security and algorithm bias are prominent issues, warranting comprehensive AI education for ethical application (Busch et al., 2023). This context-dependent ethical sensitivity shapes the cautious attitude of educators, especially in areas under social scrutiny, such as medicine or law. At the same time, uncertainty about the effectiveness and reliability of AI is pervasive: many instructors feel unprepared to critically evaluate the technology, which increases mistrust, especially in dental or other practice training (Uribe et al., 2024).
In conclusion, the literature suggests a relationship between university teachers’ AI literacy and their digital competence, with moderating effects of factors such as age, gender, teaching experience, and field of study varying by context. While the field of study consistently influences AI-related competencies, the impact of gender, age, and experience is less uniform, highlighting the importance of training and institutional support in shaping educators’ engagement with AI technologies.
The aim of the study is to answer the following research questions:
RQ1: Is the Hungarian university teachers’ AI literacy related to their digital competence?
RQ2: If it is related, is this relationship moderated by the teacher’s age, gender, teaching experience, and field of education?
As AI continues to evolve and permeate educational practices, understanding these dynamics will be crucial for developing effective training programs and policies aimed at enhancing educators’ competencies in this area.

3. Materials and Methods

3.1. Design

During the research, we used a survey research design with a quantitative approach. The data collection was conducted using a questionnaire (MS Forms). The sampling took place between 30 January 2024 and 27 March 2024, using an online, self-administered method, in Hungarian higher education institutions selected through expert judgment. Our expert group, consisting of professionals with in-depth knowledge of the Hungarian higher education system and the integration possibilities of artificial intelligence, defined the selection criteria: geographical location, size, type, profile of the institutions, and their experience with artificial intelligence. The higher education institutions were then selected, taking into account diversity and the likelihood of adopting artificial intelligence. In each selected institution, a contact person was sought to distribute the questionnaire within the institution. Participation in the study was voluntary, and respondents provided online informed consent. The study received ethical approval from the Research Ethics Committee of the University Institute of Psychology (BTK/8779/2023), and data handling complied with ethical principles. The data were stored in a secure database accessible only to the research team. After the analysis, a summary report was shared with those interested.

3.2. Sample

In total, 1103 Hungarian university teachers participated in the study, with an average age of 48.9 years (SD = 10.9). These teachers had varying levels of higher education teaching experience, averaging 16.3 years (SD = 10.8). Among the participants, 564 were women and 539 were men. Regarding the field of education, the participants came from 13 different fields.
To ensure representativeness across the dimensions of the field of education, age, and gender, post-weighting was applied. The target population of the research was the population of individuals currently teaching in Hungarian higher education institutions. For the field of education, data from the OH/FIR Institutional Staff Statistics for the spring of 2022/2023 (available at https://firstat.oh.gov.hu/intezmenyi-letszamstatisztika (accessed on 23 November 2024)) were used. For age, data from the OECD (Indicator D8: What is the profile of academic staff?) for 2021 were utilized. For gender, data from the OH/FIR Higher Education Statistical Data for 2022/2023 (Section 3.2; available at https://kir.oktatas.hu/firstat.index?fir_stat_ev=2022 (accessed on 23 November 2024)) were employed.
The case-preserving weighting was conducted using an iterative method based on marginal distributions (RIM), with 5 iterations performed. A total of 1103 completed questionnaires were included in the study; due to rounding, the weighted sample size was 1128. The full range of the sample weights was 0.18–2.44. No data imputation was applied.
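As an illustration, a minimal sketch of such RIM weighting (raking) in R with the survey package (Lumley, 2020) might look as follows; the data frame, variable names, and population counts are hypothetical, not the actual OH/FIR or OECD figures, and only two of the three margins are shown for brevity:

```r
# A minimal RIM-weighting (raking) sketch, not the authors' actual script.
library(survey)

design <- svydesign(ids = ~1, data = teachers)  # unweighted starting design

# Illustrative population margins (the study used OH/FIR and OECD statistics)
pop_gender <- data.frame(gender = c("female", "male"), Freq = c(10500, 11500))
pop_field  <- data.frame(field = levels(teachers$field), Freq = field_counts)

raked <- rake(design,
              sample.margins     = list(~gender, ~field),
              population.margins = list(pop_gender, pop_field),
              control = list(maxit = 5))        # the study reports 5 iterations

range(weights(raked) / mean(weights(raked)))    # reported weight range: 0.18-2.44
```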
The sociodemographic composition of the weighted teacher sample—based on the four variables included in this study: gender, age, field of education, and higher education experience—as well as the frequency of refusals to answer, are summarized in Table 1.

3.3. Measures

In addition to the personal variables (gender, age, field of science, and higher education experience) collected from the participants, we employed two measurement tools. We interpreted AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace” according to Long and Magerko’s (2020) definition and used the AI literacy scale developed by Hornberger et al. (2023) to measure it. Within the framework of the present study, we examined the following dimensions and scales of the questionnaire: (I) understanding intelligence (2 items), (II) AI’s strengths and weaknesses (2 items), (III) recognizing AI (2 items), (IV) human role in AI (2 items), and (V) learning from data (2 items). For each of these, respondents were required to select the correct answer from four different response options.
We defined digital competence as “the skills related to the use of information and communication technologies in teaching and learning, as well as in other activities related to education (educational management, related individual and organizational communication, research activities)” (Dringó-Horváth et al., 2022) and measured it using the higher-education-specific version of DigCompEdu (Redecker & Punie, 2017) adapted by Dringó-Horváth et al. (2020) and Horváth et al. (2020). Using this framework, the digital competence level of teachers can be assessed with 22 items across 6 different competence areas: (1) teachers’ professional engagement (4 items), (2) searching for and using digital resources (3 items), (3) the learning–teaching process supported by digital solutions (3 items), (4) assessment practices (4 items), (5) supporting students (3 items), and (6) developing their digital competence (5 items) (Redecker & Punie, 2017). Each multiple-choice question within these areas was scored from 0 to 4 points. The respondents’ digital competence level was determined based on the total score (ranging from 0 to 88 points), which was obtained by summing the scores from each area.
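As a simple illustration of the scoring just described, the two totals can be computed as follows; the column names are hypothetical, not the study's actual variable names:

```r
# A minimal scoring sketch under hypothetical column names.
digcomp_items <- paste0("dce_", 1:22)   # DigCompEdu items, each scored 0-4
teachers$digcomp_total <- rowSums(teachers[, digcomp_items])  # total: 0-88

ai_items <- paste0("ai_", 1:10)         # AI literacy items, scored 1 if correct
teachers$ai_raw <- rowSums(teachers[, ai_items])              # raw score: 0-10
```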

3.4. Procedure

The descriptive analyses for DigCompEdu, the reliability assessment (Cronbach’s alpha), and the calculation of descriptive statistics for the AI literacy items (difficulty index and discrimination index) were conducted using SPSS 30.0 (IBM Corp., Armonk, NY, USA) and MS Excel.
Further analyses were performed using the R program (R Core Team, 2022), utilizing, in addition to the base packages, the following packages: lavaan (Rosseel, 2012), survey (Lumley, 2020), mirt (Chalmers, 2012), haven (Wickham et al., 2023b), dplyr (Wickham et al., 2023a), and psycho (Makowski, 2018).
To test for unidimensionality, we fitted a unidimensional model using confirmatory factor analysis, and its fit was evaluated using the following commonly used indices and thresholds, which indicate a unidimensional structure of the response patterns: RMSEA < 0.08 (Awang, 2012) and SRMR < 0.08 (Byrne, 1994). The AI literacy of the test-takers was estimated using Item Response Theory (IRT), as described in the article publishing the original measurement tool. We selected among the Rasch, 2-PL, and 3-PL models using multiple model-fit and item-fit indices: we applied the M2/df statistic (Backhaus et al., 2015; Brown, 2015) with a cutoff value of 3, the RMSEA and SRMR statistics with a cutoff value of ≤ 0.05 (Maydeu-Olivares, 2013), and the TLI and CFI indices with a threshold of ≥ 0.95. The fit of the items was examined using the signed chi-square (S-X2) index (Orlando & Thissen, 2003). The independence of the item residuals was assessed using the Q3 statistic, based on the criterion Q3 ≤ 0.2 (Chen & Thissen, 1997).
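A hedged sketch of these steps in R with lavaan (Rosseel, 2012) and mirt (Chalmers, 2012) is given below; the item names (ai_1 … ai_10) and data frame (ai_data) are hypothetical:

```r
# A minimal sketch of the dimensionality check and IRT model selection.
library(lavaan)
library(mirt)

# Single-factor CFA on the ten dichotomous items (treated as ordered)
cfa_model <- 'AIlit =~ ai_1 + ai_2 + ai_3 + ai_4 + ai_5 +
                       ai_6 + ai_7 + ai_8 + ai_9 + ai_10'
cfa_fit <- cfa(cfa_model, data = ai_data, ordered = TRUE)
fitMeasures(cfa_fit, c("rmsea", "srmr"))   # compare against the 0.08 thresholds

# Fit and compare the three candidate IRT models
m_rasch <- mirt(ai_data, 1, itemtype = "Rasch")
m_2pl   <- mirt(ai_data, 1, itemtype = "2PL")
m_3pl   <- mirt(ai_data, 1, itemtype = "3PL")

M2(m_3pl)                              # M2-based fit: RMSEA, SRMR, TLI, CFI
itemfit(m_3pl, fit_stats = "S_X2")     # signed chi-square item fit
theta <- fscores(m_3pl)                # person-level AI literacy estimates
```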
Additionally, in SPSS 30.0 (IBM Corp., Armonk, NY, USA), to answer research question RQ1, we used regression analysis, considering teachers’ digital competence as the independent variable and AI literacy as the dependent variable. Then, to answer research question RQ2, we conducted moderation analyses using the blockwise method, incorporating the variables of gender, field of education, age, and higher education experience.

4. Results

4.1. The DigCompEdu Test

In the current study sample, the average DigCompEdu score was M = 50.478 (SD = 18.076), which, considering the maximum achievable score of 88 points, represents a 57.35% result. For the DigCompEdu questionnaire, the data analysis showed an excellent internal consistency for the whole instrument, with a Cronbach’s alpha of 0.936.

4.2. Descriptive Statistics of AI Literacy Items

Participants correctly answered an average of M = 5.29 (SD = 2.04) out of the 10 AI literacy items. Given the four-option multiple-choice format, respondents guessing at random would be expected to answer 2.5 items correctly on average. Table A1 presents the descriptive statistics for all examined AI literacy items. The difficulty index (corrected for guessing) was 0.035 for one item—recognizing AI 1—and ranged between 0.104 and 0.811 for the other items, which is ideal (between 0.05 and 0.95). The recognizing AI 1 item proved to be too difficult compared to the others, but its discrimination ability was acceptable (discrimination index of 0.297), so we retained it in the analysis. The discrimination indices for all items were at least 0.2 (ranging between 0.267 and 0.429), indicating that the items had at least acceptable discrimination ability. Figure 1 shows the average raw scores (difficulty indices) for the domains (competencies).
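For clarity, the guessing correction applied to the difficulty indices is the standard one, with a chance rate of g = 1/4 for four response options, which reproduces the corrected values in Table A1 (e.g., for item 01: (0.830 − 0.25)/0.75 = 0.773):

$$p_{\text{corrected}} = \frac{p - g}{1 - g}, \qquad g = \frac{1}{4}$$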

4.3. Checking for Unidimensionality

The assumption of unidimensionality was tested using a confirmatory factor analysis (CFA) with a single-factor model. The model fit the data acceptably (χ2 = 217.750, df = 35, p < 0.001, RMSEA = 0.0688, and SRMR = 0.0549). Thus, although the significant χ2 means the assumption of unidimensionality was not entirely clear-cut, the RMSEA and SRMR values were within the acceptable thresholds.

4.4. Fitting the IRT Models

After fitting the three classical IRT models (Rasch, 2-PL, and 3-PL), we examined the model fits. As shown in Table A2, the 3-PL model fit well on all indices except the TLI criterion, while the Rasch model and the 2-PL model showed acceptable fits based on the RMSEA and SRMR indices but poor fits based on the M2/df, TLI, and CFI indices.
When comparing the three models using AIC and BIC, the 3-PL model proved to be the weakest; however, since AIC and BIC penalize model complexity, and given that the fit indices otherwise clearly supported the 3-PL model, we used this model for estimating the personal AI literacy abilities in our further analyses. The distribution of our sample according to the IRT-estimated AI literacy scores is presented in Figure A1. The assumption of local independence was verified using the Q3 statistic based on the 3-PL model (Yen, 1984). We examined the correlations between the residuals of all items, and every correlation was less than 0.2, indicating that local independence was not violated.
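A minimal check of this criterion in R with mirt, assuming the fitted 3-PL model object m_3pl from the sketch in Section 3.4, might look as follows:

```r
# Local independence via Yen's Q3 under the 3-PL model
q3 <- residuals(m_3pl, type = "Q3")    # pairwise Q3 residual correlations
max(abs(q3[upper.tri(q3)]))            # criterion: no value above 0.2
```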

4.5. Relationship Between AI Literacy and Digital Competence

The results of a simple linear regression analysis, with digital competence as the independent variable and AI literacy as the dependent variable, showed that digital competence was positively related to AI literacy (R2 = 0.110; B = 0.005; p < 0.001). The subsequent moderation analyses were conducted in three steps. In each analysis, AI literacy was the dependent variable, and digital competence was the independent variable. The moderator variables were as follows (a minimal sketch of this setup follows the list):
  • First step: Gender and field of education.
  • Second step: Age and field of education.
  • Third step: Higher education experience and field of education.
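As an illustration of the blockwise moderation setup (the reported analyses were run in SPSS), the steps correspond to weighted linear models with interaction terms; ai_lit, dc, gender, field, and the weight column w are hypothetical variable names:

```r
# An illustrative blockwise moderation sketch, not the authors' SPSS syntax.
m1 <- lm(ai_lit ~ dc * gender, data = teachers, weights = w)               # Model 1
m2 <- lm(ai_lit ~ dc * gender + dc * field, data = teachers, weights = w)  # Model 2
m3 <- lm(ai_lit ~ dc * gender * field, data = teachers, weights = w)       # Model 3
summary(m3)  # significant dc:gender (and dc:gender:field) terms indicate moderation
```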

4.5.1. First Step: Moderating Effects of Gender and the Field of Education

The field of education variable was transformed into eight categories by combining fields (see Table A3), and the information technology field (ID = 6) was chosen as the reference category. Our results indicate that the relationship between digital competence (DC) and AI literacy significantly differed by gender and certain fields of education. The DC × Gender interaction remained significant in all models, suggesting that digital competence was more strongly correlated with AI literacy scores among men, while among women, AI literacy was less dependent on the level of digital competence.
From Model 2, it can be concluded that the fields of education alone did not have a moderating effect, meaning that the correlation between digital competence and AI literacy did not differ significantly across fields of education.
However, Model 3 showed that by considering both fields of education and gender, the differences between genders can be nuanced. As seen in Figure 2, the difference between genders—in terms of the relationship between digital competence and AI literacy—varied by field of education. In the sample:
  • The strongest relationship between DC and AI literacy was observed among men in information technology education, along with the most significant difference compared to women.
  • Similar relationships were observed in the fields of political and legal sciences, economics, engineering and agricultural sciences, and natural sciences. Based on the regression, these fields did not show significant differences from the information technology field (see Table 2).
  • However, the fields of humanities, social sciences, and teacher training, theology and arts, and health sciences differed from this pattern. Here, the gender differences were weaker than in information technology education (see Table 2).

4.5.2. Second Step: Moderating Effects of Age and the Field of Education

Our second moderation analysis—in which the moderator variables were age and field of education—is presented in Table 3. According to Model 1, the DC × Age interaction was not significant (B = −0.008, p = 0.740), indicating that teachers’ age alone did not influence the relationship between digital competence and AI literacy. Model 2 shows that the DC × Age interaction remained non-significant (B = −0.012, p = 0.610), confirming that age alone did not moderate the effect of DC, nor did the DC × Field interactions (p > 0.05). Furthermore, as seen in Model 3, the DC × Field × Age interactions were also not significant (p > 0.05). Thus, teachers’ age, either alone or in combination with the field of education, did not influence the relationship between digital competence and AI literacy. This suggests that the relationship between digital competence and AI literacy did not differ significantly between younger and older teachers, and no significant differences were observed across fields of education as a function of age.

4.5.3. Third Step: Moderating Effects of Higher Education Experience and the Field of Education

Our third moderation analysis—in which the moderator variables were teaching experience in years and field of education—is presented in Table 4. Model 1 showed that the DC × Texp interaction was not significant (p = 0.914). Similarly, in Model 2, the DC × Texp interaction remained non-significant (B = −0.001, p = 0.955), and the DC × Field interactions were also not significant (p > 0.05). However, in Model 3, when additional variables were included, the effects of the interactions became clearer: the DC × Texp interaction became significant with a negative coefficient (B = −0.144, p = 0.043), indicating that for those with more teaching experience, the relationship between DC and AI literacy was weaker. Additionally, in Model 3, the DC × Field8 × Texp interaction was significant (B = 0.278, p = 0.005), suggesting that for teachers in the engineering and agricultural fields, compared to those in information technology, teaching experience strengthened the relationship between DC and AI literacy more significantly.

5. Discussion

The results showed that digital competence positively correlated with AI literacy (R2 = 0.110, p < 0.001), which is consistent with the literature’s findings that digital skills play a fundamental role in understanding and applying AI technologies (e.g., Long & Magerko, 2020; Kizilcec, 2023). This relationship suggests that teachers’ ability to effectively use digital tools in education promotes the development of their AI-related knowledge and skills.
The analysis of gender differences yielded particularly noteworthy results. Digital competence correlated more strongly with AI literacy among male teachers, while this relationship was weaker among females. This difference was particularly pronounced in the fields of information technology, political and legal sciences, economics, engineering, agricultural sciences, and natural sciences, while in the humanities, social sciences, teacher training, theology, arts, and health sciences, the gender difference was less significant. These results partially align with the research by Møgelvang et al. (2024), which found that men were more likely to use generative AI tools and approach technology application with different motivations, and McGrath et al. (2023) reported higher AI knowledge among male Swedish university teachers. Similarly, Mah et al. (2025) found that female German and Austrian university teachers placed greater emphasis on the ethical implications of AI in education compared to their male counterparts, and reported disciplinary differences, with arts faculty perceiving domain-specific AI applications as highly relevant and engineering faculty placing less emphasis on ethical considerations. Furthermore, Kallunki et al. (2024) noted that Finnish university faculty across diverse disciplines, such as arts and engineering, perceived AI as both an opportunity and challenge for teaching, with young teachers and educational technology experts adopting AI more readily.
The phenomenon may be explained by differing levels of technological self-confidence (Zhang et al., 2023), as well as sociocultural norms that may steer men toward more technology-oriented roles. However, some studies reported no significant gender effects (e.g., Berber et al., 2023), suggesting these differences may vary by context. In contrast, Salhab (2024) found that female instructors in a Palestinian university exhibited significantly more positive attitudes toward AI literacy integration into the curriculum, highlighting the role of cultural and contextual factors in shaping gender differences in AI-related perceptions.
Interestingly, age was not a significant moderator in the relationship between digital competence and AI literacy, neither on its own nor when examined in conjunction with the fields of study. This finding aligns with research indicating that age does not consistently influence technology acceptance (e.g., Ghimire et al., 2024; Xu et al., 2024; Galindo-Domínguez et al., 2024), though it contrasts with studies suggesting younger teachers exhibit greater openness or competence with new technologies (e.g., Al-Riyami et al., 2023; Zhang & Villanueva, 2023). The result suggests that in the Hungarian higher education context, the presence of digital competence may support the development of AI literacy to a similar extent across all age groups.
However, the moderating effect of higher education experience presented a more nuanced picture. For teachers with more experience, the relationship between digital competence and AI literacy was weaker (B = −0.144, p = 0.043), suggesting that more experienced teachers rely less on their digital skills in understanding or applying AI tools. This aligns with findings that less experienced teachers exhibit higher digital competence (Berber et al., 2023) and contrasts with McGrath et al. (2023), who found that Swedish university teachers with over 30 years of experience reported higher AI knowledge and greater willingness to adopt AI-based tools compared to those with less experience. These differences may reflect a reliance on established pedagogical methods over new technologies, though some studies found no such effect of experience on technology acceptance (e.g., Ghimire et al., 2024; Al-Riyami et al., 2023).
The fields of study themselves did not significantly moderate the relationship between digital competence and AI literacy, suggesting that the relationship was relatively general across higher education disciplines. This finding was unexpected given prior evidence of field-specific differences (e.g., Ghimire et al., 2024) but may reflect Hungary’s unified digitalization efforts (Hungary’s Artificial Intelligence Strategy, 2020). This aligns with broader efforts to integrate AI literacy into higher education teaching and learning, emphasizing the need for educators to develop AI competencies across disciplines (Chan, 2023).
The research contributes to the AI literacy literature by supporting the role of digital competence with large-scale, Hungarian-specific data and offering a new perspective on the moderating effects of gender and higher education experience. The context-dependent nature of the results underscores that the development of AI literacy does not require a uniform approach but rather a strategy that takes into account the specificities of the target groups (e.g., gender and scientific fields). The findings suggest that while the relationship between digital competence and AI literacy was broadly consistent across disciplines, context-specific factors like gender and experience played significant roles.
The results have several practical and theoretical implications. Firstly, the positive relationship between digital competence and AI literacy suggests that developing teachers’ digital skills is crucial for integrating AI technologies into higher education. This is in line with the recommendations of the UNESCO AI Competency Framework (UNESCO, 2024), which advocates for the joint development of teachers’ digital and AI-based competencies. Educational strategies that combine foundational digital pedagogy with targeted, hands-on AI activities—such as prompt-engineering tasks, workshops using AI tools, or scenario-based ethical discussions—may prove especially effective. To support long-term development and quality assurance, we recommend tracking AI literacy progression using a combination of standardized assessments—such as the AI/digital competency instruments employed in this study—and authentic artefacts like lesson plans or student work samples that demonstrate meaningful AI integration.
Secondly, the gender differences suggest that training programs aimed at increasing AI literacy should consider gender-specific needs. For example, programs for female teachers might focus on building technological self-confidence, while for male teachers, deepening technical skills may be more effective. This suggests that when developing AI literacy, gender differences should be prioritized, while field-specific effects may be less critical given the consistent relationship observed across disciplines. This is particularly relevant in information technology and other technology-oriented fields, where gender differences were more pronounced.
Thirdly, the negative moderating effect of higher education experience warns that special support strategies are needed for experienced teachers. For example, targeted workshops or mentoring programs can help them keep up with the rapid development of AI technologies and integrate them into their pedagogical practice.

6. Conclusions

Our results indicated that digital competence was significantly and positively related to AI literacy, supporting the assumption found in international literature that digital skills play a fundamental role in understanding and integrating AI technologies into education (Long & Magerko, 2020; Kizilcec, 2023; Jantakun et al., 2021). Gender differences were particularly pronounced: the correlation between digital competence and AI literacy was stronger among male teachers, especially in information technology and other technology-oriented fields, while among female teachers, this relationship was weaker, particularly in the humanities, social sciences, and health sciences disciplines. However, some studies reported no significant gender effects (e.g., Berber et al., 2023; Xu et al., 2024), suggesting context-specific influences. Higher education experience also played a significant moderating role, with the relationship being weaker among more experienced teachers. Interestingly, age was not a determining factor, suggesting that in Hungarian higher education, the presence of digital competence supports the development of AI literacy to a similar extent across all age groups, despite mixed international findings where younger educators often show greater digital competence (e.g., Al-Riyami et al., 2023; Zhang & Villanueva, 2023).
The research contributes to the AI literacy literature by providing empirical evidence from a previously understudied population—Hungarian university teachers—and by refining the understanding of the role of digital competence in the context of technological transformation. The findings highlighted that the development of AI literacy does not require a one-size-fits-all approach but rather strategies tailored to the specific needs of target groups (e.g., gender and experience levels, with less emphasis on scientific fields given their consistent relationship). This aligns with the recommendations of the UNESCO AI Competency Framework for teachers (UNESCO, 2024), which advocates for the joint development of teachers’ digital and AI-based competencies.
Finally, the limitations of the research also provide guidance for future studies. The use of self-administered questionnaires may introduce bias, and due to the Hungarian context, the generalizability of the results may be limited to other cultural or educational systems. Further longitudinal research is needed to explore how AI literacy evolves, especially in a rapidly changing technological environment. Additionally, international comparative analyses could help contextualize the Hungarian-specific findings and compare them with global trends. Finally, to deepen the understanding of the relationship between AI literacy and digital competence, qualitative research—such as interviews or case studies—may also be warranted to uncover individual motivations and the role of contextual factors.

Author Contributions

Conceptualization, I.D.-H., J.T.N. and Z.R.; methodology, J.T.N.; software, J.T.N.; validation, J.T.N.; formal analysis, J.T.N.; investigation, J.T.N.; resources, I.D.-H.; data curation, J.T.N. and Z.R.; writing—original draft preparation, I.D.-H., J.T.N. and Z.R.; writing—review and editing, J.T.N.; visualization, J.T.N.; supervision, I.D.-H., J.T.N. and Z.R.; project administration, I.D.-H.; funding acquisition, I.D.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Károli Gáspár University of the Reformed Church in Hungary, grant number 66018R800.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Károli Gáspár University of the Reformed Church in Hungary Institute of Psychology (BTK/8779/2023, 22 December 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available in KREPOZIT (the digital commons of Károli Gáspár University of the Reformed Church in Hungary) at https://krepozit.kre.hu/handle/123456789/1641 (accessed on 23 November 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
TPACK: Technological Pedagogical Content Knowledge
UTAUT: Unified Theory of Acceptance and Use of Technology
RIM: Random Iterative Method
IRT: Item Response Theory
RMSEA: Root Mean Square Error of Approximation
SRMR: Standardized Root Mean Squared Residual
TLI: Tucker–Lewis Index
CFI: Comparative Fit Index
CFA: Confirmatory Factor Analysis
AIC: Akaike Information Criterion
BIC: Bayes Information Criterion
SD: Standard Deviation
SE: Standard Error
LLCI: Lower Limit of Confidence Interval
ULCI: Upper Limit of Confidence Interval
DC: Teachers’ Digital Competence
Texp: Teaching Experience in Years
AI-Lit: AI Literacy
Field: Field of Education
Gndr: Gender
Age: Age of Teachers in Years

Appendix A

Table A1. Descriptive item statistics for the AI literacy items.
Item | Item Label | Difficulty Index | Difficulty Index Corrected for Guessing | Discrimination Index
01 | Understanding intelligence 1 | 0.830 | 0.773 | 0.310
02 | Understanding intelligence 2 | 0.858 | 0.811 | 0.267
03 | AI’s strengths and weaknesses 1 | 0.337 | 0.116 | 0.292
04 | AI’s strengths and weaknesses 2 | 0.328 | 0.104 | 0.300
05 | Recognizing AI 1 | 0.277 | 0.035 | 0.297
06 | Recognizing AI 2 | 0.675 | 0.566 | 0.420
07 | Human influence 1 | 0.346 | 0.128 | 0.297
08 | Human influence 2 | 0.401 | 0.201 | 0.351
09 | Learning from data 1 | 0.801 | 0.735 | 0.429
10 | Learning from data 2 | 0.440 | 0.253 | 0.406
Table A2. Model fit indices.
Model | M2/df | RMSEA | SRMR | TLI | CFI | AIC | BIC
Rasch | 4.629 | 0.057 | 0.064 | 0.821 | 0.825 | 12,444.79 | 12,490.04
2-PL | 4.923 | 0.060 | 0.054 | 0.807 | 0.850 | 12,440.17 | 12,525.65
3-PL | 2.060 | 0.030 | 0.047 | 0.948 | 0.971 | 12,451.42 | 12,577.12
Table A3. Merging of educational fields.
Field Before Merging | Field After Merging | Field ID
Political science; Law | Political and Legal Sciences | 1
Humanities; Social Sciences; Teacher Training | Humanities, Social Sciences, and Teacher Training | 2
Economics | Economics | 3
Theology; Arts, Art Mediation | Theology and Arts | 4
Information Technology | Information Technology | 6
Engineering, Agricultural Sciences | Engineering, Agricultural Sciences | 8
Medical and Health Sciences; Sports Science | Health Sciences | 11
Natural Sciences | Natural Sciences | 15
Figure A1. The distribution of IRT-estimated AI literacy in the sample.

References

  1. Al-Riyami, T., Al-Maskari, A., & Al-Ghnimi, S. (2023). Faculties’ behavioural intention toward the use of the fourth industrial revolution related-technologies in higher education institutions. International Journal of Emerging Technologies in Learning, 18(7), 159–177. [Google Scholar] [CrossRef]
  2. Awang, Z. (2012). Structural equation modeling using AMOS graphic. Uitim Press, MARA. [Google Scholar]
  3. Backhaus, K., Erichson, B., Plinke, W., & Weiber, R. (2015). Multivariate analysemethoden: Eine anwendungsorientierte einführung. Springer. [Google Scholar]
  4. Bekiaridis, G., & Attwell, G. (2024). Supplement to the DigCompEDU framework: Outlining the skills and competences of educators related to AI in education. University of Bremen, Institut Technik und Bildung (ITB). [Google Scholar] [CrossRef]
  5. Berber, Ş., Taksi Deveciyan, M., & Alay, H. K. (2023). Digital literacy level and career satisfaction of academics. İnsan ve Toplum Bilimleri Araştırmaları Dergisi, 12(4), 2363–2387. [Google Scholar] [CrossRef]
  6. Bozkurt, A. (2023). Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift. Asian Journal of Distance Education, 18(1), 198–204. [Google Scholar]
  7. Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press. [Google Scholar]
  8. Busch, F., Adams, L. C., & Bressem, K. K. (2023). Biomedical ethics aspects towards the implementation of artificial intelligence in medical education. Medical Science Educator, 33, 1007–1012. [Google Scholar] [CrossRef] [PubMed]
  9. Byrne, B. M. (1994). Structural equation modeling with EQS and EQS/Windows: Basic concepts, applications, and programming. SAGE Publications. [Google Scholar]
  10. Cai, Z., Fan, X., & Du, J. (2017). Gender and attitudes toward technology use: A meta-analysis. Computers & Education, 105, 1–13. [Google Scholar] [CrossRef]
  11. Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1–29. [Google Scholar] [CrossRef]
  12. Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. [Google Scholar] [CrossRef]
  13. Chen, W. H., & Thissen, D. (1997). Local dependence indexes for item pairs using item response theory. Journal of Educational and Behavioral Statistics, 22(3), 265–289. [Google Scholar] [CrossRef]
  14. Dringó-Horváth, I., Hülber, L., M. Pintér, T., & Papp-Danka, A. (2020). A tanárképzés oktatási kultúrájának több szempontú jellemzése. In A. Varga, H. Andl, & Z. Molnár-Kovács (Eds.), Új kutatások a neveléstudományokban (pp. 129–142). Pécsi Tudományegyetem Bölcsészet- és Társadalomtudományi Kar, Neveléstudományi Intézet. [Google Scholar]
  15. Dringó-Horváth, I., T. Nagy, J., & Weber, A. (2022). Felsőoktatásban oktatók digitális kompetenciáinak fejlesztési lehetőségei. Educatio, 30(3), 496–507. [Google Scholar] [CrossRef]
  16. Galindo-Domínguez, H., Delgado, N., Campo, L., & Losada, D. (2024). Relationship between teachers’ digital competence and attitudes towards artificial intelligence in education. International Journal of Educational Research, 126, 102381. [Google Scholar] [CrossRef]
  17. Georgopoulou, M. S., Krouska, A., Troussas, C., & Sgouropoulou, C. (2024, September 20–22). Redefining the concept of literacy: A DigCompEdu extension for critical engagement with AI tools. 2024 9th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM) (pp. 98–102), Athens, Greece. [Google Scholar] [CrossRef]
  18. Ghimire, A., Prather, J., & Edwar, J. (2024). Generative AI in education: A study of educators’ awareness, sentiments, and influencing factors. arXiv. [Google Scholar] [CrossRef]
  19. Gibert, K., & Valls, A. (2022). Building a Territorial Working Group to Reduce Gender Gap in the Field of Artificial Intelligence. Applied Sciences, 12(6), 3129. [Google Scholar] [CrossRef]
  20. Hornberger, M., Bewersdorff, A., & Nerdel, C. (2023). What do university students know about artificial intelligence? Development and validation of an AI literacy test. Computers and Education: Artificial Intelligence, 5, 100165. [Google Scholar] [CrossRef]
  21. Horváth, L., Misley, H., Hülber, L., Papp-Danka, A., Pintér, M. T., & Dringó-Horváth, I. (2020). Tanárképzők digitális kompetenciájának mérése—A DigCompEdu adaptálása a hazai felsőoktatási környezetre. Neveléstudomány, 2020(2), 5–25. [Google Scholar] [CrossRef]
  22. Hungary’s Artificial Intelligence Strategy. (2020). Available online: https://cdn.kormany.hu/uploads/document/6/67/676/676186555d8df2b1408982bb6ce81c643d5fa4ab.pdf (accessed on 23 November 2024).
  23. Jantakun, T., Jantakun, K., & Jantakoon, T. (2021). A common framework for artificial intelligence in higher education (AAI-HE model). International Education Studies, 14(11), 94. [Google Scholar] [CrossRef]
  24. Kallunki, V., Kinnunen, P., Pyörälä, E., Haarala-Muhonen, A., Katajavuori, N., & Myyry, L. (2024). Navigating the evolving landscape of teaching and learning: University faculty and staff perceptions of the artificial intelligence-altered terrain. Education Sciences, 14(7), 727. [Google Scholar] [CrossRef]
  25. Kizilcec, R. (2023). To advance AI use in education, focus on understanding educators. International Journal of Artificial Intelligence in Education, 34(1), 12–19. [Google Scholar] [CrossRef]
  26. Lérias, E., Guerra, C., & Ferreira, P. (2024). Literacy in artificial intelligence as a challenge for teaching in higher education: A case study at Portalegre Polytechnic University. Information, 15(4), 205. [Google Scholar] [CrossRef]
  27. Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–16). Association for Computing Machinery. ISBN 9781450367080. [Google Scholar]
  28. Lumley, T. (2020). Survey: Analysis of complex survey samples (R package version 4.0) [Computer software]. Available online: https://CRAN.R-project.org/package=survey (accessed on 7 February 2025).
  29. Mah, D. K., & Groß, N. (2024). Artificial intelligence in higher education: Exploring faculty use, self-efficacy, distinct profiles, and professional development needs. International Journal of Educational Technology in Higher Education, 21, 58. [Google Scholar] [CrossRef]
  30. Mah, D. K., Knoth, N., & Egloffstein, M. (2025). Perspectives of academic staff on artificial intelligence in higher education: Exploring areas of relevance. Frontiers in Education, 10, 1484904. [Google Scholar] [CrossRef]
  31. Makowski, D. (2018). The psycho package: An efficient and publishing-oriented workflow for psychological science. Journal of Open Source Software, 3(22), 470. [Google Scholar] [CrossRef]
  32. Marciniak, R., & Baksa, M. (2024). Szövegalkotó mesterséges intelligencia a társadalomtudományi felsőoktatásban: Félelmek és lehetőségek. Educatio, 32(4), 599–611. [Google Scholar] [CrossRef]
  33. Maydeu-Olivares, A. (2013). Goodness-of-fit assessment of item response theory models. Measurement: Interdisciplinary Research and Perspectives, 11(3), 71–101. [Google Scholar] [CrossRef]
  34. McGrath, C., Cerratto Pargman, T., Juth, N., & Palmgren, P. J. (2023). University teachers’ perceptions of responsibility and artificial intelligence in higher education—An experimental philosophical study. Computers and Education: Artificial Intelligence, 4, 100139. [Google Scholar] [CrossRef]
  35. Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D., Thierry-Aguilera, R., & Gerardou, F. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856. [Google Scholar] [CrossRef]
  36. Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235–251. [Google Scholar] [CrossRef]
  37. Møgelvang, A., Bjelland, C., Grassini, S., & Ludvigsen, K. (2024). Gender differences in the use of generative artificial intelligence chatbots in higher education: Characteristics and consequences. Education Sciences, 14(12), 1363. [Google Scholar] [CrossRef]
  38. Mujiono, M. (2023). Educational collaboration: Teachers and artificial intelligence. Jurnal Kependidikan: Jurnal Hasil Penelitian dan Kajian Kepustakaan di Bidang Pendidikan, Pengajaran dan Pembelajaran, 9(2), 618–632. [Google Scholar] [CrossRef]
  39. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. [Google Scholar] [CrossRef]
  40. Ning, Y., Zhang, C., Xu, B., Zhou, Y., & Wijaya, T. T. (2024). Teachers’ AI-TPACK: Exploring the relationship between knowledge elements. Sustainability, 16(3), 978. [Google Scholar] [CrossRef]
  41. Orlando, M., & Thissen, D. (2003). Further investigation of the performance of S-X2: An item fit index for use with dichotomous item response theory models. Applied Psychological Measurement, 27(4), 289–298. [Google Scholar] [CrossRef]
  42. R Core Team. (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing. Available online: https://www.R-project.org/ (accessed on 23 November 2024).
  43. Redecker, C., & Punie, Y. (Eds.). (2017). European framework for the digital competence of educators: DigCompEdu. Publications Office of the European Union. Available online: http://publications.jrc.ec.europa.eu/repository/bitstream/JRC107466/pdf_digcomedu_a4_final.pdf (accessed on 23 November 2024).
  44. Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. [Google Scholar] [CrossRef]
  45. Salhab, R. (2024). AI literacy across curriculum design: Investigating college instructors’ perspectives. Online Learning, 28(2), 22–47. [Google Scholar] [CrossRef]
  46. UNESCO. (2024). AI competency framework for teachers. UNESCO. [Google Scholar] [CrossRef]
  47. Uribe, S. E., Maldupa, I., Kavadella, A., El Tantawi, M., Chaurasia, A., Fontana, M., Marino, R., Innes, N., & Schwendicke, F. (2024). Artificial intelligence chatbots and large language models in dental education: Worldwide survey of educators. European Journal of Dental Education, 28(4), 865–876. [Google Scholar] [CrossRef]
  48. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  49. Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21, 15. [Google Scholar] [CrossRef]
  50. Wickham, H., François, R., Henry, L., Müller, K., & Vaughan, D. (2023a). dplyr: A grammar of data manipulation (R package version 1.1.0) [Computer software]. Available online: https://CRAN.R-project.org/package=dplyr (accessed on 23 November 2024).
  51. Wickham, H., Miller, E., & Smith, D. (2023b). haven: Import and export ‘SPSS’, ‘Stata’ and ‘SAS’ files (R package version 2.5.2) [Computer software]. Available online: https://CRAN.R-project.org/package=haven (accessed on 23 November 2024).
  52. Xu, S., Chen, P., & Zhang, G. (2024). Exploring Chinese university educators’ acceptance and intention to use AI tools: An application of the UTAUT2 model. SAGE Open, 14(4), 1–15. [Google Scholar] [CrossRef]
  53. Yen, W. M. (1984). Effects of local item dependence on the fit and equating performance of the three-parameter logistic model. Applied Psychological Measurement, 8(2), 125–145. [Google Scholar] [CrossRef]
  54. Zhang, C., & Villanueva, L. E. (2023). Generative artificial intelligence preparedness and technological competence: Towards a digital education teacher training program. International Journal of Education and Humanities, 11(2), 164–170. [Google Scholar] [CrossRef]
  55. Zhang, C., Schießl, J., Plößl, L., Hofmann, F., & Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. International Journal of Educational Technology in Higher Education, 20, 49. [Google Scholar] [CrossRef]
Figure 1. Mean score for each competency. Numbers in brackets indicate the number of items per competency.
Figure 2. Variation in the relationship between digital competence and AI literacy by gender and fields of education: (a) male and (b) female.
Table 1. Sociodemographic characteristics of the weighted study sample (N = 1128).

Variable name / values                   Missing values, N (%)   N (%) or Mean (SD)
Gender                                   0 (0)
  Male                                                           645 (57.2%)
  Female                                                         483 (42.8%)
Age (years)                              2 (0.2%)                47.5 (11.20)
Higher education experience (years)      0 (0)                   15.6 (10.9)
Field of education                       0 (0)
  Political Science                                              32 (2.9%)
  Humanities                                                     151 (13.4%)
  Economics                                                      146 (12.9%)
  Theology                                                       29 (2.6%)
  Information Technology                                         69 (6.1%)
  Law                                                            39 (3.5%)
  Engineering, Agricultural Sciences                             206 (18.2%)
  Arts, Art Mediation                                            76 (6.6%)
  Medical and Health Sciences                                    208 (18.5%)
  Teacher Training                                               76 (6.7%)
  Sport Sciences                                                 10 (0.9%)
  Social Sciences                                                25 (2.2%)
  Natural Sciences                                               60 (5.4%)
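As a purely illustrative aid to computing weighted descriptives of the kind reported in Table 1, the following minimal R sketch uses the survey package cited above (Lumley, 2020). The data frame `teachers`, the weight column `wt`, and the variable names are hypothetical placeholders, not the study’s actual data.

```r
# Minimal sketch of weighted descriptive statistics, as in Table 1, using the
# survey package (Lumley, 2020). `teachers`, `wt`, `age`, and `field` are
# hypothetical placeholders, not the study's actual data.
library(survey)

# Declare the design: independent sampling (ids = ~1), weights supplied in `wt`.
des <- svydesign(ids = ~1, weights = ~wt, data = teachers)

# Weighted mean and SD of a continuous variable (e.g., age).
svymean(~age, des, na.rm = TRUE)
sqrt(coef(svyvar(~age, des, na.rm = TRUE)))

# Weighted counts and percentages of a categorical variable.
svytable(~field, des)                     # weighted N per category
prop.table(svytable(~field, des)) * 100   # weighted %
```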
Table 2. The relationship between digital competence and AI literacy, with gender and field of education as moderators *.

                               B        SE       p        LLCI     ULCI
Model 1 (R² = 0.011)
  DCxGndr → AI_Lit             0.104    0.030    <0.001    0.045    0.162
Model 2 (R² = 0.012)
  DCxGndr → AI_Lit             0.093    0.046    0.043     0.003    0.183
  DCxField → AI_Lit
    DCxField1                  0.011    0.085    0.897    −0.156    0.179
    DCxField2                 −0.012    0.053    0.818    −0.116    0.190
    DCxField3                  0.064    0.077    0.409    −0.088    0.216
    DCxField4                  0.021    0.079    0.793    −0.134    0.175
    DCxField8                 −0.026    0.072    0.722    −0.166    0.115
    DCxField11                 0.035    0.059    0.548    −0.080    0.151
    DCxField15                 0.055    0.103    0.591    −0.147    0.257
Model 3 (R² = 0.025)
  DCxGndr → AI_Lit             0.318    0.096    <0.001    0.131    0.506
  DCxField → AI_Lit
    DCxField1                 −0.147    0.180    0.413    −0.500    0.205
    DCxField2                  0.058    0.066    0.381    −0.072    0.188
    DCxField3                  0.064    0.114    0.576    −0.160    0.287
    DCxField4                  0.082    0.102    0.421    −0.118    0.281
    DCxField8                 −0.187    0.156    0.230    −0.492    0.119
    DCxField11                 0.122    0.075    0.103    −0.025    0.268
    DCxField15                −0.006    0.181    0.972    −0.362    0.349
  DCxFieldxGndr → AI_Lit
    DCxField1xGndr            −0.002    0.012    0.881    −0.026    0.022
    DCxField2xGndr            −0.021    0.008    0.006    −0.035   −0.006
    DCxField3xGndr            −0.012    0.010    0.206    −0.031    0.007
    DCxField4xGndr            −0.020    0.010    0.041    −0.040   −0.001
    DCxField8xGndr            −0.002    0.011    0.855    −0.023    0.019
    DCxField11xGndr           −0.023    0.008    0.004    −0.038   −0.007
    DCxField15xGndr           −0.008    0.013    0.551    −0.033    0.018

* DC: teachers’ digital competence (standardized); Gndr: gender (reference: female); AI_Lit: AI literacy (based on IRT scores); Field: field of education; Field6 (reference): information technology; Field1: political and legal sciences; Field2: humanities, social sciences, and teacher training; Field3: economics; Field4: theology and arts; Field8: engineering and agricultural sciences; Field11: health sciences; Field15: natural sciences.
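For readers who wish to see how a moderation model of this form can be specified, the sketch below fits a weighted regression with interaction terms via survey::svyglm (Lumley, 2020). It is a minimal illustration under assumed placeholder names (`teachers`, `ai_lit`, `dc`, `gender`, `field`, `wt`, and the reference label "IT"), not the authors’ analysis script.

```r
# Illustrative sketch (not the authors' analysis script) of a weighted
# moderated regression in the spirit of Table 2, fitted with survey::svyglm
# (Lumley, 2020). All object and column names are hypothetical.
library(survey)

teachers$dc    <- as.numeric(scale(teachers$dc))               # standardize DC
teachers$field <- relevel(factor(teachers$field), ref = "IT")  # reference field

des <- svydesign(ids = ~1, weights = ~wt, data = teachers)

# The crossed formula expands to main effects plus the DCxGndr, DCxField,
# and DCxFieldxGndr interaction terms reported in Model 3.
m3 <- svyglm(ai_lit ~ dc * gender * field, design = des)

summary(m3)   # B, SE, and p for each term
confint(m3)   # lower and upper confidence limits (LLCI, ULCI)
```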
Table 3. The relationship between digital competence and AI literacy, with age and field of education as moderators *.

                               B        SE       p        LLCI     ULCI
Model 1 (R² < 0.001)
  DCxAge → AI_Lit             −0.008    0.023    0.740    −0.053    0.037
Model 2 (R² = 0.009)
  DCxAge → AI_Lit             −0.012    0.024    0.610    −0.058    0.034
  DCxField → AI_Lit
    DCxField1                  0.086    0.077    0.262    −0.065    0.237
    DCxField2                  0.035    0.049    0.475    −0.061    0.130
    DCxField3                  0.122    0.072    0.092    −0.020    0.264
    DCxField4                  0.064    0.076    0.402    −0.086    0.214
    DCxField8                  0.053    0.060    0.377    −0.065    0.171
    DCxField11                 0.076    0.055    0.171    −0.033    0.184
    DCxField15                 0.128    0.099    0.195    −0.066    0.322
Model 3 (R² = 0.015)
  DCxAge → AI_Lit             −0.104    0.080    0.195    −0.261    0.050
  DCxField → AI_Lit
    DCxField1                  0.086    0.077    0.264    −0.065    0.237
    DCxField2                  0.016    0.050    0.752    −0.082    0.114
    DCxField3                  0.120    0.073    0.100    −0.023    0.262
    DCxField4                  0.048    0.080    0.550    −0.109    0.205
    DCxField8                  0.051    0.060    0.394    −0.067    0.170
    DCxField11                 0.080    0.056    0.153    −0.030    0.190
    DCxField15                 0.217    0.114    0.058    −0.007    0.441
  DCxFieldxAge → AI_Lit
    DCxField1xAge              0.087    0.119    0.465    −0.147    0.321
    DCxField2xAge              0.174    0.097    0.073    −0.016    0.364
    DCxField3xAge              0.104    0.101    0.303    −0.094    0.302
    DCxField4xAge              0.067    0.135    0.214    −0.097    0.432
    DCxField8xAge              0.048    0.098    0.623    −0.144    0.241
    DCxField11xAge             0.105    0.094    0.261    −0.078    0.289
    DCxField15xAge            −0.063    0.131    0.631    −0.319    0.194

* DC: teachers’ digital competence (standardized); Age: age of teachers in years (standardized); AI_Lit: AI literacy; Field: field of education; Field6 (reference): information technology; Field1: political and legal sciences; Field2: humanities, social sciences, and teacher training; Field3: economics; Field4: theology and arts; Field8: engineering and agricultural sciences; Field11: health sciences; Field15: natural sciences.
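To make the specification behind Tables 2–4 explicit, Model 3 of Table 3 corresponds to a moderated regression of (at least) the following form; this is our notational reconstruction from the reported paths, not an equation taken from the article:

$$
\mathrm{AI\_Lit}_i = \beta_0 + \beta_1\,\mathrm{DC}_i + \beta_2\,\mathrm{Age}_i + \sum_k \gamma_k\,\mathrm{Field}_{ik} + \beta_3\,(\mathrm{DC}_i \times \mathrm{Age}_i) + \sum_k \delta_k\,(\mathrm{DC}_i \times \mathrm{Field}_{ik}) + \sum_k \lambda_k\,(\mathrm{DC}_i \times \mathrm{Field}_{ik} \times \mathrm{Age}_i) + \varepsilon_i,
$$

where the Field_{ik} are dummy indicators with information technology as the reference category. Because DC and Age are standardized, the coefficient reported for DCxAge (β₃) is the change in the DC–AI literacy slope per one standard deviation of age within the reference field, and each λ_k shifts that change for field k.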
Table 4. The relationship between digital competence and AI literacy, with teaching experience and field of education as moderators *.

                               B        SE       p        LLCI     ULCI
Model 1 (R² < 0.001)
  DCxTexp → AI_Lit             0.002    0.023    0.914    −0.042    0.047
Model 2 (R² = 0.009)
  DCxTexp → AI_Lit            −0.001    0.023    0.147    −0.047    0.042
  DCxField → AI_Lit
    DCxField1                  0.087    0.077    0.257    −0.064    0.238
    DCxField2                  0.032    0.048    0.507    −0.063    0.127
    DCxField3                  0.120    0.072    0.096    −0.021    0.262
    DCxField4                  0.054    0.076    0.423    −0.089    0.211
    DCxField8                  0.054    0.060    0.374    −0.065    0.172
    DCxField11                 0.079    0.055    0.152    −0.029    0.187
    DCxField15                 0.122    0.099    0.218    −0.073    0.317
Model 3 (R² = 0.017)
  DCxTexp → AI_Lit            −0.144    0.071    0.043    −0.284   −0.004
  DCxField → AI_Lit
    DCxField1                  0.083    0.078    0.287    −0.070    0.236
    DCxField2                  0.029    0.049    0.554    −0.067    0.125
    DCxField3                  0.119    0.072    0.098    −0.022    0.261
    DCxField4                  0.060    0.079    0.451    −0.096    0.215
    DCxField8                  0.058    0.060    0.331    −0.060    0.177
    DCxField11                 0.079    0.055    0.155    −0.030    0.187
    DCxField15                 0.212    0.125    0.090    −0.033    0.457
  DCxFieldxTexp → AI_Lit
    DCxField1xTexp             0.170    0.113    0.133    −0.052    0.391
    DCxField2xTexp             0.163    0.088    0.065    −0.010    0.335
    DCxField3xTexp             0.153    0.097    0.118    −0.039    0.344
    DCxField4xTexp             0.136    0.111    0.220    −0.081    0.352
    DCxField8xTexp             0.278    0.098    0.005     0.085    0.471
    DCxField11xTexp            0.137    0.085    0.108    −0.030    0.303
    DCxField15xTexp            0.019    0.129    0.883    −0.234    0.272

* DC: teachers’ digital competence (standardized); Texp: teaching experience in years (standardized); AI_Lit: AI literacy; Field: field of education; Field6 (reference): information technology; Field1: political and legal sciences; Field2: humanities, social sciences, and teacher training; Field3: economics; Field4: theology and arts; Field8: engineering and agricultural sciences; Field11: health sciences; Field15: natural sciences.
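Where a three-way interaction such as DCxField8xTexp reaches significance, a common follow-up is to probe the simple slope of digital competence at low and high values of the moderator. The sketch below illustrates one way to do this by re-centering the standardized moderator; it is a minimal sketch under assumed placeholder names (`teachers`, `ai_lit`, `dc`, `texp`, `field`, `wt`), not the study’s procedure.

```r
# Illustrative simple-slopes probe for a significant three-way interaction
# such as DCxField8xTexp. All names are hypothetical placeholders.
library(survey)

# Slope of DC at -1 SD and +1 SD of (standardized) teaching experience:
# re-center the moderator, rebuild the design, refit, and read off the `dc`
# coefficient, which is the DC slope at texp_c = 0 in the reference field.
for (t0 in c(-1, 1)) {
  teachers$texp_c <- teachers$texp - t0
  des_t <- svydesign(ids = ~1, weights = ~wt, data = teachers)
  m_t   <- svyglm(ai_lit ~ dc * texp_c * field, design = des_t)
  cat("texp =", t0, "SD: b(dc) =", round(coef(m_t)[["dc"]], 3), "\n")
}
```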