1. Introduction
The rapid development of artificial intelligence (AI) technologies and their availability to the general public have greatly impacted most areas of everyday life. The healthcare and finance industries are among the most affected, with AI technologies transforming patient care and reshaping risk analysis. The transportation sector has experienced significant change with the implementation of intelligent traffic control systems aimed at reducing accidents and improving traffic efficiency. AI technologies are integrated into specialized computer programs as well as ordinary smart devices, allowing them to be used in daily activities and decision-making processes [
1]. AI has revolutionized communication by enabling individuals to communicate with technology through natural language processing applications like virtual assistants and chatbots. These tools enhance the smooth interaction between humans and computers, increasing the accessibility and user-friendliness of technology [
2]. Within the field of education, there is an ongoing discussion about the capabilities of artificial intelligence (AI), recognizing its advantages, such as the ability to customize educational materials for individual students, but also noting its disadvantages, such as the potential to violate ethical standards [
3].
With many advances made in AI technology, there is a growing recognition of the need for skills to ensure the responsible and ethical use of AI [
3,
4]. The issue of assessing AI abilities, as well as recognizing areas for enhancement, is becoming more pressing [
4,
5,
6]. The significance of digital skills and competencies among educators is increasing, and proficiency in artificial intelligence is becoming a crucial component of modern education.
In 2017, the European Commission’s “DigCompEdu” framework [
7] highlighted the importance of digital skills and competencies in educators’ everyday professional lives and the need to develop these skills and competencies continuously. The “DigCompEdu” framework defines a wide range of skills and competencies that educators must possess in order to use digital technology proficiently in their professional work. The framework categorizes 22 competencies into six primary areas: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and enabling learners’ digital competence. The goal of this framework is to encourage the efficient use of digital technology in educational institutions, thereby improving the quality of teaching and learning outcomes. The framework also functions as a benchmark for the creation of other digital competency frameworks at both national and regional levels, as well as for the design of teacher training programs [
7]. Based on the first version of DigComp, a new framework, DigComp 2.2 [
8], was developed, which examines the possible influence of AI on information and data literacy in two main domains: (1) navigating, searching, and refining data, information, and digital content and (2) assessing data, information, and digital content [
8,
9]. In addition, UNESCO has released the ICT Competency Framework for Teachers [
10], emphasizing the necessity of technical education for teachers to enhance their professional growth. This framework implies that training teachers in ICT involves professionalizing their role by including essential professional qualities to enhance their overall performance [
10,
11]. Although there are several frameworks for digital skill and competence development in education, none of them provide a complete answer on how to include AI competencies in already developed digital skill and competence frameworks [
9].
In order to determine how teachers can best carry out professional development and evaluate their AI competencies, three research questions were raised:
- RQ1:
What principal components can be identified from a study that explain the variance in teachers’ self-assessment of AI competencies?
- RQ2:
How do the identified components align with existing AI or digital skills and competencies frameworks?
- RQ3:
What initial patterns emerge from the analysis?
The aim of this study is to explore and identify the principal components that explain the variance in teachers’ self-assessment of artificial intelligence (AI) competencies by developing and administering a self-assessment questionnaire. Rather than drawing definitive conclusions, the primary objective of this pilot project is to provide initial insights into the key dimensions of AI competencies for teachers. By focusing on the exploratory identification of these principal components, this research lays the groundwork for future studies that will refine and validate a comprehensive AI competency assessment tool for educators. This approach ensures that the initial outcomes derived from this pilot study offer substantial first-stage data that direct the ongoing development of more targeted professional development frameworks for educators in the digital technology age.
2. Key Evaluation Criteria for AI Literacy Tools
It is critical to differentiate between digital literacy and AI literacy in the ever-changing digital environment, since each involves a distinct set of competencies and knowledge necessary for effectively navigating modern technologies. Digital literacy encompasses the fundamental set of skills and competencies required to proficiently utilize digital devices, communication tools, and networks. This includes proficiency in utilizing software applications, managing digital assets, and participating in online communication and collaboration [
7,
12,
13]. However, AI literacy extends beyond fundamental digital abilities, embracing a more profound comprehension of artificial intelligence technology and its practical uses. AI literacy includes not only the ability to use AI tools but also the ability to grasp fundamental AI principles, analyze AI systems in a discerning manner, and address ethical concerns associated with AI use [
4,
14,
15]. While digital literacy provides individuals with the necessary abilities to operate in a digital environment, AI literacy enables them to effectively utilize and evaluate AI technologies, ensuring responsible and efficient integration into different areas of life and work.
Several existing digital and AI literacy assessment models test teachers’ AI and digital knowledge, skills, and competencies, such as their ability to understand, use, evaluate, and handle AI technologies in an ethical way. Multiple sources [
3,
4,
5,
6,
16] indicate that possessing the capability to effectively utilize artificial intelligence (AI) is a crucial competency in the education field. These competencies allow educators to leverage AI tools for the purpose of improving teaching methods, personalizing the learning process, and simplifying administrative duties (such as lesson planning). It is important to not only utilize AI tools but also possess the ability to critically appraise them [
4,
6,
16]. This enables educators to evaluate the dependability and utility of AI systems, ensuring their equitable and effective utilization. The ethical considerations of AI use, which tackle issues like data privacy and potential biases in algorithms, play a crucial role in assisting students in utilizing and assessing the information that AI provides [
4,
6]. Similar to digital skills, effective use of AI does not necessitate in-depth knowledge of AI theory but rather the ability to use AI resources meaningfully [
4,
6,
7].
The differentiation between digital literacy and AI literacy emphasizes the need for educators to acquire a comprehensive range of abilities in order to effectively incorporate AI technologies into their teaching methodologies. This study aims to explore and identify the principal components of AI literacy that explain the variance in teachers’ self-assessed competencies and how these components align with existing digital skill and competence frameworks. Therefore, the following potential AI literacy competency components are set for the creation of a self-assessment questionnaire:
1.
Understanding the fundamentals of AI. It is essential for educators to understand the basic principles of AI, such as machine learning at a basic level, in order to critically evaluate when and how to incorporate AI tools into teaching methods, student assessment and evaluation, or day-to-day administrative duties [
4,
6].
2.
Critical evaluation of AI. It is essential for educators to both recognize and critically evaluate AI technology in order to understand the opportunities it provides and ensure appropriate use in education [
4,
16,
17].
3.
Ethics. It is also essential to understand the ethical issues that may arise when using AI technologies, such as data security, potential biases in AI, and violations of ethical norms. Therefore, it is essential to critically evaluate not only the AI tool itself, looking for opportunities for use, but also the information it provides [
4,
8,
14,
15,
18].
4.
Usage. In order to assess AI literacy, it is necessary to include questions about the educator’s ability to use AI tools for teaching methods, personalizing the student experience, or performing administrative tasks [
4,
17,
19].
5.
Awareness. Considering that AI affects every sphere of life, it is essential to assess educators’ understanding of the broader societal implications of AI, including its potential benefits and risks and its influence on education and the future of work [
4,
17,
18].
6.
Communication. Those educators who already have basic AI competencies and have the desire to improve professionally need to assess their ability to communicate effectively about AI concepts and usage with students, colleagues, and parents and to collaborate with others in implementing AI solutions in education [
8,
17].
These components will guide the development of a self-assessment questionnaire for AI literacy-related questions and provide a foundation for future research on AI literacy among educators.
3. Evaluating AI Literacy: Analyzing Existing Assessment Scales
In order to answer the second and third research questions—how the identified components align with existing AI or digital skills and competencies frameworks and what initial patterns emerge from the analysis—it is necessary to examine existing AI literacy scales, evaluating their strengths and limitations.
The European Commission has created “Selfie for Teachers” [
20], a widely used and officially approved self-assessment tool for educators’ digital skills and competencies. This comprehensive self-reflection tool assists educators in assessing and developing their digital skills and competencies. By allowing educators to benchmark their results against reference points, such as their own assessments from earlier time periods, the tool guides them in improving their digital skills and competencies. DigCompEdu [
7] structures the “Selfie for Teachers” [
20] tool around six competency areas: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and promoting digital competence in learners. Some of the tool’s strengths include a user-friendly interface where teachers can self-reflect, access their assessment at any time, and compare their results over time or with a group (such as their peers in an educational institution), as well as with global averages. Although the self-assessment tool was developed based on DigComp [
7] and DigComp 2.2 [
8], it does not include in-depth questions about the use of AI in education. “Selfie for Teachers” provides personalized feedback and recommendations for each competency area, allowing teachers to identify their strengths and weaknesses and plan targeted professional development activities. The opportunity for the entire educational institution to work as a team to improve their requirements also plays an important role; teachers can use the collected data to identify common needs and develop group learning activities, promoting a community of practice. Teachers receive a comprehensive report after completing the self-reflection, which includes graphic depictions of their competence levels and specific guidance on skill and competence enhancement.
The Selfie for Teachers tool [
20] adequately addresses several facets of digital competence. However, it still lacks specific components and a comprehensive emphasis on AI-related proficiencies, which are becoming increasingly important in contemporary education [
4,
6,
16]. The tool mostly checks for general digital skills and competencies, but it does not specifically test AI literacy competencies like understanding AI algorithms, using AI in an ethical way, and critically evaluating AI technologies. The measure is heavily reliant on self-assessment, which is prone to mistakes; teachers’ self-perceptions may not consistently align with their real level of ability [
21]. To utilize the tool effectively, instructors must already possess certain key digital abilities. The tool is offered in various languages, but its efficacy may vary depending on the user’s proficiency with digital tools and their ability to complete the questionnaire independently. Some teachers may require extra assistance and training to properly use and understand the tool’s feedback.
In summary, “Selfie for Teachers” is a valuable tool that helps teachers improve their digital skills and competencies through organized self-evaluation and individualized feedback. Integrating AI competence evaluation components and implementing additional support mechanisms for self-assessment and accessibility could promote ongoing professional learning and collaboration among instructors in an educational institution.
To serve as a more focused instrument exclusively for evaluating AI competencies, Wang et al. [
4] created the “Artificial Intelligence Literacy Scale” (AILS). The scale evaluates users’ proficiency in utilizing AI technology through a comprehensive framework comprising four primary elements: comprehension, utilization, assessment, and ethical considerations. While AILS provides an in-depth assessment of AI competencies, it may not investigate specific applications, which limits its usefulness in specialized educational contexts. Teachers who use advanced or specialized artificial intelligence tools may need additional assessment tools to accurately assess their competence and identify opportunities for further skill and competence development [
11,
22]. The AILS tool also shows a high correlation between AI literacy and digital literacy skills and competencies, which can make it difficult to distinguish between the two [
9].
Given that educators do not require a high level of digital skills and competencies to work with AI tools [
4,
6,
7], it is also possible to consider the Scale for the Assessment of Non-Experts’ AI Literacy (SNAIL) [
6]. The scale’s purpose is to measure AI literacy among individuals who have no formal training in AI or computer science. The SNAIL tool identifies three main factors: technical understanding, critical evaluation, and practical application. It is similar to the AILS [
4] tool but with fewer elements, making it potentially more accessible and understandable for educators. A significant advantage that distinguishes this tool from the ones mentioned above is its suitability for people who are not experts in computer science or artificial intelligence. However, because the questionnaire was developed to assess AI literacy across all sectors, it may not be specific enough to the education sector to evaluate teachers’ AI competencies. Despite some limitations related to the target audience, the SNAIL tool is an important step towards improving artificial intelligence skills and competencies, promoting targeted professional development, and encouraging the responsible use of AI technologies in the education sector.
To conclude, it is important for educators to enhance their AI competencies, as technology is increasingly influencing every aspect of life, including education. It is crucial for educators not only to understand and use AI tools but also to critically evaluate their reliability and ethical aspects. An improved AI literacy assessment for education requires several refinements to increase its effectiveness and to support collaboration among educators. Detailed assessments of specific education-related AI applications need to be incorporated into the AILS tool [
4]. It is possible to enhance the applicability of the SNAIL tool [
6] by adapting it to the educational context and involving educators in its development. The Selfie for Teachers tool [
20] could be updated to include AI-specific competencies, such as understanding AI algorithms, the ethical use of AI, and the critical evaluation of AI technologies, while taking care not to overlap with assessment criteria already defined for digital skills and competencies. The self-assessment questionnaire should assess fundamental competencies such as comprehension of AI principles, practical application of AI tools, critical analysis of AI systems, ethical considerations in AI, incorporation of AI into teaching methods, awareness of AI’s societal impact, development of AI literacy, and communication and collaboration abilities. Continuously updating AI competence assessment tools could better prepare educators for the pedagogical use of AI and improve teaching and learning processes.
4. Materials and Methods
This study aims to offer preliminary insights into the creation of a self-assessment questionnaire for teachers to evaluate their AI competencies. Given the growing importance of AI in education, it is critical to create a reliable tool to help educators self-assess and improve their AI-related competencies. This study focuses on understanding how teachers evaluate their AI competencies in the context of digital skills and competencies at different levels, laying the groundwork for improving the questionnaire, and directing future research with a larger sample size.
The self-assessment questionnaire was developed based on a review of existing literature on digital skills and AI literacy frameworks, including DigCompEdu [
7], the revised DigComp 2.2 [
8], and various AI literacy competencies [
4,
6,
16]. The questionnaire was designed to operationalize the six AI literacy competency-related components identified in the literature review—understanding AI fundamentals, critical evaluation, ethics, usage, awareness, and communication—into measurable constructs that assess educators’ AI literacy comprehensively. Each question category maps onto one or more of these components to ensure alignment between theoretical constructs and practical application. Questions about knowledge and identification (Q38, Q39) of AI reflect the competency of understanding AI fundamentals by measuring educators’ familiarity with AI principles and their ability to recognize AI tools and concepts in educational contexts [
4,
6]. Questions on practical experience (Q40) align with the usage component, assessing how educators integrate AI tools into teaching and administrative tasks [
4,
17,
19]. Critical evaluation questions (Q41, Q42) focus on educators’ ability to analyze the reliability, educational value, and ethical implications of AI tools, addressing a central aspect of AI literacy [
4,
16,
17]. Similarly, questions about ethical considerations (Q42) directly measure educators’ awareness of issues like data security, potential biases, and fairness in AI use, consistent with the ethics component [
4,
18]. The inclusion of algorithmic thinking (Q43) evaluates deeper technical understanding and supports educators’ ability to communicate AI concepts effectively [
4,
6]. Moreover, questions about the digital divide (Q44) emphasize the awareness component, highlighting educators’ strategies to ensure equitable access to AI tools for diverse student populations [
16,
17]. Lastly, questions about cooperation, professional growth, and cross-curricular connections test teachers’ ability to work together, grow professionally, and use AI in situations involving different subjects (Q45, Q46, Q47), which is in line with the communication and awareness components [
15,
17].
The creation of the self-assessment questionnaire is based on the Selfie for Teachers self-assessment tool [
20], developed by the European Commission for teachers’ digital skills and competencies and based on the DigComp 2.2 framework [
8], looking for new ways to include the AI competencies of educators in an existing tool. The Selfie for Teachers tool was chosen due to DigComp’s [
7] extensive description of digital skills and competencies. The new DigComp 2.2 [
8] version also includes the impact of AI on the digitalization of education. As a result, Selfie for Teachers is a good reference point to include a new facet in an already existing digital skills assessment tool [
9]. A total of 47 questions were included in the questionnaire, divided into three main sections:
Demographic Information (5 questions). This section collects basic respondent information such as age, teaching experience, and subject field.
Competency Assessment from Selfie for Teachers (32 questions). This section uses Selfie for Teachers self-assessment questions divided into six main sections—professional engagement (question set 1), digital resources (question set 2), teaching and learning (question set 3), assessment (question set 4), empowering learners (question set 5), and enabling learners’ digital competence (question set 6). The questions were used to study how the newly raised questions about the AI competencies needed by teachers fit with the existing framework and to answer the research questions.
AI Literacy Competence (10 questions, question set 7, see attached
Appendix A). This section evaluates various competencies related to digital skills and competencies and AI literacy, such as understanding AI fundamentals, ethical considerations, practical usage, critical evaluation of AI technologies, algorithmic thinking, awareness of societal implications and the digital divide, professional development, and communication about AI technologies with students and other teachers. These questions were developed based on the criteria set forth in the literature analysis of the study.
Each AI competency-related question in the questionnaire is designed to measure teachers’ self-assessed proficiency levels across a six-point scale, ranging from basic awareness (level 1) to advanced application and leadership in digital and AI literacy competencies (level 6):
Level 1: Newcomer—The respondent is aware of the competency but has not applied it in practice.
Level 2: Explorer—The respondent has attempted or recognized the competence a few times but does not usually use it.
Level 3: Integrator—The respondent regularly includes the competence in daily teaching practices without full critical evaluation.
Level 4: Expert—The respondent uses the competence daily and critically evaluates the appropriate tools or methods in context.
Level 5: Leader—The respondent shares experiences of daily competency usage with colleagues and adapts teaching practices based on evaluation.
Level 6: Pioneer—The respondent actively promotes and initiates changes in institutional practices regarding digital technologies.
Additionally, each question includes the option “Know nothing about this competence”, allowing participants to indicate a lack of knowledge about specific competencies. To enhance the clarity and consistency of the six-point proficiency scale, each question in the questionnaire included real-life examples relevant to its specific topic. This approach ensured that respondents could interpret the levels accurately and relate them to their professional experiences [
23,
24]. For instance, a critical evaluation question asked respondents to evaluate their ability to analyze and select AI tools based on outcomes, relevance to the learning program, and ethical considerations. A practical example provided for Level 4 (Expert) was as follows: “I analyze and select AI tools based on their impact on outcomes, relevance to the learning program, and ethical considerations (e.g., I choose a text-generating AI tool to teach students critical evaluation of historical facts)”. All questions included detailed descriptions, enabling respondents to align their self-assessment with specific scenarios and tasks. This ensured consistent understanding of the scale and improved the accuracy of the assessment. Detailed scaling provides a nuanced understanding of the respondents’ self-perceived proficiency levels [
25]. The same scale is used in the “Selfie for Teachers” [
20] tool to measure teachers’ digital skill and competence proficiency; it was also used in the study questionnaire for consistency purposes.
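For illustration, the following minimal sketch (Python is used here purely for illustration; the label strings are assumptions, while the numeric codes 1–6 and 9 follow the scale described above and reported in the Results section) shows how responses on this scale could be coded for analysis:

```python
# Minimal sketch: mapping the six proficiency levels and the "Know nothing"
# option to numeric codes (1-6 and 9, as reported in the Results section).
# The exact label strings are assumptions for illustration only.
LEVEL_CODES = {
    "Know nothing about this competence": 9,
    "Newcomer": 1,
    "Explorer": 2,
    "Integrator": 3,
    "Expert": 4,
    "Leader": 5,
    "Pioneer": 6,
}

def code_response(label: str) -> int:
    """Return the numeric code for a selected answer label."""
    return LEVEL_CODES[label]
```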
This study adhered to ethical standards for research involving human subjects. The institution’s ethics commission granted approval for the research prior to its conduct. The questionnaire was distributed online, and participants were informed that their participation was entirely voluntary. Informed consent was obtained by clearly explaining the purpose of the study, the anonymity of responses, and the inability to withdraw specific submissions after completion due to the anonymized nature of the data. No sensitive or personally identifiable information was collected, ensuring strict confidentiality and compliance with ethical guidelines for research.
To validate the questionnaire’s content, two practicing secondary school teachers—a geography teacher and a biology teacher—reviewed the items for new question clarity and relevance. Their feedback helped refine the questions to ensure that they accurately reflect the desired constructs. The final questionnaire was administered electronically using Microsoft Forms, which was chosen for its ease of use and ability to format questions effectively. Participants received detailed instructions on how to assess their skills, with each section of the questionnaire providing a brief description to guide their responses.
After the validation process, a convenience sample [
26,
27] of 42 secondary school teachers participated in this pilot study, representing a range of subjects and teaching grades from 5 to 12. The sample includes teachers with varying levels of experience in using digital technologies in their teaching, from daily users to those who use technology less frequently. This diversity allows for preliminary exploration of competencies across different teaching contexts and technology use levels and for verification of whether the questionnaire is equally comprehensible to teachers of different levels of digital competence.
Given the limited sample size of this pilot project, the data analysis focused on the internal consistency of the questionnaire items and on an exploratory examination of the underlying components. These analyses enabled a preliminary understanding of the components present in the data and helped answer the research questions. Reliability analysis was conducted by evaluating the Cronbach’s alpha coefficient for each scale, assessing the internal consistency and reliability of the various competency measures. Cronbach’s alpha values equal to or greater than 0.70 are considered acceptable since they indicate a moderate level of internal consistency that is suitable for the early stages of research [
28,
29]. A principal component analysis (PCA) [
30] was performed to evaluate the structure of the data and to determine whether the observed variability is merely random. PCA was chosen due to its ability to assess the complexity of the data and determine the extent to which it can be further simplified. While eigenvalues greater than 1.0 (Kaiser’s criterion) are commonly used to determine the number of components to retain [
31], this method can sometimes lead to over-extraction. To address this, parallel analysis was employed as a more robust approach, as it compares the observed eigenvalues to those generated from random data. Parallel analysis determined that two components should be retained for each comparison, providing a more reliable foundation for interpretation [
32,
33]. During the principal component analysis (PCA), the Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy was used to determine whether the variables were suitable for factor analysis. The KMO value indicates the degree to which the variables contribute to the extracted factors. If a variable’s KMO value falls below the acceptable threshold of 0.5, it suggests that the variable does not significantly contribute to the factor structure [
29]. To improve the overall sampling adequacy and the interpretability of the factor structure, such variables were removed from the analysis. Additionally, items with high uniqueness values (above 0.6) were also considered for removal [
34], as they indicate that the extracted components do not adequately explain the variable. Eliminating such items refines the PCA, enabling a clearer identification of the key components, while the remaining variables contribute more meaningfully to the analysis. Data analysis (Cronbach’s alpha and PCA) was performed using the application “Jamovi” [
35].
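Although all analyses in this study were performed in Jamovi, the following minimal sketch outlines how the same reliability and adequacy checks (Cronbach’s alpha, KMO, and parallel analysis) could be reproduced in Python; the DataFrame of numerically coded items and the pingouin and factor_analyzer packages are assumptions for illustration, not part of the original analysis:

```python
# Minimal sketch (not the study's implementation, which used Jamovi) of the
# reliability and sampling-adequacy checks described above. Assumes a
# hypothetical pandas DataFrame `responses` whose columns are numerically
# coded questionnaire items (missing answers removed beforehand).
import numpy as np
import pandas as pd
import pingouin as pg
from factor_analyzer.factor_analyzer import calculate_kmo

def alpha_report(items: pd.DataFrame) -> None:
    """Overall Cronbach's alpha plus alpha-if-item-deleted."""
    alpha, _ci = pg.cronbach_alpha(data=items)
    print(f"Cronbach's alpha (all items): {alpha:.3f}")
    for col in items.columns:
        a_drop, _ = pg.cronbach_alpha(data=items.drop(columns=col))
        print(f"  alpha if {col} deleted: {a_drop:.3f}")

def low_kmo_items(items: pd.DataFrame, threshold: float = 0.5) -> list:
    """Flag items whose individual KMO value falls below the 0.5 threshold."""
    kmo_per_item, kmo_total = calculate_kmo(items)
    print(f"Overall KMO: {kmo_total:.3f}")
    return [c for c, k in zip(items.columns, kmo_per_item) if k < threshold]

def parallel_analysis(items: pd.DataFrame, n_sims: int = 1000, seed: int = 1) -> int:
    """Retain components whose observed eigenvalues exceed the 95th percentile
    of eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = items.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(items.T)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim.T)))[::-1]
    return int(np.sum(observed > np.percentile(random_eigs, 95, axis=0)))
```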
The goal is to identify the key components that account for the greatest variance in teachers’ AI and digital skills and competencies. Considering that the main task at this stage of the research is to check the developed questionnaire and to understand whether its questions work equally well for teachers in different competence groups, a relatively small sample was used, which will be expanded in the next stages of the research. The limited sample size of this study is intended to produce first insights and ideas for subsequent testing, rather than conclusive answers. The findings will facilitate the enhancement of the questionnaire and enable the preparation of subsequent research with more extensive sample sizes.
5. Results
This study aims to explore the basic dimensions of teachers’ self-assessed AI competencies, to understand how these competencies align with existing digital skills and competence frameworks, and to develop a self-assessment questionnaire whose results are analyzed using statistical methods to identify key elements of teacher competence. Three main questions guide the research:
RQ1: What principal components can be identified from a study that explain the variance in teachers’ self-assessment of AI competencies?
RQ2: How do the identified components align with existing AI or digital skills and competencies frameworks?
RQ3: What initial patterns emerge from the analysis?
This section presents the results of the analysis, starting with evaluating the internal consistency of the questionnaire through Cronbach’s alpha, followed by identifying the main components using principal component analysis (PCA) to answer the research question.
In order to evaluate the reliability of the questionnaire, Cronbach’s alpha coefficient was calculated for both the already-existing questions regarding digital competencies from Selfie for Teachers [
20] and new AI literacy-related questions in order to establish internal consistency and reliability of the instrument. By performing a reliability analysis on 32 questions that are repeated from the Selfie for Teachers framework [
20], Cronbach’s alpha was 0.923, indicating strong overall internal consistency. In addition, Cronbach’s alpha was 0.872 for the 10 AI-related questions, demonstrating an acceptable level of reliability. When all 42 items (the existing digital competency questions together with the AI-related questions) were analyzed jointly, the alpha coefficient increased above 0.937, showing that the reliability of the questionnaire held across the full set of questions. The removal of any single item did not substantially alter the internal consistency, which ranged from 0.917 to 0.938 for the existing digital competency questions and from 0.848 to 0.872 for the AI-related questions. This implies that all items contribute to the reliability of the instrument; there is no evidence to exclude any item. All alpha coefficients met the cutoff point of 0.70, regarded as an acceptable threshold for internal consistency [
28], thereby validating the study’s instrument in terms of the questionnaire’s ability to target digital skills and competencies as well as AI competence of teachers.
Determining whether to add the AI-related competencies (question set 7, see
Appendix A) to the current digital skills and competencies (question sets 1–6) or to address them separately is important for answering the research questions. To achieve this, principal component analysis (PCA) was employed to examine the underlying structure of the data and determine how the AI-related items corresponded with the existing subsets of digital skills and competencies. PCA was run on each digital skill and competence set (e.g., professional engagement, digital resources) together with the relevant AI questions in order to assess whether the new items integrated with existing digital competency factors or formed new components. This analysis aims to ascertain whether the AI competencies align with the current digital skills and competence framework or whether they stand alone and require separate treatment.
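A compact sketch of this set-by-set procedure is shown below; the column prefixes (e.g., “Q1_” for Set 1, “Q7_” for Set 7), the factor_analyzer package, and the varimax rotation are illustrative assumptions rather than a description of the Jamovi analysis:

```python
# Minimal sketch of running a PCA on one digital-competence question set
# (Sets 1-6) together with the AI items (Set 7) and inspecting loadings and
# uniqueness values; column prefixes such as "Q1_" are assumed for illustration.
import pandas as pd
from factor_analyzer import FactorAnalyzer

AI_PREFIX = "Q7_"

def pca_for_pair(responses: pd.DataFrame, set_prefix: str, n_components: int) -> pd.DataFrame:
    cols = [c for c in responses.columns
            if c.startswith(set_prefix) or c.startswith(AI_PREFIX)]
    data = responses[cols].dropna()
    pca = FactorAnalyzer(n_factors=n_components, method="principal", rotation="varimax")
    pca.fit(data)
    out = pd.DataFrame(pca.loadings_, index=cols,
                       columns=[f"Component {i + 1}" for i in range(n_components)])
    out["uniqueness"] = pca.get_uniquenesses()
    return out

# Example: "Professional Engagement" (Set 1) paired with "AI Literacy Competence" (Set 7)
# print(pca_for_pair(responses, set_prefix="Q1_", n_components=2).round(2))
```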
To provide an overview of teacher self-assessments across AI literacy and professional engagement topics, descriptive statistics were calculated for each question, including mean, median, standard deviation, and range (minimum–maximum) [
36]. The results (see
Table 1) revealed variability in proficiency levels, with means ranging from 2.3 (Q7_47) to 4.86 (Q1_14) and standard deviations ranging from 1.17 (Q1_8) to 2.98 (Q1_14). The scale, ranging from 1 (newcomer) to 6 (pioneer), with an additional option of 9 (no knowledge), captured a broad spectrum of responses, reflecting teachers’ diverse levels of proficiency and familiarity with the competencies.
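For illustration, the per-question summary in Table 1 could be reproduced with a few lines of Python (pandas assumed; item names such as “Q1_8” follow the convention used in Table 1 and are otherwise hypothetical):

```python
# Minimal sketch of the per-question descriptive statistics reported in Table 1
# (mean, median, standard deviation, minimum, maximum); assumes numerically
# coded responses in a DataFrame whose columns are the questionnaire items.
import pandas as pd

def describe_items(responses: pd.DataFrame) -> pd.DataFrame:
    return responses.agg(["mean", "median", "std", "min", "max"]).T.round(2)

# e.g., describe_items(responses[["Q1_8", "Q1_14", "Q7_42", "Q7_44", "Q7_47"]])
```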
Questions with high variation, such as Q1_14 (SD = 2.98), show a wide range of skill levels, likely due to the mix of beginner-level responses (e.g., “Newcomer”, coded as “1”) and the “No Knowledge” option (coded as “9”). In contrast, questions such as Q1_8 (SD = 1.17) showed greater consistency, suggesting more uniform self-evaluation in the professional engagement competency. Low mean scores for Q7_42 (mean = 2.38, SD = 2.16) and Q7_44 (mean = 2.64, SD = 2.09) highlight weaker areas, particularly AI ethics and cross-disciplinary applications, indicating the need for more professional development or clearer integration strategies.
5.1. Comparing “Professional Engagement” and “AI Literacy Competence”
A principal component analysis (PCA) was conducted to explore the relationships between the “Professional Engagement” (Set 1) and “AI Literacy Competence” (Set 7) question sets. The Kaiser-Meyer-Olkin (KMO) value for all included questions was 0.741, indicating adequate sampling; only one question (1_9) was removed due to a low KMO value. Parallel analysis indicated two components, which together explained 58.0% of the variance in the data (see
Table 2).
The first component explained 33.6% of the variance and included questions from both Set 1 and Set 7, suggesting an overlap between general professional engagement competencies and AI-related competencies. The second component, which explained 24.4% of the variance, was loaded primarily by Set 7 questions, indicating that AI literacy competence forms a factor distinct from the general digital skills and competencies measured in Set 1 (see
Table 3).
Although there is some integration of AI literacy competencies with general professional engagement, these findings underscore the need to treat AI literacy competencies as a distinct set of competencies.
5.2. Comparing “Digital Resources” and “AI Literacy Competence”
For the “Digital Resources” (Set 2) and “AI Literacy Competence” (Set 7) question sets, PCA identified two components that together explained 63.0% of the total variance. Component 1 explained 37.7%, while Component 2 explained 25.3% (see
Table 4). The KMO value of 0.790 indicated the data’s suitability for PCA, but questions 2_15 and 2_17 were excluded due to low KMO and high uniqueness.
Component 1, which grouped items from both digital resource and AI competency sets, involved creating, organizing, and using digital resources, as well as recognizing ethical aspects and critical thinking about AI. This suggests that teachers who manage digital resources effectively are also adept at using AI tools, indicating a blending of competencies. This component can be labeled “Integration and Management of Digital and AI Resources”. Component 2, focused mainly on AI-specific competencies, involved identifying AI, critically evaluating it, and sharing AI practices, showing that AI literacy remains distinct from broader digital skills (see
Table 5).
This highlights the need for focused development in AI competencies alongside existing digital skills and competencies. The PCA results on Set 2 and Set 7 suggest that while digital resource management and AI competencies overlap, AI literacy competencies still require targeted professional development, suggesting that both areas complement each other in educational practice.
5.3. Comparing “Teaching and Learning” and “AI Literacy Competence”
The PCA carried out on the “Teaching and Learning” (Set 3) and “AI Literacy Competence” (Set 7) question sets distinguished two components, which together explain 65.0% of the total variance. The first component explained 37.5% of the variance; Component 2 added another 27.6% (see
Table 6). The Kaiser-Meyer-Olkin (KMO) value of 0.799 indicates adequate sampling for the PCA, but two questions (3_20 and 3_24) were dropped because their uniqueness values exceeded 0.7.
The evidence suggests that in Component 1, there were some similarities between both sets of questions. This means that the steps and tools used to encourage student collaboration, self-study, and feedback (Set 3) are similar to the skills and competencies needed to find, use, and evaluate AI tools (Set 7). For this reason, it is possible to interpret component 1 as “Teaching and Learning with AI Included”, aiming to investigate the influence of AI on other teaching methods. In contrast, Component 2 consists primarily of AI-related competencies (see
Table 7).
This suggests that AI competencies can be viewed as a separate category of skills and competencies. The PCA between these sets reveals that the two components are conceptually connected but separate. Component 1 shows that AI can be an integral part of teaching, while Component 2 shows that AI-related competencies remain a distinct area of expertise that requires specific attention and development.
5.4. Comparing “Assessment” and “AI Literacy Competence”
The principal component analysis (PCA) of the “Assessment” (Set 4) and “AI Literacy Competence” (Set 7) question sets revealed that Set 4’s Kaiser-Meyer-Olkin (KMO) values were insufficient for PCA. Some items in Set 4 had KMO values below the recommended level of 0.5, which means that the sampling was not adequate for analyzing the “Assessment” question set paired with the “AI Literacy Competence” question set. For example, the KMO value of item 4_25, which measures its correlation with the other variables, was 0.311; the KMO of 4_26 was 0.388, and that of 4_27 was 0.479. All of these variables performed poorly and had high uniqueness scores, indicating that the extracted factor explained little of their variance (see
Table 8).
A single retained component explained at most 43.0% of the variance; the Set 7 (AI) items loaded on it, whereas the Set 4 items either loaded poorly or did not load at all. Therefore, in order to enhance the general factors, Set 4 items were removed from the analysis, and a few of the questions from Set 7 related to AI, which could stand as a separate factor, were kept. This implies that competencies in AI literacies may be most beneficial as a standalone group of skills, in contrast to assessment-related competencies.
5.5. Comparing “Empowering Learners” and “AI Literacy Competence”
For the analysis of “Empowering Learners” (Set 5) and “AI Literacy Competence” (Set 7), PCA revealed two components explaining 64.1% of the total variance. The first component explained 39.1%, and the second explained 24.9% (see
Table 9). The overall KMO value was 0.789, indicating good sampling adequacy. Questions 5_28 and 5_31 were removed due to low KMO values and high uniqueness.
Component 1 includes questions from both sets, suggesting a strong link between empowering learners (e.g., differentiation, personalizing learning) and using AI tools to support these practices. This component is named “Empowering Learners with AI”. Component 2 focuses on AI-specific competencies, showing that AI literacy competencies are integrated into traditional practices but also retain a distinct identity (see
Table 10).
This suggests that AI literacy competence may be most beneficial as a standalone group of skills, in contrast to “Empowering Learners” competencies.
5.6. Comparing “Enabling Learners’ Digital Competence” and “AI Literacy Competence”
The question sets “Enabling Learners’ Digital Competence” (Set 6) and “AI Literacy Competence” (Set 7) were analyzed through PCA, and a single component was found to explain 56.2% of the total variance. The KMO value was 0.828, indicating good sampling adequacy for PCA. Items 6_33, 6_34, 6_35, and 7_43 were removed due to uniqueness values greater than 0.8, indicating that the component did not adequately explain these items. Component 1 includes questions from both Set 6 and Set 7, indicating the potential for a comprehensive approach to enhancing learners’ digital competence and understanding of AI (see
Table 11).
It seems that the questions that focus on problem-solving skills, the appropriate use of technology, and the use of AI tools during teaching bear a strong resemblance to the competencies related to AI, including its proper use and assessment. This suggests a close relationship between learners’ technology competency and their proficiency in AI literacy competence, especially when it comes to using technology for responsible and creative problem-solving. These findings suggest a gradual shift towards integrating AI competency into digital learning skills and competencies, underscoring the need for these competencies to form part of a more comprehensive digital competence in education.
6. Discussion
The results of the study provide valuable insight into teachers’ self-assessment of AI competencies and the compliance of these competencies with the existing digital skills and competencies framework. The research shows that while AI literacy competence is a new and distinct set of competencies, it also integrates with already defined broader digital skills. The principal component analysis (PCA) was used to identify principal components that provided insight into how AI competencies fit into a broader digital skills framework.
The PCA revealed that AI competencies, including the critical evaluation of AI tools, ethical considerations, and the usage of AI technologies, constitute a distinct competency section that stands apart from other digital skills. Other frameworks, such as the Scale for the Assessment of Non-Experts’ AI Literacy [
6] and the Artificial Intelligence Literacy Scale (AILS) [
4], assert that while AI skills and competencies overlap with digital skills, they require separate attention due to their complexity and ethical implications. In addition, the analysis of question Set 6 (Enabling Learners’ Digital Competence) and question Set 7 (AI Literacy Competence) revealed a close relationship between AI and digital competencies aimed at improving student learning. The ability to incorporate AI tools into teaching and problem-solving activities demonstrates the essential integration of AI competencies into teachers’ professional activities. This suggests that educators should consider AI literacy competence, particularly in relation to the ethical and responsible use of AI, as a component of a broader system of digital competencies.
While most PCAs separated AI competencies as a separate component of digital skills, there was also some overlap between competencies. For example, the PCA of Set 2 (Digital Resources) and Set 7 (AI Literacy Competency) indicated that teachers who use digital resources in an organized manner demonstrate competencies to effectively use AI tools, which shows the mutual similarity of these skills and competencies. This is consistent with previous research that emphasizes the need for teachers to integrate AI competencies into their professional practice in the context of their existing digital skills [
3,
8]. The PCA of question Set 1 (Professional Engagement) and question Set 7 (AI Literacy Competence) likewise showed that AI literacy competence in the context of professional engagement is a distinct factor centered on ethical concerns and the critical evaluation of AI technologies. This finding encourages special attention to AI skills and competencies in teachers’ professional development, taking into account teachers’ already existing digital skills.
The results of this study emphasize the importance of developing customized professional development programs that specifically focus on the AI literacy competence of educators. Although AI literacy competence can complement digital skills in areas such as digital resource management, it requires a targeted approach to competence acquisition, especially in categories such as critical evaluation of AI tools and ethical considerations. Future research should focus on enhancing the self-assessment tool for AI competencies and exploring more effective integration of these competencies with existing digital skills frameworks.
The study suggests that teachers can benefit from a differentiated AI literacy competency professional development program based on their existing digital skills. For example, teachers who are already experienced users of digital resources may need less support in integrating AI tools into the teaching process, while those with less experience in using AI tools may need more comprehensive digital skills and AI literacy competence training. The results show that having developed digital skills does not necessarily equate to having developed AI literacy competence. Therefore, educators with different levels of knowledge and skills should receive adequate support to enhance their AI literacy competence.
In conclusion, this study emphasizes the importance of AI literacy competence for teachers and the need for continuous professional development in this area. By aligning AI literacy competencies with existing digital skills and competency frameworks, educators can be better prepared for the opportunities and challenges that the use of AI tools in education can create.
7. Conclusions
This study has provided valuable insights into the dimensions of teachers’ self-assessed AI competencies and how they align with existing digital literacy frameworks. Three key questions were addressed by conducting principal component analysis (PCA) on pilot study data:
RQ1. What principal components can be identified from a study that explain the variance in teachers’ self-assessment of AI competencies?
The principal component analysis (PCA) of digital skills and competencies and AI literacy competencies identified different components based on comparisons. For instance, relationships between Set 1 (Professional Engagement) and Set 7 (AI Literacy Competence), as well as Set 2 (Digital Resources) and Set 7, revealed two key components. In the comparison of Set 6 (Enabling Learners’ Digital Competence) and Set 7 (AI Literacy Competence), there were strong similarities between the two sets, indicating a close relationship between empowering learners’ digital and AI literacy competencies. PCA also underscored differences between general digital skills and competencies (such as managing digital resources or teaching and learning practices) and AI-related competencies (like critical appraisal of AI tools, working with real-world AI contexts, and considering ethics). This highlights both the integration of AI competencies with teachers’ digital skills and competencies and the uniqueness of AI literacy as a separate competency. Similar studies on the professional development of teachers in the field of digital literacy emphasize the importance of including AI skills in professional development programs; however, individual components of AI skills need to be learned separately so that AI skills training can be integrated effectively with teachers’ already existing competencies [
15,
37,
38]. The overlap between core AI literacy competencies and teachers’ existing digital skills and competencies largely explains their self-assessments.
RQ2. How do the identified components align with existing AI or digital skills and competencies frameworks?
The PCA analysis indicated that AI competencies align with existing digital skills and competence frameworks like Selfie for Teachers, suggesting potential for integration. However, AI literacy competencies—such as critically assessing AI tools, understanding ethical issues, and gaining basic AI literacy—often emerged as distinct components, underscoring the importance of specific attention to AI literacy [
37,
39]. While AI literacy competencies align with digital literacy in areas like resource management and instructional approaches, they also represent a separate area of expertise. Adding AI literacy competencies to frameworks like Selfie for Teachers, which already stress the importance of other digital skills and competencies, could better prepare teachers to use AI in the classroom, enabling them to incorporate AI tools effectively while also considering the ethical and practical issues these tools raise.
RQ3: What initial patterns emerge from the analysis?
The PCA analysis revealed that AI literacy competencies, such as critical evaluation, ethical considerations, and AI tool usage, often emerged as distinct components, emphasizing their unique nature within broader digital skills frameworks. Notable overlaps were observed between “Enabling Learners’ Digital Competence” (question Set 6) and “AI Literacy Competence” (Set 7), highlighting the integration of AI competencies into teaching-focused digital skills. Similarly, the connection between “Digital Resources” (question Set 2) and “AI Literacy Competence” (question Set 7) suggests that organizational digital skills complement the effective use of AI tools. However, areas like “critical evaluation” and “ethical concerns” emerged as separate dimensions, indicating the need for targeted professional development to address advanced AI literacy competencies.
The study provides initial insights into the key components of AI literacy and digital competencies among teachers. AI literacy competencies have the potential to complement digital skills and competencies and other digital skills and competencies frameworks, but they also require focused attention as a distinct set of competencies. Future research should expand on these findings to better understand AI literacy and its integration into existing digital skills and competence frameworks. This will help in developing targeted professional development programs that ensure that teachers are well-prepared to navigate the challenges and opportunities AI presents in education.
8. Limitations and Future Research
This study provides an important first step in understanding the dimensions of teachers’ AI literacy competencies and how they align with broader digital skills frameworks. By employing principal component analysis (PCA) to explore patterns in self-assessments, this research lays the groundwork for future studies to deepen our understanding of AI literacy in education. Possible directions for future research include the use of confirmatory techniques, such as factor analysis and structural equation modeling (SEM), to validate the dimensions identified in this study [
40]. Such methods would provide greater precision and clarity regarding the relationships between competencies, enabling the construction of robust AI literacy and digital competency models. Similarly, future research could employ correlation analysis [
41] to examine the connections between specific AI literacy competencies. This would help make teachers’ professional development more aligned with contemporary competency needs.
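As a simple illustration of such a correlation analysis, the following hypothetical sketch computes pairwise correlations between the AI literacy items; the “Q7_” column prefix and the choice of Spearman correlation for ordinal data are assumptions, not part of the present study:

```python
# Minimal sketch of a possible correlation analysis between AI literacy items;
# the "Q7_" column prefix and the use of Spearman correlation are assumptions.
import pandas as pd

def ai_item_correlations(responses: pd.DataFrame) -> pd.DataFrame:
    ai_items = [c for c in responses.columns if c.startswith("Q7_")]
    return responses[ai_items].corr(method="spearman").round(2)
```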
Expanding the research to include subgroup analyses would also yield valuable insights. For instance, examining how teaching experience, subject area, or prior digital skill levels influence AI literacy could help tailor professional development programs to meet the needs of diverse educators. Stratified random sampling [
42] in future studies could improve the representativeness of samples, ensuring that findings are generalizable across teachers with varying educational levels and technical backgrounds.
The practical application of this study lies in its potential to develop customized professional development programs for educators. By identifying areas of strength, such as foundational digital skills, and gaps, such as ethical considerations and critical evaluation of AI tools in AI literacy competencies, this research provides a blueprint for designing training initiatives that equip teachers to navigate the challenges and opportunities presented by AI in education. This study suggests that educators could be prepared to use AI tools effectively while addressing ethical and practical concerns by incorporating AI literacy competencies into existing digital skills frameworks.
Despite its limitations, this study is highly relevant and contributes significant insights to the emerging field of AI literacy in education. The exploratory nature of the research, while limited in scope, provides a strong foundation for future work. The use of PCA highlighted key distinctions and overlaps between AI literacy and digital literacy competencies, emphasizing the importance of treating AI literacy competencies as a distinct yet integrated element of digital literacy competence.