Article

Artificial Intelligence Literacy Competencies for Teachers Through Self-Assessment Tools

Faculty of Education Sciences and Psychology, University of Latvia, LV-1083 Riga, Latvia
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(23), 10386; https://doi.org/10.3390/su162310386
Submission received: 28 October 2024 / Revised: 18 November 2024 / Accepted: 25 November 2024 / Published: 27 November 2024
(This article belongs to the Special Issue Sustainable Digital Education: Innovations in Teaching and Learning)

Abstract

This study investigates the key components of teachers’ self-assessed artificial intelligence (AI) literacy competencies and how they align with existing digital literacy frameworks. The rapid development of AI technologies has highlighted the need for educators to develop AI-related skills and competencies in order to meaningfully integrate these technologies into their professional practice. A pilot study was conducted using a self-assessment questionnaire developed from frameworks such as DigiCompEdu and the Selfie for Teachers tool. The study aimed to explore the relationships between AI literacy competence and already defined digital skills and competencies through principal component analysis (PCA). The results revealed distinct components of AI literacy and digital competencies, highlighting competence overlaps in some areas, for example, digital resource management, while also confirming that AI literacy competencies form a separate and essential category. The findings show that although AI literacy aligns with other digital skills and competencies, focused attention is required to professionally develop AI-specific competencies. These insights are key elements of future research to refine and expand AI literacy tools for educators, providing targeted professional development programs to ensure that teachers are ready for the opportunities and challenges of AI in education.

1. Introduction

The rapid development and availability of artificial intelligence (AI) technologies to the general public have greatly impacted most areas of everyday life. The healthcare and finance industries have been affected the most, with AI technologies transforming patient care and reshaping risk analysis in finance. The transportation sector has experienced significant change with the implementation of intelligent traffic control systems aimed at reducing accidents and improving traffic efficiency. AI technologies are integrated into specialized computer programs as well as ordinary smart devices, allowing them to be used in daily activities and decision-making processes [1]. AI has also changed communication by enabling individuals to interact with technology through natural language processing applications such as virtual assistants and chatbots. These tools make interaction between humans and computers smoother, increasing the accessibility and user-friendliness of technology [2]. Within the field of education, there is an ongoing discussion about the capabilities of AI, recognizing its advantages, such as the ability to customize educational materials for individual students, but also noting its risks, such as the potential to violate ethical standards [3].
With the many advances made in AI technology, there is a growing recognition of the need for skills that ensure the responsible and ethical use of AI [3,4]; the issue of assessing AI competencies, as well as identifying areas for improvement, is becoming more pressing [4,5,6]. The significance of digital skills and competencies among educators is increasing, and proficiency in artificial intelligence is becoming a crucial component of modern education.
In 2017, the European Commission published the “DigCompEdu” framework [7], underlining the importance of digital skills and competencies in educators’ everyday work and the need to develop them continuously. The DigCompEdu framework defines a wide range of skills and competencies that educators must possess in order to use digital technology proficiently in their professional practice. The framework categorizes 22 competencies into six primary areas: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and enabling learners’ digital competence. The goal of this framework is to encourage the efficient use of digital technology in educational institutions, thereby improving the quality of teaching and learning outcomes. The framework also functions as a benchmark for the creation of other digital competency frameworks at both national and regional levels, as well as for the design of teacher training programs [7]. Based on the first version of DigComp, a new framework, DigComp 2.2 [8], was developed, which examines the possible influence of AI on information and data literacy in two main domains: (1) navigating, searching, and refining data, information, and digital content and (2) assessing data, information, and digital content [8,9]. In addition, UNESCO has released the ICT Competency Framework for Teachers [10], emphasizing the necessity of technical education for teachers to enhance their professional growth. This framework implies that training teachers in ICT involves professionalizing their role by including essential professional qualities to enhance their overall performance [10,11]. Although there are several frameworks for digital skill and competence development in education, none of them provide a complete answer on how to include AI competencies in already developed digital skill and competence frameworks [9].
In order to find out the best way for a teacher to carry out professional development and evaluate their AI competencies, three research questions were raised:
RQ1:
What principal components can be identified from a study that explain the variance in teachers’ self-assessment of AI competencies?
RQ2:
How do the identified components align with existing AI or digital skills and competencies frameworks?
RQ3:
What initial patterns emerge from the analysis?
The aim of this study is to explore and identify the principal components that explain the variance in teachers’ self-assessment of artificial intelligence (AI) competencies by developing and administering a self-assessment questionnaire. Rather than drawing definitive conclusions, the primary objective of this pilot project is to provide initial insights into the key dimensions of AI competencies for teachers. By focusing on the exploratory identification of these principal components, this research lays the groundwork for future studies that will refine and validate a comprehensive AI competency assessment tool for educators. This approach ensures that the initial outcomes derived from this pilot study offer substantial first-stage data that direct the ongoing development of more targeted professional development frameworks for educators in the digital technology age.

2. Key Evaluation Criteria for AI Literacy Tools

It is critical to differentiate between digital literacy and AI literacy in the ever-changing digital environment, since each includes a distinct set of competencies and knowledge necessary for effectively navigating modern technologies. Digital literacy encompasses the fundamental skill and competence set required to proficiently utilize digital devices, communication tools, and networks, including proficiency in using software applications, managing digital assets, and participating in online communication and collaboration [7,12,13]. AI literacy, in contrast, extends beyond these fundamental digital abilities, embracing a more profound comprehension of artificial intelligence technology and its practical uses. AI literacy includes not only the ability to use AI tools but also the ability to grasp fundamental AI principles, analyze AI systems in a discerning manner, and address ethical concerns associated with AI use [4,14,15]. While digital literacy provides individuals with the necessary abilities to operate in a digital environment, AI literacy enables them to effectively utilize and evaluate AI technologies, ensuring responsible and efficient integration into different areas of life and work.
Several existing digital and AI literacy assessment models evaluate teachers’ knowledge, skills, and competencies in this area, such as their ability to understand, use, evaluate, and handle AI technologies in an ethical way. Multiple sources [3,4,5,6,16] indicate that possessing the capability to effectively utilize artificial intelligence (AI) is a crucial competency in the education field. These competencies allow educators to leverage AI tools for the purpose of improving teaching methods, personalizing the learning process, and simplifying administrative duties (such as lesson planning). It is important to not only utilize AI tools but also possess the ability to critically appraise them [4,6,16]. This enables educators to evaluate the dependability and utility of AI systems, ensuring their equitable and effective utilization. The ethical considerations of AI use, which tackle issues like data privacy and potential biases in algorithms, play a crucial role in assisting students in utilizing and assessing the information that AI provides [4,6]. Similar to digital skills, effective use of AI does not necessitate in-depth knowledge of AI theory but rather the ability to use AI resources meaningfully [4,6,7].
The differentiation between digital literacy and AI literacy emphasizes the need for educators to acquire a comprehensive range of abilities in order to effectively incorporate AI technologies into their teaching methodologies. This study aims to explore and identify the principal components of AI literacy that explain the variance in teachers’ self-assessed competencies and how these components align with existing digital skill and competence frameworks. Therefore, the following potential AI literacy competency components are set for the creation of a self-assessment questionnaire:
1. Understanding the fundamentals of AI. It is essential for educators to understand the basic principles of AI, such as machine learning at a basic level, in order to critically evaluate when and how to incorporate AI tools into teaching methods, assessment and evaluation of students, or simplifying their day-to-day administrative duties [4,6].
2. Critical evaluation of AI. It is essential for educators to both recognize and critically evaluate AI technology in order to understand the opportunities it provides and ensure appropriate use in education [4,16,17].
3. Ethics. It is also essential to understand the ethical issues that may arise when using AI technologies, such as data security and AI’s potential biases or violations of ethical norms. Therefore, it is essential to critically evaluate not only the AI tool itself, looking for opportunities for use, but also the information it provides [4,8,14,15,18].
4. Usage. In order to assess AI literacy, it is necessary to include questions about the educator’s ability to use AI tools for teaching methods, personalizing the student experience, or performing administrative tasks [4,17,19].
5. Awareness. Considering that AI affects any sphere of life, it is essential to assess the understanding of the broader societal implications of AI, including its potential benefits and risks and how it can influence education and the future of work [4,17,18].
6. Communication. Those educators who already have basic AI competencies and have the desire to improve professionally need to assess their ability to communicate effectively about AI concepts and usage with students, colleagues, and parents and to collaborate with others in implementing AI solutions in education [8,17].
These components will guide the development of a self-assessment questionnaire for AI literacy-related questions and provide a foundation for future research on AI literacy among educators.

3. Evaluating AI Literacy: Analyzing Existing Assessment Scales

In order to answer the second research question—How do the identified components align with existing AI or digital skills and competencies frameworks, and what initial patterns emerge from analysis?—it is necessary to look at already existing AI literacy scales, evaluating their strengths and limitations.
The European Commission has created “Selfie for Teachers” [20], a widely used and officially approved digital skills and competencies self-assessment tool for educators. The European Commission designed this comprehensive self-reflection tool to assist educators in assessing and developing their digital skills and competencies. By allowing educators to compare their assessments against benchmarks, such as earlier self-assessments from specific time periods, the tool guides them in improving their digital skills and competencies. The “Selfie for Teachers” tool [20] is structured around the six DigCompEdu [7] competency areas: professional engagement, digital resources, teaching and learning, assessment, empowering learners, and promoting digital competence in learners. Some of the tool’s strengths include a user-friendly interface where teachers can self-reflect, access their assessment at any time, and compare their results over time or with a group (such as their peers in an educational institution), as well as global averages. Although the self-assessment tool was developed based on DigCompEdu [7] and DigComp 2.2 [8], it does not include in-depth questions about the use of AI in education. “Selfie for Teachers” provides personalized feedback and recommendations for each competency area, allowing teachers to identify their strengths and weaknesses and plan targeted professional development activities. The opportunity for an entire educational institution to work as a team on shared improvement goals also plays an important role; teachers can use the collected data to identify common needs and develop group learning activities, promoting a community of practice. Teachers receive a comprehensive report after completing the self-reflection, which includes graphic depictions of their competence levels and specific guidance on skill and competence enhancement.
The Selfie for Teachers tool [20] adequately addresses several facets of digital competence. However, it still lacks specific components and a comprehensive emphasis on AI-related proficiencies, which are becoming increasingly important in contemporary education [4,6,16]. The tool mostly checks for general digital skills and competencies, but it does not specifically test AI literacy competencies such as understanding AI algorithms, using AI in an ethical way, and critically evaluating AI technologies. The instrument is heavily reliant on self-assessment, which is prone to error; teachers’ self-perceptions may not consistently align with their real level of ability [21]. Effective use of the tool also presupposes additional key abilities on the part of instructors. The tool is offered in various languages, but its efficacy may vary depending on the user’s proficiency with digital tools and their ability to complete the questionnaire. Some teachers may require extra assistance and training to properly utilize and understand the tool’s feedback.
In summary, “Selfie for Teachers” is a valuable tool that helps teachers improve their digital skills and competencies through organized self-evaluation and individualized feedback. Integrating AI competence evaluation components and implementing additional support mechanisms for self-assessment and accessibility could promote ongoing professional learning and collaboration among instructors in an educational institution.
To serve as a more focused instrument exclusively for evaluating AI competencies, Wang et al. [4] created the “Artificial Intelligence Literacy Scale” (AILS). The scale evaluates users’ proficiency in utilizing AI technology through a comprehensive framework that includes four primary elements: comprehension, utilization, assessment, and ethical considerations. While the AILS provides an in-depth assessment of AI competencies, it does not investigate specific applications, which limits its usefulness in specialized educational contexts. Teachers who use advanced or specialized artificial intelligence tools may need additional assessment tools to accurately assess their competence and provide greater opportunities for growth in developing their skills and competencies [11,22]. The AILS also shows a high correlation between AI literacy and digital literacy skills and competencies, which can create confusion when distinguishing between the two [9].
Given that educators do not require a high level of digital skills and competencies to work with AI tools [4,6,7], it is also possible to consider the “Non-Expert AI Literacy Assessment Scale” (SNAIL) [6]. The scale’s purpose is to measure AI literacy among individuals who have no formal training in AI or computer science. The SNAIL tool identifies three main factors: technical understanding, critical evaluation, and practical application. It is similar to the AILS [4] but with fewer elements, making it potentially more accessible and understandable for educators. A significant advantage that distinguishes this tool from the ones mentioned above is its adaptability to people who are not experts in computer engineering, including artificial intelligence. However, because the questionnaire was developed by representatives from all sectors to assess AI literacy, it may not be specific enough to the education sector to evaluate teachers’ AI competencies. Despite some limitations related to the target audience, the SNAIL tool is an important step towards improving artificial intelligence skills and competencies in education, supporting targeted professional development and promoting the responsible use of AI technologies in the education sector.
To conclude, it is important for educators to enhance their AI competencies, as technology is increasingly influencing every aspect of life, including education. It is crucial for educators to not only understand and use AI tools but also critically evaluate their reliability and ethical aspects. An improved AI literacy assessment for education would require several refinements to increase its usefulness and support collaboration among educators. Detailed assessments of specific education-related AI applications need to be incorporated into the AILS tool [4]. The applicability of the SNAIL tool [6] could be enhanced by adapting it to the educational context and involving educators in its development. The Selfie for Teachers tool [20] could be updated to include AI-specific competencies, such as understanding AI algorithms, the ethical use of AI, and the critical evaluation of AI technologies, while taking care not to overlap with the assessment criteria already defined for digital skills and competencies. The self-assessment questionnaire should assess fundamental competencies such as comprehension of AI principles, practical application of AI tools, critical analysis of AI systems, ethical considerations in AI, incorporation of AI into teaching methods, awareness of AI’s societal impact, development of AI literacy, and communication and collaboration abilities. Continuously updating AI competence assessment tools could better support the pedagogical use of AI and improve teaching and learning processes in modern education.

4. Materials and Methods

This study aims to offer preliminary insights into the creation of a self-assessment questionnaire for teachers to evaluate their AI competencies. Given the growing importance of AI in education, it is critical to create a reliable tool to help educators self-assess and improve their AI-related competencies. This study focuses on understanding how teachers evaluate their AI competencies in the context of digital skills and competencies at different levels, laying the groundwork for improving the questionnaire, and directing future research with a larger sample size.
The self-assessment questionnaire was developed based on a review of existing literature on digital skills and AI literacy frameworks, including DigCompEdu [7], the revised DigComp 2.2 [8], and various AI literacy competencies [4,6,16]. The questionnaire was designed to operationalize the six AI literacy competency-related components identified in the literature review—understanding AI fundamentals, critical evaluation, ethics, usage, awareness, and communication—into measurable constructs that assess educators’ AI literacy comprehensively. Each question category maps onto one or more of these components to ensure alignment between theoretical constructs and practical application. Questions about knowledge and identification (Q38, Q39) of AI reflect the competency of understanding AI fundamentals by measuring educators’ familiarity with AI principles and their ability to recognize AI tools and concepts in educational contexts [4,6]. Questions on practical experience (Q40) align with the usage component, assessing how educators integrate AI tools into teaching and administrative tasks [4,17,19]. Critical evaluation questions (Q41, Q42) focus on educators’ ability to analyze the reliability, educational value, and ethical implications of AI tools, addressing a central aspect of AI literacy [4,16,17]. Similarly, questions about ethical considerations (Q42) directly measure educators’ awareness of issues like data security, potential biases, and fairness in AI use, consistent with the ethics component [4,18]. The inclusion of algorithmic thinking (Q43) evaluates deeper technical understanding and supports educators’ ability to communicate AI concepts effectively [4,6]. Moreover, questions about the digital divide (Q44) emphasize the awareness component, highlighting educators’ strategies to ensure equitable access to AI tools for diverse student populations [16,17]. Lastly, questions about cooperation, professional growth, and cross-curricular connections test teachers’ ability to work together, grow professionally, and use AI in situations involving different subjects (Q45, Q46, Q47), which is in line with the communication and awareness components [15,17].
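As a compact illustration of this mapping, the sketch below encodes the question-to-component alignment described above in Python; the component labels and question numbers come from the text, while the dictionary structure and helper function are only illustrative conventions, not part of the published instrument.

```python
# Illustrative mapping of the AI literacy items (Set 7) to the six competency
# components described above. The structure is a convention of this sketch,
# not part of the published questionnaire.
AI_LITERACY_COMPONENT_MAP = {
    "Q38": ["understanding_ai_fundamentals"],
    "Q39": ["understanding_ai_fundamentals"],
    "Q40": ["usage"],
    "Q41": ["critical_evaluation"],
    "Q42": ["critical_evaluation", "ethics"],
    "Q43": ["understanding_ai_fundamentals", "communication"],  # algorithmic thinking
    "Q44": ["awareness"],                                       # digital divide
    "Q45": ["communication", "awareness"],
    "Q46": ["communication", "awareness"],
    "Q47": ["communication", "awareness"],
}


def items_for_component(component: str) -> list[str]:
    """Return the questionnaire items mapped to a given competency component."""
    return [q for q, comps in AI_LITERACY_COMPONENT_MAP.items() if component in comps]


print(items_for_component("ethics"))  # -> ['Q42']
```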
The creation of the self-assessment questionnaire is based on the Selfie for Teachers self-assessment tool [20] for teachers’ digital skills and competencies, developed by the European Commission and based on the DigComp 2.2 framework [8], looking for new ways to include the AI competencies of educators in an existing tool. The Selfie for Teachers tool was chosen due to DigCompEdu’s [7] extensive description of digital skills and competencies. The new DigComp 2.2 [8] version also addresses the impact of AI on the digitalization of education. As a result, Selfie for Teachers is a good reference point for including a new facet in an already existing digital skills assessment tool [9]. A total of 47 questions were included in the questionnaire, divided into three main sections:
  • Demographic Information (5 questions). This section collects basic respondent information such as age, teaching experience, and subject field.
  • Competency Assessment from Selfie for Teachers (32 questions). This section uses Selfie for Teachers self-assessment questions divided into six main sections—professional engagement (question set 1), digital resources (question set 2), teaching and learning (question set 3), assessment (question set 4), empowering learners (question set 5), and enabling learners’ digital competence (question set 6). The questions were used to study the compatibility of the newly raised questions about the AI competencies needed by teachers with the existing framework and to answer the research questions.
  • AI Literacy Competence (10 questions, question set 7, see attached Appendix A). This section evaluates various competencies related to digital skills and competencies and AI literacy, such as understanding AI fundamentals, ethical considerations, practical usage, critical evaluation of AI technologies, algorithmic thinking, awareness of societal implications and the digital divide, professional development, and communication about AI technologies with students and other teachers. These questions were developed based on the criteria set forth in the literature analysis of the study.
Each AI competency-related question in the questionnaire is designed to measure teachers’ self-assessed proficiency levels across a six-point scale, ranging from basic awareness (level 1) to advanced application and leadership in digital and AI literacy competencies (level 6):
Level 1: Newcomer—The respondent is aware of the competency but has not applied it in practice.
Level 2: Explorer—The respondent has attempted or recognized the competence a few times but does not usually use it.
Level 3: Integrator—The respondent regularly includes competence in daily teaching practices without full critical evaluation.
Level 4: Expert—The respondent uses the competence daily and critically evaluates the appropriate tools or methods in context.
Level 5: Leader—The respondent shares experiences of daily competency usage with colleagues and adapts teaching practices based on evaluation.
Level 6: Pioneer—The respondent actively promotes and initiates changes in institutional practices regarding digital technologies.
Additionally, each question includes the option “Know nothing about this competence”, allowing participants to indicate a lack of knowledge about specific competencies. To enhance the clarity and consistency of the six-point proficiency scale, each question in the questionnaire included real-life examples relevant to its specific topic. This approach ensured that respondents could interpret the levels accurately and relate them to their professional experiences [23,24]. For instance, a critical evaluation question asked respondents to evaluate their ability to analyze and select AI tools based on outcomes, relevance to the learning program, and ethical considerations. A practical example provided for Level 4 (Expert) was as follows: “I analyze and select AI tools based on their impact on outcomes, relevance to the learning program, and ethical considerations (e.g., I choose a text-generating AI tool to teach students critical evaluation of historical facts)”. All questions included detailed descriptions, enabling respondents to align their self-assessment with specific scenarios and tasks. This ensured consistent understanding of the scale and improved the accuracy of the assessment. Detailed scaling provides a nuanced understanding of the respondents’ self-perceived proficiency levels [25]. The same scale is used in the “Selfie for Teachers” [20] tool to measure teachers’ digital skill and competence proficiency; it was also used in the study questionnaire for consistency purposes.
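For analysis purposes, the responses can be represented numerically; the minimal sketch below assumes the coding reported in the Results section (levels 1–6 plus 9 for “Know nothing about this competence”) and is only an illustrative convention, not part of the instrument itself.

```python
from enum import IntEnum


class ProficiencyLevel(IntEnum):
    """Numeric coding of the six-point proficiency scale plus the
    'Know nothing about this competence' option (coded 9 in the analysis)."""
    NEWCOMER = 1      # aware of the competency, not yet applied in practice
    EXPLORER = 2      # tried or recognized it a few times
    INTEGRATOR = 3    # uses it regularly, without full critical evaluation
    EXPERT = 4        # daily use with critical evaluation of tools and methods
    LEADER = 5        # shares experience with colleagues, adapts practice
    PIONEER = 6       # promotes and initiates institutional change
    NO_KNOWLEDGE = 9  # "Know nothing about this competence"


# Example: a teacher rating themselves as Expert on a critical evaluation item.
response = ProficiencyLevel.EXPERT
print(int(response))  # -> 4
```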
This study adhered to ethical standards for research involving human subjects. The institution’s ethics commission granted approval for the research prior to its conduct. The questionnaire was distributed online, and participants were informed that their participation was entirely voluntary. Informed consent was obtained by clearly explaining the purpose of the study, the anonymity of responses, and the inability to withdraw specific submissions after completion due to the anonymized nature of the data. No sensitive or personally identifiable information was collected, ensuring strict confidentiality and compliance with ethical guidelines for research.
To validate the questionnaire’s content, two practicing secondary school teachers—a geography teacher and a biology teacher—reviewed the new items for clarity and relevance. Their feedback helped refine the questions to ensure that they accurately reflect the intended constructs. The final questionnaire was administered electronically using Microsoft Forms, which was chosen for its ease of use and ability to format questions effectively. Participants received detailed instructions on how to assess their skills, with each section of the questionnaire providing a brief description to guide their responses.
After the validation process, a convenience sample [26,27] of 42 secondary school teachers participated in this pilot study, representing a range of subjects and teaching grades from 5 to 12. The sample includes teachers with varying levels of experience in using digital technologies in their teaching, from daily users to those who use technology less frequently. This diversity allows for preliminary exploration of competencies across different teaching contexts and technology use levels and for verification of whether the questionnaire is equally comprehensible to teachers of different levels of digital competence.
Given the limited sample size of this pilot project, the data analysis focused primarily on the internal consistency of the questionnaire items and on the underlying structure of the data. These analyses provided a preliminary understanding of the components present in the data and supported answering the research questions. Reliability analysis was conducted by evaluating the Cronbach’s alpha coefficient for each scale, which assessed the internal consistency and reliability of the various competencies. Cronbach’s alpha values equal to or greater than 0.70 are considered acceptable since they indicate a moderate level of internal consistency that is suitable for the early stages of research [28,29]. A principal component analysis (PCA) [30] was performed to evaluate the variability of the data and to determine whether it is merely random. PCA was chosen due to its ability to assess the complexity of the data and determine the extent to which the data may be further simplified. While eigenvalues greater than 1.0 (Kaiser’s criterion) are commonly used to determine the number of components to retain [31], this method can sometimes lead to over-extraction. To address this, parallel analysis was employed as a more robust approach, as it compares the observed eigenvalues to those generated from random data. Parallel analysis determined that two components should be retained for each comparison, providing a more reliable foundation for interpretation [32,33]. During the PCA, the Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy was used to determine whether the variables were suitable for factor analysis. The KMO value indicates the degree to which the variables contribute to the extracted factors. If a variable’s KMO value falls below the acceptable threshold of 0.5, it suggests that the variable does not significantly contribute to the factor structure [29], and such variables were removed from the analysis to improve the overall sampling adequacy and the interpretability of the factor structure. Additionally, items with high uniqueness values (above 0.6) were also considered for removal [34], as high uniqueness indicates that the extracted components do not adequately explain the variable. Eliminating such items refines the PCA, enabling a clearer identification of the key components, while the remaining variables contribute more meaningfully to the analysis. Data analysis (Cronbach’s alpha and PCA) was performed using the application “Jamovi” [35].
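The analysis itself was carried out in Jamovi; the sketch below is a minimal Python re-implementation of the same steps (Cronbach’s alpha, item-level KMO screening, parallel analysis, and PCA with uniqueness-based item review), assuming the responses are available as a pandas DataFrame named `responses` with one column per questionnaire item. The thresholds follow those stated above; everything else (library choice, variable names) is an assumption of this sketch rather than the authors’ actual script.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer, calculate_kmo

# `responses`: one row per teacher, one column per item (e.g., Set 1 + Set 7),
# coded on the 1-6 proficiency scale described above.

# 1. Internal consistency; alpha >= 0.70 treated as acceptable.
alpha, _ci = pg.cronbach_alpha(data=responses)

# 2. Sampling adequacy: drop items whose individual KMO value falls below 0.5.
kmo_per_item, kmo_total = calculate_kmo(responses)
retained = responses.loc[:, kmo_per_item >= 0.5]

# 3. Parallel analysis: keep components whose observed eigenvalues exceed the
#    mean eigenvalues obtained from random data of the same shape.
def parallel_analysis(data: pd.DataFrame, n_sims: int = 500, seed: int = 0) -> int:
    rng = np.random.default_rng(seed)
    observed = np.sort(np.linalg.eigvalsh(data.corr().to_numpy()))[::-1]
    random_eigs = np.empty((n_sims, data.shape[1]))
    for i in range(n_sims):
        sim = rng.standard_normal(data.shape)
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    return int(np.sum(observed > random_eigs.mean(axis=0)))

n_components = parallel_analysis(retained)

# 4. PCA with the retained number of components; flag high-uniqueness items.
pca = FactorAnalyzer(n_factors=n_components, method="principal", rotation="varimax")
pca.fit(retained)
loadings = pd.DataFrame(pca.loadings_, index=retained.columns)
uniqueness = pd.Series(pca.get_uniquenesses(), index=retained.columns)
items_to_review = uniqueness[uniqueness > 0.6].index  # candidates for removal
```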
The goal is to identify the key components that account for the greatest variance in teachers’ AI and digital skills and competencies. Considering that the main task at this stage of the research is to test the developed questionnaire and to check whether its questions work equally well for teachers in different competence groups, a relatively small sample was used, which will be expanded in the next stages of the research. The limited sample size of this study is intended to produce first insights and ideas, rather than conclusive answers, for subsequent testing. The findings will facilitate the enhancement of the questionnaire and enable the preparation of subsequent research with more extensive sample sizes.

5. Results

This study aims to explore the basic dimensions of teachers’ self-assessed AI abilities. This study also seeks to understand how these competencies align with the existing digital skills and competence frameworks. The study aims to develop a self-assessment questionnaire and analyze the results using statistical methods to identify key elements in teacher competence. These three main questions guide the research:
RQ1: What principal components can be identified from a study that explain the variance in teachers’ self-assessment of AI competencies?
RQ2: How do the identified components align with existing AI or digital skills and competencies frameworks?
RQ3: What initial patterns emerge from the analysis?
This section presents the results of the analysis, starting with evaluating the internal consistency of the questionnaire through Cronbach’s alpha, followed by identifying the main components using principal component analysis (PCA) to answer the research question.
In order to evaluate the reliability of the questionnaire, Cronbach’s alpha coefficient was calculated both for the existing questions on digital competencies from Selfie for Teachers [20] and for the new AI literacy-related questions in order to establish the internal consistency and reliability of the instrument. A reliability analysis of the 32 questions taken from the Selfie for Teachers framework [20] yielded a Cronbach’s alpha of 0.923, indicating strong overall internal consistency. In addition, Cronbach’s alpha was 0.872 for the 10 questions related to AI, demonstrating an acceptable level of reliability. When all 42 items (the digital competency questions combined with the AI-related questions) were analyzed together, the alpha coefficient rose above 0.937, showing that the reliability of the questionnaire held across the full range of questions. The removal of any single item did not significantly alter the internal consistency, which ranged from 0.917 to 0.938 for the existing digital competency questions and from 0.848 to 0.872 for the AI-related questions. This implies that all items contribute consistently to the scales, and there is no evidence to exclude any item. All alpha coefficients met the cutoff point of 0.70 regarded as an acceptable level of internal consistency [28], thereby supporting the instrument’s ability to measure both the digital skills and competencies and the AI competencies of teachers.
Determining whether to add the AI-related competencies (question set 7, see Appendix A) to the current digital skills and competencies (question sets 1–6) or to address them separately is important for answering the research questions. To achieve this, principal component analysis (PCA) was employed to examine the structures embedded in the data and to determine how the identified skills and competencies corresponded with the subsets of digital skills and competencies that encompass AI competencies. PCA was performed on each of the digital skill and competence sets (e.g., professional engagement, digital resources) together with the relevant AI questions in order to assess whether the new AI items integrated into the existing digital competency factors or formed new components. This analysis aims to ascertain whether the AI competencies align with the current digital skill and competence frameworks or whether they stand alone and require separate treatment.
To provide an overview of teacher self-assessments across AI literacy and professional engagement topics, descriptive statistics were calculated for each question, including mean, median, standard deviation, and range (minimum–maximum) [36]. The results (see Table 1) revealed variability in proficiency levels, with means ranging from 2.3 (Q7_47) to 4.86 (Q1_14) and standard deviations ranging from 1.17 (Q1_8) to 2.98 (Q1_14). The scale, ranging from 1 (newcomer) to 6 (pioneer), with an additional option of 9 (no knowledge), captured a broad spectrum of responses, reflecting teachers’ diverse levels of proficiency and familiarity with the competencies.
Questions with high variation, such as Q1_14 (SD = 2.98), reflect a wide range of skill levels, likely due to the mix of beginner-level responses (e.g., “Newcomer”, coded as “1”) and the “No Knowledge” option (coded as “9”). In contrast, questions like Q1_8 (SD = 1.17) showed greater consistency, suggesting more uniform self-evaluation in the professional engagement competency. Low mean scores for Q7_42 (mean = 2.38, SD = 2.16) and Q7_44 (mean = 2.64, SD = 2.09) highlight weaker areas, particularly in AI ethics and cross-disciplinary applications, indicating a need for further professional development or clearer integration strategies.
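The descriptive statistics above can be reproduced with a few lines of pandas; the sketch below also illustrates why mixing the “No Knowledge” code (9) with beginner-level answers inflates the standard deviation of items such as Q1_14. The `responses` DataFrame and its column names are assumptions of this sketch.

```python
import pandas as pd

# `responses`: one row per teacher, one column per item, coded 1-6 or 9.
summary = responses.agg(["mean", "median", "std", "min", "max"]).T

# Toy illustration of how the "No Knowledge" code (9) inflates dispersion:
mixed = pd.Series([1, 1, 2, 5, 6, 9, 9])       # beginners mixed with code 9
clustered = pd.Series([3, 4, 3, 4, 3, 4, 3])   # tightly clustered self-ratings
print(round(mixed.std(), 2), round(clustered.std(), 2))  # roughly 3.5 vs 0.5
```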

5.1. Comparing “Professional Engagement” and “AI Literacy Competence”

A principal component analysis (PCA) was conducted to explore the relationships between the “Professional Engagement” (Set 1) and “AI Literacy Competence” (Set 7) question sets. The Kaiser-Meyer-Olkin (KMO) value for all included questions was 0.741, which indicates adequate sampling adequacy; only one question (1_9) was removed due to a low KMO value. Parallel analysis indicated two components, which together explained 58.0% of the data’s variance (see Table 2).
The first component explained 33.6% of the variance and included questions from both Set 1 and Set 7, suggesting an overlap between general professional engagement competencies and AI-related competencies. The second component, explaining 24.4% of the variance, was loaded primarily by Set 7 questions, indicating that AI literacy competence forms a distinct factor from the general digital skills and competencies measured in Set 1 (see Table 3).
Although there is some integration of AI literacy competencies with general professional engagement, these findings underscore the need to treat AI literacy competencies as a distinct set of competencies.

5.2. Comparing “Digital Resources” and “AI Literacy Competence”

For the “Digital Resources” (Set 2) and “AI Literacy Competence” (Set 7) question sets, PCA identified two components that explained 63.0% of the total variance. Component 1 explained 37.7%, while Component 2 explained 25.3% (see Table 4). The KMO value of 0.790 indicated the data’s suitability for PCA, but questions 2_15 and 2_17 were excluded due to low KMO values and high uniqueness.
Component 1, which grouped items from both digital resource and AI competency sets, involved creating, organizing, and using digital resources, as well as recognizing ethical aspects and critical thinking about AI. This suggests that teachers who manage digital resources effectively are also adept at using AI tools, indicating a blending of competencies. This component can be labeled “Integration and Management of Digital and AI Resources”. Component 2, focused mainly on AI-specific competencies, involved identifying AI, critically evaluating it, and sharing AI practices, showing that AI literacy remains distinct from broader digital skills (see Table 5).
This highlights the need for focused development of AI competencies alongside existing digital skills and competencies. The PCA results on Set 2 and Set 7 suggest that while digital resource management and AI competencies overlap, AI literacy competencies still require targeted professional development, and the two areas complement each other in educational practice.

5.3. Comparing “Teaching and Learning” and “AI Literacy Competence”

The PCA carried out on the “Teaching and Learning” (Set 3) and “AI Literacy Competence” (Set 7) question sets distinguished two components, which together explained 65.0% of the total variance. The first component explained 37.5% of the variance; the second added another 27.6% (see Table 6). The Kaiser-Meyer-Olkin (KMO) value of 0.799 indicated an appropriate sample for PCA, but two questions (3_20 and 3_24) were dropped because their uniqueness values exceeded 0.7.
In Component 1, there were notable similarities between both sets of questions. This means that the practices and tools used to encourage student collaboration, self-study, and feedback (Set 3) are related to the skills and competencies needed to find, use, and evaluate AI tools (Set 7). For this reason, Component 1 can be interpreted as “Teaching and Learning with AI Included”, reflecting the influence of AI on teaching methods. In contrast, Component 2 consists primarily of AI-related competencies (see Table 7).
This suggests that we can look at AI competencies as a separate category of skills and competencies. The PCA analysis between these sets reveals that both components are conceptually connected but separate. Component 1 shows that AI can be a big part of teaching, while Component 2 shows that AI-related competencies are still their own area of expertise that needs special attention and growth.

5.4. Comparing “Assessment” and “AI Literacy Competence”

The principal component analysis (PCA) of the “Assessment” (Set 4) and “AI Literacy Competence” (Set 7) question sets revealed that the Kaiser-Meyer-Olkin (KMO) values for Set 4 were not adequate for PCA. Some items in Set 4 had KMO values below the recommended level of 0.5, which means that sampling adequacy was insufficient for analyzing the “Assessment” question set paired with the “AI Literacy Competence” question set. For example, the KMO value of item 4_25, which measures its extent of correlation with the other variables, was 0.311; the KMO of 4_26 was 0.388, and that of 4_27 was 0.479. All of these variables performed poorly and had high uniqueness scores, indicating that the extracted factor explained little of their variance (see Table 8).
When a single component was extracted, it explained a maximum of 43.0% of the variance, with the AI-related Set 7 items loading on it, whereas the Set 4 items either loaded poorly or did not load at all. Therefore, in order to improve the overall factor structure, the Set 4 items were removed from the analysis, and the AI-related questions from Set 7, which could stand as a separate factor, were kept. This implies that AI literacy competencies may be most useful as a standalone group of skills, in contrast to assessment-related competencies.

5.5. Comparing “Empowering Learners” and “AI Literacy Competence”

For the analysis of “Empowering Learners” (Set 5) and “AI Literacy Competence” (Set 7), PCA revealed two components explaining 64.1% of the total variance. The first component explained 39.1%, and the second explained 24.9% (see Table 9). The overall KMO value was 0.789, indicating good sampling adequacy. Questions 5_28 and 5_31 were removed due to low KMO values and high uniqueness.
Component 1 includes questions from both sets, suggesting a strong link between empowering learners (e.g., differentiation, personalizing learning) and using AI tools to support these practices. This component is named “Empowering Learners with AI”. Component 2 focuses on AI-specific competencies, showing that AI literacy competencies are integrated into traditional practices but also retain a distinct identity (see Table 10).
This suggests that AI literacy competence may be most beneficial as a standalone group of skills, in contrast to “Empowering Learners” competencies.

5.6. Comparing “Enabling Learners’ Digital Competence” and “AI Literacy Competence”

The “Enabling Learners’ Digital Competence” (Set 6) and “AI Literacy Competence” (Set 7) question sets were analyzed through PCA, and a single component was found to explain 56.2% of the total variance. The KMO value was 0.828, indicating good sampling adequacy for PCA. Items 6_33, 6_34, 6_35, and 7_43 were removed from the analysis due to uniqueness values greater than 0.8, indicating that the component did not adequately explain these items. Component 1 includes questions from both Set 6 and Set 7, indicating the potential for a comprehensive approach to enhancing learners’ digital competence and understanding of AI (see Table 11).
It seems that the questions that focus on problem-solving skills, the appropriate use of technology, and the use of AI tools during teaching bear a strong resemblance to the competencies related to AI, including its proper use and assessment. This suggests a close relationship between learners’ technology competency and their proficiency in AI literacy competence, especially when it comes to using technology for responsible and creative problem-solving. These findings suggest a gradual shift towards integrating AI competency into digital learning skills and competencies, underscoring the need for these competencies to form part of a more comprehensive digital competence in education.

6. Discussion

The results of the study provide valuable insight into teachers’ self-assessment of AI competencies and the compliance of these competencies with the existing digital skills and competencies framework. The research shows that while AI literacy competence is a new and distinct set of competencies, it also integrates with already defined broader digital skills. The principal component analysis (PCA) was used to identify principal components that provided insight into how AI competencies fit into a broader digital skills framework.
The PCA revealed that AI competencies, including the critical evaluation of AI tools, ethical considerations, and the usage of AI technologies, constitute a distinct competency section that stands apart from other digital skills. Other frameworks, such as the Scale for the Assessment of Non-Experts’ AI Literacy [6] and the Artificial Intelligence Literacy Scale (AILS) [4], assert that while AI skills and competencies overlap with digital skills, they require separate attention due to their complexity and ethical implications. In addition, the analysis of question Set 6 (Enabling Learners’ Digital Competence) and question Set 7 (AI Literacy Competence) revealed a close relationship between AI and digital competencies aimed at improving student learning. The ability to incorporate AI tools into teaching and problem-solving activities demonstrates the essential integration of AI competencies into teachers’ professional activities. This suggests that educators should consider AI literacy competence, particularly in relation to the ethical and responsible use of AI, as a component of a broader system of digital competencies.
While most PCAs separated AI competencies as a distinct component of digital skills, there was also some overlap between competencies. For example, the PCA of Set 2 (Digital Resources) and Set 7 (AI Literacy Competence) indicated that teachers who use digital resources in an organized manner also demonstrate competencies in effectively using AI tools, which shows the mutual similarity of these skills and competencies. This is consistent with previous research that emphasizes the need for teachers to integrate AI competencies into their professional practice in the context of their existing digital skills [3,8]. Similarly, the PCA of question Set 1 (Professional Engagement) and question Set 7 (AI Literacy Competence) showed that AI literacy competence in the context of professional engagement is a unique factor that focuses on ethical concerns and critical evaluation of AI technologies. This finding encourages special attention to AI skills and competencies in teachers’ professional development, considering teachers’ already existing digital skills.
The results of this study emphasize the importance of developing customized professional development programs that specifically focus on the AI literacy competence of educators. Although AI literacy competence can complement digital skills in areas such as digital resource management, it requires a targeted approach to competence acquisition, especially in categories such as critical evaluation of AI tools and ethical considerations. Future research should focus on enhancing the self-assessment tool for AI competencies and exploring more effective integration of these competencies with existing digital skills frameworks.
The study suggests that teachers can benefit from a differentiated AI literacy competency professional development program based on their existing digital skills. For example, teachers who are already experienced users of digital resources may need less support in integrating AI tools into the teaching process, while those with less experience in using AI tools may need more comprehensive digital skills and AI literacy competence training. The results show that having developed digital skills does not necessarily equate to having developed AI literacy competence. Therefore, educators with different levels of knowledge and skills should receive adequate support to enhance their AI literacy competence.
In conclusion, this study emphasizes the importance of AI literacy competence for teachers and the need for continuous professional development in this area. By aligning AI literacy competencies with existing digital skills and competency frameworks, educators can be better prepared for the opportunities and challenges that the use of AI tools in education can create.

7. Conclusions

This study has provided valuable insights into the dimensions of teachers’ self-assessed AI competencies and how they align with existing digital literacy frameworks. Two key questions were addressed by conducting principal component analysis (PCA) on a pilot study:
RQ1. What principal components can be identified from a study that explain the variance in teachers’ self-assessment of AI competencies?
The principal component analysis (PCA) of digital skills and competencies and AI literacy competencies identified different components depending on the comparison. For instance, the relationships between Set 1 (Professional Engagement) and Set 7 (AI Literacy Competence), as well as Set 2 (Digital Resources) and Set 7, revealed two key components. In the comparison of Set 6 (Enabling Learners’ Digital Competence) and Set 7 (AI Literacy Competence), there were strong similarities between the two sets, indicating a close relationship between learners’ digital and AI literacy competencies. PCA also underscored differences between general digital skills and competencies (such as managing digital resources or teaching and learning practices) and AI-related competencies (like critical appraisal of AI tools, working with real-world AI contexts, and considering ethics). This highlights both the integration of AI competencies with teachers’ digital skills and competencies and the uniqueness of AI literacy as a separate competency. Similar studies on the professional development of teachers in the field of digital literacy emphasize the importance of including AI skills in professional development programs; however, individual components of AI competence need to be addressed separately so that AI skills training can build on teachers’ already existing competencies [15,37,38]. The overlap between core AI literacy competencies and teachers’ existing digital skills and competencies largely explains their self-assessments.
RQ2. How do the identified components align with existing AI or digital skills and competencies frameworks?
The PCA analysis indicated that AI competencies align with existing digital skills and competence frameworks like Selfie for Teachers, suggesting potential for integration. However, AI literacy competencies—such as critically assessing AI tools, understanding ethical issues, and gaining basic AI literacy—often emerged as distinct components, underscoring the importance of specific attention to AI literacy [37,39]. While AI literacy competencies align with digital literacy in areas like resource management and instructional approaches, they also represent a separate area of expertise. Adding AI literacy competencies to frameworks like Selfie for Teachers, which already stress the importance of other digital skills and competencies, could better prepare teachers to use AI in the classroom, letting them effectively incorporate AI tools while also thinking about the moral and practical issues they raise.
RQ3: What initial patterns emerge from the analysis?
The PCA analysis revealed that AI literacy competencies, such as critical evaluation, ethical considerations, and AI tool usage, often emerged as distinct components, emphasizing their unique nature within broader digital skills frameworks. Notable overlaps were observed between “Enabling Learners’ Digital Competence” (question Set 6) and “AI Literacy Competence” (Set 7), highlighting the integration of AI competencies into teaching-focused digital skills. Similarly, the connection between “Digital Resources” (question Set 2) and “AI Literacy Competence” (question Set 7) suggests that organizational digital skills complement the effective use of AI tools. However, areas like “critical evaluation” and “ethical concerns” emerged as separate dimensions, indicating the need for targeted professional development to address advanced AI literacy competencies.
The study provides initial insights into the key components of AI literacy and digital competencies among teachers. AI literacy competencies have the potential to complement existing digital skills and competence frameworks, but they also require focused attention as a distinct set of competencies. Future research should expand on these findings to better understand AI literacy and its integration into existing digital skills and competence frameworks. This will help in developing targeted professional development programs that ensure that teachers are well-prepared to navigate the challenges and opportunities AI presents in education.

8. Limitations and Future Research

This study provides an important first step in understanding the dimensions of teachers’ AI literacy competencies and how they align with broader digital skills frameworks. By employing principal component analysis (PCA) to explore patterns in self-assessments, this research lays the groundwork for future studies to deepen our understanding of AI literacy in education. Possible directions for future research include the use of confirmatory techniques, such as factor analysis and structural equation modeling (SEM), to validate the dimensions identified in this study [40]. Such methods would provide greater precision and clarity regarding the relationships between competencies, enabling the construction of robust AI literacy and digital competency models. Similarly, future research could employ correlation analysis [41] to examine the connections between specific AI literacy competencies. This would help make teachers’ professional development more aligned with contemporary competency needs.
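As a minimal sketch of the correlation analysis suggested here, assuming the pilot responses are available as a pandas DataFrame `responses` with the Set 7 items named with a “Q7_” prefix, Spearman correlations between AI literacy items could be inspected as follows; this only illustrates the proposed direction and was not part of the analysis performed in this study.

```python
import pandas as pd

# Assumed: Set 7 (AI literacy) items coded 1-6, with the "No Knowledge"
# code (9) treated as missing for correlation purposes.
ai_items = [c for c in responses.columns if c.startswith("Q7_")]
ai = responses[ai_items].replace(9, float("nan"))

# Spearman is a reasonable choice for ordinal self-assessment data;
# pairwise deletion handles the missing values.
corr = ai.corr(method="spearman")
print(corr.round(2))
```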
Expanding the research to include subgroup analyses would also yield valuable insights. For instance, examining how teaching experience, subject area, or prior digital skill levels influence AI literacy could help tailor professional development programs to meet the needs of diverse educators. Stratified random sampling [42] in future studies could improve the representativeness of samples, ensuring that findings are generalizable across teachers with varying educational levels and technical backgrounds.
The practical application of this study lies in its potential to develop customized professional development programs for educators. By identifying areas of strength, such as foundational digital skills, and gaps, such as ethical considerations and critical evaluation of AI tools in AI literacy competencies, this research provides a blueprint for designing training initiatives that equip teachers to navigate the challenges and opportunities presented by AI in education. This study suggests that educators could be prepared to use AI tools effectively while addressing ethical and practical concerns by incorporating AI literacy competencies into existing digital skills frameworks.
Despite its limitations, this study is highly relevant and contributes significant insights to the emerging field of AI literacy in education. The exploratory nature of the research, while limited in scope, provides a strong foundation for future work. The use of PCA highlighted key distinctions and overlaps between AI literacy and digital literacy competencies, emphasizing the importance of treating AI literacy competencies as a distinct yet integrated element of digital literacy competence.

Author Contributions

Conceptualization, I.T. and L.D.; methodology, I.T. and L.D.; software I.T.; validation, I.T. and L.D.; formal analysis, I.T.; investigation, I.T.; resources, I.T.; data curation, I.T.; writing—original draft preparation, I.T.; writing—review and editing, I.T. and L.D.; visualization, I.T.; supervision, L.D.; project administration, L.D.; funding acquisition, L.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University of Latvia, Faculty of Educational Sciences and Psychology, Scientific Institute of Pedagogy.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of the University of Latvia (protocol code 71-43/142, 29 October 2024) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used to prepare the self-assessment questionnaire are available at https://education.ec.europa.eu/selfie-for-teachers, accessed on 16 May 2024.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

  • Self-assessment questions about teachers’ AI literacy competence
Note: This appendix contains the questions about teachers’ AI literacy competence (question set 7). The other questions in the self-assessment questionnaire are the same as in the Selfie for Teachers self-assessment framework.
The digital competence of teachers also includes skills in the use of artificial intelligence. As digital technologies and artificial intelligence tools develop, teachers need to understand the benefits and challenges of using them in the learning process.
The statements for each question are arranged according to how actively artificial intelligence technologies are used in the learning process or for self-improvement.
38. Knowledge of the existence and potential of AI tools, an understanding of how these tools could be integrated into learning, and a conceptual understanding of the criteria for the selection and use of AI resources.
  • I am aware that there are different AI tools that can be used in education across subject areas, both for planning lessons and for involving the tools in lessons (e.g., ChatGPT, Midjourney, Copilot, Canva, Gemini).
  • I have tried using an AI tool at least once to plan a lesson or to engage AI tools in the learning process in support of students’ learning (for example, asking it to prepare lesson ideas, make a presentation, or check facts).
  • I use AI tools to support students’ learning process and their different learning needs and abilities, navigating the diverse AI tools with a basic understanding of their underlying AI mechanisms (e.g., offering students with dyslexia text-to-speech tools or asking students to check their answers with the help of an AI tool).
  • I analyze and select AI tools based on their impact on outcomes, relevance to the learning program, and ethical considerations (e.g., I choose a text-generating AI tool to teach students critical evaluation of historical facts).
  • I evaluate the effectiveness of AI tools in my practice and am aware of how they support the learning process of my students (e.g., assessing whether the use of AI has improved motivation and/or understanding in a science subject).
  • I encourage and support my colleagues to responsibly integrate AI tools into other subjects, and I share resources that can improve their understanding and use of AI in education (e.g., proposing a collaborative platform in which to share AI tools to tailor learning activities to individual student needs).
  • Do not know about this competency.
39. Identification
Ability to identify and understand the basic principles of using AI
  • I am aware that AI technologies are constantly evolving; I am aware of the latest developments and their potential impact on education (e.g., reading about the latest AI language processing tools that could help with language learning).
  • I have tried experimenting with different AI tools to innovate, improve teaching methods, and boost student motivation (e.g., using AI tools to promote creativity and writing skills).
  • I use critical thinking to assess the credibility of AI-generated content and ensure that it is appropriate for educational use (for example, checking the sources from which the content of the AI-generated historical description was obtained).
  • I analyze and select AI content that meets ethical standards and promotes an inclusive and fair environment (e.g., ensuring that an assessment tool based on AI principles is as objective as possible).
  • I evaluate my students’ interactions with AI technologies and adjust my teaching methods to address identified problems (for example, observing changes in how students solve problems with the help of AI).
  • I propose and support workshops and seminars to promote AI literacy among colleagues and students, creating a school environment that is aware of the benefits and challenges of AI in education (e.g., conducting a workshop on the ethics of AI classroom use).
  • Do not know about this competency
40. Practical experience
Hands-on experience using AI tools, including the ability to use AI tools, integrate multiple AI resources for different purposes, and improve one’s personal skills.
  • I am aware of my level of expertise in working with AI tools and am actively looking for opportunities to improve my skills (e.g., participating in professional development courses on AI education tools).
  • I have tried to integrate various AI technologies into my teaching process, learning from both successes and failures (for example, experimenting with different AI tools to determine which ones work best for my teaching style).
  • I use AI tools not only for their novelty but also as effective solutions for specific learning goals and tasks (for example, consistently using AI for personalized learning to adapt to each student’s tempo and style).
  • I analyze and select AI tools not only based on their popularity but also by carefully evaluating their functionality and impact on student learning (e.g., selecting AI tools after evaluating their success in similar scenarios).
  • I evaluate how I use AI tools to ensure that they are used effectively and meaningfully to support the student learning process and not replace essential human aspects (for example, considering how AI has changed the learner’s problem-solving skills).
  • I propose and support my colleagues in adopting effective strategies for the practical use of AI tools in the classroom, based on my experience and understanding of these tools (e.g., organizing experience-sharing sessions where teachers can share AI lesson plans and discuss how AI tools have improved student learning outcomes).
  • Do not know about this competency
41. Evaluation
The ability to critically evaluate the importance of AI in the learning process, not only to ensure the effective use of AI but also to understand and reflect on the wider implications of integrating AI into the learning process.
  • I recognize the need to critically evaluate AI tools, not only by evaluating their basic functionality but also by taking into account the algorithms they use, the reliability of the data, and the ethical implications (for example, realizing that an AI reading assistant’s recommendations depend on its training data, which may affect its neutrality).
  • I have tried to evaluate the results provided by the AI tool to understand their reliability and accuracy (e.g., comparing AI-generated ratings with manual ratings to identify discrepancies).
  • I use certain criteria to systematically evaluate AI tools before integrating them into practice, focusing on their educational value, user experience, and impact on students’ data privacy (e.g., using criteria to evaluate the accuracy of feedback provided by an AI-based writing aid).
  • I analyze and select AI tools not only based on their performance but also on their relevance to learning goals and ethical standards (for example, choosing an AI tool for language learning that meets different learning needs).
  • I evaluate the effectiveness of AI tools by gathering feedback from students, reviewing performance data, and considering broader learning opportunities (e.g., assessing whether an AI tool improves learning outcomes for a variety of student groups).
  • I propose and support colleagues in offering methods to develop evaluation skills for AI tools, emphasizing the importance of a critical approach in the selection and use of these tools (for example, a professional development workshop that focuses on evaluating the ethical considerations of AI in education).
  • Do not know about this competency
42. Ethical considerations
Ethical aspects of AI technologies, promoting a balanced approach that considers both the benefits of integrating AI and the potential ethical issues.
  • I am aware of the ethical implications of using AI tools in the classroom, including privacy concerns, data security, and potential bias (e.g., understanding the importance of consent when using student data with AI tools).
  • I have tried to engage in discussions and professional development programs about the ethical use of AI in education (for example, participating in a workshop on ethical AI practices and their importance in protecting student privacy).
  • I use guidelines and best practices for the ethical use of artificial intelligence to make conscious decisions about the integration of AI tools in the learning process (for example, following the EU guidelines on the ethical use of AI when choosing AI tools for classroom use).
  • I analyze and select AI tools not only for their benefits in the learning process but also for their ethical standards, ensuring that they promote equality and do not discriminate (e.g., checking an AI reading tool for bias in language and representation).
  • I evaluate the ethical implications of using artificial intelligence in my practice, taking into account both the immediate impact on students and their achievements and the wider impact on society (for example, evaluating the long-term impact of relying on AI for personalized learning).
  • I propose and support colleagues and IT administrators in the development of strategies and policies that prioritize ethical considerations in the implementation and use of AI technologies in education (for example, recommending the development of school-wide policies on the ethical use of AI, which include regular review and updating based on new best practices).
  • Do not know about this competency
43. Algorithmic thinking
Algorithmic thinking skills and how these skills contribute to general AI literacy. The purpose is to promote a proactive approach to developing these competencies, recognizing their value in increasingly digital and AI-integrated educational environments.
  • I am aware of the principles of algorithmic thinking, including problem decomposition, pattern recognition, abstraction, and algorithm development (for example, understanding how breaking down a complex problem into smaller parts can help develop solutions).
  • I have tried taking coding and programming courses to improve my algorithmic thinking skills (e.g., taking a Python coding workshop to understand the logic of algorithmic thinking).
  • I use algorithmic thinking strategies not only in computer science classes but also in lessons in other subjects to promote students’ problem-solving skills and creativity (for example, applying pattern recognition in mathematics to solve complex problems).
  • I analyze and select digital tools and resources that support the development of algorithmic thinking skills in students (for example, choosing a coding platform that is age-appropriate and promotes logical thinking).
  • I evaluate how algorithmic thinking improves my ability to understand and apply AI concepts while being aware of its impact on my professional development and teaching practice (for example, considering how learning about algorithms has improved my ability to explain AI actions to students).
  • I propose and support initiatives or programs that promote algorithmic thinking and coding skills development among colleagues and students, recognizing the importance of these skills in promoting understanding of AI (for example, organizing the school-wide event “Hour of Code” to introduce everyone to the basics of coding).
  • Do not know about this competency
44. The digital divide
The role of the educator in bridging the digital divide, emphasizing the importance of physical access to AI technologies as a cornerstone for developing AI literacy and ensuring that all students have the opportunity to benefit from an AI-enhanced learning experience.
  • I recognize the importance of physical access to AI technologies for both myself and my students, realizing that equal access is essential to using these tools in education (e.g., recognizing that all students must have access to AI solutions to learn).
  • I have tried to ensure that all students have the opportunity to try out AI tools by actively searching for solutions to overcome barriers to access (for example, allowing students without computer equipment at home to try out AI tools at school).
  • I use AI tools that are widely available and do not require high-end technology, thus reducing the impact of the digital divide (for example, choosing AI tools that can work across devices).
  • I analyze and select digital resources based on the ability of different students to access them, taking into account both the technology available to students and ease of use (for example, prioritizing AI tools that offer multilingual support or are accessible to users with visual, hearing, or mobility impairments).
  • I evaluate my actions in reducing the digital divide in practice by assessing how access to AI technologies affects student motivation and learning outcomes (for example, evaluating the effectiveness of AI tools in distance learning situations).
  • I propose and support strategies and initiatives aimed at improving access to AI technologies for all students, supporting policies that close the digital divide (e.g., recommending investment in school infrastructure to support the use of AI tools and ensure equal access).
  • Do not know about this competency
45. Cooperation skills
An approach to developing students’ cooperation skills, emphasizing the role of AI in effective teamwork and in promoting cooperative interaction both in the classroom and in the wider social and professional context.
  • I recognize the importance of collaborative skills in an AI-enriched learning environment, recognizing the importance of teaching students to interact effectively both with AI technologies and with their peers (for example, understanding how group work with AI tools can improve the learning process).
  • I have tried to implement lessons that encourage students to interact both with one another and with AI technologies, exploring new ways of cooperative learning (for example, organizing a project where students collaborate with AI tools to research and present a historical event).
  • I use AI tools that promote collaborative skills, such as AI chatbots for group discussions or AI tools for team projects to promote collaboration in the classroom (for example, using an AI tool to promote effective group project work in environmental science).
  • I analyze and select AI tools and activities based on their ability to enhance student team building, communication, and problem-solving skills (for example, choosing an AI coding challenge that requires teamwork and joint decision-making).
  • I evaluate the results of collaborative skill development in AI-enriched learning, taking into account both the effectiveness of collaboration and the integration of AI technologies to improve future teaching approaches (e.g., assessing a group project where students and an AI tool work together to create a digital story, focusing on how well students present their ideas and incorporate AI-generated content).
  • I propose and support strategies and best practices for integrating AI tools to develop collaborative skills, sharing insights on improving group dynamics and collaboration skills by integrating AI into the learning process (e.g., organizing workshops where students use AI data analysis tools together to solve real-life problems, thereby improving their ability to work in a team and make data-driven decisions).
  • Do not know about this competency
46. Professional growth
The importance of self-determination, decision-making, and independent learning for professional development and the evaluation of one’s own practice.
  • I am aware of my choices in using and integrating AI tools in my practice, recognizing the importance of self-study to improve my AI skills (e.g., assessing my ability to explore new AI tools without external guidance).
  • I have tried to independently solve problems and adapt AI tools to the needs of my classroom, relying on my own assessment and skills rather than strictly adhering to specific uses (e.g., customizing an AI tool myself to achieve specific learning goals).
  • I use self-directed learning strategies to continuously improve my understanding of the use of AI in education, setting personal learning goals and finding resources to achieve them (for example, using online courses and forums to improve my knowledge of AI trends and tools).
  • I analyze and select AI tools based on a comprehensive evaluation of their use in education and their fit with my teaching style, choosing the best and most appropriate tools for my students (e.g., independently reviewing and selecting an AI tool that promotes inclusive learning).
  • I evaluate my decisions and actions related to the use of AI in the learning process in terms of both effectiveness and ethical aspects (e.g., evaluating the impact of AI on student motivation and achievement while also improving my teaching approach).
  • I suggest and support strategies for developing autonomy by sharing experiences and encouraging colleagues to explore AI tools and technologies independently (for example, leading a workshop on self-directed AI research in education).
  • Do not know about this competency
47. Making cross-curricular connections
How AI is used in learning, integrating AI so that tasks relate to real-world problems and create interdisciplinary connections.
  • I recognize the importance of applying AI tools in real-world contexts in my subject and understand how AI can be integrated into various subjects (for example, realizing the potential of AI to analyze historical data in social studies).
  • I have tried to create lesson plans and activities that incorporate AI tools into practical scenarios to demonstrate the multifaceted nature of AI in different areas (for example, developing a project where students use AI to predict weather conditions in geography).
  • I use AI tools to bridge subjects, fostering conceptual understanding across disciplines to provide comprehensive knowledge of how AI technologies work and affect various aspects of society (e.g., linking AI-driven data analysis in science to ethical discussions in the social sciences).
  • I analyze and select resources and AI tools that can be used in different contexts, ensuring that I can adapt my teaching strategies to include relevant real-world examples (e.g., choosing AI tools that can be used in both environmental and economic fields).
  • I evaluate the effectiveness of using AI tools in different learning contexts by assessing how well students can apply information provided by AI to understand complex problems in the surrounding world (e.g., assessing student projects on sustainable development created with the help of AI tools to gauge their understanding of both the subject and the principles of AI itself).
  • I propose and support methods of incorporating real-world examples into the learning process by sharing strategies and examples of how AI can create interdisciplinary links (for example, organizing a professional development seminar on the use of AI in interdisciplinary project work).
  • Do not know about this competency

References

  1. Jiang, Y.; Li, X.; Luo, H.; Yin, S.; Kaynak, O. Quo vadis artificial intelligence? Discov. Artif. Intell. 2022, 2, 4. [Google Scholar] [CrossRef]
  2. Mizumoto, A.; Eguchi, M. Exploring the potential of using an AI language model for automated essay scoring. Res. Methods Appl. Linguist. 2023, 2, 100050. [Google Scholar] [CrossRef]
  3. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.B.; Santos, O.C.; Rodrigo, M.T.; Cukurova, M.; Bittencourt, I.I. Ethics of AI in education: Towards a community-wide framework. Int. J. Artif. Intell. Educ. 2022, 32, 504–526. [Google Scholar] [CrossRef]
  4. Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  5. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16. [Google Scholar] [CrossRef]
  6. Laupichler, M.C.; Aster, A.; Haverkamp, N.; Raupach, T. Development of the “Scale for the assessment of non-experts’ AI literacy”–An exploratory factor analysis. Comput. Hum. Behav. Rep. 2023, 12, 100338. [Google Scholar] [CrossRef]
  7. Redecker, C. European Framework for the Digital Competence of Educators: DigCompEdu. JRC Publications Repository. Available online: https://publications.jrc.ec.europa.eu/repository/handle/JRC107466 (accessed on 1 March 2023).
  8. Vuorikari, R.; Holmes, W. DigComp 2.2. Annex 2. Citizens interacting with AI systems. In DigComp 2.2, The Digital Competence Framework for Citizens: With New Examples of Knowledge, Skills and Attitudes; Publications Office of the European Union: Luxembourg, 2022. [Google Scholar] [CrossRef]
  9. Tiernan, P.; Costello, E.; Donlon, E.; Parysz, M.; Scriney, M. Information and Media Literacy in the Age of AI: Options for the Future. Educ. Sci. 2023, 13, 906. [Google Scholar] [CrossRef]
  10. UNESCO. UNESCO ICT Competency Framework for Teachers. 2018. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000213475 (accessed on 28 May 2024).
  11. Fernández-Batanero, J.M.; Montenegro-Rueda, M.; Fernández-Cerero, J.; García-Martínez, I. Digital competences for teacher professional development. Systematic review. Eur. J. Teach. Educ. 2022, 45, 513–531. [Google Scholar] [CrossRef]
  12. Spante, M.; Hashemi, S.S.; Lundin, M.; Algers, A. Digital competence and digital literacy in higher education research: Systematic review of concept use. Cogent Educ. 2018, 5, 1519143. [Google Scholar] [CrossRef]
  13. Pangrazio, L.; Godhe, A.-L.; Ledesma, A.G.L. What is digital literacy? A comparative review of publications across three language contexts. E-Learn. Digit. Media 2020, 17, 442–459. [Google Scholar] [CrossRef]
  14. Yi, Y. Establishing the concept of AI literacy. Jahr–Eur. J. Bioeth. 2021, 12, 353–368. [Google Scholar] [CrossRef]
  15. Sperling, K.; Stenberg, C.-J.; McGrath, C.; Åkerfeldt, A.; Heintz, F.; Stenliden, L. In search of artificial intelligence (AI) literacy in Teacher Education: A scoping review. Comput. Educ. Open 2024, 6, 100169. [Google Scholar] [CrossRef]
  16. Celik, I. Exploring the determinants of artificial intelligence (AI) literacy: Digital divide, computational thinking, cognitive absorption. Telemat. Inform. 2023, 83, 102026. [Google Scholar] [CrossRef]
  17. Labadze, L.; Grigolia, M.; Machaidze, L. Role of AI chatbots in education: Systematic literature review. Int. J. Educ. Technol. High. Educ. 2023, 20, 56. [Google Scholar] [CrossRef]
  18. Sharples, M. Towards social generative AI for education: Theory, practices and ethics. Learn. Res. Pract. 2023, 9, 159–167. [Google Scholar] [CrossRef]
  19. Ng, D.T.K.; Su, J.; Leung, J.K.L.; Chu, S.K.W. Artificial intelligence (AI) literacy education in secondary schools: A review. Interact. Learn. Environ. 2023, 1–21. [Google Scholar] [CrossRef]
  20. Economou, A. SELFIEforTEACHERS Toolkit-Using SELFIEforTEACHERS; Joint Research Centre: Sevilla, Spain, 2023. [Google Scholar]
  21. Yan, Z.; Carless, D. Self-assessment is about more than self: The enabling role of feedback literacy. Assess. Eval. High. Educ. 2022, 47, 1116–1128. [Google Scholar] [CrossRef]
  22. Reisoğlu, İ. How Does Digital Competence Training Affect Teachers’ Professional Development and Activities? Technol. Knowl. Learn. 2022, 27, 721–748. [Google Scholar] [CrossRef]
  23. Jones, L.; Fletcher, C. Self-assessment in a selection situation: An evaluation of different measurement approaches. J. Occup. Organ. Psychol. 2002, 75, 145–161. [Google Scholar] [CrossRef]
  24. Higgins, E.T.; Strauman, T.; Klein, R. Standards and the process of self-evaluation. Handb. Motiv. Cogn. Found. Soc. Behav. 1986, 1, 23–63. [Google Scholar]
  25. Andrade, H.; Du, Y. Student responses to criteria-referenced self-assessment. Assess. Eval. High. Educ. 2007, 32, 159–181. [Google Scholar] [CrossRef]
  26. Given, L.M. The Sage Encyclopedia of Qualitative Research Methods; Sage Publications: Thousand Oaks, CA, USA, 2008. [Google Scholar]
  27. Leiner, D. Convenience samples from online respondent pools: A case study of the SoSci Panel. Stud. Commun. Media 2014, 5, 367–396. [Google Scholar] [CrossRef]
  28. Multon, K.D.; Coleman, J.S.M. Coefficient Alpha. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010; pp. 160–163. [Google Scholar] [CrossRef]
  29. Shrestha, N. Factor analysis as a tool for survey analysis. Am. J. Appl. Math. Stat. 2021, 9, 4–11. [Google Scholar] [CrossRef]
  30. Coleman, J.S.M. Principal Components Analysis. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010; pp. 1098–1102. [Google Scholar] [CrossRef]
  31. Abdi, H.; Williams, L.J. Principal component analysis. WIREs Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  32. Horn, J.L. A rationale and test for the number of factors in factor analysis. Psychometrika 1965, 30, 179–185. [Google Scholar] [CrossRef]
  33. Iacobucci, D.; Ruvio, A.; Román, S.; Moon, S.; Herr, P.M. How many factors in factor analysis? New insights about parallel analysis with confidence intervals. J. Bus. Res. 2022, 139, 1026–1043. [Google Scholar] [CrossRef]
  34. Tabachnick, B.G.; Fidell, L.S.; Ullman, J.B. Using Multivariate Statistics; Pearson: Boston, MA, USA, 2013; Volume 6. [Google Scholar]
  35. The Jamovi Project. Jamovi (Version 2.5) 2024. Available online: https://www.jamovi.org (accessed on 28 May 2024).
  36. Brown, B.L. Descriptive Statistics. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010; pp. 353–359. [Google Scholar] [CrossRef]
  37. Ding, A.-C.E.; Shi, L.; Yang, H.; Choi, I. Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Comput. Educ. Open 2024, 6, 100178. [Google Scholar] [CrossRef]
  38. Brandão, A.; Pedro, L.; Zagalo, N. Teacher professional development for a future with generative artificial intelligence—An integrative literature review. Digit. Educ. Rev. 2024, 151–157. [Google Scholar] [CrossRef]
  39. Delcker, J.; Heil, J.; Ifenthaler, D. Evidence-based development of an instrument for the assessment of teachers’ self-perceptions of their artificial intelligence competence. Educ. Technol. Res. Dev. 2024. [Google Scholar] [CrossRef]
  40. Mancha, R.; Leung, M.T. Structural equation modeling. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010; pp. 1455–1461. [Google Scholar] [CrossRef]
  41. Sheskin, D.J. Correlation. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010; pp. 265–267. [Google Scholar] [CrossRef]
  42. Lemm, K.M. Stratified Sampling. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010; pp. 1452–1454. [Google Scholar] [CrossRef]
Table 1. Descriptive statistics of teacher responses across questionnaire items.

Question   Mean   Median   SD     Minimum   Maximum
1_6        3.95   4.00     1.08   1         6
1_7        3.43   3.00     1.89   1         9
1_8        3.38   3.00     1.17   1         6
1_9        3.81   4.00     1.53   1         9
1_10       3.64   3.00     2.02   1         9
1_11       3.76   4.00     1.97   1         9
1_12       3.69   4.00     1.35   1         6
1_13       3.67   3.00     1.79   1         9
1_14       4.86   3.00     2.98   2         9
2_15       4.02   4.00     1.41   1         6
2_16       3.26   3.00     1.29   1         6
2_17       3.26   3.00     2.18   1         9
2_18       3.07   3.00     1.70   1         9
2_19       2.81   2.00     1.73   1         9
3_20       3.24   3.00     1.32   1         6
3_21       3.21   3.00     1.89   1         9
3_22       3.12   3.00     1.89   1         9
3_23       3.12   3.00     1.77   1         9
3_24       3.07   2.00     2.41   1         9
4_25       3.10   3.00     1.48   1         6
4_26       3.14   3.00     1.96   1         9
4_27       3.43   3.00     2.34   1         9
5_28       3.48   3.00     2.64   1         9
5_29       2.62   2.00     1.85   1         9
5_30       3.33   3.00     2.17   1         9
5_31       3.71   4.00     2.34   1         9
6_32       3.74   3.00     2.44   1         9
6_33       2.79   3.00     1.51   1         6
6_34       3.10   3.00     1.83   1         9
6_35       3.31   3.00     2.23   1         9
6_36       2.69   2.00     2.04   1         9
6_37       2.76   2.00     2.24   1         9
7_38       2.57   2.00     1.85   1         9
7_39       3.02   2.00     2.38   1         9
7_40       3.38   2.50     2.61   1         9
7_41       2.62   2.00     2.09   1         9
7_42       2.38   1.00     2.22   1         9
7_43       4.43   3.00     3.53   1         9
7_44       3.29   2.00     2.97   1         9
7_45       3.26   2.00     2.83   1         9
7_46       3.43   2.00     2.86   1         9
7_47       2.36   1.00     2.16   1         9
Table 2. Component statistics on PCA comparing “Professional Engagement” and “AI Literacy Competence” question sets.

Component   SS Loadings   % of Variance   Cumulative %
1           5.38          33.6            33.6
2           3.90          24.4            58.0
Table 3. Component loadings on PCA comparing “Professional Engagement” and “AI Literacy Competence” question sets.

Question   Component 1   Component 2   Uniqueness
1_6        –             0.709         0.512
1_7        –             0.607         0.509
1_8        –             0.701         0.515
1_10       0.536         0.388         0.464
1_11       –             0.570         0.566
1_12       –             0.811         0.372
1_13       –             0.762         0.349
7_38       0.601         0.528         0.210
7_39       0.676         0.305         0.352
7_40       0.782         –             0.423
7_41       0.813         –             0.259
7_42       0.807         –             0.261
7_44       0.711         −0.367        0.484
7_45       0.703         –             0.533
7_46       0.759         –             0.455
7_47       0.626         –             0.458
Table 4. Component statistics on PCA comparing “Digital Resources” and “AI Literacy Competence” question sets.

Component   SS Loadings   % of Variance   Cumulative %
1           4.90          37.7            37.7
2           3.29          25.3            63.0
Table 5. Component loadings on PCA comparing “Digital Resources” and “AI Literacy Competence” question sets.

Question   Component 1   Component 2   Uniqueness
2_16       0.665         −0.508        0.556
2_18       0.919         –             0.237
2_19       0.858         –             0.286
7_38       0.830         –             0.250
7_39       0.622         –             0.394
7_40       –             0.562         0.498
7_41       0.636         0.359         0.292
7_42       0.663         0.386         0.217
7_44       –             0.800         0.369
7_45       –             0.751         0.347
7_46       –             0.602         0.490
7_47       0.763         –             0.398
7_43       −0.319        0.786         0.471
Table 6. Component statistics on PCA comparing “Teaching and Learning” and “AI Literacy Competence” question sets.

Component   SS Loadings   % of Variance   Cumulative %
1           4.87          37.5            37.5
2           3.59          27.6            65.0
Table 7. Component loadings on PCA comparing “Teaching and Learning” and “AI Literacy Competence” question sets.

Question   Component 1   Component 2   Uniqueness
3_21       0.877         –             0.362
3_22       0.968         –             0.201
3_23       0.947         –             0.244
7_38       0.796         –             0.242
7_39       0.540         0.363         0.399
7_40       –             0.525         0.508
7_41       0.572         0.415         0.285
7_42       0.501         0.514         0.251
7_43       −0.315        0.717         0.592
7_44       –             0.895         0.326
7_45       –             0.894         0.276
7_46       –             0.691         0.461
7_47       0.734         –             0.400
Table 8. KMO Measure of Sampling Adequacy on PCA comparing “Assessment” and “AI Literacy Competence” question sets.

Question   MSA
Overall    0.622
4_25       0.311
4_26       0.388
4_27       0.479
7_38       0.640
7_39       0.841
7_40       0.523
7_41       0.701
7_42       0.631
7_43       0.462
7_44       0.720
7_45       0.687
7_46       0.633
7_47       0.747
Table 9. Component statistics on PCA comparing “Empowering Learners” and “AI Literacy Competence” question sets.

Component   SS Loadings   % of Variance   Cumulative %
1           4.70          39.1            39.1
2           2.99          24.9            64.1
Table 10. Component loadings on PCA comparing “Empowering Learners” and “AI Literacy Competence” question sets.

Question   Component 1   Component 2   Uniqueness
5_29       0.886         –             0.301
5_30       0.804         –             0.478
7_38       0.885         –             0.229
7_39       0.722         –             0.355
7_40       0.360         0.454         0.515
7_41       0.692         –             0.288
7_42       0.674         0.332         0.231
7_43       –             0.756         0.539
7_44       –             0.854         0.317
7_45       –             0.879         0.230
7_46       –             0.595         0.483
7_47       0.854         –             0.346
Table 11. Component loadings on PCA comparing “Enabling Learners’ Digital Competence” and “AI Literacy Competence” question sets.

Question   Component 1   Uniqueness
6_32       0.627         0.607
6_36       0.847         0.283
6_37       0.860         0.261
7_38       0.737         0.457
7_39       0.783         0.387
7_40       0.674         0.546
7_41       0.863         0.256
7_42       0.895         0.199
7_44       0.605         0.634
7_45       0.684         0.532
7_46       0.708         0.499
7_47       0.634         0.598
