1. Introduction
Artificial intelligence is fundamentally transforming the methods of teaching and learning [1,2,3]. In the educational process of students and teachers, AI is considered one of the most effective tools, both within and outside the school environment [4,5,6]. The gradual integration of technology into education has raised the demands placed on students’ AI literacy and capabilities. To cultivate these essential skills, schools must adapt to the transition towards a digital society [7,8,9]. The advent of AI has revolutionized the educational environment and instructional paradigms, introducing new requirements for the knowledge and capabilities of educators. Teachers, as central figures in the educational system, are now called upon to improve their competencies in this digital age, particularly in the use of artificial intelligence for pedagogical purposes. Existing research indicates that a common strategy for advancing the AI literacy of pre-service teachers is the implementation of courses focused on AI [10,11]. An essential factor influencing the use of technology by novice educators is the quality of AI content and experiences embedded within teacher education programs [12,13]. Furthermore, preliminary research has reported that merely increasing the number of AI courses in educational institutions is insufficient to address this issue comprehensively; it is crucial to invest in teacher training and to effectively encourage the use of AI to support students’ learning [14,15]. Various countries and international organizations regularly revise standards related to the AI literacy of educators. These bodies also implement teacher education and training programs aimed at enhancing teachers’ capacity to apply AI technology in their teaching practices [15,16,17]. For example, two researchers from the University of Cyprus proposed a context-based instructional design approach termed “Technology Mapping” (TM), which has served as a valuable reference for carrying out case-based instruction for teachers in the context of Technological Pedagogical Content Knowledge (TPACK) [18]. The Initial Teacher Education (ITE) program in Australia was implemented across 48 universities, primarily focusing on public higher education institutions [19]. In the AI era, teaching and learning are complex activities that involve the multifaceted use of knowledge. As AI has advanced, the avenues for knowledge acquisition have gradually diversified; as a result, the conventional role of teachers in knowledge dissemination is expected to diminish or be replaced in the future [20,21]. To navigate this evolving circumstance, teachers must possess competencies in technology, pedagogy, and content knowledge. The use of technology to support students in learning specific subject matter is commonly referred to as Technological Pedagogical Content Knowledge (TPACK) [22,23,24]. The TPACK framework has gained widespread recognition in the field of teacher education. In the context of AI education, several studies have extended the framework to conceptualize teachers’ technological integration expertise as the incorporation of AI technology into technological pedagogical content knowledge, termed AI-Technological Pedagogical Content Knowledge (AI-TPACK) [25]. However, numerous findings report that contemporary educators lack proficiency in this area, often failing to incorporate technology effectively into classroom instruction [26]. Recent trends in educational research indicate that pre-service teachers, often considered digital natives, tend to identify more strongly with AI technology than the majority of in-service educators, who are regarded as digital immigrants [27,28]. The AI-TPACK framework has exerted a profound influence on research and practice in teacher education and professional development, prompting extensive scholarly investigation [29,30].
Artificial intelligence (AI) technology distinguishes itself from conventional information technologies not only by pervasively influencing teaching and learning across all dimensions but also by catalyzing a transformation in the cognitive structures and instructional methodologies of educators. The traditional Technological Pedagogical Content Knowledge (TPACK) framework therefore requires the infusion of new connotations and continuous adaptation to contemporary trends so that educators can respond effectively to the demands of the AI era [31]. TPACK holds that the effective integration of technology in teaching hinges on an understanding of the interplay between three core elements: subject matter (content knowledge), pedagogy (pedagogical knowledge), and technology (technological knowledge). Technological knowledge, unlike pedagogical and content knowledge, is more dynamic and subject to frequent change, reflecting the fast-paced evolution of technology [17,24]. As AI becomes increasingly integrated into educational practice, a pertinent question arises about the adequacy of the existing TPACK framework for meeting the contemporary demands of teaching and professional development [32]. This leads to the exploration of whether the TPACK framework needs to evolve or incorporate new dimensions in the era of AI. The integration of AI technology into the TPACK framework could potentially transform teaching methodologies, learning environments, and other aspects of education [33]. Thus, the development of an AI-infused TPACK model (AI-TPACK) becomes a significant area of research and inquiry. Such a model would not only incorporate the traditional elements of TPACK but also integrate AI technologies, potentially leading to more effective and innovative teaching practices that align with the rapid advancements in AI and its applications in education. The exploration of AI-TPACK is essential to understanding how AI can enhance the educational process and support teachers in adapting to the evolving technological landscape.
The concept of AI-TPACK represents a nuanced and specialized form of knowledge that emerges from the intersection of three distinct areas: disciplinary knowledge (content expertise), pedagogical knowledge (teaching methods and strategies), and artificial intelligence technological knowledge. This type of knowledge is distinct from the expertise of subject-matter experts and AI technology specialists [34]. It goes beyond general, discipline-agnostic pedagogical knowledge, embodying a tailored approach to teaching within specific subject areas through the use of artificial intelligence technology.
AI-TPACK enables educators, or AI entities functioning as educators, to possess a level of knowledge comparable to that of human teachers. This knowledge equips them to carry out teaching tasks independently or in collaboration with human educators [35]. This aspect is particularly significant in the current era of artificial intelligence, where AI technology transcends its traditional role as merely a tool for teaching and learning. Instead, there is an emerging focus on how human teachers and AI entities (AI teachers) can collaborate effectively, and this collaboration forms an integral part of the AI-TPACK framework.
Therefore, within the AI-TPACK framework, the interactive relationships among artificial intelligence technology, subject matter content, and teaching methods are pivotal. These relationships, especially when viewed through the lens of human–computer collaborative thinking, constitute the core essence of AI-TPACK [31]. This perspective underscores the importance of integrating AI technology not just as a supplementary tool but as an integral component of the teaching and learning process, reshaping how educational content is delivered and understood in the AI era.
Several studies have focused on the application of TPACK theory in the field of AI education [36,37]. AI technology extensively permeates and influences teaching and learning, and it transforms the cognitive structures and instructional methods of teachers [38]. Angeli and Valanides observed the difficulty of clearly defining the constituent elements of TPACK, as the boundaries among these components are highly ambiguous; this issue is equally evident in the literature on AI-TPACK [39,40]. Despite extensive empirical research validating the relationships among the components of TPACK [41,42,43], the investigation of teachers’ AI-TPACK is still in its early stages. In terms of theoretical exploration, the current framework has identified constituent elements but has not postulated further assumptions about their intrinsic relationships. Existing research on AI-TPACK has focused mainly on listing its components without thoroughly exploring their connections, and it lacks empirical support and a measure of demonstrated validity. Revisiting the relationships between technology, pedagogy, and subject matter knowledge, and constructing a framework for teachers’ AI-TPACK, has therefore become an urgent issue. To address these gaps, this research carried out a comprehensive analysis of the current state of teachers’ AI-TPACK by systematically exploring its concepts, structure, characteristics, and impact-effect models. It adopted exploratory factor analysis, confirmatory factor analysis, and structural equation modeling, and developed a measurement scale that complies with psychometric standards. The scale was then empirically tested and refined to clarify the relationships among the knowledge elements of the teacher’s AI-TPACK.
3. Research Objectives
This research is aimed at achieving the following primary objectives:
To develop and validate an AI-TPACK measurement tool designed for teachers, with sound metrics for assessing their knowledge levels across the various components of AI-TPACK;
To explore the relationships among the constituent knowledge elements of AI-TPACK and confirm whether these connections are consistent with theoretical assumptions.
To address the first objective, this research systematically dissected the essence of AI-TPACK, formulated questionnaire items, and engaged domain experts to refine these items iteratively, with the aim of eliminating ambiguity in item descriptions and overlap between the different dimensions of AI-TPACK. This iterative process yielded an initial scale. Exploratory factor analysis was then conducted on the questionnaire, and its items were modified and reduced to form the formal questionnaire. Finally, the formal questionnaire was administered, and the collected data were subjected to confirmatory factor analysis and reliability assessment to validate the scientific robustness of the constructed AI-TPACK scale.
To address the second objective, and given the complexity of AI-TPACK knowledge elements and their relationships, existing research indicates that the concept should not be analyzed as a single structural knowledge element; rather, the relationships among its inherent structures must be explored thoroughly. Drawing on teachers’ general knowledge and specific technological expertise, this research constructed a model to analyze the latent impact relationships among the seven core knowledge elements of AI-TPACK in teacher education.
4. Methodology
This section provides a comprehensive account of the development of the AI-TPACK scale, its validation, the respondents concerned, and methodological considerations. The development and implementation of the scale comprised several critical phases, beginning with an extensive literature review, followed by content deconstruction, item generation and refinement, and expert review, and concluding with survey-based research [67]. Item revision employed both exploratory factor analysis and confirmatory factor analysis. The data collected from the survey were analyzed using structural equation modeling, with the main aim of understanding the interrelationships between the constituent elements of AI-TPACK knowledge [68]. Further details about each significant step in this process are elaborated in the relevant subsections.
4.1. Existing Scales
In recent years, numerous studies in the field of TPACK have focused on developing tools for assessing teachers’ TPACK structures. For example, Schmidt et al. designed a five-point scale to measure its seven components among 124 pre-service teachers in the United States; this tool was adapted and localized by other researchers [66], and Koh et al. further modified it into a 29-item, 7-point scale. However, the findings indicated that not all seven TPACK factors could be clearly identified. Some factors, such as PK and PCK, as well as TCK and TPACK, merged to form new ones, and the framework was also adapted for more generic purposes rather than specific subject content [69]. Two items linked to TPK independently formed another factor, suggesting that the theoretically proposed seven-factor structure of TPACK does not fully manifest in practical situations [55]. To address the challenges associated with TPACK measurement, Chai and colleagues focused particularly on the conceptual distinctions among these components [59]. Their scale was used to assess 455 in-service and 550 pre-service teachers in Singapore [70] and 550 pre-service teachers in Asian Chinese-speaking regions (Singapore, Hong Kong, Mainland China, and Taiwan) [71], successfully identifying the seven TPACK factors in each case. Similarly, when the modified scale was used to measure science and Chinese language teachers in Singapore [64], it effectively distinguished these factors. These results suggest that a clear conceptual delineation of each TPACK component tends to enhance the discriminant validity of the respective scales. Therefore, given the considerable interrelatedness and overlap among the seven components, TPACK research should carefully define each element when developing new scales or adapting existing ones.
Research on the TPACK scale within the context of AI is a dynamic and evolving process. Celik introduced an ethical dimension to TPACK, giving rise to the Intelligent-TPACK Scale, designed to assess teachers’ ethical knowledge of AI; nevertheless, there is currently no universally accepted AI-TPACK scale [25]. This deficiency is most apparent in several critical aspects:
Firstly, existing research has explored the integration of artificial intelligence into teachers’ AI-TPACK, although these investigations often focus on specific aspects, such as natural language processing or machine learning, rather than on the comprehensive application of teachers’ AI-TPACK as a whole. This limitation poses a challenge to establishing a comprehensive and systematic AI-TPACK scale.
Secondly, the teacher’s AI-TPACK concept is complex, as it integrates knowledge pertaining to AI technology, subject matter expertise, pedagogical knowledge, and the intersection of these three domains. In incorporating AI technology into a teacher’s TPACK, it is important to clarify which specific AI technologies can be effectively combined with particular subject matter and pedagogical knowledge to yield favorable educational outcomes [25]. However, current research often lacks an in-depth exploration of this interdisciplinary integration, thereby complicating the establishment of an AI-TPACK assessment framework.
Lastly, the development of an AI-TPACK framework requires a thorough examination of diverse contextual factors and challenges encountered in real-world applications, including distinct instructional settings, subject domains, and student demographics. This requires a substantial body of empirical research and on-site investigation to assess the feasibility and efficacy of the AI-TPACK assessment framework [72]. At present, few investigations have been carried out in this field, impeding the establishment of an AI-TPACK assessment framework supported by empirical evidence.
The development of an AI-TPACK assessment framework requires a comprehensive examination of various aspects. This includes the general application of AI, its integration with the teacher’s TPACK, and addressing the various contextual factors and challenges encountered in practical applications. A comprehensive and systematic AI-TPACK assessment framework can be realized only through an in-depth exploration of these factors.
4.2. Item Generation
The scale (see Appendix A) used in this research was mainly adapted from the TPACK scales developed by Schmidt [66], Landry [73], Smith [74], and Celik [25]. To refine the scale, insights were gathered from open-ended questionnaire surveys of primary and secondary school teachers as well as educational experts; the survey also explored the constituents of teachers’ TPACK when integrating AI. Based on the theoretical framework of the teacher’s TPACK structure and a detailed examination of its contents and extensions, each factor was further refined. The principle of factor-item congruence demands that a set of typical psychological and behavioral items be compiled for each component. Accordingly, 6 items were formulated for each factor, yielding a total of 42 items, which constitute the Teachers’ TPACK Scale: Semantic Analysis Expert Questionnaire. Incorporating feedback and suggestions from 12 education doctoral reviewers, items that were conceptually similar or repetitive were merged, while those deemed difficult to understand or ambiguous were either removed or revised. After extensive deliberation, a final set of 42 items, organized as 6 items per factor, was established. The items use a 5-point Likert self-assessment scoring system, ranging from Strongly Conformant to Strongly Non-conformant, with higher scores indicating more advanced levels of teacher AI-TPACK competence.
The formal teacher’s AI-TPACK survey questionnaire consists of two main sections: basic information and the scale. The basic information section is designed to gather essential demographic data about the respondents, including gender, highest educational attainment, teaching role, subject category, and educational stage, as well as their familiarity with and exposure to teachers’ AI-TPACK. The Teacher AI-TPACK scale section comprised seven dimensions, totaling 42 items, which collectively assessed different aspects of the teacher’s AI-TPACK.
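As a minimal illustration of how such a 7-dimension, 42-item Likert instrument is typically scored (this sketch and its response data are hypothetical, not the study’s materials), each respondent’s six item ratings within a dimension can be averaged into a dimension score:

```python
# Minimal sketch: scoring a 7-factor x 6-item, 5-point Likert scale.
# Factor labels follow the AI-TPACK dimensions; the response data are invented.
FACTORS = ["CK", "PK", "AI-TK", "PCK", "AI-TCK", "AI-TPK", "AI-TPACK"]
ITEMS_PER_FACTOR = 6

def factor_scores(responses):
    """responses: list of 42 ratings (1-5), ordered factor by factor.
    Returns a dict mapping each factor to the mean of its 6 items."""
    assert len(responses) == len(FACTORS) * ITEMS_PER_FACTOR
    assert all(1 <= r <= 5 for r in responses)
    scores = {}
    for i, factor in enumerate(FACTORS):
        block = responses[i * ITEMS_PER_FACTOR:(i + 1) * ITEMS_PER_FACTOR]
        scores[factor] = sum(block) / ITEMS_PER_FACTOR
    return scores

# One hypothetical respondent: uniform ratings per factor for clarity.
ratings = [4]*6 + [5]*6 + [3]*6 + [4]*6 + [2]*6 + [3]*6 + [4]*6
print(factor_scores(ratings))
```

Dimension means, rather than raw item totals, keep scores comparable across dimensions even if items are later merged or removed during revision.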
4.3. Expert Consultation
To guarantee the reliability and validity of the measurement instrument, a consultative process was undertaken involving ten experts in the field of educational technology. These experts were drawn from five reputable universities, including East China Normal University, Beijing Normal University, and Guangxi Normal University, and comprised four professors and one associate professor, supplemented by five individuals holding doctoral degrees [75,76]. Incorporating feedback from a panel of distinguished experts is a standard method for enhancing the credibility and accuracy of a research instrument. Involving professionals with various levels of expertise and from different academic institutions ensures a comprehensive and multifaceted perspective on the instrument’s effectiveness and applicability. Their collective input contributed significantly to refining the measurement tool, ensuring that it accurately captures the intended constructs and is relevant to the field of educational technology. Such rigorous validation processes are crucial in academic research, especially in fields like educational technology, where precision and relevance are paramount.
After thorough consideration of the consistency of the measurement items and feedback from the experts, the necessary adjustments were made, leading to the Integrated Teacher’s AI-TPACK Prediction Scale. As an illustration, for the AI-TK dimension, the fifth item in Celik’s scale read “I am familiar with AI-based tools and their technical capacities”. This item was intended to assess educators’ familiarity with AI tools, since only those familiar with a technological tool can use it effectively for certain tasks; the experts recommended revising it and relocating it to the first position under the AI-TK dimension, phrased as “I know how to execute some tasks with AI-based tools”. The items “I know how to execute some tasks with AI-based tools” and “I know how to initialize a task for AI-based technologies by text or speech” exhibited substantial conceptual overlap, both implying the use of AI technology for task execution; in an educational context, they were merged and modified to read “I frequently use AI technology for teaching”. Additionally, the first item under Intelligent Technological Knowledge (TK) in Celik’s scale was “I know how to interact with AI-based tools in daily life”, in which the notion of interaction was somewhat unclear; this item was refined to “I know how to use AI technology for interactive teaching” to suit the educational context. Finally, in the Schmidt scale, the sixth TK item was “I have the technical skills I need to use technology”, originally designed to evaluate whether educators possess the skills needed for teaching with technology. Because novice educators tend to respond negatively to this item, the experts recommended assessing teachers’ AI-TK potential instead, and the item was modified to read “I can easily acquire the AI technology skills required for teaching”.
4.4. Research Respondents
In this research, the teacher’s AI-TPACK scale was developed, and its reliability and validity were assessed. The survey took place from July 2023 to September 2023 and included 400 teachers as respondents. These teachers had completed coursework in educational technology and received systematic national training in AI technology; they were familiar with commonly used AI technologies and had acquired practical experience in applying them.
A randomized sampling method was used, and a total of 400 questionnaires were distributed. After an assessment of teachers’ knowledge of and exposure to AI technology, 34 respondents were found to be either unfamiliar with or lacking prior exposure to AI technology. These 34 questionnaires were excluded, leaving 366 valid responses, or 91.50% of the distributed surveys. The demographic characteristics of the respondents, comprising 82 males and 284 females, are shown in Table 2. Among the surveyed respondents, 36.89% and 63.11% were pre-service and in-service teachers, respectively. In terms of educational background, 54.10%, 41.53%, and 4.37% held undergraduate, master’s, and doctoral degrees, respectively. Concerning the subjects taught, 25.14% and 74.86% were from the arts and sciences, respectively. Teachers from elementary, middle school, high school, and university settings represented 14.75%, 41.26%, 33.06%, and 10.93% of the sample, respectively. The t-test results indicate no significant differences in teachers’ AI-TPACK proficiency across gender, highest educational attainment, teacher type, subject category, or educational stage (p-values > 0.05). Thus, it can be inferred that the uneven distribution of the sample does not influence the outcomes.
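The group comparisons described above can be sketched with a generic two-sample t-test. The implementation below is Welch’s variant (which does not assume equal group variances) applied to invented scores; it is not the study’s actual analysis code or data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom for
    comparing the mean scores of two independent groups (e.g., male
    vs. female respondents). Does not assume equal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical AI-TPACK mean scores for two small groups with equal means.
group_a = [3.8, 4.0, 3.6, 4.2, 3.9]
group_b = [3.7, 4.1, 3.8, 3.9, 4.0]
t, df = welch_t(group_a, group_b)
print(round(t, 3), round(df, 1))
```

A t statistic near zero, as with these equal-mean groups, corresponds to a large p-value and hence no detectable group difference.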
4.5. Data Analysis
Data analysis comprised a multi-phase process built around questionnaire-based measurement of the respondents. Both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted on the acquired data, leading to successive adjustments of the items. This iterative process eventually resulted in the official teacher’s AI-TPACK questionnaire, which was then used in the formal measurement phase; respondents were encouraged to complete all questionnaire items within the designated timeframe and to provide authentic responses. To establish and validate the elements and structure of the teacher’s AI-TPACK, the data obtained from the formal measurement were divided into two essentially homogeneous halves. One half was subjected to exploratory factor analysis using SPSS 27, while the other was used for confirmatory factor analysis and structural equation modeling (SEM).
The data analysis process consisted of three distinct stages: In the first stage, exploratory factor analysis was used to assess the structural validity of the teacher’s AI-TPACK scale and identify its optimal factor structure. In the second stage, confirmatory factor analysis was applied to validate the structural models of the teacher’s AI-TPACK scale and its constituent knowledge elements. This stage was used to confirm whether the predefined models, factor quantities, and scale structure were in line with the actual data. Finally, the third stage used structural equation modeling to examine the causal relationships among the knowledge elements within the teacher’s AI-TPACK.
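While the study itself ran EFA in SPSS, the factor-retention logic behind the first stage can be sketched independently: the eigenvalues of the inter-item correlation matrix are examined, and factors with eigenvalues above 1 are retained (the Kaiser criterion). The correlation matrix below is a toy example with two uncorrelated item clusters:

```python
import numpy as np

def kaiser_retained_factors(corr):
    """Return the number of factors with eigenvalue > 1 in a
    correlation matrix (Kaiser criterion, as used in a typical EFA)."""
    eigenvalues = np.linalg.eigvalsh(corr)
    return int(np.sum(eigenvalues > 1.0))

# Toy correlation matrix: six items forming two uncorrelated clusters
# of three items each (within-cluster correlation r = 0.6).
block = np.full((3, 3), 0.6)
np.fill_diagonal(block, 1.0)
corr = np.zeros((6, 6))
corr[:3, :3] = block
corr[3:, 3:] = block

print(kaiser_retained_factors(corr))  # two clusters -> two retained factors
```

In practice, retention decisions also weigh scree plots, loadings, and interpretability, which is why item modification and reduction accompany the statistical criterion.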
6. Discussion
The main objective of this research is to integrate AI technologies into the teacher’s TPACK to form the teacher’s AI-TPACK framework and to evaluate the knowledge elements described within it. When developing and validating this tool, recommendations from recent publications [25,98] were followed, and the data were analyzed from various perspectives.
In the initial step of this research, the development and validation of the scale followed several procedures guided by the theoretical framework of the teacher’s AI-TPACK. As reported by Graham, defining the main concepts and the relationships between the knowledge elements of a teacher’s AI-TPACK is of critical importance [99]. To this end, definitions of the knowledge elements and the major criteria for distinguishing them were established based on a two-year literature review and related research activities [31,100]. This comprehensive process determined the structure of the teacher’s AI-TPACK and facilitated its subsequent development.
Several factors were considered during the validation process. As stated in the methodology section, in addition to traditional validation methods such as factor analysis and correlation coefficients, structural equation modeling was adopted to validate the relationships between the teacher’s AI-TPACK elements. This additional step, in line with Graham’s recommendations, provided valuable insights into the complex relationships between these elements [99]. The results raised questions about the accuracy of the teacher’s AI-TPACK model shown in Figure 2, which depicts the relationships among the seven knowledge elements. The findings are inconsistent with the existing model, showing that the relationships between knowledge elements are more complex than initially anticipated. Although the current teacher’s AI-TPACK framework depicts well-defined relationships and equal influence among these elements, the present results indicate that the relationships among the AI-TPACK components are neither clearly defined nor simple.
The teacher’s AI-TPACK framework is structured as a hierarchical model, with the fundamental core knowledge elements and the composite knowledge repositories situated at the lower levels. Moving from the first to the second level produces a significant indirect impact on AI-TPACK, whereas the direct impact of the first level on the teacher’s AI-TPACK tends to be negligible. It was concluded that the core knowledge elements (CK, PK, and AI-TK) have a relatively minor direct influence on teachers’ AI-TPACK. This finding is consistent with Mishra and Koehler’s observation that the composite knowledge elements are not simply a combination of two core elements; these knowledge bases possess distinct characteristics [101]. As stated in the theory, the teacher’s AI-TPACK framework was developed based on Shulman’s PCK principles. The reference to Pamuk’s work [42] highlights an important aspect of the TPACK framework: Pedagogical Content Knowledge (PCK) is considered a predominant knowledge element that directly influences the development of TPACK, suggesting that the integration of pedagogical strategies with content expertise is crucial for effectively incorporating technology into teaching practices. However, with the integration of AI technology, the results of this research indicate that the impact of PCK on teachers’ AI-TPACK development is minimal (PCK = −0.008). This surprising finding suggests that the explanatory power of the core knowledge elements (CK, PK, and AI-TK) is not channeled solely through the composite knowledge repositories (PCK, AI-TCK, and AI-TPK).
Based on these findings, the knowledge elements can be categorized into two distinct types: those related to technology and those unconnected to it. One of the most significant findings is the marked difference in explanatory power between technology-related and non-technology knowledge elements. The results show that the technology-related knowledge elements within the teacher’s AI-TPACK framework (AI-TK, AI-TCK, and AI-TPK) correlate strongly with the teacher’s AI-TPACK and possess firm explanatory power. By contrast, CK, PK, and PCK have a relatively weak impact on the teacher’s AI-TPACK (CK = 0.052, PK = 0.088, PCK = −0.008) compared to AI-TK (0.654), AI-TCK (0.207), and AI-TPK (0.870). In other words, the non-technology elements have a much lower direct impact on AI-TPACK development than the technology-related elements. Notably, the explanatory power of AI-TCK (0.207) is significantly lower than that of AI-TK (0.654), and that of PCK (−0.008) is much lower than that of PK (0.088). Even though PCK and AI-TCK combine PK and AI-TK, respectively, with content (C) knowledge, their explanatory power with respect to the teacher’s AI-TPACK decreased. The apparent effect of content (C) knowledge on the explanatory power of PCK and AI-TCK prompts further research to investigate whether removing the content (C) element from the framework would yield a better-fitting structural model.
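As a quick arithmetic check on the comparison above (the coefficients are the standardized values reported in this study; the grouping into technology-related and non-technology elements follows the text):

```python
# Standardized path coefficients on teacher AI-TPACK, as reported above.
coefficients = {
    "CK": 0.052, "PK": 0.088, "PCK": -0.008,           # non-technology
    "AI-TK": 0.654, "AI-TCK": 0.207, "AI-TPK": 0.870,  # technology-related
}

tech = ["AI-TK", "AI-TCK", "AI-TPK"]
non_tech = ["CK", "PK", "PCK"]

# Every technology-related coefficient exceeds every non-technology one.
assert min(coefficients[k] for k in tech) > max(coefficients[k] for k in non_tech)

# Adding content (C) knowledge lowers explanatory power in both pairings:
assert coefficients["AI-TCK"] < coefficients["AI-TK"]   # 0.207 < 0.654
assert coefficients["PCK"] < coefficients["PK"]         # -0.008 < 0.088

print(sorted(coefficients, key=coefficients.get, reverse=True))
```

The sorted order places AI-TPK, AI-TK, and AI-TCK ahead of all non-technology elements, which is the pattern the discussion draws on.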
The analyzed data and the relationships revealed point to a need to modify the traditional TPACK framework, incorporating the explanatory power of the relationships between knowledge elements and their hierarchical structure. Future research should focus on several main aspects. First, theoretical and empirical investigations based on the teacher’s AI-TPACK framework should be expanded to uncover the reasons behind the low explanatory power of content (C) knowledge. Second, although the developed teacher’s AI-TPACK model displayed good reliability and validity, it is unclear whether the seven-factor model is the most optimal among possible structural models; further work should construct and test competing models. Third, investigations using the teacher’s AI-TPACK scale should be conducted to assess and guide AI-TPACK levels effectively in practical settings. Fourth, building on the developed framework, the relationship between the teacher’s AI-TPACK level and AI literacy is a critical area for investigation.
This study bridges the gap between sustainability in education and AI by proposing a contemporary educational framework tailored for teachers in the AI era. It underscores the vital importance of incorporating AI into teaching methodologies to ensure that education remains relevant and sustainable amid rapid technological advancements [
96]. The study elucidates the AI-TPACK framework, highlighting its significance in the ongoing development of sustainable teaching practices and in the further integration of AI and information technology in educational contexts. The AI-TPACK model equips teachers to modify their pedagogical approaches to include AI, thereby preparing students with essential skills for a digitally driven society. This approach is not only innovative but also addresses the dynamic educational needs of a technology-centric world, contributing to the sustainability of educational practices [
102].
Limitations
This study advocates a progressive and systematic approach to assessing the validity and reliability of the AI-TPACK scale. Despite the scale’s comprehensive scope and nationwide application, the research has identifiable limitations.
First, the study employed a survey research model, using a scale to collect data. While surveys are effective for understanding population characteristics, they are less precise in capturing behaviors and perceptions compared to observational methods [
103]. Responses in survey research are inherently constrained by the structure of the survey instrument itself. A more robust developmental approach could involve qualitative data collection not only from educational technology experts but also from pre-service teachers, offering a broader perspective beyond the current study’s framework.
Second, the study’s large sample was predominantly female. However, the literature from 2002 onwards suggests no significant gender differences among pre-service teachers regarding attitudes towards, abilities with, and use of technology [
104,
105,
106,
107]. Further, recent studies highlight that gender and computer attitudes are not significant predictors of information and communication technology usage [
108]. In Schmidt et al.’s [
66] survey, a notable 93.5% of respondents were females, and similar gender distributions have been observed in other studies focusing on pre-service teachers’ TPACK development [
64,
109,
110].
Lastly, beyond the model’s results, the interpretation of the data and of the relationships among knowledge elements suggests that the traditional TPACK framework requires revision to better reflect the strength of those relationships and their hierarchical structure. Future research should extend to diverse contexts, aiding not only in validating the framework but also in refining these insights. In particular, the relationships across different levels within the AI-TPACK framework require deeper understanding and validation.
The follow-up research will pivot towards qualitative studies of AI-TPACK, focusing on its behavioral manifestations and continually validating and revising the scale in practice. Efforts will include expanding the sample size and ensuring a more balanced demographic representation, such as in terms of gender and educational levels. Additionally, the research will delve into the relationships between different levels of the AI-TPACK framework, exploring the evolution and interplay of core knowledge elements (CK, PK, and AI-TK) and composite knowledge repositories (PCK, AI-TCK, and AI-TPK).