Article

Artificial Intelligence Applications in Primary Education: A Quantitatively Complemented Mixed-Meta-Method Study

by Yavuz Topkaya 1, Yunus Doğan 2,*, Veli Batdı 3 and Sami Aydın 4

1 Education Faculty, Hatay Mustafa Kemal University, Hatay 31060, Turkey
2 School of Foreign Languages, Fırat University, Elazığ 23119, Turkey
3 Department of Curriculum and Instruction, Gaziantep University, Gaziantep 27310, Turkey
4 Gaziantep Education Faculty, Gaziantep University, Gaziantep 27310, Turkey
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(7), 3015; https://doi.org/10.3390/su17073015
Submission received: 29 January 2025 / Revised: 6 March 2025 / Accepted: 19 March 2025 / Published: 28 March 2025

Abstract:
In recent years, rapidly advancing technology has reshaped our world, holding the potential to transform social and economic structures. The United Nations’ Sustainable Development Goals (SDGs) provide a comprehensive roadmap that promotes not only economic growth but also social, environmental, and global sustainability. Meanwhile, artificial intelligence (AI) has emerged as a critical technology contributing to sustainable development by offering solutions to both social and economic challenges. One of the fundamental ideas is that education should always maintain a dynamic structure that supports sustainable development and fosters individuals equipped with sustainability skills. In this study, the impact of various variables related to AI applications at the primary school level, in line with the Sustainable Development Goals, was evaluated using a mixed meta-method complemented with quantitative analyses. Within the framework of the mixed meta-method, a meta-analysis of data obtained from studies conducted between 2005 and 2025 was performed using the CMA program. The analysis determined a medium effect size of g = 0.51. To validate the meta-analysis results and enhance their content validity, a meta-thematic analysis was conducted, applying content analysis to identify themes and codes. In the final stage of this research, to further support the data obtained through the mixed meta-method, a set of evaluation form questions prepared within the Rasch measurement model framework was administered to primary school teachers. The collected data were analyzed using the FACETS program. The document review conducted for the meta-analysis indicated that AI studies in primary education were most commonly applied in mathematics courses. During the meta-thematic analysis process, themes related to the impact of AI applications on learning environments, challenges encountered during implementation, and proposed solutions were identified. The Rasch measurement model process revealed that AI applications were most widely used in the science and mathematics curricula (FBP-4 and MP-2). Among the evaluators (raters), J2 was identified as the most lenient rater, while J11 was the strictest. When analyzing the AI-related items, the statement “I can help students prepare a presentation describing their surroundings using AI tools” (I17) was identified as the most challenging item, whereas “I understand how to effectively use AI applications in classroom activities” (I14) was found to be the easiest. The results of the analyses indicate that the obtained data are complementary and mutually supportive. The findings of this research are expected to serve as a guide for future studies and applications related to the topic, making significant contributions to the field.

1. Introduction

By the late 20th century, technological and technical advancements had led to profound changes in societies. Today, information technologies are at the core of continuously evolving education systems [1]. Among the notable technologies of this era is artificial intelligence (AI), which can be defined as a technology that enables computer systems to mimic certain human capabilities [2]. AI encompasses autonomous machines capable of analyzing large datasets, recognizing patterns, learning, and solving problems [3,4]. Currently, AI has shaped the future of computer systems and has become an indispensable part of daily life. With advancements in AI-driven computing, continuous innovations in software and hardware have emerged, integrating AI-powered applications such as robots, smart home systems, and autonomous vehicles into everyday life [5].
AI has a wide range of applications across various sectors, from healthcare to finance and manufacturing to education. For instance, in the healthcare sector, AI can analyze hospital data to assist in diagnosis and treatment planning, as well as provide preventive healthcare services [6]. In financial services, AI can be used to prevent fraud and enhance customer service [7]. In the industrial sector, AI enables more cost-effective and widespread production, while in transportation, it contributes to safer travel [8,9].
The rapid advancement of AI is profoundly transforming our lifestyles, learning methods, and ways of conducting business [10]. Today, the greatest challenge in sustainable education for AI-integrated smart manufacturing capabilities is the ability of the education system to adapt to changing conditions and to focus on the continuous development of individuals’ competencies [11]. Therefore, AI education should evolve from being limited to experts to being accessible to the general public [12]. Traditionally covered in higher education, AI topics have now been strategically introduced into secondary and primary school curricula worldwide [13]. Introducing AI education at an early age provides opportunities to build a sustainable world by helping students understand AI-based solutions to global issues such as environmental challenges, food security, poverty, inequality, and violence [14,15]. Through such educational initiatives, children not only become aware of emerging technologies and their functionalities but also gain inspiration to become future AI users, designers, software developers, and researchers. By maintaining a balance between society, the economy, and the environment, AI education can help achieve the goals of sustainable development.

1.1. Artificial Intelligence in Education

As in many fields, artificial intelligence (AI) is being utilized in education. The significance of applying AI in the education sector became evident during the COVID-19 pandemic, which resulted in a shift from traditional classroom teaching to digital learning methods [16,17,18]. In fact, studies on the use of AI in educational settings began as early as the 1980s [19]. In the decades since, research in the literature has focused on AI-supported educational applications [20], language-teaching processes in educational environments, systems for evaluating and supporting individual performance [21], and intelligent teaching systems and environments [22].
These studies indicate that using AI in education can provide more effective learning experiences by personalizing learning, helping students discover their abilities, fostering creativity, and reducing teachers’ workload [23,24]. Additionally, the literature highlights that AI can support students with special needs, offer content tailored to different learning styles, analyze students’ learning processes to focus on their individual needs, and guide them in career planning [25]. These findings align with the three paradigms defined by [26] for the use of AI in education: guiding, supporting, and integrating students into enhanced practices.
While such findings showcase the benefits of AI for students, studies examining teachers’ perceptions of AI reveal that teachers often see AI as a professional threat that could replace their jobs [27]. However, recent research indicates that these perceptions are changing as teachers now expect AI to bring significant advancements to education in various settings [28]. Teachers’ perceptions of AI systems depend on factors such as their pedagogical beliefs, teaching experience, and history of using educational technology. These factors can influence their willingness to adopt new educational technologies [29].
Studies exploring teachers’ perceptions of AI in education (AIE) reveal that teachers see AI as a tool that can contribute to the teaching–learning process by providing digitized learning materials and facilitating human–computer interaction [30]. They also perceive AI as a means to address learning difficulties faced by students in overcrowded classrooms [31]. Moreover, the research indicates that teachers believe AIE can reduce their administrative workload by taking over repetitive and simple tasks [32].

1.2. Artificial Intelligence in Elementary Schools

Artificial intelligence (AI) is becoming increasingly significant in our daily lives, with applications in disaster prevention, agriculture, civil engineering, and healthcare, among others [33,34,35,36]. Educating all age groups, from elementary school students to the elderly, about AI and how to use it is crucial for fostering a society where AI is effectively utilized. AI education should not be limited to higher education but should be integrated into all levels, starting from elementary school through secondary education [37].
In 2019, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) evaluated educational programs provided by governments and institutions, adopting a policy to support research aimed at developing standardized AI curricula for K-12 education [13]. This initiative has led to an increase in research on K-12 education and the emergence of conferences and journals dedicated to the topic [38].
As children increasingly interact with new technological tools and engage with learning processes through AI systems, the importance of AI education tailored to children is growing. For instance, Ref. [39] found significant improvements in elementary students’ spatial reasoning skills after participating in short- or long-term robotic interventions.
Ref. [40] developed a Model-Tracing Intelligent Tutor (MIT) to interpret students’ mathematical problem-solving behaviors, diagnose learning difficulties, and provide individual feedback. Their findings showed that the model-tracing-based learning diagnosis system significantly outperformed traditional web-based tests in helping students learn mathematics.
Ref. [41] investigated the impact of robotics-based education on students’ mental rotation abilities. Their study revealed that robotics-based education significantly improved boys’ mental rotation skills compared to the control group, while no significant difference was observed for girls. Although the study provides evidence of the potential of robotics-based education, it also highlights the need for future research to deeply analyze gender differences in learning outcomes achieved through educational robotics.
Ref. [42] emphasized the importance of introducing children to AI concepts at an early age and proposed a curriculum design focusing on why, what, and how to teach AI. Similarly, Ref. [31] echoed this perspective, underscoring the value of early AI education for children.

1.3. Purpose and Significance of This Study

In the 21st century, technology has led to revolutionary changes in the field of education. With these transformations, knowledge is no longer acquired solely through individual memory or within classroom settings but also through interactions with online networks and communities. As a result, the learning process is conceptualized as a network structure in which knowledge is continuously updated by establishing connections between different information nodes. In this context, connectivism theory asserts that knowledge changes rapidly and is particularly effective in fields requiring constant updates, such as technology and digital literacy. The connectivist paradigm views learning as a multilayered networked process [43]. In other words, connectivism aims to redefine teaching and learning processes in knowledge societies using emerging technologies. Additionally, connectivism explains learning in e-learning environments [44] and contributes to modernizing traditional education by integrating e-learning scenarios into face-to-face learning settings [45].
Within this framework, artificial intelligence (AI) applications offer significant contributions to educators who seek to integrate digital technology into learning environments. AI is defined as the ability of a digital device to perform tasks typically attributed to humans [46]. In today’s world, innovative technologies such as AI are increasingly playing a pivotal role in classrooms and student learning processes [47,48]. Consequently, AI research has focused on student learning, teacher instruction, student performance evaluation, and school administration [49]. Studies related to student learning primarily explore academic performance, 21st-century skills, student motivation, and engagement [50,51,52].
When considering these aspects as a whole, no previous study has been found that integrates the impact of AI applications on different variables at the elementary education level using a mixed-meta method combined with quantitative analyses. In this research, the effect of different variables on AI applications is examined in three dimensions, aiming to provide an original perspective to the literature by addressing the following questions:
  • What is the impact of AI applications on different variables within the scope of the meta-analysis (quantitative dimension)?
  • What do participants report about AI applications within the scope of the meta-thematic analysis (qualitative dimension)?
  • What are teachers’ perspectives within the scope of the Rasch measurement model (quantitative dimension)?
In this context, following a document review methodology within a mixed-meta method complemented with quantitative analyses, this study focuses on the points below:
Within the scope of the meta-analysis, the following was performed:
  • Determining the overall effect size of different variables on AI applications;
  • Assessing the effect size of different variables on AI use based on subject area, the duration of implementation, and sample size.
Within the scope of the meta-thematic analysis, the following was performed:
  • Identifying the impact of AI applications on learning environments and determining potential challenges and solutions in AI implementation;
Within the scope of the Rasch measurement model (teacher perspectives), the following was performed:
  • Conducting a general analysis of teachers’ opinions on AI applications;
  • Analyzing the leniency or strictness of evaluators (jury members);
  • Performing a difficulty analysis of AI-related assessment items (criteria).
This study aims to contribute to the field by providing a comprehensive evaluation of AI applications in elementary education through a systematic, data-driven approach.

2. Methods

The methodology of a scientific study encompasses every stage of the research process, from its justification and content to the concepts used, measurement techniques applied, the analysis of collected data, and the interpretation of results [53]. Methodology can be classified into qualitative, quantitative, and mixed-methods approaches [54]. Among these, methodological pluralism—which combines quantitative and qualitative methods—has gained significant traction in fields such as education and sociology [55].
Methodological pluralism involves the holistic analysis of quantitative and qualitative data through document analysis [56]. Mixed-methods research, as defined by [57], aligns with this concept of methodological pluralism. It involves the integration of qualitative and quantitative approaches, either concurrently or sequentially, to create a unified dataset.
This study adopts a methodological pluralism framework to evaluate the use of AI in primary schools, encompassing both qualitative and quantitative analyses. It employs a mixed-meta approach, which integrates quantitative and qualitative methods within a single research process. Specifically, the methodological process includes three methods, outlined below:
  • Meta-analysis: a quantitative synthesis of data to determine the effect size of AI applications;
  • Meta-thematic analysis: a qualitative examination of recurring themes in the literature, focusing on the effects of AI applications in educational contexts;
  • The Rasch measurement model: a quantitative analysis of participant opinions, providing insights into teacher perspectives and evaluating response consistency.
This combination of methods ensures a scientifically robust and holistic research process. The research integrates findings from meta-analysis, meta-thematic analysis, and the Rasch model to comprehensively analyze AI applications in education. A visual representation of this methodological framework is presented in Figure 1.
The mixed-meta method involves the holistic analysis of quantitative and qualitative data based on document examination. In other words, a mixed-meta method allows for the analysis of quantitative data using statistical programs such as CMA V3/MetaWin 2.1, and qualitative data using software like Nvivo/Maxqda 2020, thereby enabling the integration of both datasets into a single study framework. This makes the method a comprehensive and rich approach to research [59]. The Multiple Complementary Approach (McA), a design within this method, integrates meta-thematic analysis with the quantitative research process to address the shortcomings of meta-thematic findings, support the obtained results, and provide a more holistic perspective [60]. The analysis of scientifically valuable quantitative and qualitative data is crucial in this research process. In this context, the mixed-meta method integrates meta-analysis, meta-thematic analysis, and the Rasch analysis based on participant views in the quantitative dimension. This three-phase approach enhances the depth of this research and ensures the generation of more comprehensive and valid results.
Method Phases:
Meta-analysis: statistical analysis of quantitative data, determining the impact of different variables on AI applications;
Meta-thematic analysis: thematic analysis of qualitative data, exploring the impact of artificial intelligence applications on education;
Rasch measurement model: quantitative analysis of teachers’ opinions, complementing and validating the two document-based phases.

2.1. Meta-Analysis Process

In the first dimension of the research, meta-analysis was used. Ref. [61] defined the concept of a meta-analysis as a statistical analysis process aimed at integrating research findings by combining the analysis results obtained from individual studies. A meta-analysis is a statistical method that, instead of examining individual studies one by one, aims to gather various studies effectively and reliably to produce broader and more meaningful results [62]. In this dimension of the research, the meta-analysis process combines the general effect size of studies involving artificial intelligence applications at the elementary school level as well as the effect sizes related to lessons, application duration, and sample size. By doing so, the results of various studies conducted at the elementary school level are effectively and validly integrated, leading to a holistic conclusion [62].

2.1.1. Data Collection and Analysis

Research on the use of artificial intelligence in elementary schools was conducted in both English and Turkish using the keywords “impact/effectiveness of artificial intelligence use/in elementary schools” in the literature. During this study, databases such as YÖK, Google Scholar, Web of Science, Taylor & Francis Online, Science Direct, and ProQuest Dissertations & Theses Global were searched. The search was conducted according to the inclusion criteria listed below. These criteria are shown in Table 1.
Table 1 outlines the inclusion criteria used to select studies for the meta-analysis. Studies that did not meet these criteria were excluded from the analysis. In this context, studies that lacked access permission, did not contain quantitative data, lacked the necessary data for analysis, were found in multiple databases, or did not involve an experimental process were excluded from the analysis. The number of studies included and excluded, along with the reasons for exclusion, are shown in the PRISMA flow diagram [63] in Figure 2.
As shown in Figure 2, the screenings initially identified N = 1475 studies examining the impact of various variables on the use of artificial intelligence. Of these, 182 were excluded due to duplication, 486 due to irrelevant topics, 437 due to failure to meet the inclusion criteria identified through reading the abstracts, and 64 due to insufficient digital data. As a result, 12 studies were included in the meta-analysis. Additionally, inter-rater reliability was calculated using the formula [agreement/(agreement + disagreement) × 100] proposed by [64], and the reliability level of the research was determined to be 0.90. The data were analyzed using the CMA 2.0 program.
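The inter-rater reliability formula of [64] is simple percent agreement; the sketch below illustrates it with hypothetical counts (the tallies are ours, chosen only to reproduce a 0.90-level result, not the study's actual coding decisions).

```python
def inter_rater_reliability(agreements: int, disagreements: int) -> float:
    """Miles-Huberman reliability: agreement / (agreement + disagreement) * 100."""
    return agreements / (agreements + disagreements) * 100

# Hypothetical tally: 27 shared coding decisions, 3 disputed ones.
print(inter_rater_reliability(27, 3))  # 90.0, i.e., a reliability level of 0.90
```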

2.1.2. Effect Size and Model Selection

Meta-analysis procedures were carried out using the CMA 2.0 software. The effect size (g) obtained from the analysis was interpreted according to the effect levels defined by [65]. The data were assessed and interpreted within the framework of the random-effects model (REM). Ref. [66] pointed out that the use of the fixed-effects model is limited in most cases, emphasizing that the REM is a more appropriate option. Therefore, in this research, the REM was preferred.
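To make the effect-size pipeline concrete, the sketch below reconstructs the two computations CMA performs here, Hedges' g per study and the random-effects pooling, using the standard formulas; it is illustrative only, and the function names and inputs are ours, not part of CMA or this study's code.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)
    between an experimental group (1) and a control group (2)."""
    # pooled standard deviation
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # Hedges' small-sample correction
    return j * d

def pooled_effect_rem(effects, variances, tau2):
    """Random-effects pooled estimate: inverse-variance weights with the
    between-study variance tau^2 added to each study's sampling variance."""
    w = [1 / (v + tau2) for v in variances]
    return sum(wi * g for wi, g in zip(w, effects)) / sum(w)
```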

2.1.3. Moderator Analysis

In the meta-analysis of studies on artificial intelligence applications, a heterogeneity test was performed and an I² value of 85.90 was found. This value made it possible to evaluate the overall effect size of artificial intelligence applications as well as the role of different variables. Heterogeneity rates of 75% or higher are considered high, in which case a moderator analysis is recommended [67]. Therefore, in this research, to deepen the meta-analysis, various factors influencing artificial intelligence applications were examined and a moderator analysis was conducted.
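The heterogeneity statistics cited here follow directly from the study-level effects; a minimal sketch of Cochran's Q and the Higgins-Thompson I² (standard formulas, not this study's code) is given below.

```python
def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance-weighted squared deviations
    of the study effects from the fixed-effect mean."""
    w = [1 / v for v in variances]
    mean = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    return sum(wi * (g - mean) ** 2 for wi, g in zip(w, effects))

def i_squared(q: float, k: int) -> float:
    """Higgins-Thompson I^2 = (Q - df) / Q * 100 with df = k - 1,
    floored at zero; values >= 75% are read as high heterogeneity."""
    return max(0.0, (q - (k - 1)) / q * 100)
```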

2.1.4. Publication Bias

In the meta-analyses, reliability is a key aspect. In studies where effect size is calculated, the inclusion of published studies or those with significant differences can raise concerns about publication bias. For this reason, certain calculations were made to assess publication bias in the meta-analyses. Figure 3 shows the reliability calculations related to this in a funnel plot.
Figure 3 shows a funnel plot that presents a visual summary of the meta-analysis dataset and also illustrates the potential for publication bias [67]. Funnel plots are used to detect publication bias in meta-analyses [68]. In this plot, the horizontal axis represents effect size, and the vertical axis represents sample size, variance, or standard error values. The funnel plot highlights the relationship between effect size and sample size. As sample size increases, studies tend to cluster closer to the average effect size at the top of the plot [69]. If there is no publication bias, the plot should form a symmetrical, inverted funnel shape, as seen in Figure 3 [70]. Upon examining Figure 3, it can be observed that there is a balanced distribution of studies on both the left and right sides of the symmetry axis. This indicates that no publication bias was found in this research.
Rosenthal’s fail-safe N is a method used to test for publication bias [71]. The N value indicates how many unpublished null studies would be required to invalidate the existing effect. A high N value suggests that the results are valid [72]. In this research, the fail-safe N value is 841. Comparing this with the number of studies included in the analysis, this number is quite high, meaning no bias was detected.
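Rosenthal's fail-safe N also has a simple closed form; the sketch below states it for a one-tailed α = 0.05 threshold (Z = 1.645). The input Z-scores are hypothetical, not the values behind the N = 841 reported above.

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished null (Z = 0) studies
    would pull the combined Stouffer Z below the one-tailed .05 threshold."""
    k = len(z_scores)
    return (sum(z_scores) / z_alpha) ** 2 - k

# Hypothetical example with five study Z-scores:
print(round(fail_safe_n([2.1, 1.8, 2.5, 3.0, 1.6]), 1))  # ~39.7
```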

2.2. Meta-Thematic Analysis Process

As part of the mixed-meta method, the second dimension of this study applied the meta-thematic analysis process. Meta-thematic analysis can be defined as a type of analysis where participants’ views (raw data) from qualitative research on a particular topic are re-evaluated, and themes and codes are extracted [59]. In this study, qualitative research on the use of artificial intelligence at the primary school level was reviewed and evaluated through the meta-thematic analysis process. In meta-thematic analysis, it is not the quantity of data that is important but rather working with enough data to reach saturation. This type of analysis involves developing new codes and themes from the data obtained through document analysis. In other words, meta-thematic analysis is a process based on text or verbal data that combines qualitative findings to generate new themes and codes. The themes and codes obtained in this process are included in this study to provide broader and more comprehensive results. In the qualitative dimension of this study, themes and codes related to the use of artificial intelligence at the primary school level were created using the meta-thematic analysis process [60].

2.2.1. Data Collection and Review

Qualitative studies on the use of artificial intelligence at the primary school level were gathered using the document review method. After a preliminary reading, suitable studies were selected through a four-stage review: reading the title, the short summary, the expanded summary, and the findings section. After examining the findings section, it was decided whether the study would be included in the current research [58]. As indicated in the PRISMA diagram, the studies were selected after this four-stage process. Ultimately, four studies were reviewed in the meta-thematic analysis process. Document analysis is defined as the process of collecting, examining, evaluating, and analyzing different documents as primary sources of research data [73]. In other words, document analysis includes a series of processes involving the examination and evaluation of printed and digital (computer-based and internet-accessible) data [74]. The meta-thematic data obtained from the qualitative studies identified through document analysis were processed using the Maxqda program. The data were transferred into the program, and content analysis—a commonly used method [74]—was applied to analyze indicators, comments, and discourses [75]. In this context, the participant opinions obtained in this research were re-analyzed to generate different codes and themes.

2.2.2. Coding Process

An important phase of the meta-thematic analysis process is coding. The opinions of participants were re-formulated and codes were created, grouping similar codes under common themes. In this phase, the findings of the meta-thematic analysis process—which were based on the opinions of participants from the reviewed studies—included the themes and codes determined for the study’s main objectives. The reliability of the coding process is demonstrated by ensuring that the generated codes are consistent with each other [76]. In this study, two themes were created at the end of the meta-thematic analysis process: “the impact of AI applications on learning environments” and “problems and solutions encountered in AI applications”. Two coders independently created the themes and codes. One of the coders was the researcher, and the other was an academic expert in the field. After completing the coding process, the themes and codes created by both coders were compared to check for consistency and agreement. Similar themes and codes were recorded jointly, while discrepancies were discussed until agreement was reached between the coders. The reliability between coders was calculated using Cohen’s Kappa coefficient [77]. The Kappa value for the theme “the impact of AI applications on learning environments” was 0.79, and for the theme “problems and solutions encountered in AI applications” it was 0.86 (Appendix A). The Kappa values ranging from 0.79 to 0.86 indicate a good to very good level of agreement between the coders.
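For replication purposes, Cohen's kappa corrects raw percent agreement for chance agreement; the snippet below is a minimal illustration with hypothetical counts (the table and resulting value are ours, not the study's coding data).

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table
    (rows: coder 1's categories, columns: coder 2's)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(k)) / n      # observed agreement
    p_e = sum(sum(table[i]) * sum(row[i] for row in table)
              for i in range(k)) / n**2               # chance-expected agreement
    return (p_o - p_e) / (1 - p_e)

# Illustrative 2-theme table: coders agree on 20 + 15 segments, split on 5.
print(round(cohens_kappa([[20, 3], [2, 15]]), 2))  # 0.75
```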

2.2.3. Reliability in the Meta-Thematic Analysis Process

Reliability methods used in qualitative research were also applied to the meta-thematic analysis in this study. During this phase, the concept of researcher triangulation [78] was used, where two researchers collaborated throughout the meta-thematic analysis process. Furthermore, direct quotes from participant expressions were included to provide the raw data source while creating the themes and codes. The related quotes were expressed with codes indicating the study and page number from which the quotes were taken. For example, the numeric expression M12-s.15 refers to M (article), 12 (study number), and “s.15” (page number). The codes obtained from the YÖK National Thesis Center were also abbreviated using thesis numbers. Through the meta-analysis of studies conducted on primary school AI use, effect sizes were identified, and codes and themes were created through the meta-thematic analysis. To support these findings and provide a holistic table, the opinions of class teachers were analyzed using the Rasch measurement model.

2.3. Rasch Measurement Model Analysis Process

In the third dimension of the research, quantitative data related to teacher opinions on the use of AI in primary schools were analyzed using the Rasch measurement model, developed by [79,80]. This model is based on objectivity [81]. It also evaluates other variable sources that may affect the test results, such as item difficulty levels, raters, scoring keys, conditions, tasks, and scoring criteria [82,83]. The multi-faceted Rasch model establishes necessary linear relationships between different facets (such as AI application usage levels, evaluation item characteristics, and rater behaviors) and forms these connections [84]. In the current research, the Rasch model includes three facets: 21 primary school teachers, 18 items related to AI use in education, and 4 educational programs (Turkish, Mathematics, Life Science, and Science). To collect the data, the researchers created the “Primary School Teachers’ AI Application Usage Evaluation Form”.
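For reference, the many-facet Rasch model underlying this three-facet design can be written in its standard rating-scale form; the notation below is the conventional one from the Rasch literature, not reproduced from this article.

```latex
% Log-odds of a rating moving from category k-1 to category k:
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
% B_n : measure of element n of the object facet (here, a curriculum's AI-use level)
% D_i : difficulty of evaluation item i
% C_j : severity (strictness) of rater j
% F_k : difficulty of rating-scale step k
```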

2.3.1. Study Group

The study group consisted of 21 classroom teachers working in public primary schools during the 2024–2025 academic year (Appendix D). The raters independently assessed the levels of artificial intelligence application usage using an 18-item evaluation form. Since studies based on the Rasch measurement model do not assume generalizing the results of the sample data to the population [79] (pp. 2–15), no generalization issues were encountered.

2.3.2. Research Data and Analysis

To collect data for this study, the “Primary School Teachers’ AI Application Usage Evaluation Form” was used (Appendix B). The evaluation form—developed within the framework of the Rasch measurement model and consisting of 18 items—assesses teachers’ knowledge acquisition, knowledge deepening, and knowledge creation under the headings of curriculum and assessment, pedagogy, and the application of digital skills. The form includes evaluations of the Turkish, Mathematics, Science, and Life Science curricula. The evaluation form was created based on a literature review [85,86] and expert opinions. Afterward, the form was re-examined by experts for verification, and necessary additions and deletions were made. This form contains a total of 18 criteria. The content validity index (CVI) of the items was calculated using the Content Validity Ratio (CVR) formula developed by [87] and found to be 0.81 (Appendix C). According to [88], this value is statistically significant at the 0.05 level. The analysis of the AI application evaluation form was conducted using the FACETS analysis program developed by [79] (pp. 2–15). FACETS is an analysis program frequently used in the multi-faceted Rasch measurement model. This program generally includes three main facets: rater, ability, and task [89].
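Lawshe's CVR, used above for the content validity check, has a simple closed form; the sketch below illustrates it with hypothetical panel counts (the numbers are ours, not the study's expert ratings).

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2): +1 when every expert rates the
    item 'essential', 0 when exactly half do, negative below that."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def content_validity_index(item_cvrs) -> float:
    """CVI: mean CVR across the retained items."""
    return sum(item_cvrs) / len(item_cvrs)

# Illustration: 9 of 10 experts rating an item 'essential' gives CVR = 0.8.
print(content_validity_ratio(9, 10))  # 0.8
```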

3. Findings

This section interprets the findings obtained through the meta-analysis, meta-thematic analysis, and participant opinions on various variables related to the use of AI applications in both classroom and out-of-class teaching environments. In the first phase of this study, meta-analytic findings based on document analysis are presented; in the second phase, results of the meta-thematic analysis are provided; and in the third phase, participant opinions are supported by the quantitative data.

3.1. Meta-Analysis Findings on AI Applications

When examining the meta-analysis findings in Table 2, it was determined that the effect size of academic achievement scores (AA) for AI-supported applications—calculated using the REM method—was g = 0.51 [0.28; 0.74]. This effect size is classified as moderate according to the classification in [65], indicating that AI applications have a positive and beneficial effect on these variables. Additionally, a significant difference was found between the scores based on test types (p < 0.05).
When examining the results of the heterogeneity test in Table 2, it is seen that the effect sizes of the scores for AI applications exhibit a heterogeneous distribution (QB = 163.11; p < 0.05). The I² value was calculated as 85.90%, indicating that approximately 86% of the observed variance originates from true variance among the studies. According to [67], an I² value of 25% is considered low, around 50% is moderate, and 75% or higher is high heterogeneity. The calculated I² value of 85.90% in this study points to a high level of heterogeneity [90]. This suggests the presence of moderator variables affecting the total effect size. In other words, the detection of high heterogeneity indicates the need for a moderator analysis [72].
As presented in Table 3, subject area (lessons), the duration of implementation, and sample size were selected as moderator variables. In the significance test, a significant difference was found for the duration of implementation (QB = 7.69; p < 0.05); however, no significant differences were observed in terms of lessons (QB = 1.7; p > 0.05) or sample size (QB = 1.29; p > 0.05). Nevertheless, when the results of this analysis are evaluated overall, it can be stated that various variables have a moderate effect on artificial intelligence applications.

3.2. Meta-Thematic Findings Regarding Artificial Intelligence Applications

In this part of the study, the themes and codes obtained through the meta-thematic analysis of the studies accessed regarding the effects of various variables on artificial intelligence applications are presented. The data related to the effects of different variables on artificial intelligence applications were analyzed, and two themes were formed: the impact of artificial intelligence applications on educational settings, and challenges encountered and solutions proposed. The themes and codes are illustrated in models. Additionally, direct quotations are provided as reference sentences within the comments.
When examining Figure 4, the contributions of artificial intelligence applications to learning environments can be seen in the model as codes. Some of the codes in this model can be explained as providing support for challenging subjects, presenting topics through visualization, enhancing problem-solving skills, fostering intrinsic and extrinsic motivation, supporting socio-emotional development, and improving presentation skills. Expressions and quotations that can serve as references to these codes are stated in the studies as follows:
“I like to learn English with it (the AI coach) as it helps improve my English competence”, and
“You can get (virtual) flowers and awards if you practice English with the AI coach every day and achieve good performance” (M1-p. 6).
In another study, the following is conveyed: “He also started to improve his oral skills, and finally gave presentations in front of large audiences” (M2-p. 1890).
Additionally, in the study coded (M5-p. 12), it is stated as follows: “I thought it was interesting because I was able to actually control the AI to play rock-paper-scissors with only two fingers”.
Furthermore, it can be stated that artificial intelligence applications help students develop critical thinking skills, promote learning through exploration, enhance problem-solving skills, and foster the acquisition of various other competencies. However, some challenges may be encountered during the implementation of these applications. The problems faced by practitioners and the proposed solutions are presented in the figure below.
When examining Figure 5, the problems encountered in learning environments during the artificial intelligence (AI) application process and the corresponding solutions are visualized. Some of these issues are indicated with the following codes: AI applications being deemed insufficient, difficulty in understanding questions asked, a lack of knowledge about AI, inability to produce adequate outcomes, concerns about personal data being compromised, lack of emotional support from AI applications, and anxiety related to AI. Expressions related to these codes include those outlined below:
“In my opinion, artificial intelligence is software that humans create, that we decide on the different things it learns and, after that, the computer adds them to other things it learned, like when we trained it with pictures, we also showed some pictures, and not always the computer was right, so we tried to give it more pictures and teach it things” (M3, p. 187).
In the study (M1, p. 10), the limitations of AI in sentence formation and vocabulary memorization were stated as follows: “For instance, one student indicated that ‘… it (the AI coach) improves our oral English through several ways, such as English shadowing, mimicking picture books, and memorizing vocabulary…’ Another student wrote, ‘it (the AI coach) contains many resources linked to our textbooks, picture books, and movie clips for budding practice’”.
Additionally, concerns about AI were expressed as follows: “I had concerns about the potential for AI to dominate the world, given its ability to complete tasks in just a few steps” (M5, p. 12).
During the analysis process, solution suggestions for potential problems in AI applications were identified. Some of the proposed solutions include the following: students should actively engage in learning efforts with AI; AI applications should be used through teamwork; AI application activities should be conducted interactively and actively; security measures should be prioritized in AI applications.
Quotations referencing these suggestions include the following:
In the study (M2, p. 1822), “Students building basic robotic models benefit when they are working individually; meanwhile, students might work better in teams (ideally, two to three members per team) when working on advanced robotic models that include writing code (programming)”.
Another example states the following: “In order to reduce such fraud, I think it is more convenient and accurate to use AI than humans” (M4, p. 12).
In conclusion, while certain issues may arise during AI applications, these problems can be resolved by utilizing the proposed solutions.
Findings derived from classroom teachers’ opinions have been tabulated and expressed using the Rasch measurement model. This approach provided quantitative support to the data obtained through the mixed-meta process, presenting a detailed interaction of all the dynamics of this study.

3.3. Findings Related to the Rasch Measurement Model for Artificial Intelligence Applications

Since this research was conducted using a quantitatively supported mixed-meta method, this section presents the findings related to AI applications in primary school education using the multi-faceted Rasch measurement model. When analyzing the AI applications, the surfaces used in this study (teachers’ AI applications, judges’ strictness/generosity, and the appropriateness of the items used) and general information about these surfaces are provided in the calibration map shown in Figure 6. The measurement on the left side of Figure 6 falls between (−) and (+) and is the same for all three surfaces. On the measurement scale, rankings are made based on the levels of educational programs, the strictness/leniency of raters (jury members), and the difficulty levels of items. Educational programs are arranged such that the program with the highest use of artificial intelligence applications is at the top, while the program with the least application is at the bottom. Similarly, raters are ordered from the most lenient evaluator (J2) to the stricter ones. Additionally, more difficult items are positioned at the top, while easier items are placed at the bottom.
In the findings, four curricula (Turkish, mathematics, life sciences, and science education curricula), 21 judges, and 18 evaluation items related to the program content were taken into consideration. In this calibration map, it was found that the use of AI applications was at a high level in the science and mathematics curricula (science education program: FBP-4 and mathematics program: MP-2), while it was observed to be at a low level in the life sciences and Turkish education programs (life sciences program: HBP-3 and Turkish program: TP-1).
Among the judges (raters), J2 was identified as the most generous rater, while J11 was determined to be the strictest rater.
When the column containing the items related to the use of AI applications was examined, the item “I can enable students to prepare a presentation describing their environment using AI tools” (I17) was found to be the most difficult item, whereas the item “I know how to efficiently use AI applications in classroom activities” (I14) was identified as the easiest item.
After evaluating the findings from the calibration map, the analysis report prepared for the curricula is presented in Figure 7.
As a result of the Rasch analysis, the reliability coefficient was determined to be 0.91. This coefficient value indicates the reliability with which the curricula were ranked. Furthermore, when the data in Figure 7 were analyzed, it was observed that there were statistically significant differences between the curricula (χ² = 44.4; df = 3; p = 0.00). Additionally, the standard error (RMSE) of the logit values related to the curricula was found to be 0.07, indicating a very low level of error.
This error rate, with its adjusted standard deviation, is below the critical value of 1.0, which is considered acceptable. In this context, the order of AI usage rates in the curricula from high quality to low quality can be listed as follows:
FBP-4 (science education curriculum);
MP-2 (mathematics education curriculum);
HBP-3 (life sciences curriculum);
TP-1 (Turkish education curriculum).
In the Rasch analysis, the quality control limits for “infit” (internal consistency) and “outfit” (outlier-sensitive consistency) values are accepted to be between 0.6 and 1.4 [91]. During the decision-making process, “infit” values flag unexpected response patterns from judges, while “outfit” values flag unexpectedly distant (outlying) responses [92,93]. Upon examining the values in Figure 7, it can be seen that they fall within these limits.
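For readers unfamiliar with these statistics, the sketch below shows how FACETS-style mean-square fit values are conventionally computed from score residuals; it is an illustrative reconstruction under standard Rasch definitions, not code from this study (the function name and inputs are ours).

```python
def fit_mean_squares(residuals, variances):
    """Conventional Rasch fit statistics for one facet element.

    residuals: raw score residuals y = observed - expected, one per rating;
    variances: model variances W of those ratings.
    Outfit is the mean squared standardized residual (outlier-sensitive);
    infit weights each squared residual by its information (sum y^2 / sum W).
    """
    z_squared = [y**2 / w for y, w in zip(residuals, variances)]
    outfit = sum(z_squared) / len(z_squared)
    infit = sum(y**2 for y in residuals) / sum(variances)
    return infit, outfit

# Values near 1.0 indicate model-consistent ratings; the 0.6-1.4 band
# cited above [91] is the acceptance range applied to both statistics.
```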
In Figure 8, information regarding the strictness/leniency of the judges in relation to their evaluations of AI use in primary education is presented. The judges’ scores are ranked from the strictest to the most lenient. The judge identified with the code J2 is seen to be the most lenient, while the judge identified with the code J11 is the strictest. Additionally, the judge separation index was observed to be 6.44, and the reliability coefficient was noted as 0.98. A statistically significant difference was also found between the strictness/leniency of the judges’ scores (χ² = 872.8; df = 20; p = 0.00).
As a result, it was determined that the score observed in J2, with 283, was the most lenient, while the score observed in J11, with 150, was the strictest.
When the infit (internal consistency) and outfit (outlier-sensitive consistency) values of the surfaces in Figure 8 were evaluated, it was found that the judge identified with the code J15 did not meet the infit and outfit criteria (range of 0.6 to 1.4). In this case, it can be stated that the mean of the infit and outfit squares for the judge coded as J15 fell outside the defined limits. This can be interpreted as the lenient judge (J15) not exhibiting consistent scoring behavior while evaluating AI applications.
On the other hand, it was noted that the other judges (17 judges) demonstrated consistency in scoring among themselves as their values fell within the expected quality control range, and thus, they can be considered as appropriate.
Upon examining the results of the data analysis presented in Figure 9, it was observed that the separation index is 3.04 and the reliability coefficient is 0.90. This reliability coefficient indicates that the items used in this study to determine teachers’ levels of using AI tools are reliable. Additionally, there were statistically significant differences among the item difficulty levels used to evaluate teachers’ opinions regarding their use of AI tools (χ² = 181.7; df = 17; p = 0.00).
The most difficult items related to the use of AI applications were identified as outlined below:
“I can guide students to prepare a presentation describing their environment using AI tools”.
“I can enable students to create simple patterns (e.g., drawing, storytelling) using AI tools”.
“I can integrate AI applications into pedagogical methods and techniques to contribute to students’ learning processes”.
On the other hand, the easiest items were as outlined below:
“I know how to use AI applications effectively in classroom activities”.
“I know how AI applications can support teaching activities in a classroom setting”.
“I can encourage students to collaborate and improve their skills using AI tools”.
The standard error (RMSE) for the analysis of criteria prepared by the researchers to determine the level of teachers’ use of AI applications was found to be 0.15, indicating that the standard error related to identifying the levels of AI application use is quite low.
Furthermore, when the infit (internal consistency) and outfit (outlier-sensitive consistency) statistical values for the research surfaces in Figure 9 were examined, it was noted that only the item coded I16 exceeded the outfit threshold. In this context, it can be stated that the mentioned item demonstrates inconsistency in evaluating the levels of AI use.
All other items, however, were within the specified consistency value limits, indicating their reliability in assessing AI application usage levels.

4. Discussion and Conclusions

In this study, a mixed-meta method, complemented with a quantitative research process, was employed to examine the effects of studies conducted at the primary school level on various variables and to reveal the current state of the literature in comparison with other studies.
In line with the purpose of the research, the following methodological steps were followed:
A meta-analysis was conducted first;
This was followed by meta-thematic analysis;
Finally, the participants’ opinions were collected in the quantitative dimension and evaluated using the Rasch measurement model.

4.1. Results of the Meta-Analysis Process

This study includes 37 quantitative studies for the meta-analysis, while 4 qualitative studies were included in the meta-thematic analysis process. In the quantitative scope, the teachers’ views on the relevant variables were collected using the prepared evaluation form. The findings based on the data obtained from the analysis are presented in this section. According to the meta-analysis, the results of the included studies based on the REM showed that the effect of various variables on the use of artificial intelligence was moderate (g = 0.51 [0.28; 0.74]), based on the classification in [65]. The results of other studies in the literature, which were not included in this study, support the research results [94,95]. Based on these results, it can be stated that different variables have an impact on the use of artificial intelligence applications.
During the analysis, a moderator analysis was also conducted regarding the effects of various variables on the use of artificial intelligence. The first moderator analysis, related to the duration of implementation, indicated that a duration of 1–4 weeks had an effect size of g = 0.59. Secondly, when subjects were used as moderators, the highest effect size (g = 0.81) was found for other subjects. Thirdly, regarding sample size as a moderator, large sample groups showed the highest effect size (g = 0.63). When analyzing the effect sizes of the moderators, a significant difference was observed for the duration of implementation (QB = 7.69; p < 0.05), while no significant difference was found for subjects (QB = 1.7; p > 0.05) or sample size (QB = 1.29; p > 0.05). The findings of studies in the literature, such as those by [96,97], support the findings of this study in terms of subjects and sample size. In contrast to studies in the literature that observed no effect of implementation duration on academic achievement, this research found a significant difference for the implementation-duration moderator. When considering the results of the moderator analyses as a whole, it can be stated that artificial intelligence applications have a moderate impact across all groups, with similar effect sizes. During the meta-analysis process, it was observed that the majority of studies regarding artificial intelligence applications were conducted in mathematics classes. In this context, further research on the use of artificial intelligence applications in other subjects is recommended.

4.2. Results of the Meta-Thematic Analysis Process

In the second phase, a document-analysis-based meta-thematic analysis was conducted, integrating the findings from the meta-analysis to validate them and expand the scope of the results. The themes and codes regarding the effects of artificial intelligence applications on learning environments and the problems and solutions encountered during their use were formed. The analysis revealed that students were in an interactive learning environment while using artificial intelligence applications. It can be stated that being highly interactive in the constructivist learning process helps make artificial intelligence applications more comprehensible [94]. One of the codes reached in the research was that artificial intelligence applications support individuals who learn at different paces and paths; Refs. [98,99] have reached similar findings in their studies. It can be said that an individual’s self-confidence in any task contributes to their success in that task. Our research concluded that artificial intelligence applications increase individuals’ self-confidence and enhance their creativity. This result is supported by findings from other studies using digital technology to support student learning [100,101].
In our research, it was also found that artificial intelligence applications improve problem-solving skills, creative thinking, and higher-order thinking abilities, aligning with findings in the literature indicating that technology-supported lesson content enhances students’ cognitive capacities [102,103].
Another theme created in our study is the problems and solutions encountered during the use of artificial intelligence applications. Studies indicating that individuals with low self-efficacy experience high computer anxiety [104] align with the findings in this research. The research findings show that students’ anxiety while using artificial intelligence applications may be related to their low self-efficacy in this area. One of the codes found was that students were disappointed by the responses provided by artificial intelligence during the process. Similar to this finding, the research of [105] suggests that poor performance of the applications does not meet students’ expectations. Another issue identified in this study is that artificial intelligence applications do not provide emotional support. The literature review shows that a lack of emotional support in artificial intelligence applications can negatively affect the targeted lesson outcomes [106,107]. Another identified issue is that students’ feedback on artificial intelligence usage is not understood by the application. Findings from studies in the literature [108,109,110] support this finding in our research.
It is suggested that artificial intelligence applications should have an attractive design as this will encourage students to develop a positive attitude toward using them [111]. Based on this, it can be concluded that artificial intelligence applications should be produced with engaging designs. Another problem encountered in the process is that educators do not have the necessary training, which prevents them from obtaining the desired results from the application. Our research found that teachers need sufficient knowledge and skills regarding artificial intelligence applications to overcome this issue. Ref. [98] study also aligns with our findings, indicating that teachers should receive technical training on artificial intelligence.
Data privacy and ethical concerns in artificial intelligence applications are among the other findings of this study. These concerns may cause anxiety among users during the implementation process. Ref. [112] stated in his research that how data are stored, who can access it, and how it is protected are significant factors limiting the integration of artificial intelligence in education. In another study, Ref. [113] emphasized that artificial intelligence has the potential to deepen existing inequalities and exacerbate privacy-related issues. The Rasch measurement model has been used to evaluate the artificial intelligence applications used by classroom teachers in their lessons.

4.3. Results Related to the Rasch Measurement Model Process

The quantitative section of our research presents the findings related to teachers’ views on their level of use of artificial intelligence applications, derived through analyses using the Rasch measurement model. In this way, the findings obtained through the mixed-meta processes were supported quantitatively to ensure the alignment of this study’s results. The Rasch measurement model adopts a fundamental approach that relates the probability of answering a question correctly to an individual’s ability [114]. The multi-faceted Rasch measurement model, developed by John M. Linacre, not only examines the relationship between individuals’ ability levels and the difficulty levels of the items on the measurement tool but also allows the evaluation of other variable sources that might affect test results, such as scorers, scoring keys, conditions, tasks, and scoring criteria [83]. In this context, simultaneous surfaces (the level of use of artificial intelligence applications, the rigidity/generosity of the jury, and the characteristics of the evaluation questions) that were prioritized and analyzed in the multi-faceted Rasch measurement model have been ranked among themselves. Awareness of artificial intelligence applications—a concept related to Industry 4.0—is crucial as these applications play a significant role in shaping the future of technology [115]. In the current study, teachers’ views were examined in the context of curricula by associating dimensions such as curriculum and assessment, pedagogy, and the application of digital skills with sub-dimensions like knowledge acquisition, knowledge creation, and knowledge deepening.
The analysis revealed that the science and mathematics curricula showed the highest levels of use among the teaching programs. Artificial intelligence, which has made significant advancements over the past 50 years, has become an important research area [116]. AI encompasses many cognitive areas of human intelligence, including learning, reasoning, planning, problem-solving, perception, natural language processing, deep learning, expert systems, image processing, sentiment analysis, speech recognition, and more [117]. The concentration of AI use in science and mathematics can therefore be attributed to these core functionalities of the technology itself.
When the raters' severity/leniency was evaluated in relation to the assessment of AI applications, the rater coded J2 was found to be the most lenient, while the rater coded J11 was the strictest. Furthermore, the raters were reliably ranked in terms of severity/leniency and differed from one another. Studies using the many-facet Rasch measurement model in the literature likewise indicate that raters (juries) can be both objective and biased at times [93,114,118,119]. In addition, the analysis yielded a rater separation index of 6.44, which is above the desired level. This value indicates that there are differences in scoring among the raters, that raters vary in their severity/leniency levels, and that severity/leniency-related scoring errors exist in the scores they assigned [120].
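As a point of reference, a separation index $G$ can be converted to a separation reliability $R$ through the standard Rasch relationship; applying it to the value above (our calculation, assuming Linacre's usual definition, since the source does not report $R$ explicitly) gives:

$$R = \frac{G^2}{1 + G^2} = \frac{6.44^2}{1 + 6.44^2} \approx 0.98$$

indicating that the raters are distinguished from one another with very high reliability, consistent with the interpretation above.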
The items in the teacher evaluation form on the level of use of AI applications were found to serve their purpose of measuring teachers' competence levels. When the items were analyzed, the most difficult were "I can help students prepare a presentation describing their surroundings using AI tools", "I can guide students in creating simple designs using AI tools (such as drawing pictures, creating stories)", and "I can integrate AI applications into pedagogical methods and techniques to support students' learning processes". Therefore, it can be said that teachers may face difficulties in using AI. Based on these results, how AI can be used effectively in educational environments to achieve the desired outcomes, and how designs can be built around the characteristics of the targeted learning gains, remains an area requiring further research. On the other hand, the easiest items were "I understand how to effectively use AI applications in classroom activities", "I understand how AI applications can support teaching activities in the classroom environment", and "I can encourage students to collaborate and develop using AI tools". Although the participants possess basic knowledge about AI, teachers still need training to keep pace with developing technologies [121].

4.4. Integrative Results of This Study

When the results of the entire research process are considered as a whole, it is evident that the data support each other. The meta-analysis found that AI studies in elementary schools were most frequently conducted in mathematics lessons [122]. In the Rasch measurement model section of this study, classroom teachers most commonly used AI applications in science and mathematics lessons; the results of these two processes therefore overlap. Furthermore, during the meta-thematic analysis, the training of practitioners in AI applications emerged as both a problem and a solution. In the rater-opinion section of the Rasch measurement model, the most difficult items concerned preparing presentations with AI, integrating AI applications into the curriculum, and preparing simple designs for teaching practice, indicating that educators need training in AI applications; thus, the results of these processes are compatible. Moreover, the meta-thematic theme on the effect of AI on educational environments, "providing interactive teaching opportunities", aligns with the Rasch finding that "I can encourage students to collaborate and develop using AI tools" was among the easiest items. In this way, the results of the two processes are integrated, as both point to an interactive classroom environment in which teachers facilitate collaboration among students using AI tools [123]. The use of artificial intelligence applications offers several advantages, such as providing personalized learning opportunities for students, giving immediate constructive feedback, creating collaborative learning environments, ensuring access to rich educational resources, identifying gaps in students' learning through smart learning analytics, and enhancing students' educational performance through continuous learning support [124]. In today's technological age, the findings of this study, which combines qualitative and quantitative research within the framework of methodological pluralism, reflect how artificial intelligence, as the most prominent tool of the connectivist paradigm in modern educational environments, encapsulates and supports these results.

4.5. Limitations and Future Research

Although this study presents a mixed-methods approach supported by quantitative analyses within the framework of methodological pluralism, several limitations should be considered. The data collected for the meta-analysis and meta-thematic analysis were limited to certain databases. The meta-analysis focused solely on the impact of AI applications on academic achievement; future meta-analyses could also include effects on attitudes and retention. Moderator variables in this study were limited to the duration of the intervention, the subject areas, and the sample size. In the Rasch measurement model, data were collected on the Turkish, mathematics, science, and life-sciences curricula; research on artificial intelligence applications in the art, music, and physical education curricula in primary schools could also be conducted. Additionally, the survey questions were developed based on teachers' digital competencies [86], covering curriculum and assessment, pedagogy, and digital skills; studies could address other professional practice areas within the framework of teachers' knowledge and technology competencies [86]. Finally, the quantitative aspect of this study gathered opinions only from classroom teachers; the scope could be expanded by including raters with different levels of expertise.

4.6. Recommendations

The application duration, subject areas, and sample sizes in AI-related research have significant effects on academic success and on the impact of AI on educational environments. The use of the mixed-meta method, supported by the Rasch measurement model, has provided a more holistic perspective, allowing for a deeper exploration of the topic. Based on the limitations and findings of this study, the following points are recommended:
  • Research on AI applications in primary school subject areas such as art, music, and physical education can be conducted. In addition to quantitative methods, qualitative methods could be employed to explore the effectiveness and applicability of survey questions;
  • The meta-analysis phase of this study could include investigations into the impact of AI applications on attitudes and long-term retention;
  • Studies could explore teachers’ information and technology competencies [86] within other professional practice areas;
  • This study focused on perspectives from classroom teachers. Including evaluators from different expertise levels could broaden the scope of the study;
  • Despite teachers’ positive expectations regarding AI, it is essential that they first familiarize themselves with the technology and learn how to integrate it into their classrooms. Many teachers may regard AI as an advanced technological product with which they have no prior experience. In this regard, in-service training could increase teachers’ knowledge about AI and improve their integration of this technology, significantly enhancing student success and the learning experience [86];
  • Given the methodological diversity, the use of a mixed-meta method combined with quantitative analyses has allowed for a comprehensive examination of the findings, with detailed insights into how various variables affect the use of AI applications. Therefore, it is recommended to apply the mixed-meta method integrated with either qualitative or quantitative analyses in other areas to achieve comprehensive research findings;
  • Policymakers should take necessary measures to address concerns related to ethics, data security, and human rights as AI becomes more integrated into education;
  • Artificial-intelligence-supported assessment tools are highly effective in monitoring student performance and providing immediate feedback. Educational institutions can make these systems more widespread to reduce teachers’ workload and track students’ progress in more detail;
  • For students to succeed in AI-supported learning environments, they need to possess critical thinking, problem-solving, and digital literacy skills. Curriculum adjustments should be made to equip students with these skills.

Author Contributions

Methodology, Y.T. and Y.D.; Software, Y.T.; Validation, Y.D., V.B. and S.A.; Formal analysis, Y.T.; Investigation, Y.D. and S.A.; Resources, S.A.; Data curation, Y.D.; Writing—original draft, Y.D., V.B. and S.A.; Writing—review & editing, Y.T. and V.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request due to restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Agreement Value Ranges of Themes Related to Artificial Intelligence Applications

Effect on Learning Environments

| K1 \ K2 | + | − | Σ |
|---|---|---|---|
| + | 26 | 2 | 28 |
| − | 3 | 18 | 21 |
| Σ | 29 | 20 | 49 |

Problems Encountered

| K1 \ K2 | + | − | Σ |
|---|---|---|---|
| + | 14 | 1 | 15 |
| − | 0 | 9 | 9 |
| Σ | 14 | 10 | 24 |

Related Solution Suggestions

| K1 \ K2 | + | − | Σ |
|---|---|---|---|
| + | 12 | 1 | 13 |
| − | 1 | 7 | 8 |
| Σ | 13 | 8 | 21 |

Problems Encountered and Solution Suggestions

| K1 \ K2 | + | − | Σ |
|---|---|---|---|
| + | 26 | 2 | 28 |
| − | 1 | 16 | 17 |
| Σ | 27 | 18 | 45 |

K1 and K2 denote the two coders; "+" indicates a code was assigned and "−" that it was not.
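As an illustration of how these tables feed the inter-coder agreement statistic, Cohen's kappa [77] can be computed from the first table (this arithmetic is ours; the article reports only the raw counts here). Observed agreement is $P_o = (26 + 18)/49 \approx 0.90$ and expected agreement is $P_e = (28/49)(29/49) + (21/49)(20/49) \approx 0.51$, so

$$\kappa = \frac{P_o - P_e}{1 - P_e} \approx \frac{0.90 - 0.51}{1 - 0.51} \approx 0.79$$

which falls in the range conventionally interpreted as substantial agreement.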

Appendix B. Primary School Teachers’ Artificial Intelligence Applications Evaluation Form

[The evaluation form is reproduced as an image in the published article.]

Appendix C. The Content Validity Ratios of the Artificial Intelligence Application Evaluation Items

| Item No | Items | Necessary | Useful/Insufficient | Unnecessary | CVC * |
|---|---|---|---|---|---|
| 1 | I can integrate AI-based resources into the curriculum to support course content. | 12 | – | – | 100% |
| 2 | I use AI applications to assess students’ progress. | 12 | – | – | 100% |
| 3 | I can update and enrich the curriculum using AI applications. | 11 | 1 | – | 83% |
| 4 | I can use AI tools to evaluate students’ performance in digital environments. | 12 | – | – | 100% |
| 5 | When assessing lessons with AI tools, I can better analyze students’ individual learning levels. | 10 | 1 | 1 | 67% |
| 6 | I can provide individualized feedback to students using AI-based assessment tools. | 11 | 1 | – | 83% |
| 7 | I can integrate AI applications into pedagogical methods and techniques to support students’ learning processes. | 10 | 1 | 1 | 67% |
| 8 | I understand how AI applications can support teaching activities in the classroom environment. | 12 | – | – | 100% |
| 9 | I can effectively use AI tools during lesson planning to provide students with richer learning experiences. | 11 | – | 1 | 83% |
| 10 | I can create AI-supported digital content (such as interactive lesson notes, videos) to help students better understand the subject. | 10 | – | 2 | 67% |
| 11 | I can use AI tools to encourage students’ active participation. | 12 | – | – | 100% |
| 12 | I can encourage students to collaborate and develop using AI tools. | 10 | 2 | – | 67% |
| 13 | I can guide students in using AI tools to create projects (such as drawing pictures, creating stories). | 10 | 1 | 1 | 67% |
| 14 | I understand how to effectively use AI applications in classroom activities. | 10 | 1 | 1 | 67% |
| 15 | I can use AI tools to facilitate students’ collaborative work. | 10 | – | 2 | 67% |
| 16 | I can ensure that students use digital tools safely and effectively while conducting research. | 12 | – | – | 100% |
| 17 | I can help students prepare a presentation describing their surroundings using AI tools. | 10 | – | 2 | 67% |
| 18 | I can guide students in creating simple designs using AI tools (such as drawing pictures, creating stories). | 10 | 1 | 1 | 67% |

* Number of experts: 12; Content Validity Criterion (CVC): 0.56; Content Validity Index (CVI): 0.81; CVI > CVC (0.81 > 0.56).
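The per-item percentages above are consistent with Lawshe's content validity ratio [87]; as a worked check (our arithmetic, not printed in the source), item 3, rated "necessary" by $n_e = 11$ of $N = 12$ experts, gives

$$CVR = \frac{n_e - N/2}{N/2} = \frac{11 - 6}{6} \approx 0.83$$

matching the 83% shown in the table; items rated necessary by all 12 experts reach 100%, and those with $n_e = 10$ yield 67%.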

Appendix D. Demographic Information of the Participants

| Participant Number | Gender | Professional Seniority | Educational Status |
|---|---|---|---|
| 1 | Female | 19 years | Major |
| 2 | Male | 18 years | Major |
| 3 | Female | 24 years | Major |
| 4 | Male | 23 years | Major |
| 5 | Female | 19 years | Major |
| 6 | Female | 22 years | Major |
| 7 | Male | 36 years | Minor |
| 8 | Male | 28 years | Major |
| 9 | Female | 25 years | Major |
| 10 | Female | 22 years | Major |
| 11 | Female | 35 years | Minor |
| 12 | Male | 23 years | Master’s |
| 13 | Female | 25 years | Major |
| 14 | Male | 19 years | Major |
| 15 | Female | 14 years | Major |
| 16 | Male | 26 years | Master’s |
| 17 | Male | 34 years | Minor |
| 18 | Male | 28 years | Major |
| 19 | Female | 22 years | Major |
| 20 | Female | 32 years | Major |
| 21 | Male | 24 years | Major |

References

  1. Candan, F.; Başaran, M. A meta-thematic analysis of using technology-mediated gamification tools in the learning process. Interact. Learn. Environ. 2023, 1–17. [Google Scholar] [CrossRef]
  2. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Pearson Education: Hoboken, NJ, USA, 2009. [Google Scholar]
  3. Akerkar, R. Introduction to Artificial Intelligence; Prentice-Hall of India Private Limited Learning Press: New Delhi, India, 2014. [Google Scholar]
  4. Ginsberg, M. Essentials of Artificial Intelligence; Morgan Kaufmann Publishers: Burlington, MA, USA, 2012. [Google Scholar]
  5. Komalavalli, K.; Hemalatha, R.; Dhanalakshmi, S. A survey of artificial intelligence in smart phones and its applications among the students of higher education in and around Chennai City. Shanlax Int. J. Educ. 2020, 8, 89–95. [Google Scholar] [CrossRef]
  6. Topol, E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again; Basic Books: Hachette, UK, 2019. [Google Scholar]
  7. Litan, A. Hype cycle for blockchain 2021; more action than hype. Gartner 2021, 21, 7–14. Available online: https://stephenlibby.wordpress.com/2021/07/14/hype-cycle-for-blockchain-2021-more-action-than-hype/ (accessed on 18 March 2025).
  8. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metab. Clin. Experimental. 2017, 69, 36–40. [Google Scholar] [CrossRef] [PubMed]
  9. Li, B.; Hou, B.; Yu, W.; Lu, X.; Yang, C. Applications of artificial intelligence in intelligent manufacturing: A review. Front. Inf. Technol. Electron. Eng. 2017, 18, 86–96. [Google Scholar] [CrossRef]
  10. Polat, M.; Aralan, T. Yapay zekâ tabanlı teknolojilerin BM Sürdürülebilir Kalkınma Hedeflerine katkıları. J. Orig. Stud. 2024, 5, 85–101. [Google Scholar] [CrossRef]
  11. Jing, X.; Zhu, R.; Lin, J.; Yu, B.; Lu, M. Education sustainability for intelligent manufacturing in the context of the new generation of artificial intelligence. Sustainability 2022, 14, 14148. [Google Scholar] [CrossRef]
  12. Chiu, T.K.F.; Chai, C.-s. Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability 2020, 12, 5568. [Google Scholar] [CrossRef]
  13. Pedró, F.; Subosa, M.; Rivas, A.; Valverde, P. Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development; UNESCO: Paris, France, 2019. [Google Scholar]
  14. Lee, K. A systematic review on social sustainability of artificial intelligence in product design. Sustainability 2021, 13, 2668. [Google Scholar] [CrossRef]
  15. Ta, M.D.P.; Wendt, S.; Sigurjonsson, T.O. Applying artificial intelligence to promote sustainability. Sustainability 2024, 16, 4879. [Google Scholar] [CrossRef]
  16. Maqbool, F.; Ansari, S.; Otero, P. The role of artificial intelligence and smart classrooms during COVID-19 pandemic and its impact on education. J. Indep. Stud. Res. Comput. 2021, 19, 7–14. [Google Scholar] [CrossRef]
  17. Mijwil, M.M.; Aggarwal, K.; Mutar, D.S.; Mansour, N.; Singh, R.S.S. The position of artificial intelligence in the future of education: An overview. Asian J. Appl. Sci. 2022, 10, 102–108. [Google Scholar] [CrossRef]
  18. Pantelimon, F.V.; Bologa, R.; Toma, A.; Posedaru, B.S. The evolution of AI-driven educational systems during the COVID-19 pandemic. Sustainability 2021, 13, 13501. [Google Scholar] [CrossRef]
  19. Self, J. The birth of IJAIED. Int. J. Artif. Intell. Educ. 2016, 26, 4–12. [Google Scholar] [CrossRef]
  20. Bracaccio, R.; Hojaij, F.; Notargiacomo, P. Gamification in the study of anatomy: The use of artificial intelligence to improve learning. FASEB J. 2019, 33, 444.28. [Google Scholar] [CrossRef]
  21. Santos, O.C. Training the body: The potential of AIED to support personalized motor skills learning. Int. J. Artif. Intell. Educ. 2016, 26, 730–755. [Google Scholar] [CrossRef]
  22. Roscoe, R.D.; Walker, E.A.; Patchan, M.M. Facilitating peer tutoring and assessment in intelligent learning systems. In Tutoring and Intelligent Tutoring Systems; Craig, S.D., Ed.; Nova Science Publishers: Hauppauge, NY, USA, 2018; pp. 41–68. [Google Scholar]
  23. Liang, Y.; Chen, L. Analysis of current situation, typical characteristics and development trend of artificial intelligence education application. China Electrification Educ. 2018, 3, 24–30. [Google Scholar]
  24. Xue, Q.; Li, F. Security risks and countermeasures in artificial intelligence education applications. J. Distance Educ. 2018, 36, 88–94. [Google Scholar]
  25. Mu, P. Research on artificial intelligence education and its value orientation. In Proceedings of the 1st International Education Technology and Research Conference (IETRC), Tianjin, China, 14–15 September 2019. [Google Scholar]
  26. Ouyang, F.; Jiao, P. Artificial intelligence in education: The three paradigms. Comput. Educ. Artif. Intell. 2021, 2, 100020. [Google Scholar] [CrossRef]
  27. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence Unleashed—An Argument for AI in Education; Pearson: London, UK, 2016. [Google Scholar]
  28. Panigrahi, C.M.A. Use of artificial intelligence in education. Manag. Account. 2020, 55, 64–67. Available online: https://ssrn.com/abstract=3606936 (accessed on 20 October 2024).
  29. Ryu, M.; Han, S. The educational perception on artificial intelligence by elementary school teachers. J. Korean Assoc. Inform. Educ 2018, 22, 317–324. [Google Scholar] [CrossRef]
  30. Jia, J.; He, Y.; Le, H. A Multimodal Human-Computer Interaction System and Its Application in Smart Learning Environments. In Proceedings of the Blended Learning. Education in a Smart Learning Environment: 13th International Conference ICBL 2020, Bangkok, Thailand, 24–27 August 2020. [Google Scholar]
  31. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar] [CrossRef]
  32. Qin, F.; Li, K.; Yan, J. Understanding user trust in artificial intelligence-based educational systems: Evidence from China. Br. J. Educ. Technol 2020, 51, 1693–1710. [Google Scholar] [CrossRef]
  33. Avci, O.; Abdeljaber, O.; Kiranyaz, S.; Hussein, M.; Gabbouj, M.; Inman, D.J. A review of vibration-based damage detection in civil structures: From traditional methods to machine learning and deep learning applications. Mech. Syst. Signal Process. 2021, 147, 107077. [Google Scholar] [CrossRef]
  34. Jain, P.; Coogan, S.C.; Subramanian, S.G.; Crowley, M.; Taylor, S.; Flannigan, M.D. A review of machine learning applications in wildfire science and management. Environ. Rev. 2020, 28, 478–505. [Google Scholar] [CrossRef]
  35. Nichols, J.A.; Herbert Chan, H.W.; Baker, M.A. Machine learning: Applications of artificial intelligence to imaging and diagnosis. Biophys. Rev. 2019, 11, 111–118. [Google Scholar] [CrossRef] [PubMed]
  36. Sharma, R.; Kamble, S.S.; Gunasekaran, A.; Kumar, V.; Kumar, A. A systematic literature review on machine learning applications for sustainable agriculture supply chain performance. Comput. Oper. Res. 2020, 119, 104926. [Google Scholar] [CrossRef]
  37. Su, J.; Yang, W. Artificial intelligence in early childhood education: A scoping review. Comput. Educ. Artif. Intell. 2022, 3, 100049. [Google Scholar] [CrossRef]
  38. Yue, M.; Jong, M.S.Y.; Dai, Y. Pedagogical design of K-12 artificial intelligence education: A systematic review. Sustainability 2022, 14, 15620. [Google Scholar] [CrossRef]
  39. Francis, K.; Rothschuh, S.; Poscente, D.; Davis, B. Malleability of spatial reasoning with short-term and long-term robotics interventions. Technol. Knowl. Learn. 2022, 27, 927–956. [Google Scholar] [CrossRef]
  40. Chu, Y.-S.; Yang, H.-C.; Tseng, S.-S.; Yang, C.-C. Implementation of a model-tracing-based learning diagnosis system to promote elementary students’ learning in mathematics. Educ. Technol. Soc. 2014, 17, 347–357. Available online: https://www.jstor.org/stable/jeductechsoci.17.2.347 (accessed on 24 November 2024).
  41. González-Calero, J.A.; Cózar, R.; Villena, R.; Merino, J.M. The development of mental rotation abilities through robotics-based instruction: An experience mediated by gender. Br. J. Educ. Technol. 2019, 50, 3198–3213. [Google Scholar] [CrossRef]
  42. Yang, W. Artificial Intelligence education for young children: Why, what, and how in curriculum design and implementation. Comput. Educ. Artif. Intell. 2022, 3, 100061. [Google Scholar] [CrossRef]
  43. Siemens, G. Connectivism: A learning theory for the digital age. Int. J. Instr. Technol. Distance Learn. 2005, 2, 14–16. Available online: http://www.itdl.org/Journal/Jan_05/article01.htm (accessed on 10 January 2025).
  44. Goldie, J.G.S. Connectivism: A knowledge learning theory for the digital age? Med. Teach. 2016, 38, 1064–1069. [Google Scholar] [CrossRef] [PubMed]
  45. Guerra, F.C.H. A Model for Putting Connectivism Into Practice in a Classroom Environment. Master’s Thesis, Universidade Nova, Lisbon, Portugal, 2022. [Google Scholar]
  46. Chiu, T.K. A Holistic approach to the design of Artificial Intelligence (AI) education for k-12 schools. TechTrends 2021, 65, 796–807. [Google Scholar] [CrossRef]
  47. Humble, N.; Mozelius, P. Artificial Intelligence in Education-a Promise, a Threat or a Hype? In Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics 2019 (ECIAIR 2019), Oxford, UK, 31 October–1 November 2019; pp. 149–156. [Google Scholar]
  48. Kaplan-Rakowski, R.; Grotewold, K.; Hartwick, P.; Papin, K. Generative AI and teachers’ perspectives on its implementation in education. J. Interact. Learn. Res. 2023, 34, 313–338. [Google Scholar]
  49. Chiu, T.K.; Xia, Q.; Zhou, X.; Chai, C.S.; Cheng, M. Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Comput. Educ. Artif. Intell. 2023, 4, 100118. [Google Scholar] [CrossRef]
  50. Fu, S.; Gu, H.; Yang, B. The affordances of AI-enabled automatic scoring applications on learners’ continuous learning intention: An empirical study in China. Br. J. Educ. Technol. 2020, 51, 1674–1692. [Google Scholar] [CrossRef]
  51. Li, M.; Su, Y. Evaluation of online teaching quality of basic education based on artificial intelligence. Int. J. Emerg. Technol. Learn. (iJET) 2020, 15, 147–161. [Google Scholar] [CrossRef]
  52. Luo, D. Guide teaching system based on artificial ıntelligence. Int. J. Emerg. Technol. Learn. (iJET) 2018, 13, 90–102. [Google Scholar] [CrossRef]
  53. Koçel, T. Yönetim ve organizasyonda metodoloji ve güncel kavramlar. İstanbul Üniversitesi İşletme Fakültesi Derg. 2017, 46, 3–8. [Google Scholar]
  54. Toraman, S. Karma yöntemler araştırması: Kısa tarihi, tanımı, bakış açıları ve temel kavramlar. Nitel Sos. Bilim. 2021, 3, 1–29. [Google Scholar] [CrossRef]
  55. Molina-Azorín, J.F.; López-Gamero, M.D.; Pereira-Moliner, J.; Pertusa-Ortega, E.M. Mixed methods studies in entrepreneurship research: Applications and contributions. Entrep. Reg. Dev. 2012, 24, 425–456. [Google Scholar] [CrossRef]
  56. Creswell, J.W.; Sözbilir, M. Karma Yöntem Araştırmalarına Giriş; Pegem Akademi Yayıncılık: Ankara, Türkiye, 2017. [Google Scholar]
  57. Creswell, J.W.; Plano Clark, V.L.; Gutmann, M.; Hanson, W. Advanced mixed methods research designs. In Handbook of Mixed Methods in Social and Behavioral Research; Tashakkori, A., Teddlie, C., Eds.; Sage: New York, NY, USA, 2003; pp. 209–240. [Google Scholar]
  58. Batdı, V. Farklı değişkenlerin öznel iyi oluş düzeyine etkisi: Nitel analizle bütünleştirilmiş karma-meta yöntemi. Gaziantep Üniversitesi Eğitim Bilim. Derg. 2024, 8, 53–83. [Google Scholar]
  59. Batdı, V. Yabancılara dil öğretiminde teknolojinin kullanımı: Bir karma-meta yöntemi. Milli Eğitim Derg. 2021, 50, 1213–1244. [Google Scholar] [CrossRef]
  60. Batdı, V. Meta-thematic analysis. In Meta-Thematic Analysis: Sample Applications; Batdı, V., Ed.; Anı Publication: Ankara, Türkiye, 2019; pp. 10–76. [Google Scholar]
  61. Glass, G.V. Primary, secondary, and meta-analysis of research. Educ. Res. 1976, 5, 3–8. [Google Scholar]
  62. Tsagris, M.; Fragkos, K.C. Meta-analyses of clinical trials versus diagnostic test accuracy studies. In Diagnostic Meta-Analysis: A Useful Tool for Clinical Decision-Making; 2018; pp. 31–40. [Google Scholar] [CrossRef]
  63. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ 2009, 339, b2535. [Google Scholar] [PubMed]
  64. Miles, M.B.; Huberman, A.M. Qualitative Data Analysis: An Expanded Sourcebook, 2nd ed.; Sage Publication: London, UK, 1994. [Google Scholar]
  65. Thalheimer, W.; Cook, S. How to calculate effect sizes from published research: A simplified methodology. Work.-Learn. Res. 2002, 1, 1–9. [Google Scholar]
  66. Schmidt, F.L.; Oh, I.S.; Hayes, T.L. Fixed-versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results. Br. J. Math. Stat. Psychol. 2009, 62, 97–128. [Google Scholar] [CrossRef]
  67. Cooper, H.; Hedges, L.V.; Valentine, J.C. The Handbook of Research Synthesis and Meta-Analysis, 2nd ed.; Russell Sage Publication: New York, NY, USA, 2009. [Google Scholar]
  68. Duval, S.; Tweedie, R. A nonparametric “trim and fill” method of accounting for publication bias in meta-analysis. J. Am. Stat. Assoc. 2000, 95, 89–98. [Google Scholar]
  69. Sterne, J.A.; Harbord, R.M. Funnel plots in meta-analysis. Stata J. Promot. Commun. Stat. Stata 2004, 4, 127–141. [Google Scholar] [CrossRef]
  70. Rodríguez, M.D. Glossary on meta-analysis. J. Epidemiol Community Health 2001, 55, 534–536. [Google Scholar] [CrossRef]
  71. Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 1979, 86, 638–641. [Google Scholar] [CrossRef]
  72. Borenstein, M.; Hedges, L.V.; Higgins, J.P.T.; Rothstein, H.R. Introduction to Meta-Analysis; John Wiley & Sons Ltd. Press: Hoboken, NJ, USA, 2009. [Google Scholar]
  73. Sak, R.; Sak, İ.T.Ş.; Şendil, Ç.Ö.; Nas, E. Bir araştırma yöntemi olarak doküman analizi. Kocaeli Üniversitesi Eğitim Derg. 2021, 4, 227–256. [Google Scholar] [CrossRef]
  74. Bowen, G.A. Naturalistic inquiry and the saturation concept: A research note. Qual. Res. 2008, 8, 137–152. [Google Scholar] [CrossRef]
  75. Bryman, A. Social Research Methods; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  76. Mayring, P. Qualitative content analysis. A Companion Qual. Res. 2000, 1, 159–176. [Google Scholar]
  77. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar]
  78. Streubert, H.J.; Carpenter, D.R. Qualitative Research in Nursing: Advancing the Humanistic Imperative, 5th ed.; Wolters Kluwer: Alphen aan den Rijn, The Netherlands, 2011. [Google Scholar]
  79. Linacre, J.M. Generalizability Theory and Many Facet Rasch Measurement; Annual Meeting of the American Educational Research Association: Denver, CO, USA, 1993. [Google Scholar]
  80. Linacre, J.M. A User’s Guide to Winsteps, Ministep Rasch-Model Computer Programs. 2008. Available online: https://www.researchgate.net/publication/238169941_A_User’’s_Guide_to_Winsteps_Rasch-Model_Computer_Program (accessed on 18 March 2025).
  81. Semerci, Ç. Mikro öğretim uygulamalarının çok-yüzeyli Rasch ölçme modeli ile analizi. Eğitim Ve Bilim 2011, 36, 161. [Google Scholar]
  82. İlhan, M. Açık uçlu sorularla yapılan ölçmelerde klasik test kuramı ve çok yüzeyli Rasch modeline göre hesaplanan yetenek kestirimlerinin karşılaştırılması. Hacet. Üniversitesi Eğitim Fakültesi Derg. 2016, 31, 346–368. [Google Scholar] [CrossRef]
  83. Lynch, B.K.; McNamara, T.F. Using G-theory and many-facet Rasch measurement in the development of performance assessments of the ESL speaking skills of immigrants. Lang. Test. 1998, 15, 158–180. [Google Scholar] [CrossRef]
  84. Hambleton, R.K.; Swaminathan, H. Estimation of item and ability parameters. In Item Response Theory: Principles and Applications; Springer Science+Business Media: New York, NY, USA, 1985; pp. 125–150. [Google Scholar]
  85. Sevil, Ş.; Aras, İ.S. Eğitimde Kullanılan Yapay Zekâ Araçları: Öğretmen El Kitabı; Milli Eğitim Bakanlığı Yenilik ve Eğitim Teknolojileri Genel Müdürlüğü: Ankara, Türkiye, 2024. [Google Scholar]
  86. UNESCO. Öğretmenlere Yönelik Bilgi ve İletişim Teknolojileri Yetkinlik Çerçevesi; UNESCO: Paris, France, 2018. Available online: https://yegitek.meb.gov.tr/www/unesco-ogretmenlere-yonelik-bilgi-ve-iletisim-teknolojileri-yetkinlik-cercevesi/icerik/3146 (accessed on 22 October 2024).
  87. Lawshe, C.H. A quantitative approach to content validity. Pers. Psychol. 1975, 28, 563–575. [Google Scholar]
  88. Veneziano, L.; Hooper, J. A method for quantifying content validity of health-related questionnaires. Am. J. Health Behav. 1997, 21, 67–70. [Google Scholar]
  89. Güler, N.; Gelbal, S. A study based on classic test theory and many facet Rasch model. Eurasian J. Educ. Res. 2010, 38, 108–125. [Google Scholar]
  90. Higgins, J.P.; Thompson, S.G.; Deeks, J.J.; Altman, D.G. Measuring inconsistency in meta-analyses. BMJ 2003, 327, 557–560. [Google Scholar]
  91. Linacre, J.M.; Wright, B. Rasch Measurement Transactions; MESA Press: Chicago, IL, USA, 1995. [Google Scholar]
  92. Baştürk, R.; Işıkoğlu, N. Okul öncesi eğitim kurumlarının işlevsel kalitelerinin çok yüzeyli Rasch modeli ile analizi. Kuram Ve Uygulamada Eğitim Bilim. 2008, 8, 7–32. [Google Scholar]
  93. Batdı, V. Ortaöğretim matematik öğretim programı içeriğinin Rasch ölçme modeli ve Nvıvo ile analizi. Turk. Stud. 2014, 9, 93–109. [Google Scholar] [CrossRef]
  94. Shamir, G.; Levin, I. Teaching machine learning in elementary school. Int. J. Child-Comput. Interact. 2021, 31, 100415. [Google Scholar] [CrossRef]
  95. Kajiwara, Y.; Matsuoka, A.; Shinbo, F. Machine learning role playing game: Instructional design of AI education for age-appropriate in K-12 and beyond. Comput. Educ. Artif. Intell. 2023, 5, 100162. [Google Scholar] [CrossRef]
  96. Kablan, Z.; Topan, B.; Erkan, B. Sınıf içi öğretimde materyal kullanımının etkililik düzeyi: Bir meta-analiz çalışması. Kuram Ve Uygulamada Eğitim Bilim. 2013, 13, 1629–1644. [Google Scholar]
  97. Camnalbur, M. Bilgisayar Destekli Öğretimin Etkililiği Üzerine Bir Meta Analiz Çalışması. Master’s Thesis, Marmara Üniversitesi, İstanbul, Türkiye, 2008. [Google Scholar]
  98. Salas-Pilco, S.Z. The impact of AI and robotics on physical, social-emotional and intellectual learning outcomes: An integrated analytical framework. Br. J. Educ. Technol. 2020, 51, 1808–1825. [Google Scholar] [CrossRef]
  99. Pillai, R.; Sivathanu, B.; Metri, B.; Kaushik, N. Students’ adoption of AI-based teacher-bots (T-bots) for learning in higher education. Inf. Technol. People 2024, 37, 328–355. [Google Scholar] [CrossRef]
  100. Bers, M.U.; Flannery, L.; Kazakoff, E.R.; Sullivan, A. Computational thinking and tinkering: Exploration of an early childhood robotics curriculum. Comput. Educ. 2014, 72, 145–157. [Google Scholar] [CrossRef]
  101. Fitton, V.A.; Ahmedani, B.K.; Harold, R.D.; Shifflet, E.D. The role of technology on young adolescent development: Implications for policy, research and practice. Child Adolesc. Soc. Work. J. 2013, 30, 399–413. [Google Scholar] [CrossRef]
  102. Garrison, D.R. E-Learning in The 21st Century: A Community of Inquiry Framework for Research and Practice; Routledge: New York, NY, USA, 2017. [Google Scholar]
  103. Ke, F. Examining online teaching, cognitive, and social presence for adult students. Comput. Educ. 2010, 55, 808–820. [Google Scholar] [CrossRef]
  104. Ellis, R.D.; Allaire, J.C. Modeling computer interest in older adults: The role of age, education, computer knowledge, and computer anxiety. Hum. Factors 1999, 41, 345–355. [Google Scholar] [CrossRef] [PubMed]
  105. Toivonen, T.; Jormanainen, I.; Kahila, J.; Tedre, M.; Valtonen, T.; Vartiainen, H. Co-Designing Machine Learning Apps In K–12 With Primary School Children. In Proceedings of the 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT), Tartu, Estonia, 6–9 July 2020; pp. 308–310. [Google Scholar]
  106. Randall, N. A Survey of Robot-Assisted Language Learning (RALL). ACM Trans. Hum.-Robot Interact. 2019, 9, 1–36. [Google Scholar] [CrossRef]
  107. Wang, Y.-C. Using wikis to facilitate interaction and collaboration among EFL learners: A social constructivist approach to language. System 2014, 42, 383–390. [Google Scholar] [CrossRef]
  108. Al-kfairy, M. Factors impacting the adoption and acceptance of ChatGPT in educational settings: A narrative review of empirical studies. Appl. Syst. Innov. 2024, 7, 110. [Google Scholar] [CrossRef]
  109. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  110. Wang, S.P.; Chen, Y.L. Effects of multimodal learning analytics with concept maps on college students’ vocabulary and reading performance. J. Educ. Technol. Soc. 2018, 21, 12–25. [Google Scholar]
  111. Reeves, B.; Hancock, J.; Liu, X. Social Robots Are Like Real People: First Impressions, Attributes, and Stereotyping of Social Robots. Technol. Mind Behav. 2020, 1. [Google Scholar] [CrossRef]
  112. Huang, L. Ethics of artificial intelligence in education: Student privacy and data protection. Sci. Insights Educ. Front. 2023, 16, 2577–2587. [Google Scholar] [CrossRef]
  113. Baihakki, M.A.; Mohamed Saleh Ba Qutayan, S. Ethical issues of artificial intelligence (AI) in the healthcare. J. Sci. Technol. Innov. Policy 2023, 9, 32–38. [Google Scholar] [CrossRef]
  114. Baştürk, R. Bilimsel araştırma ödevlerinin çok yüzeyli Rasch ölçme modeli ile değerlendirilmesi. J. Meas. Eval. Educ. Psychol. 2010, 1, 51–57. [Google Scholar]
  115. Doğan, O.; Baloğlu, N. Üniversite öğrencilerinin endüstri 4.0 kavramsal farkındalık düzeyleri. TÜBAV Bilim Derg. 2020, 13, 126–142. [Google Scholar]
  116. Talan, T. Artificial intelligence in education: A bibliometric study. Int. J. Res. Educ. Sci. (IJRES) 2021, 7, 822–837. [Google Scholar] [CrossRef]
  117. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  118. Köse, İ.A.; Usta, H.G.; Yandı, A. Sunum yapma becerilerinin çok yüzeyli rasch analizi ile değerlendirilmesi. Abant İzzet Baysal Üniversitesi Eğitim Fakültesi Derg. 2016, 16, 1853–1864. [Google Scholar]
  119. Semerci, Ç. Öğrencilerin BÖTE bölümüne ilişkin görüşlerinin rasch ölçme modeline göre değerlendirilmesi (Fırat Üniversitesi örneği). Educ. Sci. 2012, 7, 777–784. [Google Scholar]
  120. Uluman, M.; Tavşancıl, E. Çok değişkenlik kaynaklı rasch ölçme modeli ve hiyerarşik puanlayıcı modeli ile kestirilen puanlayıcı parametrelerinin karşılaştırılması. İnsan Ve Toplum Bilim. Araştırmaları Derg. 2017, 6, 777–798. [Google Scholar]
  121. Benvenuti, M.; Cangelosi, A.; Weinberger, A.; Mazzoni, E.; Benassi, M.; Barbaresi, M.; Orsoni, M. Artificial intelligence and human behavioral development: A perspective on new skills and competences acquisition for the educational context. Comput. Hum. Behav. 2023, 148, 107903. [Google Scholar] [CrossRef]
  122. Zhang, K.; Aslan, A.B. AI technologies for education: Recent research & future directions. Comput. Educ. Artif. Intell. 2021, 2, 100025. [Google Scholar] [CrossRef]
  123. Lin, X.F.; Wang, Z.; Zhou, W.; Luo, G.; Hwang, G.J.; Zhou, Y.; Liang, Z.M. Technological support to foster students’ artificial intelligence ethics: An augmented reality-based contextualized dilemma discussion approach. Comput. Educ. 2023, 201, 104813. [Google Scholar] [CrossRef]
  124. Kaledio, P.; Robert, A.; Frank, L. The impact of artificial intelligence on students’ learning experience. SSRN 2024. [Google Scholar] [CrossRef]
Figure 1. Mixed-meta method complemented with quantitative research design [58].
Figure 2. Selection of the studies included in the analysis.
Figure 3. Funnel plot.
Figure 4. The effects of artificial intelligence applications on learning environments.
Figure 5. Problems encountered in artificial intelligence applications and solution suggestions.
Figure 6. Data calibration map.
Figure 7. Measurement report of curricula.
Figure 8. The strictness/leniency of the judges.
Figure 9. Item difficulty analysis for evaluating artificial intelligence applications.
Table 1. Inclusion criteria for the meta-analysis process.

| Criteria | Description |
|---|---|
| Time Period | 2005–2025 |
| Publication Language | English and Turkish |
| Appropriateness of Teaching Method | Experimental and/or quasi-experimental studies with pre-test–post-test control groups using artificial intelligence applications |
| Statistical Data | Sample size (n), arithmetic mean (X̄), and standard deviation (SD) for effect size calculation |
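The statistical data listed in the last row feed the standardized mean difference used throughout the meta-analysis. A generic sketch of the computation (the standard Hedges' g with small-sample correction, stated here for orientation rather than quoted from the source) is:

$$g = J \cdot \frac{\bar{X}_E - \bar{X}_C}{SD_{pooled}}, \qquad J = 1 - \frac{3}{4(n_E + n_C) - 9}$$

where $\bar{X}_E$ and $\bar{X}_C$ are the experimental and control group means, $SD_{pooled}$ is the pooled standard deviation, and $n_E$, $n_C$ are the group sizes [72].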
Table 2. Meta-analysis data.

| Test Type | Model | n | g | 95% CI Lower | 95% CI Upper | Q | p | I² |
|---|---|---|---|---|---|---|---|---|
| AA | FEM | 24 | 0.59 | 0.47 | 0.64 | 163.11 | 0.00 | 85.90 |
| AA | REM | 24 | 0.51 | 0.28 | 0.74 | | | |
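As a consistency check on the heterogeneity column (our arithmetic, using the standard definition of $I^2$ [90]), the tabulated value is reproduced from $Q$ and $df = k - 1 = 23$ for the $k = 24$ included studies:

$$I^2 = \frac{Q - df}{Q} \times 100 = \frac{163.11 - 23}{163.11} \times 100 \approx 85.9\%$$

a level conventionally read as high heterogeneity, which is why the random-effects model (REM) estimate of $g = 0.51$ is the one emphasized in this study.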
Table 3. General effect sizes of studies included in the analysis according to moderator analysis.

| Items | Groups | n | g | 95% CI Lower | 95% CI Upper | Z-Value | p-Value | Q-Value | df | p-Value |
|---|---|---|---|---|---|---|---|---|---|---|
| Application duration | 1–4 | – | 0.59 | 0.30 | 0.88 | 4.01 | 0.00 | | | |
| | 5+ | – | 0.09 | −0.15 | 0.33 | 0.75 | 0.45 | | | |
| | Unspecified | – | 0.58 | −0.02 | 1.19 | 1.89 | 0.06 | | | |
| | Total | – | 0.32 | 0.14 | 0.50 | 3.54 | 0.00 | 7.69 | 2 | 0.02 |
| Subjects | Maths | 19 | 0.44 | 0.18 | 0.71 | 3.25 | 0.01 | | | |
| | AI | 3 | 0.80 | 0.07 | 1.53 | 2.15 | 0.03 | | | |
| | Others | 2 | 0.81 | 0.19 | 1.44 | 2.54 | 0.01 | | | |
| | Total | 24 | 0.53 | 0.30 | 0.76 | 4.47 | 0.00 | 1.7 | 2 | 0.43 |
| Sample size | Small | 6 | 0.50 | 0.09 | 0.90 | 2.40 | 0.02 | | | |
| | Medium | 9 | 0.37 | 0.18 | 0.55 | 3.93 | 0.00 | | | |
| | Large | 6 | 0.63 | 0.18 | 1.08 | 2.75 | 0.01 | | | |
| | Total | 24 | 0.42 | 0.26 | 0.57 | 5.24 | 0.00 | 1.29 | 2 | 0.52 |