Article

Effectiveness of Artificial Intelligence Practices in the Teaching of Social Sciences: A Multi-Complementary Research Approach on Pre-School Education

by Yunus Doğan 1,*, Veli Batdı 2, Yavuz Topkaya 3, Salman Özüpekçe 4 and Hatun Vera Akşab 5

1 School of Foreign Languages, Fırat University, Elazig 23119, Türkiye
2 Nizip Education Faculty, Gaziantep University, Gaziantep 27310, Türkiye
3 Education Faculty, Hatay Mustafa Kemal University, Hatay 31060, Türkiye
4 Department of Geography Education, Education Faculty, Dicle University, Diyarbakır 21280, Türkiye
5 Department of Curriculum and Instruction, Gaziantep University, Gaziantep 27310, Türkiye
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(7), 3159; https://doi.org/10.3390/su17073159
Submission received: 6 February 2025 / Revised: 15 March 2025 / Accepted: 25 March 2025 / Published: 2 April 2025

Abstract:

The aim of this study is to evaluate artificial intelligence applications at the preschool education level within the framework of the multi-complementary approach (McA). The McA is designed as a comprehensive approach that encompasses multiple analysis methods. In the first phase of the study, the pre-complementary knowledge process, meta-analysis and meta-thematic analysis methods were used; in the post-complementary knowledge process, an experimental design with a control group and pre-test/post-test was applied. Finally, in the complementary knowledge phase, the findings of the first two phases were combined, providing an opportunity to evaluate the effectiveness of artificial intelligence applications in preschool education from a broader and more comprehensive perspective. The study first introduces the McA and then presents the methodological process and findings step by step within this framework. A literature review based on document analysis of artificial intelligence applications in social sciences teaching and preschool education showed that artificial intelligence has positive and significant effects both on student performance and on various variables supporting teaching. The complementary results favoring artificial intelligence applications encourage the increased use of such technologies in preschool education, promoting their more widespread and systematic use in the teaching environment.

1. Introduction

Social change and transformation continue to accelerate, with new technologies emerging every day and significant developments occurring in existing ones. Driven by these technological advances, artificial intelligence has come to be recognized as one of the fundamental technologies of today’s society. Artificial intelligence (AI) is defined as any technique that enables a computer to mimic human behavior and reproduce the human decision-making process to solve complex tasks with minimal or no human intervention [1]. AI is described as computer systems that understand like humans, behave like humans, reason logically, perform rationally, and exhibit intelligent human behaviors [2,3], demonstrating human-like capabilities such as learning from information, adapting, synthesizing, self-correcting, and processing data in complex tasks [4,5].
Social sciences, while continuing the effort to understand human behaviors and social structures through traditional methods, increasingly recognize the importance of artificial intelligence due to current requirements such as big data analysis and the modeling of social dynamics [6]. In education, artificial intelligence benefits educators by making administrative functions more effective and efficient and by improving students’ experiences and learning quality [7]. It also offers personalized learning experiences for students, enhanced learning experiences, and increased teacher productivity [8]. Applications of artificial intelligence in preschool education have created social impacts in areas such as adaptive learning environments, machine learning, computer vision, speech processing, neuroscience, health, natural language understanding, and the Internet of Things [9]. It helps improve instructional practices by providing personalized learning experiences [10]. Artificial intelligence-supported teaching systems benefit both students and teachers by increasing the accuracy and efficiency of data acquisition in preschool education [11].

1.1. Artificial Intelligence in Social Sciences

Artificial intelligence is becoming both a participant in and a mediator of human interaction [12]. In recent years, with the advancement of digital technologies, AI applications have been widely used in various fields. While artificial intelligence has primarily been the focus of engineers and information technology specialists, it has now expanded to include the social sciences as well [13]. AI, which provides an interdisciplinary and multifaceted field of study, is utilized not only in computer science, linguistics, and mathematics but also in fields such as art, psychology, and design [14].
One of the fundamental and rapidly evolving components of artificial intelligence is machine learning (ML), which refers to the process of defining specific algorithms and continuously improving a data model based on data analysis [15,16]. The growing interest in big data, machine learning, and other data science techniques in recent years has sparked a dynamic debate about how and in which areas these methods can be applied in research [17]. As a rapidly advancing technology, machine learning has been widely adopted across a broad spectrum that includes the social sciences, such as sociology [18,19], medicine [20,21], economics [22], and finance [23]. Deep learning (DL) is a subfield of machine learning, which itself is a subfield of artificial intelligence, following the developmental progression of AI > ML > DL [24]. Deep learning is particularly useful for high-dimensional and large datasets and outperforms ML algorithms in many applications that require processing deep neural networks, images, text, speech, and audio data [25]. The resurgence of artificial intelligence can be attributed, at least in part, to deep learning algorithms [26]. It is noted that AI, particularly in the context of machine learning and deep learning, is a dominant factor in emerging technologies as well as in social and economic sciences [27].
Machine learning is increasingly applied to large-scale social datasets related to humans [28]. The use of big data applications like machine learning in the social sciences offers a promising new statistical modeling culture for social scientists beyond merely data availability [29]. With the internet enhancing the capacity to generate, analyze, and store big data, a new era known as the digital age or the digital information age has begun. These developments have paved the way for new research areas and methodologies in the social sciences [30]. To obtain these data, social scientists use either qualitative or quantitative methods. In quantitative research, AI can predict relationships between specific variables using machine learning techniques such as regression, decision trees, random forests, and support vector machines [31,32,33]. Unlike quantitative research, qualitative research primarily deals with non-numerical data such as text, images, audio, and video [34]. Therefore, AI is utilized in a different manner in this type of research.
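As a hedged illustration of how such techniques might be applied in quantitative social research, the sketch below fits a random forest to a hypothetical tabular survey dataset using scikit-learn; the file name, column names, and target variable are invented placeholders, not data from this study or the cited works.

```python
# Minimal sketch: predicting an outcome from survey-style features with
# scikit-learn. "survey.csv", its columns, and the target are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("survey.csv")                      # hypothetical dataset
X = df[["age", "income", "education_years"]]        # placeholder predictors
y = df["engagement_score"]                          # placeholder outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```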
Natural Language Processing (NLP), defined as a component of artificial intelligence that enables computers to interpret and process human language [35], facilitates analyses such as sentiment analysis in the social sciences with the help of AI [6]. NLP has become increasingly relevant to social science research due to its ability to provide reliable tools for analyzing textual data, including interview transcripts, social media posts, and news articles [36]. Through artificial intelligence, audio and video data can be automatically converted into text, leading to cost savings [37] as well as reductions in labor and time [38]. AI-powered tools such as NVivo and ATLAS.ti allow researchers to conduct thematic analysis by automatically identifying recurring themes, patterns, and key concepts within texts [39,40]. AI tools are also effectively utilized in data visualization, enabling the presentation of data in visual formats such as graphs and charts. It is discernible from the relevant literature that artificial intelligence is increasingly integrated into the social sciences, providing tools for big data analysis, behavior modeling, and automated decision-making. AI contributes to educational research by improving personalized learning, automating qualitative data analysis, and providing real-time feedback for educators. Studies have shown that AI enhances adaptive learning environments, supports students with individualized feedback, and assists educators in lesson planning and student assessment [41,42,43].
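The brief sketch below illustrates one way sentiment analysis of short texts could be run with a pretrained model; it uses the Hugging Face transformers pipeline as an assumed tool (not one named in this study), and the example sentences are invented.

```python
# Minimal sketch: sentiment scoring of short texts with a pretrained model
# via the Hugging Face `transformers` pipeline. Example sentences are invented.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

posts = [
    "The new curriculum makes my child excited to learn.",
    "I am worried the app collects too much data about students.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```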

1.2. Artificial Intelligence in Pre-School Education

With the increasing demand for education, the integration of artificial intelligence (AI) and education has led to the emergence of a new research field, resulting in a growing body of literature on educational AI [44]. AI has been found to have a profound impact on various fields, including education. According to [7], AI in education is seen as a synthesis of three fundamental areas: knowledge, education, and computational science. Ref. [45] categorizes AI-powered educational tools into three main types: those aimed at students, teachers, and the educational system. In today’s society, children prefer learning resources to be at their fingertips and to learn at their own pace, making AI a promising tool to support students [46,47]. AI-driven applications such as personalized learning systems [48,49,50], chatbots [51], robots [52,53,54], online games [55], and simulations [7] are increasingly being utilized in education.
Early childhood is considered an ideal time to spark children’s interest in AI [53]. AI is becoming more integrated into various educational settings, including preschool education. The use of AI-powered tools in early childhood classrooms has gained attention in recent years [56], with research highlighting the potential of AI-driven smart learning structures, chatbots, and digital assistants in improving the quality of early childhood education [57]. Several studies have shown promising effects of AI in early childhood education, with an increasing focus on its role in enhancing learning and development. Within early childhood education, AI technologies such as chatbots, intelligent tutoring systems, and speech recognition tools have been integrated to enhance engagement and cognitive development. Studies indicate that AI-driven applications improve early literacy skills, problem-solving abilities, and social interactions by providing personalized learning experiences. For example, AI-assisted storytelling applications increase children’s vocabulary retention by 25%, while interactive AI-based games improve problem-solving and motor skills. AI also enhances social-emotional learning by fostering communication skills and collaboration through interactive AI tutors [58,59,60]. Thus, AI technologies are being employed to improve the quality of preschool education [11]. The inclusion of AI in early childhood classrooms is believed to have positive effects [61]. AI technologies can increase children’s enthusiasm for learning [62,63], while intelligent tutoring systems provide real-time feedback to enhance the effectiveness of preschool education [64]. AI can also support the development of computational thinking skills [65], contribute to efficient educational content by enabling personalized learning experiences, and allow AI-driven robots to interact with children to create tailored learning experiences [66,67]. Additionally, AI-powered tools such as augmented reality and virtual reality can make learning more engaging and enhance children’s understanding of information. The emergence of generative AI, including large language models (LLMs), has reshaped educational applications by offering interactive, real-time feedback, content generation, and adaptive tutoring. While LLMs are widely applied in secondary and higher education, recent developments indicate potential use in early childhood learning by personalizing language instruction and fostering creativity through AI-assisted storytelling and interactive dialog systems. Generative AI provides age-appropriate conversational interfaces, helping preschool learners improve comprehension skills and develop curiosity-driven inquiry through structured interactions. Additionally, AI-powered visual recognition tools can facilitate early learning in subjects such as literacy and numeracy by adapting teaching content based on student responses [68,69].

1.3. Purpose and Significance of the Study

The purpose of this study is to examine the impact of artificial intelligence (AI) applications on the learning environment and academic performance in early childhood education from a holistic perspective. This study focuses on the intersection of AI and social sciences, specifically within preschool education. By evaluating AI’s role in early childhood learning, the research aims to bridge the gap between technology and social science methodologies. The literature indicates that while there are numerous studies on AI across various fields, the number of studies specifically addressing its use in the social sciences is limited [70,71]. Considering this gap, and recognizing the insufficiency of studies that integrate both qualitative and quantitative research findings, this study aims to contribute to the literature through an experimental design conducted at the early childhood education level and a comprehensive presentation of AI’s role in educational processes. The objective is to combine qualitative and quantitative findings to obtain broader and more comprehensive results. Thus, the primary objective of this study is to assess the effectiveness of artificial intelligence (AI) applications in preschool education using the multi-complementary approach (McA) [11]. This research aims to address the following specific questions:
  • What is the overall effect size (g) of AI applications in preschool education, as derived from meta-analysis findings?
  • How do participants perceive AI applications in preschool education, based on meta-thematic analysis?
  • Is there a statistically significant difference in pre-test and post-test scores for students in the experimental group?
  • How do observer perspectives complement the statistical findings in AI-assisted preschool education?
  • Do the combined findings of meta-analysis and experimental results reinforce each other?
This study seeks to provide a comprehensive understanding of the role of AI in early childhood education, contributing valuable insights to both academia and educational practice.

2. Method

This study aims to determine the impact of artificial intelligence (AI) applications on the field of social sciences. Through a literature review, the study seeks to evaluate AI applications from both quantitative and qualitative perspectives, analyzing and interpreting the findings of the current experimental study. In this context, the study also aims to compare the results of the experiment with similar studies in the literature to identify commonalities and differences. Additionally, to provide a broader perspective, the study has been conducted within the framework of the multi-complementary approach (McA). The McA is a method that integrates findings obtained through various analytical tools, extensive data sources, and a holistic perspective [72,73].

2.1. What Is the Multi-Complementary Approach?

The McA is an approach that enables the holistic evaluation of qualitative and quantitative data collected through different analytical programs. This approach allows a subject to be examined from multiple perspectives, identifying gaps in the literature and contributing uniquely to research through various data collection and analysis methods [72]. Accordingly, the McA consists of three main stages that facilitate access to pre-complementary knowledge, post-complementary knowledge, and complementary knowledge [72,73].

2.2. Pre-Complementary Knowledge Phase

As shown in Figure 1, the pre-complementary knowledge phase is the first stage of the McA. In this phase, a literature review is conducted to examine studies on AI applications in the field of social sciences. The literature is analyzed through meta-analysis and meta-thematic analysis processes, aiming to identify existing gaps in this field.
In this context, a detailed table was created by reviewing studies on artificial intelligence applications through the document analysis method for both types of analysis. These two types of analysis are defined as follows: ref. [74] described meta-analysis as the “analysis of analyses” and defined it as a quantitative approach that systematically analyzes the findings of individual studies to derive a general conclusion. Meta-analysis is also defined as a research synthesis that combines the statistical results of quantitative studies [75]. The primary goal of meta-analysis is to summarize and interpret all available evidence on a specific topic or research question [76]. Another method used in the pre-complementary knowledge phase of the research is meta-thematic analysis. Meta-thematic analysis is a process based on document analysis, aiming to access qualitative research on a specific topic and re-analyze raw data from these studies to develop themes and codes [77]. This method involves collecting detailed notes through observation, interviews, and document analysis, then classifying the extensive raw data into significant themes, categories, and explanatory case examples to transform them into readable narratives [78]. Therefore, the most distinctive feature of the meta-thematic analysis process is its ability to access raw qualitative data and generate reliable research findings [79].

2.2.1. Literature Review and Inclusion Criteria

To develop the information in the pre-complementary phase, a comprehensive literature review was conducted on artificial intelligence applications in the field of social sciences. The study utilized the Web of Science, Science Direct, Taylor and Francis Online, ProQuest Dissertations and Theses Global, Google Scholar, and YÖK databases to access relevant studies.
Searches were conducted in both Turkish and English using the following keywords:
  • “yapay zeka” (artificial intelligence)
  • “sosyal bilimler ve yapay zeka” (social sciences and artificial intelligence)
  • “artificial intelligence”
  • “social sciences and artificial intelligence”
To refine the search, additional terms such as “effectiveness of”, “effect of”, “the impact of”, and “the effects” were included to focus on the effectiveness of AI applications.

2.2.2. Meta-Analysis Inclusion Criteria

The criteria for including studies in the meta-analysis were as follows:
  • Conducted between 2005 and 2025;
  • Implemented AI applications in the experimental group;
  • Contained pre-test and post-test data on AI applications;
  • Focused on the impact of AI applications on students’ academic achievement;
  • Included the descriptive statistics required for effect size calculations within the McA framework, such as sample size (n), arithmetic mean (X̄), and standard deviation (SD).
In addition to the above five criteria, the meta-thematic analysis included studies that
  • Examined the impact of AI applications on academic success;
  • Used qualitative research methods and included participants’ perspectives;
  • Were conducted between 2005 and 2025 and were retrieved from the same databases used in the meta-analysis.
A total of 248 studies on AI applications in social sciences were identified. However, only six studies met the meta-analysis criteria, and five studies met the meta-thematic analysis criteria. The remaining studies were excluded from the analysis due to irrelevance, failure to meet inclusion criteria, or duplication across multiple databases. The primary reasons for exclusion were
  • Lack of empirical data (50%);
  • Absence of pre-test/post-test designs (30%);
  • Studies not specific to preschool education (15%);
  • Insufficient statistical data for meta-analysis (5%).
The selected studies had similar sample sizes and focused on early childhood education, ensuring alignment with the present study. The number of studies included in the meta-analysis and meta-thematic analysis is presented in Figure 2 using the PRISMA flow diagram, developed by [80].
In the pre-complementary knowledge phase of the study, the data obtained through document analysis were transferred to Microsoft Office Excel and Word programs. During the meta-analysis process, the statistical software Comprehensive Meta-Analysis (CMA) 2.0 was used [81]. For the document analysis of studies conducted within the meta-thematic framework, the Maxqda-11 software was utilized. These programs were chosen for their reliability in handling large datasets and advanced statistical capabilities. The study was conducted using a computer with at least 8 GB RAM and a multi-core processor to ensure smooth execution of data analysis. The collected data were analyzed using the content analysis method. Content analysis is a qualitative research technique that involves classifying research findings based on specific themes and codes and interpreting them systematically [82]. Accordingly, participant opinions obtained from study documents were analyzed, and themes and codes were developed. The pre-complementary knowledge phase analysis of the study was conducted in two separate processes. The meta-analysis process is detailed below.

2.2.3. Effect Size and Model Selection

The value calculated to provide information about the magnitude and direction of the relationship between two groups or variables is defined as effect size [75]. In this study, Hedges’ g, a standardized measure of the difference between means, was preferred for effect size estimation [83].
For interpreting effect size values, the classification by [84] was taken into account. According to their classification
  • If −0.15 ≤ g < 0.15, the effect size is considered negligible;
  • If 0.15 ≤ g < 0.40, it is small;
  • If 0.40 ≤ g < 0.75, it is moderate;
  • If 0.75 ≤ g < 1.10, it is large;
  • If 1.10 ≤ g < 1.45, it is very large;
  • If g ≥ 1.45, it is considered excellent [84].
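For illustration, the following sketch computes Hedges’ g from the descriptive statistics collected in the coding form (n, mean, SD) and maps it onto the interpretation bands of [84]; the numeric inputs are placeholders, not values from the included studies.

```python
# Minimal sketch: Hedges' g with small-sample correction, plus the
# interpretation bands used in this study (classification of [84]).
# The example means/SDs/ns are placeholders, not data from included studies.
import math

def hedges_g(m_exp, sd_exp, n_exp, m_ctl, sd_ctl, n_ctl):
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_exp + n_ctl - 2))
    d = (m_exp - m_ctl) / pooled_sd
    j = 1 - 3 / (4 * (n_exp + n_ctl) - 9)   # small-sample correction factor
    return j * d

def classify(g):
    # Bands listed above; assumes g >= -0.15 (positive or negligible effects).
    bands = [(0.15, "negligible"), (0.40, "small"), (0.75, "moderate"),
             (1.10, "large"), (1.45, "very large")]
    for upper, label in bands:
        if g < upper:
            return label
    return "excellent"

g = hedges_g(m_exp=78.2, sd_exp=9.5, n_exp=25, m_ctl=71.4, sd_ctl=10.1, n_ctl=24)
print(f"g = {g:.2f} ({classify(g)})")
```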
In meta-analytic calculations, effect sizes are generally estimated using either the fixed-effect model (FEM) or the random-effects model (REM) [85]. According to [86], the application of the FEM is quite limited. Given the variability in instructional levels, subject areas, implementation durations, and sample sizes in this study, the random-effects model (REM) was deemed the most appropriate approach.

2.2.4. Heterogeneity Test

According to [87], heterogeneity in meta-analysis refers to the extent to which the results of included studies differ. Instead of the Q statistic, which is commonly used in heterogeneity testing, the I² statistic is preferred as it provides more reliable results.
  • The I² value ranges from 0% to 100%;
  • 0% indicates no heterogeneity;
  • 75% or higher indicates high heterogeneity [88].
In this study, the calculated I² value was 82.82%, indicating a high level of heterogeneity. Due to this, moderator analyses were conducted to test the differences between groups based on various variables [89].
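The sketch below illustrates, under placeholder inputs, how Cochran’s Q, the I² statistic, and a DerSimonian–Laird random-effects (REM) pooled estimate are typically computed from per-study effect sizes and variances; the six (g, variance) pairs are invented, and the actual analysis in this study was carried out in CMA 2.0.

```python
# Minimal sketch: Cochran's Q, the I^2 statistic, and a DerSimonian-Laird
# random-effects (REM) pooled estimate. The six (g, variance) pairs are
# placeholders, not the values of the studies included in this meta-analysis.
effects = [(0.35, 0.04), (0.90, 0.06), (1.40, 0.05),
           (0.20, 0.03), (0.75, 0.07), (1.10, 0.05)]

w = [1 / v for _, v in effects]                          # inverse-variance weights
fixed = sum(wi * g for (g, _), wi in zip(effects, w)) / sum(w)
q = sum(wi * (g - fixed) ** 2 for (g, _), wi in zip(effects, w))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100                        # >= 75% => high heterogeneity

c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)           # DerSimonian-Laird scaling term
tau2 = max(0.0, (q - df) / c)                            # between-study variance
w_rem = [1 / (v + tau2) for _, v in effects]
g_rem = sum(wi * g for (g, _), wi in zip(effects, w_rem)) / sum(w_rem)

print(f"Q = {q:.2f}, I^2 = {i2:.1f}%, tau^2 = {tau2:.3f}, REM g = {g_rem:.2f}")
```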

2.2.5. Coding

Coding is defined as a method that enhances the reliability of a study by enabling independent replication by different researchers [90]. In this study, to ensure validity and reliability, a coding form was developed [91].
This form included
  • Study code, title, author(s), year of publication, academic term, course, educational level, and sample details;
  • Statistical data related to these variables.
Additionally, this form allowed multiple coders to examine the dataset and ensured that only studies with inter-coder agreement were assigned codes. To assess inter-coder reliability, the formula proposed by [92] was used:
Reliability = Agreement / (Agreement + Disagreement) × 100
An agreement rate of 90–71% was achieved among the coders.
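A minimal sketch of this inter-coder agreement calculation is given below; the per-item coding decisions are invented for illustration only.

```python
# Minimal sketch of the inter-coder agreement formula reported above:
# reliability = agreement / (agreement + disagreement) * 100.
# The per-item coding decisions below are invented for illustration.
coder_a = ["theme1", "theme2", "theme1", "theme3", "theme2"]
coder_b = ["theme1", "theme2", "theme2", "theme3", "theme2"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b))
disagreement = len(coder_a) - agreement
reliability = agreement / (agreement + disagreement) * 100

print(f"Inter-coder reliability: {reliability:.0f}%")   # 80% in this toy example
```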
In the meta-thematic analysis of qualitative studies, participant opinions were re-evaluated, and studies with similar characteristics were grouped into common categories, from which themes were developed. In cases of disagreement, coders engaged in the literature-based discussions to reach a consensus on common codes [93].
During the interpretation of themes and codes, participant statements were included as direct quotations to enhance clarity, transparency, and reliability. Each referenced study was coded using letters, numbers, and symbols. For example, in M11-s.5:
  • “M” represents the article;
  • “11” represents the study number;
  • “s.5” refers to the page number of the quotation.

2.2.6. Publication Bias and Reliability

In meta-analytic studies, various methods have been developed to ensure that analyses are conducted reliably and to avoid potential biases. One of these methods is the calculation of the “fail-safe N”, proposed by [89]. The fail-safe N refers to the number of unpublished null studies required to invalidate the observed effect. A high fail-safe N value supports the validity of the results. In this study, the calculated fail-safe N value was 248 (p = 0.1); compared to the number of studies included in the analysis, this value is considered quite high, suggesting that the risk of bias is low [75].
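For reference, the sketch below shows one common formulation of the fail-safe N (Rosenthal’s method); the per-study z-values are placeholders, not those of the included studies.

```python
# Minimal sketch: Rosenthal's fail-safe N, the number of unpublished null
# studies needed to bring a combined result down to non-significance.
# The z-values below are placeholders, not those of the included studies.
z_values = [2.1, 2.8, 1.9, 2.4, 3.0, 2.2]     # hypothetical per-study z-scores
k = len(z_values)
z_alpha = 1.645                                # one-tailed alpha = 0.05

fail_safe_n = (sum(z_values) ** 2) / (z_alpha ** 2) - k
print(f"Fail-safe N = {fail_safe_n:.0f}")      # larger => more robust to file-drawer bias
```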
In meta-analyses, a funnel plot is also used to determine publication bias [94]. In this plot, the horizontal axis represents the effect size, while the vertical axis shows the sample size, variance, or standard error values. The funnel plot explains the relationship between the effect size of the study and the sample size (i.e., number of studies). The key point of the plot is that as the sample size (number of studies) increases, the precision of the effect size estimate also increases [95]. Additionally, in the absence of publication bias, the plot takes the form of an inverted symmetric funnel, as shown in Figure 3 [96]. Upon examining the funnel plot, the studies (points) are distributed on both sides of the vertical line without evident asymmetry. Therefore, publication bias does not appear to be present in this study.

2.3. Post-Complementary Knowledge Phase

The second phase, the post-complementary knowledge phase, involves information from original studies conducted by the researcher to address the deficiencies identified in the first phase. To achieve this, the research sought to determine how the subject in question has been addressed in the relevant literature. This process was carried out using meta-analysis and meta-thematic analysis methods. Since the McA is based on inductive reasoning, the data obtained in the first step are key indicators, evaluated within an integrated framework, of what is missing in early childhood education. Thus, the post-complementary knowledge phase was conducted as a new initiative to address the missing data. This step was considered a complementary process aimed at addressing gaps, drawing attention, and raising awareness within the context of the study area. Accordingly, in the first phase, it was found that there were a limited number of studies on artificial intelligence applications in early childhood education, and this gap was identified as a significant void in the relevant literature. Therefore, efforts were made to fill this gap and draw the attention of other researchers to this area. As a result, artificial intelligence applications were implemented in the experimental group in early childhood education, and these applications were developed through experimental processes conducted to evaluate students’ performance.

2.4. Design of the Experimental Process

McA is a mixed design that combines multiple research methods. In this design, various combinations of quantitative, qualitative, or both methods are used to structure each stage based on the findings from the previous stage [73]. In the experimental section of the research, an explanatory sequential design model was adopted. According to [97], in this multiphase mixed method, quantitative data are collected first, followed by qualitative data that help provide a deeper explanation of the quantitative findings. The main purpose of this design is to provide a more comprehensive perspective on the research problem through quantitative data and findings. The qualitative data and their analysis enrich and clarify the statistical results by deeply exploring participants’ views. In the pre-complementary knowledge phase, data from studies on artificial intelligence applications were evaluated through meta-analysis and meta-thematic analysis, and these analyses contributed to guiding the experimental processes. After the pre-complementary phase, it was found that research was primarily focused on social sciences, particularly in the health field, and there was a significant gap in studies targeting early childhood education. Therefore, an experimental study was planned to examine the impact of artificial intelligence applications on student performance in early childhood education.

2.5. Formation of the Experimental Group and Participant Selection

In the study, a single-group pre-test/post-test experimental design was applied. In this design, the effect of the experimental process is tested through a single group. In this context, the experimental group consists of 14 early childhood education students (n = 14) enrolled in the 2024–2025 academic year. A pre-test (T1) was applied to the experimental group before the experimental process. The same test was re-administered as the post-test (T2) at the end of the experimental process (see Table 1).
Participants were selected from a preschool institution offering early childhood education. A total of 14 students (aged 60–72 months) were recruited based on the following criteria:
  • Enrollment in an AI-integrated early education program.
  • Parental consent for participation.
  • No prior exposure to AI-based educational tools.

2.6. Data Collection Tool: Student Evaluation Form

A student evaluation form was prepared by the researcher to identify the impact of artificial intelligence applications on student performance using a multi-complementary approach with 60–72-month-old children attending preschool education. The form was used as a measurement tool. First, the preschool education program where the application would take place was reviewed. The achievements and indicators in the preschool education program were examined in detail. In the student evaluation form developed to assess the effect of artificial intelligence applications on preschool students’ performance, the answer options for the items are scored as follows: 1 = Cannot do, 2 = Can do partially, 3 = Can do completely. The items in the measurement tool were prepared with the developmental characteristics of 60–72-month-old students in mind. Expert opinions were obtained regarding the items in the tool. Based on the feedback and suggestions received, the items of the tool were revised for semantic coherence, scope, sentence structure, and spelling rules, and the final version of the measurement tool was applied.

2.7. Process Duration

Weekly lesson plans suitable for artificial intelligence applications were prepared based on the achievements in the preschool education program. Themes were determined, especially related to health, art, Turkish, and social studies achievements, as they are more related to the social sciences. Additionally, when creating the lesson plans, activities were determined based on the use of artificial intelligence tools for the experimental group students. Before the process, information about the artificial intelligence tool to be used in the study was provided. The study employed “Animated Drawings”, an AI-powered tool that allows students to animate their drawings. This application was chosen for its accessibility, ease of use, and ability to engage children in interactive learning experiences. Special features of this AI tool include automated motion generation based on user drawings, interactive engagement through animated storytelling, and real-time feedback on artistic expressions. The experimental process spanned four weeks, with AI-integrated lessons conducted weekly. The topics covered included the following:
  • Week 1: Healthy Eating;
  • Week 2: Learning Colors—Pink;
  • Week 3: Natural Disasters;
  • Week 4: Dental Health.
Ethical Considerations: Given that research involving young children is typically classified as high-risk by university ethics committees, specific measures were taken to ensure compliance with ethical standards:
  • Informed consent was obtained from parents and guardians;
  • Children’s data privacy was safeguarded by anonymizing collected data;
  • AI applications used in the study did not store sensitive personal information;
  • Teachers and researchers actively supervised AI-assisted learning sessions to prevent over-reliance on AI for educational activities.
Week 1: The teacher welcomed the children and guided them to the play centers. “Little Bean Song” movements were performed as sports and dance activities. “Healthy Eating”, “How Can We Be Healthy?”, and “Healthy Tomorrow” educational videos were watched. The importance of breakfast and the components of a healthy breakfast were discussed. Healthy foods were introduced. The concept of a balanced and adequate diet was emphasized. After these explanations and brainstorming, the teacher showed flashcards and a slide presentation about “Healthy and Unhealthy Foods”. Accompanied by the “Healthy Eating Song”, the teacher had the students prepare a healthy breakfast plate using the healthy foods they had learned. The students then chose one healthy or unhealthy food and drew it. Afterward, the drawings were animated using the artificial intelligence tool. The teacher taught the “Vegetable Fruit Song”; in this song, students distinguished new words and discussed the new vocabulary. “The Visit of Vitamins Story” was shown. The lesson aimed to teach the word “vitamin”, and the teacher asked the students to use the word appropriately.
Week 2: The children were asked what colors they had already learned. Then, each child was given a color and asked to show or say an object of that color. They were told that by mixing red and white, they would obtain a new color and were asked to guess what this color might be. A class experiment was conducted to discover this color. Three cups were placed on the table: one empty and two filled with red and white paint, respectively. The cups were mixed, and the color pink was created. The children were asked what natural things were pink. “Pink Color Flashcards and Slide Show” and the “Learning Colors-Pink” educational video were shown. After all these activities, the “Pink Color Finger Game” was sung. Finally, “The Brave Pink Cloud Story” was shown, and then the students performed the “Flamingo Coloring” activity. The students drew and colored a pink flamingo. For those struggling with the drawing, a pre-made flamingo image was given. After completing the drawing and coloring, the drawings were uploaded to the artificial intelligence tool for animation.
Week 3: The teacher welcomed the children and directed them to the play centers. The teacher asked the children what natural events were. They discussed disasters like earthquakes, floods, fires, droughts, and volcanic eruptions, and when and why they occur. The teacher explained what should be done in case of an earthquake. First, the “Preschool Disaster Education” educational video and the “Earthquake Education for Children ’Earth’ Animation Cartoon” were shown. Then, the “Drop, Cover, and Hold On Story” was shown. Afterward, a “Natural Disasters Finger Game” was taught, and everyone sang it together. Flashcards about other natural disasters were shown. The next natural event to be taught was the volcano (volcanic eruption). The teacher took a boiled egg and a globe model. The teacher explained that the Earth has layers like the egg and peeled the outer shell, showing the inner white layer. Then, the yellow part was shown as the magma, the central part of the Earth. It was explained that this magma sometimes needs to escape to the surface, but this requires a mountain. The children were asked to perform the “Volcano Experiment” to demonstrate what they had learned. The students were asked to draw any natural event. After completing the drawing, it was uploaded to the artificial intelligence tool. The students were asked to review each other’s work.
Week 4: First, as sports and dance activities, the “Brushing Teeth Song” movements were performed. After the movements, the “Why Do We Brush Our Teeth?” and “How to Brush Our Teeth Properly?” educational videos were shown. The children sat in a way that allowed them to see the teacher. The teacher suddenly held her cheek and asked, “What happened to my tooth?” The children were prompted to think about why the tooth might hurt. The importance of brushing teeth for dental health was discussed, including which foods should be eaten and avoided for healthy teeth. An “Oral and Dental Health Drama” was performed. Two or three students acted as germs, while the others represented healthy teeth. The germs chased the healthy teeth, and when they caught them, the teacher used a toothbrush to clean the teeth and save them from germs. The “My Teeth Song” was then taught and sung together. The students were given time to draw healthy and unhealthy (decayed) teeth. After completing their drawings, they were uploaded to the artificial intelligence tool.
The lessons for the experimental group were applied over four weeks in accordance with the preschool education program. The teaching methods for all topics involved a mix of question-and-answer, auditory-visual, experiments, and drama-based techniques. Throughout the process, the students reinforced their learning by drawing and animating their creations using the artificial intelligence tool.

2.8. Thematic Analysis Process

Following the final comprehensive data phase, thematic analysis was conducted to support the quantitative data obtained and to gain more detailed information about artificial intelligence applications. In this process, information was gathered through observation regarding students’ understanding of artificial intelligence applications. If a researcher wants to understand a behavior occurring in a specific environment in a detailed, comprehensive, and time-evolving manner, they may resort to the observation method [98]. According to [99], observation is a method used to gather direct and detailed information about events, situations, and behaviors. When applied regularly, it provides reliable results as a research method [100]. In the study, an observation form was prepared by consulting expert opinions. The personal information of the observers was kept confidential. Two individuals participated as observers in the study, and quotations from the observers were labeled G1 and G2. During the thematic analysis process, the consistency of the themes and codes derived from the observations of students was evaluated using Cohen’s kappa agreement values between the coders. The kappa value is a measure of the level of agreement between observers or data coders during the research process. According to this measure, values of 0.20 or lower indicate weak agreement, values between 0.21 and 0.40 indicate low-to-moderate agreement, values between 0.41 and 0.60 indicate moderate agreement, values between 0.61 and 0.80 indicate good agreement, and values between 0.81 and 1.00 indicate very good agreement [101]. The agreement value obtained in this study (0.917) indicates very good agreement.
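The following sketch illustrates how Cohen’s kappa between two observers’ codings could be computed and mapped onto the agreement bands of [101]; the codings shown are invented placeholders, and the study’s own value (0.917) was obtained from the actual observation data.

```python
# Minimal sketch: Cohen's kappa between two observers' codings, with the
# agreement bands cited above ([101]). The codings are invented placeholders.
from sklearn.metrics import cohen_kappa_score

observer_1 = ["curiosity", "motivation", "curiosity", "collaboration", "motivation"]
observer_2 = ["curiosity", "motivation", "motivation", "collaboration", "motivation"]

kappa = cohen_kappa_score(observer_1, observer_2)

bands = [(0.20, "weak"), (0.40, "low-to-moderate"), (0.60, "moderate"),
         (0.80, "good"), (1.00, "very good")]
label = next(name for upper, name in bands if kappa <= upper)
print(f"kappa = {kappa:.3f} ({label} agreement)")
```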

2.9. Complementary Knowledge

The final phase of the multi-complementary approach (McA), the complementary knowledge phase, involves the integration of the findings and results obtained from the first two phases. In this phase, the use of different data collection and analysis methods can contribute to enriching the study. According to [102], methodologically, mixed-methods research can provide higher-quality results than single-method studies. As [103] state, in mixed-methods studies where qualitative and quantitative data are integrated, the research findings complement and support each other in terms of interpretation, explanation, description, and verification. In this regard, the integration of data obtained through two qualitative and two quantitative research methods to examine the effectiveness of artificial intelligence applications can be seen as a fundamental feature of the complementary knowledge phase, providing rich and in-depth insights.

3. Findings

In this section of the study, the findings obtained from the meta-analysis and meta-thematic analysis conducted on the use of artificial intelligence applications in social sciences are interpreted.
According to the meta-analysis findings presented in Table 2, the effect size of AI applications (Achievement) in social sciences, calculated based on the REM, was found to be g = 0.74 [0.13; 1.353]. This effect size falls at the upper end of the moderate band, indicating that AI-based applications have a positive and significant impact in social sciences [84].
When examining the heterogeneity test value in Table 2, it is observed that the effect sizes of artificial intelligence (AI) applications in social sciences are distributed heterogeneously (Q = 217.746; p < 0.05). The I² value (94.48%) indicates that about 94% of the observed variance originates from true variance among the studies. According to [104], an I² value of 25% represents low heterogeneity, 50% indicates moderate heterogeneity, and 75% or above signifies high heterogeneity. In this study, the calculated I² value of 94.48% confirms the presence of high heterogeneity [88]. In this context, the moderator variables influencing the overall effect size operate at a high heterogeneity level. Therefore, since the obtained I² value indicates heterogeneity, conducting a moderator analysis is necessary [75]. For this reason, the selected moderator variables include education level, implementation duration, and sample size (Table 3). According to the findings of the moderator analysis, the highest effect sizes were observed in the following categories:
  • Education level: “Others” category (g = 0.80);
  • Implementation duration: 9+ weeks (g = 0.33);
  • Sample size: Medium sample group (g = 1.82).
These results suggest that AI applications are more effective in the specified groups within the moderator analysis. However, the significance test did not reveal significant differences in terms of education level (Q_B = 0.04; p > 0.05), implementation duration (Q_B = 0.00; p > 0.05), or sample size (Q_B = 30.29; p > 0.05). When the analysis results are evaluated as a whole, AI applications show a broadly similar effect across all groups, and no significant differences were found among them.

3.1. Meta-Thematic Findings on Artificial Intelligence Applications

This section presents the results of the meta-thematic analysis of qualitative studies on artificial intelligence (AI) applications within specific themes and codes (See Appendix A2 for the key features of the relevant studies). The obtained data are categorized under the following themes:
  • Contribution to Educational Environments;
  • Contribution to Innovation and Technological Development;
  • Challenges and Solutions.
The codes related to these themes are visually modeled below. Additionally, direct quotations are included in the relevant discussions to support these codes.

3.1.1. Contributions to Education Environments

Figure 4 presents the codes falling under the theme of “Contribution to Educational Environments” in relation to artificial intelligence (AI) applications. Some of these codes can be explained as follows:
  • Resembling a real teacher;
  • Improving readiness;
  • Being reassuring;
  • Using motivating expressions;
  • Reinforcing learning by reteaching topics;
  • Providing 24/7 learning opportunities;
  • Demonstrating tolerance.
Relevant reference statements supporting these codes include the following:
  • “(M5-p.17053) In the study, it is stated: ‘When I make a mistake, it tells me not to worry, gives me hints, and helps me find the correct answer. It also summarizes topics... It provides visuals, which is something a real teacher would do.’”.
  • “(M5-p.17051) It was useful for students who were unprepared for the lesson. It served as a preparatory tool, and thanks to the chatbot, even if we didn’t fully grasp the topic, we had already learned half of it by the time the teacher started explaining”.
  • “(M5-p.17069) As a student, I felt good. I believe my classmates felt the same way... It says things like ‘You are amazing!’ ‘Great!’ or ‘I’m making this question easier for you!’”.
  • “(M5-p.17051) It was positive... I mean, a teacher comes and teaches a topic, then another one comes and summarizes it”.
These findings suggest that AI contributes to educational environments in various ways, enhancing the learning experience from multiple dimensions.

3.1.2. Contributions to Innovation and Technological Advancement

Figure 5 presents the codes falling under the theme of “Contribution to Innovation and Technological Development” in relation to artificial intelligence (AI) applications. Some of these codes can be explained as follows:
  • Usage across different disciplines;
  • Benefits for educational services;
  • Contribution to diagnosis and treatment;
  • Saving time and increasing efficiency;
  • Helping users stay up to date;
  • Performing automated tasks.
Relevant reference statements supporting these codes include the following:
  • “(M1-p.180) AI and robotics technology can be used in many different disciplines. In the future, it will be an essential field that professionals in all industries need to learn… I cannot provide a very technical definition due to my lack of knowledge”.
  • “(M2-p.33) AI is particularly active in the diagnosis and treatment process, especially in the diagnosis phase”.
  • “(M4-p.10) AI can retrieve more relevant information faster, helping you stay up to date and potentially learn new skills more quickly”.
  • “(M4-p.12) In the future, repetitive, time-consuming, and automatable tasks will be handled by AI”.
  • “(M3-p.77) There will be a significant gain in speed and time. It will accelerate processes greatly. Right now, we think our current pace doesn’t harm us. We still wait two weeks for molecular tests. But in 20 years, those two weeks could mean a lot. We need to be even faster”.
These findings suggest that AI applications contribute to innovation and technological development by facilitating work across various disciplines, helping professionals stay up to date, handling time-consuming tasks, improving job performance, and increasing overall efficiency.

3.1.3. Challenges in AI Applications and Suggested Solutions

When examining Figure 6, the codes fall under the theme of “Challenges and Solutions in AI Applications”. Some of these codes can be explained as follows:
  • Expressing fear;
  • Being in the hands of certain individuals;
  • Causing job loss;
  • Creating ethical problems;
  • Carrying the risk of cost and waste;
  • Should be in an assistant position;
  • Should be economically viable.
The reference statements for these codes include the following:
  • In the study (M1-p.183), it is stated:
    “Right now, it expresses fear and anxiety in me. Since I do not fully grasp the situation, and I cannot predict what it might do to people in the future, it seems frightening to me due to the uncertainty”.
  • In the study (M1-p.192), another statement highlights job loss concerns:
    “As a banker, I believe it will negatively impact my profession. Since we mainly deal with statistical calculations and more technical matters, I think the banking profession will cease to exist in the near future, after 2030”.
  • Another concern (M1-p.199) is about control and accessibility:
    “What worries me is that it will be in the hands of certain individuals, unable to reach the public, and unable to serve the general population. A majority of people may remain in hunger and poverty, and they may survive only if ‘certain individuals’ provide help. There is such a danger”.
  • Ethical concerns are also raised in (M3-p.59):
    “I honestly believe it will create ethical problems. After all, what will happen in terms of ethics? Robots have no legal responsibility…”
  • Another concern regarding cost and inefficiency is found in (M3-p.75):
    “Let’s say an aspect of AI is developed, and it looks great. You invest in it, make serious financial commitments, bring in people to set it up, pay those people, buy the machines. But in the end, you get far less performance than what was promised. That is waste”.
Proposed Solutions
Reference statements related to possible solutions include the following:
  • In (M1-p.188), a participant suggests AI should be used as an assistant rather than a replacement:
    “I would prefer it to be an assistant. I think it would be more useful that way. I would prefer it as an assistant to make my daily tasks easier”.
  • In (M3-p.72), financial feasibility is emphasized:
    “Financial viability is a very important factor. The initial costs of setting up and integrating new systems can be significant…”
In conclusion, AI applications present various challenges such as causing social anxiety, creating ethical concerns, diminishing cognitive skills, surpassing human capabilities, leading to job loss, and negatively affecting relationships. However, participants also pointed to solutions and benefits such as reducing workload, using AI as an assistant, making life easier, and handling bureaucratic tasks.

3.2. Comparison of Pre-Test and Post-Test Results After the Experimental Process

Table 4 presents the results of the assessment tool applied to students in the experimental group at the end of the AI-assisted instruction. In this section of the research, the aim was to collect complementary data through an experimental study integrating the first phase of the McA.
For this purpose, the data from the assessment tool evaluating the impact of AI applications on students’ learning performance at the preschool education level are presented in Table 4.
A significant difference is observed between the pre-test and post-test scores of the experimental group. A difference of 9.29 points is observed between the pre-test mean (18.64) and the post-test mean (27.93), in favor of the post-test (p < 0.05). As a result, it can be stated that the applied interventions positively contributed to students’ learning performance.
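As a hedged illustration, the sketch below shows one common way such a paired pre-test/post-test comparison could be computed (a paired-samples t-test via SciPy); the study does not specify which statistical test was used, and the 14 score pairs below are invented placeholders, not the actual student data.

```python
# Minimal sketch: paired-samples comparison of pre-test and post-test scores,
# as in Table 4. The 14 score pairs below are invented placeholders, not the
# actual student data from this study.
from scipy import stats

pre  = [17, 19, 18, 20, 16, 19, 18, 17, 21, 19, 18, 20, 17, 22]
post = [27, 29, 26, 30, 25, 28, 27, 26, 31, 28, 27, 30, 26, 31]

t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean gain = {mean_gain:.2f}")
```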

3.3. Thematic Findings from Observers’ Opinions After the Experimental Process

In this section, the findings derived from observers’ opinions regarding the use of the AI tool are interpreted. In the final holistic information phase of the research, the observers’ comments are grouped under two thematic headings. These themes can be expressed as “contribution to the social-emotional dimension” and “problems encountered in AI applications and their solutions”.
When examining Figure 7, the codes related to the “contribution to the social-emotional dimension” of AI applications are represented in the model. The observing teachers indicated that the application sparked interest and curiosity among the students. Regarding this code, G1 stated, “Introducing the AI tool sparked curiosity in the students”, while G2 expressed, “Using the AI tool during the lesson increased students’ interest”. The observers also noted that the application enhanced students’ enthusiasm for learning, made learning enjoyable, and fostered a positive mood. G1 commented, “I observed that the application significantly increased the students’ enthusiasm for the learning process”, while G2 mentioned, “Thanks to the activities, the students’ overall mood became more positive”. Additionally, the AI application is said to have fostered social and emotional development by promoting responsibility, increasing motivation, enhancing collaboration, boosting self-confidence, improving communication skills, and increasing students’ willingness to participate in class. The issues encountered by the observers during the AI application process and their proposed solutions are shown in Figure 8.

3.4. Findings Related to the Holistic Information Stage

At this stage, the results of the pre-complementary and post-complementary knowledge phases are combined and expressed. In the pre-complementary knowledge phase, it is observed that AI applications positively influence the field of social sciences. The effect size of AI applications in social sciences research is g = 0.74. Additionally, the moderator analysis indicates that the greatest effect size is observed in university-level and other educational stages, which has encouraged researchers to implement AI applications in preschool education. No studies on preschool education were found in the included works. In this context, the implementation of AI applications in preschool education is seen as an indication of a gap in the literature.
In the qualitative part of the pre-complementary knowledge phase, which consists of meta-thematic analysis, it can be stated that despite some issues with AI applications, they provide significant contributions to educational environments, innovation, and the technological development process. In the second part of the study, an experimental process was applied to AI applications in preschool education. The results of the post-complementary phase show that there is a significant difference between the pre-test and post-test scores of the experimental group.
Finally, when evaluating the thematic process of the study, it is concluded that, according to the observer teachers’ feedback, AI applications contributed significantly to the socio-emotional dimension by increasing students’ self-confidence and motivation, making learning enjoyable, improving responsibility and communication skills, and fostering a positive mood. The pre-complementary and post-complementary findings support each other and merge coherently to reach a holistic conclusion, indicating that the research findings are consistent.

4. Discussion and Conclusions

The findings of this research have been discussed within the framework of the McA. Accordingly, the results of the pre-complementary phase, which includes the meta-analysis and meta-thematic analysis, are presented first. Following this, artificial intelligence (AI) applications were implemented, and a measurement tool was applied. The findings obtained from teachers’ observations were combined with the other data, and the results of the holistic phase, which includes recommendations, were presented. To explain the effects of AI applications more systematically and comprehensively, the process was approached in phases.

4.1. Results of the Pre-Complementary Knowledge Phase

Based on the meta-analysis of the documents included in the research, it was determined that AI applications have a positive effect on the field of social science (g = 0.74). This result highlights that AI applications are effective in the social sciences. AI methods have advanced diagnosis, the understanding of human development, and data management in the behavioral and social sciences [105]. In recent years, research on big data and AI in the social sciences has grown exponentially, with management and psychology leading the way, alongside emerging interdisciplinary areas such as the social sciences and geography [106]. AI improves social science research by providing accurate and efficient data analysis, enhancing decision-making, and promoting responsible and ethical development [107]. Large language models (LLMs) are transforming social science research by simulating human-like responses and allowing for large-scale testing of theories and hypotheses about human behavior [71,108]. Additionally, AI increases the effectiveness of data management in social and human services by offering advanced tools for literature review, data collection, and visualization [105]. Generative AI provides new approaches to studying human behavior through surveys, online experiments, and automated content analysis [70], while its capabilities in text, image, and sound processing support decision-making processes in social research with greater accuracy and efficiency [107].
To examine the effects of AI applications in the social sciences in more detail, moderator analyses were conducted. The overall effect sizes for the teaching-level and application-duration moderators (g = 0.77 and g = 0.32, respectively) indicate that AI applications have a strong effect when studies are grouped by teaching level and a moderate effect when grouped by application duration. In other words, the impact of AI applications varies with the duration of the application, but this variation is less pronounced than the differences observed across teaching levels.
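To make the effect-size metric used throughout the meta-analysis concrete, the following minimal sketch (in Python, with invented group means and standard deviations rather than data from the included studies) shows how Hedges' g is computed for a single study, i.e., a standardized mean difference with the small-sample bias correction described in [83].

    import math

    def hedges_g(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
        """Hedges' g: bias-corrected standardized mean difference for one study."""
        # Pooled standard deviation of the experimental and control groups
        sp = math.sqrt(((n_e - 1) * sd_e ** 2 + (n_c - 1) * sd_c ** 2) / (n_e + n_c - 2))
        d = (mean_e - mean_c) / sp                   # Cohen's d
        correction = 1 - 3 / (4 * (n_e + n_c) - 9)   # small-sample correction factor
        return correction * d

    # Hypothetical example values (not taken from any study in the analysis)
    print(round(hedges_g(27.9, 1.5, 14, 23.1, 2.0, 14), 2))

Pooling such study-level g values under fixed- and random-effects models yields the summary estimates reported in Table 2.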
Moreover, the positive results obtained from the meta-analysis were found to align with the findings from the meta-thematic analysis. The themes identified in the literature review, particularly AI’s “contribution to education”, are supported by previous studies. Ref. [109] noted that AI technologies, which are increasingly widespread in education, show promise for improving student learning performance and experiences. AI tools contribute to the learning process by providing feedback and offering flexible, personalized learning experiences [110]. Furthermore, studies have shown that AI applications contribute to innovation and technological advancements. AI transforms work operations by enhancing human tasks, improving productivity, and fostering innovation across various sectors [111]. In the financial sector, it improves decision-making and customer service through automation, analytics, and algorithmic trading [112,113], while deep learning and neural network technologies assist in predicting diseases, patient care, and disease outbreaks [114,115].

4.2. Results of the Post-Complementary Knowledge Phase

In the post-complementary knowledge phase of the study, a significant difference was found between the pre-test and post-test scores of the experimental group in the use of AI applications in early childhood education (pre-test: x̄ = 18.64; post-test: x̄ = 27.93). This result demonstrates that AI applications had a substantial impact on achieving the targeted learning objectives in early childhood education, highlighting the effectiveness of AI in the learning process. The positive effects of AI applications on learning have been observed in many studies. In early childhood education, AI contributes to teaching and learning processes through collaboration between humans and machines [68] and provides adaptive learning environments that enhance personalized learning experiences [116].
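Table 4 reports a Levene test and a t statistic with df = 26, which corresponds to comparing the two sets of 14 scores. The sketch below reproduces that style of analysis in Python on hypothetical scores; the numbers are invented for illustration and are not the study's raw data.

    from scipy import stats

    # Hypothetical pre-test and post-test scores for 14 children (illustration only)
    pre = [17, 19, 18, 20, 16, 19, 18, 21, 17, 19, 18, 20, 19, 18]
    post = [27, 29, 28, 30, 26, 28, 27, 29, 28, 30, 27, 28, 29, 26]

    # Levene's test for equality of variances, as reported in Table 4
    levene_f, levene_p = stats.levene(pre, post)

    # t-test on the two score sets (df = 14 + 14 - 2 = 26, matching the table layout);
    # for repeated measures on the same children, stats.ttest_rel would also be defensible
    t_stat, p_value = stats.ttest_ind(pre, post)

    print(f"Levene F = {levene_f:.2f}, p = {levene_p:.2f}")
    print(f"t(26) = {t_stat:.2f}, p = {p_value:.4f}")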
Thematic analysis based on observations from the teachers involved in the experimental process resulted in two key themes. First, AI applications support students’ social and emotional development, which is also corroborated by previous research. AI enhances creativity, problem-solving skills, and social skills such as collaboration and communication by digitizing game activities and providing real-time monitoring of children’s developmental progress [69,117]. AI systems create personalized learning paths through machine learning algorithms, taking individuals’ strengths and weaknesses into account and fostering the development of self-esteem, confidence, and critical thinking skills through interactive activities and educational games [118]. The second theme that emerged from the observations relates to the challenges and solutions encountered in AI applications. Issues such as technology addiction [119,120], lack of social integration [121], and effects on teacher–student interaction [111] were identified as challenges in AI implementation. Observer feedback played a critical role in validating the study’s conclusions by providing qualitative insights into student engagement, learning behaviors, and social interactions. Teachers reported increased enthusiasm, curiosity, and active participation among students using AI tools, reinforcing the positive impact reflected in the pre-test and post-test results. Observers also noted that AI applications supported collaborative learning, as students frequently engaged in discussions about their AI-assisted activities. However, some concerns were raised regarding attention lapses due to prolonged screen exposure, highlighting the need for structured AI integration in early childhood classrooms. These observations confirm that while AI enhances learning outcomes, it should be used as a complementary tool rather than a primary instructional method.

4.3. Results in the Complementary Knowledge Phase

In the final stage of the McA process, the results from the pre-complementary and post-complementary knowledge phases were combined to assess the consistency of the findings. The results of the measurement tool administered after the experimental process indicate a positive effect on students' learning performance, and these findings align with the data from the pre-complementary knowledge phase. This consistency suggests that the experimental process had a positive effect on learning outcomes. The thematic analyses conducted in this phase also revealed that AI applications enhanced students' interest, curiosity, communication skills, friendship relationships, and willingness to cooperate. These outcomes helped students transfer their social and emotional competencies to real-life contexts, thus contributing to the improvement of their learning performance. The results obtained from the measurement tool are consistent with the findings from the meta-analysis and meta-thematic analysis. Overall, all findings support each other, indicating that AI applications make a meaningful contribution both to the learning process and to the various factors influencing learning.

4.4. Limitations

While the McA offers a comprehensive approach, there are several limitations to consider. The meta-analysis and meta-thematic analysis processes conducted in the study were limited to specific databases. The experimental process focused on the use of AI applications in the learning processes of preschool-level students. The research was also constrained to the application process for preschool students and the expected learning outcomes for the topics covered. Additionally, evaluating the effectiveness of AI applications in different subject areas and educational levels within the McA framework could provide further insights. The small sample size (n = 14) is also a limitation, as it may not fully represent the broader preschool population. However, the experimental design and mixed-methods approach mitigate some validity concerns by triangulating findings from multiple data sources. Future research should replicate the study with a larger, more diverse sample to enhance generalizability.

4.5. Suggestions

To enhance the effectiveness of AI applications in preschool education, the development of teachers’ digital pedagogical competence should be prioritized. In this regard, in-service training programs for teachers on the integration of AI applications into educational processes are recommended. These training programs would help teachers better understand the pedagogical potential of AI tools and use them in alignment with the lesson content. Furthermore, AI-supported learning environments should be designed to accommodate students’ individual differences and developmental characteristics, promoting more active participation in the learning process.
Considering the general findings of the study, it is recommended that environments conducive to the use of AI applications be created in educational settings, as the results showed that AI positively contributed to educational environments, innovation, technological development, and students’ socio-emotional aspects. Moreover, it is suggested that AI be integrated into courses at different educational levels in line with the results regarding its positive contributions to educational environments. Lastly, measures should be taken to prevent AI from fostering technological dependency, being perceived as mere games, or hindering face-to-face communication.

Author Contributions

Conceptualization, Y.D. and V.B.; methodology, V.B.; software, Y.T.; validation, S.Ö., H.V.A. and Y.D.; formal analysis, Y.T.; investigation, S.Ö.; resources, H.V.A.; data curation, V.B.; writing—original draft preparation, Y.T.; writing—review and editing, Y.D.; visualization, S.Ö.; supervision, V.B.; project administration, Y.D.; funding acquisition, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of GAZİANTEP UNIVERSITY (E-87841438-302.08.01-581265-02.12.2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The data presented in this study are available on request due to restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A1. The Kappa Agreement Values

Meta-Thematic Analysis Part

Contributions to Education Environments
           K2 (+)   K2 (−)   Σ
K1 (+)       18        2     20
K1 (−)        3       13     16
Σ            21       15     36
Kappa: 0.717, p: 0.000

Contributions to Innovation and Technological Advancement
           K2 (+)   K2 (−)   Σ
K1 (+)       22        1     23
K1 (−)        1       17     18
Σ            23       18     41
Kappa: 0.901, p: 0.000

Problems Encountered and Suggestions for Solution
           K2 (+)   K2 (−)   Σ
K1 (+)       31        2     33
K1 (−)        2       18     20
Σ            33       20     53
Kappa: 0.839, p: 0.000

Experimental-Qualitative Part

Contribution to Social-Affective Dimension
           K2 (+)   K2 (−)   Σ
K1 (+)       16        0     16
K1 (−)        1        9     10
Σ            17        9     26
Kappa: 0.917, p: 0.000

Problems Encountered and Suggestions for Solution
           K2 (+)   K2 (−)   Σ
K1 (+)       16        1     17
K1 (−)        1        7      8
Σ            17        8     25
Kappa: 0.816, p: 0.000
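As a transparency check on the values above, the short Python sketch below computes Cohen's kappa directly from a 2 × 2 coder-agreement table; entering the counts from the "Contributions to Education Environments" table (18, 2, 3, 13) reproduces the reported value of 0.717.

    def cohens_kappa(a, b, c, d):
        """Cohen's kappa from a 2x2 agreement table.
        a: both coders +; b: K1 + / K2 -; c: K1 - / K2 +; d: both coders -."""
        n = a + b + c + d
        p_observed = (a + d) / n
        # Chance agreement computed from the two coders' marginal totals
        p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
        return (p_observed - p_chance) / (1 - p_chance)

    # Counts from the "Contributions to Education Environments" table above
    print(round(cohens_kappa(18, 2, 3, 13), 3))  # 0.717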

Appendix A2. Key Findings of the Studies Included in the Meta-Thematic Analysis

M1. This study examines the perceptions of social actors toward artificial intelligence and robotics technologies. The most notable findings are as follows:
  • Significant concerns have emerged regarding the possibility that artificial intelligence and robotics technologies could erode human relationships and weaken interpersonal interactions.
  • Participants expressed that artificial intelligence could be used as an extension of the human body in the future, potentially leading to a machine-driven way of life.
  • There are significant uncertainties about the control of artificial intelligence and robotics technologies, with individuals fearing that these systems may be monopolized by specific groups.
  • Although artificial intelligence is currently one of the least monitored areas in the healthcare sector, it is expected to become widely adopted in healthcare services in the future.
M2. This study includes the perspectives of doctors, nurses, and patients regarding artificial intelligence and robotic nurses. The key findings are as follows:
  • Participants stated that artificial intelligence and robotic nurses could assist in healthcare services but may be inadequate in situations requiring empathy and a human touch.
  • It was emphasized that the widespread adoption of robotic nurses could raise concerns about job security for healthcare workers, yet it could also reduce workload and contribute to more efficient healthcare services.
  • Participants expressed that artificial intelligence and robotic nurses might create ethical issues, particularly regarding uncertainties in responsibility-sharing.
M3. This study examines the expectations, concerns, and impacts of artificial intelligence use in healthcare. The most notable findings are as follows:
  • Participants believe that artificial intelligence could play a supportive role in medical diagnoses and patient care but also consider the exclusion of human factors in fully automated systems to be dangerous.
  • Among the advantages of artificial intelligence usage are rapid diagnosis, data-driven decision-making, and cost reduction; however, the greatest concerns revolve around data security and ethical responsibility.
  • Participants emphasized that artificial intelligence systems should collaborate with humans in the healthcare field; otherwise, professional roles and patient safety could be at risk.
M4. This study examines the impact of artificial intelligence on managers’ skills. The most notable findings are as follows:
  • While artificial intelligence supports managers in skills such as information gathering, decision-making, and time management, it does not directly influence human-specific abilities such as leadership, creativity, and imagination.
  • Artificial intelligence can take over simple decision-making and information-gathering processes, but managers’ roles in strategic thinking and leadership based on emotional intelligence remain essential.
  • To work effectively with artificial intelligence, managers need to have basic algorithmic knowledge, be aware of ethical data usage, and be capable of managing interdisciplinary collaborations.
M5. This study examines the impact of AI-powered chatbots on social studies education. The most notable findings are as follows:
  • Students reported that chatbot-supported learning increased their engagement in lessons and motivated them more.
  • Teachers and students evaluated the pedagogical and design features of chatbots positively but emphasized the need to enhance their communication capabilities and incorporate voice interaction features.

References

  1. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson Education: Cranbury, NJ, USA, 2021. [Google Scholar]
  2. Howard, J. Artificial intelligence: Implications for the future of work. Am. J. Ind. Med. 2019, 62, 917–926. [Google Scholar] [PubMed]
  3. McCarthy, J. What Is Artificial Intelligence? Available online: https://www-formal.stanford.edu/jmc/whatisai.pdf (accessed on 11 January 2019).
  4. Chatterjee, S.; Bhattacharjee, K.K. Adoption of artificial intelligence in higher education: A quantitative analysis using structural equation modelling. Educ. Inf. Technol. 2020, 25, 3443–3463. [Google Scholar]
  5. Popenici, S.A.D.; Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhanc. Learn. (RPTEL) 2017, 12, 1–13. [Google Scholar]
  6. Turgut, K. Yapay zekâ’nın yüksek öğretimde sosyal bilim öğretimine entegrasyonu. Ank. Uluslararası Sos. Bilim. Derg. (Yapay Zekâ ve Sos. Bilim. Öğretimi) 2024, 1–7. [Google Scholar]
  7. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar]
  8. Tambuskar, S. Challenges and benefits of 7 ways artificial intelligence in education sector. Rev. Artif. Intell. Educ. 2022, 3, e03. [Google Scholar]
  9. Zhou, L.; Pan, S.; Wang, J.; Vasilakos, A.V. Machine learning on big data: Opportunities and challenges. Neurocomputing 2017, 237, 350–361. [Google Scholar]
  10. Nan, J. Research of applications of artificial intelligence in preschool education. J. Phys. Conf. Ser. 2020, 1607, 012119. [Google Scholar]
  11. Sun, W. Design of auxiliary teaching system for preschool education specialty courses based on artificial intelligence. Math. Probl. Eng. 2022. [Google Scholar] [CrossRef]
  12. Rezaev, A.V.; Tregubova, N.D. Are sociologists ready for ‘artificial sociality’? current issues and future prospects for studying artificial intelligence in the social sciences. Monit. Public Opin. Econ. Soc. Change 2018, 5, 91–108. [Google Scholar]
  13. Jarek, K.; Mazurek, G. Marketing and artificial intelligence. Cent. Eur. Bus. Rev. 2019, 8, 46–55. [Google Scholar]
  14. Rezk, S.M.M. The role of artificial intelligence in graphic design. J. Art Des. Music 2023, 2, 1–12. [Google Scholar]
  15. Rebala, G.; Ravi, A.; Churiwala, S. An Introduction to Machine Learning; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  16. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson Education: Cranbury, NJ, USA, 2010. [Google Scholar]
  17. Agarwal, R.; Dhar, V. Big data, data science and analytics: The opportunity and challenge for IS research. Inf. Syst. Res. 2014, 25, 443–448. [Google Scholar]
  18. Chen, Y.; Wu, X.; Hu, A.; He, G.; Ju, G. Social prediction: A new research paradigm based on machine learning. J. Chin. Sociol. 2021, 8, 1–21. [Google Scholar]
  19. Liu, Z. Sociological perspectives on artificial intelligence: A typological reading. Sociol. Compass 2021, 15, e12851. [Google Scholar]
  20. Mercaldo, F.; Nardone, V.; Santone, A. Diabetes mellitus affected patients classification and diagnosis through machine learning techniques. Procedia Comput. Sci. 2017, 112, 2519–2528. [Google Scholar]
  21. Waljee, A.K.; Higgins, P.D.R. Machine learning in medicine: A primer for physicians. Am. J. Gastroenterol. 2010, 105, 1224–1226. [Google Scholar]
  22. Mullainathan, S.; Spiess, J. Machine learning: An applied econometric approach. J. Econ. Perspect. 2017, 31, 87–106. [Google Scholar]
  23. Cavalcante, R.C.; Brasileiro, R.C.; Souza, V.L.; Nobrega, J.P.; Oliveira, A.L. Computational intelligence and financial markets: A survey and future directions. Expert Syst. Appl. 2016, 55, 194–211. [Google Scholar]
  24. Aggarwal, M.; Murty, M.N. Deep Learning. In Springer-Briefs in Applied Sciences and Technology; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  25. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar]
  26. Zhang, J.; Feng, S. Machine learning modeling: A new way to do quantitative research in social sciences in the Era of AI. J. Web Eng. 2021, 20, 281–302. [Google Scholar]
  27. Apostolopoulos, I.D.; Groumpos, P.P. Fuzzy cognitive maps: Their role in explainable artificial intelligence. Appl. Sci. 2023, 13, 3412. [Google Scholar] [CrossRef]
  28. Lazer, D.; Pentland, A.; Adamic, L.; Aral, S.; Barabasi, A.L.; Brewer, D.; Van Alstyne, M. Computational Social Science. Science 2009, 323, 721–723. [Google Scholar] [PubMed]
  29. Veltri, G.A. Big data is not only about data: The two cultures of modelling. Big Data Soc. 2017, 4. [Google Scholar] [CrossRef]
  30. Castells, M. The Rise of the Network Society: The Information Age: Economy, Society, and Culture; Wiley-Blackwell Publishing: Hoboken, NJ, USA, 2010. [Google Scholar]
  31. Chen, M.; Liu, Q.; Huang, S.; Dang, C. Environmental cost control system of manufacturing enterprises using artificial intelligence based on value chain of circular Economy. Enterp. Inf. Syst. 2020, 16, 1856422. [Google Scholar]
  32. Probst, P.; Wright, M.N.; Boulesteix, A. Hyperparameters and tuning strategies for random forest. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1301. [Google Scholar]
  33. Sun, Z.; Anbarasan, M.; Kumar, D.P. Design of online intelligent English teaching platform based on artificial intelligence techniques. Comput. Intell. 2020, 37, 1166–1180. [Google Scholar]
  34. Longo, L. Empowering qualitative research methods in education with artificial intelligence. In Proceedings of the World Conference on Qualitative Research, Barcelona, Spain, 11 October 2020. [Google Scholar]
  35. Kumar, R.; Mishra, B.K. (Eds.) Natural Language Processing in Artificial Intelligence, 1st ed.; Apple Academic Press: Palm Bay, FL, USA, 2020. [Google Scholar]
  36. Boumans, J.W.; Trilling, D. Taking Stock of the Toolkit: An Overview of Relevant Automated Content Analysis Approaches and Techniques for Digital Journalism Scholars; Routledge: London, UK, 2018; pp. 8–23. [Google Scholar]
  37. Abram, M.D.; Mancini, K.T.; Parker, R.D. Methods to integrate natural language processing into qualitative research. Int. J. Qual. Methods 2020, 19, 1609406920984608. [Google Scholar] [CrossRef]
  38. Mahmood, A.; Wang, J.; Yao, B.; Wang, D.; Huang, C. LLM-powered conversational voice assistants: Interaction patterns, opportunities, challenges, and design guidelines. arXiv 2023, arXiv:2309.13879. [Google Scholar]
  39. Paulus, T.M.; Marone, V. “In Minutes Instead of Weeks”: Discursive constructions of generative AI and qualitative data analysis. Qual. Inq. 2024, 10778004241250065. [Google Scholar] [CrossRef]
  40. Rietz, T.; Maedche, A. Cody: An AI-based system to semi-automate coding for qualitative research. In Proceedings of the CHI ’21: CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Article 394. Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–14. [Google Scholar]
  41. Çelik, S. Understanding Data Science. J. Curr. Res. Soc. Sci. 2019, 9, 235–256. [Google Scholar]
  42. Isichei, B.C.; Leung, C.K.; Nguyen, L.T.; Morrow, L.B.; Ngo, A.T.; Pham, T.D.; Cuzzocrea, A. Sports Data Management, Mining, and Visualization; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 450 LNNS, pp. 141–153. [Google Scholar]
  43. Owan, V.; Abang, K.B.; Idika, D.O.; Etta, E.O.; Bassey, B.A. Exploring the potential of artificial intelligence tools in educational measurement and assessment. EURASIA J. Math. Sci. Technol. Educ. 2023, 19, em2307. [Google Scholar]
  44. Song, P.; Wang, X. A bibliometric analysis of worldwide educational artificial intelligence research development in recent twenty years. Asia Pac. Educ. Rev. 2020, 21, 473–486. [Google Scholar]
  45. Baker, T.; Smith, L. Educ-AI-Tion Rebooted? Exploring the Future of Artificial Intelligence in Schools and Colleges; Nesta: London, UK, 2019. [Google Scholar]
  46. Chassigonal, M.; Khoroshvin, A.; Klimova, A. Artificial intelligence trends in education: A narrative overview. Procedia Comput. Sci. 2018, 136, 16–24. [Google Scholar]
  47. Mondal, K. A synergy of artificial intelligence and education in the 21st-century classrooms. In Proceedings of the 2019 International Conference on Digitization (ICD), Sharjah, United Arab Emirates, 18–19 November 2019; pp. 68–70. [Google Scholar]
  48. Chan, K.S.; Zary, N. Applications and challenges of implementing artificial intelligence in medical education: An integrative review. JMIR Med. Educ. 2019, 5, e13930. [Google Scholar]
  49. Miao, F.; Holmes, W.; Huang, R.; Zhang, H. AI and Education: Guidance for Policymakers; UNESCO Publishing: Paris, France, 2021. [Google Scholar]
  50. Mintz, J.; Holmes, W.; Liu, L.; Perez-Ortiz, M. Artificial intelligence and K-12 education: Possibilities, pedagogies, and risks. Technol. Pedagog. Educ. 2023, 40, 325–333. [Google Scholar]
  51. Smutny, P.; Schreiberova, P. Chatbots for learning: A review of educational Chatbots for the Facebook Messenger. Comput. Educ. 2020, 151, 103862. [Google Scholar]
  52. Nilsson, N.J. The Quest for Artificial Intelligence: A History of Ideas and Achievements; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  53. Su, J.; Ng, D.T.K.; Chu, S.K.W. Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Comput. Educ. Artif. Intell. 2023, 4, 100124. [Google Scholar]
  54. Woo, H.; LeTendre, G.K.; Pham-Shouse, T.; Xiong, Y. The Use of social robots in classrooms: A review of field-based studies. Educ. Res. Rev. 2021, 33, 100388. [Google Scholar]
  55. Young, M.F.; Slota, S.; Cutter, A.B.; Jalette, G.; Mullin, G.; Lai, B.; Simeoni, Z.; Tran, M.; Yukhymenko, M. Our princess is in another castle: A review of trends in serious gaming for education. Rev. Educ. Res. 2012, 82, 61–89. [Google Scholar]
  56. Ding, Y. Performance analysis of public management teaching practice training based on artificial intelligence technology. J. Intell. Fuzzy Syst. 2021, 40, 3787–3800. [Google Scholar] [CrossRef]
  57. Ma, L. An immersive context teaching method for college English based on artificial intelligence and machine learning in virtual reality technology. Mob. Inf. Syst. 2021, 2021, 1–7. [Google Scholar]
  58. Lin, P.; Van Brummelen, J.; Lukin, G.; Williams, R.; Breazeal, C. Zhorai: Designing a conversational agent for children to explore machine learning concepts. Proc. AAAI Conf. Artif. Intell. 2020, 34, 13381–13388. [Google Scholar]
  59. Tseng, T.; Murai, Y.; Freed, N.; Gelosi, D.; Ta, T.D.; Kawahara, Y. PlushPal: Storytelling with interactive plush toys and machine learning. In Proceedings of the IDC ’21: Interaction Design and Children, Athens, Greece, 24–30 June 2021; pp. 236–245. [Google Scholar]
  60. Vartiainen, H.; Tedre, M.; Valtonen, T. Learning machine learning with very young children: Who is teaching whom? Int. J. Child-Comput. Interact. 2020, 25, 100182. [Google Scholar]
  61. Amershi, S.; Cakmak, M.; Knox, W.B.; Kulesza, T. Power to the people: The role of humans in interactive machine learning. AI Mag. 2014, 35, 105–120. [Google Scholar] [CrossRef]
  62. Jin, L. Investigation on potential application of artificial intelligence in preschool children’s education. J. Phys. Conf. Ser. 2019, 1288, 012072. [Google Scholar]
  63. Yi, H.; Liu, T.; Lan, G. The key artificial intelligence technologies in early childhood education: A review. Artif. Intell. Rev. 2024, 57, 12. [Google Scholar] [CrossRef]
  64. Jiang, X. Design of artificial intelligence-based multimedia resource search service system for preschool education. In Proceedings of the 2022 International Conference on Information System, Computing and Educational Technology (ICISCET), Montreal, QC, Canada, 23–25 May 2022; pp. 76–78. [Google Scholar]
  65. Lee, J. Coding in early childhood. Contemp. Issues Early Child. 2020, 21, 266–269. [Google Scholar]
  66. Dongming, L.; Wanjing, L.; Shuang, C.; Shuying, Z. Intelligent robot for early childhood education. In Proceedings of the 2020 8th International Conference on Information and Education Technology, Virtual Conference, 28–30 March 2020. [Google Scholar]
  67. Williams, R.; Park, H.; Breazeal, C. A Is for artificial intelligence: The impact of artificial intelligence activities on young children’s perceptions of robots. In Proceedings of the CHI ’19: CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar]
  68. Crescenzi-Lanna, L. Literature review of the reciprocal value of artificial and human intelligence in early childhood education. J. Res. Technol. Educ. 2022, 55, 21–33. [Google Scholar]
  69. Fikri, Y.; Rhalma, M. Artificial intelligence (AI) in early childhood education (ECE): Do effects and interactions matter? Int. J. Relig. 2024, 5, 7536–7545. [Google Scholar]
  70. Bail, C. Can generative AI improve social science? Proc. Natl. Acad. Sci. USA 2024, 121, e2314021121. [Google Scholar] [CrossRef] [PubMed]
  71. Grossmann, I.; Feinberg, M.; Parker, D.; Christakis, N.; Tetlock, P.; Cunningham, W. AI and the transformation of social science research. Science 2023, 380, 1108–1109. [Google Scholar] [PubMed]
  72. Batdı, V. Metodolojik Çoğulculukta Yeni Bir Yönelim: Çoklu Bütüncül Yaklaşım. Sos. Bilim. Derg. 2016, 50, 133–147. [Google Scholar]
  73. Batdı, V. Eğitimde Yeni Bir Yönelim: Mega-Çoklu Bütüncül Yaklaşım ve Beyin Temelli Öğrenme Örnek Uygulaması, 2nd ed.; IKSAD Publishing House: Ankara, Türkiye, 2018. [Google Scholar]
  74. Glass, G.V. Primary, Secondary, and Meta-Analysis of Research. Educ. Res. 1976, 5, 3–8. [Google Scholar]
  75. Borenstein, M.; Hedges, L.V.; Higgins, J.P.T.; Rothstein, H.R. Introduction to Meta-Analysis, 1st ed.; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  76. Lipsey, M.W.; Wilson, D.B. Practical Meta-Analysis; Sage Publications, Inc.: Thousand Oaks, CA, USA, 2001. [Google Scholar]
  77. Batdı, V. (Ed.) Meta-Tematik Analiz. Meta-Tematik Analiz: Örnek Uygulamalar; Anı Yayıncılık: Ankara, Türkiye, 2019; pp. 10–76. [Google Scholar]
  78. Patton, M.Q. Nitel Araştırma ve Değerlendirme Yöntemleri; Bütün, M., Demir, S.B., Çev, Eds.; PegemA Yayıncılık: Ankara, Türkiye, 2014. [Google Scholar]
  79. Batdı, V. (Ed.) Introduction to Meta-Thematic Analysis; Meta-Thematic Analysis in Research Process; Anı Yayıncılık: Ankara, Türkiye, 2020; pp. 1–38. [Google Scholar]
  80. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Prisma, G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 6, e1000097. [Google Scholar]
  81. Rosenberg, M.; Adams, D.; Gurevitch, J. MetaWin: Statistical Software for Meta-Analysis, Version 2.0; Sinauer Associates Inc.: Sunderland, MA, USA, 2000. [Google Scholar]
  82. Cohen, L.; Manion, L.; Morrison, K. Research Methods in Education, 6th ed.; Routledge: London, UK, 2007. [Google Scholar]
  83. Hedges, L. Distribution theory for Glass’s estimator of effect size and related estimates. J. Educ. Stat. 1981, 6, 107–112. [Google Scholar]
  84. Thalheimer, W.; Cook, S. How to Calculate Effect Sizes from Published Research Articles: A Simplified Methodology; Work-Learning Research Publication: New York, NY, USA, 2002. [Google Scholar]
  85. Ried, K. Interpreting and understanding meta-analysis graphs: A practical guide. Aust. Fam. Physician 2006, 35, 635–638. [Google Scholar]
  86. Schmidt, F.L.; Oh, I.-S.; Hayes, T.L. Fixed- versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results. Br. J. Math. Stat. Psychol. 2009, 62, 97–128. [Google Scholar]
  87. Higgins, J.P.; Thompson, S.G. Quantifying heterogeneity in a meta-analysis. Stat. Med. 2002, 21, 1539–1558. [Google Scholar]
  88. Higgins, J.P.T.; Thompson, S.G.; Deeks, J.J.; Altman, D.G. Measuring inconsistency in meta-analyses. BMJ 2003, 327, 557–560. [Google Scholar] [CrossRef]
  89. Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 1979, 86, 638–641. [Google Scholar]
  90. Wilson, D.B. Systematic Coding. In The Handbook of Research Synthesis and Meta-Analysis; Cooper, H., Hedges, L.V., Valentine, J.C., Eds.; Russell Sage Foundation: Manhattan, NY, USA, 2009; pp. 159–176. [Google Scholar]
  91. Bangert-Drowns, R.L.; Rudner, L.M. Meta-Analysis in Educational Research; ERIC Clearinghouse on Tests, Measurement, and Evaluation: Washington, DC, USA, 1991. [Google Scholar]
  92. Miles, M.B.; Huberman, A.M. Qualitative Data Analysis: An Expanded Sourcebook; Sage Publications: Thousand Oaks, CA, USA, 1994. [Google Scholar]
  93. Silverman, D. Doing Qualitative Research: A Practical Handbook; Sage Publications: Thousand Oaks, CA, USA, 2005. [Google Scholar]
  94. Duval, S.; Tweedie, R. Trim and Fill: A simple funnel-plot based method of testing and adjusting for publication bias in meta-analysis. Biometrics 2000, 56, 455–463. [Google Scholar] [CrossRef] [PubMed]
  95. Sterne, J.A.; Harbord, R.M. Funnel plots in meta-analysis. Stata J. Promot. Commun. Stat. Stata 2004, 4, 127–141. [Google Scholar]
  96. Sterne, J.A.C.; Sutton, A.J.; Ioannidis, J.P.A.; Terrin, N.; Jones, D.R.; Lau, J.; Carpenter, J.; Rücker, G.; Harbord, R.M.; Schmid, C.H.; et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011, 343, d4002. [Google Scholar]
  97. Ivankova, N.V.; Creswell, J.W.; Stick, S.L. Using mixed-methods sequential explanatory design: From theory to practice. Field Methods 2006, 18, 3–20. [Google Scholar]
  98. Bailey, K.D. Methods of Social Research, 2nd ed.; The Free Press: New York, NY, USA, 1982. [Google Scholar]
  99. Yıldırım, A.; Şimşek, H. Sosyal Bilimlerde Nitel Araştırma Yöntemleri; Seçkin Yayıncılık: Ankara, Türkiye, 2006. [Google Scholar]
  100. Merriam, S.B. Qualitative Research: A Guide to Design and Implementation; John Wiley & Sons Inc.: New York, NY, USA, 2013. [Google Scholar]
  101. Viera, A.J.; Garrett, J.M. Understanding interobserver agreement: The Kappa statistic. Fam. Med. 2005, 37, 360–363. [Google Scholar]
  102. Johnson, R.B.; Onwuegbuzie, A.J. Mixed methods research: A research paradigm whose time has come. Educ. Res. 2004, 33, 14–26. [Google Scholar]
  103. Tashakkori, A.; Creswell, J.W. Editorial: Exploring the nature of research questions in mixed methods research. J. Mix. Methods Res. 2007, 1, 207–211. [Google Scholar]
  104. Cooper, H.; Hedges, L.V.; Valentine, J.C. The Handbook of Research Synthesis and Meta-Analysis; Russell Sage Publication: Washington, DC, USA, 2009. [Google Scholar]
  105. Robila, M.; Robila, S. Applications of artificial intelligence methodologies to behavioral and social sciences. J. Child Fam. Stud. 2019, 29, 2954–2966. [Google Scholar]
  106. Liao, H.; Wang, Z.; Liu, Y. Exploring the cross-disciplinary collaboration: A scientometric analysis of social science research related to artificial intelligence and big data application. IOP Conf. Ser. Mater. Sci. Eng. 2020, 806, 012019. [Google Scholar]
  107. Hernández-Lugo, M.d.l.C. Artificial Intelligence as a tool for analysis in Social Sciences: Methods and applications. LatIA 2024, 2, 11. [Google Scholar] [CrossRef]
  108. Xu, R.; Sun, Y.; Ren, M.; Guo, S.; Pan, R.; Lin, H.; Sun, L.; Han, X. AI for social science and social science of AI: A survey. Inf. Process. Manag. 2024, 61, 103665. [Google Scholar] [CrossRef]
  109. Hwang, G.J.; Xie, H.; Wah, B.W.; Gašević, D. Vision, challenges, roles and research issues of artificial intelligence in education. Comput. Educ. Artif. Intell. 2020, 1, 100001. [Google Scholar] [CrossRef]
  110. Yetişensoy, O.; Karaduman, H. The effect of AI-powered Chatbots in social studies education. Educ. Inf. Technol. 2024, 1, 1–35. [Google Scholar] [CrossRef]
  111. Schiff, D. Out of the laboratory and into the classroom: The future of artificial intelligence in education. AI Soc. 2020, 36, 331–348. [Google Scholar] [CrossRef]
  112. Sharma, B. Research paper on artificial intelligence. Int. J. Sci. Res. Eng. Manag. 2024, 8, 1–10. [Google Scholar] [CrossRef]
  113. Soni, P. A Study on artificial intelligence in finance sector. Int. Res. J. Mod. Eng. Technol. Sci. 2023, 9, 223–232. [Google Scholar]
  114. Bhattamisra, S.; Banerjee, P.; Gupta, P.; Mayuren, J.; Patra, S.; Candasamy, M. Artificial intelligence in pharmaceutical and healthcare research. Big Data Cogn. Comput. 2023, 7, 10. [Google Scholar] [CrossRef]
  115. Secinaro, S.; Calandra, D.; Secinaro, A.; Muthurangu, V.; Biancone, P. The role of artificial intelligence in healthcare: A structured literature review. BMC Med. Inform. Decis. Mak. 2021, 21, 125. [Google Scholar] [CrossRef]
  116. Doğan, M.; Dogan, T.; Bozkurt, A. The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies. Appl. Sci. 2023, 13, 3056. [Google Scholar] [CrossRef]
  117. Masturoh, U.; Irayana, I.; Adriliyana, F. Digitalization of play activities and games: Artificial intelligence in early childhood education. TEMATIK J. Pemikir. Penelit. Pendidik. Anak Usia Dini 2024, 10, 1. [Google Scholar]
  118. Kuchkarova, G.; Kholmatov, S.; Tishabaeva, I.; Khamdamova, O.; Husaynova, M.; Ibragimov, N. AI-integrated system design for early stage learning and erudition to develop analytical deftones. In Proceedings of the 2024 4th International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 14–15 May 2024; pp. 795–799. [Google Scholar]
  119. Bozkurt, A. ChatGPT, üretken yapay zeka ve algoritmik paradigma değişikliği. Alanyazın 2023, 4, 63–72. [Google Scholar]
  120. Puteri, S.; Saputri, Y.; Kurniati, Y. The impact of artificial intelligence (AI) technology on students’ social relations. BICC Proc. 2024, 2, 153–158. [Google Scholar]
  121. Demircioğlu, E.; Yazıcı, C.; Demir, B. Yapay zekâ destekli matematik eğitimi: Bir içerik analizi. Int. J. Soc. Humanit. Sci. Res. (JSHSR) 2024, 11, 771–785. [Google Scholar]
Figure 1. Multi-complementary approach [73].
Figure 2. Flow diagram of the studies included in the analyses.
Figure 3. Funnel plot.
Figure 4. Contributions to education environments.
Figure 5. Contributions to innovation and technological advancement.
Figure 6. Challenges in AI applications and suggested solutions.
Figure 7. Contributions to social-affective dimension.
Figure 8. Problems encountered and suggested solutions in AI applications. The observer teachers reported that students faced difficulties during the drawing process. G1 noted, “Some students, especially those working on drawing tasks that required fine motor skills, struggled significantly”, while G2 observed, “Some students had difficulty with drawing tasks that required visual-motor coordination”. The observer teachers also mentioned that students’ attention was easily distracted, and they quickly became bored. Regarding this, G1 commented, “Repetitive activities caused students to lose focus and become bored”, and G2 stated, “Activities that required prolonged attention led to students quickly losing interest”. As for the solutions to the problems faced during the application process, the observer teachers suggested addressing material shortages, having a trial or draft drawing beforehand, limiting the number of students, not extending the time excessively, and conducting the activities with background music.
Table 1. Symbolic representation of the experimental study.
Experimental Group: R   T1   X   T2
R: Neutrality in group formation; X: Independent variable level (artificial intelligence applications in early childhood education); T1: Pre-test application; T2: Post-test application.
Table 2. Meta-analysis data.
Test type: Achievement (n = 13)
  Fixed-effects model (FEM): g = 0.55, 95% CI [0.41, 0.69]
  Random-effects model (REM): g = 0.74, 95% CI [0.13, 1.35]
  Heterogeneity: Q = 217.75, p = 0.00, I² = 94.48%
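The I² value in Table 2 can be recovered from the reported Q statistic and its degrees of freedom (df = 13 − 1 = 12) using the formula of [87,88]; the short Python check below is a sketch, not part of the original analysis, and confirms the figure up to rounding.

    # Recover I² from the heterogeneity statistic reported in Table 2
    Q = 217.75        # Cochran's Q for the 13 achievement effect sizes
    df = 13 - 1       # degrees of freedom = number of effect sizes - 1

    i_squared = max(0.0, (Q - df) / Q) * 100
    print(f"I2 = {i_squared:.2f}%")   # about 94.49%, vs. 94.48% in Table 2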
Table 3. Overall effect sizes of the studies included in the analysis according to the moderator analysis.

Moderator: Education Level
  University: n = 6, g = 0.65, 95% CI [−0.66, 1.96], Z = 0.97, p = 0.33
  Others: n = 7, g = 0.80, 95% CI [0.17, 1.42], Z = 2.49, p = 0.01
  Total: n = 13, g = 0.77, 95% CI [0.20, 1.33], Z = 2.67, p = 0.00; heterogeneity Q = 0.04, df = 1, p = 0.84

Moderator: Application Duration
  Sessions: n = 7, g = 0.30, 95% CI [−0.29, 0.90], Z = 1.00, p = 0.32
  9–+: n = 4, g = 0.33, 95% CI [−0.12, 0.79], Z = 1.43, p = 0.15
  Total: n = 11, g = 0.32, 95% CI [−0.04, 0.68], Z = 1.74, p = 0.08; heterogeneity Q = 0.00, df = 1, p = 0.95

Moderator: Sample Size
  Small: n = 6, g = 0.01, 95% CI [−0.24, 0.26], Z = 0.09, p = 0.93
  Medium: n = 3, g = 1.82, 95% CI [1.21, 2.43], Z = 5.88, p = 0.00
  Large: n = 4, g = 1.02, 95% CI [−0.36, 2.40], Z = 1.44, p = 0.15
  Total: n = 13, g = 0.29, 95% CI [0.06, 0.52], Z = 2.51, p = 0.01; heterogeneity Q = 30.29, df = 2, p = 0.00
Table 4. Comparison of pre-test and post-test scores of the experimental group.
Pre-test (Experiment): n = 14, x̄ = 18.64, sd = 1.94
Post-test (Experiment): n = 14, x̄ = 27.93, sd = 1.49
df = 26; Levene’s test: F = 2.28, p = 0.14; t = −14.17, p = 0.00
