Article

Safety, Identity, Attitude, Cognition, and Capability: The ‘SIACC’ Framework of Early Childhood AI Literacy

1 Shanghai Institute of Early Childhood Education, Shanghai Normal University, Shanghai 200233, China
2 Lab for Educational Big Data and Policymaking, Ministry of Education, Shanghai 200233, China
3 Faculty of Education and Human Development, The Education University of Hong Kong, Hong Kong, China
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(8), 871; https://doi.org/10.3390/educsci14080871
Submission received: 12 July 2024 / Revised: 7 August 2024 / Accepted: 8 August 2024 / Published: 9 August 2024
(This article belongs to the Topic Artificial Intelligence in Early Childhood Education)

Abstract

With the rapid advancement of Artificial Intelligence (AI) in early childhood education (ECE), young children face the challenge of learning to use AI ethically and appropriately. Developing AI education programs requires an age- and culturally appropriate AI literacy framework. This study addresses this fundamental gap by creating a Chinese framework for early childhood AI literacy through an expert interview study with a grounded theory approach. Seven Chinese experts, including ECE and AI professors, kindergarten principals, and Directors of ECE Information Departments, were purposively sampled and interviewed, representing scholars, policymakers, and practitioners. The synthesis of the transcribed evidence generated five dimensions of young children’s AI literacy, namely Safety, Identity, Attitude, Cognition, and Capability, collectively forming a holistic framework titled the ‘SIACC’ framework. The Chinese definition of early childhood AI literacy is also reported. This study introduces the Chinese framework of AI literacy and provides a scientific basis for policymakers to establish AI literacy standards for young children. Additionally, it offers a conceptual structure for developing systematic indicators and scales for AI literacy in ECE.

1. Introduction

Artificial Intelligence (AI) is revolutionizing early childhood education (ECE), driving it from simple touch-based interactions toward sophisticated AI-driven conversations [1,2,3,4]. Policymakers are attentive to this paradigm shift and aware of the urgency of cultivating AI literacy in young children [5,6]. In practice, young children’s daily engagement with AI underscores the critical need for proficient AI literacy to prepare them for the new AI and digital era. In academia, emerging studies have extensively examined the AI-driven transformation of ECE, encompassing its multifaceted effects on learning environments [7], intellectual augmentation through varied creative means [8], the deployment of AI-enriched robotic tools [9], perspectives on and acceptance of AI integration [10], the impact of ChatGPT on ECE practices [6], and the impact assessment of structured AI literacy programs [11]. However, the literature reveals a critical gap: the lack of an age- and culturally appropriate conceptual framework for early childhood AI literacy, which is foundational for informing early AI educational initiatives [12,13,14,15,16,17]. To fill this research gap, this study aims to establish a foundational definition and framework for early childhood AI literacy that aligns with Chinese ECE values and leverages expert insights on AI. The research objectives are twofold: first, to develop a culturally attuned conceptualization of AI literacy for young children, and second, to weave this understanding into a comprehensive framework that can guide the development of assessment scales and educational programs tailored to foster AI-age readiness among young children in ECE settings.

1.1. The Definitions of AI Literacy

AI literacy is an emerging and continuously evolving research area, and its definition has developed through four stages, the first three of which (know and understand AI, use and apply AI, and evaluate and create AI) were highlighted in the review by Ng et al. [16]. The core difference among the stages is how far learners are expected to go in understanding, using, and developing AI. Initially, AI literacy was defined as the ability to know and understand AI. For instance, Burgsteiner et al. [12] and Kandlhofer et al. [13] described it as the competency to understand the basic techniques and concepts behind AI in various products and services. The second stage expanded this definition to include the use and application of AI, suggesting that AI literacy encompasses the essential abilities required to live, learn, and work in our digital world through AI-driven technologies, as argued by Steinbauer et al. [18].
The conceptual evolution continued by combining the elements of knowing, understanding, using, and applying AI, which can be assigned to the third stage. In this context, AI literacy is the knowledge and understanding of AI’s essential functions and the ethical use of AI applications in everyday life [19,20]. The fourth stage, focusing on evaluating and creating AI, defined by Long and Magerko [14] and further supported by Druga et al. [17], emphasizes the competencies that enable individuals to critically evaluate, communicate, and collaborate effectively with AI, positioning AI as a tool for use online, at home, and in the workplace.
This developmental trajectory was captured in the review by Ng et al. [16], which integrated these stages with Bloom’s Taxonomy and proposed a vertical developmental path for AI literacy as individuals age (Figure 1). However, despite summarizing these stages, Ng et al. [16] did not provide a clear definition of AI literacy; rather, they highlighted the urgent need for research to establish a suitable definition and advance the AI literacy field. In the context of ECE, where the age group is younger, a clear definition of AI literacy has yet to be established. Therefore, this study seeks to explore the definition of young children’s AI literacy through interviews with Chinese experts, aiming to contribute to the foundational understanding of AI literacy at this critical stage of development.

1.2. The Constructs of AI Literacy

As a subset of digital literacy, AI literacy has emerged as a crucial skill set in response to the growing significance of AI in our daily lives. It enhances individuals’ understanding of AI technologies, strengthens critical thinking, and aids in making informed decisions [21,22]. However, despite a significant increase in related research, scholars have yet to reach a consensus on the constructs of AI literacy.
Early studies primarily focused on the knowledge and understanding of AI, and, accordingly, AI literacy was held to involve three key facets [13]: (1) knowing that AI has been used to improve human daily life; (2) knowing that computers can learn from data through classification, prediction, and generation; and (3) understanding that AI should be used ethically to avoid bias. Recent research has proposed a more comprehensive framework, combining Ng et al.’s [16] review of AI literacy with Brennan and Resnick’s [23] computational thinking, which includes three aspects: AI concepts, AI practices, and AI perspectives. In addition, Ng et al. [16] developed a coding framework for AI literacy based on Figure 1, which encompasses four aspects: knowing and understanding AI, using and applying AI, evaluating and creating AI, and AI ethics. Moving beyond this vertical framework, Kim et al. [24] proposed that AI literacy could be achieved through three competencies: AI knowledge, AI skill, and AI attitude, presenting a horizontal structure. Consequently, the internal structure of AI literacy remains unsettled, and no structural model of AI literacy has been established for the ECE segment. Therefore, a framework of young children’s AI literacy is needed, one that incorporates both vertical and horizontal content.

1.3. The Context of This Study

Teaching and learning with AI has become popular among early childhood educators and researchers. However, the literature suggests a need for more specialized research focused on young children [25,26,27]. Additionally, existing research has been skewed towards technology use and skills, including curriculum design, AI tools, pedagogical approaches, research designs, and assessment methods [3,9,13,22,28,29,30,31]. Some research has only loosely proposed primary elements of young children’s AI literacy, such as AI knowledge, AI skills, and AI attitudes, while pursuing other purposes [32]. Although AI technology is commonly used in ECE, a more precise definition and construct of young children’s AI literacy is still needed [7]. To fill this research gap, a grounded theory approach and expert interviews were used in this study to define young children’s AI literacy and provide a framework structure. This research is expected to provide a comprehensive theoretical foundation for early childhood AI education, which will not only help increase young children’s AI literacy to ensure their safe and successful participation in a digital society but also positively shape the integration of early childhood education and AI technology, guiding future research and educational practice. In particular, the following questions guided this research:
  • What is the definition of young children’s AI literacy, according to Chinese experts?
  • What constructs are identified by Chinese experts as central to young children’s AI literacy?

2. Method

2.1. Grounded Theory Approach Based on Expert Interview

A grounded theory methodology was employed in this study in order to allow the theory to emerge from the data and to address the research questions with preliminary findings [33]. As an inductive and interpretive method of collecting and analyzing data, the grounded theory approach is often used to develop understanding and theories about patterns of human behavior in social contexts [33]; thus, it is particularly well suited for exploring phenomena that have had limited previous research [34], as in the case of AI literacy in young children explored in this study.
In this study, the resulting theory was rooted in the initial and focused coding of expert interview data. Such exploratory expert interviews are productive for gaining a sense of direction in a little-known field [35]. On the one hand, experts can see the macro level that ordinary researchers tend to overlook, providing a more holistic perspective [36]. On the other hand, past research has shown that drawing on the expertise of experts can reveal the internal processes of complex real-world problems [37]. Since young children’s AI literacy is an emerging topic that has been little explored, we believe grounded theory based on expert interviews is an appropriate approach for this exploratory study. The study allowed for flexibility without a preconfigured structure, so that new theory could be constructed from the data.

2.2. Participants

To enhance the representativeness of the data, the current study adopted expert sampling, a subtype of purposive sampling [38]. This technique was pivotal because the study focuses on the nascent topic of early childhood AI literacy, necessitating participants with a profound grasp of the subject matter. Seven experts were purposefully chosen in accordance with the research questions, grounding our selection on the breadth of their experience and expertise. Expert sampling facilitated an intensive exploration of the core phenomenon, proving indispensable for eliciting information-rich cases and optimizing limited resources [39]. The sampled experts, drawn from varying capacities, offer insights reflective of distinct stakeholder perspectives. Professors provide responses anchored in academic scholarship: those specializing in early childhood education offer an ECE-oriented viewpoint, while professors in computer science deliver a technological angle. Meanwhile, directors from governmental bodies contribute from a policy-making, wide-ranging, and systematic stance, and kindergarten principals share insights from a practical, in-field perspective. This blend of participant perspectives and professional orientations yields a holistic picture of stakeholder opinions on the development of young children’s AI literacy. Such a methodological choice ensures diversity in comprehending young children’s AI literacy and aids in formulating an innovative yet exhaustive conceptual framework. Table 1 presents the seven experts selected for this study.

2.3. Data Collection

In alignment with ethical research practice and to facilitate participants’ preparation, the interview protocol was emailed to each participant one week prior to their appointment. This email outlined the study’s objectives and procured informed consent for participation. After receiving consent, the principal investigator conducted individual interviews with each of the seven participants at agreed-upon times and venues. The study focused on the two research questions, employing semi-structured interviews to collect rich data. Starting from a general interview template, the participants were engaged with foundational questions such as: How is young children’s AI literacy conceptualized? In what ways does it diverge from digital literacy? What structure should a young children’s AI literacy framework encompass? To draw out expertise specific to each individual’s professional domain, tailored inquiries were also posed. For instance, Expert G, who has substantial tenure in early childhood education and a career in preschool technology administration, was asked to consider young children’s AI literacy in relation to that of adults, including potential overlaps and distinctions. Given the early stage of constructing an AI literacy framework, a series of probing questions was integrated to elicit a deeper, more nuanced data set. The interviewees were generally allowed the latitude to express their thoughts without interruption, except to rein in digressions or to clarify uncertainties. The interviews lasted from 51 to 78 min, averaging approximately 66 min.
Since all participants were Chinese, the interviews were conducted in their native language, Mandarin. All conversations were audio-recorded and transcribed verbatim to ensure strict adherence to the data. The data were initially analyzed and processed entirely in Mandarin. When ambiguities of expression were encountered, the researcher reconfirmed the meaning with the interviewee, ensuring that the interviewee’s intent was understood. The Mandarin data were then translated into English by native Chinese speakers who were also proficient in English, and an external translator verified the translations to improve accuracy. This research received ethical approval from the first author’s university, and, to protect the identity of the participants, they are referred to by code names.

2.4. Data Analysis

A grounded theory approach was employed with inductive coding methods to analyze the interview data. In particular, a three-stage constant comparative procedure was used to identify the themes found in the interview data [33].
Stage 1 was the open coding period, in which we thoroughly analyzed all the data and coded all meaningful units related to young children’s AI literacy. The initial codes were primarily descriptive, extracting from the text the data relevant to the research questions and providing the basis for subsequent higher-order coding [40]. Subsequently, the common features of the initial codes were judged according to rules that ensure consistency, and the initial codes were grouped into several meaningful broad categories using MAXQDA 2022. The coding results were compiled into a list, and each result was elaborated with a short meaning statement. This process produced 118 codes that conceptualized young children’s AI literacy.
Stage 2 was the axial coding period, during which we reanalyzed the transcripts of all interviews and compared the codes derived from the first stage, resulting in 29 themes. First, we derived the meaning of the codes linguistically and compared our understanding with dictionary definitions. Then, the existing literature was used to validate the coding. Finally, words or phrases from the experts’ own language were used to name the themes.
Stage 3 was the selective coding period, in which we constantly compared all themes and adjusted them accordingly. This phase merged themes that were too narrowly defined, giving them a broader meaning, and also split and refined themes that were too broadly defined. We further analyzed the stated objectives or intentions of the experts, thereby discovering, on the one hand, the close links between the existing themes and, on the other hand, distinguishing and revising terms that have tended to be used interchangeably in the past literature. By the third coding iteration, no new themes or other changes emerged, and the analysis ended with 20 refined themes and 5 core themes because saturation had been reached [41]. Table 2 summarizes the final version of the themes extracted from conceptions of young children’s AI literacy.
To ensure the reliability and credibility of the coding results, the research team resolved divergent codes or themes through discussion [42]. Initially, the first author took the lead in coding; the co-authors then examined the completed coding and revised it through discussion after independently reviewing the source data. When major disagreements were encountered, the coding results were discussed with other members of the research team. Finally, the first author refined and finalized the coding scheme. Inter-researcher reliability was also calculated in MAXQDA 2022 through a code comparison query [43], yielding 97% agreement. Subsequently, a peer debriefing was conducted to provide additional scrutiny and further confirm the credibility of the results [44].
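For readers unfamiliar with how such an agreement percentage is derived, the sketch below illustrates the basic arithmetic of segment-level percent agreement between two coders. It is a minimal illustration only, not the authors’ MAXQDA procedure: the function name, the one-code-per-segment assumption, and the example data are hypothetical, and MAXQDA’s code comparison query applies its own segment-matching rules.

```python
# Illustrative sketch (hypothetical, not the MAXQDA algorithm): percent agreement
# between two coders, assuming each double-coded segment received exactly one
# code label from each coder.

def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Return the share of segments to which both coders assigned the same code."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("Both coders must label the same, non-empty set of segments.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

if __name__ == "__main__":
    # Hypothetical data: 97 matching labels out of 100 double-coded segments.
    a = ["Safety"] * 97 + ["Identity"] * 3
    b = ["Safety"] * 97 + ["Attitude"] * 3
    print(f"{percent_agreement(a, b):.0%}")  # -> 97%
```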

3. Findings and Discussion

3.1. The Definition of Young Children’s AI Literacy

Table 3 presents the individual definitions of young children’s AI literacy provided by seven experts. We identified the most frequently occurring keywords in their definitions through detailed analysis. These include ‘Interact’ (mentioned by Expert A, Expert C, Expert E, and Expert F), ‘Utilize’ or ‘Use’ (Expert A, Expert B, Expert E, and Expert F), ‘Control’ (Expert A and Expert G), ‘Ethically’ (Expert B, Expert D, Expert E), and ‘Appropriately’ (Expert C and Expert D), along with the context of ‘Daily Lives’ (Expert A, Expert B, and Expert D). Synthesizing these recurring keywords, we formulated a comprehensive definition of young children’s AI literacy as follows:
Young Children’s AI literacy means being ethically and appropriately capable of interacting with, utilizing, and controlling AI in their daily lives.

3.2. The Construct of Young Children’s AI Literacy

In this section, we introduce five main components within a Chinese model that encapsulate experts’ understanding of young children’s AI literacy, as illustrated in Figure 2.

3.2.1. Dimension 1: Safety

This dimension emphasizes minimizing cyber risks. It concerns how young children navigate safely in the digital space and identify and deal with potential cyber threats, helping them use technology safely and responsibly. Accordingly, the following four items construct this dimension.
1. AI Safety Awareness
All seven experts unanimously mentioned AI safety awareness, emphasizing the need for young children to be cautious and aware of AI technology to better protect themselves in the digital space. This awareness and alertness are critical components for children to understand and cope with the role of AI in the digital world, especially considering the potential risks and uncertainties of AI technology. For example, Expert A mentioned the AI alertness of young children. He said:
Young children need to have the concept of AI alertness. Young children should know which things they encounter contain AI and which do not. Since AI can fabricate, lose control, or “go crazy”, AI is risky. If young children are unaware of the existence of AI, they might make mistakes or even be deceived and exploited by AI. Therefore, children must have a complex and challenging understanding of AI’s external form.
2. Content-AI Interaction
The second item of the Safety dimension is Content-AI Interaction. Experts A, B, D, E, and G mentioned this item. It emphasizes the specific content protocols that young children must master when interacting with AI. By learning these interaction norms, children can better understand the digital content related to AI and integrate AI tools in content creation (such as drawing). As Expert D said:
As principals, we value cultivating children’s early AI literacy, especially in content creation and communication. We offer a range of activities for interacting with AI tools, such as using AI drawing programs and participating in robot conversations. These activities are all aimed at helping children understand the functions of AI and learn to collaborate with it in creation.
3. Personal AI Security
Subsequently, the third item of the Safety dimension is Personal AI Security. Experts A, B, C, D, E, and G mentioned this aspect repeatedly, demonstrating a high level of concern about the ethics of AI use in ECE with regard to data storage and sharing and their implications. Expert B emphasized young children’s understanding of AI security at the personal level, saying, “This is the foundation for protecting children’s data and privacy in the AI environment and is key to helping them avoid potential fraud and AI misuse”. As Expert G elaborated, “We teach children how to identify and handle permission requests when using smart devices. This is a part of their digital life. We encourage parents to participate in this process, helping children build awareness of personal AI security”. Additionally, Expert E shared insights from a practical standpoint:
In our kindergarten, we are committed to creating a safe learning environment by using stories and role-playing games to help children understand how they should protect themselves when interacting with AI, such as not providing personal information to unknown software and letting them know that not all online requests should be responded to. Our goal is to nurture children to become little experts in information security, enabling them to learn how to protect their data and privacy through games.
4. Organizational AI Security
Finally, all seven experts mentioned that a cautious approach to using AI is necessary at the organizational level, particularly in educational institutions like kindergartens. As Expert B said, “Institutions should educate young children and teachers on the correct use of AI, ensuring that the use of AI aligns with ethics and human values, and preventing the improper development of AI’s manipulative capabilities”. This reflects the “Organizational AI Security” part, which recognizes the extensive impact of AI at the organizational level and takes measures to ensure the safe, rational, and ethical integration of AI technology into the educational environment. As Expert E said:
Our kindergarten operates at the practical level, and how to make good use of AI is very important. Isn’t it said that AI will control humans in the future? These matters must first be addressed at the adult level in kindergartens, such as kindergarten teachers and caregivers, and then permeate to the young children, giving children the correct values. Machines are meant to serve humans and should not develop to the point where they violate ethical norms to enhance AI’s level of intelligence. It should be used to support and enhance students’ learning experience rather than becoming the center of learning.
Unfortunately, no expert mentioned criteria for choosing which AI to use with children, and there are no agreed-upon uniform standards practiced in Chinese kindergartens. Therefore, developing a referenceable supply-side filter to ensure the quality of AI education programs may be a direction for future research, one that requires a concerted effort by policymakers, researchers, teachers, AI designers, and other parties.
In summary, young children’s AI literacy in the Safety dimension centers on four essential items: AI Safety Awareness, Content-AI Interaction, Personal AI Security, and Organizational AI Security. Each aspect plays a vital role: AI Safety Awareness increases children’s awareness of AI’s existence and educates them about its potential risks. Content-AI Interaction guides them in using AI tools for creation and communication. Personal AI Security prioritizes their data and privacy protection. Lastly, Organizational AI Security underscores the ethical and safety standards necessary in educational settings utilizing AI. These four items provide a comprehensive safety framework, thereby minimizing cyber-risks and equipping children to grow safely and responsibly in a rapidly evolving AI environment. Together, they lay a solid foundation for fostering safe behaviors among young children in the digital world, while nurturing their skills for healthy interaction and development in future societies.

3.2.2. Dimension 2: Identity

The dimension of Identity involves preparing young children for digital engagement, which includes developing a healthy digital identity and realizing the importance of self-presentation in virtual environments. Accordingly, the following four items construct this dimension.
1. AI Co-Creation Identity
The first item of the Identity dimension, AI Co-Creation Identity, primarily reflects the views of three university professors (Experts A, B, and C). For instance, Expert C mentioned:
In constructing AI Co-creation identity, we need to explore the role and boundaries of AI with young children. It is not just a technical implementation but a process of shaping values. We must guide the children to understand how their interaction and symbiotic relationship with AI will affect their identity, behavior, and decision-making in the digital world.
2. Digital AI Citizen Identity
All seven experts unanimously pointed out that in the age of AI, young children need to become digital citizens who can understand, apply, and interact with AI, namely, the second part of the Identity dimension: Digital AI Citizen Identity. As Expert B said, “The Digital AI Citizen Identity is more than just a label; it is our understanding of the role and responsibility of the new generation of young children in the digital world”. Kindergarten principal Expert D also mentioned, “As educators, our task is to guide our children to become responsible digital citizens. This means educating them on how to interact properly with AI and safely express themselves in cyberspace, which is indispensable”. Furthermore, technology expert C also elaborated:
I believe that Digital AI Citizen Identity is about understanding AI technology and knowing how to play a responsible role in a digital society. Our children are digital natives; their early exposure to technology and AI profoundly impacts their cognitive and behavioral patterns. Therefore, we must focus on cultivating their correct understanding of AI technology, including its benefits and potential risks, and how to use AI safely and effectively daily. Our joint responsibility as educators and technology experts is to ensure they are well-prepared as citizens of the digital age.
3. AI Identity Management
The third item is AI Identity Management. Expert C said, “We need to teach them how to interact with AI while protecting their data and privacy. This is a technical challenge and a part of moral and social responsibility”. Expert A discussed how we perceive AI and its positioning, raising questions about whether AI should be given an independent personality, whether it should be treated with the same respect as humans, and whether our interactions with AI should adhere to social norms and ethical standards. He suggested, “If the answer to these questions is ‘yes’, then we need to establish an AI interaction protocol, essentially co-creating an identity for AI within our social structure while conducting AI identity management”. This indicates that managing AI identity is about understanding AI and adhering to specific interaction standards and norms.
4. Intellectual Property in AI
The final item of the Identity dimension is Intellectual Property in AI, on which experts from various fields expressed their understanding. Technology expert and university professor Expert C stated:
In the field of AI, protecting the intellectual property of young children is crucial. We must ensure that AI applications do not infringe upon children’s creative thinking and original expression. Simultaneously, we must develop educational tools that stimulate children’s creativity and respect and protect their original ideas.
Kindergarten principal Expert E also mentioned, from a practical perspective, “When using AI teaching tools, we need to be especially careful not to infringe upon children’s intellectual property. This means we should encourage children to freely express their ideas and ensure they are respected and protected”. Expert F, from a policy-making perspective, said:
We need to research and propose policies that ensure the sensible use of AI in early childhood education while protecting children’s intellectual property. We should promote an environment that leverages the advantages of AI and simultaneously respects and protects the original thinking of young children.
In summary, four essential items were emphasized within the Identity dimension of early childhood AI literacy: AI Co-creation Identity, Digital AI Citizen Identity, AI Identity Management, and Intellectual Property in AI. These aspects range from co-creation, individual identity, and security management to creative protection, all aimed at equipping young children to be ready to engage in the digital world. Through these aspects, children learn to coexist and co-create with AI, develop into responsible digital citizens, effectively manage their digital identities, and understand and respect intellectual property. This comprehensive approach to cultivation ensures that children can participate and express themselves safely and confidently in the digital age while protecting and developing their individuality and creativity.

3.2.3. Dimension 3: Attitude

In the ‘Attitude’ dimension of young children’s AI literacy, the focus is on understanding AI with an appropriate attitude and emotion, aiming to maximize the opportunities provided by AI. Accordingly, the following three items construct this dimension.
1. AI Self-Awareness
The first item at the attitude level is ‘AI Self-Awareness’, which means helping young children understand and manage their emotions and attitudes when interacting with AI. As Expert A mentioned:
The attitude is that we cannot wholly trust AI, nor can we completely deny it. We should have an objective and comprehensive understanding and attitude toward AI, which involves recognizing its benefits, acknowledging its drawbacks, and balancing its pros and cons for our use to enhance and promote human knowledge.
This statement conveys the core concept of AI Self-Awareness: Young children need an objective and comprehensive understanding of AI. This includes recognizing AI’s different forms and functions, understanding its advantages and disadvantages, and balancing these pros and cons.
2. Self-Management with AI
The second item in the attitude dimension is ‘Self-Management with AI’, where experts differ in their opinions on the extent of management. Technical experts and university professors advocate that young children’s attitude towards AI should be both knowledgeable and rational, enhancing their management of AI. Technical Expert C stated:
From an attitude perspective from a technical standpoint, it is necessary to explain to young children the working principles and capabilities of AI. Children need to understand that while AI may appear intelligent, it is still limited by the rules set by programmers and is just a set of programming languages. This understanding helps young children maintain appropriate expectations when interacting with AI without over-relying on it.
University professor Expert B believed, “Children should understand the basic principles of AI so that they can have AI-oriented social interactions without the misconception that AI is more powerful than parents or teachers, and can also manage AI more emotionally and with greater measure”.
Kindergarten principals Expert D and Expert E indicated that young children’s self-management with AI can be more relaxed. In their view, the fact that AI is ultimately just a program should be treated like the belief in Santa Claus: there is no need to tell young children that AI is not a natural person; that is, they do not need to know the underlying reasons. As Expert D said:
That is the young child’s world, their understanding. Telling them that AI is fake, just a program, would confuse them. It is unnecessary for children three to six years old. Their thinking stage is imagination, where things we adults find incredible are possible in their world. Telling them it is just a program would diminish their imagination.
Principal (Expert E) also stated a similar opinion with an example:
One child in our kindergarten said, ‘I love kindergarten, but when I am sick and cannot come, I plan to send my robot to replace me’. This imagination would no longer exist if they knew AI was just a program.
3. Digital AI Empathy
The third item under the Attitude dimension is “Digital AI Empathy”, as Expert A mentioned:
Young children view AI entirely differently from us adults. From an adult perspective, we have experienced the transition from a world without AI to one with AI, so we need to adopt an accepting attitude toward it. However, children do not face this issue of change and acceptance. They are born into this world as it is, part of the alpha generation, where the concept of change does not exist, but rather there is an inherent digital AI empathy.
This item focuses on cultivating young children’s empathetic understanding of AI and its functions. It emphasizes young children’s humanized comprehension of, and emotional response towards, AI technology, as they tend to believe that all things in reality have ‘spirits’, AI included. As Expert E mentioned:
AI is no longer just about interacting with a machine. Its design is increasingly aimed at mimicking human behavior and emotions. In this process, young children need to maintain their capacity for love, that is, a disposition towards love, aiding them in developing a more comprehensive and responsible attitude in the digital world.
Experts have suggested that adult attitudes toward AI indirectly influence young children’s attitudes toward AI, particularly when those attitudes are negative, as Expert B said:
Digital AI empathy is not a one-way street from young children to AI, but a complex network of interactions. Adult attitudes toward AI will directly affect young children’s attitudes toward AI. Adults with negative attitudes toward AI are prone to disallow young children’s use of AI, which will fundamentally change young children’s attitudes. The extent to which adults should be involved in young children’s use of AI is a question worth exploring. We can’t just tell young children that AI is just a program, as this might stifle their imagination and creativity. However, we also need to inform them that AI can make mistakes and is not perfect.
In summary, the Attitude dimension of young children’s AI literacy includes AI Self-Awareness, Self-Management with AI, and Digital AI Empathy. The goal is to cultivate young children’s active and balanced attitudes, emotional understanding, and use of AI, thereby supporting them in developing a more comprehensive and responsible attitude in the digital world.

3.2.4. Dimension 4: Cognition

The Cognitive dimension is a vital part of understanding digital transformation. It focuses on how young children process and interpret information provided by AI, involving their cognitive development in the digital world. Expert A believed: “Understanding AI, recognizing its various forms, and knowing its functions are all cognitive actions. You must have cognition and understanding to develop skills”. Accordingly, the following five items construct this dimension.
1. AI Content Creation Thinking
Experts believe that the first item of the Cognition dimension is ‘AI Content Creation Thinking’, which focuses on young children’s understanding and assessment of how AI creates and presents content. This involves teaching children to identify the characteristics of AI-generated information and content and how to use it effectively to enrich their learning and creative thinking. Regarding the importance of content generation, Expert A stated: “One of the most crucial terms for ChatGPT is ‘generative’, meaning it can generate”. In the field of ECE, Expert G said: “AI can engage in autonomous dialogue with children during daily activities, such as handwashing, adapting language and rhythm to the children’s age characteristics”. Expert B and Expert C directly mentioned the concept of AI Content Creation Thinking, with Expert B pointing out, “AI Content Creation Thinking is crucial; young children need to understand that through using AI, they can access customized and innovative learning content. For example, AI can generate personalized learning materials based on the child’s interests and learning progress”. Expert C also said:
We emphasize the importance of AI Content Creation Thinking in young children’s cognitive development of AI literacy. We must cultivate children’s understanding and innovative thinking towards AI-generated content. AI can create various educational materials, but it is equally important to teach children how to assess the quality and applicability of these contents. This is about technological education and maintaining analytical and innovative thinking in a world where technology is constantly evolving.
2. Computational AI Thinking
Experts consider the second cognition item to be computational AI thinking, which focuses on developing foundational programming thinking in young children, enabling them to understand and effectively use AI tools for problem-solving. As Expert C said, “Computational AI thinking is the direct understanding and application of the basics and operational processes of AI technology”. As Expert E mentioned, “It is a mode of thinking in AI; for instance, programming activities are just a form and medium. Thus, what is being cultivated is children’s computational thinking, which is a type of our AI thinking abilities”.
3. AI Math Logic Thinking
Four experts mentioned the third item of Cognition, AI Math Logic Thinking, considering it distinct from computational thinking, which has a greater focus on directly understanding and applying AI technology fundamentals and operational processes. Expert C believed, “This fundamental mathematical logic thinking is the cornerstone for understanding more complex AI systems and algorithms.” Expert D added:
As an essential aspect of AI literacy cognitive development, AI Math Logic thinking should be actively integrated into the daily development of young children. Through simple mathematical games and AI interactive tools, we encourage children to explore the fun of mathematics while developing their logical thinking. Such learning helps children grow in mathematics and lays a solid foundation for their cognitive development in the digital and intelligent world.
4. AI Critical Thinking
The fourth item of Cognition is AI Critical Thinking, which focuses on cultivating young children’s ability to critically analyze and evaluate the information provided by AI. This aspect involves not only the reception of information but, more importantly, deep contemplation and reasonable questioning. Expert B noted, “In the era of AI, young children need to learn not only to receive and use information provided by AI but also to think critically about this information. This is necessary cognitive thinking”. Expert C also emphasized, “AI Critical Thinking is about teaching children to identify biases and misleading information. In the digital world, this ability is crucial for cognitive development”. Expert G added, “Young children need to understand that any information generated by AI is not absolute, and they should learn to interpret and analyze this information from different perspectives.” Expert F stated, “As educators, we must cultivate children’s critical thinking, enabling them to remain clear-headed in an era of information overload.”
5. AI Systems Thinking
The fifth item of Cognition is AI Systems Thinking, which centers on young children’s overall understanding and analysis of AI systems. This aspect concerns how children understand AI technology’s systematic and interconnected nature and how these systems affect and integrate into their daily lives and learning environments. Technology Expert C emphasized: “AI Systems Thinking involves understanding the systems and frameworks behind AI technology. We must make young children understand that AI is not just an independent tool, but a complex system composed of multiple interconnected parts”. Expert B also explained:
AI Systems Thinking is vital in helping children understand the interactions and dependencies between systems in the digital world. This way of thinking allows them to understand better the comprehensiveness and complexity of AI and the underlying logic of why AI evolves so quickly.
Expert F pointed out, from a problem-solving perspective, that “Cultivating young children’s systems thinking skills will help them solve problems more effectively in an increasingly technological society, aiding them in adapting to future changes”.
In summary, in the Cognition dimension of young children’s AI literacy, there are five aspects: AI Content Creation Thinking, Computational AI Thinking, AI Math Logic Thinking, AI Critical Thinking, and AI Systems Thinking. These aspects collectively constitute the comprehensive cognitive development of children in AI and the digital world. AI Content Creation Thinking focuses on understanding how AI creates and presents content; Computational AI Thinking emphasizes foundational programming thinking and the application of AI tools; AI Math Logic Thinking highlights the importance of mathematical, logical thinking; AI Critical Thinking cultivates children’s ability to analyze and evaluate the information provided by AI critically; and AI Systems Thinking focuses on the overall understanding of the operation and impact of AI systems. These aspects collectively assist young children in developing the necessary cognitive abilities during digital transformation, laying a solid foundation for their learning and life in an increasingly technological world.

3.2.5. Dimension 5: Capability

The final dimension is Capability, which focuses on turning ideas into reality in the era of AI. This dimension represents the development of children’s self-management skills in an AI environment. It nurtures children’s autonomy and decision-making abilities when interacting with AI, which is crucial for meaningful AI interactions. Accordingly, the following four items construct this dimension.
1. AI Contextual Understanding Ability
The first aspect of the Capability dimension is AI Contextual Understanding Ability, a fusion of AI Insight and AI Scene Perception. Expert A emphasized AI insight:
AI insight is a crucial ability for young children, involving innate aptitude and individual differences. It also relates to acquired experiences and upbringing. Reflecting on myself, I feel that my distinct feature is having five to ten years of advanced insight compared to others, allowing me to always be one step ahead in keenly foreseeing future issues. This insight is more critical. For instance, if a child can discern the core of a problem and pinpoint it accurately, they can then communicate and manage various interactions with AI.
Expert C emphasized AI Scene Perception:
In discussing young children’s AI literacy abilities, I consider ‘AI Scene Perception’ a key factor. We must understand that AI is a collection of programming and algorithms and an entity embedded in children’s daily lives. Children must learn to identify and understand AI elements in their environment, whether in intelligent toys or online learning tools. This perceptual ability is the foundation for their understanding and adaptation to a technology-driven world. We should develop corresponding educational tools and curricula to help children cultivate this ability, enabling them to navigate more comfortably in an AI-rich world.
Expert B combined AI Insights and AI scene perception, believing this dimension should be AI contextual understanding ability. She said:
AI contextual understanding ability guides children to understand AI’s application and impact in different contexts, discerning its applicability and limitations. The focus is on cultivating an understanding of AI’s functions and boundaries and the ability to adapt and apply AI in diverse environments.
2. AI Data Analysis and Management Ability
AI Data Analysis and Management Ability is the second aspect of capability, focusing on data analysis and management skills in AI. Expert C underscored the need for AI data analysis skills:
In this data-driven era, children must learn how to extract and interpret data from AI systems. This is not just about the data itself but, more importantly, about converting data into meaningful information and decisions. We should develop tools suitable for children to help them grasp the basic concepts and practices of data analysis.
Expert E focused on nurturing AI data capabilities in daily teaching and learning:
In kindergarten, simple games and activities can guide young children in learning how to process and analyze data. For instance, using smart toys to collect information and then guiding children to do basic categorization and analysis of this information is not only interesting but also provides them with initial data handling experience.
Expert B focused on the development of anomaly handling ability; she said:
AI systems are not flawless and may display errors or abnormal behaviors. Educating children to identify and handle these anomalies is very important. This involves technical skills, the cultivation of safety awareness, and a sense of responsibility.
Moreover, Expert A proposed a more comprehensive, integrated perspective; he said:
AI Data Analysis and Management Ability should be a holistic concept, including data understanding, information processing, and response to anomalies. Through simulation activities and interactive learning, we can allow children to practice these skills in real-life scenarios to enhance their ability to interpret AI-generated data and troubleshoot anomalies or errors in AI systems.
3. AI Exploratory Learning and Problem-Solving Ability
AI Exploratory Learning and Problem-Solving Ability is the third aspect of capability, integrating elements such as questioning, autonomous learning, and problem-solving abilities.
Expert D emphasized the importance of AI questioning ability:
In AI learning, cultivating children’s ability to ask questions is crucial. Children should learn how to pose meaningful questions to AI systems, not just seeking information but also enhancing the depth of knowledge and understanding. Asking questions is the starting point of exploratory learning and the key to driving innovation and deep understanding.
Expert B emphasized the ability of self-directed AI learning:
Our educational goal is to encourage children to engage in self-directed AI learning. In this process, children learn to set their learning objectives and independently explore AI tools and concepts. Such autonomy promotes personalized learning and lays the foundation for children to confidently explore more complex AI environments in the future.
Experts A, B, C, D, E, F, and G collectively emphasized the ability to solve problems. For example, Expert C emphasized the capacity for exploratory learning and its concurrent enhancement of problem-solving skills: “Through interaction with AI, children can explore in practical operations, learning new knowledge during play and exploration. Exploratory learning not only sparks children’s curiosity but also aids in developing their problem-solving abilities”.
4. AI Communication Ability
AI Communication Ability is the fourth aspect of capability, focusing on the ability of children to communicate with AI and people using AI effectively. This aspect includes human-machine communication, i.e., young children’s communication skills with AI, and interpersonal communication, especially in contexts involving AI. Expert C emphasized direct communication with AI:
The key to AI communication ability lies in educating children on effectively interacting with AI systems. This involves inputting commands and interpreting information, understanding AI feedback, and adjusting communication methods to optimize interaction. We must develop educational tools that enable children to learn AI communication through practical operation.
Expert B also focused on interpersonal communication skills:
In an AI environment, we must also emphasize the communication skills between people. AI can serve as a tool to enhance children’s social skills, such as through team-based AI projects where children learn to express their ideas, understand others’ perspectives, and cooperate effectively.
Expert A reemphasized the importance of establishing AI interaction protocols to integrate the development of children’s AI communication ability:
Children’s interaction with AI is based on norms jointly formulated by adult society, educational experts, and AI specialists (AI interaction protocol). We need to teach children how to internalize these norms and transform them into practical communication skills while emphasizing interpersonal communication skills.

3.3. General Discussion

The emerging field of early childhood AI literacy is a vital research area that interrogates how to effectively introduce young children to AI, a ubiquitous agent in this AI and digital era. To forge a structured pedagogical framework for early AI education, this study has generated a Chinese 5-dimension model, labeled ‘SIACC’, which is anchored in the principle of developmental appropriateness and accords with the progressive stages of learning. This discussion aims to unpack the nature and distinctiveness of the SIACC model, thereby offering insights for future AI research and educational best practices.

3.3.1. The Interconnected Five Dimensions

In synthesizing expert opinions, this study has yielded the SIACC Chinese model for early childhood AI literacy, encompassing Safety, Identity, Attitude, Cognition, and Capability. These dimensions are interdependent and interconnected, fostering a holistic comprehension of AI among young learners within the contemporary digital milieu: (1) Safety establishes a protective foundation for child-AI interaction. By prioritizing awareness of potential risks and the valorization of safe practices, this dimension underscores the importance of rigorous personal and institutional safety training; (2) Identity concerns the cultivation of a digital persona conducive to responsible AI engagement. This facet is critical in shaping the child’s self-conception and societal role within AI-integrated digital spaces; (3) Attitude intertwines with Safety and Identity to nurture a discerning yet receptive disposition towards AI. This dimension influences children’s emotional and ethical engagement with AI technologies; (4) Cognition furnishes the intellectual bedrock that underpins all other dimensions, empowering children to competently evaluate AI-generated content and apply AI to problem-solving; (5) Capability acts as the crucible where theoretical knowledge and attitudes are transmuted into practical expertise, ensuring children’s digital actions are informed, imaginative, and prudent.
The synergy among these dimensions is exemplified by the way AI Safety Awareness (Safety) precedes the development of AI Self-Awareness (Attitude), which in turn enhances AI Communication Ability (Capability). Similarly, AI Co-Creation Identity (Identity) is enriched through AI Content Creation Thinking (Cognition), which helps children grasp their roles as AI co-creators. In the SIACC model shown in Figure 2, Safety and Identity are filled in blue to signify their foundational roles in developing young children’s AI literacy. These two dimensions serve as the starting points for all other dimensions. The other three dimensions (Attitude, Cognition, and Capability) are in white to indicate that their development builds progressively on the foundations of Safety and Identity, and they are interwoven and dynamically evolving. Consequently, this model constitutes an ecosystem wherein each dimension nurtures and amplifies the others, laying the groundwork for a robust AI literacy educational framework that will be vital in shaping young digital citizens.

3.3.2. The Uniqueness of the SIACC Model

The SIACC model is comprehensive and progressive, designed to embrace the entire spectrum of learning stages, from knowing AI to applying it. First, it operationalizes an age-appropriate approach specifically for the ECE sector, considering the cognitive aptitudes of young learners and the corresponding instructional strategies. Its incorporation of the “know, understand, use, and apply” methodology, along with concepts of AI self-awareness, digital AI empathy, and AI contextual understanding, aligns well with the capabilities of young children. At the same time, it places less emphasis on sophisticated tasks such as AI content creation and co-creation identity, which may overtax younger learners but are suitable challenges for older students. Especially for younger cohorts, the SIACC model stresses the importance of mastering AI’s foundational concepts, as Su and Zhong (2022) explained [45]. This is juxtaposed against the more complex AI comprehension expected of older students, such as those who engage with machine learning model design, an aspect explored by Shamir and Levin (2022) and Su et al. (2022) [25,46].
Second, the SIACC framework advocates for the horizontal integration of AI Knowledge, Skill, and Attitude alongside its vertical developmental dimension, as endorsed by Kim et al. [24]. This integrative approach eschews teaching these components in isolation; instead, they form a synergistic educational experience. This approach promotes acquiring knowledge, skills, and holistic growth, ensuring children develop the understanding and attitudes necessary for ethical and responsible AI interaction.
In summary, the SIACC model effectively contends with the complexities of AI literacy for young children with age-appropriateness. By following this model, scholars and educators are equipped with a valuable and practical framework, ensuring comprehensive AI literacy in the early childhood domain. This ultimately contributes to an inclusive and substantively grounded foundation in AI education.

4. Conclusions, Limitations, and Implications

This grounded theory study establishes a comprehensive framework for young children’s AI literacy, comprising five interconnected dimensions: Safety, Identity, Attitude, Cognition, and Capability. These dimensions collectively aim to protect children online, help them understand their digital identity, develop positive attitudes towards AI, grasp AI concepts deeply, and translate this understanding into practical skills. The framework is designed to be cyclical and mutually reinforcing, promoting continuous growth and development in AI literacy.
However, the model has several limitations. First, the input from seven experts, six based in Shanghai and one in Hong Kong, reflects a predominantly Chinese perspective and would benefit from additional international viewpoints. Additionally, unresolved conflicts in expert opinions, particularly on self-management with AI, highlight the need for further experimental research to address these discrepancies; the contrasting views between technical experts and kindergarten principals on whether young children need to understand the underlying principles of AI illustrate this issue. Lastly, while this grounded theory study provides preliminary evidence and lays the foundation for developing a systematic set of indicators for young children’s AI literacy, a comprehensive and in-depth assessment model requires the participation of other stakeholders, including teachers, parents, and the children themselves.
Nevertheless, the framework provides foundational insights for creating systematic indicators of AI literacy in young children. It offers a scientific basis for policymakers, particularly in China, to develop standardized AI literacy education suitable for early childhood. The study also highlights the potential for developing assessment tools and educational programs based on these indicators, facilitating evidence-based interventions in early AI education. Additionally, this study suggests several future research directions. First, it underscores the need for international perspectives: incorporating viewpoints from diverse geographic and cultural backgrounds is essential to enhancing the model’s global applicability. Second, it highlights the necessity of expert consensus; conducting experimental research to resolve discrepancies in expert opinions is therefore critical to the successful implementation of AI in early childhood education. Third, it emphasizes the importance of stakeholder involvement: engaging a broad range of stakeholders, including teachers, parents, children, and policymakers, in developing culturally appropriate policies and practices is key to successful early AI education. Finally, the study points to the need for program development and longitudinal studies: future research should focus on creating and assessing evidence-based early AI education programs to determine their effectiveness across educational settings through longitudinal, robust evidence.

Author Contributions

Conceptualization, H.L. and W.L.; methodology, W.L.; software, W.L.; validation, W.L., H.H. and M.G.; formal analysis, W.L.; investigation, W.L.; resources, H.H.; data curation, W.L.; writing—original draft preparation, W.L.; writing—review and editing, W.L., H.H. and H.L.; visualization, W.L.; supervision, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Scientific Ethics Committee of Shanghai Normal University (protocol code 2023045 and date of approval 21 September 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ali, S.; DiPaola, D.; Lee, I.; Sindato, V.; Kim, G.; Blumofe, R.; Breazeal, C. Children as creators, thinkers and citizens in an AI-driven future. Comput. Educ. Artif. Intell. 2021, 2, 100040. [Google Scholar] [CrossRef]
  2. Chen, J.J.; Lin, J.C. Artificial intelligence as a double-edged sword: Wielding the POWER principles to maximize its positive effects and minimize its negative effects. Contemp. Issues Early Child. 2024, 25, 146–153. [Google Scholar] [CrossRef]
  3. Su, J.; Yang, W. Artificial intelligence in early childhood education: A scoping review. Comput. Educ. Artif. Intell. 2022, 3, 100049. [Google Scholar] [CrossRef]
  4. Luo, W.; Yang, W.; Berson, I.R. Digital transformations in early learning: From touch interactions to AI conversations. Early Educ. Dev. 2024, 35, 3–9. [Google Scholar] [CrossRef]
  5. Luo, W.; Berson, I.R.; Berson, M.J.; Han, S. Between the folds: Reconceptualizing the current state of early childhood technology development in China. Educ. Philos. Theory 2021, 54, 1655–1669. [Google Scholar] [CrossRef]
  6. Luo, W.; He, H.; Liu, J.; Berson, I.R.; Berson, M.J.; Zhou, Y.; Li, H. Aladdin’s Genie or Pandora’s Box for early childhood education? Experts chat on the roles, challenges, and developments of ChatGPT. Early Educ. Dev. 2023, 35, 96–113. [Google Scholar] [CrossRef]
  7. Su, J.; Ng, D.T.K.; Chu, S.K.W. Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Comput. Educ. Artif. Intell. 2023, 4, 100124. [Google Scholar] [CrossRef]
  8. Berson, I.R.; Berson, M.J.; Luo, W.; He, H. Intelligence augmentation in early childhood education: A multimodal creative inquiry approach. In International Conference on Artificial Intelligence in Education; Springer Nature: Cham, Switzerland, 2023; pp. 756–763. [Google Scholar] [CrossRef]
  9. Kewalramani, S.; Kidman, G.; Palaiologou, I. Using Artificial Intelligence (AI)-interfaced robotic toys in early childhood settings: A case for children’s inquiry literacy. Eur. Early Child. Educ. Res. J. 2021, 29, 652–668. [Google Scholar] [CrossRef]
  10. Allehyani, S.H.; Algamdi, M.A. Digital competencies: Early childhood teachers’ beliefs and perceptions of ChatGPT application in teaching English as a second language (ESL). Int. J. Learn. Teach. Educ. Res. 2023, 22, 343–363. [Google Scholar] [CrossRef]
  11. Su, J.; Yang, W. AI literacy curriculum and its relation to children’s perceptions of robots and attitudes towards engineering and science: An intervention study in early childhood education. J. Comput. Assist. Learn. 2024, 40, 241–253. [Google Scholar] [CrossRef]
  12. Burgsteiner, H.; Kandlhofer, M.; Steinbauer, G. Irobot: Teaching the basics of artificial intelligence in high schools. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 4126–4127. [Google Scholar]
  13. Kandlhofer, M.; Steinbauer, G.; Hirschmugl-Gaisch, S.; Huber, P. Artificial intelligence and computer science in education: From kindergarten to university. In Proceedings of the 2016 IEEE Frontiers in Education Conference (FIE), Erie, PA, USA, 12–15 October 2016; pp. 1–9. [Google Scholar]
  14. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16. [Google Scholar] [CrossRef]
  15. Mercer, N.; Hennessy, S.; Warwick, P. Dialogue, thinking together and digital technology in the classroom: Some educational implications of a continuing line of inquiry. Int. J. Educ. Res. 2019, 97, 187–199. [Google Scholar] [CrossRef]
  16. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  17. Druga, S.; Christoph, F.L.; Ko, A.J. Family as a third space for AI literacies: How do children and parents learn about AI together? In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; pp. 1–17. [Google Scholar]
  18. Steinbauer, G.; Kandlhofer, M.; Chklovski, T.; Heintz, F.; Koenig, S. A differentiated discussion about AI education K-12. KI-Künstliche Intell. 2021, 35, 131–137. [Google Scholar] [CrossRef] [PubMed]
  19. Druga, S.; Vu, S.T.; Likhith, E.; Qiu, T. Inclusive AI literacy for kids around the world. In Proceedings of the FabLearn 2019, New York, NY, USA, 9–10 March 2019; pp. 104–111. [Google Scholar] [CrossRef]
  20. Rodríguez-García, A.; Arias-Gago, A.R. Revisión de propuestas metodológicas: Una taxonomía de agrupación categórica. ALTERIDAD. Rev. Educ. 2020, 15, 146–160. [Google Scholar] [CrossRef]
  21. Pinski, M.; Benlian, A. AI literacy for users—A comprehensive review and future research directions of learning methods, components, and effects. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100062. [Google Scholar] [CrossRef]
  22. Yang, W. Artificial Intelligence education for young children: Why, what, and how in curriculum design and implementation. Comput. Educ. Artif. Intell. 2022, 3, 100061. [Google Scholar] [CrossRef]
  23. Brennan, K.; Resnick, M. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 Annual Meeting of the American Educational Research Association, Vancouver, BC, Canada, 13–17 April 2012; p. 25. [Google Scholar]
  24. Kim, S.; Jang, Y.; Kim, W.; Choi, S.; Jung, H.; Kim, S.; Kim, H. Why and what to teach: AI curriculum for elementary school. Proc. AAAI Conf. Artif. Intell. 2021, 35, 15569–15576. [Google Scholar] [CrossRef]
  25. Su, J.; Zhong, Y.; Ng, D.T.K. A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Comput. Educ. Artif. Intell. 2022, 3, 100065. [Google Scholar] [CrossRef]
  26. Williams, R.; Park, H.W.; Breazeal, C. A is for artificial intelligence: The impact of artificial intelligence activities on young children’s perceptions of robots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
  27. Williams, R.; Park, H.W.; Oh, L.; Breazeal, C. Popbots: Designing an artificial intelligence curriculum for early childhood education. Proc. AAAI Conf. Artif. Intell. 2019, 33, 9729–9736. [Google Scholar] [CrossRef]
  28. Almatrafi, O.; Johri, A.; Lee, H. A systematic review of AI literacy conceptualization, constructs, implementation, and assessment efforts (2019–2023). Comput. Educ. Open 2024, 6, 100173. [Google Scholar] [CrossRef]
  29. Lin, P.; Van Brummelen, J.; Lukin, G.; Williams, R.; Breazeal, C. Zhorai: Designing a conversational agent for children to explore machine learning concepts. Proc. AAAI Conf. Artif. Intell. 2020, 34, 13381–13388. [Google Scholar] [CrossRef]
  30. Sakulkueakulsuk, B.; Witoon, S.; Ngarmkajornwiwat, P.; Pataranutaporn, P.; Surareungchai, W.; Pataranutaporn, P.; Subsoontorn, P. Kids making AI: Integrating machine learning, gamification, and social context in STEM education. In Proceedings of the 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Wollongong, Australia, 4–7 December 2018; pp. 1005–1010. [Google Scholar] [CrossRef]
  31. Vartiainen, H.; Tedre, M.; Valtonen, T. Learning machine learning with very young children: Who is teaching whom? Int. J. Child-Comput. Interact. 2020, 25, 100182. [Google Scholar] [CrossRef]
  32. Su, J.; Yang, W. Artificial Intelligence (AI) literacy in early childhood education: An intervention study in Hong Kong. Interact. Learn. Environ. 2023, 31, 1–15. [Google Scholar] [CrossRef]
  33. Glaser, B.G.; Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research; Routledge: London, UK, 2017. [Google Scholar]
  34. Glaser, B.G. Basics of Grounded Theory Analysis; Sociology Press: Mill Valley, CA, USA, 1992. [Google Scholar]
  35. Littig, B.; Pöchhacker, F. Socio-translational collaboration in qualitative inquiry: The case of expert interviews. Qual. Inq. 2014, 20, 1085–1095. [Google Scholar] [CrossRef]
  36. Jiang, Y.; Zhang, B.; Zhao, Y.; Zheng, C. China’s preschool education toward 2035: Views of key policy experts. ECNU Rev. Educ. 2022, 5, 345–367. [Google Scholar] [CrossRef]
  37. Van Audenhove, L.; Donders, K. Expert interviews and elite interviews. In Handbook of Media Policy Methods; Van den Bulck, H., Puppis, M., Donders, K., Van Audenhove, L., Eds.; Palgrave MacMillan: London, UK, 2019; pp. 179–197. [Google Scholar]
  38. Suri, H. Purposeful sampling in qualitative research synthesis. Qual. Res. J. 2011, 11, 63–75. [Google Scholar] [CrossRef]
  39. Tracy, S.J. Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
  40. Punch, K. Developing Effective Research Proposals; SAGE: London, UK, 2000. [Google Scholar]
  41. Charmaz, K. Constructing Grounded Theory, 2nd ed.; SAGE: London, UK, 2014. [Google Scholar]
  42. Miles, M.B.; Huberman, A.M. Qualitative Data Analysis: An Expanded Sourcebook; SAGE: London, UK, 1994. [Google Scholar]
  43. Cohen, L.; Manion, L.; Morrison, K. Research Methods in Education; Routledge: London, UK, 2002. [Google Scholar]
  44. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches; SAGE Publications: London, UK, 2017. [Google Scholar]
  45. Su, J.; Zhong, Y. Artificial Intelligence (AI) in early childhood education: Curriculum design and future directions. Comput. Educ. Artif. Intell. 2022, 3, 100072. [Google Scholar] [CrossRef]
  46. Shamir, G.; Levin, I. Teaching machine learning in elementary school. Int. J. Child-Comput. Interact. 2022, 31, 100415. [Google Scholar] [CrossRef]
Figure 1. Bloom’s Taxonomy and AI literacy from [16].
Figure 2. The ‘SIACC’ Framework of Young Children’s AI Literacy.
Table 1. Background Information of the Participants and Rationale for Sampling.
Name | Title | Location | Research Field or Key Characteristics
Expert A | Chair Professor | Hong Kong, China | Digital Transformation in Early Childhood Education; Educational Policy; Early Childhood Education; Curriculum Theory; Pragmatics; Psycholinguistics; Cognitive Psychology
Expert B | Vice Dean and Professor | Shanghai, China | Digital Parenting; Digital Pedagogy; Early Childhood Mathematics Education
Expert C | Secretary General and Professor | Shanghai, China | Computer Science and Technology; Communication and Information Engineering
Expert D | Principal | Shanghai, China | Shanghai Education Digital Technology Benchmark Preschool; nationally recognized principal
Expert E | Principal | Shanghai, China | Shanghai Education Digital Technology Benchmark Preschool
Expert F | Director | Shanghai, China | Head of the Preschool Teaching and Research Department in Xuhui District, Shanghai
Expert G | Master Teacher and Director | Shanghai, China | Deputy Director of the Early Childhood Education Information Department at the Shanghai Municipal Education Commission Information Center
Table 2. Codes Extracted for the Conceptions of Young Children’s AI Literacy.
Core Theme | Refined Themes
Safety | AI Safety Awareness; Content-AI Interaction; Personal AI Security; Organizational AI Security
Identity | AI Co-Creation Identity; Digital AI Citizen Identity; AI Identity Management; Intellectual Property in AI
Attitude | AI Self-Awareness; Self-Management with AI; Digital AI Empathy
Cognition | AI Content Creation Thinking; Computational AI Thinking; AI Math Logic Thinking; AI Critical Thinking; AI Systems Thinking
Capability | AI Contextual Understanding Ability; AI Data Analysis and Management Ability; AI Exploratory Learning and Problem-solving Ability; AI Communication Ability
Table 3. The Definitions of Young Children’s AI literacy from Experts.
Name | Each Expert’s Definition
Expert A | Young children’s AI literacy means being capable of interacting with, controlling, and utilizing AI in their daily lives.
Expert B | Young children’s AI literacy can be defined as their capability to understand the basic functions of AI and to ethically use AI applications in their daily lives.
Expert C | Young children’s AI literacy means appropriately understanding the basic ideas behind smart machines like robots and computer programs. It includes recognizing how these machines can help us with simple tasks and learning to interact with them in a safe and kind way.
Expert D | Young children’s AI literacy means ethically and appropriately using AI in their daily lives.
Expert E | Young children’s AI literacy refers to the ability to recognize and use simple AI tools in their surroundings. This involves identifying everyday technology that has AI, like interactive toys or learning apps, and understanding how to use them responsibly and ethically.
Expert F | Young children’s AI literacy can be defined as the early stage of understanding and engaging with AI. It focuses on familiarizing young children with the concept of artificial intelligence through interactive and age-appropriate examples, fostering an awareness of how AI is a part of their daily life and encouraging a thoughtful and ethical approach to its use.
Expert G | Young children’s AI literacy means that young children make wise decisions while using, creating, and controlling technology.
