1. Introduction
As the development of generative AI technologies for education continues at a rapid pace [1], it is vital for researchers, educators, and students to be aware of the varied benefits and risks of different AI tools and the forms of learning that these innovations seek to promote in classrooms. Issues of privacy, surveillance, and algorithmic bias present barriers to the ethical implementation of AI-driven educational tools in K-12 classrooms [2], yet many students and teachers still view AI systems as a “black box” in terms of how their information is used (or misused) [3]. If we want to ensure more just and ethical AI-driven educational technologies, students’ voices must be centered in the design process to help shape emergent AI technologies that impact their classrooms and lives [4]. The authors of the recent Artificial Intelligence and the Future of Teaching and Learning report have called for research and design (R&D) efforts that center youth voices in the data, research, and design of educational AI solutions [5]. They identified this need as one of the top five national R&D issues that require immediate action. With our study, we respond to this call and aim to understand youth perspectives on AI in science education. How are students making sense of the AI tools they interact with inside and outside of the classroom? What ethical issues are they noticing? How are they imagining AI in their classrooms in the future? With the explosion of renewed interest in AI and the variety of voices chiming into the conversation, it is vital that the voices of young people who will learn and live with these technologies are not drowned out. Our study is situated both in a specific context, in which we engage learners in co-design, and in the current technological moment, in which generative AI tools are advancing rapidly and publicly. To this end, we conducted focus groups with youth in varied contexts to explore the following research questions:
RQ1: In the age of publicly accessible generative AI, what are students’ expectations about how AI might support their learning?
RQ2: How do middle school students envision and discuss the potential roles, risks, and benefits of AI technologies for their science classrooms?
2. Literature Review
While research into AI has exploded in recent years thanks to the rise of publicly available generative AI, artificial intelligence has a long history of use for student learning. Existing educational research has demonstrated the power of intelligent agents in supporting collaborative science learning, metacognition, and inquiry practices [6,7]. Such agents can act as a tutor, guiding students through a set of structured learning activities [8]; a facilitator, promoting productive collaboration during inquiry [9]; an inquisitive knowledge partner, encouraging them to make connections between ideas [10]; or a teachable peer, helping students explain their understanding of scientific ideas in new ways [7], among other roles. AI has also been leveraged substantially as a tool for science learning assessments, with many studies investigating how machine learning can support teacher instruction and provide feedback on student ideas [11]. However, as more powerful generative AI tools become increasingly present in students’ and teachers’ daily lives (e.g., ChatGPT, Magic School AI, Khanmigo), understanding what the new generation of AI agents can do, and how they should and should not be used, has become an increasingly urgent conversation in education. With many novice AI users suddenly having unprecedented access to powerful AI tools, it is important to understand how students perceive these tools, their benefits, their risks, and their roles in the learning process.
Previous studies of youth perspectives on AI highlight that while students notice the presence of AI in different aspects of their lives, they do not always understand how these technologies function [3,12]. The rapid expansion of AI in education and in broader society has revealed a need to establish guiding principles for designing AI systems as well as for ensuring that the users of these technologies understand how and why their data are used [13]. Researchers have documented how commercial AI software is plagued by issues of algorithmic bias and discrimination along gendered and racialized lines [14], and young people are increasingly aware of the negative impacts biased technologies can have on their lives, even when they lack the formal vocabulary to describe them [15]. Even elementary-age children show awareness of ethical issues but have limited understanding of how AI works [12]. Emerging research on student perspectives also highlights how the increasing complexity of AI tools can impact student trust, and that there is often a disconnect between student expectations for AI and the realistic capabilities of current tools [16]. Given the wide-reaching potential impacts of AI technologies on education, students and educators should be centrally involved in the co-design of AI-driven learning experiences so that designers can better understand their expectations for and experiences with AI tools [17]. The present study follows this guidance by inviting youth to participate in design discussions regarding how they would like to see AI-driven technologies implemented in their science classrooms.
Foundational efforts to integrate AI-driven technologies into the classroom learning environment have predominantly used co-design practices with teachers. For example, Tatar et al. [18] investigated the role of co-design with English Language Arts teachers in integrating AI into their classrooms and documented increases in teacher confidence and deepened views on AI. Co-design with teachers has also shown potential for creating AI tools that are integrated into the learning environment and that support teacher practice and reflection on implementation [19]. For example, teacher dashboards can leverage AI features to help teachers notice students’ varied science ideas through automatic scoring and evaluation [20]. Such technologies can assist teachers in customizing their instruction as well as in evaluating student work, so that teachers can align instructional choices with evidence [21]. By engaging teachers as active partners, co-design offers possibilities to inform the development of AI-driven learning tools, ensuring they are both pedagogically sound and responsive to the needs of learners. However, to fully realize the potential of AI in education, it is equally crucial to involve students in the co-design process, as their insights can ensure that ideas and activities resonate with their interests and needs.
Incorporating youth as active participants in the design of learning environments is grounded in a Participatory Design framework, which emphasizes the value of involving users (in this case, students) in every stage of the design process to ensure that resulting designs meet student interests and needs [22]. By engaging in collaborative design with youth, researchers can better understand the unique challenges, preferences, and perspectives that students bring to the learning environment. This approach is particularly relevant in the development and use of AI-supported resources, as it can ensure that these technologies not only align with learning goals but also foreground student ideas and experiences. Delgado et al. [23] provide a framework for the many forms that participatory design of AI tools can take, highlighting how users can not only provide feedback on current designs but also engage in deeper conversations about tool purposes and whether and why certain tools should or should not be created. This invites students not just to consult on researchers’ designs but also to participate as intellectual collaborators in designing future AI tools. Building on this foundation, our study leverages a Participatory Design framework with co-design practices to involve students in shaping how and what they want to learn with AI, thereby fostering inclusive and expansive approaches to AI-enabled learning.
Recent AI literacy studies have demonstrated the importance of placing youth perspectives at the forefront of conversations and designs around AI. Druga et al. [24] found in their co-design with youth and their families that putting youth in the active role of asking, adapting, authoring, and analyzing with and around AI tools positions youth as “agents of change, who can decide how AI should work, not just discover its current functionalities” (p. 207). Even without explicit ethical instruction, teenagers can grapple with a variety of ethical lenses on AI, considering both practical positive and negative consequences as well as more philosophical reflections on virtue ethics and ethics of care [25]. While young people may not always understand the technical layers of AI functionality, they are already growing up with and being influenced by AI in their daily lives, and they should be empowered to guide how AI develops and how it affects them in the future [26].
This means that, for research and technology design, understanding learners’ perspectives on AI is critical for developing ethical and engaging educational AI solutions. However, many studies that have aimed to build such an understanding have so far focused on students in higher education settings rather than on youth in middle-school or high-school classrooms [27,28]. Despite misconceptions that youth do not have the technical knowledge or ethical reasoning to participate as full stakeholders in AI design, Solyst et al. [29] found across multiple workshops with diverse youth that they were more than capable of engaging in algorithm audits and rich conversations around AI bias and fairness.
Moreover, researchers focused on youth perspectives (i.e., students aged 12–18) have primarily worked in the mathematics, computational thinking, and computer science domains [3,30]. In a recent systematic review of empirical studies of AI applications in K-12 science, Heeg and Avraamidou [31] found that the majority of studies were quantitative and aimed to validate the accuracy or efficiency of AI applications. The authors identified a need for qualitative studies that illuminate learners’ experiences with AI in science classrooms, encompassing students’ interactions not only with AI applications but also with other students. Blending qualitative studies of learner experiences and perceptions of AI with existing quantitative evidence can give us a more complex and useful understanding of how AI influences science learning. With this study, we add to the qualitative literature on this topic by investigating youth perspectives on AI in the context of playtesting an AI-supported educational science game.
4. Results
Overall, five key themes characterized students’ conversations about the roles that AI plays in their educational lives. We have phrased these themes as students’ claims about what AI should be, what it could be, and what it is right now:
1. AI should make learning more engaging;
2. AI should provide students support and adapt to what they need;
3. AI should be equitable and safe;
4. AI could be a helpful teacher’s assistant;
5. AI tries to mimic humans, but that is not always good.
Each theme is discussed below with illustrative examples from students’ conversations. The goal of outlining these claims about the present and future of AI is to highlight how these varied groups of students are wrestling with many of the same questions and imagined futures as teachers and other adults in their lives. In our analysis, we also draw attention to the underlying ideas about learning that students made visible in their talk as they designed new possible futures for technology in their classrooms.
4.1. Claim #1: AI Should Make Learning More Engaging
This first theme was developed primarily from students’ responses to question three, “If you could design an AI tool for your classroom, what would you make?”. When asked how they would design AI-driven helpers to improve their learning, students across the different focus groups returned repeatedly to the idea that a well-designed AI agent would encourage their engagement. Multiple students mentioned wanting activities that would make learning “more fun” and allow for more active participation. Students introduced examples such as planning more field trips or generating 3D models that students could explore (Group 1), as opposed to listening to lectures or passively reading information. Others highlighted the sheer amount of information that an AI tool could generate to keep them busy (e.g., “a robot that could come up with math questions really fast”, Breanna, Group 3). Students saw AI as being able to provide a variety of possible activities that would keep them engaged with the learning process, such as when River (Group 3) noted that a robot could help the class “by reading to us or doing math problems or just entertaining us”.
This interest in designing AI that could generate more engaging activities led Caleb in Group 1 to propose, “make all teachers robots […] but they have a terrible code that you can hack”. This proposal was met with mixed responses from his peers. Another boy, Arun, agreed that a hackable robot teacher “would make the kids learn and would make it more fun” because the activity could be “like an escape room”, where students could practice their coding skills. Despite the somewhat joking way in which the robot teacher idea was raised, the students in Group 1 discussed the proposal in depth, again highlighting the desire for more active learning experiences that offered students opportunities to create and explore rather than sit and listen. A third student, Amelia, pushed back against the proposal, saying, “No, that’s terrible […] because then we don’t learn, and I actually like my teacher”. The thought experiment around “should we make all teachers robots?” continued to frame much of the discussion that followed, and students came back again and again to the core goal of their robot teacher design—a desire for agency over their learning experiences in a way that produced less passive boredom and more active learning.
This underlying idea about learning, that it was often a chore and less engaging than they wanted it to be, was also raised in other groups. Several students in Group 4 suggested designs that centered on helping them get work done that they found uninteresting (e.g., “I want it to do my math homework for me”, Gia, Group 4). Unlike Group 1, who had some background in what AI can do and how it works as part of their summer camp, Group 4 could not answer the initial question we asked (“What is AI?”), and so many of their suggested designs focused on similar ideas about having a robot complete tasks they did not want to do (e.g., assessments and writing). Despite this difference in background knowledge, both groups gravitated towards designs that solved a similar core problem: removing parts of their learning experiences that they found to be uninteresting. While Group 4 remained at the level of “What can AI remove that I don’t like?”, Group 1’s lengthier discussion about the robot teacher also asked, “What can AI create that would be better?”. Whether or not an AI teacher or AI tool could fulfill the goal of making learning more active, fun, and engaging (and whether or not it would actually be better), students clearly felt that advances in AI technology offered them possibilities to redesign their school experiences to align with their own goals and ideals for what learning should look and feel like.
4.2. Claim #2: AI Should Provide Students Support and Adapt to What They Need
Another theme that students explored across groups was what individualized support and adaptive AI might look like in the classroom. Drawing on their experiences in the demo science game, some students noted how AI technologies have the potential to offer useful differentiation for a variety of learners based on their particular interests, skills, and prior knowledge. For example, Mara (Group 2) explained that when playing the demo game, “if you’re really really knowledgeable in those topics, you would want something more advanced to challenge you”. Students in Group 1 also discussed how AI agents could adjust the level of difficulty and the context of the learning experiences to align with student interests (e.g., adding fantasy vs. science fiction vs. realistic narrative elements to the game’s story). They also noted how AI agents could offer just-in-time information during their scientific investigations (such as interesting facts about a topic) to support learners without interrupting or taking over. Students in Group 3 highlighted some design aspects of the game demo that limited students’ agency (e.g., the fact that the game did not support students in arguing for a pro-parking-lot stance). They suggested that the AI-driven characters be redesigned so that students could argue for alternative and unexpected solutions, opening up more possible pathways through the story. Students saw AI as able to support differentiation within the narrative, so that feedback on their arguments could be responsive to the kinds of evidence they chose to engage with. This highlights the importance of asking students about their perceptions of AI in the context of an AI tool they can tinker with, as students were able to articulate their desire for adaptive AI in response to their frustration with the constraints of the narrative. The focus on tailoring students’ learning experiences ties back to the overarching design goal that students articulated throughout their discussions: to generate learning experiences that were active, agentic, enjoyable, and engaging for each individual student.
Students in Group 4 took a slightly different approach to designing adaptive AI support, focusing instead on how they could offload difficult tasks to AI tools. For example, Tiana suggested a design for an AI pencil that could write out assignments and other schoolwork by mimicking the user’s handwriting. She said that the user should be able to hold the pencil, “so it looks like you’re actually doing it but it’s the pencil”. As a younger participant (age 9), Tiana had mentioned having some difficulty with writing while typing responses to AI characters during the game demo, and so her design was aimed at offloading some of the writing work that she struggled with. This design highlights another tension that we saw across groups: a desire to reduce frustration, boredom, and difficulty that clashed with the need for students to be appropriately scaffolded in learning difficult but valuable skills. While students were clearly interested in designing adaptable and supportive AI, the line between AI that adds value for learning (providing necessary, timely, and temporary support) and AI that takes away from the learning process is one that some students noticed and others either did not consider or chose to ignore.
4.3. Claim #3: AI Should Be Equitable and Safe
Another important theme, which was highlighted in some of our groups’ discussions (i.e., Groups 1 and 3) but not others, was the need to design AI that is ethical, equitable, and safe for its users. In Group 1, as the discussion of robot teachers continued, the students shifted to the logical consequences of using robots to teach, including the economic, societal, and ethical impacts. A central concern that several students raised was that AI tools cannot always be trusted to keep private the information they record and process. Students noted that the power of AI could be “kind of terrifying” and that it was important to obtain permission to use people’s art, voice recordings, and other data. Sara summarized the group’s privacy concerns by saying, “If [a student is] talking to the robot teacher, the robot teacher might as well just be listening or report to the government on what’s happening. And that might be like the person’s personal information. So then I think that would lead to the kids feeling like they can’t really talk to very many people about what’s going on”. Caleb, who originally pitched the robot teacher idea, argued that AI tools having access to information could be beneficial if that access was used to keep students safe. However, Sara maintained that giving AI the ability to make decisions about sensitive student data could lead to “a big whole mess”, where personal information was taken out of context or misunderstood in ways that could lead to harming students and their families. In this way, Group 1’s discussions mirrored the broader conversations currently taking place in the public sphere about data security, data ownership, privacy, and trust in the design of AI tools. While students saw power and potential in the ability to design AI tools that could improve their learning, they also saw risks in allowing AI-driven agents to have access to their data, especially when they were unsure of who else would have access or how their information would be used.
While Group 3 did not dive as deeply into data privacy as Group 1 did, they did briefly highlight how differential access to advanced AI technologies could impact students. Taylor asked the researchers how students at other schools would be able to play the demo game if their school did not have access to the BeeVR technology, since it required resources beyond laptops in order to run. Taylor’s comment highlighted an underlying issue that was particularly relevant to Group 3, as their school was in a rural community and their school Wi-Fi was often spotty and slow, which impacted their gameplay experience during the study. While Group 1 was primarily concerned with how AI might harm students when designed poorly, for Group 3, equitable AI meant ensuring that schools with fewer resources were given the same opportunities to use technologies that could support their learning. While the extent to which groups explored ideas about equitable and safe AI differed according to the directions in which students guided the discussion, the ideas that were raised made it clear that students can grapple with complex ethical AI questions when the opportunity arises.
Groups 2 and 4 did not address issues of ethics and safety in their discussions of AI, since these were not directly prompted as discussion topics by the researchers. Group 2 was more focused on providing feedback on the particular AI features in the demo game, and so they focused on articulating useful vs. not useful features of AI rather than ethical considerations. Group 4 was the group with the least prior knowledge about AI, so it was not surprising that they did not raise issues of ethics and safety without prompting.
4.4. Claim #4: AI Could Be a Helpful Teacher’s Assistant
In addition to creating more engaging and exciting learning activities, students also saw a potential role for AI in how it could improve teachers’ workload in the classroom. Many students in the 5th grade classroom in particular (Group 3) showed an awareness of classroom management issues and teacher orchestration needs that could potentially be improved with AI. For example, Lily suggested a robot AI design that would be “kind of like a teacher’s assistant” that could “help the kids if they’re learning something new and they don’t know exactly how to do it”. Both Lily (Group 3) and Tiana (Group 4) suggested that AI could help teachers with writing ideas on the board, a small but important facilitation task for keeping track of class discussions. Ciera and River (both Group 3) each highlighted that teachers often get pulled away to help a particular small group or student, and that the rest of the class could benefit from an AI teaching assistant that could support them while the teacher was busy. Ciera suggested that during group work, this “little robot” could “come over and help them with what they need help with, and it can answer their questions and show them how to do [an activity]” while their teacher was helping a different small group. River noted that a robot could be programmed to “keep us busy and also help us learn” if, for example, the teacher was in another room helping a student complete a make-up exam.
Eva (Group 4) noted that even the rather mundane tasks that teachers are required to manage could be supported by an AI teacher’s assistant, saying, “What I would want it to do is help the teachers remember everything [...] like remembering to change the calendar, because my teacher forgets it”. Multiple students in both Groups 3 and 4 also brought up the idea of AI support being used to clean the classroom (e.g., “a Roomba that can clean up your stuff, not just crumbs”, Bridget, Group 4). In these instances, students saw the role of AI as removing or reducing their teacher’s workload for tasks that did not necessarily involve learning but helped to support the learning community and its smooth operation. Unlike the previous suggestions by Group 1 to replace teachers with an engaging teaching robot, students in Groups 3 and 4 saw AI as a way to make their teachers more available to them, freeing up time for teachers to focus on helping students who need support. This highlights another underlying idea that students drew on in their designs, which is that teachers have many tasks on their plates and do not always have enough time or enough resources to give each student individualized support while keeping the rest of the class engaged and learning.
4.5. Claim #5: AI Tries to Mimic Humans, but That Is Not Always Good
Finally, students noted in their discussions how AI is currently designed to mimic human behaviors and explored the implications of these design choices. When asked “What is AI?”, several students in Group 3 offered similar definitions that highlighted this mirroring of human behaviors, such as, “it got programmed to do stuff that humans can do” and “it learns from mistakes and stuff like us, and it’s like programmed to do human stuff”. However, when asked where they had seen AI before, students focused instead on the power of AI to find resources quickly and efficiently in ways that humans cannot (e.g., “you search up something and it gives you like a million results”, Cory, Group 3). Many students across groups had similar impressions about where they have seen AI in their own lives (e.g., Amazon Alexa, TikTok, Google searches), which focused on how AI could help find things or provide large amounts of knowledge. David (Group 2) mentioned how AI could act as a virtual opponent when playing chess, but overall, most students in our study had experience with AI more as an all-knowing search engine, algorithm, or assistant.
When groups did bring up designs that involved AI doing more specific “human stuff”, the discussion tended to center on the inability of AI technologies to adequately mimic human qualities such as emotionality, social support, and intelligence. Students in Group 2 had an extended discussion about whether or not one of the AI-driven conversational agents in the demo game, which was designed to answer students’ science questions, could really be considered intelligent if it could not also answer math and history questions. David tried to test the conversational agent’s intelligence by asking questions such as “What is 1 + 1?” and “Who is George Washington?”, and the agent responded with “I’m not sure” (the base answer our prototype was trained to give when it was asked a question outside of its training). David argued that such conversational agents were “the wrong place to put AI”, because the AI tool did not offer the same breadth of information that a human could achieve using a search engine. While our team intentionally designed the AI-driven character to be a human-like character with a narrow set of expertise, students in Group 2 expected the agent to behave like a highly knowledgeable search engine rather than like a human with limited knowledge. Similarly, Dylan in Group 1 mentioned that an AI teacher might “go on and on” about a topic, while a human teacher could help students make connections between information and their own lives. Ryan (Group 1) agreed, noting that “humans are more comfortable with humans”, so AI agents might not be as effective for supporting learning without that sense of social support. Amelia (Group 1) added another layer, saying, “even if robots have emotion in their voice, it might not be real emotion”. All of these comments suggest that students see clear distinctions between the tasks that AI tools can effectively support, and the more complex parts of teaching that require intellectual and socioemotional skills. While a few younger students in Group 4 mentioned wanting an AI robot that could take care of them and “help each other out with everything” (Willow), students in Group 1 appeared convinced that AI should not be used to support students socially and emotionally the way their teachers do. Students in Groups 1 and 2 both articulated that it was not worth the time and money to design AI technologies that merely imitated what humans could do, but less skillfully and with less human connection.
5. Discussion and Conclusions
While the design proposals of students in this study sometimes pushed ethical and technological boundaries, at the core of these conversations was a desire for control over their learning experiences and a desire to make their classrooms better. These results suggest that we should not underestimate the complexity of students’ emerging understandings of AI technologies, nor their understanding of the complex realities of their own classrooms, even when they are still coming to understand how machine learning algorithms and large language models function. While experts in AI technologies may frame design feedback primarily in terms of technical feasibility, everyday users can envision possibilities for technology that go beyond current models and capabilities [38]. Students had a clear understanding of the ways in which their classrooms could be redesigned to support more student agency and engaging learning, as well as of the existing ways in which their teachers were limited by the time and resources they had to provide support. Students in this study ultimately perceived teachers as invaluable guides and partners in their learning journey and sought ways to free up their teachers to focus on facilitating learning. Students also articulated desires for adaptable learning experiences, wanting the demo game to provide space to make unexpected choices and to argue for unique solutions to the socio-scientific problems presented in the story. However, students’ design suggestions also revealed tensions between a desire for more engaging, fun classrooms and a desire to receive personalized scaffolding in their learning experiences. This suggests that students may benefit from exploring what it feels like to use AI to make learning “easier” through temporary support, as research with older students has shown that students can develop more awareness of the value of their own writing experiences when teachers allow them to compose essays with AI and reflect on that experience [39]. While the desire for learning to be more engaging is not new or unique to these particular students, we argue that the concerns and claims students raise are central to the ongoing design of AI tools. Considering youth as key stakeholders in the technologies being developed for their classrooms [29], each claim raised in this study reveals learner-articulated problems of practice that will guide future iterations of our learning environments.
Importantly, the underlying concerns students highlighted in this study do not necessarily require AI-driven solutions. However, giving students an open design space to ask, “If you had access to powerful technology, how would you use it to make learning better?”, allowed them to articulate their needs, concerns, and hopes for their learning, which could be addressed through a variety of technological and non-technological pathways. Even if their ideas about AI were not necessarily surprising or new to expert AI researchers, the process of centering student ideas and visions for the future of AI helps to (re)align designs with the goals and needs of users. This need to align designs with user needs is not unique to AI tools, but highlighting the need for co-design helps AI developers resist the false assumption that stakeholders must be experts in a technology in order to reflect on its impacts. This study also further demonstrates the value of youth engaging in dialogic inquiry with AI, where they are not just learning about how AI works but also exploring how AI could change to better support their individual and community needs [24]. Letting students take on the role of collaborative designer of AI technologies gives them a window into how these kinds of conversational agents work and can act as a context for further AI literacy development [29]. Such co-design activities may be useful in engaging students with other forms of AI tools beyond chatbots, such as the ways in which machine learning can influence the assessment of their science learning [11].
Our main argument is not that we should “make all teachers robots” or offload every possible task to AI; our argument is that including youth voices in the design of solutions for their classrooms is vital in order to understand what problems exist (or are perceived by students to exist), what possible futures students imagine, and how we can design towards these futures. Although not all of the students in our study fully understood what AI is and what it can do, they intimately understood their own classrooms and the ways in which they could be better: more engaging, more supportive, and more responsive to their needs. This further highlights the current gap in AI co-design research, in which students and other stakeholders are often brought in as consultants but are given little agency as designers and true collaborators throughout the design process [23]. Our study offers evidence that learners have everyday expertise that can contribute to AI tool designs, moving beyond simple user-testing and towards collaborative goal-setting and idea generation around what AI can do and what it should be used for. This aligns with other recent studies of student perspectives on AI tools, which have demonstrated that students can use what they know of their own teachers’ skills and knowledge to evaluate chatbots, and that though they appreciate the knowledge integration support chatbots can offer, they desire more conversational interactions that mirror how a supportive teacher would guide their learning [10]. Integrating students as co-designers can allow them to apply their knowledge of classroom learning to the designs of future adaptive AI learning supports.
For those students who were beginning to understand how AI works, they were also eager to explore and debate the ethical layers of AI’s role in their classrooms. The playtesting of an AI-driven learning environment also provided a grounding context for students to explore these issues more concretely than discussing AI for learning in the abstract. For example, students were able to discuss how they have seen their data used in everyday life and compare that experience with how our specific game handles their data and privacy. They could also compare chatbots they have interacted with previously with the conversational agents they saw in the game and reflect on their different purposes. Some groups of students did present a false binary choice between human teachers and an all-powerful AI teacher, as if one must replace the other. This suggests that students may benefit from exploring how AI can be leveraged as a supportive tool in the classroom, beyond the general-purpose AI chatbots and assistants they typically encounter.
We hope readers will take with them the idea that students in the middle grades are more than capable of discussing the complexities of AI and the possible risks it brings into their lives. Several groups were able to hold ethical, economic, socioemotional, and educational concerns in tension with one another as they workshopped design ideas together and navigated what the role of AI should be in their classrooms. Some of our groups were primed to discuss AI deeply based on their prior interests and experiences, and their conversations offer additional evidence that co-designing AI tools with youth can be a productive site for learning about the pitfalls of AI technologies [15]. Furthermore, centering complex ethical dilemmas in discussions can help youth develop deeper understandings of AI as they express their concerns and hopes for how these technologies will impact their lives [40]. In addition to learning about ethical AI through such discussions, young people can also offer critical insights to developers about the potential harm AI can cause [29]. For researchers, these co-design discussions can help us align our learning designs with both classroom realities and equitable futures that are meaningful for learners. This alignment is always important for learning designs, but it is particularly vital when technology companies seek to implement advanced tools in classrooms without student input. Inviting students and teachers as active collaborators can help researchers to strike a balance between “evolution and revolution”, in which we build on what education research already knows about the power and potential of AI for learning while also thinking broadly with stakeholders about future possibilities [41]. Working with students to articulate together what values and risks AI brings to their classrooms can help them to envision new possible futures and the technologies that these futures require [42]. Centering students’ voices in the design and development of AI technologies for education offers them the agency to imagine and design towards an alternative future where all learning is active, engaging, and meaningful for their lives.