Article

Integrating Youth Perspectives into the Design of AI-Supported Collaborative Learning Environments

by Megan Humburg 1,*, Dalila Dragnić-Cindrić 2, Cindy E. Hmelo-Silver 1, Krista Glazewski 3, James C. Lester 4 and Joshua A. Danish 1

1 Center for Research on Learning and Technology, Indiana University, Bloomington, IN 47405, USA
2 Learning Sciences Research, Digital Promise, Washington, DC 20036, USA
3 William & Ida Friday Institute for Educational Innovation, North Carolina State University, Raleigh, NC 27606, USA
4 Department of Computer Science, North Carolina State University, Raleigh, NC 27695, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(11), 1197; https://doi.org/10.3390/educsci14111197
Submission received: 31 July 2024 / Revised: 14 October 2024 / Accepted: 28 October 2024 / Published: 31 October 2024

Abstract

This study highlights how middle schoolers discuss the benefits and drawbacks of AI-driven conversational agents in learning. Using thematic analysis of focus groups, we identified five themes in students’ views of AI applications in education. Students recognized the benefits of AI in making learning more engaging and providing personalized, adaptable scaffolding. They emphasized that AI use in education needs to be safe and equitable. Students identified the potential of AI in supporting teachers and noted that AI educational agents fall short when compared to emotionally and intellectually complex humans. Overall, we argue that even without technical expertise, middle schoolers can articulate deep, multifaceted understandings of the possibilities and pitfalls of AI in education. Centering student voices in AI design can also provide learners with much-desired agency over their future learning experiences.

1. Introduction

As the development of generative AI technologies for education continues at a rapid pace [1], it is vital for researchers, educators, and students to be aware of the varied benefits and risks of different AI tools and the forms of learning that these innovations seek to promote in classrooms. Issues of privacy, surveillance, and algorithmic bias present barriers to the ethical implementation of AI-driven educational tools in K-12 classrooms [2], and many students and teachers still view AI systems as a “black box” in terms of how their information is used (or misused) [3]. If we want to ensure more just and ethical AI-driven educational technologies, students’ voices must be centered in the design process to help shape emergent AI technologies that impact their classrooms and lives [4]. The authors of the recent Artificial Intelligence and the Future of Teaching and Learning report have called for research and design (R&D) efforts that center youth voices in the data, research, and design of educational AI solutions [5]. They identified this need as one of the top five national R&D issues that require immediate action. With our study, we respond to this call and aim to understand youth perspectives on AI in science education. How are students making sense of the AI tools they interact with inside and outside of the classroom? What ethical issues are they noticing? How are they imagining AI in their classrooms in the future? With the explosion of renewed interest in AI and the variety of voices chiming into the conversation, it is vital that the voices of young people who will learn and live with these technologies are not drowned out. Our study is situated both in a specific context, in which we engage learners in co-design, and in the current technological moment, in which generative AI tools are advancing rapidly and publicly. To this end, we conducted focus groups with youth in varied contexts to explore the following research questions:
RQ1: In the age of publicly accessible generative AI, what are students’ expectations about how AI might support their learning?
RQ2: How do middle school students envision and discuss the potential roles, risks, and benefits of AI technologies for their science classrooms?

2. Literature Review

While research into AI has exploded in recent years, thanks to the rise in publicly available generative AI, artificial intelligence has a long history of use for student learning. Existing educational research has demonstrated the power of intelligent agents in supporting collaborative science learning, metacognition, and inquiry practices [6,7]. Such agents can act as a tutor, guiding students through a set of structured learning activities [8]; a facilitator, promoting productive collaboration during inquiry [9]; an inquisitive knowledge partner, encouraging them to make connections between ideas [10]; or a teachable peer, helping students explain their understanding of scientific ideas in new ways [7], among other roles. AI has also been leveraged substantially as a tool for science learning assessments, with many studies investigating how machine learning can support teacher instruction and provide feedback on student ideas [11]. However, as more powerful generative AI tools become increasingly present in students’ and teachers’ daily lives (e.g., ChatGPT, Magic School AI, Khanmigo), understanding what the new generation of AI agents can do—and how they should and should not be used—has become a more urgent conversation in education. With many novice AI users suddenly having unprecedented access to powerful AI tools, it is important to understand how students perceive these tools, their benefits, their risks, and their roles in the learning process.
Previous studies of youth perspectives on AI highlight that while students notice the presence of AI in different aspects of their lives, they do not always understand how these technologies function [3,12]. The rapid expansion of AI in education and in broader society has revealed a need to establish guiding principles for designing AI systems as well as ensuring that the users of these technologies understand how and why their data are used [13]. Researchers have documented how commercial AI software is plagued by issues of algorithmic bias and discrimination along gendered and racialized lines [14], and young people are increasingly aware of the negative impacts biased technologies can have on their lives, even when they lack the formal vocabulary to describe them [15]. Even elementary-age children show awareness of ethical issues but have limited understanding of how AI works [12]. Emerging research on student perspectives also highlights how the increasing complexity of AI tools can impact student trust, and that there is often a disconnect between student expectations for AI and the realistic capabilities of current tools [16]. Given the wide-reaching potential impacts of AI technologies on education, students and educators should be centrally involved in the co-design of AI-driven learning experiences so that designers can better understand their expectations for and experiences with AI tools [17]. The present study follows this guidance by inviting youth to participate in design discussions regarding how they would like to see AI-driven technologies implemented in their science classrooms.
Foundational efforts to integrate AI-driven technologies into the classroom learning environment have predominantly used co-design practices with teachers. For example, Tatar et al. [18] investigated the role of co-design with English Language Arts teachers to integrate AI into their classrooms and documented increases in teacher confidence and deepened views on AI. Co-design with teachers has also demonstrated potential for creating AI tools that are integrated into the learning environment and that support teacher practice and reflection on implementation [19]. For example, teacher dashboards can leverage AI features to help teachers notice students’ varied science ideas through automatic scoring and evaluation [20]. Such technologies can assist teachers in customizing their instruction as well as in evaluating student work, so that teachers can align instructional choices with evidence [21]. By engaging teachers as active partners, co-design offers possibilities to inform the development of AI-driven learning tools, ensuring they are both pedagogically sound and responsive to the needs of learners. However, to fully realize the potential of AI in education, it is equally crucial to involve students in the co-design process, as their insights can ensure that ideas and activities resonate with their interests and needs.
Incorporating youth as active participants in the design of learning environments is grounded in a Participatory Design framework, which emphasizes the value of involving users—in this case, students—in every stage of the design process to ensure that resulting designs meet student interests and needs [22]. By engaging in collaborative design with youth, researchers can better understand the unique challenges, preferences, and perspectives that students bring to the learning environment. This approach is particularly relevant in the development and use of AI-supported resources, as it can ensure that these technologies not only align with learning goals, but also foreground student ideas and experiences. Delgado et al. [23] provide a framework for the many forms that participatory design of AI tools can take, highlighting how users can not only provide feedback on current designs but also engage in deeper conversations about tool purposes and whether and why certain tools should or should not be created. This invites students not just to consult on researchers’ designs but also to participate as intellectual collaborators in designing future AI tools. Building on this foundation, our study leverages a Participatory Design framework with co-design practices to involve students in shaping how and what they want to learn with AI, thereby fostering inclusive and expansive approaches to AI-enabled learning.
Recent AI literacy studies have demonstrated the importance of placing youth perspectives at the forefront of conversations and designs around AI. Druga et al. [24] found in their co-design with youth and their families that putting youth in the active role of asking, adapting, authoring, and analyzing with and around AI tools positions youth as “agents of change, who can decide how AI should work, not just discover its current functionalities” (p. 207). Even without explicit ethical instruction, teenagers can grapple with a variety of ethical lenses on AI, considering both practical positive and negative consequences, as well as more philosophical reflections on virtue ethics and ethics of care [25]. While young people may not always understand the technical layers of AI functionality, they are already growing up with and being influenced by AI in their daily lives, and they should be empowered to guide how AI develops to impact them in the future [26].
This means that for research and technology design, understanding learners’ perspectives on AI is critical for developing ethical and engaging educational AI solutions. So far, however, many studies that have aimed to build such an understanding have focused on students in higher education settings rather than on youth in middle-school or high-school classrooms [27,28]. Despite misconceptions that youth do not have the technical knowledge or ethical reasoning to participate as full stakeholders in AI design, Solyst et al. [29] found across multiple workshops with diverse youth that they were more than capable of engaging in algorithm audits and rich conversations around AI bias and fairness.
Moreover, researchers focused on youth perspectives (i.e., students aged 12–18) have primarily worked in the mathematics, computational thinking, and computer science domains [3,30]. In a recent systematic review focused on empirical studies of AI applications in K-12 science, Heeg and Avraamidou [31] found that the majority of studies were quantitative and aimed to validate the accuracy or efficiency of AI applications. The authors identified a need for qualitative studies to illuminate learners’ experiences with AI in science classrooms, encompassing students’ interactions not only with AI applications but also with other students. Blending qualitative studies of learner experiences and perceptions of AI with existing quantitative evidence can give us a more complex and useful understanding of how AI influences science learning. With this study, we add to the qualitative literature on this topic by investigating youth perspectives on AI in the context of playtesting an AI-supported educational science game.

3. Materials and Methods

3.1. Data Collection

This study design involved semi-structured focus groups that were qualitatively analyzed to understand patterns in students’ ideas and perspectives around AI for science learning. Data were drawn from four different focus groups conducted with students aged 9–14 in the United States. Group 1 (n = 11, ages 11–14) was a group of students participating in an AI summer camp in a small Midwestern city. Group 2 (n = 4, ages 11–13) was a group of friends from a medium-sized Southern city who shared an interest in AI technology. Group 3 (n = 18, ages 10–12) was a classroom of 5th grade students in a rural Midwestern town. Group 4 (n = 6, ages 9–11) was a group of students at an all-girls after-school club in a small Midwestern city. While these groups appear disparate in terms of structure and background, our goal was to engage a variety of youth in different contexts to understand how young people with different levels of exposure to AI engage in reflections on the possibilities of AI for learning. All groups participated in similar educational game demo and focus group discussion tasks, as outlined below. Table 1 describes the demographic details for each group.
First, students engaged in a discussion with the researchers about AI in general. Discussions were researcher-facilitated, but conversations were ultimately steered by students’ ideas and concerns. For example, if a student said they had seen AI used in TikTok, a follow-up question would be: “How does TikTok use AI?” All groups were asked the same three key starting questions:
  • Who knows what AI is? What is AI?
  • Where have you seen AI before?
  • If you could design an AI tool for your classroom, what would you make? What would it do?
Most groups (Groups 2, 3, and 4) concluded their framing discussions after these three questions and some short follow-ups, while Group 1 had a more extensive discussion that included follow-up questions about the impact of AI (e.g., Do you think AI is helpful or harmful, or both?) since students in this group had deeper background knowledge of AI and were eager to continue the discussion based on their proposed designs for AI tools.
After this introductory discussion, the researchers explained how AI functions in the demo game that the students would try out (detailed in the next section), giving the same instructions to all groups. Groups of students then spent 1–2 h playtesting the AI-driven features of the game. The playtime varied depending on how long it took the students to complete the tutorial and explore the three main game locations. After their playthrough, the students came back together for a feedback-oriented discussion, where they reflected on their experiences with the game and made design suggestions for how to improve their engagement and learning. The analysis for this paper focuses on students’ initial perspectives on AI, shared in the introductory discussions, though we do draw on some data from feedback discussions, when students connected their feedback to particular AI design features.

3.2. Study Context and Materials

This study is part of a larger project in which we are gathering student and teacher perspectives to guide the design of AI-driven narrative-centered learning environments. Narrative-centered learning environments use immersive storytelling to engage learners as role-players in a narrative that is tightly linked to pedagogical goals [32,33]. In these narrative environments, learners take on an active problem-solving role as a character in a story, situated in a rich narrative context that facilitates the discovery and application of disciplinary ideas and skills in ways that mirror real-world knowledge use.
The learning environment our team designed for this project (tentatively titled SciStory: Pollinators) is aimed at engaging middle-grade students in the deep investigation of a socio-scientific issue (SSI)—a complex social issue with strong conceptual links to science [34]. SSIs allow for student engagement in discussion, argumentation, and evidence-based, fair decision-making, while drawing on their own lived experiences and connecting them to the science content [35]. Specifically, in our narrative game, students learn about food systems, pollination, and food justice. Students sat in groups and could collaborate with their groups via an in-game chat function, but each student played this educational computer game on their own laptop in a web browser (Figure 1a). Students explored a virtual community where the neighborhood is grappling with a problem: there is an empty lot downtown, and the mayor is deciding whether to turn the lot into a community garden or a parking garage. As they explored the demo version of the game, students in our study chose which characters to speak to and which of the three main game locations to explore: the empty lot, a food scientist’s lab, and a community garden (Figure 1b). They spoke to different game characters to gather opinions on the empty lot, explored in-game resources about food justice and how communities grow and ship food, and investigated the relationship between pollinators and food systems by playing as a honeybee in BeeVR, a minigame embedded within the larger game (Figure 2). In the BeeVR minigame, students played as honeybees trying to forage for nectar in a garden environment (Figure 2a) and on the rooftop of a polluted parking garage (Figure 2b). At the end of each round, flowers turned into healthy or unhealthy fruit based on how much they had been pollinated, and students could bring ideas from BeeVR into their digital evidence notebooks in the main game. After collecting a variety of scientific evidence, students were tasked with drafting, refining, and proposing an argument to the town mayor in the game about how the neighborhood would benefit from building a community garden.
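To make this round-end mechanic concrete, the sketch below models the rule described above: flowers visited often enough by the player-controlled honeybee yield healthy fruit, while under-pollinated flowers yield unhealthy fruit. This is purely our illustrative reconstruction; the actual BeeVR implementation, class names, and visit threshold are not published, so every identifier here is hypothetical.

```python
# Minimal sketch (not the actual BeeVR code) of the end-of-round pollination rule:
# flowers become healthy or unhealthy fruit based on how much they were pollinated.
from dataclasses import dataclass

HEALTHY_VISITS = 3  # hypothetical threshold; the real game's tuning is not published


@dataclass
class Flower:
    pollination_visits: int = 0

    def visit(self) -> None:
        """Record one nectar-foraging visit by the player-controlled honeybee."""
        self.pollination_visits += 1

    def end_of_round_fruit(self) -> str:
        """At the end of a round, each flower turns into healthy or unhealthy fruit."""
        return "healthy fruit" if self.pollination_visits >= HEALTHY_VISITS else "unhealthy fruit"


garden_flower = Flower()
for _ in range(4):       # a flower in the garden environment, visited often
    garden_flower.visit()

rooftop_flower = Flower()
rooftop_flower.visit()   # a flower on the polluted rooftop, visited rarely

print(garden_flower.end_of_round_fruit())   # -> healthy fruit
print(rooftop_flower.end_of_round_fruit())  # -> unhealthy fruit
```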
The game uses AI-driven conversational agents (e.g., game characters) to give students tailored feedback as they interact with game narratives and construct their arguments. These characters allow students to type their own questions and ideas into a text box, and they receive AI-selected responses to their inquiries (Figure 3). The game is not intended to directly teach students AI concepts, but the AI-driven agents are meant to provide players with adaptive support as they investigate the scientific ideas in the SciStory narrative. As a part of the focus group, students tested out these AI-driven features and offered feedback on how the designs could be improved. In this way, the AI agents provided a context for students to ground their discussion of how AI can support learning. Students were given high-level explanations of how the AI agents function (e.g., that they use AI to understand the questions players type and provide useful answers), but groups did not discuss the natural language processing technology supporting these agents in detail. All groups played similar versions of the game, though there were slight design improvements for each cycle according to our design-based research process (e.g., improving character designs, refining story content, and improving conversational agent responses based on previous focus group findings).
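As an illustration of what “AI-selected responses” could look like under the hood, the sketch below matches a player’s typed question against a small set of curated question–answer pairs and falls back to the base “I’m not sure” response when nothing matches well (the fallback behavior students encountered, described in Section 4.5). This is a minimal retrieval-style approximation under our own assumptions, not the project’s actual natural language processing pipeline; the example questions, answers, and similarity threshold are all hypothetical.

```python
# Minimal retrieval-style sketch (not the project's actual NLP pipeline) of an agent
# that selects a curated answer for a typed question, with an out-of-scope fallback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical curated question->answer pairs for one game character.
KNOWLEDGE = [
    ("What do pollinators do for plants?",
     "Pollinators carry pollen between flowers, which lets plants make fruit and seeds."),
    ("Why does the community want a garden?",
     "A garden could give the neighborhood fresh food grown close to home."),
    ("How does pollution affect bees?",
     "Polluted areas have fewer healthy flowers, so bees find less nectar there."),
]
FALLBACK = "I'm not sure."  # base answer for questions outside the agent's training
THRESHOLD = 0.5             # hypothetical cutoff: below this, treat as out of scope

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform([q for q, _ in KNOWLEDGE])


def respond(player_question: str) -> str:
    """Return the curated answer whose question best matches the player's input."""
    query = vectorizer.transform([player_question])
    scores = cosine_similarity(query, question_vectors)[0]
    best = scores.argmax()
    return KNOWLEDGE[best][1] if scores[best] >= THRESHOLD else FALLBACK


print(respond("Who is George Washington?"))     # no overlap with training -> "I'm not sure."
print(respond("What do bees do for flowers?"))  # -> the pollinator answer
```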
Some of the focus groups were also able to test out a room-scale, motion-tracking version of the BeeVR minigame, which allowed them to control an on-screen honeybee with their physical movements around the space as they pretended to be a honeybee visiting flowers. Other groups used a tablet-based version of BeeVR, which was the same minigame controlled by on-screen gestures instead of full-body movements (Figure 4). BeeVR was built in the [project name blinded] modeling system [36] and was designed as an embodied companion activity to the laptop-based SciStory game, providing students with opportunities to gather scientific evidence about gardens from the pollinators’ perspective. Depending on the technical and space limitations of the different focus group contexts, Groups 1 and 4 tried the room-scale BeeVR, Group 3 tried the tablet-based BeeVR, and Group 2 only tested the laptop game without BeeVR.

3.3. Data Analysis

Roughly four hours of audio data were transcribed using an automated service and then hand-edited for accuracy and reviewed by the researchers. Thematic analysis [37] was used to draw together ideas from the four focus groups into categories of meaning that reflected the various student-articulated benefits, risks, and roles related to AI classroom integration. The first and second authors independently reviewed the data and wrote analytic memos about the data, noting key moments and interesting quotes in the discussions about AI. We then discussed potential themes and student quotes that represented these themes. Final themes were iteratively generated via multiple analytic passes through the transcripts, paired with the audio and video data. Four initial themes were developed based on Groups 1 and 2’s data, and then a fifth theme was added after the analysis of Groups 3 and 4. The original four themes were also retitled and restructured to better represent the ideas shared in all four focus groups. Figure 5 depicts our analytic process.
Initially, the goal of this analysis was to synthesize students’ feedback regarding the design of AI-driven conversational agents in our particular learning environment design, but themes regarding students’ more general perceptions of the risks and benefits of AI technologies emerged organically during the early analytic process, based on the content and richness of the focus group conversations. Below, we articulate the key themes that capture these young people’s negotiations about how emerging AI technologies could impact their learning.

4. Results

Overall, five key themes characterized students’ conversations about the roles that AI plays in their educational lives. We have phrased these themes as students’ claims about what AI should be, what it could be, and what it is right now:
  • AI should make learning more engaging;
  • AI should provide students support and adapt to what they need;
  • AI should be equitable and safe;
  • AI could be a helpful teacher’s assistant;
  • AI tries to mimic humans, but that is not always good.
Each theme is discussed below with illustrative examples from students’ conversations. The goal of outlining these claims about the present and future of AI is to highlight how these varied groups of students are wrestling with many of the same questions and imagined futures as teachers and other adults in their lives. In our analysis, we also draw attention to the underlying ideas about learning that students made visible in their talk as they designed new possible futures for technology in their classrooms.

4.1. Claim #1: AI Should Make Learning More Engaging

This first theme was developed primarily from students’ responses to question three, “If you could design an AI tool for your classroom, what would you make?”. When asked how they would design AI-driven helpers to improve their learning, students across the different focus groups returned repeatedly to the idea that a well-designed AI agent would encourage their engagement. Multiple students mentioned wanting activities that would make learning “more fun” and allow for more active participation. Students introduced examples such as planning more field trips or generating 3D models that students could explore (Group 1), as opposed to listening to lectures or passively reading information. Others highlighted the sheer amount of information that an AI tool could generate to keep them busy (e.g., “a robot that could come up with math questions really fast”, Breanna, Group 3). Students saw AI as capable of providing a variety of possible activities that would keep them engaged with the learning process, such as when River (Group 3) noted that a robot could help the class “by reading to us or doing math problems or just entertaining us”.
This interest in designing AI that could generate more engaging activities led Caleb in Group 1 to propose, “make all teachers robots […] but they have a terrible code that you can hack”. This proposal was met with mixed responses from his peers. Another boy, Arun, agreed that a hackable robot teacher “would make the kids learn and would make it more fun” because the activity could be “like an escape room”, where students could practice their coding skills. Despite the somewhat joking way in which the robot teacher idea was raised, the students in Group 1 discussed the proposal in depth, again highlighting the desire for more active learning experiences that offered students opportunities to create and explore rather than sit and listen. A third student, Amelia, pushed back against the proposal, saying, “No, that’s terrible […] because then we don’t learn, and I actually like my teacher”. The thought experiment around “should we make all teachers robots?” continued to frame much of the discussion that followed, and students came back again and again to the core goal of their robot teacher design—a desire for agency over their learning experiences in a way that produced less passive boredom and more active learning.
This underlying idea about learning—that it was often a chore and less engaging than they wanted it to be—was also raised in other groups. Several students in Group 4 suggested designs that centered on helping them get work done that they found uninteresting (e.g., “I want it to do my math homework for me”, Gia, Group 4). Unlike Group 1, who had some background in what AI can do and how it works as part of their summer camp, Group 4 could not answer the initial question we asked (“What is AI?”), and so many of their suggested designs focused on similar ideas about having a robot complete tasks they did not want to do (e.g., assessments and writing). Despite this difference in background knowledge, both groups gravitated towards designs that solved a similar core problem—removing parts of their learning experiences that they found to be uninteresting. While Group 4 remained at the level of “What can AI remove that I don’t like?”, Group 1’s lengthier discussion about the robot teacher also asked, “What can AI create that would be better?”. Whether or not an AI teacher or AI tool could fulfill the goal of making learning more active, fun, and engaging (and whether or not it would actually be better), students clearly felt that advances in AI technology offered them possibilities to redesign their school experiences to align with their own goals and ideals for what learning should look and feel like.

4.2. Claim #2: AI Should Provide Students Support and Adapt to What They Need

Another theme that students explored across groups was what individualized support and adaptive AI might look like in the classroom. Drawing on their experiences in the demo science game, some students noted how AI technologies have the potential to offer useful differentiation for a variety of learners based on their particular interests, skills, and prior knowledge. For example, Mara (Group 2) explained that when playing the demo game, “if you’re really really knowledgeable in those topics, you would want something more advanced to challenge you”. Students in Group 1 also discussed how AI agents could adjust the level of difficulty and the context of the learning experiences to align with student interests (e.g., adding fantasy vs. science fiction vs. realistic narrative elements to the game’s story). They also noted how AI agents could offer just-in-time information during their scientific investigations (such as interesting facts about a topic) to support learners without interrupting or taking over. Students in Group 3 highlighted some design aspects of the game demo that limited students’ agency (e.g., the fact that the game did not support students in arguing for a pro-parking-lot stance). They suggested that the AI-driven characters should be redesigned so that students could argue for alternative and unexpected solutions and have more possible pathways through the story. Students saw AI as able to support differentiation within the narrative, so that feedback on their arguments could be responsive to the kinds of evidence they chose to engage with. This highlights the importance of asking students about their perceptions of AI in the context of an AI tool they can tinker with, as students were able to articulate their desire for adaptive AI in response to their frustration with the constraints of the narrative. The focus on tailoring students’ learning experiences ties back to the overarching design goal that students articulated throughout their discussions, which was to generate learning experiences that were active, agentic, enjoyable, and engaging for each individual student.
Students in Group 4 took a slightly different approach to designing adaptive AI support, focusing instead on how they could offload difficult tasks to AI tools. For example, Tiana suggested a design for an AI pencil that could write out assignments and other schoolwork for you by mimicking the user’s handwriting. She said that the user should be able to hold the pencil, “so it looks like you’re actually doing it but it’s the pencil”. As a younger participant (age 9), Tiana had mentioned having some difficulty with writing while typing responses to AI characters during the game demo, and so her design was aimed at offloading some of the writing work that she struggled with. This design highlights another tension that we saw across groups: a desire to reduce frustration, boredom, and difficulty that clashes with the need for students to be appropriately scaffolded in learning difficult but valuable skills. While students were clearly interested in designing adaptable and supportive AI, the line between AI as a valuable addition to learning (providing necessary, timely, and temporary support) and AI as a detraction from the learning process is one that some students noticed and others either did not consider or chose to ignore.

4.3. Claim #3: AI Should Be Equitable and Safe

Another important theme, which was highlighted in some of our groups’ discussions (i.e., Groups 1 and 3) but not others, was the need to design AI that is ethical, equitable, and safe for its users. In Group 1, as the discussion of robot teachers continued, the students shifted to the logical consequences of using robots to teach, including the economic, societal, and ethical impacts. A central concern that several students raised was that AI tools cannot always be trusted to keep private the information they record and process. Students noted that the power of AI could be “kind of terrifying” and that it was important to obtain permission to use people’s art, voice recordings, and other data. Sara summarized the group’s privacy concerns by saying, “If [a student is] talking to the robot teacher, the robot teacher might as well just be listening or report to the government on what’s happening. And that might be like the person’s personal information. So then I think that would lead to the kids feeling like they can’t really talk to very many people about what’s going on”. Caleb, who originally pitched the robot teacher idea, argued that AI tools having access to information could be beneficial if it was used to keep students safe. However, Sara maintained that giving AI the ability to make decisions about sensitive student data could lead to “a big whole mess”, where personal information was taken out of context or misunderstood in ways that could lead to harming students and their families. In this way, Group 1’s discussions mirrored the broader conversations currently taking place in the public sphere about data security, data ownership, privacy, and trust in the design of AI tools. While students saw power and potential in the ability to design AI tools that could improve their learning, they also saw risks in allowing AI-driven agents to have access to their data, especially when they were unsure of who else would have access or how their information would be used.
While Group 3 did not dive deeply into data privacy the way Group 1 did, Group 3 did briefly highlight how differential access to advanced AI technologies could impact students. Taylor asked the researchers how students at other schools would be able to play the demo game if their school did not have access to the BeeVR technology, since it required resources beyond laptops in order to run. Taylor’s comment highlighted an underlying issue that was relevant to Group 3 in particular, as their school was in a rural community and their school Wi-Fi was often spotty and slow, which impacted their gameplay experience during the study. While Group 1 was primarily concerned with how AI might harm students when designed poorly, for Group 3, equitable AI meant ensuring that schools with fewer resources were also given the same opportunities to use technologies that could support their learning. While the extent to which groups explored ideas about equitable and safe AI differed according to the directions in which students steered the discussion, the ideas that were raised made it clear that students can grapple with complex ethical AI questions when the opportunity arises.
Groups 2 and 4 did not address issues of ethics and safety in their discussions of AI, since these topics were not directly prompted by the researchers. Group 2 was more focused on providing feedback on the particular AI features in the demo game, and so they articulated useful vs. not useful features of AI rather than ethical layers. Group 4 was the group with the least prior knowledge about AI, so it was not surprising that they did not raise issues of ethics and safety without prompting.

4.4. Claim #4: AI Could Be a Helpful Teacher’s Assistant

In addition to creating more engaging and exciting learning activities, students also saw a potential role for AI in easing teachers’ workload in the classroom. Many students in the 5th grade classroom in particular (Group 3) showed an awareness of classroom management issues and teacher orchestration needs that could potentially be improved with AI. For example, Lily suggested a robot AI design that would be “kind of like a teacher’s assistant” that could “help the kids if they’re learning something new and they don’t know exactly how to do it”. Both Lily (Group 3) and Tiana (Group 4) suggested that AI could help teachers with writing ideas on the board, a small but important facilitation task for keeping track of class discussions. Ciera and River (both Group 3) each highlighted that teachers often get pulled away to help a particular small group or student, and that the rest of the class could benefit from an AI teaching assistant that could support them while the teacher was busy. Ciera suggested that during group work, this “little robot” could “come over and help them with what they need help with, and it can answer their questions and show them how to do [an activity]” while their teacher was helping a different small group. River noted that a robot could be programmed to “keep us busy and also help us learn” if, for example, the teacher was in another room helping a student complete a make-up exam.
Eva (Group 4) noted that even the rather mundane tasks that teachers are required to manage could be supported by an AI teacher’s assistant, saying, “What I would want it to do is help the teachers remember everything [...] like remembering to change the calendar, because my teacher forgets it”. Multiple students in both Groups 3 and 4 also brought up the idea of AI support being used to clean the classroom (e.g., “a Roomba that can clean up your stuff, not just crumbs”, Bridget, Group 4). In these instances, students saw the role of AI as removing or reducing their teacher’s workload for tasks that did not necessarily involve learning but helped to support the learning community and its smooth operation. Unlike the previous suggestions by Group 1 to replace teachers with an engaging teaching robot, students in Groups 3 and 4 saw AI as a way to make their teachers more available to them, freeing up time for teachers to focus on helping students who need support. This highlights another underlying idea that students drew on in their designs, which is that teachers have many tasks on their plates and do not always have enough time or enough resources to give each student individualized support while keeping the rest of the class engaged and learning.

4.5. Claim #5: AI Tries to Mimic Humans, but That Is Not Always Good

Finally, students noted in their discussions how AI is currently designed to mimic human behaviors and explored the implications of these design choices. When asked “What is AI?”, several students in Group 3 offered similar definitions that highlighted this mirroring of human behaviors, such as, “it got programmed to do stuff that humans can do” and “it learns from mistakes and stuff like us, and it’s like programmed to do human stuff”. However, when asked where they have seen AI before, students focused instead on the power of AI to find resources quickly and efficiently in ways that humans cannot (e.g., “you search up something and it gives you like a million results”, Cory, Group 3). Many students across groups had similar impressions about where they have seen AI in their own lives (e.g., Amazon Alexa, TikTok, Google searches), which focused on how AI could help find things or provide large amounts of knowledge. David (Group 2) mentioned how AI could act as a virtual opponent when playing chess, but overall, most students in our study had experienced AI more as an all-knowing search engine, algorithm, or assistant.
When groups did bring up designs that involved AI doing more specific “human stuff”, the discussion tended to center on the inability of AI technologies to adequately mimic human qualities such as emotionality, social support, and intelligence. Students in Group 2 had an extended discussion about whether or not one of the AI-driven conversational agents in the demo game, which was designed to answer students’ science questions, could really be considered intelligent if it could not also answer math and history questions. David tried to test the conversational agent’s intelligence by asking questions such as “What is 1 + 1?” and “Who is George Washington?”, and the agent responded with “I’m not sure” (the base answer our prototype was trained to give when it was asked a question outside of its training). David argued that such conversational agents were “the wrong place to put AI”, because the AI tool did not offer the same breadth of information that a human could achieve using a search engine. While our team intentionally designed the AI-driven character to be a human-like character with a narrow set of expertise, students in Group 2 expected the agent to behave like a highly knowledgeable search engine rather than like a human with limited knowledge. Similarly, Dylan in Group 1 mentioned that an AI teacher might “go on and on” about a topic, while a human teacher could help students make connections between information and their own lives. Ryan (Group 1) agreed, noting that “humans are more comfortable with humans”, so AI agents might not be as effective for supporting learning without that sense of social support. Amelia (Group 1) added another layer, saying, “even if robots have emotion in their voice, it might not be real emotion”. All of these comments suggest that students see clear distinctions between the tasks that AI tools can effectively support, and the more complex parts of teaching that require intellectual and socioemotional skills. While a few younger students in Group 4 mentioned wanting an AI robot that could take care of them and “help each other out with everything” (Willow), students in Group 1 appeared convinced that AI should not be used to support students socially and emotionally the way their teachers do. Students in Groups 1 and 2 both articulated that it was not worth the time and money to design AI technologies that merely imitated what humans could do, but less skillfully and with less human connection.

5. Discussion and Conclusions

While the design proposals of students in this study sometimes pushed ethical and technological boundaries, at the core of these conversations was a desire for control over their learning experiences and a desire to make their classrooms better. These results suggest that we should not underestimate the complexity of students’ emerging understandings of AI technologies, nor their understanding of the complex realities of their own classrooms, even when they are still coming to understand how machine learning algorithms and large language models function. While experts in AI technologies may frame design feedback primarily in terms of technical feasibility, everyday users can envision possibilities for technology that go beyond current models and capabilities [38]. Students had a clear understanding of the ways in which their classrooms could be re-designed to support more student agency and engaging learning, as well as the existing ways in which their teachers were limited by the time and resources they had to provide support. Students in this study ultimately perceived teachers as invaluable guides and partners in their learning journey and sought ways to free up their teachers to focus on facilitating learning. Students also articulated desires for adaptable learning experiences, wanting the demo game to provide space to make unexpected choices and to argue for unique solutions to the socio-scientific problems presented in the story. However, students also demonstrated tensions in their design suggestions between a desire for more engaging, fun classrooms and a desire to receive personalized scaffolding in their learning experiences. This suggests that students may benefit from exploring what it feels like to use AI to make learning “easier” through temporary support, as research with older students has shown that students can develop more awareness of the value of their own writing experiences when teachers allow them to compose essays with AI and reflect on that experience [39]. While the desire for learning to be more engaging is not new or unique to these particular students, we argue that the concerns and claims students raise are central to the ongoing design of AI tools. Considering youth as key stakeholders in the technologies being developed for their classrooms [29], each claim raised in this study reveals learner-articulated problems of practice that will guide future iterations of our learning environments.
Importantly, the underlying concerns students highlighted in this study do not necessarily require AI-driven solutions. However, giving students an open design space to ask, “If you had access to powerful technology, how would you use it to make learning better?”, allowed them to articulate their needs, concerns, and hopes for their learning, which could be addressed through a variety of technological and non-technological pathways. Even if their ideas about AI were not necessarily surprising or new to expert AI researchers, the process of centering student ideas and visions for the future of AI helps to (re)align designs with the goals and needs of users. This need to align designs with user needs is not unique to AI tools, but highlighting the need for co-design helps AI developers resist the false assumption that stakeholders must be experts in a technology in order to reflect on its impacts. This study also further demonstrates the value of youth engaging in dialogic inquiry with AI, where they are not just learning about how AI works but also exploring how AI could change to better support their individual and community needs [24]. Letting students take on the role of collaborative designer of AI technologies gives them a window into how these kinds of conversational agents work and can act as a context for further AI literacy development [29]. Such co-design activities may be useful in engaging students with other forms of AI tools beyond chatbots, such as the ways in which machine learning can influence the assessment of their science learning [11].
Our main argument is not that we should “make all teachers robots” or offload every possible task to AI; our argument is that including youth voices in the design of solutions for their classrooms is vital in order to understand what problems exist (or that students perceive as existing), what possible futures students imagine, and how we can design towards these futures. Although not all of the students in our study fully understood what AI is and what it can do, they intimately understood their own classrooms and the ways in which they could be better—more engaging, more supportive, and more responsive to their needs. This further highlights the current gap in AI co-design research, in which students and other stakeholders are often brought in as consultants but are given little agency as designers and true collaborators throughout the design process [23]. Our study offers evidence that learners have everyday expertise that can help contribute to AI tool designs beyond simple user-testing and towards collaborative goal-setting and idea generation around what AI can do and what it should be used for. This aligns with other recent studies of student perspectives on AI tools, which have demonstrated that students can use what they know of their own teachers’ skills and knowledge to evaluate chatbots, and that though they appreciate the knowledge integration support chatbots can offer, they desire more conversational interactions that mirror how a supportive teacher would guide their learning [10]. Integrating students as co-designers can allow them to apply their knowledge of classroom learning to the designs of future adaptive AI learning supports.
Those students who were beginning to understand how AI works were also eager to explore and debate the ethical layers of AI’s role in their classrooms. The playtesting of an AI-driven learning environment also provided a grounding context for students to explore these issues more concretely than discussing AI for learning in the abstract. For example, students were able to discuss how they have seen their data used in everyday life and compare that experience with how our specific game handles their data and privacy. They could also compare chatbots they have interacted with previously with the conversational agents they saw in the game and reflect on their different purposes. Some groups of students did present a false binary choice between human teachers and an all-powerful AI teacher, as if one must replace the other. This suggests that students may benefit from exploring how AI can be leveraged as a supportive tool in the classroom, beyond the general-purpose AI chatbots and assistants they typically encounter.
We hope readers will take with them the idea that students in middle grades are more than capable of discussing the complexities of AI and the possible risks it brings into their lives. Several groups were able to hold ethical, economic, socioemotional, and educational concerns in tension with one another as they workshopped design ideas together and navigated what the role of AI should be in their classrooms. Some of our groups were primed to discuss AI deeply based on their prior interests and experiences, and their conversations offer additional evidence that co-designing AI tools with youth can be a productive site for learning about the pitfalls of AI technologies [15]. Furthermore, centering complex ethical dilemmas in discussions can help youth develop deeper understandings of AI as they express their concerns and hopes for how these technologies will impact their lives [40]. In addition to learning about ethical AI through such discussions, young people can also offer critical insights to developers about the potential harm AI can cause [29]. For researchers, these co-design discussions can help us align our learning designs with both classroom realities and equitable futures that are meaningful for learners. This alignment is always important for learning designs, but it is particularly vital when technology companies seek to implement advanced tools into classrooms without student input. Inviting students and teachers as active collaborators can help researchers to strike a balance between “evolution and revolution”, in which we both build on what education research already knows about the power and potential of AI for learning, while also thinking broadly with stakeholders about future possibilities [41]. Working with students to articulate together what values and risks AI brings to their classrooms can help them to envision new possible futures and the technologies that these futures require [42]. Centering students’ voices in the design and development of AI technologies for education offers them the agency to imagine and design towards an alternative future where all learning is active, engaging, and meaningful for their lives.

Author Contributions

Conceptualization, M.H. and D.D.-C.; validation, M.H. and D.D.-C.; formal analysis, M.H. and D.D.-C.; investigation, M.H., K.G., J.A.D. and C.E.H.-S.; resources, M.H., K.G., J.A.D. and J.C.L.; data curation, M.H. and D.D.-C.; writing—original draft preparation, M.H. and D.D.-C.; writing—review and editing, C.E.H.-S., K.G., J.C.L. and J.A.D.; visualization, M.H. and D.D.-C.; supervision, C.E.H.-S., K.G., J.A.D. and J.C.L.; project administration, M.H., C.E.H.-S., K.G. and J.A.D.; funding acquisition, J.C.L., C.E.H.-S., J.A.D. and K.G. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work supported by the National Science Foundation AI Institute for Engaged Learning (EngageAI Institute) under Grant No. DRL-2112635. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of North Carolina State University (eIRB #25459, approved 21 October 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the identifiable nature of student learning data.

Acknowledgments

The authors would like to thank EngageAI Institute team members for their contributions to the creation of the narrative-centered learning environment and accompanying art, software features, and conversational agents used in our focus groups: Kara Cassell (software/game development), Yeo Jin Kim (AI-driven conversational agent models), Vikram Kumaran (conversational agent training and refinement), and Courtney Barron (art and design). We would also like to thank the EngageAI team members who supported the data collection and facilitation of various focus groups (in alphabetical order): Chen Feng, Daeun Hong, Jessica McClain, Selena Steinberg, Tianshu Wang, Mengxi Zhou, and Xiaotian Zou.

Conflicts of Interest

The authors declare no conflicts of interest.

Publication Permissions

An abbreviated version of this work was published in the 2024 proceedings of the International Society of the Learning Sciences (ISLS) Conference [https://repository.isls.org//handle/1/10579]. Permission has been granted by ISLS to reuse portions of the previously published conference proceedings in this manuscript.

References

  1. Prahani, B.K.; Rizki, I.A.; Jatmiko, B.; Suprapto, N.; Tan, A. Artificial Intelligence in Education Research During the Last Ten Years: A Review and Bibliometric Study. Int. J. Emerg. Technol. Learn. 2022, 17, 169–188. [Google Scholar] [CrossRef]
  2. Akgun, S.; Greenhow, C. Artificial Intelligence in Education: Addressing Ethical Challenges in K-12 Settings. AI Ethics 2022, 2, 431–440. [Google Scholar] [CrossRef] [PubMed]
  3. Greenwald, E.; Leitner, M.; Wang, N. Learning Artificial Intelligence: Insights into How Youth Encounter and Build Understanding of AI Concepts. Proc. AAAI Conf. Artif. Intell. 2021, 35, 15526–15533. [Google Scholar] [CrossRef]
  4. Hasse, A.; Cortesi, S.C.; Lombana, A.; Gasser, U. Youth and Artificial Intelligence: Where We Stand. SSRN J. 2019, 3, 1–21. [Google Scholar] [CrossRef]
  5. U.S. Department of Education. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations; Office of Educational Technology: Washington, DC, USA, 2023. Available online: https://oet.wp.nnth.dev/ai-future-of-teaching-and-learning/ (accessed on 1 July 2024).
  6. White, B.Y.; Shimoda, T.A.; Frederiksen, J.R. Enabling students to construct theories of collaborative inquiry and reflective learning: Computer support for metacognitive development. Int. J. Artif. Intell. Educ. 1999, 10, 151–182. [Google Scholar]
  7. Biswas, G.; Leelawong, K.; Schwartz, D.; Vye, N.; The Teachable Agents Group at Vanderbilt. Learning by teaching: A new agent paradigm for educational software. Appl. Artif. Intell. 2005, 19, 363–392. [Google Scholar] [CrossRef]
  8. Graesser, A.C.; Hu, X.; Nye, B.D.; VanLehn, K.; Kumar, R.; Heffernan, C.; Heffernan, N.; Woolf, B.; Olney, A.M.; Rus, V.; et al. ElectronixTutor: An Intelligent Tutoring System with Multiple Learning Resources for Electronics. Int. J. STEM Educ. 2018, 5, 15. [Google Scholar] [CrossRef]
  9. Dyke, G.; Howley, I.; Adamson, D.; Kumar, R.; Rosé, C.P. Towards academically productive talk supported by conversational agents. In Productive Multivocality in the Analysis of Group Interactions; Suthers, D.D., Lund, K., Rosé, C.P., Teplovs, C., Law, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 459–476. [Google Scholar]
  10. Gerard, L.; Holtman, M.; Riordan, B.; Linn, M.C. Impact of an adaptive dialog that uses natural language processing to detect students’ ideas and guide knowledge integration. J. Educ. Psychol. 2024. [Google Scholar] [CrossRef]
  11. Zhai, X.; Yin, Y.; Pellegrino, J.W.; Haudek, K.C.; Shi, L. Applying machine learning in science assessment: A systematic review. Stud. Sci. Educ. 2020, 56, 111–151. [Google Scholar] [CrossRef]
  12. Ottenbreit-Leftwich, A.; Glazewski, K.; Jeon, M.; Jantaraweragul, K.; Hmelo-Silver, C.E.; Scribner, A.; Lee, S.; Mott, B.; Lester, J. Lessons Learned for AI Education with Elementary Students and Teachers. Int. J. Artif. Intell. Educ. 2023, 33, 267–289. [Google Scholar] [CrossRef]
  13. The White House. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People; Office of Science and Technology Policy: Washington, DC, USA, 2022. Available online: https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (accessed on 1 July 2024).
  14. Buolamwini, J.; Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, New York, NY, USA, 23–24 February 2018; PMLR: Cambridge, MA, USA, 2018; pp. 77–91. [Google Scholar]
  15. Coenraad, M. “That’s What Techquity Is”: Youth Perceptions of Technological and Algorithmic Bias. Inf. Learn. Sci. 2022, 123, 500–525. [Google Scholar] [CrossRef]
  16. Marrone, R.; Zamecnik, A.; Joksimovic, S.; Johnson, J.; De Laat, M. Understanding Student Perceptions of Artificial Intelligence as a Teammate. Technol. Knowl. Learn. 2024. [Google Scholar] [CrossRef]
  17. Carvalho, L.; Martinez-Maldonado, R.; Tsai, Y.-S.; Markauskaite, L.; De Laat, M. How Can We Design for Learning in an AI World? Comput. Educ. Artif. Intell. 2022, 3, 100053. [Google Scholar] [CrossRef]
  18. Tatar, C.; Jiang, S.; Rosé, C.P.; Chao, J. Exploring Teachers’ Views and Confidence in the Integration of an Artificial Intelligence Curriculum into Their Classrooms: A Case Study of Curricular Co-Design Program. Int. J. Artif. Intell. Educ. 2024. [Google Scholar] [CrossRef]
  19. Matuk, C.; Gerard, L.; Lim-Breitbart, J.; Linn, M. Gathering Requirements for Teacher Tools: Strategies for Empowering Teachers Through Co-Design. J. Sci. Teach. Educ. 2016, 27, 79–110. [Google Scholar] [CrossRef]
20. Billings, K.; Gerard, L.; Linn, M.C. Improving Teacher Noticing of Students’ Science Ideas with a Dashboard. In Proceedings of the 15th International Conference of the Learning Sciences—ICLS 2021; de Vries, E., Hod, Y., Ahn, L., Eds.; International Society of the Learning Sciences: Bochum, Germany, 2021; pp. 1027–1028. Available online: https://repository.isls.org/handle/1/7379 (accessed on 1 July 2024).
21. Bichler, S.; Gerard, L.; Bradford, A.; Linn, M.C. Designing a Remote Professional Development Course to Support Teacher Customization in Science. Comput. Hum. Behav. 2021, 123, 106814. [Google Scholar] [CrossRef] [PubMed]
  22. Gomez, K.; Kyza, E.A.; Mancevice, N. Participatory Design and the Learning Sciences. In International Handbook of the Learning Sciences; Fischer, F., Hmelo-Silver, C.E., Goldman, S.R., Reimann, P., Eds.; Routledge: New York, NY, USA, 2018; pp. 401–409. [Google Scholar]
23. Delgado, F.; Yang, S.; Madaio, M.; Yang, Q. The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, Boston, MA, USA, 30 October–1 November 2023; pp. 1–23. [Google Scholar]
  24. Druga, S.; Yip, J.; Preston, M.; Dillon, D. The 4As: Ask, Adapt, Author, Analyze—AI Literacy Framework for Families. In Algorithmic Rights and Protections for Children; MIT Press: Cambridge, MA, USA, 2023. [Google Scholar]
  25. Durall Gazulla, E.; Hirvonen, N.; Sharma, S.; Hartikainen, H.; Jylhä, V.; Iivari, N.; Kinnula, M.; Baizhanova, A. Youth Perspectives on Technology Ethics: Analysis of Teens’ Ethical Reflections on AI in Learning Activities. Behav. Inf. Technol. 2024, 43, 1–24. [Google Scholar] [CrossRef]
  26. Sentance, S.; Waite, J. Perspectives on AI and Data Science Education. In Understanding Computing Education (Vol 3): AI, Data Science, and Young People; Raspberry Pi Foundation: Cambridge, UK, 2022. [Google Scholar]
  27. Caucheteux, C.; Hodgkins, L.-B.; Batifol, V.; Fouché, L.; Romero, M. Students’ Perspective on the Use of Artificial Intelligence in Education. In Creative Applications of Artificial Intelligence in Education; Urmeneta, A., Romero, M., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 101–113. [Google Scholar] [CrossRef]
  28. Isoieva, M.; Marchenko, O.; Diedkov, M.; Lobanchuk, O.; Khrypko, S. Threats and Benefits of AI in the Context of Targeting SDGs: A Youth Perception Approach. Eur. J. Sustain. Dev. 2024, 13, 173. [Google Scholar] [CrossRef]
  29. Solyst, J.; Yang, E.; Xie, S.; Ogan, A.; Hammer, J.; Eslami, M. The Potential of Diverse Youth as Stakeholders in Identifying and Mitigating Algorithmic Bias for a Future of Fairer AI. Proc. ACM Hum. Comput. Interact. 2023, 7, 1–27. [Google Scholar] [CrossRef]
30. Kim, K.; Kwon, K.; Ottenbreit-Leftwich, A.; Bae, H.; Glazewski, K. Exploring Middle School Students’ Common Naive Conceptions of Artificial Intelligence Concepts, and the Evolution of These Ideas. Educ. Inf. Technol. 2023, 28, 9827–9854. [Google Scholar] [CrossRef]
  31. Heeg, D.M.; Avraamidou, L. The Use of Artificial Intelligence in School Science: A Systematic Literature Review. Educ. Media Int. 2023, 60, 125–150. [Google Scholar] [CrossRef]
32. Mott, B.W.; Callaway, C.B.; Zettlemoyer, L.S.; Lee, S.Y.; Lester, J.C. Towards Narrative-Centered Learning Environments. In Proceedings of the 1999 AAAI Fall Symposium on Narrative Intelligence, North Falmouth, MA, USA, 5–7 November 1999; pp. 78–82. [Google Scholar]
  33. Lester, J.C.; Spires, H.A.; Nietfeld, J.L.; Minogue, J.; Mott, B.W.; Lobene, E.V. Designing Game-Based Learning Environments for Elementary Science Education: A Narrative-Centered Learning Perspective. Inf. Sci. 2014, 264, 4–18. [Google Scholar] [CrossRef]
  34. Sadler, T.D. Socio-Scientific Issues in the Classroom: Teaching, Learning and Research. In Contemporary Trends and Issues in Science Education; Springer: Dordrecht, The Netherlands, 2011. [Google Scholar]
35. Zeidler, D.L.; Herman, B.C.; Sadler, T.D. New Directions in Socioscientific Issues Research. Discip. Interdiscip. Sci. Educ. Res. 2019, 1, 11. [Google Scholar] [CrossRef]
  36. Danish, J.; Anton, G.; Mathayas, N.; Jen, T.; Vickery, M.; Lee, S.; Tu, X.; Cosic, L.; Zhou, M.; Ayalon, E.; et al. Designing for Shifting Learning Activities. JAID 2022, 11, 169–184. [Google Scholar] [CrossRef]
  37. Braun, V.; Clarke, V. Thematic Analysis. In APA Handbook of Research Methods in Psychology, Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological; Cooper, H., Camic, P.M., Long, D.L., Panter, A.T., Rindskopf, D., Sher, K.J., Eds.; American Psychological Association: Washington, DC, USA, 2012; Volume 2. [Google Scholar] [CrossRef]
38. Leiser, F.; Eckhardt, S.; Knaeble, M.; Maedche, A.; Schwabe, G.; Sunyaev, A. From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users. In Proceedings of the Mensch und Computer 2023, Rapperswil, Switzerland, 3–6 September 2023; pp. 81–90. [Google Scholar]
  39. Fyfe, P. How to Cheat on Your Final Paper: Assigning AI for Student Writing. AI Soc. 2023, 38, 1395–1405. [Google Scholar] [CrossRef]
  40. Lee, C.H.; Gobir, N.; Gurn, A.; Soep, E. In the Black Mirror: Youth Investigations into Artificial Intelligence. ACM Trans. Comput. Educ. 2022, 22, 1–25. [Google Scholar] [CrossRef]
  41. Roll, I.; Wylie, R. Evolution and Revolution in Artificial Intelligence in Education. Int. J. Artif. Intell. Educ. 2016, 26, 582–599. [Google Scholar] [CrossRef]
  42. Rasa, T.; Laherto, A. Young People’s Technological Images of the Future: Implications for Science and Technology Education. Eur. J. Futures Res. 2022, 10, 4. [Google Scholar] [CrossRef]
Figure 1. (a) Student group during gameplay; (b) The community garden setting.
Figure 2. The BeeVR minigame settings: (a) a garden environment, and (b) the rooftop of a polluted parking garage.
Figure 3. (a) A conversational agent in BeeVR discusses what students observed while playing as honeybees; (b) An agent in the laptop game gives feedback on a draft argument to the mayor.
Figure 4. (a) Students controlling BeeVR via touch-screen tablets, and (b) via whole-body movements.
Figure 5. Thematic analysis process.
Table 1. Participant demographics by group.

Group | Age Range | Gender Demographics (Self-Identified) | Racial Demographics (Self-Identified)
Group 1 (n = 11) | 11–14 | 8 boys, 3 girls | 4 White; 3 Asian; 1 Nigerian American; 2 biracial Hispanic/Latino and White; 1 biracial Asian and White
Group 2 (n = 4) | 11–13 | 2 boys, 1 girl, 1 nonbinary student | 4 White
Group 3 (n = 18) | 10–12 | 5 boys, 11 girls, 1 nonbinary student, 1 Decline to Answer | 13 White; 2 Native Hawaiian/Pacific Islander; 1 Hispanic/Latino; 1 biracial African American/Black and Native American/American Indian; 1 biracial Native American/American Indian and White
Group 4 (n = 6) | 9–11 | 6 girls | 2 White; 2 Hispanic/Latino; 1 biracial Hispanic/Latino and Asian; 1 Decline to Answer
